ISO 21482 is a technical standard that specifies the design and use of a "supplementary radiation warning symbol". It is intended to warn people of the radiation hazard posed by sealed sources and to encourage the viewer to get away from the source. The symbol's design was the result of a joint project between the International Atomic Energy Agency (IAEA) and the International Organization for Standardization (ISO) in the early 2000s.[1] The symbol was formally revealed to the public by the IAEA on 15 February 2007.[2][3] Lost nuclear sources, or "orphan sources", have presented a hazard to the public since at least the early 1960s. These often originated from larger equipment being dismantled by individuals in search of scrap metal; even the source itself was often contained in a metal housing that appeared to be valuable. In the aftermath of repeated incidents in which the public was exposed to radiation from orphan sources, a common factor reappeared: individuals who encountered the source were unfamiliar with the trefoil radiation warning symbol, and in some cases with the very concept of radiation.[1] A study in the early 2000s found that only 6% of those surveyed in India, Brazil and Kenya could correctly identify the meaning of the trefoil symbol.[1] Brazil is notable for being the location of the 1987 Goiânia accident, one of the worst incidents involving an orphan source, which killed four people, contaminated at least 250 others and led to the contamination of multiple locations and vehicles. In 2001, the "new warning symbol project" was authorized by IAEA Member States. The objective was to create a single symbol that would be understood worldwide by someone of any age, with nearly any level of education, or with no knowledge of radiation, as "Danger—Run Away—Do Not Touch!".[1] Five years of work produced 50 candidate symbols in the first phase. The symbols were taken to the Vienna International School in Austria, whose many students hail from over 80 countries and included children not yet able to read. This enabled personnel to narrow the designs to those that gave an intuitive message of "danger" or "bad".[1] This process resulted in five symbols for further consideration. These symbols were then checked by IAEA Member States to confirm that they had no negative associations or connections to religion, culture or history.[1] In 2004, the ISO and the Gallup Institute conducted further investigations to determine which of the five symbols were most effective. 1,650 individuals in 11 countries[a] were shown the symbols by Gallup staff. These individuals crossed the spectrum of age, educational background, gender and rural/urban residence.[1] Among the things assessed by the researchers were: "What were the respondents' initial reactions to the symbols?" and "What action would they take if they saw these symbols?"[1] The results were revealing. Initial interpretations of the symbols were that something bad could happen and caution was needed, but the source of that threat was not understood: many thought the symbols warned of AIDS, electricity, toxins or even a road hazard. All five symbols were understood to convey "caution", but only the symbol that included a skull conveyed "danger of death".[1] The symbol consists of a triangle with a black border and a background of Pantone red No. 187; the pictograms within it are black with a white outline.
While ISO 3864-4 specifies yellow for use with warning symbols and messages, red was found to be more effective at conveying "danger" than yellow, which was viewed as conveying the less serious "caution".[4] The symbol consists of three elements: a trefoil at the top, representing and warning of radiation, from which five lines emanate in an arc towards the bottom of the triangle; on the bottom left, a skull and crossbones, signifying death; and on the bottom right, a running figure with an arrow pointing right, away from the skull and crossbones.[4] The combined icons created a symbol that almost uniformly conveyed a message of "Danger—Run Away—Do Not Touch!".[1] The use of three separate pictograms contained within a larger triangular symbol was chosen for its success in inducing the desired response from viewers.[1] The intended use of the supplementary radiation warning symbol is to warn and discourage anyone attempting to dismantle a piece of equipment containing an IAEA Category 1, 2 or 3 sealed radiation source, that is, any sealed source that can cause serious injury or death if a person is exposed to it.[4] The symbol is to be placed close to the source, either on its shield or at a point of access. The symbol should be produced no smaller than 3.0 centimetres (1.2 in), which makes its placement on most sources difficult due to their small size.[4] The symbol is intended to be hidden from view under normal conditions, and only to be revealed when someone starts to dismantle the equipment, such as by removing its outer housing.[4] The symbol is not intended to replace the trefoil symbol in use since the mid-1940s (ISO 361, also described in ISO 7010/W003), but rather to supplement it. It is not intended for use on transport or freight containers, transport vehicles, or on doors and walls of buildings and rooms.[4][5]
https://en.wikipedia.org/wiki/ISO_21482
ISO 216 is an international standard for paper sizes, used around the world except in North America and parts of Latin America. The standard defines the "A", "B" and "C" series of paper sizes, which include A4, the most commonly available paper size worldwide. Two supplementary standards, ISO 217 and ISO 269, define related paper sizes; the ISO 269 "C" series is commonly listed alongside the A and B sizes. All ISO 216, ISO 217 and ISO 269 paper sizes (except some envelopes) have the same aspect ratio, √2:1, within rounding to millimetres. This ratio has the unique property that when cut or folded in half widthways, the halves also have the same aspect ratio. Each ISO paper size is one half of the area of the next larger size in the same series.[1] The oldest known mention of the advantages of basing a paper size on an aspect ratio of √2 is found in a letter written on 25 October 1786 by the German scientist Georg Christoph Lichtenberg to Johann Beckmann, both at the University of Göttingen.[2] Early variants of the formats that would become ISO paper sizes A2, A3, B3, B4, and B5 then evolved in France, where they were listed in a 1798 French law on taxation of publications (French: Loi sur le timbre (Nº 2136)) that was based in part on page sizes.[3] In 1911, over a hundred years after the 1798 French law,[3] Wilhelm Ostwald proposed a global standard, a world format (Weltformat), for paper sizes based on the ratio √2. Searching at the Bridge association (German: Die Brücke) for a standard system of paper formats on a scientific basis, as a replacement for the vast variety of paper formats used before, in order to make paper stocking and document reproduction cheaper and more efficient, he referred to the argument advanced in Lichtenberg's 1786 letter, but linked the system to the metric system using 1 centimetre (0.39 in) as the width of the base format. Walter Porstmann argued in a long article published in 1918 that a firm basis for a system of paper formats, which deal with surfaces, ought to be not a length but an area; that is, the system of paper formats should be linked to the metric system using the square metre rather than the centimetre, constrained by y/x = √2 and area a = x × y = 1 square metre, where x is the length of the shorter side and y is the length of the longer side, both measured in metres. Porstmann also argued that formats for containers of paper, such as envelopes, should be 10% larger than the paper format itself. In 1921, after a long discussion and another intervention by Porstmann, the Standardisation Committee of German Industry (Normenausschuß der deutschen Industrie, or NADI for short), today the German Institute for Standardisation (Deutsches Institut für Normung, or DIN), published German standard DI Norm 476, the specification of four series of paper formats with ratio √2, with series A as the always-preferred formats and the basis for the other series. All measures are rounded to the nearest millimetre.
A0 has a surface area of 1 square metre (11 sq ft) up to a rounding error, with a width of 841 millimetres (33.1 in) and height of 1,189 millimetres (46.8 in), giving an actual area of 0.999949 square metres (10.76336 sq ft); A4 is recommended as the standard paper size for business, administrative and government correspondence, and A6 for postcards. Series B is based on B0 with a width of 1 metre (3 ft 3 in), C0 is 917 by 1,297 millimetres (36.1 in × 51.1 in), and D0 is 771 by 1,090 millimetres (30.4 in × 42.9 in). Series C is the basis for envelope formats. The DIN paper-format concept was soon introduced as a national standard in many other countries, for example, Belgium (1924), Netherlands (1925), Norway (1926), Switzerland (1929), Sweden (1930), Soviet Union (1934), Hungary (1938), Italy (1939), Finland (1942), Uruguay (1942), Argentina (1943), Brazil (1943), Spain (1947), Austria (1948), Romania (1949), Japan (1951), Denmark (1953), Czechoslovakia (1953), Israel (1954), Portugal (1954), Yugoslavia (1956), India (1957), Poland (1957), United Kingdom (1959), Venezuela (1962), New Zealand (1963), Iceland (1964), Mexico (1965), South Africa (1966), France (1967), Peru (1967), Turkey (1967), Chile (1968), Greece (1970), Zimbabwe (1970), Singapore (1970), Bangladesh (1972), Thailand (1973), Barbados (1973), Australia (1974), Ecuador (1974), Colombia (1975) and Kuwait (1975). It finally became both an international standard (ISO 216) and the official United Nations document format in 1975, and it is today used in almost all countries in the world, with the exception of several countries in the Americas. In 1977, a large German car manufacturer performed a study of the paper formats found in their incoming mail and concluded that, of 148 examined countries, 88 already used the A series formats.[4] The main advantage of this system is its scaling. Rectangular paper with an aspect ratio of √2 has the unique property that, when cut in two across the midpoints of the longer sides, each half has the same √2 aspect ratio as the whole sheet before it was divided. Equivalently, if one lays two same-sized sheets of paper with an aspect ratio of √2 side by side along their longer sides, they form a larger rectangle with the aspect ratio of √2 and double the area of each individual sheet. The ISO system of paper sizes exploits these properties of the √2 aspect ratio. In each series of sizes (for example, series A), the largest size is numbered 0 (in this case A0), and each successive size (A1, A2, etc.) has half the area of the preceding sheet and can be cut by halving the length of the preceding size sheet. The new measurement is rounded down to the nearest millimetre. A folded brochure can be made by using a sheet of the next larger size (for example, an A4 sheet is folded in half to make a brochure with size A5 pages). An office photocopier or printer can be designed to reduce a page from A4 to A5 or to enlarge a page from A4 to A3. Similarly, two sheets of A4 can be scaled down to fit one A4 sheet without excess empty paper. This system also simplifies calculating the weight of paper. Under ISO 536, paper's grammage is defined as a sheet's mass in grams (g) per area in square metres (unit symbol g/m²; the nonstandard abbreviation "gsm" is also used).[5] One can derive the grammage of other sizes by arithmetic division.
A standard A4 sheet made from 80 g/m² paper weighs 5 grams (0.18 oz), as it is 1/16 (four halvings, ignoring rounding) of an A0 page. Thus the weight, and the associated postage rate, can be approximated easily by counting the number of sheets used. ISO 216 and its related standards were first published between 1975 and 1995. Paper in the A series format has an aspect ratio of √2 (≈ 1.414, when rounded). A0 is defined so that it has an area of 1 m² (11 sq ft) before rounding to the nearest 1 millimetre (0.039 in). Successive paper sizes in the series (A1, A2, A3, etc.) are defined by halving the area of the preceding paper size and rounding down, so that the long side of A(n + 1) is the same length as the short side of A(n). Hence, each next size is almost exactly half the area of the prior size, and an A1 page can fit two A2 pages inside the same area. The most used size of this series is A4, which is 210 mm × 297 mm (8.27 in × 11.7 in) and thus almost exactly 1/16 square metre (0.0625 m²; 96.8752 sq in) in area. For comparison, the letter paper size commonly used in North America (8 1/2 in × 11 in; 216 mm × 279 mm) is about 6 mm (0.24 in) wider and 18 mm (0.71 in) shorter than A4. The size of A5 paper is half of A4, i.e. 148 mm × 210 mm (5.8 in × 8.3 in).[6][7] The geometric rationale for using the square root of 2 is to maintain the aspect ratio of each subsequent rectangle after cutting or folding an A-series sheet in half, perpendicular to the larger side. Given a rectangle with a longer side x and a shorter side y, keeping its aspect ratio x/y in a rectangle half its size requires x/y = y/(x/2), which reduces to x/y = √2; in other words, an aspect ratio of 1:√2. Any A(n) paper can be defined as A(n) = S × L, where, measuring in metres, the shorter side is S = 2^(−1/4 − n/2) and the longer side is L = 2^(1/4 − n/2). Therefore L/S = √2 and the area S × L = 2^(−n) square metres. The B series is defined in the standard as follows: "A subsidiary series of sizes is obtained by placing the geometrical means between adjacent sizes of the A series in sequence." The use of the geometric mean makes each step in size: B0, A0, B1, A1, B2, ... smaller than the previous one by the same factor. As with the A series, the lengths of the B series have the ratio √2, and folding one in half (and rounding down to the nearest millimetre) gives the next in the series. The shorter side of B0 is exactly 1 metre. There is also an incompatible Japanese B series, which JIS defines to have 1.5 times the area of the corresponding JIS A series (which is identical to the ISO A series).[8] Thus, the lengths of JIS B series paper are √1.5 ≈ 1.22 times those of A-series paper. By comparison, the lengths of ISO B series paper are ⁴√2 ≈ 1.19 times those of A-series paper. Any B(n) paper (according to the ISO standard) can be defined as B(n) = S × L, where, measuring in metres, S = 2^(−n/2) and L = 2^(1/2 − n/2). Therefore the area is 2^(1/2 − n) square metres. The C series formats are geometric means between the B series and A series formats with the same number (e.g. C2 is the geometric mean between B2 and A2). The width to height ratio of C series formats is √2 as in the A and B series. The A, B, and C series of paper fit together as part of a geometric progression, with the ratio of successive side lengths being ⁸√2, though there is no size half-way between B(n) and A(n − 1): A4, C4, B4, "D4", A3, ...; there is such a D series in the Swedish extensions to the system. The lengths of ISO C series paper are therefore ⁸√2 ≈ 1.09 times those of A-series paper.
The C series formats are used mainly for envelopes. An unfolded A4 page will fit into a C4 envelope. Due to the same width-to-height ratio, if an A4 page is folded in half so that it is A5 in size, it will fit into a C5 envelope (which will be the same size as a C4 envelope folded in half). Any C(n) paper can be defined as C(n) = S × L, where, measuring in metres, S = 2^(−1/8 − n/2) and L = 2^(3/8 − n/2). Therefore the area is 2^(1/4 − n) square metres. The tolerances specified in the standard, which apply across the A, B and C series, are ±1.5 mm for dimensions up to 150 mm, ±2.0 mm for dimensions from 150 mm to 600 mm, and ±3.0 mm for dimensions above 600 mm. The ISO 216 formats are organized around the ratio 1:√2; two sheets next to each other together have the same ratio, sideways. In scaled photocopying, for example, two A4 sheets reduced to A5 size fit exactly onto one A4 sheet, and an A4 sheet magnified fits onto an A3 sheet; in each case, there is neither waste nor want. The principal countries not generally using the ISO paper sizes are the United States and Canada, which use North American paper sizes. Although many Latin American countries have also officially adopted the ISO 216 paper format, Mexico, Panama, Peru, Colombia, the Philippines, and Chile also use mostly U.S. paper sizes. Rectangular sheets of paper with the ratio 1:√2 are popular in paper folding, such as origami, where they are sometimes called "A4 rectangles" or "silver rectangles".[9] In other contexts, the term "silver rectangle" can also refer to a rectangle in the proportion 1:(1 + √2), known as the silver ratio. An adjunct to the ISO paper sizes, particularly the A series, are the technical drawing line widths specified in ISO 128. For example, line type A ("Continuous - thick", used for "visible outlines") has a standard thickness of 0.7 mm on an A0-sized sheet, 0.5 mm on an A1 sheet, and 0.35 mm on A2, A3, or A4.[10] The matching technical pen widths are 0.13, 0.18, 0.25, 0.35, 0.5, 0.7, 1.0, 1.40, and 2.0 mm, as specified in ISO 9175-1. Colour codes are assigned to each size to facilitate easy recognition by the drafter. Like the paper sizes, these pen widths increase by a factor of √2, so that particular pens can be used on particular sizes of paper, and then the next smaller or larger size can be used to continue the drawing after it has been reduced or enlarged, respectively.[4][11] The earlier DIN 6775 standard upon which ISO 9175-1 is based also specified a term and symbol for easy identification of pens and drawing templates compatible with the standard, called Micronorm, which may still be found on some technical drafting equipment. DIN 476 provides for formats larger than A0, denoted by a prefix factor. In particular, it lists the formats 2A0 and 4A0, which are twice and four times the size of A0 respectively. While not formally defined, ISO 216:2007 notes them in the table of main series of trimmed sizes (ISO A series) as well: "The rarely used sizes [2A0 and 4A0] which follow also belong to this series." 2A0 is also known by other unofficial names such as "A00".[12]
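The closed-form side lengths above are compact enough to check numerically. The following Python sketch is a minimal illustration rather than part of the standard (the function names are invented for the example): it computes A-, B- and C-series dimensions from the formulas, rounding down to whole millimetres as the standard prescribes, and reproduces the grammage arithmetic for an A4 sheet.

```python
from math import floor

def a_series(n):
    """Sides of A(n) in mm: A0 has an area of 1 m^2 and sides in ratio 1:sqrt(2)."""
    short = 2 ** (-0.25 - n / 2)           # shorter side, metres
    long_ = 2 ** (0.25 - n / 2)            # longer side, metres
    return floor(short * 1000), floor(long_ * 1000)   # round down to mm

def b_series(n):
    # B0 has a shorter side of exactly 1 m (geometric mean between A sizes)
    return floor(2 ** (-n / 2) * 1000), floor(2 ** (0.5 - n / 2) * 1000)

def c_series(n):
    # C(n) is the geometric mean of A(n) and B(n)
    return floor(2 ** (-0.125 - n / 2) * 1000), floor(2 ** (0.375 - n / 2) * 1000)

print(a_series(4))   # (210, 297): A4
print(b_series(0))   # (1000, 1414): B0
print(c_series(4))   # (229, 324): C4, the envelope that takes an unfolded A4 sheet
# Grammage: an A4 sheet is 1/16 m^2, so 80 g/m^2 paper gives a 5 g sheet
print(80 / 2 ** 4)   # 5.0
```

Running it confirms, for instance, that the long side of A5 (210 mm) equals the short side of A4, the halving property the series is built on.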
https://en.wikipedia.org/wiki/ISO_216
The ISO 22715 standard, Cosmetics — Packaging and labelling, provides manufacturers with guidelines on best practices for the packaging and labelling of all cosmetic products. The standard applies to products that fall under the category of cosmetics, whether they are sold or given away as free samples. ISO 22715 was initially published in April 2006.[1][2] The intent of ISO 22715 is to specify how cosmetic products should be packaged and labelled so as to maintain a consistent level of standards within the cosmetic industry. It is one of 26 published standards devoted to the cosmetic industry sector.[1] ISO 22715 does not regulate which products are to be considered cosmetics. This determination is left to the national regulations of those countries that follow ISO 22715 and use it as a guide to best practices for packaging and labelling cosmetic products. Often, these national regulations are stricter than the standard itself.[3] ISO 22715 supports the need for consumers to know what is in the cosmetic products they purchase, how those products should be used and who has manufactured them.[4] To accomplish this, ISO 22715 specifies that a product's packaging should show certain information, such as the ingredients used in the product listed in descending order by percentage, followed by a list of the colouring agents used. For safety purposes, ISO recommends including an explanation of the product's function on the package, along with instructions for its use. Any precautionary or warning statements should also be printed on the package to caution consumers in the use of the product. The standard also lists additional information that should appear on the packaging of cosmetic products. The International Organization for Standardization (ISO) was formed in 1947. It is a non-governmental organization (NGO) based in Geneva, Switzerland. The ISO has 162 members and represents the interests of international standardization across 196 countries, covering almost 97 percent of the world's population. The ISO's purpose is to create standards that are used to help form public policies and business objectives that benefit people throughout the world. As of May 2016, over 21,500 ISO standards had been published. The ISO develops new standards when industry sectors and their stakeholders determine that a need exists. The standards are developed under the oversight of the ISO through cooperative efforts between other NGOs, consumer organizations, representatives from government agencies, academics and testing laboratories.[5][1][3] ISO 22715 is not legally binding, but it is the common denominator used for developing national regulations that address the labelling and packaging of cosmetic products. Regulators in individual countries often look to ISO standards as the benchmark for best practices in the industry sectors to which they apply. Many regulators require businesses and manufacturers to comply with applicable ISO standards in addition to local regulations.[3] Currently, there are 26 published ISO standards devoted to the cosmetic sector, including ISO 22715. These standards are overseen by the ISO's cosmetic product technical committee, which was established in 1998.
[6] This committee is composed of standardization bodies from major markets, such as leading ASEAN countries, most European countries and the United States via ANSI. Thirty-nine countries participate in the creation of standards for cosmetic products, with 27 observing countries within the committee.[7] To better distribute and update standards as needed, the ISO maintains the copyright on its standards. Most standards are reviewed and updated every five to seven years to remain relevant to the latest technologies within industry sectors.[5] Often, local regulations exceed the requirements specified in ISO 22715. In the United States, the Food & Drug Administration (FDA) regulates the cosmetic industry[8] with standards that are provided by the American National Standards Institute (ANSI), of which the FDA is a member.[9][10] In the European Union, cosmetic manufacturers must abide by Regulation (EC) No. 1223/2009, Article 19, for the labelling of cosmetic products.[11][12]
https://en.wikipedia.org/wiki/ISO_22715
ISO 25119, titled "Tractors and machinery for agriculture and forestry – Safety-related parts of control systems", is an international standard for the functional safety of electrical and/or electronic systems installed in tractors and machines used in agriculture and forestry, defined by the International Organization for Standardization (ISO). ISO 25119 consists of the following parts:[1]
https://en.wikipedia.org/wiki/ISO_25119
ISO 25178: Geometrical Product Specifications (GPS) – Surface texture: areal is an International Organization for Standardization collection of international standards relating to the analysis of 3D areal surface texture. The standard comprises a number of parts. Other documents might be proposed in the future, but the structure is now almost fixed. Part 600 will replace the common material found in all other parts; when revised, parts 60x will be reduced to contain only descriptions specific to each instrument technology. It is the first international standard to take into account the specification and measurement of 3D surface texture. In particular, the standard defines 3D surface texture parameters and the associated specification operators. It also describes the applicable measurement technologies and calibration methods, together with the physical calibration standards and calibration software that are required. A major new feature of the standard is its coverage of non-contact measurement methods, already commonly used by industry but until now lacking a standard to support quality audits within the framework of ISO 9000. For the first time, the standard brings 3D surface metrology methods into the official domain, following 2D profilometric methods that have been subject to standards for over 30 years. The same applies to measurement technologies, which are no longer restricted to contact measurement (with a diamond stylus) but can also be optical, such as chromatic confocal gauges and interferometric microscopes. The ISO 25178 standard is considered by TC213 as first and foremost providing a redefinition of the foundations of surface texture, based upon the principle that nature is intrinsically 3D. It is anticipated that future work will extend these new concepts into the domain of 2D profilometric surface texture analysis, requiring a total revision of all current surface texture standards (ISO 4287, ISO 4288, ISO 1302, ISO 11562, ISO 12085, ISO 13565, etc.). The standard introduces a new vocabulary. The available filters are described in the series of technical specifications included in ISO 16610; they include the Gaussian filter, the spline filter, robust filters, morphological filters, wavelet filters, cascading filters, etc. 3D areal surface texture parameters are written with the capital letter S (or V) followed by a suffix of one or two small letters. They are calculated over the entire surface, no longer by averaging estimates calculated on a number of base lengths as is the case for 2D parameters. In contrast with 2D naming conventions, the name of a 3D parameter does not reflect the filtering context: Sa is always written Sa regardless of how the surface was filtered, whereas in 2D there is Pa, Ra or Wa depending on whether the profile is a primary, roughness or waviness profile. The height parameters involve only the statistical distribution of height values along the z axis. The spatial parameters involve the spatial periodicity of the data, specifically its direction. The hybrid parameters relate to the spatial shape of the data. The functional parameters are calculated from the material ratio curve (Abbott–Firestone curve). The feature parameters are derived from a segmentation of the surface into motifs (dales and hills); segmentation is carried out using a watershed method. A consortium of several companies started work in 2008 on a free implementation of 3D surface texture parameters.
The consortium, called OpenGPS, later focused its efforts on an XML file format (X3P) that was published as ISO 25178-72. Several commercial packages provide some or all of the parameters defined in ISO 25178, such as MountainsMap from Digital Surf, SPIP from Image Metrology and TrueMap 6 from TrueGage, as well as the open-source Gwyddion. Part 6 of the standard divides the usable technologies for 3D surface texture measurement into three families (line-profiling, areal-topography and area-integrating methods) and defines each of these technologies. The standard then explores a number of these technologies in detail and dedicates two documents to each of them. Parts 601 and 701 describe the contact profilometer, which uses a diamond stylus to measure the surface with the assistance of a lateral scanning device. Part 602 describes a non-contact profilometer incorporating a single-point white-light chromatic confocal sensor; the operating principle is based upon the chromatic dispersion of the white light source along the optical axis, via a confocal device, and the detection by a spectrometer of the wavelength that is focused on the surface. Part 604 describes coherence scanning interferometry (CSI), a class of optical surface measurement methods wherein the localization of interference fringes during a scan of optical path length provides a means to determine surface characteristics such as topography, transparent film structure, and optical properties. The technique encompasses instruments that use spectrally broadband, visible (white light) sources to achieve interference fringe localization; CSI uses either fringe localization alone or in combination with interference fringe phase. Part 606 describes focus variation, a non-contact areal method whose operating principle is based on microscope optics with a limited depth of field and a CCD camera: by scanning in the vertical direction, several images with different focus are gathered, and this data is then used to calculate a surface data set for roughness measurement.
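As a small illustration of how the height parameters mentioned above are computed, the sketch below evaluates Sa (arithmetic mean height) and Sq (root mean square height) over a height map. The definitions used are the standard statistical ones; the synthetic data and variable names are invented for the example, and a real measurement would first apply the levelling and filtering required by the specification operator.

```python
import numpy as np

# Synthetic height map in micrometres; a real instrument would supply
# a levelled, filtered surface as required by the specification operator.
rng = np.random.default_rng(0)
z = rng.normal(0.0, 0.5, size=(256, 256))

z = z - z.mean()                  # remove the mean plane (levelling)
Sa = np.mean(np.abs(z))           # arithmetic mean of absolute heights
Sq = np.sqrt(np.mean(z ** 2))     # root mean square height
print(f"Sa = {Sa:.3f} um, Sq = {Sq:.3f} um")
```

Unlike the 2D Ra, which averages estimates over several base lengths of a profile, these values are computed in a single pass over the whole areal data set, as the text above describes.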
https://en.wikipedia.org/wiki/ISO_25178
ISO 31 (Quantities and units, International Organization for Standardization, 1992) is a superseded international standard concerning physical quantities, units of measurement, their interrelationships and their presentation.[1] It was revised and replaced by ISO/IEC 80000. The standard comes in 14 parts. A second international standard on quantities and units was IEC 60027.[2] The ISO 31 and IEC 60027 standards were revised by the two standardization organizations in collaboration to integrate both into a joint standard, ISO/IEC 80000 – Quantities and units, in which the quantities and equations used with the SI are referred to as the International System of Quantities (ISQ). ISO/IEC 80000 supersedes both ISO 31 and part of IEC 60027. ISO 31-0 introduced several new words into the English language that are direct spelling-calques from the French.[3] Some of these words have been used in scientific literature.[4][5][6][7]
https://en.wikipedia.org/wiki/ISO_31
ISO 31-11:1992 was the part of international standard ISO 31 that defined mathematical signs and symbols for use in the physical sciences and technology. It was superseded in 2009 by ISO 80000-2:2009, which was subsequently revised in 2019 as ISO 80000-2:2019.[1] It included definitions for symbols for mathematical logic, set theory, arithmetic and complex numbers, functions and special functions and values, matrices, vectors, and tensors, coordinate systems, and miscellaneous mathematical relations.[2]
https://en.wikipedia.org/wiki/ISO_31-11
ISO 31-8 is the part of international standard ISO 31 that defines names and symbols for quantities and units related to physical chemistry and molecular physics. In the tables of quantities and their units, the ISO 31-8 standard shows symbols for substances as subscripts (e.g., c_B, w_B, p_B). It also notes that it is generally advisable to put symbols for substances and their states in parentheses on the same line, as in c(H2SO4). An annex contains a list of elements by atomic number, giving the names and standard symbols of the chemical elements from atomic number 1 (hydrogen, H) to 109 (unnilennium, Une). The list given in ISO 31-8:1992 was quoted from the 1988 IUPAC "Green Book" Quantities, Units and Symbols in Physical Chemistry, adding in some cases in parentheses the Latin name for information, where the standard symbol has no relation to the English name of the element. Since the 1992 edition of the standard was published, some elements with atomic number above 103 have been discovered and renamed. Symbols for chemical elements shall be written in roman (upright) type; the symbol is not followed by a full stop. Attached subscripts or superscripts specifying a nuclide or molecule have defined meanings and positions: the mass number of a nuclide is shown at the upper left, the proton (atomic) number at the lower left, the number of atoms of a nuclide in a molecule at the lower right, and the state of ionization or excitation at the upper right. pH is defined operationally as follows. For a solution X, first measure the electromotive force E_X of the galvanic cell, and then also measure the electromotive force E_S of a galvanic cell that differs from the above one only by the replacement of the solution X of unknown pH, pH(X), by a solution S of a known standard pH, pH(S). The pH of X is then obtained as pH(X) = pH(S) + (E_S − E_X)F/(RT ln 10), where F is the Faraday constant, R is the molar gas constant and T is the thermodynamic temperature. Defined this way, pH is a quantity of dimension 1, that is, it has no unit. Values pH(S) for a range of standard solutions S are listed in Definitions of pH scales, standard reference values, measurement of pH, and related terminology, Pure Appl. Chem. (1985), 57, pp. 531–542, where further details can be found. pH has no fundamental meaning; its official definition is a practical one. However, in the restricted range of dilute aqueous solutions having amount-of-substance concentrations less than 0.1 mol/L, and being neither strongly alkaline nor strongly acidic (2 < pH < 12), the definition is such that pH ≈ −lg(c(H+) y_1 / (mol/L)), where c(H+) denotes the amount-of-substance concentration of hydrogen ion H+ and y_1 denotes the activity coefficient of a typical uni-univalent electrolyte in the solution.
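To make the operational definition concrete, here is a short Python sketch of the two-cell calculation. The EMF readings and the buffer pH are invented for illustration; only the formula itself comes from the definition above.

```python
from math import log

R = 8.314462618   # molar gas constant, J/(mol K)
F = 96485.33212   # Faraday constant, C/mol

def ph_operational(E_X, E_S, pH_S, T=298.15):
    """Operational pH: pH(X) = pH(S) + (E_S - E_X) * F / (R * T * ln 10)."""
    return pH_S + (E_S - E_X) * F / (R * T * log(10))

# Hypothetical readings: a pH 7.000 standard buffer, EMFs in volts
print(round(ph_operational(E_X=0.4415, E_S=0.4000, pH_S=7.000), 3))  # ~6.298
```

At 25 °C the factor F/(RT ln 10) is about 16.9 V⁻¹, i.e. one pH unit corresponds to roughly 59 mV of cell EMF, which is why the two EMF measurements must be made carefully.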
https://en.wikipedia.org/wiki/ISO_31-8
ISO 657 (hot-rolled steel sections) is an ISO standard that specifies the tolerances for hot-finished circular, square and rectangular structural hollow sections and gives the dimensions and sectional properties for a range of standard sizes. The first edition as an International Standard constitutes a technical revision of ISO Recommendation R 657-1:1968. ISO 657 consists of 21 parts covering various shapes of section. ISO 657-1 specifies the dimensions of hot-rolled equal-leg angles.
https://en.wikipedia.org/wiki/ISO_657
ISO 7001 ("public information symbols") is a standard published by the International Organization for Standardization that defines a set of pictograms and symbols for public information. The latest version, ISO 7001:2023, was published in February 2023. [ 1 ] The set is the result of extensive testing in several countries and different cultures and have met the criteria for comprehensibility set up by the ISO. [ 1 ] The design process and testing of ISO 7001 symbols is governed by ISO 22727:2007, Graphical symbols — Creation and design of public information symbols — Requirements . [ 2 ] Common examples of public information symbols include those representing toilets, car parking, and information, and the International Symbol of Access . ISO 7001 was first released in October 1980, with a single amendment in 1985. The second edition was released in February 1990, with one amendment in 1993. The third edition, the latest edition was released in November 2007, and has received four amendments in 2013, 2015, 2016 and 2017. The use of the symbols of ISO 7001 is recommended by the European standard EN 17210. [ 3 ] ISO 7001 sets out some general guidelines for how symbols should be utilized, though large aspects are left up to the decision of the individual or entity designing signage for their facility. Symbols were created with the goal of being able to stand alone, without any accompanying text. However, text can be used to further aid in communicating the message, particularly in a situation where a custom symbol has been designed for a unique situation not covered by standard ISO 7001 symbols. Specific sizes for symbols are not provided in ISO 7001, though symbols are designed with the goal of being clearly understood regardless placed on something as small as a floor plan of a building or as a large as a giant sign hanging from a ceiling in a large open space. [ 2 ] While symbols are intended and recommended to be reproduced as presented in ISO 7001, the ISO acknowledges that situations may exist where a symbol should be modified due to national or cultural needs of a particular situation. Though key elements and the intent of the original symbol design must be retained to ensure it will be effective. [ 2 ] No colours are specified in ISO 7001, with the only guidance being to ensure clear contrast between the symbol and the sign background, as well as the environment the sign is in. There is a clear recommendation against using colors specified in ISO 3864 , due to possible confusion with safety signage using those colors. Of explicit concern is green and white, due to the risk of confusing a green and white 'PI PF 030' direction arrow symbol, for an ISO 7010 evacuation route arrow. [ 2 ] To avoid possible confusion with similar safety symbols of ISO 7010, symbols in ISO 7001 do not use the standard prohibition symbol consisting of a red circle with a red slash. Instead, either a red 'slash' or red 'cross' is used. A slash is used when an object is prohibited, and covers the entire symbol. A cross is used in situations where a behavior is prohibited, with the cross placed over the portion of the symbol depicting the behavior that is being prohibited rather than the entire symbol. [ 2 ] The slash and cross can be added to other symbols, such as a baggage cart to indicate 'no baggage carts'. ISO 7001 states that when symbols are designed, they should not have key elements that would be obstructed by the slash as positioned on the template provided in ISO 22727:2007. 
The slash or cross must be on top of the symbol, and should be red in colour.[2] The standard consists of 177 symbols, divided into seven categories: accessibility; public facilities; transport facilities; behaviour of the public; commercial facilities; tourism, culture and heritage; and sporting activities.[1] Symbol reference numbers are prefixed according to category: "AC" for accessibility, "PF" for public facilities, "TF" for transport facilities, "BP" for behaviour of the public,[b] "CF" for commercial facilities, "TC" for tourism, culture and heritage, and "SA" for sports activities.
https://en.wikipedia.org/wiki/ISO_7001
ISO 7010 is an International Organization for Standardization technical standard for graphical hazard symbols on hazard and safety signs, including those indicating emergency exits. It uses colours and principles set out in ISO 3864 for these symbols, and is intended to provide "safety information that relies as little as possible on the use of words to achieve understanding."[1] The standard was published in October 2003, splitting off from ISO 3864:1984, which set out the design standards and colours of safety signage, and merging in ISO 6309:1987, Fire protection – Safety signs, to create a unique and distinct standard for safety symbols.[2][3] As of September 2022, the latest version is ISO 7010:2019, with 9 published amendments.[4] This revision cancelled and replaced ISO 20712-1:2008, incorporating the water safety signs and beach safety flags specified in it.[5] ISO 7010 specifies five combinations of shape and colour to distinguish between the types of information presented.[6] ISO registers and lists recommended pictograms, which it calls "safety signs", on its website, ISO.org. The ISO standard provides a registered number for pictograms that have officially been made part of the ISO 7010 standard. Corresponding with the categories above, in ISO parlance, "E" numbers refer to emergency (signs showing a safe condition), "F" numbers refer to fire protection, "P" numbers refer to prohibited actions, "M" numbers refer to mandatory actions, and "W" numbers refer to warnings of hazards.[8] According to the related ISO 3864-1 standard, if a symbol does not exist for a situation, the recommended solution is to use the relevant 'general' symbol (M001, P001, W001) along with a supplemental text message.[9] ISO 7010 states, on all symbols with a first aid cross, that it "may be replaced with another element appropriate to cultural requirements". In countries with a Muslim-majority population, an appropriate symbol is the crescent. The following symbols were previously part of ISO 7010 but have since been withdrawn from the standard; arrows of type D as defined in ISO 3864-3 are to be used in their stead.
https://en.wikipedia.org/wiki/ISO_7010
ISO 7027:1999 is an ISO standard for water quality that enables the determination of turbidity.[1] The ISO 7027 technique is used to determine the concentration of suspended particles in a sample of water by measuring the incident light scattered at right angles from the sample. The scattered light is captured by a photodiode, which produces an electronic signal that is converted to a turbidity value.
https://en.wikipedia.org/wiki/ISO_7027
ISO 704 (Terminology work – Principles and methods) is an ISO standard that establishes the basic principles and methods for preparing and compiling terminologies both inside and outside the framework of standardization, and describes the links between objects, concepts, and their terminological representations. It also establishes general principles governing the formation of designations and the formulation of definitions. A full understanding of these principles requires some background knowledge of terminology work. The principles are general in nature, and the document is applicable to terminology work in scientific, technological, industrial, administrative and other fields of knowledge. ISO 704:2009 does not stipulate procedures for the layout of international terminology standards, which are treated in ISO 10241.
https://en.wikipedia.org/wiki/ISO_704
ISO 860 (Terminology work – Harmonization of concepts and terms) is an ISO standard that deals with the principles on which concept systems can be harmonized and with the development of harmonized terminologies, in order to improve the efficiency of interlinguistic communication. The standard specifies a methodology for the harmonization of concepts, definitions, terms, concept systems, and term systems. It is a natural extension of ISO 704. The standard addresses two types of harmonization: concept harmonization and term harmonization. Concept harmonization means the reduction or elimination of minor differences between two or more closely related concepts. Concept harmonization is not the transfer of a concept system to another language; it involves the comparison and matching of concepts and concept systems in one or more languages or subject fields. Term harmonization refers to the designation of a single concept (in different languages) by terms that reflect similar characteristics or similar forms. Term harmonization is possible only when the concepts the terms represent are almost exactly the same. The standard contains a flow chart for the harmonization process and a description of the procedures for performing it. ISO 860:2007 specifies a methodological approach to the harmonization of concepts, concept systems, definitions and terms. It applies to the development of harmonized terminologies at either the national or international level, in either a monolingual or a multilingual context. It replaces ISO 860:1996.
https://en.wikipedia.org/wiki/ISO_860
ISO 999 (Information and documentation – Guidelines for the content, organization and presentation of indexes) is an ISO standard which provides the information industry with guidelines for the content, organization and presentation of indexes to a wide range of documents, including books, periodicals, electronic documents, films, images, maps, and three-dimensional objects.[1] It covers the choice and form of headings and subheadings used in index entries once the subjects to be indexed have been determined. ISO 999:1996 is a complete revision and expansion of the first (1975) edition of this International Standard on indexes. It was prepared by ISO Technical Committee (TC) 46, Subcommittee (SC) 9, which develops International Standards for the identification and description of information resources.
https://en.wikipedia.org/wiki/ISO_999
The International Space Station (ISS) Environmental Control and Life Support System (ECLSS) is a life support system that provides or controls atmospheric pressure, fire detection and suppression, oxygen levels, proper ventilation, waste management and water supply. It was jointly designed and tested by NASA's Marshall Space Flight Center, UTC Aerospace Systems, Boeing, Lockheed Martin, and Honeywell.[1] The system has three primary functions: water recovery, air revitalization, and oxygen generation, the purpose of which is to ensure a safe and comfortable environment for personnel aboard the ISS. The system also serves as a potential proof of concept for more advanced systems building on the ECLSS for use in deep space missions.[1] The ISS has two water recovery systems. Zvezda contains a water recovery system that processes water vapour from the atmosphere; this water could be used for drinking in an emergency but is normally fed to the Elektron system to produce oxygen. The American segment has a Water Recovery System, installed during STS-126,[2] that can process water vapour collected from the atmosphere and urine into water intended for drinking. The Water Recovery System was installed initially in Destiny on a temporary basis in November 2008[2] and moved into Tranquility (Node 3) in February 2010.[3] The Water Recovery System consists of a Urine Processor Assembly and a Water Processor Assembly, housed in two of the three ECLSS racks.[4] The Urine Processor Assembly uses a low-pressure vacuum distillation process with a centrifuge to compensate for the lack of gravity and thus aid in separating liquids and gases.[5] The Urine Processor Assembly is designed to handle a load of 9 kg/day, corresponding to the needs of a 6-person crew.[2] Although the design called for the recovery of 85% of the water content, subsequent experience with calcium sulfate precipitation[3] (in the free-fall conditions present on the ISS, calcium levels in urine are elevated due to bone density loss) has led to a revised operational level of recovering 70% of the water content. Water from the Urine Processor Assembly and from waste water sources is combined to feed the Water Processor Assembly, which filters out gases and solid materials before passing the water through filter beds and then a high-temperature catalytic reactor assembly. The water is then tested by onboard sensors, and unacceptable water is cycled back through the Water Processor Assembly.[4][5] The Volatile Removal Assembly flew on STS-89 in January 1998 to demonstrate the Water Processor Assembly's catalytic reactor in microgravity. A Vapour Compression Distillation Flight Experiment flew, but was destroyed, in STS-107.[5] The distillation assembly of the Urine Processor Assembly failed on 21 November 2008, one day after its initial installation.[2] One of the three centrifuge speed sensors was reporting anomalous speeds, and high centrifuge motor current was observed. This was corrected by re-mounting the distillation assembly without several rubber vibration isolators. The distillation assembly failed again on 28 December 2008 due to a high motor current and was replaced on 20 March 2009. Ultimately, during post-failure testing, one centrifuge speed sensor was found to be out of alignment and a compressor bearing had failed.[3] Several systems are currently used on board the ISS to maintain the spacecraft's atmosphere, which is similar to the Earth's.
[6] Normal air pressure on the ISS is 101.3 kPa (14.7 psi), the same as at sea level on Earth. "While members of the ISS crew could stay healthy even with the pressure at a lower level, the equipment on the Station is very sensitive to pressure. If the pressure were to drop too far, it could cause problems with the Station equipment."[7] The Elektron system aboard Zvezda and a similar system in Destiny generate oxygen aboard the station.[8] The crew has a backup option in the form of bottled oxygen and Solid Fuel Oxygen Generation (SFOG) canisters.[9] Carbon dioxide is removed from the air by the Vozdukh system in Zvezda. One Carbon Dioxide Removal Assembly (CDRA) is located in the U.S. Lab module, and one is in the US Node 3 module. Other by-products of human metabolism, such as methane from flatulence and ammonia from sweat, are removed by activated charcoal filters or by the Trace Contaminant Control System (TCCS).[9] Carbon dioxide and trace contaminants are removed by the Air Revitalization System, a NASA rack placed in Tranquility that provides a Carbon Dioxide Removal Assembly (CDRA), a Trace Contaminant Control Subassembly (TCCS) to remove hazardous trace contamination from the atmosphere, and a Major Constituent Analyser (MCA) to monitor nitrogen, oxygen, carbon dioxide, methane, hydrogen, and water vapour. The Air Revitalization System was flown to the station aboard STS-128 and was temporarily installed in the Japanese Experiment Module pressurised module. The system was scheduled to be transferred to Tranquility after that module arrived, and was installed during Space Shuttle Endeavour mission STS-130.[10] The Oxygen Generating System (OGS) is a NASA rack that electrolyses water from the Water Recovery System to produce oxygen and hydrogen, like the Russian Elektron oxygen generator. The oxygen is delivered to the cabin atmosphere. The unit is installed in the Destiny module. During a spacewalk, STS-117 astronauts installed a hydrogen vent valve required to operate the OGS.[11] The OGS was delivered in 2006 by STS-121 and became operational on 12 July 2007.[12] From 2001, the US orbital segment had used oxygen stored in a pressurized tank on the Quest airlock module, or from the Russian service module. Prior to the activation of the Sabatier system in October 2010, hydrogen and carbon dioxide extracted from the cabin were vented overboard.[5] In October 2010, the OGS stopped working properly because its water input had become slightly too acidic. The station crew relied on the Elektron oxygen generator and oxygen brought up from Earth for six months. In March 2011, STS-133 delivered a repair kit, and the OGS was brought back into full operation.[13] The Advanced Closed Loop System (ACLS) is an ESA rack that converts carbon dioxide (CO2) and water into oxygen and methane. The CO2 is removed from the station air by an amine scrubber, then removed from the scrubber by steam. Half of the CO2 is converted to methane and water by a Sabatier reaction; the other half is jettisoned from the ISS along with the methane that is generated. The water is recycled by electrolysis, producing hydrogen (used in the Sabatier reactor) and oxygen. This is very different from the NASA oxygen-generating rack, which relies on a steady supply of water from Earth in order to generate oxygen. This water-saving capability reduces the water needed in cargo resupply by about 400 litres per year.
By itself it can regenerate enough oxygen for three astronauts.[14] The ACLS was delivered on the Kounotori 7 launch in September 2018 and installed in the Destiny module as a technology demonstrator (planned to operate for one to two years).[15] It was successful, and it remains on board the ISS permanently.[16] ACLS has three subsystems. The NASA Sabatier system (used from 2010 until 2017) closed the oxygen loop in the ECLSS by combining waste hydrogen from the Oxygen Generating System with carbon dioxide from the station atmosphere, using the Sabatier reaction to recover oxygen. The outputs of this reaction were water and methane. The water was recycled to reduce the total amount of water carried to the station from Earth, and the methane was vented overboard through the hydrogen vent line installed for the Oxygen Generating System.[17] Elektron is a Russian electrolytic oxygen generator, which was also used on Mir. It uses electrolysis to convert water molecules reclaimed from other uses on board the station into oxygen and hydrogen. The oxygen is vented into the cabin and the hydrogen is vented into space. The three Elektron units on the ISS have been plagued with problems, frequently forcing the crew to use backup sources (either bottled oxygen or the Vika system discussed below). To support a crew of six, NASA added the Oxygen Generating System discussed above. In 2004, the Elektron unit shut down due to (initially) unknown causes. Two weeks of troubleshooting resulted in the unit starting up again, then immediately shutting down. The cause was eventually traced to gas bubbles in the unit, which remained non-functional until a Progress resupply mission in October 2004.[18] In 2005, ISS personnel tapped into the oxygen supply of the recently arrived Progress resupply spacecraft when the Elektron unit failed.[19] In 2006, fumes from a malfunctioning Elektron unit prompted NASA flight engineers to declare a "spacecraft emergency". A burning smell led the ISS crew to suspect another Elektron fire, but the unit was only "very hot". A leak of corrosive, odourless potassium hydroxide forced the ISS crew to don gloves and face masks. It has been conjectured that the smell came from overheated rubber seals. The incident occurred shortly after STS-115 left and just before the arrival of a resupply mission (including space tourist Anousheh Ansari).[20] The Elektron did not come back online until November 2006, after new valves and cables arrived on the October 2006 Progress resupply vessel.[21] The ERPTC (Electrical Recovery Processing Terminal Current) was added to the ISS to prevent harm to the systems. In October 2020, the Elektron system failed and had to be deactivated for a short time before being repaired.[22] The Vika or TGK oxygen generator, also known as Solid Fuel Oxygen Generation (SFOG) when used on the ISS, is a chemical oxygen generator originally developed by Roscosmos for Mir, and it provides an alternative oxygen-generating system.[23] It uses canisters of solid lithium perchlorate, which decomposes into gaseous oxygen and solid lithium chloride when heated.[23] Each canister can supply the oxygen needs of one crew member for one day.[24] Another Russian system, Vozdukh (Russian: Воздух, meaning "air"), removes carbon dioxide from the air using regenerable absorbers of carbon dioxide gas.[25] An incident occurred in 2018 when one of the two Vozdukh units (also known as SKVs) deactivated without a command, but it was reactivated a little while later.[26]
Temperature and Humidity Control (THC) is the subsystem of the ISS ECLSS that maintains a steady air temperature and controls moisture in the station's air supply. The Thermal Control System (TCS) is a component of the THC system and subdivides into the Active Thermal Control System (ATCS) and the Passive Thermal Control System (PTCS). Humidity can be controlled by lowering or raising the temperature and by adding moisture to the air. Fire Detection and Suppression (FDS) is the subsystem devoted to identifying that there has been a fire and taking steps to fight it.
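To give a feel for the stoichiometry behind the Sabatier systems described above (CO2 + 4 H2 → CH4 + 2 H2O), here is a small mass-balance sketch in Python. The 1 kg/day CO2 figure is an illustrative assumption of roughly one crew member's daily output, not a value taken from the flight documentation.

```python
# Molar masses in g/mol
M = {"CO2": 44.01, "H2": 2.016, "CH4": 16.04, "H2O": 18.015}

def sabatier(co2_kg):
    """Mass balance of CO2 + 4 H2 -> CH4 + 2 H2O for a given CO2 input."""
    n_co2 = co2_kg * 1000 / M["CO2"]          # moles of CO2 reacted
    return {
        "H2_kg":  4 * n_co2 * M["H2"] / 1000,  # hydrogen consumed
        "CH4_kg": n_co2 * M["CH4"] / 1000,     # methane vented overboard
        "H2O_kg": 2 * n_co2 * M["H2O"] / 1000, # water recovered for recycling
    }

# Illustrative: about 1 kg of CO2 per crew member per day
print(sabatier(1.0))
# {'H2_kg': 0.183..., 'CH4_kg': 0.364..., 'H2O_kg': 0.818...}
```

The numbers show why the reaction is worth running: roughly 0.8 kg of water is recovered per kilogram of CO2 processed, water that would otherwise have to be launched from Earth.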
https://en.wikipedia.org/wiki/ISS_ECLSS
In electrochemistry, ITIES (interface between two immiscible electrolyte solutions)[1][2][3] is an electrochemical interface that is either polarisable or polarised. An ITIES is polarisable if one can change the Galvani potential difference, in other words the difference of inner potentials between the two adjacent phases, without noticeably changing the chemical composition of the respective phases (i.e. without noticeable electrochemical reactions taking place at the interface). An ITIES is polarised if the distribution of the different charges and redox species between the two phases determines the Galvani potential difference. Usually, one electrolyte is an aqueous electrolyte composed of hydrophilic ions, such as NaCl dissolved in water, and the other is a lipophilic salt, such as tetrabutylammonium tetraphenylborate, dissolved in an organic solvent immiscible with water, such as nitrobenzene or 1,2-dichloroethane. Three major classes of charge transfer reactions can be studied at an ITIES: ion transfer, assisted ion transfer, and heterogeneous electron transfer. The Nernst equation for an ion transfer reaction reads Δ_o^w φ = Δ_o^w φ_i⊖ + (RT/z_iF) ln(a_i^o / a_i^w), where Δ_o^w φ_i⊖ is the standard transfer potential, defined as the Gibbs energy of transfer expressed on a voltage scale. The Nernst equation for a single heterogeneous electron transfer reaction is written in terms of Δ_o^w φ_ET⊖, the standard redox potential for the interfacial transfer of electrons, defined as the difference between the standard redox potentials of the two redox couples, both referred to the aqueous standard hydrogen electrode (SHE). To study charge transfer reactions at an ITIES, a four-electrode cell is used: two reference electrodes control the polarisation of the interface, and two counter electrodes made of noble metals pass the current. The aqueous supporting electrolyte must be hydrophilic, such as LiCl, and the organic electrolyte must be lipophilic, such as tetraheptylammonium tetra-pentafluorophenyl borate. In contrast to a neutral solute, the partition coefficient of an ion depends on the Galvani potential difference between the two phases. When a salt is distributed between the two phases, the Galvani potential difference is called the distribution potential; it is obtained from the respective Nernst equations for the cation C+ and the anion A− as Δ_o^w φ = (Δ_o^w φ_C+⊖ + Δ_o^w φ_A−⊖)/2 + (RT/2F) ln((γ_C+^o γ_A−^w)/(γ_C+^w γ_A−^o)), where γ represents the activity coefficient.
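As a quick numerical illustration of the ion-transfer Nernst equation above, the Python sketch below evaluates the Galvani potential difference for a hypothetical monovalent cation; the standard transfer potential and activities are invented for the example.

```python
from math import log

R, F = 8.314462618, 96485.33212   # J/(mol K), C/mol

def galvani_ion_transfer(dphi_std, z, a_org, a_water, T=298.15):
    """Nernst equation for ion transfer across an ITIES:
    dphi = dphi_std + (RT / zF) * ln(a_org / a_water)."""
    return dphi_std + (R * T / (z * F)) * log(a_org / a_water)

# Hypothetical monovalent cation: standard transfer potential 0.20 V,
# ten times more dilute in the organic phase than in the aqueous phase
print(galvani_ion_transfer(0.20, z=+1, a_org=0.001, a_water=0.01))  # ~0.141 V
```

At 25 °C the RT/F factor is about 25.7 mV, so each decade of activity ratio shifts the potential of a monovalent ion by roughly 59 mV, exactly as in conventional electrode Nernstian behaviour.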
https://en.wikipedia.org/wiki/ITIES
ITU-R 468 (originally defined in CCIR recommendation 468-4, therefore formerly also known as CCIR weighting ; sometimes referred to as CCIR-1k ) is a standard relating to noise measurement , widely used when measuring noise in audio systems. The standard, [ 1 ] now referred to as ITU-R BS.468-4, defines a weighting filter curve, together with a quasi-peak rectifier having special characteristics as defined by specified tone-burst tests. It is currently maintained by the International Telecommunication Union , who took it over from the CCIR. It is used especially in the UK, Europe, and former countries of the British Empire such as Australia and South Africa. [ citation needed ] It is less well known in the USA, where A-weighting has always been used. [ 2 ] M-weighting is a closely related filter, an offset version of the same curve, without the quasi-peak detector. The A-weighting curve was based on the 40 phon equal-loudness contour derived initially by Fletcher and Munson (1933). Originally incorporated into an ANSI standard for sound level meters , A-weighting was intended for measurement of the audibility of sounds by themselves. It was never specifically intended for the measurement of the more random (near- white or pink ) noise in electronic equipment, though it has been used for this purpose by most microphone manufacturers since the 1970s. The human ear responds quite differently to clicks and bursts of random noise, and it is this difference that gave rise to the CCIR-468 weighting curve (now supported as an ITU standard), which together with quasi-peak measurement (rather than the rms measurement used with A-weighting) became widely used by broadcasters throughout Britain, Europe, and former British Commonwealth countries, where engineers were heavily influenced by BBC test methods. Telephone companies worldwide have also used methods similar to ITU-R 468 weighting with quasi-peak measurement to describe objectionable interference induced in one telephone circuit by switching transients in another. Developments in the 1960s, in particular the spread of FM broadcasting and the development of the compact audio cassette with Dolby-B Noise Reduction , alerted engineers to the need for a weighting curve that gave subjectively meaningful results on the typical random noise that limited the performance of broadcast circuits, equipment, and radio links. A-weighting was not giving consistent results, especially on FM radio transmissions and Compact Cassette recordings, where preemphasis of high frequencies resulted in increased noise readings that did not correlate with subjective effect. Early efforts to produce a better weighting curve led to a DIN standard that was adopted for European Hi-Fi equipment measurement for a while. Experiments at the BBC led to BBC Research Department Report EL-17, The Assessment of Noise in Audio Frequency Circuits , [ 3 ] in which experiments on numerous test subjects were reported, using a variety of noises ranging from clicks to tone-bursts to pink noise . Subjects were asked to compare these with a 1 kHz tone, and final scores were then compared with measured noise levels using various combinations of weighting filter and quasi-peak detector then in existence (such as those defined in a now discontinued German DIN standard). This led to the CCIR-468 standard, which defined a new weighting curve and quasi-peak rectifier. The origin of the current ITU-R 468 weighting curve can be traced to 1956.
The 1968 BBC EL-17 report discusses several weighting curves, including one identified as D.B.P., which was chosen as superior to the alternatives: A.S.A, C.C.I.F and O.I.R.T. The report's graph of the D.B.P. curve is identical to that of the ITU-R 468 curve, except that the latter extends to slightly lower and higher frequencies. The BBC report states that this curve was given in a "contribution by the D.B.P. (The Telephone Administration of the Federal German Republic) in the Red Book Vol. 1 1957 covering the first plenary assembly of the CCITT (Geneva 1956)". D.B.P. is Deutsche Bundespost , the German post office, which provided telephone service in Germany as the GPO did in the UK. The BBC report states "this characteristic is based on subjective tests described by Belger." and cites a 1953 paper by E. Belger. Dolby Laboratories took up the new CCIR-468 weighting for use in measuring noise on their noise reduction systems, both in cinema (Dolby A) and on cassette decks (Dolby B), where other methods of measurement were failing to show up the advantage of such noise reduction. Some Hi-Fi column writers took up 468-weighting enthusiastically, observing that it reflected the roughly 10 dB improvement in noise observed subjectively on cassette recordings when using Dolby B, while other methods could indicate an actual worsening in some circumstances, because they did not sufficiently attenuate noise above 10 kHz. CCIR Recommendation 468-1 was published soon after this report, and appears to have been based on the BBC work. Later versions up to CCIR 468-4 differed only in minor changes to permitted tolerances. This standard was then incorporated into many other national and international standards (IEC, BSI, JIS, ITU) and adopted widely as the standard method for measuring noise in broadcasting, professional audio, and ' Hi-Fi ' specifications throughout the 1970s. When the CCIR ceased to exist, the standard was officially taken over by the ITU-R ( International Telecommunication Union ). Current work on this standard occurs primarily in the maintenance of IEC 60268, the international standard for sound systems. [ citation needed ] The CCIR curve differs greatly from A-weighting in the 5 to 8 kHz region, where it peaks to +12.2 dB at 6.3 kHz, the region in which we appear to be extremely sensitive to noise. While it has been said (incorrectly) that the difference is due to a requirement for assessing noise intrusiveness in the presence of programme material, rather than just loudness, the BBC report makes clear that this was not the basis of the experiments. The real reason for the difference probably relates to the way in which our ears analyse sounds in terms of spectral content along the cochlea . This behaves like a set of closely spaced filters with a roughly constant Q factor , that is, bandwidths proportional to their centre frequencies. High-frequency hair cells would therefore be sensitive to a greater proportion of the total energy in noise than low-frequency hair cells. Though hair-cell responses are not exactly constant Q, and matters are further complicated by the way in which the brain integrates adjacent hair-cell outputs, the resultant effect appears roughly as a tilt centred on 1 kHz imposed on the A-weighting.
[ citation needed ] Dependent on spectral content, 468-weighted measurements of noise are generally about 11 dB higher than A-weighted, and this is probably a factor in the recent trend away from 468-weighting in equipment specifications as cassette tape use declines. [ citation needed ] The 468 specification covers both weighted and 'unweighted' (using a 22 Hz to 22 kHz 18 dB/octave bandpass filter) measurement, and specifies that both use a very special quasi-peak rectifier with carefully devised dynamics (A-weighting, by contrast, is normally paired with RMS detection [ citation needed ] ). Rather than having a simple 'integration time', this detector requires implementation with two cascaded 'peak followers', each with different attack time-constants carefully chosen to control the response to both single and repeating tone-bursts of various durations. This ensures that measurements on impulsive noise take proper account of our reduced hearing sensitivity to short bursts. This quasi-peak measurement is also called psophometric weighting . [ citation needed ] This was once more important because outside broadcasts were carried over 'music circuits' that used telephone lines, with clicks from Strowger and other electromechanical telephone exchanges. It now finds fresh relevance in the measurement of noise on computer 'audio cards', which commonly suffer clicks as drives start and stop. [ citation needed ] 468-weighting is also used in weighted distortion measurement at 1 kHz. Weighting the distortion residue after removal of the fundamental emphasises high-order harmonics, but only up to 10 kHz or so, where the ear's response falls off. This results in a single measurement (sometimes called distortion residue measurement) which has been claimed to correspond well with subjective effect, even for power amplifiers, where crossover distortion is known to be far more audible than normal THD ( total harmonic distortion ) measurements would suggest. 468-weighting is still demanded by the BBC and many other broadcasters, [ 4 ] with increasing awareness of its existence and of the fact that it is more valid on random noise, where pure tones do not exist. [ citation needed ] Often both A-weighted and 468-weighted figures are quoted for noise, especially in microphone specifications. While not intended for this application, the 468 curve has also been used (offset to place the 0 dB point at 2 kHz rather than 1 kHz) as "M-weighting" in standards such as ISO 21727 [ 5 ] intended to gauge the loudness or annoyance of cinema soundtracks. This application of the weighting curve does not include the quasi-peak detector specified in the ITU standard. The description here is not the full definitive standard. The weighting curve is specified by both a circuit diagram of a weighting network and a table of amplitude responses. The weighting network is a passive circuit with source and sink impedances of 600 ohms (resistive), with component values taken directly from the ITU-R 468 specification. Since this circuit is purely passive, it cannot create the +12.2 dB of gain required at 6.3 kHz; any results must be corrected by a factor of 8.1333, or +18.2 dB. The values of the amplitude response table differ slightly from those resulting from the circuit diagram, e.g. because of the finite resolution of the numerical values. The standard states that the 33.06 nF capacitor may be adjusted, or an active filter may be used.
The circuit can be modelled, with some calculus, to give a formula for the amplitude response in dB at any given frequency. [ citation needed ] The quasi-peak dynamics are verified with tone-burst tests: single 5 ms bursts of 5 kHz tone, and 5 kHz bursts at various repetition rates. The 'unweighted' measurement uses a 22 Hz high-pass filter and a 22 kHz low-pass filter with slopes of 18 dB per octave or greater.
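The behaviour of the two cascaded peak followers described above can be illustrated with a minimal sketch. This is not the ITU-R 468 specification itself: the attack and decay time constants below are placeholders chosen only to show the structure, and a real implementation must be tuned against the standard's tone-burst tolerance tables.

```python
import numpy as np

def peak_follower(x, fs, attack_s, decay_s):
    """One rectifying peak follower: fast charge (attack), slow discharge (decay)."""
    a_att = np.exp(-1.0 / (fs * attack_s))
    a_dec = np.exp(-1.0 / (fs * decay_s))
    y = np.zeros_like(x)
    state = 0.0
    for i, v in enumerate(np.abs(x)):
        coeff = a_att if v > state else a_dec
        state = coeff * state + (1.0 - coeff) * v
        y[i] = state
    return y

def quasi_peak(x, fs, t1=0.002, t2=0.010, decay=0.5):
    """Two cascaded peak followers with different attack times.

    t1, t2 and decay are illustrative placeholder values, NOT the
    ITU-R 468 constants.
    """
    stage1 = peak_follower(x, fs, attack_s=t1, decay_s=decay)
    stage2 = peak_follower(stage1, fs, attack_s=t2, decay_s=decay)
    return stage2.max()

# A 5 ms burst of 5 kHz tone reads lower than a continuous tone,
# mimicking the ear's reduced sensitivity to short bursts.
fs = 48000
t = np.arange(int(fs * 0.5)) / fs
tone = np.sin(2 * np.pi * 5000 * t)
burst = tone * (t < 0.005)
print(quasi_peak(tone, fs), quasi_peak(burst, fs))
```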
https://en.wikipedia.org/wiki/ITU-R_468_noise_weighting
The ITU-T Study Group 16 ( SG16 ) is a statutory group of the ITU Telecommunication Standardization Sector (ITU-T) concerned with multimedia coding, systems and applications, such as video coding standards. It is responsible for standardization of the "H.26x" line of video coding standards, the "T.8xx" line of image coding standards, and related technologies, as well as various collaborations with the World Health Organization , including on safe listening ( H.870 ) and the accessibility of e-health ( F.780.2 ). It is also the parent body of VCEG and of various Focus Groups, such as the ITU-WHO Focus Group on Artificial Intelligence for Health and its AI for Health Framework . [ 1 ] Administratively, SG16 is a statutory meeting of the World Telecommunication Standardization Assembly (WTSA), [ 2 ] which creates the ITU-T Study Groups and appoints their management teams. The secretariat is provided by the Telecommunication Standardization Bureau (under Director Seizo Onoe ). WTSA instructed the ITU to hold the Global Standards Symposium, open to the public, as part of its deliberations. The goal of SG16 is to produce Recommendations (international standards) for multimedia , including e.g. video coding , audio coding and image coding methods, such as H.264 , H.265 , H.266 , [ 3 ] and JPEG , as well as other types of multimedia-related standards such as F.780.2 , H.810 , and H.870 on safe listening, together with the World Health Organization . It is also responsible for "the coordination of related studies across the various ITU-T SGs." Additionally, it is the lead study group on ubiquitous and Internet of Things (IoT) applications; telecommunication/ICT accessibility for persons with disabilities; intelligent transport system (ITS) communications; e-health; and Internet Protocol television (IPTV). [ 4 ] Together with ITU-T Study Group 17 and AI for Good , the study group has been developing technology specifications under Trustworthy AI , including items on homomorphic encryption , secure multi-party computation , and federated learning .
https://en.wikipedia.org/wiki/ITU-T_Study_Group_16
ITW Mima Packaging Systems is the European marketing division of ITW's Specialty Systems businesses, manufacturing fully automatic stretch wrapping machines in Finland, semi-automatic and automatic machines in Bulgaria, and film in Belgium and Ireland. Mima was founded in 1976 in the United States to manufacture stretch wrapping machinery, and was acquired by ITW in 1986. Alongside this, Matti Haloila started his own company in Finland and began manufacturing Haloila semi-automatic stretch wrappers in 1976. In 1983, Haloila launched an automatic, rotating-ring stretch wrapper under the brand name Octopus. Haloila became part of Illinois Tool Works (ITW) in 1995, and shortly after this, ITW acquired the stretch film business from Mobil [ 1 ] and ITW Mima Packaging Systems was formed. ITW Mima Packaging Systems manufactures stretch films in Belgium and Ireland and stretch wrappers in Bulgaria and Finland. In 2006, Mima launched the Octopus Twin, a wrapping machine capable of wrapping 150 pallets per hour. ITW Mima delivered their 3000th Octopus stretch wrapper in 2008. [ 2 ] The Octopus is also currently manufactured for the US market in Canada by ITW Muller. Other Haloila wrapping machines include the Cobra, Ecomat and Rolle. ITW Mima Packaging Systems is a member of the Processing and Packaging Machinery Association (PPMA). [ 3 ]
https://en.wikipedia.org/wiki/ITW_Mima_Packaging_Systems
The IT History Society ( ITHS ) is an organization that supports the history and scholarship of information technology by encouraging, fostering, and facilitating archival and historical research. Formerly known as the Charles Babbage Foundation , it advises historians, promotes collaboration among academic organizations and museums, and assists IT corporations in preparing and archiving their histories for future studies. The IT History Society [ 2 ] provides background information to those with an interest in the history of Information Technology , including papers that provide advice on how to perform historical work and how historical activities can benefit private sector organizations. It tracks historical projects seeking funding as well as projects underway and completed. It maintains online, publicly available lists of events pertaining to IT history, IT history resources, an IT Honor Roll acknowledging more than 700 individuals who have made a noteworthy contribution to the information technology industry, and a database of notable technology quotes. A continuing project aggregates the locations and content of IT history archival information around the world to facilitate and encourage IT history research and scholarship. [ 3 ] [ citation needed ] This International Database of Historical and Archival Sites currently consists of 1,663 international information technology historical and archival collections encompassing over 49.8 million documents. [ 3 ] An IT Hardware database has been added consisting of 12,187 entries, along with an IT Honor Roll with 1,031 entries and a Technical Quotes database with over 1,000 entries. These databases are added to on a regular basis; IT Software and IT Companies databases will debut soon. ITHS holds an annual meeting and conference. The International Charles Babbage Society was founded in 1978 and operated out of Palo Alto, California . The following year the American Federation of Information Processing Societies (AFIPS) became a principal sponsor of the society, which was renamed the Charles Babbage Institute . [ 4 ] [ 5 ] In 1980, the institute moved to the University of Minnesota , which contracted with the principals of the Charles Babbage Institute to sponsor and house the institute. [ 6 ] A new entity, the Charles Babbage Foundation , was created to help support and govern the institute, in partnership with the university. In 1989, CBI became an organized research unit of the university. Around 2000, CBF broadened its mission to support the history of information technology through other organizations, collaborating, for example, with the Sloan Foundation , the Software History Center , and the Computer History Museum in experimenting with Internet-based archival and historical research. In 2002, the Charles Babbage Foundation broadened its mission to support the entire IT history community. In 2007, CBF changed its name to the IT History Society and reworked its programs to better support the IT history community. The Charles Babbage Institute is a research center at the University of Minnesota specializing in the history of information technology , particularly the history of digital computing, programming/software, and computer networking since 1935. The institute is named for Charles Babbage , the nineteenth-century English inventor of the programmable computer. [ 7 ] The institute is located in Elmer L. Andersen Library at the University of Minnesota Libraries in Minneapolis , Minnesota .
In addition to holding important historical archives, in paper and electronic form, its staff of historians and archivists conduct and publish historical and archival research that promotes the study of the history of information technology internationally. [ 8 ] CBI also encourages research in the area and related topics (such as archival methods ); to do this, it offers graduate fellowships [ 9 ] and travel grants, [ 10 ] organizes conferences and workshops, and participates in public programming. It also serves as an international clearinghouse of resources for the history of information technology. Also valuable for researchers is its extensive collection of oral history interviews, more than 400 in total. Oral histories with important early figures in the field have been conducted by CBI staff and collaborating colleagues. [ 11 ] Owing to the poorly documented state of many early computer developments, these oral histories are immensely valuable documents. One author called the set of CBI oral histories "a priceless resource for any historian of computing." [ 12 ] Most of CBI's oral histories are transcribed and available online. [ 13 ] The archival collection also contains manuscripts ; records of professional associations ; corporate records (including the Burroughs corporate records and the Control Data corporate records, among many others); trade publications ; periodicals ; manuals and product literature for older systems; photographic material (stills and moving); and a variety of other rare reference materials. It is now a center at the University of Minnesota , located on its Twin Cities , Minneapolis campus, where it is housed in the Elmer L. Andersen Library on the West Bank. The CBI has collections of archival papers and oral histories from many notable figures in computing including: CBI was founded in 1978 by Erwin Tomash and associates as the International Charles Babbage Society , and initially operated in Palo Alto, California ; its sponsorship by AFIPS, renaming, and move to the University of Minnesota are described above.
https://en.wikipedia.org/wiki/IT_History_Society
An Information Technology Assistant (commonly abbreviated to IT Assistant ) is a person who works as an assistant in the IT business . Because the term " Information Technology " is commonly abbreviated "IT", job seekers and recruiters often use the abbreviated version of the title. An IT Assistant typically receives supervision and direction from an Information Technology Manager and Network Specialist , and the role may require flexible work schedules including early morning, weekend and evening hours. [ 1 ] Applicants should have: As an Information Technology Assistant, you would resolve user problems by communicating with end users and by translating technical problems from end-users to technical support staff. You would install hardware , software , and peripherals ; run diagnostic software; utilize mainframe and/or client-server software to provide system security access; and accommodate user requests for computer hardware and software. [ 3 ]
https://en.wikipedia.org/wiki/IT_assistant
IT infrastructure deployment typically involves defining the sequence of operations or steps, often referred to as a deployment plan , that must be carried out to deliver changes into a target system environment . The individual operations within a deployment plan can be executed manually or automatically. Deployment plans are usually well defined and approved prior to the deployment date. In situations where there is a high potential risk of failure in the target system environment, deployment plans may be rehearsed to ensure there are no issues during actual deployment. Structured, repeatable deployments are also prime candidates for automation, which drives quality and efficiency. The objective of deployment planning is to ensure that changes are deployed into a target system environment in a structured and repeatable manner, in order to reduce the risk of failure. The purpose of release and deployment planning is to: A deployment template is an unbound deployment plan which defines the steps of execution but not the profiles and systems. Deployment templates are patterns from which deployment plans can be created. Typical information is captured for each step in the deployment plan, as illustrated in the sketch below.
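As an illustration of the kind of step-level detail a deployment plan records, here is a minimal sketch; the field names are assumptions chosen for illustration, not taken from any particular framework or standard.

```python
from dataclasses import dataclass, field

@dataclass
class DeploymentStep:
    """One operation in a deployment plan (illustrative fields only)."""
    sequence: int                 # execution order within the plan
    description: str              # what the step does
    owner: str                    # person or team executing the step
    automated: bool = False       # manual vs automated execution
    duration_minutes: int = 0     # planned execution time
    rollback: str = ""            # how to undo the step if it fails

@dataclass
class DeploymentPlan:
    """An approved, ordered sequence of steps for a target environment."""
    environment: str
    steps: list[DeploymentStep] = field(default_factory=list)

    def rehearse(self) -> None:
        # A dry run simply walks the steps in order, as a rehearsal would.
        for step in sorted(self.steps, key=lambda s: s.sequence):
            print(f"[{self.environment}] step {step.sequence}: {step.description}")

plan = DeploymentPlan("production", [
    DeploymentStep(1, "Back up database", "DBA team", rollback="n/a"),
    DeploymentStep(2, "Deploy application build", "release engineer",
                   automated=True, rollback="redeploy previous build"),
])
plan.rehearse()
```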
https://en.wikipedia.org/wiki/IT_infrastructure_deployment
iTap is a predictive text technology for mobile phones , developed by Motorola employees [ 1 ] as a competitor to T9 . It was designed as a replacement for the old letter mappings on phones to help with word entry, making modern mobile phone features like text messaging and note-taking easier. When entering three or more characters in a row, iTap guesses the rest of the word. For example, entering "prog" will suggest "program". If a different word is desired, such as "progress", or words formed with different letters but requiring the same keypresses, like "prohibited" or "spoil", an arrow key can be pressed to highlight other words in a menu for selection, in order of descending commonality of their use. If the phone does not recognize a word, it stores the word as an optional choice. When the memory space is filled, the phone deletes the oldest word to make space for the new one. Similar to XT9 (the most recent version of T9), iTap is also able to complete words and phrases. iTap will guess the best match based upon a built-in dictionary, including words sharing the typed prefix. This dictionary also contains phrases and commonly used sentences. In this way the predictive guesses iTap offers are enhanced based upon the context of the word being typed. iTap typically uses a different user interface (UI) than T9 does. However, T9 provides an API that can be used to create a similar UI if phone manufacturers decide to do so. iTap provides suggestions for word completions after only one key press in all cases. However, T9 completes custom words after one key press, and on most phones other words that users have entered previously can be retrieved after three key presses. T9 leaves these UI decisions largely up to the phone manufacturer, and so far none of them have chosen to mimic the UI of iTap with T9.
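The matching behaviour described above can be sketched in a few lines: typed keys are compared against the key sequences of a frequency-ranked dictionary, so words like "prohibited" and "spoil" surface alongside "program" because they share the same keypresses. The word list and usage counts are invented for illustration and have nothing to do with Motorola's actual dictionary.

```python
# Toy model of iTap-style prediction: the pressed digits are matched against
# a frequency-ranked dictionary, and completions are offered after 3+ presses.
KEYPAD = {c: d for d, letters in {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}.items() for c in letters}

FREQ = {"program": 90, "progress": 70, "prohibited": 40, "spoil": 30}  # invented

def keys(word: str) -> str:
    return "".join(KEYPAD[c] for c in word)

def suggest(pressed: str, limit: int = 3) -> list[str]:
    # Words whose key sequence starts with the pressed digits, best match first.
    matches = [w for w in FREQ if keys(w).startswith(pressed)]
    return sorted(matches, key=lambda w: -FREQ[w])[:limit]

print(suggest(keys("prog")))  # ['program', 'progress', 'prohibited']
```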
https://en.wikipedia.org/wiki/ITap
ITerating was a wiki-based software guide, where users could find, compare and review software products. As of January 2021, the domain is listed as being for sale and the website is no longer online. Founded in October 2005, and based in New York, ITerating was created by CEO Nicolas Vandenberghe, who saw an industry need for a comprehensive resource to help evaluate software solutions. [ 1 ] The site aimed to be a reference guide for the IT industry and included reviews, ratings, articles, and detailed product feature comparisons. ITerating used Semantic Web tools (including RDF, the Resource Description Framework ) to combine user edits with Web service feeds from other sites. [ 2 ] Designed for use by developers and industry consultants, ITerating allowed users to contribute to categories such as Software Engineering Tools; Website Design & Tools; Website Software Tools; Website & Communication Applications & Social Networking; or to create their own category if it did not exist yet. [ 3 ] ITerating announced the addition of a Feature Matrix in June 2007, which allowed users to dynamically create customized, side-by-side feature comparisons of software solutions. [ 4 ]
https://en.wikipedia.org/wiki/ITerating
iTools [ 1 ] is a distributed infrastructure for the management, discovery, comparison and integration of computational biology resources. iTools employs Biositemap technology to retrieve and serve metadata about diverse bioinformatics data services, tools, and web services. iTools is developed by the National Centers for Biomedical Computing as part of the NIH Road Map Initiative .
https://en.wikipedia.org/wiki/ITools_Resourceome
IUCLID ( / ˈ juː k l ɪ d / ; International Uniform Chemical Information Database ) is a software application to capture, store, maintain and exchange data on the intrinsic and hazard properties of chemical substances . Distributed free of charge, the software is especially useful to chemical industry companies and to government authorities. It is the key tool for the chemical industry to fulfill data submission obligations under REACH , the most important European Union legal document covering the production and use of chemical substances. The software is maintained by the European Chemicals Agency , ECHA. [ 1 ] The latest version, version 6, was made available on 29 April 2016. 1993: First version of IUCLID for the European Existing Substances Regulation 793/93/EEC. [ 2 ] 1999: IUCLID becomes the recommended tool for the OECD HPV Programme. [ 3 ] 2000: IUCLID is the software prescribed in the EU Biocides legislation to notify existing active substances (Art. 4 of Commission Regulation (EC) No 1896/2000 [ 4 ] ). IUCLID 4 was used worldwide by about 500 organizations. These included chemical industry companies, EU Member State Competent Authorities, the OECD Secretariat, the US EPA , the Japan METI , and third-party service providers. In 2003, when it became clear that the REACH proposal would be adopted by the European Union , the European Commission decided to completely overhaul IUCLID 4 and to create a new version, IUCLID 5, which would be used by chemical industry companies to fulfill their data submission obligations under REACH . Migration of data in the IUCLID 4 format was supported by IUCLID 5.1. IUCLID 5.1 became available on 13 June 2007. IUCLID is also mentioned in Article 111 of the REACH legislation [ 5 ] as the format to be used for data collection and submission dossier preparation. The following IUCLID 5 major versions have been released: Data that can be stored and maintained with IUCLID encompass information about: OECD and the European Commission have agreed on a standard XML format ( OECD Harmonized Templates ) in which these data are stored for easy data sharing. IUCLID 5 was the first application fully implementing this international reporting standard, which has been accepted by many national and international regulatory authorities. Numerous parties were involved in the creation and the review of the OECD Harmonized Templates, among them the Business and Industry Advisory Committee (BIAC) to the OECD , the European Chemical Industry Council (CEFIC) and other bodies and authorities. IUCLID 5 can be used to enter robust study summaries summarising toxicologically relevant endpoints. A Klimisch score is assigned within the robust study summary as one field. Anyone can use a local IUCLID 5 installation to collect, store, maintain and exchange relevant data on chemical substances. In addition to dossier creation for REACH, IUCLID 5 data can be (re-)used for a large number of other purposes, due to the compatibility of IUCLID 5 data with the OECD Harmonized Templates . The European Commission IUCLID project team and international authorities are in deliberation on further promoting acceptance of IUCLID 5 data in non- REACH jurisdictions. Legislations and programmes under which IUCLID 5 data are accepted include: The IUCLID 5 data model also features Biocides / Pesticides elements.
A dataset prepared for a substance under REACH can therefore be quickly complemented with data about possible biocidal or pesticidal properties and re-used for data reporting obligations under the EU Biocides regulation. The data are available and can be searched through the OECD eChemPortal . IUCLID 5 is a Java -based application, using the Hibernate framework for persistence . It features a Java Swing graphical user interface (GUI) and can be deployed in both single-workstation and distributed environments. IUCLID 5 offers the possibility to be deployed in: IUCLID 5 exports and imports files in the I5Z format. Files may be exchanged between different IUCLID 5 installations, and dossiers may be uploaded to ECHA via REACH-IT . I5Z stands for "IUCLID 5 Zip", as the file uses Zip file compression. IUCLID can be deployed on any current PC; for optimal performance, RAM should not be less than 1 GB. IUCLID 6 was made available on 24 June 2015 as a beta version so that large companies and other organisations could begin preparing their IT systems for the full release of IUCLID 6 in 2016. However, individual users and SMEs could also download the beta version to get a preview and to become familiar with the user interface. The first official version of IUCLID 6 was published on 29 April 2016. [ 6 ]
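Since an I5Z file is stated to be ordinary Zip compression, its contents can be inspected with standard tools. A minimal sketch (the file name is hypothetical; the internal layout of an I5Z archive is not described here and would need to be checked against a real export):

```python
import zipfile

# "substance.i5z" is a hypothetical file name; because I5Z is a Zip
# container, the standard library can list the documents stored inside it.
with zipfile.ZipFile("substance.i5z") as archive:
    for info in archive.infolist():
        print(info.filename, info.file_size)
```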
https://en.wikipedia.org/wiki/IUCLID
The International Union for Conservation of Nature (IUCN) Red List of Threatened Species , also known as the IUCN Red List or Red Data Book , founded in 1964, is an inventory of the global conservation status and extinction risk of biological species . [ 1 ] A series of Regional Red Lists , which assess the risk of extinction to species within a political management unit, are also produced by countries and organizations. The goals of the Red List are to provide scientifically based information on the status of species and subspecies at a global level, to draw attention to the magnitude and importance of threatened biodiversity, to influence national and international policy and decision-making, and to provide information to guide actions to conserve biological diversity. [ 2 ] Major species assessors include BirdLife International , the Institute of Zoology (the research division of the Zoological Society of London ), the World Conservation Monitoring Centre , and many Specialist Groups within the IUCN Species Survival Commission (SSC). Collectively, assessments by these organizations and groups account for nearly half the species on the Red List. The IUCN aims to have the category of every species re-evaluated at least every ten years, and every five years if possible. This is done in a peer-reviewed manner through IUCN Species Survival Commission Specialist Groups (SSC), which are Red List Authorities (RLA) responsible for a species, group of species or specific geographic area, or, in the case of BirdLife International, an entire class ( Aves ). The Red List unit works with staff from the IUCN Global Species Programme as well as current program partners to recommend new partners or networks to join as new Red List Authorities. [ 3 ] The number of species which have been assessed for the Red List has been increasing over time. [ 4 ] As of 2023, of 150,388 species surveyed, 42,108 are considered at risk of extinction because of human activity, in particular overfishing , hunting , and land development . [ 5 ] [ 6 ] The idea for a Red Data Book was suggested by Peter Scott in 1963. [ 7 ] Initially the Red Data Lists were designed for specialists and were issued in a loose-leaf format that could be easily updated. The first two volumes of Red Lists were published in 1966 by conservationist Noel Simon, one for mammals and one for birds. [ 8 ] [ 9 ] The third volume to appear covered reptiles and amphibians; it was created by René E. Honegger in 1968. [ 10 ] In 1970, the IUCN published volume 5 in this series. This was the first Red Data List which focused on plants ( angiosperms only), compiled by Ronald Melville . [ 11 ] The final volume created in the loose-leaf style was volume 4, on freshwater fishes, published in 1979 by Robert Rush Miller . [ 12 ] The first attempt to create a Red Data Book for a nonspecialist public came in 1969 with The Red Book: Wildlife in Danger . [ 13 ] This book covered various groups but was predominantly about mammals and birds, with smaller sections on reptiles, amphibians, fishes, and plants. The 2006 Red List, released on 4 May 2006, evaluated 40,168 species as a whole, plus an additional 2,160 subspecies , varieties , aquatic stocks , and subpopulations . [ 14 ] On 12 September 2007, the World Conservation Union (IUCN) released the 2007 IUCN Red List of Threatened Species .
In this release, the IUCN raised its classification of both the western lowland gorilla ( Gorilla gorilla gorilla ) and the Cross River gorilla ( Gorilla gorilla diehli ) from endangered to critically endangered , which is the last category before extinct in the wild , due to the Ebola virus and poaching , along with other factors. Russ Mittermeier , chief of the Swiss -based IUCN's Primate Specialist Group, stated that 16,306 species are endangered with extinction, 188 more than in 2006 (a total of 41,415 species on the Red List). The Red List includes the Sumatran orangutan ( Pongo abelii ) in the Critically Endangered category and the Bornean orangutan ( Pongo pygmaeus ) in the Endangered category. [ 15 ] The 2008 Red List was released on 6 October 2008 at the IUCN World Conservation Congress in Barcelona and "confirmed an extinction crisis, with almost one in four [mammals] at risk of disappearing forever". The study shows that at least 1,141 of the 5,487 mammals on Earth are known to be threatened with extinction, and 836 are listed as Data Deficient . [ 16 ] The Red List of 2012 was released on 19 July 2012 at the Rio+20 Earth Summit ; [ 17 ] nearly 2,000 species were added, [ 18 ] with 4 species added to the extinct list and 2 to the rediscovered list. [ 19 ] The IUCN assessed a total of 63,837 species, of which 19,817 are threatened with extinction. [ 20 ] 3,947 were described as "critically endangered" and 5,766 as "endangered", while more than 10,000 species are listed as "vulnerable". [ 21 ] At threat are 41% of amphibian species, 33% of reef-building corals, 30% of conifers, 25% of mammals, and 13% of birds. [ 20 ] The IUCN Red List has listed 132 species of plants and animals from India as "Critically Endangered". [ 22 ] Species are classified by the IUCN Red List into nine groups, [ 23 ] specified through criteria such as rate of decline, population size, area of geographic distribution, and degree of population and distribution fragmentation. [ 24 ] There is an emphasis on the acceptability of applying any criteria in the absence of high-quality data, including suspicion and potential future threats, "so long as these can reasonably be supported". [ 25 ] In the IUCN Red List, " threatened " embraces the categories of Critically Endangered, Endangered, and Vulnerable. [ 24 ] The older 1994 list has only a single "Lower Risk" category which contained three subcategories: In the 2001 framework, Near Threatened and Least Concern became their own categories, while Conservation Dependent was removed and its contents merged into Near Threatened . The tag of "possibly extinct" (PE) [ 26 ] is used by BirdLife International , the Red List Authority for birds for the IUCN Red List. [ 27 ] BirdLife International has recommended PE become an official tag for Critically Endangered species, and this has now been adopted, along with a "Possibly Extinct in the Wild" tag for species with populations surviving in captivity but likely to be extinct in the wild. [ 28 ] There have been a number of versions, dating from 1991, including: [ 29 ] [ 30 ] All new IUCN assessments since 2001 have used version 3.1 of the categories and criteria. In 1997, the IUCN Red List received criticism on the grounds of secrecy (or at least poor documentation) surrounding the sources of its data. [ 31 ] These allegations have led to efforts by the IUCN to improve its documentation and data quality, and to include peer reviews of taxa on the Red List.
[ 24 ] The list is also open to petitions against its classifications, on the basis of documentation or criteria. [ 32 ] In the November 2002 issue of Trends in Ecology & Evolution , an article suggested that the IUCN Red List and similar works are prone to misuse by governments and other groups that draw possibly inappropriate conclusions on the state of the environment or to affect exploitation of natural resources . [ 33 ] In the November 2016 issue of Science Advances , a research article claims there are serious inconsistencies in the way species are classified by the IUCN. The researchers contend that the IUCN's process of categorization is "out-dated, and leaves room for improvement", and further emphasize the importance of readily available and easy-to-include geospatial data, such as satellite and aerial imaging. Their conclusion questioned not only the IUCN's method but also the validity of where certain species fall on the List. They believe that combining geographical data can significantly increase the number of species that need to be reclassified to a higher risk category. [ 34 ]
https://en.wikipedia.org/wiki/IUCN_Red_List
The IUPAC/IUPAP Joint Working Party is a group convened periodically by the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP) to consider claims for discovery and naming of new chemical elements . [ 1 ] [ 2 ] [ 3 ] It is sometimes called the Joint Working Party on Discovery of Elements. [ 4 ] The working party's recommendations are voted on by the General Assembly of the IUPAP. [ 5 ]
https://en.wikipedia.org/wiki/IUPAC/IUPAP_Joint_Working_Party
The International Union of Pure and Applied Chemistry (IUPAC) publishes many books which contain its complete list of definitions . The definitions are divided initially into seven IUPAC Colour Books : Gold, Green, Blue, Purple, Orange, White, and Red. [ 1 ] There is also an eighth book, the "Silver Book". Nomenclature of Organic Chemistry , commonly referred to by chemists as the Blue Book , is a collection of recommendations on organic chemical nomenclature published at irregular intervals by the International Union of Pure and Applied Chemistry (IUPAC). A full edition was published in 1979, [ 2 ] an abridged and updated version of which was published in 1993 as A Guide to IUPAC Nomenclature of Organic Compounds . [ 3 ] Both of these are now out-of-print in their paper versions, but are available free of charge in electronic versions. After the release of a draft version for public comment in 2004 [ 4 ] and the publication of several revised sections in the journal Pure and Applied Chemistry , a fully revised version was published in print in 2013. [ 5 ] [ 2 ] The Compendium of Chemical Terminology is a book published by the International Union of Pure and Applied Chemistry (IUPAC) containing internationally accepted definitions for terms in chemistry . Work on the first edition was initiated by Victor Gold , thus spawning its informal name: the Gold Book . The first edition was published in 1987 ( ISBN 0-63201-765-1 ) and the second edition ( ISBN 0-86542-684-8 ), edited by A. D. McNaught and A. Wilkinson, was published in 1997. A slightly expanded version of the Gold Book is also freely searchable online . Translations have also been published in French, Spanish and Polish. Quantities, Units and Symbols in Physical Chemistry , commonly known as the Green Book , is a compilation of terms and symbols widely used in the field of physical chemistry. It also includes a table of physical constants, tables listing the properties of elementary particles, chemical elements, and nuclides, and information about conversion factors that are commonly used in physical chemistry. The most recent edition is the third edition ( ISBN 978-0-85404-433-7 ), originally published by IUPAC in 2007. A second printing of the third edition was released in 2008; this printing made several minor revisions to the 2007 text. A third printing of the third edition was released in 2011. The text of the third printing is identical to that of the second printing. The Compendium of Analytical Nomenclature is a book published by the International Union of Pure and Applied Chemistry (IUPAC) containing internationally accepted definitions for terms in analytical chemistry . It has traditionally been published in an orange cover, hence its informal name, the Orange Book . Although the book is described as the "Definitive Rules", there have been three editions published; the first in 1978 ( ISBN 0-08022-008-8 ), the second in 1987 ( ISBN 0-63201-907-7 ) and the third in 1998 ( ISBN 0-86542-615-5 ). The third edition is also available online. A Catalan translation has also been published (1987, ISBN 84-7283-121-3 ). The first edition of the Compendium of Macromolecular Terminology and Nomenclature , known as the Purple Book , was published in 1991. It is about the nomenclature of polymers. The second and latest edition was published in December 2008 [ 6 ] and is also available for download. 
[ 7 ] Nomenclature of Inorganic Chemistry , by chemists commonly referred to as the Red Book , is a collection of recommendations on inorganic chemical nomenclature . It is published at irregular intervals by the International Union of Pure and Applied Chemistry (IUPAC). The last full edition was published in 2005, [ 8 ] in both paper and electronic versions. The IUPAC also publishes a Silver Book , not listed with the other "colour books", titled Compendium of Terminology and Nomenclature of Properties in Clinical Laboratory Sciences . [ 10 ] The Biochemical Nomenclature and Related Documents (1992), or White Book , contains definitions pertaining to biochemical research compiled jointly by IUPAC and the International Union of Biochemistry and Molecular Biology.
https://en.wikipedia.org/wiki/IUPAC_Color_Books
The Inorganic Chemistry Division of the International Union of Pure and Applied Chemistry (IUPAC), also known as Division II, [ 1 ] deals with all aspects of inorganic chemistry , including materials and bioinorganic chemistry , and also with isotopes , atomic weights and the periodic table . It furthermore advises the Chemical Nomenclature and Structure Representation Division (Division VIII) on issues dealing with inorganic compounds and materials. [ 2 ] For the general public, the most visible result of the division's work is that it evaluates and advises the IUPAC on names and symbols proposed for new elements that have been approved for addition to the periodic table. [ 3 ] [ 4 ] [ 5 ] [ 6 ] For the scientific and educational community, the work on isotopic abundances and atomic weights is of fundamental importance, as these numbers are continuously checked and updated. [ 7 ] The division has the following subcommittees and commissions: [ 8 ] The division also maintains a list of running projects. [ 10 ] The Inorganic Chemistry Division was a partner in the 2011 Global Chemistry Experiment "Water: A Chemical Solution" that took place during the International Year of Chemistry. [ 13 ] [ 14 ]
https://en.wikipedia.org/wiki/IUPAC_Inorganic_Chemistry_Division
The IUPAC Nomenclature for Organic Chemical Transformations is a methodology for naming a chemical reaction . Traditionally, most chemical reactions, especially in organic chemistry , are named after their inventors, the so-called name reactions , such as the Knoevenagel condensation , Wittig reaction , Claisen–Schmidt condensation , Schotten–Baumann reaction , and Diels–Alder reaction . Many reactions derive their names from the reagent involved, like bromination or acylation . On rare occasions, the reaction is named after the company responsible, as in the Wacker process , or the name only hints at the process involved, as in the halogen dance rearrangement . The IUPAC Nomenclature for Transformations was developed in 1981 and presents a clear-cut methodology for naming an organic reaction. It incorporates the reactant and product in a chemical transformation together with one of three transformation types: The related IUPAC nomenclature of chemistry is designed for naming organic compounds themselves. [ citation needed ]
https://en.wikipedia.org/wiki/IUPAC_nomenclature_for_organic_chemical_transformations
IUPAC nomenclature is a set of recommendations for naming chemical compounds and for describing chemistry and biochemistry in general. The International Union of Pure and Applied Chemistry (IUPAC) is the international authority on chemical nomenclature and terminology. In 1787, Louis-Bernard Guyton de Morveau published his nomenclature recommendations in collaboration with fellow French chemists Berthollet , de Fourcroy and Lavoisier . This work, however, only covered what we would nowadays deem inorganic compounds . With the expansion of organic chemistry in the 19th century, and a greater understanding of the structure of organic compounds, the need for a more global, standardised nomenclature became more prominent. Following a series of meetings, the first of which was convened in 1860 by August Kekulé , the Geneva Nomenclature of 1892 was created. Another body, the International Association of Chemical Societies (IACS), put forward vital propositions in 1911 that a new organising body should address. [ 1 ] These propositions included: The IACS also established a commission in 1913, but its work was interrupted by the start of World War I . In 1919, after the end of the first world war, a group of chemists created the International Union of Pure and Applied Chemistry (IUPAC) with the aim of standardising and expanding nomenclature, as well as uniting scientists and strengthening the international trade of science. In 1921, IUPAC appointed commissions for nomenclature in organic, inorganic, and biochemistry. In 2019 IUPAC celebrated its 100th anniversary. [ 1 ] IUPAC states that, "As one of its major activities, IUPAC develops Recommendations to establish unambiguous, uniform, and consistent nomenclature and terminology for specific scientific fields, usually presented as: glossaries of terms for specific chemical disciplines; definitions of terms relating to a group of properties; nomenclature of chemical compounds and their classes; terminology, symbols, and units in a specific field; classifications and uses of terms in a specific field; and conventions and standards of practice for presenting data in a specific field." [ 2 ] Recommendations are published in IUPAC's journal, Pure and Applied Chemistry ( PAC ), the publicly available IUPAC Standards Online database, the IUPAC Color Books , and other publications. PAC journal issues are freely available the year following publication. [ 2 ] The two IUPAC bodies that lead nomenclature and terminology efforts are Division VIII – Chemical Nomenclature and Structure Representation and the Interdivisional Committee on Terminology, Nomenclature, and Symbols. [ 2 ]
https://en.wikipedia.org/wiki/IUPAC_nomenclature_of_chemistry
In chemical nomenclature, the IUPAC nomenclature of inorganic chemistry is a systematic method of naming inorganic chemical compounds , as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in Nomenclature of Inorganic Chemistry (which is informally called the Red Book). [ 1 ] Ideally, every inorganic compound should have a name from which an unambiguous formula can be determined. There is also an IUPAC nomenclature of organic chemistry . The names " caffeine " and " 3,7-dihydro-1,3,7-trimethyl-1H-purine-2,6-dione " both signify the same chemical compound. The systematic name encodes the structure and composition of the caffeine molecule in some detail, and provides an unambiguous reference to this compound, whereas the name "caffeine" simply names it. These advantages make the systematic name far superior to the common name when absolute clarity and precision are required. However, for the sake of brevity, even professional chemists will use the non-systematic name almost all of the time, because caffeine is a well-known common chemical with a unique structure. Similarly, H 2 O is most often simply called water in English, though other chemical names do exist . Positively charged ions are called cations and negatively charged ions are called anions. The cation is always named first. Ions can be metals, non-metals or polyatomic ions. Therefore, the name of the metal or positive polyatomic ion is followed by the name of the non-metal or negative polyatomic ion. The positive ion retains its element name whereas for a single non-metal anion the ending is changed to -ide . When the metal has more than one possible ionic charge or oxidation number the name becomes ambiguous . In these cases the oxidation number (the same as the charge) of the metal ion is represented by a Roman numeral in parentheses immediately following the metal ion name. For example, in uranium(VI) fluoride the oxidation number of uranium is 6. Another example is the iron oxides. FeO is iron(II) oxide and Fe 2 O 3 is iron(III) oxide. An older system used prefixes and suffixes to indicate the oxidation number, according to the following scheme: Thus the four oxyacids of chlorine are called hypochlorous acid (HOCl), chlorous acid (HOClO), chloric acid (HOClO 2 ) and perchloric acid (HOClO 3 ), and their respective conjugate bases are hypochlorite , chlorite , chlorate and perchlorate ions. This system has partially fallen out of use, but survives in the common names of many chemical compounds : the modern literature contains few references to "ferric chloride" (instead calling it "iron(III) chloride"), but names like "potassium permanganate" (instead of "potassium manganate(VII)") and " sulfuric acid " abound. An ionic compound is named by its cation followed by its anion. See polyatomic ion for a list of possible ions. For cations that take on multiple charges, the charge is written using Roman numerals in parentheses immediately following the element name. For example, Cu(NO 3 ) 2 is copper(II) nitrate , because the charge of two nitrate ions ( NO − 3 ) is 2 × −1 = −2, and since the net charge of the ionic compound must be zero, the Cu ion has a 2+ charge. This compound is therefore copper(II) nitrate. In the case of cations with a +4 oxidation state, the only acceptable format for the Roman numeral 4 is IV and not IIII. 
The Roman numerals in fact show the oxidation number , but in simple ionic compounds (i.e., not metal complexes ) this will always equal the ionic charge on the metal. For a simple overview see [1]; for more details see selected pages from the IUPAC rules for naming inorganic compounds. Monatomic anions: Polyatomic ions : Hydrates are ionic compounds that have absorbed water. They are named as the ionic compound followed by a numerical prefix and -hydrate . The numerical prefixes used are listed below (see IUPAC numerical multiplier ): For example, CuSO 4 ·5H 2 O is "copper(II) sulfate pentahydrate". Inorganic molecular compounds are named with a prefix (see list above) before each element. The more electronegative element is written last and with an -ide suffix. For example, H 2 O (water) can be called dihydrogen monoxide . Organic molecules do not follow this rule. In addition, the prefix mono- is not used with the first element; for example, SO 2 is sulfur dioxide , not "monosulfur dioxide". Sometimes prefixes are shortened when the ending vowel of the prefix "conflicts" with a starting vowel in the compound. This makes the name easier to pronounce; for example, CO is "carbon monoxide" (as opposed to "monooxide"). The "a" of the penta- prefix is not dropped before a vowel. As the IUPAC Red Book 2005, page 69, states: "The final vowels of multiplicative prefixes should not be elided (although 'monoxide', rather than 'monooxide', is an allowed exception because of general usage)." There are a number of exceptions and special cases that violate the above rules. Sometimes the prefix is left off the initial atom: I 2 O 5 is known as iodine pentaoxide , but it should be called diiodine pentaoxide . N 2 O 3 is called nitrogen sesquioxide ( sesqui- means 1 + 1 ⁄ 2 ). The main oxide of phosphorus is called phosphorus pentaoxide . It should actually be diphosphorus pentaoxide , but it is assumed that there are two phosphorus atoms (P 2 O 5 ), as they are needed in order to balance the oxidation numbers of the five oxygen atoms. However, it has long been known that the real form of the molecule is P 4 O 10 , not P 2 O 5 , yet it is not normally called tetraphosphorus decaoxide . In writing formulas, ammonia is NH 3 even though nitrogen is more electronegative (in line with the convention used by IUPAC as detailed in Table VI of the Red Book). Likewise, methane is written as CH 4 even though carbon is more electronegative ( Hill system ). Nomenclature of Inorganic Chemistry , commonly referred to by chemists as the Red Book , is a collection of recommendations on IUPAC nomenclature, published at irregular intervals by the IUPAC. The last full edition was published in 2005, [ 2 ] in both paper and electronic versions.
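The charge-balance reasoning used above for Cu(NO3)2 is mechanical enough to sketch in code. This is a minimal illustration assuming a small lookup of anion charges and names; it handles only simple salts of one metal and one kind of anion, nothing like the full Red Book rules.

```python
# Minimal sketch of Stock nomenclature for simple ionic compounds:
# infer the metal's charge from charge balance, then name it metal(RN) anion.
ANIONS = {  # (charge, name) for a few common anions
    "O": (-2, "oxide"), "Cl": (-1, "chloride"),
    "F": (-1, "fluoride"), "NO3": (-1, "nitrate"),
}
ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV", 5: "V", 6: "VI", 7: "VII"}

def stock_name(metal: str, anion: str, n_metal: int, n_anion: int) -> str:
    charge, anion_name = ANIONS[anion]
    # Net charge must be zero: n_metal * metal_charge + n_anion * charge = 0
    metal_charge = -charge * n_anion // n_metal
    return f"{metal}({ROMAN[metal_charge]}) {anion_name}"

print(stock_name("copper", "NO3", 1, 2))  # copper(II) nitrate
print(stock_name("iron", "O", 2, 3))      # iron(III) oxide
print(stock_name("uranium", "F", 1, 6))   # uranium(VI) fluoride
```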
https://en.wikipedia.org/wiki/IUPAC_nomenclature_of_inorganic_chemistry
Nomenclature of Inorganic Chemistry, IUPAC Recommendations 2005 is the 2005 version of Nomenclature of Inorganic Chemistry (which is informally called the Red Book ). It is a collection of rules for naming inorganic compounds, as recommended by the International Union of Pure and Applied Chemistry (IUPAC). The 2005 edition replaces the previous recommendations, Nomenclature of Inorganic Chemistry, IUPAC Recommendations 1990 (Red Book I) , and "where appropriate" (sic) Nomenclature of Inorganic Chemistry II, IUPAC Recommendations 2000 (Red Book II) . The recommendations take up over 300 pages [ 1 ] and the full text can be downloaded from IUPAC. [ 2 ] Corrections have been issued. [ 3 ] Apart from a reorganisation of the content, there is a new section on organometallics and a formal element list to be used in place of electronegativity lists in sequencing elements in formulae and names. The concept of a preferred IUPAC name (PIN), a part of the revised Blue Book for organic compound naming, has not yet been adopted for inorganic compounds. There are, however, guidelines as to which naming method should be adopted. The recommendations describe a number of different ways in which compounds can be named. These are: Additionally there are recommendations for the following: For a simple compound such as AlCl 3 the different naming conventions yield the following: Throughout the recommendations the use of the electronegativity of elements for sequencing has been replaced by a formal list which is loosely based on electronegativity. The recommendations still use the terms electropositive and electronegative to refer to an element's relative position in this list. A simple rule of thumb, ignoring lanthanides and actinides, is: The full list, from highest to lowest "electronegativity" (with the addition of elements 112 through 118, which had not yet been named in 2005, to their respective groups): Note that "treat separately" means to use the decision table on each component. An indeterminate sample simply takes the element name. For example, a sample of carbon (which could be diamond, graphite, etc., or a mixture) would be named carbon. A specific crystalline form is specified by the element symbol followed by the Pearson symbol for the crystal form. (Note that the recommendations specifically italicize the second character.) Examples include P n , red phosphorus , and As n , amorphous arsenic. Compositional names impart little structural information and are recommended for use when structural information is not available or does not need to be conveyed. Stoichiometric names are the simplest and reflect either the empirical formula or the molecular formula. The ordering of the elements follows the formal electronegativity list for binary compounds; for compounds of more than two elements, the list is used to group the elements into two classes, which are then alphabetically sequenced within each class. The proportions are specified by di-, tri-, etc. (See IUPAC numerical multiplier .) Where there are known to be complex cations or anions, these are named in their own right and these names are then used as part of the compound name. In binary compounds the more electropositive element is placed first in the formula. The formal list is used. The name of the most electronegative element is modified to end in -ide and the more electropositive element's name is left unchanged. Taking the binary compound of sodium and chlorine: chlorine is found first in the list and therefore comes last in the name. Other examples are given below. The following illustrate the principles.
The 1:1:1:1 quaternary compound between bromine, chlorine, iodine and phosphorus: The ternary 2:1:5 compound of antimony, copper and potassium can be named in two ways, depending on which element(s) are designated as electronegative. Monatomic cations are named by taking the element name and following it with the charge in brackets. Sometimes an abbreviated form of the element name has to be used, e.g. germide for germanium, as germanide refers to GeH − 3 . Polyatomic cations of the same element are named as the element name preceded by di-, tri-, etc., e.g.: Polyatomic cations made up of different elements are named either substitutively or additively, e.g.: Monatomic anions are named as the element modified with an -ide ending. The charge follows in brackets (optional for 1−), e.g.: Some elements take their Latin name as the root, e.g. Polyatomic anions of the same element are named as the element name preceded by di-, tri-, etc., e.g.: or sometimes by an alternative derived from a substitutive name, e.g. Polyatomic anions made up of different elements are named either substitutively or additively; the name endings are -ide and -ate respectively, e.g.: A full list of the alternative acceptable non-systematic names for cations and anions is given in the recommendations. Many anions have names derived from inorganic acids; these are dealt with later. The presence of unpaired electrons can be indicated by a " · ". For example: The use of the term hydrate is still acceptable, e.g. Na 2 SO 4 ·10H 2 O, sodium sulfate decahydrate. The recommended method would be to name it sodium sulfate—water(1/10). Similarly, other examples of lattice compounds are: As an alternative to the di-, tri- prefixes, either the charge or the oxidation state can be used. Charge is recommended, as oxidation state may be ambiguous and open to debate. This naming method generally follows established IUPAC organic nomenclature. Hydrides of the main group elements (groups 13–17) are given -ane base names, e.g. borane, BH 3 . Acceptable alternative names for some of the parent hydrides are water rather than oxidane and ammonia rather than azane. In these cases the base name is intended to be used for substituted derivatives. This section of the recommendations covers the naming of compounds containing rings and chains. Where a compound has non-standard bonding compared to the parent hydride, for example PCl 5 , the lambda convention is used. For example: A prefix di-, tri-, etc. is added to the parent hydride name. Examples are: The recommendations describe three ways of assigning "parent" names to homonuclear monocyclic hydrides (i.e. single rings consisting of one element): The stoichiometric name is followed by the number of hydrogen atoms in brackets. For example, B 2 H 6 is diborane(6). More structural information can be conveyed by adding one of the "structural descriptor" prefixes closo -, nido -, arachno -, hypho - or klado -. There is a fully systematic method of numbering the atoms in the boron hydride clusters, and a method of describing the position of bridging hydrogen atoms using the μ symbol. Use of substitutive nomenclature is recommended for group 13–16 main group organometallic compounds. Examples are: Organometallic compounds of groups 1–2 can be named using additive (indicating a molecular aggregate) or compositional nomenclature. Examples are: However, the recommendations note that future nomenclature projects will address these compounds.
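The monatomic-ion conventions described in this section can likewise be sketched compactly; the element roots below are a small illustrative sample rather than the full IUPAC table.

```python
# A sketch of monatomic-ion naming: cations keep the element name, anions
# take an -ide root; the charge number follows in parentheses (optional
# for 1-). The roots below are a hand-picked sample only.
CATION_NAMES = {"Na": "sodium", "Fe": "iron", "Cu": "copper"}
ANION_NAMES = {"Cl": "chloride", "O": "oxide", "N": "nitride",
               "S": "sulfide"}

def name_monatomic_ion(symbol, charge):
    if charge > 0:
        return f"{CATION_NAMES[symbol]}({charge}+)"
    return f"{ANION_NAMES[symbol]}({abs(charge)}-)"

print(name_monatomic_ion("Fe", 3))   # -> iron(3+)
print(name_monatomic_ion("O", -2))   # -> oxide(2-)
print(name_monatomic_ion("Cl", -1))  # -> chloride(1-)
```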
This naming has been developed principally for coordination compounds, although it can be more widely applied. Examples are: The recommendations include a flow chart, which can be summarised very briefly: If the anion name ends in -ide, then as a ligand its name is changed to end in -o. For example, the chloride anion, Cl − , becomes chlorido. This is a difference from organic compound naming and substitutive naming, where chlorine is treated as neutral and becomes chloro, as in PCl 3 , which can be named either substitutively or additively as trichlorophosphane or trichloridophosphorus respectively. Similarly, if the anion names end in -ite or -ate, then the ligand names are -ito and -ato. Neutral ligands do not change name, with the exception of the following: Ligands are ordered alphabetically by name and precede the central atom name. The number of ligands coordinating is indicated by the prefixes di-, tri-, tetra-, penta-, etc. for simple ligands, or bis-, tris-, tetrakis-, etc. for complex ligands. For example: Where there are different central atoms, they are sequenced using the electronegativity list. Ligands may bridge two or more centres. The prefix μ is used to specify a bridging ligand in both the formula and the name. For example, in the dimeric form of aluminium trichloride : This example illustrates the ordering of bridging and non-bridging ligands of the same type. In the formula the bridging ligands follow the non-bridging, whereas in the name the bridging ligands precede the non-bridging. Note the use of the kappa convention to specify that there are two terminal chlorides on each aluminium. Where more than two centres are bridged, a bridging index is added as a subscript. For example, in basic beryllium acetate , which can be visualised as a tetrahedral arrangement of Be atoms linked by 6 acetate ions forming a cage with a central oxide anion, the formula and name are as follows: The μ 4 describes the bridging of the central oxide ion. (Note the use of the kappa convention to describe the bridging of the acetate ion, where both oxygen atoms are involved.) In the name, where a ligand is involved in different modes of bridging, the multiple bridging is listed in decreasing order of complexity, e.g. μ 3 bridging before μ 2 bridging. The kappa convention is used to specify which ligand atoms bond to the central atom and, in polynuclear species, which atoms, both bridged and unbridged, link to which central atom. For monodentate ligands there is no ambiguity as to which atom forms the bond to the central atom. However, when a ligand has more than one atom that can link to a central atom, the kappa convention is used to specify which atoms in the ligand form bonds. The element's atomic symbol is italicised and preceded by kappa, κ. These symbols are placed after the portion of the ligand name that represents the ring, chain, etc. where the ligating atom is located. For example: Where more than one bond is formed from a ligand by a particular element, a numerical superscript gives the count. For example: In polynuclear complexes the use of the kappa symbol is extended in two related ways: firstly, to specify which ligating atoms bind to which central atom, and secondly, to specify for a bridging ligand which central atoms are involved. The central atoms must be identified, i.e. by assigning numbers to them. (This is formally dealt with in the recommendations.)
To specify which ligating atoms in a ligand link to which central atom, the central atom numbers precede the kappa symbol, a numerical superscript specifies the number of ligations, and this is followed by the atomic symbol. Multiple occurrences are separated by commas. Examples: The use of η to denote hapticity is systematised. The use of η 1 is not recommended. When the specification of the atoms involved is ambiguous, the positions of the atoms must be specified. This is illustrated by the examples: For any coordination number above 2, more than one coordination geometry is possible. For example, four-coordinate coordination compounds can be tetrahedral, square planar, square pyramidal or see-saw shaped. The polyhedral symbol is used to describe the geometry. A configuration index is determined from the positions of the ligands and, together with the polyhedral symbol, is placed at the beginning of the name. For example, in the complex ( SP -4-3)-(acetonitrile)dichlorido(pyridine)platinum(II), the ( SP -4-3) at the beginning of the name describes a square planar geometry, 4-coordinate, with a configuration index of 3 indicating the positions of the ligands around the central atom. For more detail see polyhedral symbol . Additive nomenclature is generally recommended for organometallic compounds of groups 3–12 (the transition metals plus zinc, cadmium and mercury). Following on from ferrocene —the first sandwich compound, with a central Fe atom coordinated to two parallel cyclopentadienyl rings—names for compounds with similar structures, such as osmocene and vanadocene , are in common usage. The recommendation is that the name ending -ocene should be restricted to compounds where there are discrete molecules of bis(η 5 -cyclopentadienyl)metal (and ring-substituted analogues), where the cyclopentadienyl rings are essentially parallel and the metal is in the d-block. The terminology does not apply to compounds of the s- or p-block elements such as Ba(C 5 H 5 ) 2 or Sn(C 5 H 5 ) 2 . Examples of compounds that meet the criteria are: Examples of compounds that should not be named as metallocenes are: In polynuclear compounds with metal–metal bonds, these bonds are shown after the element name, as in (3 Os — Os ) in decacarbonyldihydridotriosmium. A pair of brackets contains a count of the bonds formed (if greater than 1), followed by the italicised element atomic symbols separated by an "em-dash". The geometries of polynuclear clusters can range in complexity. A descriptor, e.g. tetrahedro , or the CEP descriptor, e.g. [ T d -(13)-Δ 4 - closo ], can be used; the choice is determined by the complexity of the cluster. Some examples of descriptors and CEP equivalents are shown below. (The CEP descriptors are named for Casey, Evans and Powell, who described the system. [ 4 ] ) Examples: decacarbonyldimanganese bis(pentacarbonylmanganese)( Mn — Mn ) dodecacarbonyltetrarhodium tri-μ-carbonyl-1:2κ 2 C ;1:3κ 2 C ;2:3κ 2 C -nonacarbonyl- 1κ 2 C ,2κ 2 C ,3κ 2 C ,4κ 3 C -[ T d -(13)-Δ 4 - closo ]-tetrarhodium(6 Rh — Rh ) or tri-μ-carbonyl-1:2κ 2 C ;1:3κ 2 C ;2:3κ 2 C -nonacarbonyl- 1κ 2 C ,2κ 2 C ,3κ 2 C ,4κ 3 C -tetrahedro-tetrarhodium(6 Rh — Rh ) The recommendations include a description of hydrogen names for acids. The following examples illustrate the method: Note the difference from the compositional naming method (hydrogen sulfide): in hydrogen names there is no space between the electropositive and electronegative components. This method gives no structural information regarding the position of the hydrons (hydrogen atoms).
If this information is to be conveyed, the additive name should be used (see the list below for examples). The recommendations give a full list of acceptable names for common acids and related anions. A selection from this list is shown below. Stoichiometric phases are named compositionally. Non-stoichiometric phases are more difficult. Where possible, formulae should be used, but where necessary naming such as the following may be used: Generally, mineral names should not be used to specify chemical composition. However, a mineral name can be used to specify the structure type in a formula, e.g. A simple notation may be used where little information on the mechanism of variability is available, or where such information is not required to be conveyed: Where there is a continuous range of composition, this can be written, e.g., K(Br,Cl) for a mixture of KBr and KCl, and (Li 2 ,Mg)Cl 2 for a mixture of LiCl and MgCl 2 . The recommendation is to use the following generalised method, e.g. Note that cation vacancies in CoO could be described by the formula Co 1−x O. Point defects, site symmetry and site occupancy can all be described using Kröger–Vink notation ; note that the IUPAC preference is for vacancies to be specified by an italic V rather than an upright V (the symbol of the element vanadium). To specify the crystal form of a compound or element, the Pearson symbol may be used; the use of Strukturbericht designations (e.g. A1) or Greek letters is not acceptable. The Pearson symbol may be followed by the space group and the prototype formula. Examples are: It is recommended that polymorphs be identified by crystal system, e.g. for ZnS, where the two forms are zincblende (cubic) and wurtzite (hexagonal), as ZnS( c ) and ZnS( h ) respectively.
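Returning to additive nomenclature, the ligand-ordering rules given earlier (alphabetical by ligand name, with di-/tri- for simple ligands and bis-/tris- for composite ones) can also be sketched. The ligand table and the hexaaqua example below are assumptions made purely for illustration.

```python
# An illustrative sketch (not from the Red Book itself) of assembling an
# additive name: ligand names are alphabetized ignoring any multiplying
# prefix, simple ligands take di-/tri-..., composite ligands bis-/tris-...,
# and the central atom follows with its charge number in parentheses.
SIMPLE = {2: "di", 3: "tri", 4: "tetra", 5: "penta", 6: "hexa"}
COMPOSITE = {2: "bis", 3: "tris", 4: "tetrakis"}

def additive_name(ligands, centre, charge):
    """ligands maps a ligand name to (count, needs_bis_tris)."""
    parts = []
    for name in sorted(ligands):              # alphabetical ligand order
        count, composite = ligands[name]
        if count == 1:
            parts.append(name)
        elif composite:
            parts.append(COMPOSITE[count] + "(" + name + ")")
        else:
            parts.append(SIMPLE[count] + name)
    sign = "+" if charge > 0 else "-"
    return "".join(parts) + centre + f"({abs(charge)}{sign})"

# assuming "aqua" as the ligand name for coordinated water:
print(additive_name({"aqua": (6, False)}, "iron", 2))
# -> hexaaquairon(2+)
```

The alphabetical sort deliberately ignores the multiplying prefix, mirroring the rule stated earlier.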
https://en.wikipedia.org/wiki/IUPAC_nomenclature_of_inorganic_chemistry_2005
In chemical nomenclature , the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended [ 1 ] [ 2 ] by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book ). [ 3 ] Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry . [ 4 ] To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked up. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas. In chemistry, a number of prefixes , suffixes and infixes are used to describe the type and position of the functional groups in the compound. The steps for naming an organic compound are: [ 5 ] The numbers for that type of side chain will be grouped in ascending order and written before the name of the side-chain. If there are two side-chains with the same alpha carbon , the number will be written twice. Example: 2,2,3-trimethyl- . If there are both double bonds and triple bonds, "en" (double bond) is written before "yne" (triple bond). When the main functional group is a terminal functional group (a group which can exist only at the end of a chain, like formyl and carboxyl groups), there is no need to number it. The resulting name appears as: where each "#" represents a number. The secondary functional groups and side chains may not appear in the order shown here, as the side chains and secondary functional groups are arranged alphabetically (see the sketch below). The di- and tri- have been used just to show their usage. (di- after #,#, tri- after #,#,#, etc.) Here is a sample molecule with the parent carbons numbered: For simplicity, here is an image of the same molecule, where the hydrogens in the parent chain are removed and the carbons are shown by their numbers: Now, following the above steps: The final name is (6 E ,13 E )-18-bromo-12-butyl-11-chloro-4,8-diethyl-5-hydroxy-15-methoxytricosa-6,13-dien-19-yne-3,9-dione. Straight-chain alkanes take the suffix " -ane " and are prefixed depending on the number of carbon atoms in the chain, following standard rules. The first few are: For example, the simplest alkane is CH 4 methane, and the nine-carbon alkane CH 3 (CH 2 ) 7 CH 3 is named nonane . The names of the first four alkanes were derived from methanol , ether , propionic acid and butyric acid , respectively. The rest are named with a Greek numeric prefix, with the exceptions of nonane , which has a Latin prefix, and undecane , which has mixed-language prefixes. Cyclic alkanes are simply prefixed with "cyclo-": for example, C 4 H 8 is cyclobutane (not to be confused with butene ) and C 6 H 12 is cyclohexane (not to be confused with hexene ). Branched alkanes are named as a straight-chain alkane with attached alkyl groups.
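As referenced above, the assembly of the substituent-prefix portion of a name is mechanical enough to sketch in a few lines of Python. This is an illustration, not an implementation of the full rules; the locant sets are taken from this article's own examples.

```python
# A sketch of assembling the substituent-prefix portion of a name: the
# locants for each substituent type are grouped in ascending order, and
# the types are alphabetized ignoring the di-/tri- multiplying prefixes.
MULT = {1: "", 2: "di", 3: "tri", 4: "tetra"}

def prefix_portion(substituents):
    """substituents maps a side-chain name to its list of locants;
    returns e.g. '2,2,3-trimethyl'."""
    parts = []
    for name in sorted(substituents):     # alphabetical, prefix ignored
        locants = sorted(substituents[name])
        joined = ",".join(str(n) for n in locants)
        parts.append(f"{joined}-{MULT[len(locants)]}{name}")
    return "-".join(parts)

print(prefix_portion({"methyl": [2, 2, 3]}))            # 2,2,3-trimethyl
print(prefix_portion({"methyl": [2, 4], "ethyl": [3]}))
# -> 3-ethyl-2,4-dimethyl, as in 3-ethyl-2,4-dimethylpentane
```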
These alkyl groups are prefixed with a number indicating the carbon to which the group is attached, counting from the end of the alkane chain. For example, (CH 3 ) 2 CHCH 3 , commonly known as isobutane , is treated as a propane chain with a methyl group bonded to the middle (2) carbon, and given the systematic name 2-methylpropane. However, although the name 2-methylpropane could be used, it is easier and more logical to call it simply methylpropane – the methyl group could not possibly occur on any of the other carbon atoms (that would lengthen the chain and result in butane, not propane) and therefore the use of the number "2" is unnecessary. If there is ambiguity in the position of the substituent, depending on which end of the alkane chain is counted as "1", then numbering is chosen so that the smaller number is used. For example, (CH 3 ) 2 CHCH 2 CH 3 (isopentane) is named 2-methylbutane, not 3-methylbutane. If there are multiple side-branches of the same size alkyl group, their positions are separated by commas and the group prefixed with multiplier prefixes depending on the number of branches. For example, C(CH 3 ) 4 (neopentane) is named 2,2-dimethylpropane. If there are different groups, they are added in alphabetical order, separated by commas or hyphens. The longest possible main alkane chain is used; therefore 3-ethyl-4-methylhexane instead of 2,3-diethylpentane, even though these describe equivalent structures. The di-, tri-, etc. prefixes are ignored for the purpose of alphabetical ordering of side chains (e.g. 3-ethyl-2,4-dimethylpentane, not 2,4-dimethyl-3-ethylpentane). Alkenes are named for their parent alkane chain with the suffix " -ene " and a numerical locant indicating the position of the carbon with the lower number for each double bond in the chain: CH 2 =CHCH 2 CH 3 is but-1-ene. Multiple double bonds take the form -diene, -triene, etc., with the size prefix of the chain taking an extra "a": CH 2 =CHCH=CH 2 is buta-1,3-diene. Simple cis and trans isomers may be indicated with a prefixed cis- or trans- : cis -but-2-ene, trans -but-2-ene. However, cis- and trans- are relative descriptors. It is IUPAC convention to describe all alkenes using the absolute descriptors Z- (same side) and E- (opposite side) with the Cahn–Ingold–Prelog priority rules (see also E–Z notation ). Alkynes are named using the same system, with the suffix " -yne " indicating a triple bond: ethyne ( acetylene ), propyne ( methylacetylene ). In haloalkanes and haloarenes ( R−X ), halogen functional groups are prefixed with the bonding position and take the form of fluoro-, chloro-, bromo-, iodo-, etc., depending on the halogen. Multiple groups are dichloro-, trichloro-, etc., and dissimilar groups are ordered alphabetically as before. For example, CHCl 3 ( chloroform ) is trichloromethane. The anesthetic halothane ( CF 3 CHBrCl ) is 2-bromo-2-chloro-1,1,1-trifluoroethane. Alcohols ( R−OH ) take the suffix " -ol " with a numerical suffix indicating the bonding position: CH 3 CH 2 CH 2 OH is propan-1-ol. The suffixes -diol , -triol , -tetrol , etc., are used for multiple −OH groups: Ethylene glycol CH 2 OHCH 2 OH is ethane-1,2-diol. If higher precedence functional groups are present (see order of precedence , below), the prefix "hydroxy" is used with the bonding position: CH 3 CHOHCOOH is 2-hydroxypropanoic acid. Ethers ( R−O−R ) consist of an oxygen atom between the two attached carbon chains.
The shorter of the two chains becomes the first part of the name, with the -ane suffix changed to -oxy, and the longer alkane chain becomes the suffix of the name of the ether. Thus, CH 3 OCH 3 is methoxymethane, and CH 3 OCH 2 CH 3 is methoxyethane ( not ethoxymethane). If the oxygen is not attached to the end of the main alkane chain, then the whole shorter alkyl-plus-ether group is treated as a side-chain and prefixed with its bonding position on the main chain. Thus CH 3 OCH(CH 3 ) 2 is 2-methoxypropane. Alternatively, an ether chain can be named as an alkane in which one carbon is replaced by an oxygen, a replacement denoted by the prefix "oxa". For example, CH 3 OCH 2 CH 3 could also be called 2-oxabutane, and an epoxide could be called oxacyclopropane. This method is especially useful when both groups attached to the oxygen atom are complex. [ 6 ] Aldehydes ( R−CH=O ) take the suffix " -al ". If other functional groups are present, the chain is numbered such that the aldehyde carbon is in the "1" position, unless functional groups of higher precedence are present. If a prefix form is required, "oxo-" is used (as for ketones), with the position number indicating the end of a chain: CHOCH 2 COOH is 3-oxopropanoic acid. If the carbon in the carbonyl group cannot be included in the attached chain (for instance in the case of cyclic aldehydes ), the prefix "formyl-" or the suffix "-carbaldehyde" is used: C 6 H 11 CHO is cyclohexanecarbaldehyde. If an aldehyde is attached to a benzene ring and is the main functional group, the suffix becomes -benzaldehyde. In general, ketones ( R 2 C=O ) take the suffix " -one " (pronounced own , not won ) with a position number: CH 3 CH 2 CH 2 COCH 3 is pentan-2-one. If a higher precedence suffix is in use, the prefix "oxo-" is used: CH 3 CH 2 CH 2 COCH 2 CHO is 3-oxohexanal. In general, carboxylic acids ( R−C(=O)OH ) are named with the suffix -oic acid (etymologically a back-formation from benzoic acid ). As with aldehydes, the carboxyl functional group must take the "1" position on the main chain, and so the locant need not be stated. For example, CH 3 −CH(OH)−COOH ( lactic acid ) is named 2-hydroxypropanoic acid with no "1" stated. Some traditional names for common carboxylic acids (such as acetic acid ) are in such widespread use that they are retained in IUPAC nomenclature, [ 7 ] though systematic names like ethanoic acid are also used. Carboxylic acids attached to a benzene ring are structural analogs of benzoic acid ( Ph −COOH ) and are named as one of its derivatives. If there are multiple carboxyl groups on the same parent chain, multiplying prefixes are used: Malonic acid , CH 2 (COOH) 2 , is systematically named propanedioic acid. Alternatively, the suffix "-carboxylic acid" can be used in place of "-oic acid", combined with a multiplying prefix if necessary – mellitic acid is benzenehexacarboxylic acid, for example. In the latter case, the carbon atoms in the carboxyl groups do not count as being part of the main chain, a rule that also applies to the prefix form "carboxy-". Citric acid serves as an example: it is formally named 2-hydroxypropane-1,2,3-tricarboxylic acid rather than 3-carboxy-3-hydroxypentanedioic acid . Salts of carboxylic acids are named following the usual cation -then- anion conventions used for ionic compounds in both IUPAC and common nomenclature systems. The name of the carboxylate anion ( R−C(=O)O − ) is derived from that of the parent acid by replacing the "-oic acid" ending with "-oate" or "-carboxylate".
For example, NaC 6 H 5 CO 2 , the sodium salt of benzoic acid ( C 6 H 5 COOH ), is called sodium benzoate. Where an acid has both a systematic and a common name (like CH 3 COOH , for example, which is known as both acetic acid and as ethanoic acid), its salts can be named from either parent name. Thus, KCH 3 CO 2 can be named as potassium acetate or as potassium ethanoate. The prefix form is "carboxylato-". Esters ( R−C(=O)O−R' ) are named as alkyl derivatives of carboxylic acids. The alkyl (R') group is named first. The R−C(=O)O part is then named as a separate word based on the carboxylic acid name, with the ending changed from "-oic acid" to " -oate " or "-carboxylate". For example, CH 3 CH 2 CH 2 CH 2 COOCH 3 is methyl pentanoate, and (CH 3 ) 2 CHCH 2 CH 2 COOCH 2 CH 3 is ethyl 4-methylpentanoate. For esters such as ethyl acetate ( CH 3 COOCH 2 CH 3 ), ethyl formate ( HCOOCH 2 CH 3 ) or dimethyl phthalate that are based on common acids, IUPAC recommends use of these established names, called retained names ; in these the ending is "-ate" rather than "-oate". Some simple examples, named both ways, are shown in the figure above. If the alkyl group is not attached at the end of the chain, the bond position to the ester group is suffixed before "-yl": CH 3 CH 2 CH(CH 3 )OOCCH 2 CH 3 may be called butan-2-yl propanoate or butan-2-yl propionate. [ citation needed ] The prefix form is "oxycarbonyl-" with the (R') group preceding. Acyl groups are named by stripping the "-ic acid" of the corresponding carboxylic acid and replacing it with "-yl". For example, CH 3 CO−R is called ethanoyl-R. For acyl halides, simply add the name of the attached halide to the end of the acyl group name. For example, CH 3 COCl is ethanoyl chloride. An alternative suffix is "-carbonyl halide" as opposed to "-oyl halide". The prefix form is "halocarbonyl-". Acid anhydrides ( R−C(=O)−O−C(=O)−R ) have two acyl groups linked by an oxygen atom. If both acyl groups are the same, then the name is that of the carboxylic acid with the word acid replaced by anhydride , and the IUPAC name consists of two words. If the acyl groups are different, then they are named in alphabetical order in the same way, with anhydride replacing acid ; the IUPAC name then consists of three words. For example, CH 3 CO−O−OCCH 3 is called ethanoic anhydride and CH 3 CO−O−OCCH 2 CH 3 is called ethanoic propanoic anhydride . Amines ( R−NH 2 ) are named for the attached alkane chain with the suffix "-amine" (e.g., CH 3 NH 2 methanamine). If necessary, the bonding position is indicated: CH 3 CH 2 CH 2 NH 2 propan-1-amine, CH 3 CHNH 2 CH 3 propan-2-amine. The prefix form is "amino-". For secondary amines (of the form R−NH−R ), the longest carbon chain attached to the nitrogen atom becomes the primary name of the amine; the other chain is prefixed as an alkyl group with its location prefix given as an italic N : CH 3 NHCH 2 CH 3 is N -methylethanamine. Tertiary amines ( R−NR−R ) are treated similarly: CH 3 CH 2 N(CH 3 )CH 2 CH 2 CH 3 is N -ethyl- N -methylpropanamine. Again, the substituent groups are ordered alphabetically. Amides ( R−C(=O)NH 2 ) take the suffix "-amide", or "-carboxamide" if the carbon in the amide group cannot be included in the main chain; e.g., HCONH 2 is methanamide and CH 3 CONH 2 is ethanamide. The prefix form is "carbamoyl-". Amides that have additional substituents on the nitrogen are treated similarly to the case of amines: they are ordered alphabetically with the location prefix N : HCON(CH 3 ) 2 is N , N -dimethylmethanamide, CH 3 CON(CH 3 ) 2 is N , N -dimethylethanamide.
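The "-oic acid" to "-oate" replacement used above for salts and esters is a simple string transformation, sketched below; the helper names are invented for illustration.

```python
# A sketch of the "-oic acid" -> "-oate" transformation used for salts
# and esters; input names are assumed to end in "oic acid".
def carboxylate_name(acid_name):
    assert acid_name.endswith("oic acid")
    return acid_name[: -len("oic acid")] + "oate"

def ester_name(alkyl, acid_name):
    # the alkyl (R') group is named first, as a separate word
    return f"{alkyl} {carboxylate_name(acid_name)}"

print("sodium " + carboxylate_name("benzoic acid"))  # sodium benzoate
print(ester_name("methyl", "pentanoic acid"))        # methyl pentanoate
```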
Nitriles ( R−C≡N ) are named by adding the suffix "-nitrile" to the longest hydrocarbon chain (including the carbon of the cyano group). They can also be named from the corresponding carboxylic acid, by replacing "-oic acid" with "-onitrile", or "-carboxylic acid" with "-carbonitrile". The prefix form is "cyano-". Functional class IUPAC nomenclature may also be used, in the form of alkyl cyanides. For example, CH 3 CH 2 CH 2 CH 2 C≡N is called pentanenitrile or butyl cyanide. Cycloalkanes and aromatic compounds can be treated as the main parent chain of the compound, in which case the positions of substituents are numbered around the ring structure. For example, the three isomers of xylene CH 3 C 6 H 4 CH 3 , commonly the ortho- , meta- , and para- forms, are 1,2-dimethylbenzene, 1,3-dimethylbenzene, and 1,4-dimethylbenzene. The cyclic structures can also be treated as functional groups themselves, in which case they take the prefix "cyclo alkyl -" (e.g. "cyclohexyl-") or, for benzene, "phenyl-". The IUPAC nomenclature scheme becomes rapidly more elaborate for more complex cyclic structures, with notation for compounds containing conjoined rings, and many common names such as phenol being accepted as base names for compounds derived from them. When compounds contain more than one functional group, the order of precedence determines which groups are named with prefix or suffix forms. The table below shows common groups in decreasing order of precedence. The highest-precedence group takes the suffix, with all others taking the prefix form (a sketch of this assignment is given at the end of this section). However, double and triple bonds only take suffix form (-en and -yn) and are used with other suffixes. Prefixed substituents are ordered alphabetically (excluding any modifiers such as di-, tri-, etc.), e.g. chlorofluoromethane, not fluorochloromethane. If there are multiple functional groups of the same type, either prefixed or suffixed, the position numbers are ordered numerically (thus ethane-1,2-diol, not ethane-2,1-diol). The N position indicator for amines and amides comes before "1", e.g., CH 3 CH(CH 3 )CH 2 NH(CH 3 ) is N ,2-dimethylpropanamine. * Note : These suffixes, in which the carbon atom is counted as part of the preceding chain, are the most commonly used. See individual functional group articles for more details. The order of remaining functional groups is only needed for substituted benzene and hence is not mentioned here. [ clarification needed ] Common nomenclature uses the older names for some organic compounds instead of using the prefixes for the carbon skeleton above. The pattern can be seen below. •Diethyl ketone •Ethyl propyl ketone •Butyl ethyl ketone •Dipropyl ketone •Ethyl pentyl ketone •Butyl propyl ketone •Ethyl hexyl ketone •Pentyl propyl ketone •Dibutyl ketone •Ethyl heptyl ketone •Hexyl propyl ketone •Butyl pentyl ketone ( see below ) Common names for ketones can be derived by naming the two alkyl or aryl groups bonded to the carbonyl group as separate words followed by the word ketone . The first three of the names shown above are still considered to be acceptable IUPAC names . The common name for an aldehyde is derived from the common name of the corresponding carboxylic acid by dropping the word acid and changing the suffix from -ic or -oic to -aldehyde. The IUPAC nomenclature also provides rules for naming ions . Hydron is a generic term for the hydrogen cation, regardless of isotope: protons, deuterons and tritons are all hydrons.
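As referenced above, the assignment of suffix and prefix forms by precedence can be sketched as follows. Both tables are an illustrative fragment of the full precedence table (highest precedence first), with the prefix forms taken from the sections above.

```python
# A sketch of the precedence rule: the highest-precedence group present
# takes the suffix; all other groups are cited as prefixes.
PRECEDENCE = ["carboxylic acid", "amide", "nitrile", "aldehyde",
              "ketone", "alcohol", "amine"]
SUFFIX = {"carboxylic acid": "-oic acid", "amide": "-amide",
          "nitrile": "-nitrile", "aldehyde": "-al", "ketone": "-one",
          "alcohol": "-ol", "amine": "-amine"}
PREFIX = {"carboxylic acid": "carboxy-", "amide": "carbamoyl-",
          "nitrile": "cyano-", "aldehyde": "oxo-", "ketone": "oxo-",
          "alcohol": "hydroxy-", "amine": "amino-"}

def assign_forms(groups_present):
    """Return the suffix of the principal group and the prefix forms
    of the remaining groups."""
    ranked = sorted(groups_present, key=PRECEDENCE.index)
    return SUFFIX[ranked[0]], [PREFIX[g] for g in ranked[1:]]

# CH3CHOHCOOH: the acid outranks the alcohol, so the name ends in
# "-oic acid" with a "hydroxy-" prefix, as in 2-hydroxypropanoic acid.
print(assign_forms(["alcohol", "carboxylic acid"]))
# -> ('-oic acid', ['hydroxy-'])
```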
Simple cations formed by adding a hydron to a hydride of a halogen, chalcogen or pnictogen are named by adding the suffix "-onium" to the element's root: H 4 N + is ammonium, H 3 O + is oxonium, and H 2 F + is fluoronium. Ammonium was adopted instead of nitronium, which commonly refers to NO + 2 . If the cationic center of the hydride is not a halogen, chalcogen or pnictogen, then the suffix "-ium" is added to the name of the neutral hydride after dropping any final 'e'. H 5 C + is methanium, HO−(O + )H 2 is dioxidanium (HO−OH is dioxidane), and H 2 N−(N + )H 3 is diazanium ( H 2 N−NH 2 is diazane). The above cations except for methanium are not, strictly speaking, organic, since they do not contain carbon. However, many organic cations are obtained by substituting another element or some functional group for a hydrogen. The name of each substitution is prefixed to the hydride cation name. If many substitutions by the same functional group occur, then the number is indicated by prefixing with "di-", "tri-", etc., as with halogenation. (CH 3 ) 3 O + is trimethyloxonium. CH 3 F 3 N + is trifluoromethylammonium.
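These two protonation rules can be restated compactly; the "-onium" table below is a small illustrative sample.

```python
# A sketch of the two cation rules just described: "-onium" names for
# protonated halogen/chalcogen/pnictogen hydrides, otherwise "-ium"
# appended to the hydride name after dropping a final "e".
ONIUM_NAMES = {"N": "ammonium", "O": "oxonium", "F": "fluoronium"}

def protonated_cation_name(element=None, hydride_name=None):
    if element in ONIUM_NAMES:      # halogen/chalcogen/pnictogen hydride
        return ONIUM_NAMES[element]
    return hydride_name.rstrip("e") + "ium"

print(protonated_cation_name(element="O"))             # oxonium (H3O+)
print(protonated_cation_name(hydride_name="methane"))  # methanium
print(protonated_cation_name(hydride_name="diazane"))  # diazanium
```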
https://en.wikipedia.org/wiki/IUPAC_nomenclature_of_organic_chemistry
The numerical multiplier (or multiplying affix ) in IUPAC nomenclature indicates how many particular atoms or functional groups are attached at a particular point in a molecule . The affixes are derived from both Latin and Greek . The prefixes are given from the least significant decimal digit up: units, then tens, then hundreds, then thousands. For example: While the use of the affix mono- is rarely necessary in organic chemistry , it is often essential in inorganic chemistry to avoid ambiguity: carbon oxide could refer to either carbon monoxide or carbon dioxide . In forming compound affixes, the numeral one is represented by the term hen- except when it forms part of the number eleven ( undeca- ): hence In compound affixes, the numeral two is represented by do- except when it forms part of the numbers 20 ( icosa- ), 200 ( dicta- ) or 2000 ( dilia- ). IUPAC prefers the spelling icosa- for the affix corresponding to the number twenty on the grounds of etymology . However, both the Chemical Abstracts Service and the Beilstein database use the alternative spelling eicosa- . There are two more types of numerical prefixes in IUPAC organic chemistry nomenclature. [ 1 ] Numerical prefixes for multiplication of compound or complex (as in complicated ) features are created by adding kis to the basic numerical prefix, with the exception of the numbers 2 and 3, which are bis- and tris-, respectively. An example is the IUPAC name for DDT . Examples are biphenyl or terphenyl . "mono-" is from Greek monos = "alone". "un" = 1 and "nona-" = 9 are from Latin . The others are derived from Greek numbers. The forms 100 and upwards are not correct Greek. In Ancient Greek , hekaton = 100, diakosioi = 200, triakosioi = 300, etc. The numbers 200–900 would easily be confused with 22 to 29 if they were used in chemistry. khīlioi = 1000, diskhīlioi = 2000, triskhīlioi = 3000, etc. 13 to 19 are formed by starting with the Greek word for the number of ones, followed by και (the Greek word for 'and'), followed by δέκα (the Greek word for 'ten'). For instance, treiskaideka , as in triskaidekaphobia .
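The digit-by-digit composition described above can be sketched in Python. The digit tables are an illustrative fragment, and vowel-elision cases such as 22 ( docosa- ) and 23 ( tricosa- ) are deliberately not handled.

```python
# A sketch of composing a multiplying affix from the decimal digits,
# least significant first, with the hen-/do-/undeca- special cases.
UNITS = {1: "hen", 2: "do", 3: "tri", 4: "tetra", 5: "penta",
         6: "hexa", 7: "hepta", 8: "octa", 9: "nona"}
TENS = {1: "deca", 2: "icosa", 3: "triaconta", 4: "tetraconta"}
HUNDREDS = {1: "hecta", 2: "dicta", 3: "tricta"}

def multiplier(n):
    if n == 1:
        return "mono"   # hen- appears only inside compound affixes
    if n == 2:
        return "di"     # do- appears only inside compound affixes
    if n == 11:
        return "undeca" # the exception to hen- noted above
    u, t, h = n % 10, n // 10 % 10, n // 100 % 10
    parts = []
    if u:
        parts.append(UNITS[u])
    if t:
        parts.append(TENS[t])
    if h:
        parts.append(HUNDREDS[h])
    return "".join(parts)   # NB: elisions like docosa- (22) not modelled

print(multiplier(4))    # tetra
print(multiplier(12))   # dodeca
print(multiplier(21))   # henicosa
print(multiplier(34))   # tetratriaconta
```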
https://en.wikipedia.org/wiki/IUPAC_numerical_multiplier
IUPAC polymer nomenclature is a set of standardized naming conventions for polymers set by the International Union of Pure and Applied Chemistry (IUPAC) and described in their publication "Compendium of Polymer Terminology and Nomenclature", which is also known as the "Purple Book". [ 1 ] [ 2 ] Both IUPAC [ 3 ] and the Chemical Abstracts Service (CAS) make similar recommendations for the naming of polymers. The terms polymer and macromolecule do not mean the same thing. A polymer is a substance composed of macromolecules. The latter usually have a range of molar masses (unit g mol −1 ), the distributions of which are indicated by the dispersity ( Đ ). It is defined as the ratio of the mass-average molar mass ( M m ) to the number-average molar mass ( M n ), i.e. Đ = M m / M n . [ 4 ] Symbols for physical quantities or variables are in italic font, but those representing units or labels are in roman font. Polymer nomenclature usually applies to idealized representations, meaning that minor structural irregularities are ignored. A polymer can be named in one of two ways. Source-based nomenclature can be used when the monomer can be identified. Alternatively, more explicit structure-based nomenclature can be used when the polymer structure is proven. Where there is no confusion, some traditional names are also acceptable. Whatever method is used, all polymer names have the prefix poly , followed by enclosing marks around the rest of the name. The marks are used in the order: {[( )]}. Locants indicate the position of structural features, e.g., poly(4-chlorostyrene). If the name is one word and has no locants, then the enclosing marks are not essential, but they should be used when there might be confusion, e.g., poly(chlorostyrene) is a polymer whereas polychlorostyrene might be a small, multi-substituted molecule. End-groups are described with α- and ω-, e.g., α-chloro-ω-hydroxy-polystyrene. [ 1 ] Homopolymers are named using the name of the real or assumed monomer (the 'source') from which the polymer is derived, e.g., poly(methyl methacrylate). [ 5 ] Monomers can be named using IUPAC recommendations, or well-established traditional names. [ 6 ] Should ambiguity arise, class names can be added. For example, the source-based name poly(vinyloxirane) could correspond to either of the structures shown. To clarify, the polymer is named using the polymer class name followed by a colon and the name of the monomer, i.e., class name:monomer name. Thus on the left and right, respectively, are polyalkylene:vinyloxirane and polyether:vinyloxirane. The structure of a copolymer can be described using the most appropriate of the connectives shown in Table 1. [ 7 ] These are written in italic font. a The first name is that of the main chain. Non-linear polymers and copolymers, and polymer assemblies, are named using the italicized qualifiers in Table 2. [ 5 ] The qualifier, such as branch , is used as a prefix (P) when naming a (co)polymer, or as a connective (C), e.g., comb , between two polymer names. poly(vinylbenzenesulfonic acid) a a In accordance with IUPAC organic nomenclature, square brackets indicate the nature of the locant sites in fused ring systems. [ 8 ] In place of the monomer name used in source-based nomenclature, structure-based nomenclature uses that of the "preferred constitutional repeating unit" (CRU). [ 9 ] It can be determined as follows: Polymers that are not made up of regular repetitions of a single CRU are called irregular polymers.
For these, each constitutional unit (CU) is separated by a slash, e.g., poly(but-1-ene-1,4-diyl/1-vinylethane-1,2-diyl). [ 10 ] a To avoid ambiguity, wavy lines drawn perpendicular to the free bond, which are conventionally used to indicate free valences, [ 11 ] are usually omitted from graphical representations in a polymer context. Double-strand polymers consist of uninterrupted chains of rings. In a spiro polymer, each ring has one atom in common with adjacent rings. In a ladder polymer , adjacent rings have two or more atoms in common. To identify the preferred CRU, the chain is broken so that the senior ring is retained with the maximum number of heteroatoms and the minimum number of free valences. [ 12 ] An example is The preferred CRU is an acyclic subunit of 4 carbon atoms with 4 free valences, one at each atom, as shown. It is oriented so that the lower left atom has the lowest number. The free-valence locants are written before the suffix, and they are cited clockwise from the lower left position as: lower-left, upper-left:upper-right, lower-right. This example is thus named poly(butane-1,4:3,2-tetrayl). For more complex structures, the order of seniority again follows Figure 1. Some regular single-strand inorganic polymers can be named like organic polymers using the rules given above, e.g., −[O−Si(CH 3 ) 2 ] n − and −[Sn(CH 3 ) 2 ] n − are named poly[oxy(dimethylsilanediyl)] and poly(dimethylstannanediyl), respectively. [ 13 ] Inorganic polymers can also be named in accordance with inorganic nomenclature, but the seniority of the elements is different from that in organic nomenclature. However, certain inorganic and inorganic-organic polymers, for example those containing metallocene derivatives, are at present best named using organic nomenclature, e.g., the polymer shown can be named poly[(dimethylsilanediyl)ferrocene-1,1'-diyl]. When they fit into the general pattern of systematic nomenclature, some traditional and trivial names for polymers in common usage, such as polyethylene, polypropylene , and polystyrene , are retained. The bonds between atoms can be omitted, but dashes should be drawn for chain-ends. The seniority of the subunits does not need to be followed. For single-strand (co)polymers, a dash is drawn through the enclosing marks, e.g., poly[oxy(ethane-1,2-diyl)] shown below left. For irregular polymers, the CUs are separated by slashes, and the dashes are drawn inside the enclosing marks. End-groups are connected using additional dashes outside of the enclosing marks, e.g., α-methyl-ω-hydroxy-poly[oxirane- co -(methyloxirane)], shown below right. [ 11 ] [ 14 ] CAS maintains a registry of substances. [ 15 ] In the CAS system, the CRU is called a structural repeating unit (SRU). There are minor differences in the placements of locants, e.g., poly(pyridine-3,5-diylthiophene-2,5-diyl) is poly(3,5-pyridinediyl-2,5-thiophenediyl) in the CAS registry, but otherwise polymers are named using similar methods to those of IUPAC. [ 16 ] [ 17 ]
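The dispersity definition given earlier ( Đ = M m / M n ) can be made concrete with a short sketch; the two-component mixture below is invented for illustration.

```python
# A sketch of the dispersity calculation: M_n is the number-average and
# M_m the mass-average molar mass; their ratio is the dispersity.
def molar_mass_averages(species):
    """species: list of (n_i, M_i) pairs, the mole amount and molar
    mass (g/mol) of each chain-length species present."""
    n_total = sum(n for n, M in species)
    mass_total = sum(n * M for n, M in species)
    M_n = mass_total / n_total
    M_m = sum(n * M * M for n, M in species) / mass_total
    return M_n, M_m

M_n, M_m = molar_mass_averages([(1.0, 10_000), (1.0, 30_000)])
print(M_n, M_m, M_m / M_n)   # 20000.0 25000.0 1.25
```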
https://en.wikipedia.org/wiki/IUPAC_polymer_nomenclature
Imaging X-ray Polarimetry Explorer , commonly known as IXPE or SMEX-14 , is a space observatory with three identical telescopes designed to measure the polarization of cosmic X-rays from black holes, neutron stars, and pulsars. [ 6 ] The observatory, which was launched on 9 December 2021, is an international collaboration between NASA and the Italian Space Agency (ASI). It is part of NASA's Explorers program, which designs low-cost spacecraft to study heliophysics and astrophysics. The mission will study exotic astronomical objects and permit mapping of the magnetic fields of black holes , neutron stars , pulsars , supernova remnants , magnetars , quasars , and active galactic nuclei . The high-energy X-ray radiation from these objects' surrounding environment can be polarized – oscillating in a particular direction. Studying the polarization of X-rays reveals the physics of these objects and can provide insights into the high-temperature environments where they are created. [ 7 ] The IXPE mission was announced on 3 January 2017 [ 6 ] and was launched on 9 December 2021. [ 3 ] The international collaboration agreement was signed in June 2017, [ 1 ] when the Italian Space Agency (ASI) committed to providing the X-ray polarization detectors. [ 7 ] The estimated cost of the mission and its two-year operation is US$188 million (the launch cost is US$50.3 million). [ 8 ] [ 7 ] The goal of the IXPE mission is to expand understanding of high-energy astrophysical processes and sources, in support of NASA's first science objective in astrophysics: "Discover how the universe works". [ 1 ] By obtaining X-ray polarimetry and polarimetric imaging of cosmic sources, IXPE addresses two specific science objectives: to determine the radiation processes and detailed properties of specific cosmic X-ray sources or categories of sources; and to explore general relativistic and quantum effects in extreme environments. [ 1 ] [ 6 ] During IXPE's two-year mission, it will study targets such as active galactic nuclei , quasars , pulsars , pulsar wind nebulae , magnetars , accreting X-ray binaries , supernova remnants , and the Galactic Center . [ 4 ] The spacecraft was built by Ball Aerospace & Technologies . [ 1 ] The principal investigator is Martin C. Weisskopf of NASA's Marshall Space Flight Center , where he is the chief scientist for X-ray astronomy; he is also project scientist for the Chandra X-ray Observatory . [ 7 ] Other partners include McGill University , the Massachusetts Institute of Technology (MIT), Roma Tre University , Stanford University , [ 5 ] OHB Italia [ 9 ] and the University of Colorado Boulder . [ 10 ] The technical and science objectives include: [ 3 ] The space observatory features three identical telescopes designed to measure the polarization of cosmic X-rays . [ 6 ] The polarization-sensitive detector was invented and developed by Italian scientists of the Istituto Nazionale di Astrofisica (INAF) and the Istituto Nazionale di Fisica Nucleare (INFN) and was refined over several years. [ 4 ] [ 11 ] [ 12 ] IXPE's payload is a set of three identical imaging X-ray polarimetry systems mounted on a common optical bench and co-aligned with the pointing axis of the spacecraft. [ 1 ] Each system operates independently for redundancy and comprises a mirror module assembly that focuses X-rays onto a polarization-sensitive imaging detector developed in Italy. [ 1 ] The 4 m (13 ft) focal length is achieved using a deployable boom.
The Gas Pixel Detectors (GPD), [ 13 ] a type of micropattern gaseous detector , rely on the anisotropy of the emission direction of photoelectrons produced by polarized photons to gauge with high sensitivity the polarization state of X-rays interacting in a gaseous medium. [ 4 ] Position-dependent and energy-dependent polarization maps of such synchrotron-emitting sources will reveal the magnetic-field structure of the X-ray emitting regions. X-ray polarimetric imaging better indicates the magnetic structure in regions of strong electron acceleration. The system is capable of resolving point sources from surrounding nebular emission or from adjacent point sources. [ 4 ] IXPE was launched on 9 December 2021 on a SpaceX Falcon 9 ( B1061.5 ) from LC-39A at NASA's Kennedy Space Center in Florida. The relatively small size and mass of the observatory fall well short of the normal capacity of SpaceX's Falcon 9 launch vehicle . However, the Falcon 9 still had to work hard to get IXPE into the correct orbit, because IXPE is designed to operate in an almost exactly equatorial orbit with a 0° inclination . Since Cape Canaveral is located 28.5° above the equator , it was physically impossible to launch directly into a 0.2° equatorial orbit. Instead, the rocket needed to launch due east into a parking orbit and then perform a plane, or inclination, change once in space, as the spacecraft crossed the equator. For Falcon 9, this meant that even the tiny 330 kg (730 lb) IXPE likely still represented about 20–30% of its maximum theoretical performance (1,500–2,000 kg (3,300–4,400 lb)) for such a mission profile. By contrast, when no plane change is needed, the same launch vehicle can deliver about 15,000 kg (33,000 lb) to the 540 km (340 mi) orbit IXPE was targeting, while recovering the first-stage booster. [ 14 ] IXPE is the first satellite dedicated to measuring the polarization of X-rays from a variety of cosmic sources, such as black holes and neutron stars . The orbit hugging the equator will minimize the X-ray instrument's exposure to radiation in the South Atlantic Anomaly , the region where the inner Van Allen radiation belt comes closest to Earth's surface. IXPE is built to last for two years. [ 8 ] After that it may be retired and deorbited, or given an extended mission. After launch and deployment of the IXPE spacecraft, NASA pointed the spacecraft at 1ES 1959+650, a black hole, and SMC X-1, a pulsar, for calibration. After that, the spacecraft observed its first science target, Cassiopeia A . A first-light image of Cassiopeia A was released on 11 January 2022. [ 15 ] Thirty targets are planned for observation during IXPE's first year. [ 15 ] IXPE communicates with Earth via a ground station in Malindi , Kenya. The ground station is owned and operated by the Italian Space Agency. [ 15 ] At present, mission operations for IXPE are controlled by the Laboratory for Atmospheric and Space Physics (LASP) . [ 16 ] In May 2022 the first study from IXPE hinted at the possibility of vacuum birefringence on 4U 0142+61 , [ 17 ] [ 18 ] and in August another study of Centaurus A measured a low polarization degree, suggesting that the X-ray emission comes from a scattering process rather than arising directly from the accelerated particles of the jet. [ 19 ] [ 20 ] In October 2022 it observed the gamma-ray burst GRB 221009A , also known as the "Brightest of all time" (BOAT). [ 21 ] [ 22 ]
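The performance penalty described above comes almost entirely from the plane change. As a rough illustration (a textbook circular-orbit formula applied to the figures quoted in this article, not mission data): rotating the plane of a circular orbit by an angle Δi costs Δv = 2 v sin(Δi/2).

```python
# Rough illustration of the cost of the plane change described above.
import math

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0    # mean Earth radius, m

def plane_change_dv(alt_m, delta_i_deg):
    v = math.sqrt(MU / (R_EARTH + alt_m))   # circular orbital speed
    return 2 * v * math.sin(math.radians(delta_i_deg) / 2)

# 540 km orbit, inclination change from 28.5 deg to 0.2 deg as for IXPE
print(round(plane_change_dv(540e3, 28.5 - 0.2)))   # -> 3713 m/s
```

A delta-v of roughly 3.7 km/s is a substantial fraction of what it takes to reach low Earth orbit in the first place, which is why the maneuver consumed so much of the vehicle's theoretical payload capacity.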
https://en.wikipedia.org/wiki/IXPE
IXS Enterprise is a conceptual interstellar superluminal spacecraft designed by NASA scientist Dr. Harold G. White and revealed at SpaceVision 2013, with the goal of achieving warp travel . The conceptual spacecraft would make use of a modified version of the Alcubierre drive . Dr. White is currently [ when? ] running the White–Juday warp-field interferometer experiment in order to develop, if possible, a proof of concept for Alcubierre-style warp travel. The Alcubierre drive uses exotic matter (not to be confused with antimatter ) to travel faster than light. While the concept had been public since 2013, the design of the IXS Enterprise was popularized in June 2014 after a series of media outlets reported on the conceptual artwork done by Dutch artist Mark Rademaker in collaboration with NASA. [ 1 ] According to Mark Rademaker, over 1,600 hours were spent on the conceptual artwork that he created. [ 2 ] In 2012, NASA reported that it was experimenting with the concept of warp drive and the loophole within Einstein's theory of relativity. [ clarification needed ] By 2014, it was announced that designer Mark Rademaker had created a CGI representation of a new vessel that would achieve warp velocity. The vessel he designed was the IXS Enterprise , named after the famed ship of the Star Trek franchise. According to White, [ 3 ] the energy required to power the warp drive is approximately the negative of the mass–energy equivalent of Voyager 1 , which has a mass of approximately 700 kilograms (negative energy is required for the Alcubierre drive concept to function). Using E=mc 2 , −700 kilograms of mass is equivalent to ~−63 exajoules of energy (this number is not definitive and can be further reduced). The ship has two thick outer rings (to reduce the required energy) that generate the warp field: a contraction of space ahead of the craft and an expansion of space behind it. The space inside the rings is optimized to provide more room for cargo, crew and equipment.
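The magnitude of that figure is easy to verify with a one-line check (the sign, which encodes the negative-energy requirement, is ignored here):

```python
# Checking the quoted mass-energy figure: |E| = m * c^2 for 700 kg.
c = 299_792_458.0   # speed of light, m/s
m = 700.0           # kg; the drive would require the negative of this
E = m * c ** 2
print(f"{E:.3g} J")  # -> 6.29e+19 J, i.e. roughly 63 exajoules
```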
https://en.wikipedia.org/wiki/IXS_Enterprise
I = PAT is the mathematical notation of a formula put forward to describe the impact of human activity on the environment . The expression equates human impact on the environment to a function of three factors : population (P), affluence (A) and technology (T). [ 1 ] It is similar in form to the Kaya identity , which applies specifically to emissions of the greenhouse gas carbon dioxide . The validity of expressing environmental impact as a simple product of independent factors, and the factors that should be included and their comparative importance, have been the subject of debate among environmentalists . In particular, some have drawn attention to potential inter-relationships among the three factors, and others have wished to stress other factors not included in the formula, such as political and social structures, and the scope for beneficial, as well as harmful, environmental actions. The equation was developed in 1970 during the course of a debate between Barry Commoner , Paul R. Ehrlich and John Holdren . Commoner argued that environmental impacts in the United States were caused primarily by changes in its production technology following World War II, and focused on present-day deteriorating environmental conditions in the United States. Ehrlich and Holdren argued that all three factors were important, but emphasized the role of human population growth , taking a broader view that was less specific in space and time. [ 2 ] [ 3 ] [ 4 ] [ 5 ] The equation can aid in understanding some of the factors affecting human impacts on the environment, [ 6 ] but it has also been cited as a basis for many of the dire environmental predictions of the 1970s by Paul Ehrlich , George Wald , Denis Hayes , Lester Brown , René Dubos , and Sidney Ripley that did not come to pass. [ 7 ] Neal Koblitz classified equations of this type as "mathematical propaganda " and criticized Ehrlich's use of them in the media (e.g. on The Tonight Show ) to sway the general public. [ 8 ] The variable "I" in the "I=PAT" equation represents environmental impact. The environment may be viewed as a self-regenerating system that can endure a certain level of impact. The maximum endurable impact is called the carrying capacity . As long as "I" is less than the carrying capacity, the associated population, affluence, and technology that make up "I" can be perpetually endured. If "I" exceeds the carrying capacity, then the system is said to be in overshoot , which may only be a temporary state. Overshoot may degrade the ability of the environment to endure impact, therefore reducing the carrying capacity. Impact may be measured using ecological footprint analysis in units of global hectares (gha). Ecological footprint per capita is a measure of the quantity of Earth's biologically productive surface that is needed to regenerate the resources consumed per capita. Impact is modeled as the product of three terms, giving gha as a result. Population is expressed in human numbers; therefore affluence is measured in units of gha per capita. Technology is a unitless efficiency factor. In the I=PAT equation, the variable P represents the population of an area, such as the world. Since the rise of industrial societies, human population has been increasing exponentially. This has caused Thomas Malthus , Paul Ehrlich and many others [ who? ] to postulate that this growth would continue until checked by widespread hunger and famine (see Malthusian growth model ).
The United Nations projects that world population will increase from 7.7 billion today (2019) to 9.8 billion in 2050 and about 11.2 billion in 2100. [ 9 ] These projections take into consideration that population growth has slowed in recent years as women are having fewer children. This phenomenon is the result of demographic transition all over the world. Although the UN projects that human population may stabilize at around 11.2 billion in 2100, the I=PAT equation will continue to be relevant for the increasing human impact on the environment in the short- to mid-term future. Increased population increases humans' environmental impact in many ways, which include but are not limited to: The variable A in the I=PAT equation stands for affluence . It represents the average consumption of each person in the population. As the consumption of each person increases, the total environmental impact increases as well. A common proxy for measuring consumption is through GDP per capita or GNI per capita . While GDP per capita measures production, it is often assumed that consumption increases when production increases. GDP per capita has been rising steadily over the last few centuries and is driving up human impact in the I=PAT equation. Increased consumption significantly increases human environmental impact. This is because each product consumed has wide-ranging effects on the environment. For example, the construction of a car has the following environmental impacts: The more cars per capita, the greater the impact. Ecological impacts of each product are far-reaching; increases in consumption quickly result in large impacts on the environment through direct and indirect sources. The T variable in the I=PAT equation represents how resource-intensive the production of affluence is; how much environmental impact is involved in creating, transporting and disposing of the goods, services and amenities used. Improvements in efficiency can reduce resource intensiveness, reducing the T multiplier. Since technology can affect environmental impact in many different ways, the unit for T is often tailored for the situation to which I=PAT is being applied. For example, for a situation where the human impact on climate change is being measured, an appropriate unit for T might be greenhouse gas emissions per unit of GDP. Increases in efficiency from technologies can reduce specific environmental impacts, but because of the increased prosperity these technologies yield for the people and businesses that adopt them, they can end up generating greater overall demands on the resources that sustain us. Criticisms of the I=PAT formula: The I=PAT equation has been criticized for being too simplistic by assuming that P, A, and T are independent of each other. In reality, at least seven interdependencies between P, A, and T could exist, indicating that it is more correct to rewrite the equation as I = f(P,A,T). [ 11 ] For example, a doubling of technological efficiency, or equivalently a reduction of the T-factor by 50%, does not necessarily reduce the environmental impact (I) by 50% if efficiency-induced price reductions stimulate additional consumption of the resource that was supposed to be conserved, a phenomenon called the rebound effect or Jevons paradox . As was shown by Alcott, [ 11 ] : Fig. 5 despite significant improvements in the carbon intensity of GDP (i.e., the efficiency in carbon use) since 1980, world fossil energy consumption has increased in line with economic and population growth.
Similarly, an extensive historical analysis of technological efficiency improvements has conclusively shown that improvements in the efficiency of energy and material use were almost always outpaced by economic growth, resulting in a net increase in resource use and associated pollution. [ 12 ] [ 13 ] Each factor in the I=PAT equation can either increase or decrease the level of environmental impact, and their interactions are non-linear and dynamic. Although environmental impacts are driven by human activities in specific regions, these impacts often manifest elsewhere due to the globalized nature of environmental systems and human societies. For instance, economic activity in one area can lead to resource extraction in another or cause pollution that spreads to different locations. [ 14 ] There have also been comments that this model depicts people as being purely detrimental to the environment, ignoring any conservation or restoration efforts that societies have made. [ 15 ] Another major criticism of the I=PAT model is that it ignores the political context and decision-making structures of countries and groups. This means the equation does not account for varying degrees of power, influence, and responsibility of individuals over environmental impact. [ 15 ] Also, the P factor does not account for the complexity of social structures or behaviors, resulting in blame being placed on the global poor. [ 15 ] I=PAT does not account for sustainable resource use among some poor and indigenous populations, unfairly characterizing these populations whose cultures support low-impact practices. [ 15 ] However, it has been argued that the latter criticism not only assumes low impacts for indigenous populations, but also misunderstands the I=PAT equation itself. Environmental impact is a function of human numbers, affluence (i.e., resources consumed per capita) and technology. It is assumed that small-scale societies have low environmental impacts due to their practices and orientations alone, but there is little evidence to support this. [ 16 ] [ 17 ] In fact, the generally low impact of small-scale societies compared to state societies is due to a combination of their small numbers and low-level technology. Thus, the environmental sustainability of these societies is largely an epiphenomenon due to their inability to significantly affect their environment. [ 18 ] [ 19 ] [ 20 ] That all types of societies are subject to I=PAT was actually made clear in Ehrlich and Holdren's 1972 dialogue with Commoner in The Bulletin of the Atomic Scientists , [ 5 ] where they examine the pre-industrial (and indeed prehistoric) impact of human beings on the environment. Their position is further clarified by Holdren's 1993 paper, A Brief History of "IPAT" . [ 21 ] As a result of the interdependencies between P, A, and T and potential rebound effects, policies aimed at decreasing environmental impacts through reductions in P, A, and T may not only be very difficult to implement (e.g., population control and material sufficiency and degrowth movements have been controversial) but are also likely to be rather ineffective compared to rationing (i.e., quotas) or Pigouvian taxation of resource use or pollution. [ 11 ] The IPAT equation serves as the cornerstone for analyzing the causes of environmental sustainability. It underpins the World3 simulation model , the most influential sustainability model ever created, which is essentially an extended application of the IPAT equation. [ 22 ]
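The identity itself, and the rebound effect discussed above, can both be illustrated numerically. All of the figures below are invented for the sketch.

```python
# A sketch of the identity in the units described earlier: population
# (people) x affluence (gha per person) x technology (dimensionless
# efficiency factor) gives impact in global hectares (gha).
def impact(P, A, T):
    return P * A * T

P, A, T = 7.7e9, 2.8, 1.0          # illustrative figures only
baseline = impact(P, A, T)
print(f"{baseline:.2e} gha")       # -> 2.16e+10 gha

# Rebound effect: halving T does not halve I if the efficiency gain
# stimulates extra consumption (here, A rises by 80%).
rebounded = impact(P, 1.8 * A, 0.5 * T)
print(rebounded / baseline)        # ~0.9: only a ~10% net reduction
```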
https://en.wikipedia.org/wiki/I_=_PAT
" I before E, except after C " is a mnemonic rule of thumb for English spelling . If one is unsure whether a word is spelled with the digraph ⟨ei⟩ or ⟨ie⟩ , the rhyme suggests that the correct order is ⟨ie⟩ unless the preceding letter is ⟨c⟩ , in which case it may be ⟨ei⟩ . The rhyme is very well known; Edward Carney calls it "this supreme, and for many people solitary, spelling rule". [ 1 ] However, the short form quoted above has many common exceptions ; for example: The proportion of exceptions can be reduced by restricting application of the rule based on the sound represented by the spelling. Two common restrictions are: Variant pronunciations of some words (such as h ei nous and n ei ther ) complicate application of sound-based restrictions, which do not eliminate all exceptions. Many authorities deprecate the rule as having too many exceptions to be worth learning. [ 2 ] [ 3 ] [ 4 ] [ 5 ] The Middle English language evolved from Old English after the Norman conquest , adding many loanwords from Norman French , whose sounds and spellings changed and were changed by the older English customs. In French loanwords, the digraph ⟨ie⟩ generally represented the sound [eː] , while ⟨ei⟩ represented [ɛː] ; ⟨ie⟩ was later extended to signify [eː] in non-French words. In the Great Vowel Shift , sounds [eː] and [ɛː] were raised to [iː] and [eː] respectively. Later, the meet – meat merger saw the vowel in many [eː] words change to [iː] , so that meat became a homonym of meet , while conceive now rhymed with believe . [ 6 ] [ 7 ] Early Modern English spelling was not fixed; many words were spelled with ⟨ie⟩ and ⟨ei⟩ interchangeably, in printed works of the 17th century and private correspondence of educated people into the 19th century. [ citation needed ] The mnemonic (in its short form) is found as early as 1866, as a footnote in Manual of English Spelling , [ 8 ] edited by schools inspector James Stuart Laurie from the work of a Tavistock schoolmaster named Marshall. [ 9 ] Michael Quinion surmises the rhyme was already established before this date. [ 10 ] An 1834 manual states a similar rule in prose; [ 11 ] others in 1855 and 1862 use different rhymes. [ 12 ] [ 13 ] Many textbooks from the 1870s on use the same rhyme as Laurie's book. [ 10 ] The restriction to the "long e" sound is explicitly made in the 1855 and 1862 books, and applied to the "I before E except after C" rhyme in an 1871 manual. [ 14 ] Mark Wainwright's FAQ posting on the alt.usage.English newsgroup characterises this restricted version as British. [ 15 ] The restriction may be implicit, or may be explicitly included as an extra line such as "when the sound is e" placed before [ 15 ] or after [ 16 ] the main part of the rhyme. A longer form excluding the "long a" sound is found in Rule 37 of Ebenezer Cobham Brewer 's 1880 Rules for English Spelling , along with a list of the "chief exceptions": [ 17 ] The following rhymes contain the substance of the last three rules:— i before e, Except after c, Or when sounded as "a", As in neighbor and weigh But seizure and seize do what they please. "Dr Brewer" is credited as the author by subsequent writers quoting this form of the rhyme, [ 18 ] [ 19 ] [ 20 ] which became common in American schools. [ 10 ] A Dictionary of Modern English Usage discusses "i before e except after c". 
Henry Watson Fowler 's original 1926 edition called the rule "very useful", restricting it to words with the "long e" sound, stating further that "words in which that sound is not invariable, as either , neither , inveigle , do not come under it", and calling seize "an important exception". [ 21 ] The entry was retained in Ernest Gowers 's 1965 revision. [ 22 ] Robert Burchfield rewrote it for the 1996 edition, stating 'the rule can helpfully be extended "except when the word is pronounced with /eɪ/ "', and giving a longer list of exceptions, including words excluded from Fowler's interpretation. [ 23 ] Robert Allen 's 2008 pocket edition states, "The traditional spelling rule ' i before e except after c ' should be extended to include the statement 'when the combination is pronounced -ee- '". [ 24 ] Jeremy Butterfield's 2015 edition suggests both "when ... pronounced -ee- " and "except when ... pronounced -ay- " as extensions to the rhyme, as well as listing various classes of exception. [ 25 ] In 1932 Leonard B. Wheat examined the rules and word lists found in various American elementary school spelling books. He calculated that, of the 3,876 words listed, 128 had ei or ie in the spelling; of these, 83 conformed to I-before-E, 6 to except-after-C, and 12 to sounded-like-A. He found 14 words with i-e in separate syllables, and 2 with e-i in separate syllables. This left 11 "irregular" words: 3 with cie ( ancient, conscience, efficiency ) and 8 with ei ( either, foreign, foreigner, height, leisure, neither, seize, their ). Wheat concluded, "If it were not for the fact that the jingle of the rule makes it easy to remember (although not necessarily easy to apply), the writer would recommend that the rule be reduced to ' I usually comes before e ,' or that it be discarded entirely". [ 2 ] Sandra Wilde in 1990 claimed the sounded-like-E version of the rule was one of only two sound–letter correspondence rules worth teaching in elementary schools. [ 27 ] The rule was covered by five of nine software programs for spelling education studied by Barbara Mullock in 2012. [ 26 ] Edward Carney's 1994 Survey of English Spelling describes the ["long-e" version of the] rule as "peculiar": [ 1 ] Its practical use is ... simply deciding between two correspondences for /iː/ that are a visual metathesis of each other. It is not a general graphotactic rule applicable to other phonemes. So, although seize and heinous (if you pronounce it with /iː/ rather than /eɪ/ ) are exceptions, heifer , leisure with /e/ ≡⟨ei⟩ or rein , vein with /eɪ/ ≡⟨ei⟩ are not exceptions; ⟨ie⟩ is not a usual spelling of /e/ or /eɪ/ . As to the usefulness of the rule, he says: [ 28 ] Such rules are warnings against common pitfalls for the unwary. Nevertheless, selection among competing correspondences has never been, and could never be, covered by such aids to memory. The converse of the "except after c" part is Carney's spelling-to-sound rule E.16: in the sequence ⟨cei⟩, the ⟨ei⟩ is pronounced /iː/ . [ 29 ] In Carney's test wordlist, all eight words with ⟨cei⟩ conform to this rule, which he thus describes as being a "marginal" rule with an "efficiency" of 100%. [ 29 ] Rarer words not in the wordlist may not conform; for example, in haecceity , ceilidh , and enceinte the ei represents / iː . ɪ / , / eɪ / , and / æ / respectively. [ 30 ] Mark Wainwright's FAQ posting interprets the rule as applying only to the FLEECE vowel, not the NEAR vowel; he regards it as useful if "a little common sense" is used for the exceptions. 
[ 15 ] The FAQ includes a 1996 response to Wainwright by an American, listing variations on the rule and their exceptions, contending that even the restricted version has too many exceptions, and concluding "Instead of trying to defend the 'rule' or 'guideline', "'i' before 'e' except after 'c'", why don't we all just agree that it is dumb and useless, and be content just to laugh at it?" [ 31 ] Kory Stamper of Merriam-Webster has said the neighbor-and-weigh version is "chocked with tons of exceptions", listing several types. [ 3 ] On Language Log in 2006, Mark Liberman suggested that the alternative "i before e, no matter what" was more reliable than the basic rule. [ 4 ] On the same blog in 2009, Geoff Pullum wrote, 'The rule is always taught, by anyone who knows what they are doing, as "i before e except after c when the sound is 'ee'."' [ 16 ] Teaching English Spelling ( Cambridge University Press , 2000) provides a system of sound–spelling correspondences aimed at correcting common spelling errors among native and ESL students. The chapter "The sound 'e' (/iː/)" has sections on spellings "ee", "ea", "-y" and "ie and ei", the last of which uses "I before E except after C" and lists five "common exceptions" (caffeine, codeine, protein, seize, weird) . [ 32 ] The 2009 edition of Support for Spelling , by the English Department for Education , [ 5 ] suggests an "Extension activity" for Year five (10-year-olds): In the Appendix, after a list of nine "useful spelling guidelines", there is a note: There were widespread media reports of this recommendation, which generated some controversy. [ 10 ] [ 16 ] The Oxford Dictionaries website of Oxford University Press states "The rule only applies when the sound represented is 'ee', though. It doesn't apply to words like science or efficient , in which the –ie- combination does follow the letter c but isn't pronounced 'ee'." [ 33 ] David Crystal discusses the rule in his 2012 history of English spelling. [ 34 ] He first restricts it to the / iː / vowel, then accounts for several classes of exception. He states that, while the exceptions are fewer and rarer than the words that follow the rule, there are too many to learn by heart; the factors are "too great to reduce to a simple rule", but "a basic knowledge of grammar and word-history " can handle them. [ 34 ] Educationalist Greg Brooks says the long-e qualification "is hardly ever mentioned, perhaps because it is difficult to explain to children"; the except-after-C part "works very poorly"; and the mnemonic "should be consigned to oblivion". [ 35 ] The following sections list exceptions to the basic form; many are not exceptions to the augmented forms. Words that break both the "I before E" part and the "except after C" part of the rule include cheiromancies , cleidomancies , eigenfrequencies , obeisancies and oneiromancies , as well as Pleistocene from the geologic time scale. Some large groups of words have cie in the spelling. Few common words have the cei spelling handled by the rule: verbs ending -ceive and their derivatives ( perceive , deceit , transceiver , receipts , etc.), and ceiling . The BBC trivia show QI claimed there were 923 words spelled cie , 21 times the number of words that conform to the rule's stated exception by being written with cei . [ 36 ] These figures were generated by a QI fan from a Scrabble wordlist. [ 37 ] The statistic was repeated by UberFacts . 
[ 38 ] The vowel represented by ie in words spelled cie is rarely the "long e" vowel of FLEECE ( /iː/ ), so few words are exceptions to the version of the rule restricted to that sound. Among them are specie , species . For those with happy -tensing accents, the final y in words ending -cy has the FLEECE vowel, and therefore so do inflected forms ending -cies or -cied ( fancied , policies , etc.). If the vowel of NEAR ( /ɪər/ ) is considered as "long e", then words ending -cier may also be exceptions. Possible examples include: fancier , if pronounced with two rather than three syllables; or financier , if stressed on the final syllable or pronounced with a happy -tensing accent. These are exceptions to the basic and "long a" versions of the rhyme, but not to the "long e" version. Types include: Many words have ei not preceded by c . In the sections that follow, most derived forms are omitted; for example, as well as seize , there exist disseize and seizure . Words are grouped by the phonemes (sounds) corresponding to ei or ie in the spelling; each phoneme is represented phonetically as at Help:IPA/English and, where applicable, by the keyword in John C. Wells ' lexical sets . An asterisk* after a word indicates the pronunciation implied is one of several found. Some have an /iː/ variant more common in America than Britain (e.g. sheikh , leisure , either have /eɪ/ , /ɛ/ , /aɪ/ respectively). Words where ei , not preceded by c , represents the vowel of FLEECE ( /iː/ ), are the only exceptions to the strictest British interpretation of the "long e" version of the rhyme. Less strict interpretations admit as exceptions those words where eir , not preceded by c , represents the vowel of NEAR ( /ɪər/ ). Some categories of exception: Other exceptions: There are many words where ei , not preceded by c , represents the vowel of FACE ( /eɪ/ ). There are a few where eir , not preceded by c , represents the vowel of SQUARE ( /ɛər/ ). These groups of words are exceptions only to the basic form of the rhyme; they are excluded from both of the common restricted forms. These are exceptions to the basic and "long a" versions of the rhyme, but not to the "long e" version. The rhyme is mentioned in several films and TV episodes about spelling bees , including A Boy Named Charlie Brown , The Simpsons episode " I'm Spelling as Fast as I Can ", The Pen Is Mightier Than the Pencil episode of The Odd Couple , and an episode of Arthur ; and also in the musical The Adventures of Tom Sawyer , when Huckleberry Finn is being taught how to read. The rhyme was used as a climactic plot device in the 1990 TaleSpin episode "Vowel Play" when Kit corrects Baloo's spelling by reciting the second half ("or when sounding like A, as in neighbour or weigh") of the mnemonic. I Before E (Except After C): Old-School Ways To Remember Stuff was a miscellany released in the UK for the Christmas 2007 " stocking filler " market, [ 43 ] which sold well. [ 44 ] "I Before E Except After C" is a song on Yazoo 's 1982 album Upstairs at Eric's . The Jackson 5 's 1970 hit " ABC " has the lyric "I before E except after C". "I before E except after C" was a 1963 episode of the TV series East Side/West Side . I Before E is the name of both a short-story collection by Sam Kieth and a music album by Carissa's Wierd , in each case alluding to the unusual spelling of the creator's name. Until the 1930s, Pierce City, Missouri was named "Peirce City", after Andrew Peirce. 
A 1982 attempt to revert to the original spelling was rejected by the United States Census Bureau . [ 45 ] Comedian Brian Regan employs the rule in a joke on his debut CD Live in the track Stupid in School, where he states it as "I before E, except after C, and with sounding like A, as in neighbor and weigh, and on weekends and holidays and all throughout May, and you'll always be wrong no matter what you say!" [ 46 ]
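The QI-style wordlist tally mentioned above is easy to reproduce. The sketch below counts words containing "cie" and "cei" in a machine-readable word list; the file path is an assumption (a common Unix word list), and the resulting counts depend entirely on which list is used, so they will not match the QI figures exactly.

def tally(path="/usr/share/dict/words"):
    # Count words containing "cie" versus "cei" anywhere in the word.
    cie = cei = 0
    with open(path) as f:
        for line in f:
            word = line.strip().lower()
            if "cie" in word:
                cie += 1
            if "cei" in word:
                cei += 1
    return cie, cei

cie, cei = tally()
print(f"cie: {cie}, cei: {cei}")
if cei:  # avoid dividing by zero on unusual lists
    print(f"ratio: {cie / cei:.1f}")

On most word lists the cie words (science, species, efficient, and the many -cies plurals) heavily outnumber the cei words, which is the point of the QI claim.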
https://en.wikipedia.org/wiki/I_before_E_except_after_C
Ian Grant Macdonald FRS (11 October 1928 – 8 August 2023) was a British mathematician known for his contributions to symmetric functions , special functions , Lie algebra theory and other aspects of algebra , algebraic combinatorics , and combinatorics . Born in London, he was educated at Winchester College and Trinity College, Cambridge , graduating in 1952. He then spent five years as a civil servant. He was offered a position at Manchester University in 1957 by Max Newman , on the basis of work he had done while outside academia. In 1960 he moved to the University of Exeter , and in 1963 became a Fellow of Magdalen College, Oxford . Macdonald became Fielden Professor at Manchester in 1972, and professor at Queen Mary College, University of London , in 1976. He worked on symmetric products of algebraic curves , Jordan algebras and the representation theory of groups over local fields . In 1972 he proved the Macdonald identities , after a pattern known to Freeman Dyson . His 1979 book Symmetric Functions and Hall Polynomials has become a classic. Symmetric functions are an old theory, part of the theory of equations , to which both K-theory and representation theory lead. His was the first text to integrate much classical theory, such as Hall polynomials , Schur functions , the Littlewood–Richardson rule , with the abstract algebra approach. It was both an expository work and, in part, a research monograph, and had a major impact in the field. The Macdonald polynomials are now named after him. The Macdonald conjectures from 1982 also proved most influential. Macdonald was elected a Fellow of the Royal Society in 1979. He was an invited speaker in 1970 at the International Congress of Mathematicians (ICM) in Nice [ 1 ] and a plenary speaker in 1998 at the ICM in Berlin. [ 2 ] In 1991 he received the Pólya Prize of the London Mathematical Society . [ 3 ] In 2002 he received an honorary doctorate from the University of Amsterdam . [ 4 ] He was awarded the 2009 Steele Prize for Mathematical Exposition. In 2012 he became a fellow of the American Mathematical Society . [ 5 ] Ian G. Macdonald died on 8 August 2023, at the age of 94. [ 6 ]
https://en.wikipedia.org/wiki/Ian_G._Macdonald
Sir Ian Wilmut (7 July 1944 – 10 September 2023) was a British embryologist and the chair of the Scottish Centre for Regenerative Medicine [ 4 ] at the University of Edinburgh . [ 5 ] He was the leader of the research group that in 1996 first cloned a mammal from an adult somatic cell , a Finnish Dorset lamb named Dolly . [ 6 ] [ 7 ] Wilmut was appointed OBE in 1999 for services to embryo development [ 8 ] and knighted in the 2008 New Year Honours . [ 9 ] He, Keith Campbell and Shinya Yamanaka jointly received the 2008 Shaw Prize for Medicine and Life Sciences for their work on cell differentiation in mammals. [ 3 ] Wilmut was born in Hampton Lucy , Warwickshire , England, on 7 July 1944. [ 10 ] Wilmut's father, Leonard Wilmut, was a mathematics teacher who suffered from diabetes for fifty years, which eventually caused him to become blind. [ 11 ] The younger Wilmut attended the Boys' High School in Scarborough , where his father taught. [ 12 ] His early desire was to embark on a naval career, but he was unable to do so due to his colour blindness . [ 13 ] As a schoolboy, Wilmut worked as a farm hand on weekends, which inspired him to study agriculture at the University of Nottingham . [ 12 ] [ 14 ] In 1966, Wilmut spent eight weeks working in the laboratory of Christopher Polge , who is credited with developing the technique of cryopreservation in 1949. [ 15 ] The following year Wilmut joined Polge's laboratory to undertake a Doctor of Philosophy degree at the University of Cambridge , from where he graduated in 1971 with a thesis on semen cryopreservation . [ 16 ] During this time he was a postgraduate student at Darwin College . [ 17 ] After completing his PhD, he was involved in research focusing on gametes and embryogenesis, including working at the Roslin Institute . [ 12 ] Wilmut was the leader of the research group that in 1996 first cloned a mammal, a lamb named Dolly . [ 18 ] [ 19 ] She died of a respiratory disease in 2003. In 2008 Wilmut announced that he would abandon the technique of somatic cell nuclear transfer [ 20 ] by which Dolly was created in favour of an alternative technique developed by Shinya Yamanaka . This method has been used in mice to derive pluripotent stem cells from differentiated adult skin cells, thus circumventing the need to generate embryonic stem cells. Wilmut believed that this method holds greater potential for the treatment of degenerative conditions such as Parkinson's disease and to treat stroke and heart attack patients. [ 21 ] As Wilmut later put it: "Dolly was a bonus; sometimes when scientists work hard, they also get lucky, and that's what happened." [ 22 ] Wilmut led the team that created Dolly, but in 2006 admitted his colleague Keith Campbell [ 23 ] deserved "66 per cent" of the credit for the invention that made Dolly's birth possible, and that the statement "I did not create Dolly" was accurate. [ 24 ] His supervisory role is consistent with the post of principal investigator held by Wilmut at the time of Dolly's creation. Wilmut was an Emeritus Professor at the Scottish Centre for Regenerative Medicine [ 25 ] at the University of Edinburgh and in 2008 was knighted in the New Year Honours for services to science. [ 13 ] Wilmut and Campbell, in conjunction with Colin Tudge , published The Second Creation in 2000. [ 26 ] [ 10 ] In 2006 Wilmut's book After Dolly: The Uses and Misuses of Human Cloning was published, [ 27 ] co-authored with Roger Highfield . Wilmut died from complications of Parkinson's disease on 10 September 2023, aged 79.
[ 28 ] [ 10 ] In 1998 he received the Lord Lloyd of Kilgerran Award [ 29 ] and the Golden Plate Award of the American Academy of Achievement . [ 30 ] Wilmut was appointed Officer of the Order of the British Empire (OBE) in the 1999 Birthday Honours "for services to Embryo Development" [ 17 ] [ 31 ] and elected a Fellow of the Royal Society (FRS) in 2002. [ 32 ] He was also elected a Fellow of the Academy of Medical Sciences in 1999 [ 33 ] and a Fellow of the Royal Society of Edinburgh in 2000. [ 34 ] He was elected an EMBO Member in 2003. [ 35 ] In 1997, Wilmut was the runner-up for Time magazine's Man of the Year. [ 22 ] He was knighted in the 2008 New Year Honours for services to science. [ 17 ] [ 36 ]
https://en.wikipedia.org/wiki/Ian_Wilmut
The B method is a method of software development based on B, a tool-supported formal method built around an abstract machine notation . [ 1 ] [ 2 ] B was originally developed in the 1980s by Jean-Raymond Abrial [ 3 ] [ 4 ] in France and the UK . B is related to the Z notation (also originated by Abrial) and supports development of programming language code from specifications. B has been used in major safety-critical system applications in Europe (such as the automatic Paris Métro lines 14 and 1 and the Ariane 5 rocket). [ 5 ] [ 6 ] [ 7 ] It has robust, commercially available tool support for specification , design , proof and code generation . Compared to Z, B is slightly more low-level and more focused on refinement to code rather than just formal specification; hence it is easier to correctly implement a specification written in B than one in Z. In particular, there is good tool support for this. The same language is used in specification, design and programming. Mechanisms include encapsulation and data locality. Subsequently, another formal method called Event-B [ 8 ] [ 9 ] [ 10 ] was developed based on the B method, supported by the Rodin Platform. [ 11 ] [ 12 ] Event-B is a formal method aimed at system-level modelling and analysis. Features of Event-B are the use of set theory for modelling, the use of refinement to represent systems at different levels of abstraction, and the use of mathematical proof for verifying consistency between these refinement levels. The B notation depends on set theory and first-order logic in order to specify different versions of software covering the complete cycle of project development. In the first and most abstract version, which is called an Abstract Machine , the designer specifies the goal of the design. The B-Toolkit [ 13 ] [ 14 ] is a collection of programming tools designed to support the use of the B-Tool, [ 15 ] a set-theory-based mathematical interpreter, for the purposes of supporting the B method. Development was originally undertaken by Ib Holm Sørensen and others, at BP Research and then at B-Core (UK) Limited. [ 16 ] The toolkit uses a custom X Window Motif interface [ 17 ] for GUI management and runs primarily on the Linux , Mac OS X and Solaris operating systems. The B-Toolkit source code is now available. [ 18 ] Developed by ClearSy, Atelier B [ 19 ] [ 20 ] is an industrial tool that allows for the operational use of the B method to develop defect-free proven software (formal software). Two versions are available: 1) the Community Edition, available to anyone without any restriction; and 2) the Maintenance Edition, for maintenance contract holders only. Atelier B has been used to develop safety automatisms for the various subways installed throughout the world by Alstom and Siemens , and also for Common Criteria certification and the development of system models by ATMEL and STMicroelectronics . The Rodin Platform is a tool that supports Event-B . [ 8 ] [ 21 ] [ 11 ] Rodin is based on the Eclipse IDE ( integrated development environment ) and provides support for refinement and mathematical proof . The platform is open source, forms part of the Eclipse framework, and is extendable using software component plug-ins . The development of Rodin has been supported by the European Union projects DEPLOY (2008–2012), RODIN (2004–2007), and ADVANCE (2011–2014).
[ 8 ] BHDL provides a method for the correct design of digital circuits , combining the advantages of the hardware description language VHDL with the formality of B. [ 22 ] APCB ( French : Association de Pilotage des Conférences B , the International B Conference Steering Committee ) has organized meetings associated with the B-Method. [ 23 ] It has organized ZB conferences with the Z User Group and ABZ conferences, including Abstract State Machines (ASM) as well as the Z notation . The following conferences have explicitly included the B-Method and/or Event-B:
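To give a flavour of the central idea described above (a machine with state, an invariant, and guarded operations), here is a loose Python analogy. It is not B syntax, and it only checks conditions at runtime; in the real B method the corresponding proof obligations are discharged statically by a prover. The bounded-counter machine and all names here are illustrative assumptions.

class BoundedCounter:
    """Analogue of an abstract machine: VARIABLES n; INVARIANT 0 <= n <= limit."""

    def __init__(self, limit: int):
        self.limit = limit
        self.n = 0                      # analogue of INITIALISATION n := 0
        assert self._invariant()

    def _invariant(self) -> bool:
        return 0 <= self.n <= self.limit

    def increment(self):
        assert self.n < self.limit      # analogue of a PRE (precondition) clause
        self.n += 1                     # analogue of the substitution n := n + 1
        assert self._invariant()        # proof obligation, here merely checked at runtime

    def decrement(self):
        assert self.n > 0
        self.n -= 1
        assert self._invariant()

c = BoundedCounter(limit=3)
c.increment(); c.increment()
print(c.n)  # 2

In actual B, such a machine would be written in the Abstract Machine Notation with INVARIANT and PRE/THEN clauses, and tools such as Atelier B would generate and discharge the proof obligations that each operation preserves the invariant.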
https://en.wikipedia.org/wiki/Ib_Sørensen
Ibn al‐Bannāʾ al‐Marrākushī ( Arabic : ابن البناء المراكشي ), full name: Abu'l-Abbas Ahmad ibn Muhammad ibn Uthman al-Azdi al-Marrakushi ( Arabic : أبو العباس أحمد بن محمد بن عثمان الأزدي ) (29 December 1256 – 31 July 1321), was an Arab Muslim polymath who was active as a mathematician , astronomer , Islamic scholar , Sufi and astrologer . [ 3 ] [ 4 ] Ahmad ibn Muhammad ibn Uthman was born in the Qa'at Ibn Nahid Quarter of Marrakesh on 29 or 30 December 1256. [ 2 ] [ 3 ] His nisba al-Marrakushi refers to his birth and death in his hometown of Marrakesh, while al-Azdi indicates that he came from the large Arab tribe of Azd. His father was a mason, hence the name Ibn al-Banna' (lit. 'son of the mason'). [ 5 ] Ibn al-Banna' studied a variety of subjects under at least 17 masters: the Quran under the qaris Muhammad ibn al-Bashir and Shaykh al-Ahdab. ʻIlm al-ḥadīth under the qadi al-Jama'a (chief judge) of Fez Abu al-Hajjaj Yusuf ibn Ahmad ibn Hakam al-Tujibi, Abu Yusuf Ya'qub ibn Abd al-Rahman al-Jazuli and Abu Abd Allah ibn. Fiqh and usul al-fiqh under Abu Imran Musa ibn Abi Ali az-Zanati al-Marrakushi and Abu al-Hasan Muhammad ibn Abd al-Rahman al-Maghili, who taught him al-Juwayni's Kitab al-Irshad . He also studied Arabic grammar under Abu Ishaq Ibrahim ibn Abd as-Salam as-Sanhaji and Muhammad ibn Ali ibn Yahya as-Sharif al-Marrakushi, who also taught him Euclid's Elements . ʿArūḍ and ʿilm al-farāʾiḍ he studied under Abu Bakr Muhammad ibn Idris ibn Malik al-Quda'i al-Qallusi, and arithmetic under Muhammad ibn Ali, known as Ibn Ḥajala. Ibn al-Banna' also studied astronomy under Abu 'Abdallah Muhammad ibn Makhluf as-Sijilmassi, and medicine under al-Mirrīkh. [ 6 ] [ 7 ] He is known to have attached himself to the founder of the Hazmiriyya zawiya and Sufi saint of Aghmat , Abu Zayd Abd al-Rahman al-Hazmiri, who guided his arithmetic skills toward divinatory predictions. [ 4 ] Ibn al-Banna' taught classes in Marrakesh, and some of his students were: Abd al-Aziz ibn Ali al-Hawari al-Misrati (d. 1344), Abd al-Rahman ibn Sulayman al-Laja'i (d. 1369) and Muhammad ibn Ali ibn Ibrahim al-Abli (d. 1356). [ 8 ] He died at Marrakesh on 31 July 1321. [ 4 ] Ibn al-Banna' wrote over 100 works encompassing such varied topics as astronomy, astrology, the division of inheritances, linguistics, logic, mathematics, meteorology, rhetoric, tafsir , usūl al-dīn and usul al-fiqh . [ 8 ] One of his works, called Talkhīṣ ʿamal al-ḥisāb ( Arabic : تلخيص أعمال الحساب ) (Summary of arithmetical operations), includes topics such as fractions and sums of squares and cubes. Another, called Tanbīh al-Albāb , [ 9 ] covers topics related to: He also wrote an introduction to Euclid's Elements . [ 10 ] He also wrote Rafʿ al-Ḥijāb 'an Wujuh A'mal al-Hisab (Lifting the Veil from the Faces of the Workings of Calculations), which covered topics such as computing the square roots of a number and the theory of simple continued fractions . [ 10 ]
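For reference, the sums of squares and cubes treated in arithmetic manuals of this tradition have the classical closed forms, stated here in modern notation (the notation is an editorial convenience, not the work's own):

\sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6}, \qquad \sum_{k=1}^{n} k^3 = \left( \frac{n(n+1)}{2} \right)^2 .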
https://en.wikipedia.org/wiki/Ibn_al-Banna'_al-Marrakushi
Ibotta, Inc. is an American mobile technology company headquartered in Denver, Colorado . Founded in 2011, the company offers cash back rewards on various purchases through its Ibotta Performance Network and direct-to-consumer app. [ 9 ] [ 10 ] [ 11 ] Ibotta partners with CPG ( consumer packaged goods ) brands and network publishers to provide these rewards. As of 2024, the company operates solely in the United States. [ 12 ] [ 13 ] [ 14 ] [ 15 ] The company's rewards-as-a-service offering, the Ibotta Performance Network, went live in 2022. In August 2019, Ibotta received a $1 billion valuation after its Series D funding, and in 2023, the company surpassed $1.5 billion in cash rewards paid to over 50 million consumers since the company's founding. [ 16 ] Ibotta became a publicly traded company in April 2024 with a listing on the New York Stock Exchange . [ 17 ] Ibotta was founded by current CEO Bryan Leach . The company was incorporated in 2011 and the app launched to both the App Store and Google Play stores in 2012. [ 18 ] [ 19 ] [ 20 ] Early investors included entrepreneur and computer scientist Jim Clark and Tom “TJ” Jermoluk, Chairman of @Home Network . [ 21 ] In 2015, Ibotta expanded beyond item-level grocery, adding the ability to get cash back on in-store retail purchases. In 2016, in-app mobile commerce began, allowing users to navigate from the Ibotta app to its partners' apps to earn cash back on purchases. [ 22 ] By 2016, with a Series C investment, Ibotta had raised over $73 million in funding. [ 23 ] [ 24 ] In March of that year, Ibotta partnered with Anheuser-Busch to offer cash back for adults who purchased its products. [ 14 ] In May, the company partnered with LiveRamp so that companies could use their CRM data to create segmented, personalized campaigns. [ 25 ] At the time, the company had around 200 full- and part-time employees and moved from offices in Lower Downtown Denver (LoDo) to a 40,000-square-foot office in the central Denver business district. [ 26 ] A year later, the company had to expand to a second floor as it added almost another 100 employees. [ 19 ] In 2017, Ibotta added cash back for Uber to its app as well as cash back rewards for online and mobile purchases. [ 19 ] [ 21 ] In 2018, Ibotta was listed on the Inc. 5000 list as one of the fastest-growing private companies in the U.S. [ 27 ] A year later, in January 2019, the Ibotta app had been downloaded more than 30 million times and users had been credited $500 million in cash back rewards. [ 28 ] That year, Ibotta was the largest mobile company in Colorado with six million monthly active users. [ 28 ] In August 2019, Ibotta was valued at $1 billion, following a Series D round of funding. [ 29 ] The round was led by Koch Disruptive Technologies, a subsidiary of Koch Industries . [ 29 ] 2019 was also the year the company introduced Pay with Ibotta, which allowed users to complete purchases at key retailers on the Ibotta app and earn instant cash back in the process. [ 29 ] With that new service, users were able to enter their purchase total and use a QR code to check out and receive immediate cash back. [ 29 ] [ 27 ] In 2020, the company partnered with Trees for the Future to plant up to 1 million trees as part of an Earth Month campaign to raise awareness about the waste of unused paper coupons. [ 30 ] In response to the COVID-19 pandemic , Ibotta partnered with CPG brands in their “Here to Help” campaign and together committed over $10 million in cash back to American consumers.
[ 31 ] The company added the ability to earn cash back from online grocery pick-up and delivery orders. Later that year, Ibotta started its free Thanksgiving program, providing users with 100% cash back on select groceries needed for a Thanksgiving meal. [ 32 ] [ 33 ] By 2022, the company had provided approximately 10 million Thanksgiving meals. [ 32 ] In 2021, Ibotta acquired OctoShop (originally InStok), a shopping browser-extension company. The OctoShop app enables users to compare prices across stores and set restock and price-drop alerts. [ 34 ] In April 2022, the Ibotta Performance Network (IPN) was launched. [ 35 ] The IPN allows brands to deliver digital offers to consumers through third-party publishers. Retailers including Walmart , Dollar General and Family Dollar , food delivery services including Instacart , and convenience stores including Shell are all part of the Ibotta Performance Network. [ 36 ] This pay-per-sale, or success-based, performance network reaches over 200 million consumers. [ 37 ] [ 38 ] [ 35 ] On April 18, 2024, Ibotta had its initial public offering (IPO) , trading on the New York Stock Exchange (NYSE) under the ticker symbol IBTA. It was the largest technology IPO in Colorado history. [ 17 ] Ibotta became the official jersey patch partner of the New Orleans Pelicans for the 2020–2021 season, [ 39 ] and continued the partnership through the 2023–2024 season. [ 40 ] In March 2023, F1 driver Logan Sargeant , the first U.S. racer to compete in F1 since 2015, partnered with Ibotta. [ 41 ] The Ibotta logo was displayed on Sargeant's racing helmet throughout his F1 career. [ 41 ] In June 2023, Ibotta partnered with UConn Huskies basketball player Paige Bueckers and a number of other female collegiate athletes through its female collegiate athlete collaboration with Pearpop and the Brandr Group (TBG). [ 42 ] Ibotta became the official jersey patch partner of the 2023 NBA champion Denver Nuggets beginning in the 2023–2024 season. [ 43 ]
https://en.wikipedia.org/wiki/Ibotta
Ibritumomab tiuxetan (pronounced / ɪ b r ɪ ˈ t uː m oʊ m æ b t aɪ ˈ ʌ k s ɛ t æ n / [ 2 ] ), sold under the trade name Zevalin , is a monoclonal antibody radioimmunotherapy treatment for non-Hodgkin's lymphoma . The drug uses the monoclonal mouse IgG1 antibody ibritumomab in conjunction with the chelator tiuxetan, to which a radioactive isotope (either yttrium-90 or indium-111 ) is added. Tiuxetan is a modified version of DTPA whose carbon backbone contains an isothiocyanatobenzyl and a methyl group. [ 3 ] [ 4 ] Ibritumomab tiuxetan is used to treat relapsed or refractory, low-grade or transformed B cell non-Hodgkin's lymphoma (NHL), a lymphoproliferative disorder , and previously untreated follicular NHL in adults who achieve a partial or complete response to first-line chemotherapy . [ 5 ] The treatment starts with an infusion of rituximab . This may be followed by an administration of indium-111-labeled ibritumomab tiuxetan ( 111 In replaces the 90 Y component) to allow the distribution of the medication to be imaged on a gamma camera , before the actual therapy is administered. [ 6 ] The antibody binds to the CD20 antigen found on the surface of normal and malignant B cells (but not B cell precursors), allowing radiation from the attached isotope (mostly beta emission ) to kill the targeted cell and some nearby cells. In addition, the antibody itself may trigger cell death via antibody-dependent cell-mediated cytotoxicity (ADCC), complement-dependent cytotoxicity (CDC), and apoptosis . Together, these actions eliminate B cells from the body, allowing a new population of healthy B cells to develop from lymphoid stem cells . [ 7 ] Developed by IDEC Pharmaceuticals, now part of Biogen Idec , [ 8 ] ibritumomab tiuxetan was the first radioimmunotherapy drug approved by the US Food and Drug Administration (FDA), in 2002, to treat cancer. It was approved for the treatment of people with relapsed or refractory, low‑grade or follicular B‑cell non‑Hodgkin's lymphoma (NHL), including people with rituximab-refractory follicular NHL. [ 9 ] It was given marketing authorization by the European Medicines Agency in 2004 for the treatment of adults with rituximab-relapsed or -refractory CD20+ follicular B-cell non-Hodgkin's lymphoma. [ 1 ] The authorization lapsed in July 2024, after the product had not been marketed for more than three consecutive years. [ 1 ] In September 2009, ibritumomab tiuxetan received approval from the FDA for an expanded label to include previously untreated people who have responded to chemotherapy. [ 5 ] Ibritumomab tiuxetan is under patent protection and not available in generic form . When approved, it was the most expensive medication available given in a single dose, costing over US$37,000 (€30,000) for the average dose. [ 10 ] [ 11 ] Compared with other monoclonal antibody treatments (many of which are well over $40,000 for a course of therapy), it may be considered cost-effective. [ 10 ] [ 12 ]
https://en.wikipedia.org/wiki/Ibritumomab_tiuxetan
Ibuprofen is a nonsteroidal anti-inflammatory drug (NSAID) that is used to relieve pain , fever , and inflammation . [ 7 ] This includes painful menstrual periods , migraines , and rheumatoid arthritis . [ 7 ] It can be taken orally (by mouth) or intravenously . [ 7 ] It typically begins working within an hour. [ 7 ] Common side effects include heartburn , nausea , indigestion , and abdominal pain . [ 7 ] Potential side effects include gastrointestinal bleeding . [ 8 ] Long-term use has been associated with kidney failure and, rarely, liver failure , and it can exacerbate the condition of people with heart failure . [ 7 ] At low doses, it does not appear to increase the risk of myocardial infarction (heart attack); however, at higher doses it may. [ 8 ] Ibuprofen can also worsen asthma . [ 8 ] While its safety in early pregnancy is unclear, [ 7 ] it appears to be harmful in later pregnancy, so it is not recommended during that period. [ 9 ] It works by inhibiting the production of prostaglandins by decreasing the activity of the enzyme cyclooxygenase (COX). [ 7 ] Ibuprofen is a weaker anti-inflammatory agent than other NSAIDs. [ 8 ] Ibuprofen was discovered in 1961 by Stewart Adams and John Nicholson [ 10 ] while working at Boots UK Limited and initially sold as Brufen. [ 11 ] It is available under a number of brand names including Advil , Brufen , Motrin , and Nurofen . [ 7 ] [ 12 ] Ibuprofen was first sold in 1969 in the United Kingdom and in 1974 in the United States. [ 7 ] [ 11 ] It is on the World Health Organization's List of Essential Medicines . [ 13 ] It is available as a generic medication . [ 7 ] In 2022, it was the 33rd most commonly prescribed medication in the United States, with more than 17 million prescriptions. [ 14 ] [ 15 ] Ibuprofen is used primarily to treat fever (including post-vaccination fever), mild to moderate pain (including pain relief after surgery ), painful menstruation , osteoarthritis , dental pain, headaches , and pain from kidney stones . About 60% of people respond to any NSAID; those who do not respond well to a particular one may respond to another. [ 16 ] A Cochrane medical review of 51 trials of NSAIDs for the treatment of lower back pain found that "NSAIDs are effective for short-term symptomatic relief in patients with acute low back pain". [ 17 ] It is used for inflammatory diseases such as juvenile idiopathic arthritis and rheumatoid arthritis . [ 18 ] [ 19 ] It is also used for pericarditis and to close a patent ductus arteriosus in a premature baby . [ 20 ] [ 7 ] [ 21 ] [ 22 ] In some countries, ibuprofen lysine (the lysine salt of ibuprofen, sometimes called "ibuprofen lysinate") is licensed for treatment of the same conditions as ibuprofen; the lysine salt is used because it is more water-soluble. [ 23 ] However, subsequent studies have shown no statistically significant differences between the lysine salt and ibuprofen base. [ 24 ] [ 25 ] In 2006, ibuprofen lysine was approved in the United States by the Food and Drug Administration (FDA) for closure of patent ductus arteriosus in premature infants weighing between 500 and 1,500 g (1 and 3 lb), who are no more than 32 weeks gestational age when usual medical management (such as fluid restriction, diuretics, and respiratory support) is not effective. [ 26 ] Adverse effects include nausea , heartburn , indigestion , diarrhea , constipation , gastrointestinal ulceration , headache , dizziness , rash, salt and fluid retention, and high blood pressure .
[ 7 ] [ 19 ] [ 27 ] Infrequent adverse effects include esophageal ulceration, heart failure , high blood levels of potassium , kidney impairment , confusion, and bronchospasm . [ 19 ] Ibuprofen can exacerbate asthma, sometimes fatally. [ 28 ] Allergic reactions, including anaphylaxis , may occur. [ 29 ] Ibuprofen may be quantified in blood, plasma, or serum to demonstrate the presence of the drug in a person having experienced an anaphylactic reaction, confirm a diagnosis of poisoning in people who are hospitalized, or assist in a medicolegal death investigation. A monograph relating ibuprofen plasma concentration, time since ingestion, and risk of developing renal toxicity in people who have overdosed has been published. [ 30 ] In October 2020, the U.S. FDA required the drug label to be updated for all NSAID medications to describe the risk of kidney problems in unborn babies that result in low amniotic fluid. [ 31 ] [ 32 ] Along with several other NSAIDs, chronic ibuprofen use is correlated with the risk of progression to hypertension in women, though less than for paracetamol (acetaminophen), [ 33 ] and myocardial infarction (heart attack), [ 34 ] particularly among those chronically using higher doses. On 9 July 2015, the U.S. FDA toughened warnings of increased heart attack and stroke risk associated with ibuprofen and related NSAIDs; the NSAID aspirin is not included in this warning. [ 35 ] The European Medicines Agency (EMA) issued similar warnings in 2015. [ 36 ] [ 37 ] Along with other NSAIDs, ibuprofen has been associated with the onset of bullous pemphigoid or pemphigoid-like blistering. [ 38 ] As with other NSAIDs, ibuprofen has been reported to be a photosensitizing agent, [ 39 ] but it is considered a weak photosensitizing agent compared to other members of the 2-arylpropionic acid class. Like other NSAIDs, ibuprofen is an extremely rare cause of the autoimmune diseases Stevens–Johnson syndrome (SJS) and toxic epidermal necrolysis . [ 40 ] [ 41 ] [ 42 ] The National Health Service recommends against the use of ibuprofen for more than 3 days in pregnancy as it can affect the fetus' kidneys and circulatory system. Paracetamol is considered a safer alternative. [ 43 ] A 2012 Canadian study of pregnant women suggested that those taking any type or amount of NSAIDs (including ibuprofen, diclofenac , and naproxen ) were 2.4 times more likely to miscarry than those not taking the medications. [ 44 ] However, a 2014 Israeli study found no increased risk of miscarriage in the group of mothers using NSAIDs and noted that two previous studies, including the 2012 Canadian study, "did not adjust for important known risk factors" which may have exposed those results to residual confounding . [ 45 ] Drinking alcohol when taking ibuprofen may increase the risk of stomach bleeding . [ 46 ] According to the FDA, "ibuprofen can interfere with the antiplatelet effect of low-dose aspirin , potentially rendering aspirin less effective when used for cardioprotection and stroke prevention". Allowing sufficient time between doses of ibuprofen and immediate-release (IR) aspirin can avoid this problem. The recommended elapsed time between a dose of ibuprofen and a dose of aspirin depends on which is taken first. It would be 30 minutes or more for ibuprofen taken after IR aspirin, and 8 hours or more for ibuprofen taken before IR aspirin. However, this timing cannot be recommended for enteric-coated aspirin. 
If ibuprofen is taken only occasionally without the recommended timing, though, the reduction of the cardioprotection and stroke prevention of a daily aspirin regimen is minimal. [ 47 ] Ibuprofen combined with paracetamol is considered generally safe in children for short-term usage. [ 48 ] Ibuprofen overdose has become common since it was licensed for over-the-counter (OTC) use. Many overdose experiences are reported in the medical literature , although the frequency of life-threatening complications from ibuprofen overdose is low. [ 49 ] Human responses in cases of overdose range from an absence of symptoms to a fatal outcome despite intensive-care treatment. Most symptoms are an excess of the pharmacological action of ibuprofen and include abdominal pain, nausea, vomiting , drowsiness, dizziness, headache, ear ringing , and nystagmus . Rarely, more severe symptoms such as gastrointestinal bleeding , seizures , metabolic acidosis , hyperkalemia , low blood pressure , slow heart rate , fast heart rate , atrial fibrillation , coma , liver dysfunction, acute kidney failure , cyanosis , respiratory depression , and cardiac arrest have been reported. [ 50 ] The severity of symptoms varies with the ingested dose and the time elapsed; however, individual sensitivity also plays an important role. Generally, the symptoms observed with an overdose of ibuprofen are similar to the symptoms caused by overdoses of other NSAIDs. The correlation between the severity of symptoms and measured ibuprofen plasma levels is weak. Toxic effects are unlikely at doses below 100 mg/kg, but can be severe above 400 mg/kg; for a 75 kg adult male, 400 mg/kg corresponds to about 30,000 mg, or around 150 tablets of 200 mg each. [ 51 ] However, large doses do not indicate that the clinical course is likely to be lethal. [ 52 ] A precise lethal dose is difficult to determine, as it may vary with age, weight, and concomitant conditions of the person. Treatment to address an ibuprofen overdose is based on how the symptoms present. In cases presenting early, decontamination of the stomach is recommended. This is achieved using activated charcoal ; charcoal adsorbs the drug before it can enter the bloodstream . Gastric lavage is now rarely used, but can be considered if the amount ingested is potentially life-threatening, and it can be performed within 60 minutes of ingestion. Purposeful vomiting is not recommended. [ 53 ] Most ibuprofen ingestions produce only mild effects, and the management of overdose is straightforward. Standard measures to maintain normal urine output should be instituted and kidney function monitored. [ 51 ] Since ibuprofen has acidic properties and is also excreted in the urine, forced alkaline diuresis is theoretically beneficial. However, because ibuprofen is highly protein-bound in the blood, the kidneys' excretion of the unchanged drug is minimal. Forced alkaline diuresis is, therefore, of limited benefit. [ 54 ] Ibuprofen works by inhibiting cyclooxygenase (COX) enzymes, which convert arachidonic acid to prostaglandin H2 (PGH 2 ). PGH 2 , in turn, is converted by other enzymes into various prostaglandins (which mediate pain, inflammation , and fever) and thromboxane A2 (which stimulates platelet aggregation and promotes blood clot formation). Like aspirin and indomethacin , ibuprofen is a nonselective COX inhibitor, in that it inhibits two isoforms of cyclooxygenase, COX-1 and COX-2 .
The analgesic , antipyretic , and anti-inflammatory activity of NSAIDs appears to operate mainly through inhibition of COX-2, which decreases the synthesis of prostaglandins involved in mediating inflammation, pain, fever, and swelling. Antipyretic effects may be due to action on the hypothalamus, resulting in an increased peripheral blood flow, vasodilation, and subsequent heat dissipation. Inhibition of COX-1 instead would be responsible for unwanted effects on the gastrointestinal tract. [ 55 ] However, the role of the individual COX isoforms in the analgesic, anti-inflammatory, and gastric damage effects of NSAIDs is uncertain, and different compounds cause different degrees of analgesia and gastric damage. [ 56 ] Ibuprofen is administered as a racemic mixture . The R -enantiomer undergoes extensive conversion to the S -enantiomer in vivo . The S -enantiomer is believed to be the more pharmacologically active enantiomer. [ 58 ] The R -enantiomer is converted through a series of three main enzymes. These enzymes include acyl-CoA-synthetase, which converts the R -enantiomer to (−)- R -ibuprofen I-CoA; 2-arylpropionyl-CoA epimerase, which converts (−)- R -ibuprofen I-CoA to (+)- S -ibuprofen I-CoA; and hydrolase, which converts (+)- S -ibuprofen I-CoA to the S -enantiomer. [ 42 ] In addition to the conversion of ibuprofen to the S -enantiomer, the body can metabolize ibuprofen to several other compounds, including numerous hydroxyl, carboxyl and glucuronyl metabolites. Virtually all of these have no pharmacological effects. [ 42 ] Unlike most other NSAIDs, ibuprofen also acts as an inhibitor of Rho kinase and may be useful in recovery from spinal cord injury. [ 59 ] [ 60 ] Another unusual activity is inhibition of the sweet taste receptor. [ 61 ] After oral administration, peak serum concentration is reached after 1–2 hours, and up to 99% of the drug is bound to plasma proteins . [ 62 ] The majority of ibuprofen is metabolized and eliminated within 24 hours in the urine; however, 1% of the unchanged drug is removed through biliary excretion . [ 58 ] Ibuprofen mainly undergoes hepatic metabolism . The following table shows potential pathways of ibuprofen metabolism. Both hydroxymetabolites and carboxyl-ibuprofen are inactive. [ 63 ] Ibuprofen is practically insoluble in water, but very soluble in most organic solvents such as ethanol (66.18 g/100 mL at 40 °C for 90% EtOH), methanol , acetone and dichloromethane . [ 67 ] The original synthesis of ibuprofen by the Boots Group started with the compound isobutylbenzene . The synthesis took six steps. First, isobutylbenzene undergoes Friedel-Crafts acylation with acetic anhydride , yielding p -isobutylphenyl methyl ketone . Then, through a Darzens reaction with ethyl chloroacetate , an α,β-epoxyester is obtained. In an acidic environment, this undergoes decarboxylation and hydrolysis , yielding an aldehyde bearing one more carbon atom than the initial ketone. The aldehyde then reacts with hydroxylamine , yielding the corresponding oxime , which is subsequently converted into a nitrile and hydrolyzed into ibuprofen. [ 68 ] A modern, greener technique with fewer waste byproducts (23% of total product mass vs. 60% theoretical value) for the synthesis involves only three steps and was developed in the 1980s by the Celanese Chemical Company . [ 69 ] [ 70 ] The synthesis is initiated with the acylation of isobutylbenzene using the recyclable Lewis acid catalyst hydrogen fluoride .
[ 71 ] [ 72 ] The following catalytic hydrogenation of isobutylacetophenone is performed with either Raney nickel or palladium on carbon , leading into the key step, the carbonylation of 1-(4-isobutylphenyl)ethanol. This is achieved by a PdCl 2 (PPh 3 ) 2 catalyst, at around 50 bar of CO pressure, in the presence of HCl (10%). [ 73 ] The reaction presumably proceeds through the intermediacy of the styrene derivative (acidic elimination of the alcohol) and the (1-chloroethyl)benzene derivative ( Markovnikov addition of HCl to the double bond). [ 74 ] Ibuprofen, like other 2-arylpropionate derivatives such as ketoprofen , flurbiprofen and naproxen , contains a stereocenter in the α-position of the propionate moiety. The product sold in pharmacies is a racemic mixture of the S - and R -isomers. The S (dextrorotatory) isomer is the more biologically active; this isomer has been isolated and used medically (see dexibuprofen for details). [ 67 ] The isomerase enzyme, alpha-methylacyl-CoA racemase , converts ( R )-ibuprofen into the ( S )- enantiomer . [ 75 ] [ 76 ] [ 77 ] (S)-ibuprofen, the eutomer , harbors the desired therapeutic activity. The inactive (R)-enantiomer, the distomer , undergoes a unidirectional chiral inversion to yield the active (S)-enantiomer. That is, when ibuprofen is administered as a racemate, the distomer is converted in vivo into the eutomer, while the eutomer itself is unaffected. [ 78 ] [ 79 ] [ 80 ] Ibuprofen was derived from propionic acid by the research arm of Boots Group during the 1960s. [ 81 ] The name is derived from its three functional groups: isobutyl (ibu), propionic acid (pro), phenyl (fen). [ 82 ] Its discovery was the result of research during the 1950s and 1960s to find a safer alternative to aspirin . [ 11 ] [ 83 ] The molecule was discovered and synthesized by a team led by Stewart Adams , with a patent application filed in 1961. [ 11 ] Adams initially tested the drug as a treatment for his hangover . [ 84 ] In 1985, Boots's worldwide patent for ibuprofen expired and generic products were launched. [ 85 ] The medication was launched as a treatment for rheumatoid arthritis in the United Kingdom in 1969, and in the United States in 1974. Later, in 1983 and 1984, it became the first NSAID (other than aspirin) to be available over-the-counter (OTC) in these two countries. [ 11 ] [ 83 ] Boots was awarded the Queen's Award for Technical Achievement in 1985 for the development of the drug. [ 86 ] In November 2013, work on ibuprofen was recognized by the erection of a Royal Society of Chemistry blue plaque at Boots' Beeston Factory site in Nottingham, [ 87 ] which reads: In recognition of the work during the 1980s by The Boots Company PLC on the development of ibuprofen which resulted in its move from prescription-only status to over-the-counter sale, therefore expanding its use to millions of people worldwide and another at BioCity Nottingham , the site of the original laboratory, [ 87 ] which reads: In recognition of the pioneering research work, here on Pennyfoot Street, by Dr Stewart Adams and Dr John Nicholson in the Research Department of Boots which led to the discovery of ibuprofen used by millions worldwide for the relief of pain. Ibuprofen was made available by prescription in the United Kingdom in 1969 and in the United States in 1974. [ 88 ] Ibuprofen is the international nonproprietary name (INN), British Approved Name (BAN), Australian Approved Name (AAN) and United States Adopted Name (USAN).
In the United States, it has been sold under the brand names Motrin and Advil since 1974 [ 89 ] and 1984, [ 90 ] respectively. In 2009, the first injectable formulation of ibuprofen was approved in the United States, under the brand name Caldolor. [ 91 ] [ 92 ] Ibuprofen can be taken orally (by mouth) or intravenously . [ 7 ] Ibuprofen is sometimes used for the treatment of acne because of its anti-inflammatory properties, and has been sold in Japan in topical form for adult acne. [ 93 ] [ 94 ] As with other NSAIDs, ibuprofen may be useful in the treatment of severe orthostatic hypotension (low blood pressure when standing up). [ 95 ] NSAIDs are of unclear utility in the prevention and treatment of Alzheimer's disease . [ 96 ] [ 97 ] Ibuprofen has been associated with a lower risk of Parkinson's disease and may delay or prevent it. Aspirin , other NSAIDs, and paracetamol (acetaminophen) had no effect on the risk for Parkinson's. [ 98 ] In March 2011, researchers at Harvard Medical School announced that ibuprofen had a neuroprotective effect against the risk of developing Parkinson's disease . [ 99 ] [ 100 ] [ 101 ] People regularly consuming ibuprofen were reported to have a 38% lower risk of developing Parkinson's disease, but no such effect was found for other pain relievers, such as aspirin and paracetamol. Use of ibuprofen to lower the risk of Parkinson's disease in the general population would not be problem-free, given the possibility of adverse effects on the urinary and digestive systems. [ 102 ] Some dietary supplements might be dangerous to take along with ibuprofen and other NSAIDs, but as of 2016, more research needs to be conducted to be certain. These supplements include those that can prevent platelet aggregation , including ginkgo , garlic , ginger , bilberry , dong quai , feverfew , ginseng , turmeric , meadowsweet ( Filipendula ulmaria ), and willow ( Salix spp.); those that contain coumarin , including chamomile , horse chestnut , fenugreek and red clover ; and those that increase the risk of bleeding, like tamarind . [ 103 ]
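The pharmacokinetic figures cited above (peak serum concentration at 1–2 hours after oral dosing, with the majority eliminated within 24 hours) can be illustrated with a minimal one-compartment oral-dose model. The sketch below uses the standard Bateman function; the rate constants are illustrative assumptions chosen to land the peak in the cited window (roughly consistent with a ~2 h elimination half-life), not clinical parameters.

import math

ka = 2.0            # first-order absorption rate constant, 1/h (assumed)
ke = 0.35           # first-order elimination rate constant, 1/h (assumed)
dose_over_v = 10.0  # dose divided by volume of distribution, arbitrary units

def concentration(t: float) -> float:
    # One-compartment model with first-order absorption and elimination.
    return dose_over_v * ka / (ka - ke) * (math.exp(-ke * t) - math.exp(-ka * t))

t_peak = math.log(ka / ke) / (ka - ke)   # analytic time of peak concentration
print(f"peak at {t_peak:.2f} h")          # ~1.1 h, within the cited 1-2 h window
print(f"fraction of peak remaining at 24 h: "
      f"{concentration(24) / concentration(t_peak):.5f}")  # effectively zero

This is only a sketch of the general modelling technique; real ibuprofen kinetics also involve the protein binding and hepatic metabolism described in the article.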
https://en.wikipedia.org/wiki/Ibuprofen
Ibutamoren ( INN ; developmental code MK-677 , MK-0677 , LUM-201 , L-163,191 ; former tentative brand name Oratrope ) is a potent , long-acting, orally-active , selective , and non-peptide agonist of the ghrelin receptor and a growth hormone secretagogue , mimicking the growth hormone (GH)-stimulating action of the endogenous hormone ghrelin . [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] It has been shown to increase the secretion of several hormones including GH and insulin-like growth factor 1 (IGF-1) and produces sustained increases in the plasma levels of these hormones while also raising cortisol levels. [ 8 ] Ibutamoren has been shown to sustain activation of the GH–IGF-1 axis , increasing growth hormone secretion by up to 97%, [ 9 ] and to increase lean body mass with no change in total fat mass or visceral fat . It is under investigation as a potential treatment for reduced levels of these hormones, such as in children or elderly adults with growth hormone deficiency , [ 3 ] [ 10 ] [ 11 ] [ 12 ] and human studies have shown it to increase both muscle mass and bone mineral density , [ 13 ] [ 14 ] making it a promising potential therapy for the treatment of frailty in the elderly . [ 15 ] [ 16 ] As of June 2017, ibutamoren is in the preclinical stage of development for growth hormone deficiency. [ 3 ] In a small study of 14 subjects, ibutamoren dosed at 25 mg/day at bedtime was shown to increase rapid eye movement sleep by 20% and 50% in young and older subjects respectively. [ 17 ] Treatment with ibutamoren also resulted in an approximate 50% increase in slow-wave sleep in young subjects. [ 17 ] In a study of children with growth hormone deficiency , ibutamoren performed better than other growth hormone secretagogues at improving growth hormone levels. [ 18 ] An ongoing study compares ibutamoren directly to injectable hGH in terms of height velocity in this population. [ 19 ] Since ibutamoren is still an Investigational New Drug , it has not yet been approved to be marketed for consumption by humans in the United States. [ 3 ] However, it has been used experimentally by some in the bodybuilding community. The use of ibutamoren is banned in most sports. [ 20 ]
https://en.wikipedia.org/wiki/Ibutamoren
In statistical mechanics, the ice-type models or six-vertex models are a family of vertex models for crystal lattices with hydrogen bonds. The first such model was introduced by Linus Pauling in 1935 to account for the residual entropy of water ice. [ 1 ] Variants have been proposed as models of certain ferroelectric [ 2 ] and antiferroelectric [ 3 ] crystals. In 1967, Elliott H. Lieb found the exact solution to a two-dimensional ice model known as "square ice". [ 4 ] The exact solution in three dimensions is only known for a special "frozen" state. [ 5 ] An ice-type model is a lattice model defined on a lattice of coordination number 4. That is, each vertex of the lattice is connected by an edge to four "nearest neighbours". A state of the model consists of an arrow on each edge of the lattice, such that the number of arrows pointing inwards at each vertex is 2. This restriction on the arrow configurations is known as the ice rule. In graph-theoretic terms, the states are Eulerian orientations of an underlying 4-regular undirected graph. The partition function also counts the number of nowhere-zero 3-flows. [ 6 ] For two-dimensional models, the lattice is taken to be the square lattice. For more realistic models, one can use a three-dimensional lattice appropriate to the material being considered; for example, the hexagonal ice lattice is used to analyse ice. At any vertex, there are six configurations of the arrows which satisfy the ice rule (justifying the name "six-vertex model"); these configurations are conventionally numbered 1 to 6. The energy of a state is understood to be a function of the configurations at each vertex. For square lattices, one assumes that the total energy is given by $E = \epsilon_1 n_1 + \epsilon_2 n_2 + \cdots + \epsilon_6 n_6$ for some constants $\epsilon_1, \ldots, \epsilon_6$, where $n_i$ denotes the number of vertices with the $i$th configuration. The value $\epsilon_i$ is the energy associated with vertex configuration number $i$. One aims to calculate the partition function $Z$ of an ice-type model, which is given by the formula $Z = \sum \exp(-E / k_{\rm B} T)$, where the sum is taken over all states of the model, $E$ is the energy of the state, $k_{\rm B}$ is the Boltzmann constant, and $T$ is the system's temperature. Typically, one is interested in the thermodynamic limit in which the number $N$ of vertices approaches infinity. In that case, one instead evaluates the free energy per vertex, $f = -k_{\rm B} T \lim_{N \to \infty} N^{-1} \ln Z$. Equivalently, one evaluates the partition function per vertex, $W = \lim_{N \to \infty} Z^{1/N}$, in the thermodynamic limit. The values $f$ and $W$ are related by $f = -k_{\rm B} T \ln W$. Several real crystals with hydrogen bonds satisfy the ice model, including ice [ 1 ] and potassium dihydrogen phosphate KH 2 PO 4 [ 2 ] (KDP). Indeed, such crystals motivated the study of ice-type models. In ice, each oxygen atom is connected by a bond to four hydrogens, and each bond contains one hydrogen atom between the terminal oxygens. The hydrogen occupies one of two symmetrically located positions, neither of which is in the middle of the bond.
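Because the states of the ice model are exactly the ice-rule arrow configurations, the partition function on a small lattice can be enumerated by brute force. The following Python sketch is illustrative and not part of the article; the periodic (torus) boundary conditions and the lattice sizes are assumptions made for simplicity. It counts ice-rule states on an L × L torus and prints the resulting estimate of the partition function per vertex, $W = Z^{1/N}$:

```python
# Brute-force count of ice-rule states (all vertex energies zero) on an
# L x L square lattice with periodic boundary conditions.
from itertools import product

def count_ice_states(L):
    n_edges = 2 * L * L  # L*L horizontal plus L*L vertical edges
    count = 0
    for bits in product((0, 1), repeat=n_edges):
        # h[i][j] = 1 if the edge from vertex (i, j) to (i, j+1) points
        # right; v[i][j] = 1 if the edge from (i, j) to (i+1, j) points down.
        h = [bits[i * L:(i + 1) * L] for i in range(L)]
        v = [bits[L * L + i * L:L * L + (i + 1) * L] for i in range(L)]
        ok = True
        for i in range(L):
            for j in range(L):
                inward = (h[i][(j - 1) % L]      # left edge points in
                          + (1 - h[i][j])        # right edge points in
                          + v[(i - 1) % L][j]    # top edge points in
                          + (1 - v[i][j]))       # bottom edge points in
                if inward != 2:                  # the ice rule
                    ok = False
                    break
            if not ok:
                break
        count += ok
    return count

for L in (2, 3):
    Z = count_ice_states(L)  # partition function = number of states
    print(L, Z, Z ** (1.0 / (L * L)))  # estimate of W per vertex
```

Estimates on such small tori are finite-size approximations; they can be compared with the exact thermodynamic value quoted later in the article.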
Pauling argued [ 1 ] that the allowed configuration of hydrogen atoms is such that there are always exactly two hydrogens close to each oxygen, thus making the local environment imitate that of a water molecule, H 2 O . Thus, if we take the oxygen atoms as the lattice vertices and the hydrogen bonds as the lattice edges, and if we draw an arrow on a bond which points to the side of the bond on which the hydrogen atom sits, then ice satisfies the ice model. Similar reasoning applies to show that KDP also satisfies the ice model. In recent years, ice-type models have been explored as descriptions of pyrochlore spin ice [ 7 ] and artificial spin ice systems, [ 8 ] [ 9 ] in which geometrical frustration in the interactions between bistable magnetic moments ("spins") leads to "ice-rule" spin configurations being favoured. Recently such analogies have been extended to explore the circumstances under which spin-ice systems may be accurately described by the Rys F-model. [ 10 ] [ 11 ] [ 12 ] [ 13 ] On the square lattice, the energies $\epsilon_1, \ldots, \epsilon_6$ associated with vertex configurations 1–6 determine the relative probabilities of states, and thus can influence the macroscopic behaviour of the system. The following are common choices for these vertex energies. When modelling ice, one takes $\epsilon_1 = \epsilon_2 = \cdots = \epsilon_6 = 0$, as all permissible vertex configurations are understood to be equally likely. In this case, the partition function $Z$ equals the total number of valid states. This model is known as the ice model (as opposed to an ice-type model). Slater [ 2 ] argued that KDP could be represented by an ice-type model with energies $\epsilon_1 = \epsilon_2 = 0$ and $\epsilon_3 = \epsilon_4 = \epsilon_5 = \epsilon_6 > 0$. For this model (called the KDP model), the most likely state (the least-energy state) has all horizontal arrows pointing in the same direction, and likewise for all vertical arrows. Such a state is a ferroelectric state, in which all hydrogen atoms have a preference for one fixed side of their bonds. The Rys $F$ model [ 3 ] is obtained by setting $\epsilon_1 = \epsilon_2 = \epsilon_3 = \epsilon_4 > 0$ and $\epsilon_5 = \epsilon_6 = 0$. The least-energy state for this model is dominated by vertex configurations 5 and 6. For such a state, adjacent horizontal bonds necessarily have arrows in opposite directions and similarly for vertical bonds, so this state is an antiferroelectric state. If there is no ambient electric field, then the total energy of a state should remain unchanged under a charge reversal, i.e. under flipping all arrows. Thus one may assume without loss of generality that $\epsilon_1 = \epsilon_2$, $\epsilon_3 = \epsilon_4$, and $\epsilon_5 = \epsilon_6$. This assumption is known as the zero field assumption, and holds for the ice model, the KDP model, and the Rys $F$ model. The ice rule was introduced by Linus Pauling in 1935 to account for the residual entropy of ice that had been measured by William F. Giauque and J. W. Stout. [ 14 ] The residual entropy $S$ of ice is given by the formula $S = k_{\rm B} \ln Z = N k_{\rm B} \ln W$, where $k_{\rm B}$ is the Boltzmann constant, $N$ is the number of oxygen atoms in the piece of ice, which is always taken to be large (the thermodynamic limit), and $Z = W^N$ is the number of configurations of the hydrogen atoms according to Pauling's ice rule. Without the ice rule we would have $W = 4$, since the number of hydrogen atoms is $2N$ and each hydrogen has two possible locations.
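Pauling's estimate itself is a short calculation: each vertex independently admits only 6 of its $2^4 = 16$ local arrow configurations, suggesting $W \approx 4 \times (6/16) = 3/2$ and a molar residual entropy of $R \ln(3/2)$. A minimal sketch of this arithmetic follows; the comparison value of roughly 0.82 cal/(mol·K) measured by Giauque and Stout is quoted from the literature, not computed here.

```python
# Pauling's residual-entropy estimate for ice: W ~ 4 * (6/16) = 3/2,
# so S = R * ln(W) per mole of H2O.
import math

R = 8.314462618            # molar gas constant, J/(mol K)
W_pauling = 4 * (6 / 16)   # = 1.5
S = R * math.log(W_pauling)
print(S)           # ~3.371 J/(mol K)
print(S / 4.184)   # ~0.806 cal/(mol K), close to the measured ~0.82
```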
Pauling estimated that the ice rule reduces this to $W = 1.5$, a number that agrees extremely well with the Giauque–Stout measurement of $S$. It can be said that Pauling's calculation of $S$ for ice is one of the simplest, yet most accurate, applications of statistical mechanics to real substances ever made. The question that remained was whether, given the model, Pauling's calculation of $W$, which was very approximate, would be sustained by a rigorous calculation. This became a significant problem in combinatorics. Both the three-dimensional and two-dimensional models were computed numerically by John F. Nagle in 1966, [ 15 ] who found that $W = 1.50685 \pm 0.00015$ in three dimensions and $W = 1.540 \pm 0.001$ in two dimensions. Both are remarkably close to Pauling's rough calculation, 1.5. In 1967, Lieb found the exact solution of three two-dimensional ice-type models: the ice model, [ 4 ] the Rys $F$ model, [ 16 ] and the KDP model. [ 17 ] The solution for the ice model gave the exact value of $W$ in two dimensions as $W_{2D} = (4/3)^{3/2} \approx 1.5396007$, which is known as Lieb's square ice constant. Later in 1967, Bill Sutherland generalised Lieb's solution of the three specific ice-type models to a general exact solution for square-lattice ice-type models satisfying the zero field assumption. [ 18 ] Still later in 1967, C. P. Yang [ 19 ] generalised Sutherland's solution to an exact solution for square-lattice ice-type models in a horizontal electric field. In 1969, John Nagle derived the exact solution for a three-dimensional version of the KDP model, for a specific range of temperatures. [ 5 ] For such temperatures, the model is "frozen" in the sense that (in the thermodynamic limit) the energy per vertex and entropy per vertex are both zero. This is the only known exact solution for a three-dimensional ice-type model. The eight-vertex model, which has also been exactly solved, is a generalisation of the (square-lattice) six-vertex model: to recover the six-vertex model from the eight-vertex model, set the energies for vertex configurations 7 and 8 to infinity. Six-vertex models have been solved in some cases for which the eight-vertex model has not; for example, Nagle's solution for the three-dimensional KDP model [ 5 ] and Yang's solution of the six-vertex model in a horizontal field. [ 19 ] The ice model provides an important 'counterexample' in statistical mechanics: the bulk free energy in the thermodynamic limit depends on boundary conditions. [ 20 ] The model has been analytically solved for periodic, anti-periodic, ferromagnetic and domain wall boundary conditions. The six-vertex model with domain wall boundary conditions on a square lattice has specific significance in combinatorics: it helps to enumerate alternating sign matrices. In this case the partition function can be represented as a determinant of a matrix (whose dimension is equal to the size of the lattice), but in other cases the enumeration of $W$ does not come out in such a simple closed form. Clearly, the largest $W$ is given by free boundary conditions (no constraint at all on the configurations on the boundary), but the same $W$ occurs, in the thermodynamic limit, for periodic boundary conditions, [ 21 ] as used originally to derive $W_{2D}$.
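Lieb's square ice constant can be checked in one line against Nagle's earlier two-dimensional numerical estimate:

```python
# Lieb's exact square ice constant versus Nagle's numerical estimate.
W_lieb = (4 / 3) ** 1.5
print(W_lieb)                         # 1.5396007...
print(abs(W_lieb - 1.540) <= 0.001)   # True: within Nagle's 1.540 +/- 0.001
```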
The number of states of an ice-type model on the internal edges of a finite simply connected union of squares of a lattice is equal to one third of the number of ways to 3-color the squares, with no two adjacent squares having the same color. This correspondence between states and colorings is due to Andrew Lenard and is given as follows. If a square has color i = 0, 1, or 2, then the arrow on the edge to an adjacent square goes left or right (according to an observer in the square) depending on whether the color in the adjacent square is i+1 or i−1 mod 3. There are 3 possible ways to color a fixed initial square, and once this initial color is chosen, this gives a 1:1 correspondence between colorings and arrangements of arrows satisfying the ice-type condition.
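Lenard's correspondence can be verified by enumeration in the smallest case, a 2 × 2 block of squares whose single internal lattice vertex is the centre. The sketch below is an illustration, not from the article: it counts the proper 3-colorings of the four squares and the ice-rule orientations of the four internal edges, confirming that the colorings are exactly three times as numerous.

```python
# Smallest check of Lenard's 3-coloring correspondence: a 2 x 2 block
# of squares (labelled 0 1 / 2 3) has one internal vertex, the centre.
from itertools import product

# Proper 3-colorings: horizontally or vertically adjacent squares must
# differ; the four squares form the 4-cycle 0-1-3-2-0.
adjacent = [(0, 1), (1, 3), (3, 2), (2, 0)]
colorings = sum(
    all(c[a] != c[b] for a, b in adjacent)
    for c in product(range(3), repeat=4)
)

# Ice-rule states on the 4 internal edges: exactly 2 of the 4 arrows
# point into the central vertex.
ice_states = sum(sum(bits) == 2 for bits in product((0, 1), repeat=4))

print(colorings, ice_states, colorings == 3 * ice_states)  # 18 6 True
```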
https://en.wikipedia.org/wiki/Ice-type_model
Ice Memory is an international initiative which aims to constitute the first world library of archived glacier ice, preserving this invaluable scientific heritage for generations to come, when future techniques will be able to obtain even more data from these samples. [ 1 ] The Ice Memory project started in 2015 with the meeting of Jérôme Chappellaz (CNRS - EPFL) and Patrick Ginot (IRD (IGE/UGA-CNRS-IRD-G-INP)) from France and Carlo Barbante (CNR/Ca’Foscari Univ. of Venice) from Italy, with the aim of conducting drilling expeditions worldwide and safeguarding the data present in the ice [ 2 ] - the memory of the ice - in a sanctuary in Antarctica. According to the UNESCO and IUCN report " World Heritage glaciers: sentinels of climate change ", [ 3 ] about 30% of glaciers recognized as World Heritage Sites will disappear by 2050, and 50% by 2100, without a drastic and immediate reduction in greenhouse gases; the Ice Memory initiative has accordingly been described as urgent and meaningful for humanity's wellbeing, and it was acknowledged by UNESCO in 2017. [ 4 ] Earlier glaciology research, conducted notably by Claude Lorius (glaciologist and first supporter of Ice Memory), Dominique Raynaud and Jean Jouzel, aimed to establish the link between atmospheric concentrations of greenhouse gases and climate change by studying ice cores. In the coming decades, it is expected that researchers will have new ideas and techniques to develop those scientific results. For instance, they may be able to isolate other information contained in the ice of which we are not aware today. This scientific information trapped in the ice — synthesised and highlighted by the Intergovernmental Panel on Climate Change — is a useful element in the crucial decisions of how to shape international environmental and climate policy. Since its beginnings in 2015, the Ice Memory project has conducted nine drilling expeditions in France, Italy, Switzerland, Bolivia, Russia and Norway (Svalbard). By trapping the different components of the atmosphere, ice represents an invaluable source of information for tracing our environmental past, for providing an account of past climate change, and especially for understanding our future. Variations in temperature, atmospheric concentrations of greenhouse gases, natural aerosol emissions, and pollutants produced by humans can all be read from the ice. The science of ice cores can study the dozens of chemical compounds that are trapped in the ice: gases, acids, heavy metals, radioactivity, and water isotopes form the memory of the climates and environments of the past. The objective of the Ice Memory Foundation is to sample 20 glaciers in 20 years so that future generations of scientists will have access to undamaged high-quality ice cores to pursue their research and have data to understand the Earth's climate. As of 2025, the Ice Memory Foundation stores glacier samples from Europe, Bolivia and Russia at sites in Europe, while waiting for Antarctic storage to become available. Co-founder of the Ice Memory Foundation, Jérôme Chappellaz, says that teams will collect ice from other sites — including the Rocky Mountains (Canada/USA), the Himalayan plateau (Tajikistan, Pakistan, China), the Andean plateau (Peru, Argentina) and Heard Island (Australia) — as soon as possible, given the accelerating rate of melting. [ 9 ] The Ice Memory heritage ice cores will be safeguarded for centuries in Antarctica. A dedicated sanctuary will be built at the French-Italian Concordia station, an international station on the Antarctic Plateau that allows natural storage at −50 °C.
First tests of the cave have been jointly managed by IPEV and PNRA. The first cave should be available for the first Ice Memory cores in 2025. Located close to Concordia Station, the storage site will cover a surface area equivalent to approximately twenty 20-foot containers, or approximately 300 m². Despite the added complexity of transporting the ice cores to Antarctica, this choice will allow long-term preservation of the samples using natural storage, with no energy consumption required for refrigeration, thereby protecting the precious samples from any risk of a disrupted cold chain (technical problems, economic crisis, conflict, acts of terrorism, etc.). Difficult access to the samples, combined with restrictive Antarctic logistics, will prevent over-use of the cores. Finally, storage in a polar region governed by the Antarctic Treaty, signed by the world's major nations, avoids territorial claims, which are frozen under the treaty. The Ice Memory Foundation was officially created by seven major French, Italian, and Swiss scientific institutions in 2021: the CNRS, the IRD, Université Grenoble Alpes, and the French Polar Institute (IPEV) in France; the Italian National Research Council (CNR) and Ca’ Foscari University of Venice in Italy; and the Paul Scherrer Institute (PSI) in Switzerland. It is sheltered by the Université Grenoble Alpes Foundation. Located in France at Université Grenoble Alpes, it aims to collect, save and manage ice cores from selected glaciers around the world that are currently melting, preserving the information they yield for decades and centuries to come. The Foundation has been directed by Anne Catherine Olhmann since 2015. The Honorary President of the Ice Memory Foundation is His Serene Highness Prince Albert II of Monaco. The Foundation's governance is international, with members from France, Italy, Switzerland, China, and the United States, including two former Intergovernmental Panel on Climate Change (IPCC) Vice-Presidents, Qin Dahe and Jean Jouzel. Long-term governance over the coming centuries, ensuring the preservation and proper use of this heritage of humanity, is being investigated in cooperation with international institutions, notably UNESCO and the Antarctic Treaty System (ATCM). In 2023, at the One Planet Polar Summit in Paris, [ 10 ] the Ice Memory Law and Governance Chair was launched to establish proposals for filling existing legal gaps and to propose a legal framework for the development of the Ice Memory heritage.
https://en.wikipedia.org/wiki/Ice_Memory
Ice algae are any of the various types of algal communities found in annual and multi-year sea ice, terrestrial lake ice, or glacier ice. On sea ice in the polar oceans, ice algae communities play an important role in primary production. [ 1 ] The timing of blooms of the algae is especially important for supporting higher trophic levels at times of the year when light is low and ice cover still exists. Sea ice algal communities are mostly concentrated in the bottom layer of the ice, but can also occur in brine channels within the ice, in melt ponds, and on the surface. Because terrestrial ice algae occur in freshwater systems, the species composition differs greatly from that of sea ice algae. In particular, terrestrial glacier ice algae communities are significant in that they change the color of glaciers and ice sheets, impacting the reflectivity of the ice itself. Microbial life in sea ice is extremely diverse, [ 2 ] [ 3 ] [ 4 ] and includes abundant algae, bacteria and protozoa. [ 5 ] [ 6 ] Algae in particular dominate the sympagic environment, with estimates of more than 1000 unicellular eukaryotes found to associate with sea ice in the Arctic. [ 7 ] [ 4 ] [ 3 ] [ 2 ] Species composition and diversity vary based on location, ice type, and irradiance. In general, pennate diatoms such as Nitzschia frigida [ 8 ] [ 9 ] (in the Arctic) [ 10 ] and Fragilariopsis cylindrus (in the Antarctic) [ 11 ] are abundant. Melosira arctica, which forms filaments up to a meter long attached to the bottom of the ice, is also widespread in the Arctic and is an important food source for marine species. [ 11 ] While sea ice algae communities are found throughout the column of sea ice, abundance and community composition depend on the time of year. [ 12 ] There are many microhabitats available to algae on and within sea ice, and different algal groups have different preferences. For example, in late winter/early spring, motile diatoms like N. frigida have been found to dominate the uppermost layers of the ice, as far as the brine channels reach, and their abundance is greater in multi-year ice (MYI) than in first-year ice (FYI). Additionally, dinoflagellates have been found to dominate in the early austral spring in Antarctic sea ice. [ 5 ] Sea ice algal communities can also thrive at the surface of the ice, in surface melt ponds, and in layers where rafting has occurred. In melt ponds, dominant algal types can vary with pond salinity, with higher concentrations of diatoms being found in melt ponds with higher salinity. [ 13 ] Because of their adaptation to low-light conditions, the presence of ice algae (in particular, their vertical position in the ice pack) is limited primarily by nutrient availability. The highest concentrations are found at the base of the ice because the porosity of that ice enables nutrient infiltration from seawater. [ 14 ] To survive in the harsh sea ice environment, organisms must be able to endure extreme variations in salinity, temperature, and solar radiation. Algae living in brine channels can secrete osmolytes, such as dimethylsulfoniopropionate (DMSP), which allow them to survive the high salinities in the channels after ice formation in the winter, as well as the low salinities when relatively fresh meltwater flushes the channels in the spring and summer. Some sea ice algae species secrete ice-binding proteins (IBP) as a gelatinous extracellular polymeric substance (EPS) to protect cell membranes from damage from ice crystal growth and freeze–thaw cycles.
[ 15 ] EPS alters the microstructure of the ice and creates further habitat for future blooms. Ice algae survive in environments with little to no light for several months of the year, such as within ice brine pockets. Such algae have specialized adaptations to be able to maintain growth and reproduction during periods of darkness. Some sea ice diatoms have been found to utilize mixotrophy when light levels are low. For example, some Antarctic diatoms downregulate glycolysis in environments with low to no irradiance, while upregulating other mitochondrial metabolic pathways, including the Entner–Doudoroff pathway, which supplies the TCA cycle (an important component of cellular respiration) with pyruvate when pyruvate cannot be obtained via photosynthesis. [ 16 ] Surface-dwelling algae produce special pigments to prevent damage from harsh ultraviolet radiation. Higher concentrations of xanthophyll pigments act as a sunscreen that protects ice algae from photodamage when they are exposed to damaging levels of ultraviolet radiation upon transition from ice to the water column during the spring. [ 3 ] Algae under thick ice have been reported to show some of the most extreme low-light adaptations ever observed. They are able to perform photosynthesis in an environment with just 0.02% of the light at the surface. [ 17 ] Extreme efficiency in light utilization allows sea ice algae to build up biomass rapidly when light conditions improve at the onset of spring. [ 18 ] Sea ice algae play a critical role in primary production and serve as part of the base of the polar food web by converting carbon dioxide and inorganic nutrients to oxygen and organic matter through photosynthesis in the upper ocean of both the Arctic and Antarctic. Within the Arctic, estimates of the contribution of sea ice algae to total primary production range from 3–25%, and up to 50–57% in high Arctic regions. [ 19 ] [ 20 ] Sea ice algae accumulate biomass rapidly, often at the base of sea ice, and grow to form algal mats that are consumed by crustacean grazers such as amphipods, krill and copepods. Ultimately, these organisms are eaten by fish, whales, penguins, and dolphins. [ 18 ] When sea ice algal communities detach from the sea ice, they are consumed by pelagic grazers, such as zooplankton, as they sink through the water column, and by benthic invertebrates as they settle on the seafloor. [ 3 ] Sea ice algae as food are rich in polyunsaturated and other essential fatty acids, and are the exclusive producers of certain essential omega-3 fatty acids that are important for copepod egg production, egg hatching, and zooplankton growth and function. [ 3 ] [ 21 ] The timing of sea ice algae blooms has a significant impact on the entire ecosystem. Initiation of the bloom is primarily controlled by the return of the sun in the spring (i.e. the solar angle). Because of this, ice algae blooms usually occur before the blooms of pelagic phytoplankton, which require higher light levels and warmer water. [ 21 ] Early in the season, prior to the ice melt, sea ice algae constitute an important food source for higher trophic levels. [ 21 ] However, the total percentage that sea ice algae contribute to the primary production of a given ecosystem depends strongly on the extent of ice cover. The thickness of snow on the sea ice also affects the timing and size of the ice algae bloom by altering light transmission.
[ 22 ] This sensitivity to ice and snow cover has the potential to cause a mismatch between predators and their food source, sea ice algae, within the ecosystem. This so-called match/mismatch hypothesis has been applied to a variety of systems. [ 23 ] Examples have been seen in the relationship between zooplankton species, which rely on sea ice algae and phytoplankton for food, and juvenile walleye pollock in the Bering Sea. [ 24 ] There are several ways in which sea ice algal blooms are thought to start their annual cycle, and hypotheses about these vary depending on water column depth, sea ice age, and taxonomic group. Where sea ice overlays deep ocean, it is proposed that cells trapped in multiyear ice brine pockets are reconnected to the water column below and quickly colonize nearby ice of all ages. This is known as the multiyear sea ice repository hypothesis. [ 12 ] This seeding source has been demonstrated in diatoms, which dominate sympagic blooms. Other groups, such as the dinoflagellates, which also bloom in the spring/summer, have been shown to maintain low cell numbers in the water column itself, and do not primarily overwinter within the ice. [ 25 ] Where sea ice covers ocean that is somewhat shallower, resuspension of cells from the sediment may occur. [ 26 ] Climate change and warming of Arctic and Antarctic regions have the potential to greatly alter ecosystem functioning. Decreasing ice cover in polar regions is expected to lessen the proportion of annual primary production contributed by sea ice algae. [ 27 ] [ 28 ] Thinning ice allows for greater production early in the season, but early ice melting shortens the overall growing season of the sea ice algae. This melting also contributes to stratification of the water column that alters the availability of nutrients for algal growth by decreasing the depth of the surface mixed layer and inhibiting the upwelling of nutrients from deep waters. This is expected to cause an overall shift towards pelagic phytoplankton production. [ 28 ] Changes in multiyear ice volume [ 29 ] will also have an impact on ecosystem function in terms of bloom seeding source adjustment. Reduction in MYI, a temporal refugium for diatoms in particular, will likely alter sympagic community composition, resulting in bloom initialization that derives from species that overwinter in the water column or sediments instead. [ 25 ] Because sea ice algae are often the base of the food web, these alterations have implications for species of higher trophic levels. [ 19 ] The reproduction and migration cycles of many polar primary consumers are timed with the bloom of sea ice algae, meaning that a change in the timing or location of primary production could shift the distribution of prey populations necessary for keystone species. Production timing may also be altered by the melting through of surface melt ponds to the seawater below, which can alter sea ice algal habitat late in the growing season in such a way as to impact grazing communities as they approach winter. [ 30 ] The production of DMSP by sea ice algae also plays an important role in the carbon cycle. DMSP is oxidized by other plankton to dimethylsulfide (DMS), a compound which is linked to cloud formation. Because clouds impact precipitation and the amount of solar radiation reflected back to space (albedo), this process could create a positive feedback loop.
[ 31 ] Cloud cover would increase the insolation reflected back to space by the atmosphere, potentially helping to cool the planet and support more polar habitats for sea ice algae. Research from 1987 suggested that a doubling of cloud-condensation nuclei, of which DMS is one type, would be required to counteract warming due to increased atmospheric CO 2 concentrations. [ 32 ] Sea ice plays a major role in the global climate. [ 33 ] Satellite observations of sea ice extent date back only to the late 1970s, and longer-term observational records are sporadic and of uncertain reliability. [ 34 ] While terrestrial ice paleoclimatology can be measured directly through ice cores, historical models of sea ice must rely on proxies. Organisms dwelling on the sea ice eventually detach from the ice and fall through the water column, particularly when the sea ice melts. A portion of the material that reaches the seafloor is buried before it is consumed and is thus preserved in the sedimentary record. There are a number of organisms whose value as proxies for the presence of sea ice has been investigated, including particular species of diatoms, dinoflagellate cysts, ostracods, and foraminifers. Variation in carbon and oxygen isotopes in a sediment core can also be used to make inferences about sea ice extent. Each proxy has advantages and disadvantages; for example, some diatom species that are unique to sea ice are very abundant in the sediment record, but preservation efficiency can vary. [ 35 ] Lake snow and ice algae: Algae can grow within and attached to lake ice as well, especially below clear, black ice. [ 36 ] Within the ice, algae often grow in water-filled air pockets found in the slush layer formed at the ice-snow interface. [ 37 ] For instance, the diatom species Aulacoseira baicalensis, endemic to Lake Baikal, can reproduce intensively in water-filled pockets within the ice as well as attached to the ice sheet. [ 36 ] Alpine freshwater ice and snow, which can last over half a year, have been found to support an overall higher microbial biomass and algal activity than the lake water itself, as well as specific predatory species of ciliates found only in the slush layer of the ice-snow interface. [ 38 ] Algae living on the snowpack of ice-covered lakes may be especially rich in essential polyunsaturated fatty acids. [ 39 ] Snow and glacier ice algae: Algae also thrive on snow fields, glaciers and ice sheets. The species found in these habitats are distinct from those associated with sea ice because the system is freshwater and the algae are pigmented. Even within these habitats, there is a wide diversity of habitat types and algal assemblages that colonize snow and ice surfaces during melt. For example, cryosestic communities are specifically found on the surface of glaciers where the snow periodically melts during the day. [ 40 ] Research has been done on glaciers and ice sheets across the world and several species have been identified. However, although there seems to be a wide array of species, they have not been found in equal amounts. The most abundant species identified on different glaciers are the glacier ice alga Ancylonema nordenskioldii [ 41 ] [ 42 ] [ 43 ] [ 44 ] and the snow alga Chlamydomonas nivalis. [ 44 ] [ 45 ] [ 46 ] Table 1. Algae species composition across studies on glaciers and ice sheets. The rate of glacier melt depends on the surface albedo.
Recent research has shown that the growth of snow and glacier ice algae darkens the local surface, decreasing the albedo and thus increasing the melt rate of these surfaces. [ 46 ] [ 45 ] [ 47 ] Melting glaciers and ice sheets have been directly linked to sea level rise. [ 48 ] The second largest ice sheet is the Greenland Ice Sheet, which has been retreating at alarming rates. Sea level rise will lead to an increase in both the frequency and intensity of storm events. [ 48 ] On enduring ice sheets and snow pack, terrestrial ice algae often color the ice due to accessory pigments, popularly known as " watermelon snow ". The dark pigments within the structure of algae increase sunlight absorption, leading to an increase in the melting rate. [ 41 ] Algal blooms have been shown to appear on glaciers and ice sheets once the snow has begun to melt, which occurs when the air temperature is above the freezing point for a few days. [ 45 ] The abundance of algae changes with the seasons and also spatially on glaciers. Their abundance is highest during the melting season of glaciers, which occurs in the summer months. [ 41 ] Climate change is affecting both the start of the melting season and its length, which will lead to an increase in the amount of algal growth. As the ice and snow begin to melt, the area the ice covers decreases, which means a higher portion of land is exposed. The land underneath the ice has a higher rate of solar absorption because it is darker and less reflective. Melting snow also has a lower albedo than dry snow or ice because of its optical properties, so as snow begins to melt the albedo decreases, which results in more snow melting, and the loop continues. This feedback loop is referred to as the ice–albedo feedback. It can have drastic effects on the amount of snow melting each season. Algae play a role in this feedback loop by decreasing the albedo of the snow and ice. This growth of algae has been studied, but its exact effect in decreasing albedo is still unknown. The Black and Bloom project is conducting research to determine the amount that algae contribute to the darkening of the Greenland Ice Sheet, as well as algae's impact on the melting rates of the ice sheets. [ 49 ] It is important to understand the extent to which algae are changing the albedo on glaciers and ice sheets. Once this is known, it should be incorporated into global climate models and then used to predict sea level rise.
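The ice–albedo feedback described above can be illustrated with a toy iteration. All numbers in the sketch below are invented for illustration and are not measurements from the cited studies; it merely shows how a small algal darkening of the ice albedo compounds over repeated melt steps.

```python
# Toy ice-albedo feedback: absorbed sunlight melts ice, exposing darker
# ground, which raises absorption in the next step. Parameters are
# illustrative assumptions only.
def run_feedback(algae_darkening=0.0, steps=10):
    solar_in = 100.0                     # incoming flux, arbitrary units
    ice_fraction = 1.0                   # fraction of surface still icy
    albedo_ice = 0.6 - algae_darkening   # algae darken the ice surface
    albedo_ground = 0.2                  # exposed ground is darker
    for _ in range(steps):
        albedo = ice_fraction * albedo_ice + (1 - ice_fraction) * albedo_ground
        absorbed = solar_in * (1 - albedo)
        ice_fraction = max(0.0, ice_fraction - 0.002 * absorbed)
    return ice_fraction

print(run_feedback(0.0))   # cleaner ice retains more cover
print(run_feedback(0.1))   # algal darkening accelerates the loss
```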
https://en.wikipedia.org/wiki/Ice_algae
An ice detector is an instrument that detects the presence of ice on a surface. Ice detectors are used to identify the presence of icing conditions and are commonly used in aviation, [ 1 ] unmanned aircraft , [ 2 ] marine vessels, [ 3 ] wind energy , [ 4 ] and power lines. [ 5 ] Ice detection can be done with direct and indirect methods. Direct methods identify the presence of atmospheric icing conditions, i.e. the presence of supercooled water droplets. Indirect methods infer the presence of icing conditions by either detecting ice accretions [ 6 ] on a surface, or by changed vehicle performance behavior. [ 7 ]
https://en.wikipedia.org/wiki/Ice_detector
An ice giant is a giant planet composed mainly of elements heavier than hydrogen and helium , such as oxygen , carbon , nitrogen , and sulfur . There are two ice giants in the Solar System : Uranus and Neptune . In astrophysics and planetary science the term "ice" refers to volatile chemical compounds with freezing points above about 100 K , such as water , ammonia , or methane , with freezing points of 273 K (0 °C), 195 K (−78 °C), and 91 K (−182 °C), respectively (see Volatiles ). In the 1990s, it was determined that Uranus and Neptune were a distinct class of giant planet, separate from the other giant planets, Jupiter and Saturn , which are gas giants predominantly composed of hydrogen and helium. [ 1 ] Neptune and Uranus are now referred to as ice giants . Lacking well-defined solid surfaces, they are primarily composed of gases and liquids. Their constituent compounds were solids when they were primarily incorporated into the planets during their formation, either directly in the form of ice or trapped in water ice. Today, very little of the water in Uranus and Neptune remains in the form of ice. Instead, water primarily exists as supercritical fluid at the temperatures and pressures within them. [ 2 ] Uranus and Neptune consist of only about 20% hydrogen and helium by mass, compared to the Solar System's gas giants , Jupiter and Saturn, which are more than 90% hydrogen and helium by mass. In 1952, science fiction writer James Blish coined the term gas giant [ 3 ] and it was used to refer to the large non- terrestrial planets of the Solar System . However, since the late 1940s [ 4 ] the compositions of Uranus and Neptune have been understood to be significantly different from those of Jupiter and Saturn . They are primarily composed of elements heavier than hydrogen and helium , forming a separate type of giant planet altogether. Because during their formation Uranus and Neptune incorporated their material as either ice or gas trapped in water ice, the term ice giant came into use. [ 2 ] [ 4 ] In the early 1970s, the terminology became popular in the science fiction community, e.g., Bova (1971), [ 5 ] but the earliest scientific usage of the terminology was likely by Dunne & Burgess (1978) [ 6 ] in a NASA report. [ 7 ] Modelling the formation of terrestrial planets and gas giants is relatively straightforward and uncontroversial . The terrestrial planets of the Solar System are widely understood to have formed through collisional accumulation of planetesimals within the protoplanetary disk . The gas giants — Jupiter , Saturn , and their extrasolar counterpart planets—are thought to have formed solid cores of around 10 Earth masses ( M E ) through the same process, while accreting gaseous envelopes from the surrounding solar nebula over the course of a few to several million years ( Ma ), [ 8 ] [ 9 ] although alternative models of core formation based on pebble accretion have recently been proposed. [ 10 ] Some extrasolar giant planets may instead have formed via gravitational disk instabilities. [ 9 ] [ 11 ] The formation of Uranus and Neptune through a similar process of core accretion is far more problematic. The escape velocity for the small protoplanets about 20 astronomical units (AU) from the center of the Solar System would have been comparable to their relative velocities . Such bodies crossing the orbits of Saturn or Jupiter would have been liable to be sent on hyperbolic trajectories ejecting them from the system. 
Such bodies, being swept up by the gas giants, would also have been likely to be accreted into the larger planets or thrown into cometary orbits. [ 11 ] Despite the trouble modelling their formation, many ice giant candidates have been observed orbiting other stars since 2004. This indicates that they may be common in the Milky Way. [ 2 ] Considering the orbital challenges protoplanets 20 AU or more from the centre of the Solar System would experience, a simple solution is that the ice giants formed between the orbits of Jupiter and Saturn before being gravitationally scattered outward to their now more distant orbits. [ 11 ] Gravitational instability of the protoplanetary disk could also produce several gas giant protoplanets out to distances of up to 30 AU. Regions of slightly higher density in the disk could lead to the formation of clumps that eventually collapse to planetary densities. [ 11 ] A disk with even marginal gravitational instability could yield protoplanets between 10 and 30 AU in roughly one thousand years (ka). This is much shorter than the 100,000 to 1,000,000 years required to produce protoplanets through core accretion of the cloud and could make it viable in even the shortest-lived disks, which exist for only a few million years. [ 11 ] A problem with this model is determining what kept the disk stable before the instability. There are several possible mechanisms allowing gravitational instability to occur during disk evolution. A close encounter with another protostar could provide a gravitational kick to an otherwise stable disk. A disk evolving magnetically is likely to have magnetic dead zones, due to varying degrees of ionization, where mass moved by magnetic forces could pile up, eventually becoming marginally gravitationally unstable. A protoplanetary disk may simply accrete matter slowly, causing relatively short periods of marginal gravitational instability and bursts of mass collection, followed by periods where the surface density drops below what is required to sustain the instability. [ 11 ] Observations of photoevaporation of protoplanetary disks in the Orion Trapezium Cluster by extreme ultraviolet (EUV) radiation emitted by θ 1 Orionis C suggest another possible mechanism for the formation of ice giants. Multiple-Jupiter-mass gas-giant protoplanets could have rapidly formed due to disk instability before having most of their hydrogen envelopes stripped off by intense EUV radiation from a nearby massive star. [ 11 ] In the Carina Nebula, EUV fluxes are approximately 100 times higher than in the Orion Nebula's Trapezium Cluster. Protoplanetary disks are present in both nebulae. Higher EUV fluxes make this an even more likely possibility for ice-giant formation. The stronger EUV would increase the removal of the gas envelopes from protoplanets before they could collapse sufficiently to resist further loss. [ 11 ] The ice giants represent one of two fundamentally different categories of giant planets present in the Solar System, the other group being the more-familiar gas giants, which are composed of more than 90% hydrogen and helium (by mass). The hydrogen in gas giants is thought to extend all the way down to their rocky cores, where molecular hydrogen transitions to metallic hydrogen under extreme pressures of hundreds of gigapascals (GPa). [ 2 ] The ice giants are primarily composed of heavier elements. Based on the abundance of elements in the universe, oxygen, carbon, nitrogen, and sulfur are most likely.
Although the ice giants also have hydrogen envelopes, these are much smaller, accounting for less than 20% of their mass. Their hydrogen also never reaches the depths necessary for the pressure to create metallic hydrogen. [ 2 ] These envelopes nevertheless limit observation of the ice giants' interiors, and thereby the information on their composition and evolution. [ 2 ] Although Uranus and Neptune are referred to as ice giant planets, it is thought that there is a supercritical water-ammonia ocean beneath their clouds, which accounts for about two-thirds of their total mass. [ 12 ] [ 13 ] The gaseous outer layers of the ice giants have several similarities to those of the gas giants. These include long-lived, high-speed equatorial winds, polar vortices, large-scale circulation patterns, and complex chemical processes driven by ultraviolet radiation from above and mixing with the lower atmosphere. [ 2 ] Studying the ice giants' atmospheric patterns also gives insights into atmospheric physics. Their compositions promote different chemical processes, and they receive far less sunlight in their distant orbits than any other planets in the Solar System (increasing the relevance of internal heating to weather patterns). [ 2 ] The largest visible feature on Neptune is the recurring Great Dark Spot. It forms and dissipates every few years, as opposed to the similarly sized Great Red Spot of Jupiter, which has persisted for centuries. Of all known giant planets in the Solar System, Neptune emits the most internal heat per unit of absorbed sunlight, a ratio of approximately 2.6. Saturn, the next-highest emitter, only has a ratio of about 1.8. Uranus emits the least heat, one-tenth as much as Neptune. It is suspected that this may be related to its extreme 98° axial tilt. This causes its seasonal patterns to be very different from those of any other planet in the Solar System. [ 2 ] There are still no complete models explaining the atmospheric features observed in the ice giants. [ 2 ] Understanding these features will help elucidate how the atmospheres of giant planets in general function. [ 2 ] Consequently, such insights could help scientists better predict the atmospheric structure and behaviour of giant exoplanets discovered to be very close to their host stars (pegasean planets) and exoplanets with masses and radii between those of the giant and terrestrial planets found in the Solar System. [ 2 ] Because of their large sizes and low thermal conductivities, planetary interior pressures range up to several hundred gigapascals (GPa) and temperatures reach several thousand kelvins (K). [ 14 ] In March 2012, it was found that the compressibility of water used in ice-giant models could be off by one-third. [ 15 ] This value is important for modeling ice giants, and has a ripple effect on understanding them. [ 15 ] The magnetic fields of Uranus and Neptune are both unusually displaced and tilted. [ 16 ] Their field strengths are intermediate between those of the gas giants and those of the terrestrial planets, being 50 and 25 times that of Earth's, respectively. The equatorial magnetic field strengths of Uranus and Neptune are respectively 75 percent and 45 percent of Earth's 0.305 gauss. [ 16 ] Their magnetic fields are believed to originate in an ionized convecting fluid-ice mantle.
[ 16 ]
https://en.wikipedia.org/wiki/Ice_giant
An ice storm, also known as a glaze event or a silver storm, is a type of winter storm characterized by freezing rain. [ 1 ] The U.S. National Weather Service defines an ice storm as a storm which results in the accumulation of at least 0.25-inch (6.4 mm) of ice on exposed surfaces. [ 2 ] [ 3 ] They are generally not violent storms but instead are commonly perceived as gentle rains occurring at temperatures just below freezing. The formation of ice begins with a layer of above-freezing air above a layer of sub-freezing temperatures closer to the surface. Frozen precipitation melts to rain while falling into the warm air layer, and then begins to refreeze in the cold layer below. If the precipitation refreezes while still in the air, it will land on the ground as sleet. Alternatively, the liquid droplets can continue to fall without freezing, passing through the cold air just above the surface. This thin layer of air then cools the rain to a temperature below freezing (0 °C or 32 °F). However, the drops themselves do not freeze, a phenomenon called supercooling (or forming " supercooled drops "). When the supercooled drops strike ground or anything else below 0 °C (32 °F) (e.g. power lines, tree branches, aircraft), a layer of ice accumulates as the cold water drips off, forming a slowly thickening film of ice, hence freezing rain. [ 4 ] [ 5 ] [ 6 ] While meteorologists can predict when and where an ice storm will occur, some storms still occur with little or no warning. [ 5 ] In the United States, most ice storms occur in the northeastern region, but damaging storms have occurred farther south; an ice storm in February 1994 resulted in tremendous ice accumulation as far south as Mississippi, and caused reported damage in nine states. [ 7 ] [ 8 ] The freezing rain from an ice storm covers everything with heavy, smooth glaze ice. [ 9 ] In addition to hazardous driving or walking conditions, branches or even whole trees may break from the weight of ice. Falling branches can block roads, tear down power and telephone lines, and cause other damage. Even without falling trees and tree branches, the weight of the ice itself can easily snap power lines and break and bring down power and utility poles, and even electricity pylons with steel frames. This can leave people without power for anywhere from several days to a month. According to most meteorologists, just 0.25-inch (6.4 mm) of ice accumulation can add about 500 pounds (230 kg) of weight per line span. Damage from ice storms is easily capable of shutting down entire metropolitan areas. Additionally, the loss of power during ice storms has indirectly caused numerous illnesses and deaths due to unintentional carbon monoxide (CO) poisoning. At lower levels, CO poisoning causes symptoms such as nausea, dizziness, fatigue, and headache, but high levels can cause unconsciousness, heart failure, and death. [ 10 ] The relatively high incidence of CO poisoning during ice storms occurs due to the use of alternative methods of heating and cooking during prolonged power outages, common after severe ice storms. [ 11 ] Gas generators, charcoal and propane barbecues, and kerosene heaters contribute to CO poisoning when they operate in confined locations. [ 10 ] CO is produced when appliances burn fuel without enough oxygen present, [ 12 ] as can happen in basements and other enclosed indoor locations. Loss of electricity during ice storms can indirectly lead to hypothermia and result in death.
It can also lead to ruptured pipes due to water freezing inside the pipes.
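The per-span load figure quoted above can be approximated by treating glaze as a uniform radial sleeve of ice on a cylindrical conductor. The sketch below is illustrative only: the wire radius and span length are assumptions, and actual loads vary widely with conductor size, span, and accumulation thickness.

```python
# Rough estimate of the mass of a radial glaze-ice sleeve on a wire.
import math

ICE_DENSITY = 917.0  # kg/m^3, approximate density of glaze ice

def ice_load_kg(span_m, wire_radius_m, ice_thickness_m):
    """Mass of a uniform radial ice sleeve on a cylindrical wire."""
    annulus = math.pi * ((wire_radius_m + ice_thickness_m) ** 2
                         - wire_radius_m ** 2)  # ice cross-section, m^2
    return ICE_DENSITY * annulus * span_m

# Example: 100 m span, 10 mm wire radius, 6.4 mm (0.25 in) of glaze.
mass = ice_load_kg(100.0, 0.010, 0.0064)
print(round(mass, 1), "kg =", round(mass * 2.2046, 1), "lb")
```

Longer spans or thicker accumulations raise the result quickly, which is how loads in the hundreds of pounds per span arise.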
https://en.wikipedia.org/wiki/Ice_storm
An ichnotaxon (plural ichnotaxa ) is "a taxon based on the fossilized work of an organism", i.e. the non-human equivalent of an artifact . Ichnotaxon comes from the Ancient Greek ἴχνος ( íchnos ) meaning "track" and English taxon , itself derived from Ancient Greek τάξις ( táxis ) meaning "ordering". [ 1 ] Ichnotaxa are names used to identify and distinguish morphologically distinctive ichnofossils , more commonly known as trace fossils ( fossil records of lifeforms ' movement, rather than of the lifeforms themselves). They are assigned genus and species ranks by ichnologists , much like organisms in Linnaean taxonomy . These are known as ichnogenera and ichnospecies , respectively. "Ichnogenus" and "ichnospecies" are commonly abbreviated as "igen." and "isp.". The binomial names of ichnospecies and their genera are to be written in italics . Most researchers classify trace fossils only as far as the ichnogenus rank, based upon trace fossils that resemble each other in morphology but have subtle differences. Some authors have constructed detailed hierarchies up to ichnosuperclass, recognizing such fine detail as to identify ichnosuperorder and ichnoinfraclass, but such attempts are controversial. Due to the chaotic nature of trace fossil classification, several ichnogenera hold names normally affiliated with animal body fossils or plant fossils. For example, many ichnogenera are named with the suffix -phycus due to misidentification as algae. [ 2 ] Edward Hitchcock was the first to use the now common -ichnus suffix in 1858, with Cochlichnus . [ 2 ] Due to ichnofossils' history of being difficult to classify, there have been several attempts to enforce consistency in the naming of ichnotaxa. The first edition of the International Code of Zoological Nomenclature , published in 1961, ruled that names of taxa published after 1930 should be 'accompanied by a statement that purports to give characters differentiating the taxon'. This had the effect that names for most ichnofossil taxa published after 1930 were unavailable under the code. This restriction was removed for ichnotaxa in the third edition of the code, published in 1985. [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Ichnotaxon
In Greek mythology, ichor ( / ˈ aɪ k ər / ) is the ethereal fluid that is the blood of the gods and immortals. The Ancient Greek word ἰχώρ ( ikhṓr ) is of uncertain etymology, and has been suggested to be a foreign word, possibly from the Pre-Greek substrate. [ 1 ] Ichor originates in Greek mythology, where it is the "ethereal fluid" that is the blood of the Greek gods, sometimes said to retain the qualities of the immortals' food and drink, ambrosia and nectar. [ 2 ] Ichor is described as toxic to humans, killing them instantly if they come into contact with it. [ 3 ] [ 4 ] Great heroes and demigods occasionally attacked gods and released ichor, but gods rarely did so to each other in Homeric myth. [ citation needed ] Iliad V. 339–342: [not] Blood follow'd, but immortal ichor pure, Such as the blest inhabitants of heav'n May bleed, nectareous; for the Gods eat not Man's food, nor slake as he with sable wine Their thirst, thence bloodless and from death exempt. † † We are not to understand that the poet ascribes the immortality of the Gods to their abstinence from the drink and food of man, for most animals partake of neither, but the expression is elliptic and requires to be supplied thus – they drink not wine but nectar, eat not the food of mortals, but ambrosia; thence it is that they are bloodless and exempt from death. In Ancient Crete, tradition told of Talos, a giant man of bronze. When Cretan mythology was appropriated by the Greeks, they imagined him more like the Colossus of Rhodes. He possessed a single vein running with ichor that was stoppered by a nail in his back. Talos guarded Europa on Crete and threw boulders at intruders, until the Argonauts came after the acquisition of the Golden Fleece, and the sorceress Medea took out the nail, releasing the ichor and killing him. [ 5 ] "It [a magical herb] first appeared in a plant that sprang from the blood-like ichor of Prometheus in his torment, which the flesh-eating Eagle had dropped on the spurs of the Kaukasos." [ 6 ] [ full citation needed ] Prometheus was a Titan who made humans and stole fire from the gods and gave it to the mortals, and consequently was punished by Zeus for all eternity. Prometheus was chained to a rock for his sin, and his liver was eaten daily by an eagle. His liver would then regrow, only to be eaten again, for all eternity. Prometheus bled ichor, a golden, blood-like substance that would cause a magical herb to sprout when it touched the ground. In pathology, "ichor" is an antiquated term for a watery discharge from a wound or ulcer, with an unpleasant or fetid (offensive) smell. [ 7 ] The Greek Christian writer Clement of Alexandria deliberately confounded ichor in its medical sense as a foul-smelling watery discharge from a wound or ulcer with its mythological sense as the blood of the gods, in a polemic against the pagan Greek gods. As part of his evidence that they are merely mortal, he cites several cases in which the gods are wounded physically, and then asserts that if there are wounds, there is blood: "For the ichor of the poets is more repulsive than blood; for the putrefaction of blood is called ichor." [ 8 ]
https://en.wikipedia.org/wiki/Ichor
The Ichthyander Project was the first project involving underwater habitats in the Soviet Union, in the 1960s. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Inspired by information on experiments on underwater habitats abroad (in particular, by Jacques Cousteau's Conshelf), the members of the amateur diving club "Ichthyander" [ 6 ] in Donetsk embarked on a project of their own at a site by Tarkhankut Cape, Crimea. [ 1 ] The name is taken from the name of the protagonist of the Soviet film Amphibian Man. In August 1966, in the first, purely amateur experiment, Ichthyander-66, a person spent three days continuously underwater. After newspaper coverage, the experiment attracted the attention of authorities and scientists, and during Ichthyander-67 the habitat operated for two weeks. After Ichthyander-68 and Ichthyander-70, following unsuccessful attempts to elevate the project to a professional level with state support, it was discontinued. [ 3 ] A 1968 Soviet popular science book Homo Aquaticus writes: "It so happened that after the 1967 expedition, an order was issued to dissolve the club". Ichthyander-68 was carried out during a short-lived attempt by the members of the club to attach themselves to the Mining Science-Technical Society (Горное научно-техническое общество) in order to start research in underwater geodesy and drilling. [ 7 ] A memorial marker (a stone with a plaque and steel slabs) exists at the site. [ 1 ] This project preceded and catalyzed several other early Soviet experiments with underwater habitats, such as Sadko (autumn 1966), Chernomor and Sprut. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Ichthyander_Project
Ichthyology is the branch of zoology devoted to the study of fish, including bony fish (Osteichthyes), cartilaginous fish (Chondrichthyes), and jawless fish (Agnatha). According to FishBase, 35,800 species of fish had been described as of March 2025, with approximately 250 new species described each year. [ 1 ] [ citation needed ] The word is derived from the Ancient Greek words ἰχθύς, ikhthus, meaning "fish"; and λόγος, logos, meaning "study". [ 2 ] [ 3 ] The study of fish dates from the Upper Paleolithic Revolution (with the advent of "high culture"). The science of ichthyology was developed in several interconnecting epochs, each with various significant advancements. The study of fish has its origins in humans' desire to feed, clothe, and equip themselves with useful implements. According to Michael Barton, a prominent ichthyologist and professor at Centre College, "the earliest ichthyologists were hunters and gatherers who had learned how to obtain the most useful fish, where to obtain them in abundance, and at what times they might be the most available". Early cultures manifested these insights in abstract and identifiable artistic expressions. Informal scientific descriptions of fish are represented within the Judeo-Christian tradition. The Old Testament laws of kashrut forbade the consumption of fish without scales or appendages. [ citation needed ] Theologians and ichthyologists believe that the apostle Peter and his contemporaries harvested the fish that are today sold in modern industry along the Sea of Galilee, presently known as Lake Kinneret. These fish include cyprinids of the genera Barbus and Mirogrex, cichlids of the genus Sarotherodon, and Mugil cephalus of the family Mugilidae. Aristotle incorporated ichthyology into formal scientific study. Between 333 and 322 BC, he provided the earliest taxonomic classification of fish, accurately describing 117 species of Mediterranean fish. [ 4 ] Furthermore, Aristotle documented anatomical and behavioral differences between fish and marine mammals. After his death, some of his pupils continued his ichthyological research. Theophrastus, for example, composed a treatise on amphibious fish. The Romans, although less devoted to science, wrote extensively about fish. Pliny the Elder, a notable Roman naturalist, compiled the ichthyological works of indigenous Greeks, including verifiable and ambiguous peculiarities such as the sawfish and mermaid, respectively. Pliny's documentation was the last significant contribution to ichthyology until the European Renaissance. The writings of three 16th-century scholars, Hippolito Salviani, Pierre Belon, and Guillaume Rondelet, signify the conception of modern ichthyology. The investigations of these individuals were based upon actual research, in contrast to ancient recitations, a quality that popularized and emphasized their discoveries. Despite their prominence, Rondelet's De Piscibus Marinis is regarded as the most influential, identifying 244 species of fish. The incremental alterations in navigation and shipbuilding throughout the Renaissance marked the commencement of a new epoch in ichthyology. The Renaissance culminated with the era of exploration and colonization, and upon the cosmopolitan interest in navigation came the specialization in naturalism. Georg Marcgrave of Saxony composed the Historia Naturalis Brasiliae in 1648. This document contained a description of 100 species of fish indigenous to the Brazilian coastline.
In 1686, John Ray and Francis Willughby collaboratively published Historia Piscium, a scientific text containing 420 species of fish, 178 of these newly discovered. The fish described in this work were arranged in a provisional system of classification. That classification was further developed by Carl Linnaeus, the "father of modern taxonomy". His taxonomic approach became the systematic approach to the study of organisms, including fish. Linnaeus was a professor at the University of Uppsala and an eminent botanist; however, one of his colleagues, Peter Artedi, earned the title "father of ichthyology" through his indispensable advancements. Artedi contributed to Linnaeus's refinement of the principles of taxonomy. Furthermore, he recognized five additional orders of fish: Malacopterygii, Acanthopterygii, Branchiostegi, Chondropterygii, and Plagiuri. Artedi developed standard methods for making counts and measurements of anatomical features that are still used today. Another associate of Linnaeus, Albertus Seba, was a prosperous pharmacist from Amsterdam who assembled a cabinet, or collection, of fish. Seba invited Artedi to use this assortment; in 1735, Artedi fell into an Amsterdam canal and drowned at the age of 30. Linnaeus posthumously published Artedi's manuscripts as Ichthyologia, sive Opera Omnia de Piscibus (1738). Linnaeus's refinement of taxonomy culminated in the development of binomial nomenclature, which is still in use by contemporary ichthyologists. He also revised the orders introduced by Artedi, placing significance on the pelvic fins. Fish lacking this appendage were placed within the order Apodes; fish having abdominal, thoracic, or jugular pelvic fins were termed Abdominales, Thoracici, and Jugulares, respectively. However, these alterations were not grounded in evolutionary theory, and it would be over a century before Charles Darwin provided the intellectual foundation needed to perceive that the degree of similarity in taxonomic features is a consequence of phylogenetic relationships. Close to the dawn of the 19th century, Marcus Elieser Bloch of Berlin and Georges Cuvier of Paris made attempts to consolidate the knowledge of ichthyology. Cuvier summarized all of the available information in his monumental Histoire Naturelle des Poissons, published between 1828 and 1849 in a 22-volume series. This work describes 4,514 species of fish, 2,311 of them new to science, and remains one of the most ambitious treatises of the modern era. Scientific exploration of the Americas advanced knowledge of the remarkable diversity of fish. Charles Alexandre Lesueur, a student of Cuvier, made a collection of the fish dwelling within the Great Lakes and Saint Lawrence River regions. Adventurous individuals such as John James Audubon and Constantine Samuel Rafinesque, who often traveled together, figure in the faunal documentation of North America; Rafinesque wrote Ichthyologia Ohiensis in 1820. In addition, Louis Agassiz of Switzerland established his reputation through the study of freshwater fish and the first comprehensive treatment of palaeoichthyology, Recherches sur les poissons fossiles. In the 1840s, Agassiz moved to the United States, where he taught at Harvard University until his death in 1873. Albert Günther published his Catalogue of the Fishes of the British Museum between 1859 and 1870, describing over 6,800 species and mentioning another 1,700.
Generally considered one of the most influential ichthyologists, David Starr Jordan wrote 650 articles and books on the subject and served as president of Indiana University and Stanford University. Members of the accompanying list of notable ichthyologists meet one or more of the following criteria: 1) author of 50 or more fish taxon names; 2) author of a major reference work in ichthyology; 3) founder of a major journal or museum; 4) person most notable for other reasons who has also worked in ichthyology.
https://en.wikipedia.org/wiki/Ichthyology
Icky-pick or icky-pic is a gelatinous cable-filling compound used in outdoor-rated communications cables, including both twisted-pair copper cabling and fiber-optic cabling. [ 1 ] [ 2 ] "PIC" is the abbreviation for "plastic insulated cable"; the cable is filled with an "icky" substance, and the filled cable itself is therefore called an "icky PIC". Icky-pick has two primary functions: keeping water out of the cable, and preventing any water that does enter from migrating along it. The actual icky-pick compound is a very thick petroleum-based substance, e.g. petroleum jelly, and is only rated for outdoor use, frequently in cable that is direct-buried in the ground. An outdoor cable spliced onto an indoor terminal block is prone to leak the gel, hence in many situations the icky-pic cable is spliced outside the building to a short run of normal cable which runs through a protective conduit into the building. The thick gel stains clothing and hands and is very difficult to remove. When fiber-optic cables are to be spliced, the gel must be removed with solvents and swabs to prevent fouling of the splice. Paint thinner or charcoal starter is a frequently used and commonly available remover and clean-up agent.
https://en.wikipedia.org/wiki/Icky-pick
In geometry, the icosahedral 120-cell, polyicosahedron, faceted 600-cell or icosaplex is a regular star 4-polytope with Schläfli symbol {3,5,5/2}. It is one of the 10 regular Schläfli–Hess polytopes. It is constructed from five icosahedra meeting around each edge in a pentagrammic figure. The vertex figure is a great dodecahedron. It has the same edge arrangement as the 600-cell, grand 120-cell and great 120-cell, and shares its vertices with all other Schläfli–Hess 4-polytopes except the great grand stellated 120-cell (another stellation of the 120-cell). As a faceted 600-cell, replacing the simplicial cells of the 600-cell with icosahedral pentagonal polytope cells, it can be seen as a four-dimensional analogue of the great dodecahedron, which replaces the triangular faces of the icosahedron with pentagonal faces. Indeed, the icosahedral 120-cell is dual to the small stellated 120-cell, which may be taken as a 4D analogue of the small stellated dodecahedron, dual of the great dodecahedron.
https://en.wikipedia.org/wiki/Icosahedral_120-cell
In 4-dimensional geometry, the icosahedral bipyramid is the direct sum of an icosahedron and a segment, {3,5} + { }. Each face of a central icosahedron is attached to two tetrahedra (one toward each apex), creating 40 tetrahedral cells, 80 triangular faces, 54 edges, and 14 vertices. [ 1 ] An icosahedral bipyramid can be seen as two icosahedral pyramids augmented together at their bases. It is the dual of a dodecahedral prism; both have Coxeter notation symmetry [2,3,5], order 240. Having all regular cells (tetrahedra), it is a Blind polytope.
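The element counts above can be sanity-checked arithmetically: for any convex 4-polytope, the alternating sum V − E + F − C must vanish. A minimal illustrative Python sketch (not from the original article):

```python
# Sanity-check the element counts of the icosahedral bipyramid:
# for any convex 4-polytope, V - E + F - C = 0 (Euler's relation).
vertices = 12 + 2        # icosahedron vertices plus two apexes
edges = 30 + 2 * 12      # icosahedron edges plus one edge from each apex to each vertex
faces = 20 + 2 * 30      # icosahedron faces plus a triangle over each edge, per apex
cells = 2 * 20           # one tetrahedron per icosahedron face, per apex

assert (vertices, edges, faces, cells) == (14, 54, 80, 40)
assert vertices - edges + faces - cells == 0
print("Euler check passed:", vertices, edges, faces, cells)
```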
https://en.wikipedia.org/wiki/Icosahedral_bipyramid
The icosahedral pyramid is a four-dimensional convex polytope, bounded by one icosahedron as its base and by 20 triangular pyramid cells which meet at its apex. Since an icosahedron's circumradius is less than its edge length, [ 1 ] the tetrahedral pyramids can be made with regular faces. Having all regular cells, it is a Blind polytope. Two copies can be augmented to make an icosahedral bipyramid, which is also a Blind polytope. The regular 600-cell has icosahedral pyramids around every vertex. The dual of the icosahedral pyramid is the dodecahedral pyramid, seen as a dodecahedral base with 12 regular pentagonal pyramids meeting at an apex. Its configuration matrix gives all incidence counts between its elements. [ 2 ]
https://en.wikipedia.org/wiki/Icosahedral_pyramid
The icosian calculus is a non-commutative algebraic structure discovered by the Irish mathematician William Rowan Hamilton in 1856. [ 1 ] [ 2 ] In modern terms, he gave a group presentation of the icosahedral rotation group by generators and relations. Hamilton's discovery derived from his attempts to find an algebra of "triplets" or 3-tuples that he believed would reflect the three Cartesian axes. The symbols of the icosian calculus correspond to moves between vertices on a dodecahedron. (Hamilton originally thought in terms of moves between the faces of an icosahedron, which is equivalent by duality. This is the origin of the name "icosian". [ 3 ]) Hamilton's work in this area resulted indirectly in the terms Hamiltonian circuit and Hamiltonian path in graph theory. [ 4 ] He also invented the icosian game as a means of illustrating and popularising his discovery. The algebra is based on three symbols, $\iota$, $\kappa$, and $\lambda$, that Hamilton described as "roots of unity", by which he meant that repeated application of any of them a particular number of times yields the identity, which he denoted by 1. Specifically, they satisfy the relations $\iota^2 = 1$, $\kappa^3 = 1$, and $\lambda^5 = 1$. Hamilton gives one additional relation between the symbols, $\lambda = \iota\kappa$, which is to be understood as application of $\kappa$ followed by application of $\iota$. Hamilton points out that application in the reverse order produces a different result, implying that composition or multiplication of symbols is not generally commutative, although it is associative. The symbols generate a group of order 60, isomorphic to the group of rotations of a regular icosahedron or dodecahedron, and therefore to the alternating group of degree five. This, however, is not how Hamilton described them. Hamilton drew comparisons between the icosians and his system of quaternions, but noted that, unlike quaternions, which can be added and multiplied, obeying a distributive law, the icosians could only, as far as he knew, be multiplied. Hamilton understood his symbols by reference to the dodecahedron, which he represented in flattened form as a graph in the plane. The dodecahedron has 30 edges, and if arrows are placed on edges, there are two possible arrow directions for each edge, resulting in 60 directed edges. Each symbol corresponds to a permutation of the set of directed edges. The definitions below refer to a labeled planar diagram of the dodecahedron. The notation $(A,B)$ represents a directed edge from vertex $A$ to vertex $B$. Vertex $A$ is the tail of $(A,B)$ and vertex $B$ is its head. It is useful to define the symbol $\mu$ for the operation that produces the directed edge that results from making a left turn at the head of the directed edge to which the operation is applied. This symbol satisfies the relations $\mu = \lambda\kappa = \iota\kappa^2$. For example, the directed edge obtained by making a left turn from $(B,C)$ is $(C,P)$. Indeed, $\kappa$ applied to $(B,C)$ produces $(D,C)$, and $\lambda$ applied to $(D,C)$ produces $(C,P)$.
Also, $\kappa^2$ applied to $(B,C)$ produces $(P,C)$, and $\iota$ applied to $(P,C)$ produces $(C,P)$. These permutations are not rotations of the dodecahedron. Nevertheless, the group of permutations generated by these symbols is isomorphic to the rotation group of the dodecahedron, a fact that can be deduced from a specific feature of symmetric cubic graphs, of which the dodecahedron graph is an example. The rotation group of the dodecahedron has the property that for a given directed edge there is a unique rotation that sends that directed edge to any other specified directed edge. Hence by choosing a reference edge $R$, say $(B,C)$, a one-to-one correspondence between directed edges and rotations is established: let $g_E$ be the rotation that sends the reference edge $R$ to the directed edge $E$. (Indeed, there are 60 directed edges and 60 rotations.) The rotations are permutations of the set of directed edges of a different sort. Let $g(E)$ denote the image of edge $E$ under the rotation $g$. The icosian associated to $g$ sends the reference edge $R$ to the same directed edge as does $g$, namely to $g(R)$. The result of applying that icosian to any other directed edge $E$ is $g_E g(R) = g_E g g_E^{-1}(E)$. [ 5 ] A word consisting of the symbols $\lambda$ and $\mu$ corresponds to a sequence of right and left turns in the graph. Specifying such a word along with an initial directed edge therefore specifies a directed path along the edges of the dodecahedron. If the group element represented by the word equals the identity, then the path returns to the initial directed edge in the final step. If the additional requirement is imposed that every vertex of the graph be visited exactly once (specifically, that every vertex occur exactly once as the head of a directed edge in the path), then a Hamiltonian circuit is obtained. Finding such a circuit was one of the challenges posed by Hamilton's icosian game. Hamilton exhibited the word $(\lambda^3\mu^3(\lambda\mu)^2)^2$ with the properties described above. [ 5 ] Any of the 60 directed edges may serve as the initial edge as a consequence of the symmetry of the dodecahedron, but only 30 distinct Hamiltonian circuits are obtained in this way, up to shift in starting point, because the word consists of the same sequence of 10 left and right turns repeated twice. The word with the roles of $\lambda$ and $\mu$ interchanged has the same properties, but it gives the same Hamiltonian cycles, up to shift in initial edge and reversal of direction. [ 3 ] Hence Hamilton's word accounts for all Hamiltonian cycles in the dodecahedron, whose number is known to be 30. The icosian calculus is one of the earliest examples of many later mathematical ideas.
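The defining relations are easy to check computationally in any faithful permutation representation of the icosahedral rotation group. The sketch below is illustrative only: it picks the standard generators $(1\,2)(3\,4)$ and $(1\,3\,5)$ of the alternating group $A_5$ (not Hamilton's own edge permutations) and verifies that they satisfy Hamilton's relations with $\iota$, $\kappa$, and $\lambda = \iota\kappa$:

```python
# Verify Hamilton's icosian relations in a permutation model of A5.
# Permutations are dicts on {1..5}; compose(f, g) applies g first,
# matching the text's convention that "iota kappa" means kappa, then iota.

def perm(mapping):
    p = {i: i for i in range(1, 6)}
    p.update(mapping)
    return p

def compose(f, g):
    return {i: f[g[i]] for i in range(1, 6)}

def power(f, n):
    result = perm({})
    for _ in range(n):
        result = compose(f, result)
    return result

identity = perm({})
iota = perm({1: 2, 2: 1, 3: 4, 4: 3})   # order 2, plays the role of iota
kappa = perm({1: 3, 3: 5, 5: 1})        # order 3, plays the role of kappa
lam = compose(iota, kappa)              # lambda = iota applied after kappa

assert power(iota, 2) == identity       # iota^2 = 1
assert power(kappa, 3) == identity      # kappa^3 = 1
assert power(lam, 5) == identity        # lambda^5 = 1
# Non-commutativity, as Hamilton observed:
assert compose(iota, kappa) != compose(kappa, iota)
print("All icosian relations verified.")
```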
https://en.wikipedia.org/wiki/Icosian_calculus
Integrated discrete Multiple Organ Culture (IdMOC) is an in vitro, cell-culture-based experimental model for the study of intercellular communication. In conventional in vitro systems, each cell type is studied in isolation, ignoring critical interactions between organs or cell types. IdMOC technology is based on the concept that multiple organs signal or communicate via the systemic circulation (i.e., blood). The IdMOC plate consists of multiple inner wells within a large interconnecting chamber. Multiple cell types are first individually seeded in the inner wells and, when required, are flooded with an overlying medium to facilitate well-to-well communication. Test material can be added to the overlying medium, and both media and cells can be analyzed individually. Plating hepatocytes with other organ-specific cells allows evaluation of drug metabolism and organotoxicity. [ 1 ] The IdMOC system has numerous applications in drug development, such as the evaluation of drug metabolism and toxicity. It can simultaneously evaluate the toxic potential of a drug on cells from multiple organs and evaluate drug stability, distribution, metabolite formation, and efficacy. By modeling multiple-organ interactions, IdMOC can examine the pharmacological effects of a drug and its metabolites on target and off-target organs, as well as evaluate drug–drug interactions by measuring cytochrome P450 (CYP) induction or inhibition in hepatocytes. IdMOC can also be used for routine and high-throughput screening of drugs with desirable ADME or ADME-Tox properties. In vitro toxicity screening using hepatocytes in conjunction with other primary cells such as cardiomyocytes (cardiotoxicity model), kidney proximal tubule epithelial cells (nephrotoxicity model), astrocytes (neurotoxicity model), endothelial cells (vascular toxicity model), and airway epithelial cells (pulmonary toxicity model) is invaluable to the drug design and discovery process. [ 2 ] The IdMOC was patented by Dr. Albert P. Li in 2004. [ 3 ]
https://en.wikipedia.org/wiki/IdMOC
An ideal chain (or freely-jointed chain) is the simplest model in polymer chemistry to describe polymers, such as nucleic acids and proteins. It assumes that the monomers in a polymer are located at the steps of a hypothetical random walker that does not remember its previous steps. By neglecting interactions among monomers, this model assumes that two (or more) monomers can occupy the same location. Although it is simple, its generality gives insight about the physics of polymers. In this model, monomers are rigid rods of a fixed length $l$, and their orientation is completely independent of the orientations and positions of neighbouring monomers. In some cases, the monomer has a physical interpretation, such as an amino acid in a polypeptide. In other cases, a monomer is simply a segment of the polymer that can be modeled as behaving as a discrete, freely jointed unit. If so, $l$ is the Kuhn length. For example, chromatin is modeled as a polymer in which each monomer is a segment approximately 14–46 kbp in length. [ 1 ] $N$ mers form the polymer, whose total unfolded length is $L = N\,l$, where $N$ is the number of mers. In this very simple approach where no interactions between mers are considered, the energy of the polymer is taken to be independent of its shape, which means that at thermodynamic equilibrium, all of its shape configurations are equally likely to occur as the polymer fluctuates in time, according to the Maxwell–Boltzmann distribution. Let us call $\vec{R}$ the total end-to-end vector of an ideal chain and $\vec{r}_1, \ldots, \vec{r}_N$ the vectors corresponding to individual mers. Those random vectors have components in the three directions of space. Most of the expressions given in this article assume that the number of mers $N$ is large, so that the central limit theorem applies. The two ends of the chain are not coincident, but they fluctuate around each other, so that of course
$$\left\langle \vec{R} \right\rangle = \sum_{i=1}^{N} \left\langle \vec{r}_i \right\rangle = \vec{0}.$$
Throughout the article the brackets $\langle\,\rangle$ will be used to denote the mean (of values taken over time) of a random variable or a random vector, as above. Since the $\vec{r}_1, \ldots, \vec{r}_N$ are independent, it follows from the central limit theorem that $\vec{R}$ is distributed according to a normal (Gaussian) distribution: precisely, in 3D, $R_x$, $R_y$ and $R_z$ are distributed according to a normal distribution of mean 0 and of variance
$$\sigma^2 = \langle R_x^2\rangle - \langle R_x\rangle^2 = \langle R_x^2\rangle - 0, \qquad \langle R_x^2\rangle = \langle R_y^2\rangle = \langle R_z^2\rangle = N\,\frac{l^2}{3},$$
so that $\langle R^2\rangle = N\,l^2 = L\,l$.
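The scaling $\langle R^2\rangle = N\,l^2$ is easy to check numerically. The following sketch is illustrative only (the chain length, bond length, and sample count are arbitrary choices): it draws random unit bond vectors and averages the squared end-to-end distance.

```python
# Monte Carlo check of <R^2> = N l^2 for a freely jointed chain.
import numpy as np

rng = np.random.default_rng(0)
N, l, samples = 100, 1.0, 20_000   # arbitrary illustrative parameters

# Isotropic unit vectors: normalize 3D Gaussian draws, then scale to length l.
bonds = rng.normal(size=(samples, N, 3))
bonds *= l / np.linalg.norm(bonds, axis=2, keepdims=True)

R = bonds.sum(axis=1)              # end-to-end vector of each sample chain
mean_R2 = np.mean(np.sum(R**2, axis=1))
print(f"<R^2> = {mean_R2:.1f},  theory N l^2 = {N * l**2:.1f}")
```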
The end-to-end vector of the chain is distributed according to the probability density function
$$P(\vec{R}) = \left(\frac{3}{2\pi N l^2}\right)^{3/2} e^{-\frac{3|\vec{R}|^2}{2Nl^2}}.$$
The average end-to-end distance of the polymer is
$$\sqrt{\left\langle R^2 \right\rangle} = \sqrt{N}\,l = \sqrt{L\,l}.$$
A quantity frequently used in polymer physics is the radius of gyration,
$$\langle R_G\rangle = \frac{\sqrt{N}\,l}{\sqrt{6}}.$$
It is worth noting that the above average end-to-end distance, which in the case of this simple model is also the typical amplitude of the system's fluctuations, becomes negligible compared to the total unfolded length of the polymer $N\,l$ at the thermodynamic limit. This result is a general property of statistical systems. Mathematical remark: the rigorous demonstration of the expression of the probability density $P(\vec{R})$ is not as direct as it appears above: from the application of the usual (1D) central limit theorem one can deduce that $R_x$, $R_y$ and $R_z$ are distributed according to a centered normal distribution of variance $N\,l^2/3$. Then, the expression given above for $P(\vec{R})$ is not the only one that is compatible with such a distribution for $R_x$, $R_y$ and $R_z$. However, since the components of the vectors $\vec{r}_1,\ldots,\vec{r}_N$ are uncorrelated for the random walk we are considering, it follows that $R_x$, $R_y$ and $R_z$ are also uncorrelated. This additional condition can only be fulfilled if $\vec{R}$ is distributed according to $P(\vec{R})$. Alternatively, this result can also be demonstrated by applying a multidimensional generalization of the central limit theorem, or through symmetry arguments. While the elementary model described above is totally unadapted to the description of real-world polymers at the microscopic scale, it does show some relevance at the macroscopic scale in the case of a polymer in solution whose monomers form an ideal mix with the solvent (in which case, the interactions between monomer and monomer, solvent molecule and solvent molecule, and between monomer and solvent are identical, and the system's energy can be considered constant, validating the hypotheses of the model). The relevance of the model is, however, limited, even at the macroscopic scale, by the fact that it does not consider any excluded volume for monomers (or, in chemical terms, that it neglects steric effects). Since the $N$ mers are of a rigid, fixed length, the model also does not consider bond stretching, though it can be extended to do so. [ 2 ] Other fluctuating polymer models that consider no interaction between monomers and no excluded volume, like the worm-like chain model, are all asymptotically convergent toward this model at the thermodynamic limit. For the purpose of this analogy a Kuhn segment is introduced, corresponding to the equivalent monomer length to be considered in the analogous ideal chain.
The number of Kuhn segments to be considered in the analogous ideal chain is equal to the total unfolded length of the polymer divided by the length of a Kuhn segment. If the two free ends of an ideal chain are pulled apart by some sort of device, then the device experiences a force exerted by the polymer. As the ideal chain is stretched, its energy remains constant, and its time-average, or internal energy, also remains constant, which means that this force necessarily stems from a purely entropic effect. This entropic force is very similar to the pressure experienced by the walls of a box containing an ideal gas. The internal energy of an ideal gas depends only on its temperature, and not on the volume of its containing box, so it is not an energy effect that tends to increase the volume of the box like gas pressure does. This implies that the pressure of an ideal gas has a purely entropic origin. What is the microscopic origin of such an entropic force or pressure? The most general answer is that the effect of thermal fluctuations tends to bring a thermodynamic system toward a macroscopic state that corresponds to a maximum in the number of microscopic states (or micro-states) that are compatible with this macroscopic state. In other words, thermal fluctuations tend to bring a system toward its macroscopic state of maximum entropy. What does this mean in the case of the ideal chain? First, for our ideal chain, a microscopic state is characterized by the superposition of the states $\vec{r}_i$ of each individual monomer (with $i$ varying from 1 to $N$). In its solvent, the ideal chain is constantly subject to shocks from moving solvent molecules, and each of these shocks sends the system from its current microscopic state to another, very similar microscopic state. For an ideal polymer, as will be shown below, there are more microscopic states compatible with a short end-to-end distance than there are compatible with a large end-to-end distance. Thus, for an ideal chain, maximizing its entropy means reducing the distance between its two free ends. Consequently, a force that tends to collapse the chain is exerted by the ideal chain between its two free ends. In this section, the mean of this force will be derived. The generality of the expression obtained at the thermodynamic limit will then be discussed. The case of an ideal chain whose two ends are attached to fixed points is considered in this sub-section. The vector $\vec{R}$ joining these two points characterizes the macroscopic state (or macro-state) of the ideal chain. To each macro-state corresponds a certain number of micro-states, which we will call $\Omega(\vec{R})$ (micro-states are defined in the introduction to this section). Since the ideal chain's energy is constant, each of these micro-states is equally likely to occur. The entropy associated to a macro-state is thus equal to
$$S(\vec{R}) = k_{\text{B}}\log\bigl(\Omega(\vec{R})\bigr),$$
where $k_{\text{B}}$ is the Boltzmann constant. The above expression gives the absolute (quantum) entropy of the system. A precise determination of $\Omega(\vec{R})$ would require a quantum model for the ideal chain, which is beyond the scope of this article.
However, we have already calculated the probability density $P(\vec{R})$ associated with the end-to-end vector of the unconstrained ideal chain, above. Since all micro-states of the ideal chain are equally likely to occur, $P(\vec{R})$ is proportional to $\Omega(\vec{R})$. This leads to the following expression for the classical (relative) entropy of the ideal chain:
$$S(\vec{R}) = k_{\text{B}}\log\bigl(P(\vec{R})\bigr) + C_{st},$$
where $C_{st}$ is a fixed constant. Let us call $\vec{F}$ the force exerted by the chain on the point to which its end is attached. From the above expression of the entropy, we can deduce an expression of this force. Suppose that, instead of being fixed, the positions of the two ends of the ideal chain are now controlled by an operator. The operator controls the evolution of the end-to-end vector $\vec{R}$. If the operator changes $\vec{R}$ by a tiny amount $d\vec{R}$, then the variation of internal energy of the chain is zero, since the energy of the chain is constant. This condition can be written as
$$0 = dU = \delta W + \delta Q,$$
where $\delta W$ is defined as the elementary amount of mechanical work transferred by the operator to the ideal chain, and $\delta Q$ is defined as the elementary amount of heat transferred by the solvent to the ideal chain. Now, if we assume that the transformation imposed by the operator on the system is quasistatic (i.e., infinitely slow), then the system's transformation will be time-reversible, and we can assume that during its passage from macro-state $\vec{R}$ to macro-state $\vec{R} + d\vec{R}$, the system passes through a series of thermodynamic equilibrium macro-states. This has two consequences: first, since the transformation is reversible, $\delta Q = T\,dS$; second, the work done by the operator against the force exerted by the chain is $\delta W = -\langle\vec{f}\rangle\cdot d\vec{R}$. We are thus led to:
$$\langle\vec{f}\rangle = T\frac{dS}{d\vec{R}} = \frac{k_{\text{B}}T}{P(\vec{R})}\frac{dP(\vec{R})}{d\vec{R}} = -k_{\text{B}}T\frac{3\vec{R}}{Nl^2}.$$
The above equation is the equation of state of the ideal chain. Since the expression depends on the central limit theorem, it is only exact in the limit of polymers containing a large number of monomers (that is, the thermodynamic limit). It is also only valid for small end-to-end distances, relative to the overall polymer contour length, where the behavior is like a Hookean spring. Behavior over larger force ranges can be modeled using a canonical ensemble treatment identical to magnetization of paramagnetic spins. For arbitrary forces the extension–force dependence is given by the Langevin function $\mathcal{L}$:
$$\frac{R}{Nl} = \coth\left(\frac{fl}{k_{\text{B}}T}\right) - \frac{k_{\text{B}}T}{fl} = \mathcal{L}\left(\frac{fl}{k_{\text{B}}T}\right),$$
where the extension is $R = |\vec{R}|$.
For arbitrary extensions the force–extension dependence can be approximated by [ 3 ]
$$\frac{fl}{k_{\text{B}}T} = \mathcal{L}^{-1}\!\left(\frac{R}{Nl}\right) \approx 3\frac{R}{Nl} + \frac{1}{5}\left(\frac{R}{Nl}\right)^{2}\sin\left(\frac{7R}{2Nl}\right) + \frac{\left(\frac{R}{Nl}\right)^{3}}{1 - \frac{R}{Nl}},$$
where $\mathcal{L}^{-1}$ is the inverse Langevin function and $N$ is the number of bonds [ 4 ] in the molecule (so if the molecule has $N$ bonds, it has $N+1$ monomers). Finally, the model can be extended to even larger force ranges by inclusion of a stretch modulus along the polymer contour length, that is, by allowing the length of each unit of the chain to respond elastically to the applied force. [ 5 ] Throughout this sub-section, as in the previous one, the two ends of the polymer are attached to a micro-manipulation device. This time, however, the device does not maintain the two ends of the ideal chain in a fixed position, but rather it maintains a constant pulling force $\vec{f}_{\text{op}}$ on the ideal chain. In this case the two ends of the polymer fluctuate around a mean position $\langle\vec{R}\rangle$. The ideal chain reacts with a constant opposite force $\vec{f} = -\vec{f}_{\text{op}}$. For an ideal chain exchanging length with a reservoir, a macro-state of the system is characterized by the vector $\vec{f}$. The change between an ideal chain of fixed length and an ideal chain in contact with a length reservoir is very much akin to the change between the micro-canonical ensemble and the canonical ensemble (see the statistical mechanics article about this). [ 6 ] The change is from a state where a fixed value is imposed on a certain parameter, to a state where the system is left free to exchange this parameter with the outside. The parameter in question is energy for the microcanonical and canonical descriptions, whereas in the case of the ideal chain the parameter is the length of the ideal chain. As in the micro-canonical and canonical ensembles, the two descriptions of the ideal chain differ only in the way they treat the system's fluctuations. They are thus equivalent at the thermodynamic limit. The equation of state of the ideal chain remains the same, except that $\vec{R}$ is now subject to fluctuations:
$$\vec{f} = -k_{\text{B}}T\frac{3\langle\vec{R}\rangle}{Nl^2}.$$
Consider a freely jointed chain of $N$ bonds of length $l$ subject to a constant elongational force $f$ applied to its ends along the $z$ axis, at an environment temperature $T$. An example could be a chain with two opposite charges $+q$ and $-q$ at its ends in a constant electric field $\vec{E}$ applied along the $z$ axis. If the direct Coulomb interaction between the charges is ignored, then there is a constant force $\vec{f}$ at the two ends. Different chain conformations are not equally likely, because they correspond to different energies of the chain in the external electric field.
The energy of a conformation is
$$U = -q\vec{E}\cdot\vec{R} = -\vec{f}\cdot\vec{R} = -fR_z.$$
Thus, different chain conformations have different statistical Boltzmann factors $\exp(-U/k_{\text{B}}T)$. [ 4 ] The partition function is:
$$Z = \sum_{\text{states}}\exp(-U/k_{\text{B}}T) = \sum_{\text{states}}\exp\left(\frac{fR_z}{k_{\text{B}}T}\right).$$
Every monomer connection in the chain is characterized by a vector $\vec{r}_i$ of length $l$ and angles $\theta_i, \varphi_i$ in the spherical coordinate system. The end-to-end vector can be represented as $R_z = \sum_{i=1}^{N} l\cos\theta_i$. Therefore:
$$Z = \int \exp\left(\frac{fl}{k_{\text{B}}T}\sum_{i=1}^{N}\cos\theta_i\right)\prod_{i=1}^{N}\sin\theta_i\,d\theta_i\,d\varphi_i = \left[\int_0^{\pi} 2\pi\sin\theta\,\exp\left(\frac{fl}{k_{\text{B}}T}\cos\theta\right)d\theta\right]^{N} = \left[\frac{2\pi}{fl/(k_{\text{B}}T)}\left(e^{fl/k_{\text{B}}T} - e^{-fl/k_{\text{B}}T}\right)\right]^{N} = \left[\frac{4\pi\sinh\bigl(fl/(k_{\text{B}}T)\bigr)}{fl/(k_{\text{B}}T)}\right]^{N}.$$
The Gibbs free energy $G$ can be calculated directly from the partition function:
$$G(T,f,N) = -k_{\text{B}}T\ln Z(T,f,N) = -Nk_{\text{B}}T\left[\ln\left(4\pi\sinh\left(\frac{fl}{k_{\text{B}}T}\right)\right) - \ln\left(\frac{fl}{k_{\text{B}}T}\right)\right].$$
The Gibbs free energy is used here because the ensemble of chains corresponds to constant temperature $T$ and constant force $f$ (analogous to the isothermal–isobaric ensemble, which has constant temperature and pressure). The average end-to-end distance corresponding to a given force can be obtained as the derivative of the free energy:
$$\langle R\rangle = -\frac{\partial G}{\partial f} = Nl\left[\coth\left(\frac{fl}{k_{\text{B}}T}\right) - \frac{k_{\text{B}}T}{fl}\right].$$
This expression is the Langevin function $\mathcal{L}$, also mentioned in previous paragraphs,
$$\mathcal{L}(\alpha) = \coth(\alpha) - \frac{1}{\alpha}, \qquad \alpha = \frac{fl}{k_{\text{B}}T}.$$
For small relative elongations ($\langle R\rangle \ll R_{\text{max}} = lN$) the dependence is approximately linear, $\mathcal{L}(\alpha) \cong \alpha/3$ for $\alpha \ll 1$, and follows Hooke's law as shown in previous paragraphs:
$$\vec{f} = k_{\text{B}}T\frac{3\langle\vec{R}\rangle}{Nl^2}.$$
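The force–extension relation above is straightforward to explore numerically. A small sketch with illustrative values of $\alpha = fl/(k_{\text{B}}T)$ only, comparing the exact Langevin function with its small-force (Hookean) limit $\alpha/3$:

```python
# Evaluate the Langevin function L(a) = coth(a) - 1/a and compare it
# with the small-force (Hookean) limit a/3.
import numpy as np

def langevin(a):
    a = np.asarray(a, dtype=float)
    return 1.0 / np.tanh(a) - 1.0 / a

for alpha in [0.1, 0.5, 1.0, 5.0, 20.0]:   # alpha = f l / (kB T), illustrative
    exact = langevin(alpha)
    print(f"alpha={alpha:5.1f}  R/(N l)={exact:.4f}  linear approx={alpha/3:.4f}")
# At small alpha the two agree; at large alpha the extension saturates at R -> N l.
```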
https://en.wikipedia.org/wiki/Ideal_chain
In electrochemistry, there are two types of ideal electrode, the ideal polarizable electrode and the ideal non-polarizable electrode. Simply put, the ideal polarizable electrode is characterized by charge separation at the electrode–electrolyte boundary and is electrically equivalent to a capacitor, while the ideal non-polarizable electrode is characterized by no charge separation and is electrically equivalent to a short circuit. An ideal polarizable electrode (also ideally polarizable electrode or ideally polarized electrode or IPE) is a hypothetical electrode characterized by an absence of net DC current between the two sides of the electrical double layer, i.e., no faradaic current exists between the electrode surface and the electrolyte. Any transient current that may be flowing is considered non-faradaic. [ 1 ] The reason for this behavior is that the electrode reaction is infinitely slow, with zero exchange current density; the electrode behaves electrically as a capacitor. The concept of ideal polarizability was first introduced by F. O. Koenig in 1934. [ 1 ] An ideal non-polarizable electrode is a hypothetical electrode through which a faradaic current can pass freely (without polarization). Its potential does not change from its equilibrium potential upon application of current. The reason for this behavior is that the electrode reaction is infinitely fast, with an infinite exchange current density; the electrode behaves as an electrical short. The classical examples of the two nearly ideal types of electrodes, polarizable and non-polarizable, are the mercury droplet electrode in contact with an oxygen-free KCl solution and the silver/silver chloride electrode, respectively. [ 2 ] [ 3 ]
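The two equivalent circuits imply opposite limiting behavior versus frequency, which a short numerical sketch can illustrate. The 20 µF/cm² double-layer capacitance below is a typical order of magnitude chosen purely for illustration, not a value from the original article:

```python
# Impedance magnitude of the two ideal-electrode equivalent circuits.
# Ideal polarizable electrode ~ capacitor: |Z| = 1/(2*pi*f*C), diverging at DC
# (no faradaic DC current). Ideal non-polarizable electrode ~ short: |Z| = 0.
import math

C = 20e-6  # F per cm^2, a typical double-layer capacitance (illustrative value)

for freq in [0.01, 1.0, 100.0, 10_000.0]:   # Hz
    z_capacitor = 1.0 / (2 * math.pi * freq * C)
    print(f"f = {freq:>8} Hz   |Z| polarizable = {z_capacitor:12.1f} ohm*cm^2   "
          f"|Z| non-polarizable = 0 ohm")
```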
https://en.wikipedia.org/wiki/Ideal_electrode
The ideal gas law, also called the general gas equation, is the equation of state of a hypothetical ideal gas. It is a good approximation of the behavior of many gases under many conditions, although it has several limitations. It was first stated by Benoît Paul Émile Clapeyron in 1834 as a combination of the empirical Boyle's law, Charles's law, Avogadro's law, and Gay-Lussac's law. [ 1 ] The ideal gas law is often written in an empirical form:
$$pV = nRT,$$
where $p$, $V$ and $T$ are the pressure, volume and temperature respectively; $n$ is the amount of substance; and $R$ is the ideal gas constant. It can also be derived from the microscopic kinetic theory, as was achieved (apparently independently) by August Krönig in 1856 [ 2 ] and Rudolf Clausius in 1857. [ 3 ] The state of an amount of gas is determined by its pressure, volume, and temperature. The modern form of the equation relates these simply in two main forms. The temperature used in the equation of state is an absolute temperature: the appropriate SI unit is the kelvin. [ 4 ] The most frequently introduced forms are:
$$pV = nRT = nk_{\text{B}}N_{\text{A}}T = Nk_{\text{B}}T.$$
In SI units, $p$ is measured in pascals, $V$ in cubic metres, $n$ in moles, and $T$ in kelvins (the Kelvin scale is a shifted Celsius scale, where 0 K = −273.15 °C, the lowest possible temperature). $R$ has the value 8.314 J/(mol·K) = 1.989 ≈ 2 cal/(mol·K), or 0.0821 L⋅atm/(mol⋅K). How much gas is present could be specified by giving the mass instead of the chemical amount of gas. Therefore, an alternative form of the ideal gas law may be useful. The chemical amount $n$ (in moles) is equal to the total mass of the gas $m$ (in kilograms) divided by the molar mass $M$ (in kilograms per mole), $n = m/M$. By replacing $n$ with $m/M$ and subsequently introducing the density $\rho = m/V$, we get
$$p = \rho\,\frac{R}{M}\,T.$$
Defining the specific gas constant $R_{\text{specific}}$ as the ratio $R/M$,
$$p = \rho R_{\text{specific}} T.$$
This form of the ideal gas law is very useful because it links pressure, density, and temperature in a unique formula independent of the quantity of the considered gas. Alternatively, the law may be written in terms of the specific volume $v$, the reciprocal of density, as
$$pv = R_{\text{specific}} T.$$
It is common, especially in engineering and meteorological applications, to represent the specific gas constant by the symbol $R$. In such cases, the universal gas constant is usually given a different symbol such as $\bar{R}$ or $R^*$ to distinguish it. In any case, the context and/or units of the gas constant should make it clear as to whether the universal or specific gas constant is being used. [ 5 ] In statistical mechanics, the following molecular equation (i.e. the ideal gas law in its theoretical form) is derived from first principles:
$$p = nk_{\text{B}}T,$$
where $p$ is the absolute pressure of the gas, $n$ is the number density of the molecules (given by the ratio $n = N/V$, in contrast to the previous formulation in which $n$ is the number of moles), $T$ is the absolute temperature, and $k_{\text{B}}$ is the Boltzmann constant relating temperature and energy, given by $k_{\text{B}} = R/N_{\text{A}}$, where $N_{\text{A}}$ is the Avogadro constant.
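As a concrete usage example of $pV = nRT$, the following sketch (with assumed illustrative values: one mole near the old standard temperature and pressure) solves for the pressure and cross-checks the equivalent molecular form:

```python
# Solve the ideal gas law pV = nRT for pressure, and cross-check the
# equivalent molecular form p = N kB T / V.
R = 8.314            # J/(mol K), ideal gas constant
k_B = 1.380649e-23   # J/K, Boltzmann constant
N_A = 6.02214076e23  # 1/mol, Avogadro constant

n = 1.0              # mol (illustrative)
T = 273.15           # K
V = 22.4e-3          # m^3, roughly the molar volume at these conditions

p_molar = n * R * T / V
p_molecular = (n * N_A) * k_B * T / V
print(f"p = {p_molar:.0f} Pa (molar form), {p_molecular:.0f} Pa (molecular form)")
# Both give about 101 kPa, i.e. roughly 1 atm.
```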
The form can be further simplified by defining the average translational kinetic energy per molecule corresponding to the temperature,
$$\langle E_{\text{k}}\rangle = \tfrac{3}{2}k_{\text{B}}T,$$
so the ideal gas law is more simply expressed as
$$pV = \tfrac{2}{3}N\langle E_{\text{k}}\rangle.$$
From this we notice that for a gas of mass $m$, with an average particle mass of $\mu$ times the atomic mass constant $m_{\text{u}}$ (i.e., the mass is $\mu$ Da), the number of molecules is given by $N = m/(\mu m_{\text{u}})$, and since $\rho = m/V = n\mu m_{\text{u}}$, we find that the ideal gas law can be rewritten as
$$p = \frac{\rho}{\mu m_{\text{u}}}k_{\text{B}}T.$$
In SI units, $p$ is measured in pascals, $V$ in cubic metres, $T$ in kelvins, and $k_{\text{B}} = 1.38\times 10^{-23}$ J⋅K$^{-1}$. Combining the laws of Charles, Boyle, and Gay-Lussac gives the combined gas law, which can take the same functional form as the ideal gas law. This form does not specify the number of moles, and the ratio of $PV$ to $T$ is simply taken as a constant: [ 6 ]
$$\frac{PV}{T} = k,$$
where $P$ is the pressure of the gas, $V$ is the volume of the gas, $T$ is the absolute temperature of the gas, and $k$ is a constant. More commonly, when comparing the same substance under two different sets of conditions, the law is written as:
$$\frac{P_1 V_1}{T_1} = \frac{P_2 V_2}{T_2}.$$
According to the assumptions of the kinetic theory of ideal gases, one can consider that there are no intermolecular attractions between the molecules, or atoms, of an ideal gas. In other words, its potential energy is zero. Hence, all the energy possessed by the gas is the kinetic energy of the molecules, or atoms, of the gas,
$$E = \tfrac{3}{2}nRT,$$
which corresponds to the kinetic energy of $n$ moles of a monoatomic gas having 3 degrees of freedom: $x$, $y$, $z$. A table in the original article essentially simplifies the ideal gas equation for a particular process, making the equation easier to solve using numerical methods. A thermodynamic process is defined as a system that moves from state 1 to state 2, where the state number is denoted by a subscript. As shown in the first column of that table, basic thermodynamic processes are defined such that one of the gas properties ($P$, $V$, $T$, $S$, or $H$) is constant throughout the process. For a given thermodynamic process, in order to specify the extent of a particular process, one of the property ratios (listed under the column labeled "known ratio") must be specified (either directly or indirectly). Also, the property for which the ratio is known must be distinct from the property held constant in the previous column (otherwise the ratio would be unity, and not enough information would be available to simplify the gas law equation). In the final three columns, the properties ($p$, $V$, or $T$) at state 2 can be calculated from the properties at state 1 using the equations listed. ^ a. In an isentropic process, system entropy ($S$) is constant. Under these conditions, $p_1 V_1^\gamma = p_2 V_2^\gamma$, where $\gamma$ is defined as the heat capacity ratio, which is constant for a calorifically perfect gas. The value used for $\gamma$ is typically 1.4 for diatomic gases like nitrogen (N$_2$) and oxygen (O$_2$) (and air, which is 99% diatomic). Also $\gamma$ is typically 1.6 for monatomic gases like the noble gases helium (He) and argon (Ar). In internal combustion engines $\gamma$ varies between 1.35 and 1.15, depending on the constituent gases and the temperature. ^ b. In an isenthalpic process, system enthalpy ($H$) is constant. In the case of free expansion for an ideal gas, there are no molecular interactions, and the temperature remains constant.
For real gases, the molecules do interact via attraction or repulsion depending on temperature and pressure, and heating or cooling does occur. This is known as the Joule–Thomson effect. For reference, the Joule–Thomson coefficient $\mu_{\text{JT}}$ for air at room temperature and sea level is 0.22 °C/bar. [ 7 ] The equation of state given here ($PV = nRT$) applies only to an ideal gas, or as an approximation to a real gas that behaves sufficiently like an ideal gas. There are in fact many different forms of the equation of state. Since the ideal gas law neglects both molecular size and intermolecular attractions, it is most accurate for monatomic gases at high temperatures and low pressures. The molecular size becomes less important for lower densities, i.e. for larger volumes at lower pressures, because the average distance between adjacent molecules becomes much larger than the molecular size. The relative importance of intermolecular attractions diminishes with increasing thermal kinetic energy, i.e., with increasing temperatures. More detailed equations of state, such as the van der Waals equation, account for deviations from ideality caused by molecular size and intermolecular forces. The empirical laws that led to the derivation of the ideal gas law were discovered with experiments that changed only 2 state variables of the gas and kept every other one constant. All the possible gas laws that could have been discovered with this kind of setup are:
$$\text{(1) } PV = C_1 \text{ (Boyle's law)}, \quad \text{(2) } \frac{V}{T} = C_2 \text{ (Charles's law)}, \quad \text{(3) } \frac{V}{N} = C_3 \text{ (Avogadro's law)},$$
$$\text{(4) } \frac{P}{T} = C_4 \text{ (Gay-Lussac's law)}, \quad \text{(5) } NT = C_5, \quad \text{(6) } \frac{P}{N} = C_6,$$
where $P$ stands for pressure, $V$ for volume, $N$ for number of particles in the gas and $T$ for temperature; $C_1, C_2, C_3, C_4, C_5, C_6$ are constants in this context, because each equation requires only the parameters explicitly noted in it to change. To derive the ideal gas law one does not need to know all 6 formulas; one can know just 3, and with those derive the rest, or know just one more to be able to get the ideal gas law, which needs 4. Since each formula only holds when only the state variables involved in that formula change while the others (which are a property of the gas but are not explicitly noted in the formula) remain constant, we cannot simply use algebra and directly combine them all. This is why Boyle did his experiments while keeping $N$ and $T$ constant, and this must be taken into account (in this same way, every experiment kept some parameter constant, and this must be taken into account for the derivation). Keeping this in mind, to carry the derivation on correctly, one must imagine the gas being altered by one process at a time (as it was done in the experiments).
The derivation using 4 formulas can look like this: at first the gas has parameters $P_1, V_1, N_1, T_1$. Say, starting to change only pressure and volume, according to Boyle's law (equation 1), then:
$$P_1 V_1 = P_2 V_2. \quad (7)$$
After this process, the gas has parameters $P_2, V_2, N_1, T_1$. Using then equation (5) to change the number of particles in the gas and the temperature,
$$N_1 T_1 = N_2 T_2. \quad (8)$$
After this process, the gas has parameters $P_2, V_2, N_2, T_2$. Using then equation (6) to change the pressure and the number of particles,
$$\frac{P_2}{N_2} = \frac{P_3}{N_3}. \quad (9)$$
After this process, the gas has parameters $P_3, V_2, N_3, T_2$. Using then Charles's law (equation 2) to change the volume and temperature of the gas,
$$\frac{V_2}{T_2} = \frac{V_3}{T_3}. \quad (10)$$
After this process, the gas has parameters $P_3, V_3, N_3, T_3$. Using simple algebra on equations (7), (8), (9) and (10) yields the result
$$\frac{P_1 V_1}{N_1 T_1} = \frac{P_3 V_3}{N_3 T_3}, \qquad \text{or} \qquad \frac{PV}{NT} = k_{\text{B}},$$
where $k_{\text{B}}$ stands for the Boltzmann constant. Another equivalent result, using the fact that $nR = Nk_{\text{B}}$, where $n$ is the number of moles in the gas and $R$ is the universal gas constant, is
$$PV = nRT,$$
which is known as the ideal gas law. If three of the six equations are known, it may be possible to derive the remaining three using the same method. However, because each formula has two variables, this is possible only for certain groups of three. For example, if you were to have equations (1), (2) and (4) you would not be able to get any more, because combining any two of them will only give you the third. However, if you had equations (1), (2) and (3) you would be able to get all six equations, because combining (1) and (2) will yield (4), then (1) and (3) will yield (6), then (4) and (6) will yield (5), as would the combination of (2) and (3). This can be summarized in a visual relation in which the numbered gas laws sit at the vertices of triangles: using the same method as above on two of the three laws at the vertices of a triangle that has an "O" inside it yields the third. For example: change only pressure and volume first,
$$P_1 V_1 = P_2 V_2; \quad (1')$$
then only volume and temperature,
$$\frac{V_2}{T_1} = \frac{V_3}{T_2}; \quad (2')$$
then, as we can choose any value for $V_3$, if we set $V_1 = V_3$, equation (2') becomes
$$\frac{V_2}{T_1} = \frac{V_1}{T_2}. \quad (3')$$
Combining equations (1') and (3') yields $\frac{P_1}{T_1} = \frac{P_2}{T_2}$, which is equation (4), of which we had no prior knowledge until this derivation. The ideal gas law can also be derived from first principles using the kinetic theory of gases, in which several simplifying assumptions are made, chief among which are that the molecules, or atoms, of the gas are point masses, possessing mass but no significant volume, and undergo only elastic collisions with each other and the sides of the container, in which both linear momentum and kinetic energy are conserved. First we show that the fundamental assumptions of the kinetic theory of gases imply that
$$PV = \frac{1}{3}Nmv_{\text{rms}}^2.$$
Consider a container in the $xyz$ Cartesian coordinate system.
For simplicity, we assume that a third of the molecules move parallel to the $x$-axis, a third parallel to the $y$-axis, and a third parallel to the $z$-axis. If all molecules move with the same speed $v$, denote the corresponding pressure by $P_0$. We choose an area $S$ on a wall of the container, perpendicular to the $x$-axis. When a time $t$ elapses, all molecules in the volume $vtS$ moving in the positive direction of the $x$-axis will hit the area. There are $NvtS/V$ molecules in a part of volume $vtS$ of the container, but only one sixth (i.e. half of a third) of them move in the positive direction of the $x$-axis. Therefore, the number of molecules $N'$ that will hit the area $S$ when the time $t$ elapses is $N' = NvtS/(6V)$. When a molecule bounces off the wall of the container, it changes its momentum $\mathbf{p}_1$ to $\mathbf{p}_2 = -\mathbf{p}_1$. Hence the magnitude of the change of the momentum of one molecule is $|\mathbf{p}_2 - \mathbf{p}_1| = 2mv$. The magnitude of the change of momentum of all molecules that bounce off the area $S$ when time $t$ elapses is then $|\Delta\mathbf{p}| = 2mvN' = NtSmv^2/(3V) = ntSmv^2/3$. From $F = |\Delta\mathbf{p}|/t$ and $P_0 = F/S$ we get
$$P_0 = \frac{Nmv^2}{3V}.$$
We considered a situation where all molecules move with the same speed $v$. Now we consider a situation where they can move with different speeds, so we apply an "averaging transformation" to the above equation, effectively replacing $P_0$ by a new pressure $P$ and $v^2$ by the arithmetic mean of all the squares of the velocities of the molecules, i.e. by $v_{\text{rms}}^2$. Therefore
$$P = \frac{Nmv_{\text{rms}}^2}{3V},$$
which gives the desired formula. Using the Maxwell–Boltzmann distribution, the fraction of molecules that have a speed in the range $v$ to $v + dv$ is $f(v)\,dv$, where
$$f(v) = 4\pi v^2\left(\frac{m}{2\pi kT}\right)^{3/2}\exp\left(-\frac{mv^2}{2kT}\right),$$
and $k$ denotes the Boltzmann constant. The root-mean-square speed can be calculated by
$$v_{\text{rms}}^2 = \int_0^\infty v^2 f(v)\,dv.$$
Using the integration formula
$$\int_0^\infty v^4\exp\left(-\frac{v^2}{a^2}\right)dv = \frac{3\sqrt{\pi}}{8}a^5,$$
it follows that
$$v_{\text{rms}}^2 = \frac{3kT}{m},$$
from which we get the ideal gas law:
$$PV = \frac{1}{3}Nm\left(\frac{3kT}{m}\right) = NkT.$$
Let $\mathbf{q} = (q_x, q_y, q_z)$ and $\mathbf{p} = (p_x, p_y, p_z)$ denote the position vector and momentum vector of a particle of an ideal gas, respectively. Let $\mathbf{F}$ denote the net force on that particle. Then (two times) the time-averaged kinetic energy of the particle is:
$$2\langle K\rangle = -\left\langle \mathbf{q}\cdot\frac{d\mathbf{p}}{dt}\right\rangle = \left\langle q_x\frac{\partial H}{\partial q_x}\right\rangle + \left\langle q_y\frac{\partial H}{\partial q_y}\right\rangle + \left\langle q_z\frac{\partial H}{\partial q_z}\right\rangle = 3k_{\text{B}}T,$$
where the first equality is Newton's second law, and the second line uses Hamilton's equations and the equipartition theorem. Summing over a system of $N$ particles yields
$$3Nk_{\text{B}}T = -\left\langle \sum_{k=1}^{N}\mathbf{q}_k\cdot\mathbf{F}_k\right\rangle.$$
By Newton's third law and the ideal gas assumption, the net force of the system is the force applied by the walls of the container, and this force is given by the pressure $P$ of the gas. Hence
$$-\left\langle \sum_{k=1}^{N}\mathbf{q}_k\cdot\mathbf{F}_k\right\rangle = P\oint \mathbf{q}\cdot d\mathbf{S},$$
where $d\mathbf{S}$ is the infinitesimal area element along the walls of the container.
Since the divergence of the position vector $\mathbf{q}$ is
$$\nabla\cdot\mathbf{q} = \frac{\partial q_x}{\partial q_x} + \frac{\partial q_y}{\partial q_y} + \frac{\partial q_z}{\partial q_z} = 3,$$
the divergence theorem implies that
$$P\oint \mathbf{q}\cdot d\mathbf{S} = P\int (\nabla\cdot\mathbf{q})\,dV = 3PV,$$
where $dV$ is an infinitesimal volume within the container and $V$ is the total volume of the container. Putting these equalities together yields
$$3Nk_{\text{B}}T = 3PV,$$
which immediately implies the ideal gas law for $N$ particles:
$$PV = Nk_{\text{B}}T = nRT,$$
where $n = N/N_{\text{A}}$ is the number of moles of gas and $R = N_{\text{A}}k_{\text{B}}$ is the gas constant. For a $d$-dimensional system, the ideal gas pressure is: [ 8 ]
$$P^{(d)} = \frac{Nk_{\text{B}}T}{L^d},$$
where $L^d$ is the volume of the $d$-dimensional domain in which the gas exists. Note that the dimensions of the pressure change with dimensionality.
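The kinetic-theory result $P = Nmv_{\text{rms}}^2/(3V) = Nk_{\text{B}}T/V$ can be spot-checked by sampling Maxwell–Boltzmann velocities. A minimal sketch with illustrative parameters (a nitrogen-like molecular mass and room temperature, chosen here for the example):

```python
# Sample Maxwell-Boltzmann velocity components and verify that
# P = (N/V) m <v^2> / 3 reproduces P = (N/V) kB T.
import numpy as np

rng = np.random.default_rng(1)
k_B = 1.380649e-23       # J/K
m = 4.65e-26             # kg, roughly one N2 molecule (illustrative)
T = 300.0                # K
n_density = 2.5e25       # molecules per m^3 (illustrative)

# Each Cartesian velocity component is Gaussian with variance kB*T/m.
v = rng.normal(0.0, np.sqrt(k_B * T / m), size=(200_000, 3))
v2_mean = np.mean(np.sum(v**2, axis=1))   # estimates <v^2> = 3 kB T / m

P_kinetic = n_density * m * v2_mean / 3.0
P_ideal = n_density * k_B * T
print(f"kinetic: {P_kinetic:.4e} Pa   ideal gas law: {P_ideal:.4e} Pa")
```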
https://en.wikipedia.org/wiki/Ideal_gas_law
In discrete mathematics, ideal lattices are a special class of lattices and a generalization of cyclic lattices. [ 1 ] Ideal lattices naturally occur in many parts of number theory, but also in other areas. In particular, they have a significant place in cryptography. Micciancio defined a generalization of cyclic lattices as ideal lattices. They can be used in cryptosystems to decrease by a square root the number of parameters necessary to describe a lattice, making them more efficient. Ideal lattices are a relatively new concept, but similar lattice classes have been used for a long time. For example, cyclic lattices, a special case of ideal lattices, are used in NTRUEncrypt and NTRUSign. Ideal lattices also form the basis for cryptography resistant to quantum-computer attack, based on Ring Learning with Errors. [ 2 ] These cryptosystems are provably secure under the assumption that the shortest vector problem (SVP) is hard in these ideal lattices. In general terms, ideal lattices are lattices corresponding to ideals in rings of the form $\mathbb{Z}[x]/\langle f\rangle$ for some irreducible polynomial $f$ of degree $n$. [ 1 ] All of the definitions of ideal lattices from prior work are instances of the following general notion: let $R$ be a ring whose additive group is isomorphic to $\mathbb{Z}^n$ (i.e., it is a free $\mathbb{Z}$-module of rank $n$), and let $\sigma$ be an additive isomorphism mapping $R$ to some lattice $\sigma(R)$ in an $n$-dimensional real vector space (e.g., $\mathbb{R}^n$). The family of ideal lattices for the ring $R$ under the embedding $\sigma$ is the set of all lattices $\sigma(I)$, where $I$ is an ideal in $R$. [ 3 ] Let $f \in \mathbb{Z}[x]$ be a monic polynomial of degree $n$, and consider the quotient ring $\mathbb{Z}[x]/\langle f\rangle$. Using the standard set of representatives $\{(g \bmod f) : g \in \mathbb{Z}[x]\}$, and the identification of polynomials with vectors, the quotient ring $\mathbb{Z}[x]/\langle f\rangle$ is isomorphic (as an additive group) to the integer lattice $\mathbb{Z}^n$, and any ideal $I \subseteq \mathbb{Z}[x]/\langle f\rangle$ defines a corresponding integer sublattice $\mathcal{L}(I) \subseteq \mathbb{Z}^n$. An ideal lattice is an integer lattice $\mathcal{L}(B) \subseteq \mathbb{Z}^n$ such that $B = \{g \bmod f : g \in I\}$ for some monic polynomial $f$ of degree $n$ and ideal $I \subseteq \mathbb{Z}[x]/\langle f\rangle$. It turns out that the relevant properties of $f$ for the resulting function to be collision resistant are that $f$ be irreducible and that multiplication modulo $f$ have a small expansion factor, i.e., not increase the norms of polynomials too much. The first property implies that every ideal of the ring $\mathbb{Z}[x]/\langle f\rangle$ defines a full-rank lattice in $\mathbb{Z}^n$ and plays a fundamental role in proofs.
Lemma: Every ideal $I$ of $\mathbb{Z}[x]/\langle f\rangle$, where $f$ is a monic, irreducible integer polynomial of degree $n$, is isomorphic to a full-rank lattice in $\mathbb{Z}^n$. Ding and Lindner [4] gave evidence that distinguishing ideal lattices from general ones can be done in polynomial time and showed that in practice randomly chosen lattices are never ideal. They considered only the case where the lattice has full rank, i.e. the basis consists of $n$ linearly independent vectors. This is not a fundamental restriction, because Lyubashevsky and Micciancio have shown that if a lattice is ideal with respect to an irreducible monic polynomial, then it has full rank, as stated in the lemma above. Algorithm: Identifying ideal lattices with full-rank bases. Data: a full-rank basis $B\in\mathbb{Z}^{n\times n}$. Result: true and $\mathbf{q}$, if $B$ spans an ideal lattice with respect to $\mathbf{q}$; otherwise false. (The algorithm first transforms $B$ into Hermite normal form, computes the determinant $d$ and the adjugate matrix $A$, and then forms a product $P=AMB\bmod d$ for a certain transformation matrix $M$; $B$ spans an ideal lattice exactly when all but the last column of $P$ vanish modulo $d$.) Using this algorithm, it can be seen that many lattices are not ideal lattices. For example, for $n=2$ and $k\in\mathbb{Z}\smallsetminus\{0,\pm1\}$, Lyubashevsky and Micciancio give a pair of bases $B_1$ and $B_2$ such that $B_1$ spans an ideal lattice but $B_2$ does not; $B_2$ with $k=2$ is their example. [5] Performing the algorithm on $B_2$: the matrix is already in Hermite normal form, so the first step is not needed. The determinant is $d=2$, and from the adjugate matrix $A$ one computes the product $P=AMB\bmod d$. At this point the algorithm stops, because all but the last column of $P$ would have to be zero if $B$ spanned an ideal lattice. Micciancio [6] introduced the class of structured cyclic lattices, which correspond to ideals in polynomial rings $\mathbb{Z}[x]/(x^n-1)$, and presented the first provably secure one-way function based on the worst-case hardness of the restriction of Poly($n$)-SVP to cyclic lattices. (The problem $\gamma$-SVP consists in computing a non-zero vector of a given lattice whose norm is no more than $\gamma$ times larger than the norm of a shortest non-zero lattice vector.) At the same time, thanks to its algebraic structure, this one-way function enjoys high efficiency comparable to the NTRU scheme ($\tilde{O}(n)$ evaluation time and storage cost). Subsequently, Lyubashevsky and Micciancio [5] and, independently, Peikert and Rosen [7] showed how to modify Micciancio's function to construct an efficient and provably secure collision resistant hash function. For this, they introduced the more general class of ideal lattices, which correspond to ideals in polynomial rings $\mathbb{Z}[x]/\langle f(x)\rangle$. The collision resistance relies on the hardness of the restriction of Poly($n$)-SVP to ideal lattices (called Poly($n$)-Ideal-SVP). The average-case collision-finding problem is a natural computational problem called Ideal-SIS, which has been shown to be as hard as the worst-case instances of Ideal-SVP.
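The Ding–Lindner algorithm is only summarized above, but for a fixed monic f the property can also be tested directly from the definition: a full-rank integer lattice is an ideal lattice with respect to f exactly when it is closed under the multiply-by-x map modulo f, since closure under multiplication by x implies closure under multiplication by every ring element. A minimal numpy sketch of this definitional check (ours, not Ding and Lindner's procedure):

import numpy as np

def mult_by_x_mod_f(v, f_tail):
    """Multiply the polynomial with coefficient vector v (lowest degree
    first) by x, reducing modulo x^n + f_tail[n-1] x^(n-1) + ... + f_tail[0]."""
    top = v[-1]
    shifted = np.concatenate(([0], v[:-1]))        # v * x, degree < n part
    return shifted - top * np.array(f_tail)        # fold x^n back using f

def is_ideal_wrt_f(B, f_tail, tol=1e-9):
    """True iff the lattice spanned by the columns of B is closed under
    the multiply-by-x map, i.e. is an ideal lattice with respect to f."""
    Binv = np.linalg.inv(B.astype(float))
    for b in B.T:
        coords = Binv @ mult_by_x_mod_f(b, f_tail)
        if not np.allclose(coords, np.round(coords), atol=tol):
            return False
    return True

# with f = x^2 + 1 (f_tail = [1, 0]): 2*Z^2 is the ideal (2), but Z x 2Z is not an ideal
print(is_ideal_wrt_f(np.array([[2, 0], [0, 2]]), [1, 0]))   # True
print(is_ideal_wrt_f(np.array([[1, 0], [0, 2]]), [1, 0]))   # False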
Provably secure efficient signature schemes from ideal lattices have also been proposed, [1][8] but constructing efficient provably secure public key encryption from ideal lattices was for some time an interesting open problem. The fundamental idea of using LWE and Ring-LWE for key exchange was proposed and filed at the University of Cincinnati in 2011 by Jintai Ding, and it provided a state-of-the-art description of a quantum-resistant key exchange using Ring-LWE. The paper [9] appeared in 2012 after a provisional patent application was filed in 2012. In 2014, Peikert [10] presented a key transport scheme following the same basic idea as Ding's, in which Ding's new idea of sending an additional signal for rounding is also utilized. A digital signature using the same concepts had been constructed several years earlier by Vadim Lyubashevsky in "Lattice Signatures Without Trapdoors". [11] Together, the work of Peikert and Lyubashevsky provides a suite of Ring-LWE based, quantum-attack resistant algorithms with the same security reductions. The main usefulness of ideal lattices in cryptography stems from the fact that very efficient and practical collision resistant hash functions can be built based on the hardness of finding an approximate shortest vector in such lattices. [1] Peikert and Rosen, [7] as well as Lyubashevsky and Micciancio, independently constructed collision resistant hash functions based on ideal lattices (a generalization of cyclic lattices) and provided a fast and practical implementation. [3] These results paved the way for other efficient cryptographic constructions, including identification schemes and signatures. Lyubashevsky and Micciancio [5] gave constructions of efficient collision resistant hash functions that can be proven secure based on the worst-case hardness of the shortest vector problem for ideal lattices. They defined hash function families as follows: given a ring $R=\mathbb{Z}_p[x]/\langle f\rangle$, where $f\in\mathbb{Z}_p[x]$ is a monic, irreducible polynomial of degree $n$ and $p$ is an integer of order roughly $n^2$, generate $m$ random elements $a_1,\dots,a_m\in R$, where $m$ is a constant. The ordered $m$-tuple $h=(a_1,\dots,a_m)\in R^m$ determines the hash function. It maps elements of $D^m$, where $D$ is a strategically chosen subset of $R$, to $R$. For an element $b=(b_1,\dots,b_m)\in D^m$, the hash is $h(b)=\sum_{i=1}^m a_i\cdot b_i$. Here the size of the key (the hash function) is $O(mn\log p)=O(n\log n)$, and each product $a_i\cdot b_i$ can be computed in time $O(n\log n\log\log n)$ by using the Fast Fourier Transform (FFT) [citation needed], for an appropriate choice of the polynomial $f$. Since $m$ is a constant, hashing requires time $O(n\log n\log\log n)$.
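A minimal Python sketch may make the hash family concrete. The parameters below (n = 4, f = x^4 + 1, p = 17, m = 2, D = polynomials with 0/1 coefficients) are toy values chosen for readability, not the roughly n^2-sized modulus the security proof requires, and schoolbook multiplication stands in for the FFT-based product mentioned above.

import random

def ring_mul(a, b, f_tail, p):
    """Multiply a, b in Z_p[x]/(f), f = x^n + (f_tail as lower coefficients,
    lowest degree first, monic).  Schoolbook product followed by reduction."""
    n = len(a)
    prod = [0] * (2 * n - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    for d in range(2 * n - 2, n - 1, -1):          # fold degrees >= n back down
        c = prod[d]
        for i, fi in enumerate(f_tail):
            prod[d - n + i] = (prod[d - n + i] - c * fi) % p
    return prod[:n]

def hash_lm(key, b_vec, f_tail, p):
    """h(b) = sum_i a_i * b_i in R = Z_p[x]/(f); key = (a_1..a_m), b in D^m."""
    acc = [0] * len(f_tail)
    for a_i, b_i in zip(key, b_vec):
        t = ring_mul(a_i, b_i, f_tail, p)
        acc = [(x + y) % p for x, y in zip(acc, t)]
    return acc

n, p, m = 4, 17, 2
f_tail = [1, 0, 0, 0]                               # f = x^4 + 1
key = [[random.randrange(p) for _ in range(n)] for _ in range(m)]
msg = [[random.randrange(2) for _ in range(n)] for _ in range(m)]
print(hash_lm(key, msg, f_tail, p))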
They proved that the hash function family is collision resistant by showing that if there is a polynomial-time algorithm that succeeds with non-negligible probability in finding $b\neq b'\in D^m$ such that $h(b)=h(b')$ for a randomly chosen hash function $h\in R^m$, then a certain problem called the "shortest vector problem" is solvable in polynomial time for every ideal of the ring $\mathbb{Z}[x]/\langle f\rangle$. Based on the work of Lyubashevsky and Micciancio in 2006, Micciancio and Regev [12] defined the following family of hash functions based on ideal lattices: $f_A(y)=Ay\bmod q$, mapping inputs $y\in\{0,\dots,d-1\}^m$ to $\mathbb{Z}_q^n$. Here $n,m,q,d$ are parameters, $\mathbf{f}$ is a vector in $\mathbb{Z}^n$, and $A$ is a block matrix with structured blocks $A^{(i)}=\mathbf{F}\ast a^{(i)}$. Finding short vectors in $\Lambda_q^{\perp}([\mathbf{F}\ast a_1\mid\ldots\mid\mathbf{F}\ast a_{m/n}])$ on the average (even with just inverse polynomial probability) is as hard as solving various lattice problems (such as approximate SVP and SIVP) in the worst case over ideal lattices, provided the vector $\mathbf{f}$ satisfies the following two properties: for any two unit vectors $\mathbf{u},\mathbf{v}$, the vector $[\mathbf{F}\ast\mathbf{u}]\mathbf{v}$ has small norm; and the polynomial corresponding to $\mathbf{f}$ is irreducible over the integers. The first property is satisfied by the vector $\mathbf{f}=(-1,0,\dots,0)$ corresponding to circulant matrices, because all the coordinates of $[\mathbf{F}\ast\mathbf{u}]\mathbf{v}$ are bounded by 1, and hence $\|[\mathbf{F}\ast\mathbf{u}]\mathbf{v}\|\leq\sqrt{n}$. However, the polynomial $x^n-1$ corresponding to $\mathbf{f}=(-1,0,\dots,0)$ is not irreducible, because it factors into $(x-1)(x^{n-1}+x^{n-2}+\cdots+x+1)$, and this is why collisions can be found efficiently. So $\mathbf{f}=(-1,0,\dots,0)$ is not a good choice for collision resistant hash functions, but many other choices are possible. For example, a choice of $\mathbf{f}$ for which both properties are satisfied (and which therefore yields collision resistant hash functions with worst-case security guarantees) is $\mathbf{f}=(1,0,\dots,0)$ with $n$ a power of 2, corresponding to the irreducible polynomial $x^n+1$ used by SWIFFT below. Digital signature schemes are among the most important cryptographic primitives. They can be obtained by using the one-way functions based on the worst-case hardness of lattice problems, but such generic constructions are impractical. A number of new digital signature schemes based on learning with errors, ring learning with errors and trapdoor lattices have been developed since the learning with errors problem was applied in a cryptographic context. Lyubashevsky and Micciancio gave a direct construction of digital signatures based on the complexity of approximating the shortest vector in ideal (e.g., cyclic) lattices. [8] Their scheme [8] has worst-case security guarantees based on ideal lattices, and it is the most asymptotically efficient construction known to date, yielding signature generation and verification algorithms that run in almost linear time. [12] One of the main open problems raised by their work is constructing a one-time signature with similar efficiency but based on a weaker hardness assumption.
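The factorization facts used above, namely that x^n - 1 always splits off the factor (x - 1) while x^n + 1 is irreducible for n a power of 2, can be checked directly with a computer algebra system; a small sympy sketch:

from sympy import symbols, factor_list

x = symbols('x')

# x^n - 1 always has the factor (x - 1), which is what breaks collision
# resistance for f = (-1, 0, ..., 0); x^n + 1 is irreducible for n a power of 2.
for n in (4, 8, 16):
    _, fac_minus = factor_list(x**n - 1)
    _, fac_plus = factor_list(x**n + 1)
    print(n, len(fac_minus), "irreducible factors for x^n - 1;",
          len(fac_plus), "for x^n + 1")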
For instance, a one-time signature with security based on the hardness of approximating the Shortest Vector Problem (SVP) in ideal lattices to within a factor of $\tilde{O}(n)$ would be desirable. [8] Their construction is based on a standard transformation from one-time signatures (i.e. signatures that allow one to securely sign a single message) to general signature schemes, together with a novel construction of a lattice-based one-time signature whose security is ultimately based on the worst-case hardness of approximating the shortest vector in all lattices corresponding to ideals in the ring $\mathbb{Z}[x]/\langle f\rangle$ for any irreducible polynomial $f$. Key-Generation Algorithm: Input: $1^n$ and an irreducible polynomial $f\in\mathbb{Z}[x]$ of degree $n$. Signing Algorithm: Input: message $z\in R$ such that $\|z\|_\infty\leq 1$; signing key $(\hat{k},\hat{l})$. Output: $\hat{s}\leftarrow\hat{k}z+\hat{l}$. Verification Algorithm: Input: message $z$; signature $\hat{s}$; verification key $(h, h(\hat{k}), h(\hat{l}))$. Output: "ACCEPT" if $\|\hat{s}\|_\infty\leq 10\varphi p^{1/m}n\log^2 n$ and $h(\hat{s})=h(\hat{k})z+h(\hat{l})$; "REJECT" otherwise. The hash function is quite efficient and can be computed asymptotically in $\tilde{O}(m)$ time using the Fast Fourier Transform (FFT) over the complex numbers. However, in practice this carries a substantial overhead. The SWIFFT family of hash functions defined by Micciancio and Regev [12] is essentially a highly optimized variant of the hash function above, using the FFT over $\mathbb{Z}_q$. The vector $\mathbf{f}$ is set to $(1,0,\dots,0)\in\mathbb{Z}^n$ for $n$ equal to a power of 2, so that the corresponding polynomial $x^n+1$ is irreducible. Let $q$ be a prime number such that $2n$ divides $q-1$, and let $\mathbf{W}\in\mathbb{Z}_q^{n\times n}$ be an invertible matrix over $\mathbb{Z}_q$, to be chosen later. The SWIFFT hash function maps a key $\tilde{a}^{(1)},\dots,\tilde{a}^{(m/n)}$ consisting of $m/n$ vectors chosen uniformly from $\mathbb{Z}_q^n$ and an input $y\in\{0,\dots,d-1\}^m$ to $\mathbf{W}\cdot f_A(y)\bmod q$, where $\mathbf{A}=[\mathbf{F}\ast a^{(1)},\dots,\mathbf{F}\ast a^{(m/n)}]$ is as before and $a^{(i)}=\mathbf{W}^{-1}\tilde{a}^{(i)}\bmod q$.
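A minimal sketch of the arithmetic SWIFFT optimizes: multiplication in Z_q[x]/(x^n + 1) via a number-theoretic transform, i.e. the FFT over Z_q mentioned above. The parameters are toy values, not SWIFFT's published ones; psi = 2 works here because 2^4 = 16 = -1 (mod 17), so 2 is a primitive 2n-th root of unity.

# Negacyclic multiplication in Z_q[x]/(x^n + 1) via a number-theoretic
# transform over Z_q.  Toy parameters: n = 4, q = 17 (2n = 8 divides q - 1).
n, q, psi = 4, 17, 2

def ntt(a):
    # evaluate a(x) at the odd powers psi^(2j+1), the roots of x^n + 1 mod q
    return [sum(ai * pow(psi, i * (2 * j + 1), q) for i, ai in enumerate(a)) % q
            for j in range(n)]

def intt(vals):
    # interpolate back: a_i = n^{-1} * sum_j vals_j * psi^{-i(2j+1)}
    n_inv, psi_inv = pow(n, -1, q), pow(psi, -1, q)
    return [n_inv * sum(v * pow(psi_inv, i * (2 * j + 1), q)
                        for j, v in enumerate(vals)) % q
            for i in range(n)]

def negacyclic_mul(a, b):
    return intt([(x * y) % q for x, y in zip(ntt(a), ntt(b))])

print(negacyclic_mul([0, 1, 0, 0], [0, 0, 0, 1]))  # x * x^3 = x^4 = -1: [16, 0, 0, 0]

Evaluating at the odd powers of psi is exactly evaluation at the roots of x^n + 1 modulo q, which is why pointwise multiplication of the transforms realizes the negacyclic product.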
Multiplication by the invertible matrix $\mathbf{W}^{-1}$ maps a uniformly chosen $\tilde{a}\in\mathbb{Z}_q^n$ to a uniformly chosen $a\in\mathbb{Z}_q^n$. Moreover, $\mathbf{W}\cdot f_A(y)\equiv\mathbf{W}\cdot f_A(y')\pmod q$ if and only if $f_A(y)\equiv f_A(y')\pmod q$. Together, these two facts establish that finding collisions in SWIFFT is equivalent to finding collisions in the underlying ideal lattice function $f_A$, and the claimed collision resistance property of SWIFFT is supported by the connection to worst-case lattice problems on ideal lattices. The learning with errors (LWE) problem has been shown to be as hard as worst-case lattice problems and has served as the foundation for many cryptographic applications. However, these applications are inefficient because of an inherent quadratic overhead in the use of LWE. To get truly efficient LWE applications, Lyubashevsky, Peikert and Regev [3] defined an appropriate version of the LWE problem in a wide class of rings and proved its hardness under worst-case assumptions on ideal lattices in these rings. They called their LWE version ring-LWE. Let $f(x)=x^n+1\in\mathbb{Z}[x]$, where the security parameter $n$ is a power of 2, making $f(x)$ irreducible over the rationals. (This particular $f(x)$ comes from the family of cyclotomic polynomials, which play a special role in this work.) Let $R=\mathbb{Z}[x]/\langle f(x)\rangle$ be the ring of integer polynomials modulo $f(x)$. Elements of $R$ (i.e., residues modulo $f(x)$) are typically represented by integer polynomials of degree less than $n$. Let $q\equiv 1\bmod 2n$ be a sufficiently large public prime modulus (bounded by a polynomial in $n$), and let $R_q=R/\langle q\rangle=\mathbb{Z}_q[x]/\langle f(x)\rangle$ be the ring of integer polynomials modulo both $f(x)$ and $q$. Elements of $R_q$ may be represented by polynomials of degree less than $n$ whose coefficients are from $\{0,\dots,q-1\}$. In the above-described ring, the R-LWE problem may be described as follows. Let $s=s(x)\in R_q$ be a uniformly random ring element, which is kept secret. Analogously to standard LWE, the goal of the attacker is to distinguish arbitrarily many (independent) "random noisy ring equations" from truly uniform pairs. More specifically, the noisy equations are of the form $(a, b\approx a\cdot s)\in R_q\times R_q$, where $a$ is uniformly random and the product $a\cdot s$ is perturbed by some "small" random error term, chosen from a certain distribution over $R$.
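A sketch of generating ring-LWE samples under these conventions. The parameters are toy values (q = 257 is prime with q = 1 mod 2n for n = 8), and the narrow {-1, 0, 1} error below is a stand-in for the actual error distribution in the formal definition, which is a discretized Gaussian.

import random

n, q = 8, 257                        # q prime, q = 1 (mod 2n = 16)

def mul_mod(a, b):
    """Schoolbook product in Z_q[x]/(x^n + 1): x^n wraps around to -1."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q
            else:
                res[k - n] = (res[k - n] - ai * bj) % q
    return res

def rlwe_sample(s):
    """One noisy ring equation (a, b = a*s + e) with a uniform and e small."""
    a = [random.randrange(q) for _ in range(n)]
    e = [random.choice([-1, 0, 0, 1]) % q for _ in range(n)]   # toy error
    b = [(x + y) % q for x, y in zip(mul_mod(a, s), e)]
    return a, b

secret = [random.randrange(q) for _ in range(n)]
print(rlwe_sample(secret))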
They gave a quantum reduction from approximate SVP (in the worst case) on ideal lattices in $R$ to the search version of ring-LWE, where the goal is to recover the secret $s\in R_q$ (with high probability, for any $s$) from arbitrarily many noisy products. This result follows the general outline of Regev's iterative quantum reduction for general lattices, [13] but ideal lattices introduce several new technical roadblocks in both the "algebraic" and "geometric" components of the reduction. They [3] used algebraic number theory, in particular the canonical embedding of a number field and the Chinese remainder theorem, to overcome these obstacles. They obtained the following theorem: Theorem. Let $K$ be an arbitrary number field of degree $n$. Let $\alpha=\alpha(n)\in(0,1)$ be arbitrary, and let the (rational) integer modulus $q=q(n)\geq 2$ be such that $\alpha\cdot q\geq\omega(\sqrt{\log n})$. There is a probabilistic polynomial-time quantum reduction from $K$-$DGS_\gamma$ to $\mathcal{O}_K$-$LWE_{q,\Psi\leq\alpha}$, where $\gamma=\eta_\epsilon(I)\cdot\omega(\sqrt{\log n})/\alpha$. In 2013, Güneysu, Lyubashevsky, and Pöppelmann proposed a digital signature scheme based on the Ring Learning with Errors problem. [14] In 2014, Peikert presented a Ring Learning with Errors key exchange (RLWE-KEX) in his paper "Lattice Cryptography for the Internet". [10] This was further developed by the work of Singh. [15] Stehlé, Steinfeld, Tanaka and Xagawa [16] defined a structured variant of the LWE problem (Ideal-LWE) in order to describe an efficient public key encryption scheme based on the worst-case hardness of approximate SVP in ideal lattices. This is the first CPA-secure public key encryption scheme whose security relies on the hardness of the worst-case instances of $\tilde{O}(n^2)$-Ideal-SVP against subexponential quantum attacks. It achieves asymptotically optimal efficiency: the public/private key length is $\tilde{O}(n)$ bits and the amortized encryption/decryption cost is $\tilde{O}(1)$ bit operations per message bit (encrypting $\tilde{\Omega}(n)$ bits at once, at an $\tilde{O}(n)$ cost). The security assumption here is that $\tilde{O}(n^2)$-Ideal-SVP cannot be solved by any subexponential-time quantum algorithm. It is noteworthy that this is stronger than standard public key cryptography security assumptions. On the other hand, contrary to most of public key cryptography, lattice-based cryptography allows security against subexponential quantum attacks. Most of the cryptosystems based on general lattices rely on the average-case hardness of learning with errors (LWE). Their scheme is based on a structured variant of LWE, which they call Ideal-LWE. They needed to introduce some techniques to circumvent two main difficulties arising from the restriction to ideal lattices.
Firstly, the previous cryptosystems based on unstructured lattices all make use of Regev's worst-case to average-case classical reduction from the bounded distance decoding problem (BDD) to LWE (this is the classical step in the quantum reduction from SVP to LWE). This reduction exploits the unstructuredness of the considered lattices and does not seem to carry over to the structured lattices involved in Ideal-LWE. In particular, the probabilistic independence of the rows of the LWE matrices makes it possible to consider a single row. Secondly, the other ingredient used in previous cryptosystems, namely Regev's reduction from the computational variant of LWE to its decisional variant, also seems to fail for Ideal-LWE: it relies on the probabilistic independence of the columns of the LWE matrices. To overcome these difficulties, they avoided the classical step of the reduction. Instead, they used the quantum step to construct a new quantum average-case reduction from SIS (the average-case collision-finding problem) to LWE. It also works from Ideal-SIS to Ideal-LWE. Combined with the reduction from worst-case Ideal-SVP to average-case Ideal-SIS, they obtained a quantum reduction from Ideal-SVP to Ideal-LWE. This shows the hardness of the computational variant of Ideal-LWE. Because they did not obtain the hardness of the decisional variant, they used a generic hardcore function to derive pseudorandom bits for encryption. This is why they needed to assume the exponential hardness of SVP. A fully homomorphic encryption (FHE) scheme is one which allows computation over encrypted data without first needing to decrypt. The problem of constructing a fully homomorphic encryption scheme was first put forward by Rivest, Adleman and Dertouzos [17] in 1978, shortly after the invention of RSA by Rivest, Shamir and Adleman. [18] An encryption scheme $\varepsilon=(\mathsf{KeyGen},\mathsf{Encrypt},\mathsf{Decrypt},\mathsf{Eval})$ is homomorphic for circuits in $\mathcal{C}$ if, for any circuit $C\in\mathcal{C}$, given $PK,SK\leftarrow\mathsf{KeyGen}(1^\lambda)$, $y=\mathsf{Encrypt}(PK,x)$, and $y'=\mathsf{Eval}(PK,C,y)$, it holds that $\mathsf{Decrypt}(SK,y')=C(x)$. $\varepsilon$ is fully homomorphic if it is homomorphic for all circuits of size $\operatorname{poly}(\lambda)$, where $\lambda$ is the scheme's security parameter. In 2009, Gentry [19] proposed the first solution to the problem of constructing a fully homomorphic encryption scheme. His scheme was based on ideal lattices.
https://en.wikipedia.org/wiki/Ideal_lattice
The term ideal machine refers to a hypothetical mechanical system in which energy and power are not lost or dissipated through friction, deformation, wear, or other inefficiencies. Ideal machines have the theoretical maximum performance and are therefore used as a baseline for evaluating the performance of real machine systems. [1][2] A simple machine, such as a lever, pulley, or gear train, is "ideal" if the power input is equal to the power output of the device, which means there are no losses. In this case, the mechanical efficiency is 100%. Mechanical efficiency is the performance of a machine compared to its theoretical maximum, i.e. as performed by an ideal machine. The mechanical efficiency of a simple machine is calculated by dividing the actual power output by the ideal power output, and is usually expressed as a percentage. Power loss in a real system can occur in many ways, such as through friction, deformation, wear, heat losses, incomplete chemical conversion, and magnetic and electrical losses. A machine consists of a power source and a mechanism for the controlled use of this power. The power source often relies on chemical conversion to generate heat, which is then used to generate power. Each stage of the process of power generation has a maximum performance limit, which is identified as ideal. Once the power is generated, the mechanism components of the machine direct it toward useful forces and movement. The ideal mechanism does not absorb any power, which means the power input is equal to the power output. An example is the automobile engine (internal combustion engine), which burns fuel (an exothermic chemical reaction) inside a cylinder and uses the expanding gases to drive a piston. [3] The movement of the piston rotates the crankshaft. The remaining mechanical components, such as the transmission, drive shaft, differential, axles and wheels, form the power transmission mechanism that directs the power from the engine into friction forces on the road to move the automobile. The ideal machine has the maximum energy-conversion performance combined with a lossless power transmission mechanism, which together yield the maximum possible performance.
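A minimal sketch of the efficiency calculation just described; the function name and the 10 kW gearbox example are ours, chosen for illustration.

def mechanical_efficiency(power_out_actual_w, power_in_w):
    """For an ideal machine the output power equals the input power, so the
    ideal output is just the input; efficiency is actual/ideal, as a percent."""
    return 100.0 * power_out_actual_w / power_in_w

# A gearbox delivering 8.7 kW from a 10 kW input is 87% efficient;
# the ideal gearbox would deliver the full 10 kW (100%).
print(mechanical_efficiency(8700, 10000))   # 87.0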
https://en.wikipedia.org/wiki/Ideal_machine
In number theory, an ideal number is an algebraic integer which represents an ideal in the ring of integers of a number field; the idea was developed by Ernst Kummer, and led to Richard Dedekind's definition of ideals for rings. An ideal in the ring of integers of an algebraic number field is principal if it consists of multiples of a single element of the ring. By the principal ideal theorem, any non-principal ideal becomes principal when extended to an ideal of the Hilbert class field. This means that there is an element of the ring of integers of the Hilbert class field, which is an ideal number, such that the original non-principal ideal is equal to the collection of all multiples of this ideal number by elements of this ring of integers that lie in the original field's ring of integers. For instance, let $y$ be a root of $y^2+y+6=0$. Then the ring of integers of the field $\mathbb{Q}(y)$ is $\mathbb{Z}[y]$, which means all $a+b\cdot y$ with $a$ and $b$ integers form the ring of integers. An example of a nonprincipal ideal in this ring is the set of all $2a+y\cdot b$ where $a$ and $b$ are integers; the cube of this ideal is principal, and in fact the class group is cyclic of order three. The corresponding class field is obtained by adjoining an element $w$ satisfying $w^3-w-1=0$ to $\mathbb{Q}(y)$, giving $\mathbb{Q}(y,w)$. An ideal number for the nonprincipal ideal $2a+y\cdot b$ is $\iota=(-8-16y-18w+12w^2+10yw+yw^2)/23$. Since this satisfies the equation $\iota^6-2\iota^5+13\iota^4-15\iota^3+16\iota^2+28\iota+8=0$, it is an algebraic integer. All elements of the ring of integers of the class field which, when multiplied by $\iota$, give a result in $\mathbb{Z}[y]$ are of the form $a\cdot\alpha+b\cdot\beta$ for certain fixed algebraic integers $\alpha$ and $\beta$ of the class field. Multiplying $a\cdot\alpha+b\cdot\beta$ by the ideal number $\iota$ gives $2a+b\cdot y$, which generates the nonprincipal ideal. Kummer first published the failure of unique factorization in cyclotomic fields in 1844 in an obscure journal; it was reprinted in 1847 in Liouville's journal. In subsequent papers in 1846 and 1847 he published his main theorem, the unique factorization into (actual and ideal) primes. It is widely believed that Kummer was led to his "ideal complex numbers" by his interest in Fermat's Last Theorem; there is even a story often told that Kummer, like Lamé, believed he had proven Fermat's Last Theorem until Lejeune Dirichlet told him his argument relied on unique factorization. But the story was first told by Kurt Hensel in 1910, and the evidence indicates it likely derives from a confusion by one of Hensel's sources. Harold Edwards says the belief that Kummer was mainly interested in Fermat's Last Theorem "is surely mistaken" (Edwards 1977, p. 79).
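The arithmetic in the worked example above can be machine-checked. The sympy sketch below reduces the stated sextic in iota modulo the defining relations y^2 + y + 6 = 0 and w^3 - w - 1 = 0; because the leading terms y^2 and w^3 of these two relations are coprime, they form a Groebner basis, so the remainder is a true normal form and a zero remainder confirms the claim above that iota is an algebraic integer.

from sympy import symbols, expand, reduced, Rational

y, w = symbols('y w')
relations = [y**2 + y + 6, w**3 - w - 1]

iota = Rational(1, 23) * (-8 - 16*y - 18*w + 12*w**2 + 10*y*w + y*w**2)
sextic = (iota**6 - 2*iota**5 + 13*iota**4
          - 15*iota**3 + 16*iota**2 + 28*iota + 8)

# reduced() divides by the relations and returns (quotients, remainder);
# the remainder is the normal form of the expression in Q(y, w).
_, remainder = reduced(expand(sextic), relations, y, w)
print(remainder)   # expected: 0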
Kummer's use of the letter λ to represent a prime number, α to denote a λth root of unity, and his study of the factorization of prime numbers $p\equiv 1\pmod{\lambda}$ into "complex numbers composed of λth roots of unity" all derive directly from a paper of Jacobi which is concerned with higher reciprocity laws. Kummer's 1844 memoir was in honor of the jubilee celebration of the University of Königsberg and was meant as a tribute to Jacobi. Although Kummer had studied Fermat's Last Theorem in the 1830s and was probably aware that his theory would have implications for its study, it is more likely that the subject of Jacobi's (and Gauss's) interest, higher reciprocity laws, held more importance for him. Kummer referred to his own partial proof of Fermat's Last Theorem for regular primes as "a curiosity of number theory rather than a major item" and to the higher reciprocity law (which he stated as a conjecture) as "the principal subject and the pinnacle of contemporary number theory". On the other hand, this latter pronouncement was made when Kummer was still excited about the success of his work on reciprocity and when his work on Fermat's Last Theorem was running out of steam, so it may perhaps be taken with some skepticism. The extension of Kummer's ideas to the general case was accomplished independently by Kronecker and Dedekind during the next forty years. A direct generalization encountered formidable difficulties, and it eventually led Dedekind to the creation of the theory of modules and ideals. Kronecker dealt with the difficulties by developing a theory of forms (a generalization of quadratic forms) and a theory of divisors. Dedekind's contribution would become the basis of ring theory and abstract algebra, while Kronecker's would become major tools in algebraic geometry.
https://en.wikipedia.org/wiki/Ideal_number
Ideal observer analysis is a method for investigating how information is processed in a perceptual system . [ 1 ] [ 2 ] [ 3 ] It is also a basic principle that guides modern research in perception . [ 4 ] [ 5 ] The ideal observer is a theoretical system that performs a specific task in an optimal way. If there is uncertainty in the task, then perfect performance is impossible and the ideal observer will make errors. Ideal performance is the theoretical upper limit of performance. It is theoretically impossible for a real system to perform better than ideal. Typically, real systems are only capable of sub-ideal performance. This technique is useful for analyzing psychophysical data (see psychophysics ). Many definitions of this term have been offered. Geisler (2003) [ 6 ] (slightly reworded): The central concept in ideal observer analysis is the ideal observer , a theoretical device that performs a given task in an optimal fashion given the available information and some specified constraints. This is not to say that ideal observers perform without error, but rather that they perform at the physical limit of what is possible in the situation. The fundamental role of uncertainty and noise implies that ideal observers must be defined in probabilistic (statistical) terms. Ideal observer analysis involves determining the performance of the ideal observer in a given task and then comparing its performance to that of a real perceptual system , which (depending on the application) might be the system as a whole, a subsystem, or an elementary component of the system (e.g. a neuron). In sequential ideal observer analysis , [ 7 ] the goal is to measure a real system's performance deficit (relative to ideal) at different processing stages. Such an approach is useful when studying systems that process information in discrete (or semi-discrete) stages or modules. To facilitate experimental design in the laboratory, an artificial task may be designed so that the system's performance in the task may be studied. If the task is too artificial, the system may be pushed away from a natural mode of operation. Depending on the goals of the experiment, this may diminish its external validity . In such cases, it may be important to keep the system operating naturally (or almost naturally) by designing a pseudo-natural task. Such tasks are still artificial, but they attempt to mimic the natural demands placed on a system. For example, the task might employ stimuli that resemble natural scenes and might test the system's ability to make potentially useful judgments about these stimuli. Natural scene statistics are the basis for calculating ideal performance in natural and pseudo-natural tasks. This calculation tends to incorporate elements of signal detection theory , information theory , or estimation theory . Das and Geisler [ 8 ] described and computed the detection and classification performance of ideal observers when the stimuli are normally distributed. These include the error rate and confusion matrix for ideal observers when the stimuli come from two or more univariate or multivariate normal distributions (i.e. yes/no, two-interval , multi-interval tasks and general multi-category classification tasks), the discriminability index of the ideal observer ( Bayes discriminability index ) and its relation to the receiver operating characteristic .
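For the normally distributed case just mentioned, the ideal observer for a yes/no task with two equal-variance Gaussian stimulus distributions has a closed form; a minimal sketch (equal priors assumed, in which case the optimal likelihood-ratio rule reduces to a midpoint criterion):

from math import erf, sqrt

def phi(z):                                  # standard normal CDF
    return 0.5 * (1 + erf(z / sqrt(2)))

def ideal_observer(mu0, mu1, sigma):
    """Ideal yes/no observer for N(mu0, sigma) vs N(mu1, sigma), equal priors."""
    d_prime = (mu1 - mu0) / sigma            # discriminability index d'
    criterion = (mu0 + mu1) / 2              # optimal decision criterion
    error_rate = phi(-d_prime / 2)           # ideal (minimum) error rate
    return d_prime, criterion, error_rate

print(ideal_observer(0.0, 1.0, 0.5))         # d' = 2.0, ideal error ~ 15.9%

Even this optimal device errs on about 16% of trials here: the overlap of the two distributions is the uncertainty that makes perfect performance impossible, and a real system's performance deficit is measured against this limit.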
https://en.wikipedia.org/wiki/Ideal_observer_analysis
In hyperbolic geometry, an ideal point, omega point [1] or point at infinity is a well-defined point outside the hyperbolic plane or space. Given a line l and a point P not on l, the right- and left-limiting parallels to l through P converge to l at ideal points. Unlike in the projective case, the ideal points form a boundary, not a submanifold; the limiting parallels therefore do not actually intersect l at an ideal point, and such points, although well defined, do not belong to the hyperbolic space itself. The ideal points together form the Cayley absolute or boundary of a hyperbolic geometry. For instance, the unit circle forms the Cayley absolute of the Poincaré disk model and the Klein disk model, and the real line forms the Cayley absolute of the Poincaré half-plane model. [2] Pasch's axiom and the exterior angle theorem still hold for an omega triangle, defined by two points in hyperbolic space and an omega point. [3] If all vertices of a triangle are ideal points, the triangle is an ideal triangle. If all vertices of a quadrilateral are ideal points, the quadrilateral is an ideal quadrilateral. While all ideal triangles are congruent, not all convex ideal quadrilaterals are: they can vary from each other, for instance, in the angle at which their two diagonals cross each other. Nevertheless, all convex ideal quadrilaterals have certain properties in common. The ideal quadrilateral whose two diagonals are perpendicular to each other forms an ideal square. It was used by Ferdinand Karl Schweikart in his memorandum on what he called "astral geometry", one of the first publications acknowledging the possibility of hyperbolic geometry. [5] An ideal n-gon can be subdivided into (n − 2) ideal triangles, with area (n − 2) times the area of an ideal triangle. In the Klein disk model and the Poincaré disk model of the hyperbolic plane, the ideal points lie on the unit circle (hyperbolic plane) or unit sphere (higher dimensions), which is the unreachable boundary of the hyperbolic plane. When the same hyperbolic line is projected to the Klein disk model and the Poincaré disk model, both lines go through the same two ideal points (the ideal points in both models are in the same spot). In the Klein disk model: given two distinct points p and q in the open unit disk, the unique straight line connecting them intersects the unit circle in two ideal points, a and b, labeled so that the points are, in order, a, p, q, b, so that |aq| > |ap| and |pb| > |qb|. The hyperbolic distance between p and q is then $d(p,q)=\tfrac{1}{2}\ln\frac{|aq|\,|pb|}{|ap|\,|qb|}$. In the Poincaré disk model: given two distinct points p and q in the open unit disk, the unique circular arc orthogonal to the boundary connecting them intersects the unit circle in two ideal points, a and b, labeled so that the points are, in order, a, p, q, b, so that |aq| > |ap| and |pb| > |qb|. The hyperbolic distance between p and q is then $d(p,q)=\ln\frac{|aq|\,|pb|}{|ap|\,|qb|}$, where the distances |aq|, |ap|, |pb| and |qb| are measured along straight line segments. In the Poincaré half-plane model the ideal points are the points on the boundary axis. There is also another ideal point that is not represented in the half-plane model (but rays parallel to the positive y-axis approach it). In the hyperboloid model there are no ideal points.
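The cross-ratio formulas above can be checked numerically. For two points on a diameter of the Poincaré disk the geodesic is the diameter itself, so the ideal points are a = -1 and b = 1; the sketch compares the cross-ratio expression with the standard arcosh form of the Poincaré metric.

from math import log, acosh

def dist_via_ideal_points(p, q):      # 0 <= p < q < 1 on the real axis
    a, b = -1.0, 1.0
    # |aq| = q - a, |pb| = b - p, |ap| = p - a, |qb| = b - q
    return log(((q - a) * (b - p)) / ((p - a) * (b - q)))

def dist_poincare(p, q):              # standard Poincare disk metric
    return acosh(1 + 2 * (p - q) ** 2 / ((1 - p * p) * (1 - q * q)))

p, q = 0.2, 0.7
print(dist_via_ideal_points(p, q))    # both print ~1.3292
print(dist_poincare(p, q))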
https://en.wikipedia.org/wiki/Ideal_point
An ideal solution or ideal mixture is a solution that exhibits thermodynamic properties analogous to those of a mixture of ideal gases. [1][2] The enthalpy of mixing is zero, [3] as is the volume change on mixing. [2] The vapor pressures of all components obey Raoult's law across the entire range of concentrations, [2] and the activity coefficient (which measures deviation from ideality) is equal to one for each component. [4] The concept of an ideal solution is fundamental to chemical thermodynamics and its applications, such as the explanation of colligative properties. Ideality of solutions is analogous to ideality for gases, with the important difference that intermolecular interactions in liquids are strong and cannot simply be neglected as they can for ideal gases. Instead we assume that the mean strength of the interactions is the same between all the molecules of the solution. More formally, for a mixture of molecules of A and B, the interactions between unlike neighbors ($U_{AB}$) and like neighbors ($U_{AA}$ and $U_{BB}$) must be of the same average strength, i.e. $2U_{AB}=U_{AA}+U_{BB}$, and the longer-range interactions must be nil (or at least indistinguishable). If the molecular forces are the same between AA, AB and BB, i.e. $U_{AB}=U_{AA}=U_{BB}$, then the solution is automatically ideal. If the molecules are almost identical chemically, e.g. 1-butanol and 2-butanol, then the solution will be almost ideal. Since the interaction energies between A and B are almost equal, it follows that there is only a very small overall energy (enthalpy) change when the substances are mixed. The more dissimilar the nature of A and B, the more strongly the solution is expected to deviate from ideality. Different related definitions of an ideal solution have been proposed. The simplest definition is that an ideal solution is a solution for which each component obeys Raoult's law $p_i=x_i p_i^*$ for all compositions. Here $p_i$ is the vapor pressure of component $i$ above the solution, $x_i$ is its mole fraction and $p_i^*$ is the vapor pressure of the pure substance $i$ at the same temperature. [2][5][6] This definition depends on vapor pressure, which is a directly measurable property, at least for volatile components. The thermodynamic properties may then be obtained from the chemical potential μ (which is the partial molar Gibbs energy g) of each component. If the vapor is an ideal gas, $$\mu_i(T,p_i)=g_i(T,p_i)=g_i^u(T,p^u)+RT\ln\frac{p_i}{p^u}.$$ The reference pressure $p^u$ may be taken as 1 bar, or as the pressure of the mixture, whichever is simpler. On substituting the value of $p_i$ from Raoult's law, $$\mu_i(T,p_i)=g_i^u(T,p^u)+RT\ln\frac{p_i^*}{p^u}+RT\ln x_i=\mu_i^*(T)+RT\ln x_i.$$ This equation for the chemical potential can be used as an alternative definition of an ideal solution. However, the vapor above the solution may not actually behave as a mixture of ideal gases. Some authors therefore define an ideal solution as one for which each component obeys the fugacity analogue of Raoult's law, $f_i=x_i f_i^*$. Here $f_i$ is the fugacity of component $i$ in solution and $f_i^*$ is the fugacity of $i$ as a pure substance.
[7][8] Since the fugacity is defined by the equation $$\mu_i(T,P)=g_i(T,P)=g_i^u(T,p^u)+RT\ln\frac{f_i}{p^u},$$ this definition leads to ideal values of the chemical potential and other thermodynamic properties even when the component vapors above the solution are not ideal gases. An equivalent statement uses thermodynamic activity instead of fugacity. [9] If we differentiate this last equation with respect to $p$ at constant $T$ we get $$\left(\frac{\partial\mu_i}{\partial p}\right)_T=RT\left(\frac{\partial\ln f_i}{\partial p}\right)_T.$$ Since we know from the Gibbs potential equation that $\left(\frac{\partial\mu_i}{\partial p}\right)_T=v_i$ with the molar volume $v_i$, these last two equations put together give $$\left(\frac{\partial\ln f_i}{\partial p}\right)_T=\frac{v_i}{RT}.$$ Since all of this was derived for a pure substance, it remains valid in an ideal mixture on adding the subscript $i$ to all the intensive variables and changing $v$ to $\bar{v}_i$, the partial molar volume: $$\left(\frac{\partial\ln f_i}{\partial p}\right)_{T,x_i}=\frac{\bar{v}_i}{RT}.$$ Applying the fugacity form of Raoult's law, $f_i=x_i f_i^*$, to this last equation we find $$\bar{v}_i=v_i^*,$$ which means that the partial molar volumes in an ideal mixture are independent of composition. Consequently, the total volume is the sum of the volumes of the components in their pure forms: $V=\sum_i V_i^*$. Proceeding in a similar way, but taking the derivative with respect to $T$ and remembering that $\left(\frac{\partial(g/T)}{\partial T}\right)_P=-\frac{h}{T^2}$, we get $$-\frac{\bar{h}_i}{T^2}=-\frac{h_i^*}{T^2},$$ which in turn means that $\bar{h}_i=h_i^*$ and that the enthalpy of the mixture is equal to the sum of its component enthalpies. Since $\bar{u}_i=\bar{h}_i-p\bar{v}_i$ and $u_i^*=h_i^*-pv_i^*$, it similarly follows that $\bar{u}_i=u_i^*$. It is also easily verifiable that the heat capacities obey $\bar{C}_{p,i}=C_{p,i}^*$. Finally, since $$\mu_i=g_i^u+RT\ln\frac{f_i^*}{p^u}+RT\ln x_i=\mu_i^*+RT\ln x_i,$$ we find that $$\Delta g_{i,\mathrm{mix}}=\mu_i-\mu_i^*=RT\ln x_i.$$ Since the Gibbs free energy per mole of the mixture is $G_m=\sum_i x_i g_i$, then $$\Delta G_{\mathrm{mix},m}=RT\sum_i x_i\ln x_i.$$ At last we can calculate the molar entropy of mixing: since $g_i^*=h_i^*-Ts_i^*$ and $\bar{g}_i=\bar{h}_i-T\bar{s}_i$, $$\Delta s_{i,\mathrm{mix}}=-R\ln x_i,\qquad\Delta S_{\mathrm{mix},m}=-R\sum_i x_i\ln x_i.$$ Solvent–solute interactions are the same as solute–solute and solvent–solvent interactions, on average. Consequently, the enthalpy of mixing (solution) is zero, and the change in Gibbs free energy on mixing is determined solely by the entropy of mixing. Hence the molar Gibbs free energy of mixing is $$\Delta G_{\mathrm{mix},m}=RT\sum_i x_i\ln x_i,$$ or, for a two-component ideal solution, $$\Delta G_{\mathrm{mix},m}=RT(x_A\ln x_A+x_B\ln x_B),$$ where m denotes molar, i.e. change in Gibbs free energy per mole of solution, and $x_i$ is the mole fraction of component $i$. Note that this free energy of mixing is always negative (since each $x_i\in[0,1]$, each $\ln x_i$ or its limit for $x_i\to 0$ must be negative or infinite): ideal solutions are miscible at any composition and no phase separation will occur. The equation above can be expressed in terms of the chemical potentials of the individual components, $$\Delta G_{\mathrm{mix},m}=\sum_i x_i\,\Delta\mu_{i,\mathrm{mix}},$$ where $\Delta\mu_{i,\mathrm{mix}}=RT\ln x_i$ is the change in chemical potential of $i$ on mixing.
If the chemical potential of pure liquid $i$ is denoted $\mu_i^*$, then the chemical potential of $i$ in an ideal solution is $$\mu_i=\mu_i^*+RT\ln x_i.$$ Any component $i$ of an ideal solution obeys Raoult's law over the entire composition range, $$p_i=x_i\,(p_i)_{\text{pure}},$$ where $(p_i)_{\text{pure}}$ is the equilibrium vapor pressure of pure component $i$ and $x_i$ is the mole fraction of component $i$ in solution. Deviations from ideality can be described by the use of Margules functions or activity coefficients. A single Margules parameter may be sufficient to describe the properties of the solution if the deviations from ideality are modest; such solutions are termed regular. In contrast to ideal solutions, where volumes are strictly additive and mixing is always complete, the volume of a non-ideal solution is not, in general, the simple sum of the volumes of the component pure liquids, and solubility is not guaranteed over the whole composition range. By measurement of densities, the thermodynamic activity of components can be determined.
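A small numeric sketch of these relations for a binary ideal solution; the pure-component vapor pressures below are illustrative round numbers, not measured data.

from math import log

R = 8.314  # gas constant, J/(mol K)

def ideal_binary(x_a, p_pure_a, p_pure_b, T):
    """Raoult's-law total vapor pressure plus molar Gibbs energy and
    entropy of mixing for an ideal two-component solution."""
    x_b = 1 - x_a
    p_a, p_b = x_a * p_pure_a, x_b * p_pure_b            # Raoult's law
    dG_mix = R * T * (x_a * log(x_a) + x_b * log(x_b))   # J/mol, always < 0
    dS_mix = -R * (x_a * log(x_a) + x_b * log(x_b))      # J/(mol K)
    return p_a + p_b, dG_mix, dS_mix

# equimolar mixture, pure vapor pressures 10 kPa and 20 kPa, at 298 K
print(ideal_binary(0.5, 10e3, 20e3, 298.0))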
https://en.wikipedia.org/wiki/Ideal_solution
An ideal solid surface is flat, rigid, perfectly smooth, and chemically homogeneous, and has zero contact angle hysteresis. Zero hysteresis implies that the advancing and receding contact angles are equal; in other words, only one thermodynamically stable contact angle exists. When a drop of liquid is placed on such a surface, the characteristic contact angle is formed, as depicted in Fig. 1. Furthermore, on an ideal surface the drop will return to its original shape if it is disturbed. [1] The following derivations apply only to ideal solid surfaces; they are valid only for the state in which the interfaces are not moving and the phase boundary line exists in equilibrium. Figure 3 shows the line of contact where three phases meet. In equilibrium, the net force per unit length acting along the boundary line between the three phases must be zero. The components of the net force along each of the interfaces yield a system of relations between the angles α, β, and θ shown and the surface energies γ_ij between the two indicated phases. These relations can also be expressed by an analog of a triangle known as Neumann's triangle, shown in Figure 4. Neumann's triangle is consistent with the geometrical restriction that $\alpha+\beta+\theta=2\pi$, and applying the law of sines and law of cosines to it produces relations that describe how the interfacial angles depend on the ratios of surface energies. [2] Because these three surface energies form the sides of a triangle, they are constrained by the triangle inequalities, γ_ij < γ_jk + γ_ik, meaning that none of the surface tensions can exceed the sum of the other two. If three fluids with surface energies that do not follow these inequalities are brought into contact, no equilibrium configuration consistent with Figure 3 will exist. If the β phase is replaced by a flat rigid surface, as shown in Figure 5, then β = π, and the second net force equation simplifies to the Young equation, [3] $$\gamma_{SG}=\gamma_{SL}+\gamma_{LG}\cos\theta,$$ which relates the surface tensions between the three phases: solid, liquid and gas. This predicts the contact angle of a liquid droplet on a solid surface from knowledge of the three surface energies involved. The equation also applies if the "gas" phase is another liquid, immiscible with the droplet of the first "liquid" phase. The Young equation assumes a perfectly flat and rigid surface. In many cases surfaces are far from this ideal situation, and two such cases are considered here: rough surfaces and smooth surfaces that are still real (finitely rigid). Even on a perfectly smooth surface, a drop will assume a wide spectrum of contact angles ranging from the so-called advancing contact angle $\theta_A$ to the so-called receding contact angle $\theta_R$. The equilibrium contact angle $\theta_c$ can be calculated from $\theta_A$ and $\theta_R$, as was shown by Tadmor. [5] The Young–Dupré equation (Thomas Young 1805, Dupré 1855) dictates that neither γ_SG nor γ_SL can be larger than the sum of the other two surface energies. The consequence of this restriction is the prediction of complete wetting when γ_SG > γ_SL + γ_LG and zero wetting when γ_SL > γ_SG + γ_LG.
The lack of a solution to the Young–Dupré equation is an indicator that there is no equilibrium configuration with a contact angle between 0 and 180° in those situations. A useful parameter for gauging wetting is the spreading parameter S, $$S=\gamma_{SG}-(\gamma_{SL}+\gamma_{LG}).$$ When S > 0, the liquid wets the surface completely (complete wetting); when S < 0, partial wetting occurs. Combining the spreading parameter definition with the Young relation yields the Young–Dupré equation, $$S=\gamma_{LG}(\cos\theta-1),$$ which has physical solutions for θ only when S < 0.
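A minimal sketch putting the Young equation and the spreading parameter together; the surface energies (in mN/m) are illustrative values, not measured data.

from math import acos, degrees

def contact_angle(gamma_sg, gamma_sl, gamma_lg):
    """Spreading parameter and Young contact angle on an ideal surface."""
    s = gamma_sg - (gamma_sl + gamma_lg)           # spreading parameter
    if s >= 0:
        return s, 0.0                              # complete wetting
    cos_theta = (gamma_sg - gamma_sl) / gamma_lg   # Young equation
    if cos_theta < -1:
        return s, 180.0                            # zero wetting
    return s, degrees(acos(cos_theta))

print(contact_angle(40.0, 25.0, 72.8))   # S < 0: partial wetting, theta ~ 78 deg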
https://en.wikipedia.org/wiki/Ideal_surface
Ideal tasks arise during task analysis. Ideal tasks are different from real tasks: they are ideals in the Platonic sense in which a circle is an ideal whereas a drawn circle is flawed and real. The study of theoretically best or "mathematically ideal" tasks (Green & Swets, 1966) has been the basis of the branch of stimulus control in psychology called psychophysics, as well as being part of artificial intelligence (e.g. Goel & Chandrasekaran, 1992). Such studies include the instantiation of such ideal tasks in the real world. The notion of the ideal task has also played an important role in information theory. Tasks are defined as sequences of contingencies, each presenting stimuli and requiring an action or a sequence of actions to occur in some non-arbitrary fashion. These contingencies may provide stimuli that require the discrimination not only of relations among actions and events but also among the task actions themselves. Task actions, E, are actions that are required to complete tasks. Properties of tasks (usually the stimuli, or the relationship among stimuli and actions) are varied, and responses to them can be measured and analyzed.
https://en.wikipedia.org/wiki/Ideal_tasks
The temperatures of a planet's surface and atmosphere are governed by a delicate balancing of their energy flows. The idealized greenhouse model is based on the fact that certain gases in the Earth's atmosphere, including carbon dioxide and water vapour, are transparent to the high-frequency solar radiation but are much more opaque to the lower-frequency infrared radiation leaving Earth's surface. Thus heat is easily let in, but is partially trapped by these gases as it tries to leave. Rather than the atmosphere getting hotter and hotter, Kirchhoff's law of thermal radiation says that the gases of the atmosphere also have to re-emit the infrared energy that they absorb, and they do so, also at long infrared wavelengths, both upwards into space and downwards back towards the Earth's surface. In the long term, the planet's thermal inertia is surmounted and a new radiative equilibrium is reached in which all energy arriving at the planet leaves again at the same rate. In this steady-state model, the greenhouse gases cause the surface of the planet to be warmer than it would be without them, in order for a balanced amount of heat energy to finally be radiated out into space from the top of the atmosphere. [1] Essential features of this model were first published by Svante Arrhenius in 1896. [2] It has since become a common introductory "textbook model" of the radiative heat transfer physics underlying Earth's energy balance and the greenhouse effect. [3][4][5] The planet is idealized by the model as being functionally "layered" with regard to a sequence of simplified energy flows, but dimensionless (i.e. a zero-dimensional model) in terms of its mathematical space. [6] The layers include a surface with constant temperature T_s and an atmospheric layer with constant temperature T_a. For diagrammatic clarity, a gap can be depicted between the atmosphere and the surface. Alternatively, T_s could be interpreted as a temperature representative of the surface and the lower atmosphere, and T_a could be interpreted as the temperature of the upper atmosphere, also called the skin temperature. In order to justify that T_a and T_s remain constant over the planet, strong oceanic and atmospheric currents can be imagined to provide plentiful lateral mixing. Furthermore, the temperatures are understood to be multi-decadal averages such that any daily or seasonal cycles are insignificant. The model will find the values of T_s and T_a that allow the outgoing radiative power, escaping the top of the atmosphere, to be equal to the absorbed radiative power of sunlight. When applied to a planet like Earth, the outgoing radiation will be longwave and the sunlight will be shortwave. These two streams of radiation will have distinct emission and absorption characteristics. In the idealized model, we assume the atmosphere is completely transparent to sunlight. The planetary albedo α_P is the fraction of the incoming solar flux that is reflected back to space (since the atmosphere is assumed totally transparent to solar radiation, it does not matter whether this albedo is imagined to be caused by reflection at the surface of the planet or at the top of the atmosphere, or a mixture). The flux density of the incoming solar radiation is specified by the solar constant S_0. For application to planet Earth, appropriate values are S_0 = 1366 W m⁻² and α_P = 0.30.
Accounting for the fact that the surface area of a sphere is 4 times the area of its intercept (its shadow), the average incoming radiation is S_0/4. For longwave radiation, the surface of the Earth is assumed to have an emissivity of 1 (i.e. it is a black body in the infrared, which is realistic). The surface emits a radiative flux density F according to the Stefan–Boltzmann law, $$F=\sigma T_s^4,$$ where σ is the Stefan–Boltzmann constant. A key to understanding the greenhouse effect is Kirchhoff's law of thermal radiation: at any given wavelength, the absorptivity of the atmosphere will be equal to the emissivity. Radiation from the surface could be in a slightly different portion of the infrared spectrum than the radiation emitted by the atmosphere. The model assumes that the average emissivity (absorptivity) is identical for either of these streams of infrared radiation as they interact with the atmosphere. Thus, for longwave radiation, one symbol ε denotes both the emissivity and the absorptivity of the atmosphere, for any stream of infrared radiation. The infrared flux density out of the top of the atmosphere is computed as $$F\uparrow\;=\varepsilon\sigma T_a^4+(1-\varepsilon)\sigma T_s^4.$$ In the last term, ε represents the fraction of upward longwave radiation from the surface that is absorbed, the absorptivity of the atmosphere; the remaining fraction (1 − ε) is transmitted to space through an atmospheric window. In the first term on the right, ε is the emissivity of the atmosphere, the adjustment of the Stefan–Boltzmann law to account for the fact that the atmosphere is not optically thick. Thus ε plays the role of neatly blending, or averaging, the two streams of radiation in the calculation of the outward flux density. Zero net radiation leaving the top of the atmosphere requires $$-\tfrac{1}{4}S_0(1-\alpha_P)+\varepsilon\sigma T_a^4+(1-\varepsilon)\sigma T_s^4=0.$$ Zero net radiation entering the surface requires $$\tfrac{1}{4}S_0(1-\alpha_P)+\varepsilon\sigma T_a^4-\sigma T_s^4=0.$$ Energy equilibrium of the atmosphere can either be derived from the two equilibrium conditions above or deduced independently: $$2\varepsilon\sigma T_a^4=\varepsilon\sigma T_s^4.$$ Note the important factor of 2, resulting from the fact that the atmosphere radiates both upward and downward; thus the ratio of T_a to T_s is independent of ε: $$T_a=\frac{T_s}{2^{1/4}}.$$ Thus T_a can be expressed in terms of T_s, and a solution is obtained for T_s in terms of the model input parameters: $$T_s=\left[\frac{S_0(1-\alpha_P)}{4\sigma\left(1-\varepsilon/2\right)}\right]^{1/4}.$$ The solution can also be expressed in terms of the effective emission temperature T_e, which is the temperature that characterizes the outgoing infrared flux density F, as if the radiator were a perfect radiator obeying F = σT_e⁴. This is easy to conceptualize in the context of the model: T_e is also the solution for T_s for the case ε = 0, i.e. no atmosphere, $$T_e=\left[\frac{S_0(1-\alpha_P)}{4\sigma}\right]^{1/4}.$$ With this definition of T_e, $$T_s=\frac{T_e}{\left(1-\varepsilon/2\right)^{1/4}}.$$ For a perfect greenhouse, with no radiation escaping from the surface (ε = 1), $T_s=2^{1/4}T_e$ and $T_a=T_e$. Using the parameters defined above to be appropriate for Earth, T_e ≈ 255 K. For ε = 1, T_s ≈ 303 K. For ε = 0.78, T_s ≈ 288.3 K. This value of T_s happens to be close to the published 287.2 K of the average global "surface temperature" based on measurements. [7] ε = 0.78 implies that 22% of the surface radiation escapes directly to space, consistent with the statement of 15% to 30% escaping in the greenhouse effect. The radiative forcing for doubling carbon dioxide is 3.71 W m⁻² in a simple parameterization; this is also the value endorsed by the IPCC. From the equation for $F\uparrow$, $$\Delta F\uparrow\;=\Delta\varepsilon\,\sigma\left(T_a^4-T_s^4\right).$$ Using the values of T_s and T_a for ε = 0.78 allows for $\Delta F\uparrow=-3.71$ W m⁻² with Δε = 0.019. Thus a change of ε from 0.78 to 0.80 is consistent with the radiative forcing from a doubling of carbon dioxide.
For ε=0.80, T s = 289.5 K. Thus this model predicts a global warming of ΔT s = 1.2 K for a doubling of carbon dioxide. A typical prediction from a GCM is 3 K surface warming, primarily because the GCM allows for positive feedback , notably from increased water vapor. A simple surrogate for including this feedback process is to posit an additional increase of Δε=0.02, for a total Δε=0.04, to approximate the effect of the increase in water vapor that would be associated with an increase in temperature. [ 8 ] This idealized model then predicts a global warming of ΔT s = 2.4 K for a doubling of carbon dioxide, roughly consistent with the IPCC. The one-level atmospheric model can be readily extended to a multiple-layer atmosphere. [ 9 ] [ 10 ] In this case the equations for the temperatures become a series of coupled equations. These simple energy-balance models always predict a decreasing temperature away from the surface, with all levels increasing in temperature as greenhouse gases are added. Neither of these effects is fully realistic: in the real atmosphere temperatures increase above the tropopause , and temperatures in that layer are predicted (and observed) to decrease as GHGs are added. [ 11 ] This is directly related to the non-greyness of the real atmosphere. An interactive version of a model with 2 atmospheric layers, and which accounts for convection, is available online. [ 12 ]
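The equilibrium solution above is straightforward to evaluate numerically. The following is a minimal sketch, added here for illustration and not part of the original article, that assumes the Earth-like parameter values quoted above and reproduces the quoted temperatures:

```python
# Minimal numerical sketch of the idealized greenhouse model, assuming the
# Earth-like parameters quoted above (S0 = 1366 W m^-2, albedo = 0.30).
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0, alpha_P = 1366.0, 0.30

def surface_temperature(eps):
    """Equilibrium surface temperature T_s for atmospheric emissivity eps."""
    Te = ((1 - alpha_P) * S0 / (4 * sigma)) ** 0.25   # effective temperature
    return Te * (1 - eps / 2) ** -0.25

print(round(surface_temperature(0.0), 1))    # ~254.8 K (no atmosphere, T_e)
print(round(surface_temperature(0.78), 1))   # ~288.3 K (Earth-like)
print(round(surface_temperature(1.0), 1))    # ~303.0 K (perfect greenhouse)
# Warming from a CO2 doubling, represented as eps going from 0.78 to 0.80:
print(round(surface_temperature(0.80) - surface_temperature(0.78), 2))  # ~1.2 K
```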
https://en.wikipedia.org/wiki/Idealized_greenhouse_model
In abstract algebra , the idealizer of a subsemigroup T of a semigroup S is the largest subsemigroup of S in which T is an ideal . [ 1 ] Such an idealizer is given by {\displaystyle \mathbb {I} _{S}(T)=\{s\in S\mid sT\subseteq T{\text{ and }}Ts\subseteq T\}} In ring theory , if A is an additive subgroup of a ring R , then I R ( A ) {\displaystyle \mathbb {I} _{R}(A)} (defined in the multiplicative semigroup of R ) is the largest subring of R in which A is a two-sided ideal. [ 2 ] [ 3 ] In Lie algebra , if L is a Lie ring (or Lie algebra ) with Lie product [ x , y ], and S is an additive subgroup of L , then the set {\displaystyle \{r\in L\mid [r,S]\subseteq S\}} is classically called the normalizer of S , however it is apparent that this set is actually the Lie ring equivalent of the idealizer. It is not necessary to specify that [ S , r ] ⊆ S , because anticommutativity of the Lie product causes [ s , r ] = −[ r , s ] ∈ S . The Lie "normalizer" of S is the largest subring of L in which S is a Lie ideal. Often, when right or left ideals are the additive subgroups of R of interest, the idealizer is defined more simply by taking advantage of the fact that multiplication by ring elements is already absorbed on one side. Explicitly, {\displaystyle \mathbb {I} _{R}(T)=\{r\in R\mid rT\subseteq T\}} if T is a right ideal, or {\displaystyle \mathbb {I} _{R}(L)=\{r\in R\mid Lr\subseteq L\}} if L is a left ideal. In commutative algebra , the idealizer is related to a more general construction. Given a commutative ring R , and given two subsets A and B of a right R -module M , the conductor or transporter is given by {\displaystyle (A:B)=\{r\in R\mid Br\subseteq A\}} In terms of this conductor notation, an additive subgroup B of R has idealizer {\displaystyle \mathbb {I} _{R}(B)=(B:B)} When A and B are ideals of R , the conductor is part of the structure of the residuated lattice of ideals of R . The multiplier algebra M ( A ) of a C*-algebra A is isomorphic to the idealizer of π ( A ) where π is any faithful nondegenerate representation of A on a Hilbert space H .
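As an illustration, here is a small worked example (added here, not taken from the original article). In the ring R = M_2(k) of 2 × 2 matrices over a field k, take T to be the right ideal of matrices whose second row is zero; its idealizer is the ring of upper triangular matrices:

```latex
% Worked example: the idealizer of a right ideal in R = M_2(k).
T=\left\{\begin{pmatrix}a&b\\0&0\end{pmatrix}: a,b\in k\right\},
\qquad
\mathbb{I}_{R}(T)=\{r\in R\mid rT\subseteq T\}
  =\left\{\begin{pmatrix}p&q\\0&t\end{pmatrix}: p,q,t\in k\right\}.
% For r = (p q; s t) and x = (a b; 0 0), the product rx has second row
% (sa, sb), which vanishes for all a, b exactly when s = 0. The upper
% triangular matrices are thus the largest subring of M_2(k) in which
% T is a two-sided ideal.
```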
https://en.wikipedia.org/wiki/Idealizer
An ideally hard superconductor is a type II superconductor material with an infinite pinning force . In an external magnetic field it behaves like an ideal diamagnet if the field is switched on when the material is already in the superconducting state, the so-called "zero field cooled" (ZFC) regime. In the field cooled (FC) regime, the ideally hard superconductor perfectly screens changes of the magnetic field rather than the magnetic field itself. Its magnetization behavior can be described by Bean's critical state model . The ideally hard superconductor is a good approximation for the melt-textured [ 1 ] high temperature superconductors ( HTSC ) used in large scale HTSC applications such as flywheels , HTSC bearings, HTSC motors, etc. [ 2 ]
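For concreteness, Bean's critical state model can be evaluated numerically. The sketch below is an illustration added here, with assumed parameter values (Jc, slab half-thickness); it computes the internal flux profile and magnetization of a superconducting slab on the virgin (ZFC) curve:

```python
import numpy as np

# Sketch of Bean's critical-state model for a slab of half-thickness `a`
# in a parallel applied field, ZFC regime. Jc and a are assumed values.
mu0 = 4e-7 * np.pi
Jc = 1e8                  # critical current density, A/m^2 (assumed)
a = 1e-3                  # slab half-thickness, m (assumed)
Bp = mu0 * Jc * a         # full-penetration field, ~0.126 T here

def B_inside(Ba, x):
    """Flux density at depth x from the surface on the virgin curve:
    B falls off linearly with slope mu0*Jc until it reaches zero."""
    return np.maximum(Ba - mu0 * Jc * x, 0.0)

x = np.linspace(0.0, a, 2001)
for Ba in (0.5 * Bp, 1.0 * Bp):
    B = B_inside(Ba, x)
    M = B.mean() / mu0 - Ba / mu0   # magnetization M = <B>/mu0 - H_applied
    print(f"Ba = {Ba:.3f} T: M = {M:.0f} A/m")
# At Ba = Bp the screening magnetization reaches -Jc*a/2 = -50000 A/m here.
```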
https://en.wikipedia.org/wiki/Ideally_hard_superconductor
Ideation is the creative process of generating, developing, and communicating new ideas, where an idea is understood as a basic unit of thought that can be visual, concrete, or abstract. [ 1 ] Ideation comprises all stages of a thought cycle, from innovation to development to actualization. [ 2 ] Ideation can be conducted by individuals, organizations, or crowds. As such, it is an essential part of the design process , both in education and practice. [ 3 ] [ 4 ] The word "ideation" has come under informal criticism as being a term of meaningless jargon, [ 5 ] as well as being inappropriately similar to the psychiatric term for suicidal ideation . [ 6 ] There are many methods and approaches for ideation, encompassing a range of common ideation techniques.
https://en.wikipedia.org/wiki/Ideation_(creative_process)
In linear algebra , an idempotent matrix is a matrix which, when multiplied by itself, yields itself. [ 1 ] [ 2 ] That is, the matrix A {\displaystyle A} is idempotent if and only if A 2 = A {\displaystyle A^{2}=A} . For this product A 2 {\displaystyle A^{2}} to be defined , A {\displaystyle A} must necessarily be a square matrix . Viewed this way, idempotent matrices are idempotent elements of matrix rings . Examples of 2 × 2 {\displaystyle 2\times 2} idempotent matrices are: {\displaystyle {\begin{bmatrix}1&0\\0&1\end{bmatrix}}\qquad {\begin{bmatrix}3&-6\\1&-2\end{bmatrix}}} Examples of 3 × 3 {\displaystyle 3\times 3} idempotent matrices are: {\displaystyle {\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}\qquad {\begin{bmatrix}2&-2&-4\\-1&3&4\\1&-2&-3\end{bmatrix}}} If a matrix ( a b c d ) {\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}} is idempotent, then {\displaystyle a=a^{2}+bc,\qquad b=ab+bd,\qquad c=ca+cd,\qquad d=bc+d^{2}.} The second and third conditions give b ( 1 − a − d ) = 0 {\displaystyle b(1-a-d)=0} and c ( 1 − a − d ) = 0 {\displaystyle c(1-a-d)=0} . Thus, a necessary condition for a 2 × 2 {\displaystyle 2\times 2} matrix to be idempotent is that either it is diagonal or its trace equals 1. For idempotent diagonal matrices, a {\displaystyle a} and d {\displaystyle d} must be either 1 or 0. If b = c {\displaystyle b=c} , the matrix ( a b b 1 − a ) {\displaystyle {\begin{pmatrix}a&b\\b&1-a\end{pmatrix}}} will be idempotent provided a 2 + b 2 = a , {\displaystyle a^{2}+b^{2}=a,} so a satisfies the quadratic equation {\displaystyle a^{2}-a+b^{2}=0,} or {\displaystyle \left(a-{\tfrac {1}{2}}\right)^{2}+b^{2}={\tfrac {1}{4}},} which is a circle with center (1/2, 0) and radius 1/2. In terms of an angle θ, {\displaystyle a={\tfrac {1}{2}}(1+\cos \theta ),\qquad b={\tfrac {1}{2}}\sin \theta } is a solution. However, b = c {\displaystyle b=c} is not a necessary condition: any matrix {\displaystyle {\begin{pmatrix}a&b\\c&1-a\end{pmatrix}}} with a 2 + b c = a {\displaystyle a^{2}+bc=a} is idempotent. The only non- singular idempotent matrix is the identity matrix ; that is, if a non-identity matrix is idempotent, its number of independent rows (and columns) is less than its number of rows (and columns). This can be seen from writing A 2 = A {\displaystyle A^{2}=A} , assuming that A has full rank (is non-singular), and pre-multiplying by A − 1 {\displaystyle A^{-1}} to obtain A = I A = A − 1 A 2 = A − 1 A = I {\displaystyle A=IA=A^{-1}A^{2}=A^{-1}A=I} . When an idempotent matrix is subtracted from the identity matrix, the result is also idempotent. This holds since {\displaystyle (I-A)(I-A)=I-A-A+A^{2}=I-A.} If a matrix A is idempotent then for all positive integers n, A n = A {\displaystyle A^{n}=A} . This can be shown using proof by induction. Clearly we have the result for n = 1 {\displaystyle n=1} , as A 1 = A {\displaystyle A^{1}=A} . Suppose that A k − 1 = A {\displaystyle A^{k-1}=A} . Then, A k = A k − 1 A = A A = A {\displaystyle A^{k}=A^{k-1}A=AA=A} , since A is idempotent. Hence by the principle of induction, the result follows. An idempotent matrix is always diagonalizable . [ 3 ] Its eigenvalues are either 0 or 1: if x {\displaystyle \mathbf {x} } is a non-zero eigenvector of some idempotent matrix A {\displaystyle A} and λ {\displaystyle \lambda } its associated eigenvalue, then λ x = A x = A 2 x = A λ x = λ A x = λ 2 x , {\textstyle \lambda \mathbf {x} =A\mathbf {x} =A^{2}\mathbf {x} =A\lambda \mathbf {x} =\lambda A\mathbf {x} =\lambda ^{2}\mathbf {x} ,} which implies λ ∈ { 0 , 1 } . {\displaystyle \lambda \in \{0,1\}.} This further implies that the determinant of an idempotent matrix is always 0 or 1. As stated above, if the determinant is equal to one, the matrix is invertible and is therefore the identity matrix . The trace of an idempotent matrix — the sum of the elements on its main diagonal — equals the rank of the matrix and thus is always an integer. 
This provides an easy way of computing the rank, or alternatively an easy way of determining the trace of a matrix whose elements are not specifically known (which is helpful in statistics , for example, in establishing the degree of bias in using a sample variance as an estimate of a population variance ). In regression analysis, the matrix M = I − X ( X ′ X ) − 1 X ′ {\displaystyle M=I-X(X'X)^{-1}X'} is known to produce the residuals e {\displaystyle e} from the regression of the vector of dependent variables y {\displaystyle y} on the matrix of covariates X {\displaystyle X} . (See the section on Applications.) Now, let X 1 {\displaystyle X_{1}} be a matrix formed from a subset of the columns of X {\displaystyle X} , and let M 1 = I − X 1 ( X 1 ′ X 1 ) − 1 X 1 ′ {\displaystyle M_{1}=I-X_{1}(X_{1}'X_{1})^{-1}X_{1}'} . It is easy to show that both M {\displaystyle M} and M 1 {\displaystyle M_{1}} are idempotent, but a somewhat surprising fact is that M M 1 = M {\displaystyle MM_{1}=M} . This is because M X 1 = 0 {\displaystyle MX_{1}=0} , or in other words, the residuals from the regression of the columns of X 1 {\displaystyle X_{1}} on X {\displaystyle X} are 0 since X 1 {\displaystyle X_{1}} can be perfectly interpolated as it is a subset of X {\displaystyle X} (by direct substitution it is also straightforward to show that M X = 0 {\displaystyle MX=0} ). This leads to two other important results: one is that ( M 1 − M ) {\displaystyle (M_{1}-M)} is symmetric and idempotent, and the other is that ( M 1 − M ) M = 0 {\displaystyle (M_{1}-M)M=0} , i.e., ( M 1 − M ) {\displaystyle (M_{1}-M)} is orthogonal to M {\displaystyle M} . These results play a key role, for example, in the derivation of the F test. Any matrix similar to an idempotent matrix is also idempotent: idempotency is conserved under a change of basis . This can be shown through multiplication of the transformed matrix S A S − 1 {\displaystyle SAS^{-1}} with A {\displaystyle A} being idempotent: ( S A S − 1 ) 2 = ( S A S − 1 ) ( S A S − 1 ) = S A ( S − 1 S ) A S − 1 = S A 2 S − 1 = S A S − 1 {\displaystyle (SAS^{-1})^{2}=(SAS^{-1})(SAS^{-1})=SA(S^{-1}S)AS^{-1}=SA^{2}S^{-1}=SAS^{-1}} . Idempotent matrices arise frequently in regression analysis and econometrics . For example, in ordinary least squares , the regression problem is to choose a vector β of coefficient estimates so as to minimize the sum of squared residuals (mispredictions) e i ; in matrix form, the problem is to minimize {\displaystyle (y-X\beta )^{\textsf {T}}(y-X\beta )} where y {\displaystyle y} is a vector of dependent variable observations, and X {\displaystyle X} is a matrix each of whose columns is a column of observations on one of the independent variables . The resulting estimator is {\displaystyle {\hat {\beta }}=\left(X^{\textsf {T}}X\right)^{-1}X^{\textsf {T}}y} where superscript T indicates a transpose , and the vector of residuals is [ 2 ] {\displaystyle {\hat {e}}=y-X{\hat {\beta }}=\left[I-X\left(X^{\textsf {T}}X\right)^{-1}X^{\textsf {T}}\right]y=My} Here both M {\displaystyle M} and X ( X T X ) − 1 X T {\displaystyle X\left(X^{\textsf {T}}X\right)^{-1}X^{\textsf {T}}} (the latter being known as the hat matrix ) are idempotent and symmetric matrices, a fact which allows simplification when the sum of squared residuals is computed: {\displaystyle {\hat {e}}^{\textsf {T}}{\hat {e}}=(My)^{\textsf {T}}(My)=y^{\textsf {T}}M^{\textsf {T}}My=y^{\textsf {T}}My} The idempotency of M {\displaystyle M} plays a role in other calculations as well, such as in determining the variance of the estimator β ^ {\displaystyle {\hat {\beta }}} . An idempotent linear operator P {\displaystyle P} is a projection operator on the range space ⁠ R ( P ) {\displaystyle R(P)} ⁠ along its null space ⁠ N ( P ) {\displaystyle N(P)} ⁠ . P {\displaystyle P} is an orthogonal projection operator if and only if it is idempotent and symmetric .
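The properties above are easy to verify numerically. The following is a minimal sketch (added here for illustration, using simulated data) that checks the idempotency of the hat matrix and the residual-maker matrix M, the trace-equals-rank property, and the sum-of-squared-residuals simplification:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # design matrix
y = rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix: projects onto col(X)
M = np.eye(n) - H                      # residual-maker matrix

print(np.allclose(H @ H, H), np.allclose(M @ M, M))  # True True: idempotent
print(np.isclose(np.trace(H), k))      # trace of idempotent matrix = rank (= k)
print(np.isclose(np.trace(M), n - k))  # rank of M is n - k
e = M @ y                              # residuals from regressing y on X
print(np.isclose(e @ e, y @ M @ y))    # SSR simplification: e'e = y'My
```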
https://en.wikipedia.org/wiki/Idempotent_matrix
Identification in biology is the process of assigning a pre-existing taxon name to an individual organism . Identification of organisms to individual scientific names (or codes) may be based on individualistic natural body features, [ 1 ] experimentally created individual markers (e.g., color dot patterns), or natural individualistic molecular markers (similar to those used in maternity or paternity identification tests ). Individual identification is used in ecology , wildlife management and conservation biology . The more common form of identification is the identification of organisms to common names (e.g., "lion") or scientific names (e.g., " Panthera leo "). By necessity this is based on inherited features ("characters") of the sexual organisms, the inheritance forming the basis of defining a class. The features may, e.g., be morphological, anatomical, physiological, behavioral, or molecular. The term "determination" may occasionally be used as a synonym for identification (e.g.), [ 2 ] or as in "determination slips". [ 3 ] Identification methods may be manual or computerized and may involve using identification keys , browsing through field guides that contain (often illustrated) species accounts, comparing the organism with specimens from natural history collections, or taking images to be analyzed and compared against a pre-trained knowledge base with species information. [ 4 ]
https://en.wikipedia.org/wiki/Identification_(biology)
In biology , an identification key , taxonomic key , or frequently just key , is a printed or computer-aided device that aids in the identification of biological organisms. Historically, the most common type of identification key is the dichotomous key , a type of single-access key which offers a fixed sequence of identification steps, each with two alternatives. The earliest examples of identification keys originate in the seventeenth century, but their conceptual history can be traced back to antiquity. Modern multi-access keys allow the user to freely choose the identification steps in any order. They were traditionally implemented using punched cards but now almost exclusively take the form of computer programs. The conceptual origins of the modern identification key can be traced back to antiquity. Theophrastus categorized organisms into "subdivisions" based on dichotomous characteristics. The seventeenth-century Chinese herbalist Pao Shan, in his treatise Yeh-ts'ai Po-Iu , included a systematic categorization of plants based on their apparent characteristics specifically for the purposes of identification. [ 1 ] : 2 Seventeenth-century naturalists, including John Ray , Rivinius , and Nehemiah Grew , published examples of bracketed tables. However, these examples were not strictly keys in the modern sense of an analytical device used to identify a single specimen, since they often did not lead to a single end point, and instead functioned more as synopses of classification schemes. [ 1 ] : 3–8 The first analytical identification key is credited to Lamarck , who included several in his 1778 book, Flore Françoise . Lamarck's keys follow more or less the same design as the modern dichotomous, bracketed key. [ 1 ] : 10 Alphonso Wood was the first American to use identification keys, in 1845. Other early instances of keys are found in the works of Asa Gray and W. H. Evans . [ 1 ] : 12–14 Identification keys are known historically and contemporarily by many names, including analytical key, entomological key, artificial key, [ 1 ] diagnostic key, [ 2 ] determinator, [ 3 ] and taxonomic key. [ 4 ] Within the biological literature, identification keys are referred to simply as keys . [ 5 ] They are also commonly referred to in general as dichotomous keys, [ 6 ] though this term strictly refers to a specific type of identification key (see Types of keys ). Identification keys are used in systematic biology and taxonomy to identify the genus or species of a specimen organism from a set of known taxa . They are commonly used in the fields of microbiology, plant taxonomy, and entomology, as groups of related taxa in these fields tend to be very large. [ 3 ] However, they have also been used to classify non-organisms, such as birds' nests, and in non-biological sciences such as geology. [ 1 ] : 14–15 Similar methods have also been used in computer science. [ 7 ] A user of a key selects from a series of choices, representing mutually exclusive features of the specimen, with the aim of arriving at the sole remaining identity from the group of taxa. [ 8 ] Each step in the key employs a character : a distinguishing feature of an organism that is conveniently observable. [ 3 ] Identification keys are sometimes also referred to as artificial keys to differentiate them from other diagrams that visualize classification schemes, often in the form of a key or tree structure. These diagrams are called natural keys or synopses and are not used for identifying specimens. 
In contrast, an artificial identification key is a tool that utilizes characters that are the easiest to observe and the most practical for arriving at an identity. [ 2 ] : 7 [ 6 ] : 225 Identification keys can be divided into two main types. A single-access key (also called a sequential key or an analytical key) has a fixed structure and sequence. The user must begin at the first step of the key and proceed until the end. A single-access key whose steps each consist of two mutually exclusive statements ( leads ) is called a dichotomous key . Most single-access keys are dichotomous. [ 3 ] A single-access key with more than two leads per step is referred to as polytomous. [ 9 ] Dichotomous keys can be presented in two main styles: linked and nested. In the linked style (also referred to as open, parallel, and juxtaposition [ 9 ] : 63 ), each pair of leads (called a couplet ) is printed together. In the nested style (also referred to as closed, yoked, and indented [ 9 ] : 63 ), the subsequent steps after choosing a lead are printed directly underneath it, in succession. To follow the second lead of the couplet, the user must skip over the nested material that follows logically from the first lead of the couplet. [ 2 ] Nested keys are more commonly known as indented , but unfortunately this refers to an accidental (albeit frequent) rather than essential quality. Nested keys may be printed without indentation to preserve space (relying solely on corresponding lead symbols), and linked keys may be indented to enhance the visibility of the couplet structure. [ 9 ] : 63 A multi-access key (free-access key, [ 9 ] or polyclave [ 8 ] ) allows a user to specify characters in any order. Therefore, a multi-access key can be thought of as "the set of all possible single-access keys that arise by permutating the order of characters." [ 9 ] : 60 While there are print versions of multi-access keys, they were historically created using punched card systems. [ 8 ] Today, multi-access keys are computer-aided tools. [ 9 ] : 61 An early attempt to standardize the construction of keys was offered by E. B. Williamson in the June 1922 volume of Science . [ 10 ] More recently, Richard Pankhurst published guidelines and practical tips for key construction in a section of his 1978 book, Biological Identification . [ 2 ] : 15–22 Identification errors may have serious consequences in both pure and applied disciplines, including ecology , medical diagnosis, pest control, forensics , etc. [ 11 ] The first computer programs for constructing identification keys were created in the early 1970s. [ 12 ] [ 13 ] Since then, several popular programs have been developed, including DELTA, XPER, and LucID. [ 3 ] : 379–80 Single-access keys have, until recently, only rarely been developed as computer-aided, interactive tools. Noteworthy developments in this area are the commercial LucID Phoenix application, the FRIDA/Dryades software, the KeyToNature Open Key Editor, and the open source WikiKeys and jKey application on biowikifarm. [ 9 ] : 62 This article incorporates text from a free content work, licensed under CC BY-SA ( license statement/permission ): text taken from Types of identification keys​ , Gregor Hagedorn, Gerhard Rambold, Stefano Martellos, Edizioni Università di Trieste. Pankhurst, Richard John (1991). Practical taxonomic computing . Cambridge: Cambridge University Press. ISBN 978-0-521-41760-0 . Chapters 4–6.
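To make the couplet-and-lead structure of a dichotomous key concrete, here is a minimal sketch added for illustration (the characters and taxa of this miniature key are invented, not taken from the article), representing each couplet as a node in a small binary tree:

```python
# A minimal sketch of a dichotomous (single-access) key as a binary tree.
# Each couplet is a (question, first_lead, second_lead) tuple; leaves are taxa.
key = ("Leaves needle-like?",
       ("Needles in bundles?", "Pinus (pine)", "Picea (spruce)"),
       ("Leaf margin toothed?", "Ulmus (elm)", "Cornus (dogwood)"))

def identify(node, answers):
    """Walk the key; `answers` is a sequence of booleans, one per couplet
    visited, True meaning the first lead of the couplet applies."""
    answers = iter(answers)
    while isinstance(node, tuple):
        question, first_lead, second_lead = node
        node = first_lead if next(answers) else second_lead
    return node  # a terminal taxon name

print(identify(key, [True, False]))   # -> 'Picea (spruce)'
print(identify(key, [False, True]))   # -> 'Ulmus (elm)'
```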
https://en.wikipedia.org/wiki/Identification_key
Standards for the identification of cell death have changed. Cell death used to be defined and described based on morphology . Now there is a shift toward classifying it based on molecular and genetic definitions. This description is more functional and applies both in vitro and in vivo , so cell death subroutines are now described by a series of precise, measurable, biochemical features. A set of recommendations for describing the terminology of cell death was proposed by the Nomenclature Committee on Cell Death (NCCD) in 2009, because misusing words and concepts may slow down progress in the area of cell death research. [ 1 ] The classic definition of death defines it as a state characterized by the cessation of signs of life. It is when a cell has lost the integrity of its plasma membrane and/or has undergone complete disintegration, including its nucleus, and/or its fragments have been engulfed by a neighboring cell in vivo. It is caused by an irreversible functional imbalance and collapse of the internal organization of a system. The role of cell death is the maintenance of tissue and organ homeostasis , for example, the regular loss of skin cells, or a more active role as seen in involuting tissues like the thymus. Cells die either by accident or design. There are two mechanisms of cell death: necrosis and apoptosis (apoptosis in invertebrates is called cell deletion). Dying cells are engaged in a process that is reversible until a first irreversible phase or "point of no return" is crossed. Necrosis is an unprogrammed death of cells, which involves early plasma membrane changes leading to calcium and sodium imbalance. This causes acidosis, osmotic shock, clumping of chromatin and nuclear pyknosis . These changes are accompanied by a loss of oxidative phosphorylation , a drop in ATP production, and a loss of homeostatic capability. There are also mitochondrial changes, which include calcium overload and activation of phospholipases leading to membrane diffusion signals, a stage of irreversible damage. The secondary stage involves swelling of the lysosome , dilation of the endoplasmic reticulum, a leakage of enzymes and proteins, and a loss of compartmentalization. Apoptosis, or programmed cell death, is generally characterized by distinct morphological characteristics and energy-dependent biochemical mechanisms. It is considered a vital component of various processes of life including normal cell turnover, proper development and functioning of the immune system, hormone-dependent atrophy, embryonic development and chemical-induced cell death. For example, the differentiation of fingers and toes in a developing human embryo occurs because cells between the fingers apoptose, resulting in separate digits. Common biochemical assays include FACS quantification of hypodiploid events (sub-G 1 peak), FACS colocalization studies, FACS quantification by means of fluorogenic substrates or specific antibodies, and immunoblotting after subcellular fractionation. [ 1 ] The morphometric method is a way to demonstrate cell death in the laboratory. Morphometric measurement provides the result of cell death as a volume, size, weight and length of tissue, organ and the whole organism, comparing before and after the occurrence of cell death. [ 2 ] This method was observed by Attalah and Johnson, who used electronic particle analyses to determine cell viability. 
Another indicator of cell death is acid hydrolase activity, released during the digestion of dead cells in phagocytosis by macrophages or neighboring cells; the intravital dye is a marker of secondary phagocytosis. To demonstrate cell death, in some cases a vital dye is used to detect when cellular function is disrupted. This procedure uses living tissue that is immersed in a diluted 1:10,000 solution of Nile blue sulphate in saline . The measurement of cell death with this dye involves observing a change of color or the formation of fluorescence . When the cell dies, the nucleus goes through destruction stages, one of them pyknosis, which leads to the release of a basic histone group; this happens when the irreversible condensation of chromatin occurs. The phagocytosis process takes place in secondary lysosomes, and autophagy and heterophagy control the dead cell through acid hydrolase activity. The technique used to demonstrate this is the detection of (6-3H)-thymidine and acid phosphatase activity in cryostat sections. The specimen was injected with (6-3H)-thymidine; the animal was sacrificed and the tissue removed after 1 hour and quenched in liquid nitrogen. Then 4 μm cryostat sections were cut and mounted on clean cover slips; the cover slips were held with sections in the cryostat, fixed in cold analar acetone for 10 minutes, rinsed in buffer, and incubated in acid phosphatase medium (15 minutes). Naphthol AS TR phosphate [ 3 ] was used as the substrate and hexazonium pararosaniline as the coupler. The sections were again rinsed thoroughly in distilled water and dipped in autoradiography emulsion (Ilford L4 diluted 1:5). Preparations were exposed at 0–4 °C for 2 to 3 weeks in a dark room. The autoradiographs were then processed, counterstained in haematoxylin and mounted for microscopy . The result of this experiment is a red color produced by the azo-dye technique described above, an indicator that cell autolysis occurred; demonstrating this was the major aim of the morphometric method. Another indicator is the production of silver grains in the photographic emulsion . The red color is due to a fine, homogeneous reaction of acid phosphatase activity. Free hydrolases in the lysosome are a further indication of cell death. Incorporated tritiated thymidine gives silver grains in the photographic emulsion over the cell nuclei. The ideal tissue for this procedure is thymus tissue. This discussion focuses on two changes that occur in the mouse thymus as an example study. The first change is the ratio of cells that are dying (diffusing acid phosphatase) and the second is the thymidine-incorporating cells (cells synthesizing DNA). The results are compared according to the age of the mouse. After measuring the ratios and numbers, the conclusion is that the level of cell death in the involuting thymus doubled in comparison to the young thymus, while thymidine incorporation decreased in the older thymus compared to the young thymus. Supporting these results, it was observed that some thymocytes contained lysosomal sites of acid phosphatase activity. When macrophages engulfed the dying cells, the levels of acid phosphatase increased. Lewis employed autoradiographic incorporation of 3H-thymidine to calculate mitotic indices and estimate pyknotic indices. This technique can be used to study the tissue kinetics of tumors and has applications in the scanning electron microscope . 
[ 4 ] The scanning electron microscope was employed by Hodges and Muir (1975) to study autoradiographs. This approach was combined with the cytochemical method for demonstrating free acid phosphatase and cell lysis . Dying cells, which are rich in free acid phosphatase, will contain a brominated reaction product and will give a characteristic signal for bromine when subjected to x-ray microanalysis . There are common fine-structural changes occurring in dying cells. This was concluded after attempts by scientists such as Kerr (1972), who proposed the general concept of apoptosis in vertebrates, while Schweichel and Merker (1973) described induced and physiological cell death in prenatal mouse tissues. Using fine-structural distinctions, it is possible to recognize and differentiate between the types of cell death. Acid phosphatase is an enzyme, or group of isoenzymes, which can be used in demonstrating cell death, and this can be easily observed with the electron microscope. Phosphatase activity toward p-nitrophenyl phosphate can be used as a good marker for cell death. This marker has been used to localize cell death in cells during embryological development, and the result observed was the release of ectoplasmic non-lysosomal acid phosphatase, which appears as a sign of cell death. Many experiments show that the ectoplasmic p-nitrophenyl phosphatase that was released is related to ribosomes, not to lysosomes. Programmed cell death implies genetic control of the process; thus genes specifying cell death in a developmental sequence must be present. Many authors show that there may be a premonitory increase in protein synthesis as a primer for programmed cell death.
https://en.wikipedia.org/wiki/Identification_of_cell_death
Apple 's Identifier for Advertisers ( IDFA ) is a unique random device identifier Apple generates and assigns to every device. It is intended to be used by advertisers to deliver personalized ads and attribute ad interactions for ad retargeting . [ 1 ] Users can opt out of IDFA via the "Limit Ad Tracking" (LAT) setting (and an estimated 20% do). [ 2 ] Starting in iOS 14.5 , iPadOS 14.5, and tvOS 14.5, users are prompted to decide whether to opt in or out of IDFA sharing before apps can query it. This choice can be altered in Settings. [ 3 ] [ 4 ] In May 2021, the Verizon-owned advertisement analytics company Flurry Analytics reported that 96% of US users opted out of IDFA sharing. [ 5 ] In iOS 10, Apple introduced the "Limit Ad Tracking" setting for users who do not wish to be tracked by advertising networks. If the setting is enabled, the system returns a default all-zero ID for that device. As of December 2020, it is estimated that approximately 20% of users turn on this setting. [ 2 ] On September 3, 2020, Apple announced plans to restrict access to IDFA and require websites and apps to obtain explicit permission from users before being granted access to IDFA. Since January 2021, users and developers could test this change by installing an iOS 14 beta release. [ 6 ] In July 2020, Facebook stated that this transparency requirement would likely hurt their advertising targeting. [ 7 ] Facebook said that these changes "may render [their tracking] so ineffective on iOS 14 that it may not make sense to offer it on iOS 14", and that Facebook apps on iOS 14, including Facebook , WhatsApp , Instagram , Messenger , and others, will not collect IDFA on iOS 14. [ 8 ] [ 9 ] In early September, Apple postponed these restrictions until early 2021. [ 10 ] In December 2020, the Mozilla Foundation expressed support for Apple restricting access to IDFA and asked users to sign a petition to "help strengthen [Apple's] resolve to protect consumer privacy". [ 11 ] On December 15, 2020, Facebook launched a "Speak Up for Small Businesses" campaign against Apple. In this campaign, Facebook purchased full-page advertisements in newspapers and created a web page claiming Facebook tries to help small businesses. This campaign became controversial even within Facebook itself, because some employees thought Facebook was "trying to justify doing a bad thing by hiding behind people with a sympathetic message." [ 12 ] On January 27, 2021, Google announced that when the new requirement goes into effect, a "handful" of Google apps will stop collecting IDFAs (and thus the apps will avoid displaying a prompt for allowing tracking of user activity). [ 13 ] In February 2021, the Post-IDFA Alliance surveyed 600 customers and noted that 38.5% of them said they plan to allow tracking by tapping "yes" in the App Tracking Transparency prompt. [ 14 ] [ 15 ] On March 18, 2021, Facebook changed its stance. Facebook CEO Mark Zuckerberg claimed that these changes might even strengthen Facebook's position "if Apple’s changes encourage more businesses to conduct more commerce on [Facebook's] platforms by making it harder for them to use their data in order to find the customers that would want to use their products outside of [Facebook's] platforms". [ 16 ] On April 1, 2021, the Apple App Store started rejecting apps which used the Adjust SDK and attempted to circumvent App Tracking Transparency rules via device fingerprinting (collecting device and usage data to create a unique identifier in order to track the user). 
[ 17 ] [ 18 ] On April 2, Adjust removed the offending code, and app developers could pass App Store review after updating to the new Adjust SDK version. [ 19 ] [ 20 ] In May 2021, the Verizon-owned advertisement analytics company Flurry reported that 96% of US users opted out of IDFA sharing. Approximately 3% of US users restricted IDFA sharing system-wide. [ 5 ] Apple unconditionally disables IDFA sharing for some Apple ID accounts. In this case apps do not display the permission prompt, and the Settings entry "Allow Apps to Request to Track" is grayed out. Restrictions apply if the Apple ID is: [ 21 ] In March 2021, the China Advertising Association announced that it was backing a device fingerprinting system called CAID as a work-around for Apple's new IDFA restrictions. [ 22 ] Companies testing the system reportedly include ByteDance and Tencent . [ 22 ]
https://en.wikipedia.org/wiki/Identifier_for_Advertisers
Identifiers.org is a project providing stable and perennial identifiers for data records used in the life sciences. The identifiers are provided in the form of Uniform Resource Identifiers (URIs). It is also a resolving system that relies on collections listed in the MIRIAM Registry to provide direct access to different instances of the identified records. The Identifiers.org URIs [ 1 ] [ 2 ] are perennial identifiers that specify at once the data collection, using the namespaces of the Registry, and the record identifier within the collection, in the form of a unique resolvable URI . The Identifiers.org resolving system is built upon the information stored in the MIRIAM Registry , [ 3 ] which is a database that stores namespaces assigned to commonly used data collections (databases and ontologies ) for the life sciences. It transforms an Identifiers.org URI into the various URLs leading to the various instances of the record identified by the URI. Identifiers.org is part of the Interoperability Platform of ELIXIR (the European life-sciences Infrastructure for Biological Information). An Identifiers.org URI is formed of several parts: the resolver's base URL (http://identifiers.org), a namespace prefix identifying the data collection, and the accession of the record within that collection. The system allows a consistent and uniform annotation of datasets. This in turn facilitates data alignment and integration. Identifiers.org URIs are used to encode the metadata in the standard formats of the COMBINE initiative, [ 4 ] such as SBML . In particular, databases such as BioModels Database and Reactome export their data in SBML with cross-references encoded using Identifiers.org URIs. These URIs are also used in various semantic web projects such as Bio2RDF , Open PHACTS and the EBI RDF platform. [ 5 ] Identifiers.org URIs have been developed since 2011 as a resolvable version of the MIRIAM identifiers, developed since 2005, which were of a URN form and not directly resolvable. Identifiers.org URIs are similar to PURLs , albeit providing alternative resolutions for collections with several instances. They are also similar to DOIs , but provide human-readable collection names and re-use the record identifier assigned by the data provider.
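As a small illustration of the URI structure just described, here is a sketch added for this purpose (the prefixes shown are examples of registered namespaces, used here for illustration):

```python
# Minimal sketch: composing classic-style Identifiers.org URIs from a
# namespace prefix and a record accession.
def identifiers_org_uri(namespace: str, accession: str) -> str:
    """Resolver base + registered namespace + record accession."""
    return f"http://identifiers.org/{namespace}/{accession}"

print(identifiers_org_uri("taxonomy", "9606"))    # NCBI Taxonomy: Homo sapiens
print(identifiers_org_uri("pubmed", "22140103"))  # a PubMed record
```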
https://en.wikipedia.org/wiki/Identifiers.org
In quantum information theory , the identity channel is a noise-free quantum channel . That is, the channel outputs exactly what was put in. [ 1 ] The identity channel is commonly denoted as I {\displaystyle I} , i d {\displaystyle {\mathsf {id}}} or I {\displaystyle \mathbb {I} } .
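Formally (a standard statement added here for concreteness), the identity channel acts trivially on every density operator, and it is completely positive and trace-preserving with a single Kraus operator equal to the identity:

```latex
\mathcal{I}(\rho)=\rho \quad\text{for every density operator }\rho,
\qquad
\mathcal{I}(\rho)=K_{0}\,\rho\,K_{0}^{\dagger}
\ \ \text{with}\ \ K_{0}=I,
\qquad
K_{0}^{\dagger}K_{0}=I.
```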
https://en.wikipedia.org/wiki/Identity_channel
Identity correlation is, in information systems , a process that reconciles and validates the proper ownership of disparate user account login IDs ( user names ) that reside on systems and applications throughout an organization, and that can permanently link ownership of those user account login IDs to particular individuals by assigning a unique identifier (also called primary or common keys) to all validated account login IDs. [ 1 ] The process of identity correlation validates that individuals have account login IDs only for the appropriate systems and applications a user should have access to, according to the organization's business policies, access control policies, and various application requirements. In the context of identity correlation, a unique identifier is one that is guaranteed to be unique among those used for a group and for a specific purpose. There are three main types, each corresponding to a different generation strategy: For identity correlation, a unique identifier is typically a serial or random number. In this context, a unique identifier is typically represented as an additional attribute in the directory associated with each particular data source. However, adding an attribute to each system-specific directory may affect application or specific business requirements, depending on the requirements of the organization. Under these circumstances, unique identifiers may not be an acceptable addition. Identity correlation involves several factors: Many organizations must find a method to comply with audits that require them to link disparate application user identities with the actual people who are associated with those user identities. Some individuals may have a fairly common first and/or last name , which makes it difficult to link the right individual to the appropriate account login ID, especially when those account login IDs are not linked to enough specific identity data to remain unique. A typical construct of the login ID, for example, is the first character of the given name followed by up to seven characters of the surname, with a serial number appended for uniqueness. This would produce login IDs like jsmith12, jsmith13, jsmith14, etc. for users John Smith, James Smith, and Jack Smith, respectively. Conversely, one individual might undergo a name change, either formally or informally, which can cause new account login IDs that the individual acquires to appear drastically different in nomenclature from the account login IDs that the individual acquired prior to any change. For example, a woman could get married and decide to use her new surname professionally. If her name was originally Mary Jones but she is now Mary Smith, she could call HR and ask them to update her contact information and email address with her new surname. This request would update her Microsoft Exchange login ID to mary.smith to reflect that surname change, but it might not actually update her information or login credentials in any other system she has access to. In this example, she could still be mjones in Active Directory and mj5678 in RACF . Identity correlation should link the appropriate system account login IDs both to individuals whose IDs are nearly indistinguishable from others' and to those whose IDs appear drastically different from system to system but belong to the same individual. 
Inconsistencies in identity data typically develop over time in organizations as applications are added, removed, or changed, and as individuals attain or retain an ever-changing stream of access rights as they matriculate into and out of the organization. Application user login IDs do not always have a consistent syntax across different applications or systems. Many user login IDs are not specific enough to directly correlate back to one particular individual within an organization. User data inconsistencies can also occur due to manual input errors, non-standard nomenclature, or name changes that might not be identically updated across all systems. The identity correlation process should consider these inconsistencies in order to link up identity data that might seem unrelated upon initial investigation. Organizations can expand and consolidate through mergers and acquisitions, which increases the complexity of business processes, policies, and procedures. As an outcome of these events, users may move to different parts of the organization, attain a new position within the organization, or matriculate out of the organization altogether. At the same time, each new application that is added has the potential to produce a new, completely unique user ID. Some identities may become redundant, others may violate application-specific or more widespread departmental policies, others could be related to non-human or system account IDs, and still others may no longer be applicable to a particular user environment. Projects that span different parts of the organization or focus on more than one application become difficult to implement because user identities are often not properly organized or recognized as defunct after business process changes. An identity correlation process must identify all orphan or defunct account identities left behind by such drastic shifts in an organization's infrastructure. Under regulations such as Sarbanes-Oxley and the Gramm-Leach-Bliley Act , organizations are required to ensure the integrity of each user across all systems and to account for all access a user has to various back-end systems and applications in an organization. If implemented correctly, identity correlation will expose compliance issues. Auditors frequently ask organizations to account for who has access to what resources. For companies that have not already fully implemented an enterprise identity management solution, identity correlation and validation are required to adequately attest to the true state of an organization's user base. This validation process typically requires interaction with individuals within an organization who are familiar with the organization's user base from an enterprise-wide perspective, and with those who are responsible for and knowledgeable of each individual system- and/or application-specific user base. In addition, much of the validation process might ultimately involve direct communication with the individual in question to confirm particular identity data that is associated with that specific individual. In response to various compliance pressures, organizations can introduce unique identifiers for their entire user base to validate that each user belongs in each specific system or application in which he or she has login capabilities. 
To effectuate such a policy, various individuals familiar with the organization's entire user base and each system-specific user base must be responsible for validating that certain identities should be linked together and other identities should be disassociated from each other. Once the validation process is complete, a unique identifier can be assigned to that individual and his or her associated system-specific account login IDs. As mentioned above, in many organizations, users may sign into different systems and applications using different login IDs. There are many reasons to link these into 'enterprise-wide' user profiles. There are a number of basic strategies for performing this correlation, or "ID mapping" (see the illustrative sketch at the end of this article). Often, any process that requires an in-depth look into identity data brings up concerns over privacy and disclosure issues. Part of the identity correlation process implies that each particular data source will need to be compared against an authoritative data source to ensure consistency and validity against relevant corporate policies and access controls . Any such comparison that involves exposure of enterprise-wide, authoritative, HR-related identity data will require various non-disclosure agreements, either internally or externally, depending on how an organization decides to undergo an identity correlation exercise. Because authoritative data is frequently highly confidential and restricted, such concerns may bar the way to performing an identity correlation activity thoroughly and sufficiently. Most organizations experience difficulties understanding the inconsistencies and complexities within their identity data across all their data sources. Typically, the process cannot be completed accurately or sufficiently by manually comparing two lists of identity data or executing simple scripts to find matches between two different data sets. Even if an organization can dedicate full-time individuals to such an effort, these methodologies usually do not expose an adequate percentage of defunct identities, validate an adequate percentage of matched identities, or identify system (non-person) account IDs well enough to pass the typical requirements of an identity-related audit. Manual efforts to accomplish identity correlation require a great deal of time and effort and do not guarantee that the work will be completed successfully or in a compliant fashion. Because of this, automated identity correlation solutions have recently entered the marketplace to provide more effortless ways of handling identity correlation exercises. Typical automated identity correlation solution functionality includes the following characteristics: Identity correlation solutions can be implemented under three distinct delivery models. These delivery methodologies are designed to offer a solution that is flexible enough to correspond to various budget and staffing requirements and to meet both short- and long-term project goals and initiatives. Software Purchase – This is the classic software purchase model, where an organization purchases a software license and runs the software within its own hardware infrastructure. Identity Correlation as a Service (ICAS) – ICAS is a subscription-based service where a client connects to a secure infrastructure to load and run correlation activities. This offering provides the full functionality of the identity correlation solution without owning and maintaining hardware and related support staff. 
Turn-Key Identity Correlation – A turn-key methodology requires a client to contract with and provide data to a solutions vendor, which performs the required identity correlation activities. Once completed, the solutions vendor will return correlated data, identify mismatches, and provide data-integrity reports. Validation activities will still require some direct feedback from individuals within the organization who understand the state of the organizational user base from an enterprise-wide viewpoint and from those within the organization who are familiar with each system-specific user base. In addition, some validation activities might require direct feedback from individuals within the user base itself. A turn-key solution can be performed as a single one-time activity, monthly, quarterly, or even as part of an organization's annual validation activities. Additional services are available as well. Related or associated topics that fall under the category of identity correlation include compliance regulations and audits, management of identities, access control , and directory services .
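As an illustration of the attribute-based ID-mapping strategies discussed above, here is a minimal sketch. All names, systems, matching rules, and thresholds are hypothetical, chosen only to show the general shape of such a process, not any particular product's behavior:

```python
# Illustrative sketch of attribute-based identity correlation ("ID mapping").
from difflib import SequenceMatcher
import uuid

hr_records = [  # authoritative source (hypothetical)
    {"id": "E001", "name": "mary smith", "email": "mary.smith@example.com"},
    {"id": "E002", "name": "john smith", "email": "john.smith@example.com"},
]

app_accounts = [  # per-system login IDs to be correlated (hypothetical)
    {"system": "AD",   "login": "mjones",   "email": "mary.smith@example.com"},
    {"system": "RACF", "login": "mj5678",   "email": ""},
    {"system": "CRM",  "login": "jsmith12", "email": "john.smith@example.com"},
]

def correlate(account, people, threshold=0.8):
    # Rule 1: exact match on a strong attribute (here, email).
    for p in people:
        if account["email"] and account["email"] == p["email"]:
            return p["id"]
    # Rule 2: fuzzy match of the login against each person's name;
    # anything below the threshold goes to a manual-validation queue.
    def score(p):
        return SequenceMatcher(None, account["login"],
                               p["name"].replace(" ", "")).ratio()
    best = max(people, key=score)
    return best["id"] if score(best) >= threshold else None  # None -> review

unique_ids = {p["id"]: str(uuid.uuid4()) for p in hr_records}  # common keys
for acct in app_accounts:
    owner = correlate(acct, hr_records)
    print(acct["system"], acct["login"], "->", owner, unique_ids.get(owner))
```

In this toy run, the AD and CRM accounts correlate by email, while the opaque RACF login falls below the fuzzy-match threshold and would be routed to manual validation, mirroring the enterprise-wide review step described above.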
https://en.wikipedia.org/wiki/Identity_correlation
In a 2-dimensional Cartesian coordinate system , with x representing the abscissa and y the ordinate , the identity line [ 1 ] [ 2 ] or line of equality [ 3 ] is the line y = x. The line, sometimes called the 1:1 line , has a slope of 1. [ 4 ] When the abscissa and ordinate are on the same scale, the identity line forms a 45° angle with the abscissa, and is thus also, informally, called the 45° line . [ 5 ] The line is often used as a reference in a 2-dimensional scatter plot comparing two sets of data expected to be identical under ideal conditions. When the corresponding data points from the two data sets are equal to each other, the corresponding scatters fall exactly on the identity line. [ 6 ] In economics , an identity line is used in the Keynesian cross diagram to identify equilibrium, as only on the identity line does aggregate demand equal aggregate supply. [ 7 ]
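The scatter-plot usage described above is easy to demonstrate. The following is a minimal sketch added here for illustration, using simulated data for two measurement methods expected to agree:

```python
import numpy as np
import matplotlib.pyplot as plt

# Two series measuring the same quantity (simulated); points on the
# identity line y = x would indicate perfect agreement between methods.
rng = np.random.default_rng(1)
measured_a = rng.uniform(0, 10, 30)
measured_b = measured_a + rng.normal(0, 0.5, 30)  # method B = A + noise

plt.scatter(measured_a, measured_b)
lims = [0, 10]
plt.plot(lims, lims, "k--", label="identity line (y = x)")  # 1:1 reference
plt.xlabel("method A")
plt.ylabel("method B")
plt.legend()
plt.show()
```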
https://en.wikipedia.org/wiki/Identity_line
The identity of indiscernibles is an ontological principle that states that there cannot be separate objects or entities that have all their properties in common. That is, entities x and y are identical if every predicate possessed by x is also possessed by y and vice versa. It states that no two distinct things (such as snowflakes ) can be exactly alike, but this is intended as a metaphysical principle rather than one of natural science. A related principle is the indiscernibility of identicals, discussed below. A form of the principle is attributed to the German philosopher Gottfried Wilhelm Leibniz . While some think that Leibniz's version of the principle is meant to be only the indiscernibility of identicals, others have interpreted it as the conjunction of the identity of indiscernibles and the indiscernibility of identicals (the converse principle). Because of its association with Leibniz, the indiscernibility of identicals is sometimes known as Leibniz's law . It is considered to be one of his great metaphysical principles, the other being the principle of noncontradiction and the principle of sufficient reason (famously used in his disputes with Newton and Clarke in the Leibniz–Clarke correspondence ). Some philosophers have decided, however, that it is important to exclude certain predicates (or purported predicates) from the principle in order to avoid either triviality or contradiction. An example (detailed below) is the predicate that denotes whether an object is equal to x (often considered a valid predicate). As a consequence, there are a few different versions of the principle in the philosophical literature, of varying logical strength—and some of them are termed "the strong principle" or "the weak principle" by particular authors, in order to distinguish between them. [ 1 ] The identity of indiscernibles has been used to motivate notions of noncontextuality within quantum mechanics. Associated with this principle is also the question as to whether it is a logical principle, or merely an empirical principle. Both identity and indiscernibility are expressed by the word "same". [ 2 ] [ 3 ] Identity is about numerical sameness , and is expressed by the equality sign ("="). It is the relation each object bears only to itself. [ 4 ] Indiscernibility , on the other hand, concerns qualitative sameness : two objects are indiscernible if they have all their properties in common. [ 1 ] Formally, this can be expressed as " ∀ F ( F x ↔ F y ) {\displaystyle \forall F(Fx\leftrightarrow Fy)} ". The two senses of sameness are linked by two principles: the principle of indiscernibility of identicals and the principle of identity of indiscernibles . The principle of indiscernibility of identicals is uncontroversial and states that if two entities are identical with each other then they have the same properties. [ 3 ] The principle of identity of indiscernibles , on the other hand, is more controversial in making the converse claim that if two entities have the same properties then they must be identical. [ 3 ] This entails that "no two distinct things exactly resemble each other". [ 1 ] Note that these are all second-order expressions. Neither of these principles can be expressed in first-order logic (they are nonfirstorderizable ). Formally, the two principles can be expressed in the following way: The indiscernibility of identicals: {\displaystyle \forall x\,\forall y\,{\bigl (}x=y\rightarrow \forall F(Fx\leftrightarrow Fy){\bigr )}} The identity of indiscernibles: {\displaystyle \forall x\,\forall y\,{\bigl (}\forall F(Fx\leftrightarrow Fy)\rightarrow x=y{\bigr )}} The indiscernibility of identicals is usually taken to be uncontroversially true, whereas the identity of indiscernibles is more controversial, [ 5 ] having been famously disputed by Max Black . 
[ 6 ] The conjunction of these two principles is sometimes called "Leibniz's Law", [ 7 ] [ 1 ] although this name has sometimes been used for either of the two other principles, [ 5 ] or for other principles. [ 8 ] It may be stated as a biconditional : {\displaystyle \forall x\,\forall y\,{\bigl (}x=y\leftrightarrow \forall F(Fx\leftrightarrow Fy){\bigr )}} Some logicians have regarded this principle as essential to identity and equality : Alfred Tarski listed it among the logical axioms governing the notion of identity, [ 9 ] and Rudolf Carnap defined the equals sign for identity (=) in terms of this biconditional. [ 10 ] In a universe of two distinct objects A and B, all predicates F are materially equivalent to one of the following properties: being A (IsA), being B (IsB), being either A or B (true of both), or being neither A nor B (true of neither). If ∀F applies to all such predicates, then the second principle as formulated above reduces trivially and uncontroversially to a logical tautology . In that case, the objects are distinguished by IsA, IsB, and all predicates that are materially equivalent to either of these. This argument can combinatorially be extended to universes containing any number of distinct objects. The equality relation expressed by the sign "=" is an equivalence relation in being reflexive (everything is equal to itself), symmetric (if x is equal to y then y is equal to x ) and transitive (if x is equal to y and y is equal to z then x is equal to z ). The indiscernibility of identicals and identity of indiscernibles can jointly be used to define the equality relation. The symmetry and transitivity of equality follow from the first principle, whereas reflexivity follows from the second. Both principles can be combined into a single axiom by using a biconditional operator ( ↔ {\displaystyle \leftrightarrow } ) in place of material implication ( → {\displaystyle \rightarrow } ). [ 11 ] Indiscernibility is usually defined in terms of shared properties: two objects are indiscernible if they have all their properties in common. [ 12 ] The plausibility and strength of the principle of identity of indiscernibles depend on the conception of properties used to define indiscernibility. [ 12 ] [ 13 ] One important distinction in this regard is between pure and impure properties. Impure properties are properties that, unlike pure properties , involve reference to a particular substance in their definition. [ 12 ] So, for example, being a wife is a pure property while being the wife of Socrates is an impure property due to the reference to the particular "Socrates". [ 14 ] Sometimes, the terms qualitative and non-qualitative are used instead of pure and impure . [ 15 ] Discernibility is usually defined in terms of pure properties only. The reason for this is that taking impure properties into consideration would result in the principle being trivially true, since any entity has the impure property of being identical to itself, which it does not share with any other entity. [ 12 ] [ 13 ] Another important distinction concerns the difference between intrinsic and extrinsic properties . [ 13 ] A property is extrinsic to an object if having this property depends on other objects (with or without reference to particular objects), otherwise it is intrinsic . For example, the property of being an aunt is extrinsic while the property of having a mass of 60 kg is intrinsic. [ 16 ] [ 17 ] If the identity of indiscernibles is defined only in terms of intrinsic pure properties, one cannot regard two books lying on a table as distinct when they are intrinsically identical . 
But if extrinsic and impure properties are also taken into consideration, the same books become distinct so long as they are discernible through the latter properties. [ 12 ] [ 13 ] Max Black has argued against the identity of indiscernibles by counterexample. Notice that to show that the identity of indiscernibles is false, it suffices to provide a model in which there are two distinct (numerically nonidentical) things that have all the same properties. He claimed that in a symmetric universe wherein only two symmetrical spheres exist, the two spheres are two distinct objects even though they have all their properties in common. [ 18 ] Black argues that even relational properties (properties specifying distances between objects in space-time) fail to distinguish two identical objects in a symmetrical universe. Per his argument, two objects are, and will remain, equidistant from the universe's plane of symmetry and each other. Even bringing in an external observer to label the two spheres distinctly does not solve the problem, because it violates the symmetry of the universe. As stated above, the principle of indiscernibility of identicals—that if two objects are in fact one and the same, they have all the same properties—is mostly uncontroversial. However, one famous application of the indiscernibility of identicals was by René Descartes in his Meditations on First Philosophy . Descartes concluded that he could not doubt the existence of himself (the famous cogito argument), but that he could doubt the existence of his body. This argument is criticized by some modern philosophers on the grounds that it allegedly derives a conclusion about what is true from a premise about what people know. What people know or believe about an entity, they argue, is not really a characteristic of that entity. A response may be that the argument in the Meditations on First Philosophy is that the inability of Descartes to doubt the existence of his mind is part of his mind's essence . One may then argue that identical things should have identical essences. [ 19 ] Numerous counterexamples are given to debunk Descartes' reasoning via reductio ad absurdum , such as the following argument based on a secret identity : if Lois Lane believes that Superman can fly but does not believe that Clark Kent can fly, then by the same reasoning Superman and Clark Kent would differ in the property of being believed by Lois Lane to be able to fly, and so could not be identical; yet they are the same person. The usual moral drawn is that such intensional contexts (what someone knows or believes about an object) must be excluded from the principle.
https://en.wikipedia.org/wiki/Identity_of_indiscernibles
Identity safety cues are aspects of an environment or setting that signal to members of stigmatized groups that the threat of discrimination is limited within that environment and/or that their social identities are welcomed and valued. [ 1 ] Identity safety cues have been shown to reduce the negative impact of social identity threats, which arise when people experience situations in which they feel devalued on the basis of a social identity (see stereotype threat). [ 2 ] Such threats have been shown to undermine performance in academic and work-related contexts and to make members of stigmatized groups feel as though they do not belong. [ 3 ] Identity safety cues have been proposed as a way of alleviating the negative impact of stereotype threat or other social identity threats, reducing disparities in academic performance for members of stigmatized groups (see achievement gaps in the United States), and reducing health disparities caused by identity-related stressors. Research has shown that identity safety cues targeted towards one specific group can lead individuals with other stigmatized identities to believe their identities will be respected and valued in that environment. [ 4 ] Further, the implementation of identity safety cues in existing research has not caused members of non-stigmatized groups to feel threatened or uncomfortable. In fact, some work has suggested that the benefits of identity safety cues extend to members of non-stigmatized groups. [ 5 ] For example, implementation of identity safety cues within a university context has been shown to increase student engagement and efficacy and to reduce the average number of absences for all students, but especially those from stigmatized groups. [ 6 ] [ 7 ] [ 8 ] Several types of identity safety cues have been identified. [ 9 ] There is evidence suggesting that when individuals or organizations communicate that they value diversity highly, concerns about identity threats are reduced. [ 10 ] For example, Hall and colleagues tested the impact of communicating gender-inclusive policies on the self-reported belonging of women working at engineering firms. Across two studies, Hall and colleagues found that when women working at engineering firms were presented with information communicating gender-inclusive policies, they reported increased belonging, fewer concerns about experiencing gender stereotyping in the workplace, and expectations of more pleasant conversations with male coworkers. Within a classroom context, exposure to information stating that instructors or schools hold multicultural philosophies has been shown to increase student agency, self-confidence, and classroom engagement for students from stigmatized groups. [ 11 ] [ 12 ] [ 13 ] Exposure to diversity philosophies and programming can have a lasting effect. In a recent study, Birnbaum and colleagues had first-year college students read a diversity statement that represented the school's diversity philosophy as favoring either multiculturalism or colorblindness. [ 14 ] The students' academic progress was tracked over the course of the next two years. Students from stigmatized groups who read the multicultural diversity statement showed better academic performance over those two years than students who read the colorblind diversity statement.
Similarly, a 2021 study found that when university students were presented with information about equity and non-discrimination policies in the classroom, students from stigmatized groups reported greater belonging within the classroom and fewer absences than students who were not presented with the same equity and non-discrimination policies. [ 8 ] Further, students in this study also perceived the instructor as behaving in a more inclusive manner and reported greater concern about addressing social inequities when they were presented with information about equity and non-discrimination policies. However, the evidence for the effectiveness of diversity philosophies and programming alone is mixed. For example, Valerie Purdie-Vaughns and colleagues ran a study comparing the effects of Black representation within the workplace and organizational claims of valuing diversity on Black professionals' sense of organizational trust and belonging. [ 1 ] Black professionals who were presented with information showing that an organization had a higher number of Black employees reported feeling greater organizational trust and belonging. Similarly, organizational claims of valuing diversity led to an increased sense of organizational trust and belonging. However, the type of diversity philosophy communicated influenced how effective the philosophy was at increasing organizational trust and belonging. Black professionals who received information stating that the organization held a color-blind philosophy of diversity (i.e., the idea that differences are insignificant and should not be attended to; see Color Blindness) felt lower organizational trust and belonging than Black professionals who received information stating that the organization held a multicultural perspective (i.e., the idea that differences between social groups are meaningful, as diverse perspectives offer unique insights and strengths; see Multiculturalism). Similarly, a 2015 study from Wilton and colleagues exposed participants to either a colorblind or a multicultural diversity statement and then measured their expectations about anticipated bias and racial and gender diversity. Participants who were exposed to a colorblind diversity statement expected to experience increased levels of bias and expected less racial and gender diversity than participants who were exposed to a multicultural diversity statement. [ 15 ] One form of identity safety cue that has shown promise is invoking the real or imagined presence of other members of stigmatized groups as a way of suggesting that one's social identity will not be devalued and is safe. [ 9 ] The majority of this work has been done amongst racial minorities and women in contexts where they represent a numerical minority (e.g., STEM contexts, male-dominated workplaces). For example, a 2007 study explored the impact of signaling balanced versus unbalanced gender ratios in STEM on belonging for female and male STEM majors. [ 16 ] In this study, women who watched a video showing a much larger number of men at a STEM conference exhibited greater levels of cognitive and physiological vigilance, reported a lower sense of belonging, and expressed less desire to participate in the conference. However, women who watched a video showing a roughly equal number of men and women at the same STEM conference exhibited less vigilance, reported a heightened sense of belonging, and expressed a greater desire to participate in the conference.
Men's vigilance, sense of belonging, and desire to participate in the conference were unaffected by either video. Having a role model with a shared stigmatized identity (e.g., female students having a female professor as a role model) has also been shown to have similar positive effects. [ 17 ] [ 18 ] For example, a 2019 study explored the effects of having a Roma or non-Roma role model on Roma children in Slovakia. [ 17 ] Presenting Roma children with a known role model from their own ethnic group was shown to increase academic achievement and reduce stereotype threat, compared with presenting the children with role models from different ethnic groups. Similarly, research has found that being exposed to a female role model can help to reduce the identity threat women experience after being exposed to information about the biases women face in STEM. [ 18 ] Although the majority of this work has been done in STEM contexts, similar work has been done in the workplace. For instance, a 2008 study found that Black professionals who were presented with information showing that an organization had a higher number of Black employees felt a greater sense of trust and belonging in that organization compared to Black professionals who were presented with information showing that an organization had a small number of Black employees. [ 1 ] However, for these cues to be effective, they must reasonably reflect the actual percentage of individuals who hold a stigmatized identity within a given context. A 2020 study found that when racial and ethnic minorities perceive an organization as falsely inflating the percentage of employees who hold a stigmatized identity, they report increased concerns about belonging, performance, and expressing themselves. [ 19 ] Environmental cues are features of an environment that reduce identity threat by communicating inclusive norms and values. Typically, these cues include background objects (e.g., posters, items on a table) or counter-stereotypic imagery (e.g., a rainbow flag in a gym predominantly frequented by heterosexual men and women). [ 20 ] Studies exploring the impact of environmental cues on belonging in STEM by members of marginalized groups have found strong evidence for the power of environmental cues to influence one's sense of belonging and concerns about discrimination. For example, a 2009 study found that changing objects in a computer science classroom from objects considered stereotypical of computer science (e.g., a Star Trek poster, video games) to non-stereotypical objects (e.g., a nature poster, phone books) increased female STEM majors' sense of belonging and interest in computer science by reducing associations between computer science and masculine stereotypes. [ 21 ] In a different domain, the presence of gender-inclusive bathrooms is associated with greater perceived fairness within the workplace, more positive perceptions of workplace climate for women and racial and ethnic minorities, and increased perceptions of the workplace as egalitarian. [ 4 ] The presence of environmental cues has also been associated with differences in academic outcomes. For example, a 2013 study randomly assigned male and female students to give a persuasive speech in a virtual-reality classroom that had a photograph of a male world leader, a photograph of a female world leader, or no photograph.
[ 22 ] When the room featured either a photograph of a male world leader or no picture, male students gave speeches that were longer and rated as better than the female students' speeches. However, the presence of a female leader photograph increased female students' speaking time, and their speeches were rated as higher in quality. In a similar study, American Indian high school students who were randomly assigned to see stereotypic American Indian imagery in a classroom (e.g., Chief Wahoo of the Cleveland Indians) were less likely to mention academic achievement when asked where they imagined themselves in the future than American Indian students who saw no image or a counter-stereotypic poster of an American Indian woman in front of a microscope. [ 23 ] Another form of identity safety cue that has shown promise is providing members of stigmatized groups with information that reduces the importance or relevance of negative stereotypes, conveys non-biased expectations, and/or conveys a positive climate for members of stigmatized groups. [ 24 ] [ 25 ] The majority of this work has been done in academic contexts in order to reduce the impact of stereotype threat. [ 9 ] For example, a prominent 1999 study explored whether stereotype threat among female students could be reduced by telling the class that prior administrations of the math exam they were about to take had revealed no gender differences in performance. When students were informed that they were taking a "gender fair" math exam, female students performed as well as male students taking the same exam. [ 26 ] However, when female students were told beforehand that the exam had been shown to produce gender differences, they performed worse than male students. Similarly, telling women that there are no differences in women's and men's leadership abilities has been shown to eliminate gender gaps in leadership aspirations. [ 27 ] However, other studies have found that merely providing identity-safe information alone is sometimes not enough to reduce stereotype threat or identity threat. [ 28 ] For example, in one study women were presented with a text explaining that stereotypes, and not gender differences, were responsible for academic performance gaps between men and women, and were then asked to complete a math task; the women presented with this information about stereotype threat and gender differences in academic outcomes nevertheless performed significantly worse at the math task. More recently, information about expectations of discrimination (or the lack thereof) has also been explored as an identity safety cue. For example, a 2020 study from Murrar and colleagues explored the impact of informing university students that most of their peers endorsed positive diversity-related values, cared strongly about inclusion in university classrooms, and typically behaved in a non-discriminatory manner. [ 29 ] Being presented with this information caused all students, regardless of their background, to evaluate classroom climate more positively and to report more positive attitudes toward members of stigmatized groups. Further, students from stigmatized groups reported a greater sense of belonging and better self-reported physical health. In a similar study, Black women who were informed of the presence of a non-Black female ally reported an increased sense of belonging in the workplace. [ 30 ] Much of the research on identity safety cues came from early attempts to mitigate the detrimental effects of stereotype threat.
[ 2 ] [ 26 ] For instance, one of the first studies to use what is now known as an identity safety cue explored the impact of telling female students that there were no gender differences in a math exam (i.e., presenting identity-safe information). [ 26 ] A large portion of current research on identity safety cues continues to explore ways to reduce educational disparities between members of stigmatized groups and members of non-stigmatized groups. [ 30 ] [ 31 ] Another major focus of identity safety cue research is on methods that can successfully increase the belonging and retention of members of stigmatized groups within the workplace. [ 1 ] [ 32 ] [ 25 ] For example, a 2015 study explored the impact of different philosophies of intelligence on female employees' expectations of being stereotyped in the workplace and on their organizational belonging. [ 33 ] When a consulting company conveyed, in its mission statement or on its website, the belief that intelligence and abilities are malleable rather than fixed, women trusted the company more and expected to be stereotyped less. However, gender representation within the company did not affect women's trust in the company. Similarly, a 2019 study found that Latina women felt greater trust, belonging, and interest in a fictional STEM company when learning about a Latinx scientist employee rather than a White scientist (regardless of the scientist's gender). [ 34 ] More recently, a 2021 study explored whether the presence of an employee's pronouns in an employee biography acted as an identity-safety cue for sexual and gender minorities. [ 25 ] The inclusion of pronouns resulted in more positive organizational attitudes among gender and sexual minority participants and increased perceptions of coworker allyship, regardless of whether the disclosure of pronouns was required by the organization or optional. While the majority of research on identity safety cues has been done in either academic or workplace contexts, there has been a recent push to explore the effectiveness of these cues in healthcare contexts, to see whether they can help address disparities in health outcomes between members of stigmatized groups and members of non-stigmatized groups. For example, a recent study explored the impact of minority representation cues and communicating organizational diversity philosophies on Black and Latinx participants' perceptions of a physician's racial biases and cultural competence and their general expectations of a visit with that physician. While physicians' diversity statements did not influence participants' anticipated quality of the visit, being informed that the physician had a diverse clientele increased anticipated comfort and perceptions of receiving better treatment among Black and Latinx participants. [ 35 ] Similarly, in another recent study, researchers explored how minority representation cues and physicians' diversity statements might influence sexual minorities' perceptions of physician bias, cultural competence, anticipated comfort, expectations, and comfort disclosing their sexuality while visiting a physician. Both the diversity statement and minority representation cues reduced perceptions of physician bias, but only minority representation cues increased perceptions of the physician as culturally competent, increased anticipated comfort and quality, and led to greater comfort disclosing their sexuality. [ 36 ] Related work has also been done with fathers in medical contexts.
For instance, a 2019 study found that prenatal doctor's offices with environmental safety cues (e.g., pictures of fathers with babies) increased expectant fathers' comfort with attending prenatal appointments and led to greater parenting confidence and comfort, increased intentions to learn about pregnancy, and greater intentions to engage in healthy habits to support their partner (e.g., avoiding smoking and alcohol during their partner's pregnancy). [ 37 ]
https://en.wikipedia.org/wiki/Identity_safety_cues
In real analysis and complex analysis, branches of mathematics, the identity theorem for analytic functions states: given functions f and g analytic on a domain D (an open and connected subset of $\mathbb{R}$ or $\mathbb{C}$), if f = g on some $S \subseteq D$, where S has an accumulation point in D, then f = g on D. [ 1 ] Thus an analytic function is completely determined by its values on a single open neighborhood in D, or even a countable subset of D (provided this contains a converging sequence together with its limit). This is not true in general for real-differentiable functions, even infinitely real-differentiable functions. In comparison, analytic functions are a much more rigid notion. Informally, one sometimes summarizes the theorem by saying analytic functions are "hard" (as opposed to, say, continuous functions, which are "soft"). [ citation needed ] The underpinning fact from which the theorem is established is the expandability of a holomorphic function into its Taylor series. The connectedness assumption on the domain D is necessary. For example, if D consists of two disjoint open sets, f can be 0 on one open set and 1 on the other, while g is 0 on one and 2 on the other. Lemma: if two holomorphic functions f and g on a domain D agree on a set S which has an accumulation point c in D, then f = g on a disk in D centered at c. To prove this, it is enough to show that $f^{(n)}(c) = g^{(n)}(c)$ for all $n \geq 0$, since both functions are analytic. If this is not the case, let m be the smallest nonnegative integer with $f^{(m)}(c) \neq g^{(m)}(c)$. By holomorphy, we have the following Taylor series representation in some open neighborhood U of c: $(f - g)(z) = (z - c)^m\, h(z)$, where $h(z) = \sum_{k=m}^{\infty} \frac{(f-g)^{(k)}(c)}{k!} (z - c)^{k-m}$ is holomorphic on U and $h(c) = \frac{(f-g)^{(m)}(c)}{m!} \neq 0$. By continuity, h is non-zero in some small open disk B around c. But then $f - g \neq 0$ on the punctured set $B - \{c\}$. This contradicts the assumption that c is an accumulation point of $\{f = g\}$. This lemma shows that for a complex number $a \in \mathbb{C}$, the fiber $f^{-1}(a)$ is a discrete (and therefore countable) set, unless $f \equiv a$. To prove the theorem, define the set on which f and g have the same Taylor expansion: $S = \{ z \in D \mid f^{(k)}(z) = g^{(k)}(z) \text{ for all } k \geq 0 \} = \bigcap_{k=0}^{\infty} \{ z \in D \mid (f^{(k)} - g^{(k)})(z) = 0 \}$. We'll show S is nonempty, open, and closed. Then, by connectedness of D, S must be all of D, which implies f = g on S = D.
By the lemma, f = g on a disk centered at c in D, so f and g have the same Taylor series at c; hence $c \in S$ and S is nonempty. As f and g are holomorphic on D, for every $w \in S$ the Taylor series of f and g at w have non-zero radius of convergence. Therefore, the open disk $B_r(w)$ also lies in S for some r, so S is open. By holomorphy of f and g, they have holomorphic derivatives, so all $f^{(n)}, g^{(n)}$ are continuous. This means that $\{ z \in D \mid (f^{(k)} - g^{(k)})(z) = 0 \}$ is closed for all k; S is an intersection of closed sets, so it is closed. Since the identity theorem is concerned with the equality of two holomorphic functions, we can simply consider the difference (which remains holomorphic) and characterise when a holomorphic function is identically 0. The following result can be found in [ 2 ]. Let $G \subseteq \mathbb{C}$ denote a non-empty, connected open subset of the complex plane, and for $n \in \mathbb{N}_0$ write $G_n := \{ z \in G \mid h^{(n)}(z) = 0 \}$ and $G_* := \bigcap_{n \in \mathbb{N}_0} G_n$. For $h : G \to \mathbb{C}$ the following are equivalent: (1) $h \equiv 0$ on G; (2) the zero set $G_0$ has an accumulation point in G; (3) $G_*$ is non-empty, i.e. there is a point of G at which h and all of its derivatives vanish. The directions (1 ⇒ 2) and (1 ⇒ 3) hold trivially. For (3 ⇒ 1), by connectedness of G it suffices to prove that the non-empty subset $G_* \subseteq G$ is clopen (since a topological space is connected if and only if it has no non-empty proper clopen subsets). Since holomorphic functions are infinitely differentiable, i.e. $h \in C^{\infty}(G)$, it is clear that $G_*$ is closed. To show openness, consider some $u \in G_*$ and an open ball $U \subseteq G$ containing u, in which h has a convergent Taylor-series expansion centered on u. By virtue of $u \in G_*$, all coefficients of this series are 0, whence $h \equiv 0$ on U. It follows that all n-th derivatives of h are 0 on U, whence $U \subseteq G_*$. So each $u \in G_*$ lies in the interior of $G_*$. Towards (2 ⇒ 3), fix an accumulation point $z_0 \in G_0$. We now prove directly by induction that $z_0 \in G_n$ for each $n \in \mathbb{N}_0$. To this end, let $r \in (0, \infty)$ be strictly smaller than the convergence radius of the power series expansion of h around $z_0$, given by $\sum_{k \in \mathbb{N}_0} \frac{h^{(k)}(z_0)}{k!} (z - z_0)^k$. Fix now some $n \geq 0$ and assume that $z_0 \in G_k$ for all $k < n$.
Then for $z \in \bar{B}_r(z_0) \setminus \{z_0\}$, manipulation of the power series expansion (using the induction hypothesis to discard the terms with $k < n$) yields

$\frac{h^{(n)}(z_0)}{n!} = \frac{h(z)}{(z - z_0)^n} - (z - z_0)\, R(z), \quad (1) \qquad \text{where } R(z) := \sum_{k=n+1}^{\infty} \frac{h^{(k)}(z_0)}{k!} (z - z_0)^{k-n-1}.$

Note that, since r is smaller than the radius of the power series, one can readily derive that the power series $R(\cdot)$ is continuous and thus bounded on $\bar{B}_r(z_0)$. Now, since $z_0$ is an accumulation point in $G_0$, there is a sequence of points $(z^{(i)})_i \subseteq G_0 \cap B_r(z_0) \setminus \{z_0\}$ convergent to $z_0$. Since $h \equiv 0$ on $G_0$ and since each $z^{(i)} \in G_0 \cap B_r(z_0) \setminus \{z_0\}$, the expression in (1) yields

$\frac{h^{(n)}(z_0)}{n!} = -(z^{(i)} - z_0)\, R(z^{(i)}) \longrightarrow 0 \quad \text{as } i \to \infty.$

By the boundedness of $R(\cdot)$ on $\bar{B}_r(z_0)$, it follows that $h^{(n)}(z_0) = 0$, whence $z_0 \in G_n$. By induction the claim holds. Q.E.D.
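To see why mere infinite real-differentiability is not enough for the opening claim, consider the classical smooth-but-not-analytic example (a standard textbook function, included here for illustration):

$f(x) = \begin{cases} e^{-1/x^2}, & x > 0, \\ 0, & x \leq 0, \end{cases} \qquad g(x) \equiv 0.$

Here f is infinitely differentiable on all of $\mathbb{R}$ and agrees with g on the half-line $(-\infty, 0]$, a set with many accumulation points, yet $f \neq g$. Every derivative of f vanishes at 0, so f has the same (identically zero) Taylor series as g there, but f is not equal to g on any neighborhood of 0; the failure of analyticity at 0 is exactly what blocks the identity theorem.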
https://en.wikipedia.org/wiki/Identity_theorem
In systematics, an ideotype is a specimen identified as belonging to a specific taxon by the author of that taxon, but collected from somewhere other than the type locality. The concept of the ideotype in plant breeding was introduced by Donald in 1968 to describe the idealized appearance of a plant variety. [ 1 ] It literally means 'a form denoting an idea'. According to Donald, an ideotype is a biological model which is expected to perform or behave in a particular manner within a defined environment: "a crop ideotype is a plant model, which is expected to yield a greater quantity or quality of grain, oil or other useful product when developed as a cultivar." Donald and Hamblin (1976) proposed the concepts of isolation, competition and crop ideotypes. Market ideotype, climatic ideotype, edaphic ideotype, stress ideotype and disease/pest ideotypes are its other concepts. The term ideotype has the following synonyms: model plant type, ideal model plant type and ideal plant type. [ 2 ] The term is also used in cognitive science and cognitive psychology, where Ronaldo Vigo (2011, 2013, 2014) introduced it to refer to a type of concept metarepresentation that is a compound memory trace consisting of the structural information detected by humans in categorical stimuli. [ 3 ] [ 4 ] [ 5 ]
https://en.wikipedia.org/wiki/Ideotype
Idiobiology is a branch of biology which studies individual organisms, that is, the study of organisms as individuals. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Idiobiology
Iditol is a sugar alcohol [ 2 ] which accumulates in galactokinase deficiency. [ citation needed ]
https://en.wikipedia.org/wiki/Iditol
Idling refers to running a vehicle's engine while the vehicle is not in motion, or when the engine drops to its resting rpm. This commonly occurs when drivers are stopped at a red light, waiting while parked outside a business or residence, or otherwise stationary with the engine running. When idling, the engine runs without any loads except the engine accessories, and without additional fuel being supplied via the gas pedal. If the vehicle moves while in gear and idling, the idle speed should be mechanically adjusted. Idle speed, sometimes simply called "idle", is the rotational speed an engine runs at when it is idling, that is, when the engine is uncoupled from the drivetrain and the throttle pedal is not depressed. In combustion engines, idle speed is generally measured in revolutions per minute (rpm) of the crankshaft. At idle speed, the engine generates enough power to run reasonably smoothly and operate its ancillaries (water pump, alternator, and, if equipped, other accessories such as power steering), but usually not enough to perform useful work, such as moving an automobile, unless it is set too high. The opposite of idle speed is redline, the maximum rotational speed the engine can be run at without risking serious engine damage. For a passenger car engine, idle speed is customarily between 600 and 1000 rpm. For medium and heavy duty trucks, it is approximately 600 rpm. [ 1 ] For many single-cylinder motorcycle engines, idle speed is set between 900 and 1100 rpm. Two-cylinder motorcycle engines are often set around 1000 rpm. [ 2 ] If the engine is operating a large number of accessories, particularly air conditioning, the idle speed must be raised to ensure that the engine generates enough power to run smoothly and operate the accessories. Most air conditioning-equipped engines have an automatic adjustment feature in the carburetor or fuel injection system that raises the idle when the air conditioning is running. Engines modified for power at high engine speeds, such as auto racing engines, tend to have a very rough (unstable) idle unless the idle speed is raised significantly. Idle speed may also refer to the idle creep of a vehicle with an automatic transmission. Commercial aircraft descend with minimum thrust, that is, with the engines operating at idle speed; this occurs when the aircraft is gliding and during the landing flare, whereas on the approach itself the engines are usually not operated at idle power. Both running and idling an engine produce several pollutants that are monitored in the United States by the Environmental Protection Agency (EPA). [ 3 ] It is often believed that stopping and restarting the engine uses more fuel than idling. [ citation needed ] According to the Environmental Defense Fund, citing a 2000 engine studies report by Environment and Climate Change Canada, an engine restart uses fuel approximately equal to 10 seconds of idling. [ 4 ] Consequently, recommendations have been made to shut off a car's engine after ten seconds of idling to reduce emissions. [ 5 ] (The accompanying emissions tables assume, respectively, a temperature of -1 °C (30 °F) with a gasoline Reid vapor pressure (RVP) of 896 hPa (13.0 psi), and a temperature of 24 °C (75 °F) with a gasoline RVP of 620 hPa (9.0 psi). [ 6 ] [ 3 ])
Health effects of idling are related to engine exhaust and include acute effects such as eye, throat, and bronchial irritation; nausea; cough and phlegm congestion; allergic or asthma-like respiratory responses; increased risk of cardiac events; and cancer, as well as chronic effects such as bronchitis, decreased lung function, and damage to reproductive function (low birth weight and damage to sperm chromatin and DNA). [ 7 ] [ 8 ] [ 9 ] These health effects are more damaging in those with preexisting heart disease, asthma, or other lung problems. Children are also more susceptible, due to their faster breathing rate and the fact that their respiratory systems are still developing. Idling pollutants also disproportionately affect the elderly, who have limited physiological reserve to compensate for the adverse effects of the pollutants. [ 10 ] Effort has been made to reduce the amount of time engines spend idling, chiefly due to fuel economy and emissions concerns, although some engines can also be damaged if kept idling for extended periods. In the United States, about a billion gallons (3.8 billion liters) of fuel is consumed by idling heavy-duty truck and locomotive engines each year. [ 11 ] Many newer semi-trucks have small auxiliary power units (APUs) to run accessories more efficiently while the truck is parked. Hybrid vehicles typically shut down their internal combustion engines while stopped, and some conventional vehicles now include start-stop systems to shut off the engine when it would otherwise idle. At the macro level, governments can implement strategies to reduce reliance on motorised transport, including investing in public transport and implementing transit-oriented development. Idling is forbidden unless there is a specific reason to do so (version 1975-2026, art. 8.6), [ 12 ] to be superseded by version 2026-09, art. 8.7. [ 13 ] The city of Toronto enacted the first idling bylaw in Canada (No. 673-1998, Chapter 517 in the Municipal Code) in 1996, limiting idle time to 3 minutes for vehicles and marine vessels. [ 14 ] [ 15 ] There are plans by the health department to ask for the bylaw to be amended to a limit of one minute, with no exemptions for the city's fleet, including Toronto Transit Commission buses. [ 14 ] Other Canadian municipalities have followed Toronto's lead. In a bid to reduce air pollution, the Hong Kong Government enacted the Motor Vehicle Idling (Fixed Penalty) Ordinance in December 2011. The law prohibits drivers from idling for more than three minutes in any 60-minute period. Both police traffic wardens and inspectors of the Environmental Protection Department can fine offenders HK$320. [ 27 ] In the United States, both the Department of Energy and the Environmental Protection Agency have programs in place to reduce idling. The DOE is funding research and development for alternative and advanced vehicles, which includes the gathering of quantitative data on medium-duty trucks, examining idling reduction alternatives, and the CoolCab project for semi-truck curtains and insulation. [ 28 ] The EPA's programs include the Environmental Technology Verification Program, [ 29 ] the SmartWay Transport Partnership (freight incentives), the Model State Idling Law (diesel) and Clean School Bus USA. [ 30 ] All but 11 states have at least one incentive or law in place to reduce idling, while 7 states have at least four.
[ 31 ] The state of Colorado has in place a tax credit for alternative fuel and qualified idle reduction technologies, as well as the Green Truck Grant Program, which allows the Governor's Energy Office to reimburse owners of commercial trucks used in interstate commerce for up to 25% of the cost of reducing emissions. [ 32 ] There are many local ordinances and programs to discourage idling, such as ordinances limiting the number of minutes per hour during which a vehicle can idle. [ 33 ] One example of a local program is Denver, Colorado's "Engines Off!" citywide anti-idling campaign, which aims to improve air quality and reduce greenhouse gas emissions by promoting voluntary changes in idling behavior. [ 34 ]
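As a back-of-the-envelope illustration of the ten-second rule of thumb cited earlier, the break-even logic can be sketched in a few lines of Python (the idle fuel rate below is an illustrative placeholder, not a figure from this article):

# Sketch of the stop/restart rule of thumb: a restart costs roughly as much
# fuel as about 10 seconds of idling, so longer stops save fuel.
IDLE_RATE_L_PER_S = 0.0003                 # assumed idle burn rate, litres/second
RESTART_COST_L = 10 * IDLE_RATE_L_PER_S    # restart ~ 10 s of idling

def fuel_saved(stop_seconds):
    """Litres saved by switching the engine off instead of idling."""
    return stop_seconds * IDLE_RATE_L_PER_S - RESTART_COST_L

for t in (5, 10, 60, 300):
    # Negative below the ~10 s break-even point, positive above it.
    print(t, round(fuel_saved(t), 4))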
https://en.wikipedia.org/wiki/Idle_(engine)
An idle scan is a TCP port scan method for determining what services are open on a target computer [ 1 ] without leaving traces pointing back at oneself. This is accomplished by using packet spoofing to impersonate another computer (called a "zombie") so that the target believes it is being accessed by the zombie. The target will respond in different ways depending on whether the port is open, which can in turn be detected by querying the zombie. [ 2 ] The scan can be performed with common network utilities such as nmap and hping. The attack involves sending forged packets to a specific target machine and inferring the result from the distinct characteristics of another machine, the zombie. The attack is sophisticated because there is no direct interaction between the attacker's computer and the target: the attacker interacts only with the "zombie" computer. The exploit serves two purposes: as a port scanner, and as a mapper of trusted IP relationships between machines. The target system interacts with the "zombie" computer, and differences in behavior observed using different "zombies" give evidence of different privileges granted by the target to different computers. [ 3 ] The overall intention behind the idle scan is to "check the port status while remaining completely invisible to the targeted host." [ 4 ] Discovered by Salvatore Sanfilippo (also known by his handle "Antirez") in 1998, [ 5 ] the idle scan has been used by many black hat "hackers" to covertly identify open ports on a target computer in preparation for attacking it. Although it was originally named dumb scan, the term idle scan was coined in 1999, after the publication of a proof-of-concept 16-bit identification field (IPID) scanner named idlescan, by Filipe Almeida (aka LiquidK). [ 6 ] This type of scan can also be referred to as a zombie scan; all the nomenclature derives from the nature of one of the computers involved in the attack. The design and operation of the Internet is based on the Internet Protocol Suite, commonly called TCP/IP. IP is the primary protocol in the Internet Layer of the Internet Protocol Suite and has the task of delivering datagrams from the source host to the destination host based solely on their addresses. For this purpose, IP defines addressing methods and structures for datagram encapsulation. It is a connectionless protocol and relies on the transmission of packets. Every IP packet from a given source carries an ID field that identifies the IP datagram it belongs to. TCP provides reliable, ordered delivery of a stream of bytes from a program on one computer to another program on another computer. TCP is the protocol that major Internet applications rely on, such as the World Wide Web, e-mail, and file transfer. Each of these applications (web server, email server, FTP server) is called a network service. In this system, network services are identified using two components: a host address and a port number. There are 65536 distinct and usable port numbers per host. Most services use a limited range of numbers by default, and the default port number for a service is almost always used. Some port scanners scan only the most common port numbers, or ports most commonly associated with vulnerable services, on a given host. See: List of TCP and UDP port numbers. The result of a scan on a port is usually generalized into one of three categories: open, closed, or filtered. Open ports present vulnerabilities of which administrators must be wary; filtered ports do not tend to present vulnerabilities.
A host on a local network can be protected by a firewall that filters packets according to rules its administrator has set up. This is done to deny services to unknown hosts and to prevent intrusion into the inside network. The IP protocol is a network-layer transmission protocol. Idle scans take advantage of the predictable Identification field value in the IP header: every IP packet from a given source has an ID that uniquely identifies fragments of an original IP datagram, and many protocol implementations assign values to this mandatory field by a fixed increment (typically 1). Because transmitted packets are numbered in sequence, one can tell how many packets were transmitted between two packets that one receives. An attacker would first scan for a host with a sequential and predictable sequence number (IPID). The latest versions of Linux, Solaris, OpenBSD, and Windows Vista are not suitable as zombies, since the IPID has been implemented with patches [ 7 ] that randomize the IPID. [ 1 ] Computers chosen to be used in this stage are known as "zombies". [ 2 ] Once a suitable zombie is found, the next step is to try to establish a TCP connection with a given service (port) of the target system, impersonating the zombie. This is done by sending a SYN packet to the target computer with a spoofed source address equal to the zombie's IP address. If the port of the target computer is open, it will accept the connection for the service, responding with a SYN/ACK packet back to the zombie. The zombie computer then sends a RST packet to the target computer (to reset the connection), because it did not actually send the SYN packet in the first place. Since the zombie had to send the RST packet, it increments its IPID. This is how an attacker can find out whether the target's port is open: the attacker sends another probe to the zombie, and if the IPID has incremented by only one step, the attacker knows that the particular port is closed. The method assumes that the zombie has no other interactions: if any message other than the RST is sent for some other reason between the attacker's first and second interactions with the zombie, there will be a false positive. The first step in executing an idle scan is to find an appropriate zombie. It needs to assign IP ID values incrementally on a global basis (rather than per host it communicates with). It should be idle (hence the scan name), as extraneous traffic will bump up its IP ID sequence, confusing the scan logic. The lower the latency between the attacker and the zombie, and between the zombie and the target, the faster the scan will proceed. [ 8 ] Note that when a port is open, the zombie's IPID increments by 2 between the attacker's two probes. The sequence is as follows: 1. Attacker to target: SYN; target to zombie: SYN/ACK; zombie to target: RST (IPID incremented by 1). 2. The attacker then probes the zombie for the result: attacker to zombie: SYN/ACK; zombie to attacker: RST (IPID incremented by 1). So, in this process, the IPID ultimately increments by 2. When an idle scan is attempted, tools (for example nmap) test the proposed zombie and report any problems with it. If one doesn't work, try another. Enough Internet hosts are vulnerable that zombie candidates aren't hard to find. A common approach is to simply execute a ping sweep of some network. Choosing a network near your source address, or near the target, produces better results.
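Before turning to zombie selection in practice, the probe-and-compare logic just described can be sketched in a few lines of Python using the scapy packet library (a minimal illustration, not a hardened tool: scapy is assumed to be installed, the addresses and port are placeholders, and sending raw packets requires root privileges):

#!/usr/bin/env python3
from scapy.all import IP, TCP, sr1, send

ZOMBIE = "172.16.0.105"   # assumed idle host with an incremental IPID
TARGET = "172.16.0.100"   # host whose port is under test
PORT = 22

def probe_ipid(host):
    """Elicit a RST from the host with an unsolicited SYN/ACK; return its IPID."""
    reply = sr1(IP(dst=host) / TCP(dport=80, flags="SA"), timeout=2, verbose=0)
    return reply[IP].id if reply else None

ipid_before = probe_ipid(ZOMBIE)

# Spoofed SYN: the source address is forged as the zombie, so a SYN/ACK from
# an open target port goes to the zombie, which answers with a RST (IPID +1).
send(IP(src=ZOMBIE, dst=TARGET) / TCP(dport=PORT, flags="S"), verbose=0)

ipid_after = probe_ipid(ZOMBIE)

if ipid_before is not None and ipid_after is not None:
    delta = (ipid_after - ipid_before) % 65536
    # delta == 2: the zombie sent a RST to the target, so the port is likely open;
    # delta == 1: the zombie sent nothing in between (port closed or filtered).
    print("open" if delta >= 2 else "closed|filtered")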
You can try an idle scan using each available host from the ping sweep results until you find one that works. As usual, it is best to ask permission before using someone's machines for unexpected purposes such as idle scanning. Simple network devices often make great zombies because they are commonly both underused (idle) and built with simple network stacks which are vulnerable to IP ID traffic detection. While identifying a suitable zombie takes some initial work, you can keep re-using the good ones. Alternatively, there has been some research on utilizing unintended public web services as zombie hosts to perform similar idle scans. Leveraging the way some of these services perform outbound connections upon user submissions can serve as a kind of poor man's idle scanning. [ 9 ] The hping method for idle scanning provides a lower-level example of how idle scanning is performed. In this example the target host (172.16.0.100) will be scanned using an idle host (172.16.0.105). An open and a closed port will be tested to see how each scenario plays out. First, establish that the idle host is actually idle: send packets using hping2 and observe the id numbers increasing incrementally by one. If the id numbers increase haphazardly, the host is not actually idle, or has an OS with no predictable IP ID. Next, send a spoofed SYN packet to the target host on a port you expect to be open. In this case, port 22 (ssh) is being tested. Since we spoofed the packet, we do not receive a reply and hping reports 100% packet loss; the target host replies directly to the idle host with a SYN/ACK packet. Now, check the idle host to see whether the id number has increased. Notice that the idle host's id increased from id=1379 to id=1381; 1380 was consumed when the idle host replied to the target host's SYN/ACK packet with a RST packet. Run through the same process again, testing a port that is likely closed. Here we are testing port 23 (telnet). Notice that this time the id did not increase, because the port was closed: when we sent the spoofed packet to the target host, it replied to the idle host with a RST packet, which did not increase the id counter. With nmap, the first thing the user would do is find a suitable zombie on the LAN. Performing a port scan and OS identification (-O option in nmap) on the zombie candidate network, rather than just a ping scan, helps in selecting a good zombie. As long as verbose mode (-v) is enabled, OS detection will usually determine the IP ID sequence generation method and print a line such as "IP ID Sequence Generation: Incremental". If the type is given as Incremental or Broken little-endian incremental, the machine is a good zombie candidate. That is still no guarantee that it will work, as Solaris and some other systems create a new IP ID sequence for each host they communicate with, and the host could also be too busy. OS detection and the open port list can also help in identifying systems that are likely to be idle. Another approach to identifying zombie candidates is to run the ipidseq NSE script against a host. This script probes a host to classify its IP ID generation method and then prints the IP ID classification, much as OS detection does. Like most NSE scripts, ipidseq.nse can be run against many hosts in parallel, making it another good choice when scanning entire networks looking for suitable hosts. nmap -v -O -sS 192.168.1.0/24 This tells nmap to do a ping sweep and show all hosts that are up in the given IP range.
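A zombie-candidate check along the lines of the ipidseq idea can likewise be sketched with scapy (again a hypothetical helper with a placeholder address; it simply looks for a +1 pattern in successive IPIDs):

from scapy.all import IP, TCP, sr1

def ipid_sequence(host, n=5):
    """Collect the IPIDs of n successive replies from a candidate zombie."""
    ids = []
    for _ in range(n):
        r = sr1(IP(dst=host) / TCP(dport=80, flags="SA"), timeout=2, verbose=0)
        if r is not None:
            ids.append(r[IP].id)
    return ids

ids = ipid_sequence("192.168.1.50")
deltas = [(b - a) % 65536 for a, b in zip(ids, ids[1:])]
# Deltas that are all 1 suggest a globally incremental, idle IPID counter
# (a usable zombie); random-looking deltas mean the host is unsuitable.
print(ids, deltas)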
Once you have found a zombie, you would then send the spoofed packets: nmap -P0 -p <port> -sI <zombie IP> <target IP> The juxtaposed images show both of these stages in a successful scenario. Although many operating systems are now immune from being used in this attack, some popular systems are still vulnerable, [ 1 ] making the idle scan still very effective. Once a successful scan is completed, there is no trace of the attacker's IP address on the target's firewall or intrusion detection system log. Another useful possibility is the chance of bypassing a firewall, because you are scanning the target from the zombie's computer, [ 10 ] which might have more rights than the attacker's.
https://en.wikipedia.org/wiki/Idle_scan
An idler-wheel is a wheel which serves only to transmit rotation from one shaft to another, in applications where it is undesirable to connect them directly; for example, connecting a motor to the platter of a phonograph, or the crankshaft-to-camshaft gear train of an automobile. Because it does no work itself, it is called an "idler". An idler-wheel may be used as part of a friction drive mechanism. For example, to connect a metal motor shaft to a metal platter without gear noise, early phonographs used a rubber idler wheel. Likewise, the pinch roller in a magnetic tape transport is a type of idler wheel, which presses against the driven capstan to increase friction. In a belt drive system, idlers are often used to alter the path of the belt, where a direct path would be impractical. Idler pulleys are also often used to press against the back of a pulley in order to increase the wrap angle (and thus contact area) of a belt against the working pulleys, increasing the force-transfer capacity. Belt drive systems commonly incorporate one movable pulley which is spring- or gravity-loaded to act as a belt tensioner, to accommodate stretching of the belt due to temperature or wear. An idler wheel is usually used for this purpose, in order to avoid having to move the power-transfer shafts. An idler gear is a gear wheel that is inserted between two or more other gear wheels. The purpose of an idler gear can be two-fold. Firstly, the idler gear will change the direction of rotation of the output shaft. Secondly, an idler gear can help reduce the size of the input/output gears whilst maintaining the spacing of the shafts. An idler gear does not affect the gear ratio between the input and output shafts: in a sequence of gears chained together, the ratio depends only on the number of teeth on the first and last gear. The intermediate gears, regardless of their size, do not alter the overall gear ratio of the chain, except to change the direction of rotation of the final gear (that is, each intermediate gear changes the sign of the gear ratio). Likewise, the size of an idler wheel in a non-geared friction drive system does not affect the ratio between the input and output shafts. The surface speed of the input shaft is transferred directly to the surface speed of the idler wheel, and then from the idler wheel to the output shaft. A larger or smaller idler wheel maintains the same surface speed (which equals the surface speed of the input shaft), so the output shaft is driven at a constant speed regardless of the size of the idler wheel (unless, of course, there is slippage, which should not occur in a correctly operating friction drive system; however, an idler wheel can double as a clutch, and a sudden or unusually heavy load can make the ratio of rotations between the wheels vary, unlike a gear system, which will always rotate at a fixed ratio unless something is very wrong and the gears start skipping teeth or teeth break off). An intermediate gear which does not drive a shaft to perform any work is called an idler gear. Sometimes, a single idler gear is used to reverse the direction, in which case it may be referred to as a reverse idler. For instance, the typical automobile manual transmission engages reverse gear by means of inserting a reverse idler between two gears; the tooth-count behaviour is illustrated in the sketch below.
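To make the tooth-count claim concrete, here is a small illustrative sketch in Python (the function name and tooth counts are invented for the example; external gear meshes are assumed, so each mesh reverses the direction of rotation):

def train_ratio(teeth):
    """Output/input angular-speed ratio for gears meshed in series."""
    ratio = 1.0
    for a, b in zip(teeth, teeth[1:]):
        ratio *= -a / b   # each external mesh scales by the teeth ratio and reverses direction
    return ratio

print(train_ratio([20, 60]))       # -0.333...: output reversed, one third the speed
print(train_ratio([20, 37, 60]))   # +0.333...: the 37-tooth idler restores the direction
                                   # but leaves the magnitude unchanged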
Since a driving gear (gear "A") rotating clockwise will drive a second gear ("B") counterclockwise, adding a third gear to the string means that gear "C" will spin in the same direction as "A". A typical transmission is designed with "A" and "B" gears, so when the engine spins, the output shaft spins in the opposite direction, which drives the vehicle forward. A straight idler gear setup is typically an "A" and a "C" gear which are not in contact with each other until a "B" gear is moved between them. Since the transmission is designed to move the car forwards when the output shaft is spinning in the opposite direction from the input shaft, inserting the "B" idler gear forces the "C" gear to spin in the same direction as the "A" gear; the input and output shafts then spin in the same direction, which drives the car in reverse. Another scenario is a series of rollers, such as those used for pressing paper. Each roller has to be powered, but adding a motor to each one is wasteful (and it can be difficult to synchronize rotational speed with independent drive systems). One could simply add a gear onto the end of the shaft of each roller, but that means that each roller would spin in the opposite direction from the one before (and the rollers would therefore rub against each other as they turn). By simply adding a small idler gear between each pair of larger gears, the result is a series of rollers all being powered in the same direction. Idler gears can also transmit rotation among distant shafts in situations where it would be impractical to simply make the distant gears larger to bring them together. Not only do larger gears occupy more space, but the mass of a gear grows quadratically with its radius, and its rotational inertia (moment of inertia) grows faster still. Instead of idler gears, of course, a toothed belt or a roller chain can be used to transmit torque over distance. For short distances, a train of idlers may be used; whether an odd or even number is used determines whether the final output gear rotates the same direction as the input gear or not. For longer distances, a roller chain or belt is quieter and creates less friction, although gears are typically stronger, depending on the strength of the roller chain. A case where numerous idler gears might be used is as described above, where there are a number of output gears that need to be driven simultaneously. A tracked vehicle uses a combination of wheels and rollers, including drive sprockets, idler wheels, track return rollers and road wheels. It is quite similar in concept to a conveyor belt, only instead of a machine carrying objects on top of a powered continuous belt, it is a machine that moves itself over a continuous belt. In a typical application, power is transmitted to a drive sprocket (or drive wheel), which drives the track around its loop. On the opposite end of the vehicle there is an idler wheel, which provides a pulley wheel of sorts. In some applications the drive sprocket and idler wheel carry some of the weight of the vehicle; for the purposes of this description, we will assume that the drive sprocket and idler wheel are not weight-bearing units and that the drive sprocket is at the front.
Since the drive sprocket can be at either the front (many WWII tanks like the M4 Sherman) or the rear (most modern tanks like the T-90) of the vehicle depending on the design, the idler wheel either carries the track back off the ground and returns it to the drive sprocket (rear idler wheel), or receives track from the drive sprocket and lays it down in front of the road wheels (front idler wheel). The idler wheel is not powered, just like an idler gear. Although it technically reverses the track's direction (but not its rotation), this has nothing to do with the term "idler"; it is not related to an idler gear other than that they are both "idle", that is, not doing any work, only transmitting power ("idle" being a term for something or someone who isn't working). The road wheels are a series of non-powered wheels between the drive sprocket and idler wheel that serve to support the weight of the vehicle (and thus aren't considered "idle", even though they are unpowered). In higher speed applications, such as tanks and other AFVs, these road wheels are typically given some sort of suspension system to ease the ride, increase controllability, and decrease wear and tear. Due to the complications of adding suspension systems to the idler wheel and, in particular, the drive sprocket, in such vehicles the road wheels typically carry all of the weight of the vehicle. In low speed applications, such as bulldozers, these road wheels lack any kind of suspension system, as the low speeds don't demand the cushioning. This also allows the idler and drive wheels to carry some of the weight, as their lack of suspension is made irrelevant. Track return rollers may or may not be used, and are simply small rollers which support the weight of the track as it is transferred from rear to front to be laid down again. The track simply provides a solid "road" for the road wheels to roll over on all surfaces: the road wheels roll the vehicle along the self-created "road", while the drive sprocket forces the vehicle forward along the track and lays down "fresh" track. The idler picks the "used" track back up and returns it to the drive sprocket at the front. This is why an early term for a tracked vehicle was a "track-laying machine" (not to be confused with railroad track laying equipment). Transporting vehicles over muddy ground often required planks or logs to be placed along the track (see corduroy road, plank road). In the later 19th century, inventors figured out a way to make a rolling machine that would lay its own plank road wherever it went, negating the need for farmers to lay down logs in order to traverse muddy areas. Other benefits were discovered later. Note that there are some non-powered tracked transports (i.e. trailers that roll on tracks rather than wheels), which have two idler wheels rather than a drive sprocket. There are also certain pieces of equipment, such as the Caterpillar D9 bulldozer (and numerous other Caterpillar brand bulldozers), the Tucker Sno-cat and Mattracks rubber track conversion kits, which configure their tracks in the shape of a triangle, or pyramid (when viewed from the side), with the drive sprocket at the tip of the pyramid. In this configuration, there are two idler/roadwheels and one drive sprocket (as well as a number of small, load-bearing roadwheels).
In very rare cases, the vehicle lacks an idler wheel altogether; in northern regions, one way people got better traction in deep snow was to take a simple three-axle truck and install a continuous track around the rear wheels, thus forming a basic half-track system which featured two drive wheels and no idler or road wheels. One almost never sees this on true tracked vehicles, however, as the second drive wheel is redundant.
https://en.wikipedia.org/wiki/Idler-wheel
Idun Reiten (born 1 January 1942) is a Norwegian professor of mathematics. She is considered one of Norway's greatest mathematicians today. [ 2 ] She has received national and international honors and recognition, and as of March 2024 she has supervised 11 students and has 28 academic descendants. She is an expert in representation theory, and is known for work in tilting theory and Artin algebras. [ 3 ] She took her PhD degree at the University of Illinois in 1971, becoming the second Norwegian woman to earn a PhD in mathematics. [ 4 ] She was appointed as a professor at the University of Trondheim in 1982, [ 5 ] now named the Norwegian University of Science and Technology. Her research areas are representation theory for Artinian algebras, commutative algebra, and homological algebra. Her work with Maurice Auslander now forms the part of the study of Artinian algebras known as Auslander-Reiten theory. This theory utilizes such concepts as almost-split sequences and Auslander-Reiten quivers, which were developed in a series of papers. In 2005, Reiten received the Humboldt Research Award. [ 6 ] In 2007, Reiten was awarded the Möbius prize. In 2009 she was awarded the Fridtjof Nansen award for successful researchers (in the field of mathematics and the natural sciences) and the Nansen Medal for Outstanding Research. [ 7 ] In 2007, she was elected a foreign member of the Royal Swedish Academy of Sciences. She is also a member of the Norwegian Academy of Science and Letters, the Royal Norwegian Society of Sciences and Letters, and Academia Europaea. [ 8 ] In 2012, she became a fellow of the American Mathematical Society. [ 9 ] She was named MSRI Clay Senior Scholar and Simons Professor for 2012-13. [ 10 ] She delivered the Emmy Noether Lecture at the International Congress of Mathematicians (ICM) in 2010 in Hyderabad [ 11 ] and was an Invited Speaker at the ICM in 1998 in Berlin. [ 12 ] In 2014, the Norwegian King appointed Reiten as commander of the Order of St. Olav "for her work as a mathematician". [ 13 ] She is the namesake of the IDUN: From PhD to Professor program at the Norwegian University of Science and Technology Faculty of Information Technology and Electrical Engineering, which aimed at "increasing the number of female scientists in top positions at NTNU's Faculty of Computer Science and Electrical Engineering." [ 14 ]
https://en.wikipedia.org/wiki/Idun_Reiten