Dataset columns:
id: int64 (values 580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (values 3 to 51.8k)
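Assuming the records below are distributed as a Hugging Face dataset with this schema, they could be loaded and filtered roughly as follows; this is only a sketch, and the dataset path is a placeholder rather than a real identifier.

```python
# Hypothetical sketch: loading and filtering records with the schema shown above
# (id, url, text, source, categories, token_count). The path is a placeholder.
from datasets import load_dataset

ds = load_dataset("user/wikipedia-stem-subset", split="train")  # placeholder path

# Keep only records tagged with a given category and a minimum length.
subset = ds.filter(
    lambda row: "Mathematics" in row["categories"] and row["token_count"] >= 500
)

for row in subset.select(range(3)):
    print(row["id"], row["source"], row["token_count"])
```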
4,559,334
https://en.wikipedia.org/wiki/Worldwide%20Youth%20in%20Science%20and%20Engineering
The Worldwide Youth in Science and Engineering (WYSE) is a program run by the Grainger College of Engineering at the University of Illinois at Urbana-Champaign that offers STEM programs to pre-college students, including summer programs and mentorships, as well as the Academic Challenge, a high school academic competition run by Eastern Illinois University. Summer Programs High School Summer Research The High School Summer Research programs offer STEM research opportunities to rising high school juniors and seniors from the Champaign–Urbana metropolitan area. The programs are non-residential, last for 6 weeks, and take place at the University of Illinois at Urbana-Champaign, where students are able to research various STEM fields. High School Summer Camps About a dozen overnight engineering camps are offered to rising high school students. There are various activities designed to expose students to different areas of engineering. Middle School Summer Camps Middle school summer camps are offered to rising 6th, 7th, 8th, and 9th grade students in the local area. They involve various activities to expose students to engineering and require a limited-enrollment, merit-based application. STEM Mentorship The WYSE STEM Mentorship is a collaboration between WYSE and its partner programs Catalyzing Inclusive STEM Experience All Year Round (CISTEME365) and the Chicago Pre-College Science & Engineering Program (Chi S&E). It provides afterschool and weekend STEM programs to K-12 youth. WYSE LEADers The WYSE LEADers program provides pre-college students with STEM education designed to develop leadership and communication skills. Academic Challenge The Academic Challenge was founded by the University of Illinois at Urbana-Champaign and was run by it for about 40 years, but it is now run by Eastern Illinois University. The team competition consists of teams of 6 to 14 people from multiple high schools, with each competitor taking two exams. Competitors are also allowed to enter as "At-Large" competitors, in which case they enter as individuals not representing a school team. There are seven subject areas from which each student chooses their two tests: Biology, Chemistry, Computer Science, Engineering Graphics/Drafting, English, Mathematics, and Physics. Awards are given to both teams and individuals at three progressively harder levels: Regionals, Sectionals, and the State Finals. Test format The tests are 40-minute multiple-choice tests. Each test has a different number of questions. Computer Science and Mathematics are 30-question exams; Physics 35; Biology 50; Chemistry and Engineering Graphics 40; and English 80. These questions are divided into subcategories in each field; for instance, there are 14 Algebra questions, 7 Geometry questions, etc. on the Mathematics exam at the Regional Level. Scoring system Individual Tests are graded and ranked from highest score to lowest score based on the number of correct answers. At Regionals and Sectionals, the top 3 scores in each test, including ties, are awarded medals. At the State Finals, the top 6 scores including ties are awarded medals. Team Individual tests are graded and ranked from highest score to lowest score. The two highest scores in each subject for a school are added together to determine the school's Team Subject Raw Score for that subject. If a school has only one score in a subject, the Raw Score is zero. 
Each Team Subject Raw Score is normalized by multiplying it by 100 and then dividing it by the greatest Team Subject Raw Score among all competing school teams. Once the normalized scores have been found, the school's score is determined by adding the normalized scores for five of the seven tests together. The five tests are English, Chemistry, Mathematics, and the two highest normalized scores of the remaining four tests (Biology, Computer Science, Engineering Graphics, and Physics). Advancement Individuals and teams advance from Regionals to Sectionals to the State Finals based on their placement at the current level of competition. Individual Individuals who place 1st or 2nd, including ties, at either Regionals or Sectionals advance to the next level. Thus, if there is one 1st place individual and a four-way tie for 2nd, five individuals will advance to the next level for that subject. At the Sectional level, individuals can also advance to the State Finals by scoring a pre-determined qualifying score on a test. This prevents Sectionals from advancing only the top two scores when there are additional high scores below the 2nd place finisher(s). If an individual qualifies to advance for one test, that individual still takes two tests at the next level, even if their school does not advance with the individual. Team Teams advance to the next level based on their finish compared to the number of schools competing. If there are 1 or 2 teams competing, both advance. If there are from 3–7 teams, the top 2 advance. If there are from 8–12 teams, the top 3 advance. If there are from 13–16 teams, the top 4 advance. If there are more than 16 teams, the top 5 advance. References External links WYSE Website Academic Challenge Website Science competitions Youth science
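The team scoring and five-of-seven aggregation described above can be made concrete with a short sketch. It is a minimal illustration only: school names and scores are invented, and tie-handling details are ignored.

```python
# Minimal sketch of the team scoring described above: the two highest individual
# scores per subject form the Team Subject Raw Score (zero if a school has only
# one score), each raw score is normalized against the best raw score among all
# schools, and the team total sums English, Chemistry, Mathematics plus the two
# best of the remaining four normalized scores. All data below are made up.

REQUIRED = {"English", "Chemistry", "Mathematics"}
OPTIONAL = {"Biology", "Computer Science", "Engineering Graphics", "Physics"}

def raw_score(scores):
    """Team Subject Raw Score: sum of the two highest scores, else zero."""
    return sum(sorted(scores, reverse=True)[:2]) if len(scores) >= 2 else 0

def team_totals(schools):
    # schools: {school: {subject: [individual scores]}}
    raw = {s: {sub: raw_score(v.get(sub, [])) for sub in REQUIRED | OPTIONAL}
           for s, v in schools.items()}
    totals = {}
    for s, subs in raw.items():
        norm = {}
        for sub, r in subs.items():
            best = max(raw[t][sub] for t in raw)
            norm[sub] = 100 * r / best if best else 0
        optional_best = sorted((norm[sub] for sub in OPTIONAL), reverse=True)[:2]
        totals[s] = sum(norm[sub] for sub in REQUIRED) + sum(optional_best)
    return totals

print(team_totals({
    "Central HS": {"English": [70, 65], "Chemistry": [30, 28], "Mathematics": [25, 22],
                   "Physics": [31, 20], "Biology": [44]},
    "North HS":   {"English": [60, 58], "Chemistry": [35, 20], "Mathematics": [28, 27],
                   "Computer Science": [26, 24], "Biology": [40, 39]},
}))
```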
Worldwide Youth in Science and Engineering
Technology
1,022
78,071,340
https://en.wikipedia.org/wiki/Reinhard%20Diestel
Reinhard Diestel (born 1959) is a German mathematician specializing in graph theory, including the interplay among graph minors, matroid theory, tree decomposition, and infinite graphs. He holds the chair of discrete mathematics at the University of Hamburg. Education and career Diestel has a Ph.D. from the University of Cambridge in England, completed in 1986. His dissertation, Simplicial Decompositions and Universal Graphs, was supervised by Béla Bollobás. He continued at Cambridge as a fellow of St John's College until 1990. In 1994, he took a professorship at the Chemnitz University of Technology, and in 1999 he was given his current chair at the University of Hamburg. At Hamburg, his doctoral students have included Daniela Kühn and Maya Stein. Books Diestel's books include: Graph Decompositions: A Study in Infinite Graph Theory (Oxford University Press, 1990) Graph Theory (Graduate Texts in Mathematics 173, Springer, 1997; 6th ed., 2024). Originally published in German as Graphentheorie (1996), and translated into Chinese, Japanese, and Russian. Tangles: A Structural Approach to Artificial Intelligence in the Empirical Sciences (Cambridge University Press, 2024) References External links Home page Graph Theory home page including free online preview version 1959 births Living people German mathematicians Graph theorists Alumni of the University of Cambridge Fellows of St John's College, Cambridge Academic staff of the Chemnitz University of Technology Academic staff of the University of Hamburg
Reinhard Diestel
Mathematics
306
50,338,185
https://en.wikipedia.org/wiki/Electro-Dynamic%20Light%20Company
The Electro-Dynamic Light Company of New York was a lighting and electrical distribution company organized in 1878. The company held the patents for the first practical incandescent electric lamp and electrical distribution system of incandescent electric lighting. They also held a patent for an electric meter to measure the amount of electricity used. The inventions were those of Albon Man and William E. Sawyer. They gave the patent rights to the company, which they had formed with a group of businessmen. It was the first company in the world formally established to provided electric lighting and was the first company organized specifically to manufacture and sell incandescent electric light bulbs. Man, an attorney from New York City, supplied money for experimentation to Sawyer, an electrical engineer. This partnership developed into the Electro-Dynamic Light Company that brought in other investors that became partners. Sawyer devised a unique electrical distribution system where electrical power could be obtained anywhere in the city from an electrical generator with the turn of a switch to light up electric lamps to produce glowing light like a gas lamp. It was unique in that it produced this power without consumers having to maintain local galvanic batteries and at a fraction of the cost of producing the same lighting as from gas lamps. Other features of the system were that safety devices were built in to prevent the early destroying of the other electric lamps in the circuit should there be a power surge due to a lamp burning up early and leaving the distribution circuit. The patents for the Man and Sawyer system were in place before any other electrical companies had similar systems. History Albon Man, an attorney from New York City, and William E. Sawyer, an electrical engineer, officially formed the Electro-Dynamic Light Company of New York on July 8, 1878. This was by way of a partnership with Man supplying money to Sawyer for experiments. This was the first formally established electric-lighting company in the United States and was the first company organized specifically to manufacture and sell incandescent electric light bulbs to put into a system. Man and Sawyer patented the first practical electrical distribution system for incandescent electric lighting and gave the patents to the company. The United States Electric Lighting Company was organized in 1878, weeks after the Electro-Dynamic Company. Thomas Edison's electric lighting discoveries were first shown in September 1878. The Edison Electric-Light Company of New York was organized on October 17, three months after the Electro-Dynamic Company was formally established. The company had a patent for an electrical distribution system of which Sawyer filed June 27, 1877, and which was granted August 14, 1877. The following extracts from the specifications of his patent show the direction of his early work. The object of his invention was to supply the streets blocks or buildings of a town or city in a practicable manner with any desired quantity of electricity for the purposes of electrical illumination, electroplating, electric heating, and the running of electric motors. The strategy of the invention was to place the electrical generator in any convenient locality where there can be placed electrical wiring over or under ground to the streets blocks or buildings in which the electric current is to be utilized. 
The advantages of the invention was to enable householders or businesses to obtain a supply of electricity for any purposes without the care and inconvenience of maintenance of local galvanic batteries, that it reduces the cost of electricity considerably to consumers, and that it rendered practical lighting of buildings by electricity. The object of Sawyer's invention was to supply streets and houses with electricity for local use, the same as natural gas or water was supplied to streets and houses at the present time, so that whenever electricity was desired, it was available from a switched supply. A week later Sawyer obtained a second patent for an electric lamp or burner on August 21, 1877. The invention consisted of an arrangement and combination of parts whereby it enabled to place several electric lights with carbon filaments in a single circuit and to get rid of the use of arcing carbon points normally employed in electric lights at the time. The purpose of the electric current was to heat to incandescence a filament wire in a glass bulb that had an inert gas and a glowing light resulted. Other investor-partners of the Electro-Dynamic Company included Hugh McCulloch (Man's uncle), William Hercules Hays, James P. Kernochan, Lawrence Myers, and Jacob Hays. Sawyer was around 28 years old and Man about 52 years old at the time the company was formed. They planned on lighting New York City with electricity for one-fortieth the cost of gas lighting. The new company started with capital of $10,000 cash and $290,000 of scrip. It was formed for the purpose of the production of light and power by means of electricity for the lighting of streets and buildings. The company was to make all the equipment necessary to generate and distribute electricity. The distribution of electricity produced by the company was not only for lighting, but for other purposes as well. The company was the first in the United States specifically organized for the manufacture and sale of incandescent electric light bulbs. In 1878, the company demonstrated an electric light that was the invention of Sawyer and Man. An exhibition was set up in New York City on October 29, 1878. The same exhibition was mentioned several weeks later in a newspaper of Princeton, Minnesota, and Bismarck, North Dakota. The lamp was described as a strip of pencil carbon graphite connected with two wires to an electric generator. The carbon strip was in a hermetically sealed glass bulb that was filled with nitrogen gas. When electricity was applied, the internal strip developed a temperature of between 5,000 and . Since there was no oxygen in the glass globe the carbon filament did not burn out and produced light instead when it got hot. The demonstration consisted of five electric light bulbs hanging from chandeliers in an office building at the corner of Elm and Walker streets. Wires came from the electric lights and went to an adjacent room where there was a generator set up to produce electricity. The wires passed through keyholes to the adjoining room. A key was put into the keyhole and turned to switch on the electric current. As the key was turned further around, the electric lights got brighter. This switch idea was demonstrated with all five chandeliers with the electric lamps. An electric meter to measure the amount of electricity used in an office or house for billing purposes was also demonstrated. What was unique to Man and Sawyer's lighting system patent No. 
205,303, dated June 5, 1878, was that of a safety switch and current regulator, which was termed the lamp-lighter. Attempts to produce an electric lamp with a carbon filament had been made by others and it had been found impossible to prevent the filament from being destroyed with the application electrical current and the sudden change in temperature of the filament as a consequence of the full voltage applied at once. The lamp-lighter switch and regulator avoided this by automatically regulating the lamps electrical current. In an electric lighting system with an electric generator supplying current to several electric lamps in a Man and Sawyer electrical distribution system there was an independent electromagnetic safety switch of the electrical current. It acted automatically upon the occurrence of any sudden change of resistance condition in the system. A sudden change could be another lamp that burned out and was no longer in the circuit or an accidental short circuit, which then produced a sudden power surge to the remaining lamps still in the circuit. The lamp-lighter safety switch would notice this and instantly turn off the electricity. Then it set up a new combination series of resistors and switched the electricity back on gradually. That regulated the quantity or intensity of the electrical current going to the remaining lamps in the distribution system so that the other lamps would not be destroyed by the sudden power surge. Legal Patents were taken out by Man and Sawyer for the incandescent electric lamp and all the items needed for electric current distribution for electrifying a large number of lights. The patents were for the benefit of Electro-Dynamic Light Company of New York. Man and Sawyer were involved in many legal actions between 1880 and 1884 to protect these patents for electric lighting and electrical power distribution. One such patent was their invention of the incandescent electric lamp formed of carbonized paper. The patent was originally decided for Man and Sawyer on January 20, 1882. It was later referred back to the Examiner of Interferences on the request of Edison who indicated that he had new testimony to offer that was relevant and should be looked at. The Examiner of this time reviewed Edison's testimony at length and went over all the additional points. He held that Man and Sawyer must be adjudged to be the prior inventors of the electric light, as was already decided twice before. He determined after a careful reexamination of the entire record it must be held that Man and Sawyer had put into practice the usage of the electric light at least by the autumn of 1878 and that earliest date the invention can be shown put into use by Edison is a year later. This settled permanently the long contested conflict over the question of who was first in the invention of the arch-shaped fibrous carbon filament electric lamp, which occupied the attention of the U.S. Patent Office for nearly five years to make this final determination. The Electro-Dynamic Light Company was no longer in existence after 1882. See also History of electronic engineering References Sources 1878 establishments in New York (state) Companies established in 1878 Energy companies established in 1878 History of science and technology in the United States 1881 disestablishments in New York (state) Companies disestablished in 1881 Electric power transmission History of electrical engineering
Electro-Dynamic Light Company
Engineering
1,952
1,889,610
https://en.wikipedia.org/wiki/TRON%20%28encoding%29
TRON Code is a multi-byte character encoding used in the TRON project. It is similar to Unicode but does not use Unicode's Han unification process: each character from each CJK character set is encoded separately, including archaic and historical equivalents of modern characters. This means that Chinese, Japanese, and Korean text can be mixed without any ambiguity as to the exact form of the characters; however, it also means that many characters with equivalent semantics will be encoded more than once, complicating some operations. TRON has room for 150 million code points. Separate code points for Chinese, Korean, and Japanese variants of the 70,000+ Han characters in Unicode 4.1 (if that were deemed necessary) would require more than 200,000 code points in TRON. TRON includes the non-Han characters from Unicode 2.0, but it has not been keeping up to date with recent editions to Unicode as Unicode expands beyond the Basic Multilingual Plane and adds characters to existing scripts. The TRON encoding has been updated to include other recent code page updates like JIS X 0213. Fonts for the TRON encoding are available, but they have restrictions for commercial use. Structure Each character in TRON Code is encoded as two bytes (as long as it is present in the current encoding plane). Similarly to ISO/IEC 2022, the TRON character encoding handles characters in multiple character sets within a single character encoding by using escape sequences, referred to as language specifier codes, to switch between planes of 48,400 code points. Character sets incorporated into TRON Code include existing character sets such as JIS X 0208 and GB 2312, as well as other character sources such as the Dai Kan-Wa Jiten, and some scripts not included in other encodings such as Dongba symbols. Owing to the incorporation of entire character sets into TRON Code, many characters with equivalent semantics are encoded multiple times; for example, all of the kanji characters in the GT Typeface receive their own codepoints, despite many of them overlapping with other kanji character sets that are already included such as JIS X 0208. One such example is the character 亜 (located in Unicode at U+4E9C) which appears in the JIS X 0208 region at , the GT Typeface region at , and the Dai Kan-Wa Jiten region at . Control codes Bytes in the range 0x00 to 0x20 and 0x7F are reserved for use in control codes. Character codes Characters in each plane are divided into four zones. Each zone is allocated separately; for example, in plane 1 JIS X 0208 characters reside in Zone A starting at 0x2121, JIS X 0213 characters reside in both Zone A and Zone B, and GB 2312 characters reside in Zone C starting at 0x2180. TRON code points are notated as "X-YYYY", where "X" is the plane number in decimal and "YYYY" is the code point in hexadecimal. Alternatively, the notation "0xNNYYYY" can be used, where "NN" is the second byte in hexadecimal of the language specifier code. A text format "&TNNYYYY;" can be used to denote a TRON code point in ASCII text, in a similar manner to numeric character references in HTML, SGML or XML. 
However, a standard and conforming HTML or XML parser would treat them as named entities, that can't be directly and easily mapped to valid and unambiguous sequences of code points in the UCS, without an extensive DTD to define them (possibly by using some private use characters for TRON escapes, or Unicode variation selectors mapped to TRON characters for encoding different TRON characters represented as the same character in the UCS): a different SGML-based parser will be needed to support the TRON text format in a way interoperable with standard UTF's for the UCS. Language specifier codes Language specifier codes are prefixed with 0xFE. Valid suffixes are 0x21 to 0x7E (referencing planes 1 to 94) and 0x80 to 0xFE (for future planes), many of which are unallocated. Special and escape codes Special codes are prefixed with 0xFF. Planes The following are the planes allocated for use in TRON Code, along with their corresponding language specifier codes and a description of the character sets included in each plane. Planes 11 to 15 were originally allocated to store the Mojikyō character set, but disputes have led to the planes being excluded. All other planes up to 31 are currently reserved for future allocation. See also TRON project BTRON ITRON External links TRONコード体系 Tron code system in BTRON specification document TRON文字収録センター Tron character collection center 超漢字 Operating system with BTRON standard GT明朝 Tron GT-Mincho font ITRON Project Archive Active TRON character page The Handling of Chinese Characters and TRON Code References Character sets TRON project
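The plane and code-point structure described above can be illustrated with a short sketch. It is a simplification: only planes 1 to 94 (language specifier suffixes 0x21 to 0x7E) are handled, higher planes and the full TRON framing are ignored, and the example code point is simply the start of Zone A mentioned above.

```python
# Rough sketch of the two-byte structure described above, limited to planes 1-94.

def parse_notation(code: str) -> tuple[int, int]:
    """Parse 'X-YYYY' (plane in decimal, code point in hex), e.g. '1-2121'."""
    plane, point = code.split("-")
    return int(plane), int(point, 16)

def encode(code: str) -> bytes:
    """Language specifier (0xFE + suffix) followed by the two character bytes."""
    plane, point = parse_notation(code)
    if not 1 <= plane <= 94:
        raise ValueError("only planes 1-94 handled in this sketch")
    suffix = 0x20 + plane              # plane 1 -> 0x21, plane 94 -> 0x7E
    return bytes([0xFE, suffix, point >> 8, point & 0xFF])

# Start of Zone A in plane 1 (where JIS X 0208 characters begin, per the text).
print(encode("1-2121").hex(" "))   # fe 21 21 21
```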
TRON (encoding)
Technology
1,076
66,617,145
https://en.wikipedia.org/wiki/Geometrical%20Product%20Specification%20and%20Verification
Geometrical Product Specification and Verification (GPS&V) is a set of ISO standards developed by ISO Technical Committee 213. The aim of these standards is to develop a common language to specify macro geometry (size, form, orientation, location) and micro-geometry (surface texture) of products or parts of products so that the language can be used consistently worldwide. Background GPS&V standards cover: dimensional specifications macrogeometrical specifications (form, orientation, location and run-out) surface texture specifications measuring equipment and calibration requirements uncertainty management for measurement and specification acceptance Other ISO technical committees are strongly related to ISO TC 213. ISO Technical Committee 10 is in charge of the standardization and coordination of technical product documentation (TPD). The GPS&V standards describe the rules to define geometrical specifications which are further included in the TPD. The TPD is defined as the: "means of conveying all or part of a design definition or specification of a product". The TPD can be either conventional documentation made of two-dimensional engineering drawings or documentation based on Computer-aided design (CAD) models with 3D annotations. The ISO rules for writing the documentation are mainly described in the ISO 128 and ISO 129 series, while the rules for 3D annotations are described in ISO 16792. ISO Technical Committee 184 develops standards that are closely related to GPS&V standards. In particular, ISO TC 184/SC4 develops the ISO 10303 standard, known as the STEP standard. GPS&V should not be confused with ASME Y14.5, which is often referred to as Geometric Dimensioning and Tolerancing (GD&T). History and concepts History ISO TC 213 was created in 1996 by merging three previous committees: ISO Technical Committee 10 Sub-committee 5 (ISO/TC 10/SC5) Geometrical Tolerancing ISO Technical Committee 57 (ISO/TC 57) Surface Texture ISO Technical Committee 3 (ISO/TC 3) Limits and fits Operation GPS&V standards are built on several basic operations defined in ISO 17450-1:2011: skin model partition extraction filtration association collection construction reconstruction reduction Those operations are intended to completely describe the process of tolerancing from the point of view of the design and from the point of view of the measurement. They are presented in the ISO 17450 standard series. Some of them are further described in other standards, e.g. the ISO 16610 series for filtration. Those concepts are based on academic works. The key idea is to start from the real part with its imperfect geometry (skin model) and then to apply a sequence of well defined operations to completely describe the tolerancing process. The operations are used in the GPS&V standards to define the meaning of dimensional, geometrical or surface texture specifications. Skin model The skin model is a representation of the surface of the real part. The model in CAD systems describes the nominal geometry of the parts of a product. The nominal geometry is perfect. However, geometrical tolerancing has to take into account the geometrical deviations that arise inevitably from the manufacturing process in order to limit them to what is considered acceptable by the designer for the part and the complete product to be functional. This is why a representation of the real part with geometrical deviations (skin model) is introduced as the starting point in the tolerancing process. Partition The skin model is a representation of a whole real part. 
However, the designer very often, if not always, needs to identify some specific geometrical features of the part to apply well-suited specifications. The process of identifying geometrical features from the skin model or the nominal model is called a partition. The standardization of this operation is a work in progress in ISO TC 213 (ISO 18183 series). Several methods can be used to obtain a partition from a skin model. Extraction The skin model and the partitioned geometrical features are usually considered as continuous; however, it is often necessary when measuring the part to consider only points extracted from a line or a surface. The process of, e.g., selecting the number of points, their distribution over the real geometrical feature and the way to obtain them is part of the extraction operation. This operation is described in ISO 14406:2011 Filtration Filtration is an operation that is useful for selecting features of interest from other features in the data. This operation is heavily used for surface texture specifications; however, it is a general operation that can be applied to define other specifications. This operation is well known in signal processing, where it can be used for example to isolate some specific wavelengths in a raw signal. Filtration is standardized in the ISO 16610 series, where many different filters are described. Association Association is useful when we need to fit an ideal (perfect) geometrical feature to a real geometrical feature, e.g. to find a perfect cylinder that approximates a cloud of points that have been extracted from a real (imperfect) cylindrical geometrical feature. This can be viewed as a mathematical optimization process. A criterion for optimization has to be defined. This criterion can be, for example, the minimisation of a quantity such as the sum of the squares of the distances from the points to the ideal surface. Constraints can also be added, such as a condition for the ideal geometrical feature to lie outside the material of the part or to have a specific orientation or location with respect to another geometrical feature. Different criteria and constraints are used as defaults throughout the GPS&V standards for different purposes, such as geometrical specification on geometrical features or datum establishment. However, standardization of association as a whole is a work in progress in ISO TC 213. Collection Collection is a grouping operation. The designer can define a group of geometrical features that are contributing to the same function. It could be used to group two or more holes because they constitute one datum used for the assembly of a part. It could also be used to group nominally planar geometrical features that are constrained to lie inside the same flatness tolerance zone. This operation is described throughout several GPS&V standards. It is heavily used in ISO 5458:2018 for grouping planar geometrical features and cylindrical geometrical features (holes or pins). The collection operation can be viewed as applying constraints of orientation and/or constraints of location among the geometrical features of the considered group. Construction Construction is described as an operation used to build ideal geometrical features with perfect geometry from other geometrical features. An example, given in ISO 17450-1:2011, is the construction of a straight line resulting from the intersection of two perfect planes. 
No specific standard addresses this operation, however it is used and defined throughout a lot of standards in GPS&V system. Reconstruction Reconstruction is an operation allowing the build of a continuous geometrical feature from a discrete geometrical feature. It is useful for example when there is a need to obtain a point between two extracted points as can be the case when identifying a dimension between two opposite points in a particular section in the process of obtaining a linear size of a cylinder. The reconstruction operation is not yet standardized in the GPS&V system however the operation has been described in academic papers Reduction Reduction is an operation allowing to compute a new geometrical feature from an existing one. The new geometrical feature is a derived geometrical feature. Dimensional specification Dimensional tolerances are dealt with in ISO 14405: ISO 14405-1:2016 Linear sizes ISO 14405-2:2018 Dimensions other than linear or angular sizes ISO 14405-3:2016 Angular sizes The linear size is indicated above a line ended with arrows and numerical values for the nominal size and the tolerance.The linear size of a geometrical feature of size is defined by default, as the distances between opposite points taken from the surface of the real part. The process to build both the sections and the directions needed to identify the opposite points is defined in ISO 14405-1 standard. This process includes the definition of an associated perfect geometrical feature of the same type as the nominal geometrical feature. By default a least-squares criterion is used. This process is defined only for geometrical features where opposite points exist. ISO 14405-2 illustrates cases where dimensional specification are often misused because opposite points don't exist. In these cases, the use of linear dimensions is considered as ambiguous (see example). The recommendation is to replace dimensional specifications with geometrical specifications to properly specify the location of a geometrical feature with respect to an other geometrical feature, the datum feature (see examples). Angular sizes are useful for cones, wedges or opposite straight lines. They are defined in ISO 14405-3. The definition implies to associate perfect geometrical features e.g. planes for a wedge and to measure the angle between lines of those perfect geometrical features in different sections. The angular sizes are indicated with an arrow and numerical values for the nominal size and the tolerance. It is to be noted that angular size specification is different from angularity specification. Angularity specification controls the shape of the toleranced feature but it is not the case for angular size specification. Size of a cylinder We consider here the specification of a size of a cylinder to illustrate the definition of a size according to ISO 14405-1. The nominal model is assumed to be a perfect cylinder with a dimensional specification of the diameter without any modifiers changing the default definition of size. 
According to ISO 14405-1:2016 annex D, the process to establish a dimension between two opposite points starting from the real surface of the manufactured part which is nominally a cylinder is as follows: partition of the real surface to identify the portion of the whole surface of the part that is submitted to the specification extract points from the partitioned surface reconstruct the surface from extracted points if the number of extracted points is not infinite filter the reconstructed surface associate a perfect cylinder to the filtered surface using a least-squares criterion identify the straight line which is the axis of the associated cylinder built a plane perpendicular to the associated cylinder axis to identify a cross section consider the section line which is the intersection of the plane perpendicular to the associated cylinder axis, with the filtered surface associate a perfect circle to the section line using a least-squares criterion consider a straight line in the cross section passing through the centre of the associated circle two opposite points are defined as the intersection between the straight line and the section line See example hereafter for an illustration. Dimension with envelope requirement Ⓔ The envelope requirement is specified by adding the symbol Ⓔ after the tolerance value of a dimensional specification. The symbol Ⓔ modifies the definition of the dimensional specification in the following way (ISO 14405-1 3.8): the dimensional specification is applied between two opposite points for the least material side of the dimensional specification, the maximum inscribed dimension specification (for internal geometrical feature like a cylindrical hole) or the minimum circumscribed dimension specification (for external geometrical feature like a cylindrical pin) is applied. The maximum inscribed dimension for a nominally cylindrical hole is defined as the maximum diameter of a perfect cylinder associated to the real surface with a constraint applied to the associated cylinder to stay outside the material of the part. The minimum circumscribed dimension for a nominally cylindrical pin is defined as the minimum diameter of a perfect cylinder associated to the real surface with a constraint applied to the associated cylinder to stay outside the material of the part. See example hereafter for an illustration. Use of the envelope requirement The use of the envelope symbol Ⓔ is closely related to the very common function of fitting parts together. A dimensional specification without envelope on the two parts to be fitted is not sufficient to ensure the fitting because the shape deviation of the parts is not limited by the dimensional specifications. The fitting of a cylindrical pin inside a cylindrical hole, for example requires to limit the sizes of both geometrical features but also to limit the deviation of straightness of both geometrical features as it is the combination of the size specification and the geometrical specification (straightness) that will allow the fitting of the two parts. The use of the envelope requirement on a cylindrical hole allows to accept only the combinations of size and shape that guarantee a minimum passage for a perfect cylinder. The use of the envelope requirement on a cylindrical pin allows to accept only the combinations of size and shape that guarantee that the material of the pin is inside a maximum perfect cylinder. 
Then the cylindrical pin and the cylindrical hole will fit even in the worst conditions without over-constraining the parts with specific form specifications. It is to be noted that the use of dimensional size with envelope constrains neither the orientation nor the location of the parts. The use of a geometrical specification together with the maximum material requirement (symbol Ⓜ) makes it possible to ensure fitting of parts when additional constraints on orientation or location are required. ISO 2692:2021 describes the use of the maximum material modifier. Form, orientation, location and run-out specifications GPS&V standards dealing with geometrical specifications are listed below: ISO 1101:2017 Tolerances of form, orientation, location and run-out ISO 5459:2011 Datums and datum systems ISO 5458:2018 Pattern and combined geometrical specification ISO 1660:2017 Profile tolerancing The word geometry, as used in this paragraph, is to be understood as macrogeometry as opposed to surface texture specifications, which are dealt with in other standards. The main source for geometrical specifications in GPS&V standards is ISO 1101. ISO 5459 can be considered a companion standard to ISO 1101, as it defines the datums which are heavily used in ISO 1101. ISO 5458 and ISO 1660 focus only on subsets of ISO 1101. However, those standards are very useful for the user of GPS&V systems as they cover very common aspects of geometrical tolerancing, namely groups of cylinders or planes and profile specifications (lines and surfaces). A geometrical specification defines the following three objects: toleranced features datums, if they are specified tolerance zones The steps to read a geometrical specification can be summarised as follows: identify the toleranced feature as a portion of the skin model or a feature that can be built from the skin model like an imperfect line representing an axis for example, build the specified datum by first associating perfect geometrical features to a (real) datum feature and then building a situation feature from those associated datums to obtain the specified datum, build the tolerance zone as a perfect volume or surface that can be constrained in orientation or location from the datum, check whether the toleranced feature lies entirely inside the tolerance zone. Toleranced feature Toleranced features are defined in ISO 1101. The toleranced feature is a real geometrical feature with imperfect geometry identified either directly from the skin model (integral feature) or by a process starting from the skin model (derived feature). The integral feature is a portion of the skin model directly identified by a partition with extraction and possibly filtration. The derived feature is built from the skin model from a specific process that is defined by default in GPS&V standards. For example, when the axis of a cylinder is indicated by the geometrical specification (see example) then the toleranced feature is a line made of the centres of associated circles in each section. The sections are defined to be perpendicular to the axis of a cylinder associated to the integral feature. The least-squares criterion is used by default. Whether the toleranced feature is an integral feature or a derived feature depends upon the precise writing of the corresponding specification: if the arrow of the leader line of the specification is in the prolongation of a dimension line, the toleranced feature is a derived feature; otherwise it is an integral feature. A Ⓐ modifier can also be used in the specification to designate a derived feature. 
The nominal toleranced feature is a geometrical feature with perfect geometry defined in the TPD corresponding to the toleranced feature. Datum Datums are defined in ISO 5459 as a simulation of a contact partner at a single part specification, where the contact partner is missing. The contacts „planar touch“ and „fit of lineare size“ are covered by defaults. With this simulation a specification mistake appears against the nature function, which appears in assembly constrains. In essence, the datum is used to link the toleranced feature (imperfect real geometry) to the toleranced zone (perfect geometry). As such the datum object is a three folded object: the datum feature is a geometrical feature of imperfect geometry obtained from the skin model (real part) by a partition. The nominal datum is identified on the nominal model by a triangle connected to a frame containing the name of the datum (capital letter), the associated datum feature is obtained by associating a geometrical feature with perfect geometry to the datum feature (real). The default process and criterion to be applied for the association is defined in ISO 5459. The criterion can be different for different geometrical features. the specified datum is a situation feature built from the associated datums. The link between the orientation, location or run-out specification and the datums is specified in the geometrical specification frame as follows: the primary datum is in the third cell of a geometrical specification, if any; the secondary datum is in the fourth cell of the geometrical specification, if any; the tertiary datum is in the fifth cell of the geometrical specification, if any. Some geometrical specification may not have any datum section at all (e.g. form specification). The content of each cell can be either: a single datum identified by a capital letter such as 'A' (or several capital letters without separators like 'AA' or 'AAA') or a common datum identified by a sequence of capital letters with a dash separator such as A-B (or a sequence of several capital letters separated by dashes like 'AA-BBB'). The process to build a datum system is first described and the process for building a common datum follows. Datum system A datum is identified by at most three cells in the geometrical specification frame corresponding to primary, secondary and tertiary datums. For the primary, secondary and tertiary datum, a perfect geometry feature of the same kind as the nominal feature is associated to the real feature as described hereafter: The primary datum is built by associating a feature of perfect geometry with the default association. In ISO 5459:2011 for a plane, the default association is to minimize the maximum distance between the associated feature (a perfect plane) and the real feature with a constraint for the associated feature to stay outside the material of the part. The secondary datum is built in the same way as the primary datum with an additional constraint for the associated feature to be oriented from the primary datum as described on the nominal model. The tertiary datum is built in the same way as the secondary datum with an additional constraint for the associated feature to be oriented from the secondary datum as described on the nominal model. The result is a set of associated features. Finally, this set of associated features is used to build a situation feature which is the specified datum. 
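As an aside, the association operation invoked repeatedly above can be illustrated with a small numeric sketch that fits a plane to measured points. It uses a total-least-squares fit for simplicity; note that the normative default association for a primary datum plane in ISO 5459 is different (a minimax fit with the plane constrained to stay outside the material), so this is only an illustration of the general operation. Point coordinates are made up.

```python
# Minimal sketch of the "association" operation: fitting an ideal plane to
# points extracted from a real, imperfect surface. Illustrative only; not the
# ISO 5459 default criterion for datum planes.
import numpy as np

def associate_plane(points: np.ndarray):
    """Return (centroid, unit normal) of the least-squares plane through N x 3 points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    return centroid, normal

def deviations(points: np.ndarray, centroid, normal):
    """Signed distances of each measured point from the associated plane."""
    return (points - centroid) @ normal

# Hypothetical extracted points from a nominally flat face (units: mm).
pts = np.array([[0, 0, 0.01], [10, 0, -0.02], [0, 10, 0.00],
                [10, 10, 0.03], [5, 5, -0.01]])
c, n = associate_plane(pts)
d = deviations(pts, c, n)
print("flatness-like spread:", d.max() - d.min())
```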
Common datum The datum features are identified on the skin model from the datum component in the dash separated list of nominal datum appearing in a particular cell of an orientation or location specification. The common datum can be used as primary, secondary or tertiary datum. In either cases, the process to build a common datum is the same however additional orientation constraints shall be added when the common datum is used as secondary or tertiary datum as is done for datum systems and explained hereafter. The criterion for association of common datum is applied on all the associated features together with the following constraints: external material constraints orientation and location constraints between the associated features of the common datum addition orientation constraint with respect to preceding datum in the hierarchy. The result is a set of associated feature. Finally, this set of associated features is used to build a situation feature which is the specified datum. Situation feature The final step in the datum establishment process is to combine the associated features to obtain a final object defined as situation feature which is identified to the specified datum (ISO 5459:2011 Table B.1). It is a member of the following set: a point a straight line a plane a straight line containing a point a plane containing a straight line a plane containing a straight line and a point How to build the situation features and therefore the specified datum, is currently mainly defined through examples in ISO 5459:2011. More specific rules are under development. The specified datum concept is closely related to classes of surfaces invariant through displacements. It has been shown that surfaces can be classified according to the displacements that let them invariant. The number of classes is seven. If a displacement let a surface invariant then this displacement cannot be locked by the corresponding specified datum. So the displacement that are not invariant are used to lock specific degrees of freedom of the tolerance zone. For example a set of associated datums made of three mutually perpendicular planes corresponds to the following situation feature: a plane containing a straight line containing a point. The plane is the first associated plane obtained, the line is the intersection between the second associated plane and the first one and the point is the intersection between the line and the third associated plane. The specified datum is therefore belonging to the complex invariance class () and all the degrees of freedom of a tolerance zone can be locked with this specified datum. The invariance class graphic symbols are not defined in ISO standards but only used in literature as a useful reminder. An Helicoidal class () can also be defined however it is generally replaced with a cylindrical class in real world applications. Tolerance zone Tolerance zones are defined in ISO 1101. The tolerance zone is a surface or a volume with perfect geometry. It is a surface when it is intended to contain a tolerance feature which is a line. It is a volume when it is intended to contain a tolerance feature which is a surface It can often be described as a rigid body with the following attributes: the shape, is in most cases the volume between two opposite parallel planes (resp. 
the area between two parallel lines) or a cylinder if the symbol ⌀ is preceding the numerical value in the second section of the geometrical specification frame or a sphere if the symbol S⌀ is used, the size, given by a numerical value in the second section of the geometrical specification frame orientation constraints with respect to the specified datum from the geometrical specification frame if the geometrical specification is an orientation or a location specification, location constraints with respect to the specified datum from the geometrical specification frame if the geometrical specification is a location specification, orientation and location constraints between tolerance zones if the modifier CZ ('Combined Zone') is indicated in the second cell of the geometrical specification. Theoretical Exact Dimension (TED) TED are identified on a nominal model by dimensions with a framed nominal value without any tolerance. Those dimensions are not specification by themselves but are needed when applying constraints to build datum or to determine the orientation or location of the tolerance zone. TED can also be used for other purposes e.g. to define the nominal shape or dimensions of a profile. When applying constraints generally two types of TED are to be taken into account: explicit TED which are written on an engineering drawing or that may be obtained by querying a CAD model. implicit TED which are the distance of 0 mm for two coincident lines, 0° (modulo 180°) for parallel lines or 90° (modulo 180°) for perpendicular lines Geometrical specification families The geometrical specifications are divided into three categories: form orientation location Run-out specification is another family that involves both form and location. Examples Presentation This paragraph contains examples of dimensional and geometrical specification to illustrate the definition and use of dimensional and positional specifications. The dimensions and tolerance values (displayed in blue in the figures) shall be numerical values on actual drawings. d, l1, l2 are used for length values. Δd is used for a dimensional tolerance value and t, t1, t2 for positional tolerance values. For each example we present: the drawing showing the geometry of the nominal model and a specification figures illustrating the meaning of the specification on a particular real part with deviations The deviations are enlarged compared to actual parts in order to show as clearly as possible the steps necessary to build the GPS&V operators. The first angle projection is used in technical drawing. Dimensional specifications Diameter of a cylindrical part The drawing above shows a cylindrical part with the specification of the diameter. The nominal value d and the tolerance value Δd shall be replaced with numerical values on an actual drawing. The real part above (1) in orange is shown with its deviation. The green lines (2) represent an associated cylinder. The red axis line (3) represents the axis of the associated cylinder. The blue lines (4) represent two particular sections. All sections (an infinite number) shall be considered theoretically. At the verification stage only some sections will be measured introducing uncertainty in the result. A section of the real part is represented above with the real line in orange (4). The blue line (3) is an associated circle. The blue cross (2) is the centre of the associated circle. The green cross (1) represents the axis of the associated cylinder shown in green in the real part figure. 
The two dots (6) represent two opposite points on the real surface. The dimension (5) is one of the local dimension measured. Diameter of a cylindrical part with envelope Ⓔ The drawing shows a cylindrical part with the specification of the diameter with a modifier Ⓔ for the envelope requirement. The nominal value d and the tolerance value Δd shall be replaced with numerical values on an actual drawing. The real part (1) in orange is shown with its deviation. The green lines (2) represent an associated cylinder. The red axis line (3) represents the axis of the associated cylinder. The blue lines (4) represent two particular sections. All sections shall be considered. The orange dimensions (6) represent dimensions in particular sections. The purple line (5) represents the envelope cylinder (perfect cylinder). The dimension in purple (7) is the dimension of the envelope, specifically d+Δd/2. The verification is twofold: the local dimensions shall be greater than d-Δd, the surface of the real part shall fit into the envelope. Ambiguous dimension The drawing above shows a part with a dimensional specification. The red cross over this specification means that this type of specification is discouraged in ISO 14405-2 because it is not possible to find opposite points over the complete surface extent. The nominal value d and the tolerance value Δd shall be replaced with numerical values on an actual drawing. The real part above in orange is shown with its deviation. The upper dimension (orange) has two opposite points and therefore, could be defined however the lower one is missing an opposite point so that the dimensional specification is considered ambiguous and should be replaced with a geometrical specification. This example is often surprising for new practitioners of GPS&V. However, it is a direct consequence of the definition of a linear dimension in ISO 14405-1. The function targeted here is probably to locate the two planes, therefore a location specification on one surface with respect to the other surface or the location of the two surfaces with respect to one another is considered the right way to achieve the function. See examples. Positional specifications Location of a plane with respect to another plane (case 1) The drawing above shows a part with a location specification with respect to the datum named A which is indicated on the left planar surface. The real part below in orange is shown with its deviation. The process to build or identify the toleranced feature, the specified datum and the tolerance zone is described in the table below. This specification could be useful when one surface (datum plane in this case) has a higher priority in the assembly process. For example a second part could be required to fit inside the slot being guided by the plane where the datum has been indicated. The part is not conformant to the specification for this particular real part, as the toleranced feature (orange line segment) is not included in the tolerance zone (green). Location of a plane with respect to another plane (case 2) The drawing above shows a part with a location specification with respect to the datum named A which is indicated on the right planar surface. The real part below in orange is shown with its deviation. The process to build or identify the tolerance feature, the specified datum and the tolerance zone is described in the table here after. 
This case 2 is similar to case 1 above however the toleranced feature and the datum are switched so that the result is totally different as explained above. This specification could be useful when one surface (datum plane) has a higher priority over the other surface in the assembly process. For example a second part could be required to fit inside the slot being guided by the plane where the datum has been indicated. The part is not conformant to the specification for this particular real part, as the toleranced feature (orange line segment) is not included in the tolerance zone (green) Location of planes with respect to one another (case 3) The drawing above shows a part with a location specification with a CZ symbol. No datums are indicated on purpose. The real part above in orange is shown with its deviation. The building or identification of the toleranced feature and the tolerance zone is described in the table hereafter This specification could be useful when the two surfaces (plane in this case) have the same priority in the assembly process. For example a second part could be required to fit inside the slot being guided by the two planes. The part is conformant to the specification for this particular real part, as the toleranced feature (two orange line segments) is included in the tolerance zone (green). Location of a hole with respect to the edges of a plate The drawing above shows a part with a location specification for a hole with respect to a system of datums. The real part below in orange is shown with its deviation. The process to identify and build the toleranced feature, the specified datum and the tolerance zone is indicated below This specification could be useful when the holes is actually located from the edges of the plates in an assembly process and where the A surface has a higher priority over B. If the assembly process is modified then the datum specification shall be adapted in accordance. The order of the datum is important in a datum system as the resulting specified datum can be very different. The part is conformant to the specification for this particular real part, as the toleranced feature (purple line on the left, purple dot on the right) is included in the tolerance zone (green). Surface texture ISO 1302:2002 Indication of surface texture in technical product documentation Measuring equipment and calibration requirements ISO 14978:2018 General concepts and requirements for GPS measuring equipment ISO 10360 Acceptance and reverification tests for coordinate measuring machines (CMM) Uncertainty management for measurement and specification acceptance ISO 14253-1:2017 Inspection by measurement of workpieces and measuring equipment - Part 1: Decision rules for verifying conformity or nonconformity with specifications ISO 18391:2016 Population specification Notes References External links GPS Booklet TC 213 web site ISO standards Metrology Geometric measurement
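Returning to the "Diameter of a cylindrical part" example above, the following simplified sketch associates a least-squares circle to the points of one cross section and forms local two-point diameters through the fitted centre. It is an illustration only: the algebraic (Kasa) circle fit and the angle-based pairing of opposite points are simplifications of the ISO 14405-1 default operator, and the section data are made up.

```python
# Simplified numeric sketch of the two-point size idea for one cross section.
import numpy as np

def associate_circle(xy: np.ndarray):
    """Least-squares (Kasa) circle fit: returns (centre, radius) for N x 2 points."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(xy))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([cx, cy]), np.sqrt(c + cx**2 + cy**2)

def two_point_diameters(xy: np.ndarray, centre: np.ndarray):
    """Pair each point with the one closest to the opposite direction and sum radii."""
    rel = xy - centre
    r = np.linalg.norm(rel, axis=1)
    ang = np.arctan2(rel[:, 1], rel[:, 0])
    diams = []
    for i in range(len(xy)):
        opposite = (ang[i] + np.pi) % (2 * np.pi) - np.pi
        j = np.argmin(np.abs(np.angle(np.exp(1j * (ang - opposite)))))
        diams.append(r[i] + r[j])
    return np.array(diams)

# Hypothetical section points from a nominally 20 mm diameter pin (units: mm).
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
pts = np.column_stack([10.0 * np.cos(t), 10.0 * np.sin(t)]) + np.random.normal(0, 0.02, (12, 2))
c, R = associate_circle(pts)
print("associated diameter:", 2 * R)
print("local two-point diameters:", two_point_diameters(pts, c))
```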
Geometrical Product Specification and Verification
Physics,Mathematics
6,454
23,400,395
https://en.wikipedia.org/wiki/Microbial%20electrolysis%20cell
A microbial electrolysis cell (MEC) is a technology related to microbial fuel cells (MFCs). Whilst MFCs produce an electric current from the microbial decomposition of organic compounds, MECs partially reverse the process to generate hydrogen or methane from organic material by applying an electric current. The electric current would ideally be produced by a renewable source of power. The hydrogen or methane produced can be used to produce electricity by means of an additional PEM fuel cell or internal combustion engine. Microbial electrolysis cells MEC systems are based on a number of components: Microorganisms – attached to the anode. The identity of the microorganisms determines the products and efficiency of the MEC. Materials – The anode material in a MEC can be the same as in an MFC, such as carbon cloth, carbon paper, graphite felt, graphite granules or graphite brushes. Platinum can be used as a catalyst to reduce the overpotential required for hydrogen production. The high cost of platinum is driving research into biocathodes as an alternative. Stainless steel plates have also been used as an alternative cathode and anode material. Other materials include membranes (although some MECs are membraneless), and tubing and gas collection systems. Generating hydrogen Electrogenic microorganisms consuming an energy source (such as acetic acid) release electrons and protons, creating an electrical potential of up to 0.3 volts. In a conventional MFC, this voltage is used to generate electrical power. In a MEC, an additional voltage is supplied to the cell from an outside source. The combined voltage is sufficient to reduce protons, producing hydrogen gas. As part of the energy for this reduction is derived from bacterial activity, the total electrical energy that has to be supplied is less than for electrolysis of water in the absence of microbes. Hydrogen production has reached up to 3.12 m3 H2/(m3·d) with an input voltage of 0.8 volts. The efficiency of hydrogen production depends on which organic substances are used. Lactic and acetic acid achieve 82% efficiency, while the values for unpretreated cellulose or glucose are close to 63%. The efficiency of normal water electrolysis is 60 to 70 percent. As MECs convert unusable biomass into usable hydrogen, they can produce 144% more usable energy than they consume as electrical energy. Depending on the organisms present at the cathode, MECs can also produce methane by a related mechanism. Calculations Overall hydrogen recovery was calculated as RH2 = CE·RCat. The Coulombic efficiency is CE = nCE/nth, where nth is the moles of hydrogen that could theoretically be produced and nCE = CP/(2F) is the moles of hydrogen that could be produced from the measured current, CP is the total coulombs calculated by integrating the current over time, F is Faraday's constant, and 2 is the moles of electrons per mole of hydrogen. The cathodic hydrogen recovery was calculated as RCat = nH2/nCE, where nH2 is the total moles of hydrogen produced. Hydrogen yield (YH2) was calculated as YH2 = nH2/ns, where ns is the substrate removal calculated on the basis of chemical oxygen demand. Uses Hydrogen and methane can both be used as alternatives to fossil fuels in internal combustion engines or for power generation. Like MFCs or bioethanol production plants, MECs have the potential to convert waste organic matter into a valuable energy source. 
Hydrogen can also be combined with the nitrogen in the air to produce ammonia, which can be used to make ammonium fertilizer. Ammonia has been proposed as a practical alternative to fossil fuel for internal combustion engines. See also Hydrogen technologies Microbial electrosynthesis Microbial fuel cells Microbial electrolysis carbon capture References M.Y. Azwar, M.A. Hussain, A.K. Abdul-Wahab (2014). Development of biohydrogen production by photobiological, fermentation and electrochemical processes: A review. Renewable and Sustainable Energy Reviews, Volume 31, March 2014, Pages 158–173. http://doi.org/10.1016/j.rser.2013.11.022 External links National Science Foundation The University of Queensland Scientific Blogging Biotechnology Electric power Fuel cells Hydrogen production
Microbial electrolysis cell
Physics,Engineering,Biology
926
22,497,638
https://en.wikipedia.org/wiki/Double%20bond%20rule
In chemistry, the double bond rule states that elements with a principal quantum number (n) greater than 2 for their valence electrons (period 3 elements and higher) tend not to form multiple bonds (e.g. double bonds and triple bonds). Double bonds for these heavier elements, when they exist, are often weak due to poor orbital overlap between the n>2 orbitals of the two atoms. Although such compounds are not intrinsically unstable, they tend to dimerize or even polymerize. Moreover, the multiple bonds of the elements with n=2 are much stronger than usual, because lone pair repulsion weakens their sigma bonding but not their pi bonding. An example is the rapid polymerization that occurs upon condensation of disulfur, the heavy analogue of O2. Numerous exceptions to the rule exist. Triple bonds Other meanings Another unrelated double bond rule exists that relates to the enhanced reactivity of sigma bonds attached to an atom adjacent to a double bond. In bromoalkenes, the C–Br bond is very stable, but in an allyl bromide, this bond is very reactive. Likewise, bromobenzenes are generally inert, whereas benzylic bromides are reactive. The first to observe the phenomenon was Conrad Laar in 1885. The name for the rule was coined by Otto Schmidt in 1932. References Chemical bonding
Double bond rule
Physics,Chemistry,Materials_science
289
9,550,090
https://en.wikipedia.org/wiki/Globoside
Globosides (also known as globo-series glycosphingolipids) are a sub-class of the lipid class glycosphingolipid with three to nine sugar molecules as the side chain (or R group) of ceramide. The sugars are usually a combination of N-acetylgalactosamine, D-glucose or D-galactose. One characteristic of globosides is that the "core" sugars consist of Glucose-Galactose-Galactose (Ceramide-βGlc4-1βGal4-1αGal), as in the case of the most basic globoside, globotriaosylceramide (Gb3), also known as the Pk antigen. Another important characteristic of globosides is that they are neutral at pH 7, because they usually do not contain neuraminic acid, a sugar with an acidic carboxyl group. However, some globosides with the core structure Cer-Glc-Gal-Gal do contain neuraminic acid, e.g. the globo-series glycosphingolipid "SSEA-4 antigen". The side chain can be cleaved by galactosidases and glucosidases. The deficiency of α-galactosidase A causes Fabry's disease, an inherited metabolic disease characterized by the accumulation of the globoside globotriaosylceramide. Globoside-4 (Gb4) Globoside 4 (Gb4) has been regarded as the receptor for parvovirus B19, based on observations that B19V binds to the structure on thin-layer chromatograms. However, the observed binding does not match the virus surface well, which raised debate over whether or not Gb4 is responsible for productive infection. Additional research using knockout cell lines has shown that although Gb4 is not the direct entry receptor for B19V, it plays a post-entry role in productive infection. Globoside 4 (Gb4) is a type of SSEA (stage-specific embryonic antigen) that is present in cellular development and tumorous tissues, although the mechanism of Gb4 is not completely known. However, a study has shown that Gb4 directly activates the epidermal growth factor receptor through ERK signaling. When the globo-series glycosphingolipid (GSL) was reduced in the experiment, the ERK signaling from the receptor tyrosine kinase was also inhibited. ERK signaling was reactivated with the addition of Gb4, which heightened proliferation of tumorous cells and opened up the possibility of further studies testing Gb4 for potential drugs that can target cancerous cells. Globoside-5 (Gb5) Globoside-5 is also known as stage-specific embryonic antigen 3. References External links Glycolipids Blood antigen systems Transfusion medicine
Globoside
Chemistry
635
41,184,901
https://en.wikipedia.org/wiki/Garfield%20Thomas%20Water%20Tunnel
The Garfield Thomas Water Tunnel is one of the U.S. Navy's principal experimental hydrodynamic research facilities and is operated by the Penn State Applied Research Laboratory. The facility was completed and entered operation in 1949. The facility is named after Lieutenant W. Garfield Thomas Jr., a Penn State journalism graduate who was killed in World War II. For a long time, the Garfield Thomas Water Tunnel was the largest circulating water tunnel in the world. It has been declared a historic mechanical engineering landmark by the American Society of Mechanical Engineers. Today, in addition to many of its Navy projects, the facility's tunnel-based research has expanded into pumps for the Space Shuttle, advanced propulsors for ships, heating and cooling systems, artificial heart valves, vacuum cleaner fans, and other pump and propulsor related products. History After the end of WW II, the US military started investing heavily in higher education nationwide. At the same time, Harvard terminated its Underwater Sound Laboratory (USL), which invented the first acoustical homing torpedo (FIDO); consequently Penn State hired Eric Walker, USL's assistant director, to head its electrical engineering department, and the Navy transferred USL's torpedo division to Penn State, where it became the Ordnance Research Laboratory (ORL). The ORL eventually became the Applied Research Laboratory. The Garfield Thomas Water Tunnel was built at Penn State in cooperation with ORL by the ARL for further torpedo research. Construction was completed on October 7, 1949, and the tunnel began operating six months later. Since then, the facility has expanded into viscosity, sound, wave, and wind research. In 1992, the facility underwent a complete overhaul. Capabilities The facility consists of a number of closed-circuit, closed-jet and open-jet facilities. Water Tunnels The facility operates four water tunnels. Garfield Thomas Water Tunnel The Garfield Thomas Water Tunnel is the facility's largest water tunnel. The 100-foot-long, 32-foot-high, 100,000-gallon tunnel is a closed-circuit, closed-jet design. The system is powered by a 1,491 kW (2,000 hp) pump with a 4-blade adjustable-pitch impeller and can produce a maximum water velocity of 18.29 m/s (40.91 mph). The system is capable of producing pressures between 413.7 and 20.7 kPa. The tunnel is equipped with an array of instruments including: propeller dynamometers, five-hole pressure probes, Pitot probes, lasers, pressure sensors, hydrophones, a planar motion mechanism (PMM), force balances, accelerometers, and acoustic arrays. Smaller Water Tunnels The facility operates two additional smaller water tunnels with diameters of 12 inches and 6 inches. Both are closed-circuit, closed-jet. The 12-inch tunnel is a 150 horsepower (111.8 kW) system capable of producing a maximum water velocity of 24.38 m/s (54.53 mph). The 6-inch tunnel is a 25 hp (18.64 kW) system that can deliver a maximum velocity of 21.34 m/s (47.74 mph). Both tunnels are equipped with lasers, pressure sensors, pressure transducers, and hydrophones. Ultra-High Speed Cavitation Tunnel The facility also has a 1.5-inch closed-circuit, closed-jet cavitation tunnel capable of producing a maximum velocity of 83.8 m/s (187 mph). The stainless steel, 75 hp (55.9 kW) tunnel supports pressures as high as 41.4 kPa and temperatures of 16 °C to 176 °C. Other facilities In addition to the water tunnels, the facility operates an array of wind tunnels, glycerin tunnels, and an anechoic chamber used in many physics problems. 
The Boundary Layer Research Facility (BLRF) operates a 12-inch turbulent pipe flow of glycerine. Additionally, the facility operates a 20 hp (14.91 kW), open-jet, 1,750 rpm axial-flow fan with a 36.58 m/s (81.83 mph) maximum velocity, used for basic engineering research in turbomachinery blading. Another 2.75-meter-diameter, 100 hp (74.6 kW) closed-circuit facility is used specifically for research on the viscous sublayer and for large-scale modeling of turbulent flow of fluids next to a wall. See also Pennsylvania State University Applied Research Laboratory Water tunnel (hydrodynamic) Ship model basin Pennsylvania State University List of historic mechanical engineering landmarks References External links ARL Homepage Ship design Model boats Pennsylvania State University campus United States Navy installations 1949 establishments in Pennsylvania Landmarks in Pennsylvania
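As a quick sanity check of the paired SI and imperial figures quoted for the tunnels above, the conversions can be reproduced with a few lines of Python. The constants below are standard unit-conversion factors, not values from the facility, and the snippet is purely illustrative.

# Illustrative check of the dual-unit figures quoted in the tunnel descriptions above.
HP_TO_KW = 0.7457      # mechanical horsepower to kilowatts
MPS_TO_MPH = 2.23694   # metres per second to miles per hour

print(f"2,000 hp  = {2000 * HP_TO_KW:.0f} kW")      # about 1491 kW
print(f"25 hp     = {25 * HP_TO_KW:.2f} kW")        # about 18.64 kW
print(f"18.29 m/s = {18.29 * MPS_TO_MPH:.2f} mph")  # about 40.91 mph
print(f"83.8 m/s  = {83.8 * MPS_TO_MPH:.0f} mph")   # about 187 mph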
Garfield Thomas Water Tunnel
Physics
947
12,115,249
https://en.wikipedia.org/wiki/Avco-Lycoming%20AGT1500
The Avco-Lycoming AGT1500 is a gas turbine engine. It is the main powerplant of the M1 Abrams series of tanks. The engine was originally designed and produced by the Lycoming Turbine Engine Division in the Stratford Army Engine Plant. In 1995, production was moved to the Anniston Army Depot in Anniston, Alabama, after the Stratford Army Engine Plant was shut down. Specifications Engine output peaks at , with of torque at that peak, which occurs at 3,000 rpm. The turbine can provide torque in excess of at significantly lower RPMs. The engine weighs approximately and occupies a volume of , measuring . The engine can use a variety of fuels, including jet fuel, gasoline, diesel and marine diesel. The engine is a three-shaft machine composed of five sub-modules: Recuperator – a fixed cylindrical regenerative heat exchanger that extracts waste heat from the exhaust gases and uses it to preheat the compressed air Rotating Gas Producer – the five-stage, dual-spool compressor which achieves a 14.5:1 compression ratio at full power, driven by the compressor turbine, which operates with a maximum turbine inlet temperature of Accessory Gearbox – bevel gears that extract from the high-pressure spool to operate the fuel control unit, starter, oil pump, and vehicle hydraulic pump Power Turbines – the first stage of the two-stage power turbine is driven by a variable-geometry nozzle to improve efficiency Reduction Gearbox – reduces power turboshaft speed History Development had started by 1964 with a contract given to Chrysler in 1976, originally as an engine for the later cancelled MBT-70. In the early 1970s, the AGT1500 was developed into the PLT27, a flight-weight turboshaft for use in helicopters. This engine lost to the General Electric GE12 (T700) in three separate competitions to power the UH-60, AH-64, and SH-60. Serial production of the AGT1500 began in 1980; by 1992, more than 11,000 engines had been delivered. In 1986, with the Cold War about to wind down, Textron Lycoming began developing a commercial marine derivative, which they called the TF15. See also Anselm Franz, lead designer of AGT1500 at the early stage References External links AGT1500 Gas Turbine Engine AGT1500 Turbine Technology pdf on Honeywell.com Gas turbines Tank engines Honeywell Aerospace aircraft engines
Avco-Lycoming AGT1500
Technology
503
3,950,489
https://en.wikipedia.org/wiki/Diode%20modelling
In electronics, diode modelling refers to the mathematical models used to approximate the actual behaviour of real diodes to enable calculations and circuit analysis. A diode's I-V curve is nonlinear. A very accurate, but complicated, physical model composes the I-V curve from three exponentials with a slightly different steepness (i.e. ideality factor), which correspond to different recombination mechanisms in the device; at very large and very tiny currents the curve can be continued by linear segments (i.e. resistive behaviour). In a relatively good approximation a diode is modelled by the single-exponential Shockley diode law. This nonlinearity still complicates calculations in circuits involving diodes so even simpler models are often used. This article discusses the modelling of p-n junction diodes, but the techniques may be generalized to other solid-state diodes. Large-signal modelling Shockley diode model The Shockley diode equation relates the diode current of a p-n junction diode to the diode voltage . This relationship is the diode I-V characteristic: , where is the saturation current or scale current of the diode (the magnitude of the current that flows for negative in excess of a few , typically 10^−12 A). The scale current is proportional to the cross-sectional area of the diode. Continuing with the symbols: is the thermal voltage (, about 26 mV at normal temperatures), and is known as the diode ideality factor (for silicon diodes is approximately 1 to 2). When the formula can be simplified to: . This expression is, however, only an approximation of a more complex I-V characteristic. Its applicability is particularly limited in the case of ultra-shallow junctions, for which better analytical models exist. Diode-resistor circuit example To illustrate the complications in using this law, consider the problem of finding the voltage across the diode in Figure 1. Because the current flowing through the diode is the same as the current throughout the entire circuit, we can write down another equation. By Kirchhoff's laws, the current flowing in the circuit is . These two equations determine the diode current and the diode voltage. To solve these two equations, we could substitute the current from the second equation into the first equation, and then try to rearrange the resulting equation to get in terms of . A difficulty with this method is that the diode law is nonlinear. Nonetheless, a formula expressing directly in terms of without involving can be obtained using the Lambert W-function, which is the inverse function of , that is, . This solution is discussed next. Explicit solution An explicit expression for the diode current can be obtained in terms of the Lambert W-function (also called the Omega function). A guide to these manipulations follows. A new variable is introduced as . Following the substitutions : and : rearrangement of the diode law in terms of w becomes: , which using the Lambert -function becomes . The final explicit solution is . With the approximations (valid for the most common values of the parameters) and , this solution becomes . Once the current is determined, the diode voltage can be found using either of the other equations. For large x, can be approximated by . For common physical parameters and resistances, will be on the order of 10^40. Iterative solution The diode voltage can be found in terms of for any particular set of values by an iterative method using a calculator or computer. The diode law is rearranged by dividing by , and adding 1. 
The diode law becomes . By taking natural logarithms of both sides the exponential is removed, and the equation becomes . For any , this equation determines . However, also must satisfy the Kirchhoff's law equation, given above. This expression is substituted for to obtain , or . The voltage of the source is a known given value, but is on both sides of the equation, which forces an iterative solution: a starting value for is guessed and put into the right side of the equation. Carrying out the various operations on the right side, we come up with a new value for . This new value now is substituted on the right side, and so forth. If this iteration converges the values of become closer and closer together as the process continues, and we can stop iteration when the accuracy is sufficient. Once is found, can be found from the Kirchhoff's law equation. Sometimes an iterative procedure depends critically on the first guess. In this example, almost any first guess will do, say . Sometimes an iterative procedure does not converge at all: in this problem an iteration based on the exponential function does not converge, and that is why the equations were rearranged to use a logarithm. Finding a convergent iterative formulation is an art, and every problem is different. Graphical solution Graphical analysis is a simple way to derive a numerical solution to the transcendental equations describing the diode. As with most graphical methods, it has the advantage of easy visualization. By plotting the I-V curves, it is possible to obtain an approximate solution to any arbitrary degree of accuracy. This process is the graphical equivalent of the two previous approaches, which are more amenable to computer implementation. This method plots the two current-voltage equations on a graph and the point of intersection of the two curves satisfies both equations, giving the value of the current flowing through the circuit and the voltage across the diode. The figure illustrates such method. Piecewise linear model In practice, the graphical method is complicated and impractical for complex circuits. Another method of modelling a diode is called piecewise linear (PWL) modelling. In mathematics, this means taking a function and breaking it down into several linear segments. This method is used to approximate the diode characteristic curve as a series of linear segments. The real diode is modelled as 3 components in series: an ideal diode, a voltage source and a resistor. The figure shows a real diode I-V curve being approximated by a two-segment piecewise linear model. Typically the sloped line segment would be chosen tangent to the diode curve at the Q-point. Then the slope of this line is given by the reciprocal of the small-signal resistance of the diode at the Q-point. Mathematically idealized diode Firstly, consider a mathematically idealized diode. In such an ideal diode, if the diode is reverse biased, the current flowing through it is zero. This ideal diode starts conducting at 0 V and for any positive voltage an infinite current flows and the diode acts like a short circuit. The I-V characteristics of an ideal diode are shown below: Ideal diode in series with voltage source Now consider the case when we add a voltage source in series with the diode in the form shown below: When forward biased, the ideal diode is simply a short circuit and when reverse biased, an open circuit. 
If the anode of the diode is connected to 0V, the voltage at the cathode will be at Vt and so the potential at the cathode will be greater than the potential at the anode and the diode will be reverse biased. In order to get the diode to conduct, the voltage at the anode will need to be taken to Vt. This circuit approximates the cut-in voltage present in real diodes. The combined I-V characteristic of this circuit is shown below: The Shockley diode model can be used to predict the approximate value of . Using and : Typical values of the saturation current at room temperature are: for silicon diodes; for germanium diodes. As the variation of goes with the logarithm of the ratio , its value varies very little for a big variation of the ratio. The use of base 10 logarithms makes it easier to think in orders of magnitude. For a current of 1.0mA: for silicon diodes (9 orders of magnitude); for germanium diodes (3 orders of magnitude). For a current of 100mA: for silicon diodes (11 orders of magnitude); for germanium diodes (5 orders of magnitude). Values of 0.6 or 0.7 volts are commonly used for silicon diodes. Diode with voltage source and current-limiting resistor The last thing needed is a resistor to limit the current, as shown below: The I-V characteristic of the final circuit looks like this: The real diode now can be replaced with the combined ideal diode, voltage source and resistor and the circuit then is modelled using just linear elements. If the sloped-line segment is tangent to the real diode curve at the Q-point, this approximate circuit has the same small-signal circuit at the Q-point as the real diode. Dual PWL-diodes or 3-Line PWL model When more accuracy is desired in modelling the diode's turn-on characteristic, the model can be enhanced by doubling-up the standard PWL-model. This model uses two piecewise-linear diodes in parallel, as a way to model a single diode more accurately. Small-signal modelling Resistance Using the Shockley equation, the small-signal diode resistance of the diode can be derived about some operating point (Q-point) where the DC bias current is and the Q-point applied voltage is . To begin, the diode small-signal conductance is found, that is, the change in current in the diode caused by a small change in voltage across the diode, divided by this voltage change, namely: . The latter approximation assumes that the bias current is large enough so that the factor of 1 in the parentheses of the Shockley diode equation can be ignored. This approximation is accurate even at rather small voltages, because the thermal voltage at 300K, so tends to be large, meaning that the exponential is very large. Noting that the small-signal resistance is the reciprocal of the small-signal conductance just found, the diode resistance is independent of the ac current, but depends on the dc current, and is given as . Capacitance The charge in the diode carrying current is known to be , where is the forward transit time of charge carriers: The first term in the charge is the charge in transit across the diode when the current flows. The second term is the charge stored in the junction itself when it is viewed as a simple capacitor; that is, as a pair of electrodes with opposite charges on them. It is the charge stored on the diode by virtue of simply having a voltage across it, regardless of any current it conducts. 
In a similar fashion as before, the diode capacitance is the change in diode charge with diode voltage: , where is the junction capacitance and the first term is called the diffusion capacitance, because it is related to the current diffusing through the junction. Variation of forward voltage with temperature The Shockley diode equation has an exponential of , which would lead one to expect that the forward-voltage increases with temperature. In fact, this is generally not the case: as temperature rises, the saturation current rises, and this effect dominates. So as the diode becomes hotter, the forward-voltage (for a given current) decreases. Here is some detailed experimental data, which shows this for a 1N4005 silicon diode. In fact, some silicon diodes are used as temperature sensors; for example, the CY7 series from OMEGA has a forward voltage of 1.02V in liquid nitrogen (77K), 0.54V at room temperature, and 0.29V at 100 °C. In addition, there is a small change of the material parameter bandgap with temperature. For LEDs, this bandgap change also shifts their colour: they move towards the blue end of the spectrum when cooled. Since the diode forward-voltage drops as its temperature rises, this can lead to thermal runaway due to current hogging when paralleled in bipolar-transistor circuits (since the base-emitter junction of a BJT acts as a diode), where a reduction in the base-emitter forward voltage leads to an increase in collector power-dissipation, which in turn reduces the required base-emitter forward voltage even further. See also Bipolar junction transistor Semiconductor device modelling References Electronic device modeling
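The numerical approaches described in this article (the Lambert W explicit solution, the logarithmic iteration for the diode-resistor circuit, and the small-signal resistance at the Q-point) can be sketched in a few lines of code. The following Python sketch is illustrative rather than code from any reference: the component values are made up for the example, and it assumes the standard Shockley law I = IS(exp(VD/(nVT)) − 1) with a thermal voltage of about 26 mV.

# Illustrative sketch of the diode-resistor circuit solutions discussed above.
# Parameter values are invented for the example.
import numpy as np
from scipy.special import lambertw

Is, n, Vt = 1e-12, 1.0, 0.026   # saturation current (A), ideality factor, thermal voltage (V)
Vs, R = 5.0, 1000.0             # source voltage (V) and series resistance (ohms)
a = n * Vt

# Explicit solution via the Lambert W function:
# I = (n*Vt/R) * W( (Is*R/(n*Vt)) * exp((Vs + Is*R)/(n*Vt)) ) - Is
I_w = (a / R) * lambertw((Is * R / a) * np.exp((Vs + Is * R) / a)).real - Is
Vd_w = Vs - I_w * R

# Iterative solution: Vd = n*Vt*ln((Vs - Vd)/(R*Is) + 1), starting from a guess
Vd = 0.6
for _ in range(100):
    Vd = a * np.log((Vs - Vd) / (R * Is) + 1.0)

print(f"Lambert W: Vd = {Vd_w:.4f} V, I = {I_w * 1e3:.3f} mA")
print(f"Iteration: Vd = {Vd:.4f} V, I = {(Vs - Vd) / R * 1e3:.3f} mA")

# Small-signal resistance at this operating point: r_d = n*Vt / I_Q
print(f"r_d at the Q-point is about {a / I_w:.2f} ohms")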
Diode modelling
Physics
2,589
47,634,311
https://en.wikipedia.org/wiki/Balliol-Trinity%20Laboratories
The Balliol-Trinity Laboratories in Oxford, England, was an early chemistry laboratory at the University of Oxford. The laboratory was located between Balliol College and Trinity College, hence the name. It was especially known for physical chemistry. Chemistry was first recognized as a separate discipline at Oxford University in the 19th century. From 1855, a chemistry laboratory existed in a basement at Balliol College. In 1879, Balliol and Trinity agreed to have a laboratory at the boundary of the two colleges. The laboratory became the strongest of the Oxford college research institutions in chemistry. It remained in operation until the Second World War when a new Physical Chemistry Laboratory (PCL) was constructed by Oxford University in the Science Area. People The following scientists of note worked in the Balliol-Trinity Laboratories: E. J. Bowen Sir John Conroy Sir Harold Hartley Sir Cyril Norman Hinshelwood (Nobel Prize winner) Henry Moseley See also Abbot's Kitchen, Oxford, another early chemistry laboratory in Oxford Department of Chemistry, University of Oxford Physical Chemistry Laboratory, which replaced the Balliol-Trinity Laboratories References 1879 establishments in England 1940 disestablishments in England Buildings and structures completed in 1879 Buildings and structures of the University of Oxford History of the University of Oxford University and college laboratories in the United Kingdom Chemistry laboratories Demolished buildings and structures in Oxfordshire Balliol College, Oxford Trinity College, Oxford Physical chemistry
Balliol-Trinity Laboratories
Physics,Chemistry
280
1,011,848
https://en.wikipedia.org/wiki/Fixed-point%20theorem
In mathematics, a fixed-point theorem is a result saying that a function F will have at least one fixed point (a point x for which F(x) = x), under some conditions on F that can be stated in general terms. In mathematical analysis The Banach fixed-point theorem (1922) gives a general criterion guaranteeing that, if it is satisfied, the procedure of iterating a function yields a fixed point. By contrast, the Brouwer fixed-point theorem (1911) is a non-constructive result: it says that any continuous function from the closed unit ball in n-dimensional Euclidean space to itself must have a fixed point, but it doesn't describe how to find the fixed point (see also Sperner's lemma). For example, the cosine function is continuous in [−1, 1] and maps it into [−1, 1], and thus must have a fixed point. This is clear when examining a sketched graph of the cosine function; the fixed point occurs where the cosine curve y = cos(x) intersects the line y = x. Numerically, the fixed point (known as the Dottie number) is approximately x = 0.73908513321516 (thus x = cos(x) for this value of x). The Lefschetz fixed-point theorem (and the Nielsen fixed-point theorem) from algebraic topology is notable because it gives, in some sense, a way to count fixed points. There are a number of generalisations to Banach fixed-point theorem and further; these are applied in PDE theory. See fixed-point theorems in infinite-dimensional spaces. The collage theorem in fractal compression proves that, for many images, there exists a relatively small description of a function that, when iteratively applied to any starting image, rapidly converges on the desired image. In algebra and discrete mathematics The Knaster–Tarski theorem states that any order-preserving function on a complete lattice has a fixed point, and indeed a smallest fixed point. See also Bourbaki–Witt theorem. The theorem has applications in abstract interpretation, a form of static program analysis. A common theme in lambda calculus is to find fixed points of given lambda expressions. Every lambda expression has a fixed point, and a fixed-point combinator is a "function" which takes as input a lambda expression and produces as output a fixed point of that expression. An important fixed-point combinator is the Y combinator used to give recursive definitions. In denotational semantics of programming languages, a special case of the Knaster–Tarski theorem is used to establish the semantics of recursive definitions. While the fixed-point theorem is applied to the "same" function (from a logical point of view), the development of the theory is quite different. The same definition of recursive function can be given, in computability theory, by applying Kleene's recursion theorem. These results are not equivalent theorems; the Knaster–Tarski theorem is a much stronger result than what is used in denotational semantics. However, in light of the Church–Turing thesis their intuitive meaning is the same: a recursive function can be described as the least fixed point of a certain functional, mapping functions to functions. The above technique of iterating a function to find a fixed point can also be used in set theory; the fixed-point lemma for normal functions states that any continuous strictly increasing function from ordinals to ordinals has one (and indeed many) fixed points. 
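The idea of iterating a function until it stops changing, used above with the cosine example, can be shown concretely in a few lines of code. The following Python sketch is purely illustrative and simply repeats the cos(x) iteration until it settles at the Dottie number.

# Illustrative fixed-point iteration for cos(x), converging to the Dottie number.
import math

x = 1.0                       # any starting value in [-1, 1] works
for _ in range(200):
    x = math.cos(x)

print(x)                      # about 0.7390851332, where cos(x) = x
print(abs(math.cos(x) - x))   # residual, effectively zero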
Every closure operator on a poset has many fixed points; these are the "closed elements" with respect to the closure operator, and they are the main reason the closure operator was defined in the first place. Every involution on a finite set with an odd number of elements has a fixed point; more generally, for every involution on a finite set of elements, the number of elements and the number of fixed points have the same parity. Don Zagier used these observations to give a one-sentence proof of Fermat's theorem on sums of two squares, by describing two involutions on the same set of triples of integers, one of which can easily be shown to have only one fixed point and the other of which has a fixed point for each representation of a given prime (congruent to 1 mod 4) as a sum of two squares. Since the first involution has an odd number of fixed points, so does the second, and therefore there always exists a representation of the desired form. List of fixed-point theorems Atiyah–Bott fixed-point theorem Banach fixed-point theorem Bekić's theorem Borel fixed-point theorem Bourbaki–Witt theorem Browder fixed-point theorem Brouwer fixed-point theorem Rothe's fixed-point theorem Caristi fixed-point theorem Diagonal lemma, also known as the fixed-point lemma, for producing self-referential sentences of first-order logic Lawvere's fixed-point theorem Discrete fixed-point theorems Earle-Hamilton fixed-point theorem Fixed-point combinator, which shows that every term in untyped lambda calculus has a fixed point Fixed-point lemma for normal functions Fixed-point property Fixed-point theorems in infinite-dimensional spaces Injective metric space Kakutani fixed-point theorem Kleene fixed-point theorem Knaster–Tarski theorem Lefschetz fixed-point theorem Nielsen fixed-point theorem Poincaré–Birkhoff theorem proves the existence of two fixed points Ryll-Nardzewski fixed-point theorem Schauder fixed-point theorem Topological degree theory Tychonoff fixed-point theorem See also Trace formula Footnotes References External links Fixed Point Method Closure operators Iterative methods
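The fixed-point combinator mentioned in the lambda calculus discussion above can also be written directly in code. The sketch below is illustrative only; because Python evaluates arguments eagerly, it uses the applicative-order variant of the Y combinator (often called the Z combinator), and the factorial example is an invented demonstration rather than anything taken from this article.

# Illustrative fixed-point combinator in Python (applicative-order Z variant of Y).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined as the fixed point of a non-recursive functional.
fact = Z(lambda rec: lambda m: 1 if m == 0 else m * rec(m - 1))
print(fact(5))   # 120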
Fixed-point theorem
Mathematics
1,231
7,712,632
https://en.wikipedia.org/wiki/Renix
Renix (Renix Electronique) was a joint venture by Renault and Bendix that designed and manufactured automobile electronic ignitions, fuel injection systems, electronic automatic transmission controls, and various engine sensors. Major applications included various Renault and Volvo vehicles. The name became synonymous in the U.S. with the computer and fuel injection system used on the AMC/Jeep 2.5 L I4 and 4.0 L I6 engines. Use of name The term Renix also has several applications. In specific carburetor equipped Renault and Volvo models, it provides an electronic ignition system consisting of an engine control unit (ECU) to replace the job of contact breaker points in the distributor. The system uses an angle sensor and several fuel sensors to provide a maintenance-free ignition system. The ECU is sealed and cannot be serviced, and the EPROM cannot be re-programmed. Later, the name was synonymous with a form of fuel injection. In such an application, it consists of an ECU and many sensors. It was first seen in engines produced by Renault (Renault 21, 25, and Espace) in and capacities. It is better known in America for its application in the AMC 4.0 L displacing straight-6 engines. Production began by American Motors Corporation (AMC) with the 1987 Jeep Cherokee (XJ) models. It was preceded by the AMC Computerized Engine Control and followed by Chrysler's Mopar MPI system. Renix Electronique Renix Electronique S.A. was established in 1981 as a joint venture by Renault with 51% interest and Bendix with 49% that was headquartered in Toulouse. Renix Corporation of America was the North American subsidiary of Renix Electronique to provide sales, logistics, engineering, and quality support to American Motors. When Renault encountered financial troubles in 1985, it sold its interest in Renix to AlliedSignal, a major auto industry supplier and the new owner of Bendix. Renix Corporation of America was also absorbed by AlliedSignal Corporation when it purchased Renix Electronique from Renault. The US$200 million Allied buyout of Renault's 51% of Renix made it part of Bendix Electronics and Engine Controls. Renix products are produced in France and marketed worldwide under the Bendix brand name. In 1988, the French Bendix site was sold to Siemens VDO. In 2008, Siemens AG sold its VDO branch to Continental Automotive Group. Renault applications The Renix system was used in the F series engines as fitted to the Renault 19 16V, Renault 21, Renault 25, and the Renault Espace. It is a multi-point fuel injection system, as opposed to a single-point system, with several air, throttle, and pinking sensors and an advanced computer. Application of the system could first be seen in 1984, three years before its American debut. The Renix system pushed the power of the carburetor-fed engine from . It could also be found in 2.2 L engines fitted to R21, R25, and Espace models. The Renix fuel injection and a Garrett turbocharger were used in the mid-mounted V6 powering the Renault Alpine GTA/A610 sports car. AMC/Jeep applications The fully integrated electronic engine control system made by Renix consists of a solid-state Ignition Control Module (ICM), a distributor, a crankshaft position sensor, and an Electronic Control Unit (ECU). The Renix ECU has a powerful microprocessor that was advanced technology for its time. 
It also incorporates an engine knocking sensor that allows the computer to know if detonation is occurring, thus allowing the computer to make adaptive control by individual cylinder corrections to prevent pinging. The knock detection uses the signal from a wide band accelerometer mounted on the cylinder head. Good signal-to-noise ratio is obtained primarily through angular discrimination. The Renix system has more inputs than the later Mopar system and, in some ways, is more complex. Its knock sensor automatically tunes the spark advance curve to an optimum mix for each cylinder. Some Renix-controlled engines will get better fuel economy by using higher octane fuel. The system on the AMC 4.0 L I6 engine is flexible allowing the use of a modified camshafts and modifications to the cylinder heads without significantly changing the base computer. The Renix computer was first used in 1986 AMC 2.5 L four-cylinder engines. The system improved the drivability of the Jeep Cherokee and Comanche over carbureted models. The power increase was also noticeable. The Renix system was used through the 1990 model year. However, it is handicapped because few scan tools can be "plugged in" to this on-board diagnostics computer. The Renix control system before 1991 can be tested only with Chrysler's DRB tester, and the diagnostic test modes for 1989 and later engines with SBEC controllers differ from those provided for 1988 and earlier models. Model years: 1986 - Renix TBI became standard on Jeep 2.5 L AMC four-cylinder engines 1987 - introduction of the new Renix controlled 4.0 L six-cylinder engine was rated at and of torque 1988 - the Renix controlled 4.0 L engine output increased to and of torque, due to higher compression ratio 1989 - Changed to Renix MPFI 1991 - Chrysler Corporation replaced the Renix control system with OBD-I-compliant control electronics, the Chrysler HO EFI. The Renix control system was exclusive on the 1986 through 1990 Jeep Cherokee and Comanche with AMC-designed engines. The control setup used with the V6 was OBD-I General Motors. The early Renault turbodiesel I4 used its specific control setup. The Jeep Wrangler (YJ) did not get the AMC 4.0 L I6 engine until 1991, when Chrysler-designed electronics accompanied it. Until then, it retained the AMC engine with a carburetor. No other Jeep vehicle was equipped with Renix electronic controls. Operation In a typical Jeep application, the ignition control module (ICM) is in the engine compartment. It consists of a solid-state ignition circuit and an integrated ignition coil that can be removed and serviced separately. Electronic signals from the ECU to the ICM determine the amount of ignition timing or retard needed to meet engine power requirements. The ECU provides an input signal to the ICM. The ICM has outputs for a tach signal to the tachometer and a high voltage signal from the coil to the distributor. The crankshaft position sensor senses TDC (Top Dead Center) and BDC crankshaft positions, as well as engine rpm. This sensor is secured by special shouldered bolts to the flywheel/drive plate housing and is not adjustable. Inspection stations The Renix control system is "pre-OBD" and, therefore, does not have a "Check Engine Lamp". It also does not "store" or "throw" Diagnostic Trouble Codes (DTCs) or "Parameter IDs" (PIDs). This is a common problem at vehicle inspection, particularly in California and other jurisdictions with emission standards. 
Most inspection stations are unaware and will try to explain that the CEL/MIL "does not work". Skoda applications In the 1980s, Skoda manufactured a few rear-engined cars with Renix fuel injection. These were originally destined for Canada, but ended up in Europe. These are usually known as 135 GLi or 135 RiC. Fuel system parts may be available from Chrysler-Jeep dealers. Volvo applications The Volvo 700 Series and some of the Volvo 300 Series used a B200K 2.0 L inline-4 naturally aspirated engine with Renix ignition and some 300 series Volvos with Renault powerplants. The 300 Volvo series is unknown in the U.S. market. It was manufactured in the Netherlands (with a limited production of cars in Malaysia through the CKD process). All 300 series Volvo cars with gasoline engines came with Renix/Bendix ignition from 1983 until 1991, when production of the 300 series stopped. Volvo "Redblock" engines equipped with Renix ignition: B200K 1986 cc Solex Cisac Z34 twin barrel carburetor 1985-1989 B200E 1986 cc Bosch LE-jetronic injected unit 1985-1989 B200F 1986 cc Bosch LU-jetronic injected and catalyzed engine 1987-1990 B230K Renault-derived units with Renix ignition: B18E(D) 1721 cc OHC engines in 400 series 1986-1988 B172K 1721 cc OHC Solex Cisaz Z32 twin barrel carburetor 1986-1989 B14.4E/S 1397 cc OHV Weber 32DIR twin barrel carburetor 1985-1991 B14.3E/S 1397 cc OHV single barrel Solex carburetor version 1983-1985 B13.4E 1289 cc OHV Weber 32DIR twin-barrel carburetor unit aimed for the Finnish market (where 1.3 L was a tax-class) 1989-1991 B172K and B18 were based on the Renault FnN (n being 1, 2, or 3) engines from Renault, and B14.x was based on the Renault C1J; both types were modified for Volvo to varying extents. See also AMC Straight-6 engine AMC Engines List of Chrysler engines Notes References External links Renault AMC engines American Motors Engine control systems Engine technology Jeep engines Onboard computers Bendix Corporation
Renix
Technology
1,956
36,929,890
https://en.wikipedia.org/wiki/Autism-friendly
Autism-friendly means being aware of social engagement and environmental factors affecting Autistic people, with modifications to communication methods and physical space to better suit individuals' unique and special needs. Overview Autistic Individuals take in information from their senses as do allistic (non-autistic) people. The difference is they are not able to process it in the same manner as their neurotypical peers and can become overwhelmed by the amount of information that they are receiving and withdraw as a coping mechanism. Additionally, it may be that an autistic person is actually taking in more sensory information and is merely overwhelmed by the sheer amount of input. As such, they may experience difficulty in public settings due to inhibited communication, social interaction or flexibility of thought development. Knowing about these differences and how to react effectively helps to create a more inclusive society. It also better suits the needs of autistic individuals. Being autism friendly means being understanding and flexible in interpersonal conversation, public programs and public settings. For example, a person might think that someone is being rude if they will not look them in the eyes or does not understand cliches like "it's a piece of cake", when in fact there may be a reason for this. Depending upon the individual's degree of language ability, a person who hears "it's a piece of cake" may take that literally and not understand that what is really meant is "it will be easy". For Autistic people, being in an autism-friendly environment means they will have a manageable degree of sensory stimuli, which will make them calmer, better able to process the sensory stimulation they receive, and better able to relate to others. Communication and social interaction Organizations interested in spreading awareness about autism and how to be autism friendly, such as The Autism Directory, have created training programs for communities to illustrate how autistic people may communicate or interact differently from neurotypical people, or people without autism. There are also suggestions for how to modify one's reaction to improve communication. Some training examples are: When one finds out that someone may not be able to look them in their eyes, one should realize that they are not trying to be rude, and it is uncomfortable for them to do so. A person may have difficulty understanding clichés or figurative expressions and interpret a phrase literally. By speaking directly and factually, like saying "It's easy" as compared to "It's a piece of cake", one is more likely to understand the line. Body language, facial expressions, gestures, and turning away from someone may be cues that are missed by an autistic person. This is another opportunity for one to be direct and factual, realizing that their body language or social cues may not be picked up. The person may have limited vocabulary or speech perception. Patience is helpful here. Allow time for the person to comprehend what was said. Ask how you can help. If they use sign language or a symbol set to communicate, adapt as you are able. Other pointers are: avoid making loud sounds; do not surprise them, let them know your plans; limit or avoid vigorous activities; and talk or engage in activities that they care about. Environment Some Autistic people may be hypersensitive to changes in sight, touch, smell, taste and sound; The sensory stimulus could be very distracting or they could result in pain or anxiety. 
There are other people who are hyposensitive and may not feel extreme changes in temperature or pain. Each of these has implications for making an autism-friendly environment. Social factors There are several factors in creating a supportive environment. One of them is adherence to a standard routine and structure. Since change of routine can be quite anxiety-producing for many Autistic people, a structured, predictable routine makes for calmer and happier transitions during the day. Another important factor is creating a low-arousal space. Environments with the least amount of disruption will help Autistic people remain calm. It is important to speak in quiet, non-disruptive tones and to use a physical space that has a low level of disruption. Having a positive, empathetic attitude and ensuring consistent habits in work, school and recreational activities also help to minimize anxiety and distress and help an autistic person succeed. This is the SPELL approach: StructurePositiveEmpathyLow arousalLinks. Social Stories can be used to communicate ways in which an autistic person can prepare themselves for social interaction. Physical space There are several ways that the physical space can be designed and organized to be autism friendly. It is important for rooms to be decorated with serenity in mind, like painting the walls with calming colors. Thick carpeting and double-paned glass help to minimize distracting noise (e.g., noise pollution). Materials within the rooms may be organized, grouped and labeled with words or symbols to make items easier to locate. Topics Daily life Autism friendliness can have a significant impact on an individual's interpersonal life and work life, benefited by consistency across all areas of one's life. Vacations As the break of routine in family vacations causes distress to some autistic people, many families may avoid taking vacations. Steps can be taken to help make for a successful family vacation. One is sharing information like pictures or internet web pages. There are organizations that will make accommodations, if requested, to better manage common stressors such as uncertainty, crowds and noise disruption. This includes theme parks that allow autistic people to skip long lines and airlines or airports that may allow for a dry-run prior to the trip. Another tip is to prepare prior to the trip so that there is a plan for managing boredom. Entertainment Theatre In the United States, the Theatre Development Fund (TDF) created a program in 2011 to "make theatre accessible to children and adults on the autism spectrum". Called the Autism Theatre Initiative, it is part of their Accessibility Programs, and was done in conjunction with Autism Speaks, Disney and experts who reviewed the performance for areas of modification. Adjustments that have been named for the initiative include: quiet areas in theatre lobby, performance changes that reduced strobe light use and noise, and areas where people can go perform an activity if they leave the theatre. Social Stories, which explain what the experience will be like (such as loud noises, needing a break and moving through a crowd), were made available prior to the performance. These performances included The Lion King and Mary Poppins. On London's West End theatre was the premiere of the first autism-friendly performance of Wicked, on 14 May 2016. Movie cinema Going to a movie theater can be an overwhelming experience for someone on the autism spectrum. 
Crowding as people queue up to buy tickets, loud movie volume, and dark theater lighting can all be sensory overload triggers that keep some autistic people from ever seeing movies at the cinema. Some movie theaters are becoming more autism friendly: the lighting is adjusted so it is not so dark, the volume is reduced and queues are managed to prevent crowding. Odeon Cinemas in London has implemented such "sensory-friendly" nights. In the United States there are also "sensory-friendly" moviegoing experiences to be had through collaboration with the Autism Society of America. Monthly, AMC Theatres (AMC) will provide nights when people on the autism spectrum and their families may experience an autism-friendly movie night. The program is also intended for people with other disabilities whose moviegoing experience will also be improved in such a setting. Santa Claus In Canada, malls operated by Oxford Properties established a process by which autistic children could visit Santa Claus at the mall without having to contend with crowds. The malls open early to allow entry only to families with autistic children, who have a private visit with Santa Claus. In 2012, the Southcentre Mall in Calgary was the first mall to offer this service. The children are given a booklet explaining the process, and upon arrival at the mall are placed in a waiting area near Santa Claus before their visit "to ensure their comfort". Education Providing the best outcomes for an autistic child may be difficult, complicated by each child's unique way of managing communication and interaction with others, associated disorders that make each child's situation unique, and emerging understandings of neurodiversity. Teacher effectiveness can be optimized based upon their awareness of the differences along the autism spectrum, acceptance that each child is unique, engagement of the child in social and educational activities and employment of teaching methods that are found to be helpful with people with developmental disability. Teachers play a key role in the success of an autistic student by helping them to understand directions, organize tasks and support their achievements. One example is organizing and grouping materials together for activities in specific ways. Teachers give autistic students extra time to answer when they ask them a question. Autistic children take time to process information but they are listening and will respond. Schools dedicated to being autism friendly, like Pathlight School in Singapore, designed their campus to offer students "dignity" in an autism-friendly environment. There the campus was architecturally designed, landscaped and the interior created with a simple color scheme. All of this helps to avoid triggering sensory overload. There is a high teacher to student ratio, a focus on nurturing, and a comprehensive life-skills training and education program. In regards to students who show a significant delay in acquiring academic and verbal communication skills, one available option is Applied Behavior Analysis (ABA)-style placement for their child. There is evidence, however, that ABA's methods may lead to autistic people developing PTSD. Empathising–systemising theory Empathising–systemising theory with video technology can be used to present information in an autism-friendly way that promotes understanding. For instance, computer applications or DVDs of actors making facial expressions to inform how body language provides clues about how someone might be feeling. 
Or, in the case of The Transporters, interesting items like trains are used to wear faces, drawing the viewer into the faces. Justice and law Being met by an individual in a dark uniform can be intimidating to an autistic person, particularly when they have been a crime victim or are injured. Police and emergency responders may become frustrated, not knowing that the person they are talking to is autistic. The responders may not be communicating in a way that will create understanding and make the situation less stressful. A program has been launched in London, Ontario, Canada, to enter information into a database about autistic people so that responding police and emergency personnel are notified when they will be meeting an autistic person and may then communicate in a way that increases understanding and makes the situation less stressful. Autism Alert Cards, for example, are available for autistic people in London, England, UK so that police and emergency personnel will recognize autistic individuals and respond appropriately. The cards, which encourage autism-friendly interaction, have a couple of key points about interacting with autistic people. Life events Neurotypical people and Autistic people may have very different ways of communicating their feelings about life events, including: Coping with illness, injury and recuperation Dealing with dying and death Incorporating rituals and traditions for managing life events Managing emotions Learning from life events Just because people may process and communicate their feelings differently, though, does not mean it is right or wrong. It is best to be honest and literal to help an Autistic person to manage major life events. Providing information, and allowing them time to process it, are other important factors. Lastly, communication tools will also help to process and manage the event. Autistic people can help themselves manage situations by being aware of what they are feeling and thinking — and expressing their thoughts to important people in their life. Other tools are being aware of when they need help and asking for it — and thanking people when they have received assistance or a gift. Technology Educational technological applications for people with autism include: Digital talking books Digital talking books are used to assist disabled people, commonly people who are blind, and also Autistic people. One such use is for taped church programs. Mobile applications One platform for autism-friendly applications is the iPad, used as an interface between the child and the storyteller on a video. By repeating what the narrator says, the children hear themselves tell the story, like Tom the Talking Cat. Reading the stories aloud helps children improve their language and communication skills, as well as improving fine motor skills, social skills and sensory skills. ("iPad Apps That Help Autistic Children's Development." Huffington Post. November 17, 2011.) Apple iPod applications can be used by people on the autism spectrum to manage tasks at work. They can manage a checklist of tasks and reminder prompts. This helps a person be more calm and effective and rely less on managers or job coaches to prompt for needed work. Tony Gentry, who led research on the application at Virginia Commonwealth University, said: "This is an exciting time for anyone in the fields of education, physical rehabilitation, and vocational support, where we are seeing a long-awaited merging of consumer products and assistive technologies for all." 
Motion-controlled gadgets Social media Types of technology Emotion Markup Language is a general-purpose emotion annotation and representation language, which should be usable in a large variety of technological contexts where emotions need to be represented. Emotion-oriented computing (or "affective computing") is gaining importance as interactive technological systems become more sophisticated. For people on the autism spectrum, it can be used to make the emotional intent of content explicit. This would enable people with learning disabilities to realize the emotional context of the content. Training for businesses As the prevalence of autism increases, it becomes increasingly important to ensure that customer-facing organizations have basic tools for communicating with people on the autism spectrum. Tesco, a multinational supermarket chain, has implemented training for its employees to meet the needs of its customers who are on the autism spectrum, which is estimated to be one of every 100 people in the United Kingdom. Employees use an online training site and respond to a questionnaire to assess the extent to which they became more aware of autism spectrum disorders (ASD). Tesco is the first company to participate in an awareness program led by the Welsh Local Government Association (WLGA). The online training and questionnaire tool is intended to be used by many organizations in Wales to identify and commend businesses that are "ASD Aware". The SERVICE principles have been developed to guide businesses in making no-cost or low-cost changes to their premises and practices. The principles address common challenges that people on the autism spectrum face when they are out in the community. By applying as many principles as possible, businesses can address the most frequently occurring challenges reported by autistic people. Corrimal, New South Wales, Australia is working towards being the first autism-friendly community in Australia. Recreational facilities Inclusive recreation Inclusive recreation, also called adaptive recreation, is when recreational activities are modified to accommodate disabled people. Community involvement Organizations or programs that promote autism-friendly efforts are: Autism Awareness Campaign UK The Autism Directory in England awards an "Autism Friendly" mark to those companies that undergo The Autism Directory's free autism awareness training. It shows that this particular company has a basic awareness of autism and acts as a good indicator to any potential autistic customers Autism Research Institute (US) National Autistic Society (UK) https://autismfriendlycharter.org.au/ The Autism Friendly Charter is a free online learning platform and inclusive business directory that was developed in partnership with individuals on the autism spectrum and their families to assist businesses, organisations and venues to build understanding, awareness, inclusivity and capacity of the autism spectrum. Autism rights movement The autism rights movement encourages autistic people to "embrace their neurodiversity" and encourages society to accept autistics as they are. The movement advocates giving children more tools to cope with the non-autistic world instead of trying to change them into neurotypicals, and says society should learn to tolerate harmless behaviours such as tics and stims like hand-flapping or humming. 
Autism rights activists say that "tics, like repetitive rocking and violent outbursts" can be managed if others make an effort to understand autistic people, while other autistic traits, "like difficulty with eye contact, with grasping humor or with breaking from routines", would not require corrective efforts if others were more tolerant. Autistic pride Autistic pride refers to pride in autism and shifting views of autism from "disease" to "difference". Autistic pride emphasizes the innate potential in all human phenotypic expressions and celebrates the diversity various neurological types express. Autistic pride asserts that autistic people are not sick; rather, they have a unique set of characteristics that provide them many rewards and challenges, not unlike their non-autistic peers. See also Universal design Curb cut effect Sensory friendly References Further reading Bishop, Beverly (author) and Craig Bishop (illustrator). (2011). My Friend with Autism: Enhanced Edition with FREE CD of Coloring Pages! Future Horizons. Beadle-Brown J., Roberts R. and Mills R. (2009). "Person-centred approaches to supporting children and adults with autism spectrum disorders." Tizard Learning Disability Review. 14(3). pp. 18–26. It is available from the National Autistic Society (NAS) Information Centre, UK. Faherty, Catherine (author) and Gary B. Mesibov, Ph.D. (contributor). (2008). Understanding Death and Illness and What They Teach about Life: An Interactive Guide for Individuals with Autism or Asperger's and Their Loved Ones. Future Horizons. Mills, R. (Winter 1999). "Q & A: SPELL." Communication. pp. 27–28. It is available from the National Autistic Society (NAS) Information Centre, UK. Povey C. (2009). "Commentary on person-centred approaches to supporting children and adults with autism spectrum disorders." Tizard Learning Disability Review. 14(3). pp. 27–29. It is available from the National Autistic Society (NAS) Information Centre, UK. External links Autism Awareness presentation or training material Autism Awareness Training Presentation Introduction to ASD. Autism Awareness Aid Sunflower Lanyard Autism Awareness Headphones Aspergers List of Asperger Traits Asperger: female traits, differences between male and female traits Other information Book reviews for iPad applications for autism and Aspergers syndrome. Autism Tips & Helps in areas of potty training, coping through meltdowns, and sensory issues Autism rights movement Accessibility Sensory accommodations
Autism-friendly
Engineering
3,860
1,251,985
https://en.wikipedia.org/wiki/Planar%20process
The planar process is a manufacturing process used in the semiconductor industry to build individual components of a transistor, and in turn, connect those transistors together. It is the primary process by which silicon integrated circuit chips are built, and it is the most commonly used method of producing junctions during the manufacture of semiconductor devices. The process utilizes the surface passivation and thermal oxidation methods. The planar process was developed at Fairchild Semiconductor in 1959 and proved to be one of the most important single advances in semiconductor technology. Overview The key concept is to view a circuit in its two-dimensional projection (a plane), thus allowing the use of photographic processing concepts such as film negatives to mask the projection of light onto light-sensitive chemicals. This allows the use of a series of exposures on a substrate (silicon) to create silicon oxide (insulators) or doped regions (conductors). Together with the use of metallization, and the concepts of p–n junction isolation and surface passivation, it is possible to create circuits on a single silicon crystal slice (a wafer) from a monocrystalline silicon boule. The process involves the basic procedures of silicon dioxide (SiO2) oxidation, SiO2 etching and heat diffusion. The final steps involve oxidizing the entire wafer with an SiO2 layer, etching contact vias to the transistors, and depositing a covering metal layer over the oxide, thus connecting the transistors without manually wiring them together. History Development In 1955 at Bell Labs, Carl Frosch and Lincoln Derick accidentally grew a layer of silicon dioxide over a silicon wafer and observed that it provided surface passivation. In 1957, Frosch and Derick were able to manufacture the first silicon dioxide field effect transistors, the first transistors in which drain and source were adjacent at the surface, showing that silicon dioxide surface passivation protected and insulated silicon wafers. At Bell Labs, the importance of Frosch's technique was immediately realized. Results of their work circulated around Bell Labs in the form of BTL memos before being published in 1957. At Shockley Semiconductor, Shockley had circulated the preprint of their article in December 1956 to all his senior staff, including Jean Hoerni. Later, Hoerni attended a meeting where Mohamed Atalla presented a paper about passivation based on the previous results at Bell Labs. Taking advantage of silicon dioxide's passivating effect on the silicon surface, Hoerni proposed to make transistors that were protected by a layer of silicon dioxide. Jean Hoerni, while working at Fairchild Semiconductor, first patented the planar process in 1959. K. E. Daburlos and H. J. Patterson of Bell Laboratories continued the work of C. Frosch and L. Derick and developed a process similar to Hoerni's at about the same time. Together with the use of metallization (to join together the integrated circuits) and the concept of p–n junction isolation (from Kurt Lehovec), the researchers at Fairchild were able to create circuits on a single silicon crystal slice (a wafer) from a monocrystalline silicon boule. In 1959, Robert Noyce built on Hoerni's work with his conception of an integrated circuit (IC), which added a layer of metal to the top of Hoerni's basic structure to connect different components, such as transistors, capacitors, or resistors, located on the same piece of silicon. 
The planar process provided a powerful way of implementing an integrated circuit, superior to earlier conceptions of the integrated circuit. Noyce's invention was the first monolithic IC chip. Early versions of the planar process used photolithography with near-ultraviolet light from a mercury vapor lamp. As of 2011, small features are typically made with 193 nm "deep" UV lithography. As of 2022, the ASML NXE platform uses 13.5 nm light, generated by a tin-based plasma source, as part of the extreme ultraviolet (EUV) lithography process. See also Semiconductor device fabrication References External links A compendium of articles and other information on the development of integrated circuits, including the development of oxide masking, photolithography, the advent of silicon, the integrated circuit and the planar process. The Planar Process An overview of the steps in fabrication of an integrated circuit from the Nobel Prize website. This is a section of the work Techville: The integrated circuit. Semiconductor device fabrication Swiss inventions Planes (geometry)
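The masking idea described in the Overview section above can be illustrated with a small toy model. The sketch below is not a real fabrication tool: the grid size, the mask pattern, and the layer names are all invented for illustration, and the steps (grow oxide, etch through a mask, diffuse dopant into exposed silicon, deposit metal) are simplified stand-ins for the actual physical processes.

```python
# Toy illustration of the planar-process idea: pattern a wafer through masks.
# All names and the 6x6 grid are hypothetical; real fabrication involves far
# more physics (resist exposure, development, diffusion profiles, etc.).

SIZE = 6  # 6x6 grid of surface cells standing in for a wafer's top view

def new_wafer():
    # Each cell records which layers are present at that (x, y) position.
    return [[{"oxide": False, "doped": False, "metal": False}
             for _ in range(SIZE)] for _ in range(SIZE)]

def grow_oxide(wafer):
    # Thermal oxidation covers the whole surface with SiO2.
    for row in wafer:
        for cell in row:
            cell["oxide"] = True

def etch_oxide(wafer, mask):
    # Etching removes oxide only where the mask is open (True).
    for y in range(SIZE):
        for x in range(SIZE):
            if mask[y][x]:
                wafer[y][x]["oxide"] = False

def diffuse_dopant(wafer):
    # Dopant enters the silicon only where oxide has been etched away,
    # which is how the oxide layer acts as a diffusion mask.
    for row in wafer:
        for cell in row:
            if not cell["oxide"]:
                cell["doped"] = True

def deposit_metal(wafer, mask):
    # Metallization over the contact openings connects the doped regions.
    for y in range(SIZE):
        for x in range(SIZE):
            if mask[y][x]:
                wafer[y][x]["metal"] = True

if __name__ == "__main__":
    wafer = new_wafer()
    # A made-up mask opening two "windows" for doped regions.
    window_mask = [[x in (1, 4) and 1 <= y <= 4 for x in range(SIZE)]
                   for y in range(SIZE)]
    grow_oxide(wafer)
    etch_oxide(wafer, window_mask)
    diffuse_dopant(wafer)
    deposit_metal(wafer, window_mask)  # reuse the openings as contact vias
    doped = sum(cell["doped"] for row in wafer for cell in row)
    print(f"doped cells: {doped} of {SIZE * SIZE}")
```

Each mask application stands in for one photographic exposure in the real process; repeating the grow–etch–dope cycle with different masks is how many devices and their interconnections are defined on the same wafer.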
Planar process
Materials_science,Mathematics
958
359,238
https://en.wikipedia.org/wiki/Prescription%20drug
A prescription drug (also prescription medication, prescription medicine or prescription-only medication) is a pharmaceutical drug that is permitted to be dispensed only to those with a medical prescription. In contrast, over-the-counter drugs can be obtained without a prescription. The reason for this difference in substance control is the potential scope of misuse, from drug abuse to practicing medicine without a license and without sufficient education. Different jurisdictions have different definitions of what constitutes a prescription drug. In North America, the symbol ℞, usually printed as "Rx", is used as an abbreviation of the word "prescription". It is a contraction of the Latin word "recipe" (an imperative form of "recipere") meaning "take". Prescription drugs are often dispensed together with a monograph (in Europe, a Patient Information Leaflet or PIL) that gives detailed information about the drug. The use of prescription drugs has been increasing since the 1960s. Regulation Australia In Australia, the Standard for the Uniform Scheduling of Medicines and Poisons (SUSMP) governs the manufacture and supply of drugs with several categories: Schedule 1 – Defunct Schedule 2 – Pharmacy Medicine Schedule 3 – Pharmacist-Only Medicine Schedule 4 – Prescription-Only Medicine/Prescription Animal Remedy Schedule 5 – Caution/Poison Schedule 6 – Poison Schedule 7 – Dangerous Poison Schedule 8 – Controlled Drug (possession without authority illegal) Schedule 9 – Prohibited Substance (possession illegal without a licence; legal only for research purposes) Schedule 10 – Controlled Poison Unscheduled Substances As in other developed countries, the person requiring a prescription drug attends the clinic of a qualified health practitioner, such as a physician, who may write the prescription for the required drug. Many prescriptions issued by health practitioners in Australia are covered by the Pharmaceutical Benefits Scheme, a scheme that provides subsidised prescription drugs to residents of Australia to ensure that all Australians have affordable and reliable access to a wide range of necessary medicines. When purchasing a drug under the PBS, the consumer pays no more than the patient co-payment contribution, which, as of January 1, 2022, is A$42.50 for general patients. Those covered by government entitlements (low-income earners, welfare recipients, Health Care Card holders, etc.) and/or under the Repatriation Pharmaceutical Benefits Scheme (RPBS) have a reduced co-payment, which is A$6.80 in 2022. The co-payments are compulsory and can be discounted by pharmacies up to a maximum of A$1.00 at cost to the pharmacy. United Kingdom In the United Kingdom, the Medicines Act 1968 and the Prescription Only Medicines (Human Use) Order 1997 contain regulations that cover the supply, sale, use, prescribing and production of medicines. There are three categories of medicine: Prescription-only medicines (POM), which may be dispensed (sold in the case of a private prescription) by a pharmacist only to those to whom they have been prescribed Pharmacy medicines (P), which may be sold by a pharmacist without a prescription General sales list (GSL) medicines, which may be sold without a prescription in any shop The simple possession of a prescription-only medicine without a prescription is legal unless it is covered by the Misuse of Drugs Act 1971. A patient visits a medical practitioner or dentist, who may prescribe drugs and certain other medical items, such as blood glucose-testing equipment for diabetics. 
Also, qualified and experienced nurses, paramedics and pharmacists may be independent prescribers. They may prescribe all POMs (including controlled drugs), but may not prescribe Schedule 1 controlled drugs or the three listed controlled drugs for the treatment of addiction; this is similar to doctors, who require a special licence from the Home Office to prescribe Schedule 1 drugs. Schedule 1 drugs have little or no medical benefit, hence the limitations on prescribing them. District nurses and health visitors have had limited prescribing rights since the mid-1990s; until then, prescriptions for dressings and simple medicines had to be signed by a doctor. Once issued, a prescription is taken by the patient to a pharmacy, which dispenses the medicine. Most prescriptions are NHS prescriptions, subject to a standard charge that is unrelated to what is dispensed. The NHS prescription fee was increased to £9.90 for each item in England in May 2024; prescriptions are free of charge if prescribed and dispensed in Scotland, Wales and Northern Ireland, and for some patients in England, such as inpatients, children, those over 60 or with certain medical conditions, and claimants of certain benefits. The pharmacy charges the NHS the actual cost of the medicine, which may vary from a few pence to hundreds of pounds. A patient can consolidate prescription charges by using a prescription payment certificate (informally a "season ticket"), effectively capping costs at £31.25 a quarter or £111.60 for a year. Outside the NHS, private prescriptions are issued by private medical practitioners, and sometimes under the NHS, for medicines that are not covered by the NHS. A patient pays the pharmacy the normal price for medicine prescribed outside the NHS. Survey results published by Ipsos MORI in 2008 found that around 800,000 people in England were not collecting prescriptions or getting them dispensed because of the cost, the same as in 2001. United States In the United States, the Federal Food, Drug, and Cosmetic Act defines what substances, known as legend drugs, require a prescription for them to be dispensed by a pharmacy. The federal government authorizes physicians (of any specialty), physician assistants, nurse practitioners and other advanced practice nurses, veterinarians, dentists, and optometrists to prescribe any controlled substance. They are issued unique DEA numbers. Many other mental and physical health technicians, including basic-level registered nurses, medical assistants, emergency medical technicians, most psychologists, and social workers, are not authorized to prescribe legend drugs. The federal Controlled Substances Act (CSA) was enacted in 1970. It regulates the manufacture, importation, possession, use, and distribution of controlled substances, which are drugs with potential for abuse or addiction. The legislation classifies these drugs into five schedules, with varying qualifications for each schedule. The schedules are designated schedule I, schedule II, schedule III, schedule IV, and schedule V. Many drugs other than controlled substances require a prescription. The safety and the effectiveness of prescription drugs in the US are regulated by the 1987 Prescription Drug Marketing Act (PDMA). The Food and Drug Administration (FDA) is charged with implementing the law. As a general rule, over-the-counter (OTC) drugs are used to treat a condition that does not need care from a healthcare professional, if they have been proven to meet higher safety standards for self-medication by patients. 
Often, a lower strength of a drug will be approved for OTC use, while higher strengths require a prescription; a notable case is ibuprofen, which has been widely available as an OTC pain killer since the mid-1980s but is available by prescription in doses up to four times the OTC dose, for severe pain that is not adequately controlled by the OTC strength. Herbal preparations, amino acids, vitamins, minerals, and other food supplements are regulated by the FDA as dietary supplements. Because specific health claims cannot be made, the consumer must make informed decisions when purchasing such products. By law, American pharmacies operated by "membership clubs" such as Costco and Sam's Club must allow non-members to use their pharmacy services and may not charge more for these services than they charge their members. Physicians may legally prescribe drugs for uses other than those specified in the FDA approval, known as off-label use. Drug companies, however, are prohibited from marketing their drugs for off-label uses. Some prescription drugs are commonly abused, particularly those marketed as analgesics, including fentanyl (Duragesic), hydrocodone (Vicodin), oxycodone (OxyContin), oxymorphone (Opana), propoxyphene (Darvon), hydromorphone (Dilaudid), meperidine (Demerol), and diphenoxylate (Lomotil). Some prescription painkillers have been found to be addictive, and unintentional poisoning deaths in the United States have skyrocketed since the 1990s, according to the National Safety Council. Prescriber education guidelines as well as patient education, prescription drug monitoring programs and regulation of pain clinics are regulatory tactics which have been used to curtail opioid use and misuse. Expiration date The expiration date, required in several countries, specifies the date up to which the manufacturer guarantees the full potency and safety of a drug. In the United States, expiration dates are determined by regulations established by the FDA. The FDA advises consumers not to use products after their expiration dates. A study conducted by the U.S. Food and Drug Administration covered over 100 drugs, prescription and over-the-counter. The results showed that about 90% of them were safe and effective far past their original expiration date. At least one drug worked 15 years after its expiration date. Joel Davis, a former FDA expiration-date compliance chief, said that, with a handful of exceptions, notably nitroglycerin, insulin, and some liquid antibiotics (outdated tetracyclines can cause Fanconi syndrome), most expired drugs are probably effective. The American Medical Association issued a report and statement on Pharmaceutical Expiration Dates. The Harvard Medical School Family Health Guide notes that, with rare exceptions, "it's true the effectiveness of a drug may decrease over time, but much of the original potency still remains even a decade after the expiration date". Drug expiration dates exist on most medication labels, including prescription, over-the-counter and dietary supplements. U.S. pharmaceutical manufacturers are required by law to place expiration dates on prescription products prior to marketing. For legal and liability reasons, manufacturers will not make recommendations about the stability of drugs past the original expiration date. Cost Prices of prescription drugs vary widely around the world. 
Prescription costs for biosimilar and generic drugs are usually less than those for brand names, but the cost differs from one pharmacy to another. To lower prescription drug costs, some U.S. states have, as of 2022, sought federal approval to buy drugs in Canada. Generics undergo strict scrutiny to demonstrate efficacy, safety, dosage, strength, stability, and quality equal to brand-name drugs. Generics are developed after the brand name has already been established, and so generic drug approval in many aspects has a shortened approval process because it replicates the brand name drug. Brand name drugs cost more due to the time, money, and resources that drug companies invest in them to conduct development, including the clinical trials that the FDA requires for the drug to be marketed. Because drug companies have to invest more in research costs to do this, brand name drug prices are much higher when sold to consumers. When the patent expires for a brand name drug, generic versions of that drug are produced by other companies and are sold for a lower price. By switching to generic prescription drugs, patients can save significant amounts of money: e.g. one study by the FDA showed an example with more than 52% savings of a consumer's overall costs of their prescription drugs. Strategies to limit drug prices in the United States In the United States there are many resources available to patients to lower the costs of medication. These include copayments, coinsurance, and deductibles. The Medicaid Drug Rebate Program is another example. Generic drug programs lower the amount of money patients have to pay when picking up their prescription at the pharmacy. As their name implies, they only cover generic drugs. Co-pay assistance programs are programs that help patients lower the costs of specialty medications; i.e., medications that are on restricted formularies, have limited distribution, and/or have no generic version available. These medications can include drugs for HIV, hepatitis C, and multiple sclerosis. Patient Assistance Program Center (RxAssist) has a list of foundations that provide co-pay assistance programs. Co-pay assistance programs are for under-insured patients. Patients without insurance are not eligible for this resource; however, they may be eligible for patient assistance programs. Patient assistance programs are funded by the manufacturer of the medication. Patients can often apply to these programs through the manufacturer's website. This type of assistance program is one of the few options available to uninsured patients. The out-of-pocket cost for patients enrolled in co-pay assistance or patient assistance programs is $0. It is a major resource to help lower the costs of medications; however, many providers and patients are not aware of these resources. Environment Traces of prescription drugs, including antibiotics, anti-convulsants, mood stabilizers and sex hormones, have been detected in drinking water. Pharmaceutically active compounds (PhACs) discarded from human therapy and their metabolites may not be eliminated entirely by sewage treatment plants and have been detected at low concentrations in surface waters downstream from those plants. The continuous discarding of incompletely treated water may interact with other environmental chemicals and lead to uncertain ecological effects. Due to most pharmaceuticals being highly soluble, fish and other aquatic organisms are susceptible to their effects. 
The long-term effects of pharmaceuticals in the environment may affect the survival and reproduction of such organisms. However, the level of medical drug waste in the water is low enough that it is not a direct concern to human health, although processes such as biomagnification are potential human health concerns. On the other hand, there is clear evidence of harm to aquatic animals and fauna. Recent advancements in technology have allowed scientists to detect smaller, trace quantities of pharmaceuticals in the ng/ml range. Despite being found at low concentrations, female hormonal contraceptives may cause feminizing effects on male vertebrate species, such as fish, frogs and crocodiles. The FDA established guidelines in 2007 to inform consumers how they should dispose of prescription drugs. When medications do not include specific disposal instructions, patients should not flush medications down the toilet, but instead use medication take-back programs to reduce the amount of pharmaceutical waste in sewage and landfills. If no take-back programs are available, prescription drugs can be discarded in household trash after they are crushed or dissolved and then mixed in a separate container or sealable bag with undesirable substances like cat litter or other unappealing material (to discourage consumption). See also U.S. Controlled Substances Act Co-pay card Classification of Pharmaco-Therapeutic Referrals Drug policy – policy regulating drugs considered dangerous, rather than only medicinal Inverse benefit law List of pharmaceutical companies Package insert Pharmacy (shop) Pharmacy automation Pill splitting Prescription drug prices in the United States Regulation of therapeutic goods References Pharmaceuticals policy Prescription of drugs Pharmacy
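The United Kingdom section above quotes a per-item NHS charge and a prepayment certificate that caps annual costs. The small sketch below only works through the break-even arithmetic using the figures quoted there (£9.90 per item, £111.60 per year); the function name and the example item counts are illustrative, and this is not financial guidance.

```python
# Illustrative comparison of per-item NHS prescription charges with a
# 12-month prepayment certificate, using the figures quoted in the text.

PER_ITEM_CHARGE = 9.90          # pounds per prescription item (England, 2024)
ANNUAL_CERTIFICATE = 111.60     # pounds for a 12-month prepayment certificate

def cheaper_option(items_per_year: int) -> str:
    """Return which option costs less for a given number of items per year."""
    pay_as_you_go = items_per_year * PER_ITEM_CHARGE
    if pay_as_you_go > ANNUAL_CERTIFICATE:
        return "prepayment certificate"
    if pay_as_you_go < ANNUAL_CERTIFICATE:
        return "per-item charges"
    return "either (same cost)"

if __name__ == "__main__":
    for items in (6, 11, 12, 24):
        total = items * PER_ITEM_CHARGE
        print(f"{items:>2} items/year: pay-as-you-go £{total:.2f} "
              f"-> cheaper: {cheaper_option(items)}")
```

At the quoted rates the certificate becomes the cheaper option at twelve or more items a year (111.60 / 9.90 ≈ 11.3).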
Prescription drug
Chemistry
3,139
1,928,503
https://en.wikipedia.org/wiki/Ernst%20Gehrcke
Ernst J. L. Gehrcke (1 July 1878 in Berlin – 25 January 1960 in Hohen-Neuendorf) was a German experimental physicist. He was director of the optical department at the Reich Physical and Technical Institute. Concurrently, he was a professor at the University of Berlin. He developed the Lummer–Gehrcke method in interferometry and the multiplex interferometric spectroscope for precision resolution of spectral-line structures. As an anti-relativist, he was a speaker at an event organized in 1920 by the Working Society of German Scientists. He sat on the board of trustees of the Potsdam Astrophysical Observatory. After World War II, he worked at Carl Zeiss Jena, and he helped to develop and become the director of the Institute for Physiological Optics at the University of Jena. In 1949, he began work at the German Office for Materials and Product Testing. In 1953, he became the director of the optical department of the German Office for Weights and Measures. Education Gehrcke studied at the Friedrich-Wilhelms-Universität (today, the Humboldt-Universität zu Berlin) from 1897 to 1901. He received his doctorate under Emil Warburg in 1901. Career In 1901, Gehrcke joined the Physikalisch-Technische Reichsanstalt (PTR, Reich Physical and Technical Institute, after 1945 renamed the Physikalisch-Technische Bundesanstalt). In 1926, he became the director of the optical department, a position he held until 1946. Concurrent with his position at the PTR, he was a Privatdozent at the Friedrich-Wilhelms-Universität from 1904 to 1921 and an außerordentlicher Professor (extraordinarius professor) from 1921 to 1946. After the close of World War II, the University was in the Russian sector of Berlin. In 1946, Gehrcke worked at Carl Zeiss AG in Jena, and he helped to develop and become the director of the Institute for Physiological Optics at the Friedrich-Schiller-Universität Jena. In 1949, he went to East Berlin to the Deutsches Amt für Materialprüfung (German Office for Materials and Product Testing). In 1953, he became the director of the optical department of the Deutsches Amt für Maß und Gewicht (DAMG, German Office for Weights and Measures) in East Berlin, the East German equivalent to the West German Physikalisch-Technische Bundesanstalt (Federal Physical and Technical Institute). Gehrcke contributed to the experimental techniques of interference spectroscopy (interferometry), physiological optics, and the physics of electrical discharges in gases. In 1903, with Otto Lummer, he developed the Lummer–Gehrcke method in interferometry. In 1927, with Ernst Gustav Lau, he developed the multiplex interferometric spectroscope for precision resolution of spectral-line structures. Like a number of other prominent physicists of the time (including the leading Dutch theoretician H. A. Lorentz) Gehrcke, an experimentalist, was not prepared to give up the concept of the luminiferous aether, and for this and various other reasons had been highly critical of Einstein's theories of relativity at least since 1911. This led to an invitation to an event organized in 1920 by Paul Weyland. Weyland, a radical political activist, professional agitator, small-time criminal, and editor of the vehemently anti-Semitic periodical Völkische Monatshefte, believed that Einstein's theories had been excessively promoted in the Berlin press, which he imagined was dominated by Jews who were sympathetic to Einstein's cause for other than scientific reasons. 
In response, Weyland organized the Arbeitsgemeinschaft deutscher Naturforscher zur Erhaltung reiner Wissenschaft (Working Group of German Natural Scientists for the Preservation of Pure Science), which was never officially registered. Weyland tried to enlist the support of some prominent conservative scientists, such as the Nobel Laureate Philipp Lenard, to build support for the Society (although Lenard declined to participate in Weyland's meetings). The Society held its first and only event on 24 August 1920, featuring lectures against Albert Einstein’s theory of relativity. Weyland gave the first presentation in which he accused Einstein of being a plagiarizer. Gehrcke gave the second and last talks, in which he presented detailed criticisms of Einstein's theories. Einstein attended the event with Walther Nernst. Max von Laue, Walther Nernst, and Heinrich Rubens published a brief and dignified response to the event, in the leading Berlin daily Tägliche Rundschau, on 26 August. Einstein published his own somewhat lengthy reply on 27 August, which he later came to regret. Rising anti-Semitism and antipathy to recent trends in theoretical physics (especially with respect to the theory of relativity and quantum mechanics) were key motivational factors for the Deutsche Physik movement. Under advice from some of his closest associates, Einstein later publicly challenged his critics to debate him in a more professional environment, and several of his scientific adversaries, including Gehrcke and Lenard, accepted. The ensuing debate took place at the 86th meeting of the German Society of Scientists and Physicians in Bad Nauheim on 20 September, chaired by Friedrich von Müller, with Hendrik Lorentz, Max Planck, and Hermann Weyl present. In this meeting Gehrcke pressed his criticism that Einstein's general theory of relativity now admitted superluminal velocities in rotating frames of reference, which the special theory of relativity had ruled out (see Criticism of the theory of relativity). The physics Nobel Laureate Philipp Lenard suggested Gehrcke for the Nobel Prize in Physics in 1921. From 1922 to 1925, Gehrcke was also a member of the Kuratorium (board of trustees) of the Potsdam Astrophysical Observatory. On 9 February 1922, Max Planck nominated Gehrcke, Max von Laue, G. Müller, Walther Nernst to sit on the Kuratorium, and they were installed by the Preußische Akademie der Wissenschaften (Prussian Academy of Sciences). Gehrcke represented the Physikalisch-Technische Reichsanstalt. During their appointment, they sat four times with Albert Einstein present. This was a surprising collaboration in view of what had happened just 18 months earlier at the gathering under the auspices of the Arbeitsgemeinschaft deutscher Naturforscher and the responses in the press by Einstein, Laue, and Nernst. Memberships Gehrcke was a member of professional organizations, which included: Deutsche Physikalische Gesellschaft (German Physical Society) Berlin Society of Anthropology, Ethnology and Prehistory Literature by Gehrcke Ernst Gehrcke and Rudolf Seeliger Über das Leuchten der Gase unter dem Einfluss von Kathodenstrahlen, Verh. D. Deutsch. Phys. Ges. (2) 15, 534–539 (1912), cited in Mehra, Volume 1, Part 2, p. 776. 
Gehrcke, Ernst Die gegen die Relativitätstheorie erhobenen Einwände, Die Naturwissenschaften Volume 1, 62–66 (1913) Gehrcke, Ernst Zur Kritik und Geschichte der neueren Gravitationstheorien, Annalen der Physik Volume 51, Number 4, 119 – 124 (1916) Gehrcke, Ernst Berichtigung zum Dialog über die Relativitätstheorie, Die Naturwissenschaften Volume 7, 147 – 148 (1919) Gehrcke, Ernst Zur Diskussion über den Äther, Zeitschrift der Physik Volume 2, 67 – 68 (1920) Gehrcke, Ernst Wie die Energieverteilung der schwarzen Strahlung in Wirklichkeit gefunden wurde, Physikalische Zeitschrift Volume 37, 439 – 440 (1936) Books by Gehrcke Gehrcke, Ernst (editor) Handbuch der physikalischen Optik. In zwei Bänden (Barth, 1927–1928) Bibliography Beyerchen, Alan D. Scientists Under Hitler: Politics and the Physics Community in the Third Reich (Yale, 1977) Einstein, Albert Meine Antwort. Über die anti-relativitätstheoretische G.M.b.H., Berliner Tageblatt Volume 49, Number 402, Morning Edition A, p. 1 (27 August 1920), translated and published as Document #1, Albert Einstein: My Reply. On the Anti-Relativity Theoretical Co., Ltd. [August 27, 1920] in Klaus Hentschel (editor) and Ann M. Hentschel (editorial assistant and translator) Physics and National Socialism: An Anthology of Primary Sources (Birkhäuser, 1996) pp. 1 – 5. Clark, Ronald W. Einstein: The Life and Times (World, 1971) Goenner, Hubert The Reaction to Relativity Theory I: The Anti-Einstein Campaign in Germany in 1920 pp. 107–136 in Mara Beller (editor), Robert S. Cohen (editor), and Jürgen Renn Einstein in Context (Cambridge, 1993) (paperback) Heilbron, J. L. The Dilemmas of an Upright Man: Max Planck and the Fortunes of German Science (Harvard, 2000) Hentschel, Klaus (Editor) and Ann M. Hentschel (Editorial Assistant and Translator) Physics and National Socialism: An Anthology of Primary Sources (Birkhäuser, 1996) Mehra, Jagdish, and Helmut Rechenberg The Historical Development of Quantum Theory. Volume 1 Part 2 The Quantum Theory of Planck, Einstein, Bohr and Sommerfeld 1900 – 1925: Its Foundation and the Rise of Its Difficulties. (Springer, 2001) van Dongen, Jeroen Reactionaries and Einstein’s Fame: “German Scientists for the Preservation of Pure Science,” Relativity, and the Bad Nauheim Meeting, Physics in Perspective Volume 9, Number 2, 212–230 (June, 2007). Institutional affiliations of the author: (1) Einstein Papers Project, California Institute of Technology, Pasadena, CA 91125, USA, and (2) Institute for History and Foundations of Science, Utrecht University, P.O. Box 80.000, 3508 TA Utrecht, The Netherlands. References 1878 births 1960 deaths 20th-century German physicists Relativity critics
Ernst Gehrcke
Physics
2,238
35,568,276
https://en.wikipedia.org/wiki/LocationSmart
LocationSmart, originally called TechnoCom Location Platform, is a location-as-a-service (LaaS) company based in Carlsbad, California, that provides location APIs to enterprises and operates a secure, cloud-based and privacy-protected platform. In February 2015, it acquired a competitor, Locaid. LocationSmart provides near real-time location data for devices including smartphones, feature phones, tablets, M2M, IoT and other connected devices on Tier 1 and Tier 2 wireless networks in the U.S. and Canada. This includes AT&T, Verizon Wireless, T-Mobile US, Sprint Corporation, MetroPCS, U.S. Cellular, Rogers Communications, Bell Canada and Telus. History Founded in 1995, TechnoCom Corporation was headquartered in Los Angeles and called one of the area's fastest growing companies in 2003 by Deloitte & Touche. In 2012, the TechnoCom Location Platform was rebranded as LocationSmart. On May 17, 2018, media outlets reported that the LocationSmart website allowed anyone to obtain the realtime location of any cell phone using any of the major U.S. wireless carriers (including AT&T, Verizon, T-Mobile, and Sprint), as well as some Canadian carriers, to within a few hundred yards, given only the phone number. Approximately 200 million customers may have been exposed. No consent was required, and there was no ability to opt-out. In addition, the data could be requested by anyone anonymously, with no authentication, authorization, or payment required. Security researcher Robert Xiao, who discovered the vulnerability, stated that the LocationSmart API failed to implement basic checks and that the vulnerability could have been found by anyone with little effort. In response, LocationSmart took the vulnerable service offline and claimed the company "takes privacy seriously". Clients LocationSmart uses a variety of location methods that include cellular network location, Wi-Fi location, IP address location, landline location, hybrid location via a software development kit (SDK), browser location and global site identification (GSID) location. LocationSmart provides location services for enterprises and Fortune 500 companies. LocationSmart is also used by US states to offer online gaming within the state borders. LocationSmart is a member of CTIA, the Transportation Intermediaries Association (TIA), and the International Association of Privacy Professionals (IAPP). See also Location-based service Location as a Service References External links www.locationsmart.com Companies based in San Diego County, California Companies based in Carlsbad, California Internet geolocation Location-based software Mobile technology companies Wireless locating
LocationSmart
Technology
538
22,264,758
https://en.wikipedia.org/wiki/List%20of%20Compact%20Disc%20and%20DVD%20copy%20protection%20schemes
This is a list of notable CD and DVD copy protection schemes. For other media, see List of Copy Protection Schemes. Commercial CD protection schemes CD-Cops Requires the user to enter a CD-code (or reads an embedded CD-code) that describes the geometry of the CD, so that data can be correctly located on the disc. SafeDisc (versions 1–5) Adds a unique digital signature at the time of manufacturing which is designed to be difficult to copy or transfer, so that software is able to detect copied media. SafeCast The encryption key expires after a pre-determined date so the media can be used only temporarily. Also used to implement trial editions of programs. SecuROM Limits the number of PCs activated at the same time from the same key. StarForce Asks for a serial ID at install or startup to verify the license. TAGES Verifies an authentic copy by checking for the existence of "twin sectors", which are sectors with the same logical address but different data. Twin sectors are straightforward to read but difficult to reproduce when writing a copy (a minimal sketch of this duplicate-address check appears after the audio CD section below). Commercial DVD protection schemes Analog Protection System Adds pulses to analog video signals to negatively impact the AGC circuit of a recording device, so the images on copied DVDs become garbled. Sony ARccOS Protection Inserts corrupted sectors in areas that normal players will not access but ripping software does, to trigger errors during replication. Burst Cutting Area Writes a barcode in a circular area near the center of the disc (referred to as the burst cutting area) which cannot be written without using special equipment. DVD-Cops See CD-Cops in the previous section. DVD region code Restricts the region where media can be played by matching the region number with a configuration flag in DVD players. LaserLock Includes a hidden directory on the CD containing corrupted data which will cause errors while being copied. SafeDisc (version 4) See SafeDisc (versions 1–5) in the previous section. SecuROM See previous section. TAGES See previous section. Commercial Audio CD/DVD protection schemes Cactus Data Shield Works by intentionally violating Red Book CD Digital Audio standards, such as erroneous disc navigation and corrupted data, preventing successful ripping of the data. However, the original disc itself does not play correctly in some CD/DVD players. Wavy data track The disc's data track is wavy instead of straight, so only discs with the same wavy-shaped data track will be playable. Extended Copy Protection (XCP) Installs software on the computer after agreement to the EULA the first time the media is inserted, and the software watches for any ripper software trying to access the CD drive. This copy protection can be defeated simply by using a computer that is not running Microsoft Windows, not using an account with administrative privileges, or preventing the installer from running, and has long since been discontinued due to a public relations disaster caused by the software behaving identically to a rootkit. Key2Audio Another deliberate violation of the Red Book standard intended to make the CD play only on CD players and not on computers, by applying a bogus data track onto the disc during manufacturing, which CD players will ignore as a non-audio track. The system could be disabled by tracing the outer edge of a CD with a felt-tip marker. MediaMax CD3 Installs software on the computer that tries to play the media so that other software cannot read data directly from audio discs in the CD-ROM drive. Silently installing software on a computer created a controversy about modifying a computer's behaviour without a user's consent. 
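The following sketch illustrates the twin-sector idea behind the TAGES entry above in the most schematic way possible. It is not TAGES code and does not touch real optical-drive APIs; the disc is modelled as a plain Python mapping from logical sector addresses to one or more stored payloads, an assumption made purely for illustration.

```python
# Schematic model of a "twin sector" authenticity check.
# A pressed original can hold two different payloads behind one logical
# address; a burned copy holds only one. Reading the address in two
# different ways (modelled here as a "first" vs "second" pass) and comparing
# the results distinguishes the two cases. Purely illustrative.

ORIGINAL_DISC = {
    100: ["payload-A"],                 # ordinary sector
    101: ["payload-B", "payload-B2"],   # twin sector: two payloads, one address
}

COPIED_DISC = {
    100: ["payload-A"],
    101: ["payload-B"],                 # the copy kept only one of the twins
}

def read_sector(disc, address, pass_number):
    """Return the payload seen on a given read pass (0 or 1)."""
    payloads = disc[address]
    return payloads[pass_number % len(payloads)]

def looks_authentic(disc, twin_address=101):
    """Authentic if two read passes of the twin address return different data."""
    first = read_sector(disc, twin_address, 0)
    second = read_sector(disc, twin_address, 1)
    return first != second

if __name__ == "__main__":
    print("original:", looks_authentic(ORIGINAL_DISC))  # True
    print("copy:    ", looks_authentic(COPIED_DISC))    # False
```

On real hardware the two passes would correspond to different read strategies against the drive and the payloads to raw sector data; the dictionary model above only captures the comparison logic.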
Console CD/DVD protection schemes Dreamcast (GD-ROM) Multiple tables of contents (TOC) made normal CD players unable to read beyond the first track. However, one could read a GD-ROM on a CD reader by swapping the disc after the fake TOC had been read. FADE Creates fake scratches on the disk image which copying programs will automatically try to fix. Instead of alerting the user that a copied disc has been detected, the program plays the game in a buggy manner. PlayStation (CD-ROM) An authority pattern pressed on the internal circumference of the media, which could not be copied, is used to detect authorized copies. Some titles also use the Libcrypt mechanism to validate the disc by using a checksum as a magic number in subroutines. PlayStation 2 (CD-ROM, DVD-ROM) A map file that contains the exact positions and file size information of the disc is stored at a position beyond the file limit. The game calls this location directly, so that a burned copy with no data beyond the file limit cannot be played. PSP (Universal Media Disc) Since no blank media or writer exists, the media itself cannot be copied, but one could make an ISO image (a file version of the UMD) on a memory card. The unique format also made the media expensive and difficult to adapt. Xbox (DVD) Two sets of media descriptors are used. Initially, and on typical DVD-ROM drives, only a short partition containing a brief DVD Video can be seen. The lead-out section of the disk stores a second set of media descriptors describing the bounds of the main partition. It also contains a partially-encrypted "security sector" used for further authentication. The lead-out area is not typically directly accessible with consumer DVD-ROM hardware. Furthermore, the key for the security sector is located in the sector's raw header. This header information, unlike the raw headers of CD-ROM disks, is not accessible by default on nearly all DVD-ROM drives. Additional "challenges" are implemented in the security sector through a table, with more challenge types added over the lifespan of the Xbox and Xbox 360. These include, as an example from their earliest form, checks for unreadable sectors in predetermined ranges. References CD CD List List List Copy protection Copy protection
List of Compact Disc and DVD copy protection schemes
Technology
1,183
2,273,245
https://en.wikipedia.org/wiki/Pittsburgh%20toilet
A Pittsburgh toilet, or Pittsburgh potty, is a basement toilet configuration commonly found in the area of Pittsburgh in the United States. It consists of an ordinary flush toilet with no surrounding walls. Most of these toilets are paired with a crude basement shower apparatus and a large sink, in a space that often doubles as a laundry room. Origin The most popular explanation for the Pittsburgh toilet is related to Pittsburgh's status as a major industrial city in the 20th century. According to this explanation, such toilets were said to be used by steelworkers and miners who, grimy from the day's labor, could use an exterior door to enter the basement directly from outside and use the basement's shower and toilet before heading upstairs. Alternatively, they may have served to prevent sewage backups from flooding the living areas of homes. As sewage backups tend to flood the lowest fixture in a residence, a Pittsburgh toilet would be the fixture to overflow, containing the sewage leak in the basement. References External links Pittsburgh Post-Gazette article mentions Pittsburgh toilet Pittsburgh Magazine article on the "Pittsburgh Potty" Toilets History of Pittsburgh Culture of Pittsburgh Working-class culture in Pennsylvania
Pittsburgh toilet
Biology
232
11,444,659
https://en.wikipedia.org/wiki/Fumarylacetoacetic%20acid
Fumarylacetoacetic acid (fumarylacetoacetate) is an intermediate in the metabolism of tyrosine. It is formed through the conversion of maleylacetoacetate into fumarylacetoacetate by the enzyme maleylacetoacetate isomerase. See also Fumarylacetoacetate hydrolase References Dicarboxylic acids Beta-keto acids Enones
Fumarylacetoacetic acid
Chemistry,Biology
89
55,480,763
https://en.wikipedia.org/wiki/NGC%204474
NGC 4474 is an edge-on lenticular galaxy located about 50 million light-years away in the constellation Coma Berenices. NGC 4474 was discovered by astronomer William Herschel on April 8, 1784. It is a member of the Virgo Cluster. See also List of NGC objects (4001–5000) References External links Lenticular galaxies Coma Berenices 4474 41241 7634 Astronomical objects discovered in 1784 Virgo Cluster
NGC 4474
Astronomy
94
1,156,511
https://en.wikipedia.org/wiki/Phthalocyanine
Phthalocyanine () is a large, aromatic, macrocyclic, organic compound with the formula C32H18N8 (abbreviated H2Pc) and is of theoretical or specialized interest in chemical dyes and photoelectricity. It is composed of four isoindole units linked by a ring of nitrogen atoms. H2Pc has a two-dimensional geometry and a ring system consisting of 18 π-electrons. The extensive delocalization of the π-electrons affords the molecule useful properties, lending itself to applications in dyes and pigments. Metal complexes derived from Pc2−, the conjugate base of H2Pc, are valuable in catalysis, organic solar cells, and photodynamic therapy. Properties Phthalocyanine and derived metal complexes (MPc) tend to aggregate and, thus, have low solubility in common solvents. Benzene at 40 °C dissolves less than a milligram of H2Pc or CuPc per litre. H2Pc and CuPc dissolve easily in sulfuric acid due to the protonation of the nitrogen atoms bridging the pyrrole rings. Many phthalocyanine compounds are thermally very stable and do not melt but can be sublimed. CuPc sublimes above 500 °C under inert gases such as nitrogen. Substituted phthalocyanine complexes often have much higher solubility. They are less thermally stable and often cannot be sublimed. Unsubstituted phthalocyanines strongly absorb light between 600 and 700 nm, thus these materials are blue or green. Substitution can shift the absorption towards longer wavelengths, changing the color from pure blue to green to colorless (when the absorption is in the near infrared). There are many derivatives of the parent phthalocyanine, in which either carbon atoms of the macrocycle are exchanged for nitrogen atoms or the peripheral hydrogen atoms are substituted by functional groups such as halogens, hydroxyl, amine, alkyl, aryl, thiol, alkoxy and nitrosyl groups. These modifications allow for the tuning of the electrochemical properties of the molecule such as absorption and emission wavelengths and conductance. History In 1907, an unidentified blue compound, now known to be phthalocyanine, was reported. In 1927, Swiss researchers serendipitously discovered copper phthalocyanine, copper naphthalocyanine, and copper octamethylphthalocyanine in an attempted conversion of o-dibromobenzene into phthalonitrile. They remarked on the enormous stability of these complexes but did not further characterize them. In the same year, iron phthalocyanine was discovered at Scottish Dyes of Grangemouth, Scotland (later ICI). It was not until 1934 that Sir Patrick Linstead characterized the chemical and structural properties of iron phthalocyanine. Synthesis Phthalocyanine is formed through the cyclotetramerization of various phthalic acid derivatives including phthalonitrile, diiminoisoindole, phthalic anhydride, and phthalimides. Alternatively, heating phthalic anhydride in the presence of urea yields H2Pc. Using such methods, approximately 57,000 tonnes (about 63,000 short tons) of various phthalocyanines were produced in 1985. More often, MPc is synthesized rather than H2Pc, due to the greater research interest in the former. To prepare these complexes, the phthalocyanine synthesis is conducted in the presence of metal salts. Halogenated and sulfonated derivatives of copper phthalocyanines are commercially important as dyes. Such compounds are prepared by treating CuPc with chlorine, bromine or oleum. Applications At the initial discovery of Pc, its uses were primarily limited to dyes and pigments. 
Modification of the substituents attached to the peripheral rings allows for the tuning of the absorption and emission properties of Pc to yield differently colored dyes and pigments. There has since been significant research on H2Pc and MPc resulting in a wide range of applications in areas including photovoltaics, photodynamic therapy, nanoparticle construction, and catalysis. The electrochemical properties of MPc make them effective electron donors and acceptors. As a result, MPc-based organic solar cells with power conversion efficiencies at or below 5% have been developed. Furthermore, MPcs have been used as catalysts for the oxidation of methane, phenols, alcohols, polysaccharides, and olefins; MPcs can also be used to catalyze C–C bond formation and various reduction reactions. Silicon and zinc phthalocyanines have been developed as photosensitizers for non-invasive cancer treatment. Various MPcs have also shown the ability to form nanostructures which have potential applications in electronics and biosensing. Phthalocyanine is also used on some recordable DVDs. Related compounds Phthalocyanines are structurally related to other tetrapyrrole macrocycles including porphyrins and porphyrazines. They feature four pyrrole-like subunits linked to form a 16-membered inner ring composed of alternating carbon and nitrogen atoms. Structurally larger analogues include naphthalocyanines. The pyrrole-like rings within H2Pc are closely related to isoindole. Both porphyrins and phthalocyanines function as planar tetradentate dianionic ligands that bind metals through four inwardly projecting nitrogen centers. Such complexes are formally derivatives of Pc2−, the conjugate base of H2Pc. Soluble phthalocyanines Of fundamental but little practical value, soluble phthalocyanines have been prepared. Long alkyl chains can be added to improve their solubility in organic solvents. Soluble derivatives can be used for spin-coating or drop-casting. Alternatively, introducing ionic or hydrophilic groups into the structure can confer water solubility. Solubilization can also be achieved through axial coordination. For instance, the axial ligand functionalization of silicon phthalocyanine has been extensively studied. Toxicity and hazards No evidence has been reported for acute toxicity or carcinogenicity of phthalocyanine compounds. The LD50 (rats, oral) is 10 g/kg. Footnotes References External links Macrocycles Chelating agents
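As a small arithmetic check on the formula C32H18N8 given above, the sketch below computes the molar mass of unsubstituted phthalocyanine from rounded standard atomic weights; it is illustrative only and not tied to any particular chemistry library.

```python
# Molar mass of unsubstituted phthalocyanine (H2Pc, C32H18N8) from rounded
# standard atomic weights. Illustrative arithmetic only.

ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007}  # g/mol
FORMULA = {"C": 32, "H": 18, "N": 8}

molar_mass = sum(ATOMIC_WEIGHTS[el] * count for el, count in FORMULA.items())
print(f"H2Pc molar mass ≈ {molar_mass:.2f} g/mol")
```

Running it prints a value of about 514.55 g/mol.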
Phthalocyanine
Chemistry
1,342
34,317,728
https://en.wikipedia.org/wiki/Houtermans%20Award
The Houtermans Award is given annually by the European Association of Geochemistry for outstanding contributions to geochemistry made by scientists under 35 years old or within 6 years of their PhD award. The award is named after Fritz Houtermans and consists of an engraved medal and an honorarium of 1000 euros. The F.G. Houtermans Award is the only award category of the Geochemical Society and the European Association of Geochemistry with equal gender representation over the last decade. Award winners See also List of geology awards References European science and technology awards Geochemistry Geology awards
Houtermans Award
Chemistry,Technology
123
7,768,976
https://en.wikipedia.org/wiki/List%20of%20sailboat%20designers%20and%20manufacturers
This is a list of notable sailboat designers and manufacturers, which are described by an article in English Wikipedia. Sailboat design and manufacturing is done by a number of companies and groups. Notable designers Sailboat designer articles in Wikipedia: Alan Payne Ben Lexcen Bill Langan Bill Lapworth Bill Lee Bill Luders Britton Chance Jr. Bruce Farr Bruce Kirby Bruce Nelson Carl Alberg Charles Ernest Nicholson Charley Morgan C. Raymond Hunt Associates Dennison J. Lawlor Doug Peterson Edward Burgess Edwin Augustus Stevens Jr., Cox & Stevens E.G. van de Stadt Frank Bethwaite Gary Mull Germán Frers George Cassian George Harding Cuthbertson George Hinterhoeller George Lennox Watson George Steers Graham & Schlageter Greg Elliott Gregory C. Marshall Naval Architect Ltd. Group Finot Jens Quorning Johann Tanzer John Alden John Beavor-Webb John Illingworth John Laurent Giles John Marples John Westell Juan Kouyoumdjian J&J Design Rodney Johnstone Laurie Davidson Lyle Hess Mark Ellis McCurdy & Rhodes Myron Spaulding Nathanael Greene Herreshoff Olin Stephens & Roderick Stephens, Sparkman & Stephens Philip Rhodes Paolo Cori Reichel/Pugh Raymond Creekmore Robert Perry Robert W. Ball Ron Holland Sandy Douglass Starling Burgess Ted Gozzard Ted Hood Ted Irwin Tony Castro VPLP William Fife William Ion Belton Crealock William Shaw Notable manufacturers Sailboat manufacturer articles in Wikipedia: Aegean Yacht Albin Marine Alexander Stephen and Sons Alloy Yachts Aloha Yachts Alsberg Brothers Boatworks Amel Yachts Archambault Boats Ariel Patterson Austral Yachts Baltic Yachts Bavaria Bayfield Boat Yard Beneteau Bowman Yachts Bristol Yachts C&C Yachts C. & R. Poillon Cabo Rico Yachts Cal Yachts Calgan Marine Cape Cod Shipbuilding Capital Yachts Cape Dory Yachts Caribbean Sailing Yachts Cascade Yachts Catalina Yachts Cavalier Yachts Clark Boat Company Classic Yachts Columbia Yachts Com-Pac Yachts Cooper Enterprises Cornish Crabbers Coronado Yachts Coastal Recreation CS Yachts CW Hood Yachts David Carll Dehler Yachts Douglass & McLeod Down East Yachts Dragonfly Trimarans Dufour Yachts Endeavour Yacht Corporation Ericson Yachts ETAP Yachting Freedom Yachts George Lawley & Son Grampian Marine Hake Yachts Hallberg-Rassy Hans Christian Yachts Hanse Yachts Hinckley Yachts Hinterhoeller Yachts Hodgdon Yachts Holland Jachtbouw Hughes Boat Works Hunter Boats Hylas Yachts Irwin Yachts Island Packet Yachts Jensen Marine J/Boats Jakobson Shipyard Jeanneau Jesse Carll (shipbuilder) Jeremy Rogers Limited Johnson Boat Works Jongert J. Samuel White Laguna Yachts Lancer Yachts Laser Performance Lockley Newport Boats MacGregor Yacht Corporation Marlow-Hunter Marine Montgomery Marine Products Melges Performance Sailboats Menger Boatworks Mirage Yachts Moses Adams (shipbuilder) Najad Yachts Nauticat Yachts Oy Nautor's Swan O'Day Corp. Ontario Yachts Oyster Marine Paceship Yachts Pacific Seacraft Palmer Johnson Pearson Yachts Perini Navi PlastiGlass Pogo Structures Precision Boat Works Rosetti Marino Royal Huisman Royal Denship Rustler Yachts S2 Yachts Seafarer Yachts Seidelmann Yachts Skene Boats Smith and Rhuland South Coast Seacraft Sovereign Yachts Su Marine Yachts Tanzer Industries Tartan Marine Universal Marine Vanguard Sailboats Vandestadt and McGruer Limited Varne Marine W. D. Schock Corporation Wally Yachts Watkins Yachts William H. Brown X-Yachts See also List of sailing boat types List of large sailing yachts Sailboat Sailboat Sailboat manufacturers
List of sailboat designers and manufacturers
Engineering
770
37,688,059
https://en.wikipedia.org/wiki/NFPA%201670
NFPA 1670 (Standard on Operations and Training for Technical Search and Rescue Incidents) is a standard published by the National Fire Protection Association. The standard identifies and establishes levels of functional capability for conducting operations at technical search and rescue incidents while minimizing threats to rescuers. The last edition was published in 2017, with subsequent editions being integrated into the NFPA 2500. References NFPA Standards Fire protection
NFPA 1670
Engineering
84
26,937,492
https://en.wikipedia.org/wiki/Barbigerone
Barbigerone is one of a few pyranoisoflavones among several groups of isoflavones. It was first isolated from the seeds of the leguminous plant Tephrosia barbigera, hence the name "barbigerone". Members of the genus Millettia are now known to be rich in barbigerone, including M. dielsiana, M. ferruginea, M. usaramensis, and M. pachycarpa. It has also been isolated from the medicinal plant Sarcolobus globosus. Barbigerone from S. globosus has been shown to have significant antioxidant properties. Barbigerone exhibits profound antiplasmodial activity against the malarial parasite Plasmodium falciparum. It has also demonstrated anti-cancer potential, as it causes apoptosis of murine lung-cancer cells. References External links Metabolome J-Global Isoflavones Flavonoid antioxidants Chemopreventive agents
Barbigerone
Chemistry
228
18,477,013
https://en.wikipedia.org/wiki/Efaproxiral
Efaproxiral (INN) is an analogue of bezafibrate (a lipid-lowering agent) developed for the treatment of depression, traumatic brain injury, ischemia, stroke, myocardial infarction, diabetes, hypoxia, sickle cell disease and hypercholesterolemia, and as a radiosensitiser. The chemical is a derivative of propanoic acid. One use for efaproxiral is to increase the efficacy of certain chemotherapy drugs which have reduced efficacy against hypoxic tumours, and which can thus be made more effective by increased offloading of oxygen into the tumour tissues. No benefit was seen for efaproxiral in phase III clinical trials. The increased oxygenation of tissues could theoretically also produce enhanced exercise capacity; in feline, rat and canine models this lasted for approximately 100 minutes immediately after a high-dose 45-minute intravenous infusion. This has led the World Anti-Doping Agency to categorise efaproxiral under a prohibited method to artificially enhance the uptake, transport or delivery of oxygen. There is no existing evidence that efaproxiral can effectively enhance performance in humans. Efaproxiral can be absorbed via transdermal, rectal, inhalation and gastrointestinal routes, though not at plasma concentrations great enough to alter the oxygen–haemoglobin dissociation curve. Efaproxiral is explicitly excluded from the 2012 World Anti-Doping Agency list of Prohibited Substances and is explicitly included in the Prohibited Methods section M1 as a forbidden procedure to alter the oxygen–haemoglobin dissociation curve in order to allosterically modify haemoglobin. References Antineoplastic drugs Acetanilides Phenol ethers Carboxylic acids
Efaproxiral
Chemistry
376
35,892,029
https://en.wikipedia.org/wiki/Zimmert%20set
In mathematics, a Zimmert set is a set of positive integers associated with the structure of quotients of hyperbolic three-space by a Bianchi group. Definition Fix an integer d and let D be the discriminant of the imaginary quadratic field Q(√−d). The Zimmert set Z(d) is the set of positive integers n such that 4n² < −D−3 and n ≠ 2; D is a quadratic non-residue of all odd primes dividing n; n is odd if D is not congruent to 5 modulo 8. The cardinality of Z(d) may be denoted by z(d). Property For all but a finite number of d we have z(d) > 1: indeed this is true for all d > 10476. Application Let Γd denote the Bianchi group PSL(2,Od), where Od is the ring of integers of Q(√−d). As a subgroup of PSL(2,C), there is an action of Γd on hyperbolic 3-space H3, with a fundamental domain. It is a theorem that there are only finitely many values of d for which Γd can contain an arithmetic subgroup G for which the quotient H3/G is a link complement. Zimmert sets are used to obtain results in this direction: z(d) is a lower bound for the rank of the largest free quotient of Γd, and so the result above implies that almost all Bianchi groups have non-cyclic free quotients. References Integer sequences Hyperbolic geometry
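The definition above is purely arithmetic, so it can be checked directly. Below is a minimal Python sketch that enumerates Z(d) from the three stated conditions; it assumes the usual convention for the discriminant of Q(√−d) for squarefree d (D = −d when d ≡ 3 mod 4, otherwise D = −4d) and tests quadratic non-residues with Euler's criterion. It is only an illustration of the definition as stated, not reference code from the literature.

def discriminant(d):
    # Discriminant of Q(sqrt(-d)) for squarefree d > 0 (standard convention,
    # an assumption of this sketch): D = -d if d % 4 == 3, else D = -4*d.
    return -d if d % 4 == 3 else -4 * d

def odd_prime_factors(n):
    # Distinct odd prime factors of n, by trial division.
    primes, p = set(), 3
    while n % 2 == 0:
        n //= 2
    while p * p <= n:
        if n % p == 0:
            primes.add(p)
            while n % p == 0:
                n //= p
        p += 2
    if n > 1:
        primes.add(n)
    return primes

def is_nonresidue(D, p):
    # Euler's criterion: D is a quadratic non-residue mod the odd prime p
    # iff D^((p-1)/2) == -1 (mod p).
    return pow(D % p, (p - 1) // 2, p) == p - 1

def zimmert_set(d):
    # Z(d) per the stated conditions: 4n^2 < -D-3, n != 2, D a non-residue
    # of every odd prime dividing n, and n odd unless D = 5 (mod 8).
    D = discriminant(d)
    result, n = [], 1
    while 4 * n * n < -D - 3:
        ok = n != 2
        ok = ok and (n % 2 == 1 or D % 8 == 5)
        ok = ok and all(is_nonresidue(D, p) for p in odd_prime_factors(n))
        if ok:
            result.append(n)
        n += 1
    return result

print(zimmert_set(10))  # z(d) is then len(zimmert_set(d))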
Zimmert set
Mathematics
340
26,613,365
https://en.wikipedia.org/wiki/Scalos
Scalos is a desktop replacement for the original Amiga Workbench GUI, based on a subset of APIs and its own front-end window manager of the same name. Scalos is NOT an AmigaOS replacement, although its name suggests otherwise. Its goal is to emulate the real Workbench behaviour, plus integrating additional functionality and an enhanced look. As stated on its website, the name "Scalos" was inspired by the fictional time-accelerated planet Scalos in the Star Trek episode "Wink of an Eye". History Scalos is a former commercial product originally written in 1999 by programmer Stefan Sommerfield for a software house called AlienDesign. The purpose was to recreate the mouse-and-click experience on Amiga, offering an alternative to the Workbench interface present in versions 3.0 and 3.1 of AmigaOS (at that time already considered obsolete). A group of English programmers known as Satanic Dreams Software (a software firm developing for Windows, Macintosh and Linux) took over. The release versions 1.1 and 1.2 (internally versions 39.2) came out in 2000 as freeware. These may be found on the Amiga Aminet official online repository. Scalos was finally open sourced in 2012. The last release candidate is version 41.8 RC1; it is compatible with AmigaOS 3 for the Motorola 68000 family of processors, with AmigaOS 4 and MorphOS on PowerPC machines, and with AROS, at the moment on computers with processors from Intel 80386 onwards. The Scalos project can be found on SourceForge. Versions v1.0 (V39.201) – November 1999 v1.1 (V39.212) – 1999 (?) v1.2b (39.220) – June 6, 2000 v1.2d (39.222) – 2000 (latest public beta executable) v1.3 (40.7) (beta) – August 2, 2001 v1.3 (40.22) – September 25, 2002 v1.4 (40.32) (beta) March 31, 2005 v1.6 (41.4) – March 27, 2007 v1.7 (41.5) – August 12, 2007 (41.6) – March 12, 2009 (41.7) (beta) – March 15, 2010 (41.8) (RC1) – August 25, 2012 Features Scalos is a Workbench-compatible replacement which is declared by its developers 100-percent compatible with the original Amiga interface. It features internal 64-bit arithmetic which allows support for hard disks over 64 GB, and a complete internal multitasking system (each window drawn on the desktop is represented in the system by its own task). It is completely adjustable by the user, and features a system for drawing and managing windows (as in the standard Amiga Intuition system). Each window may have its own background pattern (sporting an optimized pattern routine and scaling) and automatic content-refresh. Menus are editable. Standard Amiga "Palette" and windows "Pattern" preferences have been replaced with new ones. Scalos maintains its own API and its own plug-in system for the benefit of developers who want to create software for Scalos and enhance the system. Scalos supports as standard icon sets the Amiga NewIcons replacement icons, and the Amiga GlowIcons set on older versions like AmigaOS 3.5, including thumbnail previews of files as icons. It therefore represents a whole Amiga icon system Datatype capable of supporting various types of icons. This includes PNG icons with alpha channel and transparencies, and scalable icons (the aforementioned NewIcons and GlowIcons). Scalos is also fully truecolor-compliant. References External links Scalos Homepage Scalos 1.2 Info Page at Aminet Amiga Official Repository Scalos article at AmigaHistory site. Amiga software AmigaOS AmigaOS 4 software AROS software MorphOS software Desktop shell replacement
Scalos
Technology
842
24,363,565
https://en.wikipedia.org/wiki/TCP%20fusion
TCP Fusion is a feature that provides an optimized TCP loopback data path. It is implemented in the Transmission Control Protocol (TCP) stack of Oracle's Solaris 10 and Solaris 11 operating systems, as well as in a number of software projects based on the open-source codebase from the OpenSolaris project. The idea is simple: a client and a server connected over the local loopback interface within the same system do not need the entire TCP/IP protocol stack to exchange data, so a faster data path is provided by fusing the two endpoints. The implementation is documented in the source file inet/tcp/tcp_fusion.c. The feature may be enabled or disabled via the /etc/system config file for the Solaris or genunix kernel; the only line required is "set ip:do_tcp_fusion = 0x0" to disable the feature (FALSE), or "0x1" (TRUE) to enable it. See https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/inet/tcp/ Fusion
TCP fusion
Technology
252
497,380
https://en.wikipedia.org/wiki/Chosen%20people
Throughout history, various groups of people have considered themselves to be the chosen people of a deity, for a particular purpose. The phenomenon of "chosen people" is well known among the Israelites and Jews, where the term originally referred to the Israelites as being selected by Yahweh to worship only him and to fulfill the mission of proclaiming his truth throughout the world. Some claims of chosenness are based on parallel claims of Israelite ancestry, as is the case for the Christian Identity and Black Hebrew sects—both of which claim themselves (and not Jews) to be the "true Israel". Others claim that the concept is spiritual, where individuals who genuinely believe in God are considered to be the "true" chosen people. This view is common among most Christian denominations, who historically believed that the church replaced Israel as the people of God. Anthropologists commonly regard claims of chosenness as a form of ethnocentrism. Judaism In Judaism, "chosenness" is the belief that the Jews, via descent from the ancient Israelites, are the chosen people, i.e., chosen to be in a covenant with God. The idea of the Israelites being chosen by God is found most directly in the Book of Deuteronomy, where it is applied to Israel at Mount Sinai upon the condition of their acceptance of the Mosaic covenant between themselves and God. The decalogue immediately follows, and the seventh day sabbath is given as the sign of the covenant, with a requirement that Israel keep it, or else be cut off. The verb used is bahar, and the idea is alluded to elsewhere in the Hebrew Bible using other terms such as "holy people". Much is written about these topics in rabbinic literature. The three largest Jewish denominations—Orthodox Judaism, Conservative Judaism and Reform Judaism—maintain the belief that the Jews have been chosen by God for a purpose. Sometimes this choice is seen as charging the Jewish people with a specific mission—to be a light unto the nations, and to exemplify the covenant with God as described in the Torah. This is first prominently outlined in Genesis 12:2. While the concept of "chosenness" may be understood by some to connote ethnic supremacy, Conservative Judaism denies this, as it claims that as a result of being chosen, Jews also bear the greatest responsibility, which incurs the most severe punishment upon disobedience. "Few beliefs have been subject to as much misunderstanding as the 'Chosen People' doctrine. The Torah and the Prophets clearly stated that this does not imply any innate Jewish superiority. In the words of Amos (3:2) 'You alone have I singled out of all the families of the earth—that is why I will call you to account for your iniquities.' The Torah tells us that we are to be "a kingdom of priests and a holy nation" with obligations and duties which flowed from our willingness to accept this status. Far from being a license for special privilege, it entailed additional responsibilities not only toward God but to our fellow human beings. As expressed in the blessing at the reading of the Torah, our people have always felt it to be a privilege to be selected for such a purpose. For the modern traditional Jew, the doctrine of the election and the covenant of Israel offers a purpose for Jewish existence which transcends its own self interests. 
It suggests that because of our special history and unique heritage we are in a position to demonstrate that a people that takes seriously the idea of being covenanted with God can not only thrive in the face of oppression, but can be a source of blessing to its children and its neighbors. It obligates us to build a just and compassionate society throughout the world and especially in the land of Israel where we may teach by example what it means to be a 'covenant people, a light unto the nations.'"Likewise, Rabbi Lord Immanuel Jakobovits views the concept of "choseness" as God choosing different nations, and by extension individuals, to perform unique contributions to the world, similar to the concept of division of labor. "Yes, I do believe that the chosen people concept as affirmed by Judaism in its holy writ, its prayers, and its millennial tradition. In fact, I believe that every people—and indeed, in a more limited way, every individual—is "chosen" or destined for some distinct purpose in advancing the designs of Providence. Only, some fulfill their mission and others do not. Maybe the Greeks were chosen for their unique contributions to art and philosophy, the Romans for their pioneering services in law and government, the British for bringing parliamentary rule into the world, and the Americans for piloting democracy in a pluralistic society. The Jews were chosen by God to be 'peculiar unto Me' as the pioneers of religion and morality; that was and is their national purpose." Christianity and derivatives Seventh-day Adventism Mormonism In Mormonism, all Latter Day Saints are viewed as covenant, or chosen, people because they have accepted the name of Jesus Christ through the ordinance of baptism. In contrast to supersessionism, Latter Day Saints do not dispute the "chosen" status of the Jewish people. Most practicing Mormons receive a patriarchal blessing that reveals their lineage in the House of Israel. This lineage may be blood related or through "adoption;" therefore, a child may not necessarily share the lineage of her parents (but will still be a member of the tribes of Israel). It is a widely held belief that most members of the faith are in the tribe of Ephraim or the tribe of Manasseh. Christian Identity Christian Identity is a belief which holds the view that only either Germanic, Anglo-Saxon, Celtic, Nordic, or Aryan people and those of kindred blood are the descendants of Abraham, Isaac and Jacob and hence the descendants of the ancient Israelites. Independently practiced by individuals, independent congregations, and some prison gangs, it is not an organized religion, nor is it connected with specific Christian denominations. Its theology promotes a racial interpretation of Christianity. Christian Identity beliefs were primarily developed and promoted by American authors who regarded Europeans as the "chosen people" and Jews as the cursed offspring of Cain, the "serpent hybrid" or serpent seed, a belief known as the two-seedline doctrine. White supremacist sects and gangs later adopted many of these teachings. Christian Identity holds that all non-whites (people not of wholly European descent) will either be exterminated or enslaved in order to serve the white race in the new Heavenly Kingdom on Earth under the reign of Jesus Christ. Its doctrine states that only "Adamic" (white) people can achieve salvation and paradise. Mandaeism Mandaeans formally refer to themselves as Nasurai (Nasoraeans) meaning guardians or possessors of secret rites and knowledge. 
Another early self-appellation is bhiri zidqa meaning 'elect of righteousness' or 'the chosen righteous', a term found in the Book of Enoch and Genesis Apocryphon II, 4. Rastafari Based on Jewish biblical tradition and Ethiopian legend via Kebra Nagast, Rastas believe that Israel's King Solomon, together with Ethiopian Queen of Sheba, conceived a child which began the Solomonic line of kings in Ethiopia, rendering the Ethiopian people as the true children of Israel, and thereby chosen. Reinforcement of this belief occurred when Beta Israel, Ethiopia's ancient Israelite First Temple community, were rescued from Sudanese famine and brought to Israel during Operation Moses in 1985. Unification Church Sun Myung Moon taught that Korea is the chosen nation, selected to serve a divine mission and was "chosen by God to be the birthplace of the leading figure of the age" and was the birthplace of "Heavenly Tradition", ushering in God's kingdom. Maasai religion The traditional religion of the Maasai people from East Africa maintains that the Supreme God Ngai has chosen them to herd all cattle in the world, and this belief has been used to justify stealing from other tribes. See also 144,000 Christ of Europe Exceptionalism One true church Religiocentrism The Chosen One (trope) Theocracy References Further reading Ethnocentrism Exceptionalism Racism Religious practices Religious belief and doctrine Religious terminology Christianity and Judaism related controversies
Chosen people
Biology
1,695
13,108,226
https://en.wikipedia.org/wiki/Radio%20occultation
Radio occultation (RO) is a remote sensing technique used for measuring the physical properties of a planetary atmosphere or ring system. Satellites carrying onboard GNSS-Radio occultation instruments include CHAMP, GRACE and GRACE-FO, MetOp and the recently launched COSMIC-2. Atmospheric radio occultation Atmospheric radio occultation relies on the detection of a change in a radio signal as it passes through a planet's atmosphere, i.e. as it is occulted by the atmosphere. When electromagnetic radiation passes through the atmosphere, it is refracted (or bent). The magnitude of the refraction depends on the gradient of refractivity normal to the path, which in turn depends on the density gradient. The effect is most pronounced when the radiation traverses a long atmospheric limb path. At radio frequencies the amount of bending cannot be measured directly; instead, the bending can be calculated using the Doppler shift of the signal given the geometry of the emitter and receiver. The amount of bending can be related to the refractive index by using an Abel transform on the formula relating bending angle to refractivity. In the case of the neutral atmosphere (below the ionosphere), information on the atmosphere's temperature, pressure and water vapor content can be derived, thus giving radio occultation data applications in meteorology. GNSS radio occultation GNSS radio occultation (GNSS-RO), historically also known as GPS radio occultation (GPS-RO or GPSRO), is a type of radio occultation that relies on radio transmissions from GPS (Global Positioning System), or more generally from GNSS (Global Navigation Satellite System), satellites. This is a relatively new technique (first applied in 1995) for performing atmospheric measurements. It is used as a weather forecasting tool, and could also be harnessed in monitoring climate change. The technique involves a low-Earth-orbit satellite receiving a signal from a GNSS satellite. The signal has to pass through the atmosphere and gets refracted along the way. The magnitude of the refraction depends on the temperature and water vapor concentration in the atmosphere. GNSS radio occultation amounts to an almost instantaneous depiction of the atmospheric state. The relative position between the GNSS satellite and the low-Earth-orbit satellite changes over time, allowing for a vertical scanning of successive layers of the atmosphere. GNSS-RO observations can also be conducted from aircraft or on high mountaintops. Planetary satellite missions Current missions include REX on New Horizons. Satellite missions CLARREO Microlab 1 FORMOSAT-3/COSMIC FORMOSAT-7/COSMIC-2 CHAMP GRACE Oceansat Sentinel-6 Michael Freilich GRAS sensor onboard MetOp satellite Spire LEMUR cubesats Yunyao 1 See also Atmospheric limb sounding Bistatic radar References 9. Alexander, P., A. de la Torre, and P. Llamedo (2008), Interpretation of gravity wave signatures in GPS radio occultations, J. Geophys. Res., 113, D16117, doi:10.1029/2007JD009390. External links COSMIC Project Website GeoOptics LLC Website - First commercial operational RO Constellation PlanetIQ Website ROM SAF monitoring ROM SAF website ECMWF monitoring GENESIS Website Planetary science Satellite navigation Satellite meteorology Radio
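As context for the Doppler-to-bending-angle step described above, the standard geometric-optics relations used in GNSS-RO retrievals can be written compactly. This is a minimal sketch of the textbook formulas under an assumed spherically symmetric atmosphere; the notation (α bending angle, a impact parameter, n refractive index) follows common usage rather than anything specific to this article.

\[
\alpha(a) = -2a \int_{r_t}^{\infty} \frac{d\ln n/dr}{\sqrt{n^2 r^2 - a^2}}\, dr,
\qquad
\ln n(r_t) = \frac{1}{\pi} \int_{a_t}^{\infty} \frac{\alpha(a)}{\sqrt{a^2 - a_t^2}}\, da,
\]

where \(r_t\) is the tangent (ray perigee) radius and \(a_t = n(r_t)\, r_t\). The second expression is the Abel inversion mentioned in the text: it recovers the refractive index profile from the observed bending angles, after which refractivity \(N = 10^6 (n - 1)\) yields density, pressure and temperature through the ideal gas law and hydrostatic balance.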
Radio occultation
Astronomy
688
17,850,439
https://en.wikipedia.org/wiki/Erythravine
Erythravine is a tetrahydroisoquinoline alkaloid found in the plant Erythrina mulungu and other species of the genus Erythrina. Biological activity Some laboratory research has investigated the biological activity of erythravine, but the relevance to effects in humans is unknown. Nicotinic acetylcholine receptor It has been shown to be a potent nicotinic receptor antagonist in animal models with an IC50 of 6μM at the α7 site and 13 nM for the α4β2 receptor. Anxiolytic It appears to have anxiolytic effects in animal models of anxiety. Further studies suggest that the anxiolytic effects are only reproducible with the whole extract of Erythrina mulungu but not with the pure alkaloids. Anticonvulsant Erythravine inhibited seizures evoked by bicuculline, pentylenetetrazole, and kainic acid as well as increasing the latency of seizures induced by NMDA. Treatment with erythravine prevented death in all the animals tested with the four convulsants except a few of those treated with kainic acid. See also Bark isolates References Norsalsolinol ethers Cyclohexenols Anticonvulsants Nicotinic antagonists Anxiolytics Isoquinoline alkaloids Conjugated dienes
Erythravine
Chemistry
294
939,170
https://en.wikipedia.org/wiki/Messier%202
Messier 2 or M2 (also designated NGC 7089) is a globular cluster in the constellation Aquarius, five degrees north of the star Beta Aquarii. It was discovered by Jean-Dominique Maraldi in 1746, and is one of the largest known globular clusters. Discovery and Visibility M2 was discovered by the French astronomer Jean-Dominique Maraldi in 1746 while observing a comet with Jacques Cassini. Charles Messier rediscovered it in 1760, but thought it was a nebula without any stars associated with it. William Herschel, in 1783, was the first to resolve individual stars in the cluster. M2 is, under extremely good conditions, just visible to the naked eye. Binoculars or a small telescope will identify this cluster as non-stellar, while larger telescopes will resolve individual stars, of which the brightest are of apparent magnitude 6.5. Characteristics M2 is about 55,000 light-years distant from Earth. At 175 light-years in diameter, it is one of the larger globular clusters known. The cluster is rich, compact, and significantly elliptical. It is 12.5 billion years old and one of the older globular clusters associated with the Milky Way galaxy. M2 contains about 150,000 stars, including 21 known variable stars. Its brightest stars are red and yellow giant stars. The overall spectral type is F4. M2 is part of the Gaia Sausage, the hypothesized remains of a merged dwarf galaxy. Data from Gaia has led to the discovery of an extended tidal stellar stream, about 45 degrees long and 300 light-years (100 pc) wide, that is likely associated with M2. It was possibly perturbed due to the presence of the Large Magellanic Cloud. Messier 2 is located within our Milky Way galaxy, and is one of the oldest clusters of stars assigned to the Milky Way. Like most globular clusters, M2 is found within the galactic halo, specifically in the southern galactic cap. This places it right below the southern pole of the Milky Way. Oosterhoff Classification M2 is classified as an Oosterhoff type II globular cluster. Oosterhoff type is a classification system for globular clusters originally proposed by Pieter Oosterhoff, in which globular clusters are generally separated into two types. Oosterhoff type is determined by the metallicity, age, and average pulsation period of the type ab RR Lyrae variable stars of the cluster. A cluster metallicity below −1.6, an age above 13 billion years, and an average RRab Lyrae pulsation period around 0.64 days indicate a type II cluster. This 0.64-day value, coupled with a metallicity of −1.65, provides evidence that M2 follows the Oosterhoff Gap phenomenon. This is an observed gap between the groupings of type I and type II clusters in the Milky Way on a plot of metallicity versus average RRab pulsation period. M2 is a bit of an anomaly with respect to Oosterhoff type. While it satisfies the metallicity and RRab Lyrae pulsation period conditions, it actually has an age of 12.5 Gyr, well below the cutoff age of 13 Gyr typical for an Oosterhoff type II cluster. This is unexpected because the age of a cluster is generally estimated from its metallicity. However, this abnormality is explained in an article by Marín-Franch. References See also List of Messier objects External links M2, SEDS Messier pages M2, Galactic Globular Clusters Database page Globular clusters NGC objects 002 Gaia-Enceladus Aquarius (constellation)
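As a toy illustration of the Oosterhoff criteria quoted above (metallicity below −1.6, age above 13 Gyr, mean RRab period near 0.64 d for type II), here is a short Python sketch. The thresholds are taken directly from the article text, the period tolerance is an arbitrary assumption, and the function is only a mnemonic for the criteria, not an astronomically rigorous classifier.

def oosterhoff_type(feh, age_gyr, mean_rrab_period_days):
    # Crude two-way split using the thresholds quoted in the text above.
    type_ii_votes = sum([
        feh < -1.6,                                   # metal-poor
        age_gyr > 13.0,                               # old
        abs(mean_rrab_period_days - 0.64) < 0.05,     # <P_ab> near 0.64 d (tolerance assumed)
    ])
    return "Oosterhoff II" if type_ii_votes >= 2 else "Oosterhoff I"

# M2 values quoted in the article: [Fe/H] = -1.65, age = 12.5 Gyr, <P_ab> ~ 0.64 d
print(oosterhoff_type(-1.65, 12.5, 0.64))  # -> "Oosterhoff II": 2 of 3 criteria met, age being the anomaly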
Messier 2
Astronomy
759
1,564,205
https://en.wikipedia.org/wiki/Real-time%20computer%20graphics
Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion. Computers have been capable of generating 2D images such as simple lines, images and polygons in real time since their invention. However, quickly rendering detailed 3D objects is a daunting task for traditional Von Neumann architecture-based systems. An early workaround to this problem was the use of sprites, 2D images that could imitate 3D graphics. Different techniques for rendering now exist, such as ray-tracing and rasterization. Using these techniques and advanced hardware, computers can now render images quickly enough to create the illusion of motion while simultaneously accepting user input. This means that the user can respond to rendered images in real time, producing an interactive experience. Principles of real-time 3D computer graphics The goal of computer graphics is to generate computer-generated images, or frames, using certain desired metrics. One such metric is the number of frames generated in a given second. Real-time computer graphics systems differ from traditional (i.e., non-real-time) rendering systems in that non-real-time graphics typically rely on ray tracing. In this process, millions or billions of rays are traced from the camera to the world for detailed rendering—this expensive operation can take hours or days to render a single frame. Real-time graphics systems must render each image in less than 1/30th of a second. Ray tracing is far too slow for these systems; instead, they employ the technique of z-buffer triangle rasterization. In this technique, every object is decomposed into individual primitives, usually triangles. Each triangle gets positioned, rotated and scaled on the screen, and rasterizer hardware (or a software emulator) generates pixels inside each triangle. These triangles are then decomposed into atomic units called fragments that are suitable for displaying on a display screen. The fragments are drawn on the screen using a color that is computed in several steps. For example, a texture can be used to "paint" a triangle based on a stored image, and then shadow mapping can alter that triangle's colors based on line-of-sight to light sources. Video game graphics Real-time graphics optimizes image quality subject to time and hardware constraints. GPUs and other advances increased the image quality that real-time graphics can produce. GPUs are capable of handling millions of triangles per frame, and modern DirectX/OpenGL class hardware is capable of generating complex effects, such as shadow volumes, motion blurring, and triangle generation, in real-time. The advancement of real-time graphics is evidenced in the progressive improvements between actual gameplay graphics and the pre-rendered cutscenes traditionally found in video games. Cutscenes are typically rendered in real-time—and may be interactive. Although the gap in quality between real-time graphics and traditional off-line graphics is narrowing, offline rendering remains much more accurate. 
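The z-buffer triangle rasterization described above (project each triangle, generate the pixels it covers, keep the nearest fragment per pixel) can be sketched in a few lines. The following Python fragment is a deliberately simplified, CPU-side illustration of that idea using edge functions and a depth buffer; real-time systems do this in GPU hardware, and all names here are illustrative rather than taken from any particular API.

import numpy as np

def edge(a, b, p):
    # Signed-area test: which side of edge a->b the point p lies on.
    return (p[0] - a[0]) * (b[1] - a[1]) - (p[1] - a[1]) * (b[0] - a[0])

def rasterize_triangle(framebuf, zbuf, v0, v1, v2, color):
    # v0, v1, v2: (x, y, z) in screen space; framebuf: HxWx3; zbuf: HxW (smaller z = closer, an assumption).
    h, w = zbuf.shape
    xs = [int(v[0]) for v in (v0, v1, v2)]
    ys = [int(v[1]) for v in (v0, v1, v2)]
    area = edge(v0, v1, v2)
    if area == 0:
        return  # degenerate triangle
    for y in range(max(min(ys), 0), min(max(ys) + 1, h)):
        for x in range(max(min(xs), 0), min(max(xs) + 1, w)):
            p = (x + 0.5, y + 0.5)
            w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
            # Inside test (handles either winding): all edge functions share a sign.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                b0, b1, b2 = w0 / area, w1 / area, w2 / area   # barycentric weights
                z = b0 * v0[2] + b1 * v1[2] + b2 * v2[2]       # interpolated depth
                if z < zbuf[y, x]:                             # depth (z-buffer) test
                    zbuf[y, x] = z
                    framebuf[y, x] = color

# Example: one red triangle into a 64x64 buffer.
fb = np.zeros((64, 64, 3), dtype=np.float32)
zb = np.full((64, 64), np.inf, dtype=np.float32)
rasterize_triangle(fb, zb, (5, 5, 0.3), (60, 10, 0.5), (20, 55, 0.8), (1.0, 0.0, 0.0))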
Advantages Real-time graphics are typically employed when interactivity (e.g., player feedback) is crucial. When real-time graphics are used in films, the director has complete control of what has to be drawn on each frame, which can sometimes involve lengthy decision-making. Teams of people are typically involved in the making of these decisions. In real-time computer graphics, the user typically operates an input device to influence what is about to be drawn on the display. For example, when the user wants to move a character on the screen, the system updates the character's position before drawing the next frame. Usually, the display's response-time is far slower than the input device—this is justified by the immense difference between the (fast) response time of a human being's motion and the (slow) perspective speed of the human visual system. This difference has other effects too: because input devices must be very fast to keep up with human motion response, advancements in input devices (e.g., the current Wii remote) typically take much longer to achieve than comparable advancements in display devices. Another important factor controlling real-time computer graphics is the combination of physics and animation. These techniques largely dictate what is to be drawn on the screen—especially where to draw objects in the scene. These techniques help realistically imitate real world behavior (the temporal dimension, not the spatial dimensions), adding to the computer graphics' degree of realism. Real-time previewing with graphics software, especially when adjusting lighting effects, can increase work speed. Some parameter adjustments in fractal generating software may be made while viewing changes to the image in real time. Rendering pipeline The graphics rendering pipeline ("rendering pipeline" or simply "pipeline") is the foundation of real-time graphics. Its main function is to render a two-dimensional image in relation to a virtual camera, three-dimensional objects (an object that has width, length, and depth), light sources, lighting models, textures and more. Architecture The architecture of the real-time rendering pipeline can be divided into conceptual stages: application, geometry and rasterization. Application stage The application stage is responsible for generating "scenes", or 3D settings that are drawn to a 2D display. This stage is implemented in software that developers optimize for performance. This stage may perform processing such as collision detection, speed-up techniques, animation and force feedback, in addition to handling user input. Collision detection is an example of an operation that would be performed in the application stage. Collision detection uses algorithms to detect and respond to collisions between (virtual) objects. For example, the application may calculate new positions for the colliding objects and provide feedback via a force feedback device such as a vibrating game controller. The application stage also prepares graphics data for the next stage. This includes texture animation, animation of 3D models, animation via transforms, and geometry morphing. Finally, it produces primitives (points, lines, and triangles) based on scene information and feeds those primitives into the geometry stage of the pipeline. Geometry stage The geometry stage manipulates polygons and vertices to compute what to draw, how to draw it and where to draw it. Usually, these operations are performed by specialized hardware or GPUs. 
Variations across graphics hardware mean that the "geometry stage" may actually be implemented as several consecutive stages. Model and view transformation Before the final model is shown on the output device, the model is transformed onto multiple spaces or coordinate systems. Transformations move and manipulate objects by altering their vertices. Transformation is the general term for the four specific ways that manipulate the shape or position of a point, line or shape. Lighting In order to give the model a more realistic appearance, one or more light sources are usually established during transformation. However, this stage cannot be reached without first transforming the 3D scene into view space. In view space, the observer (camera) is typically placed at the origin. If using a right-handed coordinate system (which is considered standard), the observer looks in the direction of the negative z-axis with the y-axis pointing upwards and the x-axis pointing to the right. Projection Projection is a transformation used to represent a 3D model in a 2D space. The two main types of projection are orthographic projection (also called parallel) and perspective projection. The main characteristic of an orthographic projection is that parallel lines remain parallel after the transformation. Perspective projection utilizes the concept that if the distance between the observer and model increases, the model appears smaller than before. Essentially, perspective projection mimics human sight. Clipping Clipping is the process of removing primitives that are outside of the view box in order to facilitate the rasterizer stage. Once those primitives are removed, the primitives that remain will be drawn into new triangles that reach the next stage. Screen mapping The purpose of screen mapping is to find out the coordinates of the primitives during the clipping stage. Rasterizer stage The rasterizer stage applies color and turns the graphic elements into pixels or picture elements. See also Bounding interval hierarchy Demoscene Geometry instancing Optical feedback Quartz Composer Real time (media) Real-time raytracing Tessellation (computer graphics) Video art Video display controller References Bibliography External links RTR Portal – a trimmed-down "best of" set of links to resources Computer graphics Computer graphics
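The projection step described above maps view-space geometry into a canonical volume, and perspective projection is usually expressed as a 4x4 matrix applied to homogeneous coordinates. Below is a small NumPy sketch of a symmetric-frustum perspective matrix in an OpenGL-style convention (right-handed view space looking down −z, clip z in [−1, 1]); the exact matrix differs between APIs, so treat the conventions here as assumptions of the example.

import numpy as np

def perspective(fov_y_deg, aspect, z_near, z_far):
    # OpenGL-style symmetric perspective projection matrix (column-vector convention).
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (z_far + z_near) / (z_near - z_far), (2.0 * z_far * z_near) / (z_near - z_far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# A point 5 units in front of the camera, projected and perspective-divided:
P = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)
p_clip = P @ np.array([1.0, 1.0, -5.0, 1.0])
p_ndc = p_clip[:3] / p_clip[3]   # division by w makes farther points shrink, as described above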
Real-time computer graphics
Technology
1,777
20,345,231
https://en.wikipedia.org/wiki/BioEthanol%20for%20Sustainable%20Transport
BioEthanol for Sustainable Transport (BEST) was a four-year project financially supported by the European Union for promoting the introduction and market penetration of bioethanol as a vehicle fuel, and the introduction and wider use of flexible-fuel vehicles and ethanol-powered vehicles on the world market. The project began in January 2006 and continued until the end of 2009, and had nine participating regions or cities in Europe, Brazil, and China. Goals The BEST project targets included the introduction of more than 10,000 flex-fuel or ethanol cars and 160 ethanol buses; to promote the opening of 135 E85 and 13 ED95 public fuel stations; and to promote the development and testing of hydrous E15 and anhydrous low ethanol blends with gasoline and diesel. Participants There were ten participating cities and regions, and several commercial partners. Stockholm (Sweden) was the coordinating city, and other participants were Basque Country and Madrid (Spain), the Biofuel Region in Sweden, Brandenburg (Germany), La Spezia (Italy), Nanyang (China), Rotterdam (Netherlands), São Paulo (Brazil), and Somerset (UK). The commercial partners were Ford Europe, Saab Automobile and several bioethanol suppliers. Implemented projects Flexible-fuel vehicles A major activity in BEST was the promotion of E85 flexifuel vehicles (FFVs). During the project nine BEST sites introduced over 77,000 FFVs, far exceeding the original project's target of 10,000 vehicles. In 2008, out of the 170,000 flexifuel vehicles in operation in Europe, 45% of the vehicles operated at BEST sites; and out of 2,200 E85 pumps installed in the EU, 80% are found in the BEST countries. Sweden stands out with 70% of all flexifuel vehicles operating in the EU. BEST sites also evaluated both dedicated E85 pumps and flexifuel pumps and found very few problems. Ethanol-powered buses The project included the demonstration of two types of bioethanol-powered buses, a diesel engine Scania bus running on ED95 (sugarcane ethanol plus an ignition improver) and a Dongfeng bus capable of running on both E100 and petrol (flexible-fuel bus). Fuel pumps were also installed at bus depots in the five participating cities. BEST demonstrated more than 138 bioethanol ED95 buses and 12 ED95 pumps at five sites, three in Europe, one in China and one in Brazil. These trials helped increase knowledge about bioethanol buses in the participating cities. An innovation within BEST was the demonstration of two dual-tank E100 buses developed by the Chinese vehicle producer Dongfeng Motor. All BEST sites will continue to drive their bioethanol buses in regular traffic and some cities are already planning to expand their fleets. The trial demonstrations showed that ethanol-powered ED95 buses: reduce greenhouse gas emissions and local air pollution. are reliable and appreciated by drivers and passengers. cost more to purchase and operate than diesel buses. require more scheduled maintenance than diesel buses. Taxing fuel by volume instead of energy content penalises bioethanol buses. ED95 can be safely handled at depots and has the potential for wider use in heavy vehicles such as trucks. Brazil Under the auspices of the BEST project, the first ED95 bus began operations in São Paulo city in December 2007 as a trial project. The bus is a Scania with a modified diesel engine capable of running with 95% hydrous ethanol with 5% ignition improver. Scania adjusted the compression ratio from 18:1 to 28:1, added larger fuel injection nozzles, and altered the injection timing. 
During the first-year trial period, performance and emissions were monitored by the National Reference Center on Biomass (CENBIO) at the Universidade de São Paulo, and compared with similar diesel models, with special attention to carbon monoxide and particulate matter emissions. Performance is also important, as previous tests have shown a reduction in fuel economy of around 60% when E95 is compared to regular diesel. In November 2009, a second ED95 bus began operating in São Paulo city. The bus had a Swedish Scania engine and chassis with a CAIO bus body. The second bus was scheduled to operate between Lapa and Vila Mariana, passing through Avenida Paulista, one of the main business centers of São Paulo city. CENBIO laboratory tests found that, compared to diesel, carbon dioxide emissions are 80% lower with ED95, particulate matter drops by 90%, nitrogen oxide emissions are 62% lower, and there are no sulphur emissions. During the trial it was observed that the first bus began to suffer sudden engine stalls while idling. The problem manifested more frequently on hot days, when the ambient temperature reached 26 °C or more, and at the top of long grades. After careful analysis of the problem in the engine's fuel supply line, it was discovered that the bus had been developed for the temperate European climate, where average temperatures are lower than in a tropical climate. On hotter days, the temperature of the fuel line reached up to 58 °C, and it could increase even more while the engine was idling. The excessive heating was causing the fuel to vaporize in the engine's supply line. The solution was to divert the fuel return from the engine straight back to the tank, thus adapting the engine to Brazilian climate conditions. Based on the satisfactory results obtained during the 3-year trial operation of the two buses, in November 2010 the municipal government of São Paulo city signed an agreement with UNICA, Cosan, Scania and Viação Metropolitana, a local bus operator, to introduce a fleet of 50 ethanol-powered ED95 buses by May 2011. The city government's objective is to reduce the carbon footprint of the city's bus fleet, which is made up of 15,000 diesel-powered buses, and the final goal is for the entire bus fleet to use only renewable fuels by 2018. The first ethanol-powered buses were delivered in May 2011, and the 50 ethanol-powered ED95 buses were scheduled to begin regular service in June 2011. China In Nanyang, Henan, a new type of bioethanol flexible-fuel bus capable of running on petrol or neat ethanol fuel (E100) was developed by Dongfeng Motor. The buses look like conventional buses and have two fuel tanks, one for petrol and one for E100. Two buses were demonstrated by local bioethanol producer Tianguan, who also supplied E100 for the buses. One fuel pump was set up for the trial. One of the buses uses a modified petrol engine and the other uses a modified natural gas engine. The new bus types were developed to overcome import duties and are a low-cost alternative for Chinese cities seeking to introduce bioethanol to their public transport systems. Each E100 bus developed by Dongfeng costs around , which is more expensive than a conventional petrol bus. Italy Three ED95 buses and one fuel pump were installed in La Spezia. Spain Five ED95 buses operated in Madrid and one fuel pump was installed. Sweden In Stockholm a total of 127 ED95 buses and five ED95 ethanol fuel stations were funded within the BEST project. 
See also Common ethanol fuel mixtures Electric bus Ethanol fuel Ethanol fuel by country Ethanol fuel in Brazil Ethanol fuel in Sweden Flexible-fuel vehicle Fuel cell bus Green vehicle Hybrid electric bus Issues relating to biofuels Neat ethanol vehicle References External links BEST Europe home website (in English) BEST project website in Brazil - CENBIO) (in Portuguese) BEST project website in Italy (in Italian) BEST project website in the Netherlands (in Dutch) BEST project website in Spain (in Spanish) Report: Results and recommendations from the European BEST project Ethanol fuel Flexible-fuel vehicles Sustainable transport Ethanol Petroleum products Vehicles by fuel
BioEthanol for Sustainable Transport
Physics,Chemistry
1,608
10,989,910
https://en.wikipedia.org/wiki/3rd%20Space%20Vest
The ForceWear Vest is a haptic suit that was unveiled at the Game Developers Conference in San Francisco in March 2007. The vest was mentioned in several articles about next-generation gaming accessories. It was released in November 2007, and reviews of the product have been generally favorable. The vest uses eight trademarked "contact points" that simulate gunfire, body slams or the G-forces associated with race car driving. It is unique because, unlike traditional force-feedback accessories, the vest is directional, so that action taking place outside the player's field of view can also be felt. A player hit by gunfire from behind will actually feel the shot in his back, even though he may not otherwise be aware of it from standard visual display cues. Gaming reporter Charlie Demerjian of The Inquirer said, "If they can keep the price reasonable and have a few good games, this has a chance of becoming a useful gaming accessory." Currently, players have three ways to use the vest: playing games with Direct Integration, such as TN Games' own 3rd Space Incursion; using the 3rd Space game drivers whilst playing a game (drivers currently in Beta 2); or installing specially made mods for a game. The vest works with many games, including Call of Duty 2: 3rd Space Edition, 3rd Space Incursion, Half-Life 2: Episodes 1 & 2, Crysis, Enemy Territory Quake Wars, Clive Barker's Jericho, Unreal Tournament 3, F.E.A.R., Medal of Honor: Airborne, Quake 4 and Doom 3. References External links 3rd Space Vest FPS Vest Video game accessories
3rd Space Vest
Technology
328
39,913,402
https://en.wikipedia.org/wiki/Pydlpoly
Pydlpoly is a molecular dynamics simulation package; it is a modified version of DL-POLY with a Python language interface. Pydlpoly was written by Rochus Schmid at Ruhr University Bochum, Germany. Molecular dynamics software
Pydlpoly
Chemistry
54
6,046,005
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Kac%20theorem
In number theory, the Erdős–Kac theorem, named after Paul Erdős and Mark Kac, and also known as the fundamental theorem of probabilistic number theory, states that if ω(n) is the number of distinct prime factors of n, then, loosely speaking, the probability distribution of \( \frac{\omega(n) - \log\log n}{\sqrt{\log\log n}} \) is the standard normal distribution. (ω(n) is sequence A001221 in the OEIS.) This is an extension of the Hardy–Ramanujan theorem, which states that the normal order of ω(n) is log log n with a typical error of size \( \sqrt{\log\log n} \). Precise statement For any fixed a < b, \[ \lim_{x\to\infty} \frac{1}{x}\,\#\left\{ n \le x : a \le \frac{\omega(n) - \log\log n}{\sqrt{\log\log n}} \le b \right\} = \Phi(a,b), \] where \( \Phi(a,b) \) is the normal (or "Gaussian") distribution, defined as \[ \Phi(a,b) = \frac{1}{\sqrt{2\pi}} \int_a^b e^{-t^2/2}\, dt. \] More generally, if f(n) is a strongly additive function (\( f(n) = \sum_{p \mid n} f(p) \)) with \( |f(p)| \le 1 \) for all primes p, then \[ \lim_{x\to\infty} \frac{1}{x}\,\#\left\{ n \le x : a \le \frac{f(n) - A(x)}{B(x)} \le b \right\} = \Phi(a,b) \] with \[ A(x) = \sum_{p \le x} \frac{f(p)}{p}, \qquad B(x) = \left( \sum_{p \le x} \frac{f(p)^2}{p} \right)^{1/2}. \] Kac's original heuristic Intuitively, Kac's heuristic for the result says that if n is a randomly chosen large integer, then the number of distinct prime factors of n is approximately normally distributed with mean and variance log log n. This comes from the fact that given a random natural number n, the events "the number n is divisible by some prime p" for each p are mutually independent. Now, denoting the event "the number n is divisible by p" by \( n \equiv 0 \ (\mathrm{mod}\ p) \), consider the following sum of indicator random variables: \[ \sum_{p\ \mathrm{prime}} \mathbf{1}_{\{ n \equiv 0 \ (\mathrm{mod}\ p) \}}. \] This sum counts how many distinct prime factors our random natural number n has. It can be shown that this sum satisfies the Lindeberg condition, and therefore the Lindeberg central limit theorem guarantees that after appropriate rescaling, the above expression will be Gaussian. The actual proof of the theorem, due to Erdős, uses sieve theory to make rigorous the above intuition. Numerical examples The Erdős–Kac theorem means that the construction of a number around one billion requires on average three primes. For example, 1,000,000,003 = 23 × 307 × 141623. The following table provides a numerical summary of the growth of the average number of distinct prime factors of a natural number with increasing n. Around 12.6% of 10,000 digit numbers are constructed from 10 distinct prime numbers and around 68% are constructed from between 7 and 13 primes. A hollow sphere the size of the planet Earth filled with fine sand would have around 10^33 grains. A volume the size of the observable universe would have around 10^93 grains of sand. There might be room for 10^185 quantum strings in such a universe. Numbers of this magnitude, with 186 digits, would require on average only 6 primes for construction. It is very difficult, if not impossible, to discover the Erdős–Kac theorem empirically, as the Gaussian behaviour only shows up for extremely large n. More precisely, Rényi and Turán showed that the best possible uniform asymptotic bound on the error in the approximation to a Gaussian is \( O\!\left( \frac{1}{\sqrt{\log\log x}} \right) \). References External links Timothy Gowers: The Importance of Mathematics (part 6, 4 mins in) and (part 7) Kac theorem Normal distribution Theorems about prime numbers
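To make the statement concrete, here is a small Python sketch that samples random integers below a bound, computes ω(n) by trial division, and standardizes the counts as in the theorem. With enough samples the standardized values should look roughly Gaussian, although, as noted above, convergence is very slow; the bound and sample size below are arbitrary illustrative choices.

import math, random

def omega(n):
    # Number of distinct prime factors of n, by trial division.
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

def standardized_omega_sample(bound=10**8, samples=2000, seed=0):
    # Returns (omega(n) - log log x) / sqrt(log log x) for random n <= x = bound.
    rng = random.Random(seed)
    loglog = math.log(math.log(bound))
    return [(omega(rng.randrange(2, bound)) - loglog) / math.sqrt(loglog)
            for _ in range(samples)]

vals = standardized_omega_sample()
print(sum(vals) / len(vals))  # should be near 0, with the Gaussian shape emerging only slowly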
Erdős–Kac theorem
Mathematics
640
77,209,781
https://en.wikipedia.org/wiki/SBS%200953%2B549
SBSS 0953+549, also known as SBS 0953+549 and QSO J0957+5440, is a quasar located in the constellation of Ursa Major. With a redshift of 2.58, the object is located 10.8 billion light-years away from Earth. It is a broad absorption-line quasar (BAL QSO) according to the Sloan Digital Sky Survey. It was first studied in the 1980s, as part of the Second Byurakan Survey. SBS 0953+549 contains both strong and weak absorption systems. It has powerful ultraviolet emission lines and an energetic nucleus. Further investigation also shows it has an emission line profile resembling a P Cygni-type profile, with Lyman α and β pairs. References Quasars Ursa Major 2MASS objects Principal Galaxies Catalogue objects SDSS objects Starburst galaxies Active galaxies
SBS 0953+549
Astronomy
185
59,839,734
https://en.wikipedia.org/wiki/FSR%201758
FSR 1758 (also known as the Sequoia Cluster) is a large and bright but heavily obscured globular cluster belonging to the Milky Way galaxy. It is located at a distance of about 11.5 kpc from the Sun and about 3.7 kpc from the center of the galaxy. As FSR 1758 lies behind the galactic bulge, it is heavily obscured by foreground stars and dust. It was first noticed in 2007 in 2MASS data and believed to be an open cluster, until data from the Gaia mission revealed in 2018 that it is a globular cluster. The size and brightness of FSR 1758 may be comparable to or exceed those of the Omega Centauri cluster, which is widely believed to be the nucleus of a dwarf galaxy that merged into the Milky Way in the past. Therefore, FSR 1758 may be the nucleus of a dwarf galaxy, tentatively named the Scorpius dwarf galaxy. It may also be similar to another globular cluster, Messier 54, which is known to be the nucleus of the Sagittarius Dwarf Spheroidal Galaxy. After Barbá et al. used the term Sequoia to describe the size of FSR 1758, Myeong et al. used the term Sequoia in a slightly different way. They believe that FSR 1758 was one of five globular clusters that populated a dwarf galaxy that Myeong et al. name the Sequoia dwarf galaxy. This dwarf was accreted onto the Milky Way in the Sequoia Event. The members of Sequoia have a retrograde galactic orbit. This term has been adopted by several other groups. References Globular clusters Scorpius
FSR 1758
Astronomy
349
154,738
https://en.wikipedia.org/wiki/Hydrogen%20sulfide
Hydrogen sulfide is a chemical compound with the formula H2S. It is a colorless chalcogen-hydride gas, and is poisonous, corrosive, and flammable, with trace amounts in the ambient atmosphere having a characteristic foul odor of rotten eggs. Swedish chemist Carl Wilhelm Scheele is credited with having discovered the chemical composition of purified hydrogen sulfide in 1777. Hydrogen sulfide is toxic to humans and most other animals by inhibiting cellular respiration in a manner similar to hydrogen cyanide. When it is inhaled or its salts are ingested in high amounts, damage to organs occurs rapidly with symptoms ranging from breathing difficulties to convulsions and death. Despite this, the human body produces small amounts of this sulfide and its mineral salts, and uses it as a signalling molecule. Hydrogen sulfide is often produced from the microbial breakdown of organic matter in the absence of oxygen, such as in swamps and sewers; this process is commonly known as anaerobic digestion, which is done by sulfate-reducing microorganisms. It also occurs in volcanic gases, natural gas deposits, and sometimes in well-drawn water. Properties Hydrogen sulfide is slightly denser than air. A mixture of H2S and air can be explosive. Oxidation In general, hydrogen sulfide acts as a reducing agent, as indicated by its ability to reduce sulfur dioxide in the Claus process. Hydrogen sulfide burns in oxygen with a blue flame to form sulfur dioxide (SO2) and water: 2 H2S + 3 O2 → 2 SO2 + 2 H2O. If an excess of oxygen is present, sulfur trioxide (SO3) is formed, which quickly hydrates to sulfuric acid: 2 H2S + 4 O2 → 2 SO3 + 2 H2O, followed by SO3 + H2O → H2SO4. Acid-base properties It is slightly soluble in water and acts as a weak acid (pKa = 6.9 in 0.01–0.1 mol/litre solutions at 18 °C), giving the hydrosulfide ion HS−. Hydrogen sulfide and its solutions are colorless. When exposed to air, it slowly oxidizes to form elemental sulfur, which is not soluble in water. The sulfide anion S2− is not formed in aqueous solution. Extreme temperatures and pressures At pressures above 90 GPa (gigapascals), hydrogen sulfide becomes a metallic conductor of electricity. When cooled below a critical temperature, this high-pressure phase exhibits superconductivity. The critical temperature increases with pressure, ranging from 23 K at 100 GPa to 150 K at 200 GPa. If hydrogen sulfide is pressurized at higher temperatures, then cooled, the critical temperature reaches 203 K (−70 °C), the highest accepted superconducting critical temperature as of 2015. By substituting a small part of the sulfur with phosphorus and using even higher pressures, it has been predicted that it may be possible to raise the critical temperature further and achieve room-temperature superconductivity. Hydrogen sulfide decomposes, without the presence of a catalyst, at atmospheric pressure and around 1200 °C into hydrogen and sulfur. Tarnishing Hydrogen sulfide reacts with metal ions to form metal sulfides, which are insoluble, often dark colored solids. Lead(II) acetate paper is used to detect hydrogen sulfide because it readily converts to lead(II) sulfide, which is black. Treating metal sulfides with strong acid, or electrolysis, often liberates hydrogen sulfide. Hydrogen sulfide is also responsible for tarnishing of various metals including copper and silver; the chemical responsible for the black toning found on silver coins is silver sulfide (Ag2S), which is produced when the silver on the surface of the coin reacts with atmospheric hydrogen sulfide. 
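As a small worked example of the acid-base behaviour described above, the ratio of hydrosulfide to undissociated H2S at a given pH follows from the quoted pKa via the Henderson–Hasselbalch relation. The Python sketch below uses the pKa of 6.9 given in the text; it is a back-of-the-envelope illustration that ignores activity corrections and the second dissociation.

def hs_fraction(ph, pka=6.9):
    # Henderson-Hasselbalch: [HS-]/[H2S] = 10**(pH - pKa).
    ratio = 10 ** (ph - pka)
    return ratio / (1 + ratio)   # fraction of total sulfide present as HS-

for ph in (5.0, 6.9, 7.4, 9.0):
    print(ph, round(hs_fraction(ph), 3))
# At pH 6.9 the mixture is 50/50; near physiological pH ~7.4 roughly
# three quarters is HS-, consistent with H2S being a weak acid.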
Coins that have been subject to toning by hydrogen sulfide and other sulfur-containing compounds may have the toning add to the numismatic value of the coin based on aesthetics, as the toning may produce thin-film interference, resulting in the coin taking on an attractive coloration. Coins can also be intentionally treated with hydrogen sulfide to induce toning, though artificial toning can be distinguished from natural toning, and is generally criticised among collectors. Production Hydrogen sulfide is most commonly obtained by its separation from sour gas, which is natural gas with a high content of H2S. It can also be produced by treating hydrogen with molten elemental sulfur at about 450 °C. Hydrocarbons can serve as a source of hydrogen in this process. The very favorable thermodynamics for the hydrogenation of sulfur imply that the dehydrogenation (or cracking) of hydrogen sulfide would require very high temperatures. A standard lab preparation is to treat ferrous sulfide with a strong acid in a Kipp generator: FeS + 2 HCl → FeCl2 + H2S. For use in qualitative inorganic analysis, thioacetamide is used to generate H2S: CH3C(S)NH2 + H2O → CH3C(O)NH2 + H2S. Many metal and nonmetal sulfides, e.g. aluminium sulfide, phosphorus pentasulfide and silicon disulfide, liberate hydrogen sulfide upon exposure to water: Al2S3 + 6 H2O → 2 Al(OH)3 + 3 H2S. This gas is also produced by heating sulfur with solid organic compounds and by reducing sulfurated organic compounds with hydrogen. It can also be produced by adding ammonium thiocyanate to concentrated sulphuric acid and then adding water. Biosynthesis Hydrogen sulfide can be generated in cells via enzymatic or non-enzymatic pathways. Three enzymes catalyze formation of H2S: cystathionine γ-lyase (CSE), cystathionine β-synthetase (CBS), and 3-mercaptopyruvate sulfurtransferase (3-MST). CBS and CSE are the main proponents of H2S biogenesis, which follows the trans-sulfuration pathway. These enzymes have been identified in a breadth of biological cells and tissues, and their activity is induced by a number of disease states. These enzymes are characterized by the transfer of a sulfur atom from methionine to serine to form a cysteine molecule. 3-MST also contributes to hydrogen sulfide production by way of the cysteine catabolic pathway. Dietary amino acids, such as methionine and cysteine, serve as the primary substrates for the trans-sulfuration pathways and in the production of hydrogen sulfide. Hydrogen sulfide can also be derived from proteins such as ferredoxins and Rieske proteins. Sulfate-reducing (resp. sulfur-reducing) bacteria generate usable energy under low-oxygen conditions by using sulfates (resp. elemental sulfur) to oxidize organic compounds or hydrogen; this produces hydrogen sulfide as a waste product. Water heaters can aid the conversion of sulfate in water to hydrogen sulfide gas. This is because they provide a warm environment that sustains sulfur bacteria and maintain the reaction between sulfate in the water and the water heater anode, which is usually made from magnesium metal. Signalling role in the body H2S acts as a gaseous signaling molecule with implications for health and in diseases. Hydrogen sulfide is involved in vasodilation in animals, as well as in increasing seed germination and stress responses in plants. Hydrogen sulfide signaling is moderated by reactive oxygen species (ROS) and reactive nitrogen species (RNS). 
H2S has been shown to interact with the NO pathway, resulting in several different cellular effects, including the inhibition of cGMP phosphodiesterases, as well as the formation of another signal called nitrosothiol. Hydrogen sulfide is also known to increase the levels of glutathione, which acts to reduce or disrupt ROS levels in cells. The field of H2S biology has advanced from environmental toxicology to investigate the roles of endogenously produced H2S in physiological conditions and in various pathophysiological states. H2S has been implicated in cancer, in Down syndrome and in vascular disease. At lower concentrations, it stimulates mitochondrial function via multiple mechanisms including direct electron donation. However, at higher concentrations, it inhibits Complex IV of the mitochondrial electron transport chain, which effectively reduces ATP generation and biochemical activity within cells. Uses Production of sulfur Hydrogen sulfide is mainly consumed as a precursor to elemental sulfur. This conversion, called the Claus process, involves partial oxidation to sulfur dioxide. The latter reacts with hydrogen sulfide to give elemental sulfur. The conversion is catalyzed by alumina. Production of thioorganic compounds Many fundamental organosulfur compounds are produced using hydrogen sulfide. These include methanethiol, ethanethiol, and thioglycolic acid. Hydrosulfides can be used in the production of thiophenol. Production of metal sulfides Upon combining with alkali metal bases, hydrogen sulfide converts to alkali hydrosulfides such as sodium hydrosulfide and sodium sulfide: H2S + NaOH → NaSH + H2O, and NaSH + NaOH → Na2S + H2O. Sodium sulfides are used in the paper making industry. Specifically, salts of HS− and S2− break bonds between lignin and cellulose components of pulp in the Kraft process. As indicated above, many metal ions react with hydrogen sulfide to give the corresponding metal sulfides. Oxidic ores are sometimes treated with hydrogen sulfide to give the corresponding metal sulfides, which are more readily purified by flotation. Metal parts are sometimes passivated with hydrogen sulfide. Catalysts used in hydrodesulfurization are routinely activated with hydrogen sulfide. Hydrogen sulfide was a reagent in the qualitative inorganic analysis of metal ions. In these analyses, heavy metal (and nonmetal) ions (e.g., Pb(II), Cu(II), Hg(II), As(III)) are precipitated from solution upon exposure to H2S. The components of the resulting solid are then identified by their reactivity. Miscellaneous applications Hydrogen sulfide is used to separate deuterium oxide, or heavy water, from normal water via the Girdler sulfide process. A suspended animation-like state has been induced in rodents with the use of hydrogen sulfide, resulting in hypothermia with a concomitant reduction in metabolic rate. Oxygen demand was also reduced, thereby protecting against hypoxia. In addition, hydrogen sulfide has been shown to reduce inflammation in various situations. Occurrence Volcanoes and some hot springs (as well as cold springs) emit some H2S. Hydrogen sulfide can be present naturally in well water, often as a result of the action of sulfate-reducing bacteria. Hydrogen sulfide is produced by the human body in small quantities through bacterial breakdown of proteins containing sulfur in the intestinal tract; it therefore contributes to the characteristic odor of flatulence. It is also produced in the mouth (halitosis). A portion of global H2S emissions are due to human activity. 
By far the largest industrial source of H2S is petroleum refineries: the hydrodesulfurization process liberates sulfur from petroleum by the action of hydrogen. The resulting H2S is converted to elemental sulfur by partial combustion via the Claus process, which is a major source of elemental sulfur. Other anthropogenic sources of hydrogen sulfide include coke ovens, paper mills (using the Kraft process), tanneries and sewerage. H2S arises virtually anywhere elemental sulfur comes into contact with organic material, especially at high temperatures. Depending on environmental conditions, it is responsible for deterioration of material through the action of some sulfur-oxidizing microorganisms; this is called biogenic sulfide corrosion. In 2011 it was reported that increased concentrations of H2S were observed in Bakken formation crude oil, possibly due to oil field practices, and presented challenges such as "health and environmental risks, corrosion of wellbore, added expense with regard to materials handling and pipeline equipment, and additional refinement requirements".

Besides living near gas and oil drilling operations, ordinary citizens can be exposed to hydrogen sulfide by being near wastewater treatment facilities, landfills and farms with manure storage. Exposure occurs through breathing contaminated air or drinking contaminated water.

In municipal waste landfill sites, the burial of organic material rapidly leads to anaerobic digestion within the waste mass and, with the humid atmosphere and relatively high temperature that accompany biodegradation, biogas is produced as soon as the air within the waste mass has been depleted. If there is a source of sulfate-bearing material, such as plasterboard or natural gypsum (calcium sulfate dihydrate), sulfate-reducing bacteria convert it to hydrogen sulfide under anaerobic conditions. These bacteria cannot survive in air, but the moist, warm, anaerobic conditions of buried waste that contains a rich source of carbon – in inert landfills, the paper and glue used in the fabrication of products such as plasterboard can provide that carbon – are an excellent environment for the formation of hydrogen sulfide.

In industrial anaerobic digestion processes, such as wastewater treatment or the digestion of organic waste from agriculture, hydrogen sulfide can be formed from the reduction of sulfate and the degradation of amino acids and proteins within organic compounds. Sulfates are relatively non-inhibitory to methane-forming bacteria but can be reduced to H2S by sulfate-reducing bacteria, of which there are several genera.

Removal from water

A number of processes have been designed to remove hydrogen sulfide from drinking water.

Continuous chlorination

For levels up to 75 mg/L, chlorine is used in the purification process as an oxidizing chemical that reacts with hydrogen sulfide; this reaction yields insoluble solid sulfur. Usually the chlorine is applied in the form of sodium hypochlorite.

Aeration

For hydrogen sulfide concentrations below 2 mg/L, aeration is an ideal treatment process. Oxygen is added to the water, and it reacts with the hydrogen sulfide to produce odorless sulfate.

Nitrate addition

Calcium nitrate can be used to prevent hydrogen sulfide formation in wastewater streams.

Removal from fuel gases

Hydrogen sulfide is commonly found in raw natural gas and biogas. It is typically removed by amine gas treating technologies.
In such processes, the hydrogen sulfide is first converted to an ammonium salt, whereas the natural gas is unaffected. The bisulfide anion is subsequently regenerated by heating of the amine sulfide solution. Hydrogen sulfide generated in this process is typically converted to elemental sulfur using the Claus Process. Safety The underground mine gas term for foul-smelling hydrogen sulfide-rich gas mixtures is stinkdamp. Hydrogen sulfide is a highly toxic and flammable gas (flammable range: 4.3–46%). It can poison several systems in the body, although the nervous system is most affected. The toxicity of is comparable with that of carbon monoxide. It binds with iron in the mitochondrial cytochrome enzymes, thus preventing cellular respiration. Its toxic properties were described in detail in 1843 by Justus von Liebig. Even before hydrogen sulfide was discovered, Italian physician Bernardino Ramazzini hypothesized in his 1713 book De Morbis Artificum Diatriba that occupational diseases of sewer-workers and blackening of coins in their clothes may be caused by an unknown invisible volatile acid (moreover, in late 18th century toxic gas emanation from Paris sewers became a problem for the citizens and authorities). Although very pungent at first (it smells like rotten eggs), it quickly deadens the sense of smell, creating temporary anosmia, so victims may be unaware of its presence until it is too late. Safe handling procedures are provided by its safety data sheet (SDS). Low-level exposure Since hydrogen sulfide occurs naturally in the body, the environment, and the gut, enzymes exist to metabolize it. At some threshold level, believed to average around 300–350 ppm, the oxidative enzymes become overwhelmed. Many personal safety gas detectors, such as those used by utility, sewage and petrochemical workers, are set to alarm at as low as 5 to 10 ppm and to go into high alarm at 15 ppm. Metabolism causes oxidation to sulfate, which is harmless. Hence, low levels of hydrogen sulfide may be tolerated indefinitely. Exposure to lower concentrations can result in eye irritation, a sore throat and cough, nausea, shortness of breath, and fluid in the lungs. These effects are believed to be due to hydrogen sulfide combining with alkali present in moist surface tissues to form sodium sulfide, a caustic. These symptoms usually subside in a few weeks. Long-term, low-level exposure may result in fatigue, loss of appetite, headaches, irritability, poor memory, and dizziness. Chronic exposure to low level (around 2 ppm) has been implicated in increased miscarriage and reproductive health issues among Russian and Finnish wood pulp workers, but the reports have not (as of 1995) been replicated. High-level exposure Short-term, high-level exposure can induce immediate collapse, with loss of breathing and a high probability of death. If death does not occur, high exposure to hydrogen sulfide can lead to cortical pseudolaminar necrosis, degeneration of the basal ganglia and cerebral edema. Although respiratory paralysis may be immediate, it can also be delayed up to 72 hours. Inhalation of resulted in about 7 workplace deaths per year in the U.S. (2011–2017 data), second only to carbon monoxide (17 deaths per year) for workplace chemical inhalation deaths. Exposure thresholds Exposure limits stipulated by the United States government: 10 ppm REL-Ceiling (NIOSH): recommended permissible exposure ceiling (the recommended level that must not be exceeded, except once for 10 min. 
in an 8-hour shift, if no other measurable exposure occurs) 20 ppm PEL-Ceiling (OSHA): permissible exposure ceiling (the level that must not be exceeded, except once for 10 min. in an 8-hour shift, if no other measurable exposure occurs) 50 ppm PEL-Peak (OSHA): peak permissible exposure (the level that must never be exceeded) 100 ppm IDLH (NIOSH): immediately dangerous to life and health (the level that interferes with the ability to escape) 0.00047 ppm or 0.47 ppb is the odor threshold, the point at which 50% of a human panel can detect the presence of an odor without being able to identify it. 10–20 ppm is the borderline concentration for eye irritation. 50–100 ppm leads to eye damage. At 100–150 ppm the olfactory nerve is paralyzed after a few inhalations, and the sense of smell disappears, often together with awareness of danger. 320–530 ppm leads to pulmonary edema with the possibility of death. 530–1000 ppm causes strong stimulation of the central nervous system and rapid breathing, leading to loss of breathing. 800 ppm is the lethal concentration for 50% of humans for 5 minutes' exposure (LC50). Concentrations over 1000 ppm cause immediate collapse with loss of breathing, even after inhalation of a single breath. Treatment Treatment involves immediate inhalation of amyl nitrite, injections of sodium nitrite, or administration of 4-dimethylaminophenol in combination with inhalation of pure oxygen, administration of bronchodilators to overcome eventual bronchospasm, and in some cases hyperbaric oxygen therapy (HBOT). HBOT has clinical and anecdotal support. Incidents Hydrogen sulfide was used by the British Army as a chemical weapon during World War I. It was not considered to be an ideal war gas, partially due to its flammability and because the distinctive smell could be detected from even a small leak, alerting the enemy to the presence of the gas. It was nevertheless used on two occasions in 1916 when other gases were in short supply. On September 2, 2005, a leak in the propeller room of a Royal Caribbean Cruise Liner docked in Los Angeles resulted in the deaths of 3 crewmen due to a sewage line leak. As a result, all such compartments are now required to have a ventilation system. A dump of toxic waste containing hydrogen sulfide is believed to have caused 17 deaths and thousands of illnesses in Abidjan, on the West African coast, in the 2006 Côte d'Ivoire toxic waste dump. In September 2008, three workers were killed and two suffered serious injury, including long term brain damage, at a mushroom growing company in Langley, British Columbia. A valve to a pipe that carried chicken manure, straw and gypsum to the compost fuel for the mushroom growing operation became clogged, and as workers unclogged the valve in a confined space without proper ventilation the hydrogen sulfide that had built up due to anaerobic decomposition of the material was released, poisoning the workers in the surrounding area. An investigator said there could have been more fatalities if the pipe had been fully cleared and/or if the wind had changed directions. In 2014, levels of hydrogen sulfide as high as 83 ppm were detected at a recently built mall in Thailand called Siam Square One at the Siam Square area. Shop tenants at the mall reported health complications such as sinus inflammation, breathing difficulties and eye irritation. After investigation it was determined that the large amount of gas originated from imperfect treatment and disposal of waste water in the building. 
In 2014, hydrogen sulfide gas killed workers at the Promenade shopping center in North Scottsdale, Arizona, USA after climbing into 15 ft deep chamber without wearing personal protective gear. "Arriving crews recorded high levels of hydrogen cyanide and hydrogen sulfide coming out of the sewer." In November 2014, a substantial amount of hydrogen sulfide gas shrouded the central, eastern and southeastern parts of Moscow. Residents living in the area were urged to stay indoors by the emergencies ministry. Although the exact source of the gas was not known, blame had been placed on a Moscow oil refinery. In June 2016, a mother and her daughter were found dead in their still-running 2006 Porsche Cayenne SUV against a guardrail on Florida's Turnpike, initially thought to be victims of carbon monoxide poisoning. Their deaths remained unexplained as the medical examiner waited for results of toxicology tests on the victims, until urine tests revealed that hydrogen sulfide was the cause of death. A report from the Orange-Osceola Medical Examiner's Office indicated that toxic fumes came from the Porsche's starter battery, located under the front passenger seat. In January 2017, three utility workers in Key Largo, Florida, died one by one within seconds of descending into a narrow space beneath a manhole cover to check a section of paved street. In an attempt to save the men, a firefighter who entered the hole without his air tank (because he could not fit through the hole with it) collapsed within seconds and had to be rescued by a colleague. The firefighter was airlifted to Jackson Memorial Hospital and later recovered. A Monroe County Sheriff officer initially determined that the space contained hydrogen sulfide and methane gas produced by decomposing vegetation. On May 24, 2018, two workers were killed, another seriously injured, and 14 others hospitalized by hydrogen sulfide inhalation at a Norske Skog paper mill in Albury, New South Wales. An investigation by SafeWork NSW found that the gas was released from a tank used to hold process water. The workers were exposed at the end of a 3-day maintenance period. Hydrogen sulfide had built up in an upstream tank, which had been left stagnant and untreated with biocide during the maintenance period. These conditions allowed sulfate-reducing bacteria to grow in the upstream tank, as the water contained small quantities of wood pulp and fiber. The high rate of pumping from this tank into the tank involved in the incident caused hydrogen sulfide gas to escape from various openings around its top when pumping was resumed at the end of the maintenance period. The area above it was sufficiently enclosed for the gas to pool there, despite not being identified as a confined space by Norske Skog. One of the workers who was killed was exposed while investigating an apparent fluid leak in the tank, while the other who was killed and the worker who was badly injured were attempting to rescue the first after he collapsed on top of it. In a resulting criminal case, Norske Skog was accused of failing to ensure the health and safety of its workforce at the plant to a reasonably practicable extent. It pleaded guilty, and was fined AU$1,012,500 and ordered to fund the production of an anonymized educational video about the incident. In October 2019, an Odessa, Texas employee of Aghorn Operating Inc. and his wife were killed due to a water pump failure. Produced water with a high concentration of hydrogen sulfide was released by the pump. 
The worker died while responding to an automated phone call he had received alerting him to a mechanical failure in the pump, while his wife died after driving to the facility to check on him. A CSB investigation cited lax safety practices at the facility, such as an informal lockout-tagout procedure and a nonfunctioning hydrogen sulfide alert system. Suicides The gas, produced by mixing certain household ingredients, was used in a suicide wave in 2008 in Japan. The wave prompted staff at Tokyo's suicide prevention center to set up a special hotline during "Golden Week", as they received an increase in calls from people wanting to kill themselves during the annual May holiday. As of 2010, this phenomenon has occurred in a number of US cities, prompting warnings to those arriving at the site of the suicide. These first responders, such as emergency services workers or family members are at risk of death or injury from inhaling the gas, or by fire. Local governments have also initiated campaigns to prevent such suicides. In 2020, ingestion was used as a suicide method by Japanese pro wrestler Hana Kimura. In 2024, Lucy-Bleu Knight, stepdaughter of famed musician Slash, also used ingestion to commit suicide. Hydrogen sulfide in the natural environment Microbial: The sulfur cycle Hydrogen sulfide is a central participant in the sulfur cycle, the biogeochemical cycle of sulfur on Earth. In the absence of oxygen, sulfur-reducing and sulfate-reducing bacteria derive energy from oxidizing hydrogen or organic molecules by reducing elemental sulfur or sulfate to hydrogen sulfide. Other bacteria liberate hydrogen sulfide from sulfur-containing amino acids; this gives rise to the odor of rotten eggs and contributes to the odor of flatulence. As organic matter decays under low-oxygen (or hypoxic) conditions (such as in swamps, eutrophic lakes or dead zones of oceans), sulfate-reducing bacteria will use the sulfates present in the water to oxidize the organic matter, producing hydrogen sulfide as waste. Some of the hydrogen sulfide will react with metal ions in the water to produce metal sulfides, which are not water-soluble. These metal sulfides, such as ferrous sulfide FeS, are often black or brown, leading to the dark color of sludge. Several groups of bacteria can use hydrogen sulfide as fuel, oxidizing it to elemental sulfur or to sulfate by using dissolved oxygen, metal oxides (e.g., iron oxyhydroxides and manganese oxides), or nitrate as electron acceptors. The purple sulfur bacteria and the green sulfur bacteria use hydrogen sulfide as an electron donor in photosynthesis, thereby producing elemental sulfur. This mode of photosynthesis is older than the mode of cyanobacteria, algae, and plants, which uses water as electron donor and liberates oxygen. The biochemistry of hydrogen sulfide is a key part of the chemistry of the iron-sulfur world. In this model of the origin of life on Earth, geologically produced hydrogen sulfide is postulated as an electron donor driving the reduction of carbon dioxide. Animals Hydrogen sulfide is lethal to most animals, but a few highly specialized species (extremophiles) do thrive in habitats that are rich in this compound. In the deep sea, hydrothermal vents and cold seeps with high levels of hydrogen sulfide are home to a number of extremely specialized lifeforms, ranging from bacteria to fish. Because of the absence of sunlight at these depths, these ecosystems rely on chemosynthesis rather than photosynthesis. 
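A commonly quoted overall equation for the microbial sulfate reduction described in the sulfur-cycle passage above, writing generic organic matter as CH2O (a standard textbook simplification rather than a formula from this article), is:

SO4^2− + 2 CH2O → H2S + 2 HCO3^−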
Freshwater springs rich in hydrogen sulfide are mainly home to invertebrates, but also include a small number of fish: Cyprinodon bobmilleri (a pupfish from Mexico), Limia sulphurophila (a poeciliid from the Dominican Republic), Gambusia eurystoma (a poeciliid from Mexico), and a few Poecilia (poeciliids from Mexico). Invertebrates and microorganisms in some cave systems, such as Movile Cave, are adapted to high levels of hydrogen sulfide.

Interstellar and planetary occurrence

Hydrogen sulfide has often been detected in the interstellar medium. It also occurs in the clouds of planets in our solar system.

Mass extinctions

Hydrogen sulfide has been implicated in several mass extinctions that have occurred in the Earth's past. In particular, a buildup of hydrogen sulfide in the atmosphere may have caused, or at least contributed to, the Permian-Triassic extinction event 252 million years ago. Organic residues from these extinction boundaries indicate that the oceans were anoxic (oxygen-depleted) and had species of shallow plankton that metabolized H2S. The formation of H2S may have been initiated by massive volcanic eruptions, which emitted carbon dioxide and methane into the atmosphere; these warmed the oceans, lowering their capacity to absorb oxygen that would otherwise oxidize H2S. The increased levels of hydrogen sulfide could have killed oxygen-generating plants as well as depleted the ozone layer, causing further stress. Small H2S blooms have been detected in modern times in the Dead Sea and in the Atlantic Ocean off the coast of Namibia.

See also
Hydrogen sulfide chemosynthesis
Marsh gas

External links
International Chemical Safety Card 0165
Concise International Chemical Assessment Document 53
National Pollutant Inventory - Hydrogen sulfide fact sheet
NIOSH Pocket Guide to Chemical Hazards
NACE (National Association of Corrosion Engineers)

Acids Foul-smelling chemicals Hydrogen compounds Triatomic molecules Industrial gases Airborne pollutants Sulfides Flatulence Gaseous signaling molecules Blood agents
Hydrogen sulfide
Physics,Chemistry
6,213
652,248
https://en.wikipedia.org/wiki/Bogdanov%20affair
The Bogdanov affair was an academic dispute over the legitimacy of the doctoral degrees obtained by French twins Igor and Grichka Bogdanov (usually spelled Bogdanoff in French language publications) and a series of theoretical physics papers written by them in order to obtain degrees. The papers were published in reputable scientific journals, and were alleged by their authors to culminate in a theory for describing what occurred before and at the Big Bang. The controversy began in 2002, with an allegation that the twins, popular celebrities in France for hosting science-themed TV shows, had obtained PhDs with nonsensical work. Rumors spread on Usenet newsgroups that their work was a deliberate hoax intended to target weaknesses in the peer review system that physics journals use to select papers for publication. While the Bogdanov brothers continued to defend the legitimacy of their work, the debate over whether it represented a contribution to physics spread from Usenet to many other internet forums, eventually receiving coverage in the mainstream media. A Centre national de la recherche scientifique (CNRS) internal report later concluded that their theses had no scientific value. The incident prompted criticism of the Bogdanovs' approach to science popularization, led to a number of lawsuits, and provoked reflection among physicists as to how and why the peer review system can fail. Origin of the affair The Bogdanov brothers were born in 1949 in the small village of Saint-Lary, in the Gascony region of southwest France. The brothers each studied applied mathematics in Paris, but then began careers in television, hosting several popular programs on science and science fiction. The first of these, Temps X (Time X), ran from 1979 to 1989. In 1991 the Bogdanovs published a book, Dieu et la Science (God and Science), drawn from interviews with Catholic philosopher Jean Guitton, which became a French bestseller. This book provoked a dispute of its own when University of Virginia astronomy professor Trinh Xuan Thuan accused the Bogdanovs of plagiarizing his 1988 book The Secret Melody: And Man Created the Universe. After a legal battle in France, during which a judge initially ruled in Thuan's favour, Thuan and the Bogdanovs settled out of court, and the Bogdanovs later denied all wrongdoing. Thuan suggests that the plagiarism suit pressed the brothers to obtain doctorates as fast as possible, since (according to Thuan) the back cover of the book claimed that the Bogdanovs held doctorates when they did not. In 1993, the brothers began work toward doctorates, first working under the mathematical physicist of the University of Burgundy. Flato died in 1998, and his colleague Daniel Sternheimer (of CNRS) took over the job of supervising the Bogdanovs. According to Sternheimer, the twins viewed themselves as "the Einstein brothers" and had a propensity to voice vague, "impressionistic" statements; he considered guiding their efforts "like teaching My Fair Lady to speak with an Oxford accent." As he told The Chronicle of Higher Education, Sternheimer did not consider himself an expert in all the topics Grichka Bogdanov included in his thesis, but judged that those portions within his specialty were PhD-quality work. Grichka Bogdanov was given a PhD by the University of Burgundy (Dijon) in 1999, though this doctorate is sometimes erroneously described as having been granted by the École Polytechnique. 
He originally applied for a degree in physics, but was instead given one in mathematics, and was first required to significantly rewrite his thesis, de-emphasizing the physics content. Around the same time, Igor Bogdanov failed the defense of his thesis. His advisors subsequently agreed to allow him to obtain a doctorate if he could publish three peer-reviewed journal articles. In 2002, after publishing the requisite articles, Igor was given a PhD in theoretical physics from the University of Burgundy. Both of the brothers received the lowest passing grade of "honorable", which is seldom given, as Daniel Sternheimer told The New York Times science reporter Dennis Overbye. In justifying the conferring of doctoral degrees to the Bogdanovs, Sternheimer told the Times, "These guys worked for 10 years without pay. They have the right to have their work recognized with a diploma, which is nothing much these days." In 2001 and 2002 the brothers published five papers in peer-reviewed physics journals, including Annals of Physics and Classical and Quantum Gravity. The controversy over the Bogdanovs' work began on October 22, 2002, with an email sent by University of Tours physicist Max Niedermaier to University of Pittsburgh physicist Ezra T. Newman. Niedermaier suggested that the Bogdanovs' PhD theses and papers were "spoof[s]", created by throwing together instances of theoretical-physics jargon, including terminology from string theory: "The abstracts are delightfully meaningless combinations of buzzwords ... which apparently have been taken seriously." Copies of the email reached American mathematical physicist John Baez, and on 23 October he created a discussion thread about the Bogdanovs' work on the Usenet newsgroup sci.physics.research, titled "Physics bitten by reverse Alan Sokal hoax?" Baez was comparing the Bogdanovs' publications to the 1996 Sokal affair, in which physicist Alan Sokal successfully submitted an intentionally nonsensical paper to a cultural studies journal in order to criticize that field's lax standards for discussing science. The Bogdanovs quickly became a popular discussion topic, with most respondents agreeing that the papers were flawed. The story spread in public media, prompting Niedermaier to offer an apology to the Bogdanovs, admitting that he had not read the papers firsthand. The Bogdanovs' background in entertainment lent some plausibility to the idea that they were attempting a deliberate hoax, but Igor Bogdanov quickly denied the accusation. The Bogdanov brothers themselves participated in the online discussions, sometimes using pseudonyms or represented by friends acting as proxies. They used these methods to defend their work and sometimes to insult their critics, among them the Nobel Prize recipient Georges Charpak. In October 2002, the Bogdanovs released an email containing apparently supportive statements by Laurent Freidel, then a visiting professor at the Perimeter Institute. Soon after, Freidel denied writing any such remarks, telling the press that he had forwarded a message containing that text to a friend. The Bogdanovs then attributed the quoted passages to Freidel, who said, "I'm very upset about that because I have received e-mail from people in the community asking me why I've defended the Bogdanov brothers. When your name is used without your consent, it's a violation." 
At the start of the controversy in the moderated group sci.physics.research, Igor Bogdanov denied that their published papers were a hoax, but when asked precise questions from physicists Steve Carlip and John Baez regarding mathematical details in the papers, failed to convince any other participants that these papers had any real scientific value. The New York Times reporter George Johnson described reading through the debate as "like watching someone trying to nail Jell-O to a wall", for the Bogdanovs had "developed their own private language, one that impinges on the vocabulary of science only at the edges." Media reports and comments from scientists The online discussion was quickly followed by media attention. The Register reported on the dispute on November 1, 2002, and stories in The Chronicle of Higher Education, Nature, The New York Times, and other publications appeared soon after. These news stories included commentary by physicists. Thesis readers One of the scientists who approved Igor Bogdanov's thesis, Roman Jackiw of the Massachusetts Institute of Technology, spoke to The New York Times reporter Dennis Overbye. Overbye wrote that Jackiw was intrigued by the thesis, although it contained many points he did not understand. Jackiw defended the thesis:All these were ideas that could possibly make sense. It showed some originality and some familiarity with the jargon. That's all I ask. In contrast, Ignatios Antoniadis (of the École Polytechnique), who approved Grichka Bogdanov's thesis, later reversed his judgment of it. Antoniadis told Le Monde, I had given a favorable opinion for Grichka's defense, based on a rapid and indulgent reading of the thesis text. Alas, I was completely mistaken. The scientific language was just an appearance behind which hid incompetence and ignorance of even basic physics. Pre- and post-publication official commentary on the journal articles In May 2001, the journal Classical and Quantum Gravity (CQG) reviewed an article authored by Igor and Grichka Bogdanov, titled "Topological theory of the initial singularity of spacetime". One of the referees' reports stated that the article was "Sound, original, and of interest. With revisions I expect the paper to be suitable for publication." The paper was accepted by the journal seven months later. However, after the publication of the article and the publicity surrounding the controversy, mathematician Greg Kuperberg posted to Usenet a statement written by the journal's senior publisher, Andrew Wray, and its co-editor, Hermann Nicolai. The statement read, in part, Regrettably, despite the best efforts, the refereeing process cannot be 100% effective. Thus the paper ... made it through the review process even though, in retrospect, it does not meet the standards expected of articles in this journal... The paper was discussed extensively at the annual Editorial Board meeting ... and there was general agreement that it should not have been published. Since then several steps have been taken to further improve the peer review process in order to improve the quality assessment on articles submitted to the journal and reduce the likelihood that this could happen again. The paper in question was, however, not officially withdrawn by the journal. 
Later, the editor-in-chief of the journal issued a slightly different statement on behalf of the Institute of Physics, which owns the journal, in which he insisted on the fact that their usual peer-review procedures had been followed, but no longer commented on the value of the paper. Moreover, Die Zeit quoted Nicolai as saying that had the paper reached his desk, he would have immediately rejected it. In 2001, the Czechoslovak Journal of Physics accepted an article written by Igor Bogdanov, entitled "Topological Origin of Inertia". The referee's report concluded: "In my opinion the results of the paper can be considered as original ones. I recommend the paper for publication but in a revised form." The following year, the Chinese Journal of Physics published Igor Bogdanov's "The KMS state of spacetime at the Planck scale". The report stated that "the viewpoint presented in this paper can be interesting as a possible approach of the Planck scale physics." Some corrections were requested. Not all review evaluations were positive. Eli Hawkins, acting as a referee on behalf of the Journal of Physics A, suggested rejecting one of the Bogdanovs' papers: "It is difficult to describe what is wrong in Section 4, since almost nothing is right. [...] It would take up too much space to enumerate all the mistakes: indeed it is difficult to say where one error ends and the next begins. In conclusion, I would not recommend that this paper be published in this, or any, journal." Online criticism of the papers After the start of the Usenet discussion, most comments were critical of the Bogdanovs' work. For example, John C. Baez stated that the Bogdanov papers are "a mishmash of superficially plausible sentences containing the right buzzwords in approximately the right order. There is no logic or cohesion in what they write." Jacques Distler voiced a similar opinion, proclaiming "The [Bogdanovs'] papers consist of buzzwords from various fields of mathematical physics, string theory and quantum gravity, strung together into syntactically correct, but semantically meaningless prose." Others compared the quality of the Bogdanov papers with that seen over a wider arena. "The Bogdanoffs' work is significantly more incoherent than just about anything else being published", wrote Peter Woit. He continued, "But the increasingly low standard of coherence in the whole field is what allowed them to think they were doing something sensible and to get it published." Woit later devoted a chapter of his book Not Even Wrong (2006) to the Bogdanov affair. Participants in the discussions were particularly unconvinced by a statement in the "Topological origin of inertia" paper that "whatever the orientation, the plane of oscillation of Foucault's pendulum is necessarily aligned with the initial singularity marking the origin of physical space." In addition, the paper claimed, the Foucault pendulum experiment "cannot be explained satisfactorily in either classical or relativistic mechanics". The physicists commenting on Usenet found these claims and subsequent attempts at their explanation peculiar, since the trajectory of a Foucault pendulum—a standard museum piece—is accurately predicted by classical mechanics. The Bogdanovs explained that these claims would only be clear in the context of topological field theory. 
Baez and Russell Blackadar attempted to determine the meaning of the "plane of oscillation" statement; after the Bogdanovs issued some elaborations, Baez concluded that it was a complicated way of rephrasing the following: Since the big bang happened everywhere, no matter which way a pendulum swings, the plane in which it swings can be said to "intersect the big bang". However, Baez pointed out, this statement does not in fact concern the Big Bang, and is entirely equivalent to the following: No matter which way a pendulum swings, there is some point on the plane in which it swings. Yet this rephrasing is itself equivalent to the following statement: Any plane contains a point. If this was the essence of the statement, Baez noted, it cannot be very useful in "explaining the origin of inertia". Urs Schreiber, then a postdoctoral researcher at the University of Hamburg, noted that the mention of the Foucault pendulum was at odds with the papers' general tone, since they generally relied upon more "modern terminology". (According to George Johnson, the Foucault pendulum is "an icon of French science that would belong in any good Gallic spoof.") Schreiber identified five central ideas in the Bogdanovs' work—"'result' A" through "'result' E"—which are expressed in the jargon of statistical mechanics, topological field theory and cosmology. One bit of jargon, the Hagedorn temperature, comes from string theory, but as Schreiber notes, the paper does not use this concept in any detail; moreover, since the paper is manifestly not a string theory treatise, "considering the role the Hagedorn temperature plays in string cosmology, this is bordering on self-parody." Schreiber concludes that the fourth "result" (that the spacetime metric "at the initial singularity" must be Riemannian) contradicts the initial assumption of their argument (an FRW cosmology with pseudo-Riemannian metric). The fifth and last "result", Schreiber notes, is an attempt to resolve this contradiction by "invok[ing] quantum mechanics". The Bogdanovs themselves described Schreiber's summary as "very accurate"; for more on this point, see below. Schreiber concluded, Just to make sure: I do not think that any of the above is valid reasoning. I am writing this just to point out what I think are the central 'ideas' the authors had when writing their articles and how this led them to their conclusions. Eli Hawkins of Pennsylvania State University voiced a similar concern about "The KMS state of spacetime at the Planck scale". The main result of this paper is that this thermodynamic equilibrium should be a KMS state. This almost goes without saying; for a quantum system, the KMS condition is just the concrete definition of thermodynamic equilibrium. The hard part is identifying the quantum system to which the condition should be applied, which is not done in this paper. Both Baez and, later, Peter Woit noted that content was largely repeated from one Bogdanov paper to another. Damien Calaque of the Louis Pasteur University, Strasbourg, criticized Grichka Bogdanov's unpublished preprint "Construction of cocycle bicrossproducts by twisting". In Calaque's estimation, the results presented in the preprint did not have sufficient novelty and interest to merit an independent journal article, and moreover the principal theorem was, in its current formulation, false: Grichka Bogdanov's construction yields a bialgebra which is not necessarily a Hopf algebra, the latter being a type of mathematical object which must satisfy additional conditions. 
Eventually, the controversy attracted mainstream media attention, opening new avenues for physicists' comments to be disseminated. quoted Alain Connes, recipient of the 1982 Fields Medal, as saying, "I didn't need long to convince myself that they're talking about things that they haven't mastered." The New York Times reported that the physicists David Gross, Carlo Rovelli and Lee Smolin considered the Bogdanov papers nonsensical. Nobel laureate Georges Charpak later stated on a French talk show that the Bogdanovs' presence in the scientific community was "nonexistent". The most positive comments about the papers themselves came from string theorist Luboš Motl: ...Some of the papers of the Bogdanoff brothers are really painful and clearly silly ... But the most famous paper about the solution of the initial singularity is a bit different; it is more sophisticated. ...it does not surprise me much that Roman Jackiw said that the paper satisfied everything he expects from an acceptable paper—the knowledge of the jargon and some degree of original ideas. (And be sure that Jackiw, Kounnas, and Majid were not the only ones with this kind of a conclusion.) ...Technically, their paper connects too many things. It would be too good if all these ideas and (correct) formulae were necessary for a justification of a working solution to the initial singularity problem. But if one accepts that the papers about these difficult questions don't have to be just a well-defined science but maybe also a bit of inspiring art, the brothers have done a pretty good job, I think. And I want to know the answers to many questions that are opened in their paper. Motl's measured support for "Topological field theory of the initial singularity of spacetime", however, stands in stark contrast to Robert Oeckl's official MathSciNet review, which states that the paper is "rife with nonsensical or meaningless statements and suffers from a serious lack of coherence," follows up with several examples to illustrate his point, and concludes that the paper "falls short of scientific standards and appears to have no meaningful content." An official report from the Centre national de la recherche scientifique (CNRS), which became public in 2010, concluded that the paper "ne peut en aucune façon être qualifié de contribution scientifique" ("cannot in any way be considered a scientific contribution"). The CNRS report summarized the Bogdanovs' theses thusly: "Ces thèses n’ont pas de valeur scientifique. […] Rarement aura-t-on vu un travail creux habillé avec une telle sophistication" ("These theses have no scientific value. [...] Rarely have we seen a hollow work dressed with such sophistication"). Aftermath Claims of pseudonymous activity One episode after the heyday of the affair involved the participation of an unidentified "Professor Yang". Using an e-mail address at the domain th-phys.edu.hk, an individual publishing under this name wrote to a number of individuals and on the Internet to defend the Bogdanov papers. This individual wrote to physicists John Baez, Jacques Distler and Peter Woit; to The New York Times journalist Dennis Overbye; and on numerous physics blogs and forums, signing his name "Professor L. Yang—Theoretical Physics Laboratory, International Institute of Mathematical Physics—HKU/Clear Water Bay, Kowloon, Hong Kong." It is the Hong Kong University of Science and Technology which is located in Clear Water Bay, not Hong Kong University (HKU), whose main campus is located in the Mid-Levels of Hong Kong Island. 
The Bogdanovs have claimed that the "domain name 'th-phys.edu.hk' was owned by Hong Kong University." This was not confirmed officially by HKU and no Prof. Yang existed on the roster of the HKU physics department; nor did the university have an "International Institute of Mathematical Physics". Suspicions were consequently raised that Professor L. Yang was actually a pseudonym of the Bogdanovs. However, Igor Bogdanov has maintained that Professor Yang is a real mathematical physicist with expertise in KMS theory, a friend of his, and that he was posting anonymously from Igor's apartment. Rayons X and Avant Le Big Bang In 2002, the Bogdanovs launched a new weekly TV show Rayons X (X Rays) on French public channel France 2. In August 2004, they presented a 90-minute special cosmology program in which they introduced their theory among other cosmological scenarios. The French mainstream media, in both the press and on the Internet, covered the renewed controversy to some extent; media outlets that reported upon it include Acrimed and Ciel et Espace. In 2004, the Bogdanovs published a commercially successful popular science book, Avant Le Big Bang (Before the Big Bang), based on a simplified version of their theses, where they also presented their point of view about the affair. Both the book and the Bogdanovs' television shows have been criticized for elementary scientific and mathematical inaccuracies. Critics cite examples from Avant Le Big Bang including a statement that the "golden number" φ (phi) is transcendental, an assumption that the limit of a decreasing sequence is always zero, and that the expansion of the Universe implies that the planets of the Solar System have grown farther apart. In October 2004, a journalist from Ciel et Espace interviewed Shahn Majid of Queen Mary, University of London about his report on Grichka Bogdanov's thesis. Majid said that the French version of his report on Grichka's thesis was "an unauthorized translation partially invented by the Bogdanovs." In one sentence, the English word "interesting" was translated as the French "important". A "draft [mathematical] construction" became "la première construction [mathématique]" ("the first [mathematical] construction"). Elsewhere, an added word demonstrated, according to Majid, that "Bogdanov does not understand his own draft results." Majid also described more than ten other modifications of meaning, each one biased towards "surestimation outrancière"—"outrageous over-estimation". Majid said that his original report described a "very weak" student who nevertheless demonstrated "an impressive amount of determination to obtain a doctorate." Later, Majid claimed in a Usenet post that, in an addendum to Avant Le Big Bang, Grichka intentionally misquoted Majid's opinion on the way this interview had been transcribed. Additionally, in the same addendum, a critical analysis of their work made by Urs Schreiber, and affirmed by the Bogdanovs as "very accurate", was included with the exception of the concluding remark "Just to make sure: I do not think that any of the above is valid reasoning", thus inverting the meaning from criticism into ostensible support. 
Moreover, a comment by physicist Peter Woit written as, "It's certainly possible that you have some new worthwhile results on quantum groups", was translated as "Il est tout à fait certain que vous avez obtenu des résultats nouveaux et utiles dans les groupes quantiques" ("It is completely certain that you have obtained new and useful results on quantum groups") and published by the Bogdanovs in the addendum of their book. Disputes on French and English Wikipedias At the beginning of 2004, Igor Bogdanov began to post on French Usenet physics groups and Internet forums, continuing the pattern of behavior seen on sci.physics.research. A controversy began on the French Wikipedia when Igor Bogdanov and his supporters began to edit that encyclopedia's article on the brothers, prompting the creation of a new article dedicated to the debate (Polémique autour des travaux des frères Bogdanov—"Debate surrounding the work of the Bogdanov brothers"). The dispute then spread to the English Wikipedia. In November 2005, this led the Arbitration Committee, a dispute resolution panel that acts as the project's court of last resort, to ban anyone deemed to be a participant in the external dispute from editing the English Wikipedia's article on the Bogdanov Affair. A number of English Wikipedia users, including Igor Bogdanov himself, were explicitly named in this ban. In 2006, Baez wrote on his website that for some time the Bogdanovs and "a large crowd of sock puppets" had been attempting to rewrite the English Wikipedia article on the controversy "to make it less embarrassing to them". "Nobody seems to be fooled", he added. Lawsuits In December 2004, the Bogdanovs sued Ciel et Espace for defamation over the publication of a critical article entitled "The Mystification of the Bogdanovs". In September 2006, the case was dismissed after the Bogdanovs missed court deadlines; they were ordered to pay 2,500 euros to the magazine's publisher to cover its legal costs. There was never a substantive ruling on whether the Bogdanovs had been defamed. Alain Riazuelo, an astrophysicist at the Institut d'astrophysique de Paris, participated in many of the online discussions of the Bogdanovs' work. He posted an unpublished version of Grichka Bogdanov's PhD thesis on his personal website, along with his critical analysis. Bogdanov subsequently described this version as "dating from 1991 and too unfinished to be made public". Rather than suing Riazuelo for defamation, Bogdanov filed a criminal complaint of copyright (droit d'auteur) violation against him in May 2011. The police detained and interrogated Riazuelo. He came to trial and was convicted in March 2012. A fine of 2,000 euros the court imposed was suspended, and only one euro of damages was awarded. But in passing judgement the court stated that the scientist had "lacked prudence", given "the fame of the plaintiff". The verdict outraged many scientists, who felt that the police and courts should have no say in a discussion of the scientific merits of a piece of work. In April 2012, a group of 170 scientists published an open letter titled L'affaire Bogdanoff: Liberté, Science et Justice, Des scientifiques revendiquent leur droit au blâme (The Bogdanov Affair: Liberty, Science and Justice, scientists claim their right of critique). In 2014, the Bogdanovs sued the weekly magazine Marianne for defamation, on account of reporting the magazine had published in 2010 which had brought the CNRS report to light. 
The weekly was eventually ordered to pay 64,000 euros in damages, a quantity less than the Bogdanovs had originally demanded (in excess of 800,000 euros each). The Bogdanovs also sued the CNRS for 1.2 million euros in damages, claiming that the CNRS report had "porté atteinte à leur honneur, à leur réputation et à leur crédit" ("undermined their honor, reputation and credit") and calling the report committee a "Stasi scientifique", but a tribunal ruled against them in 2015 and ordered them to pay 2,000 euros. Megatrend University In 2005, the Bogdanovs became professors at Megatrend University in Belgrade where they were appointed Chairs of Cosmology and said to direct the Megatrend Laboratory of Cosmology. Mića Jovanović, the rector and owner of Megatrend University, wrote a preface for the Serbian edition of Avant le Big Bang. Jovanović later became embroiled in controversy and resigned his post when it was revealed that he had not obtained a PhD at the London School of Economics as he had claimed. This scandal, combined with the presence of the Bogdanovs, contributed to an atmosphere of controversy surrounding Megatrend. L'équation Bogdanov In 2008, published L'équation Bogdanov: le secret de l'origine de l'univers? (The Bogdanov Equation: The Secret of the Origin of the Universe?), officially written in English by Luboš Motl and translated into French. A review in Science et Vie found that the book was light on detail and never actually said what the "Bogdanov equation" is: "Et arrivé à la conclusion, on n'est même plus très certain qu'elle existe réellement" ("Arriving at the conclusion, one is no longer even very certain that it really exists"). Reflections upon the peer-review system During the heyday of this affair, some media coverage cast a negative light on theoretical physics, stating or at least strongly implying that it has become impossible to distinguish a valid paper from a hoax. Overbye's article in The New York Times voiced this opinion, for example, as did Declan Butler's piece in Nature. Posters on blogs and Usenet used the affair to criticize the present status of string theory; for the same reason, Peter Woit devoted a chapter of Not Even Wrong, a book emphatically critical of string theory, to the affair. On the other hand, George Johnson's report in The New York Times concludes that physicists have generally decided the papers are "probably just the result of fuzzy thinking, bad writing and journal referees more comfortable with correcting typos than challenging thoughts." String theorist Aaron Bergman riposted in a review of Not Even Wrong that Woit's conclusion is undermined by a number of important elisions in the telling of the story, the most important of which is that the writings of the Bogdanovs, to the extent that one can make sense of them, have almost nothing to do with string theory. ... I first learned of the relevant papers in a posting on the internet by Dr. John Baez. Having found a copy of one of the relevant papers available online, I posted that "the referee clearly didn't even glance at it." While the papers were full of rather abstruse prose about a wide variety of technical areas, it was easy to identify outright nonsense in the areas about which I had some expertise. ... A pair of non-string theorists were able to get nonsensical papers generally not about string theory published in journals not generally used by string theorists. This is surely an indictment of something, but its relevance to string theory is marginal at best. 
Jacques Distler argued that the tone of the media coverage had more to do with journalistic practices than with physics. The much-anticipated New York Times article on the Bogdanov scandal has appeared. Alas, it suffers from the usual journalistic conceit that a proper newspaper article must cover a "controversy". There must be two sides to the controversy, and the reporter's job is to elicit quotes from both parties and present them side-by-side. Almost inevitably, this "balanced" approach sheds no light on the matter, and leaves the reader shaking his head, "There they go again..." Distler also suggested that the fact that the Bogdanovs had not uploaded their papers to the arXiv prior to publication, as was standard practice by that time, meant that the physics community must have paid vanishingly little attention to those papers before the hoax rumors broke. The affair prompted many comments about the possible shortcomings of the referral system for published articles, and also on the criteria for acceptance of a PhD thesis. Frank Wilczek, who edited Annals of Physics (and who would later share the 2004 Nobel Prize in Physics), told the press that the scandal motivated him to correct the journal's slipping standards, partly by assigning more reviewing duties to the editorial board. Prior to the controversy, the reports on the Bogdanov theses and most of the journal referees' reports spoke favorably of their work, describing it as original and containing interesting ideas. This has been the basis of concerns raised about the efficacy of the peer-review system that the scientific community and academia use to determine the merit of submitted manuscripts for publication; one concern is that over-worked and unpaid referees may not be able to thoroughly judge the value of a paper in the little time they can afford to spend on it. Regarding the Bogdanov publications, physicist Steve Carlip remarked: Referees are volunteers, who as a whole put in a great deal of work for no credit, no money, and little or no recognition, for the good of the community. Sometimes a referee makes a mistake. Sometimes two referees make mistakes at the same time. I'm a little surprised that anyone is surprised at this. Surely you've seen bad papers published in good journals before this! ... referees give opinions; the real peer review begins after a paper is published. Similarly, Richard Monastersky, writing in The Chronicle of Higher Education, observed, "There is one way...for physicists to measure the importance of the Bogdanovs' work. If researchers find merit in the twins' ideas, those thoughts will echo in the references of scientific papers for years to come." Before the controversy over their work arose, the scientific community had shown practically no interest in the Bogdanovs' papers; indeed, according to Stony Brook physics professor Jacobus Verbaarschot, who served on Igor Bogdanov's dissertation committee, without the hoax rumors "probably no one would have ever known about their articles." , the Bogdanovs' most recent paper was "Thermal Equilibrium and KMS Condition at the Planck Scale", which was submitted to the Chinese Annals of Mathematics in 2001 and appeared in 2003. That journal ceased publication in 2005. One retrospective commented, Up to 2007 the databanks mention a total of six citations for the Bogdanovs' publications. Four of them are citations among themselves and only two are by other physicists. 
Comparisons with the Sokal affair Several sources have referred to the Bogdanov affair as a "reverse Sokal" hoax, drawing a comparison with the Sokal affair, where the physicist Alan Sokal published a deliberately fraudulent and indeed nonsensical article in the humanities journal Social Text. Sokal's original aim had been to test the effects of the intellectual trend he called, "for want of a better term, postmodernism". Worried by what he considered a "more-or-less explicit rejection of the rationalist tradition of the Enlightenment", Sokal decided to perform an experiment which he later cheerfully admitted was both unorthodox and uncontrolled, provoking a maelstrom of reactions which, to his surprise, received coverage in and even the front page of The New York Times. The physicist John Baez compared the two events in his October 2002 post to the sci.physics.research newsgroup. Sociologist of science Harry Collins noted that all of the early reports of the incident made reference to the Sokal affair, and he speculated that without Sokal's precedent bringing the idea of hoax publications to mind, the Bogdanov papers would have sunk into the general obscurity of non-influential scientific writing. Igor and Grichka Bogdanov have and had vigorously insisted upon the validity of their work, while in contrast, Sokal was an outsider to the field in which he was publishing—a physicist, publishing in a humanities journal—and promptly issued a statement himself that his paper was a deliberate hoax. Replying on sci.physics.research, Sokal referred readers to his follow-up essay, in which he notes "the mere fact of publication of my parody" only proved that the editors of one particular journal—and a "rather marginal" one at that—had applied a lax intellectual standard. (According to The New York Times, Sokal was "almost disappointed" that the Bogdanovs had not attempted a hoax after his own style. "What's sauce for the goose is sauce for the gander", he said.) Baez, who made a comparison between the two affairs, later retracted, saying that the brothers "have lost too much face for [withdrawing the work as a hoax] to be a plausible course of action". In a letter to The New York Times, Cornell physics professor Paul Ginsparg wrote that the contrast between the cases was plainly evident: "here, the authors were evidently aiming to be credentialed by the intellectual prestige of the discipline rather than trying to puncture any intellectual pretension." He added that the fact some journals and scientific institutions have low or variable standards is "hardly a revelation". The observation was later confirmed by studies showing that high-prestige journals struggle to reach average reliability. See also List of topics characterized as pseudoscience Notes References External links Mathematical Center of Riemannian Cosmology – Igor Bogdanov's website Initial discussion Physics bitten by reverse Alan Sokal hoax? Theses and papers Scientific publications by Igor and Grichka Bogdanov Grichka Bogdanov's PhD thesis Igor Bogdanov's PhD thesis Critical websites John Baez's discussion of the Bogdanov affair Rapport des Sections 01 et 02 du Comité du CNRS sur Deux Thèses de Doctorat archived by Libération «Pot-Pourri» from Igor & Grichka Bogdanov's Before the Big Bang by Jean-Pierre Messager A small journey in the Bogdanoff universe by Alain Riazuelo Theoretical physics Pseudoscience Academic journal articles Academic scandals Hoaxes in science 2002 hoaxes
Bogdanov affair
Physics
8,134
1,393,026
https://en.wikipedia.org/wiki/Flanimals
Flanimals is a book series written by comedian Ricky Gervais and illustrated by Rob Steen. It depicts an assortment of seemingly useless or inadequate fictional animals and their behaviour. The cover Flanimal is the Grundit. The book is published by Faber and Faber, which has also published the sequels More Flanimals, Flanimals of the Deep and Flanimals: The Day of the Bletchling. Flanimals: Pop Up was published in October 2009 by Walker Books in the UK and in March 2010 by Candlewick Press in the US. List of Flanimals Coddleflop: A green mush puddle that absorbs substances and flips over to protect its soft top. However, since its bottom is equally soft, this strategy is never successful. Plamglotis: A purple ape-like Flanimal with no legs, so it swallows its hands to walk around in search of food, which it then cannot eat because its mouth is full. Mernimbler: A fluffy, pink, round Flanimal that feeds on honey water and soft cloud bits. It transforms into an aggressive, ogre-like adult stage when someone comments on its cuteness. Adults devour everything and die of chronic indigestion. Grundit: A heavily built blue Flanimal with a bump on its head from falling off Puddloflaj. Its small brain protects it, though. Grundits bully Gum Spudlets and squash Coddleflops. Puddloflaj: A pink water balloon-like Flanimal often ridden by Grundits for no clear reason. They are very cowardly, as their eyes will pop out. It is almost 100% water and can be used as a water balloon when young. Flemping Bunt-Himmler: A mimic and predator of the baby Mernimbler, only wider and flatter. They often get eaten after the Mernimblers grow up. Underblenge: A grey, blobby Flanimal that cannot move from where it was born because it has extremely strong suction cups (designed for suffocating prey) on its underside. Blunging: A yellow dinosaur-like Flanimal that lives in large family groups. They hate seeing their young being devoured by adult Mernimblers. Munty Flumple: A brown humanoid Flanimal that stares and falls in love with every Flanimal it sees. They are apparently the cutest creatures as babies. Splunge: A brain-like Flanimal so terrified of everything that it "splunges" at birth, which causes both parents to do so. Honk: A small, pink, tapir-like Flanimal that sleeps all day until it randomly wakes up to make a loud honking sound from its nose and then goes back to sleep. Hemel Sprot: A green blobby Flanimal that always looks where it has been and never where it is going. Sprot Guzzlor: A large blue Flanimal that preys on Hemel Sprots. Clunge Ambler: An ape-like Flanimal that hugs everything it sees. It always gets buried and pops back up to hug the Flanimal that buried it. Wobboid Mump: A blind eye in jelly that spends its entire life looking for the reason for its existence. Sprine Bloat-Trunker: An orange Flanimal that erupts from Sprog and Hemel Sprot recycling plants, and immediately joins the queue to be recycled. Print: A humanoid Flanimal that dives off high places but always lands on its head. It dies from ankle sprains and strong wind. Gum Spudlet: A Flanimal that resembles a Bumpy Coddleflop, and is eaten by Grundits, who dip them in Coddleflops. Sprog: A small, vicious, beetroot-like Flanimal that is angry at its own smell. It is often chewed and spat out by Grundits. Munge Fuddler: A crab-like Flanimal that "fuddles" everything it sees, until it fuddles the wrong thing. Frappled Humpdumbler: An octopus-like Flanimal with an eye on one side of its head and a nose on the other. 
Offledermis: A Flanimal born inside out to escape its own smell. It has a heart above its inside out eyeballs and constantly leaks. Plumboid Doppler: A round green Flanimal with eyestalks. Blimble Sprent: A yellow, fast-moving Flanimal without arms that sprints everywhere, avoiding its destination until it dies of exhaustion at the very spot where it started. Glonk: A green reptilian humanoid Flanimal that does absolutely nothing and dies. It is also known that it eats pizza. Originality dispute In August 2010, Norwich-based writer and artist John Savage issued a High Court writ, claiming that the original Flanimals book was based on his own Captain Pottie's Wildlife Encyclopedia, and that his artistic and literary copyright had been infringed. A spokeswoman for Gervais said that the concept and illustrations existed before Savage's work. Adaptations ITV commissioned a television series based on the books, with a planned air date of 2009, but it was later cancelled. On 28 April 2009, Variety reported that an animated feature film was in production at Illumination Entertainment, known for its 2010 summer blockbuster Despicable Me. It said that Gervais would be the executive producer and would voice the lead character, and that The Simpsons writer Matt Selman would write the script. However, it has since been removed from the development schedule, leaving its future uncertain, and no further details have been released about it since 2009. References External links Pictures from the book in the BBC website Flanimals on Ricky Gervais's site Flanimals on Rob Steen's site Flanimals on MySpace Book series introduced in 2004 2004 children's books British children's books British picture books Children's fiction books Books by Ricky Gervais Faber & Faber books Speculative evolution Books involved in plagiarism controversies
Flanimals
Biology
1,300
42,831,222
https://en.wikipedia.org/wiki/Environmentally%20extended%20input%E2%80%93output%20analysis
Environmentally extended input–output analysis (EEIOA) is used in environmental accounting as a tool which reflects production and consumption structures within one or several economies. As such, it is becoming an important addition to material flow accounting. Introduction In recognition of the increasing importance of global resource use mediated by international trade for environmental accounting and policy, new perspectives have been and are currently being developed within environmental accounting. The most prominent among these are consumption-based accounts compiled using environmentally extended input-output analysis. Consumption-based indicators of material use are commonly referred to as “material footprints” (comparable to carbon footprints and water footprints) or as raw material equivalents (RME) for imported and exported goods. Raw material equivalents or material footprints of traded goods comprise the material inputs required along the entire supply chain associated with their production. This includes both direct and indirect flows: For example, the ore mined to extract the metal contained in a mobile phone as well as the coal needed to generate the electricity needed to produce the metal concentrates would be included. In order to allocate domestic extraction to exported goods, information on the production and trade structure of an economy is required. In monetary terms, information on the production structure is contained in commonly available economy-wide input-output tables (IOT) which recently have been combined with trade statistics to form multi-regional IO (MRIO) tables. Input-output analysis In the following, a short introduction to input-output analysis and its environmental extension for the calculation of material footprints or RME indicators is provided. The inter-industry flows within an economy form an $n \times n$ matrix $Z$, and the total output of each industry forms an $n \times 1$ vector $x$. By dividing each flow into an industry (i.e., each element of $Z$) by the total output of that same industry, we obtain an $n \times n$ matrix of so-called technical coefficients $A$. In matrix algebra, this reads as follows: $A = Z\hat{x}^{-1}$, where $\hat{x}$ represents the vector $x$ diagonalized into an $n \times n$ matrix. Matrix $A$ contains the multipliers for the inter-industry inputs required to supply one unit of industry output. A certain total economic output $x$ is required to satisfy a given level of final demand $y$. This final demand may be domestic (for private households as well as the public sector) or foreign (exports) and can be written as an $n \times 1$ vector. When this vector of final demand is multiplied by the Leontief inverse $(I - A)^{-1}$, we obtain total output: $x = (I - A)^{-1} y$, where $I$ is the identity matrix; this matrix equation is the result of equivalence operations on the previous equation. The Leontief inverse contains the multipliers for the direct and indirect inter-industry inputs required to provide 1 unit of output to final demand. Next to the inter-industry flows recorded in $Z$, each industry requires additional inputs (e.g. energy, materials, capital, labour) and outputs (e.g. emissions) which can be introduced into the calculation with the help of an environmental extension. This commonly takes the shape of an $m \times n$ matrix $F$ of total factor inputs or outputs: factors are denoted in a total of $m$ rows, and the $n$ industries by which they are required are included along the columns. Allocation of factors to the different industries in the compilation of the extension matrix requires a careful review of industry statistics and national emissions inventories. 
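To make the notation above concrete, the following NumPy sketch builds the technical coefficients matrix $A$ and the Leontief inverse for a hypothetical two-sector economy. The transaction values are invented purely for illustration; they do not come from any real input-output table.

```python
import numpy as np

# Hypothetical two-sector economy (all values are illustrative only).
Z = np.array([[150.0, 500.0],   # inter-industry flows, rows = selling sector
              [200.0, 100.0]])
x = np.array([1000.0, 2000.0])  # total output of each sector

# Technical coefficients: A = Z * diag(x)^-1
A = Z @ np.linalg.inv(np.diag(x))

# Leontief inverse: L = (I - A)^-1
L = np.linalg.inv(np.eye(2) - A)

# Final demand consistent with x, and a check that x = L y
y = (np.eye(2) - A) @ x
print("technical coefficients A:\n", A)
print("Leontief inverse:\n", L)
print("x recovered from final demand:", L @ y)  # should match x
```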
In case of lacking data, expert opinions or additional modelling may be required to estimate the extension. Once completed, $F$ can be transformed into a direct factor requirements matrix $S$ per unit of output, and the calculation is analogous to the determination of the monetary direct multipliers matrix (see the first equation): $S = F\hat{x}^{-1}$. Consumption-based accounting of resource use and emissions can be performed by combining the industry-specific factor requirements with the monetary input-output relation: $E = S (I - A)^{-1} y$. This formula is the core of environmentally extended input-output analysis: the final demand vector $y$ can be split up into a domestic and a foreign (exports) component, which makes it possible to calculate the material inputs associated with each. The matrix $S$ integrates material (factor) flow data into input-output analysis. It allows us to allocate economy-wide material (factor) requirements to specific industries. In the language of life-cycle assessment, the matrix $S$ is called the intervention matrix. With the help of the coefficients contained in the Leontief inverse $(I - A)^{-1}$, the material requirements can be allocated to domestic or foreign (exports) final demand. In order to consider variations in production structures across different economies or regions, national input-output tables are combined to form so-called multi-regional input-output (MRIO) models. In these models, the sum total of resources allocated to final consumption equals the sum total of resources extracted, as recorded in the material flow accounts for each of the regions. Critical issues Environmentally extended input–output analysis comes with a number of assumptions which have to be kept in mind when interpreting the results of such studies: Homogeneity of products: Calculations based on the standard IO model make it necessary to assume that each economic activity produces only one physically homogeneous product. In reality, however, the high level of aggregation of activities (e.g., in most European IO tables, all mining is included in the same activity irrespective of the specific material) leads to inhomogeneous outputs. In addition, many industries generate by-products (e.g., a paper mill may also produce sawdust); this additionally violates the assumption of homogeneity of outputs. Along the same lines, when this method is used to ascribe environmental impacts, not all the products in a given sector have the same emissions. An average is used. For instance, in power generation, the emissions from coal-based generation are very different from those of solar power generation. An assumption is made here that the overall generation mix is being used, when actually power generation may be available only from one source. Homogeneity of prices: In using the standard IO model, it is also necessary to assume that each industry sells its characteristic output to all other economic activities and to final consumers at the same price. In reality, however, this is not always true, as illustrated by the example of electricity, which costs less in the primary than in the tertiary sectors and/or final consumption. In addition, the aforementioned heterogeneity of industry output will cause this assumption to be violated: For example, a sector buying mostly aluminum from the non-ferrous metal industries is likely to pay a different price than a sector that mostly buys rare earth metals. In other words, the issue of price heterogeneity among users can be coped with by increasing the sector resolution of the input-output table. 
Under an ideal condition when the same price of a product applies to all its users, the monetary input-output table can be regarded as equivalent to a physical input-output table, that is, a table measured in physical units. Constant returns to scale: IO models assume that when production is scaled, all the inputs and outputs scale by the same factor. However, it is imperative to acknowledge that deviating from this simplifying assumption greatly increases the complexity of IO models, thereby diminishing their primary analytical efficacy: a closed-form solution such as $x = (I - A)^{-1} y$ will no longer be available. Furthermore, acquiring dependable data pertaining to input-output relationships at the macroeconomic level, encompassing a large number of sectors, poses formidable challenges and substantial financial burdens. This foundational assumption also underpins life-cycle assessment (LCA). Allocation of investments: In creating a consumption-based account of material flows, it is necessary to decide how investments are allocated within the production and consumption structure. In national accounting, investments are reported as part of final demand. From a consumption-based perspective, they can also be thought of as an input into the production process (e.g., machinery and production infrastructure are necessary inputs to production). The manner in which capital investments are included, and how (or if) they are depreciated, significantly impacts the results obtained for the raw material equivalents of exports. If infrastructure investments (whether in monetary terms or as domestic extraction of construction materials) are not depreciated over time, importing one and the same product from an emerging economy currently building up its infrastructure will be associated with much more embodied material than importing it from a mature economy which has significantly invested into its infrastructure in the past. For recent developments regarding the treatment of issues related to capital stock and investment flows, please refer to the recent literature. Understanding the impact and eventually resolving these methodological issues will become important items on the environmental accounting research agenda. At the same time, interest is already growing in the interpretability of the results of such consumption-based approaches. It has yet to be determined how responsibility for material investments into the production of exports should be shared in general: While it is true that the importing economy receives the benefit of the ready-made product, it is also true that the exporting economy receives the benefit of income. Further extensions Avoiding double counting in footprint analysis Let us define $y_i$ as a vector of the same size as $y$ in which all elements are zero except for the $i$-th one. From the footprint equation above, the environmental footprint of product $i$ can be given by $E_i = S (I - A)^{-1} y_i$. Applying this calculation to materials such as metals and basic chemicals requires caution because only a small portion of them will be consumed by final demand. Conversely, using the model based on gross output, $x$, as $E_i = S (I - A)^{-1} x_i$ would result in the double-counting of emissions at each processing stage, leading to incorrect total environmental impacts (here, $x_i$ represents a column vector of the same size as $x$ with all elements equal to zero except for the $i$-th one). To address this problem, Dente et al. developed an innovative method based on the concept of "target sectors", which was further elaborated by Cabernard et al. 
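Continuing the hypothetical two-sector example above, the sketch below attaches a single environmental extension (for instance, tonnes of material extracted per sector), computes consumption-based footprints, and illustrates the double-counting that results from applying the same multipliers to gross output instead of final demand. All figures remain invented; the point is only the structure of the calculation.

```python
import numpy as np

Z = np.array([[150.0, 500.0],
              [200.0, 100.0]])
x = np.array([1000.0, 2000.0])
F = np.array([[30.0, 120.0]])          # factor use per sector, e.g. tonnes extracted

A = Z @ np.linalg.inv(np.diag(x))
L = np.linalg.inv(np.eye(2) - A)
S = F @ np.linalg.inv(np.diag(x))      # direct factor intensity per unit of output
y = (np.eye(2) - A) @ x                # final demand consistent with x

# Consumption-based account: footprints allocated to final demand
footprints = (S @ L @ np.diag(y)).ravel()
print("footprint per final product:", footprints)
print("sum of footprints:", footprints.sum(), "= total extraction:", F.sum())

# Double counting: applying the multipliers to gross output x counts upstream
# extraction again at every processing stage, so the total is inflated.
inflated = (S @ L @ np.diag(x)).sum()
print("inflated total from gross output:", inflated)
```

The sum of the consumption-based footprints reproduces the economy-wide extraction recorded in the extension, whereas the gross-output version exceeds it, which is the double-counting problem discussed above.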
Distributing environmental responsibility Footprint calculation based on the equation $E = S (I - A)^{-1} y$ completely allocates the environmental impacts to the final consumers. This is called consumer-based responsibility. An alternative way of allocation is one based on direct impacts, $F$, where the impacts are allocated to the producers. This is called production-based responsibility. These are examples of the full responsibility approach, where the impacts/pressures are allocated completely to a particular group of agents. Recently, several hybrid allocation schemes have been proposed, including income-based ones and sharedness. Waste and waste management When the intervention matrix refers to waste, the same footprint equation could be used to assess the waste footprint of products. However, it overlooks the crucial point that waste typically undergoes treatment before recycling or final disposal, leading to a form less harmful to the environment. Additionally, the treatment of emissions results in residues that require proper handling for recycling or final disposal (for instance, the pollution abatement process of sulfur dioxide involves its conversion into gypsum or sulfuric acid). To address these complexities, Nakamura and Kondo extended the standard EEIO model by incorporating physical waste flows generated and treated alongside monetary flows of products and services. They developed the Waste Input-Output (WIO) model, which accounts for the transformation of waste during treatment into secondary waste and residues, as well as recycling and final disposal processes. See also Anthropogenic metabolism Environmental accounting Environmental Accounts Embedded emissions Greenhouse gas emissions accounting Industrial metabolism Input-output model Material flow accounting Material flow analysis Social metabolism Urban metabolism Wassily Leontief Waste input-output model Notes References External links LIAISE KIT: Economy-wide accounts Environmentally extended input-output tables and models for Europe Environmental economics Economic planning National accounts
Environmentally extended input–output analysis
Environmental_science
2,312
46,692,161
https://en.wikipedia.org/wiki/Fecundity%20selection
Fecundity selection, also known as fertility selection, is the fitness advantage resulting from selection on traits that increase the number of offspring (i.e. fecundity). Charles Darwin formulated the theory of fecundity selection between 1871 and 1874 to explain the widespread evolution of female-biased sexual size dimorphism (SSD), where females are larger than males. Along with the theories of natural selection and sexual selection, fecundity selection is a fundamental component of the modern theory of Darwinian selection. Fecundity selection is distinct in that large female size relates to the ability to accommodate more offspring, and a higher capacity for energy storage to be invested in reproduction. Darwin's theory of fecundity selection predicts the following: Fecundity depends on variation in female size, which is associated with fitness. Strong fecundity selection favors large female size, which creates asymmetrical female-biased sexual size dimorphism. Although sexual selection and fecundity selection are distinct, it may still be difficult to interpret whether sexual dimorphism in nature is due to fecundity selection or to sexual selection. Examples of fecundity selection in nature include self-incompatible flowering plants, where pollen of some potential mates is not effective in forming seed, as well as bird, lizard, fly, and butterfly and moth species that are spread across an ecological gradient. Moreau–Lack's rule In 1944, Reginald Ernest Moreau suggested that in more seasonal environments or higher latitudes, fecundity depends on high mortality. David Lack suggested in 1954 that differential food availability and management across latitudes play a role in offspring and parental fitness. Lack also highlighted that the greater opportunity for parents to collect food, due to an increase in day-length towards the poles, is an advantage. This means that moderately higher latitudes provide more favorable conditions for producing more offspring. However, extreme day-lengths (i.e. at the poles) may work against parental survival, as repetitive food searching would exhaust the parent. Together, the Moreau–Lack rule hypothesizes that fecundity increases with increasing latitude. Evidence both supporting and casting doubt on this claim has led to the consolidation of other predictions, which may better explain the Moreau–Lack rule. Seasonality and Ashmole's hypothesis Ashmole (1963) suggested that (bird) fecundity depends on seasonality patterns. Differences in food availability between seasons are greater towards higher latitudes, so birds are predicted to experience low survival during the winter due to limited resources. This decline in population may be advantageous for survivors, since there is more food available by the next breeding season. This leads to an enhancement of energy invested in fitness, resulting in higher fecundity. Therefore, Ashmole's hypothesis treats resource availability as a factor determining fecundity. Differences in nest predation Areas with severe nest predation tend to be those with large clutches/litters, especially in the tropics, as they are more noticeable to predators (frequent parental care, noisier offspring). This predation pressure may lead to selection for multiple nests of smaller size, with shorter development time. A criticism of this hypothesis is that it indirectly assumes that these nest-predators are visually oriented; however, they may also be chemically oriented, with heightened olfactory senses. 
Length of breeding season (LBS) hypothesis Populations at higher latitudes experience increasing seasonality and shorter warm seasons. As a result, these populations have fewer chances of having multiple reproductive episodes. Intense fecundity selection depends on the length of breeding season (LBS). Factors that may shorten the LBS or delay the start of the breeding season, such as snow cover or delayed food growth, in turn minimize the chance for these populations to reproduce. Long breeding seasons towards the tropics favor smaller clutches, since females are able to balance energy reserved for reproduction against the risk of predation. Fecundity selection acts by favoring early reproduction and higher clutch size in species that reproduce frequently. The opposite trend is seen in populations that reproduce less frequently, where delayed reproduction is favored. The 'bet-hedging strategy' hypothesis The total fecundity per year depends on the length of breeding season (LBS), which also determines the number of breeding episodes. In addition, the total fecundity also depends on nest predation, as it describes differential survival over a variety of populations. When food is limited, the breeding season is long, and nest predation is intense, selection tends to favor a 'bet-hedging' strategy, where the risk of predation is spread over many smaller clutches. This means that the success of the number of offspring depends on whether they are large in size or not. The strategy suggests that fewer, but larger, clutches in higher latitudes are a result of food seasonality, nest predation, and LBS. In nature The findings below are based on individual research studies. Southern and Northern Hemisphere birds It has been assumed that parents of fewer offspring, with a high probability of adult survival, should permit less risk to themselves. Even though this compromises their young and reduces the fitness of the current brood, it is a strategy that allows investment in producing more offspring in the future. It was found that within and between regions, there is a negative correlation between clutch size and adult survival. Southern-Hemisphere parents were inclined to reduce mortality risk to themselves, even at a cost to their offspring, whereas Northern-Hemisphere parents accepted greater risk to themselves to reduce the risk to their offspring. Liolaemus lizard Liolaemus species span from the Atacama Desert to austral rain forests and Patagonia, and across a wide range of altitudes. Due to adaptive radiation, life history strategies have diversified within this genus. In turn, it was found that increased fecundity does not lead to female-biased SSD, which is also not affected by latitude or elevation. Drosophila melanogaster In lines of D. melanogaster selected for increased fecundity (i.e. more eggs laid over an 18-hour period), females showed a greater increase in thorax and abdomen width than males. In general, SSD increased with selection for increased fecundity. These results support the hypothesis that SSD can evolve rapidly in response to fecundity selection. Lepidoptera butterfly and moth species Female-biased SSD in many Lepidopteran species is initiated during the developmental period. Since females of these species, as in many other species, reserve their larval resources for reproduction, fecundity depends on larger (female) size. In this way, larger females can enhance fecundity as well as their survival by having multiple partners. 
Other types of selection Natural selection is defined as the differential survival and/or reproduction of organisms as a function of their physical attributes, where their 'fitness' is the ability to adapt to the environment and produce more (fertile) offspring. The trait(s) that contribute to the survival or reproduction of offspring have a higher chance of being expressed in the population. Sexual selection acts to refine secondary sexual (i.e. non-genital) phenotypes, such as the morphological differences between males and females (sexual dimorphism), or even differences among species within the same sex. As a refinement to Darwin's theory of selection, Trivers (1974) observed that: Females are the limiting sex and invest more in offspring than males Because males tend to be in excess, males tend to develop ornaments for attracting mates (female choice), as well as traits for competing with other males. See also Fecundity selection theory References Selection Biological interactions Natural selection
Fecundity selection
Biology
1,599
20,090,151
https://en.wikipedia.org/wiki/HD%2048265%20b
HD 48265 b is an extrasolar planet located approximately 293 light-years away in the constellation of Puppis, orbiting the 8th magnitude G-type main sequence star HD 48265. It has a minimum mass of 1.47 times that of Jupiter. Because its inclination is not known, its true mass is not known. It orbits at a distance of 1.81 AU with an orbital eccentricity of 0.08. As part of the NameExoWorlds project of the IAU, HD 48265 b was named "Naqaỹa" ("brother") and HD 48265 "Nosaxa" ("springtime") in the Moqoit language by Argentine respondents to an online poll. References External links Exoplanets discovered in 2008 Giant planets Puppis Exoplanets detected by radial velocity Exoplanets with proper names
HD 48265 b
Astronomy
181
47,158,945
https://en.wikipedia.org/wiki/Alain%20Berton
Alain-Edgard Berton (1912–1979) was a French chemical engineer who specialized in toxicology and in the analysis of air components in industrial environments. In the late 1950s he invented the "Osmopile", a measuring device, dubbed "the first artificial nose," which initiated, through the use of highly sensitive galvanic cells, the electrochemical analysis of air to detect dangerous components. Biography Alain Berton was born in Coro Coro in Bolivia on 27 August 1912. He was the son of Adrien Berton, a mining engineer, and Justine Rodriguez. He was educated at the Lycée Hoche in Versailles, and became a chemical engineer at the Chemical Institute of the University of Paris in 1933. From 1935 to 1937 he was a Ramsay Fellow at the Institute of Technology in London, in the laboratory of Prof. William Lawrence Bragg, at the Royal Institution. He began his career in 1938 at the French National Centre for Scientific Research as a "boursier" (fellow) in Georges Urbain's laboratory (dedicated to war chemical studies, protection against poison gas). Following Georges Urbain's death that same year, he was assigned to Paul Lebeau's laboratory as "chargé de recherche" (researcher). From 1959 to 1969 he was head of research. In parallel, from 1959 to 1978, he was head of the Toxicology Laboratory for the Regional Social Security Fund in Paris. The Osmopile With the end of the war in 1944 and the start of post-war recovery, Alain Berton worked on applications of absorption and emission spectroscopy in the ultraviolet and infrared; amid growing concern for the protection of the labor force, the specific measurement of atmospheric pollutants became of vital interest in factories, in order to detect and remedy industrial pollution effectively. Thus, in the 1950s, based on gas chromatographic analysis at low temperature followed by pyrolysis, he managed to isolate chlorinated substances and acid vapor components in the air. He was able to identify individual traces of gases and vapors by using ultra-sensitive galvanic cells and galvanic microcell detectors. He presented his research in the preamble to the convention of the Analytical Chemistry Group in 1958. Alain Berton named his invention "the Osmopile," later nicknamed "the sniffing cells" by the scientific journal Atomes. The first "artificial nose" was thus born. His invention was adopted and developed in the US and went around the world with a report from the Associated Press dated December 8, 1958. Berton's Osmopile was marketed by Jouan, a laboratory equipment manufacturer founded in the 1940s by a researcher from the Pasteur Institute and acquired in 2003 by Thermo Electron. The Osmopile device was modernized over time and used in the fight against industrial pollution. Through his invention, Alain Berton proved to be a pioneer of ecology. Patents Berton, Alain Edgard: Tactile alarm wristwatch. 5 December 1949: FR953313-A Berton, Alain Edgard: Infrared photometer for use in physical and chemical analysis. 24 January 1955: FR1084823-A Berton, Alain: Simple ultraviolet photometer, with instant-reading photographic recorder, for use in physical and chemical analysis. 27 June 1958: FR1159401-A Berton, Alain: Apparatus for detecting and measuring traces of impurities in a gas. 16 June 1960: FR1223277-A; idem, 27 June 1960: FR1224831-A Berton, Alain: Chemical device for contactless recording. 14 October 1960: FR1234235-A Berton, Alain: Portable colorimetric vapor analyser. 
17 March 1961: FR1255988-A Berton, Alain: Electrochemical device for detecting impurities in gases. 10 August 1962: FR1300917-A Award Alain Berton was awarded the Medal of the International Bureau of Analytical Chemistry (BICA), an organization engaged in the international fight against chemical weapons and led by Paul Nicolardot. See also Thermal decomposition Enthalpy Notes and references The information on this page is partially translated from the equivalent French-language article, Alain Berton (Chimiste), licensed under the Creative Commons Attribution-ShareAlike license. 20th-century French chemists Analytical chemists 1912 births 1979 deaths French expatriates in Bolivia
Alain Berton
Chemistry
978
51,759,087
https://en.wikipedia.org/wiki/HD%20117939
HD 117939 is a Sun-like star in the southern constellation of Centaurus. With an apparent visual magnitude of 7.29 it is too faint to be viewed with the naked eye, but is within the range of binoculars or a small telescope. It is located at a distance of 98.5 light years from the Sun based on parallax measurements, and is drifting further away with a radial velocity of +82 km/s. This is an intermediate disk star with a high proper motion across the celestial sphere. An ordinary G-type main-sequence star with a stellar classification of G4V, this star is an "excellent photometric match for the Sun"; the atmospheric properties of the star make it a near solar twin. It is older than the Sun at 6.1 billion years, but is more chromospherically active. To date no exact solar twin (precisely matching all important properties of the Sun) has been found. However, there are some stars that come very close to being identical to the Sun, and as such are dubbed solar twins by astronomers. An exact solar twin would be a 4.6-billion-year-old G2V star with a 5,778 K temperature, the correct metallicity, and a 0.1% solar luminosity variation. G2V stars with an age of 4.6 billion years or more have typically reached their most stable state. Proper metallicity and size are also important to low luminosity variation. Sun comparison A chart compares the Sun to HD 117939. See also List of nearest stars References G-type main-sequence stars Solar analogs Centaurus Durchmusterung objects 9450 117939 066238
HD 117939
Astronomy
356
13,111,519
https://en.wikipedia.org/wiki/Cargo%20scanning
Cargo scanning or non-intrusive inspection (NII) refers to non-destructive methods of inspecting and identifying goods in transportation systems. It is often used for scanning of intermodal freight shipping containers. In the US, it is spearheaded by the Department of Homeland Security and its Container Security Initiative (CSI), trying to achieve one hundred percent cargo scanning by 2012 as required by the US Congress and recommended by the 9/11 Commission. In the US the main purpose of scanning is to detect special nuclear materials (SNMs), with the added bonus of detecting other types of suspicious cargo. In other countries the emphasis is on manifest verification, tariff collection and the identification of contraband. In February 2009, approximately 80% of US incoming containers were scanned. To bring that number to 100%, researchers are evaluating numerous technologies, described in the following sections. Radiography Gamma-ray radiography Gamma-ray radiography systems capable of scanning trucks usually use cobalt-60 or caesium-137 as a radioactive source and a vertical tower of gamma detectors. This gamma camera is able to produce one column of an image. The horizontal dimension of the image is produced by moving either the truck or the scanning hardware. The cobalt-60 units use gamma photons with a mean energy of 1.25 MeV, which can penetrate up to 15–18 cm of steel. The systems provide good quality images which can be used for identifying cargo and comparing it with the manifest, in an attempt to detect anomalies. They can also identify high-density regions too thick to penetrate, which would be the most likely to hide nuclear threats. X-ray radiography X-ray radiography is similar to gamma-ray radiography but instead of using a radioactive source, it uses a high-energy bremsstrahlung spectrum with energy in the 5–10 MeV range created by a linear particle accelerator (LINAC). Such X-ray systems can penetrate up to 30–40 cm of steel in vehicles moving with velocities up to 13 km/h. They provide higher penetration but also cost more to buy and operate. They are more suitable for the detection of special nuclear materials than gamma-ray systems. They also deliver a dose of radiation to potential stowaways about 1,000 times higher. Dual-energy X-ray radiography Backscatter X-ray radiography Neutron activation systems Examples of neutron activation systems include: pulsed fast neutron analysis (PFNA), fast neutron analysis (FNA), and thermal neutron analysis (TNA). All three systems are based on neutron interactions with the inspected items and examining the resultant gamma rays to determine the elements being radiated. TNA uses thermal neutron capture to generate the gamma rays. FNA and PFNA use fast neutron scattering to generate the gamma rays. Additionally, PFNA uses a pulsed collimated neutron beam. With this, PFNA generates a three-dimensional elemental image of the inspected item. Passive radiation detectors Muon tomography Muon tomography is a technique that uses cosmic ray muons to generate three-dimensional images of volumes using information contained in the Coulomb scattering of the muons. Since muons are much more deeply penetrating than X-rays, muon tomography can be used to image through much thicker material than x-ray based tomography such as CT scanning. The muon flux at the Earth's surface is such that a single muon passes through a volume the size of a human hand per second. 
Muon imaging was originally proposed and demonstrated by Alvarez. The method was re-discovered and improved upon by a research team at Los Alamos National Laboratory. Muon tomography is completely passive, exploiting naturally occurring cosmic radiation. This makes the technology ideal for high throughput scanning of volume material where operators are present, such as at a marine cargo terminal. In these cases, truck drivers and customs personnel do not have to leave the vehicle or exit an exclusion zone during scanning, expediting cargo throughput. Multi-mode passive detection systems (MMPDS), based upon muon tomography, are currently in use by Decision Sciences International Corporation at Freeport, Bahamas, and the Atomic Weapons Establishment in the United Kingdom. An MMPDS system has also been contracted by Toshiba to determine the location and the condition of the nuclear fuel in the Fukushima Daiichi Nuclear Power Plant. Gamma radiation detectors Radiological materials emit gamma photons, which gamma radiation detectors, also called radiation portal monitors (RPM), are good at detecting. Systems currently used in US ports (and steel mills) use several (usually 4) large PVT panels as scintillators and can be used on vehicles moving up to 16 km/h. They provide very little information on the energy of detected photons, and as a result, they were criticized for their inability to distinguish gammas originating from nuclear sources from gammas originating from a large variety of benign cargo types that naturally emit radioactivity, including bananas, cat litter, granite, porcelain, stoneware, etc. Those naturally occurring radioactive materials, called NORMs, account for 99% of nuisance alarms. Some radiation, as in the case of large loads of bananas, is due to potassium and its rarely occurring (0.0117%) radioactive isotope potassium-40; other radiation is due to radium or uranium that occur naturally in earth and rock, and in cargo types made from them, like cat litter or porcelain. Radiation originating from earth is also a major contributor to background radiation. Another limitation of gamma radiation detectors is that gamma photons can be easily suppressed by high-density shields made from lead or steel, preventing detection of nuclear sources. Those types of shields do not stop fission neutrons produced by plutonium sources, however. As a result, radiation detectors usually combine gamma and neutron detectors, making shielding only effective for certain uranium sources. Neutron radiation detectors Fissile materials emit neutrons. Some nuclear materials, such as the weapons-usable plutonium-239, emit large quantities of neutrons, making neutron detection a useful tool to search for such contraband. Radiation Portal Monitors often use helium-3-based detectors to search for neutron signatures. However, a global supply shortage of He-3 has led to the search for other technologies for neutron detection. See also Industrial radiography Gamma spectroscopy References Special nuclear materials Freight transport Electromagnetic spectrum Radioactivity Radiography United States Department of Homeland Security X-rays
Cargo scanning
Physics,Chemistry
1,324
25,193,533
https://en.wikipedia.org/wiki/Pyriprole
Pyriprole, sold under the brand name Prac-tic, is a veterinary medication used for dogs against external parasites such as fleas and ticks. Pyriprole is a phenylpyrazole derivative similar to fipronil. Although introduced in the 2000s and under patent protection, it is a "classic" insecticide. It is only approved in the EU and a few other countries for use on dogs. It is not approved for use on cats or livestock. It has not been introduced as an agricultural or hygiene pesticide. Pyriprole applied as a spot-on is highly effective against fleas and several tick species. Efficacy against fleas is comparable to that of other modern insecticidal active ingredients such as fipronil, imidacloprid or spinosad. Like most flea spot-ons, it controls existing flea and tick infestations in about 1 to 2 days, and provides about 4 weeks of protection against re-infestations. Mechanism of action Pyriprole is an insecticide and acaricide. It inhibits γ-aminobutyric acid (GABA)-gated chloride channels (GABAA receptors), resulting in uncontrolled hyperactivity of the central nervous system of fleas and ticks. Parasites are killed through contact rather than by systemic exposure. Following topical administration, pyriprole is rapidly distributed in the hair coat of dogs within one day after application. It can be found in the hair coat throughout the treatment interval. Insecticidal efficacy against new flea infestations persists for a minimum of four weeks. The substance can be used as part of a treatment strategy for the control of flea allergy dermatitis (FAD). References Insecticides 2-Pyridyl compounds Pyrazoles Nitriles Organofluorides Trifluoromethyl compounds Thioethers Chlorobenzene derivatives Synthetic insecticides Dog medications
Pyriprole
Chemistry
413
47,546,141
https://en.wikipedia.org/wiki/Suaeda%20pulvinata
Suaeda pulvinata is an endemic seepweed from Mexico. It lives on the shores of Lake Texcoco and Lake Totolcingo. It lives underwater as an aquatic plant for half of the year and on dry land as a terrestrial plant for the other half, due to the changing levels of the lakes that it inhabits. It is a perennial flat herb with prostrate stems. Its leaves and inflorescences are green to reddish in color. This species is important for people who live in the states of Puebla and Tlaxcala, as it is an edible vegetable. The dish that is prepared using this species is known as romeritos. Molecular phylogenetic studies have found that this taxon is monophyletic. Due to differences in its phylogenetic position between its nuclear ITS tree and its chloroplast rpl32-trnL tree, this species is thought to be the result of hybridization between ancestral species of Suaeda. The first scientific collector who found this plant was Efraim Hernández Xolocotzi. (Later, it was cited by Guadalupe Ramos in her university degree thesis.) However, he misidentified it as S. nigra. It was in 2013 that Ernesto Alvarado Reyes and Hilda Flores Olvera noticed it was a different species. References External links http://www.tropicos.org/Name/100425076 pulvinata Halophytes Flora of Mexico Edible plants Barilla plants
Suaeda pulvinata
Chemistry
311
12,268,846
https://en.wikipedia.org/wiki/First%20Databank
First Databank (FDB) is a major provider of drug and medical device databases that help inform healthcare professionals to make decisions. FDB partners with information system developers to deliver useful medication- and medical device-related information to clinicians, business associates, and patients. FDB is part of Hearst and the Hearst Health network. History First Databank was founded in 1977 as a company that published a quarterly magazine of drug prices. They were bought by Hearst Corporation in 1980. First Databank then evolved to become a provider of clinical and descriptive drug knowledge that is integrated into healthcare information systems globally. FDB has its headquarters in San Francisco, California, and has other offices in Indianapolis, Indiana, Exeter, England, Dubai, UAE and Hyderabad, India. The firm's drug databases support pharmacy dispensing, formulary management, drug pricing analysis, claims processing, computerized physician order entry (CPOE), electronic health records (EHR), electronic medical records (EMR), electronic prescribing (e-Prescribing), electronic medication administration records (EMAR), population health and telemedicine/telehealth. Beginning in 2011, First Databank's set of National Drug Codes (NDCs) has been integrated into RxNorm's standard clinical drug vocabulary, which includes all medications available on the US market. RxNorm is produced and maintained by the United States National Library of Medicine (NLM). In 2017, FDB acquired Polyglot Systems, which simplifies drug information for patients and translates that information into 21 languages. In 2018, FDB partnered with PetIQ to release the first veterinary medication database to provide information on pet medications structured for integration into pharmacy systems. Beginning in 2020, FDB partnered with Amazon and its Alexa devices to provide drug information and answer medication questions. During the COVID-19 pandemic, FDB posted drug data (regarding remdesivir, chloroquine, and hydroxychloroquine) and medical device-related coronavirus information to its website. In August 2021, the company announced a partnership with RxRevu to provide integrated decision support tools to improve patient access to care, delivering patient-specific pharmacy benefit information to EHR workflows via direct connections with pharmacy benefit managers. The technology displays accurate, real-time data at the point of prescribing, allowing physicians to find affordable alternatives for medications specific to a patient's health needs and insurance benefits. FDB will offer RxRevu's prescription cost and coverage solution to current and future hospital, health system, and EHR clients. Operations FDB MedKnowledge (formerly National Drug Data File Plus) First Databank's MedKnowledge provides prices, descriptions, and collateral clinical information on drugs approved by the US Food and Drug Administration (FDA), plus unapproved drugs, commonly used over-the-counter drugs, herbal remedies, medical foods and nutritional supplements. FDB OrderKnowledge (formerly OrderView Med Knowledge Base) First Databank has developed a drug ordering knowledge base that enables physicians to look up and order drugs. Drug orders are generated based on patient parameters such as age, weight, renal and hepatic impairment, thereby reducing lists of candidate drugs to a minimum. The database is expected to reduce the number of adverse drug reactions and side effects at facilities that have adopted electronic order entry systems. 
FDB AlertSpace A web-based software tool that enables institution-specific modification of medication alerts using FDB MedKnowledge clinical modules based on clinician input, localized clinical experience, and other available evidence. The tool allows users to edit or turn off individual alerts, track all alert customizations and create an audit record, and view FDB updates in comparison with the user's own modifications. Users can load the results of their modifications directly into their medication decision support system for immediate use in the workflow. The approach follows the normal update process. FDB Prizm The FDB Prizm medical device database provides structured, categorized, and normalized information about medical device products that are implanted into patients; hospital and durable medical equipment; and medical supplies. The medical device content comes from a variety of sources such as the FDA, medical device manufacturers, and industry data pools. It also encompasses additional clinical, operational, and financial attributes and codes. Use of this database within supply chain and other information systems is designed to help decision makers to build and maintain device libraries, identify and document medical devices in case of recalls and adverse events, and group and analyze medical device utilization. Meducation Meducation comprises simplified medication instructions and medication regimen calendars using patient-specific information from the electronic health record (EHR). All material is written at a 5th to 8th grade reading level with supporting pictograms in more than 20 languages and is designed to help reduce medication errors and improve medication adherence for all patients. FDB Targeted Medication Warnings FDB Targeted Medication Warnings provides patient-specific clinical decision support (CDS) for medications and is integrated directly into the EHR workflow. This content uses lab results, risk scores, and other patient data to suggest clinical guidance that is most relevant to the patient context. The CDS derived from this solution provides specificity for clinical decisions and is linked to the related generic medication alerts, which can be filtered out or customized using FDB AlertSpace. FDB Pet MedKnowledge FDB Pet MedKnowledge is a comprehensive drug database of FDA-approved veterinary medications, allowing veterinarians to access pet medication information whenever and wherever it is needed. The service is modeled on the FDB MedKnowledge human medication database. Litigation A consumer coalition filed separate suits in a Boston, Massachusetts federal court against drug wholesaler McKesson Corporation and First Databank, accusing the companies of artificially inflating drug prices. The lawsuits say that McKesson and FDB conspired from 2002 through 2005 to set the list prices artificially high. The suit against First DataBank accused it of limiting its survey of wholesalers to a single company, McKesson. First Databank agreed to a settlement, tentatively approved by the federal court, in which it would not pay damages to the plaintiffs, but agreed to reduce average wholesale prices (AWPs) listed in its databases by five percent for about 2,033 drugs. McKesson chose to fight the suits. The settlement required First DataBank (FDB) to reduce the AWP mark-up from 1.25 to 1.20 times the Wholesale Acquisition Cost (WAC) for 1,442 NDCs identified in the litigation. 
FDB set the mark-up at 1.20 for all drugs, independently of the litigation, on September 26, 2009. The rollback of the WAC-to-AWP spread led to a 4% reduction in their AWP. FDB also stopped publishing AWP data on September 26, 2011, two years after the rollback adjustments were implemented. First DataBank continues to publish non-AWP drug pricing information, including WAC, Direct Price, and suggested wholesale price. References External links Official website Publishing companies established in 1982 Hearst Communications publications Pharmaceutical industry Pharmaceuticals policy Publishing companies of the United States Companies based in San Francisco 1982 establishments in the United States
First Databank
Chemistry,Biology
1,511
75,281,541
https://en.wikipedia.org/wiki/Anusha%20Shah
Anusha Shah is an Indian-born civil engineer. Elected the 159th President of the Institution of Civil Engineers, she became the third woman and first person of colour to hold the position, taking office in November 2023. Early life and education Shah grew up in Kashmir. She studied civil engineering at Jamia Millia Islamia in New Delhi, India. In 1999, after winning a Commonwealth scholarship, she studied for an MSc in water and environmental engineering at the University of Surrey in the UK. Professional career Shah specialised in water and environmental engineering from the late 1990s. After completing her first degree, she worked as a project engineer for New Delhi-based Development Alternatives, overseeing production of compressed earth building blocks, and sparking a career interest in sustainable development. She then joined IramConsult, a local partner of Royal HaskoningDHV, to work in Kashmir on rehabilitating a lake. After completing her masters, she was seconded by Black & Veatch to work for Clancy Docwra as a design engineer on United Utilities' Haweswater scheme in the UK's Lake District. In 2008, Shah moved to Jacobs, becoming technical director for sustainable solutions and utilities in 2010, and a director of the firm in 2018. In 2019, she moved to Arcadis, becoming senior director for resilient cities and UK climate adaptation lead. She is currently seconded to the Eiffage, Kier, Ferrovial and BAM Nuttall joint venture on High Speed 2 as senior director of environmental consents. Institutional and board roles Shah is a Fellow of the Institution of Civil Engineers. Prior to succeeding Keith Howells and becoming President of the Institution of Civil Engineers in November 2023, she served on the Thomas Telford board, the ICE Executive Board, ICE's Fairness, Inclusion and Respect panel, the ICE research and development panel and the ICE qualifications panel. Shah is a non-executive director of the Met Office, UK and a Green Alliance trustee. She represents Arcadis at the London Climate Change Partnership and the 50L Home Initiative of the World Business Council for Sustainable Development. She is a past chair of the Thames Estuary Partnership Board, which works towards sustainable management of the River Thames. Shah has been a chair and also a judge of the Ofwat Water Breakthrough Challenge for two consecutive terms. Academia In 2021, Shah was made an honorary professor by the University of Wolverhampton for knowledge transfer. In the same year, she received an honorary doctorate from the University of East London for her contributions to climate change in engineering. Shah is a visiting professor at the University of Edinburgh and is a Royal Academy of Engineering visiting professor at King's College London. Awards Shah won the Civil Engineering Contractors Association Fairness Inclusion and Respect Inspiring Engineers Award 2019, and was honoured in New Civil Engineer's 2019 Recognising Women in Engineering awards for her contributions to gender diversity. In 2020, she was named as one of Climate Reframe's leading BAME voices on climate change in the UK. In 2023, she was selected by the Women's Engineering Society as one of the UK's Top 50 Women in Sustainability. References Civil engineering Year of birth missing (living people) Living people People from Jammu and Kashmir Presidents of the Institution of Civil Engineers 21st-century Indian people Jamia Millia Islamia alumni Alumni of the University of Surrey
Anusha Shah
Engineering
681
2,543,416
https://en.wikipedia.org/wiki/Infusion%20pump
An infusion pump infuses fluids, medication or nutrients into a patient's circulatory system. It is generally used intravenously, although subcutaneous, arterial and epidural infusions are occasionally used. Infusion pumps can administer fluids in ways that would be impractically expensive or unreliable if performed manually by nursing staff. For example, they can administer as little as 0.1 mL per hour injections (too small for a drip), injections every minute, injections with repeated boluses requested by the patient, up to maximum number per hour (e.g. in patient-controlled analgesia), or fluids whose volumes vary by the time of day. Because they can also produce quite high but controlled pressures, they can inject controlled amounts of fluids subcutaneously (beneath the skin), or epidurally (just within the surface of the central nervous system – a very popular local spinal anesthesia for childbirth). Types of infusion The user interface of pumps usually requests details on the type of infusion from the technician or nurse that sets them up: Continuous infusion usually consists of small pulses of infusion, usually between 500 nanoliters and 10 milliliters, depending on the pump's design, with the rate of these pulses depending on the programmed infusion speed. Intermittent infusion has a "high" infusion rate, alternating with a low programmable infusion rate to keep the cannula open. The timings are programmable. This mode is often used to administer antibiotics, or other drugs that can irritate a blood vessel. To get the entire dose of antibiotics into the patient, the "volume to be infused" or VTBI must be programmed for at least 30 CCs more than is in the medication bag; failure to do so can potentially result in up to half of the antibiotic being left in the IV tubing. Patient-controlled is infusion on-demand, usually with a preprogrammed ceiling to avoid intoxication. The rate is controlled by a pressure pad or button that can be activated by the patient. It is the method of choice for patient-controlled analgesia (PCA), in which repeated small doses of opioid analgesics are delivered, with the device coded to stop administration before a dose that may cause hazardous respiratory depression is reached. Total parenteral nutrition usually requires an infusion curve similar to normal mealtimes. Some pumps offer modes in which the amounts can be scaled or controlled based on the time of day. This allows for circadian cycles which may be required for certain types of medication. Types of pump There are two basic classes of pumps. Large volume pumps can pump fluid replacement such as saline solution, medications such as antibiotics or nutrient solutions large enough to feed a patient. Small-volume pumps infuse hormones, such as insulin, or other medicines, such as opiates. Within these classes, some pumps are designed to be portable, others are designed to be used in a hospital, and there are special systems for charity and battlefield use. Large-volume pumps usually use some form of peristaltic pump. Classically, they use computer-controlled rollers compressing a silicone-rubber tube through which the medicine flows. Another common form is a set of fingers that press on the tube in sequence. Small-volume pumps usually use a computer-controlled motor turning a screw that pushes the plunger on a syringe. The classic medical improvisation for an infusion pump is to place a blood pressure cuff around a bag of fluid. The battlefield equivalent is to place the bag under the patient. 
The pressure on the bag sets the infusion pressure. The pressure can actually be read-out at the cuff's indicator. The problem is that the flow varies dramatically with the cuff's pressure (or patient's weight), and the needed pressure varies with the administration route, potentially causing risk when attempted by an individual not trained in this method. Places that must provide the least-expensive care often use pressurized infusion systems. One common system has a purpose-designed plastic "pressure bottle" pressurized with a large disposable plastic syringe. A combined flow restrictor, air filter and drip chamber helps a nurse set the flow. The parts are reusable, mass-produced sterile plastic, and can be produced by the same machines that make plastic soft-drink bottles and caps. A pressure bottle, restrictor and chamber requires more nursing attention than electronically controlled pumps. In the areas where these are used, nurses are often volunteers, or very inexpensive. The restrictor and high pressure helps control the flow better than the improvised schemes because the high pressure through the small restrictor orifice reduces the variation of flow caused by patients' blood pressures. An air filter is an essential safety device in a pressure infusor, to keep air out of the patients' veins. Small bubbles could cause harm in arteries, but in the veins they pass through the heart and leave in the patients' lungs. The air filter is just a membrane that passes gas but not fluid or pathogens. When a large air bubble reaches it, it bleeds off. Some of the smallest infusion pumps use osmotic power. Basically, a bag of salt solution absorbs water through a membrane, swelling its volume. The bag presses medicine out. The rate is precisely controlled by the salt concentrations and pump volume. Osmotic pumps are usually recharged with a syringe. Spring-powered clockwork infusion pumps have been developed, and are sometimes still used in veterinary work and for ambulatory small-volume pumps. They generally have one spring to power the infusion, and another for the alarm bell when the infusion completes. Battlefields often have a need to perfuse large amounts of fluid quickly, with dramatically changing blood pressures and patient condition. Specialized infusion pumps have been designed for this purpose, although they have not been deployed. Many infusion pumps are controlled by a small embedded system. They are carefully designed so that no single cause of failure can harm the patient. For example, most have batteries in case the wall-socket power fails. Additional hazards are uncontrolled flow causing an overdose, uncontrolled lack of flow, causing an underdose, reverse flow, which can siphon blood from a patient, and air in the line, which can cause an air embolism. Elastomeric pumps, also known as balloon pumps or ball pumps, rely on the gradual contraction of an internal elastomeric reservoir to deliver medication at a pre-determined flow rate over several hours or days. These pumps do not require electricity and offer simplicity and portability, making them suitable for administering various medications, including antibiotics, in situations where continuous, low-rate infusion is required. These features make them useful for infusions in outpatient settings, such as outpatient parenteral antibiotic therapy (OPAT). However, due to their limited features and programmability, they are not suitable for all medications or flow rates. 
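The claim above that a high driving pressure through a small restrictor makes the flow less sensitive to the patient's venous pressure can be illustrated with a rough calculation. The sketch below is a toy model only: it assumes the delivered rate is proportional to the difference between source pressure and venous pressure (laminar, Poiseuille-type flow through a fixed restrictor), and the pressure values and function name are illustrative assumptions, not specifications of any real device.

```python
# Toy model: flow ~ (source pressure - venous pressure) through a fixed restrictor.
# Pressure figures are assumed, illustrative values.

def relative_flow_swing(p_source, p_vein_low=5.0, p_vein_high=15.0):
    """Fractional drop in flow as venous pressure rises from p_vein_low to p_vein_high (mmHg)."""
    q_low = p_source - p_vein_low    # flow at low venous pressure (arbitrary units)
    q_high = p_source - p_vein_high  # flow at high venous pressure
    return (q_low - q_high) / q_low

for label, p_source in [("gravity bag, ~1 m head (~75 mmHg)", 75.0),
                        ("pressurized bottle (~300 mmHg)", 300.0)]:
    print(f"{label}: flow varies by {relative_flow_swing(p_source):.1%}")
# gravity bag, ~1 m head (~75 mmHg): flow varies by 14.3%
# pressurized bottle (~300 mmHg): flow varies by 3.4%
```

In this example, the same swing in venous pressure changes the delivered rate about four times less against the high-pressure source than against a low-pressure gravity bag or improvised cuff, which is the behaviour described above.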
Safety features available on some pumps The range of safety features varies widely with the age and make of the pump. A state-of-the-art pump might have the following safety features: Certified to have no single point of failure. That is, no single cause of failure should cause the pump to silently fail to operate correctly. It should at least stop pumping and give an audible error indication. This is a minimum requirement on all human-rated infusion pumps of whatever age. It is not required for veterinary infusion pumps. Batteries, so the pump can operate if the power fails or is unplugged. Anti-free-flow devices prevent blood from draining from the patient, or infusate from freely entering the patient, when the infusion pump is being set up. A "down pressure" sensor will detect when the patient's vein is blocked, or the line to the patient is kinked. This may be configurable for high (subcutaneous and epidural) or low (venous) applications. An "air-in-line" detector. A typical detector will use an ultrasonic transmitter and receiver to detect when air is being pumped. Some pumps actually measure the volume, and may even have configurable volumes, from 0.1 to 2 ml of air. None of these amounts can cause harm, but sometimes the air can interfere with the infusion of a low-dose medicine. An "up pressure" sensor can detect when the bag or syringe is empty, or even if the bag or syringe is being squeezed. A drug library with customizable programmable limits for individual drugs that helps to avoid medication errors. Mechanisms to avoid uncontrolled flow of drugs in large volume pumps (often in combination with a giving-set-based free-flow clamp) and, increasingly, also in syringe pumps (piston-brake). Many pumps include an internal electronic log of the last several thousand therapy events. These are usually tagged with the time and date from the pump's clock. Usually, erasing the log is a feature protected by a security code, specifically to detect staff abuse of the pump or patient. Many makes of infusion pump can be configured to display only a small subset of features while they are operating, in order to prevent tampering by patients, untrained staff and visitors. By 2019, intravenous smart pumps were being introduced. They could include wireless connectivity, drug libraries, profiles of care areas, and soft and hard limits. Safety issues Infusion pumps have been a source of multiple patient safety concerns, and problems with such pumps have been linked to more than 56,000 adverse event reports from 2005 to 2009, including at least 500 deaths. As a result, the U.S. Food and Drug Administration (FDA) launched a comprehensive initiative to improve their safety, called the Infusion Pump Improvement Initiative. The initiative proposed stricter regulation of infusion pumps. It cited software defects, user interface issues, and mechanical or electrical failures as the main causes of adverse events. See also Intravenous drip Pharmacy informatics Syringe driver Total parenteral nutrition References Medical pumps Drug delivery devices Dosage forms
Infusion pump
Chemistry
2,102
8,524
https://en.wikipedia.org/wiki/Deuterium
Deuterium (hydrogen-2, symbol H or D, also known as heavy hydrogen) is one of two stable isotopes of hydrogen; the other is protium, or hydrogen-1, H. The deuterium nucleus (deuteron) contains one proton and one neutron, whereas the far more common H has no neutrons. Deuterium has a natural abundance in Earth's oceans of about one atom of deuterium in every 6,420 atoms of hydrogen. Thus, deuterium accounts for about 0.0156% by number (0.0312% by mass) of all hydrogen in the ocean: tonnes of deuterium – mainly as HOD (or HOH or HHO) and only rarely as DO (or HO) (deuterium oxide, also known as heavy water) – in tonnes of water. The abundance of H changes slightly from one kind of natural water to another (see Vienna Standard Mean Ocean Water). The name deuterium comes from Greek deuteros, meaning "second". American chemist Harold Urey discovered deuterium in 1931. Urey and others produced samples of heavy water in which the H had been highly concentrated. The discovery of deuterium won Urey a Nobel Prize in 1934. Deuterium is destroyed in the interiors of stars faster than it is produced. Other natural processes are thought to produce only an insignificant amount of deuterium. Nearly all deuterium found in nature was produced in the Big Bang 13.8 billion years ago, as the basic or primordial ratio of H to H (≈26 atoms of deuterium per 10 hydrogen atoms) has its origin from that time. This is the ratio found in the gas giant planets, such as Jupiter. The analysis of deuterium–protium ratios (HHR) in comets found results very similar to the mean ratio in Earth's oceans (156 atoms of deuterium per 10 hydrogen atoms). This reinforces theories that much of Earth's ocean water is of cometary origin. The HHR of comet 67P/Churyumov–Gerasimenko, as measured by the Rosetta space probe, is about three times that of Earth water. This figure is the highest yet measured in a comet. HHR's thus continue to be an active topic of research in both astronomy and climatology. Differences from common hydrogen (protium) Chemical symbol Deuterium is often represented by the chemical symbol D. Since it is an isotope of hydrogen with mass number 2, it is also represented by H. IUPAC allows both D and H, though H is preferred. A distinct chemical symbol is used for convenience because of the isotope's common use in various scientific processes. Also, its large mass difference with protium (H) confers non-negligible chemical differences with H compounds. Deuterium has a mass of , about twice the mean hydrogen atomic weight of , or twice protium's mass of . The isotope weight ratios within other elements are largely insignificant in this regard. Spectroscopy In quantum mechanics, the energy levels of electrons in atoms depend on the reduced mass of the system of electron and nucleus. For a hydrogen atom, the role of reduced mass is most simply seen in the Bohr model of the atom, where the reduced mass appears in a simple calculation of the Rydberg constant and Rydberg equation, but the reduced mass also appears in the Schrödinger equation, and the Dirac equation for calculating atomic energy levels. The reduced mass of the system in these equations is close to the mass of a single electron, but differs from it by a small amount about equal to the ratio of mass of the electron to the nucleus. For H, this amount is about , or 1.000545, and for H it is even smaller: , or 1.0002725. The energies of electronic spectra lines for H and H therefore differ by the ratio of these two numbers, which is 1.000272. 
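The figure of 1.000272 follows directly from the reduced-mass correction. A short worked derivation, using the standard mass ratios m_p ≈ 1836.15 m_e and m_d ≈ 3670.48 m_e (standard values, supplied here as assumptions rather than quoted from the text):

```latex
\mu = \frac{m_e M}{m_e + M}, \qquad E_n \propto \mu,
\qquad
\frac{m_e}{\mu_{\mathrm{H}}} = 1 + \frac{m_e}{m_p} \approx 1.000545,
\qquad
\frac{m_e}{\mu_{\mathrm{D}}} = 1 + \frac{m_e}{m_d} \approx 1.0002725,
\qquad
\frac{E_{\mathrm{D}}}{E_{\mathrm{H}}} = \frac{\mu_{\mathrm{D}}}{\mu_{\mathrm{H}}}
 = \frac{1.000545}{1.0002725} \approx 1.000272 .
```

Deuterium's energy levels are therefore deeper by about 0.0272%, the size of the spectral shift described below.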
The wavelengths of all deuterium spectroscopic lines are shorter than the corresponding lines of light hydrogen, by 0.0272%. In astronomical observation, this corresponds to a blue Doppler shift of 0.0272% of the speed of light, or 81.6 km/s. The differences are much more pronounced in vibrational spectroscopy such as infrared spectroscopy and Raman spectroscopy, and in rotational spectra such as microwave spectroscopy because the reduced mass of the deuterium is markedly higher than that of protium. In nuclear magnetic resonance spectroscopy, deuterium has a very different NMR frequency (e.g. 61 MHz when protium is at 400 MHz) and is much less sensitive. Deuterated solvents are usually used in protium NMR to prevent the solvent from overlapping with the signal, though deuterium NMR on its own right is also possible. Big Bang nucleosynthesis Deuterium is thought to have played an important role in setting the number and ratios of the elements that were formed in the Big Bang. Combining thermodynamics and the changes brought about by cosmic expansion, one can calculate the fraction of protons and neutrons based on the temperature at the point that the universe cooled enough to allow formation of nuclei. This calculation indicates seven protons for every neutron at the beginning of nucleogenesis, a ratio that would remain stable even after nucleogenesis was over. This fraction was in favor of protons initially, primarily because the lower mass of the proton favored their production. As the Universe expanded, it cooled. Free neutrons and protons are less stable than helium nuclei, and the protons and neutrons had a strong energetic reason to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium. Through much of the few minutes after the Big Bang during which nucleosynthesis could have occurred, the temperature was high enough that the mean energy per particle was greater than the binding energy of weakly bound deuterium; therefore, any deuterium that was formed was immediately destroyed. This situation is known as the deuterium bottleneck. The bottleneck delayed formation of any helium-4 until the Universe became cool enough to form deuterium (at about a temperature equivalent to 100 keV). At this point, there was a sudden burst of element formation (first deuterium, which immediately fused into helium). However, very soon thereafter, at twenty minutes after the Big Bang, the Universe became too cool for any further nuclear fusion or nucleosynthesis. At this point, the elemental abundances were nearly fixed, with the only change as some of the radioactive products of Big Bang nucleosynthesis (such as tritium) decay. The deuterium bottleneck in the formation of helium, together with the lack of stable ways for helium to combine with hydrogen or with itself (no stable nucleus has a mass number of 5 or 8) meant that an insignificant amount of carbon, or any elements heavier than carbon, formed in the Big Bang. These elements thus required formation in stars. At the same time, the failure of much nucleogenesis during the Big Bang ensured that there would be plenty of hydrogen in the later universe available to form long-lived stars, such as the Sun. Abundance Deuterium occurs in trace amounts naturally as deuterium gas (H or D), but most deuterium atoms in the Universe are bonded with H to form a gas called hydrogen deuteride (HD or HH). Similarly, natural water contains deuterated molecules, almost all as semiheavy water HDO with only one deuterium. 
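Why HD and HDO dominate over the doubly substituted molecules follows from simple statistics. Assuming deuterium atoms are distributed at random over hydrogen positions (an idealization; real samples show slight isotopic fractionation) with the ocean-water fraction x ≈ 1.56 × 10⁻⁴:

```latex
P(\mathrm{HD}) \approx 2x(1-x) \approx 3.1\times10^{-4},
\qquad
P(\mathrm{D_2}) \approx x^{2} \approx 2.4\times10^{-8},
\qquad
\frac{P(\mathrm{HD})}{P(\mathrm{D_2})} \approx \frac{2(1-x)}{x} \approx 1.3\times10^{4}.
```

The same counting applies to water: molecules carrying a single deuterium (HDO) outnumber fully substituted heavy water (D₂O) by roughly ten thousand to one.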
The existence of deuterium on Earth, elsewhere in the Solar System (as confirmed by planetary probes), and in the spectra of stars, is also an important datum in cosmology. Gamma radiation from ordinary nuclear fusion dissociates deuterium into protons and neutrons, and there is no known natural process other than Big Bang nucleosynthesis that might have produced deuterium at anything close to its observed natural abundance. Deuterium is produced by the rare cluster decay, and occasional absorption of naturally occurring neutrons by light hydrogen, but these are trivial sources. There is thought to be little deuterium in the interior of the Sun and other stars, as at these temperatures the nuclear fusion reactions that consume deuterium happen much faster than the proton–proton reaction that creates deuterium. However, deuterium persists in the outer solar atmosphere at roughly the same concentration as in Jupiter, and this has probably been unchanged since the origin of the Solar System. The natural abundance of H seems to be a very similar fraction of hydrogen, wherever hydrogen is found, unless there are obvious processes at work that concentrate it. The existence of deuterium at a low but constant primordial fraction in all hydrogen is another one of the arguments in favor of the Big Bang over the Steady State theory of the Universe. The observed ratios of hydrogen to helium to deuterium in the universe are difficult to explain except with a Big Bang model. It is estimated that the abundances of deuterium have not evolved significantly since their production about 13.8 billion years ago. Measurements of Milky Way galactic deuterium from ultraviolet spectral analysis show a ratio of as much as 23 atoms of deuterium per million hydrogen atoms in undisturbed gas clouds, which is only 15% below the WMAP estimated primordial ratio of about 27 atoms per million from the Big Bang. This has been interpreted to mean that less deuterium has been destroyed in star formation in the Milky Way galaxy than expected, or perhaps deuterium has been replenished by a large in-fall of primordial hydrogen from outside the galaxy. In space a few hundred light years from the Sun, deuterium abundance is only 15 atoms per million, but this value is presumably influenced by differential adsorption of deuterium onto carbon dust grains in interstellar space. The abundance of deuterium in Jupiter's atmosphere has been directly measured by the Galileo space probe as 26 atoms per million hydrogen atoms. ISO-SWS observations find 22 atoms per million hydrogen atoms in Jupiter. and this abundance is thought to represent close to the primordial Solar System ratio. This is about 17% of the terrestrial ratio of 156 deuterium atoms per million hydrogen atoms. Comets such as Comet Hale-Bopp and Halley's Comet have been measured to contain more deuterium (about 200 atoms per million hydrogens), ratios which are enriched with respect to the presumed protosolar nebula ratio, probably due to heating, and which are similar to the ratios found in Earth seawater. The recent measurement of deuterium amounts of 161 atoms per million hydrogen in Comet 103P/Hartley (a former Kuiper belt object), a ratio almost exactly that in Earth's oceans (155.76 ± 0.1, but in fact from 153 to 156 ppm), emphasizes the theory that Earth's surface water may be largely from comets. Most recently the HHR of 67P/Churyumov–Gerasimenko as measured by Rosetta is about three times that of Earth water. 
This has caused renewed interest in suggestions that Earth's water may be partly of asteroidal origin. Deuterium has also been observed to be concentrated over the mean solar abundance in other terrestrial planets, in particular Mars and Venus. Production Deuterium is produced for industrial, scientific and military purposes, by starting with ordinary water—a small fraction of which is naturally occurring heavy water—and then separating out the heavy water by the Girdler sulfide process, distillation, or other methods. In theory, deuterium for heavy water could be created in a nuclear reactor, but separation from ordinary water is the cheapest bulk production process. The world's leading supplier of deuterium was Atomic Energy of Canada Limited until 1997, when the last heavy water plant was shut down. Canada uses heavy water as a neutron moderator for the operation of the CANDU reactor design. Another major producer of heavy water is India. All but one of India's atomic energy plants are pressurized heavy water plants, which use natural (i.e., not enriched) uranium. India has eight heavy water plants, of which seven are in operation. Six plants, of which five are in operation, are based on D–H exchange in ammonia gas. The other two plants extract deuterium from natural water in a process that uses hydrogen sulfide gas at high pressure. While India is self-sufficient in heavy water for its own use, India also exports reactor-grade heavy water. Properties Data for molecular deuterium Formula: or Density: 0.180 kg/m at STP (0 °C, 101325 Pa). Atomic weight: 2.0141017926 Da. Mean abundance in ocean water (from VSMOW) 155.76 ± 0.1 atoms of deuterium per million atoms of all isotopes of hydrogen (about 1 atom of in 6420); that is, about 0.015% of all atoms of hydrogen (any isotope) Data at about 18 K for H (triple point): Density: Liquid: 162.4 kg/m Gas: 0.452 kg/m Liquefied HO: 1105.2 kg/m at STP Viscosity: 12.6 μPa·s at 300 K (gas phase) Specific heat capacity at constant pressure c: Solid: 2950 J/(kg·K) Gas: 5200 J/(kg·K) Physical properties Compared to hydrogen in its natural composition on Earth, pure deuterium (H) has a higher melting point (18.72 K vs. 13.99 K), a higher boiling point (23.64 vs. 20.27 K), a higher critical temperature (38.3 vs. 32.94 K) and a higher critical pressure (1.6496 vs. 1.2858 MPa). The physical properties of deuterium compounds can exhibit significant kinetic isotope effects and other physical and chemical property differences from the protium analogs. HO, for example, is more viscous than normal . There are differences in bond energy and length for compounds of heavy hydrogen isotopes compared to protium, which are larger than the isotopic differences in any other element. Bonds involving deuterium and tritium are somewhat stronger than the corresponding bonds in protium, and these differences are enough to cause significant changes in biological reactions. Pharmaceutical firms are interested in the fact that H is harder to remove from carbon than H. Deuterium can replace H in water molecules to form heavy water (HO), which is about 10.6% denser than normal water (so that ice made from it sinks in normal water). Heavy water is slightly toxic in eukaryotic animals, with 25% substitution of the body water causing cell division problems and sterility, and 50% substitution causing death by cytotoxic syndrome (bone marrow failure and gastrointestinal lining failure). 
Prokaryotic organisms, however, can survive and grow in pure heavy water, though they develop slowly. Despite this toxicity, consumption of heavy water under normal circumstances does not pose a health threat to humans. It is estimated that a person might drink of heavy water without serious consequences. Small doses of heavy water (a few grams in humans, containing an amount of deuterium comparable to that normally present in the body) are routinely used as harmless metabolic tracers in humans and animals. Quantum properties The deuteron has spin +1 ("triplet state") and is thus a boson. The NMR frequency of deuterium is significantly different from normal hydrogen. Infrared spectroscopy also easily differentiates many deuterated compounds, due to the large difference in IR absorption frequency seen in the vibration of a chemical bond containing deuterium, versus light hydrogen. The two stable isotopes of hydrogen can also be distinguished by using mass spectrometry. The triplet deuteron nucleon is barely bound at , and none of the higher energy states are bound. The singlet deuteron is a virtual state, with a negative binding energy of . There is no such stable particle, but this virtual particle transiently exists during neutron–proton inelastic scattering, accounting for the unusually large neutron scattering cross-section of the proton. Nuclear properties (deuteron) Deuteron mass and radius The deuterium nucleus is called a deuteron. It has a mass of (just over ). The charge radius of a deuteron is Like the proton radius, measurements using muonic deuterium produce a smaller result: . Spin and energy Deuterium is one of only five stable nuclides with an odd number of protons and an odd number of neutrons. (H, Li, B, N, Ta; the long-lived radionuclides K, V, La, Lu also occur naturally.) Most odd–odd nuclei are unstable to beta decay, because the decay products are even–even, and thus more strongly bound, due to nuclear pairing effects. Deuterium, however, benefits from having its proton and neutron coupled to a spin-1 state, which gives a stronger nuclear attraction; the corresponding spin-1 state does not exist in the two-neutron or two-proton system, due to the Pauli exclusion principle which would require one or the other identical particle with the same spin to have some other different quantum number, such as orbital angular momentum. But orbital angular momentum of either particle gives a lower binding energy for the system, mainly due to increasing distance of the particles in the steep gradient of the nuclear force. In both cases, this causes the diproton and dineutron to be unstable. The proton and neutron in deuterium can be dissociated through neutral current interactions with neutrinos. The cross section for this interaction is comparatively large, and deuterium was successfully used as a neutrino target in the Sudbury Neutrino Observatory experiment. Diatomic deuterium (H) has ortho and para nuclear spin isomers like diatomic hydrogen, but with differences in the number and population of spin states and rotational levels, which occur because the deuteron is a boson with nuclear spin equal to one. Isospin singlet state of the deuteron Due to the similarity in mass and nuclear properties between the proton and neutron, they are sometimes considered as two symmetric types of the same object, a nucleon. While only the proton has electric charge, this is often negligible due to the weakness of the electromagnetic interaction relative to the strong nuclear interaction. 
The symmetry relating the proton and neutron is known as isospin and denoted I (or sometimes T). Isospin is an SU(2) symmetry, like ordinary spin, so is completely analogous to it. The proton and neutron, each of which have isospin-1/2, form an isospin doublet (analogous to a spin doublet), with a "down" state (↓) being a neutron and an "up" state (↑) being a proton. A pair of nucleons can either be in an antisymmetric state of isospin called singlet, or in a symmetric state called triplet. In terms of the "down" state and "up" state, the singlet is , which can also be written : This is a nucleus with one proton and one neutron, i.e. a deuterium nucleus. The triplet is and thus consists of three types of nuclei, which are supposed to be symmetric: a deuterium nucleus (actually a highly excited state of it), a nucleus with two protons, and a nucleus with two neutrons. These states are not stable. Approximated wavefunction of the deuteron The deuteron wavefunction must be antisymmetric if the isospin representation is used (since a proton and a neutron are not identical particles, the wavefunction need not be antisymmetric in general). Apart from their isospin, the two nucleons also have spin and spatial distributions of their wavefunction. The latter is symmetric if the deuteron is symmetric under parity (i.e. has an "even" or "positive" parity), and antisymmetric if the deuteron is antisymmetric under parity (i.e. has an "odd" or "negative" parity). The parity is fully determined by the total orbital angular momentum of the two nucleons: if it is even then the parity is even (positive), and if it is odd then the parity is odd (negative). The deuteron, being an isospin singlet, is antisymmetric under nucleons exchange due to isospin, and therefore must be symmetric under the double exchange of their spin and location. Therefore, it can be in either of the following two different states: Symmetric spin and symmetric under parity. In this case, the exchange of the two nucleons will multiply the deuterium wavefunction by (−1) from isospin exchange, (+1) from spin exchange and (+1) from parity (location exchange), for a total of (−1) as needed for antisymmetry. Antisymmetric spin and antisymmetric under parity. In this case, the exchange of the two nucleons will multiply the deuterium wavefunction by (−1) from isospin exchange, (−1) from spin exchange and (−1) from parity (location exchange), again for a total of (−1) as needed for antisymmetry. In the first case the deuteron is a spin triplet, so that its total spin s is 1. It also has an even parity and therefore even orbital angular momentum l. The lower its orbital angular momentum, the lower its energy. Therefore, the lowest possible energy state has , . In the second case the deuteron is a spin singlet, so that its total spin s is 0. It also has an odd parity and therefore odd orbital angular momentum l. Therefore, the lowest possible energy state has , . Since gives a stronger nuclear attraction, the deuterium ground state is in the , state. The same considerations lead to the possible states of an isospin triplet having , or , . Thus, the state of lowest energy has , , higher than that of the isospin singlet. The analysis just given is in fact only approximate, both because isospin is not an exact symmetry, and more importantly because the strong nuclear interaction between the two nucleons is related to angular momentum in spin–orbit interaction that mixes different s and l states. 
That is, s and l are not constant in time (they do not commute with the Hamiltonian), and over time a state such as , may become a state of , . Parity is still constant in time, so these do not mix with odd l states (such as , ). Therefore, the quantum state of the deuterium is a superposition (a linear combination) of the , state and the , state, even though the first component is much bigger. Since the total angular momentum j is also a good quantum number (it is a constant in time), both components must have the same j, and therefore . This is the total spin of the deuterium nucleus. To summarize, the deuterium nucleus is antisymmetric in terms of isospin, and has spin 1 and even (+1) parity. The relative angular momentum of its nucleons l is not well defined, and the deuteron is a superposition of mostly with some . Magnetic and electric multipoles In order to find theoretically the deuterium magnetic dipole moment μ, one uses the formula for a nuclear magnetic moment with g and g are g-factors of the nucleons. Since the proton and neutron have different values for g and g, one must separate their contributions. Each gets half of the deuterium orbital angular momentum and spin . One arrives at where subscripts p and n stand for the proton and neutron, and . By using the same identities as here and using the value , one gets the following result, in units of the nuclear magneton μ For the , state (), we obtain For the , state (), we obtain The measured value of the deuterium magnetic dipole moment, is , which is 97.5% of the value obtained by simply adding moments of the proton and neutron. This suggests that the state of the deuterium is indeed to a good approximation , state, which occurs with both nucleons spinning in the same direction, but their magnetic moments subtracting because of the neutron's negative moment. But the slightly lower experimental number than that which results from simple addition of proton and (negative) neutron moments shows that deuterium is actually a linear combination of mostly , state with a slight admixture of , state. The electric dipole is zero as usual. The measured electric quadrupole of the deuterium is . While the order of magnitude is reasonable, since the deuteron radius is of order of 1 femtometer (see below) and its electric charge is e, the above model does not suffice for its computation. More specifically, the electric quadrupole does not get a contribution from the state (which is the dominant one) and does get a contribution from a term mixing the and the states, because the electric quadrupole operator does not commute with angular momentum. The latter contribution is dominant in the absence of a pure contribution, but cannot be calculated without knowing the exact spatial form of the nucleons wavefunction inside the deuterium. Higher magnetic and electric multipole moments cannot be calculated by the above model, for similar reasons. Applications Nuclear reactors Deuterium is used in heavy water moderated fission reactors, usually as liquid HO, to slow neutrons without the high neutron absorption of ordinary hydrogen. This is a common commercial use for larger amounts of deuterium. In research reactors, liquid H is used in cold sources to moderate neutrons to very low energies and wavelengths appropriate for scattering experiments. 
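How well deuterium works as a moderator can be sketched with the elementary slowing-down model, in which the mean logarithmic energy loss per elastic collision, ξ, depends only on the mass number A of the moderating nucleus. The snippet below is a back-of-the-envelope illustration using the standard textbook formula and assumed energies (2 MeV fission neutrons slowed to 0.025 eV), not reactor design code; the function names are illustrative. It shows that deuterium needs only modestly more collisions than light hydrogen to thermalize a neutron, while, as noted above, absorbing far fewer of them, which is what lets heavy-water-moderated designs such as CANDU run on natural uranium.

```python
import math

def xi(A):
    """Mean logarithmic energy decrement per elastic collision for mass number A."""
    if A == 1:
        return 1.0  # limiting value for hydrogen-1
    return 1.0 + (A - 1) ** 2 / (2 * A) * math.log((A - 1) / (A + 1))

def collisions_to_thermalize(A, e0_ev=2.0e6, eth_ev=0.025):
    """Approximate number of elastic collisions to slow a 2 MeV neutron to thermal energy."""
    return math.log(e0_ev / eth_ev) / xi(A)

for name, A in [("hydrogen-1", 1), ("deuterium", 2), ("carbon-12", 12)]:
    print(f"{name}: ~{collisions_to_thermalize(A):.0f} collisions")
# hydrogen-1: ~18 collisions
# deuterium: ~25 collisions
# carbon-12: ~115 collisions
```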
Experimentally, deuterium is the most common nuclide used in fusion reactor designs, especially in combination with tritium, because of the large reaction rate (or nuclear cross section) and high energy yield of the deuterium–tritium (DT) reaction. There is an even higher-yield H–He fusion reaction, though the breakeven point of H–He is higher than that of most other fusion reactions; together with the scarcity of He, this makes it implausible as a practical power source, at least until DT and deuterium–deuterium (DD) fusion have been performed on a commercial scale. Commercial nuclear fusion is not yet an accomplished technology. NMR spectroscopy Deuterium is most commonly used in hydrogen nuclear magnetic resonance spectroscopy (proton NMR) in the following way. NMR ordinarily requires compounds of interest to be analyzed as dissolved in solution. Because of deuterium's nuclear spin properties which differ from the light hydrogen usually present in organic molecules, NMR spectra of hydrogen/protium are highly differentiable from that of deuterium, and in practice deuterium is not "seen" by an NMR instrument tuned for H. Deuterated solvents (including heavy water, but also compounds like deuterated chloroform, CDCl or CHCl, are therefore routinely used in NMR spectroscopy, in order to allow only the light-hydrogen spectra of the compound of interest to be measured, without solvent-signal interference. Nuclear magnetic resonance spectroscopy can also be used to obtain information about the deuteron's environment in isotopically labelled samples (deuterium NMR). For example, the configuration of hydrocarbon chains in lipid bilayers can be quantified using solid state deuterium NMR with deuterium-labelled lipid molecules. Deuterium NMR spectra are especially informative in the solid state because of its relatively small quadrupole moment in comparison with those of bigger quadrupolar nuclei such as chlorine-35, for example. Mass spectrometry Deuterated (i.e. where all or some hydrogen atoms are replaced with deuterium) compounds are often used as internal standards in mass spectrometry. Like other isotopically labeled species, such standards improve accuracy, while often at a much lower cost than other isotopically labeled standards. Deuterated molecules are usually prepared via hydrogen isotope exchange reactions. Tracing In chemistry, biochemistry and environmental sciences, deuterium is used as a non-radioactive, stable isotopic tracer, for example, in the doubly labeled water test. In chemical reactions and metabolic pathways, deuterium behaves somewhat similarly to ordinary hydrogen (with a few chemical differences, as noted). It can be distinguished from normal hydrogen most easily by its mass, using mass spectrometry or infrared spectrometry. Deuterium can be detected by femtosecond infrared spectroscopy, since the mass difference drastically affects the frequency of molecular vibrations; H–carbon bond vibrations are found in spectral regions free of other signals. Measurements of small variations in the natural abundances of deuterium, along with those of the stable heavy oxygen isotopes O and O, are of importance in hydrology, to trace the geographic origin of Earth's waters. The heavy isotopes of hydrogen and oxygen in rainwater (meteoric water) are enriched as a function of the environmental temperature of the region in which the precipitation falls (and thus enrichment is related to latitude). 
The relative enrichment of the heavy isotopes in rainwater (as referenced to mean ocean water), when plotted against temperature falls predictably along a line called the global meteoric water line (GMWL). This plot allows samples of precipitation-originated water to be identified along with general information about the climate in which it originated. Evaporative and other processes in bodies of water, and also ground water processes, also differentially alter the ratios of heavy hydrogen and oxygen isotopes in fresh and salt waters, in characteristic and often regionally distinctive ways. The ratio of concentration of H to H is usually indicated with a delta as δH and the geographic patterns of these values are plotted in maps termed as isoscapes. Stable isotopes are incorporated into plants and animals and an analysis of the ratios in a migrant bird or insect can help suggest a rough guide to their origins. Contrast properties Neutron scattering techniques particularly profit from availability of deuterated samples: The H and H cross sections are very distinct and different in sign, which allows contrast variation in such experiments. Further, a nuisance problem of normal hydrogen is its large incoherent neutron cross section, which is nil for H. The substitution of deuterium for normal hydrogen thus reduces scattering noise. Hydrogen is an important and major component in all materials of organic chemistry and life science, but it barely interacts with X-rays. As hydrogen atoms (including deuterium) interact strongly with neutrons; neutron scattering techniques, together with a modern deuteration facility, fills a niche in many studies of macromolecules in biology and many other areas. Nuclear weapons See below. Most stars, including the Sun, generate energy over most of their lives by fusing hydrogen into heavier elements; yet such fusion of light hydrogen (protium) has never been successful in the conditions attainable on Earth. Thus, all artificial fusion, including the hydrogen fusion in hydrogen bombs, requires heavy hydrogen (deuterium, tritium, or both). Drugs A deuterated drug is a small molecule medicinal product in which one or more of the hydrogen atoms in the drug molecule have been replaced by deuterium. Because of the kinetic isotope effect, deuterium-containing drugs may have significantly lower rates of metabolism, and hence a longer half-life. In 2017, deutetrabenazine became the first deuterated drug to receive FDA approval. Reinforced essential nutrients Deuterium can be used to reinforce specific oxidation-vulnerable C–H bonds within essential or conditionally essential nutrients, such as certain amino acids, or polyunsaturated fatty acids (PUFA), making them more resistant to oxidative damage. Deuterated polyunsaturated fatty acids, such as linoleic acid, slow down the chain reaction of lipid peroxidation that damage living cells. Deuterated ethyl ester of linoleic acid (RT001), developed by Retrotope, is in a compassionate use trial in infantile neuroaxonal dystrophy and has successfully completed a Phase I/II trial in Friedreich's ataxia. Thermostabilization Live vaccines, such as oral polio vaccine, can be stabilized by deuterium, either alone or in combination with other stabilizers such as MgCl. Slowing circadian oscillations Deuterium has been shown to lengthen the period of oscillation of the circadian clock when dosed in rats, hamsters, and Gonyaulax dinoflagellates. 
In rats, chronic intake of 25% HO disrupts circadian rhythm by lengthening the circadian period of suprachiasmatic nucleus-dependent rhythms in the brain's hypothalamus. Experiments in hamsters also support the theory that deuterium acts directly on the suprachiasmatic nucleus to lengthen the free-running circadian period. History Suspicion of lighter element isotopes The existence of nonradioactive isotopes of lighter elements had been suspected in studies of neon as early as 1913, and proven by mass spectrometry of light elements in 1920. At that time the neutron had not yet been discovered, and the prevailing theory was that isotopes of an element differ by the existence of additional protons in the nucleus accompanied by an equal number of nuclear electrons. In this theory, the deuterium nucleus with mass two and charge one would contain two protons and one nuclear electron. However, it was expected that the element hydrogen with a measured average atomic mass very close to , the known mass of the proton, always has a nucleus composed of a single proton (a known particle), and could not contain a second proton. Thus, hydrogen was thought to have no heavy isotopes. Deuterium detected It was first detected spectroscopically in late 1931 by Harold Urey, a chemist at Columbia University. Urey's collaborator, Ferdinand Brickwedde, distilled five liters of cryogenically produced liquid hydrogen to of liquid, using the low-temperature physics laboratory that had recently been established at the National Bureau of Standards (now National Institute of Standards and Technology) in Washington, DC. The technique had previously been used to isolate heavy isotopes of neon. The cryogenic boiloff technique concentrated the fraction of the mass-2 isotope of hydrogen to a degree that made its spectroscopic identification unambiguous. Naming of the isotope and Nobel Prize Urey created the names protium, deuterium, and tritium in an article published in 1934. The name is based in part on advice from Gilbert N. Lewis who had proposed the name "deutium". The name comes from Greek deuteros 'second', and the nucleus was to be called a "deuteron" or "deuton". Isotopes and new elements were traditionally given the name that their discoverer decided. Some British scientists, such as Ernest Rutherford, wanted to call the isotope "diplogen", from Greek diploos 'double', and the nucleus to be called "diplon". The amount inferred for normal abundance of deuterium was so small (only about 1 atom in 6400 hydrogen atoms in seawater [156 parts per million]) that it had not noticeably affected previous measurements of (average) hydrogen atomic mass. This explained why it hadn't been suspected before. Urey was able to concentrate water to show partial enrichment of deuterium. Lewis, Urey's graduate advisor at Berkeley, had prepared and characterized the first samples of pure heavy water in 1933. The discovery of deuterium, coming before the discovery of the neutron in 1932, was an experimental shock to theory; but when the neutron was reported, making deuterium's existence more explicable, Urey was awarded the Nobel Prize in Chemistry only three years after the isotope's isolation. Lewis was deeply disappointed by the Nobel Committee's decision in 1934 and several high-ranking administrators at Berkeley believed this disappointment played a central role in his suicide a decade later. 
"Heavy water" experiments in World War II Shortly before the war, Hans von Halban and Lew Kowarski moved their research on neutron moderation from France to Britain, smuggling the entire global supply of heavy water (which had been made in Norway) across in twenty-six steel drums. During World War II, Nazi Germany was known to be conducting experiments using heavy water as moderator for a nuclear reactor design. Such experiments were a source of concern because they might allow them to produce plutonium for an atomic bomb. Ultimately it led to the Allied operation called the "Norwegian heavy water sabotage", the purpose of which was to destroy the Vemork deuterium production/enrichment facility in Norway. At the time this was considered important to the potential progress of the war. After World War II ended, the Allies discovered that Germany was not putting as much serious effort into the program as had been previously thought. The Germans had completed only a small, partly built experimental reactor (which had been hidden away) and had been unable to sustain a chain reaction. By the end of the war, the Germans did not even have a fifth of the amount of heavy water needed to run the reactor, partially due to the Norwegian heavy water sabotage operation. However, even if the Germans had succeeded in getting a reactor operational (as the U.S. did with Chicago Pile-1 in late 1942), they would still have been at least several years away from the development of an atomic bomb. The engineering process, even with maximal effort and funding, required about two and a half years (from first critical reactor to bomb) in both the U.S. and U.S.S.R., for example. In thermonuclear weapons The 62-ton Ivy Mike device built by the United States and exploded on 1 November 1952, was the first fully successful hydrogen bomb (thermonuclear bomb). In this context, it was the first bomb in which most of the energy released came from nuclear reaction stages that followed the primary nuclear fission stage of the atomic bomb. The Ivy Mike bomb was a factory-like building, rather than a deliverable weapon. At its center, a very large cylindrical, insulated vacuum flask or cryostat, held cryogenic liquid deuterium in a volume of about 1000 liters (160 kilograms in mass, if this volume had been completely filled). Then, a conventional atomic bomb (the "primary") at one end of the bomb was used to create the conditions of extreme temperature and pressure that were needed to set off the thermonuclear reaction. Within a few years, so-called "dry" hydrogen bombs were developed that did not need cryogenic hydrogen. Released information suggests that all thermonuclear weapons built since then contain chemical compounds of deuterium and lithium in their secondary stages. The material that contains the deuterium is mostly lithium deuteride, with the lithium consisting of the isotope lithium-6. When the lithium-6 is bombarded with fast neutrons from the atomic bomb, tritium (hydrogen-3) is produced, and then the deuterium and the tritium quickly engage in thermonuclear fusion, releasing abundant energy, helium-4, and even more free neutrons. "Pure" fusion weapons such as the Tsar Bomba are believed to be obsolete. In most modern ("boosted") thermonuclear weapons, fusion directly provides only a small fraction of the total energy. Fission of a natural uranium-238 tamper by fast neutrons produced from D–T fusion accounts for a much larger (i.e. boosted) energy release than the fusion reaction itself. 
Modern research In August 2018, scientists announced the transformation of gaseous deuterium into a liquid metallic form. This may help researchers better understand gas giant planets, such as Jupiter, Saturn and some exoplanets, since such planets are thought to contain a lot of liquid metallic hydrogen, which may be responsible for their observed powerful magnetic fields. Antideuterium An antideuteron is the antimatter counterpart of the deuteron, consisting of an antiproton and an antineutron. The antideuteron was first produced in 1965 at the Proton Synchrotron at CERN and the Alternating Gradient Synchrotron at Brookhaven National Laboratory. A complete atom, with a positron orbiting the nucleus, would be called antideuterium, but antideuterium has not yet been created. The proposed symbol for antideuterium is , that is, D with an overbar. See also Isotopes of hydrogen Tokamak References External links Environmental isotopes Isotopes of hydrogen Neutron moderators Nuclear fusion fuels Nuclear materials Subatomic particles with spin 1 Medical isotopes
Deuterium
Physics,Chemistry
8,627
41,868
https://en.wikipedia.org/wiki/Wideband%20modem
In telecommunications, the term wideband modem has the following meanings: A modem whose modulated output signal can have an essential frequency spectrum that is broader than that which can be wholly contained within, and faithfully transmitted through, a voice channel with a nominal 4 kHz bandwidth. A modem whose bandwidth capability is greater than that of a narrowband modem. References Networking hardware Modems
Wideband modem
Technology,Engineering
80
21,581,762
https://en.wikipedia.org/wiki/Herbert%20Van%20de%20Sompel
Herbert Van de Sompel is a Belgian librarian, computer scientist, and musician, best known for his role in the development of the Open Archives Initiative (OAI) and standards such as OpenURL, Object Reuse and Exchange, and the OAI Protocol for Metadata Harvesting. Career Van de Sompel was born on March 20, 1957, in Ghent, Belgium. His PhD dissertation in 2000 developed the idea of context-sensitive and dynamic linking of scholarly information resources. In this work, he developed the "concepts underlying the OpenURL framework for open reference linking in the web-based scholarly information environment." Van de Sompel's OpenURL framework allowed for the development of software known as link resolvers, which take metadata about a source and attempt to find its full text. Link resolver web services have become widely used within academic libraries. His PhD dissertation and later development of the OpenURL standard led to the development and commercialization of the SFX link resolver and a series of products for libraries using the approach pioneered by Van de Sompel. In 2006, Van de Sompel was named the first recipient of the SPARC Innovator Award by the Scholarly Publishing and Academic Resources Coalition (SPARC) for starting the Open Archives Initiative (OAI) and the open reference linking framework (OpenURL). In 2017, Van de Sompel received the Paul Evan Peters Award from the Coalition for Networked Information (CNI). From 2002 to the end of 2018, Van de Sompel held positions at the Los Alamos National Laboratory Research Library, including as the Lead of the Digital Library Research and Prototyping Team. Notable work includes the Memento Project, which aims to make accessing archived web content as easy as visiting current web pages. From 2019 to early 2021, he served as chief innovation officer of Data Archiving and Networked Services (DANS) in the Netherlands. After his wife was appointed to a position in Vienna, he resigned from DANS while keeping professional connections to both DANS and Ghent University. Though best known for his work as a computer scientist, Van de Sompel also made music earlier in his life, including two albums, "Unploughed" and "Not a Festschrift- Geschnitzte Figuren," with his group, Young Farmers Claim Future. References External links Herbert van de Sompel: Dynamic and context-sensitive linking of scholarly information. Dissertation Universiteit Gent, 2000. Internet architecture Living people Belgian librarians 1957 births Los Alamos National Laboratory personnel
Herbert Van de Sompel
Technology
522
11,406,931
https://en.wikipedia.org/wiki/Umbrella%20species
Umbrella species are species selected for making conservation-related decisions, typically because protecting these species indirectly protects the many other species that make up the ecological community of its habitat (the umbrella effect). Species conservation can be subjective because it is hard to determine the status of many species. The umbrella species is often either a flagship species whose conservation benefits other species or a keystone species which may be targeted for conservation due to its impact on an ecosystem. Umbrella species can be used to help select the locations of potential reserves, find the minimum size of these conservation areas or reserves, and to determine the composition, structure, and processes of ecosystems. Definitions Two commonly used definitions are: "A wide-ranging species whose requirements include those of many other species" A species with large area requirements for which protection of the species offers protection to other species that share the same habitat Other descriptions include: "Traditional umbrella species, relatively large-bodied and wide-ranging species of higher vertebrates" Animals may also be considered umbrella species if they are charismatic. The hope is that species that appeal to popular audiences, such as pandas, will attract support for habitat conservation in general. In land use management In the two decades after its inception, the use of umbrella species as a conservation tool has been highly debated. The term was first used by Bruce Wilcox in 1984, who defined an umbrella species as one whose minimum area requirements are at least as comprehensive of the rest of the community for which protection is sought through the establishment and management of a protected area. Some scientists have found that the use of an umbrella species approach can provide a more streamlined way to manage ecological communities. Others have proposed that umbrella species in combination with other tools will more effectively protect other species in land management reserves than using umbrella species alone. Individual invertebrate species can be good umbrella species because they can protect older, unique ecosystems. There have been cases where umbrella species have protected a large amount of area which has been beneficial to surrounding species. Dunk, Zielinski and Welsh (2006) reported that the reserves in Northern California (the Klamath-Siskiyou forests), set aside for the northern spotted owl, also protect mollusks and salamanders within that habitat. They found that the reserves set aside for the northern spotted owl "serve as a reasonable coarse-filter umbrella species for the taxa evaluated", which were mollusks and salamanders. Gilby and colleagues (2017) found that using threatened species as umbrellas or "surrogates" for management targets could improve conservation outcomes in coastal areas. Wildlife corridors The concept of an umbrella species is further utilized to create wildlife corridors with what are termed focal species. These focal species are chosen for a number of reasons and fall into several types, generally measured by their potential for an umbrella effect. By carefully choosing species based on this criterion, a linked or networked habitat can be created from single-species corridors. These criteria are determined with the assistance of geographic information systems on the larger scale. 
Regardless of the location or scale of conservation, the umbrella effect is a measure of a species' impact on others and is an important part of determining an approach. In the Endangered Species Act (US) The bay checkerspot butterfly has been on the Endangered Species List since 1987. Launer and Murphy (1994) tried to determine whether this butterfly could be considered an umbrella species in protecting the native grassland it inhabits. They discovered that the Endangered Species Act has a loophole excluding federally protected plants on private property. However, the California Environmental Quality Act reinforces state conservation regulations. Using the Endangered Species Act to protect species designated as umbrella species, and their habitats, can be controversial because such protections are not enforced as strongly in some states as in others (such as California) for the purpose of protecting overall biodiversity. Examples Northern spotted owls and old-growth forest: Molluscs and salamanders are within the protective boundaries of the northern spotted owl. Bay checkerspot butterfly and grasslands Red-cockaded woodpeckers and Southeastern pine grasslands Amur tigers in the Russian Far East are considered umbrella/keystone species due to their impact on the deer and boar in their ecosystem Right whales Sharks Giant pandas and mountain ranges in China Jaguars and herpetofauna Canebrake and other species See also Conservation biology Dominant species Ecological network Ecosystem engineer Foundation species Green corridors Flagship species Indigenous Indicator species Introduced species Keystone species Landscape ecology References Further reading External links NOAA The Endangered Species Act of 1973 U.S. Fish and Wildlife Service Bay checkerspot butterfly Northern Spotted Owl Conservation biology
Umbrella species
Biology
913
72,057,432
https://en.wikipedia.org/wiki/Ancient%20Apocalypse
Ancient Apocalypse is a Netflix series, where the British writer Graham Hancock presents his pseudoarchaeological theory that there was an advanced civilization during the last ice age and that it was destroyed as a result of meteor impacts around 12,000 years ago. He argues that the survivors passed on their knowledge to hunter-gatherers around the world, giving rise to all earliest known civilizations. The episodes feature Hancock visiting archaeological sites and natural features which he claims show evidence of this. He repeatedly alleges that archaeologists are ignoring or covering up the evidence. Archaeologists and other experts say that the series presents pseudoscientific claims that lack evidence, cherry picks, and fails to present the counter-evidence. The documentary was also criticised for delegitimising the achievements of Indigenous peoples. Some non-academic reviewers also found the theories unconvincing and criticized Hancock's complaints about 'mainstream archaeology' as one-sided and evocative of conspiracy theories. Some experts featured in the first series complained that footage of them was presented in a misleading way. The first season of the series, produced by ITN Productions, was released on Netflix in November 2022. A second season, featuring actor Keanu Reeves alongside Hancock, is focused on the Americas and was released in October 2024. Synopsis In the series, Hancock argues that there was an advanced civilization during the last ice age. He speculates that it was destroyed around 12,000 years ago by sudden climate change during the Younger Dryas cool period, but that its few survivors taught agriculture, monumental architecture and astronomy to primitive hunter-gatherers around the world. Hancock does not accept that the earliest known civilizations could have arisen independently or that faraway peoples developed the same ideas, and argues that they all came from one advanced ice age civilization. He attempts to show how several ancient monuments and myths are evidence of this, and claims that archaeologists are ignoring or covering up this alleged evidence. It incorporates the controversial Younger Dryas impact hypothesis, which has been comprehensively refuted, and which attributes climate change to an impact winter caused by a massive meteor bombardment. Production and release The series was produced by ITN Productions and released by Netflix on 10 November 2022. Hancock's son Sean is a manager at Netflix responsible for "unscripted originals". It was the second most-watched series on Netflix in its week of release. Two archaeologists who were featured in the first season, Katya Stroud, a senior curator at Heritage Malta, and Necmi Karul, the director of excavations at Göbekli Tepe, said that their interviews were manipulated and presented out of context. A second season was released on Netflix on 16 October 2024 and featured the actor Keanu Reeves alongside Hancock. Plans to film parts of the second season of the show in the USA were cancelled following opposition from Indigenous groups over Hancock's depiction of their history and culture. Episodes Season one Season two (Ancient Apocalypse: The Americas) Reception Archaeologists and other experts say that the theories presented in the series are pseudoscientific, lack evidence, and that many claims are easily disproven. 
The Society for American Archaeology objected to the classification of the series as a documentary and requested that Netflix reclassify it as science fiction, stating: Archaeologist Flint Dibble said the show is "lacking in evidence to support Hancock's theory", while there is "a plethora of evidence" which contradicts the dates Hancock gives. John Hoopes, an archaeologist who has written about pseudoarcheology, said the series fails to present alternative interpretations or evidence contradicting Hancock. Archaeologist David Connolly said that Hancock's work relied on cherry-picked evidence for his claims, noting, "what he'll do is take a piece of real research [by others], insert a piece of 'why not?' and then finish it off with a bit of real research [by others]". Dr. Colin Elder, supervising archaeologist at the University of Salford, said Hancock is "not trying to corroborate with multiple sources ... He's finding one person who agrees with him, and putting them on TV. He's not looking at the counterarguments". In the same vein, archaeologist Julien Riel-Salvatore argues that it is simple, from a scientific point of view, to demonstrate that the main theses of Ancient Apocalypse are wrong. He also believes that the series undermines critical thinking. Answering Hancock's claims of a coverup, archaeologists said they and their colleagues would be thrilled to uncover an ice age civilization and would take Hancock's theory seriously if the evidence really existed. Courrier International notes that Hancock's claims are never questioned on screen: in Ancient Apocalypse, he calls the archaeologists "pseudo-experts" and repeats that they treat him patronizingly, but he does not name them nor explains their arguments. The Guardian opined that Netflix had "gone out of its way to court the conspiracy theorists" with the series, speculating that Hancock's son's role as head of unscripted originals at the company may explain why it was commissioned. Author Jason Colavito said Ancient Apocalypse was "not the worst show in its genre" but criticized it for "casting doubt on expertise, privileging emotion over evidence, and bending history to ideological ends ... making common cause with the right against academia". Writing in The Spectator, conservative commentator James Delingpole (who described himself as a "huge fan of Hancock" who finds his ideas plausible) criticized the series' production for "continually reminding [the viewer] that this is niche, crazy stuff that respectable 'experts' shun" and for portraying Hancock as "slippery and unreliable". German scholar Andreas Grünschloß describes Hancock as misrepresenting Indigenous traditions to support his ideas, for example the descriptions of Quetzalcoatl as "white", which were a Spanish colonial invention. He says that Hancock is a writer who presents his science fiction as independent "research". In one episode, Hancock says the Megalithic Temples of Malta, built in 3600–2500 BC, were actually built ten thousand years earlier during the last ice age. Maltese archaeologists dismissed these claims. Experts in Pacific geography and archaeology characterized Hancock's claims about Nan Madol as "incredibly insulting to the ancestors of the Pohnpeian [islanders] that did create these structures", linking them to 19th century "racist" and "white supremacist" ideologies. Writing in Skeptic magazine, impact physicist Mark Boslough criticized the series' presentation on the largely discredited Younger Dryas impact hypothesis. 
See also Archaeology and racism Legends of the Lost with Megan Fox, a 2018 documentary series References Further reading External links 2022 British television series debuts 2024 British television series endings 2020s British documentary television series British English-language television shows Fringe theories Netflix original documentary television series Pseudoarchaeology Archaeology and racism Younger Dryas impact hypothesis Göbekli Tepe Conspiracist media Race-related controversies in television Native American-related controversies
Ancient Apocalypse
Biology
1,444
8,504,422
https://en.wikipedia.org/wiki/CFD-ACE%2B
CFD-ACE+ is a commercial computational fluid dynamics solver developed by Applied Materials. It solves the conservation equations of mass, momentum, energy, chemical species and other scalar transport equations using the finite volume method. These equations enable coupled simulations of fluid, thermal, chemical, biological, electrical and mechanical phenomena. The CFD-ACE+ solver allows for coupled heat and mass transport along with complex multi-step gas-phase and surface reactions, which makes it especially useful for designing and optimizing semiconductor equipment and processes such as chemical vapor deposition (CVD). Researchers at the Ecole Nationale Superieure d'Arts et Metiers used CFD-ACE+ to simulate the rapid thermal chemical vapor deposition (RTCVD) process. They predicted the deposition rate along the substrate diameter for silicon deposition from silane. They also used CFD-ACE+ to model transparent conductive oxide (TCO) thin film deposition with ultrasonic spray chemical vapor deposition (CVD). The University of Louisville and the Oak Ridge National Laboratory used CFD-ACE+ to develop the yttria-stabilized zirconia CVD process for application of thermal barrier coatings for fossil energy systems. CFD-ACE+ was used by the Indian Institute of Technology Bombay to model the interplay of multiphysics phenomena involved in microfluidic devices, such as fluid flow, structures, and surfaces and interfaces. Numerical simulation of the electroosmotic effect on pressure-driven flows in the serpentine channel of a micro fuel cell with variable zeta potential on the side walls was investigated and reported. Based on their extensive study of CFD software tools for microfluidic applications, researchers at IMTEK, University of Freiburg concluded that generally CFD-ACE+ can be recommended for simulation of free surface flows involving capillary forces. CFD-ACE+ has also been used to design and optimize various fuel cell components and stacks. Researchers at Ballard Power Systems used the PEMFC module in CFD-ACE+ to improve the design of its latest fuel cell. Amongst other energy applications, CFD-ACE+ was employed by ABB researchers to simulate the three-dimensional geometry of a high-current constricted vacuum arc driven by a strong magnetic field. Flow velocities were up to several thousand meters per second, so the time step of the simulation was in the range of tens of nanoseconds. Movement of the arc over almost one full circle was simulated. Researchers at the University of Akron used CFD-ACE+ to simulate flow patterns and pressure profiles inside a rectangular pocket of a hydrostatic journal bearing. The numerical results made it possible to determine the three-dimensional flow field and pressure profile throughout the pocket, clearance and adjoining lands. The inertia effects and pressure drops across the pocket were incorporated in the numerical model. Stanford University researchers used CFD-ACE+ to investigate the suppression of wake instabilities of a pair of circular cylinders in a freestream flow at a Reynolds number of 150. The simulation showed that when the cylinders are counter-rotated, unsteady vortex wakes can be eliminated. References Physics software
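To make the finite-volume idea mentioned above concrete, the following is a minimal sketch of a one-dimensional finite-volume update for a single scalar transport equation. It is purely illustrative: the grid size, velocity, diffusivity and boundary treatment are assumed values, and nothing here reflects the actual CFD-ACE+ implementation, input format or API.

```python
import numpy as np

# Minimal 1D finite-volume sketch of the scalar transport equation
#   d(phi)/dt + u*d(phi)/dx = D*d2(phi)/dx2
# Illustrates only the cell/flux-balance idea behind finite-volume solvers.
nx, length = 200, 1.0
dx = length / nx
u, D = 1.0, 1.0e-3                          # assumed advection speed and diffusivity
dt = 0.4 * min(dx / u, dx * dx / (2 * D))   # conservative explicit stability limit

x = (np.arange(nx) + 0.5) * dx              # cell-centre coordinates
phi = np.exp(-((x - 0.3) / 0.05) ** 2)      # initial scalar field (a Gaussian blob)

def face_fluxes(phi):
    """Convective (first-order upwind, u > 0) plus diffusive flux at the nx+1 faces.
    End cells are duplicated as ghost values, giving zero-gradient boundaries."""
    left = np.concatenate(([phi[0]], phi))
    right = np.concatenate((phi, [phi[-1]]))
    convective = u * left
    diffusive = -D * (right - left) / dx
    return convective + diffusive

for _ in range(500):
    flux = face_fluxes(phi)
    # finite-volume update: each cell average changes by the net flux through its faces
    phi = phi - dt / dx * (flux[1:] - flux[:-1])

print("remaining scalar mass:", phi.sum() * dx)
```

The same cell-balance pattern, summing fluxes over every control-volume face, is what a general-purpose solver applies simultaneously to the coupled mass, momentum, energy and species equations on three-dimensional meshes.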
CFD-ACE+
Physics
640
50,793,047
https://en.wikipedia.org/wiki/Hexestrol%20dipropionate
Hexestrol dipropionate (brand names Hexanoestrol, Retalon Oleosum), or hexestrol dipropanoate, is a synthetic, nonsteroidal estrogen of the stilbestrol group related to diethylstilbestrol. It is an ester of hexestrol and has been known since at least 1931. The drug has been used in the past to inhibit lactation in women. See also Hexestrol diacetate Hexestrol dicaprylate Hexestrol diphosphate References Abandoned drugs Estrogen esters Propionate esters Stilbenoids Synthetic estrogens
Hexestrol dipropionate
Chemistry
141
59,129,360
https://en.wikipedia.org/wiki/Beilinson%E2%80%93Bernstein%20localization
In mathematics, especially in representation theory and algebraic geometry, the Beilinson–Bernstein localization theorem relates D-modules on flag varieties G/B to representations of the Lie algebra attached to a reductive group G. It was introduced by Alexander Beilinson and Joseph Bernstein in 1981. Extensions of this theorem include the case of partial flag varieties G/P, where P is a parabolic subgroup, and a theorem relating D-modules on the affine Grassmannian to representations of the Kac–Moody algebra. Statement Let G be a reductive group over the complex numbers, and B a Borel subgroup. Then there is an equivalence of categories between the category of D-modules on G/B and the category of modules over the universal enveloping algebra U(g) on which the centre acts through the character χ. Here χ is a homomorphism χ : Z(U(g)) → C from the centre of the universal enveloping algebra, corresponding to the weight -ρ ∈ t* given by minus half the sum over the positive roots of g. The action of the Weyl group W on t* = Spec Sym(t) used to identify central characters is shifted so as to fix -ρ. Twisted version For any λ ∈ t* such that λ-ρ does not pair with any positive root α to give a nonpositive integer (it is "regular dominant"), there is an equivalence of categories between the category of Dλ-modules on G/B and the category of U(g)-modules with central character χ. Here χ is the central character corresponding to λ-ρ, and Dλ is the sheaf of rings on G/B formed by taking the *-pushforward of DG/U along the T-bundle G/U → G/B, a sheaf of rings whose centre is the constant sheaf of algebras U(t), and taking the quotient by the central character determined by λ (not λ-ρ). Example: SL2 The Lie algebra of global vector fields on the projective line P1 is identified with sl2: it can be checked that the linear combinations of the three vector fields on C ⊂ P1 that are constant, linear and quadratic in the affine coordinate are the only vector fields extending to ∞ ∈ P1. Under this identification the Casimir element Ω is sent to zero. The only finite-dimensional sl2 representation on which Ω acts by zero is the trivial representation k, which is sent to the constant sheaf, i.e. the ring of functions O ∈ D-Mod. The Verma module of weight 0 is sent to the D-module δ supported at 0 ∈ P1. Each finite-dimensional representation corresponds to a different twist. References Hotta, R. and Tanisaki, T., 2007. D-modules, perverse sheaves, and representation theory (Vol. 236). Springer Science & Business Media. Beilinson, A. and Bernstein, J., 1993. A proof of Jantzen conjectures. ADVSOV, pp. 1–50. Representation theory Lie algebras Algebraic geometry
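For readers who want the SL2 example spelled out, the following is one standard way to write the identification of sl2 with the global vector fields on P1. The precise signs and normalizations vary between references, so treat this particular choice as an assumption rather than the article's original formulas.

```latex
% One standard identification of sl_2 with the global vector fields on P^1,
% written in the affine coordinate x on C \subset P^1.
\[
  e \mapsto \partial_x, \qquad
  h \mapsto -2x\,\partial_x, \qquad
  f \mapsto -x^{2}\,\partial_x .
\]
% Using [a\,\partial_x, b\,\partial_x] = (ab' - ba')\,\partial_x one checks
\[
  [h,e] = 2e, \qquad [h,f] = -2f, \qquad [e,f] = h .
\]
% In the coordinate y = 1/x near infinity one has \partial_x = -y^{2}\partial_y,
% so all three fields extend to \infty \in P^1, while no polynomial vector
% field of higher degree in x does.
```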
Beilinson–Bernstein localization
Mathematics
564
61,325
https://en.wikipedia.org/wiki/Wilhelm%20R%C3%B6ntgen
Wilhelm Conrad Röntgen (anglicized as Roentgen; 27 March 1845 – 10 February 1923) was a German physicist, who, on 8 November 1895, produced and detected electromagnetic radiation in a wavelength range known as X-rays or Röntgen rays, an achievement that earned him the inaugural Nobel Prize in Physics in 1901. In honour of Röntgen's accomplishments, in 2004, the International Union of Pure and Applied Chemistry (IUPAC) named element 111, roentgenium, a radioactive element with multiple unstable isotopes, after him. The non-SI unit of radiation exposure, the roentgen (R), is also named after him. Biographical history Education He was born to Friedrich Conrad Röntgen, a German merchant and cloth manufacturer, and Charlotte Constanze Frowein. When he was aged three, his family moved to the Netherlands, where his mother's family lived. Röntgen attended high school at Utrecht Technical School in Utrecht, Netherlands. He followed courses at the Technical School for almost two years. In 1865, he was unfairly expelled from high school when one of his teachers intercepted a caricature of one of the teachers, which was drawn by someone else. Without a high school diploma, Röntgen could only attend university in the Netherlands as a visitor. In 1865, he tried to attend Utrecht University without having the necessary credentials required for a regular student. Upon hearing that he could enter the Federal Polytechnic Institute in Zürich (today known as the ETH Zurich), he passed the entrance examination and began his studies there as a student of mechanical engineering. In 1869, he graduated with a PhD from the University of Zurich; once there, he became a favourite student of Professor August Kundt, whom he followed to the newly founded German Kaiser-Wilhelms-Universität in Strasbourg. Career In 1874, Röntgen became a lecturer at the University of Strasbourg. In 1875, he became a professor at the Academy of Agriculture at Hohenheim, Württemberg. He returned to Strasbourg as a professor of physics in 1876, and in 1879, he was appointed to the chair of physics at the University of Giessen. In 1888, he obtained the physics chair at the University of Würzburg, and in 1900 at the University of Munich, by special request of the Bavarian government. Röntgen had family in Iowa in the United States and planned to emigrate. He accepted an appointment at Columbia University in New York City and bought transatlantic tickets, before the outbreak of World War I changed his plans. He remained in Munich for the rest of his career. Discovery of X-rays During 1895, at his laboratory in the Würzburg Physical Institute of the University of Würzburg, Röntgen was investigating the external effects of passing an electrical discharge through various types of vacuum tube equipment—apparatuses from Heinrich Hertz, Johann Hittorf, William Crookes, Nikola Tesla and Philipp von Lenard. In early November, he was repeating an experiment with one of Lenard's tubes in which a thin aluminium window had been added to permit the cathode rays to exit the tube, but a cardboard covering was added to protect the aluminium from damage by the strong electrostatic field that produces the cathode rays. Röntgen knew that the cardboard covering prevented light from escaping, yet he observed that the invisible cathode rays caused a fluorescent effect on a small cardboard screen painted with barium platinocyanide when it was placed close to the aluminium window. 
It occurred to Röntgen that the Crookes–Hittorf tube, which had a much thicker glass wall than the Lenard tube, might also cause this fluorescent effect. In the late afternoon of 8 November 1895, Röntgen was determined to test his idea. He carefully constructed a black cardboard covering similar to the one he had used on the Lenard tube. He covered the Crookes–Hittorf tube with the cardboard and attached electrodes to a Ruhmkorff coil to generate an electrostatic charge. Before setting up the barium platinocyanide screen to test his idea, Röntgen darkened the room to test the opacity of his cardboard cover. As he passed the Ruhmkorff coil charge through the tube, he determined that the cover was light-tight and turned to prepare for the next step of the experiment. It was at this point that Röntgen noticed a faint shimmering from a bench a few feet away from the tube. To be sure, he tried several more discharges and saw the same shimmering each time. Striking a match, he discovered the shimmering had come from the location of the barium platinocyanide screen he had been intending to use next. Based on the formation of regular shadows, Röntgen termed the phenomenon "rays". As 8 November was a Friday, he took advantage of the weekend to repeat his experiments and made his first notes. In the following weeks, he ate and slept in his laboratory as he investigated many properties of the new rays he temporarily termed "X-rays", using the mathematical designation ("X") for something unknown. The new rays came to bear his name in many languages as "Röntgen rays" (and the associated X-ray radiograms as "Röntgenograms"). At one point, while he was investigating the ability of various materials to stop the rays, Röntgen brought a small piece of lead into position while a discharge was occurring. Röntgen thus saw the first radiographic image: his own flickering ghostly skeleton on the barium platinocyanide screen. About six weeks after his discovery, he took a picture—a radiograph—using X-rays of his wife Anna Bertha's hand. When she saw her skeleton she exclaimed "I have seen my death!" He later took a better picture of his friend Albert von Kölliker's hand at a public lecture. Röntgen's original paper, "On A New Kind of Rays" (Ueber eine neue Art von Strahlen), was published on 28 December 1895. On 5 January 1896, an Austrian newspaper reported Röntgen's discovery of a new type of radiation. Röntgen was awarded an honorary Doctor of Medicine degree from the University of Würzburg after his discovery. He also received the Rumford Medal of the British Royal Society in 1896, jointly with Philipp Lenard, who had already shown that a portion of the cathode rays could pass through a thin film of a metal such as aluminium. Röntgen published a total of three papers on X-rays between 1895 and 1897. Today, Röntgen is considered the father of diagnostic radiology, the medical speciality which uses imaging to diagnose disease. Personal life Röntgen was married to Anna Bertha Ludwig for 47 years until her death in 1919 at the age of 80. In 1866, they met in Zürich at Anna's father's café, Zum Grünen Glas. They became engaged in 1869 and wed in Apeldoorn, Netherlands on 7 July 1872; the delay was due to Anna being six years Wilhelm's senior and his father not approving of her age or humble background. Their marriage began with financial difficulties as family support from Röntgen had ceased. 
They raised one child, Josephine Bertha Ludwig, whom they adopted as a six-year-old after her father, Anna's only brother, died in 1887. For ethical reasons, Röntgen did not seek patents for his discoveries, holding the view that they should be publicly available without charge. After receiving his Nobel prize money, Röntgen donated the 50,000 Swedish krona to research at the University of Würzburg. Although he accepted the honorary degree of Doctor of Medicine, he rejected an offer of lower nobility, or Niederer Adelstitel, declining the preposition von (meaning "of") as a nobiliary particle (i.e., von Röntgen). With the inflation following World War I, Röntgen fell into bankruptcy, spending his final years at his country home at Weilheim, near Munich. Röntgen died on 10 February 1923 from carcinoma of the intestine, also known as colorectal cancer. In keeping with his will, his personal and scientific correspondence, with few exceptions, was destroyed upon his death. He was a member of the Dutch Reformed Church. Awards and honors 1896: Rumford Medal of the Royal Society 1896: Matteucci Medal of the Accademia nazionale delle scienze 1897: Elliott Cresson Medal of the Franklin Institute 1900: Barnard Medal for Meritorious Service to Science of Columbia University 1901: Nobel Prize in Physics for the discovery of X-rays In 1901, Röntgen was awarded the first Nobel Prize in Physics. The award was officially "in recognition of the extraordinary services he has rendered by the discovery of the remarkable rays subsequently named after him". Shy in public speaking, he declined to give a Nobel lecture. Röntgen donated the 50,000 Swedish krona reward from his Nobel Prize to research at his university, the University of Würzburg. Like Marie and Pierre Curie, Röntgen refused to take out patents related to his discovery of X-rays, as he wanted society as a whole to benefit from practical applications of the phenomenon. Röntgen was also awarded the Barnard Medal for Meritorious Service to Science in 1900. In November 2004, IUPAC named element number 111 roentgenium (Rg) in his honor. IUPAP adopted the name in November 2011. He was elected an International Member of the American Philosophical Society in 1897. In 1907, he became a foreign member of the Royal Netherlands Academy of Arts and Sciences. Legacy A collection of his papers is held at the National Library of Medicine in Bethesda, Maryland. Today, the Deutsches Röntgen-Museum is located in Remscheid-Lennep, Röntgen's birthplace, 40 kilometres east of Düsseldorf. In Würzburg, where he discovered X-rays, a non-profit organization maintains his laboratory and provides guided tours to the Röntgen Memorial Site. World Radiography Day: World Radiography Day is an annual event promoting the role of medical imaging in modern healthcare. It is celebrated on 8 November each year, coinciding with the anniversary of Röntgen's discovery. It was first introduced in 2012 as a joint initiative between the European Society of Radiology, the Radiological Society of North America, and the American College of Radiology. As of 2023, 55 stamps from 40 countries have been issued commemorating Röntgen as the discoverer of X-rays. Röntgen Peak in Antarctica is named after Wilhelm Röntgen. Minor planet 6401 Roentgen is named after him. 
See also German inventors and discoverers Röntgen Memorial Site Ivan Puluj References External links Annotated bibliography for Wilhelm Röntgen from the Alsos Digital Library Wilhelm Conrad Röntgen Biography The Cathode Ray Tube site First X-ray Photogram The American Roentgen Ray Society Deutsches Röntgen-Museum (German Röntgen Museum, Remscheid-Lennep) Röntgen Rays: Memoirs by Röntgen, Stokes, and J.J. Thomson (circa 1899) The New Marvel in Photography, an article on and interview with Röntgen, in McClure's magazine, Vol. 6, No. 5, April 1896, from Project Gutenberg Röntgen's 1895 article, on line and analyzed on BibNum [click 'à télécharger' for English analysis] 1845 births 1923 deaths 20th-century German physicists People from Remscheid ETH Zurich alumni Experimental physicists German Nobel laureates German people of Dutch descent Members of the Royal Netherlands Academy of Arts and Sciences Nobel laureates in Physics Particle physicists People from Apeldoorn People from the Rhine Province People associated with the University of Zurich Projectional radiography Recipients of the Pour le Mérite (civil class) Science teachers University of Zurich alumni Academic staff of the University of Zurich Academic staff of the University of Giessen Academic staff of the Ludwig Maximilian University of Munich Academic staff of the University of Strasbourg Academic staff of the University of Würzburg Utrecht University alumni X-ray pioneers Engineers from North Rhine-Westphalia German mechanical engineers Recipients of the Matteucci Medal German Calvinist and Reformed Christians Members of the Dutch Reformed Church Members of the American Philosophical Society
Wilhelm Röntgen
Physics
2,561
59,739,220
https://en.wikipedia.org/wiki/Society%20for%20Ecological%20Restoration
The Society for Ecological Restoration (SER) is a conservation organization based in the United States, supporting a "global community of restoration professionals that includes researchers, practitioners, decision-makers, and community leaders". The organization was founded in 1988. The mission of the organization is to: "advance the science, practice and policy of ecological restoration to sustain biodiversity, improve resilience in a changing climate, and re-establish an ecologically healthy relationship between nature and culture." SER produces definitions and standards for the practice of ecological restoration, including the SER International Primer on Ecological Restoration (2004), International Standards for the Practice of Ecological Restoration (2016), and a certification program for professionals: Certified Ecological Restoration Practitioner (CERP). References Nature conservation organizations based in the United States Ecological restoration
Society for Ecological Restoration
Chemistry,Engineering
161
13,096,524
https://en.wikipedia.org/wiki/Guano
Guano (Spanish from ) is the accumulated excrement of seabirds or bats. Guano is a highly effective fertilizer due to the high content of nitrogen, phosphate, and potassium, all key nutrients essential for plant growth. Guano was also, to a lesser extent, sought for the production of gunpowder and other explosive materials. The 19th-century seabird guano trade played a pivotal role in the development of modern input-intensive farming. The demand for guano spurred the human colonization of remote bird islands in many parts of the world. Unsustainable seabird guano mining processes can result in permanent habitat destruction and the loss of millions of seabirds. Bat guano is found in caves throughout the world. Many cave ecosystems are wholly dependent on bats to provide nutrients via their guano which supports bacteria, fungi, invertebrates, and vertebrates. The loss of bats from a cave can result in the extinction of species that rely on their guano. Unsustainable harvesting of bat guano may cause bats to abandon their roost. Demand for guano rapidly declined after 1910 with the development of the Haber–Bosch process for extracting nitrogen from the atmosphere. Composition and properties Seabird guano Seabird guano is the fecal excrement from marine birds and has an organic matter content greater than 40%, and is a source of nitrogen (N) and available phosphate (P2O5). Unlike most mammals, birds do not excrete urea, but uric acid, so that the amount of nitrogen per volume is much higher than in other animal excrement. Seabird guano contains plant nutrients including nitrogen, phosphorus, calcium and potassium. Bat guano Bat guano is partially decomposed bat excrement and has an organic matter content greater than 40%; it is a source of nitrogen, and may contain up to 6% available phosphate (P2O5).The feces of insectivorous bats consists of fine particles of insect exoskeleton, which are largely composed of chitin. Elements found in large concentrations include nitrogen, phosphorus, potassium and trace elements needed for plant growth. Bat guano is slightly alkaline with an average pH of 7.25. Chitin from insect exoskeletons is an essential compound needed by soil fungi to grow and expand. Chitin is a major component of fungal cell wall membranes. The growth of beneficial fungi adds to soil fertility. Bat guano composition varies between species with different diets. Insectivorous bats are the only species that congregate in large enough numbers to produce sufficient guano for sustainable harvesting. History of human use Bird guano Indigenous use The word "guano" originates from the Andean indigenous language Quechua, where it refers to any form of dung used as an agricultural fertilizer. Archaeological evidence suggests that Andean people collected seabird guano from small islands and points off the desert coast of Peru for use as a soil amendment for well over 1,500 years and perhaps as long as 5,000 years. Spanish colonial documents suggest that the rulers of the Inca Empire greatly valued guano, restricted access to it, and punished any disturbance of the birds with death. The guanay cormorant is historically the most abundant and important producer of guano. Other important guano-producing bird species off the coast of Peru are the Peruvian pelican and the Peruvian booby. Western discovery (1548–1800) The earliest European records noting the use of guano as fertilizer date back to 1548. 
Although the first shipments of guano reached Spain as early as 1700, it did not become a popular product in Europe until the 19th century. The Guano Age (1802–1884) In November 1802, Prussian geographer and explorer Alexander von Humboldt first encountered guano and began investigating its fertilizing properties at Callao in Peru, and his subsequent writings on this topic made the subject well known in Europe. Although Europeans knew of its fertilizing properties, guano was not widely used before this time. Cornish chemist Humphry Davy delivered a series of lectures which he compiled into an 1813 bestselling book about the role of nitrogenous manure as a fertilizer, Elements of Agricultural Chemistry. It highlighted the special efficacy of Peruvian guano, noting that it made the "sterile plains" of Peru fruitful. Though Europe had marine seabird colonies and thus, guano, it was of poorer quality because its potency was leached by high levels of rainfall and humidity. Elements of Agricultural Chemistry was translated into German, Italian, and French; American historian Wyndham D. Miles said that it was likely "the most popular book ever written on the subject, outselling the works of Dundonald, Chaptal, Liebig..." He also said that "No other work on agricultural chemistry was read by as many English-speaking farmers." The arrival of commercial whaling on the Pacific coast of South America contributed to scaling of its guano industry. Whaling vessels carried consumer goods to Peru such as textiles, flour, and lard; unequal trade meant that ships returning north were often half empty, leaving entrepreneurs in search of profitable goods that could be exported. In 1840, Peruvian politician and entrepreneur negotiated a deal to commercialize guano export among a merchant house in Liverpool, a group of French businessmen, and the Peruvian government. This agreement resulted in the abolition of all preexisting claims to Peruvian guano; thereafter, it was the exclusive resource of the State. By nationalizing its guano resources, the Peruvian government was able to collect royalties on its sale, becoming the country's largest source of revenue. Some of this income was used by the State to free its more than 25,000 black slaves. Peru also used guano revenue to abolish the head tax on its indigenous citizens. This export of guano from Peru to Europe has been suggested as the vehicle that brought a virulent strain of potato blight from the Andean highlands that began the Great Famine of Ireland. Soon guano was sourced from regions besides Peru. By 1846, of guano had been exported from Ichaboe Island, off the coast of Namibia, and surrounding islands to Great Britain. Guano pirating took off in other regions as well, causing prices to plummet and more consumers to try it. The biggest markets for guano from 1840–1879 were in Great Britain, the Low Countries, Germany, and the United States. By the late 1860s, it became apparent that Peru's most productive guano site, the Chincha Islands, was nearing depletion. This caused guano mining to shift to other islands north and south of the Chincha Islands. Despite this near exhaustion, Peru achieved its greatest ever export of guano in 1870 at more than . Concern of exhaustion was ameliorated by the discovery of a new Peruvian resource: sodium nitrate, also called Chile saltpetre. 
After 1870, the use of Peruvian guano as a fertilizer was eclipsed by Chile saltpetre, extracted in the form of caliche (a sedimentary rock) from the interior of the Atacama Desert, close to the guano areas. The Guano Age ended with the War of the Pacific (1879–1883), which saw Chilean marines invade coastal Bolivia to claim its guano and saltpetre resources. Knowing that Bolivia and Peru had a mutual defense agreement, Chile mounted a preemptive strike on Peru, resulting in its occupation of the Tarapacá region, which included Peru's guano islands. With the Treaty of Ancón of 1884, the War of the Pacific ended. Bolivia ceded its entire coastline to Chile, which also gained half of Peru's guano income from the 1880s and its guano islands. The conflict ended with Chilean control over the most valuable nitrogen resources in the world. Chile's national treasury grew by 900% between 1879 and 1902 thanks to taxes coming from the newly acquired lands. Imperialism The demand for guano led the United States to pass the Guano Islands Act in 1856, which gave U.S. citizens discovering a source of guano on an unclaimed island exclusive rights to the deposits. In 1857, the U.S. began annexing uninhabited islands in the Pacific and Caribbean, totaling nearly 100, though some islands claimed under the Act did not end up having guano mining operations established on them. Several of these islands are still officially U.S. territories. Conditions on annexed guano islands were poor for workers, resulting in a rebellion on Navassa Island in 1889 where black workers killed their white overseers. In defending the workers, lawyer Everett J. Waring argued that the men could not be tried by U.S. law because the guano islands were not legally part of the country. The case went to the Supreme Court of the United States where it was decided in Jones v. United States (1890). The Court decided that Navassa Island and other guano islands were legally part of the U.S. American historian Daniel Immerwahr claimed that by establishing these land claims as constitutional, the Court laid the "basis for the legal foundation for the U.S. empire". Other countries also used their desire for guano as a reason to expand their empires. The United Kingdom claimed Kiritimati and Malden Island for the British Empire. Other nations that claimed guano islands included Australia, France, Germany, Hawaii, Japan, and Mexico. Decline and resurgence In 1913, a factory in Germany began the first large-scale synthesis of ammonia using German chemist Fritz Haber's catalytic process. The scaling of this energy-intensive process meant that farmers could cease practices such as crop rotation with nitrogen-fixing legumes or the application of naturally derived fertilizers such as guano. The international trade of guano and nitrates such as Chile saltpetre declined as artificially synthesized fertilizers became more widely used. With the rising popularity of organic food in the twenty-first century, the demand for guano has started to rise again. Bat guano In the U.S., bat guano was harvested from caves as early as the 1780s to manufacture gunpowder. During the American Civil War (1861–1865), the Union's blockade of the southern Confederate States of America meant that the Confederacy resorted to mining guano from caves to produce saltpetre. One Confederate guano kiln in New Braunfels, Texas, had a daily output of of saltpetre, produced from of guano from two area caves. 
From the 1930s, Bat Cave mine in Arizona was used for guano extraction, though it cost more to develop than it was worth. U.S. Guano Corporation bought the property in 1958 and invested 3.5 million dollars to make it operational; actual guano deposits in the cave were one percent of predicted and the mine was abandoned in 1960. In Australia, the first documented claim on Naracoorte's Bat Cave guano deposits was in 1867. Guano mining in the country remained a localized and small industry. In modern times, bat guano is used in low levels in developed countries. It remains an important resource in developing countries, particularly in Asia. Paleoenvironment reconstruction Coring accumulations of bat guano can be useful in determining past climate conditions. The level of rainfall, for example, impacts the relative frequency of nitrogen isotopes. In times of higher rainfall, 15N is more common. Bat guano also contains pollen, which can be used to identify prior plant assemblages. A layer of charcoal recovered from a guano core in the U.S. state of Alabama was seen as evidence that a Woodlands tribe inhabited the cave for some time, leaving charcoal via the fires they lit. Stable isotope analysis of bat guano was also used to support that the climate of the Grand Canyon was cooler and wetter during the Pleistocene epoch than it is now in the Holocene. Additionally, the climatic conditions were more variable in the past. Mining Process Mining seabird guano from Peruvian islands has remained largely the same since the industry began, relying on manual labor. First, picks, brooms, and shovels are used to loosen the guano. The use of excavation machinery is not only impractical due to the terrain but also prohibited because it would frighten the seabirds. The guano is then placed in sacks and carried to sieves, where impurities are removed. Similarly, harvesting bat guano in caves was and is manual. In Puerto Rico, cave entrances were enlarged to facilitate access and extraction. Guano was freed from the rocky substrate by explosives. Then, it was shoveled into carts and removed from the cave. From there, the guano was taken to kilns to dry. The dried guano would then be loaded into sacks, ready for transport via ship. Today, bat guano is usually harvested in the developing world, using "strong backs and shovels". Ecological impacts and mitigation Bird guano Peru's guano islands experienced severe ecological effects as a result of unsustainable mining. In the late 1800s, approximately 53 million seabirds lived on the twenty-two islands. As of 2011, only 4.2 million seabirds lived there. After realizing the depletion of guano in the Guano Age, the Peruvian government recognized that it needed to conserve the seabirds. In 1906, American zoologist Robert Ervin Coker was hired by the Peruvian government to create management plans for its marine species, including the seabirds. Specifically, he made five recommendations: That the government turn its coastal islands into a state-run bird sanctuary. Private use of the island for hunting or egg collecting should be prohibited. To eliminate unhealthy competition, each island should be assigned only one state contractor for guano extraction. Guano mining should be entirely ceased from November to March so that the breeding season for the birds was undisturbed. In rotation, each island should be closed to guano mining for an entire year. The Peruvian government should monopolize all processes related to guano production and distribution. 
This recommendation was made with the belief that a single entity with a vested interest in the long-term success of the guano industry would manage the resource most responsibly. Despite these policies, the seabird population continued to decline, which was exacerbated by the 1911 El Niño–Southern Oscillation. In 1913, Scottish ornithologist Henry Ogg Forbes authored a report on behalf of the Peruvian Corporation focusing on how human actions harmed the birds and subsequent guano production. Forbes suggested additional policies to conserve the seabirds, including keeping unauthorized visitors a mile away from guano islands at all times, eliminating all the birds' natural predators, armed patrols of the islands, and decreasing the frequency of harvest on each island to once every three to four years. In 2009, these conservation efforts culminated in the establishment of the Guano Islands, Isles, and Capes National Reserve System, which consists of twenty-two islands and eleven capes. This Reserve System was the first marine protected area in South America, encompassing . Bat guano Unlike bird guano which is deposited on the surface of islands, bat guano can be deep within caves. Cave structure is often altered via explosives or excavation to facilitate extraction of the guano, which changes the cave's microclimate. Bats are sensitive to cave microclimate, and such changes can cause them to abandon the cave as a roost, as happened when Robertson Cave in Australia had a hole opened in its ceiling for guano harvesting. Guano harvesting may also introduce artificial light into caves; one cave in the U.S. state of New Mexico was abandoned by its bat colony after the installation of electric lights. In addition to harming bats by necessitating they find another roost, guano harvesting techniques can ultimately harm human livelihood as well. Harming or killing bats means that less guano will be produced, resulting in unsustainable harvesting practices. In contrast, sustainable harvesting practices do not negatively impact bat colonies nor other cave fauna. The International Union for Conservation of Nature's (IUCN) 2014 recommendations for sustainable guano harvesting include extracting guano when the bats are not present, such as when migratory bats are gone for the season or when non-migratory bats are out foraging at night. Work conditions Guano mining in Peru was at first done with black slaves. After Peru formally ended slavery, it sought another source of cheap labor. In the 1840s and 1850s, thousands of men were blackbirded (coerced or kidnapped) from the Pacific islands and southern China. Thousands of coolies from South China worked as "virtual slaves" mining guano. By 1852, Chinese laborers comprised two-thirds of Peru's guano miners; others who mined guano included convicts and forced laborers paying off debts. Chinese laborers agreed to work for eight years in exchange for passage from China, though many were misled that they were headed to California's gold mines. Conditions on the guano islands were very poor, commonly resulting in floggings, unrest, and suicide. Workers experienced lung damage by inhaling guano dust, were buried alive by falling piles of guano, and risked falling into the ocean. After visiting the guano islands, U.S. 
politician George Washington Peck wrote: Hundreds or thousands of Pacific Islanders, especially Native Hawaiians, traveled or were blackbirded to the U.S.-held and Peruvian guano islands for work, including Howland Island, Jarvis Island, and Baker Island. While most Hawaiians were literate, they could usually not read English; the contract they received in their own language lacked key amendments that the English version had. Because of this, the Hawaiian language contract was often missing key information, such as the departure date, the length of the contract, and the name of the company for which they would be working. When they arrived at their destination to begin mining, they learned that both contracts were largely meaningless in terms of work conditions. Instead, their overseer (commonly referred to as a luna), who was usually white, had nearly unlimited power over them. Wages varied from lows of $5/month to highs of $14/month. Native Hawaiian laborers of Jarvis Island referred to the island as Paukeaho, meaning "out of breath" or "exhausted", due to the strain of loading heavy bags of guano onto ships. Pacific Islanders also risked death: one in thirty-six laborers from Honolulu died before completing their contract. Slaves blackbirded from Easter Island in 1862 were repatriated by the Peruvian government in 1863; only twelve of 800 slaves survived the journey. On Navassa Island, the guano mining company switched from white convicts to largely black laborers after the American Civil War. Black laborers from Baltimore claimed that they were misled into signing contracts with stories of mostly fruit-picking, not guano mining, and "access to beautiful women". Instead, the work was exhausting and punishments were brutal. Laborers were frequently placed in stocks or tied up and dangled in the air. A labor revolt ensued, where the workers attacked their overseers with stones, axes, and even dynamite, killing five overseers. Although the process for mining guano is mostly the same today, worker conditions have improved. As of 2018, guano miners in Peru made US$750 per month, which is more than twice the average national monthly income of $300. Workers also have health insurance, meals, and eight-hour shifts. Human health Guano is one of the habitats of the fungus Histoplasma capsulatum, which can cause the disease histoplasmosis in humans, cats, and dogs. H. capsulatum grows best in the nitrogen-rich conditions present in guano. In the United States, histoplasmosis affects 3.4 adults per 100,000 over age 65, with higher rates in the Midwestern United States (6.1 cases per 100,000). In addition to the United States, H. capsulatum is found in Central and South America, Africa, Asia, and Australia. Of 105 outbreaks in the U.S. from 1938–2013, seventeen occurred after exposure to a chicken coop while nine occurred after exposure to a cave. Birds or their droppings were present in 56% of outbreaks, while bats or their droppings were present in 23%. Developing any symptoms after exposure to H. capsulatum is very rare; less than 1% of those infected develop symptoms. Only patients with more severe cases require medical attention, and only about 1% of acute cases are fatal. It is a much more serious illness for the immunocompromised, however. Histoplasmosis is the first symptom of HIV/AIDS in 50–75% of patients, and results in death for 39–58% of those with HIV/AIDS. 
The Centers for Disease Control and Prevention recommends that the immunocompromised avoid exploring caves or old buildings, cleaning chicken coops, or disturbing soil where guano is present. Rabies, which can affect humans who have been bitten by infected mammals including bats, cannot be transmitted through bat guano. A 2011 study of bat guano viromes in the U.S. states of Texas and California recovered no viruses that are pathogenic to humans, nor any close relatives of pathogenic viruses. It is hypothesized that Egyptian fruit bats, which are native to Africa and the Middle East, can spread Marburg virus to each other through contact with infected secretions such as guano, but a 2018 review concluded that more studies are necessary to determine the specific mechanisms of exposure that cause Marburg virus disease in humans. Exposure to guano could be a route of transmission to humans. As early as the 18th century, there were reports of travellers complaining about the unhealthy air of Arica and Iquique resulting from abundant bird droppings. Ecological importance Colonial birds and their guano deposits play an outsized role in the surrounding ecosystem. Bird guano stimulates productivity, though species richness may be lower on guano islands than on islands without the deposits. Guano islands have a greater abundance of detritivorous beetles than islands without guano. The intertidal zone is inundated by the guano's nutrients, causing algae to grow more rapidly and coalesce into algal mats. These algal mats are in turn colonized by invertebrates. The abundance of nutrients offshore of guano islands also supports coral reef ecosystems. Cave ecosystems are often limited by nutrient availability. Bats bring nutrients into these ecosystems via their excretions, however, which are often the dominant energy resource of a cave. Many cave species depend on bat guano for sustenance, directly or indirectly. Because cave-roosting bats are often highly colonial, they can deposit substantial quantities of nutrients into caves. The largest colony of bats in the world at Bracken Cave (about 20 million individuals) deposits of guano into the cave every year. Even smaller colonies have relatively large impacts, with one colony of 3,000 gray bats annually depositing of guano into their cave. Invertebrates inhabit guano piles, including fly larvae, nematodes, springtails, beetles, mites, pseudoscorpions, thrips, silverfish, moths, harvestmen, spiders, isopods, millipedes, centipedes, and barklice. The invertebrate communities associated with the guano depend on the bat species' feeding guild: frugivorous bat guano has the greatest invertebrate diversity. Some invertebrates feed directly on the guano, while others consume the fungi that use it as a growth medium. Predators such as spiders depend on guano to support their prey base. Vertebrates consume guano as well, including the bullhead catfish and larvae of the grotto salamander. Bat guano is integral to the existence of endangered cave fauna. The critically endangered Shelta Cave crayfish feeds on guano and other detritus. The Ozark cavefish, a U.S. federally listed species, also consumes guano. The loss of bats from a cave can result in declines or extinctions of other species that rely on their guano. A 1987 cave flood resulted in the death of its bat colony; the Valdina Farms salamander is now likely extinct as a result. Bat guano also has a role in shaping caves by making them larger. 
It has been estimated that 70–95% of the total volume of Gomantong cave in Borneo is due to biological processes such as guano excretion, as the acidity of the guano weathers the rocky substrate. The presence of high densities of bats in a cave is predicted to cause the erosion of of rock over 30,000 years. Cultural significance There are several references to guano in the arts. In his 1845 poem "Guanosong", German author Joseph Victor von Scheffel used a humorous verse to take a position in the popular polemic against Hegel's Naturphilosophie. The poem starts with an allusion to Heinrich Heine's Lorelei and may be sung to the same tune. The poem ends, however, with the blunt statement of a Swabian rapeseed farmer from Böblingen who praises the seagulls of Peru as providing better manure even than his fellow countryman Hegel. This refuted the widespread Enlightenment belief that nature in the New World was inferior to the Old World. The poem has been translated by, among others, Charles Godfrey Leland. English author Robert Smith Surtees parodied the obsession of wealthy landowners with the "religion of progress" in 1843. In one of his works featuring the character John Jorrocks, Surtees has the character develop an obsession with trying all the latest farming experiments, including guano. In an effort to impress the upper class around him and disguise his low-class origins, Jorrocks references guano in conversation at every chance he can. At one point, he exclaims, "Guano!" along with two other varieties of fertilizer, to which the Duke replies, "I see you understand it all!" Guano is also the namesake for one of the nucleobases in RNA and DNA: guanine, a purine base, consisting of a fused pyrimidine-imidazole planar ring system with conjugated double bonds. Guanine was first obtained from guano by , who incorrectly first described it as xanthine, a closely related purine, in 1844. After he was corrected by Einbrodt two years later, Bodo Unger agreed and published it with the new name of "guanine" in 1846. See also Chicken manure Human uses of bats Phosphorite Uric acid References Bibliography External links ProAbonos Jamaican Bat Guano and Cave Preservation Animal waste products Bird products Nitrogen cycle Caves Bats and humans
Guano
Chemistry,Biology
5,564
19,104,225
https://en.wikipedia.org/wiki/Mond%20gas
Mond gas is a cheap coal gas that was used for industrial heating purposes. Coal gases are made by decomposing coal through heating it to a high temperature. Coal gases were the primary source of gas fuel during the 1940s and 1950s until the adoption of natural gas. They were used for lighting, heating, and cooking, typically being supplied to households through pipe distribution systems. The gas was named after its discoverer, Ludwig Mond. Discovery In 1889, Ludwig Mond discovered that the combustion of coal with air and steam produced ammonia along with an extra gas, which was named the Mond gas. He discovered this while looking for a process to form ammonium sulfate, which was useful in agriculture. The process involved reacting low-quality coal with superheated steam, which produced the Mond gas. The gas was then passed through dilute sulfuric acid spray, which ultimately removed the ammonia, forming ammonium sulfate. Mond modified the gasification process by restricting the air supply and filling the air with steam, providing a low working temperature. This temperature was below ammonia's point of dissociation, maximizing the amount of ammonia that could be produced from the nitrogen, a product from superheating coal. Gas production The Mond gas process was designed to convert cheap coal into flammable gas, which was made up of mainly hydrogen, while recovering ammonium sulfate. The gas produced was rich in hydrogen and poor in carbon monoxide. Although it could be used for some industrial purposes and power generation, the gas was limited for heating or lighting. In 1897, the first Mond gas plant began at the Brunner Mond & Company in Northwich, Cheshire. Mond plants which recovered ammonia needed to be large in order to be profitable, using at least 182 tons of coal per week. Reaction Predominant reaction in Mond Gas Process: C + 2H2O = CO2+ 2H2 The Mond gas was composed of roughly: 12% CO (Carbon monoxide) 28% H2 (Hydrogen) 2.2% CH4 (Methane) 16% CO2 (Carbon dioxide) 42% N2 (Nitrogen) Uses Mond gas could be produced and used more efficiently than other gases in the late 19th and early 20th century. The gas was used as fuel for street lighting and basic residential uses that required gas such as ovens, kilns, furnaces, and boilers. Advantages The Mond gas could be produced very cheaply since it required only a low-quality coal, offering large savings for many processes. The production of Mond gas did not require much labor. The Mond gas became popularized during the industrial power generation in the beginning of the 20th century, since industries were very interested in a source of low-cost energy. The Mond gas provided a boost to the gas engine industry in particular. For example, a large gas engine that used Mond gas was 5–6 times more efficient than a standard steam engine. This is primarily because Mond gas was produced from the lowest cost coal rather than steam coal, resulting in cheaper electricity at about 1/20 of the normal price. Modern use The Mond gas was used primarily during the early 20th century, and its process was further developed by the Power Gas Corporation as the Lymn system; however, the gas has been widely forgotten. The use of coal gases has become far less popular due to the adoption of natural gas in the 1960s. Natural gases were better for the environment because they burned more cleanly than other fuels such as coal and oil and could also be transported more safely and efficiently over sea. 
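The composition quoted above already suggests why Mond gas was a low-grade ("lean") fuel: only about 42% of it is combustible. The sketch below turns that observation into a rough heating-value estimate; the molar lower heating values are textbook approximations assumed for illustration, not figures from the article, and the composition is treated as exact.

```python
# Rough heating-value estimate for Mond gas from the composition quoted above.
composition = {            # mole fractions from the article
    "CO": 0.12,
    "H2": 0.28,
    "CH4": 0.022,
    "CO2": 0.16,           # inert
    "N2": 0.42,            # inert
}
lhv_kj_per_mol = {"CO": 283.0, "H2": 242.0, "CH4": 802.0}   # assumed textbook values

# energy released per mole of gas mixture, then converted to a volumetric basis
energy_kj_per_mol = sum(composition[g] * lhv_kj_per_mol.get(g, 0.0)
                        for g in composition)
molar_volume_l = 22.414            # litres per mole at 0 deg C, 1 atm
mj_per_m3 = energy_kj_per_mol / molar_volume_l   # kJ/L is numerically MJ/m^3

print(f"combustible fraction: {sum(composition[g] for g in lhv_kj_per_mol):.1%}")
print(f"approx. lower heating value: {mj_per_m3:.1f} MJ/m^3")
# ~5 MJ/m^3, roughly an order of magnitude below natural gas, which is
# consistent with the gas suiting bulk heating and large gas engines
# better than lighting.
```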
References Fuel gas Fuels Chemical mixtures Industrial gases Synthetic fuel technologies
Mond gas
Chemistry
746
39,071,852
https://en.wikipedia.org/wiki/Geometry%20of%20binary%20search%20trees
In computer science, one approach to the dynamic optimality problem on online algorithms for binary search trees involves reformulating the problem geometrically, in terms of augmenting a set of points in the plane with as few additional points as possible to avoid rectangles with only two points on their boundary. Access sequences and competitive ratio As typically formulated, the online binary search tree problem involves search trees defined over a fixed key set {1, 2, ..., n}. An access sequence is a sequence x1, x2, ..., xm where each access xi belongs to the key set. Any particular algorithm for maintaining binary search trees (such as the splay tree algorithm or Iacono's working set structure) has a cost for each access sequence that models the amount of time it would take to use the structure to search for each of the keys in the access sequence in turn. The cost of a search is modeled by assuming that the search tree algorithm has a single pointer into a binary search tree, which at the start of each search points to the root of the tree. The algorithm may then perform any sequence of the following operations: Move the pointer to its left child. Move the pointer to its right child. Move the pointer to its parent. Perform a single tree rotation on the pointer and its parent. The search is required, at some point within this sequence of operations, to move the pointer to a node containing the key, and the cost of the search is the number of operations that are performed in the sequence. The total cost costA(X) for algorithm A on access sequence X is the sum of the costs of the searches for each successive key in the sequence. As is standard in competitive analysis, the competitive ratio of an algorithm A is defined to be the maximum, over all access sequences X, of the ratio costA(X) / OPT(X), where OPT(X) is the best cost that any algorithm could achieve on X. The dynamic optimality conjecture states that splay trees have a constant competitive ratio, but this remains unproven. The geometric view of binary search trees provides a different way of understanding the problem that has led to the development of alternative algorithms that could also (conjecturally) have a constant competitive ratio. Translation to a geometric point set In the geometric view of the online binary search tree problem, an access sequence (sequence of searches performed on a binary search tree (BST) with a key set {1, 2, ..., n}) is mapped to the set of points {(xi, i) : 1 ≤ i ≤ m}, where the X-axis represents the key space and the Y-axis represents time; to this, a set of points for the touched nodes is added. By touched nodes we mean the following. Consider a BST access algorithm with a single pointer to a node in the tree. At the beginning of an access to a given key xi, this pointer is initialized to the root of the tree. Whenever the pointer moves to or is initialized to a node, we say that the node is touched. We represent a BST algorithm for a given input sequence by drawing a point for each item that gets touched. For example, assume the following BST on 4 nodes is given: the root is 3, its children are 1 and 4, and 2 is the right child of 1. The key set is {1, 2, 3, 4}. Let 3, 1, 4, 2 be the access sequence. In the first access, only the node 3 is touched. In the second access, the nodes 3 and 1 are touched. In the third access, 3 and 4 are touched. In the fourth access, touch 3, then 1, and after that 2. The touches are represented geometrically: if an item x is touched in the operations for the ith access, then a point (x, i) is plotted. 
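The worked example above translates directly into a point set. The short snippet below simply writes out that translation; the touch sets are copied from the example in the text (with accesses numbered from 1), not recomputed by any particular BST implementation.

```python
# The article's worked example, written out as the geometric point set:
# a point (x, i) is plotted whenever key x is touched while serving access i.
touched = {
    1: [3],          # access 3: only the root 3 is touched
    2: [3, 1],       # access 1
    3: [3, 4],       # access 4
    4: [3, 1, 2],    # access 2
}

point_set = {(x, i) for i, keys in touched.items() for x in keys}
print(sorted(point_set))
# [(1, 2), (1, 4), (2, 4), (3, 1), (3, 2), (3, 3), (3, 4), (4, 3)]
```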
Arborally satisfied point sets
A point set is said to be arborally satisfied if the following property holds: for any pair of points that do not lie on the same horizontal or vertical line, there exists a third point which lies in the rectangle spanned by the first two points (either inside or on the boundary).

Theorem
A point set containing the access points is arborally satisfied if and only if it corresponds to a valid BST execution for the input sequence.

Proof
First, prove that the point set for any valid BST algorithm is arborally satisfied. Consider points and , where is touched at time and is touched at time . Assume by symmetry that and . It needs to be shown that there exists a third point in the rectangle with corners and . Also let denote the lowest common ancestor of the two nodes right before the later access. There are a few cases: If , then use the point , since must have been touched if was. If , then the point can be used. If neither of the above two cases holds, then must be an ancestor of right before time and be an ancestor of right before time . Then at some time , must have been rotated above , so the point can be used.
Next, show the other direction: given an arborally satisfied point set, a valid BST corresponding to that point set can be constructed. Organize the BST into a treap that is ordered in heap order by next-touch-time. Note that next-touch-time has ties and is thus not uniquely defined, but this isn't a problem as long as there is a way to break ties. When a given time is reached, the touched nodes form a connected subtree at the top, by the heap-ordering property. Now, assign new next-touch-times for this subtree, and rearrange it into a new local treap. If a pair of nodes, and , straddle the boundary between the touched and untouched part of the treap, then if is to be touched sooner than then is an unsatisfied rectangle because the leftmost such point would be the right child of , not .

Corollary
Finding the best BST execution for the input sequence is equivalent to finding the minimum cardinality superset of points (that contains the input in geometric representation) that is arborally satisfied. The more general problem of finding the minimum cardinality arborally satisfied superset of a general set of input points (not limited to one input point per coordinate) is known to be NP-complete.

Greedy algorithm
The following greedy algorithm constructs arborally satisfied sets: Sweep the point set with a horizontal line in order of increasing y-coordinate (time). At each time step, place the minimal number of points on the current sweep line needed to make the set of points seen so far arborally satisfied. This minimal set of points is uniquely defined: for any unsatisfied rectangle that has the current access point in one corner, add the missing corner on the current sweep line. The algorithm has been conjectured to be optimal within an additive term.

Other results
The geometry of binary search trees has been used to provide an algorithm which is dynamically optimal if any binary search tree algorithm is dynamically optimal.

See also
Binary search algorithm
Tango trees
Splay trees
Self-balancing binary search tree
Optimal binary search tree
Interleave lower bound

References
Binary trees
Geometry
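The greedy sweep described above can be sketched in a few lines. This is a didactic, unoptimized implementation written for this article's running example: the input format (a list of (key, time) access points, one per time step) and the brute-force satisfaction check are choices made here for clarity, not part of the original formulation.

from itertools import combinations

def satisfied(points):
    """Brute-force check of the arborally-satisfied property."""
    pts = set(points)
    for (x1, y1), (x2, y2) in combinations(pts, 2):
        if x1 == x2 or y1 == y2:
            continue
        lox, hix = sorted((x1, x2))
        loy, hiy = sorted((y1, y2))
        if not any(lox <= x <= hix and loy <= y <= hiy
                   and (x, y) not in {(x1, y1), (x2, y2)} for (x, y) in pts):
            return False
    return True

def greedy(accesses):
    """Sweep by increasing time; fill the current row so that every rectangle
    formed with the current access point is satisfied."""
    out = set()
    for x_t, t in sorted(accesses, key=lambda p: p[1]):
        current = set(out)
        current.add((x_t, t))
        added = {(x_t, t)}
        for (x, y) in out:
            if y < t and x != x_t:
                lox, hix = sorted((x, x_t))
                # rectangle between (x, y) and (x_t, t) unsatisfied so far?
                if not any(lox <= a <= hix and y <= b <= t
                           and (a, b) not in {(x, y), (x_t, t)}
                           for (a, b) in current):
                    added.add((x, t))
        out |= added
    return out

accesses = [(3, 1), (1, 2), (4, 3), (2, 4)]   # the example access sequence
result = greedy(accesses)
assert satisfied(result)

On this input the sweep adds the fill points (3, 2), (3, 3), (1, 4) and (3, 4), and the final assertion confirms that the resulting superset is arborally satisfied.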
Geometry of binary search trees
Mathematics
1,383
751,117
https://en.wikipedia.org/wiki/Charles%20Jean%20de%20la%20Vall%C3%A9e%20Poussin
Charles-Jean Étienne Gustave Nicolas, baron de la Vallée Poussin (14 August 1866 – 2 March 1962) was a Belgian mathematician. He is best known for proving the prime number theorem. The King of Belgium ennobled him with the title of baron.

Biography
De la Vallée Poussin was born in Leuven, Belgium. He studied mathematics at the Catholic University of Leuven under his uncle Louis-Philippe Gilbert, after he had earned his bachelor's degree in engineering. De la Vallée Poussin was encouraged to study for a doctorate in physics and mathematics, and in 1891, at the age of just 25, he became an assistant professor in mathematical analysis. De la Vallée Poussin became a professor at the same university (as was his father, Charles Louis de la Vallée Poussin, who taught mineralogy and geology) in 1892. De la Vallée Poussin was awarded Gilbert's chair when Gilbert died. While he was a professor there, de la Vallée Poussin carried out research in mathematical analysis and the theory of numbers, and in 1905 he was awarded the Decennial Prize for Pure Mathematics for 1894–1903. He was awarded this prize a second time in 1924 for his work during 1914–23. In 1898, de la Vallée Poussin was appointed as a correspondent of the Royal Belgian Academy of Sciences, and he became a Member of the Academy in 1908. In 1923, he became the President of the Division of Sciences. In August 1914, de la Vallée Poussin escaped from Leuven at the time of its destruction by the invading German Army of World War I, and he was invited to teach at Harvard University in the United States. He accepted this invitation. In 1918, de la Vallée Poussin returned to Europe to accept professorships in Paris at the Collège de France and at the Sorbonne. After the war was over, de la Vallée Poussin returned to Belgium. The International Union of Mathematicians was created, and he was invited to become its President. Between 1918 and 1925, de la Vallée Poussin traveled extensively, lecturing in Geneva, Strasbourg, and Madrid, and then in the United States, where he gave lectures at the Universities of Chicago, California, and Pennsylvania, and at Brown University, Yale University, Princeton University, Columbia University, and the Rice Institute of Houston. He was awarded the Prix Poncelet for 1916. De la Vallée Poussin was given the titles of Doctor Honoris Causa of the Universities of Paris, Toronto, Strasbourg, and Oslo, an Associate of the Institute of France, and a Member of the Pontifical Academy of Sciences, the Accademia Nazionale dei Lincei, and the academies of Madrid, Naples, and Boston. He was awarded the title of Baron by King Albert I of the Belgians in 1928. In 1961, de la Vallée Poussin fractured his shoulder, and this accident and its complications led to his death in Watermael-Boitsfort, near Brussels, Belgium, a few months later. A student of his, Georges Lemaître, was the first to propose the Big Bang theory of the formation of the Universe.

Work
Although his first mathematical interests were in analysis, he suddenly became famous when he proved the prime number theorem independently of his contemporary Jacques Hadamard in 1896. Afterwards, he turned his interest to approximation theory.
He defined, for any continuous function f on the standard interval, a family of sums built from the dual basis associated with the basis of Chebyshev polynomials; the same construction is also valid with the Fourier sums of a periodic function. These de la Vallée Poussin sums can be evaluated in terms of the so-called Fejér sums, and the associated kernel is bounded. Later, he worked on potential theory and complex analysis. He also published a counterexample to Alfred Kempe's false proof of the four color theorem. The Poussin graph, the graph he used for this counterexample, is named after him.

Cours d'analyse
The textbooks of his mathematical analysis course have long been a reference and had some international influence. The second edition (1909–1912) is remarkable for its introduction of the Lebesgue integral. In 1912 it was "the only textbook on analysis containing both Lebesgue integral and its application to Fourier series, and a general theory of approximation of functions by polynomials". The third edition (1914) introduced the now classical definition of differentiability due to Otto Stolz. The second volume of this third edition was burnt in the fire of Louvain during the German invasion. Later editions were much more conservative, returning essentially to the first edition. Starting from the eighth edition, Fernand Simonart took over the revision and the publication of the Cours d'analyse.

Selected publications
Œuvres, vol. 1 (Biography and number theory), 2000 (eds. Mawhin, Butzer, Vetro), vols. 2 to 4 planned
Cours d'Analyse, 2 vols., 1903, 1906 (7th edition 1938), reprint of the 2nd edition 1912, 1914 by Jacques Gabay (deals only with real analysis). Online: Cours d'analyse infinitésimale, Tome I; Cours d'analyse infinitésimale, Tome II
Intégrales de Lebesgue, fonctions d'ensemble, classes de Baire, 2nd edition 1934, reprint by Jacques Gabay
Le potentiel logarithmique, balayage et représentation conforme, Paris, Löwen 1949
Recherches analytiques de la théorie des nombres premiers, Annales de la Société Scientifique de Bruxelles, vol. 20 B, 1896, pp. 183–256, 281–362, 363–397, vol. 21 B, pp. 351–368 (prime number theorem)
Sur la fonction Zeta de Riemann et le nombre des nombres premiers inférieurs à une limite donnée, Mémoires couronnés de l'Académie de Belgique, vol. 59, 1899, pp. 1–74
Leçons sur l'approximation des fonctions d'une variable réelle, Paris, Gauthier-Villars, 1919, 1952

See also
Poussin proof
Remez algorithm
La Vallée-Poussin

Notes

External links
Biographie Universelle, by Didot.

1866 births
1962 deaths
19th-century Belgian mathematicians
20th-century Belgian mathematicians
Number theorists
Harvard University Department of Mathematics faculty
Academic staff of the University of Paris
Catholic University of Leuven (1834–1968) alumni
Belgian barons
Foreign associates of the National Academy of Sciences
Scientists from Leuven
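The displayed formulas in this passage were lost. As a hedged reconstruction, one common normalization (an assumption made here; the article's own notation did not survive and conventions differ between sources) expresses the de la Vallée Poussin sums of a 2π-periodic function through its partial Fourier sums S_k f and Fejér sums σ_m f:

    \sigma_m f = \frac{1}{m+1}\sum_{k=0}^{m} S_k f,
    \qquad
    V_n f = \frac{1}{n}\sum_{k=n}^{2n-1} S_k f
          = 2\,\sigma_{2n-1} f - \sigma_{n-1} f .

Under this convention the second equality follows directly by writing both Fejér sums as averages of partial sums.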
Charles Jean de la Vallée Poussin
Mathematics
1,401
45,249,739
https://en.wikipedia.org/wiki/Miniature%20mass%20spectrometer
A miniature mass spectrometer (MMS) is a type of mass spectrometer (MS) of small size and weight that can be understood as a portable or handheld device. What it means to be portable, and a set of criteria by which portable and miniature mass spectrometers can be assessed, have been discussed in detail. Current lab-scale mass spectrometers, however, usually weigh hundreds of pounds and can cost anywhere from thousands to millions of dollars. One purpose of producing an MMS is in situ analysis. Such in situ analysis allows much simpler mass spectrometer operation, so that non-technical personnel (physicians at the bedside, firefighters in a burning factory, food safety inspectors in a warehouse, or airport security at checkpoints) can analyze samples themselves, saving the time, effort, and cost of having the sample run by a trained MS technician offsite. Although reducing the size of an MS can lead to poorer performance relative to current analytical laboratory standards, an MMS is designed to maintain sufficient resolution, detection limits, and accuracy, and especially the capability of automated operation. These features are necessary for the specific in-situ applications of MMS mentioned above.

Coupling and ionization in miniature mass spectrometers
In typical mass spectrometry, the MS is coupled with separation tools such as gas chromatography, liquid chromatography, or electrophoresis to reduce the effect of the matrix or background and to improve selectivity, especially when the analytes differ widely in concentration. Sample preparation, including sample collection, extraction, and pre-separation, increases the size of the mass analysis system and adds time and complexity to the analysis. Much effort has therefore gone into miniaturizing these devices and simplifying their operation. A micro-GC has been implemented to fit a portable MS system. Microfluidics is also a strong candidate for MMS and for automating sample preparation. In this technique, most of the sample preparation steps are staged similarly to laboratory systems, but miniature chip-based devices are used, with low consumption of sample and solvents. One way to circumvent classical, lab-based sample introduction systems is the use of ambient ionization, as it does not require mechanical or electrical coupling to an MMS and can generate ions in the open atmosphere without prior sample preparation, but at the cost of more rigorous vacuum system requirements. Different ambient ionization methods, including low-temperature plasma, paper spray, and extraction spray, have been demonstrated to be highly compatible with MMS. A rigorous review of ambient ionization sources in the context of portable and miniature mass spectrometry has developed a set of criteria by which performance and portability can be evaluated. Without separation coupling, the basic building blocks of an MMS, which are similar in composition to its conventional laboratory counterpart, are the sample inlet, ionization source, mass analyzer, detector, vacuum system, and instrument control and data acquisition system. The three most important components of an MMS for miniaturization are the mass analyzer, the vacuum system, and the electronics control system. Reducing the size of any component is beneficial to miniaturization.
However, it is notable that minimizing the analyzer's size can greatly enhance the miniaturization of the other components, especially the vacuum system, because the analyzer determines the operating pressure required for MS analysis and the design of the pressure interface.

Miniature mass analyzer
Smaller mass analyzers require smaller control systems to generate adequate electric and magnetic field strengths, the two fundamental fields that separate ions based on their mass-to-charge ratio. Because a compact circuit can generate a high electric field, decreasing the size of the voltage-generating system does not significantly affect the miniaturization of time-of-flight (TOF) mass spectrometry and electric sectors, which use only the electric field to separate ions. In principle, the electromagnetic field mainly depends on the shape of the mass analyzer. As a result, a smaller magnet fitted to a small MS reduces the system weight significantly. In practice, when the size is reduced, the geometry of the mass analyzer becomes distorted. For example, a smaller volume in an ion trap leads to lower trapping capacity and therefore results in a loss of resolution and sensitivity. However, by utilizing tandem MS, resolution and selectivity can be greatly enhanced in complex mixtures. In general, beam-type mass analyzers, such as TOF and sector mass analyzers, are much larger than ion-trap types such as the Paul trap, the Penning trap, or Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR). Additionally, ion trap mass analyzers can be used to perform multistage MS/MS in a single device. As a result, ion traps have received the dominant share of attention for building an MMS.

Miniature time of flight
Some researchers have succeeded in designing a series of miniature TOF mass analyzers. Cotter at Johns Hopkins University used pulsed extraction in a linear time-of-flight mass analyzer, in which the ions are accelerated with a higher energy of 12 keV to enable detection of high-mass ions. The group achieved resolutions of 1/1200 and 1/600 at m/z 4500 and 12000, respectively. This mini analyzer can measure 66 kDa proteins, mixtures of oligonucleotides, and biological spores. Verbeck at the University of North Texas created a mini-TOF based on a reflectron TOF built with microelectromechanical systems technology. To overcome the low resolution of a short flight tube, the effective ion travel path length is extended by moving ions back and forth over periods of time. The system used a 5-cm endcap reflectron TOF with higher-order kinetic energy focusing to analyze ions with m/z exceeding 60,000. Ecelberger, a senior professional staff scientist in the Sensor Science Group of the Research and Technology Development Center at APL, also developed a suitcase TOF incorporating matrix-assisted laser desorption/ionization (MALDI). The suitcase TOF was tested by scientists from the U.S. Army Soldier and Biological Chemical Command. The samples were biological toxins and chemical agents with masses ranging from a few hundred daltons to over 60 kDa. The suitcase TOF was benchmarked against a commercial TOFMS in the same experiments. Both instruments could detect all but a few compounds, with very encouraging results. Because a commercial TOFMS uses higher-voltage pulsed extraction with a longer flight tube and other optimized conditions, it generally has better sensitivity and resolution than the suitcase TOF. However, in the case of very high mass compounds, the suitcase TOF shows resolution and sensitivity as good as the commercial TOF.
The suitcase TOF was also tested with a series of chemical weapons agents. Every compound tested was detected at levels comparable to standard analytical techniques for these agents.

Miniature sector
Several miniature double-focusing mass analyzers have been fabricated. A non-scanning Mattauch–Herzog geometry sector was developed using new materials to construct a lighter magnet. In a collaboration between the University of Minnesota and the Universidad de Costa Rica, a miniature double-focusing sector was produced using a combination of conventional machining methods and thin-film patterning to overcome the distortion of the electric and magnetic fields at small size. The MMS can reach a detection limit close to 10 ppm, a dynamic range of 5 orders of magnitude, and a mass range up to 10³ Da. The mass analyzer measures 3.5 cm × 6 cm × 7.5 cm overall, weighs 0.8 kg, and consumes 2.5 W.

Miniature linear quadrupole mass filter
The linear quadrupole mass filter, or quadrupole mass analyzer, is one of the most popular mass analyzers. The mini-quadrupole has been used as a single analyzer or in arrays of identical mass analyzers. One quadrupole array has rods of 0.5 mm radius and 10 mm length, while another has rods of 1 mm radius and 25 mm length. These mini-quadrupoles were developed and characterized at radio frequencies (RF) higher than 11 MHz. Volatile organic compounds were ionized by electron ionization and were characterized with unit resolution. Micromachining was applied to produce a much smaller V-groove quadrupole.

Miniature ion trap mass analyzer
Ion traps include the quadrupole ion trap or Paul trap, Fourier transform ion cyclotron resonance or Penning trap, and the newly developed orbitrap. The Paul trap, however, has received the greatest attention from researchers building an MMS because of its distinct advantages over other mass analyzers. One of the benefits is that ion traps can work at much higher pressures than beam-type mass analyzers and can be simplified with different geometries for ease of fabrication. For example, miniature quadrupole ion trap mass analyzers (such as the cylindrical ion trap, linear ion trap, and rectilinear ion trap) can operate at several mTorr, in contrast to 10−5 Torr or less for other analyzers, and they are able to perform MS/MS in a single device with a minimal electronics system. Nevertheless, as the size gets smaller, it is hard to maintain the electric field shape and precise configuration, which negatively affects ion motion. The goal is to make the trap smaller without losing ion capacity. The Tridion-9 mass spectrometer with a toroidal ion trap is designed with a doughnut-shaped trapping volume that can hold up to 400 times more ions. This outstanding result is achieved as the radius is reduced to one-fifth of a conventional laboratory ion trap while maintaining the ion capacity.

Miniature vacuum system
The purpose of using the vacuum is to eliminate background signal and to avoid intermolecular collision events, thereby providing a long mean free path for the ions. The vacuum system, including the vacuum pumps and the vacuum manifold with its various interfaces, is often the heaviest part of a mass spectrometer and consumes the most power. In the case of TOF, if the length of the drift region is decreased, the pressure inside the region can be operated at a higher value, because a collision-free region is still maintained over the short traveling distance of the ions. As a result, the vacuum system requires less power to run.
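The trade-off described in the last paragraph can be put into rough numbers. The sketch below uses the standard kinetic-theory estimate of the mean free path, λ = k_B·T / (√2·π·d²·p), and simply requires λ to exceed the drift length; the gas temperature, collision diameter and safety factor are illustrative assumptions, not values from the text.

import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # assumed gas temperature, K
D = 3.7e-10             # assumed collision diameter (N2-like), m

def mean_free_path(pressure_pa):
    """Kinetic-theory mean free path at pressure `pressure_pa` (Pa)."""
    return K_B * T / (math.sqrt(2) * math.pi * D**2 * pressure_pa)

def max_pressure_for_drift(drift_length_m, safety=10.0):
    """Highest pressure (Pa) at which the mean free path still exceeds
    `safety` times the drift length, so most ions fly collision-free."""
    return K_B * T / (math.sqrt(2) * math.pi * D**2 * safety * drift_length_m)

for L in (1.0, 0.1, 0.05):          # full-size versus miniature drift tubes, m
    p = max_pressure_for_drift(L)
    print(f"drift {L*100:4.0f} cm -> max ~{p:.2e} Pa ({p/133.3:.1e} Torr)")

Shortening the drift tube from 1 m to 5 cm raises the tolerable pressure by a factor of about twenty in this simple model, which is the effect the text describes.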
For a trap-type mass analyzer, because the ions are trapped in the device for long periods and the accumulated trajectory length is much longer than the size of the mass analyzer, reducing the size of the mass analyzer may not directly affect the adequate operating pressure. Miniature rough–turbo pump configurations similar to lab-scale instruments have been developed to be compatible with MMS. For high-vacuum pumping, turbomolecular pumps have also been upgraded. A Thermo Fisher Orbitrap used three turbo pumps in LC-MS mode to achieve a vacuum below 10−10 torr. Recently, a turbo pump from Creare Inc. weighing only 500 g and needing less than 18 W of power to run has become available. The pump can provide an ultimate vacuum below 10−8 torr, which is much lower than the operating pressure necessary for an MMS.

The leading research groups, producers and applications
One of the leading academic groups creating ion-trap MMS is that of Prof. Graham Cooks and his associate Professor Zheng Ouyang at Purdue University. They have built a series of miniature mass spectrometers based on the quadrupole ion trap, called Mini 10, Mini 11, and Mini 12. The group used the Mini 10 mass spectrometer, weighing 10 kg, to analyze proteins, peptides, and alkaloids in complex plant materials with electrospray ionization (ESI) and paper spray ionization. The group used a low radio frequency with resonant ion ejection to increase the mass range up to 17,000 Da proteins. For interfacing an ESI source with the MMS, a 10 cm stainless steel capillary was fabricated to transfer the ions directly into the vacuum manifold. The resulting high pressure of 20 mTorr, which is several orders of magnitude higher than that used in lab-scale mass spectrometers, is compensated for by using the pressure-tolerant rectilinear ion trap. One of the key components of this MMS is the commercial turbo pump, and the MS can be operated at 10−3 torr. To overcome the problem of continuous sample introduction with a small pump, the group developed a technique called discontinuous atmospheric pressure introduction (DAPI). This technique performs direct chemical analysis without sample pretreatment and enables the coupling of miniature mass spectrometers to atmospheric pressure ionization sources, including ESI, atmospheric pressure chemical ionization (APCI), and various ambient ionization sources. The ions are transferred from the ionization source, held at a pinch valve, and injected into the MS periodically. The performance of the hand-held Mini 10 mass spectrometer was upgraded with a negative ion mode for detecting explosive compounds and hazardous materials at the picogram level, which is highly applicable to airport luggage checking. The 8.5 kg Mini 11 and 25 kg Mini 12 can produce resolved mass spectra up to m/z 600, a range that makes them useful for studying metabolites, lipids, and other small molecules. The group also developed and incorporated a digital microfluidic platform into the MMS, with an application to extracting and quantifying drugs in urine. The Mini 12 can perform MS5 and directly analyze such complex samples as whole blood, untreated food, and environmental samples, without sample preparation or chromatographic separation. 1st Detect introduced the MMS 1000, a cylindrical ion-trap mass spectrometer with MS/MS capability. Its advertised characteristics include a wide mass range (35–450 Da), high resolution (<0.5 Da FWHM), and fast analysis time (>=0.5 s). The inlet flow rate can be high – up to 600 ml/min with no external pumps or carrier gases.
The MMS 1000 incorporates a non-cryogenic pre-concentrator. This coupling enhances the sensitivity by a factor of up to 10^5 with a fast cycle of 30 s. 1st Detect's miniaturized mass spectrometers are used in a range of applications, including homeland security, military, breath analysis, leak detection, and environmental and industrial quality control. The MMS 1000 was originally designed for NASA, for the purpose of monitoring air quality on the International Space Station. 908 Devices introduced a handheld mass spectrometer utilizing high-pressure mass spectrometry, the M908, weighing 2 kg with a solid, liquid, and gas multi-phase detector. Microsaic Systems in Surrey, United Kingdom, develops single quadrupole mass spectrometers called the 3500 MiD and 4000 MiD. These mass analyzers are used to support pharmaceutical process chemistry. Several other MMS instruments have also been fabricated using ion trap mass analyzers, including the Tridion-9 GC-MS from Torion Inc., now part of PerkinElmer (American Fork, Utah), the GC/QIT from the Jet Propulsion Laboratory, and the Chemsense 600 from Griffin Analytical Technology LLC (West Lafayette, Indiana). Another example is Girguis at Harvard University, who built an MMS based on existing underwater mass spectrometers (UMS) that can operate underwater to study the influence of microbes on the methane and hydrogen content of the ocean. He worked with a mechanical engineer to package a commercial quadrupole mass analyzer from Stanford Research Systems, a Pfeiffer HiPace 80 turbopump, and a custom gas extractor into a 25 cm × 90 cm cylinder. The total cost is about $15,000. The Analytical Instrumentation Research Institute in Korea also developed a palm-portable mass spectrometer (PPMS). Its size and weight are reduced to 1.54 L and 1.48 kg, respectively, and it uses only 5 W of power. The PPMS is based on four parallel disk ion traps, a small ion getter pump, and a micro-computer. The PPMS can scan ion masses of up to m/z 300 and detect ppm concentrations of organic gases diluted in air. The Harsh-Environment Mass Spectrometry Society holds a biannual workshop that focuses on in-situ mass spectrometry in extreme environments, such as the deep ocean, volcano craters, or outer space, which require high reliability, autonomous or remote operation, and ruggedness with minimum size, weight, and power. The archives of the workshop include roughly 100 presentations focusing on the design and application of miniature mass spectrometers. For example, at the 8th Harsh Environment Mass Spectrometry Workshop, a group of scientists presented their study on the use of lightweight MS-based instrumentation and small unmanned aerial vehicle (UAV) platforms for in-situ volcanic plume analysis at the Turrialba and Arenal volcanoes (Costa Rica). Mini mass spectrometers relying on a miniature Transpector quadrupole with 18 mm rods for mTorr pressure operation, a miniature turbomolecular drag pump, and assets such as the small, multi-parameter, battery-powered sensor suite MiniGas embedded with a micro-PC control system and a telemetry system were integrated in an aircraft to acquire a 4D image of an erupting volcanic plume.

References
Mass spectrometry
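The resonance-ejection mass-range extension mentioned above for the Mini series can be illustrated with the standard Mathieu-parameter relation for a 3D quadrupole ion trap. The geometry and drive values used below are illustrative assumptions only, not specifications of any instrument named in this article; the sketch simply shows how lowering the RF frequency and the ejection q pushes the accessible m/z range upward.

import math

E = 1.602176634e-19     # elementary charge, C
DA = 1.66053907e-27     # atomic mass unit, kg

def mz_max(v_rf, f_rf_hz, r0, z0, q_eject):
    """Highest m/z (Da per charge) reachable by a 3D quadrupole ion trap,
    from the Mathieu parameter q_z = 8 e V / (m (r0^2 + 2 z0^2) Omega^2)."""
    omega = 2 * math.pi * f_rf_hz
    m_per_charge = 8 * E * v_rf / (q_eject * (r0**2 + 2 * z0**2) * omega**2)
    return m_per_charge / DA

# Assumed (illustrative) geometry and drive: r0 = 5 mm, z0 = 4 mm, 4 kV RF.
r0, z0, v_rf = 5e-3, 4e-3, 4000.0
print(mz_max(v_rf, 1.0e6, r0, z0, q_eject=0.908))   # boundary ejection, ~1.5e3 Da
print(mz_max(v_rf, 0.6e6, r0, z0, q_eject=0.25))    # low RF + resonance ejection, ~1.5e4 Da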
Miniature mass spectrometer
Physics,Chemistry
3,521
66,516,395
https://en.wikipedia.org/wiki/Herdecovirus
Herdecovirus is a subgenus of viruses in the genus Deltacoronavirus, consisting of a single species, Night heron coronavirus HKU19. References Virus subgenera Deltacoronaviruses
Herdecovirus
Biology
43
15,417,747
https://en.wikipedia.org/wiki/SPZ1
Spermatogenic leucine zipper protein 1 is a protein that in humans is encoded by the SPZ1 gene. References Further reading
SPZ1
Chemistry
29
2,845,520
https://en.wikipedia.org/wiki/Square%20planar%20molecular%20geometry
In chemistry, the square planar molecular geometry describes the stereochemistry (spatial arrangement of atoms) that is adopted by certain chemical compounds. As the name suggests, molecules of this geometry have their atoms positioned at the corners of a square around a central atom, all in the same plane.

Examples
Numerous compounds adopt this geometry, examples being especially numerous for transition metal complexes. The noble gas compound xenon tetrafluoride adopts this structure as predicted by VSEPR theory. The geometry is prevalent for transition metal complexes with d8 configuration, which includes Rh(I), Ir(I), Pd(II), Pt(II), and Au(III). Notable examples include the anticancer drugs cisplatin, [PtCl2(NH3)2], and carboplatin. Many homogeneous catalysts are square planar in their resting state, such as Wilkinson's catalyst and Crabtree's catalyst. Other examples include Vaska's complex and Zeise's salt. Certain ligands (such as porphyrins) stabilize this geometry.

Splitting of d-orbitals
A general d-orbital splitting diagram for square planar (D4h) transition metal complexes can be derived from the general octahedral (Oh) splitting diagram, in which the dz2 and the dx2−y2 orbitals are degenerate and higher in energy than the degenerate set of dxy, dxz and dyz orbitals. When the two axial ligands are removed to generate a square planar geometry, the dz2 orbital is driven lower in energy, as electron–electron repulsion with ligands on the z-axis is no longer present. However, for purely σ-donating ligands the dz2 orbital is still higher in energy than the dxy, dxz and dyz orbitals because of the torus-shaped lobe of the dz2 orbital, which bears electron density on the x- and y-axes and therefore interacts with the filled ligand orbitals. The dxy, dxz and dyz orbitals are generally presented as degenerate, but they in fact split into two different energy levels with respect to the irreducible representations of the point group D4h. Their relative ordering depends on the nature of the particular complex. Furthermore, the splitting of d-orbitals is perturbed by π-donating ligands, in contrast to octahedral complexes. In the square planar case, strongly π-donating ligands can cause the dxz and dyz orbitals to be higher in energy than the dz2 orbital, whereas in the octahedral case π-donating ligands affect only the magnitude of the d-orbital splitting and the relative ordering of the orbitals is conserved.

See also
AXE method
Molecular geometry

References

External links
3D Chem – Chemistry, Structures, and 3D Molecules
IUMSC – Indiana University Molecular Structure Center
Interactive molecular examples for point groups – Coordination numbers and complex ions

Molecular geometry
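As a simple illustration of the geometry (not tied to any particular compound), the following sketch places four ligands at the corners of a square around a central atom at the origin and confirms the cis and trans L–M–L angles of 90° and 180°.

import math

# Unit metal-ligand distance; ligands at the corners of a square in the xy-plane.
ligands = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]

def angle_deg(a, b):
    """Angle at the central atom (origin) between ligands a and b, in degrees."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.degrees(math.acos(dot / (na * nb)))

for i in range(len(ligands)):
    for j in range(i + 1, len(ligands)):
        print(f"L{i+1}-M-L{j+1}: {angle_deg(ligands[i], ligands[j]):.0f} deg")
# cis pairs give 90 deg, the two trans pairs give 180 deg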
Square planar molecular geometry
Physics,Chemistry
617
11,464,110
https://en.wikipedia.org/wiki/Secondary%20measure
In mathematics, the secondary measure associated with a measure of positive density ρ when there is one, is a measure of positive density μ, turning the secondary polynomials associated with the orthogonal polynomials for ρ into an orthogonal system. Introduction Under certain assumptions, it is possible to obtain the existence of a secondary measure and even to express it. For example, this can be done when working in the Hilbert space L2([0, 1], R, ρ) with in the general case, or: when ρ satisfies a Lipschitz condition. This application φ is called the reducer of ρ. More generally, μ et ρ are linked by their Stieltjes transformation with the following formula: in which c1 is the moment of order 1 of the measure ρ. Secondary measures and the theory around them may be used to derive traditional formulas of analysis concerning the Gamma function, the Riemann zeta function, and the Euler–Mascheroni constant. They have also allowed the clarification of various integrals and series, although this tends to be difficult a priori. Finally they make it possible to solve integral equations of the form where g is the unknown function, and lead to theorems of convergence towards the Chebyshev and Dirac measures. The broad outlines of the theory Let ρ be a measure of positive density on an interval I and admitting moments of any order. From this, a family {Pn} of orthogonal polynomials for the inner product induced by ρ can be created. Let {Qn} be the sequence of the secondary polynomials associated with the family P. Under certain conditions there is a measure for which the family Q is orthogonal. This measure, which can be clarified from ρ, is called a secondary measure associated initial measure ρ. When ρ is a probability density function, a sufficient condition that allows μ to be a secondary measure associated with ρ while admitting moments of any order is that its Stieltjes Transformation is given by an equality of the type where a is an arbitrary constant and c1 indicates the moment of order 1 of ρ. For a = 1, the measure known as secondary can be obtained. For n ≥ 1 the norm of the polynomial Pn for ρ coincides exactly with the norm of the secondary polynomial associated Qn when using the measure μ. In this paramount case, and if the space generated by the orthogonal polynomials is dense in L2(I, R, ρ), the operator Tρ defined by creating the secondary polynomials can be furthered to a linear map connecting space L2(I, R, ρ) to L2(I, R, μ) and becomes isometric if limited to the hyperplane Hρ of the orthogonal functions with P0 = 1. For unspecified functions square integrable for ρ a more general formula of covariance may be obtained: The theory continues by introducing the concept of reducible measure, meaning that the quotient ρ/μ is element of L2(I, R, μ). The following results are then established: The reducer φ of ρ is an antecedent of ρ/μ for the operator Tρ. (In fact the only antecedent which belongs to Hρ). For any function square integrable for ρ, there is an equality known as the reducing formula: . The operator defined on the polynomials is prolonged in an isometry Sρ linking the closure of the space of these polynomials in L2(I, R, ρ2μ−1) to the hyperplane Hρ provided with the norm induced by ρ. Under certain restrictive conditions the operator Sρ acts like the adjoint of Tρ for the inner product induced by ρ. 
Finally the two operators are also connected, provided the images in question are defined, by the fundamental formula of composition: Case of the Lebesgue measure and some other examples The Lebesgue measure on the standard interval [0, 1] is obtained by taking the constant density ρ(x) = 1. The associated orthogonal polynomials are called Legendre polynomials and can be clarified by The norm of Pn is worth The recurrence relation in three terms is written: The reducer of this measure of Lebesgue is given by The associated secondary measure is then clarified as . If we normalize the polynomials of Legendre, the coefficients of Fourier of the reducer φ related to this orthonormal system are null for an even index and are given by for an odd index n. The Laguerre polynomials are linked to the density ρ(x) = e−x on the interval I = [0, ∞). They are clarified by and are normalized. The reducer associated is defined by The coefficients of Fourier of the reducer φ related to the Laguerre polynomials are given by This coefficient Cn(φ) is no other than the opposite of the sum of the elements of the line of index n in the table of the harmonic triangular numbers of Leibniz. The Hermite polynomials are linked to the Gaussian density on I = R. They are clarified by and are normalized. The reducer associated is defined by The coefficients of Fourier of the reducer φ related to the system of Hermite polynomials are null for an even index and are given by for an odd index n. The Chebyshev measure of the second form. This is defined by the density on the interval [0, 1]. It is the only one which coincides with its secondary measure normalised on this standard interval. Under certain conditions it occurs as the limit of the sequence of normalized secondary measures of a given density. Examples of non-reducible measures Jacobi measure on (0, 1) of density Chebyshev measure on (−1, 1) of the first form of density Sequence of secondary measures The secondary measure μ associated with a probability density function ρ has its moment of order 0 given by the formula where c1 and c2 indicating the respective moments of order 1 and 2 of ρ. This process can be iterated by 'normalizing' μ while defining ρ1 = μ/d0 which becomes in its turn a density of probability called naturally the normalised secondary measure associated with ρ. From ρ1, a secondary normalised measure ρ2 can be created. This can be iterated to obtain ρ3 from ρ2 and so on. Therefore, a sequence of successive secondary measures, created from ρ0 = ρ, is such that ρn+1 that is the secondary normalised measure deduced from ρn It is possible to clarify the density ρn by using the orthogonal polynomials Pn for ρ, the secondary polynomials Qn and the reducer associated φ. This gives the formula The coefficient is easily obtained starting from the leading coefficients of the polynomials Pn−1 and Pn. The reducer φn associated with ρn, as well as the orthogonal polynomials corresponding to ρn, can also be clarified. The evolution of these densities when the index tends towards the infinite can be related to the support of the measure on the standard interval [0, 1]: Let be the classic recurrence relation in three terms. If then the sequence {ρn} converges completely towards the Chebyshev density of the second form . These conditions about limits are checked by a very broad class of traditional densities. A derivation of the sequence of secondary measures and convergence can be found in. 
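A small numeric sketch of the objects in this section, for the Lebesgue case discussed above: it computes the secondary polynomials Q_n for the first shifted Legendre polynomials using the usual definition Q_n(x) = ∫₀¹ (P_n(x) − P_n(t))/(x − t) dt, by exact polynomial division. The shifted Legendre coefficients are standard; the coefficient convention (listed by increasing degree) and everything else in the snippet are illustrative choices, not the article's notation.

# Secondary polynomials Q_n for the Lebesgue measure on [0, 1] (density 1).
from fractions import Fraction as F

shifted_legendre = {
    0: [F(1)],
    1: [F(-1), F(2)],                 # 2x - 1
    2: [F(1), F(-6), F(6)],           # 6x^2 - 6x + 1
    3: [F(-1), F(12), F(-30), F(20)], # 20x^3 - 30x^2 + 12x - 1
}

def secondary(coeffs):
    """Return the coefficients (in x) of Q_n.

    (P(x) - P(t)) / (x - t) = sum_j t^j * (sum_{k>j} a_k x^(k-1-j));
    integrating t^j over [0, 1] contributes a factor 1/(j+1)."""
    n = len(coeffs) - 1
    q = [F(0)] * max(n, 1)
    for j in range(n):                        # power of t
        for k in range(j + 1, n + 1):         # contributes a_k x^(k-1-j)
            q[k - 1 - j] += coeffs[k] / (j + 1)
    return q

for n, p in shifted_legendre.items():
    print(n, secondary(p))
# Q_0 = 0, Q_1 = 2, Q_2 = 6x - 3, Q_3 = 20x^2 - 20x + 11/3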
Equinormal measures
Two measures leading to the same normalised secondary density are called equinormal. It is remarkable that the elements of a given equinormal class which have the same moment of order 1 are connected by a homotopy. More precisely, if the density function ρ has its moment of order 1 equal to c1, then the densities equinormal with ρ are given by a one-parameter formula, with t ranging over an interval containing ]0, 1]. If μ is the secondary measure of ρ, that of ρt will be tμ. The reducer of ρt can be expressed in terms of G(x), the reducer of μ. Orthogonal polynomials for the measure ρt can be made explicit for n ≥ 1 by a formula involving Qn, the secondary polynomial associated with Pn. It is also remarkable that, in the sense of distributions, the limit of ρt as t tends towards 0 from above is the Dirac measure concentrated at c1. For example, the equinormal densities with the Chebyshev measure of the second form are defined by a formula with t ranging over ]0, 2]. The value t = 2 gives the Chebyshev measure of the first form.

Applications
In the formulas below, G is Catalan's constant, γ is Euler's constant, β2n is the Bernoulli number of order 2n, H2n+1 is the harmonic number of order 2n+1 and Ei is the exponential integral function. A subscripted notation is used for the 2-periodic function coinciding with a given function on (−1, 1). If the measure ρ is reducible and φ is the associated reducer, a corresponding integral identity holds. If the measure ρ is reducible with μ the associated reducer, then if f is square integrable for μ, and if g is square integrable for ρ and is orthogonal with P0 = 1, an equivalence holds in which c1 indicates the moment of order 1 of ρ and Tρ the operator defined above. In addition, the sequence of secondary measures has applications in quantum mechanics, where it gives rise to the sequence of residual spectral densities for specialized Pauli–Fierz Hamiltonians. This also provides a physical interpretation for the sequence of secondary measures.

See also
Orthogonal polynomials
Probability

References

External links
Personal page of Roland Groux about the theory of secondary measures

Measures (measure theory)
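The key displayed formulas of this article were lost in extraction. As a hedged reconstruction under the usual conventions (an assumption about notation, not a restoration of the original text), the Stieltjes transforms of ρ and of its secondary measure μ are linked by a continued-fraction step, and the Lebesgue case on [0, 1] can be written out explicitly:

    S_\rho(z) = \int_I \frac{\rho(t)}{z-t}\,dt, \qquad
    S_\mu(z) = z - c_1 - \frac{1}{S_\rho(z)}, \qquad
    c_1 = \int_I t\,\rho(t)\,dt .

    \text{Writing } \varphi(x) = 2\,\mathrm{p.v.}\!\int_I \frac{\rho(t)}{x-t}\,dt :
    \qquad
    \mu(x) = \frac{\rho(x)}{\tfrac{1}{4}\varphi(x)^2 + \pi^2 \rho(x)^2} .

    \text{For } \rho \equiv 1 \text{ on } [0,1]: \quad
    \varphi(x) = 2\ln\frac{x}{1-x}, \qquad
    \mu(x) = \frac{1}{\ln^2\frac{x}{1-x} + \pi^2} .

The second line follows from the boundary values of S_ρ via the Sokhotski–Plemelj formula; the Lebesgue example is simply the constant-density special case.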
Secondary measure
Physics,Mathematics
1,981
63,083,999
https://en.wikipedia.org/wiki/C9H20O2
{{DISPLAYTITLE:C9H20O2}} The molecular formula C9H20O2 may refer to: Dibutoxymethane 1,9-Nonanediol
C9H20O2
Chemistry
43
37,339,852
https://en.wikipedia.org/wiki/Haemophilus%20virus%20HP2
Haemophilus virus HP2 is a virus of the family Myoviridae, genus Hpunavirus. References Myoviridae
Haemophilus virus HP2
Biology
31
323,413
https://en.wikipedia.org/wiki/Alpenglow
Alpenglow (from ; ) is an optical phenomenon that appears as a horizontal reddish glow near the horizon opposite to the Sun when the solar disk is just below the horizon. Description Strictly speaking, alpenglow refers to indirect sunlight reflected or diffracted by the atmosphere after sunset or before sunrise. This diffuse illumination creates soft shadows in addition to the reddish color. The term is also used informally to include direct illumination by the reddish light of the rising or setting sun, with sharply defined shadows. Reflected sunlight When the Sun is below the horizon, sunlight has no direct path to reach a mountain. Unlike the direct sunlight around sunrise or sunset, the light that causes alpenglow is reflected off airborne precipitation, ice crystals, or particulates in the lower atmosphere. These conditions differentiate between direct sunlight around sunrise or sunset and alpenglow. The term is generally confused to be any sunrise or sunset light reflected off the mountains or clouds, but alpenglow in the strict sense of the word is not direct sunlight and is only visible after sunset or before sunrise. After sunset, if mountains are absent, aerosols in the eastern sky can be illuminated in a similar way by the remaining scattered reddish light above the fringe of Earth's shadow. This backscattered light produces a pinkish band opposite of the Sun's direction, called the Belt of Venus. Direct sunlight Alpenglow in a looser sense may refer to any illumination by the rosy or reddish light of the setting or rising Sun. See also Golden hour (photography) Belt of Venus References Atmospheric optical phenomena
Alpenglow
Physics
324
36,443,194
https://en.wikipedia.org/wiki/Clavulina%20dicymbetorum
Clavulina dicymbetorum is a species of coral fungus in the family Clavulinaceae. Described as new to science in 2005, it occurs in Guyana. References External links Fungi described in 2005 Fungi of Guyana dicymbetorum Fungus species
Clavulina dicymbetorum
Biology
55
28,185,807
https://en.wikipedia.org/wiki/ECryptfs
eCryptfs (enterprise cryptographic filesystem) is a package of disk encryption software for Linux. Its implementation is a POSIX-compliant filesystem-level encryption layer, aiming to offer functionality similar to that of GnuPG at the operating system level, and it has been part of the Linux kernel since version 2.6.19.

Details
The eCryptfs package has been included in Ubuntu since version 9.04 to implement Ubuntu's encrypted home directory feature, but that feature is now deprecated. eCryptfs is derived from Erez Zadok's Cryptfs. It uses a variant of the OpenPGP file format for encrypted data, extended to allow random access, storing cryptographic metadata (including a per-file randomly generated session key) with each individual file. It also encrypts file and directory names, which makes them internally longer (by about one third on average), because the encrypted names must be uuencoded to eliminate characters that are not allowed in file names. This lowers the maximum usable name length, in bytes, of the original file system entry, depending on the underlying file system (for example, this can lead to four times fewer characters for Asian UTF-8 file names).

See also
Disk encryption
Disk encryption software
Comparison of disk encryption software
EncFS
dm-crypt
FileVault
Encrypting File System

References

External links
ArchWiki: System Encryption with eCryptfs
eCryptfs FAQ
Cryptfs: A Stackable Vnode Level Encryption File System (Zadok et al., 1999)

Cryptographic software
Disk encryption
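The effect on name lengths can be illustrated with a toy calculation. The sketch below assumes a 255-byte name limit on the underlying file system, a 16-byte cipher block for name encryption, and a uuencode/base64-style expansion of roughly 4/3; the exact overhead in eCryptfs depends on its name prefix and cipher settings, so all of these parameters are assumptions and the resulting numbers are indicative only.

import math

NAME_LIMIT = 255        # assumed on-disk limit for a file name, in bytes
BLOCK = 16              # assumed cipher block size used for name encryption
OVERHEAD = 0            # assumed fixed prefix added to encrypted names, bytes

def encoded_length(plain_bytes):
    """Rough size of an encrypted-then-encoded name: pad to the cipher block,
    add a fixed prefix, then expand by 4/3 for the ASCII-safe encoding."""
    padded = math.ceil(plain_bytes / BLOCK) * BLOCK + OVERHEAD
    return math.ceil(padded * 4 / 3)

def max_plain_bytes():
    """Longest original name (in bytes) whose encoded form still fits."""
    n = 0
    while encoded_length(n + 1) <= NAME_LIMIT:
        n += 1
    return n

limit = max_plain_bytes()
print("usable original name length:", limit, "bytes")
print("ASCII characters:", limit)                      # 1 byte per character
print("typical CJK characters (UTF-8):", limit // 3)   # 3 bytes per character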
ECryptfs
Mathematics
338
165,450
https://en.wikipedia.org/wiki/Phytophthora%20infestans
Phytophthora infestans is an oomycete or water mold, a fungus-like microorganism that causes the serious potato and tomato disease known as late blight or potato blight. Early blight, caused by Alternaria solani, is also often called "potato blight". Late blight was a major culprit in the 1840s European, the 1845–1852 Irish, and the 1846 Highland potato famines. The organism can also infect some other members of the Solanaceae. The pathogen is favored by moist, cool environments: sporulation is optimal at in water-saturated or nearly saturated environments, and zoospore production is favored at temperatures below . Lesion growth rates are typically optimal at a slightly warmer temperature range of . Etymology The genus name Phytophthora comes from the Greek (), meaning "plant" – plus the Greek (), meaning "decay, ruin, perish". The species name infestans is the present participle of the Latin verb , meaning "attacking, destroying", from which the word "to infest" is derived. The name Phytophthora infestans was coined in 1876 by the German mycologist Heinrich Anton de Bary (1831–1888). Life cycle, signs and symptoms The asexual life cycle of Phytophthora infestans is characterized by alternating phases of hyphal growth, sporulation, sporangia germination (either through zoospore release or direct germination, i.e. germ tube emergence from the sporangium), and the re-establishment of hyphal growth. There is also a sexual cycle, which occurs when isolates of opposite mating type (A1 and A2, see below) meet. Hormonal communication triggers the formation of the sexual spores, called oospores. The different types of spores play major roles in the dissemination and survival of P. infestans. Sporangia are spread by wind or water and enable the movement of P. infestans between different host plants. The zoospores released from sporangia are biflagellated and chemotactic, allowing further movement of P. infestans on water films found on leaves or soils. Both sporangia and zoospores are short-lived, in contrast to oospores which can persist in a viable form for many years. People can observe P. infestans produce dark green, then brown then black spots on the surface of potato leaves and stems, often near the tips or edges, where water or dew collects. The sporangia and sporangiophores appear white on the lower surface of the foliage. As for tuber blight, the white mycelium often shows on the tubers' surface. Under ideal conditions, P. infestans completes its life cycle on potato or tomato foliage in about five days. Sporangia develop on the leaves, spreading through the crop when temperatures are above and humidity is over 75–80% for 2 days or more. Rain can wash spores into the soil where they infect young tubers, and the spores can also travel long distances on the wind. The early stages of blight are easily missed. Symptoms include the appearance of dark blotches on leaf tips and plant stems. White mold will appear under the leaves in humid conditions and the whole plant may quickly collapse. Infected tubers develop grey or dark patches that are reddish brown beneath the skin, and quickly decay to a foul-smelling mush caused by the infestation of secondary soft bacterial rots. Seemingly healthy tubers may rot later when in store. P. infestans survives poorly in nature apart from on its plant hosts. Under most conditions, the hyphae and asexual sporangia can survive for only brief periods in plant debris or soil, and are generally killed off during frosts or very warm weather. 
The exceptions involve oospores, and hyphae present within tubers. The persistence of viable pathogen within tubers, such as those that are left in the ground after the previous year's harvest or left in cull piles is a major problem in disease management. In particular, volunteer plants sprouting from infected tubers are thought to be a major source of inoculum (or propagules) at the start of a growing season. This can have devastating effects by destroying entire crops. Mating types The mating types are broadly divided into A1 and A2. Until the 1980s populations could only be distinguished by virulence assays and mating types, but since then more detailed analysis has shown that mating type and genotype are substantially decoupled. These types each produce a mating hormone of their own. Pathogen populations are grouped into clonal lineages of these mating types and includes: A1 A1 produces a mating hormone, a diterpene α1. Clonal lineages of A1 include: CN-1, -2, -4, -5, -6, -7, -8 – mtDNA haplotype Ia, China in 1996–97 – Ia, China, 1996–97 – Ia, China, 2004 – IIb, China, 2000 & 2002 – IIa, China, 2004–09 – Ia/IIb, China, 2004–09 – (only presumed to be A1), mtDNA haplo Ia subtype , Japan, Philippines, India, China, Malaysia, Nepal, present some time before 1950 – Ia, India, Nepal, 1993 – Ia, India, 1993 JP-2/SIB-1/RF006 – mtDNA haplo IIa, distinguishable by RG57, intermediate level of metalaxyl resistance, Japan, China, Korea, Thailand, 1996–present – IIa, distinguishable by RG57, intermediate level of metalaxyl resistance, Japan, 1996–present – IIa, distinguishable by RG57, intermediate level of metalaxyl resistance, Japan, 1996–present sensu Zhang (not to be confused with #KR-1 sensu Gotoh below) – IIa, Korea, 2002–04 KR_1_A1 – mtDNA haplo unknown, Korea, 2009–16 – Ia, China, 2004 – Ia, India, Nepal, 1993, 1996–97 – Ia, Nepal, 1997 – Ia, Nepal, 1999–2000 – (Also A2, see #the A2 type of NP2 below) Ia, Nepal, 1999–2000 (not to be confused with #US-1 below) – Ib, Nepal, 1999–2000 (not to be confused with #NP3/US-1 above) – Ib, China, India, Nepal, Japan, Taiwan, Thailand, Vietnam, 1940–2000 – Ia, Nepal, 1999–2000 – mtDNA haplo unknown, Nepal, 1999–2000 – IIb, Taiwan, Korea, Vietnam, 1998–2016 – IIb, China, 2002 & 2004 – IIa, Korea, 2003–04 – Ia, Indonesia, 2016–19 A2 Discovered by John Niederhauser in the 1950s, in the Toluca Valley in Central Mexico, while working for the Rockefeller Foundation's Mexican Agriculture Program. Published in Niederhauser 1956. A2 produces a mating hormone α2. Clonal lineages of A2 include: CN02 – See #13_A2/CN02 below – with mtDNA haplotype H-20 – IIa, Japan, Korea, Indonesia, late 1980s–present sensu Gotoh (not to be confused with #KR-1 sensu Zhang above) – IIa, differs from JP-1 by one RG57 band, Korea, 1992 – mtDNA haplo unknown, Korea, 2009–16 – Ia, China, 2001 – (Also A1, see #the A1 type of NP2 above) Ia, Nepal, 1999–2000 – Ib, Nepal, 1999–2000 – Ia, Nepal, 1999–2000 – Ia, Thailand, China, Nepal, 1994 & 1997 Unknown – Ib, India, 1996–2003 – Brazil – IIa, Korea, 2002–03 /CN02 – Ia, China, India, Bangladesh, Nepal, Pakistan, Myanmar, 2005–19 Self-fertile A self-fertile type was present in China between 2009 and 2013. Physiology is the in P. infestans. Hosts respond with autophagy upon detection of this elicitor, Liu et al. 2005 finding this to be the only alternative to mass hypersensitivity leading to mass programmed cell death. Genetics P. infestans is diploid, with about 8–10 chromosomes, and in 2009 scientists completed the sequencing of its genome. 
The genome was found to be considerably larger (240 Mbp) than that of most other Phytophthora species whose genomes have been sequenced; P. sojae has a 95 Mbp genome and P. ramorum had a 65 Mbp genome. About 18,000 genes were detected within the P. infestans genome. It also contained a diverse variety of transposons and many gene families encoding for effector proteins that are involved in causing pathogenicity. These proteins are split into two main groups depending on whether they are produced by the water mold in the symplast (inside plant cells) or in the apoplast (between plant cells). Proteins produced in the symplast included RXLR proteins, which contain an arginine-X-leucine-arginine (where X can be any amino acid) sequence at the amino terminus of the protein. Some RXLR proteins are avirulence proteins, meaning that they can be detected by the plant and lead to a hypersensitive response which restricts the growth of the pathogen. P. infestans was found to encode around 60% more of these proteins than most other Phytophthora species. Those found in the apoplast include hydrolytic enzymes such as proteases, lipases and glycosylases that act to degrade plant tissue, enzyme inhibitors to protect against host defence enzymes and necrotizing toxins. Overall the genome was found to have an extremely high repeat content (around 74%) and to have an unusual gene distribution in that some areas contain many genes whereas others contain very few. The pathogen shows high allelic diversity in many isolates collected in Europe. This may be due to widespread trisomy or polyploidy in those populations. Research Study of P. infestans presents sampling difficulties in the United States. It occurs only sporadically and usually has significant founder effects due to each epidemic starting from introduction of a single genotype. Origin and diversity The highlands of central Mexico are considered by many to be the center of origin of P. infestans, although others have proposed its origin to be in the Andes, which is also the origin of potatoes. A recent study evaluated these two alternate hypotheses and found conclusive support for central Mexico being the center of origin. Support for Mexico specifically the Toluca Valley comes from multiple observations including the fact that populations are genetically most diverse in Mexico, late blight is observed in native tuber-bearing Solanum species, populations of the pathogen are in Hardy–Weinberg equilibrium, the two mating (see § Mating types above) types occur in a 1:1 ratio, and detailed phylogeographic and evolutionary studies. Furthermore, the closest relatives of P. infestans, namely P. mirabilis and P. ipomoeae are endemic to central Mexico. On the other hand, the only close relative found in South America, namely P. andina, is a hybrid that does not share a single common ancestor with P. infestans. Finally, populations of P. infestans in South America lack genetic diversity and are clonal. Migrations from Mexico to North America or Europe have occurred several times throughout history, probably linked to the movement of tubers. Until the 1970s, the A2 mating type was restricted to Mexico, but now in many regions of the world both A1 and A2 isolates can be found in the same region. The co-occurrence of the two mating types is significant due to the possibility of sexual recombination and formation of oospores, which can survive the winter. Only in Mexico and Scandinavia, however, is oospore formation thought to play a role in overwintering. 
In other parts of Europe, increasing genetic diversity has been observed as a consequence of sexual reproduction. This is notable since different forms of P. infestans vary in their aggressiveness on potato or tomato, in sporulation rate, and sensitivity to fungicides. Variation in such traits also occurs in North America, however importation of new genotypes from Mexico appears to be the predominant cause of genetic diversity, as opposed to sexual recombination within potato or tomato fields. In 1976 – due to a summer drought in Europe – there was a potato production shortfall and so eating potatoes were imported to fill the shortfall. It is thought that this was the vehicle for mating type A2 to reach the rest of the world. In any case, there had been little diversity, consisting of the US-1 strain, and of that only one type of: mating type, mtDNA, restriction fragment length polymorphism, and di-locus isozyme. Then in 1980 suddenly greater diversity and A2 appeared in Europe. In 1981 it was found in the Netherlands, United Kingdom, 1985 in Sweden, the early 1990s in Norway and Finland, 1996 in Denmark, and 1999 in Iceland. In the UK new A1 lineages only replaced the old lineage by end of the '80s, and A2 spread even more slowly, with Britain having low levels and Ireland (north and Republic) having none-to-trace detections through the '90s. Many of the strains that appeared outside of Mexico since the 1980s have been more aggressive, leading to increased crop losses. In Europe since 2013 the populations have been tracked by the EuroBlight network (see links below). Some of the differences between strains may be related to variation in the RXLR effectors that are present. Disease management P. infestans is still a difficult disease to control. There are many chemical options in agriculture for the control of damage to the foliage as well as the fruit (for tomatoes) and the tuber (for potatoes). A few of the most common foliar-applied fungicides are Ridomil, a Gavel/SuperTin tank mix, and Previcur Flex. All of the aforementioned fungicides need to be tank mixed with a broad-spectrum fungicide, such as mancozeb or chlorothalonil, not just for resistance management but also because the potato plants will be attacked by other pathogens at the same time. If adequate field scouting occurs and late blight is found soon after disease development, localized patches of potato plants can be killed with a desiccant (e.g. paraquat) through the use of a backpack sprayer. This management technique can be thought of as a field-scale hypersensitive response similar to what occurs in some plant-viral interactions whereby cells surrounding the initial point of infection are killed in order to prevent proliferation of the pathogen. If infected tubers make it into a storage bin, there is a very high risk to the storage life of the entire bin. Once in storage, there is not much that can be done besides emptying the parts of the bin that contain tubers infected with Phytophthora infestans. To increase the probability of successfully storing potatoes from a field where late blight was known to occur during the growing season, some products can be applied just prior to entering storage (e.g., Phostrol). Around the world the disease causes around $6 billion of damage to crops each year. 
Resistant plants Breeding for resistance, particularly in potato plants, has had limited success in part due to difficulties in crossing cultivated potato with its wild relatives, which are the source of potential resistance genes. In addition, most resistance genes work only against a subset of P. infestans isolates, since effective plant disease resistance results only when the pathogen expresses an RXLR effector gene that matches the corresponding plant resistance (R) gene; effector-R gene interactions trigger a range of plant defenses, such as the production of compounds toxic to the pathogen. Potato and tomato varieties vary in their susceptibility to blight. Most early varieties are very vulnerable; they should be planted early so that the crop matures before blight starts (usually in July in the Northern Hemisphere). Many old crop varieties, such as King Edward potato, are also very susceptible but are grown because they are wanted commercially. Maincrop varieties which are very slow to develop blight include Cara, Stirling, Teena, Torridon, Remarka, and Romano. Some so-called resistant varieties can resist some strains of blight and not others, so their performance may vary depending on which are around. These crops have had polygenic resistance bred into them, and are known as "field resistant". New varieties, such as Sarpo Mira and Sarpo Axona, show great resistance to blight even in areas of heavy infestation. Defender is an American cultivar whose parentage includes Ranger Russet and Polish potatoes resistant to late blight. It is a long white-skinned cultivar with both foliar and tuber resistance to late blight. Defender was released in 2004. Genetic engineering may also provide options for generating resistant cultivars. A resistance gene effective against most known strains of blight has been identified from a wild relative of the potato, Solanum bulbocastanum, and introduced by genetic engineering into cultivated varieties of potato. This is an example of cisgenic genetic engineering. Melatonin in the plant/P. infestans co-environment reduces the stress tolerance of the parasite. Reducing inoculum Blight can be controlled by limiting the source of inoculum. Only good-quality seed potatoes and tomatoes obtained from certified suppliers should be planted. Discarded potatoes from the previous season and self-sown tubers can often act as sources of inoculum. Compost, soil or potting medium can be heat-treated to kill oomycetes such as Phytophthora infestans; the recommended sterilisation time for oomycetes is 30 minutes. Environmental conditions There are several environmental conditions that are conducive to P. infestans. One example took place in the United States during the 2009 growing season: with temperatures colder than average for the season and rainfall greater than average, there was a major infestation of tomato plants, specifically in the eastern states. When weather forecasting systems such as BLITECAST indicate that the following conditions occur as the canopy of the crop closes, the use of fungicides is recommended to prevent an epidemic. A Beaumont period is a period of 48 consecutive hours, in at least 46 of which the hourly readings of temperature and relative humidity at a given place have not been less than 10 °C and 75%, respectively. A Smith period is at least two consecutive days on which the minimum temperature is 10 °C or above and on each of which there are at least 11 hours when the relative humidity is greater than 90%.
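The Smith period, in particular, is just a threshold test over hourly weather readings, so the logic is easy to express in software. The following Python sketch is purely illustrative and is not an implementation of BLITECAST or of any official decision support tool; the data layout (consecutive hourly temperature and relative-humidity pairs, 24 per day) and the default thresholds are assumptions made for the example.

```python
# Illustrative check for Smith periods in a season of hourly weather data.
# `hours` is assumed to be a list of (temperature_C, relative_humidity_pct)
# tuples covering consecutive hours, 24 entries per day.

def is_smith_period(day1, day2, temp_min=10.0, rh_threshold=90.0, rh_hours=11):
    """True if two consecutive days each have a minimum temperature at or above
    temp_min and at least rh_hours hours of relative humidity above rh_threshold."""
    for day in (day1, day2):
        min_temp = min(t for t, _ in day)
        humid_hours = sum(1 for _, rh in day if rh > rh_threshold)
        if min_temp < temp_min or humid_hours < rh_hours:
            return False
    return True

def smith_period_starts(hours):
    """Yield the index of the first day of every detected Smith period."""
    days = [hours[i:i + 24] for i in range(0, len(hours) - 23, 24)]
    for i in range(len(days) - 1):
        if is_smith_period(days[i], days[i + 1]):
            yield i
```

In practice, growers rely on the established forecasting services and decision support systems described in the next paragraph rather than on ad hoc scripts like this one.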
The Beaumont and Smith periods have traditionally been used by growers in the United Kingdom, with different criteria developed by growers in other regions. The Smith period has been the preferred system used in the UK since its introduction in the 1970s. Based on these conditions and other factors, several tools have been developed to help growers manage the disease and plan fungicide applications. Often these are deployed as part of decision support systems accessible through web sites or smart phones. Several studies have attempted to develop systems for real-time detection via flow cytometry or microscopy of airborne sporangia collected in air samplers. Whilst these methods show potential to allow detection of sporangia in advance of occurrence of detectable disease symptoms on plants, and would thus be useful in enhancing existing decision support systems, none have been commercially deployed to date. Use of fungicides Fungicides for the control of potato blight are normally used only in a preventative manner, optionally in conjunction with disease forecasting. In susceptible varieties, fungicide applications may sometimes be needed weekly. An early spray is most effective. The choice of fungicide can depend on the nature of local strains of P. infestans. Metalaxyl is a fungicide that was marketed for use against P. infestans, but suffered serious resistance issues when used on its own. In some regions of the world during the 1980s and 1990s, most strains of P. infestans became resistant to metalaxyl, but in subsequent years many populations shifted back to sensitivity. To reduce the occurrence of resistance, it is strongly advised to use single-target fungicides such as metalaxyl along with carbamate compounds. A combination of other compounds is recommended for managing metalaxyl-resistant strains. These include mandipropamid, chlorothalonil, fluazinam, triphenyltin, mancozeb, and others. In the United States, the Environmental Protection Agency has approved oxathiapiprolin for use against late blight. In African smallholder production, fungicide application can be necessary up to once every three days. In organic production In the past, copper(II) sulfate solution (called 'bluestone') was used to combat potato blight. Copper pesticides remain in use on organic crops, both in the form of copper hydroxide and copper sulfate. Given the dangers of copper toxicity, other organic control options that have been shown to be effective include horticultural oils, phosphorous acids, and rhamnolipid biosurfactants, while sprays containing "beneficial" microbes such as Bacillus subtilis or compounds that encourage the plant to produce defensive chemicals (such as knotweed extract) have not performed as well. During the 2008 crop year, many of the potatoes produced in the United Kingdom and certified by the Soil Association as organic were sprayed with a copper pesticide to control potato blight. According to the Soil Association, the total copper that can be applied to organic land is /year. Control of tuber blight Ridging is often used to reduce tuber contamination by blight. This normally involves piling soil or mulch around the stems of the potato plants, meaning the pathogen has farther to travel to get to the tuber. Another approach is to destroy the canopy around five weeks before harvest, using a contact herbicide or sulfuric acid to burn off the foliage. Eliminating infected foliage reduces the likelihood of tuber infection.
Historical impact The first recorded instances of the disease were in the United States, in Philadelphia and New York City in early 1843. Winds then spread the spores, and in 1845 it was found from Illinois to Nova Scotia, and from Virginia to Ontario. It crossed the Atlantic Ocean with a shipment of seed potatoes for Belgian farmers in 1845. The disease was first identified in Europe around Kortrijk, Belgium, in June 1845; it caused the Flemish potato harvest to fail that summer, with yields declining 75–80%, leading to an estimated forty thousand deaths in the locale. All of the potato-growing countries in Europe would be affected within a year. The effect of Phytophthora infestans in Ireland in 1845–52 was one of the factors which caused more than one million to starve to death and forced another two million to emigrate. Most commonly referenced is the Great Irish Famine, during the late 1840s. Implicated in Ireland's fate was the island's disproportionate dependency on a single variety of potato, the Irish Lumper. The lack of genetic variability created a susceptible host population for the organism after potatoes originating in the Chiloé Archipelago replaced the earlier potatoes of Peruvian origin in Europe. During the First World War, all of the copper in Germany was used for shell casings and electric wire and therefore none was available for making copper sulfate to spray potatoes. A major late blight outbreak on potato in Germany therefore went untreated, and the resulting scarcity of potatoes contributed to the deaths from the blockade. Since 1941, Eastern Africa has been suffering potato production losses because of strains of P. infestans from Europe. France, Canada, the United States, and the Soviet Union researched P. infestans as a biological weapon in the 1940s and 1950s. Potato blight was one of more than 17 agents that the United States researched as potential biological weapons before the nation suspended its biological weapons program. Whether a weapon based on the pathogen would be effective is questionable, due to the difficulties in delivering viable pathogen to an enemy's fields, and the role of uncontrollable environmental factors in spreading the disease. Late blight (A2 type) has not yet been detected in Australia and strict biosecurity measures are in place. The disease has been seen in China, India and south-east Asian countries. A large outbreak of P. infestans occurred on tomato plants in the Northeast United States in 2009. In light of the periodic epidemics of P. infestans ever since its first emergence, it may be regarded as a periodically emerging pathogen – or a periodically "re-emerging pathogen". References Further reading External links USAblight A National Web Portal on Late Blight International Potato Center Online Phytophthora bibliography EuroBlight a potato blight network in Europe USDA-BARC Phytophthora infestans page Organic Alternatives for Late Blight Control in Potatoes, from ATTRA Google Map of Tomato Potato Blight Daily Risk across NE USA Species Profile – Late Blight (Phytophthora infestans), National Invasive Species Information Center, United States National Agricultural Library. Lists general information and resources for Late Blight. Continuing education lesson created by The American Phytopathological Society entry on Late Blight by PlantVillage infestans Water mould plant pathogens and diseases Potato diseases Biological agents
Phytophthora infestans
Biology,Environmental_science
5,388
32,337,981
https://en.wikipedia.org/wiki/United%20States%20v.%20Jones%20%282012%29
United States v. Jones, 565 U.S. 400 (2012), was a landmark United States Supreme Court case in which the court held that installing a Global Positioning System (GPS) tracking device on a vehicle and using the device to monitor the vehicle's movements constitutes a search under the Fourth Amendment. In 2004, Antoine Jones was suspected by police in the District of Columbia of drug trafficking. Investigators asked for and received a warrant to attach a GPS tracking device to the underside of Jones's car but then exceeded the warrant's scope in both geography and length of time. The Supreme Court ruled unanimously that this was a search under the Fourth Amendment, although the justices were split 5-4 as to the fundamental reasons behind that conclusion. The majority held that by physically installing the GPS device on Jones's car, the police had committed a trespass against his "personal effects". This trespass, in an attempt to obtain information, constituted a search per se. Background Police investigation and criminal trial Antoine Jones owned a nightclub in the District of Columbia; Lawrence Maynard managed the club. In 2004, a joint Federal Bureau of Investigation (FBI) and Metropolitan Police Department task force began investigating Jones and Maynard for narcotics violations. During the course of the investigation, police installed a Global Positioning System (GPS) device on Jones's wife's Jeep Grand Cherokee. They had received a valid warrant from a judge, but that warrant only covered the District of Columbia and only for a limited time period. The GPS device tracked the vehicle's movements 24 hours a day for four weeks, including in the states surrounding the District of Columbia. This exceeded both the time limit and the geographic reach of the original warrant. The FBI arrested Jones on charges of conspiracy to distribute narcotics in late 2005, based on data about the locations to which the vehicle was tracked, and he filed a motion to exclude the GPS data from the evidence collected against him. Jones was tried in criminal court in late 2006, and a federal jury deadlocked on the conspiracy charge and acquitted him of multiple other counts. The government retried Jones, and in early 2008 the jury returned a guilty verdict on one count of conspiracy to distribute and to possess with intent to distribute five or more kilograms of cocaine and 50 or more grams of cocaine base. He was sentenced to life in prison. Appeal Jones argued that his criminal conviction should be overturned because the use of the GPS tracker violated the Fourth Amendment's protection against unreasonable search and seizure. In 2010, the United States Court of Appeals for the District of Columbia Circuit agreed with Jones and overturned his conviction, holding that the police action was a search because it violated Jones's reasonable expectation of privacy. The D.C. Circuit then denied prosecutors' petition for rehearing en banc (The Supreme Court, 2011 Term — Leading Cases, 126 Harv. L. Rev. 226 (2012), https://harvardlawreview.org/wp-content/uploads/pdfs/vol126_united_states_v_jones.pdf). The Circuit Court's decision was the subject of significant legal debate. In 2007, Judge Richard Posner of the United States Court of Appeals for the Seventh Circuit had reached the opposite conclusion on whether GPS tracking by police was a search under the Fourth Amendment. Federal prosecutors appealed the Circuit Court decision. In June 2011, the Supreme Court granted certiorari to resolve two questions.
The first question was "Whether the warrantless use of a tracking device on respondent's vehicle to monitor its movements on public streets violated the Fourth Amendment." The second question was "Whether the government violated respondent's Fourth Amendment rights by installing the GPS tracking device on his vehicle without a valid warrant and without his consent." Oral argument Deputy Solicitor General Michael Dreeben began his argument on behalf of federal prosecutors by noting that information that is visible to anyone in the public, such as a driver's movements on public roads, is not protected by the Fourth Amendment. Dreeben cited United States v. Knotts (1983) as an example in which police were allowed to use a device known as a "beeper" that enabled tracking a car from a short distance away. Chief Justice John Roberts distinguished the present case from Knotts, saying that using a beeper still took "a lot of work" whereas a GPS device allows the police to "sit back in the station ... and push a button whenever they want to find out where the car is." Justice Antonin Scalia then directed the discussion to whether installing the device was an unreasonable search. Scalia argued that "when that device is installed against the will of the owner of the car on the car, that is unquestionably a trespass and thereby rendering the owner of the car not secure in his effects... against an unreasonable search and seizure." Dreeben argued that it may have been a trespass by police, but in the 1984 precedent United States v. Karo (a case involving a similar trespass) the Supreme Court ruled that it "made no difference because the purpose of the Fourth Amendment is to protect privacy interests and meaningful interference [with possessions], not to cover all technical trespasses." Justice Samuel Alito stated that people's use of technology is changing what the expectation of privacy is for the courts. "You know, I don't know what society expects and I think it's changing. Technology is changing people's expectations of privacy. Suppose we look forward 10 years, and maybe 10 years from now 90 percent of the population will be using social networking sites and they will have on average 500 friends and they will have allowed their friends to monitor their location 24 hours a day, 365 days a year, through the use of their cell phones. Then — what would the expectation of privacy be then?" Justice Sonia Sotomayor noted that "What motivated the Fourth Amendment historically was the disapproval, the outrage, that our Founding Fathers experienced with general warrants that permitted police indiscriminately to investigate just on the basis of suspicion, not probable cause, and to invade every possession that the individual had in search of a crime." She then asked, "How is this different?" Opinion of the Court On January 23, 2012, the Supreme Court held that "the Government's installation of a GPS device on a target's vehicle, and its use of that device to monitor the vehicle's movements, constitutes a 'search'" under the Fourth Amendment. Some journalists and commentators interpreted this ruling as a requirement that all GPS data surveillance requires a search warrant, but this ruling was narrower and applied only to the circumstances of the police investigation of Jones, particularly regarding location data when driving a vehicle. It can be said that all nine justices unanimously considered the police's actions in Jones to be unconstitutional. 
Importantly, however, they were split 5-4 on the reasoning for that conclusion. Furthermore, the justices were of three different opinions with respect to the breadth of the judgment. Majority opinion Justice Antonin Scalia authored the majority opinion. He cited a line of cases dating as far back as 1886 to argue that a physical intrusion, or trespass, into a constitutionally-protected area – in an attempt to find something or to obtain information – was the basis, historically, for determining whether a "search" had occurred under the meaning of the Fourth Amendment. Scalia conceded that in the years following Katz v. United States (1967) – in which electronic eavesdropping on a public telephone booth was held to be a search – the vast majority of search and seizure case law had shifted away from that approach founded on property rights, and towards an approach based on a person's expectation of privacy. However, he cited a number of post-Katz cases including Alderman v. United States and Soldal v. Cook County to argue that the trespass analysis had not been abandoned by the Court. In response to criticisms within Alito's concurrence, Scalia emphasized that the Fourth Amendment must provide, at a minimum, the level of protection as it did when it was adopted. Furthermore, a trespassory test need not exclude a test of the expectation of privacy, which may be appropriate to consider in situations where there was no governmental trespass. In the present case, the Court concluded that government's installation of a GPS device onto the defendant's car (his "personal effects" per Fourth Amendment terminology) was a trespass that was purposed to obtain information, so it was a search under the Fourth Amendment. Having reached the conclusion that this was a search under the Fourth Amendment, the Court declined to examine whether any exception exists that would render the search "reasonable," because the government had failed to advance that theory in the lower courts. Also left unanswered was the broader question surrounding the privacy implications of a warrantless use of GPS data without a physical intrusion – as might occur, for example, with the electronic collection of GPS data from wireless service providers or factory-installed vehicle tracking and navigation services. The Court left these matters to be decided in some future case, saying, "It may be that achieving the same result through electronic means, without an accompanying trespass, is an unconstitutional invasion of privacy, but the present case does not require us to answer that question." Concurring opinions Justice Sotomayor Justice Sonia Sotomayor was the fifth justice to concur with Scalia's opinion, making hers the decisive vote. "As the majority's opinion makes clear", she noted, "Katzs reasonable-expectation-of-privacy test augmented, but did not displace or diminish, the common-law trespassory test that preceded it". She agreed with Justice Samuel Alito's expectation of privacy reasoning with respect to long-term surveillance (see below), but she went a step further by also disputing the constitutionality of warrantless short-term GPS surveillance. 
Even during short-term monitoring, she reasoned, GPS surveillance can precisely record an individual's every movement, and hence can reveal completely private destinations, like "trips to the psychiatrist, the plastic surgeon, the abortion clinic, the AIDS treatment center, the strip club, the criminal defense attorney, the by-the-hour motel, the union meeting, the mosque, synagogue or church, the gay bar and on and on". Sotomayor also distinguished the present case from Knotts, noting that Knotts had suggested that a different principle might apply to situations in which a person's every movement was completely monitored 24 hours a day. Justice Alito In his concurring opinion, Justice Samuel Alito wrote with respect to privacy: "short-term monitoring of a person's movements on public streets accords with expectations of privacy" but "the use of longer term GPS monitoring in investigations of most offenses impinges on expectations of privacy". Alito argued against the majority's reliance on trespass under modern circumstances. Specifically, he argued that the common law property-based analysis of a "search" under the Fourth Amendment did not apply to such electronic situations as the one that occurred in this case. He further argued that following the doctrinal changes in Katz, a technical trespass leading to the gathering of evidence was "neither necessary nor sufficient to establish a constitutional violation". (Justice Scalia countered this quote from Karo – that trespass is "neither necessary nor sufficient" – by calling it "irrelevant": Karo contemplated a seizure, not a search, and trespass has no bearing on the constitutionality of a seizure. Jones, 565 U.S. at 408, n.5.) In his concurring opinion, Alito also outlined that long-term surveillance can reveal everything about a person. Other opinions Following the privacy-based approach most commonly used post-Katz, the other four justices were instead of the opinion that the continuous monitoring of every single movement of an individual's car for 28 days violated a reasonable expectation of privacy, and thus constituted a search. Alito explained that before GPS and similar electronic technology, month-long surveillance of an individual's every move would have been exceptionally demanding and costly, requiring a tremendous amount of resources and people. As a result, society's expectations were, and still are, that such complete and long-term surveillance would not be undertaken, and that an individual would not think it could occur to him or her. With regard to continuous monitoring for a short period, the other Justices relied on the Knotts precedent and declined to find a violation of the expectation of privacy. In Knotts, a short-distance signal beeper in the defendant's car was tracked during a single trip for less than a day. The Knotts court held that a person traveling on public roads has no expectation of privacy in his movements, because the vehicle's starting point, direction, stops, or final destination could be seen by anyone else on the road. Impact and subsequent developments Walter E. Dellinger III, the former U.S. Solicitor General and the attorney who represented Jones, said the decision was "a signal event in Fourth Amendment history." He also said the decision made it more risky for law enforcement to use a GPS tracking device without a warrant. FBI director Robert Mueller testified in 2013 that the Jones decision had limited the Bureau's surveillance capabilities.
Criminal defense attorneys and civil libertarians such as Virginia Sloan of the Constitution Project praised the ruling for protecting Fourth Amendment rights against government intrusion through modern technology. The Electronic Frontier Foundation, which filed an amicus brief arguing that warrantless GPS tracking violates reasonable expectations of privacy, praised Sotomayor's concurrence for raising concerns that existing Fourth Amendment precedents do not reflect the realities of modern technology. The Supreme Court remanded the case to the district court to determine whether Jones's criminal conviction could be restored based on the other evidence collected, without the GPS data ruled unconstitutional by the Supreme Court. During the original investigation, the police obtained cell site location data via a process enabled by the Stored Communications Act. Judge Ellen Segal Huvelle ruled in late 2012 that the government could use the cell site data against Jones. (The Supreme Court's 2018 decision in Carpenter v. United States later held that police must obtain a search warrant prior to obtaining cellular location information.) A new criminal trial began in early 2013 after Jones rejected a plea bargain of 15 to 22 years in prison. In March 2013, a mistrial was declared with the jury evenly split. The government planned for a fourth trial, but in May 2013 Jones accepted a plea bargain of 15 years with credit for time served. In October 2013, the Court of Appeals for the Third Circuit addressed the unanswered question of whether warrantless use of GPS devices would be reasonable — and thus lawful — under the Fourth Amendment if police have probable cause to justify the search. United States v. Katzin was the first relevant appeals court ruling in the wake of Jones to address this topic. The court held that a warrant was indeed required to deploy GPS tracking devices, and further, that none of the narrow exceptions to the Fourth Amendment's warrant requirement (e.g. exigent circumstances) were applicable. References Further reading United States v. Jones: GPS Monitoring, Property, and Privacy Congressional Research Service External links 2012 in United States case law Search and seizure case law Surveillance United States controlled substances case law United States Fourth Amendment case law United States privacy case law United States Supreme Court cases United States Supreme Court cases of the Roberts Court Global Positioning System Legal history of the District of Columbia
United States v. Jones (2012)
Technology,Engineering
3,201
75,400,586
https://en.wikipedia.org/wiki/NGC%201537
NGC 1537 is an elliptical galaxy located around 64 million light-years away in the constellation Eridanus. NGC 1537 is south of the celestial equator and it was discovered by John Herschel in 1835. NGC 1537 is not known to have much star formation, and it is not known to have an active galactic nucleus. See also NGC 154, a similar elliptical galaxy NGC 3640, a similar elliptical galaxy of around the same size. NGC 3311, a supergiant elliptical galaxy References External links Elliptical galaxies Eridanus (constellation) Astronomical objects discovered in 1835 Galaxies discovered in 1835 Discoveries by John Herschel 14695 420-12 -05-11-005 1537
NGC 1537
Astronomy
142
184,534
https://en.wikipedia.org/wiki/Danny%20Carey
Daniel Edwin Carey (born May 10, 1961) is an American musician and songwriter who is the drummer for the progressive metal band Tool. He has also contributed to albums by artists such as Zaum, Green Jellö, Pigface, Skinny Puppy, Adrian Belew, Carole King, Collide, Meat Puppets, Lusk, and the Melvins. He was ranked among the 100 greatest drummers of all time by Rolling Stone magazine, occupying the 26th position, in addition to being frequently recognized by other magazines. Biography Born in Lawrence, Kansas, Carey first encountered the drums at the age of ten, when he joined the school band and began taking private lessons on the snare drum. Two years later, Carey began to practice on a drum set. In his senior year of high school in Paola, Kansas, Carey joined the high school jazz band. Carey also played basketball. Jazz would later play a huge role in his signature approach to the drum set in a rock setting. As Carey progressed through high school and later college at the University of Missouri–Kansas City, he began expanding his studies in percussion with theory into the principles of geometry, science, and metaphysics as well as delving into the occult. Carey also played jazz while attending college and got to experience the jazz scene in Kansas City. After college, a friend and bandmate convinced Carey to leave Kansas for Portland, Oregon, where he played briefly in various bands before moving to Los Angeles, where he was able to perform as a studio drummer with Carole King and perform live sets with Pigmy Love Circus. During this period he played in a country band, the Wild Blue Yonder, with Jeff Buckley and John Humphrey. He also played in Green Jellö as "Danny Longlegs" and recorded the album Cereal Killer. He would later find his way to Tool after coming to know singer Maynard James Keenan and guitarist Adam Jones and practicing with them in place of drummers the two had requested but who had never shown up. Besides Tool, Carey also finds time for other projects new and old such as Legend of the Seagullmen, Pigmy Love Circus, Volto!, and Zaum. Legal issues Carey was arrested at the Kansas City International Airport on December 12, 2021, after allegedly using a homophobic slur and repeatedly jabbing someone in the chest with two fingers. He was charged with a municipal assault violation. The charges were dropped in January 2023. Influences Carey's drumming influences include Neil Peart, Billy Cobham, Buddy Rich, Bill Bruford, Lenny White, Stewart Copeland, Tony Williams, John Bonham, Barriemore Barlow and Tim Alexander. Equipment Danny Carey uses the wood tip version of his own signature model of drumstick made by Vic Firth. He previously had endorsed a signature model with Trueline Drumsticks (now Trueline's Tribal Assault model). Carey also uses Sonor drums, Paiste cymbals, Evans drumheads, Hammerax, and electronic devices such as Mandala, Korg and Roland. Paiste and Jeff Ocheltree (a noted drum tech for Billy Cobham, John Bonham, Lenny White, and others) teamed up in the late 90s to develop an entire drum set made out of recycled cymbals. The final product was made from melted-down Paiste Signature bronze custom-cast cymbals. Danny Carey used the kit during the 2002 Lateralus tour and during some drum clinics through the years. Only three versions of this kit were ever created. Carey and Carl Palmer each own one, while the third resides at Paiste's Switzerland headquarters.
At the 2009 Winter NAMM Show Sonor released a Danny Carey signature snare drum, which is a 1 mm thick bronze 14x8" snare with laser etched talisman symbols and his signature engraved around the vent hole. In 2016 Paiste released a Danny Carey signature ride cymbal called "Dry Heavy Ride - Monad" based on their discontinued model that Carey always used since becoming a Paiste artist. The cymbal has a purple color and sigils printed on. It is named "Monad" because the main print is an esoteric glyph from John Dee. During 2019, builder Alan Van Kleef from VK Drums was contacted by Carey to create a drum kit and a snare drum. After much debate Alan developed the set called "Monad", made by hand in Sheffield, England. Shortly after the completion of the Monad set, it was announced that a snare drum replica called "7empest" (after Tool's grammy award winning song) would be made available as part of a limited collection of 33 individual pieces. At the same time that the 7empest snare was launched, Alan was also developing the first complete 7empest drum set. Like the snare, the 7empest drums is a Monad replica in almost every way, except for the engravings. Drumming techniques Carey's popularity among drummers and non-drummers alike stems from the diversity of his sound and dynamics through his years of learning jazz music, his technical ability, frequent use of odd time signatures, polyrhythms, and polymeters. He has stated in interviews that he effectively treats his feet as he does his hands: he practices rudiments (used for sticking techniques) and even snare drum solos with his feet to improve his double bass drumming, hi-hat control, and foot independence. In search of new techniques, Carey has studied tabla with Aloke Dutta, who can be heard playing on the live version of the song "Pushit" (from Salival). This is especially apparent on tracks such as "Disposition" (Lateralus) or "Right in Two" (10,000 Days), for which Carey has recorded the tabla parts himself in the studio. The tabla (and other percussive instruments) used in Tool's music are replicated live using the Mandala pads (in fact the pads are also used when recording in the studio, a notable example being the tabla solo of "Right in Two"). He has also stated that when he is playing to an odd time signature, he tries to drum to the "feel" of the song and establish general "inner pulse" for the given time signature instead of fully counting it out. Carey has been featured in many drum and music magazines. 
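For readers unfamiliar with the term, a polyrhythm layers voices that divide the same bar into different numbers of equal pulses, and the combined pattern repeats only after the least common multiple of the counts. The Python sketch below is a generic illustration of that arithmetic; it is not tied to any particular Tool song or to Carey's own practice material.

```python
# Illustration of how a polyrhythm lines up, e.g. 5 against 4 within one bar.
# Each voice divides the bar into a different number of equal pulses; the two
# voices coincide again only after lcm(a, b) grid steps.

from math import lcm

def polyrhythm_grid(a, b):
    """Return the grid length and the grid positions where each voice strikes."""
    steps = lcm(a, b)
    voice_a = [i * steps // a for i in range(a)]
    voice_b = [i * steps // b for i in range(b)]
    return steps, voice_a, voice_b

steps, fives, fours = polyrhythm_grid(5, 4)
print(steps)   # 20 grid steps per bar
print(fives)   # [0, 4, 8, 12, 16]  (the "5" voice)
print(fours)   # [0, 5, 10, 15]     (the "4" voice); both voices meet only at step 0
```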
Side projects and other musical endeavors In his time away from Tool, Carey has contributed (and still regularly does) to a vast number of projects: Fusion band Volto!, which regularly plays shows in the Los Angeles area, consisting of both covers and original material Pigmy Love Circus, which has recorded several albums Electronica-oriented project Zaum Green Jellö Pigface Drums on the track "Use Less" from the album The Greater Wrong of the Right by Skinny Puppy Contributed to Adrian Belew's Side One and Side Three projects with bassist Les Claypool Drums on certain tracks of the Carole King album Colour of Your Dreams (as a session drummer) with Guns N' Roses guitarist Slash playing on select tracks Drums on the track "Somewhere" from the Collide album Some Kind of Strange and several tracks from Two Headed Monster Made an appearance on the 1997 album Free Mars by former Tool bassist Paul D'Amour's band Lusk Drums on the track "Bird's Eye", (2008, from the movie Body of Lies): Serj Tankian (System Of A Down, vocals), Mike Patton (Faith No More, vocals), Daron Malakian (guitar), Les Claypool (bass) Drums on the track "The Fourth" on the self-titled album from Feersum Ennjin, the band of former Tool Bassist Paul D'Amour Drums on the track "Misery" from Author & Punisher's 2022 album Krüller. Drums with psychedelic rock supergroup Legend of the Seagullmen along with Brent Hinds of Mastodon, Jimmy Hayward and others. Their eponymous debut album was released in February 2018, on Dine Alone Records He would work with Hayward on his second animated feature, Free Birds, voicing a character who shares his name. Drums on Forever Love's Fool, a 22-minute progressive rock track recorded with Canadian musician Daniel Romano Touring North America in 2024 with the supergroup "BEAT" named after the King Crimson album, who are playing the music of 1980's King Crimson from this and other albums. The group consists of Carey, former King Crimson members Adrian Belew and Tony Levin, and Steve Vai. Playing with Primus on their opening slot for Tool in Punta Cana, Dominican Republic in March 2025 replacing former drummer Tim Alexander. References External links Danny Carey's website 1961 births 20th-century American drummers 21st-century American drummers Alternative metal musicians American male drummers American heavy metal drummers Jazz fusion drummers Living people Musicians from Lawrence, Kansas American occultists Pigface members Progressive metal musicians Sacred geometry Tool (band) members University of Missouri–Kansas City alumni American male jazz musicians Lusk (band) members Volto! members
Danny Carey
Engineering
1,860
58,830,187
https://en.wikipedia.org/wiki/Amanita%20gayana
Amanita gayana or Gay's death cap is a species of Amanita from Chile. References External links gayana Fungus species
Amanita gayana
Biology
30
927,051
https://en.wikipedia.org/wiki/RYB%20color%20model
RYB (an abbreviation of red–yellow–blue) is a subtractive color model used in art and applied design in which red, yellow, and blue pigments are considered primary colors. Under traditional color theory, this set of primary colors was advocated by Moses Harris, Michel Eugène Chevreul, Johannes Itten and Josef Albers, and applied by countless artists and designers. The RYB color model underpinned the color curriculum of the Bauhaus, Ulm School of Design and numerous art and design schools that were influenced by the Bauhaus, including the IIT Institute of Design (founded as the New Bauhaus), Black Mountain College, Design Department Yale University, the Shillito Design School, Sydney, and Parsons School of Design, New York. In this context, the term primary color refers to three exemplar colors (red, yellow, and blue) as opposed to specific pigments. As illustrated, in the RYB color model, red, yellow, and blue are intermixed to create secondary color segments of orange, green, and purple. This set of primary colors emerged at a time when access to a large range of pigments was limited by availability and cost, and it encouraged artists and designers to explore the many nuances of color through mixing and intermixing a limited range of pigment colors. In art and design education, gray, red, yellow, and blue pigments were usually augmented with white and black pigments, enabling the creation of a larger gamut of color nuances including tints and shades. Although the model is scientifically obsolete, since its opposing color pairs do not satisfy the definition of complementary colors (true complementary colors mix to a neutral or black color), it is still used in artistic environments, which causes confusion about primary and complementary colors. It can be considered an approximation of the CMY color model. The RYB color model relates specifically to color in the form of paint and pigment application in art and design. Other common color models include the light model (RGB) and the paint, pigment and ink CMY color model, which is much more accurate in terms of color gamut and intensity compared to the traditional RYB color model; the CMY model emerged in conjunction with the CMYK color model in the printing industry. History The first scholars to propose that there are three primary colors for painters were Scarmiglioni (1601), Savot (1609), de Boodt (1609) and Aguilonius (1613). From these, the most influential was the work of Franciscus Aguilonius (1567–1617), although he did not arrange the colors in a wheel. Jacob Christoph Le Blon was the first to apply the RYB color model to printing, specifically mezzotint printing, and he used separate plates for each color: yellow, red and blue plus black to add shades and contrast. In 'Coloritto', Le Blon asserted that "the art of mixing colours…(in) painting can represent all visible objects with three colours: yellow, red and blue; for all colours can be composed of these three, which I call Primitive". Le Blon added that red and yellow make orange; red and blue, make purple; and blue and yellow make green (Le Blon, 1725, p6). In the 18th century, Moses Harris advocated that a multitude of colors can be created from three "primitive" colors – red, yellow, and blue. Mérimée referred to "three simple colours (yellow, red, and blue)" that can produce a large gamut of color nuances.
"United in pairs, these three primitive colours give birth to three other colours as distinct and brilliant as their originals; thus, yellow mixed with red, gives orange; red and blue, violet; and green is obtained by mixing blue and yellow" (Mérimée, 1839, p245). Mérimée illustrated these color relationships with a simple diagram located between pages 244 and 245: Chromatic Scale (Echelle Chromatique). De la peinture à l'huile : ou, Des procédés matériels employés dans ce genre de peinture, depuis Hubert et Jean Van-Eyck jusqu'à nos jours was published in 1830, and an English translation by W. B. Sarsfield Taylor was published in London in 1839. Similar ideas about the creation of color using red, yellow, and blue were discussed in Theory of Colours (1810) by the German poet, color theorist and government minister Johann Wolfgang von Goethe. In The Law of Simultaneous Color Contrast (1839), the French industrial chemist Michel Eugène Chevreul discussed the creation of numerous color nuances, and his color theories were underpinned by the RYB color model. Separate from the RYB color model, the cyan, magenta, and yellow primary colors are associated with the CMYK model commonly used in the printing industry. Cyan, magenta, and yellow are often referred to as "process blue", "process red", and "process yellow". Old model of coloration with four primaries The ancient Greeks, under the influence of Aristotle, Democritus and Plato, considered that there were four basic colors that coincided with the four elements: earth (ochre), sky (blue), water (green) and fire (red), while black and white represented the light of day and the darkness of night. The four-color system is formed by the primaries yellow, green, blue and red, and was supported by Alberti in his "De Pictura" (1436), using the rectangle, rhombus, and color wheel to represent them. Leonardo da Vinci endorsed this model in 1510, although he hesitated to include green, noting that green could be obtained by mixing blue and yellow. Also Richard Waller, in his "Catalogue of Simple and Mixed Colors" (1686), graphed these four colors in a square. These four colors have often been referred to as "the primary psychological colors". Traditional coloring with three primaries The first known case of trichromatic coloration (of three primaries) can be found in a work on optics by the Belgian thinker Franciscus Aguilonius in 1613, who in his "Opticorum libri sex, philosophis iuxtà ac mathematicis utiles" (Latin; roughly, Six Books of Optics: Useful to Philosophers as well as to Mathematicians) graphed the colors flavvus, rvbevs and cærvlevs (yellow, red and blue) giving rise to the intermediate colors avrevs, viridis and pvrpvrevs (orange, green and purple) and their relationship with the extremes albvs and niger (white and black). However, the idea of three primary colors is older, as Aguilonius supported the view known since the Middle Ages that the colors yellow, red, and blue were the basic or "noble" colors from which all others are derived. This model was used for printing by Jacob Christoph Le Blon in 1725, who called it Coloritto or the harmony of colouring, stating that the primitive (primary) colors are yellow, red and blue, while the secondary colors are orange, green and purple or violet. In 1766, Moses Harris developed an 18-color color wheel based on this model, including a wider range of colors by adding light and dark derivatives.
During the 18th and 19th centuries, this color model was endorsed by many authors who have left illustrations that can still be appreciated today, such as Louis-Bertrand Castel (1740), the color system of Tobias Mayer (1758), Moses Harris (1770–76), Ignaz Schiffermuller (1772), Baumgartner and Muller (1803), Sowerby (1809), Runge (1809), the popular "Theory of Colors" (1810) by Goethe, Gregoire (1810–20), Merimee (1815-30-39), Klotz (1816), G. Field (1817-41-50), Hayter (1826), the "Law of Simultaneous Contrast of Colours" (1839) by Chevreul and many others. By the 20th century, natural pigments gave way to synthetic ones. The invention of phthalocyanine and derivatives of quinacridone expanded the range of primary blues and reds, getting closer to the ideal subtractive colors of the CMY and CMYK models. See also Color Color solid Color theory List of colors Primary colors References External links a web RYB to RGB converter Color space Obsolete scientific theories
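The external links include a web RYB-to-RGB converter. One common way such a conversion can be implemented is trilinear interpolation over the RYB unit cube between fixed anchor colors; the anchor values in the sketch below are chosen for illustration rather than taken from any published converter.

```python
# Rough sketch of an RYB -> RGB conversion by trilinear interpolation over the
# RYB unit cube. The corner (anchor) colors are illustrative choices; a real
# converter would tune them to match pigment behaviour.

CORNERS = {  # (r, y, b) corner -> (R, G, B), all components in [0, 1]
    (0, 0, 0): (1.0, 1.0, 1.0),   # no pigment: white paper
    (1, 0, 0): (1.0, 0.0, 0.0),   # red
    (0, 1, 0): (1.0, 1.0, 0.0),   # yellow
    (0, 0, 1): (0.1, 0.2, 0.8),   # blue
    (1, 1, 0): (1.0, 0.5, 0.0),   # red + yellow: orange
    (1, 0, 1): (0.5, 0.0, 0.5),   # red + blue: purple
    (0, 1, 1): (0.0, 0.6, 0.2),   # yellow + blue: green
    (1, 1, 1): (0.1, 0.1, 0.1),   # all three pigments: near-black
}

def ryb_to_rgb(r, y, b):
    """Trilinearly interpolate between the anchor colors for RYB amounts in [0, 1]."""
    rgb = [0.0, 0.0, 0.0]
    for (cr, cy, cb), color in CORNERS.items():
        weight = (r if cr else 1 - r) * (y if cy else 1 - y) * (b if cb else 1 - b)
        for i in range(3):
            rgb[i] += weight * color[i]
    return tuple(rgb)

print(ryb_to_rgb(0.5, 0.5, 0.0))  # roughly (1.0, 0.63, 0.25), a light orange
```

Mixing equal parts red and yellow with this table gives a light orange, consistent with the pairwise mixing rules described in the history above.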
RYB color model
Mathematics
1,780
319,342
https://en.wikipedia.org/wiki/Microfilament
Microfilaments, also called actin filaments, are protein filaments in the cytoplasm of eukaryotic cells that form part of the cytoskeleton. They are primarily composed of polymers of actin, but are modified by and interact with numerous other proteins in the cell. Microfilaments are usually about 7 nm in diameter and made up of two strands of actin. Microfilament functions include cytokinesis, amoeboid movement, cell motility, changes in cell shape, endocytosis and exocytosis, cell contractility, and mechanical stability. Microfilaments are flexible and relatively strong, resisting buckling by multi-piconewton compressive forces and filament fracture by nanonewton tensile forces. In inducing cell motility, one end of the actin filament elongates while the other end contracts, presumably by myosin II molecular motors. Additionally, they function as part of actomyosin-driven contractile molecular motors, wherein the thin filaments serve as tensile platforms for myosin's ATP-dependent pulling action in muscle contraction and pseudopod advancement. Microfilaments form a tough, flexible framework that helps the cell move. Actin was first discovered in rabbit skeletal muscle in the mid 1940s by F.B. Straub. Almost 20 years later, H.E. Huxley demonstrated that actin is essential for muscle contraction. The mechanism by which actin forms long filaments was first described in the mid 1980s. Later studies showed that actin has an important role in cell shape, motility, and cytokinesis. Organization Actin filaments are assembled in two general types of structures: bundles and networks. Bundles can be composed of polar filament arrays, in which all barbed ends point to the same end of the bundle, or non-polar arrays, where the barbed ends point towards both ends. A class of actin-binding proteins, called cross-linking proteins, dictates the formation of these structures. Cross-linking proteins determine filament orientation and spacing in the bundles and networks. These structures are regulated by many other classes of actin-binding proteins, including motor proteins, branching proteins, severing proteins, polymerization promoters, and capping proteins. In vitro self-assembly Measuring approximately 6 nm in diameter, microfilaments are the thinnest fibers of the cytoskeleton. They are polymers of actin subunits (globular actin, or G-actin), which as part of the fiber are referred to as filamentous actin, or F-actin. Each microfilament is made up of two helical, interlaced strands of subunits. Much like microtubules, actin filaments are polarized. Electron micrographs have provided evidence of their fast-growing barbed ends and their slow-growing pointed ends. This polarity has been determined by the pattern created by the binding of myosin S1 fragments, which are themselves subunits of the larger myosin II protein complex. The pointed end is commonly referred to as the minus (−) end and the barbed end is referred to as the plus (+) end. In vitro actin polymerization, or nucleation, starts with the self-association of three G-actin monomers to form a trimer. ATP-bound actin then itself binds the barbed end, and the ATP is subsequently hydrolyzed. ATP hydrolysis occurs with a half time of about 2 seconds, while the half time for the dissociation of the inorganic phosphate is about 6 minutes. This autocatalyzed event reduces the binding strength between neighboring subunits, and thus generally destabilizes the filament.
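Treating each of these steps as a simple first-order process (an assumption made here only to convert the quoted half-times into rates), the corresponding rate constants follow from $k = \ln 2 / t_{1/2}$:

$$k_{\text{hydrolysis}} \approx \frac{\ln 2}{2\ \text{s}} \approx 0.35\ \text{s}^{-1}, \qquad k_{\text{P}_i\ \text{release}} \approx \frac{\ln 2}{360\ \text{s}} \approx 1.9 \times 10^{-3}\ \text{s}^{-1}.$$

On these numbers, a newly added subunit typically hydrolyzes its ATP within a few seconds but retains the cleaved phosphate for minutes, which is why a growing filament carries a freshly polymerized cap at the barbed end and an ADP-rich core toward the pointed end.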
In vivo actin polymerization is catalyzed by a class of filament end-tracking molecular motors known as actoclampins. Recent evidence suggests that the rate of ATP hydrolysis and the rate of monomer incorporation are strongly coupled. Subsequently, ADP-actin dissociates slowly from the pointed end, a process significantly accelerated by the actin-binding protein, cofilin. ADP bound cofilin severs ADP-rich regions nearest the (−)-ends. Upon release, the free actin monomer slowly dissociates from ADP, which in turn rapidly binds to the free ATP diffusing in the cytosol, thereby forming the ATP-actin monomeric units needed for further barbed-end filament elongation. This rapid turnover is important for the cell's movement. End-capping proteins such as CapZ prevent the addition or loss of monomers at the filament end where actin turnover is unfavorable, such as in the muscle apparatus. Actin polymerization together with capping proteins were recently used to control the 3-dimensional growth of protein filament so as to perform 3D topologies useful in technology and the making of electrical interconnect. Electrical conductivity is obtained by metallisation of the protein 3D structure. Mechanism of force generation As a result of ATP hydrolysis, filaments elongate approximately 10 times faster at their barbed ends than their pointed ends. At steady-state, the polymerization rate at the barbed end matches the depolymerization rate at the pointed end, and microfilaments are said to be treadmilling. Treadmilling results in elongation in the barbed end and shortening in the pointed-end, so that the filament in total moves. Since both processes are energetically favorable, this means force is generated, the energy ultimately coming from ATP. Actin in cells Intracellular actin cytoskeletal assembly and disassembly are tightly regulated by cell signaling mechanisms. Many signal transduction systems use the actin cytoskeleton as a scaffold, holding them at or near the inner face of the peripheral membrane. This subcellular location allows immediate responsiveness to transmembrane receptor action and the resulting cascade of signal-processing enzymes. Because actin monomers must be recycled to sustain high rates of actin-based motility during chemotaxis, cell signalling is believed to activate cofilin, the actin-filament depolymerizing protein which binds to ADP-rich actin subunits nearest the filament's pointed-end and promotes filament fragmentation, with concomitant depolymerization in order to liberate actin monomers. In most animal cells, monomeric actin is bound to profilin and thymosin beta-4, both of which preferentially bind with one-to-one stoichiometry to ATP-containing monomers. Although thymosin beta-4 is strictly a monomer-sequestering protein, the behavior of profilin is far more complex. Profilin enhances the ability of monomers to assemble by stimulating the exchange of actin-bound ADP for solution-phase ATP to yield actin-ATP and ADP. Profilin is transferred to the leading edge by virtue of its PIP2 binding site, and it employs its poly-L-proline binding site to dock onto end-tracking proteins. Once bound, profilin-actin-ATP is loaded into the monomer-insertion site of actoclampin motors. 
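To make the treadmilling behaviour described above concrete, the toy model below uses assumed (not measured) rate constants for the two filament ends; the real values differ, but the qualitative picture is the same: between the two ends' critical concentrations the barbed end grows while the pointed end shrinks, so overall length stays roughly constant while subunits flux through the filament.

```python
# Toy treadmilling model with assumed rate constants (illustrative only).
# Net addition rate at an end = k_on * [monomer] - k_off, in subunits per second.

K_ON_BARBED, K_OFF_BARBED = 10.0, 1.5     # per uM per s, per s (assumed)
K_ON_POINTED, K_OFF_POINTED = 1.0, 0.8    # per uM per s, per s (assumed)

def end_rates(monomer_uM):
    """Net growth rates (subunits/s) at the barbed and pointed ends."""
    barbed = K_ON_BARBED * monomer_uM - K_OFF_BARBED
    pointed = K_ON_POINTED * monomer_uM - K_OFF_POINTED
    return barbed, pointed

# With these numbers the critical concentrations are 0.15 uM (barbed) and 0.8 uM
# (pointed); total length is roughly constant (treadmilling) near 0.21 uM monomer.
for c in (0.05, 0.15, 0.21, 0.50):
    barbed, pointed = end_rates(c)
    print(f"[monomer] = {c:.2f} uM: barbed {barbed:+.2f}, pointed {pointed:+.2f}, "
          f"net {barbed + pointed:+.2f} subunits/s")
```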
Another important component in filament formation is the Arp2/3 complex, which binds to the side of an already existing filament (or "mother filament"), where it nucleates the formation of a new daughter filament at a 70-degree angle relative to the mother filament, effecting a fan-like branched filament network. Specialized unique actin cytoskeletal structures are found adjacent to the plasma membrane. Four remarkable examples include red blood cells, human embryonic kidney cells, neurons, and sperm cells. In red blood cells, a spectrin-actin hexagonal lattice is formed by interconnected short actin filaments. In human embryonic kidney cells, the cortical actin forms a scale-free fractal structure. First found in neuronal axons, actin forms periodic rings that are stabilized by spectrin and adducin and this ring structure was then found by He et al 2016 to occur in almost every neuronal type and glial cells, across seemingly every animal taxon including Caenorhabditis elegans, Drosophila, Gallus gallus and Mus musculus. And in mammalian sperm, actin forms a helical structure in the midpiece, i.e., the first segment of the flagellum. Associated proteins In non-muscle cells, actin filaments are formed proximal to membrane surfaces. Their formation and turnover are regulated by many proteins, including: Filament end-tracking protein (e.g., formins, VASP, N-WASP) Filament-nucleator known as the Actin-Related Protein-2/3 (or Arp2/3) complex Filament cross-linkers (e.g., α-actinin, fascin, and fimbrin) Actin monomer-binding proteins profilin and thymosin β4 Filament barbed-end cappers such as Capping Protein and CapG, etc. Filament-severing proteins like gelsolin. Actin depolymerizing proteins such as ADF/cofilin. The actin filament network in non-muscle cells is highly dynamic. The actin filament network is arranged with the barbed-end of each filament attached to the cell's peripheral membrane by means of clamped-filament elongation motors, the above-mentioned "actoclampins", formed from a filament barbed-end and a clamping protein (formins, VASP, Mena, WASP, and N-WASP). The primary substrate for these elongation motors is profilin-actin-ATP complex which is directly transferred to elongating filament ends. The pointed-end of each filament is oriented toward the cell's interior. In the case of lamellipodial growth, the Arp2/3 complex generates a branched network, and in filopodia a parallel array of filaments is formed. Actin acts as a track for myosin motor motility Myosin motors are intracellular ATP-dependent enzymes that bind to and move along actin filaments. Various classes of myosin motors have very different behaviors, including exerting tension in the cell and transporting cargo vesicles. A proposed model – actoclampins track filament ends One proposed model suggests the existence of actin filament barbed-end-tracking molecular motors termed "actoclampin". The proposed actoclampins generate the propulsive forces needed for actin-based motility of lamellipodia, filopodia, invadipodia, dendritic spines, intracellular vesicles, and motile processes in endocytosis, exocytosis, podosome formation, and phagocytosis. Actoclampin motors also propel such intracellular pathogens as Listeria monocytogenes, Shigella flexneri, Vaccinia and Rickettsia. When assembled under suitable conditions, these end-tracking molecular motors can also propel biomimetic particles. 
The term actoclampin is derived from acto- to indicate the involvement of an actin filament, as in actomyosin, and clamp to indicate a clasping device used for strengthening flexible/moving objects and for securely fastening two or more components, followed by the suffix -in to indicate its protein origin. An actin filament end-tracking protein may thus be termed a clampin. Dickinson and Purich recognized that prompt ATP hydrolysis could explain the forces achieved during actin-based motility. They proposed a simple mechanoenzymatic sequence known as the Lock, Load & Fire Model, in which an end-tracking protein remains tightly bound ("locked" or clamped) onto the end of one sub-filament of the double-stranded actin filament. After binding to Glycyl-Prolyl-Prolyl-Prolyl-Prolyl-Prolyl-registers on tracker proteins, Profilin-ATP-actin is delivered ("loaded") to the unclamped end of the other sub-filament, whereupon ATP within the already clamped terminal subunit of the other subfragment is hydrolyzed ("fired"), providing the energy needed to release that arm of the end-tracker, which then can bind another Profilin-ATP-actin to begin a new monomer-addition round. Steps involved The following steps describe one force-generating cycle of an actoclampin molecular motor: The polymerization cofactor profilin and the ATP·actin combine to form a profilin-ATP-actin complex that then binds to the end-tracking unit The cofactor and monomer are transferred to the barbed-end of an actin already clamped filament The tracking unit and cofactor dissociate from the adjacent protofilament, in a step that can be facilitated by ATP hydrolysis energy to modulate the affinity of the cofactor and/or the tracking unit for the filament; and this mechanoenzymatic cycle is then repeated, starting this time on the other sub-filament growth site. When operating with the benefit of ATP hydrolysis, AC motors generate per-filament forces of 8–9 pN, which is far greater than the per-filament limit of 1–2 pN for motors operating without ATP hydrolysis. The term actoclampin is generic and applies to all actin filament end-tracking molecular motors, irrespective of whether they are driven actively by an ATP-activated mechanism or passively. Some actoclampins (e.g., those involving Ena/VASP proteins, WASP, and N-WASP) apparently require Arp2/3-mediated filament initiation to form the actin polymerization nucleus that is then "loaded" onto the end-tracker before processive motility can commence. To generate a new filament, Arp2/3 requires a "mother" filament, monomeric ATP-actin, and an activating domain from Listeria ActA or the VCA region of N-WASP. The Arp2/3 complex binds to the side of the mother filament, forming a Y-shaped branch having a 70-degree angle with respect to the longitudinal axis of the mother filament. Then upon activation by ActA or VCA, the Arp complex is believed to undergo a major conformational change, bringing its two actin-related protein subunits near enough to each other to generate a new filament gate. Whether ATP hydrolysis may be required for nucleation and/or Y-branch release is a matter under active investigation. References External links Cell biology Actin-based structures
Microfilament
Biology
3,238
51,325,500
https://en.wikipedia.org/wiki/Ethyltoluene
Ethyltoluene describes organic compounds with the formula CH3C6H4C2H5 (C9H12). Three isomers exist: 1,2-, 1,3-, and 1,4-. All are colorless liquids, immiscible in water, with similar boiling points. They are classified as aromatic hydrocarbons. The ring bears two substituents: a methyl group and an ethyl group. Production and reactions Ethyltoluenes are prepared by alkylation of toluene with ethylene: CH3C6H5 + C2H4 → CH3C6H4C2H5 These alkylations are catalyzed by various Lewis acids, such as aluminium trichloride. 3- and 4-Ethyltoluenes are mainly of interest as precursors to methylstyrenes: CH3C6H4C2H5 → CH3C6H4CH=CH2 + H2 This dehydrogenation is conducted in the presence of zinc oxide catalysts. References Alkylbenzenes Ethyl compounds
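As a quick arithmetic check on the reactions above, the following sketch verifies that the alkylation and dehydrogenation steps balance by molar mass. The atomic masses are standard approximate values, and the helper function is written only for this example.

```python
import math

# Mass-balance check for the toluene -> ethyltoluene -> methylstyrene route.
# Atomic masses are standard approximate values (g/mol), not taken from the article.
MASS = {"C": 12.011, "H": 1.008}

def molar_mass(counts: dict) -> float:
    """Molar mass of a formula given as {element: count}."""
    return sum(MASS[el] * n for el, n in counts.items())

toluene       = {"C": 7, "H": 8}    # C7H8
ethylene      = {"C": 2, "H": 4}    # C2H4
ethyltoluene  = {"C": 9, "H": 12}   # C9H12, i.e. CH3C6H4C2H5
methylstyrene = {"C": 9, "H": 10}   # C9H10
h2            = {"H": 2}

# Alkylation: toluene + ethylene -> ethyltoluene
lhs, rhs = molar_mass(toluene) + molar_mass(ethylene), molar_mass(ethyltoluene)
print(f"alkylation: {lhs:.3f} -> {rhs:.3f} g/mol, balanced: {math.isclose(lhs, rhs)}")

# Dehydrogenation: ethyltoluene -> methylstyrene + H2
lhs, rhs = molar_mass(ethyltoluene), molar_mass(methylstyrene) + molar_mass(h2)
print(f"dehydrogenation: {lhs:.3f} -> {rhs:.3f} g/mol, balanced: {math.isclose(lhs, rhs)}")
```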
Ethyltoluene
Chemistry
174
19,215,515
https://en.wikipedia.org/wiki/Institute%20of%20Transport%20Economics
The Institute of Transport Economics (Transportøkonomisk institutt – TØI) is a national Norwegian institution for multidisciplinary transport research. Its mission is to develop and disseminate transportation knowledge of scientific quality and practical application. The Institute is an independent, non-profit research foundation. It holds no interests in any commercial, manufacturing or supplying organisation. TØI has a multidisciplinary research environment with approximately 110 employees, of whom about 80 are researchers. Its sphere of activity includes most of the current issues in road, rail, sea and air transport, as well as urban mobility, environmental sustainability and road safety. In recent years the Institute has been engaged in more than 70 research projects under the EU's Research Framework Programmes. References External links Official site Research institutes in Norway Foundations based in Norway Defunct government agencies of Norway Government agencies established in 1954 Engineering research institutes 1954 establishments in Norway
Institute of Transport Economics
Engineering
186
73,518,016
https://en.wikipedia.org/wiki/List%20of%20North%20American%20pieced%20quilt%20patterns
Patchwork quilts are made with patterns, many of which are common designs in North America. Anvil Basket Bear Paw Brick Work Churn Dash Corn and Beans Dogwood and Sunflower Double Wedding Ring Dove in the Window Dresden Plate Drunkard's Path Eight-Pointed Star Four Patch Hen and Chickens God's Eye Grandmother's Flower Garden Liberty Star Block Lincoln Platform Log Cabin Nebraska Pinwheel Nebraska State Block Nine Patch Pinwheel Roman Square Roman Stripe Rose of Sharon, or Whig Rose School House Sunbonnet Babies Tumbling Blocks Wild Goose Chase References Further reading Quilting
List of North American pieced quilt patterns
Engineering
115
25,975,488
https://en.wikipedia.org/wiki/Course%20of%20Theoretical%20Physics
The Course of Theoretical Physics is a ten-volume series of books covering theoretical physics that was initiated by Lev Landau and written in collaboration with his student Evgeny Lifshitz starting in the late 1930s. It is said that Landau composed much of the series in his head while in an NKVD prison in 1938–1939. However, almost all of the actual writing of the early volumes was done by Lifshitz, giving rise to the witticism, "not a word of Landau and not a thought of Lifshitz". The first eight volumes were finished in the 1950s, written in Russian and translated into English in the late 1950s by John Stewart Bell, together with John Bradbury Sykes, M. J. Kearsley, and W. H. Reid. The last two volumes were written in the early 1980s; Vladimir Berestetskii and Lev Pitaevskii also contributed to the series. The series is often referred to as "Landau and Lifshitz", "Landafshitz" (Russian: "Ландафшиц"), or "Lanlifshitz" (Russian: "Ланлифшиц") in informal settings. Impact The presentation of material is advanced and typically considered suitable for graduate-level study. Despite this specialized character, it is estimated that a million volumes of the Course were sold by 2005. The series has been called "renowned" in Science and "celebrated" in American Scientist. A note in Mathematical Reviews states, "The usefulness and the success of this course have been proved by the great number of successive editions in Russian, English, French, German and other languages." At a centenary celebration of Landau's career, it was observed that the Course had shown "unprecedented longevity." In 1962, Landau and Lifshitz were awarded the Lenin Prize for their work on the Course. This was the first occasion on which the Lenin Prize had been awarded for the teaching of physics. English editions The following list does not include reprints and revised editions. Volume 1 Volume 1 covers classical mechanics without special or general relativity, in the Lagrangian and Hamiltonian formalisms. Volume 2 Volume 2 covers relativistic mechanics of particles, and classical field theory for fields, specifically special relativity and electromagnetism, general relativity and gravitation. Volume 3 Volume 3 covers quantum mechanics without special relativity. Volume 4 The original edition comprised two books, labelled part 1 and part 2. The first covered general aspects of relativistic quantum mechanics and relativistic quantum field theory, leading onto quantum electrodynamics. The second continued with quantum electrodynamics and what was then known about the strong and weak interactions. These books were published in the early 1970s, at a time when the strong and weak forces were still not well understood. In the second edition, the corresponding sections were scrapped and replaced with more topics in the well-established quantum electrodynamics, and the two parts were unified into one, thus providing a one-volume exposition on relativistic quantum field theory with the electromagnetic interaction as the prototype of a quantum field theory. Volume 5 Volume 5 covers general statistical mechanics and thermodynamics and applications, including chemical reactions, phase transitions, and condensed matter physics. Volume 6 Volume 6 covers fluid mechanics in a condensed but varied exposition, from ideal to viscous fluids, and includes a chapter on relativistic fluid mechanics and another on superfluids.
Volume 7 Volume 7 covers elasticity theory of solids, including viscous solids, vibrations and waves in crystals with dislocations, and a chapter on the mechanics of liquid crystals. Volume 8 Volume 8 covers electromagnetism in materials, and includes a variety of topics in condensed matter physics, a chapter on magnetohydrodynamics, and another on nonlinear optics. Volume 9 Volume 9 builds on the original statistical physics book, with more applications to condensed matter theory. Volume 10 Volume 10 presents various applications of kinetic theory to condensed matter theory, and to metals, insulators, and phase transitions. See also Lectures on Theoretical Physics List of textbooks on classical and quantum mechanics List of textbooks in thermodynamics and statistical mechanics List of textbooks in electromagnetism The Theoretical Minimum Notes External links Internet Archive: (for volumes 1, 2, 3, 6, 7, 8) and (for volume 4), and (for volume 5). Britannica Online: Course of Theoretical Physics Internet Archive: Landau-Lifschitz Vol. 1-10 Classical mechanics Nauka (publisher) books Physics textbooks Quantum mechanics Series of non-fiction books Statistical mechanics Pergamon Press books
Course of Theoretical Physics
Physics
959
3,656,500
https://en.wikipedia.org/wiki/Hyaloclastite
Hyaloclastite is a volcaniclastic accumulation or breccia consisting of glass (from the Greek hyalus) fragments (clasts) formed by quench fragmentation of lava flow surfaces during submarine or subglacial extrusion. It occurs as thin margins on the lava flow surfaces and between pillow lavas, as well as in thicker deposits, more commonly associated with explosive, volatile-rich eruptions as well as steeper topography. Hyaloclastites form during volcanic eruptions under water, under ice, or where subaerial flows reach the sea or other bodies of water. Hyaloclastite commonly has the appearance of angular, flat fragments sized between a millimeter and a few centimeters. The fragmentation occurs by the force of the volcanic explosion, or by thermal shock and spallation during rapid cooling. Several mineraloids are found in hyaloclastite masses. Sideromelane is a basalt glass rapidly quenched in water. It is transparent and pure, lacking the iron oxide crystals dispersed in the more commonly occurring tachylite. Fragments of these glasses are usually surrounded by a yellow waxy layer of palagonite, formed by reaction of sideromelane with water. Hyaloclastite ridges, formed by subglacial eruptions during the last glacial period, are a prominent landscape feature of Iceland and the Canadian province of British Columbia. Hyaloclastite is usually found at subglacial volcanoes such as tuyas, a distinctive type of flat-topped, steep-sided volcano formed when lava erupts through a thick glacier or ice sheet. In lava deltas, hyaloclastites form the main constituent of foresets formed ahead of the expanding delta. The foresets fill in the seabed topography, eventually building up to sea level, allowing the subaerial flow to move forwards until it reaches the sea again. See also Hyalotuff References Volcanoes of Canada: Types of volcanoes Accessed Jan. 8, 2006 External links Volcanology Vitreous rocks Breccias Volcanic rocks
Hyaloclastite
Materials_science
420
3,062,721
https://en.wikipedia.org/wiki/Neuroinformatics
Neuroinformatics is an emergent field that combines informatics and neuroscience. Neuroinformatics is concerned with neuroscience data and with information processing by artificial neural networks. There are three main directions in which neuroinformatics is applied: the development of computational models of the nervous system and neural processes; the development of tools for analyzing and modeling neuroscience data; and the development of tools and databases for management and sharing of neuroscience data at all levels of analysis. Neuroinformatics encompasses philosophy (computational theory of mind), psychology (information processing theory), and computer science (natural computing, bio-inspired computing), among other disciplines. Neuroinformatics does not deal with matter or energy, so it can be seen as a branch of neurobiology that studies various aspects of nervous systems. The term neuroinformatics seems to be used synonymously with cognitive informatics, described by the Journal of Biomedical Informatics as an interdisciplinary domain that focuses on human information processing, mechanisms and processes within the context of computing and computing applications. According to the German National Library, neuroinformatics is synonymous with neurocomputing. The Proceedings of the 10th IEEE International Conference on Cognitive Informatics and Cognitive Computing introduced the following description: Cognitive Informatics (CI) is a transdisciplinary enquiry of computer science, information sciences, cognitive science, and intelligence science. CI investigates the internal information processing mechanisms and processes of the brain and natural intelligence, as well as their engineering applications in cognitive computing. According to the INCF, neuroinformatics is a research field devoted to the development of neuroscience data and knowledge bases together with computational models. Neuroinformatics in neuropsychology and neurobiology Models of neural computation Models of neural computation are attempts to elucidate, in an abstract and mathematical fashion, the core principles that underlie information processing in biological nervous systems, or functional components thereof. Due to the complexity of nervous system behavior, the associated experimental error bounds are ill-defined, but the relative merit of the different models of a particular subsystem can be compared according to how closely they reproduce real-world behaviors or respond to specific input signals. In the closely related field of computational neuroethology, the practice is to include the environment in the model in such a way that the loop is closed. In the cases where competing models are unavailable, or where only gross responses have been measured or quantified, a clearly formulated model can guide the scientist in designing experiments to probe biochemical mechanisms or network connectivity. Neurocomputing technologies Artificial neural networks Artificial neural networks (ANNs), usually simply called neural networks (NNs), are computing systems vaguely inspired by the biological neural networks that constitute animal brains. An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron that receives a signal then processes it and can signal neurons connected to it.
The "signal" at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are called edges. Neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times. Brain emulation and mind uploading Brain emulation is the concept of creating a functioning computational model and emulation of a brain or part of a brain. In December 2006, the Blue Brain project completed a simulation of a rat's neocortical column. The neocortical column is considered the smallest functional unit of the neocortex. The neocortex is the part of the brain thought to be responsible for higher-order functions like conscious thought, and contains 10,000 neurons in the rat brain (and 108 synapses). In November 2007, the project reported the end of its first phase, delivering a data-driven process for creating, validating, and researching the neocortical column. An artificial neural network described as being "as big and as complex as half of a mouse brain" was run on an IBM Blue Gene supercomputer by the University of Nevada's research team in 2007. Each second of simulated time took ten seconds of computer time. The researchers claimed to observe "biologically consistent" nerve impulses that flowed through the virtual cortex. However, the simulation lacked the structures seen in real mice brains, and they intend to improve the accuracy of the neuron and synapse models. Mind uploading is the process of scanning a physical structure of the brain accurately enough to create an emulation of the mental state (including long-term memory and "self") and copying it to a computer in a digital form. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind. Substantial mainstream research in related areas is being conducted in animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains. According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are currently under active development; however, they will admit that others are, as yet, very speculative, but say they are still in the realm of engineering possibility. Brain–computer interface Research on brain–computer interface began in the 1970s at the University of California, Los Angeles under a grant from the National Science Foundation, followed by a contract from DARPA. The papers published after this research also mark the first appearance of the expression brain–computer interface in scientific literature. 
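The description of artificial neural networks above (weighted connections, a non-linear function applied to the summed inputs, and neurons arranged in input, hidden, and output layers) can be made concrete with a few lines of NumPy. This is a generic illustrative sketch, not code from any project mentioned in this article; the layer sizes and the tanh non-linearity are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer: weighted sum of inputs plus bias, passed through a non-linearity."""
    return np.tanh(x @ w + b)

# A tiny feed-forward network: 4 inputs -> 8 hidden units -> 2 outputs.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

x = rng.normal(size=(5, 4))      # a batch of 5 input "signals"
hidden = layer(x, w1, b1)        # signals travel from the input layer...
output = layer(hidden, w2, b2)   # ...to the output layer
print(output.shape)              # (5, 2)
```

In a trained network the weights would be adjusted by a learning procedure rather than drawn at random as they are here.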
Recently, studies in Human-computer interaction through the application of machine learning with statistical temporal features extracted from the frontal lobe, EEG brainwave data has shown high levels of success in classifying mental states (Relaxed, Neutral, Concentrating) mental emotional states (Negative, Neutral, Positive) and thalamocortical dysrhythmia. Neuroengineering & Neuroinformatics Neuroinformatics is the scientific study of information flow and processing in the nervous system. Institute scientists utilize brain imaging techniques, such as magnetic resonance imaging, to reveal the organization of brain networks involved in human thought. Brain simulation is the concept of creating a functioning computer model of a brain or part of a brain. There are three main directions where neuroinformatics has to be applied: the development of computational models of the nervous system and neural processes, the development of tools for analyzing data from devices for neurological diagnostic devices, the development of tools and databases for management and sharing of patients brain data in healthcare institutions. Brain mapping and simulation Brain simulation is the concept of creating a functioning computational model of a brain or part of a brain. In December 2006, the Blue Brain project completed a simulation of a rat's neocortical column. The neocortical column is considered the smallest functional unit of the neocortex. The neocortex is the part of the brain thought to be responsible for higher-order functions like conscious thought, and contains 10,000 neurons in the rat brain (and 108 synapses). In November 2007, the project reported the end of its first phase, delivering a data-driven process for creating, validating, and researching the neocortical column. An artificial neural network described as being "as big and as complex as half of a mouse brain" was run on an IBM Blue Gene supercomputer by the University of Nevada's research team in 2007. Each second of simulated time took ten seconds of computer time. The researchers claimed to observe "biologically consistent" nerve impulses that flowed through the virtual cortex. However, the simulation lacked the structures seen in real mice brains, and they intend to improve the accuracy of the neuron and synapse models. Mind uploading Mind uploading is the process of scanning a physical structure of the brain accurately enough to create an emulation of the mental state (including long-term memory and "self") and copying it to a computer in a digital form. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind. Substantial mainstream research in related areas is being conducted in animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains. According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are currently under active development; however, they will admit that others are, as yet, very speculative, but say they are still in the realm of engineering possibility. 
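A minimal sketch of the kind of pipeline referred to above, in which simple statistical temporal features are computed from windowed EEG-like signals and passed to an off-the-shelf classifier. The data here are synthetic, and the particular feature set and classifier are assumptions made only for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def temporal_features(window: np.ndarray) -> np.ndarray:
    """A few simple statistical features of one signal window."""
    return np.array([window.mean(), window.std(), window.min(), window.max(),
                     np.mean(np.abs(np.diff(window)))])

# Synthetic stand-in for labelled EEG windows (e.g. three mental-state classes).
labels = rng.integers(0, 3, size=300)
windows = rng.normal(size=(300, 256)) + labels[:, None] * 0.3

X = np.vstack([temporal_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```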
Auxiliary sciences of neuroinformatics Data analysis and knowledge organisation Neuroinformatics (in context of library science) is also devoted to the development of neurobiology knowledge with computational models and analytical tools for sharing, integration, and analysis of experimental data and advancement of theories about the nervous system function. In the INCF context, this field refers to scientific information about primary experimental data, ontology, metadata, analytical tools, and computational models of the nervous system. The primary data includes experiments and experimental conditions concerning the genomic, molecular, structural, cellular, networks, systems and behavioural level, in all species and preparations in both the normal and disordered states. In the recent decade, as vast amounts of diverse data about the brain were gathered by many research groups, the problem was raised of how to integrate the data from thousands of publications in order to enable efficient tools for further research. The biological and neuroscience data are highly interconnected and complex, and by itself, integration represents a great challenge for scientists. History The United States National Institute of Mental Health (NIMH), the National Institute of Drug Abuse (NIDA) and the National Science Foundation (NSF) provided the National Academy of Sciences Institute of Medicine with funds to undertake a careful analysis and study of the need to introduce computational techniques to brain research. The positive recommendations were reported in 1991. This positive report enabled NIMH, now directed by Allan Leshner, to create the "Human Brain Project" (HBP), with the first grants awarded in 1993. Next, Koslow pursued the globalization of the HPG and neuroinformatics through the European Union and the Office for Economic Co-operation and Development (OECD), Paris, France. Two particular opportunities occurred in 1996. The first was the existence of the US/European Commission Biotechnology Task force co-chaired by Mary Clutter from NSF. Within the mandate of this committee, of which Koslow was a member the United States European Commission Committee on Neuroinformatics was established and co-chaired by Koslow from the United States. This committee resulted in the European Commission initiating support for neuroinformatics in Framework 5 and it has continued to support activities in neuroinformatics research and training. A second opportunity for globalization of neuroinformatics occurred when the participating governments of the Mega Science Forum (MSF) of the OECD were asked if they had any new scientific initiatives to bring forward for scientific cooperation around the globe. The White House Office of Science and Technology Policy requested that agencies in the federal government meet at NIH to decide if cooperation were needed that would be of global benefit. The NIH held a series of meetings in which proposals from different agencies were discussed. The proposal recommendation from the U.S. for the MSF was a combination of the NSF and NIH proposals. Jim Edwards of NSF supported databases and data-sharing in the area of biodiversity. The two related initiatives were combined to form the United States proposal on "Biological Informatics". This initiative was supported by the White House Office of Science and Technology Policy and presented at the OECD MSF by Edwards and Koslow. An MSF committee was established on Biological Informatics with two subcommittees: 1. 
Biodiversity (Chair, James Edwards, NSF), and 2. Neuroinformatics (Chair, Stephen Koslow, NIH). At the end of two years the Neuroinformatics subcommittee of the Biological Working Group issued a report supporting a global neuroinformatics effort. Koslow, working with the NIH and the White House Office of Science and Technology Policy to establishing a new Neuroinformatics working group to develop specific recommendation to support the more general recommendations of the first report. The Global Science Forum (GSF; renamed from MSF) of the OECD supported this recommendation. Community Institute of Neuroinformatics, University of Zurich The Institute of Neuroinformatics was established at the University of Zurich and ETH Zurich at the end of 1995. The mission of the Institute is to discover the key principles by which brains work and to implement these in artificial systems that interact intelligently with the real world. Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh Computational Neuroscience and Neuroinformatics Group in Institute for Adaptive and Neural Computation of University of Edinburgh's School of Informatics study how the brain processes information. The International Neuroinformatics Coordinating Facility An international organization with the mission to develop, evaluate, and endorse standards and best practices that embrace the principles of open, fair, and citable neuroscience. As of October 2019, the INCF has active nodes in 18 countries. This committee presented 3 recommendations to the member governments of GSF. These recommendations were: National neuroinformatics programs should be continued or initiated in each country should have a national node to both provide research resources nationally and to serve as the contact for national and international coordination. An International Neuroinformatics Coordinating Facility should be established. The INCF will coordinate the implementation of a global neuroinformatics network through integration of national neuroinformatics nodes. A new international funding scheme should be established. This scheme should eliminate national and disciplinary barriers and provide a most efficient approach to global collaborative research and data sharing. In this new scheme, each country will be expected to fund the participating researchers from their country. The GSF neuroinformatics committee then developed a business plan for the operation, support and establishment of the INCF which was supported and approved by the GSF Science Ministers at its 2004 meeting. In 2006 the INCF was created and its central office established and set into operation at the Karolinska Institute, Stockholm, Sweden under the leadership of Sten Grillner. Sixteen countries (Australia, Canada, China, the Czech Republic, Denmark, Finland, France, Germany, India, Italy, Japan, the Netherlands, Norway, Sweden, Switzerland, the United Kingdom and the United States), and the EU Commission established the legal basis for the INCF and Programme in International Neuroinformatics (PIN). To date, eighteen countries (Australia, Belgium, Czech Republic, Finland, France, Germany, India, Italy, Japan, Malaysia, Netherlands, Norway, Poland, Republic of Korea, Sweden, Switzerland, the United Kingdom and the United States) are members of the INCF. Membership is pending for several other countries. The goal of the INCF is to coordinate and promote international activities in neuroinformatics. 
The INCF contributes to the development and maintenance of database and computational infrastructure and support mechanisms for neuroscience applications. The system is expected to provide access to all freely accessible human brain data and resources to the international research community. The more general task of INCF is to provide conditions for developing convenient and flexible applications for neuroscience laboratories in order to improve our knowledge about the human brain and its disorders. Laboratory of Neuroinformatics, Nencki Institute of Experimental Biology The main activity of the group is development of computational tools and models, and using them to understand brain structure and function. Neuroimaging & Neuroinformatics, Howard Florey Institute, University of Melbourne Institute scientists utilize brain imaging techniques, such as magnetic resonance imaging, to reveal the organization of brain networks involved in human thought. Led by Gary Egan. Montreal Neurological Institute, McGill University Led by Alan Evans, MCIN conducts computationally-intensive brain research using innovative mathematical and statistical approaches to integrate clinical, psychological and brain imaging data with genetics. MCIN researchers and staff also develop infrastructure and software tools in the areas of image processing, databasing, and high performance computing. The MCIN community, together with the Ludmer Centre for Neuroinformatics and Mental Health, collaborates with a broad range of researchers and increasingly focuses on open data sharing and open science, including for the Montreal Neurological Institute. The THOR Center for Neuroinformatics Established April 1998 at the Department of Mathematical Modelling, Technical University of Denmark. Besides pursuing independent research goals, the THOR Center hosts a number of related projects concerning neural networks, functional neuroimaging, multimedia signal processing, and biomedical signal processing. The Neuroinformatics Portal Pilot The project is part of a larger effort to enhance the exchange of neuroscience data, data-analysis tools, and modeling software. The portal is supported from many members of the OECD Working Group on Neuroinformatics. The Portal Pilot is promoted by the German Ministry for Science and Education. Computational Neuroscience, ITB, Humboldt-University Berlin This group focuses on computational neurobiology, in particular on the dynamics and signal processing capabilities of systems with spiking neurons. Led by Andreas VM Herz. The Neuroinformatics Group in Bielefeld Active in the field of Artificial Neural Networks since 1989. Current research programmes within the group are focused on the improvement of man-machine-interfaces, robot-force-control, eye-tracking experiments, machine vision, virtual reality and distributed systems. Laboratory of Computational Embodied Neuroscience (LOCEN) This group, part of the Institute of Cognitive Sciences and Technologies, Italian National Research Council (ISTC-CNR) in Rome and founded in 2006 is currently led by Gianluca Baldassarre. 
It has two objectives: (a) understanding the brain mechanisms underlying learning and expression of sensorimotor behaviour, and related motivations and higher-level cognition grounded on it, on the basis of embodied computational models; (b) transferring the acquired knowledge to building innovative controllers for autonomous humanoid robots capable of learning in an open-ended fashion on the basis of intrinsic and extrinsic motivations. Japan national neuroinformatics resource The Visiome Platform is the Neuroinformatics Search Service that provides access to mathematical models, experimental data, analysis libraries and related resources. An online portal for neurophysiological data sharing is also available at BrainLiner.jp as part of the MEXT Strategic Research Program for Brain Sciences (SRPBS). Laboratory for Mathematical Neuroscience, RIKEN Brain Science Institute (Wako, Saitama) The target of Laboratory for Mathematical Neuroscience is to establish mathematical foundations of brain-style computations toward construction of a new type of information science. Led by Shun-ichi Amari. Netherlands state program in neuroinformatics Started in the light of the international OECD Global Science Forum which aim is to create a worldwide program in Neuroinformatics. NUST-SEECS Neuroinformatics Research Lab Establishment of the Neuro-Informatics Lab at SEECS-NUST has enabled Pakistani researchers and members of the faculty to actively participate in such efforts, thereby becoming an active part of the above-mentioned experimentation, simulation, and visualization processes. The lab collaborates with the leading international institutions to develop highly skilled human resource in the related field. This lab facilitates neuroscientists and computer scientists in Pakistan to conduct their experiments and analysis on the data collected using state of the art research methodologies without investing in establishing the experimental neuroscience facilities. The key goal of this lab is to provide state of the art experimental and simulation facilities, to all beneficiaries including higher education institutes, medical researchers/practitioners, and technology industry. The Blue Brain Project The Blue Brain Project was founded in May 2005, and uses an 8000 processor Blue Gene/L supercomputer developed by IBM. At the time, this was one of the fastest supercomputers in the world. The project involves: Databases: 3D reconstructed model neurons, synapses, synaptic pathways, microcircuit statistics, computer model neurons, virtual neurons. Visualization: microcircuit builder and simulation results visualizator, 2D, 3D and immersive visualization systems are being developed. Simulation: a simulation environment for large-scale simulations of morphologically complex neurons on 8000 processors of IBM's Blue Gene supercomputer. Simulations and experiments: iterations between large-scale simulations of neocortical microcircuits and experiments in order to verify the computational model and explore predictions. The mission of the Blue Brain Project is to understand mammalian brain function and dysfunction through detailed simulations. The Blue Brain Project will invite researchers to build their own models of different brain regions in different species and at different levels of detail using Blue Brain Software for simulation on Blue Gene. 
These models will be deposited in an internet database from which Blue Brain software can extract and connect models together to build brain regions and begin the first whole brain simulations. Genes to Cognition Project Genes to Cognition Project, a neuroscience research programme that studies genes, the brain and behaviour in an integrated manner. It is engaged in a large-scale investigation of the function of molecules found at the synapse. This is mainly focused on proteins that interact with the NMDA receptor, a receptor for the neurotransmitter, glutamate, which is required for processes of synaptic plasticity such as long-term potentiation (LTP). Many of the techniques used are high-throughout in nature, and integrating the various data sources, along with guiding the experiments has raised numerous informatics questions. The program is primarily run by Professor Seth Grant at the Wellcome Trust Sanger Institute, but there are many other teams of collaborators across the world. The CARMEN project The CARMEN project is a multi-site (11 universities in the United Kingdom) research project aimed at using GRID computing to enable experimental neuroscientists to archive their datasets in a structured database, making them widely accessible for further research, and for modellers and algorithm developers to exploit. EBI Computational Neurobiology, EMBL-EBI (Hinxton) The main goal of the group is to build realistic models of neuronal function at various levels, from the synapse to the micro-circuit, based on the precise knowledge of molecule functions and interactions (Systems Biology). Led by Nicolas Le Novère. Neurogenetics GeneNetwork Genenetwork started as component of the NIH Human Brain Project in 1999 with a focus on the genetic analysis of brain structure and function. This international program consists of tightly integrated genome and phenome data sets for human, mouse, and rat that are designed specifically for large-scale systems and network studies relating gene variants to differences in mRNA and protein expression and to differences in CNS structure and behavior. The great majority of data are open access. GeneNetwork has a companion neuroimaging web site—the Mouse Brain Library—that contains high resolution images for thousands of genetically defined strains of mice. The Neuronal Time Series Analysis (NTSA) NTSA Workbench is a set of tools, techniques and standards designed to meet the needs of neuroscientists who work with neuronal time series data. The goal of this project is to develop information system that will make the storage, organization, retrieval, analysis and sharing of experimental and simulated neuronal data easier. The ultimate aim is to develop a set of tools, techniques and standards in order to satisfy the needs of neuroscientists who work with neuronal data. The Cognitive Atlas The Cognitive Atlas is a project developing a shared knowledge base in cognitive science and neuroscience. This comprises two basic kinds of knowledge: tasks and concepts, providing definitions and properties thereof, and also relationships between them. An important feature of the site is ability to cite literature for assertions (e.g. "The Stroop task measures executive control") and to discuss their validity. It contributes to NeuroLex and the Neuroscience Information Framework, allows programmatic access to the database, and is built around semantic web technologies. 
Brain Big Data research group at the Allen Institute for Brain Science (Seattle, WA) Led by Hanchuan Peng, this group has focused on using large-scale imaging computing and data analysis techniques to reconstruct single neuron models and mapping them in brains of different animals. See also References Citations Sources Further reading Books Journals and conferences Computational neuroscience Bioinformatics Computational fields of study
Neuroinformatics
Technology,Engineering,Biology
5,194
12,308,593
https://en.wikipedia.org/wiki/Big%20Bang%20Observer
The Big Bang Observer (BBO) is a proposed successor to the Laser Interferometer Space Antenna (LISA) by the European Space Agency. The primary scientific goal is the observation of gravitational waves from the time shortly after the Big Bang, but it would also be able to detect younger sources of gravitational radiation, like binary inspirals. BBO would likely be sensitive to all LIGO and LISA sources, and others. Its extreme sensitivity would come from the higher-power lasers, and correlation of signals from several different interferometers that would be placed around the Sun. The first phase resembles LISA, consisting of three spacecraft flown in a triangular pattern. The second phase adds three more triangles (twelve spacecraft total), spaced 120° apart in solar orbit, with one position having two overlapping triangles in a hexagram formation. The individual satellites would differ from those in LISA by having far more powerful lasers. In addition each triangle will be much smaller than the triangles in LISA's pattern, about 50,000 km instead of 1 to 5 million km. Because of this smaller size, the test masses will experience smaller tidal deviations, and thus can be locked on a particular fringe of the interferometer — much as in LIGO. By contrast, LISA's test masses will fly in an essentially free orbit, with the spacecraft flying around them, and interferometer fringes will simply be counted, in a technique called "time-delay interferometry". The BBO instruments present massive technological challenges. Funding has not been allocated for development, and even if selected for development, optimistic estimates place the instrument's launch date many decades away. See also Cosmic gravitational wave background Gravitational wave Laser Interferometer Space Antenna LISA Pathfinder Further reading Gravitational Wave Missions from LISA to Big Bang Observer, WM Folkner, JPL - 2005 The Big Bang Observer, Gregory Harry (MIT), LIGO-G0900426 Interferometric gravitational-wave instruments European Space Agency space probes Space telescopes Proposed spacecraft
Big Bang Observer
Astronomy
413
45,562,261
https://en.wikipedia.org/wiki/Digital%20labor
Digital labor or digital labour represents an emergent form of labor characterized by the production of value through interaction with information and communication technologies such as digital platforms or artificial intelligence. Examples of digital labor include on-demand platforms, micro-working, and user-generated data for digital platforms such as social media. Digital labor describes work that encompasses a variety of online tasks. If a country has the structure to maintain a digital economy, digital labor can generate income for individuals without the limitations of physical barriers. Origins As production-based industries declined, the rise of a digital and information-based economy fostered the development of the digital labor market. The rise of digital labor can be attributed to the shift from the Industrial Revolution to the Information Age. Digital labor can be connected to the economic process of disintermediation, where digital labor has taken away the job of the mediator in employee-employer supply chains. The value of the labor produced by marginalized digital workers in the digital or gig economy has yet to be recognized formally through labor laws. In many cases, individuals who work in digital labor are considered to be self-employed and are not protected by their employer from fluctuations in the economy.  Based on Marxian economic theory, digital labor can be considered labor as it produces use-value, produces capital, and is based upon collective labor in a workforce. Digital labor markets are websites or economies that facilitate the production, trade, and selling of digital content, code, digital products, or other ideas or goods emerging from digital and technological environments. A widely used example of a digital labor market is Amazon Mechanical Turk. Other forms of emergent digital subcultures including community forums, blogs, and gamers utilize digital labor as organizing tools. The platforms can be potential generators of cultural goods and are incorporated into global economies and networks. The popularity of the digital economy can be applied to the onset of economies based on peer production platforms like free and open-source software projects like Linux/GNU and Wikipedia. Computer scientist Jaron Lanier, in the books You are Not a Gadget and Who Owns the Future, argues that the open source approach contributed to the social stratification and widening of the gaps between rich and the poor, the rich being the major stakeholders in digital companies, who own the content of the content creators. A critique of the open source software movement is that peer production economies rely on an increasingly alienated labor force, forced into unpaid, knowledge labor. On-demand platforms On-demand work has been rising since the years 2008-2010. It follows the development of Internet access and the spread of mobile devices, which allow almost everyone to be in touch with this kind of platform, including children and teenagers. Such platforms cover a large field of domains: rental (Airbnb, Booking.com), travel (trivago, tripadvisor), food delivery (Uber Eats, Grub Hub, and Postmates), transportation (Uber, Taxify, Lyft), home services (Task Rabbit, Helpling), education (Udemy, Coursera), etc. 'Workers on such platforms are often not considered as employees, and aren't well paid. For example, an Uber driver earns between $8.80 and $11 per hour after expenses. 
All of these platforms can be seen as data producers : both customers and workers produce data while using the service. This data can then be used for improving the service or can be sold on the market. Business model of such companies is often centered around data. In December of 2020, Saile Inc. filed a United States trademark for Digital Labor™, based on their patent-pending Artificial Intelligence that performs the entire sales prospecting lifecycle by performing digital tasks on behalf of human sales executives. Digital Labor™ tasks are tracked, counted and purchased from Saile by companies ranging from the Fortune 500 to small businesses. Social media The notion of digital labor on social media arise from the fact that most of the value of any social media platforms is created by the users. Therefore they can be considered as digital workers on the platform. On most platforms however this work remains unpaid. Some exceptions include video and music sharing platforms. This is linked with the notion of participatory culture, "a term often used for designating the involvement of users, audiences, consumers and fans in the creation of culture and content". Digital labor is rooted in Italian autonomist, workerist/Operaismo worker's rights movements of the 1960s and 1970s, as well as the wages for housework movement founded by Selma James in 1972. The idea of the "digital economy" is defined as the moment, where work has shifted from the factory to the social realm. Italian autonomists would describe this as the, "social factory." Studies of the digital labor of social media were some of the first critiques of digital labor. This included scholarship like, "What the MySpace generation should know about working for free" (Trebor Scholz), and "From Mobile Playgrounds to Sweatshop City" (2010). (Andrew Ross), Tiziana Terranova and others developed a working definition of digital labor, drawing from the idea of free labor, and immaterial labor. Other scholars who have written about Digital Labor include: Ursula Huws, Trebor Scholz, Frank Pasquale, Sergio Bellucci, Christian Fuchs, Andrew Ross, Jaron Lanier, as well as Postcolonial feminists, including, Lisa Nakamura. Their work has been tied to other Alter-globalization texts. Social networking labor, or user labor, denotes the creation of data by social media and networking platforms users, which contributes to the financial gains and profits of those platforms, but not to the users. It is based on the production and exchange of cultural content, and the collection of users' metadata. Microwork tasks can be completed before using the platform, which indirectly trains algorithms (such as text or image recognition when creating an account). Digital labor rights The current debate over digital labor examines whether or not society's capitalistic economy has prompted corporate exploitation of digital labor in social media. Social media has developed as a means for people to create and share information and ideas over the Internet. Because social media are typically associated with leisure and entertainment, the monetization of digital labor has blurred the line separating work from entertainment. Proponents argue that exploitation occurs as typical social media users do not receive any monetary compensation for their digital content, while companies are able to take advantage of this freely accessible information to generate revenues. 
Studies of social media sites such as YouTube have analyzed their business models and found that user-generated digital labor is being monetized through ads and other methods to create company profit. Criticism against exploitation centers around people as prosumers. Scholars argue that exploitation cannot occur if people are both producing and consuming their own digital labor, thereby deriving value from their own created content. Due to the lack of regulation, the issue of digital labor worker rights has been raised by some activists and scholars. Some scholars have criticized the current situation as a form of neocolonialist exploitation. Gender Inequality Female platform work is more prevalent in countries that have lower female participation rates or in areas in which women tend to be more prevalent in non-standard types of employment and lower-wage jobs. The platform economy provides employment opportunities for disadvantaged groups who lack better options in their area. Platform economies can reproduce inequalities that are present in offline work such as lower earnings and occupational segregation. Women tend to be centered around digital roles that conform to patterns in the traditional labor market and economy such as freelancing and on-location services provided by care work platforms. The participation of women in digital work platforms tends to be more concentered on traditionally female gender rolled tasks. A technical report by the European Commission found that females are less likely to perform creative tasks, micro-tasking, transportation, and software development when compared to men when performing digital labor. Over the last two decades, there has been a steady decline in the gender-based wage gap in the United Kingdom largely caused by strict national labor relation anti-discrimination legislation. However, there still exist many challenges such as low labor force participation, gender wage gaps, occupational segregation, and a postgraduate educational gap. In the UK and most of Europe, many women find digital labor employment through remote crowd-work platforms (also known as part of the "Gig-economy") like Upwork, TaskRabbit, etc. The switch from the traditional labor market to platform labor has not extinguished the gender bias in traditional employment but rather bought new sets of challenges. The hiring process used in digital labor platforms are executed by machine learning algorithms which learn from past data patterns and are showing discriminatory outcomes based around gender. An interview conducted with 49 women was carried out to figure out the gender dimensions of these digital platforms and multiple complaints based around gender bias were reported as customer feedback. The African Union has a vision to empower women through Information and communication technologies (ICTs). They also declared 2010 to 2020 as the African Women's Decade. It is found that there are several gender inequalities due to education, socioeconomic status, domesticity, and traditionalism which creates disparity in the ICT access and usage. It further widens the digital gender divide between men's and women's representation in the digital labor market. Women in Africa were hopeful that new digital technologies or digitized work would bring equal pay and working opportunities, but in reality, they are facing new gender-based inequalities like economic insecurities, high work intensity, and adverse psychological impacts among women workers on such platforms. 
See also Microwork Amazon Mechanical Turk Computer and network surveillance Hyperreality Wages for housework Online volunteering References Bibliography Paolo Virno and Michael Hardt, Radical Thought in Italy: A Potential Politics (Minneapolis: University of Minnesota Press, 1996). Antonio Negri, The Politics of Subversion: A Manifesto for the Twenty-first Century (Cambridge: Polity, 1989). Anonymous, "The Digital Artisan Manifesto." (posted to nettime on 15 May 1997). Anwar, M. A., & Graham, M. (2020). Digital labour at economic margins: African workers and the global information economy. Review of African Political Economy, 47(163), 95–105. https://doi.org/10.1080/03056244.2020.1728243 Graham, M. and Anwar, M.A. 2018. "Digital Labour" In: Digital Geographies Ash, J., Kitchin, R. and Leszczynski, A. (eds.). Sage. London. Gong, J., Hong, Y., & Zentner, A. (2018). Role of Monetary Incentives in the Digital and Physical Inter-Border Labor Flows. Journal of Management Information Systems, 35(3), 866–899. https://doi.org/10.1080/07421222.2018.1481661 Kaplan, M. (2020). The Self-consuming Commodity: Audiences, Users, and the Riddle of Digital Labor. Television & New Media, 21(3), 240–259. https://doi.org/10.1177/152747641881900 Kvasny, L. (2013). Digital labour: the Internet as playground and factory. New Technology, Work & Employment, 28(3), 254–256. https://doi.org/10.1111/ntwe.12019 Value-Creation in the Late Twentieth Century: The Rise of the Knowledge Worker. Institute of Governmental Affairs, University of California, Davis. 1995. . Political Economy of Information, ed. Vincent Mosco and Janet Wasko (Madison: University of Wisconsin Press, 1988). Schmiede, R. (2017). Reconsidering value and labour in the digital age (dynamics of virtual work series). New Technology, Work & Employment, 32(1), 59–61. https://doi.org/10.1111/ntwe.12083 Scholz, T. (2012). Digital Labor. Sergio Bellucci, E-Work. Lavoro, rete, innovazione, Roma, Derive e Approdi, 2005. Siegel, B., Hoffman, R., & Skigen, R. (2020). Evolution of Automation in the Department of Defense: Leveraging Digital Labor to Transform Finance and Business Operations. Armed Forces Comptroller, 65(2), 40–44. Surie, A., & Sharma, L. V. (2019). Climate change, Agrarian distress, and the role of digital labour markets: evidence from Bengaluru, Karnataka. Decision (0304-0941), 46(2), 127–138. https://doi.org/10.1007/s40622-019-00213-w Verma, T. (2018). Feminism, Labour and Digital Media: The Digital Housewife. Australian Feminist Studies, 33(96), 277. https://doi.org/10.1080/08164649.2018.1517252 External links Digital Labor: Sweatshops, picket lines, barricades, the New School November 2014. The Internet as Playground and Factory conference, at the New School, 2009 CUNY Digital Labor Working Group Political theories Labour economics
Digital labor
Technology
2,779
1,280,478
https://en.wikipedia.org/wiki/Schrader%20valve
The Schrader valve (also called American valve) is a type of pneumatic tire valve used on virtually every motor vehicle in the world today. The Schrader company, for which it was named, was founded in 1844 by August Schrader. The original Schrader valve design was invented in 1891, and patented in the United States in 1893. The Schrader valve consists of a valve stem into which a valve core is threaded. The valve core is a poppet valve assisted by a spring. A small rubber seal located on the core keeps the fluid from escaping through the threads. Using the appropriate tools, a faulty valve core can be immediately extracted from the valve stem and replaced with a new one. Uses The Schrader valve is used on virtually all automobile tires and motorcycle tires and most wider-rimmed bicycle tires. In addition to tube and tubeless tires, Schrader valves of varying diameters are used on many refrigeration and air conditioning systems to allow servicing, including recharging with refrigerant; by plumbers conducting leak-down pressure tests on pipe installations; as a bleeding and test port on the fuel rail of some fuel injected engines; on bicycle air shock absorbers to allow adjustment of air pressure according to the rider's weight; for medical gas outlets within hospitals and some medical vehicles; and in the buoyancy compensator (BC) inflators of SCUBA systems where the ability to easily disconnect an air hose (even underwater) without the loss of tank air is critical. Schrader valves are also widely used in high-pressure hydraulic systems on aircraft. Many domestic fire extinguishers use an internal valve identical to a Schrader valve, but with a lever on top to enable quick release of the pressurized content. It is also the same thread specification used on the shutter button of some old Leica, Yashica, and also Nikon F and F2 cameras. Valve A Schrader valve consists of an externally threaded hollow cylindrical metal tube, typically nickel-plated brass. In the center of the exterior end is a metal pin aligned with the long axis of the tube; the pin's end is approximately flush with the end of the valve body. All Schrader valves used on tires have threads and bodies of a single standard size at the exterior end, so caps and tools generally are universal for the valves on all common applications. The core of the valve can be removed or tightened with a tool. Industrial Schrader valves are available in different diameters and valve core variants and are used in refrigeration, propane, and a variety of other uses. With the advent of miniature electronics, Schrader valve stems with integrated transmitters for tire pressure monitoring systems (TPMS) became available. Cap A valve cap on a Schrader valve prevents the entry of contaminants that may interfere with the sealing surfaces and cause leakage. Metal, and some hard plastic valve caps, have a rubber washer or O-ring inside, both to prevent the cap from loosening and falling off due to vibration. These caps also serve as a mechanical seal or hermetic seal to prevent air from leaking from a faulty valve core. Simple caps without a seal do not reliably prevent leaks. Some metal caps are equipped with prongs to enable removing and replacing valve cores, thereby serving two functions: seal and emergency tool. In refrigeration and air conditioning applications, the valve cap is considered to be the primary seal, with the Schrader valve only being used for service access. 
Schrader versus other valve types There are three types of valve used on tires worldwide: Schrader valves, Presta valves, and Dunlop valves. Each goes by multiple names. Schrader valves are also known as American valves or car valves. Presta valves are also known as French valves, Sclaverand valves, and road bike valves. Dunlop valves are also known as German valves, English valves, Holland valves, Woods valves, flash valves, and alligator valves. Schrader valves are almost universal on car, truck, and motorcycle tires worldwide. Presta and Dunlop valves are mostly found on bicycle tubes. Both the Schrader and the Presta types are effective for sealing high pressures. Their chief differences are that Schrader valves are larger and have springs that close the valve except when the pin is depressed. Schrader valves are used in a wide variety of compressed gas and pressurized liquid applications such as small torch and grill cylinders, and air shocks. Schrader valves are also viewed as more complex (requiring two seals rather than one), and they weigh more than Presta valves. Schrader and Dunlop valve stems are 8 mm in diameter, whereas Presta valve stems are 6 mm, allowing Prestas to be used on narrower, high-performance rims as on road racing bicycles. Another disadvantage of the Schrader is that debris may be introduced into its spring-loaded pin, impairing inflation, whereas the Presta valve relies only on air pressure and a small knurled nut to keep it shut. Inflating a bicycle tire equipped with a Presta or Dunlop valve at an automobile filling station requires an adaptor, while a Schrader-valved tube does not. Inflating at home or on the road requires either a 6 mm air chuck for Presta and Dunlop valves, or an 8 mm chuck for Schrader valves. An important advantage of Schrader valves relative to Presta is that Schrader valves allow for quick air pressure checks. Some chucks have dual orifices to inflate all three. Dimensions External 8V1 thread: 32 TPI (tap size 8v1-32) Internal 5V1 thread: 36 TPI (tap size 5v1-36) See also Inner tube References External links Schrader-Bridgeport International (North American English version) Vehicle parts Air valves Tires 1891 introductions American inventions 1890s neologisms
Schrader valve
Technology
1,240
24,666,773
https://en.wikipedia.org/wiki/FlowJo
FlowJo is a software package for analyzing flow cytometry data. Files produced by modern flow cytometers are written in the Flow Cytometry Standard format with an .fcs file extension. FlowJo imports and analyzes cytometry data regardless of which flow cytometer was used to collect it.

Operation
In FlowJo, samples are organized in a "Workspace" window, which presents a hierarchical view of all the samples and their analyses (gates and statistics). Viewing an entire experiment in a Workspace permits organizing and managing complex cytometry experiments and producing detailed graphical reports. FlowJo's ability to automate repetitive operations facilitates the production of statistics tables and graphical reports when the experiment involves many samples, parameters, or operations. Within a workspace, samples can be grouped or sorted by attributes such as the panel of antibodies with which they are stained, the tissue type, or the patient from whom they came. When an operation is initiated on a group, FlowJo can perform the same operation on every sample belonging to that group; for example, a gate applied to one sample and copied to the group is automatically placed on all samples in the group. FlowJo provides tools for the creation of histogram and other plot overlays, cell cycle analysis, calcium flux analysis, proliferation analysis, quantification, cluster identification, and backgating displays.

Development
FlowJo became a commercial product in 1996. In 2002, Tree Star released a Windows version. FlowJo is currently developed by the Ashland, Oregon-based FlowJo LLC, a subsidiary of Becton Dickinson.

References

External links

Bioinformatics software Cell biology Flow cytometry Laboratory software
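Gating of the kind described above is, at its core, a boolean filter applied to a table of per-cell event data. The sketch below illustrates the idea in plain Python with pandas on a small hypothetical event table; it assumes the .fcs file has already been parsed into a DataFrame (for example with a reader library such as fcsparser) and is not an example of FlowJo's own interface, which is a graphical application.

import pandas as pd

# Hypothetical event table: one row per cell, one column per measured parameter.
# In practice these values would come from parsing an .fcs file.
events = pd.DataFrame({
    "FSC-A": [52000, 81000, 64000, 12000],   # forward scatter
    "SSC-A": [30000, 95000, 40000,  8000],   # side scatter
    "FITC-A": [150.0, 20.0, 900.0, 5.0],     # a fluorescence channel
})

# A simple rectangular gate on scatter, analogous to drawing a gate in a Workspace.
gate = events["FSC-A"].between(40000, 90000) & events["SSC-A"].between(20000, 60000)
gated = events[gate]

# Statistics of the kind a Workspace report would tabulate.
print("events in gate:", len(gated), "of", len(events))
print("median FITC-A in gate:", gated["FITC-A"].median())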
FlowJo
Chemistry,Biology
350
23,295,054
https://en.wikipedia.org/wiki/Lodoxamide
Lodoxamide is an antiallergic pharmaceutical drug. It is marketed under the trade name Alomide in the UK. Like cromoglicic acid, it acts as a mast cell stabilizer. In 2014, lodoxamide and bufrolin were found to be potent agonists of G protein-coupled receptor 35 (GPR35), an orphan receptor believed to play a role in inflammatory processes, pain, and the development of stomach cancer.

See also
Nedocromil
Zaprinast
Amlexanox
Pemirolast
Pamoic acid
Kynurenic acid
CXCL17

References

Nitriles Chloroarenes Carboxamides Dicarboxylic acids Mast cell stabilizers
Lodoxamide
Chemistry
145
16,011,006
https://en.wikipedia.org/wiki/Worst-case%20circuit%20analysis
Worst-case circuit analysis (WCCA or WCA) is a cost-effective means of screening a design to ensure, with a high degree of confidence, that potential defects and deficiencies are identified and eliminated prior to and during test, production, and delivery. It is a quantitative assessment of equipment performance that accounts for manufacturing, environmental, and aging effects. In addition to the circuit analysis itself, a WCCA often includes stress and derating analysis, failure modes, effects and criticality analysis (FMECA), and reliability prediction (MTBF). The specific objective is to verify that the design is robust enough to meet the system performance specification over the design life under worst-case conditions and tolerances (initial, aging, radiation, temperature, etc.). Stress and derating analysis is intended to increase reliability by providing sufficient margin relative to the allowable stress limits. This reduces overstress conditions that may induce failure and reduces the rate of stress-induced parameter change over life. It determines the maximum applied stress on each component in the system.

General information
A worst-case circuit analysis should be performed on all circuitry that is safety- or financially critical. Worst-case circuit analysis is a technique which, by accounting for component variability, determines circuit performance under a worst-case scenario (extreme environmental or operating conditions). Environmental conditions are the external stresses applied to each circuit component, such as temperature, humidity, or radiation. Operating conditions include external electrical inputs, component quality level, interaction between parts, and drift due to component aging. WCCA helps build design reliability into hardware intended for long-term field operation. Electronic piece parts fail in two distinct modes: out-of-tolerance drift, in which the circuit continues to operate with degraded performance until it exceeds its required operating limits, and catastrophic failure. Catastrophic failures may be minimized through MTBF, stress and derating, and FMECA analyses, which help ensure that all components are properly derated and that any degradation occurs gracefully. A WCCA makes it possible to predict the circuit's performance limits under all combinations of part tolerances. There are many reasons to perform a WCCA; chief among them is avoiding the schedule and cost impact of deficiencies discovered late in test or production.

Methodology
Worst-case analysis is the analysis of a device (or system) that assures the device meets its performance specifications. It typically accounts for tolerances due to initial component tolerance, temperature, aging, and environmental exposure (such as radiation for a space device). The beginning-of-life analysis covers the initial tolerances and provides the data sheet limits for the manufacturing test cycle. The end-of-life analysis adds the degradation resulting from aging and temperature effects on the elements within the device or system. The analysis is usually performed using SPICE, but mathematical models of individual circuits within the device (or system) are needed to determine the sensitivities and the worst-case performance. A computer program is frequently used to total and summarize the results.
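As an illustration of how such tolerances can be combined, the sketch below performs a worst-case analysis of a simple resistive voltage divider in Python. The component values, the source voltage, and the ±1% end-of-life tolerance are assumptions chosen only for the example, not figures from any particular standard or program; the extreme value analysis (EVA) corners, root-sum-square (RSS) spread, and Monte Carlo check simply apply the general approach described above and in the steps that follow.

import itertools
import random

VIN = 10.0                        # assumed source voltage (V)
R1_NOM, R2_NOM = 9000.0, 1000.0   # assumed nominal resistances (ohms)
TOL = 0.01                        # assumed +/-1% end-of-life tolerance on each resistor

def vout(r1, r2, vin=VIN):
    """Divider output: Vout = Vin * R2 / (R1 + R2)."""
    return vin * r2 / (r1 + r2)

nominal = vout(R1_NOM, R2_NOM)

# Extreme value analysis: evaluate every corner of the tolerance box.
corners = [vout(R1_NOM * (1 + s1 * TOL), R2_NOM * (1 + s2 * TOL))
           for s1, s2 in itertools.product((-1, 1), repeat=2)]
eva_low, eva_high = min(corners), max(corners)

# RSS: combine the one-at-a-time deltas (sensitivity times tolerance) statistically.
dv_r1 = vout(R1_NOM * (1 + TOL), R2_NOM) - nominal
dv_r2 = vout(R1_NOM, R2_NOM * (1 + TOL)) - nominal
rss = (dv_r1 ** 2 + dv_r2 ** 2) ** 0.5

# Monte Carlo cross-check (uniform tolerance distribution) as a second,
# independent method of analysis.
samples = [vout(R1_NOM * random.uniform(1 - TOL, 1 + TOL),
                R2_NOM * random.uniform(1 - TOL, 1 + TOL))
           for _ in range(100000)]

print(f"nominal Vout      : {nominal:.4f} V")
print(f"EVA worst case    : {eva_low:.4f} .. {eva_high:.4f} V")
print(f"RSS spread        : +/- {rss:.4f} V about nominal")
print(f"Monte Carlo range : {min(samples):.4f} .. {max(samples):.4f} V")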
A WCCA follows these steps:
Generate or obtain a circuit model
Obtain correlation data to validate the model
Determine the sensitivity to each component parameter
Determine the component tolerances
Calculate the variation contributed by each component parameter as its sensitivity times its absolute tolerance
Use at least two methods of analysis (e.g. hand analysis and SPICE or Saber, or SPICE and measured data) to assure the result
Generate a formal report to convey the information produced

The design is broken down into appropriate functional sections. A mathematical model of each circuit is developed and the effects of the various part and system tolerances are applied. The circuit's extreme value analysis (EVA) and root-sum-square (RSS) results are determined for the beginning-of-life and end-of-life states. These results are used to calculate part stresses and feed other analyses. For the WCCA to remain useful throughout the product's life cycle, it is extremely important that the analysis be documented in a clear and concise format, allowing future updates and review by engineers other than the original designer. A compliance matrix is generated that clearly identifies the results and any issues.

References

External links
WCCA: Simple Comparison of Different Methods, DOI: 10.13140/RG.2.2.13287.75689
MIL-STD-785B has a short section on WCCA
Why Perform a Worst Case Analysis
Aerospace Corporation - Aerospace Corp. Mission Assurance Improvement Workshop: Electrical Design Worst-Case Circuit Analysis: Guidelines and Draft Standard (REV A) (MAIW), TOR-2013-00297
European Cooperation for Space Standardization, see Worst case circuit performance analysis - ECSS-Q-30-01A and ECSS-Q-HB-30-01A, and Dependability ECSS-Q-ST-30C

Reliability analysis
Worst-case circuit analysis
Engineering
976