Columns: id (int64), url (string), text (string), source (string), categories (string, 160 classes), token_count (int64)
36,401,358
https://en.wikipedia.org/wiki/OPS%205111
OPS 5111, also known as Navstar 1, NDS-1, GPS I-1 and GPS SVN-1, was an American navigation satellite launched in 1978 as part of the Global Positioning System development program. It was the first GPS satellite to be launched, and one of eleven Block I demonstration satellites. Background The Global Positioning System (GPS) was developed by the U.S. Department of Defense to provide all-weather, round-the-clock navigation capabilities for military ground, sea, and air forces. Since its implementation, GPS has also become an integral asset in numerous civilian applications and industries around the globe, including recreational uses (e.g., boating, aircraft, hiking), corporate vehicle fleet tracking, and surveying. GPS employs 24 spacecraft in 20,200 km circular orbits inclined at 55°. These vehicles are placed in six orbital planes with four operational satellites in each plane. Spacecraft The first eleven spacecraft (GPS Block 1) were used to demonstrate the feasibility of the GPS system. They were three-axis stabilized and nadir-pointing, using reaction wheels. Dual solar arrays supplied over 400 watts. They had S-band communications for control and telemetry, and an ultra-high-frequency (UHF) cross-link between spacecraft. They were manufactured by Rockwell Space Systems, were 5.3 meters across with solar panels deployed, and had a design life expectancy of 5 years. Unlike the later operational satellites, GPS Block 1 spacecraft were inclined at 63°. Launch OPS 5111 was launched at 23:44 UTC on 22 February 1978, atop an Atlas F launch vehicle with an SGS-1 upper stage. The Atlas used had the serial number 64F, and was originally built as an Atlas F. The launch took place from Space Launch Complex 3 East (SLC-3E) at Vandenberg Air Force Base, and placed OPS 5111 into a transfer orbit. The satellite raised itself into medium Earth orbit using a Star-27 apogee motor. 
Mission By 29 March 1978, OPS 5111 was in an orbit with a perigee of , an apogee of , a period of 718.70 minutes, and 63.3° of inclination to the equator. The satellite had a design life of 5 years and a mass of . It broadcast the PRN 04 signal in the GPS demonstration constellation, and was retired from service on 17 July 1985. References 1978 in spaceflight GPS satellites Spacecraft launched in 1978
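As a sanity check on the figures above (a 20,200 km circular orbit altitude and a period of roughly 718.7 minutes), the orbital period can be computed from Kepler's third law. A minimal sketch, assuming a spherical Earth with the standard gravitational parameter:

```python
import math

MU_EARTH = 398600.4418      # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6378.137          # km, equatorial radius

altitude_km = 20200.0       # GPS circular orbit altitude from the text
a = R_EARTH + altitude_km   # semi-major axis of a circular orbit

# Kepler's third law: T = 2*pi*sqrt(a^3 / mu)
period_min = 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60
print(round(period_min, 1))  # ~718.7 minutes, matching the stated period
```

The close agreement with the 718.70-minute period reported for OPS 5111 confirms the two figures are consistent.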
OPS 5111
Technology
497
52,319,534
https://en.wikipedia.org/wiki/Tesevatinib
Tesevatinib (KD019, XL647) is an experimental drug proposed for use in kidney cancer and polycystic kidney disease. The drug was first developed by Exelixis, Inc. and was later acquired by Kadmon Corporation. Tesevatinib binds to and inhibits several receptor tyrosine kinases that play major roles in tumor cell proliferation and tumor vascularization, including epidermal growth factor receptor (EGFR; ERBB1), human epidermal growth factor receptor 2 (HER2; ERBB2), vascular endothelial growth factor receptor (VEGFR), and ephrin type-B receptor 4 (EphB4). The drug's activity was initially studied in non-small cell lung cancer. In a 2007 pre-clinical study with xenograft tumors of an erlotinib-resistant cell line, tesevatinib substantially inhibited the growth of these tumors. In polycystic kidney disease, a histological study of the drug's effects and toxicity in rats and mice was published in July 2017. As of March 2019, the drug was in Phase II clinical trials for the treatment of polycystic kidney disease in adults and children. References Angiogenesis inhibitors Experimental cancer drugs Tyrosine kinase inhibitors
Tesevatinib
Biology
264
1,983,541
https://en.wikipedia.org/wiki/Air%20interface
The air interface, or access mode, is the communication link between two stations in mobile or wireless communication. The air interface involves both the physical and data link layers (layers 1 and 2) of the OSI model for a connection. Physical Layer The physical connection of an air interface is generally radio-based. This is usually a point-to-point link between an active base station and a mobile station. Technologies like Opportunity-Driven Multiple Access (ODMA) may have flexibility regarding which devices serve in which roles. Some types of wireless connections possess the ability to broadcast or multicast. Multiple links can be created in limited spectrum through FDMA, TDMA, or SDMA. More advanced transmission schemes, such as OFDM and CDMA, build on these basic multiplexing approaches. In cellular telephone communications, the air interface is the radio-frequency portion of the circuit between the cellular phone set or wireless modem (usually portable or mobile) and the active base station. As a subscriber moves from one cell to another in the system, the active base station changes periodically. Each changeover is known as a handoff. In radio and electronics, an antenna (plural antennae or antennas), or aerial, is an electrical device which converts electric power into radio waves, and vice versa. It is usually used with a radio transmitter or radio receiver. In transmission, a radio transmitter supplies an electric current oscillating at radio frequency to the antenna's terminals, and the antenna radiates the energy from the current as electromagnetic waves (radio waves). An antenna focuses the radio waves in a certain direction, usually called the main direction; less energy is emitted in other directions. The gain of an antenna, in a given direction, is usually referenced to a (hypothetical) isotropic antenna, which radiates equally in all directions. 
The antenna gain is the power in the strongest direction divided by the power that would be transmitted by an isotropic antenna emitting the same total power. In this case the antenna gain (Gi) is often specified in dBi, or decibels over isotropic. Other reference antennas are also used, especially: •gain relative to a half-wave dipole (Gd), when the reference antenna is a half-wave dipole antenna; •gain relative to a short vertical antenna (Gv), when the reference antenna is a linear conductor much shorter than one quarter of the wavelength. Data Link Layer The data link layer in an air interface is often divided further than the simple Media access control (MAC) and Logical link control (LLC) sublayers found in other OSI terminology. While the MAC sublayer is generally unmodified, the LLC sublayer is subdivided into two or more additional sublayers depending on the standard. Common sublayers include: Radio Link Control Packet Data Convergence Protocol Service data adaptation protocol Especially in mobile telecommunications and broadband internet, signals received over several channels may be combined by maximal-ratio combining, guided by signal-to-noise ratio estimation: the signals from each channel are added together, and the gain applied to each channel is made proportional to the RMS signal level and inversely proportional to the mean-square noise level in that channel, with a different proportionality constant for each channel. The channels themselves are separated with filters, and multiplexing schemes such as CDMA, FDMA, WCDMA, TDMA, and ODMA are used to serve multiple users, so that calls and network services can be routed and authenticated to a unique subscriber over the core network link protocols. Standards GSM/UMTS various UTRA 5G NR References Radio technology
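The gain and combining rules above can be sketched numerically. A minimal illustration, assuming ideal SNR estimates (the channel values below are invented for the example): it converts a power ratio to dBi, and weights each channel by RMS signal over mean-square noise as in maximal-ratio combining.

```python
import math

# Antenna gain in dBi: power in the strongest direction over isotropic power.
def gain_dbi(p_direction, p_isotropic):
    return 10 * math.log10(p_direction / p_isotropic)

print(round(gain_dbi(4.0, 1.0), 2))  # 6.02 dBi

# Maximal-ratio combining: gain proportional to RMS signal level,
# inversely proportional to mean-square noise level (invented values).
signal_rms = [1.0, 0.5]
noise_ms = [0.1, 0.4]
weights = [s / n for s, n in zip(signal_rms, noise_ms)]

# The combined SNR of an ideal MRC combiner equals the sum of per-channel SNRs.
snr_per_channel = [s * s / n for s, n in zip(signal_rms, noise_ms)]
print(round(sum(snr_per_channel), 3))  # 10.625
```

The combined-SNR line illustrates why MRC is attractive: each channel contributes its own SNR additively, so even a noisy channel helps rather than hurts.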
Air interface
Technology,Engineering
755
1,014,906
https://en.wikipedia.org/wiki/Cyclomatic%20complexity
Cyclomatic complexity is a software metric used to indicate the complexity of a program. It is a quantitative measure of the number of linearly independent paths through a program's source code. It was developed by Thomas J. McCabe, Sr. in 1976. Cyclomatic complexity is computed using the control-flow graph of the program. The nodes of the graph correspond to indivisible groups of commands of a program, and a directed edge connects two nodes if the second command might be executed immediately after the first command. Cyclomatic complexity may also be applied to individual functions, modules, methods, or classes within a program. One testing strategy, called basis path testing by McCabe, who first proposed it, is to test each linearly independent path through the program. In this case, the number of test cases will equal the cyclomatic complexity of the program. Description Definition There are multiple ways to define cyclomatic complexity of a section of source code. One common way is the number of linearly independent paths within it. A set S of paths is linearly independent if the edge set of any path in S is not the union of the edge sets of paths in some subset of the rest of S. If the source code contained no control flow statements (conditionals or decision points), the complexity would be 1, since there would be only a single path through the code. If the code had one single-condition IF statement, there would be two paths through the code: one where the IF statement is TRUE and another one where it is FALSE. Here, the complexity would be 2. Two nested single-condition IFs, or one IF with two conditions, would produce a complexity of 3. Another way to define the cyclomatic complexity of a program is to look at its control-flow graph, a directed graph containing the basic blocks of the program, with an edge between two basic blocks if control may pass from the first to the second. The complexity M is then defined as M = E − N + 2P, where E = the number of edges of the graph, N = the number of nodes of the graph, and P = the number of connected components. An alternative formulation of this, as originally proposed, is to use a graph in which each exit point is connected back to the entry point. In this case, the graph is strongly connected. Here, the cyclomatic complexity of the program is equal to the cyclomatic number of its graph (also known as the first Betti number), which is defined as M = E − N + P. This may be seen as calculating the number of linearly independent cycles that exist in the graph: those cycles that do not contain other cycles within themselves. Because each exit point loops back to the entry point, there is at least one such cycle for each exit point. For a single program (or subroutine or method), P always equals 1; a simpler formula for a single subroutine is M = E − N + 2. Cyclomatic complexity may be applied to several such programs or subprograms at the same time (to all of the methods in a class, for example). In these cases, P will equal the number of programs in question, and each subprogram will appear as a disconnected subset of the graph. McCabe showed that the cyclomatic complexity of a structured program with only one entry point and one exit point is equal to the number of decision points ("if" statements or conditional loops) contained in that program plus one. This is true only for decision points counted at the lowest, machine-level instructions. Decisions involving compound predicates like those found in high-level languages like IF cond1 AND cond2 THEN ... should be counted in terms of the predicate variables involved. In this example, one should count two decision points because at machine level it is equivalent to IF cond1 THEN IF cond2 THEN .... Cyclomatic complexity may be extended to a program with multiple exit points. In this case, it is equal to π − s + 2, where π is the number of decision points in the program and s is the number of exit points. 
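The graph-based definition (edges minus nodes plus twice the connected components) can be evaluated directly from an edge list. A minimal sketch — the graph below is an invented CFG for one single-condition IF, not taken from the article:

```python
# Cyclomatic complexity from a control-flow graph: M = E - N + 2P.
def cyclomatic_complexity(edges, num_nodes, num_components=1):
    return len(edges) - num_nodes + 2 * num_components

# CFG of one single-condition IF: entry -> then-branch | else-branch -> exit.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(cyclomatic_complexity(edges, num_nodes=4))  # 2, matching the text
```

With no branches at all (a straight line of nodes), the same function returns 1, agreeing with the single-path case described above.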
Algebraic topology An even subgraph of a graph (also known as an Eulerian subgraph) is one in which every vertex is incident with an even number of edges. Such subgraphs are unions of cycles and isolated vertices. Subgraphs will be identified with their edge sets, which is equivalent to only considering those even subgraphs which contain all vertices of the full graph. The set of all even subgraphs of a graph is closed under symmetric difference, and may thus be viewed as a vector space over GF(2). This vector space is called the cycle space of the graph. The cyclomatic number of the graph is defined as the dimension of this space. Since GF(2) has two elements and the cycle space is necessarily finite, the cyclomatic number is also equal to the base-2 logarithm of the number of elements in the cycle space. A basis for the cycle space is easily constructed by first fixing a spanning forest of the graph, and then considering the cycles formed by one edge not in the forest and the path in the forest connecting the endpoints of that edge. These cycles form a basis for the cycle space. The cyclomatic number also equals the number of edges not in a maximal spanning forest of a graph. Since the number of edges in a maximal spanning forest of a graph is equal to the number of vertices minus the number of components, the formula E − N + P (edges minus vertices plus connected components) defines the cyclomatic number. Cyclomatic complexity can also be defined as a relative Betti number, the size of a relative homology group: M = b₁(G, t) = rank H₁(G, t), which is read as "the rank of the first homology group of the graph G relative to the terminal nodes t". This is a technical way of saying "the number of linearly independent paths through the flow graph from an entry to an exit", where: "linearly independent" corresponds to homology, and backtracking is not double-counted; "paths" corresponds to first homology (a path is a one-dimensional object); and "relative" means the path must begin and end at an entry (or exit) point. 
This cyclomatic complexity can also be computed via the absolute Betti number, by identifying the terminal nodes on a given component, or drawing paths connecting the exits to the entrance. For the new, augmented graph G̃ one obtains M = b₁(G̃), the first Betti number of the augmented graph. It can also be computed via homotopy. If a (connected) control-flow graph is considered a one-dimensional CW complex called X, the fundamental group of X will be π₁(X) ≅ Z^n, and the value of n + 1 is the cyclomatic complexity. The fundamental group counts how many loops there are through the graph up to homotopy, aligning as expected. Interpretation In his presentation "Software Quality Metrics to Identify Risk" for the Department of Homeland Security, Tom McCabe introduced the following categorization of cyclomatic complexity: 1 - 10: Simple procedure, little risk 11 - 20: More complex, moderate risk 21 - 50: Complex, high risk > 50: Untestable code, very high risk Applications Limiting complexity during development One of McCabe's original applications was to limit the complexity of routines during program development. He recommended that programmers should count the complexity of the modules they are developing, and split them into smaller modules whenever the cyclomatic complexity of the module exceeded 10. This practice was adopted by the NIST Structured Testing methodology, which observed that since McCabe's original publication, the figure of 10 had received substantial corroborating evidence. However, it also noted that in some circumstances it may be appropriate to relax the restriction and permit modules with a complexity as high as 15. As the methodology acknowledged that there were occasional reasons for going beyond the agreed-upon limit, it phrased its recommendation as "For each module, either limit cyclomatic complexity to [the agreed-upon limit] or provide a written explanation of why the limit was exceeded." 
Measuring the "structuredness" of a program Section VI of McCabe's 1976 paper is concerned with determining what the control-flow graphs (CFGs) of non-structured programs look like in terms of their subgraphs, which McCabe identified. (For details, see structured program theorem.) McCabe concluded that section by proposing a numerical measure of how close to the structured programming ideal a given program is, i.e. its "structuredness". McCabe called the measure he devised for this purpose essential complexity. To calculate this measure, the original CFG is iteratively reduced by identifying subgraphs that have a single-entry and a single-exit point, which are then replaced by a single node. This reduction corresponds to what a human would do if they extracted a subroutine from the larger piece of code. (Nowadays such a process would fall under the umbrella term of refactoring.) McCabe's reduction method was later called condensation in some textbooks, because it was seen as a generalization of the condensation to components used in graph theory. If a program is structured, then McCabe's reduction/condensation process reduces it to a single CFG node. In contrast, if the program is not structured, the iterative process will identify the irreducible part. The essential complexity measure defined by McCabe is simply the cyclomatic complexity of this irreducible graph, so it will be precisely 1 for all structured programs, but greater than one for non-structured programs. Implications for software testing Another application of cyclomatic complexity is in determining the number of test cases that are necessary to achieve thorough test coverage of a particular module. It is useful because of two properties of the cyclomatic complexity, , for a specific module: is an upper bound for the number of test cases that are necessary to achieve a complete branch coverage. is a lower bound for the number of paths through the control-flow graph (CFG). 
Assuming each test case takes one path, the number of cases needed to achieve path coverage is equal to the number of paths that can actually be taken. But some paths may be impossible, so although the number of paths through the CFG is clearly an upper bound on the number of test cases needed for path coverage, this latter number (of possible paths) is sometimes less than M. All three of the above numbers may be equal: branch coverage ≤ cyclomatic complexity ≤ number of paths. For example, consider a program that consists of two sequential if-then-else statements. if (c1()) f1(); else f2(); if (c2()) f3(); else f4(); In this example, two test cases are sufficient to achieve a complete branch coverage, while four are necessary for complete path coverage. The cyclomatic complexity of the program is 3, as the strongly connected graph for the program contains 9 edges, 7 nodes, and 1 connected component (9 − 7 + 1 = 3). In general, in order to fully test a module, all execution paths through the module should be exercised. This implies a module with a high complexity number requires more testing effort than a module with a lower value, since the higher complexity number indicates more pathways through the code. This also implies that a module with higher complexity is more difficult to understand, since the programmer must understand the different pathways and the results of those pathways. Unfortunately, it is not always practical to test all possible paths through a program. Considering the example above, each time an additional if-then-else statement is added, the number of possible paths grows by a factor of 2. As the program grows in this fashion, it quickly reaches the point where testing all of the paths becomes impractical. One common testing strategy, espoused for example by the NIST Structured Testing methodology, is to use the cyclomatic complexity of a module to determine the number of white-box tests that are required to obtain sufficient coverage of the module. 
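The two-sequential-if example can be checked numerically. A small sketch (function names follow the snippet in the text): enumerating the branch combinations yields the four paths, and the strongly connected graph formula yields the complexity of 3.

```python
from itertools import product

# Each execution takes one branch at c1() and one at c2().
paths = list(product(["f1", "f2"], ["f3", "f4"]))
print(len(paths))  # 4 paths needed for complete path coverage

# Strongly connected CFG: 9 edges, 7 nodes, 1 component -> M = E - N + P.
M = 9 - 7 + 1
print(M)  # cyclomatic complexity 3
# Branch coverage needs only 2 tests, e.g. (f1, f3) and (f2, f4).
```

Adding a third if-then-else would double the path count to 8 while raising M only to 4, illustrating why path coverage quickly becomes impractical while complexity-driven testing stays tractable.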
In almost all cases, according to such a methodology, a module should have at least as many tests as its cyclomatic complexity. In most cases, this number of tests is adequate to exercise all the relevant paths of the function. As an example of a function that requires more than mere branch coverage to test accurately, reconsider the above function, but assume that to avoid a bug occurring, any code that calls either f1() or f3() must also call the other. Assuming that the results of c1() and c2() are independent, the function as presented above contains a bug. Branch coverage allows the method to be tested with just two tests, such as the following test cases: c1() returns true and c2() returns true; c1() returns false and c2() returns false. Neither of these cases exposes the bug. If, however, we use cyclomatic complexity to indicate the number of tests we require, the number increases to 3. We must therefore test one of the following paths: c1() returns true and c2() returns false; c1() returns false and c2() returns true. Either of these tests will expose the bug. Correlation to number of defects Multiple studies have investigated the correlation between McCabe's cyclomatic complexity number and the frequency of defects occurring in a function or method. Some studies find a positive correlation between cyclomatic complexity and defects: functions and methods that have the highest complexity tend to also contain the most defects. However, the correlation between cyclomatic complexity and program size (typically measured in lines of code) has been demonstrated many times. Les Hatton has claimed that complexity has the same predictive ability as lines of code. Studies that controlled for program size (i.e., comparing modules that have different complexities but similar size) are generally less conclusive, with many finding no significant correlation, while others do find correlation. 
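The pairing bug above can be demonstrated concretely. A hypothetical sketch (the functions and invariant are stand-ins for the text's f1()…f4() example): the two branch-coverage tests pass, while a third test, suggested by the complexity of 3, exposes the violated invariant that f1() and f3() must be called together.

```python
calls = []
def f1(): calls.append("f1")
def f2(): calls.append("f2")
def f3(): calls.append("f3")
def f4(): calls.append("f4")

def module(c1, c2):
    """Runs the two sequential if-then-else statements and checks the
    invariant that f1 and f3 are always called together."""
    calls.clear()
    f1() if c1 else f2()
    f3() if c2 else f4()
    return ("f1" in calls) == ("f3" in calls)

# The two branch-coverage tests do not expose the bug:
print(module(True, True), module(False, False))  # True True
# A third test, suggested by cyclomatic complexity 3, exposes it:
print(module(True, False))  # False
```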
Some researchers question the validity of the methods used by the studies finding no correlation. Although this relation likely exists, it is not easily used in practice. Since program size is not a controllable feature of commercial software, the usefulness of McCabe's number has been questioned. The essence of this observation is that larger programs tend to be more complex and to have more defects. Reducing the cyclomatic complexity of code is not proven to reduce the number of errors or bugs in that code. International safety standards like ISO 26262, however, mandate coding guidelines that enforce low code complexity. See also Programming complexity Complexity trap Computer program Computer programming Control flow Decision-to-decision path Design predicates Essential complexity (numerical measure of "structuredness") Halstead complexity measures Software engineering Software testing Static program analysis Maintainability Notes References External links Generating cyclomatic complexity metrics with Polyspace The role of empiricism in improving the reliability of future software McCabe's Cyclomatic Complexity and Why We Don't Use It Software metrics
Cyclomatic complexity
Mathematics,Engineering
3,060
12,274,852
https://en.wikipedia.org/wiki/Transistor%20array
Transistor arrays consist of two or more transistors on a common substrate. Unlike more highly integrated circuits, the transistors can be used individually like discrete transistors. That is, the transistors in the array are not connected to each other to implement a specific function. Transistor arrays can consist of bipolar junction transistors or field-effect transistors. There are three main motivations for combining several transistors on one chip and in one package: to ensure closely matching parameters between the transistors (which is almost guaranteed when the transistors on one chip are manufactured simultaneously and subject to identical manufacturing process variations) to ensure a closely matching thermal drift of parameters between the transistors (which is achieved by having the transistors on a common substrate, and in extremely close proximity) to save circuit board space and to reduce board production cost (only one component needs to be populated instead of several) The matching parameters and thermal drift are crucial for various analogue circuits such as differential amplifiers, current mirrors, and log amplifiers. The reduction in circuit board area is particularly significant for digital circuits where several switching transistors are combined in one package. Often the transistors here are Darlington pairs with a common emitter and flyback diodes, e.g. ULN2003A. While this stretches the above definition of a transistor array somewhat, the term is still commonly applied. A peculiarity of transistor arrays is that the substrate is often available as a separate pin (labelled substrate, bulk, or ground). Care is required when connecting the substrate in order to maintain isolation between the transistors in the array as p–n junction isolation is usually used. For instance, for an array of NPN transistors, the substrate must be connected to the most negative voltage in the circuit. References Integrated circuits
Transistor array
Technology,Engineering
381
2,341,744
https://en.wikipedia.org/wiki/Polystyrene%20sulfonate
Polystyrene sulfonates are a group of medications used to treat high blood potassium. Effects generally take hours to days. They are also used to remove potassium, calcium, and sodium from solutions in technical applications. Common side effects include loss of appetite, gastrointestinal upset, constipation, and low blood calcium. These polymers are derived from polystyrene by the addition of sulfonate functional groups. Sodium polystyrene sulfonate was approved for medical use in the United States in 1958. A polystyrene sulfonate was developed in the 2000s to treat Clostridioides difficile-associated diarrhea under the name Tolevamer, but it was never marketed. Medical uses Polystyrene sulfonate is usually supplied in either the sodium or calcium form. It is used as a potassium binder in acute and chronic kidney disease for people with hyperkalemia (an abnormally high blood serum potassium level). However, it is unclear if it is beneficial, and there is concern about possible side effects when it is combined with sorbitol. Polystyrene sulfonates are given by mouth with a meal or rectally by retention enema. Side effects Intestinal disturbances are common, including loss of appetite, nausea, vomiting, and constipation. In rare cases, it has been associated with colonic necrosis. Changes in electrolyte blood levels such as hypomagnesemia, hypocalcemia, and hypokalemia may occur. Polystyrene sulfonates should not be used in people with obstructive bowel disease or in newborns with reduced gut motility. Intestinal injury A total of 58 cases of intestinal injury, including necrosis of the colon, had been reported with polystyrene sulfonate as of 2013. Most cases were reported when it was used in combination with sorbitol, though other cases occurred when it was used alone. Interactions Polystyrene sulfonates can bind to various drugs within the digestive tract and thus lower their absorption and effectiveness. Common examples include lithium, thyroxine, and digitalis. 
In September 2017, the FDA recommended separating the dosing of polystyrene sulfonate from any other oral medications by at least three hours to avoid any potential interactions. Mechanism of action Hyperkalemia Polystyrene sulfonates release sodium or calcium ions in the stomach in exchange for hydrogen ions. When the resin reaches the large intestine, the hydrogen ions are exchanged for free potassium ions, and the resin is then eliminated in the feces. The net effect is lowering the amount of potassium available for absorption into the blood and increasing the amount that is excreted via the feces. The effect is a reduction of potassium levels in the body, at a capacity of 1 mEq of potassium exchanged per 1 g of resin. Production and chemical structure Polystyrene sulfonic acid, the acid whose salts are the polystyrene sulfonates, has the idealized formula (CH2CHC6H4SO3H)n. The material is prepared by sulfonation of polystyrene: (CH2CHC6H5)n + n SO3 → (CH2CHC6H4SO3H)n Several methods exist for this conversion, which can lead to varying degrees of sulfonation. Usually the polystyrene is crosslinked, which keeps the polymer from dissolving. Since the sulfonic acid group (SO3H) is strongly acidic, this polymer neutralizes bases. In this way, various salts of the polymer can be prepared, leading to sodium, calcium, and other salts: (CH2CHC6H4SO3H)n + n NaOH → (CH2CHC6H4SO3Na)n + n H2O These ion-containing polymers are called ionomers. Alternative sulfonation methods Double substitutions of the phenyl rings are known to occur, even with conversions well below 100%. Crosslinking reactions are also found, where condensation of two sulfonic acid groups yields a sulfonyl crosslink. On the other hand, the use of milder conditions such as acetyl sulfate leads to incomplete sulfonation. 
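The stated exchange capacity of 1 mEq of potassium per 1 g of resin lends itself to a quick worked calculation. A hedged arithmetic sketch — the 15 g dose below is a hypothetical figure chosen for illustration, not a dosing recommendation from the text:

```python
# Potassium bound per dose at the stated exchange capacity of 1 mEq/g.
K_MOLAR_MASS_MG = 39.098      # mg of potassium per mmol
CAPACITY_MEQ_PER_G = 1.0      # exchange capacity from the text
dose_g = 15.0                 # hypothetical dose, for illustration only

# K+ is monovalent, so 1 mEq = 1 mmol.
meq_bound = dose_g * CAPACITY_MEQ_PER_G
mg_bound = meq_bound * K_MOLAR_MASS_MG
print(round(mg_bound, 1))  # 586.5 mg of potassium bound per 15 g dose
```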
Recently, the atom transfer radical polymerization (ATRP) of protected styrene sulfonates has been reported, leading to well-defined linear polymers, as well as more complicated molecular architectures. Chemical uses Polystyrene sulfonates are useful because of their ion exchange properties. Linear ionic polymers are generally water-soluble, whereas cross-linked materials (called resins) do not dissolve in water. These polymers are classified as polysalts and ionomers. Water softening Water softening is achieved by percolating hard water through a bed of the sodium form of cross-linked polystyrene sulfonate. The hard ions such as calcium (Ca2+) and magnesium (Mg2+) adhere to the sulfonate groups, displacing sodium ions. The resulting solution of sodium ions is softened. Other uses Sodium polystyrene sulfonate is used as a superplasticizer in cement, as a dye-improving agent for cotton, and as proton exchange membranes in fuel cell applications. In its acid form, the resin is used as a solid acid catalyst in organic synthesis, most commonly under the tradename Amberlyst. References Benzenesulfonates Nephrology procedures Organic polymers Plastics Polyelectrolytes Chelating agents used as drugs Acid catalysts Vinyl polymers Sanofi
Polystyrene sulfonate
Physics,Chemistry
1,174
28,327,642
https://en.wikipedia.org/wiki/VTD-XML
Virtual Token Descriptor for eXtensible Markup Language (VTD-XML) refers to a collection of cross-platform XML processing technologies centered on a non-extractive XML, "document-centric" parsing technique called Virtual Token Descriptor (VTD). Depending on the perspective, VTD-XML can be viewed as one of the following: A "Document-Centric" XML parser A native XML indexer or a file format that uses binary data to enhance the text XML An incremental XML content modifier An XML slicer/splitter/assembler An XML editor/eraser A way to port XML processing on chip A non-blocking, stateless XPath evaluator VTD-XML is developed by XimpleWare and dual-licensed under GPL and proprietary license. It was originally written in Java, but is now available in C, C++ and C#. Basic concept Non-extractive, document-centric parsing Traditionally, a lexical analyzer represents tokens (the small units of indivisible character values) as discrete string objects. This approach is designated extractive parsing. In contrast, non-extractive tokenization mandates that one keeps the source text intact, and uses offsets and lengths to describe those tokens. Virtual token descriptor Virtual Token Descriptor (VTD) applies the concept of non-extractive, document-centric parsing to XML processing. A VTD record uses a 64-bit integer to encode the offset, length, token type and nesting depth of a token in an XML document. Because all VTD records are 64 bits in length, they can be stored efficiently and managed as an array. Location cache Location Caches (LC) build on VTD records to provide efficient random access. Organized as tables, with one table per nesting depth level, LCs contain entries modeling an XML document's element hierarchy. An LC entry is a 64-bit integer encoding a pair of 32-bit values. The upper 32 bits identify the VTD record for the corresponding element. The lower 32 bits identify that element's first child in the LC at the next lower nesting level. 
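The 64-bit packing described above can be illustrated with ordinary bit operations. A sketch with invented field widths — the real VTD-XML bit layout is not reproduced here, only the idea of encoding offset, length, token type, and depth in one integer:

```python
# Illustrative field widths (assumptions, not the actual VTD-XML layout).
OFFSET_BITS, LENGTH_BITS, TYPE_BITS, DEPTH_BITS = 30, 20, 4, 8

def pack(offset, length, token_type, depth):
    """Pack a token record into a single 64-bit integer."""
    return (offset
            | (length << OFFSET_BITS)
            | (token_type << (OFFSET_BITS + LENGTH_BITS))
            | (depth << (OFFSET_BITS + LENGTH_BITS + TYPE_BITS)))

def unpack(record):
    """Recover (offset, length, token_type, depth) from a packed record."""
    offset = record & ((1 << OFFSET_BITS) - 1)
    length = (record >> OFFSET_BITS) & ((1 << LENGTH_BITS) - 1)
    token_type = (record >> (OFFSET_BITS + LENGTH_BITS)) & ((1 << TYPE_BITS) - 1)
    depth = (record >> (OFFSET_BITS + LENGTH_BITS + TYPE_BITS)) & ((1 << DEPTH_BITS) - 1)
    return offset, length, token_type, depth

rec = pack(1024, 57, 3, 2)
print(unpack(rec))  # (1024, 57, 3, 2)
```

Because every record is a fixed-width integer, an entire document's tokens can be stored in a flat array, which is the property the LC tables build on.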
Benefits Overview Virtually all the core benefits of VTD-XML are inherent to non-extractive, document-centric parsing, which provides these characteristics: The source XML text is kept intact in memory without decoding. The internal representation of VTD-XML is inherently persistent. It obviates object-oriented modeling of the hierarchical representation, as it relies entirely on primitive data types (e.g., 64-bit integers) to represent the XML hierarchy, thus reducing object creation cost to nearly zero. Combining those characteristics permits thinking of XML purely as syntax (bits, bytes, offsets, lengths, fragments, namespace-compensated fragments, and document composition) instead of the serialization/deserialization of objects. This is a powerful way to think about XML/SOA applications. Conformance VTD-XML conforms strictly to XML 1.0 (except the DTD part) and XML Namespace 1.0. It essentially conforms to the XPath 1.0 spec (with some subtle differences in terms of the underlying data model), with extensions for XPath 2.0 built-in functions. Simplicity As parser When used in parsing mode, VTD-XML is a general purpose, high performance XML parser which compares favorably with others: VTD-XML typically outperforms SAX (with NULL content handler) while still providing full random access and built-in XPath support. VTD-XML typically consumes 1.3-1.5 times the XML document's size in memory, which is about 1/5 the memory usage of DOM. Applications written in VTD-XML are usually much shorter and cleaner than their DOM or SAX versions. As indexer Because of the inherent persistence of VTD-XML, developers can write the internal representation of a parsed XML document to disk and later reload it to avoid repetitive parsing. To this end, XimpleWare has introduced VTD+XML as a binary packaging format combining VTD, LC and the XML text. 
It can typically be viewed in one of the following two ways:

A native XML index that completely eliminates the parsing cost while retaining all the benefits of XML: it is a file format that is human-readable and backward compatible with XML.
A binary XML format that uses binary data to enhance the processing of the XML text.

XML content modifier

Because VTD-XML keeps the XML text intact without decoding, when an application intends to modify the content of XML it only needs to modify the portions most relevant to the changes. This is in stark contrast with DOM, SAX, or StAX parsing, which incur the cost of parsing and re-serialization no matter how small the changes are. Since VTDs refer to document elements by their offsets, changes to the length of an element occurring earlier in a document require adjustments to the VTDs of all later elements. However, those adjustments are integer additions, albeit to many integers in multiple tables, so they are quick.

XML slicer/splitter/assembler

An application based on VTD-XML can also use offsets and lengths to address tokens or element fragments. This allows XML documents to be manipulated like arrays of bytes. As a slicer, VTD-XML can "slice" off a token or an element fragment from an XML document, then insert it back at another location in the same document, or into a different document. As a splitter, VTD-XML can split the sub-elements of an XML document and dump each into a separate XML document. As an assembler, VTD-XML can "cut" chunks out of multiple XML documents and assemble them into a new XML document.

XML editor/eraser

Used as an editor/eraser, VTD-XML can directly edit/erase the underlying byte content of the XML text, provided that the token length is wider than the intended new content. An immediate benefit of this approach is that the application can immediately reuse the original VTD and LC.
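The slicer/assembler operations described earlier in this section reduce to plain byte-range copies. The following self-contained sketch uses only JDK calls, not the VTD-XML API; the document and element names are made up, and the offset is found with indexOf for illustration, whereas VTD-XML would supply it from its token records.

```java
import java.nio.charset.StandardCharsets;

// Sketch: "cutting" a fragment out of one XML document and assembling it
// into another purely by offset and length, never parsing into objects.
public class XmlSliceSketch {
    // Assemble head + doc[off..off+len) + tail into a new document.
    static byte[] assemble(byte[] head, byte[] doc, int off, int len, byte[] tail) {
        byte[] out = new byte[head.length + len + tail.length];
        System.arraycopy(head, 0, out, 0, head.length);
        System.arraycopy(doc, off, out, head.length, len);
        System.arraycopy(tail, 0, out, head.length + len, tail.length);
        return out;
    }

    public static void main(String[] args) {
        byte[] doc = "<po><item id=\"1\"/><item id=\"2\"/></po>"
                .getBytes(StandardCharsets.UTF_8);
        String fragment = "<item id=\"2\"/>";
        int off = new String(doc, StandardCharsets.UTF_8).indexOf(fragment);
        byte[] out = assemble("<order>".getBytes(StandardCharsets.UTF_8),
                doc, off, fragment.length(),
                "</order>".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(out, StandardCharsets.UTF_8));
        // prints <order><item id="2"/></order>
    }
}
```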
In contrast, when using VTD-XML to incrementally update an XML document, an application needs to reparse the updated document before the application can process it further. An editor can be made smart enough to track the location of each token, permitting new, longer tokens to replace existing, shorter tokens by merely addressing the new token in separate memory outside that used to store the original document. Likewise, when reordering the document, element text does not need to be copied; only the LCs need to be updated. When a complete, contiguous XML document is needed, such as when saving it, the disparate parts can be reassembled into a new, contiguous document.

Other benefits

VTD-XML also pioneers the non-blocking, stateless XPath evaluation approach.

Weaknesses

VTD-XML also has noticeable shortcomings and weaknesses:

As an XML parser, it does not support external entities declared in the DTD.
As a file format, it increases the document size by about 30% to 50%.
As an API, it is not compatible with DOM, SAX or StAX.
It is difficult to support certain validation techniques employed by DTD and XML Schema (e.g., default attributes and elements) that require modifications to the XML instances being parsed.

Areas of application

General-purpose replacement for DOM or SAX

Because of VTD-XML's performance and memory advantages, it covers a larger portion of XML use cases than either DOM or SAX. Compared to DOM, VTD-XML processes bigger (3x-5x) XML documents for the same amount of physical memory at about 3 to 10 times the performance. Compared to SAX, VTD-XML provides random access and XPath support and outperforms SAX by at least 2x.

XPath over large XML documents

The extended edition of VTD-XML, combined with a 64-bit JVM, makes possible XPath-based XML processing over huge XML documents (up to 256 GB in size).
For SOA/WS/XML security

The combination of VTD-XML's high performance and incremental-update capability makes it well suited to achieving the desired level of quality of service for SOA/WS/XML security applications.

For SOA/WS/XML intermediaries

VTD-XML is well suited for SOA intermediary applications such as XML routers/switches/gateways, Enterprise Service Buses, and service aggregation points. All those applications perform basic "store and forward" operations, for which retaining the original XML is critical to minimizing latency. VTD-XML's incremental-update capability also contributes significantly to forwarding performance. VTD-XML's random-access capability lends itself well to the XPath-based XML routing/switching/filtering common in AJAX and SOA deployments.

Intelligent SOA/WS/XML load balancing and offloading

When an XML document travels through several middle-tier SOA components, the first message stop, after finishing its inspection of the XML document, can choose to send the VTD+XML file format to the downstream components to avoid repetitive parsing, thus improving throughput. By the same token, an intelligent SOA load balancer can choose to generate VTD+XML for incoming/outgoing SOAP messages to offload XML parsing from the application servers that receive those messages.

XML persistence data store

When viewed from the perspective of native XML persistence, VTD-XML can be used as a human-readable, easy-to-use, general-purpose XML index. XML documents stored this way can be loaded into memory to be queried, updated, or edited without the overhead of parsing/re-serialization.

Schemaless XML data binding

VTD-XML's combination of high performance, low memory usage, and efficient XPath evaluation makes possible a new XML data binding approach based entirely on XPath. This approach's biggest benefits are that it no longer requires an XML schema, avoids needless object creation, and takes advantage of XML's inherent loose encoding.
It is worth noting that the data binding discussed in the article mentioned above needs to be implemented by the application: VTD-XML itself only offers accessors. In this regard VTD-XML is not a data binding solution itself (unlike JiBX, JAXB, or XMLBeans), although it offers extraction functionality for data binding packages, much like other XML parsers (DOM, SAX, StAX).

Essential classes

As of version 2.11, the Java and C# versions of VTD-XML consist of the following classes:

VTDGen (VTD Generator) is the class that encapsulates the main parsing, index loading and index writing functions.
VTDNav (VTD Navigator) is the class that (1) encapsulates XML, VTD, and hierarchical info, (2) contains various navigation methods, (3) performs various comparisons between VTD records and strings, and (4) converts VTD records to primitive data types.
AutoPilot is a class containing functions that perform node-level iteration and XPath.
XMLModifier is a class that offers incremental update capabilities, such as delete, insert and update.

The extended VTD-XML consists of the following classes:

VTDGenHuge (Extended VTD Generator) encapsulates the main parsing.
XMLBuffer performs in-memory loading of XML documents.
XMLMemMappedBuffer performs memory-mapped loading of XML documents.
VTDNavHuge (Extended VTD Navigator) (1) encapsulates XML, Extended VTD, and hierarchical info, (2) contains various navigation methods, (3) performs various comparisons between VTD records and strings, and (4) converts VTD records to primitive data types.
AutoPilotHuge performs node-level iteration and XPath.

Code sample

/* In this Java program, we demonstrate how to use XMLModifier to incrementally
 * update a simple XML purchase order. We also use VTDGen's parseFile to
 * simplify programming.
 */
import com.ximpleware.*;

public class Update {
    public static void main(String[] argv) throws Exception {
        // parse the file and get the navigator
        VTDGen vg = new VTDGen();
        if (vg.parseFile("oldpo.xml", true)) {
            VTDNav vn = vg.getNav();
            AutoPilot ap = new AutoPilot(vn);
            XMLModifier xm = new XMLModifier(vn);

            // replace every matching item with <something/>
            ap.selectXPath("/purchaseOrder/items/item[@partNum='872-AA']");
            int i = -1;
            while ((i = ap.evalXPath()) != -1) {
                xm.remove();
                xm.insertBeforeElement("<something/>\n");
            }

            // raise every price below 40 to 200
            ap.selectXPath("/purchaseOrder/items/item/USPrice[.<40]/text()");
            while ((i = ap.evalXPath()) != -1) {
                xm.updateToken(i, "200");
            }

            xm.output("newpo.xml");
        }
    }
}
https://en.wikipedia.org/wiki/CloudSight
CloudSight, Inc. is a Los Angeles, California-based technology company that specializes in captioning and understanding images using AI.

History

CloudSight was founded in 2012 by Dominik Mazur and Bradford Folkens. It was previously known as Image Searcher, Inc. and then CamFind, Inc., respectively. In 2016, the company was officially rebranded as CloudSight, Inc. As of August 2022, CloudSight has 15+ granted patents for its technology and has recognized over 1 billion images.

Products

TapTapSee

On October 11, 2012, CloudSight released TapTapSee, its first mobile application, on the App Store. TapTapSee is a mobile camera application designed specifically for blind and visually impaired iOS and Android users. The application utilizes the device's camera and VoiceOver functions to photograph objects, identify them and communicate this information to the user. TapTapSee was the 2014 recipient of the Access Award from the American Foundation for the Blind. In March 2013, TapTapSee was named App of the Month by the Royal National Institute for the Blind. At the end of 2013, TapTapSee was elected into the AppleVis iOS Hall of Fame.

CamFind

On April 7, 2013, CloudSight released its second mobile application, CamFind. The application is a visual search engine that utilizes image recognition to photograph, identify, and provide information on any object, at any angle. Its image recognition capabilities make use of the CloudSight API. CamFind surpassed 1,000,000 downloads within the first seven months after its release into the Apple App Store. The mobile application is also available in the Google Play Store, and between the two platforms it has received a combined 11,000,000+ downloads as of 2022. In February 2015, CamFind was released on Google Glass via MyGlass.

CloudSight API

In September 2013, CloudSight released its CloudSight API to the general public.
The CloudSight API employs deep learning methods.

Google Cloud Marketplace

On June 2, 2020, CloudSight announced the availability of its neural network products on Google Cloud Marketplace as part of a collaboration with Google Cloud.
https://en.wikipedia.org/wiki/Mesentoblast
Mesentoblasts, also called 4d cells, are the cells from which the mesoderm originates. Mesentoblasts are found in the blastopore area between the endoderm and the ectoderm. In protostomes the embryos are mosaic, so removal of the mesentoblast will result in failure of formation of the mesoderm and of other structures related to the mesoderm, which in turn gives abnormal embryos. The mesentoblast migrates to the blastocoel, where it proliferates to form a mass of cells that becomes the mesoderm.
https://en.wikipedia.org/wiki/Length%20of%20a%20module
In algebra, the length of a module over a ring is a generalization of the dimension of a vector space which measures its size. It is defined to be the length of the longest chain of submodules. For vector spaces (modules over a field), the length equals the dimension. If A is an algebra over a field k, the length of a module is at most its dimension as a k-vector space.

In commutative algebra and algebraic geometry, a module over a Noetherian commutative ring can have finite length only when the module has Krull dimension zero. Modules of finite length are finitely generated modules, but most finitely generated modules have infinite length. Modules of finite length are both Artinian and Noetherian, and are fundamental to the theory of Artinian rings.

The degree of an algebraic variety inside an affine or projective space is the length of the coordinate ring of the zero-dimensional intersection of the variety with a generic linear subspace of complementary dimension. More generally, the intersection multiplicity of several varieties is defined as the length of the coordinate ring of the zero-dimensional intersection.

Definition

Length of a module

Let M be a (left or right) module over some ring R. Given a chain of submodules of M of the form

M_0 ⊊ M_1 ⊊ ... ⊊ M_n,

one says that n is the length of the chain. The length of M is the largest length of any of its chains. If no such largest length exists, we say that M has infinite length. Clearly, if the length of a chain equals the length of the module, one has M_0 = 0 and M_n = M.

Length of a ring

The length of a ring R is the length of the longest chain of ideals; that is, the length of R considered as a module over itself by left multiplication. By contrast, the Krull dimension of R is the length of the longest chain of prime ideals.

Properties

Finite length and finite modules

If an R-module M has finite length, then it is finitely generated. If R is a field, then the converse is also true.
Relation to Artinian and Noetherian modules

An R-module M has finite length if and only if it is both a Noetherian module and an Artinian module (cf. Hopkins' theorem). Since all Artinian rings are Noetherian, this implies that a ring has finite length if and only if it is Artinian.

Behavior with respect to short exact sequences

Suppose

0 → L → M → N → 0

is a short exact sequence of R-modules. Then M has finite length if and only if L and N both have finite length, and in that case

length(M) = length(L) + length(N).

In particular, this implies the following two properties:

The direct sum of two modules of finite length has finite length.
A submodule of a module of finite length has finite length, and its length is less than or equal to that of its parent module.

Jordan–Hölder theorem

A composition series of the module M is a chain of the form

0 = N_0 ⊊ N_1 ⊊ ... ⊊ N_n = M

such that each quotient N_{i+1}/N_i is simple. A module M has finite length if and only if it has a (finite) composition series, and the length of every such composition series is equal to the length of M.

Examples

Finite dimensional vector spaces

Any finite dimensional vector space V over a field has finite length. Given a basis (e_1, ..., e_n), there is the chain

0 ⊊ ⟨e_1⟩ ⊊ ⟨e_1, e_2⟩ ⊊ ... ⊊ ⟨e_1, ..., e_n⟩ = V,

which is of length n. It is maximal because, given any chain, the dimension must increase by at least 1 at each inclusion. Therefore, the length and the dimension coincide.

Artinian modules

Over a base ring, Artinian modules form a class of examples of finite modules. In fact, these examples serve as the basic tools for defining the order of vanishing in intersection theory.

Zero module

The zero module is the only one with length 0.

Simple modules

Modules with length 1 are precisely the simple modules.

Artinian modules over Z

The length of the cyclic group Z/nZ (viewed as a module over the integers Z) is equal to the number of prime factors of n, with multiple prime factors counted multiple times. This follows from the fact that the submodules of Z/nZ are in one-to-one correspondence with the positive divisors of n, this correspondence resulting itself from the fact that Z is a principal ideal ring.
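As a worked instance of the Jordan–Hölder statement and of the cyclic-group example above (a standard computation, supplied here for illustration):

```latex
% Z/12Z has 12 = 2^2 * 3, so its length as a Z-module is 3.
% One composition series:
0 \;\subset\; 6\mathbb{Z}/12\mathbb{Z} \;\subset\; 2\mathbb{Z}/12\mathbb{Z} \;\subset\; \mathbb{Z}/12\mathbb{Z},
% with simple quotients Z/2Z, Z/3Z, Z/2Z respectively: three composition
% factors, matching the three prime factors of 12 counted with multiplicity.
```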
Use in multiplicity theory

For the needs of intersection theory, Jean-Pierre Serre introduced a general notion of the multiplicity of a point, as the length of an Artinian local ring related to this point. The first application was a complete definition of the intersection multiplicity, and, in particular, a statement of Bézout's theorem, which asserts that the sum of the multiplicities of the intersection points of algebraic hypersurfaces in an n-dimensional projective space is either infinite or is exactly the product of the degrees of the hypersurfaces. This definition of multiplicity is quite general, and contains as special cases most previous notions of algebraic multiplicity.

Order of vanishing of zeros and poles

A special case of this general definition of multiplicity is the order of vanishing of a non-zero algebraic function on an algebraic variety. Given an algebraic variety X and a subvariety Z of codimension 1, the order of vanishing of a polynomial f is defined as

ord_Z(f) = length_{O_{X,Z}}(O_{X,Z}/(f)),

where O_{X,Z} is the local ring defined by the stalk of the structure sheaf along the subvariety Z, or, equivalently, the stalk of the structure sheaf at the generic point of Z. If X is an affine variety and Z is defined by a vanishing locus, then there is a corresponding isomorphism of this local ring with a localization of the coordinate ring. This idea can then be extended to rational functions f/g on the variety, where the order is defined as

ord_Z(f/g) = ord_Z(f) − ord_Z(g),

which is similar to defining the order of zeros and poles in complex analysis.

Example on a projective variety

For example, consider a projective surface defined by a polynomial; the order of vanishing of a rational function along a curve on it is computed from the lengths of the corresponding quotient modules: whichever of the numerator or denominator is a unit in the local ring contributes nothing, while the other contributes a length that can be found using a maximal proper chain of submodules.

Zeros and poles of an analytic function

The order of vanishing is a generalization of the order of zeros and poles for meromorphic functions in complex analysis.
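The definition above can be checked on a standard example (supplied here for illustration, not taken from the article's stripped formulas): the order of vanishing of x²y along the hyperplane x = 0 in the affine plane.

```latex
% Z = \{x = 0\} \subset X = \mathbb{A}^2, local ring O = k[x,y]_{(x)}.
\operatorname{ord}_Z(x^2 y)
  = \operatorname{length}_{\mathcal{O}}\bigl(\mathcal{O}/(x^2 y)\bigr)
  = \operatorname{length}_{\mathcal{O}}\bigl(\mathcal{O}/(x^2)\bigr)
  = 2,
% since y is a unit in O, and O/(x^2) has the maximal chain
0 \;\subset\; (x)/(x^2) \;\subset\; \mathcal{O}/(x^2).
% Likewise ord_Z(x^2 y / x^5) = 2 - 5 = -3: a pole of order 3 along Z.
```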
For example, consider a meromorphic function with zeros of order 2 and 1 at two points and a pole at a third. This kind of information can be encoded using the length of modules: at a zero, the associated local ring is the ring of germs at that point, and the quotient module by the function is, after discarding unit factors, isomorphic to a quotient by a power of the local coordinate; its length, realized by a maximal chain of submodules, is the order of the zero. More generally, using the Weierstrass factorization theorem, a meromorphic function factors as a (possibly infinite) product of linear polynomials in both the numerator and denominator.

See also

Hilbert–Poincaré series
Weil divisor
Chow ring
Intersection theory
Weierstrass factorization theorem
Serre's multiplicity conjectures
Hilbert scheme (can be used to study modules on a scheme with a fixed length)
Krull–Schmidt theorem

References

Steven H. Weintraub, Representation Theory of Finite Groups, AMS (2003)
Allen Altman, Steven Kleiman, A Term of Commutative Algebra
The Stacks Project
https://en.wikipedia.org/wiki/Desktop%20Developers%27%20Conference
The Desktop Developers' Conference was a Linux conference where developers discussed and worked on X11, Linux desktops such as GNOME and KDE, FreeDesktop.org projects, and desktop software such as web browsers, office suites, and groupware. The conference took place in Ottawa, Ontario, Canada, each year just before the Linux Symposium, in 2004, 2005, and 2006. It has not been held since 2006.
https://en.wikipedia.org/wiki/Multislice
The multislice algorithm is a method for the simulation of the elastic scattering of an electron beam with matter, including all multiple scattering effects. The method is reviewed in the book by John M. Cowley and in the work by Ishizuka. The algorithm is used in the simulation of high resolution transmission electron microscopy (HREM) micrographs, and serves as a useful tool for analyzing experimental images. This article describes some relevant background information, the theoretical basis of the technique, the approximations used, and several software packages that implement this technique. Some of the advantages and limitations of the technique and important considerations that need to be taken into account are also described.

Background

The multislice method has found wide application in electron microscopy and crystallography. The mapping from a crystal structure to its image or electron diffraction pattern is relatively well understood and documented. However, the reverse mapping from electron micrograph images to the crystal structure is generally more complicated. The fact that the images are two-dimensional projections of a three-dimensional crystal structure makes it tedious to compare these projections to all plausible crystal structures. Hence, the use of numerical techniques to simulate results for different crystal structures is integral to the field of electron microscopy and crystallography. Several software packages exist to simulate electron micrographs. There are two widely used simulation techniques in the literature: the Bloch wave method, derived from Hans Bethe's original theoretical treatment, and the multislice method. This article focuses on the multislice method for the simulation of dynamical diffraction, including multiple elastic scattering effects.
Most of the packages that exist implement the multislice algorithm along with Fourier analysis to incorporate electron lens aberration effects, to determine the electron microscope image, and to address aspects such as phase contrast and diffraction contrast. For electron microscope samples in the form of a thin crystalline slab in the transmission geometry, the aim of these software packages is to provide a map of the crystal potential; however, this inversion process is greatly complicated by the presence of multiple elastic scattering.

The first description of what is now known as multislice theory was given in the classic paper by Cowley and Moodie. In this work, the authors describe the scattering of electrons using a physical optics approach without invoking quantum mechanical arguments. Many other derivations of these iterative equations have since been given using alternative methods, such as Green's functions, differential equations, scattering matrices or path integral methods; see for instance the book by Lianmao Peng, Sergei Dudarev and Michael Whelan.

A summary of the development of a computer algorithm from the multislice theory of Cowley and Moodie for numerical computation was reported by Goodman and Moodie. They also discussed in detail the relationship of the multislice method to the other formulations. Specifically, using Zassenhaus's theorem, this paper gives the mathematical path from multislice to:

1. the Schrödinger equation;
2. Darwin's differential equations, widely used for diffraction-contrast transmission electron microscopy (TEM) image simulations (the Howie–Whelan equations);
3. Sturkey's scattering matrix method;
4. the free-space propagation case;
5. the phase grating approximation;
6. a new "thick-phase grating" approximation, which has never been used;
7. Moodie's polynomial expression for multiple scattering;
8. the Feynman path-integral formulation; and
9. the relationship of multislice to the Born series.
The relationship between the algorithms is summarized in Section 5.11 of Spence (2013) (see Figure 5.9).

Theory

The form of the multislice algorithm presented here has been adapted from Peng, Dudarev and Whelan (2003). The multislice algorithm is an approach to solving the Schrödinger equation. In 1957, Cowley and Moodie showed that the Schrödinger equation can be solved analytically to evaluate the amplitudes of diffracted beams. Subsequently, the effects of dynamical diffraction can be calculated, and the resulting simulated image will exhibit good similarities with the actual image taken from a microscope under dynamical conditions. Furthermore, the multislice algorithm does not make any assumption about the periodicity of the structure and can thus be used to simulate HREM images of aperiodic systems as well.

The following outlines the mathematical formulation of the multislice algorithm. The Schrödinger equation can also be represented in the form of incident and scattered waves, in terms of a Green's function that represents the amplitude of the electron wave function at one point due to a source at another point, and can then be written for an incident plane wave. We choose the coordinate axes in such a way that the incident beam hits the sample at (0,0,0) in the z-direction, and we consider a wave function with a modulation function for the amplitude; the Schrödinger equation then becomes an equation for the modulation function. Making substitutions with respect to the chosen coordinate system introduces the wavelength of the electrons, which depends on their energy, and the interaction constant. So far we have set up the mathematical formulation of wave mechanics without addressing the scattering in the material. Further, we need to address the transverse spread, which is done in terms of the Fresnel propagation function.
The thickness of each slice over which the iteration is performed is usually small, and as a result the potential field within a slice can be approximated as constant. Subsequently, the modulation function in the next slice can be represented as the convolution of the Fresnel propagation function with the product of the transmission function of the slice and the modulation function in the current slice. Hence, iterative application of this procedure provides a full interpretation of the sample in context. Further, it should be reiterated that no assumptions are made on the periodicity of the sample, apart from assuming that the potential is uniform within each slice. As a result, this method in principle works for any system. However, for aperiodic systems in which the potential varies rapidly along the beam direction, the slice thickness has to be significantly smaller, and hence results in higher computational expense.

Practical considerations

The basic premise is to calculate diffraction from each layer of atoms using fast Fourier transforms (FFTs), multiplying each by a phase grating term. The wave is then multiplied by a propagator, inverse Fourier transformed, multiplied by a phase grating term yet again, and the process is repeated. The use of FFTs allows a significant computational advantage over the Bloch wave method in particular, since the FFT algorithm involves O(N log N) steps, compared with the diagonalization problem of the Bloch wave solution, which scales as O(N³), where N is the number of atoms in the system. (See Table 1 for a comparison of computational time.)

The most important step in performing a multislice calculation is setting up the unit cell and determining an appropriate slice thickness. In general, the unit cell used for simulating images will be different from the unit cell that defines the crystal structure of a particular material.
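The per-slice recursion and its FFT evaluation described above can be written compactly in standard notation (following e.g. Kirkland's treatment; the symbols restate the quantities defined in the Theory section, with subscripts indexing slices):

```latex
% Transmit through slice n, then propagate to slice n+1:
\psi_{n+1}(x,y) \;=\; p_n(x,y,\Delta z)\,\otimes\,\bigl[t_n(x,y)\,\psi_n(x,y)\bigr],
\qquad
t_n(x,y) \;=\; \exp\!\bigl[i\sigma\,v_n(x,y)\bigr],
% where v_n is the projected potential of slice n, \sigma the interaction
% constant, and \otimes a 2D convolution. In practice the convolution is
% evaluated with forward/inverse FFTs:
\psi_{n+1} \;=\; \mathcal{F}^{-1}\Bigl\{\,P_n(k_x,k_y)\;\mathcal{F}\bigl[t_n\,\psi_n\bigr]\Bigr\},
% with P_n the Fourier-space Fresnel propagator over thickness \Delta z.
```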
The primary reason for this is aliasing effects, which occur due to wraparound errors in FFT calculations. The requirement to add additional "padding" to the unit cell has earned it the name "super cell", and adding these additional pixels to the basic unit cell comes at a computational price.

To illustrate the effect of choosing a slice thickness that is too thin, consider a simple example. The Fresnel propagator describes the propagation of electron waves in the z direction (the direction of the incident beam) in a solid, in terms of the reciprocal lattice coordinate, the depth z in the sample, and the wavelength of the electron wave (related to its wave vector). In the small-angle approximation (scattering angles up to roughly 100 mrad), the phase shift can be approximated; at 100 mrad the error is on the order of 0.5%. For small angles this approximation holds regardless of how many slices there are, although choosing a slice thickness greater than the lattice parameter (or half the lattice parameter in the case of perovskites) for a multislice simulation would result in missing atoms that should be in the crystal potential.

Additional practical concerns are how to effectively include effects such as inelastic and diffuse scattering, quantized excitations (e.g. plasmons, phonons, excitons), etc. There was one code that took these things into consideration through a coherence function approach, called Yet Another Multislice (YAMS), but the code is no longer available either for download or for purchase.

Available software

There are several software packages available to perform multislice simulations of images. Among these are NCEMSS, NUMIS, MacTempas, and Kirkland's code. Other programs exist but unfortunately many have not been maintained (e.g. SHRLI81 by Mike O'Keefe of Lawrence Berkeley National Laboratory and Cerius2 by Accelrys). A brief chronology of multislice codes is given in Table 2, although this is by no means exhaustive.
ACEM/JCSTEM

This software is developed by Earl Kirkland of Cornell University. The code is freely available as an interactive Java applet and as standalone code written in C/C++. The Java applet is ideal for a quick introduction and for simulations under a basic incoherent linear imaging approximation. The ACEM code accompanies an excellent text of the same name by Kirkland, which describes the background theory and computational techniques for simulating electron micrographs (including multislice) in detail. The main C/C++ routines use a command-line interface (CLI) for automated batching of many simulations. The ACEM package also includes a graphical user interface that is more appropriate for beginners. The atomic scattering factors in ACEM are accurately characterized by a 12-parameter fit of Gaussians and Lorentzians to relativistic Hartree–Fock calculations.

NCEMSS

This package was released by the National Center for High Resolution Electron Microscopy. The program uses a mouse-driven graphical user interface and is written by Roar Kilaas and Mike O'Keefe of Lawrence Berkeley National Laboratory. While the code is no longer developed, the program is available through the Electron Direct Methods (EDM) package written by Laurence D. Marks of Northwestern University. Debye–Waller factors can be included as a parameter to account for diffuse scattering, although the accuracy is unclear (i.e. a good guess of the Debye–Waller factor is needed).

NUMIS

The Northwestern University Multislice and Imaging System (NUMIS) is a package written by Laurence Marks of Northwestern University. It uses a command-line interface (CLI) and is based on UNIX. A structure file must be provided as input in order to use this code, which makes it ideal for advanced users.
The NUMIS multislice programs use the conventional multislice algorithm, calculating the wavefunction of the electrons at the bottom of a crystal and simulating the image while taking into account various instrument-specific parameters, including convergence. The programs are good to use if one already has structure files for a material that have been used in other calculations (for example, density functional theory). These structure files can be used to generate X-ray structure factors, which are then used as input for the PTBV routine in NUMIS. Microscope parameters can be changed through the MICROVB routine.

MacTempas

This software is specifically developed to run on Mac OS X by Roar Kilaas of Lawrence Berkeley National Laboratory. It is designed to have a user-friendly interface and has been well maintained relative to many other codes (last update May 2013). It is available for a fee.

JMULTIS

This software for multislice simulation was written in FORTRAN 77 by J. M. Zuo while he was a postdoctoral research fellow at Arizona State University under the guidance of John C. H. Spence. The source code was published in the book Electron Microdiffraction. A comparison between multislice and Bloch wave simulations for ZnTe was also published in the book. A separate comparison between several multislice algorithms was reported in 2000.

QSTEM

The Quantitative TEM/STEM (QSTEM) simulation software package was written by Christopher Koch of Humboldt University of Berlin in Germany. It allows simulation of HAADF, ADF and ABF-STEM, as well as conventional TEM and CBED. The executable and source code are available as a free download on the Koch group website.

STEM-CELL

This is a code written by Vincenzo Grillo of the Institute for Nanoscience (CNR) in Italy. The code is essentially a graphical frontend to the multislice code written by Kirkland, with additional features.
These include tools to generate complex crystalline structures, simulate HAADF images and model the STEM probe, as well as modeling of strain in materials. Tools for image analysis (e.g. GPA) and filtering are also available. The code is updated quite often with new features, and a user mailing list is maintained. It is freely available on the group's website. DR. PROBE Multislice image simulation software for high-resolution scanning and coherent imaging transmission electron microscopy, written by Juri Barthel from the Ernst Ruska-Centre at the Jülich Research Centre. The software comprises a graphical user interface version for direct visualization of STEM image calculations, as well as a bundle of command-line modules for more comprehensive calculation tasks. The programs have been written using Visual C++, Fortran 90, and Perl. Executable binaries for Microsoft Windows 32-bit and 64-bit operating systems are available for free from the website. clTEM OpenCL-accelerated multislice software written by Adam Dyson and Jonathan Peters from the University of Warwick. clTEM is under development as of October 2019. cudaEM cudaEM is a multi-GPU-enabled code based on CUDA for multislice simulations, developed by the group of Stephen Pennycook. References Microscopy Mathematical modeling
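The conventional multislice algorithm that these packages implement alternates two steps: transmission of the electron wavefunction through a thin slice of the specimen's projected potential, and Fresnel propagation to the next slice, carried out in Fourier space. A minimal illustrative sketch in NumPy (the Gaussian "potential" and all parameter values below are invented for illustration and are not taken from any of the packages above):

```python
import numpy as np

def multislice(phase_slices, wavelength, slice_thickness, pixel_size):
    """Propagate a plane wave through a stack of phase gratings.

    phase_slices: (n_slices, N, N) array of projected-potential phase shifts
    Returns the exit wavefunction at the bottom of the specimen.
    """
    n_slices, N, _ = phase_slices.shape
    # Fresnel propagator for one slice, defined on the FFT frequency grid
    k = np.fft.fftfreq(N, d=pixel_size)           # spatial frequencies (1/length)
    k2 = k[:, None]**2 + k[None, :]**2
    propagator = np.exp(-1j * np.pi * wavelength * slice_thickness * k2)

    psi = np.ones((N, N), dtype=complex)          # incident plane wave
    for t in np.exp(1j * phase_slices):           # slice transmission functions
        # transmit through the slice, then propagate in Fourier space
        psi = np.fft.ifft2(np.fft.fft2(psi * t) * propagator)
    return psi

# Toy example: a weak Gaussian "atom" column repeated over 10 slices
N = 64
x = np.arange(N) - N // 2
r2 = x[:, None]**2 + x[None, :]**2
slices = np.repeat(0.05 * np.exp(-r2 / 20.0)[None], 10, axis=0)
exit_wave = multislice(slices, wavelength=0.0025, slice_thickness=2.0, pixel_size=0.1)
print(exit_wave.shape)  # (64, 64)
```

Because the transmission functions are pure phase factors and the propagator has unit modulus, the total intensity of the wavefunction is conserved through the stack, which is a useful sanity check for any multislice implementation.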
Multislice
Chemistry,Mathematics
2,941
27,159,329
https://en.wikipedia.org/wiki/Siegel%20G-function
In mathematics, the Siegel G-functions are a class of functions in transcendental number theory introduced by C. L. Siegel. They satisfy a linear differential equation with polynomial coefficients, and the coefficients of their power series expansion lie in a fixed algebraic number field and have heights of at most exponential growth. Definition A Siegel G-function is a function given by an infinite power series f(z) = a_0 + a_1 z + a_2 z^2 + ... where the coefficients a_n all belong to the same algebraic number field, K, and with the following two properties: f is the solution to a linear differential equation with coefficients that are polynomials in z; the projective height of the first n coefficients is O(c^n) for some fixed constant c > 0. The second condition means the coefficients of f grow no faster than a geometric series. Indeed, the functions can be considered as generalisations of geometric series, whence the name G-function, just as E-functions are generalisations of the exponential function. References C. L. Siegel, "Über einige Anwendungen diophantischer Approximationen", Ges. Abhandlungen, I, Springer (1966) Analytic number theory Algebraic number theory Ordinary differential equations Transcendental numbers Analytic functions
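A standard example, stated here for illustration (not taken from the article's reference): the logarithm is a G-function.

```latex
% f(z) = -log(1-z) is a Siegel G-function:
\[
  f(z) \;=\; -\log(1-z) \;=\; \sum_{n \ge 1} \frac{z^n}{n},
  \qquad
  (1-z)\,f''(z) - f'(z) \;=\; 0 .
\]
% The coefficients a_n = 1/n lie in Q, and the common denominator
% lcm(1, 2, ..., n) = e^{n + o(n)} grows only geometrically, so the
% height condition O(c^n) holds. By contrast, e^z = \sum z^n / n!
% has coefficients shrinking faster than any geometric series, which
% is why it is an E-function rather than a G-function.
```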
Siegel G-function
Mathematics
244
8,318,412
https://en.wikipedia.org/wiki/Articulatory%20phonology
Articulatory phonology is a linguistic theory originally proposed in 1986 by Catherine Browman of Haskins Laboratories and Louis Goldstein of the University of Southern California and Haskins. The theory identifies theoretical discrepancies between phonetics and phonology and aims to unify the two by treating them as low- and high-dimensional descriptions of a single system. Unification can be achieved by incorporating into a single model the idea that the physical system (identified with phonetics) constrains the underlying abstract system (identified with phonology), making the units of control at the abstract planning level the same as those at the physical level. The plan of an utterance is formatted as a gestural score, which provides the input to a physically based model of speech production – the task dynamic model of Elliot Saltzman. The gestural score graphs locations within the vocal tract where constriction can occur, indicating the planned or target degree of constriction. A computational model of speech production developed at Haskins Laboratories combines articulatory phonology, task dynamics, and the Haskins articulatory synthesis system developed by Philip Rubin and colleagues. Notes Bibliography Browman, C.P. and Goldstein, L. (1986). Towards an articulatory phonology. In C. Ewen and J. Anderson (eds.) Phonology Yearbook 3. Cambridge: Cambridge University Press, pp. 219–252. Browman, C.P. and Goldstein, L. (1993). Dynamics and articulatory phonology. Status Reports on Speech Research, SR-113. New Haven: Haskins Laboratories, pp. 51–62. Fowler, C.A., Rubin, P., Remez, R.E. and Turvey, M.T. (1980). Implications for speech production of a general theory of action. In B. Butterworth (ed.) Language Production. New York, NY: Academic Press, pp. 373–420. Goldstein, Louis M., and Carol Fowler. (2003). 
Articulatory phonology: a phonology for public language use. In Phonetics and Phonology in Language Comprehension and Production: Differences and Similarities, ed. Antje S. Meyer and Niels O. Schiller. Mouton de Gruyter. Kröger, B.J. (1993). A gestural production model and its application to reduction in German. Phonetica 50: 213–233. Kröger, B.J., Birkholz, P. (2007). A gesture-based concept for speech movement control in articulatory speech synthesis. In: Esposito A, Faundez-Zanuy M, Keller E, Marinaro M (eds.) Verbal and Nonverbal Communication Behaviours, LNAI 4775 (Springer, Berlin) pp. 174–189. Saltzman, E. (1986). Task dynamic co-ordination of the speech articulators: a preliminary model. In H. Heuer and C. Fromm (eds.) Generation and Modulation of Action Patterns. Berlin: Springer-Verlag, pp. 129–144. Tatham, M. A. A. (1996). Articulatory phonology and computational adequacy. In R. Lawrence (ed.). Proceedings of the Institute of Acoustics, Vol. 18, Part 9. St. Albans: IoA, 375–382. Computational linguistics Psycholinguistics Phonetics Phonology
Articulatory phonology
Technology
744
53,507,375
https://en.wikipedia.org/wiki/Stears%20%28company%29
Stears is a market intelligence company for investing in Africa, with headquarters in Lagos, Abuja, and London. Initially established as a media publication called Stears Business, the company was founded in 2017 by Preston Ideh, Abdul Abdulrahim, Foluso Ogunlana, and Michael Famoroti, who met at the London School of Economics and Imperial College in the United Kingdom. Stears has become a provider of subscription-based data collection tools and analysis services for investing in Africa. Stears has provided bespoke content around specific issues such as market entry, country analysis, and the digital economy for international organisations such as the United Nations Development Programme, the Foreign, Commonwealth and Development Office, and knowledge workers. This provides data for the work of analysts, portfolio managers, researchers, and economists. In 2022, Stears raised $3.3 million in funding from MaC Venture Capital, Serena Ventures (the investment firm of retired tennis star Serena Williams), Melo 7 Tech Partners, Omidyar Group's Laminate Fund and Cascador. References Data Data and information organizations Nigerian websites
Stears (company)
Technology
225
21,538,523
https://en.wikipedia.org/wiki/EN%201063
EN 1063, or CEN 1063, is a security glazing standard created by the European Committee for Standardization for measuring the protective strength of bullet-resistant glass. It is commonly used in conjunction with EN 1522 (the Euronorm standard for bullet resistance in windows, doors, shutters and blinds) to form a ballistic classification system by which armored vehicles and structures are tested and rated. A similar classification system, primarily used in the United States, is NIJ Standard 0108, the U.S. National Institute of Justice's Standard for Ballistic Resistant Protective Materials, which includes glass and armor plate. Threat Levels The protective strength of a glazed shielding is rated based on the type of munitions, or threat level, it is capable of withstanding. There are 7 main standard threat levels, BR1-BR7 (also written as B1-B7), each corresponding to a different type of small arms fire. Additionally, there are two further threat levels (SG1 and SG2) corresponding to shotgun munitions. To be given a particular rating, the glazing must stop the bullet for the specified number of strikes, with multiple strikes placed within 120 mm of each other in the test sample, whose dimensions are 500±5 mm × 500±5 mm. The glazing should also be shatterproof and produce no spalls after each strike. Lastly, the classification levels are numbered in order of increasing protective strength, so any sample complying with the requirements of one class also complies with the requirements of the previous classes. However, the SG (shotgun) classes do not necessarily comply with BR classes. The bullet-type abbreviations used in the test requirements are as follows: LB - Lead Bullet; FJ - Full Metal Jacket; FN - Flat Nose; RN - Round Nose; CB - Cone Bullet; PB - Pointed Bullet; SC - Soft Core (lead); SCP - Soft Core (lead) & Steel Penetrator; HC - Hard core, steel hardness > 63 HRC. References Glass engineering and science Armour
EN 1063
Materials_science,Engineering
407
55,835,384
https://en.wikipedia.org/wiki/Porhalaan
The Porhalaan is the traditional calendar of the Batak people of North Sumatra, Indonesia. The Batak calendar is a lunisolar calendar consisting of 12 months of 30 days each, with an occasional leap month. The Batak calendar is derived from the Hindu calendar. The Batak people do not use the porhalaan as a means to tell time, but rather to determine auspicious days, and it is used only by the Batak shaman. Ritual The name porhalaan comes from the word hala, which is derived from Sanskrit kala, "scorpion", as the practice takes observation of the constellations into account. The porhalaan is used by the Batak people for divination. Batak people did not use the porhalaan for telling time. The responsibility of interpreting the porhalaan fell solely to the chief male ritualist known as the datu. The datu would read the porhalaan to determine which day was considered auspicious or inauspicious to hold a certain ritual. In order to minimize the risk of accidentally selecting an unfavorable day due to errors in calendar management, days are often chosen based on whether the day is able to promise happiness in two months' time, probably the current month and the following one. There is often an extra 13th month in the calendar that serves this purpose, originally a Hindu leap month, but in the Batak context it is used for a different reason. If the additional 13th month is not available, then the first month is simply used again for protection. Whether the 13th month is used to compensate for the difference to the solar year is not proven in the context of Batak society. The Porhalaan is usually written as a table of square boxes of 30 columns (days) and 12 or 13 rows (months), as recorded in the pustaha, the Batak magic book. Sometimes the porhalaan is written on a cylindrical piece of bamboo. The Porhalaan is the clearest example of the Batakization of Hindu culture. The original Hindu calendar was borrowed, modified and reworked according to Batak empirical and pragmatic principles. 
The result is a simplification of the original calendar. All that remains of a complicated system of adjusting lunar months to the solar zodiac is a divination calendar which is not used for the purpose of telling time. Calendar system There is no designation of the year in the Batak calendar. The New Year begins on the New Moon in May, when the constellation Orion (siala sungsang) vanishes in the west and the constellation Scorpius (siala poriama) rises in the east. The porhalaan is divided into 12 months, each containing 30 days. Each month was named by its number: the first month is called simply "first month" or bulan si pahasada, the second month is bulan si pahadua, etc. The eleventh month is called bulan li, while the twelfth month is named bulan hurung. The first day of each month (bona ni bulan) fell directly one day after the New Moon. The Full Moon usually fell on the 14th or 15th day. The porhalaan does not use a term for "week", but each month is divided into four parts, each containing seven days. The names of the seven days were borrowed from the Sanskrit names: the first day is called Aditya ("sun"), the second Soma ("moon"), then Anggara (Mars), Budha (Mercury), Brihaspati (Jupiter), Syukra (Venus), and Syanaiscara (Saturn). In the porhalaan way of naming days, the name of the day in the context of the 30 days of a month is maintained. For example, the third day in a month, which fell on a Tuesday, is known as Nggara telu uari; the sixth day is Cukera enem berngi, the ninth is Suma na siwah, the tenth is Nggara sepuluh, and so on. The 7th, 14th, 21st, and 28th days are named after the moon phase, that is bělah (first-quarter waxing moon), bělah purnama (full moon), bělah turun (third-quarter waning moon), and mate bulan (dead moon). 
The word pultak ("increasing") is added to the bright-fortnight days of the porhalaan when the moon phase grows, while the word cěpik ("decreasing") is added to the dark-fortnight days of the porhalaan when the moon phase wanes; this is clearly influenced by the Hindu shukla paksha and krishna paksha. See also List of calendars References Cited works Calendars Batak
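The naming scheme described above (a seven-day name cycle running through a 30-day month, with the 7th, 14th, 21st and 28th days named for moon phases) can be sketched in code. This is an illustrative sketch only: the spellings use the Sanskrit forms given in the text with diacritics omitted, whereas actual Batak usage employs local variants (e.g. Nggara for Anggara, Cukera for Syukra).

```python
# Hedged sketch of the porhalaan day-naming scheme described above.
WEEKDAYS = ["Aditya", "Soma", "Anggara", "Budha",
            "Brihaspati", "Syukra", "Syanaiscara"]
MOON_PHASE = {7: "belah", 14: "belah purnama",       # diacritics omitted
              21: "belah turun", 28: "mate bulan"}

def day_name(day_of_month: int) -> str:
    """Name for a day (1-30) of a porhalaan month."""
    if not 1 <= day_of_month <= 30:
        raise ValueError("a porhalaan month has 30 days")
    # the 7th, 14th, 21st and 28th days carry moon-phase names
    if day_of_month in MOON_PHASE:
        return MOON_PHASE[day_of_month]
    # otherwise pair the seven-day name cycle with the day number
    weekday = WEEKDAYS[(day_of_month - 1) % 7]
    return f"{weekday} {day_of_month}"

print(day_name(3))   # "Anggara 3" -- cf. "Nggara telu uari" in the text
print(day_name(14))  # "belah purnama" (full moon)
```

Note that this simple cycle reproduces the examples in the text: day 3 and day 10 both fall on Anggara (Nggara), day 6 on Syukra (Cukera), and day 9 on Soma (Suma).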
Porhalaan
Physics
973
62,588,724
https://en.wikipedia.org/wiki/Cui%20Tiejun
Cui Tiejun (; born September 1965) is a Chinese electrical engineer specializing in electromagnetic field and microwave technology. He is the deputy director of Southeast University's State Key Laboratory of Millimeter Waves and of the Synergetic Innovation Center of Wireless Communication Technology, and deputy dean of the School of Information Science & Engineering. Education Cui was born in Luanping County, Hebei, in September 1965. He earned his bachelor's degree in 1987, master's degree in 1989, and doctoral degree in 1993, all in electrical engineering and all from Xidian University. From 1995 to 1997 he received grants from the Alexander von Humboldt Foundation as a Humboldt Research Fellow at Karlsruhe University. He was a postdoctoral fellow at the University of Illinois at Urbana-Champaign from 1997 to 1999, and a research scientist there from 2000. Career In October 2001 he was hired as a professor and doctoral supervisor at the School of Information Science & Engineering, Southeast University. He was a delegate to the 12th National People's Congress. In December 2017, he was elected a member of the 14th Central Committee of the Jiu San Society. Honours and awards 2002 National Science Fund for Distinguished Young Scholars 2014 State Natural Science Award (Second Class) 2015 Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for contributions to microwave metamaterials and computational electromagnetics 2018 State Natural Science Award (Second Class) November 22, 2019 Member of the Chinese Academy of Sciences (CAS) References 1965 births Living people People from Luanping County Xidian University alumni Scientists from Hebei Academic staff of Southeast University Members of the Chinese Academy of Sciences Delegates to the 12th National People's Congress Fellows of the IEEE Microwave engineers Chinese electrical engineers Metamaterials scientists
Cui Tiejun
Materials_science
348
10,117,909
https://en.wikipedia.org/wiki/The%20Pesticide%20Question
The Pesticide Question: Environment, Economics and Ethics is a 1993 book edited by David Pimentel and Hugh Lehman. Use of pesticides has improved agricultural productivity, but there are also concerns about safety, health and the environment. This book is the result of research by leading scientists and policy experts into the non-technical and social issues of pesticides. In examining the social policies related to pesticide use, they consider the costs as well as the benefits. The book says that intensive farming cannot completely do without synthetic chemicals, but that it is technologically possible to reduce the amount of pesticides used in the United States by 35-50 per cent without reducing crop yields. The researchers show that to regain public trust, those who regulate and use pesticides must examine ethical questions fairly and take appropriate action to protect public welfare, health, and the environment. Anyone concerned with reducing our reliance on chemical pesticides and how human activities can remain both productive and environmentally sound will find this volume a stimulating contribution to a troubling debate. The Pesticide Question builds on the 1962 best-seller Silent Spring by Rachel Carson. Carson did not reject the use of pesticides, but argued that their use was often indiscriminate and resulted in harm to people and the environment. She also highlighted the problem of pests becoming resistant to pesticides. Carson's work is referred to many times in The Pesticide Question, which critically explores many non-technical issues associated with pesticide use, mainly in the United States. The book has 40 contributors, mainly academics from a wide range of disciplines. 
The Pesticide Question is divided into five main parts: social and environmental effects of pesticides; methods and effects of reducing pesticide use; government policy and pesticide use; history, public attitudes, and ethics in regard to pesticide use; and the benefits and risks of pesticides. References 1993 in the environment Agriculture books Environmental non-fiction books Pesticides
The Pesticide Question
Biology,Environmental_science
393
43,376,012
https://en.wikipedia.org/wiki/Social%20uterus
Social uterus is a developmental concept in family therapy for psychosomatic disorders. The social uterus as an integrative model of family development was introduced by Vladislav Chvála and Ludmila Trapková in the 1990s. The metaphor of a social uterus was formed by comparing the biological function of the uterus and the maturation of the foetus inside it from conception to childbirth with the changes in the family from the birth of the child up to its separation around the age of 18. The metaphor translates facts well known from the biology of reproduction to the social level. In the "social uterus", the development and maturing of the indispensable "social organs and functions" of man can be observed. At a higher, social level of organization of living matter, the physical birth of the child may be viewed as the moment of the child's social conception. This approach sums up the achievements of developmental psychology and family therapy into a practical and understandable model, which is useful in clinical practice. The model offers an understanding of psychosomatic symptoms within a family. The concept has been gradually developed by the authors through extensive clinical work with individuals and families. It has been shown to have clinical validity, particularly in family therapy for psychosomatic disorders and various chronic somatic diseases. See also A similar concept of a social womb has been used in a 2013 book by J. Ronald Lally, designating a protected, nurturing environment needed by babies from birth to age 3 for healthy brain development. References Family therapy Clinical psychology Systems psychology Somatic psychology
Social uterus
Biology
321
57,375,630
https://en.wikipedia.org/wiki/Maclear%27s%20Beacon
Maclear's Beacon is a triangulation station used in Maclear's arc measurement for determining Earth's circumference. The beacon is situated on top of Table Mountain in Cape Town, South Africa, at the eastern end of the plateau of the mountain, roughly 3 km from the cable car station. The beacon is above sea level, higher than the upper cable car station. The structure consists of man-made rock packed in a triangular form, being high. It was painted lamp black to make it visible when light shone on it. In December 1844, the Astronomer Royal at the Cape, Thomas Maclear, instructed his assistant William Mann to build a beacon in the form of a pile of rocks which would be used to confirm and possibly expand on the existing curvature-of-the-Earth data of Nicolas-Louis de Lacaille. This data was connected with the Cape arc of the meridian. Initially the beacon had no name, but in later years it was named after Maclear. In 1929 the pile of stones collapsed; it was restored in 1979 to commemorate the centenary of Maclear's death. The beacon is still used by cartographers today. It has become a tourist attraction, and hiking trails over the mountain pass next to the beacon. It is also a National Monument. References External links 1844 in South Africa Buildings and structures completed in 1844 Buildings and structures in Cape Town Geodesy Geomatics 19th-century architecture in South Africa
Maclear's Beacon
Mathematics
305
30,553,860
https://en.wikipedia.org/wiki/Prix%20Jules%20Janssen
The Prix Jules Janssen is the highest award of the Société astronomique de France (SAF), the French astronomical society. This annual prize is given to a professional French astronomer or to an astronomer of another nationality in recognition of astronomical work in general, or for services rendered to Astronomy. The first recipient of the prize was Camille Flammarion, the founder of the Société astronomique de France, in 1897. The prize has been continuously awarded since then with the exception of the two World Wars. Non-French recipients have come from various countries including the United States, the United Kingdom, Canada, Switzerland, the Netherlands, Germany, Belgium, Sweden, Italy, Spain, Hungary, India, the former Czechoslovakia, and the former Soviet Union. It was established by the French astronomer Pierre Jules César Janssen (known as Jules Janssen) during his tenure as president of SAF from 1895 to 1897. Janssen announced the creation of the new prize at a meeting of the Société Astronomique de France on 2 December 1896. The medal was designed in 1896 by the Parisian engraver Alphée Dubois (1831–1905). It is minted by the Monnaie de Paris. This prize is distinct from the Janssen Medal (created in 1886), which is awarded by the French Academy of Sciences and also named for Janssen. Laureates See also List of astronomy awards Prizes named after people References External links Official list of all recipients of the prix Jules–Janssen given by the French Astronomical Society Astronomy prizes French awards Awards established in 1897 1897 establishments in France
Prix Jules Janssen
Astronomy,Technology
322
25,495,841
https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20March%2031%2C%202071
An annular solar eclipse will occur at the Moon's descending node of orbit on Tuesday, March 31, 2071, with a magnitude of 0.9919. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. An annular solar eclipse occurs when the Moon's apparent diameter is smaller than the Sun's, blocking most of the Sun's light and causing the Sun to look like an annulus (ring). An annular eclipse appears as a partial eclipse over a region of the Earth thousands of kilometres wide. The Moon's apparent diameter will be near the average diameter because it will occur 7.2 days after apogee (on March 24, 2071, at 10:05 UTC) and 6.2 days before perigee (on April 6, 2071, at 19:05 UTC). The path of annularity will be visible from parts of Chile, Argentina, extreme southern Paraguay, Brazil, extreme southern Gabon, Congo, and the Democratic Republic of the Congo. A partial solar eclipse will also be visible for parts of South America, Antarctica, and Africa. Eclipse details Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2071 A penumbral lunar eclipse on March 16. An annular solar eclipse on March 31. A penumbral lunar eclipse on September 9. A total solar eclipse on September 23. 
Metonic Preceded by: Solar eclipse of June 11, 2067 Followed by: Solar eclipse of January 16, 2075 Tzolkinex Preceded by: Solar eclipse of February 17, 2064 Followed by: Solar eclipse of May 11, 2078 Half-Saros Preceded by: Lunar eclipse of March 25, 2062 Followed by: Lunar eclipse of April 4, 2080 Tritos Preceded by: Solar eclipse of April 30, 2060 Followed by: Solar eclipse of February 27, 2082 Solar Saros 140 Preceded by: Solar eclipse of March 20, 2053 Followed by: Solar eclipse of April 10, 2089 Inex Preceded by: Solar eclipse of April 20, 2042 Followed by: Solar eclipse of March 10, 2100 Triad Preceded by: Solar eclipse of May 30, 1984 Followed by: Solar eclipse of January 30, 2158 Solar eclipses of 2069–2072 Saros 140 Metonic series Tritos series Inex series References External links 2071 in science
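The eclipse-season rhythm described above, with seasons repeating just short of six months (about 173 days) apart, can be sketched with simple date arithmetic. This is a rough illustration only: the anchor date and the whole-day spacing are simplifications, not precise ephemeris values.

```python
from datetime import date, timedelta

ECLIPSE_SEASON_DAYS = 173  # approximate spacing between eclipse seasons

def next_season_midpoints(start: date, n: int):
    """Approximate anchor dates of the next n eclipse seasons."""
    return [start + timedelta(days=ECLIPSE_SEASON_DAYS * k) for k in range(n)]

# Anchoring (illustratively) on the March 31, 2071 annular eclipse:
for d in next_season_midpoints(date(2071, 3, 31), 3):
    print(d)
```

Stepping 173 days from the March 31 eclipse lands on September 20, 2071, which falls inside the next eclipse season described above (the lunar eclipse of September 9 and the total solar eclipse of September 23).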
Solar eclipse of March 31, 2071
Astronomy
649
70,161,885
https://en.wikipedia.org/wiki/2%2C4%2C6-Tri-tert-butylpyrimidine
2,4,6-Tri-tert-butylpyrimidine is the organic compound with the formula HC(tBuC)2N2CtBu, where tBu = (CH3)3C. It is a substituted derivative of the heterocycle pyrimidine. Known also as TTBP, this compound is of interest as a base that is sufficiently bulky not to bind boron trifluoride but still able to bind protons. It is less expensive than the related bulky derivatives of pyridine such as 2,6-di-tert-butylpyridine, 2,4,6-tri-tert-butylpyridine, and 2,6-di-tert-butyl-4-methylpyridine. References Pyrimidines Reagents for organic chemistry Non-nucleophilic bases Tert-butyl compounds
2,4,6-Tri-tert-butylpyrimidine
Chemistry
192
39,698,430
https://en.wikipedia.org/wiki/Wildlife%20farming
Wildlife farming refers to the raising of traditionally undomesticated animals in an agricultural setting to produce: living animals for canned hunting and to be kept as pets; commodities such as food and traditional medicine; and materials like leather, fur and fiber. Purported benefits Some conservationists argue that wildlife farming can protect endangered species from extinction by reducing the pressure on populations of wild animals, which are often poached for food. Others claim that it may be harmful for the majority of conservation efforts, except for a select few species. Certain African communities rely on bushmeat to obtain the daily amount of animal protein necessary to be healthy and survive. Oftentimes, bushmeat is not handled with care, causing the spread of diseases. Wildlife farming can reduce the spread of diseases by providing African communities with bushmeat that is properly processed. In his documentary film The End of Eden, South African filmmaker Rick Lomba presented examples of the environmentally sustainable and indeed rejuvenating effect of certain types of wildlife farming. Associated risks Wildlife farming has been linked to the emergence of zoonotic diseases, such as the SARS outbreak, which has since been connected with the farming of civets. Current state of the industry In recent years, South Africa has seen a massive growth in wildlife ranching (also known as game farming), which has led to a range of issues due to a lack of regulations. This has led to the reclassification of 33 wild species as farm animals. As a result of the COVID-19 pandemic, approximately 20,000 wildlife farms have been shut down in China. In the preceding years, the Chinese government had been promoting and incentivizing the development of the wildlife farming industry, which was valued at 520bn yuan (£57bn) in 2017. See also Livestock References Animal husbandry Animal rights Animal welfare Cruelty to animals Livestock Meat industry Wildlife
Wildlife farming
Biology
382
19,325,934
https://en.wikipedia.org/wiki/Inyoite
Inyoite, named after Inyo County, California, where it was discovered in 1914, is a colourless monoclinic mineral. It turns white on dehydration. Its chemical formula is Ca(H4B3O7)(OH)·4H2O or CaB3O3(OH)5·4H2O. Associated minerals include priceite, meyerhofferite, colemanite, hydroboracite, ulexite and gypsum. References Calcium minerals Nesoborates Hydroxide minerals Tetrahydrate minerals Monoclinic minerals Minerals in space group 14 Minerals described in 1914 Luminescent minerals
Inyoite
Chemistry
130
391,251
https://en.wikipedia.org/wiki/Stark%E2%80%93Heegner%20theorem
In number theory, the Heegner theorem establishes the complete list of the imaginary quadratic number fields whose rings of integers are principal ideal domains. It solves a special case of Gauss's class number problem of determining the number of imaginary quadratic fields that have a given fixed class number. Let Q denote the set of rational numbers, and let d be a square-free integer. The field Q(√d) is a quadratic extension of Q. The class number of Q(√d) is one if and only if the ring of integers of Q(√d) is a principal ideal domain. The Baker–Heegner–Stark theorem can then be stated as follows: If d < 0, then the class number of Q(√d) is one if and only if d ∈ {−1, −2, −3, −7, −11, −19, −43, −67, −163}. These are known as the Heegner numbers. By replacing d with the discriminant D of Q(√d), this list is often written as D ∈ {−3, −4, −7, −8, −11, −19, −43, −67, −163}. History This result was first conjectured by Gauss in Section 303 of his Disquisitiones Arithmeticae (1798). It was essentially proven by Kurt Heegner in 1952, but Heegner's proof was not accepted until the establishment mathematician Harold Stark rewrote the proof in 1967; Stark's proof had many commonalities with Heegner's work, but sufficiently many differences that Stark considers the proofs to be different. Heegner "died before anyone really understood what he had done". Stark formally paraphrased Heegner's proof in 1969; other contemporary papers produced various similar proofs by modular functions. Alan Baker gave a completely different proof slightly earlier (1966) than Stark's work (or, more precisely, Baker reduced the result to a finite amount of computation, which Stark's 1963/4 thesis had already provided), and won the Fields Medal for his methods. Stark later pointed out that Baker's proof, involving linear forms in 3 logarithms, could be reduced to only 2 logarithms, in which case the result was already known from 1949 by Gelfond and Linnik. 
Stark's 1969 paper also cited the 1895 text by Heinrich Martin Weber and noted that if Weber had "only made the observation that the reducibility of [a certain equation] would lead to a Diophantine equation, the class-number one problem would have been solved 60 years ago". Bryan Birch notes that Weber's book, and essentially the whole field of modular functions, dropped out of interest for half a century: "Unhappily, in 1952 there was no one left who was sufficiently expert in Weber's Algebra to appreciate Heegner's achievement." Deuring, Siegel, and Chowla all gave slightly variant proofs by modular functions in the immediate years after Stark. Other versions in this genre have also cropped up over the years. For instance, in 1985, Monsur Kenku gave a proof using the Klein quartic (though again utilizing modular functions). And again, in 1999, Imin Chen gave another variant proof by modular functions (following Siegel's outline). The work of Gross and Zagier (1986) combined with that of Goldfeld (1976) also gives an alternative proof. Real case On the other hand, it is unknown whether there are infinitely many d > 0 for which Q(√d) has class number 1. Computational results indicate that there are many such fields; lists of number fields with class number one include some of these. Notes References Theorems in algebraic number theory
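The finite list in the imaginary case can be checked computationally: for a negative discriminant D, the class number h(D) equals the number of reduced primitive binary quadratic forms ax² + bxy + cy² with b² − 4ac = D, a standard elementary method. The sketch below is illustrative and is not drawn from any of the proofs cited above.

```python
from math import gcd

def class_number(D: int) -> int:
    """Class number h(D) of a negative discriminant D, counted via
    reduced primitive forms ax^2 + bxy + cy^2 with b^2 - 4ac = D
    (reduced: -a < b <= a <= c, and b >= 0 whenever a == c)."""
    assert D < 0 and D % 4 in (0, 1)
    count = 0
    a = 1
    while a * a <= -D // 3:            # reduced forms have a <= sqrt(|D|/3)
        for b in range(-a + 1, a + 1):
            num = b * b - D
            if num % (4 * a) == 0:
                c = num // (4 * a)
                if c >= a and gcd(gcd(a, abs(b)), c) == 1:
                    if not (b < 0 and a == c):   # avoid double-counting
                        count += 1
        a += 1
    return count

heegner = [-3, -4, -7, -8, -11, -19, -43, -67, -163]
print([class_number(D) for D in heegner])  # [1, 1, 1, 1, 1, 1, 1, 1, 1]
print(class_number(-20))                   # 2: x^2+5y^2 and 2x^2+2xy+3y^2
```

Running this confirms that every discriminant on the Heegner list has class number one, while e.g. D = −20 (the field Q(√−5), whose ring of integers is not a PID) has class number two.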
Stark–Heegner theorem
Mathematics
698
28,325,493
https://en.wikipedia.org/wiki/Balbis
In geometry, a balbis is a geometric shape that can be colloquially defined as a single (primary) line that is terminated by a (secondary) line at one endpoint and by another (secondary) line at the other endpoint. The terminating secondary lines are at right angles to the primary line. Its parallel sides are of indefinite length and can be infinitely long. The word "balbis" comes from the ancient Greek word βαλβίς, meaning a rope between two posts used to indicate the start and finish of a race. The most common example of a balbis is the capital letter 'H', the eighth letter in the ISO basic Latin alphabet. It can also be seen in, for example, rugby posts and old-fashioned television antennas. Another type of balbis is the rectangular balbis, which may be loosely described as a rectangle with one side missing. A rectangular balbis was used in the Olympic Games as a throwing area and is described by Philostratus II. In his book about the balbis (see References below), the Rev. P. H. Francis describes the balbis as "the commonest geometrical figure, more in evidence than the triangle, circle, ellipse, or other geometrical figure that has been studied from ancient times" and goes on to state that it "was known to but not studied by the ancient Greeks; and this geometrical figure has been neglected." His memorial illustrates a balbis and can be seen in St. Mary's Church, Stoughton, West Sussex. References The Rev. Francis was sometime vicar of Stoughton, West Sussex. Geometric shapes
Balbis
Mathematics
339
44,893,163
https://en.wikipedia.org/wiki/New%20Formalism%20%28architecture%29
New Formalism is an architectural style that emerged in the United States during the mid-1950s and flowered in the 1960s. Buildings designed in that style exhibited many Classical elements, including "strict symmetrical elevations", building proportion and scale, Classical columns, highly stylized entablatures and colonnades. The style was used primarily for high-profile cultural, high-tech, institutional and civic buildings. Edward Durell Stone's New Delhi American Embassy (1954), which blended the architecture of the east with modern western concepts, is considered to be the symbolic start of New Formalism architecture. Common features of the New Formalism style include: Use of traditionally rich materials such as travertine, marble, and granite or man-made materials that mimic their luxurious qualities Buildings usually set on a podium Designed to achieve modern monumentality Embraces classical precedents, such as arches, colonnades, classical columns and entablatures Smooth wall surfaces Delicacy of details Formal landscape; use of pools, fountains, and a sculpture within a central plaza Notable architects Welton Becket Philip Johnson Edward Durell Stone Minoru Yamasaki Friedrich Silaban Gunnar Birkerts Notable examples McGregor Memorial Conference Center, Detroit, Michigan (1958) Pacific Science Center, Seattle, Washington (1962) Lincoln Center for the Performing Arts, New York City (1962/69) Memphis International Airport, Memphis, Tennessee (1963) Uptown Campus, University at Albany, SUNY, Albany, New York (1964) 2 Columbus Circle, New York City (1964) Northwestern National Life Building, Minneapolis (1965) Dorothy Chandler Pavilion, Los Angeles, California (1964) Equitable Building, Los Angeles (1969) Beinecke Rare Book & Manuscript Library, New Haven (1963) Cambridge Tower, Austin, Texas (1965) Original World Trade Center, Lower Manhattan, New York City (1966) Ahmanson Theater, Los Angeles, California (1967) Los Angeles County Museum of Art, Los Angeles
(1965) United States Confluence Theater - now John H. Wood Federal Courthouse, San Antonio, Texas (1968) The Forum, Inglewood, California (1967) Wilshire Colonnade, Los Angeles, California (1970) John F. Kennedy Center for the Performing Arts, Washington, D.C. (1971) Teacher Retirement System of Texas Headquarters, Austin, Texas (1973) Istiqlal Mosque, Jakarta, Indonesia (1978) Weber County Main Library, Ogden, Utah (1968) San Diego Sports Arena - now Pechanga Arena, San Diego, California (1966) References 20th-century architectural styles American architectural styles Architectural history
New Formalism (architecture)
Engineering
526
48,599,028
https://en.wikipedia.org/wiki/Evert%20Jan%20Baerends
Evert Jan Baerends (born 17 September 1945) is a Dutch theoretical chemist. He is an emeritus professor of the Vrije Universiteit Amsterdam. Baerends is known for his development and application of electronic structure calculations, which over time led to the development of the Amsterdam Density Functional. He worked extensively on density functional theory. Career Baerends was born on 17 September 1945 in Voorst. He obtained his PhD at the Vrije Universiteit Amsterdam under professor Pieter Ros. Baerends became a professor of theoretical chemistry at the Vrije Universiteit Amsterdam. He did extensive research on density functional theory and was involved in the development and application of electronic structure calculations, which later led to the development of the Amsterdam Density Functional. Baerends became a member of the Royal Netherlands Academy of Arts and Sciences in 2004. In 2010 he was awarded the Schrödinger Medal by the World Association of Theoretical and Computational Chemists, with the citation: "For his pioneering contributions to the development of computational density functional methods and his fundamental contributions to density functional theory and density matrix theory." Baerends is a member of the International Academy of Quantum Molecular Science. After his retirement in the Netherlands, Baerends lectured at Pohang University of Science and Technology in South Korea. In 2019 he received an honorary doctorate from the University of Girona. References 1945 births Living people Computational chemists 20th-century Dutch chemists Members of the Royal Netherlands Academy of Arts and Sciences People from Voorst Academic staff of Pohang University of Science and Technology Schrödinger Medal recipients Theoretical chemists Vrije Universiteit Amsterdam alumni Academic staff of Vrije Universiteit Amsterdam 21st-century Dutch chemists
Evert Jan Baerends
Chemistry
346
4,917,048
https://en.wikipedia.org/wiki/Sterol%20regulatory%20element-binding%20protein
Sterol regulatory element-binding proteins (SREBPs) are transcription factors that bind to the sterol regulatory element DNA sequence TCACNCCAC. Mammalian SREBPs are encoded by the genes SREBF1 and SREBF2. SREBPs belong to the basic helix-loop-helix leucine zipper class of transcription factors. Unactivated SREBPs are attached to the nuclear envelope and endoplasmic reticulum membranes. In cells with low levels of sterols, SREBPs are cleaved to a water-soluble N-terminal domain that is translocated to the nucleus. These activated SREBPs then bind to specific sterol regulatory element DNA sequences, thus upregulating the synthesis of enzymes involved in sterol biosynthesis. Sterols in turn inhibit the cleavage of SREBPs, and therefore synthesis of additional sterols is reduced through a negative feedback loop. Isoforms Mammalian genomes have two separate SREBP genes (SREBF1 and SREBF2): SREBP-1 expression produces two different isoforms, SREBP-1a and -1c. These isoforms differ in their first exons owing to the use of different transcriptional start sites for the SREBP-1 gene. SREBP-1c was also identified in rats as ADD-1. SREBP-1c is responsible for regulating the genes required for de novo lipogenesis. SREBP-2 regulates the genes of cholesterol metabolism. Function SREBPs are indirectly required for cholesterol biosynthesis and for uptake and fatty acid biosynthesis. These proteins work with the asymmetric sterol regulatory element (StRE). SREBPs have a structure similar to E-box-binding helix-loop-helix (HLH) proteins. However, in contrast to E-box-binding HLH proteins, an arginine residue is replaced with tyrosine, making them capable of recognizing StREs and thereby regulating membrane biosynthesis. Mechanism of action Animal cells maintain proper levels of intracellular lipids (fats and oils) under widely varying circumstances (lipid homeostasis).
For example, when cellular cholesterol levels fall below the level needed, the cell makes more of the enzymes necessary to make cholesterol. A principal step in this response is to make more of the mRNA transcripts that direct the synthesis of these enzymes. Conversely, when there is enough cholesterol around, the cell stops making those mRNAs and the level of the enzymes falls. As a result, the cell quits making cholesterol once it has enough. A notable feature of this regulatory feedback machinery was first observed for the SREBP pathway - regulated intramembrane proteolysis (RIP). Subsequently, RIP was found to be used in almost all organisms from bacteria to human beings and regulates a wide range of processes ranging from development to neurodegeneration. A feature of the SREBP pathway is the proteolytic release of a membrane-bound transcription factor, SREBP. Proteolytic cleavage frees it to move through the cytoplasm to the nucleus. Once in the nucleus, SREBP can bind to specific DNA sequences (the sterol regulatory elements or SREs) that are found in the control regions of the genes that encode enzymes needed to make lipids. This binding to DNA leads to the increased transcription of the target genes. The ~120 kDa SREBP precursor protein is anchored in the membranes of the endoplasmic reticulum (ER) and nuclear envelope by virtue of two membrane-spanning helices in the middle of the protein. The precursor has a hairpin orientation in the membrane, so that both the amino-terminal transcription factor domain and the COOH-terminal regulatory domain face the cytoplasm. The two membrane-spanning helices are separated by a loop of about 30 amino acids that lies in the lumen of the ER. Two separate, site-specific proteolytic cleavages are necessary for release of the transcriptionally active amino-terminal domain. These cleavages are carried out by two distinct proteases, called site-1 protease (S1P) and site-2 protease (S2P). 
In addition to S1P and S2P, the regulated release of transcriptionally active SREBP requires the cholesterol-sensing protein SREBP cleavage-activating protein (SCAP), which forms a complex with SREBP owing to interaction between their respective carboxy-terminal domains. SCAP, in turn, can bind reversibly with another ER-resident membrane protein, INSIG. In the presence of sterols, which bind to INSIG and SCAP, INSIG and SCAP also bind one another. INSIG always stays in the ER membrane and thus the SREBP-SCAP complex remains in the ER when SCAP is bound to INSIG. When sterol levels are low, INSIG and SCAP no longer bind. Then, SCAP undergoes a conformational change that exposes a portion of the protein ('MELADL') that signals it to be included as cargo in the COPII vesicles that move from the ER to the Golgi apparatus. In these vesicles, SCAP, dragging SREBP along with it, is transported to the Golgi. The regulation of SREBP cleavage employs a notable feature of eukaryotic cells, subcellular compartmentalization defined by intracellular membranes, to ensure that cleavage occurs only when needed. Once in the Golgi apparatus, the SREBP-SCAP complex encounters active S1P. S1P cleaves SREBP at site-1, cutting it into two halves. Because each half still has a membrane-spanning helix, each remains bound in the membrane. The newly generated amino-terminal half of SREBP (which is the 'business end' of the molecule) then goes on to be cleaved at site-2, which lies within its membrane-spanning helix. This is the work of S2P, an unusual metalloprotease. This releases the cytoplasmic portion of SREBP, which then travels to the nucleus, where it activates transcription of target genes (e.g. the LDL receptor gene). Regulation Absence of sterols activates SREBP, thereby increasing cholesterol synthesis. Insulin, cholesterol derivatives, T3 and other endogenous molecules have been demonstrated to regulate SREBP-1c expression, particularly in rodents.
Serial deletion and mutation assays reveal that both SREBP (SRE) and LXR (LXRE) response elements are involved in SREBP-1c transcription regulation mediated by insulin and cholesterol derivatives. Peroxisome proliferator-activated receptor alpha (PPARα) agonists enhance the activity of the SREBP-1c promoter via a DR1 element at -453 in the human promoter. PPARα agonists act in cooperation with LXR or insulin to induce lipogenesis. A medium rich in branched-chain amino acids stimulates expression of the SREBP-1c gene via the mTORC1/S6K1 pathway. The phosphorylation of S6K1 was increased in the liver of obese db/db mice. Furthermore, depletion of hepatic S6K1 in db/db mice with the use of an adenovirus vector encoding S6K1 shRNA resulted in down-regulation of SREBP-1c gene expression in the liver, as well as a reduced hepatic triglyceride content and serum triglyceride concentration. mTORC1 activation is not sufficient to stimulate hepatic SREBP-1c in the absence of Akt signaling, revealing the existence of an additional downstream pathway also required for this induction, which is proposed to involve mTORC1-independent Akt-mediated suppression of INSIG-2a, a liver-specific transcript encoding the SREBP-1c inhibitor INSIG2. FGF21 has been shown to repress the transcription of sterol regulatory element-binding protein 1c (SREBP-1c). Overexpression of FGF21 ameliorated the up-regulation of SREBP-1c and fatty acid synthase (FAS) in HepG2 cells elicited by FFA treatment. Moreover, FGF21 could inhibit the transcription of the key genes involved in processing and nuclear translocation of SREBP-1c, and decrease the amount of mature SREBP-1c protein. Unexpectedly, overexpression of SREBP-1c in HepG2 cells could also inhibit endogenous FGF21 transcription by reducing FGF21 promoter activity. SREBP-1c has also been shown to upregulate, in a tissue-specific manner, the expression of PGC1alpha in brown adipose tissue.
Nur77 is suggested to inhibit LXR and downstream SREBP-1c expression, modulating hepatic lipid metabolism. History The SREBPs were elucidated in the laboratory of Nobel laureates Michael Brown and Joseph Goldstein at the University of Texas Southwestern Medical Center in Dallas. Their first publication on this subject appeared in October 1993. References External links The Brown and Goldstein Lab Cholesterol Synthesis - has some good regulatory details Protein Data Bank (PDB), Sterol Regulatory Element Binding 1A structure. Transcription factors
Sterol regulatory element-binding protein
Chemistry,Biology
1,995
19,568,001
https://en.wikipedia.org/wiki/Cauchy%20elastic%20material
In physics, a Cauchy-elastic material is one in which the stress at each point is determined only by the current state of deformation with respect to an arbitrary reference configuration. A Cauchy-elastic material is also called a simple elastic material. It follows from this definition that the stress in a Cauchy-elastic material does not depend on the path of deformation or the history of deformation, or on the time taken to achieve that deformation or the rate at which the state of deformation is reached. The definition also implies that the constitutive equations are spatially local; that is, the stress is only affected by the state of deformation in an infinitesimal neighborhood of the point in question, without regard for the deformation or motion of the rest of the material. It also implies that body forces (such as gravity) and inertial forces cannot affect the properties of the material. Finally, a Cauchy-elastic material must satisfy the requirements of material objectivity. Cauchy-elastic materials are mathematical abstractions, and no real material fits this definition perfectly. However, many elastic materials of practical interest, such as steel, plastic, wood and concrete, can often be assumed to be Cauchy-elastic for the purposes of stress analysis. Mathematical definition Formally, a material is said to be Cauchy-elastic if the Cauchy stress tensor σ is a function of the deformation gradient F alone: σ = G(F). This definition assumes that the effect of temperature can be ignored, and the body is homogeneous. This is the constitutive equation for a Cauchy-elastic material. Note that the response function G depends on the choice of reference configuration. Typically, the reference configuration is taken as the relaxed (zero-stress) configuration, but need not be. Material frame-indifference requires that the constitutive relation should not change when the location of the observer changes.
Therefore the constitutive equation for another arbitrary observer can be written σ* = G(F*). Knowing that the Cauchy stress tensor and the deformation gradient are objective quantities, that is, σ* = Q σ Q^T and F* = Q F, one can write: Q G(F) Q^T = G(Q F), where Q is a proper orthogonal tensor. The above is a condition that the constitutive law has to respect to make sure that the response of the material will be independent of the observer. Similar conditions can be derived for constitutive laws relating the deformation gradient to the first or second Piola-Kirchhoff stress tensor. Isotropic Cauchy-elastic materials For an isotropic material the Cauchy stress tensor can be expressed as a function of the left Cauchy-Green tensor B = F F^T. The constitutive equation may then be written: σ = H(B). In order to find the restriction on H which will ensure the principle of material frame-indifference, one can write: H(Q B Q^T) = Q H(B) Q^T. A constitutive equation that respects the above condition is said to be isotropic. Non-conservative materials Even though the stress in a Cauchy-elastic material depends only on the state of deformation, the work done by stresses may depend on the path of deformation. Therefore a Cauchy-elastic material in general has a non-conservative structure, and the stress cannot necessarily be derived from a scalar "elastic potential" function. Materials that are conservative in this sense are called hyperelastic or "Green-elastic". References Continuum mechanics Elasticity (physics)
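The frame-indifference condition Q G(F) Q^T = G(Q F) can be checked numerically for any candidate response function. A sketch using NumPy, with a compressible neo-Hookean stress as an illustrative (assumed) choice of G, not one prescribed by the article:

```python
import numpy as np

def cauchy_stress(F, mu=1.0, lam=1.0):
    """Compressible neo-Hookean Cauchy stress -- one possible objective
    response function G(F); the check below works for any objective G."""
    J = np.linalg.det(F)
    B = F @ F.T                       # left Cauchy-Green tensor
    I = np.eye(3)
    return (mu / J) * (B - I) + (lam * np.log(J) / J) * I

rng = np.random.default_rng(0)

# A random deformation gradient close to the identity (positive determinant).
F = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
assert np.linalg.det(F) > 0

# A random proper orthogonal tensor (rotation) via QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]

# Frame-indifference: G(Q F) must equal Q G(F) Q^T.
lhs = cauchy_stress(Q @ F)
rhs = Q @ cauchy_stress(F) @ Q.T
assert np.allclose(lhs, rhs)
```

The check succeeds here because B(QF) = Q B Q^T and det(QF) = det(F), so any stress built from B and J transforms objectively.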
Cauchy elastic material
Physics,Materials_science
674
5,847,675
https://en.wikipedia.org/wiki/Buffalo%20network-attached%20storage%20series
The Buffalo network-attached storage series is a line of network-attached storage (NAS) devices. The current lineup includes the LinkStation and TeraStation series. These devices have undergone various improvements since they were first produced, and have expanded to include a Windows Storage Server-based operating system. History Buffalo released the first TeraStation model, the HD-HTGL/R5, in December 2004. The second-generation model, the TS-TGL/R5, was released the following year with uninterrupted operation and improved operational stability. This was followed up with the TeraStation Pro and the TeraStation Pro II in 2006, which offered iSCSI support, as well as 2U rackmount models. In 2008, the fourth-generation TS-X models were released with hot swapping and replication, along with 1U rackmount versions. TeraStation The TeraStation is a network-attached storage device using a PowerPC or ARM architecture processor. Many TeraStation models are shipped with enterprise-grade internal hard drives mounted in a RAID array. Since January 2012, the TeraStation uses LIO for its iSCSI target. LinkStation The LinkStation is a network-attached storage device using a PowerPC or ARM architecture processor designed for personal use, aiming to serve as a central media hub and backup storage for a household. Compared to the TeraStation series, LinkStation devices typically offer a more streamlined UI and media server features. Current Product Lineup LinkStation The LinkStation is notable among the Linux community both in Japan and in the US/Europe for being "hackable" into a generic Linux appliance and made to do tasks other than the file storage and sharing tasks for which it was designed. As the device runs on Linux, and included changes to the Linux source code, Buffalo was required to release their modified versions of source code as per the terms of the GNU General Public License.
Due to the availability of source code and the relatively low cost of the device, there are several community projects centered around it. There are two main replacement firmware releases available for the device: the first is OpenLink, which is based on the official Buffalo firmware with some modifications and features added. The other is FreeLink, which is a Debian distribution. TeraStation Like the LinkStation, TeraStation devices run their own version of Linux, and some models run Windows Storage Server 2016. Debian and Gentoo Linux distributions and NetBSD are reported to have been ported to it. Operation The device in various iterations ships with its own Universal Plug and Play protocol for distribution of multimedia stored on the device. It can also be configured as a variety of different media servers: a TwonkyVision media server, a SlimServer/SqueezeCenter server, an iTunes server using the Digital Audio Access Protocol, a Samba server, an LIO iSCSI target, an MLDonkey client, as well as a Network File System server for POSIX-based systems. For use as a backup server, it can be modified to use rsync to back up or synchronize data from one or many computers in the network pushing their data, or even having the LinkStation pull the data from remote servers, besides the use of the Buffalo-provided backup software for Windows. It has also found use in a number of other ways, notably through its USB interface, which comes configured as a print server but can also use the Common Unix Printing System to act as such for a USB printer. Users have managed to get it to use a number of other USB devices with the version 2.6 Linux kernel's enhanced USB support. Additionally, because the Apache HTTP Server software is already installed for the purpose of providing the Buffalo configuration screens, the device is easily converted to be a lightweight web server (with the Buffalo content deleted) that can then serve any content of the operator's choice.
Achievements The LinkStation and TeraStation NAS devices have won various industry awards since their introduction, such as the TS51210RH winning Storage Product of the Year at the 2018 Network Computing Awards. The TeraStation has also won the SMB External Storage Hardware category of the CRN Annual Report Card (ARC) awards, which recognize exceptional vendor performance, for three years in a row. Gallery See also NSLU2 References Network-attached storage Storage area networks Computer storage devices Backup ARM architecture Linux-based devices Server appliance
Buffalo network-attached storage series
Technology,Engineering
905
20,885,991
https://en.wikipedia.org/wiki/Oxacephem
An oxacephem is a β-lactam molecule similar to a cephem, but with an oxygen substituted for the sulfur. Oxacephems are synthetic compounds not seen in nature, generally used as β-lactam antibiotics. Examples include latamoxef and flomoxef. References Antibiotics
Oxacephem
Biology
62
38,433,629
https://en.wikipedia.org/wiki/Neurochemistry%20International
Neurochemistry International is a peer-reviewed scientific journal covering research in neurochemistry, including molecular and cellular neurochemistry, neuropharmacology and genetic aspects of central nervous system function, neuroimmunology, metabolism as well as the neurochemistry of neurological and psychiatric disorders of the CNS. It is published by Elsevier and the editor-in-chief is Michael Robinson (Children's Hospital of Philadelphia). According to the Journal Citation Reports, the journal has a 2021 impact factor of 4.297. References External links Elsevier academic journals Neurochemistry Neuroscience journals English-language journals 10 times per year journals
Neurochemistry International
Chemistry,Biology
136
286,550
https://en.wikipedia.org/wiki/Safety-critical%20system
A safety-critical system or life-critical system is a system whose failure or malfunction may result in one (or more) of the following outcomes: death or serious injury to people loss or severe damage to equipment/property environmental harm A safety-related system (or sometimes safety-involved system) comprises everything (hardware, software, and human aspects) needed to perform one or more safety functions, in which failure would cause a significant increase in the safety risk for the people or environment involved. Safety-related systems are those that do not have full responsibility for controlling hazards such as loss of life, severe injury or severe environmental damage. The malfunction of a safety-involved system would only be that hazardous in conjunction with the failure of other systems or human error. Some safety organizations provide guidance on safety-related systems, for example the Health and Safety Executive in the United Kingdom. Risks of this sort are usually managed with the methods and tools of safety engineering. A safety-critical system is designed to lose less than one life per billion (10^9) hours of operation. Typical design methods include probabilistic risk assessment, a method that combines failure mode and effects analysis (FMEA) with fault tree analysis. Safety-critical systems are increasingly computer-based. Safety-critical systems are a concept often used together with the Swiss cheese model to represent (usually in a bow-tie diagram) how a threat can escalate to a major accident through the failure of multiple critical barriers. This use has become common especially in the domain of process safety, in particular when applied to oil and gas drilling and production, both for illustrative purposes and to support other processes, such as asset integrity management and incident investigation.
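In the simplest case of independent component failures, the fault tree analysis mentioned above reduces to multiplying probabilities through AND gates and complementing products through OR gates. A toy Python sketch with illustrative (assumed) failure probabilities, ignoring common-cause failures that a real analysis must model:

```python
# Toy fault tree: the top event requires failure of BOTH the primary
# controller AND its redundant backup (an AND gate); each channel fails
# if EITHER its hardware OR its software fails (an OR gate).
# The per-hour probabilities below are illustrative, not from any standard.

hw = 1e-6   # assumed per-hour hardware failure probability
sw = 2e-7   # assumed per-hour software failure probability

def or_gate(*probs):
    """P(at least one of several independent events) = 1 - prod(1 - p)."""
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

def and_gate(*probs):
    """P(all of several independent events) = prod(p)."""
    out = 1.0
    for p in probs:
        out *= p
    return out

channel = or_gate(hw, sw)          # probability one channel fails
top = and_gate(channel, channel)   # probability both redundant channels fail
assert top < 1e-9                  # within a "one per billion hours" budget
```

With these numbers each channel fails with probability about 1.2e-6 per hour, and the redundant pair only about 1.4e-12, which is how redundancy buys the budget quoted above, provided the two channels really do fail independently.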
Reliability regimes Several reliability regimes for safety-critical systems exist: Fail-operational systems continue to operate when their control systems fail. Examples of these include elevators, the gas thermostats in most home furnaces, and passively safe nuclear reactors. Fail-operational mode is sometimes unsafe. Nuclear weapons launch-on-loss-of-communications was rejected as a control system for the U.S. nuclear forces because it is fail-operational: a loss of communications would cause launch, so this mode of operation was considered too risky. This is contrasted with the fail-deadly behavior of the Perimeter system built during the Soviet era. Fail-soft systems are able to continue operating on an interim basis with reduced efficiency in case of failure. Most spare tires are an example of this: They usually come with certain restrictions (e.g. a speed restriction) and lead to lower fuel economy. Another example is the "Safe Mode" found in most Windows operating systems. Fail-safe systems become safe when they cannot operate. Many medical systems fall into this category. For example, an infusion pump can fail, and as long as it alerts the nurse and ceases pumping, it will not threaten the loss of life because its safety interval is long enough to permit a human response. In a similar vein, an industrial or domestic burner controller can fail, but must fail in a safe mode (i.e. turn combustion off when they detect faults). Famously, nuclear weapon systems that launch-on-command are fail-safe, because if the communications systems fail, launch cannot be commanded. Railway signaling is designed to be fail-safe. Fail-secure systems maintain maximum security when they cannot operate. For example, while fail-safe electronic doors unlock during power failures, fail-secure ones will lock, keeping an area secure. Fail-passive systems continue to operate in the event of a system failure. An example includes an aircraft autopilot.
In the event of a failure, the aircraft would remain in a controllable state and allow the pilot to take over and complete the journey and perform a safe landing. Fault-tolerant systems avoid service failure when faults are introduced to the system. An example may include control systems for ordinary nuclear reactors. The normal method to tolerate faults is to have several computers continually test the parts of a system, and switch on hot spares for failing subsystems. As long as faulty subsystems are replaced or repaired at normal maintenance intervals, these systems are considered safe. The computers, power supplies and control terminals used by human beings must all be duplicated in these systems in some fashion. Software engineering for safety-critical systems Software engineering for safety-critical systems is particularly difficult. There are three aspects which can be applied to aid the engineering of software for life-critical systems. The first is process engineering and management. The second is selecting the appropriate tools and environment for the system; this allows the system developer to effectively test the system by emulation and observe its effectiveness. The third is addressing any legal and regulatory requirements, such as Federal Aviation Administration requirements for aviation. By setting a standard under which a system is required to be developed, designers are forced to stick to the requirements. The avionics industry has succeeded in producing standard methods for producing life-critical avionics software. Similar standards exist for industry in general (IEC 61508) and for the automotive (ISO 26262), medical (IEC 62304) and nuclear (IEC 61513) industries specifically. The standard approach is to carefully code, inspect, document, test, verify and analyze the system. Another approach is to certify a production system, a compiler, and then generate the system's code from specifications.
Another approach uses formal methods to generate proofs that the code meets requirements. All of these approaches improve the software quality in safety-critical systems by testing or eliminating manual steps in the development process, because people make mistakes, and these mistakes are the most common cause of potential life-threatening errors. Examples of safety-critical systems Infrastructure Circuit breaker Emergency services dispatch systems Electricity generation, transmission and distribution Fire alarm Fire sprinkler Fuse (electrical) Fuse (hydraulic) Life support systems Telecommunications Medicine The technology requirements can go beyond avoidance of failure, and can even facilitate medical intensive care (which deals with healing patients), and also life support (which is for stabilizing patients). Heart-lung machines Anesthetic machines Mechanical ventilation systems Infusion pumps and Insulin pumps Radiation therapy machines Robotic surgery machines Defibrillator machines Pacemaker devices Dialysis machines Devices that electronically monitor vital functions (electrography; especially, electrocardiography, ECG or EKG, and electroencephalography, EEG) Medical imaging devices (X-ray, computerized tomography- CT or CAT, different magnetic resonance imaging- MRI- techniques, positron emission tomography- PET) Even healthcare information systems have significant safety implications Nuclear engineering Nuclear reactor control systems Oil and gas production Process containment Well integrity Hull integrity (for floating production storage and offloading) Jacket and topside structures Lifting equipment Helidecks Mooring systems Fire and gas detection Critical instrumented functions (process shutdown, emergency shutdown) Actuated isolation valves Pressure relief devices Blowdown valves and flare system Drilling well control (blowout preventer, mud and cement) Ventilation and heating, ventilation, and air conditioning Drainage systems Ballast systems Hull 
cargo tanks inerting system Heading control Ignition prevention (Ex certified electrical equipment, insulated hot surfaces, etc.) Firewater pumps Firewater and foam distribution piping Firewater and foam monitors Deluge valves Gaseous fire suppression systems Firewater hydrants Passive fire protection Temporary Refuge Escape routes Lifeboats and liferafts Personal survival equipment (e.g., lifejackets) Recreation Amusement rides Climbing equipment Parachutes Scuba equipment Diving rebreather Dive computer (depending on use) Transport Railway Railway signalling and control systems Platform detection to control train doors Automatic train stop Automotive Airbag systems Braking systems Seat belts Power Steering systems Advanced driver-assistance systems Electronic throttle control Battery management system for hybrids and electric vehicles Electric park brake Shift by wire systems Drive by wire systems Park by wire Aviation Air traffic control systems Avionics, particularly fly-by-wire systems Radio navigation (Receiver Autonomous Integrity Monitoring) Engine control systems Aircrew life support systems Flight planning to determine fuel requirements for a flight Spaceflight Human spaceflight vehicles Rocket range launch safety systems Launch vehicle safety Crew rescue systems Crew transfer systems See also High integrity software Real-time computing (risk analysis software) References External links An Example of a Life-Critical System Safety-critical systems Virtual Library Explanation of Fail Operational and Fail Passive in Avionics NASA Technical Standards System Software Assurance and Software Safety Standard Computer systems Control engineering Engineering failures Formal methods Safety Risk analysis Process safety Safety engineering Software quality
Safety-critical system
Chemistry,Technology,Engineering
1,730
14,777,746
https://en.wikipedia.org/wiki/60S%20ribosomal%20protein%20L3
60S ribosomal protein L3 is a protein that in humans is encoded by the RPL3 gene. Function Ribosomes, the complexes that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. The RPL3 gene encodes a ribosomal protein that is a component of the 60S subunit. The protein belongs to the L3P family of ribosomal proteins and it is located in the cytoplasm. The protein can bind to the HIV-1 TAR mRNA, and it has been suggested that the protein contributes to tat-mediated transactivation. This gene is co-transcribed with several small nucleolar RNA genes, which are located in several of this gene's introns. Alternate transcriptional splice variants, encoding different isoforms, have been characterized. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome. References Further reading Ribosomal proteins
60S ribosomal protein L3
Chemistry
216
75,986,282
https://en.wikipedia.org/wiki/Wrights%20tunnel
The Wrights Tunnel (also known as the Summit Tunnel, Tunnel 2, or Tunnel 1 after the daylighting of the Cats Canyon tunnel) is a railroad tunnel located in the Santa Cruz Mountains in Santa Clara and Santa Cruz Counties, California. Opened in 1880 after almost two years of construction involving numerous fatalities, the tunnel was at one point the longest tunnel in California and one of the longest tunnels in the United States. It carried the tracks of the narrow gauge South Pacific Coast Railroad, which ran trains from San Francisco to Santa Cruz until the railroad was acquired by Southern Pacific Railroad. Southern Pacific upgraded the tracks to standard gauge and continued operating trains through the line and its tunnel until a major storm in 1940 washed out sections of the track in the Santa Cruz Mountains. After two years without rail traffic, Southern Pacific abandoned the line. Subsequently, the United States Army Corps of Engineers collapsed both portals with explosives, destroying the northern portal in the process. The interior of the tunnel remains intact along with the south portal, but the condition of the interior is unknown, particularly since the tunnel crosses the San Andreas Fault and no one has entered it in the aftermath of the Loma Prieta earthquake. Construction After the route of the future South Pacific Coast Railroad through the Santa Cruz Mountains was determined in September 1878, construction of the tunnel commenced the following month. Camp sites, occupied almost exclusively by Chinese laborers, developed at each portal. These sites led to the founding of Wrights, located directly adjacent to the north portal, and Highland, now known as Laurel, located adjacent to the south portal and Burns Creek. Construction lasted around two years, during which dozens of Chinese laborers were killed in multiple explosions caused by a methane leak within the tunnel; the leak was discovered on November 16, 1878. 
There was also crude oil leaking into the tunnel and coal deposits within it, with the former in particular contributing to the poor working conditions of the laborers. Workers used lit candles to try to burn off the methane from its unknown source, which proved fruitless, all while workers kept passing out from the gas. Valentine's Day explosion On February 14, 1879, the methane within the north branch of the tunnel ignited, causing a massive explosion that killed fourteen Chinese workers and burned many others. Thereafter, work was halted on the tunnel for three months while engineers sought a solution to the methane problem. A crude ventilation system was installed to pump fresh air into the incomplete tunnel. After this incident, the Chinese workers who resided in Wrights refused to reenter the tunnel, and different Chinese workers were brought in to complete the work. June cave-in During construction in June 1879, multiple creosote-treated redwood support beams in the tunnel ignited and the fire spread from one to another, compromising the structural integrity of that portion of the tunnel and causing the segment to cave in. This set the project back by another two months. November explosion After the previous two incidents in the same year, a pair of massive explosions occurred within the north branch of the tunnel shortly before midnight on November 17, 1879, killing 32 Chinese workers and injuring many others as well. The explosion was caused by a flame lit to blast the rock with explosives; the flame ignited the high concentration of methane in the air, and the subsequent explosion severely shook the surrounding area. At the portal, other workers felt the shock produced by the explosion, and 20 Chinese workers subsequently rushed into the tunnel with torches to rescue the injured. 
After traversing into the tunnel, another even more massive explosion occurred which essentially turned this half of the tunnel into a barrel, with a mountain of flame spewing out of the north portal. This explosion also destroyed the engine house and a shed within a hundred feet of the north portal. Thereafter, the methane leak was discovered right by the north portal and a lantern was placed by it to flare off the methane to prevent another explosion. North portal collapse In the winter of 1893, the wooden north portal by Wrights collapsed after a winter storm. The portal is located in a gully where water from the mountains above collects and flows over the portal onto the tracks, bringing debris with it, often landing on the right of way and blocking it. With the collapse of the portal, a new concrete portal was installed with an adjacent spillway to resolve this issue. The new portal was also designed to be larger to provide room for the future standard gauging of the tracks. Narrow gauge operations The tunnel opened to rail service on May 10, 1880. Passenger and freight service to Santa Cruz would pass through the tunnel, including the now famous Suntan Special. Much like how California State Route 17 becomes severely congested on weekends and during the summer in the present, tourists from the San Francisco Bay Area would flock to the Suntan special to spend a day or the weekend at the beaches of Santa Cruz, while others would take the train to whistle stops throughout the Santa Cruz Mountains to hike, picnic, or relax in the redwood forests, both of which have become less accessible to the average person since the abandonment of the railroad. In 1895, H.S. 
Kneedler wrote about the line in his book Through Storyland to Sunset Seas. Although the line was a major success for passenger rail to Santa Cruz, it was also a huge success for freight, with numerous quarries, sawmills, farmers, and other industries relying on the Summit Tunnel to transport their products to the seaports of Oakland and San Francisco. 1906 earthquake and reconstruction Since the tunnel runs through the San Andreas Fault Zone near Wrights, it suffered severe damage in the 1906 San Francisco earthquake, causing a one-year closure of the tunnel and the railroad through the mountains. Because of the slip in the fault that caused the earthquake, the segments of the tunnel on the Pacific Plate and the North American Plate were displaced by five feet, requiring the formerly straight tunnel to incorporate a curve to be aligned again. The tunnel was repaired thereafter and widened to make way for standard gauge trains, which began using the tunnel in 1909. The western portal was also replaced due to the earthquake, and a brick ceiling was installed for the first three hundred feet of the tunnel to prevent collapse of the sandstone present there. The brick ceiling remains exposed to this day, as the collapse carried out when the tunnel was closed was conducted further within. The tunnel was also retimbered with redwood timbers by 1907. Standard gauge operations and abandonment After the reconstruction and retrofit of the existing tunnel, it continued to carry trains through the summit of the Santa Cruz Mountains without any major incidents. The tunnel operated for 33 years after its reopening and saw its last train in February 1940. After two years of inactivity on the line through the mountains, Southern Pacific abandoned the segment of the railroad between Downtown Los Gatos in Los Gatos and Olympia, along with the Summit Tunnel, in 1942. 
Both portals were blasted to preserve the interior of the tunnel, to prevent trespassing, and for insurance reasons. The blast at the north portal caused it to partially collapse, a state in which it remains; alongside the concrete piers over Los Gatos Creek, it is one of the last remnants of Wrights, a town that vanished after the railroad's departure, although it had already been in decline for a couple of decades by that point. Prior to the blasting of the tunnel, the rails and timbers of value within it were removed by H. A. Christie under contract with the Southern Pacific Railroad. References Railroad tunnels in California Tunnels completed in 1880 Transportation buildings and structures in Santa Cruz County, California Transportation buildings and structures in Santa Clara County, California 1880 establishments in California Demolished buildings and structures in California Buildings and structures demolished in 1942 Buildings and structures demolished by controlled implosion
Wrights tunnel
Engineering
1,586
6,815,165
https://en.wikipedia.org/wiki/Belweder%20%28TV%20set%29
Belweder was the brand name of the OT1471 television set, manufactured in the People's Republic of Poland (PRL) from 1957 to 1960 at the Warszawskie Zakłady Telewizyjne (WZT). It was the second (after the Wisła) TV set made in Poland and the first one designed entirely domestically. The communist authorities of the PRL saw TV set manufacturing not only as satisfying the consumption needs of the citizens, but also as a way of popularizing a potentially powerful propaganda medium, which is why the development of television in general, and of TV sets in particular, enjoyed strong support within the reality of a centrally planned economy. The first plans for the new device, along with a laboratory model, were created at the WZT in 1955. In contrast to WZT's first product, the Wisła, which was to a large extent based on solutions licensed from the Soviet Union with many components imported from there, the new TV was to be a modern design using domestic technology only, even though many of the components of the Belweder had not been manufactured in Poland before, and the manufacturing of plastics had to be set up virtually from scratch. The resulting TV set had a 14-inch screen, external dimensions of 51x41x34 cm, and weighed 23 kg. It could receive up to eight TV channels and FM radio. The channel switch could only be set to receive a signal from the transmitters in the part of Poland where a given example was sold - there were two distinct versions, one for the southern and one for the northern part of Poland. A Belweder cost 7000 złoty at a time when the average monthly salary was between one and two thousand; yet, like many consumer goods in the communist economies, it proved very sought after and hard to buy. 
This seems strange to Westerners used to free-market economies; the explanation is that although the TV set cost a few monthly salaries, communist economies produced so little in the way of consumer goods that people normally had large amounts of money saved, simply because there was nothing else to buy. Accordingly, demand constantly outpaced supply, since centrally planned economies, lacking a private sector, did not naturally increase supply to meet demand or raise prices to reduce it. In 1958, production reached 60,000 units. Later, the Belweder and Wisła were joined by more modern models, the Wawel (named after the Wawel castle) and the "gems" family - Turkus (turquoise), Jantar (amber) and Szmaragd (emerald). Those models gradually superseded the older, and still unreliable, Wisła and Belweder, and in 1960 the production of all types of TV sets at WZT reached 200,000. The "gems" were later superseded by the "planet" series, beginning with the popular Neptun (Neptune). With over 150,000 units sold in total, the Belweder can be credited with making television a popular medium in Poland for the first time. References Television technology 1950s in Poland
Belweder (TV set)
Technology
635
45,248,042
https://en.wikipedia.org/wiki/EL/M-2084
The ELM-2084 is an Israeli ground-based mobile 3D AESA multi-mission radar (MMR) family produced by ELTA, a subsidiary of Israel Aerospace Industries. The radar is capable of detecting and tracking both aircraft and ballistic targets and of providing fire control guidance for missile interception or artillery air defense. Several versions of the radar have been purchased and are operated by a number of armies, including the Israel Defense Forces, the Canadian Army, the Republic of Singapore Air Force, the Army of the Czech Republic, and the Slovak Armed Forces. System development The MMR's development was launched by Elta and the Administration for the Development of Weapons and Technology (Hebrew abbreviation: Maf'at) in 2002 as a response to the growing ballistic threat to Israel. A prototype of the system was used during the IDF operation "Cast Lead" in 2008 as an early warning radar, detecting Hamas artillery fire and providing accurate alerts for the Israeli home front. The first successful interception using the MMR as a fire control unit took place on April 7, 2011, with the Iron Dome intercepting a rocket fired from the Gaza Strip towards Ashkelon, a city in southern Israel. Description As a tactical radar, the ELM-2084 is a mobile system consisting of the radar unit, a control module, a cooling unit and a power generator. It can be mounted on a variety of transport platforms. The radar was designed to accommodate medium-range operational needs in the battlefield: detection, classification and tracking of targets. The radar's main missions are: Hostile weapon location – detection and tracking of hostile ballistic projectiles, and calculation of enemy launcher or artillery positions. Early warning – impact point calculation for warning the civilian population and military rear units. Friendly fire ranging – tracking of friendly artillery and providing corrections to the firing unit. Aerial surveillance – detecting and tracking aircraft, maintaining a continuous aerial picture. 
Fire control – guidance for various air defense systems, including Iron Dome, David's Sling, Skyhunter, and SPYDER-MR (Medium Range). Notable features of the radar include advanced active electronically scanned array (AESA) technology, extensive operational experience due to participation in a large number of interceptions, and interoperability with modern battle management systems and several different interceptor missiles. The radar is advertised as capable of processing all types of threats – aerial and ballistic, including low radar cross-section (RCS) targets. Variants ELM-2084 This variant is the most prominent member of the family, with two sub-variants distinguished by antenna size and range capabilities: the MMR is capable of air surveillance, hostile weapon locating and fire guidance at medium range. ELM-2084 Mini MMR The M-MMR is a scaled-down MMR variant for medium-range threats. ELM-2311 The ELM-2311 is a C-band tactical C-RAM radar built for the battalion level. It is designed for a single vehicle platform, with a small operational crew. The radar is designed to operate in artillery fire ranging and hostile weapon locating roles. ELM-2248 MF-STAR The MF-STAR radar is a naval implementation of the MMR, made up of four MMR modules mounted around a pyramid-shaped mast. The radar provides full 360º coverage for air surveillance, hostile weapon locating and fire guidance capabilities on naval platforms. The ELM-2248 is in service in the Israeli and Indian navies, and is the fire control radar for the Barak 8 system. ELM-2248 LB The MF-STAR LB is a land-based variant consisting of a single rotating module. It is believed to work with a ground version of the Barak-8 surface-to-air missile system. Operators The IDF employs several variants of the MMR as air defense and artillery detection radars, and as fire control radars for its air defense systems. Quantity unknown. 
The radar was declared operational in 2010 and is operated as the main fire ranging component of the Israeli Artillery Corps' spotting battalion. The ELM-2084 is an essential component of the Israeli hostile weapon locating, aerial surveillance and early warning architecture, providing constant coverage of the Israeli borders both in peacetime and in Israel's recent conflicts. ELM-2084 radars are used on the IDF's Iron Dome and David's Sling air defense systems as fire control radars, with an advertised capability to process dozens of threats simultaneously. A notable success rate of 90 percent over more than 1000 interceptions is reported for the Iron Dome system. Azerbaijan has employed the ELM-2084 since at least 2019. Canada has purchased ten ELM-2084 multi-mission radars. They were expected to enter operational service in late 2020 with the designation AN/MPQ-504. The Czech Ministry of Defense is reported to have purchased 8 ELM-2084 radars scheduled to be delivered by 2023; the first ELM-2084 MMR was delivered in April 2022. After difficulties with the system and ELTA's failure to deliver its manuals, the system passed the army tests on 21 April 2023. In May 2023, the five already delivered of the eight ordered units were to be accepted by the Czech Army and put into service. Finland purchased a "significant" number of ELM-2311 radars, mainly for counter-battery use, in 2019, with deliveries scheduled for 2021. They are also to be used for secondary air surveillance purposes. The radar system was tested in summer 2018 and was deemed the best of the systems which the FDF had selected for the competition. On December 11, 2020, the Hungarian government announced it had ordered multiple (5+6) ELM-2084 radar systems from Israel Aerospace Industries, with Rheinmetall's Canadian subsidiary providing sales and integration, and with Rheinmetall Canada also establishing assembly, future manufacturing and system development in Hungary. 
An unknown number of radars in various M-MMR and F-MMR configurations are expected to start replacing Soviet-made but modernized P-37, PRV-17 and ST-68U locators from 2022, both augmenting the RAT-31DL-based backbone of NATO airspace surveillance and Hungary's national air defense network and providing state-of-the-art counter-battery capabilities for artillery regiments of the Hungarian Defence Forces. India operates the MF-STAR naval radar; quantity unknown. ELM-2084 radars are used by the Singapore Armed Forces for air surveillance and air defense roles. Quantity unknown. ELM-2311 radars are mounted on the Bronco ATTC as the 'SAFARI' radar. They were acquired with SPYDER air-defense systems, and sources claim that they are the EL/M-2084 MMR variant. Publicised pictures showed that at least one system was commissioned and operated. References External links Ground radars Military radars of Israel Elta products Weapon locating radar
EL/M-2084
Technology
1,364
41,717,027
https://en.wikipedia.org/wiki/Assisted%20feeding
Assisted feeding, also called hand feeding or oral feeding, is the action of a person feeding another person who cannot otherwise feed themselves. The term is used in the context of some medical issue or in response to a disability, such as when a person living with dementia is no longer able to manage eating alone. The person being fed must be able to eat by mouth, but lacks either the cognitive or physical ability to self-feed. Individuals who are born with a disability such as cerebral palsy or arthrogryposis multiplex congenita (AMC) may be unable to feed themselves. Also, those who acquire a disability due to an accident or a disease like amyotrophic lateral sclerosis (ALS) may require hand feeding because they may become unable to pick up and bring food to their own mouth. Assisted feeding as an alternative to tube feeding A feeding tube is a medical device used to provide nutrition to patients who cannot obtain nutrition by mouth, are unable to swallow safely, or need nutritional supplementation. Patients who are able to use assisted feeding should receive it in preference to tube feeding whenever possible. Oral assisted feeding is preferable to percutaneous feeding in individuals with advanced dementia. Monetary costs In the United States, a study reviewed a set of patients and found that the expense of arranging assisted feeding for patients was higher than the cost of using a feeding tube. References Further reading Digestive system procedures Interpersonal relationships
Assisted feeding
Biology
291
36,922,551
https://en.wikipedia.org/wiki/%C3%89cole%20nationale%20des%20sciences%20appliqu%C3%A9es%20de%20Khouribga
The National School of Applied Sciences in Khouribga (ENSA Khouribga) is a Moroccan public engineering school within the Sultan Moulay Souliman University of Beni Mellal. It was created in 2007 to support the government's commitment, under the National Training Initiative, to train 10,000 engineers by 2010. It trains state engineers who are scientifically and technically qualified, with training in modeling, communication and management. It is part of the National Schools of Applied Sciences network. Programmes Computer engineering Telecommunication and Networks engineering Engineering Processes for Energy and the Environment Electrical engineering External links Site officiel de l'ENSA de Khouribga Education in Morocco Engineering universities and colleges 2007 establishments in Morocco Educational institutions established in 2007
École nationale des sciences appliquées de Khouribga
Engineering
151
1,151,048
https://en.wikipedia.org/wiki/FL%20%28complexity%29
In computational complexity theory, the complexity class FL is the set of function problems which can be solved by a deterministic Turing machine in a logarithmic amount of memory space. As in the definition of L, the machine reads its input from a read-only tape and writes its output to a write-only tape; the logarithmic space restriction applies only to the read/write working tape. Loosely speaking, a function problem takes a complicated input and produces a (perhaps equally) complicated output. Function problems are distinguished from decision problems, which produce only yes or no answers and correspond to the set L of decision problems which can be solved in deterministic logspace. FL is a subset of FP, the set of function problems which can be solved in deterministic polynomial time. FL is known to contain several natural problems, including arithmetic on numbers. Addition, subtraction and multiplication of two numbers are fairly simple, but division is a far deeper problem that was open for decades. Similarly one may define FNL, which has the same relation to NL as FNP has to NP. References Complexity classes Functions and mappings
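The claim that addition is easy in logarithmic space can be illustrated with a small sketch, not from the source and with a function name of my own: adding two binary numbers streamed least-significant-bit first needs only a one-bit carry between steps, mirroring how a logspace transducer reads its input tape and writes each output bit without ever storing the whole result.

```python
import itertools

def add_binary_streams(a_bits, b_bits):
    """Add two binary numbers given as iterables of bits, least-significant
    bit first, yielding the sum bit by bit. Only the one-bit carry is kept
    between steps, mirroring the logspace bound (input read-only, output
    write-only)."""
    carry = 0
    for x, y in itertools.zip_longest(a_bits, b_bits, fillvalue=0):
        s = x + y + carry
        yield s & 1      # emit the output bit immediately
        carry = s >> 1   # the only state carried forward
    if carry:
        yield carry

# 6 = 110 and 3 = 011, written least-significant bit first:
print(list(add_binary_streams([0, 1, 1], [1, 1])))  # -> [1, 0, 0, 1], i.e. 9
```

In a faithful logspace model the working tape would also hold O(log n) bits for input-head positions; the constant-size carry is the essential point.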
FL (complexity)
Mathematics
234
25,825
https://en.wikipedia.org/wiki/Red
Red is the color at the long wavelength end of the visible spectrum of light, next to orange and opposite violet. It has a dominant wavelength of approximately 625–740 nanometres. It is a primary color in the RGB color model and a secondary color (made from magenta and yellow) in the CMYK color model, and is the complementary color of cyan. Reds range from the brilliant yellow-tinged scarlet and vermillion to bluish-red crimson, and vary in shade from the pale red pink to the dark red burgundy. Red pigment made from ochre was one of the first colors used in prehistoric art. The Ancient Egyptians and Mayans colored their faces red in ceremonies; Roman generals had their bodies colored red to celebrate victories. It was also an important color in China, where it was used to color early pottery and later the gates and walls of palaces. In the Renaissance, the brilliant red costumes for the nobility and wealthy were dyed with kermes and cochineal. The 19th century brought the introduction of the first synthetic red dyes, which replaced the traditional dyes. Red became a symbolic color of communism and socialism; Soviet Russia adopted a red flag following the Bolshevik Revolution in 1917. The Soviet red banner would subsequently be used throughout the entire history of the Soviet Union. China adopted its own red flag following the Chinese Communist Revolution. A red flag was also adopted by North Vietnam in 1954, and by all of Vietnam in 1975. Since red is the color of blood, it has historically been associated with sacrifice, danger, and courage. Modern surveys in Europe and the United States show red is also the color most commonly associated with heat, activity, passion, sexuality, anger, love, and joy. In China, India, and many other Asian countries it is the color symbolizing happiness and good fortune. 
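The complementary relationship between red and cyan stated above can be checked numerically in the RGB model; this is a minimal sketch (the function name is my own, and 8-bit channels are assumed):

```python
def complement(rgb):
    """Return the RGB complement by inverting each 8-bit channel."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

print(complement((255, 0, 0)))  # pure red -> (0, 255, 255), i.e. cyan
```

Applying the function twice returns the original color, as expected of a complement.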
Shades and variations Varieties of the color red may differ in hue, chroma (also called saturation, intensity, or colorfulness), or lightness (or value, tone, or brightness), or in two or three of these qualities. Variations in value are also called tints and shades, a tint being a red or other hue mixed with white, a shade being mixed with black. Four examples are shown below. In science and nature Seeing red The human eye sees red when it looks at light with a wavelength between approximately 625 and 740 nanometers. It is a primary color in the RGB color model. Light just past this range is called infrared, or below red, and cannot be seen by human eyes, although it can be sensed as heat. In the language of optics, red is the color evoked by light that stimulates neither the S nor the M (short and medium wavelength) cone cells of the retina, combined with a fading stimulation of the L (long-wavelength) cone cells. Primates can distinguish the full range of the colors of the spectrum visible to humans, but many kinds of mammals, such as dogs and cattle, have dichromacy, which means they can see blues and yellows, but cannot distinguish red and green (both are seen as gray). Bulls, for instance, cannot see the red color of the cape of a bullfighter, but they are agitated by its movement. (See color vision). One theory for why primates developed sensitivity to red is that it allowed ripe fruit to be distinguished from unripe fruit and inedible vegetation. This may have driven further adaptations by species taking advantage of this new ability, such as the emergence of red faces. Red light is used to help adapt night vision in low-light or night-time conditions, as the rod cells in the human eye are not sensitive to red. In color theory and on a computer screen In the RYB color model, which is the basis of traditional color theory, red is one of the three primary colors, along with blue and yellow. 
Painters in the Renaissance mixed red and blue to make violet: Cennino Cennini, in his 15th-century manual on painting, wrote, "If you want to make a lovely violet colour, take fine lac (red lake), ultramarine blue (the same amount of the one as of the other) with a binder"; he noted that it could also be made by mixing blue indigo and red hematite. In the CMY and CMYK color models, red is a secondary color subtractively mixed from magenta and yellow. In the RGB color model, red, green and blue are additive primary colors. Red, green and blue light combined makes white light, and these three colors, combined in different mixtures, can produce nearly any other color. This principle is used to generate colors on devices such as computer monitors and televisions. For example, magenta on a computer screen is made by a similar formula to that used by Cennino Cennini in the Renaissance to make violet, but using additive colors and light instead of pigment: it is created by combining red and blue light at equal intensity on a black screen. Violet is made on a computer screen in a similar way, but with a greater amount of blue light and less red light. Color of sunset As a ray of white sunlight travels through the atmosphere to the eye, some of the colors are scattered out of the beam by air molecules and airborne particles due to Rayleigh scattering, changing the final color of the beam that is seen. Colors with a shorter wavelength, such as blue and green, scatter more strongly, and are removed from the light that finally reaches the eye. At sunrise and sunset, when the path of the sunlight through the atmosphere to the eye is longest, the blue and green components are removed almost completely, leaving the longer wavelength orange and red light. The remaining reddened sunlight can also be scattered by cloud droplets and other relatively large particles, which give the sky above the horizon its red glow. 
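The additive mixing described above (red plus blue giving magenta, all three primaries giving white) can be sketched in a few lines; the helper name and the per-channel clamp at 255 are my own assumptions for an 8-bit display:

```python
def mix_additive(*lights):
    """Additively mix RGB light sources, clamping each 8-bit channel at 255."""
    return tuple(min(255, sum(channel)) for channel in zip(*lights))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(mix_additive(red, blue))         # -> (255, 0, 255), magenta
print(mix_additive(red, green, blue))  # -> (255, 255, 255), white
```

Halving the red component while keeping full blue approximates the violet described in the text, which uses more blue light and less red.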
Lasers Lasers emitting in the red region of the spectrum have been available since the invention of the ruby laser in 1960. In 1962 the red helium–neon laser was invented, and these two types of lasers were widely used in many scientific applications including holography, and in education. Red helium–neon lasers were used commercially in LaserDisc players. The use of red laser diodes became widespread with the commercial success of modern DVD players, which use 660 nm laser diode technology. Today, red and red-orange laser diodes are widely available to the public in the form of extremely inexpensive laser pointers. Portable, high-powered versions are also available for various applications. More recently, 671 nm diode-pumped solid state (DPSS) lasers have been introduced to the market for all-DPSS laser display systems, particle image velocimetry, Raman spectroscopy, and holography. Red's wavelength has been an important factor in laser technologies; red lasers, used in early compact disc technologies, are being replaced by blue lasers, as red's longer wavelength causes the laser's recordings to take up more space on the disc than blue-laser recordings would. Astronomy Mars is called the Red Planet because of the reddish color imparted to its surface by the abundant iron oxide present there. Astronomical objects that are moving away from the observer exhibit a Doppler red shift. Jupiter displays the Great Red Spot, caused by an oval-shaped mega storm south of the planet's equator. Red giants are stars that have exhausted the supply of hydrogen in their cores and switched to thermonuclear fusion of hydrogen in a shell surrounding the core. They have radii tens to hundreds of times larger than that of the Sun. However, their outer envelope is much lower in temperature, giving them an orange hue. Despite the lower energy density of their envelope, red giants are many times more luminous than the Sun due to their large size. 
Red supergiants like Betelgeuse, Antares, Mu Cephei, VV Cephei, and VY Canis Majoris, one of the biggest stars in the universe, are the biggest variety of red giants. They are huge in size, with radii 200 to 1700 times greater than the Sun's, but relatively cool in temperature (3000–4500 K), causing their distinct red tint. Because they are shrinking rapidly in size, they are surrounded by an envelope or skin much bigger than the star itself. The envelope of Betelgeuse is 250 times bigger than the star inside. A red dwarf is a small and relatively cool star, which has a mass of less than half that of the Sun and a surface temperature of less than 4,000 K. Red dwarfs are by far the most common type of star in the Galaxy, but due to their low luminosity, none are visible from Earth with the naked eye. Interstellar reddening is caused by the extinction of radiation by dust and gas. Pigments and dyes Food coloring The most common synthetic food coloring today is Allura Red AC, a red azo dye that goes by several names including Allura Red, Food Red 17, C.I. 16035, and FD&C Red 40. It was originally manufactured from coal tar, but now is mostly made from petroleum. In Europe, Allura Red AC is not recommended for consumption by children. It is banned in Denmark, Belgium, France and Switzerland, and was also banned in Sweden until the country joined the European Union in 1994. The European Union approves Allura Red AC as a food colorant, but EU countries' local laws banning food colorants are preserved. In the United States, Allura Red AC is approved by the Food and Drug Administration (FDA) for use in cosmetics, drugs, and food. It is used in some tattoo inks and in many products, such as soft drinks, children's medications, and cotton candy. On June 30, 2010, the Center for Science in the Public Interest (CSPI) called for the FDA to ban Red 40. 
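The link between the cool envelope temperatures quoted above (3000–4500 K) and a red appearance can be illustrated with Wien's displacement law, λ_peak = b/T. This is an illustrative sketch, not from the source; the numbers are approximate blackbody values and the function name is my own:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, in metre-kelvins

def peak_wavelength_nm(temperature_k):
    """Peak blackbody emission wavelength in nanometres (Wien's law)."""
    return WIEN_B / temperature_k * 1e9

# A ~3500 K supergiant envelope peaks in the near infrared (~828 nm),
# so the visible part of its spectrum is dominated by red light.
print(round(peak_wavelength_nm(3500)))  # -> 828
# The Sun (~5778 K) peaks near 502 nm, toward the middle of the visible band.
print(round(peak_wavelength_nm(5778)))  # -> 502
```

Real stars are only approximately blackbodies, so this gives the trend (cooler means redder) rather than exact colors.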
Because of public concerns about possible health risks associated with synthetic dyes, many companies have switched to using natural pigments such as carmine, made from crushing the tiny female cochineal insect. This insect, originating in Mexico and Central America, was used to make the brilliant scarlet dyes of the European Renaissance. Autumn leaves The red of autumn leaves is produced by pigments called anthocyanins. They are not present in the leaf throughout the growing season, but are actively produced towards the end of summer. They develop in late summer in the sap of the cells of the leaf, and this development is the result of complex interactions of many influences—both inside and outside the plant. Their formation depends on the breakdown of sugars in the presence of bright light as the level of phosphate in the leaf is reduced. During the summer growing season, phosphate is at a high level. It has a vital role in the breakdown of the sugars manufactured by chlorophyll. But in the fall, phosphate, along with the other chemicals and nutrients, moves out of the leaf into the stem of the plant. When this happens, the sugar-breakdown process changes, leading to the production of anthocyanin pigments. The brighter the light during this period, the greater the production of anthocyanins and the more brilliant the resulting color display. When the days of autumn are bright and cool, and the nights are chilly but not freezing, the brightest colorations usually develop. Anthocyanins temporarily color the edges of some of the very young leaves as they unfold from the buds in early spring. They also give the familiar color to such common fruits as cranberries, red apples, blueberries, cherries, raspberries, and plums. Anthocyanins are present in about 10% of tree species in temperate regions, although in certain areas—a famous example being New England—up to 70% of tree species may produce the pigment. 
In autumn forests they appear vivid in the maples, oaks, sourwood, sweetgums, dogwoods, tupelos, cherry trees and persimmons. These same pigments often combine with the carotenoids' colors to create the deeper orange, fiery reds, and bronzes typical of many hardwood species. (See Autumn leaf color.) Blood and other reds in nature Oxygenated blood is red due to the presence of oxygenated hemoglobin, which contains iron atoms that reflect red light. Red meat gets its color from the iron found in the myoglobin and hemoglobin in the muscles and residual blood. Fruits such as apples, strawberries, cherries, tomatoes, peppers, and pomegranates are often colored by forms of carotenoids, red pigments that also assist photosynthesis. Hair color Red hair occurs naturally in approximately 1–2% of the human population. It occurs more frequently (2–6%) in people of northern or western European ancestry, and less frequently in other populations. Red hair appears in people with two copies of a recessive gene on chromosome 16 which causes a mutation in the MC1R protein. Red hair varies from a deep burgundy through burnt orange to bright copper. It is characterized by high levels of the reddish pigment pheomelanin (which also accounts for the red color of the lips) and relatively low levels of the dark pigment eumelanin. The term "redhead" (originally redd hede) has been in use since at least 1510. In animal and human behavior Red is associated with dominance in a number of animal species. For example, in mandrills, red coloration of the face is greatest in alpha males, increasingly less prominent in lower-ranking subordinates, and directly correlated with levels of testosterone. Red can also affect the perception of dominance by others, leading to significant differences in mortality, reproductive success and parental investment between individuals displaying red and those not. 
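The recessive inheritance of red hair described above can be sketched with the Hardy–Weinberg relation: if the trait appears only in people with two copies of the allele, a phenotype frequency of q² implies an allele frequency of q and an unaffected single-copy carrier frequency of 2pq. This is an illustrative simplification (it pools all non-red alleles together and assumes a population in equilibrium); the 2% figure is the upper end of the range quoted in the text.

```python
import math

def allele_and_carrier_freq(phenotype_freq: float) -> tuple:
    """Hardy-Weinberg sketch: from the frequency q^2 of a recessive
    phenotype, recover the allele frequency q and the frequency 2*p*q
    of unaffected one-copy carriers."""
    q = math.sqrt(phenotype_freq)   # recessive allele frequency
    p = 1.0 - q                     # all other alleles pooled together
    return q, 2.0 * p * q

# ~2% red-haired (upper end of the 1-2% quoted in the text)
q, carriers = allele_and_carrier_freq(0.02)
print(f"allele frequency ~{q:.2f}, carrier frequency ~{carriers:.0%}")
```

Under these assumptions a 2% phenotype frequency implies an allele frequency of about 0.14, so roughly one person in four would silently carry a single copy.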
In humans, wearing red has been linked with increased performance in competitions, including professional sport and multiplayer video games. Controlled tests have demonstrated that wearing red does not increase performance or levels of testosterone during exercise, so the effect is likely to be produced by perceived rather than actual performance. Judges of tae kwon do have been shown to favor competitors wearing red protective gear over blue, and, when asked, a significant majority of people say that red abstract shapes are more "dominant", "aggressive", and "likely to win a physical competition" than blue shapes. In contrast to its positive effect in physical competition and dominance behavior, exposure to red decreases performance in cognitive tasks and elicits aversion in psychological tests where subjects are placed in an "achievement" context (e.g. taking an IQ test). History and art In prehistory and the ancient world Inside cave 13B at Pinnacle Point, an archeological site found on the coast of South Africa, paleoanthropologists in 2000 found evidence that, between 170,000 and 40,000 years ago, Late Stone Age people were scraping and grinding ochre, a clay colored red by iron oxide, probably with the intention of using it to color their bodies. Red hematite powder was also found scattered around the remains at a grave site in a Zhoukoudian cave complex near Beijing. The site has evidence of habitation as early as 700,000 years ago. The hematite might have been used to symbolize blood in an offering to the dead. Red, black and white were the first colors used by artists in the Upper Paleolithic age, probably because natural pigments such as red ochre and iron oxide were readily available where early people lived. Madder, a plant whose root could be made into a red dye, grew widely in Europe, Africa and Asia. The cave of Altamira in Spain has a painting of a bison colored with red ochre that dates to between 15,000 and 16,500 BC. 
A red dye called Kermes was made beginning in the Neolithic Period by drying and then crushing the bodies of the females of a tiny scale insect in the genus Kermes, primarily Kermes vermilio. The insects live on the sap of certain trees, especially Kermes oak trees near the Mediterranean region. Jars of kermes have been found in a Neolithic cave-burial at Adaoutse, Bouches-du-Rhône. Kermes from oak trees was later used by the Romans, who imported it from Spain. A different variety of dye was made from Porphyrophora hamelii (Armenian cochineal) scale insects that lived on the roots and stems of certain herbs. It was mentioned in texts as early as the 8th century BC, and it was used by the ancient Assyrians and Persians. In ancient Egypt, red was associated with life, health, and victory. Egyptians would color themselves with red ochre during celebrations. Egyptian women used red ochre as a cosmetic to redden cheeks and lips and also used henna to color their hair and paint their nails. The ancient Romans wore togas with red stripes on holidays, and the bride at a wedding wore a red shawl, called a flammeum. Red was used to color statues and the skin of gladiators. Red was also the color associated with the army; Roman soldiers wore red tunics, and officers wore a cloak called a paludamentum which, depending upon the quality of the dye, could be crimson, scarlet or purple. In Roman mythology red is associated with the god of war, Mars. The vexilloid of the Roman Empire had a red background with the letters SPQR in gold. A Roman general receiving a triumph had his entire body painted red in honor of his achievement. The Romans liked bright colors, and many Roman villas were decorated with vivid red murals. The pigment used for many of the murals was called vermilion, and it came from the mineral cinnabar, a common ore of mercury. It was one of the finest reds of ancient times – the paintings have retained their brightness for more than twenty centuries. 
The source of cinnabar for the Romans was a group of mines near Almadén, southwest of Madrid, in Spain. Working in the mines was extremely dangerous, since mercury is highly toxic; the miners were slaves or prisoners, and being sent to the cinnabar mines was a virtual death sentence. The Middle Ages After the fall of the Western Roman Empire, red was adopted as a color of majesty and authority by the Byzantine Empire and the princes of Europe. It also played an important part in the rituals of the Roman Catholic Church, symbolizing the blood of Christ and the Christian martyrs. In Western Europe, Emperor Charlemagne painted his palace red as a very visible symbol of his authority, and wore red shoes at his coronation. Kings, princes and, beginning in 1295, Roman Catholic cardinals began to wear red-colored habits. When Abbé Suger rebuilt Saint Denis Basilica outside Paris in the early 12th century, he added stained glass windows colored with cobalt blue glass and with red glass tinted with copper. Together they flooded the basilica with a mystical light. Soon stained glass windows were being added to cathedrals all across France, England and Germany. In medieval painting red was used to attract attention to the most important figures; both Christ and the Virgin Mary were commonly painted wearing red mantles. In western countries red is a symbol of martyrs and sacrifice, particularly because of its association with blood. Beginning in the Middle Ages, the Pope and Cardinals of the Roman Catholic Church wore red to symbolize the blood of Christ and the Christian martyrs. The banner of the Christian soldiers in the First Crusade was a red cross on a white field, the St. George's Cross. According to Christian tradition, Saint George was a Roman soldier in the guard of the Emperor Diocletian who refused to renounce his Christian faith and was martyred. 
The Saint George's Cross became the Flag of England in the 16th century, and now is part of the Union Flag of the United Kingdom, as well as the Flag of the Republic of Georgia. Renaissance In Renaissance painting, red was used to draw the attention of the viewer; it was often used as the color of the cloak or costume of Christ, the Virgin Mary, or another central figure. In Venice, Titian was the master of fine reds, particularly vermilion; he used many layers of pigment mixed with a semi-transparent glaze, which let the light pass through, to create a more luminous color. In one of his paintings, the figures of God, the Virgin Mary and two apostles are highlighted by their vermilion red costumes. Queen Elizabeth I of England liked to wear bright reds, before she adopted the more sober image of the "Virgin Queen". Red costumes were not limited to the upper classes. In Renaissance Flanders, people of all social classes wore red at celebrations. One such celebration was captured in The Wedding Dance (1566) by Pieter Bruegel the Elder. The painter Johannes Vermeer skilfully used different shades and tints of vermilion to paint the red skirt in The Girl with the Wine Glass, then glazed it with madder lake to make a more luminous color. Reds from the New World In Latin America, the Aztec people, the Paracas culture and other societies used cochineal, a vivid scarlet dye made from insects. From the 16th until the 19th century, cochineal became a highly profitable export from Spanish Mexico to Europe. 18th to 20th century In the 18th century, red began to take on a new identity as the color of resistance and revolution. It was already associated with blood, and with danger; a red flag hoisted before a battle meant that no prisoners would be taken. In 1793–94, red became the color of the French Revolution. A red Phrygian cap, or "liberty cap", was part of the uniform of the sans-culottes, the most militant faction of the revolutionaries. 
In the late 18th century, English dock workers carried red flags during a strike, and the red flag thereafter became closely associated with the new labour movement, and later with the Labour Party in the United Kingdom, founded in 1900. In Paris in 1832, a red flag was carried by working-class demonstrators in the failed June Rebellion (an event immortalised in Les Misérables), and later in the 1848 French Revolution. The red flag was proposed as the new French national flag during the 1848 revolution, but was rejected at the urging of the poet and statesman Alphonse de Lamartine in favour of the tricolor flag. It appeared again as the flag of the short-lived Paris Commune in 1871. It was then adopted by Karl Marx and the new European movements of socialism and communism. Soviet Russia adopted a red flag following the Bolshevik Revolution in 1917. The People's Republic of China adopted the red flag following the Chinese Communist Revolution. It was adopted by North Vietnam in 1954, and by all of Vietnam in 1975. Symbolism Courage and sacrifice Surveys show that red is the color most associated with courage. In western countries red is also a symbol of martyrs and sacrifice, particularly because of its association with blood: since the Middle Ages the Pope and cardinals have worn red to symbolize the blood of Christ and the Christian martyrs, and the banner of the Christian soldiers in the First Crusade was the red-on-white St. George's Cross, now part of the flags of England, the United Kingdom and Georgia. 
Hatred, anger, aggression, passion, heat and war While red is the color most associated with love, it is also the color most frequently associated with hatred, anger, aggression and war. People who are angry are said to "see red". Red is the color most commonly associated with passion and heat. In ancient Rome, red was the color of Mars, the god of war; the planet Mars was named for him because of its red color. Warning and danger Red is the traditional color of warning and danger, and is therefore often used on flags. From the Middle Ages up through the French Revolution, a red flag shown in warfare indicated the intent to take no prisoners. Similarly, a red flag hoisted by a pirate ship meant no mercy would be shown to their target. In Britain, in the early days of motoring, motor cars had to follow a man with a red flag who would warn horse-drawn vehicles, before the Locomotives on Highways Act 1896 abolished this requirement. In automobile races, the red flag is raised if there is danger to the drivers. In international football, a player who has committed a serious violation of the rules is shown a red penalty card and ejected from the game. Several studies have indicated that red provokes the strongest reaction of all the colors, with the level of reaction decreasing gradually through orange, yellow, and white, respectively. For this reason, red is generally used for the highest level of warning, such as the threat level of a terrorist attack in the United States. Indeed, teachers at a primary school in the UK have been told not to mark children's work in red ink because it encourages a "negative approach". Red is the international color of stop signs and stop lights on highways and intersections. It was standardized as the international color at the Vienna Convention on Road Signs and Signals of 1968. It was chosen partly because red is the brightest color in daytime (next to orange), though it is less visible at twilight, when green is the most visible color. 
Red also stands out more clearly against a cool natural backdrop of blue sky, green trees or gray buildings. But it was mostly chosen as the color for stoplights and stop signs because of its universal association with danger and warning. The 1968 Vienna Convention on Road Signs and Signals also uses red for the borders of danger warning signs, give way signs and prohibitory signs, following the earlier German-type signage (established by the Verordnung über Warnungstafeln für den Kraftfahrzeugverkehr in 1927). The color that attracts attention Red is the color that most attracts attention. Surveys show it is the color most frequently associated with visibility, proximity, and extroverts. It is also the color most associated with dynamism and activity. Red is used in modern fashion much as it was used in Medieval painting: to attract the eyes of the viewer to the person who is supposed to be the center of attention. People wearing red seem to be closer than those dressed in other colors, even if they are actually the same distance away. Monarchs, wives of presidential candidates and other celebrities often wear red to be visible from a distance in a crowd. It is also commonly worn by lifeguards and others whose job requires them to be easily found. Because red attracts attention, it is frequently used in advertising, though studies show that people are less likely to read something printed in red because they know it is advertising, and because it is visually more difficult to read than black-on-white text. Seduction, sexuality and sin Red is by a large margin the color most commonly associated with seduction, sexuality, eroticism and immorality, possibly because of its close connection with passion and with danger. Red was long seen as having a dark side, particularly in Christian theology. It was associated with sexual passion, anger, sin, and the devil. 
In the Old Testament of the Bible, the Book of Isaiah says: "Though your sins be as scarlet, they shall be white as snow." In the New Testament, in the Book of Revelation, the Antichrist appears as a red monster, ridden by a woman dressed in scarlet, known as the Whore of Babylon. Satan is often depicted as colored red and/or wearing a red costume in both iconography and popular culture. By the 20th century, the devil in red had become a folk character in legends and stories. The devil in red appears more often in cartoons and movies than in religious art. In 17th-century New England, red was associated with adultery. In Nathaniel Hawthorne's 1850 novel The Scarlet Letter, set in a Puritan New England community, a woman is punished for adultery with ostracism, her sin represented by a red letter 'A' sewn onto her clothes. Red is still commonly associated with prostitution. At various points in history, prostitutes were required to wear red to announce their profession. Houses of prostitution displayed a red light. Beginning in the early 20th century, houses of prostitution were allowed only in certain specified neighborhoods, which became known as red-light districts. Large red-light districts are found today in Bangkok and Amsterdam. In the handkerchief code, the color red signifies interest in the sexual act of fisting. In both Christian and Hebrew tradition, red is also sometimes associated with murder or guilt, with "having blood on one's hands", or "being caught red-handed". In religion In Christianity, red is associated with the blood of Christ and the sacrifice of martyrs. In the Roman Catholic Church it is also associated with Pentecost and the Holy Spirit. Since 1295, it has been the color worn by Cardinals, the senior clergy of the Roman Catholic Church. Red is the liturgical color for the feasts of martyrs, representing the blood of those who suffered death for their faith. 
It is sometimes used as the liturgical color for Holy Week, including Palm Sunday and Good Friday, although this is a modern (20th-century) development. In Catholic practice, it is also the liturgical color used to commemorate the Holy Spirit (for this reason it is worn at Pentecost and during Confirmation masses). Because of its association with martyrdom and the Spirit, it is also the color used to commemorate saints who were martyred, such as St. George and all the Apostles except for the Apostle St. John, who was not martyred and for whom white is used instead. As such, it is used to commemorate bishops, who are the successors of the Apostles (for this reason, when funeral masses are held for bishops, cardinals, or popes, red is used instead of the white that would ordinarily be used). In Buddhism, red is one of the five colors said to have emanated from the Buddha when he attained enlightenment, or nirvana. It is particularly associated with the benefits of the practice of Buddhism: achievement, wisdom, virtue, fortune and dignity. It was also believed to have the power to resist evil. In China red was commonly used for the walls, pillars, and gates of temples. In the Shinto religion of Japan, the gateways of temples, called torii, are traditionally painted vermilion red and black. The torii symbolizes the passage from the profane world to a sacred place. The bridges in the gardens of Japanese temples are also painted red (and usually only temple bridges are red, not bridges in ordinary gardens), since they are also passages to sacred places. Red was also considered a color which could expel evil and disease. In Taoism, red is sometimes used to symbolize yang. In Chinese folk religion, red is also sometimes used to symbolize yang in the context of the creator Pangu, who hatched out of a cosmic egg colored like a taijitu. Some art of Pangu colors yang as red. In addition, red is also an auspicious color according to Chinese beliefs. 
Military uses Red uniform The red military uniform was adopted by the English Parliament's New Model Army in 1645, and was still worn as a dress uniform by the British Army until the outbreak of the First World War in August 1914. Ordinary soldiers wore red coats dyed with madder, while officers wore scarlet coats dyed with the more expensive cochineal. This led to British soldiers being known as red coats. In the modern British army, scarlet is still worn by the Foot Guards, the Life Guards, and by some regimental bands or drummers for ceremonial purposes. Officers and NCOs of those regiments which previously wore red retain scarlet as the color of their "mess" or formal evening jackets. The Royal Gibraltar Regiment has a scarlet tunic in its winter dress. Scarlet is worn for some full dress, military band or mess uniforms in the modern armies of a number of the countries that made up the former British Empire. These include the Australian, Jamaican, New Zealand, Fijian, Canadian, Kenyan, Ghanaian, Indian, Singaporean, Sri Lankan and Pakistani armies. The musicians of the United States Marine Corps Band wear red, following an 18th-century military tradition that the uniforms of band members are the reverse of the uniforms of the other soldiers in their unit. Since the US Marine uniform is blue with red facings, the band wears the reverse. Red Serge is the uniform of the Royal Canadian Mounted Police, created in 1873 as the North-West Mounted Police, and given its present name in 1920. The uniform was adapted from the tunic of the British Army. Cadets at the Royal Military College of Canada also wear red dress uniforms. The Brazilian Marine Corps wears a red dress uniform. NATO Military Symbols for Land Based Systems uses red to denote hostile forces, hence the terms "red team" and "Red Cell" to denote challengers during exercises. In sports The first known team sport to feature red uniforms was chariot racing during the late Roman Empire. 
The earliest races were between two chariots, one driver wearing red, the other white. Later, the number of teams was increased to four, including drivers in light green and sky blue. Twenty-five races were run in a day, with a total of one hundred chariots participating. Today many sports teams throughout the world feature red on their uniforms. Along with blue, red is the most commonly used non-white color in sports. Numerous national sports teams wear red, often through association with their national flags. A few of these teams feature the color as part of their nickname, such as Spain (whose national association football (soccer) team is nicknamed La Furia Roja or "The Red Fury") and Belgium (whose football team bears the nickname Rode Duivels or "Red Devils"). In club association football (soccer), red is a commonly used color throughout the world. Notable European club teams that most often play at home in red shirts include Bayern Munich, Benfica, Liverpool, Manchester United and Roma. Furthermore, many prominent teams play in partially red color schemes, involving different-colored sleeves or stripes. A number of teams' nicknames feature the color. A red penalty card is issued to a player who commits a serious infraction: the player is immediately disqualified from further play and his team must continue with one fewer player for the game's duration. Rosso Corsa is the red international motor racing color of cars entered by teams from Italy. Since the 1920s Italian race cars of Alfa Romeo, Maserati, Lancia, and later Ferrari and Abarth have been painted with a color known as rosso corsa ("racing red"). National colors were mostly replaced in Formula One by commercial sponsor liveries in 1968, but unlike most other teams, Ferrari always kept the traditional red, although the shade of the color varies. Ducati traditionally run red factory bikes in motorcycle World Championship racing. 
The color is commonly used for professional sports teams in Canada and the United States with eleven Major League Baseball teams, eleven National Hockey League teams, seven National Football League teams and eleven National Basketball Association teams prominently featuring some shade of the color. The color is also featured in the league logos of Major League Baseball, the National Football League and the National Basketball Association. In the National Football League, a red flag is thrown by the head coach to challenge a referee's decision during the game. During the 1950s when red was strongly associated with communism in the United States, the modern Cincinnati Reds team was known as the "Redlegs" and the term was used on baseball cards. After the red scare faded, the team was known as the "Reds" again. In boxing, red is often the color used on a fighter's gloves. George Foreman wore the same red trunks he used during his loss to Muhammad Ali when he defeated Michael Moorer 20 years later to regain the title he lost. Boxers named or nicknamed "red" include Red Burman, Ernie "Red" Lopez, and his brother Danny "Little Red" Lopez. On flags Red is the most common color found in national flags, found on the flags of 77 percent of the 210 countries listed as independent in 2016; far ahead of white (58 percent); green (40 percent) and blue (37 percent). The British flag bears the colors red, white and blue; it includes the cross of Saint George, patron saint of England, and the saltire of Saint Patrick, patron saint of Ireland, both of which are red on white. The flag of the United States bears the colors of Britain, the colors of the French include red as part of the old Paris coat of arms, and other countries' flags, such as those of Australia, New Zealand, and Fiji, carry a small inset of the British flag in memory of their ties to that country. 
Many former colonies of Spain, such as Mexico, Colombia, Costa Rica, Cuba, Ecuador, Panama, Peru, Puerto Rico and Venezuela, also feature red, one of the colors of the Spanish flag, on their own banners. Red flags are also used to symbolize storms, bad water conditions, and many other dangers. The red on the flag of Nepal represents the floral emblem of the country, the rhododendron. Red, blue, and white are also the Pan-Slavic colors adopted by the Slavic solidarity movement of the late nineteenth century. Initially these were the colors of the Russian flag; as the Slavic movement grew, they were adopted by other Slavic peoples including Slovaks, Slovenes, and Serbs. The flags of the Czech Republic and Poland use red for historic heraldic reasons (see Coat of arms of Poland and Coat of arms of the Czech Republic) and not due to Pan-Slavic connotations. In 2004 Georgia adopted a new white flag, which consists of four small red crosses and one large red cross in the middle touching all four sides. Red, white, and black were the colors of the German Empire from 1870 to 1918, and as such they came to be associated with German nationalism. In the 1920s they were adopted as the colors of the Nazi flag. In Mein Kampf, Hitler explained that they were "revered colors expressive of our homage to the glorious past." The red part of the flag was also chosen to attract attention – Hitler wrote: "the new flag ... should prove effective as a large poster" because "in hundreds of thousands of cases a really striking emblem may be the first cause of awakening interest in a movement." The red also symbolized the social program of the Nazis, aimed at German workers. Several designs by a number of different authors were considered, but the one adopted in the end was Hitler's personal design. Red, white, green and black are the colors of Pan-Arabism and are used by many Arab countries. Red, gold, green, and black are the colors of Pan-Africanism. 
Several African countries thus use the color on their flags, including South Africa, Ghana, Senegal, Mali, Ethiopia, Togo, Guinea, Benin, and Zimbabwe. The Pan-African colors are borrowed from the flag of Ethiopia, one of the oldest independent African countries. Rwanda, notably, removed red from its flag after the Rwandan genocide because of red's association with blood. The flags of Japan and Bangladesh both have a red circle in the middle of different colored backgrounds. The flag of the Philippines has a red trapezoid on the bottom signifying blood, courage, and valor (also, if the flag is inverted so that the red trapezoid is on top and the blue at the bottom, it indicates a state of war). The flag of Singapore has a red rectangle on the top. The field of the flag of Portugal is green and red. The Ottoman Empire adopted several different red flags during the six centuries of its rule, with the successor Republic of Turkey continuing the 1844 Ottoman flag. In politics In 18th-century Europe, red was usually associated with the monarchy and with those in power. The Pope wore red, as did the Swiss Guards of the Kings of France and the soldiers of the British and Danish armies. In the Roman Empire, freed slaves were given a red Phrygian cap as an emblem of their liberation. Because of this symbolism, the red "liberty cap" became a symbol of the American patriots fighting for independence from Britain. During the French Revolution, the Jacobins also adopted the red Phrygian cap, and forced the deposed King Louis XVI to wear one after his arrest. Socialism and communism In the 19th century, with the Industrial Revolution and the rise of workers' movements, red became the color of socialism (especially the Marxist variant), and, with the Paris Commune of 1871, of revolution. In the 20th century, red was the color first of the Russian Bolsheviks and then, after the success of the Russian Revolution of 1917, of communist parties around the world. 
However, after the fall of the Soviet Union in 1991, Russia went back to the pre-revolutionary blue, white and red flag. Red also became the color of many social democratic parties in Europe, including the Labour Party in Britain (founded 1900); the Social Democratic Party of Germany (whose roots went back to 1863) and the French Socialist Party, which dated back under different names, to 1879. The Socialist Party of America (1901–1972) and the Communist Party USA (1919) both also chose red as their color. Members of the Christian-Social People's Party in Liechtenstein (founded 1918) advocated an expansion of democracy and progressive social policies, and were often referred to disparagingly as "Reds" for their social liberal leanings and party colors. The Chinese Communist Party, founded in 1920, adopted the red flag and hammer and sickle emblem of the Soviet Union, which became the national symbols when the Party took power in China in 1949. Under Party leader Mao Zedong, the Party anthem became "The East Is Red", and Mao Zedong himself was sometimes referred to as a "red sun". During the Cultural Revolution in China, Party ideology was enforced by the Red Guards, and the sayings of Mao Zedong were published as a little red book in hundreds of millions of copies. Today the Chinese Communist Party claims to be the largest political party in the world, with eighty million members. Beginning in the 1960s and the 1970s, paramilitary extremist groups such as the Red Army Faction in Germany, the Japanese Red Army and the Shining Path Maoist movement in Peru used red as their color. But in the 1980s, some European socialist and social democratic parties, such as the Labour Party in Britain and the Socialist Party in France, moved away from the symbolism of the far left, keeping the red color but changing their symbol to a less-threatening red rose. Red is used around the world by political parties of the left or center-left. 
In the United States, it is the color of the Communist Party USA, and of the Social Democrats, USA. United States In the United States, political commentators often refer to the "red states", which voted for Republican candidates in the last four presidential elections, and "blue states", which voted for Democrats. This convention is relatively recent: before the 2000 presidential election, media outlets assigned red and blue to both parties, sometimes alternating the allocation for each election. Fixed usage was established during the 39-day recount following the 2000 election, when the media began to discuss the contest in terms of "red states" versus "blue states". States which voted for different parties in two of the last four presidential elections are called "swing states", and are usually colored purple, a mix of red and blue. Social and special interest groups Such names as Red Club (a bar), Red Carpet (a discothèque) or Red Cottbus and Club Red (event locations) suggest liveliness and excitement. The Red Hat Society is a social group founded in 1998 for women 50 and over. Use of the color red to call attention to an emergency situation is evident in the names of such organizations as the Red Cross (humanitarian aid), the Red Hot Organization (AIDS support), and the Red List of Threatened Species (of the IUCN). In reference to humans, the term "red" is often used in the West to describe the indigenous peoples of the Americas.
Idioms
Many idiomatic expressions exploit the various connotations of red:
Expressing emotion
"to see red" (to be angry or aggressive)
"to have red ears / a red face" (to be embarrassed)
"to paint the town red" (to have an enjoyable evening, usually with a generous amount of eating, drinking, dancing)
Giving warning
"to raise a red flag" (to signal that something is problematic)
"like a red rag to a bull" (to cause someone to be enraged)
"to be in the red" (to be losing money, from the accounting convention of writing deficits and losses in red ink)
Calling attention
"a red letter day" (a special or important event, from the medieval custom of printing the dates of saints' days and holy days in red ink)
"to roll out the red carpet" (to formally welcome an important guest)
"to give red-carpet treatment" (to treat someone as important or special)
"to catch someone red-handed" (to catch or discover someone doing something bad or wrong)
Other idioms
"to tie up in red tape". In England red tape was used by lawyers and government officials to identify important documents. It became a term for excessive bureaucratic regulation. It was popularized in the 19th century by the writer Thomas Carlyle, who complained about "red-tapism".
"red herring". A false clue that leads investigators off the track. Refers to the practice of using a fragrant smoked fish to distract hunting or tracking dogs from the track they are meant to follow.
"red ink" (to show a business loss)
See also Blushing Lists of colors Little Red Riding Hood Red flag (politics) Red pigments References Notes and citations Bibliography External links Primary colors Secondary colors Optical spectrum Rainbow colors Web colors
Red
Physics
9,519
25,464,994
https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20September%202%2C%202054
A partial solar eclipse will occur at the Moon's ascending node of orbit between Tuesday, September 1 and Wednesday, September 2, 2054, with a magnitude of 0.9793. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth. The partial solar eclipse will be visible for parts of Northeast Asia, Alaska, western Canada, and the western United States. This is the last of the first set of partial eclipses in Solar Saros 155. Eclipse details Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. The first and last eclipse in this sequence is separated by one synodic month. Related eclipses Eclipses in 2054 A total lunar eclipse on February 22. A partial solar eclipse on March 9. A partial solar eclipse on August 3. A total lunar eclipse on August 18. A partial solar eclipse on September 2. 
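The near-commensurability that drives the eclipse cycles above can be checked numerically. The mean month lengths below are standard astronomical values supplied here as assumptions, not figures taken from this article:

```python
# Eclipse-cycle arithmetic: an eclipse season recurs every half
# eclipse year (~173 days, "just short of six months"), and the saros
# works because 223 synodic months almost exactly equal 242 draconic
# months, so eclipses recur with nearly identical node geometry.
SYNODIC = 29.530589     # days, new moon to new moon (assumed mean value)
DRACONIC = 27.212221    # days, ascending node to ascending node
ECLIPSE_YEAR = 346.620  # days, Sun's return to the same lunar node

season_spacing = ECLIPSE_YEAR / 2   # ~173.3 days between eclipse seasons
saros_synodic = 223 * SYNODIC       # ~6585.32 days
saros_draconic = 242 * DRACONIC     # ~6585.36 days

# The two saros figures agree to within ~0.04 days, which is why
# members of a saros series (such as Saros 155) repeat so closely.
mismatch = abs(saros_synodic - saros_draconic)
```

The same arithmetic explains the "fortnight" spacing within a season: half a synodic month (~14.8 days) separates consecutive solar and lunar eclipses.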
Metonic
Preceded by: Solar eclipse of November 14, 2050
Followed by: Solar eclipse of June 21, 2058
Tzolkinex
Preceded by: Solar eclipse of July 22, 2047
Followed by: Solar eclipse of October 13, 2061
Half-Saros
Preceded by: Lunar eclipse of August 27, 2045
Followed by: Lunar eclipse of September 7, 2063
Tritos
Preceded by: Solar eclipse of October 3, 2043
Followed by: Solar eclipse of August 2, 2065
Solar Saros 155
Preceded by: Solar eclipse of August 21, 2036
Followed by: Solar eclipse of September 12, 2072
Inex
Preceded by: Solar eclipse of September 21, 2025
Followed by: Solar eclipse of August 13, 2083
Triad
Preceded by: Solar eclipse of November 2, 1967
Followed by: Solar eclipse of July 3, 2141
Solar eclipses of 2051–2054 Saros 155 Metonic series Tritos series Inex series References External links NASA graphics 2054 in science
Solar eclipse of September 2, 2054
Astronomy
562
74,367,759
https://en.wikipedia.org/wiki/Berkelium%28III%29%20oxychloride
Berkelium(III) oxychloride is an inorganic compound of berkelium, chlorine, and oxygen with the chemical formula BkOCl. Physical properties The compound forms very pale green crystals. References Oxychlorides Berkelium compounds
Berkelium(III) oxychloride
Chemistry
54
2,012,894
https://en.wikipedia.org/wiki/Zaleplon
Zaleplon, sold under the brand name Sonata among others, is a sedative and hypnotic which is used to treat insomnia. It is a nonbenzodiazepine or Z-drug of the pyrazolopyrimidine class. It was developed by King Pharmaceuticals and approved for medical use in the United States in 1999. Medical uses Zaleplon is slightly effective in treating insomnia, primarily characterized by difficulty falling asleep. Zaleplon significantly reduces the time required to fall asleep by improving sleep latency and may therefore facilitate sleep induction rather than sleep maintenance. Due to its ultrashort elimination half-life, zaleplon may not be effective in reducing premature awakenings; however, it may be administered to alleviate middle-of-the-night awakenings. However, zaleplon has not been empirically shown to increase total sleep time. Zaleplon does not significantly affect driving performance the morning following bedtime administration or 4 hours after middle-of-the-night administration. It may have advantages over benzodiazepines with fewer adverse effects. Special populations Zaleplon is not recommended for chronic use in the elderly. The elderly are more sensitive to the adverse effects of zaleplon such as cognitive side effects. Zaleplon may increase the risk of injury among the elderly. It should not be used during pregnancy or lactation. Clinicians should devote more attention when prescribing for patients with a history of alcohol or drug abuse, psychotic illness, or depression. In addition, some contend the efficacy and safety of long-term use of these agents remains to be enumerated, but nothing concrete suggests long-term use poses any direct harm to a person. Adverse effects The adverse effects of zaleplon are similar to the adverse effects of benzodiazepines, although with less next-day sedation, and in two studies zaleplon use was found not to cause an increase in traffic accidents, as compared to other hypnotics currently on the market. 
Sleeping pills, including zaleplon, have been associated with an increased risk of death. Some evidence suggests zaleplon is not as chemically reinforcing and exhibits far fewer rebound effects when compared with other nonbenzodiazepines, or Z-drugs. Interactions The CYP3A4 liver enzyme is a minor metabolic pathway for zaleplon, normally metabolizing about 9% of the drug. CYP3A4 inducers such as rifampicin, phenytoin, carbamazepine, and phenobarbital can reduce the effectiveness of zaleplon, and therefore the FDA suggests that other hypnotic drugs be considered in patients taking a CYP3A4 inducer. Additional sedation has been observed when zaleplon is combined with thioridazine, but it is not clear whether this was due to merely an additive effect of taking two sedative drugs at once or a true drug-drug interaction. Diphenhydramine, a weak inhibitor of aldehyde oxidase, has not been found to affect the pharmacokinetics of zaleplon. Pharmacology Mechanism of action Zaleplon is a high-selectivity, high-affinity ligand of positive modulatory benzodiazepine sites on GABAA receptors. Zaleplon binds preferentially at benzodiazepine sites on α1-containing GABAA receptors (previously known as BZ1/Ω1 receptors), which largely mediate the sedative effects of benzodiazepines. However, unlike zolpidem, zaleplon binds with appreciable affinity to benzodiazepine sites on some α2 and α3-containing GABAA receptors, which are implicated in the anxiolytic and muscle relaxant effects of benzodiazepines. Zaleplon demonstrates greater selectivity at these sites than lorazepam or zopiclone. Unlike nonselective benzodiazepine drugs and zopiclone, which distort the sleep pattern, zaleplon appears to induce sleep without disrupting the normal sleep architecture. 
A meta-analysis of randomized, controlled clinical trials which compared benzodiazepines against zaleplon or other Z-drugs such as zolpidem, zopiclone, and eszopiclone has found few clear and consistent differences between zaleplon and the benzodiazepines in terms of sleep onset latency, total sleep duration, number of awakenings, quality of sleep, adverse events, tolerance, rebound insomnia, and daytime alertness. Zaleplon should be understood as an ultrashort-acting sedative-hypnotic drug for the treatment of insomnia. Zaleplon increases EEG power density in the δ-frequency band and decreases the energy of the θ-frequency band. In contrast to non-selective benzodiazepine drugs and zopiclone, zaleplon does not increase power in the β-frequency band. Pharmacokinetics The ultrashort one-hour half-life gives zaleplon a unique advantage over other hypnotics because of its lack of next-day residual effects on driving and other performance-related skills. Zaleplon is primarily metabolized by aldehyde oxidase into 5-oxozaleplon, and its half-life may be affected by substances which inhibit or induce aldehyde oxidase. According to urine analysis, about 9% of zaleplon is metabolized by CYP3A4 to form desethylzaleplon, which is quickly metabolized by aldehyde oxidase to 5-oxodesethylzaleplon. All of these metabolites are inactive. When taken orally, zaleplon reaches maximum concentration in about 45 minutes. Chemistry Zaleplon is classified as a pyrazolopyrimidine. Pure zaleplon in its solid state is a white to off-white powder with very low solubility in water, as well as low solubility in ethanol and propylene glycol. It has a constant octanol-water partition coefficient of log P = 1.23 in the pH range between 1 and 7. Synthesis The synthesis starts with the condensation of 3-acetylacetanilide (1) with N,N-dimethylformamide dimethyl acetal (DMFDMA) to give the eneamide (2). The anilide nitrogen is then alkylated by means of sodium hydride and ethyl iodide to give 3.
The first step in the condensation with 3-amino-4-cyanopyrazole can be visualized as involving an addition-elimination reaction sequence on the eneamide function to give a transient intermediate such as 5. Cyclization then leads to the formation of the fused pyrimidine ring to afford zaleplon (6). Society and culture Recreational use Zaleplon has the potential to be a drug of recreational use, and has been found to have an addictive potential similar to benzodiazepine and benzodiazepine-like hypnotics. Some individuals use a different delivery method than prescribed, such as insufflation, to induce effects faster. Anterograde amnesia can occur and can cause one to lose track of the amount of zaleplon already ingested, prompting the ingestion of more than was originally planned. Aviation use The Federal Aviation Administration allows zaleplon with a 12-hour wait period and no more than twice a week, which makes it the sleep medication with the shortest allowed waiting period after use. The substances with the second-shortest waiting period, 24 hours, are zolpidem and ramelteon. Military use The United States Air Force uses zaleplon as one of the hypnotics approved as a "no-go pill" to help aviators and special-duty personnel sleep in support of mission readiness (with a four-hour restriction on subsequent flight operation). "Ground tests" are required prior to authorization being issued to use the medication in an operational situation. The other hypnotics used as "no-go pills" are temazepam and zolpidem, which both have longer mandatory recovery periods. References Acetanilides GABAA receptor positive allosteric modulators Nitriles Drugs developed by Pfizer Nonbenzodiazepines Pyrazolopyrimidines
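The ultrashort half-life described under Pharmacokinetics can be illustrated with simple first-order elimination. The one-compartment model and the round one-hour half-life are simplifying assumptions for illustration only, not clinical guidance:

```python
# First-order elimination: after each half-life, half the drug remains.
# With a ~1 h half-life, very little zaleplon is left 4 hours after a
# middle-of-the-night dose, consistent with the lack of next-day
# residual effects described above.
def fraction_remaining(t_hours, t_half_hours=1.0):
    """Fraction of the initial plasma concentration left after t_hours."""
    return 0.5 ** (t_hours / t_half_hours)

# After 4 hours, only ~6% of the dose remains in circulation.
residual_at_4h = fraction_remaining(4.0)
```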
Zaleplon
Chemistry
1,805
8,168,925
https://en.wikipedia.org/wiki/Network%20address
A network address is an identifier for a node or host on a telecommunications network. Network addresses are designed to be unique identifiers across the network, although some networks allow for local, private addresses, or locally administered addresses that may not be unique. Special network addresses are allocated as broadcast or multicast addresses. These too are not unique. In some cases, network hosts may have more than one network address. For example, each network interface controller may be uniquely identified. Further, because protocols are frequently layered, more than one protocol's network address can occur in any particular network interface or node and more than one type of network address may be used in any one network. Network addresses can be flat addresses which contain no information about the node's location in the network (such as a MAC address), or may contain structure or hierarchical information for the routing (such as an IP address). Examples Examples of network addresses include: Telephone number, in the public switched telephone network IP address in IP networks including the Internet IPX address, in NetWare X.25 or X.21 address, in a circuit switched data network MAC address, in Ethernet and other related IEEE 802 network technologies References External links Telecommunications engineering
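The flat-versus-hierarchical distinction above can be sketched in a few lines of Python; the specific addresses below are documentation-style examples chosen for illustration:

```python
# A hierarchical address (IPv4) embeds routing structure: the prefix
# identifies the network, the remainder identifies the host within it.
# A flat address (MAC) carries no locational structure; its first three
# bytes (the OUI) only identify the hardware vendor.
import ipaddress

iface = ipaddress.ip_interface("192.0.2.14/24")
network_part = iface.network            # 192.0.2.0/24 — what routers use
host_part = int(iface.ip) & 0x000000FF  # 14 — meaningful only locally

mac = "00:1A:2B:3C:4D:5E"
oui = mac.split(":")[:3]                # vendor prefix, not a location
```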
Network address
Engineering
247
45,449,080
https://en.wikipedia.org/wiki/MindAlign
Parlano MindAlign is group chat software used as an alternative to email for large enterprises. MindAlign is used most notably in the financial services industry. Early history The software was originally developed at UBS AG as an internal group chat solution. The product was sold to Parlano Inc. in 2000. History and Acquisition Parlano was acquired by Microsoft in 2007; Microsoft then sold MindAlign 6 (the latest released version at that time) to Aditi Technologies Ltd in the same year. When Aditi acquired MindAlign in 2007, it inherited 56 of its customers, which included 5 of the top 7 global banks. References Business chat software UBS
MindAlign
Technology
142
65,601,334
https://en.wikipedia.org/wiki/Overcategory
In mathematics, specifically category theory, an overcategory (also called a slice category), as well as an undercategory (also called a coslice category), is a distinguished class of categories used in multiple contexts, such as with covering spaces (espace étalé). They were introduced as a mechanism for keeping track of data surrounding a fixed object in some category C. There is a dual notion of undercategory, which is defined similarly. Definition Let C be a category and X a fixed object of C. The overcategory (also called a slice category) C/X is an associated category whose objects are pairs (A, π), where π : A → X is a morphism in C. Then, a morphism between objects (A, π) and (A′, π′) is given by a morphism u : A → A′ in the category C such that the evident triangle commutes: π′ ∘ u = π. There is a dual notion called the undercategory (also called a coslice category) X/C, whose objects are pairs (B, ψ), where ψ : X → B is a morphism in C. Then, morphisms in X/C are given by morphisms v : B → B′ in C such that the evident triangle commutes: v ∘ ψ = ψ′. These two notions have generalizations in 2-category theory and higher category theory, with definitions either analogous or essentially the same. Properties Many categorical properties of C are inherited by the associated over- and undercategories for an object X. For example, if C has finite products and coproducts, it is immediate that the categories C/X and X/C have these properties, since the product and coproduct can be constructed in C, and through universal properties there exists a unique morphism either to or from X. In addition, this applies to limits and colimits as well. Examples Overcategories on a site Recall that a site is a categorical generalization of a topological space, first introduced by Grothendieck. One of the canonical examples comes directly from topology: the category Open(X) whose objects are the open subsets U of some topological space X, with morphisms given by inclusion maps. Then, for a fixed open subset U, the overcategory Open(X)/U is canonically equivalent to the category Open(U) for the induced topology on U.
This is because every object in Open(U) is an open subset contained in U. Category of algebras as an undercategory The category of commutative R-algebras is equivalent to the undercategory R/CRing for the category CRing of commutative rings. This is because the structure of an R-algebra on a commutative ring A is directly encoded by a ring morphism R → A. If we consider the opposite category, it is an overcategory of affine schemes, Aff/Spec(R). Overcategories of spaces Another common overcategory considered in the literature is the overcategory of spaces, such as schemes, smooth manifolds, or topological spaces. These categories encode objects relative to a fixed object, such as the category Sch/S of schemes over a base scheme S. Fiber products in these categories can be considered intersections, given that the objects are subobjects of the fixed object. See also Comma category References Category theory
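The commuting-triangle condition in the definition can be made concrete in a toy model of the slice category Set/X; the encoding of sets and functions below is entirely illustrative and not from the article:

```python
# Toy slice category Set/X: an object over X is a function f: A -> X,
# and a morphism (A, f) -> (B, g) is a function u: A -> B making the
# triangle commute, i.e. g ∘ u = f. Morphisms of finite sets are
# encoded as plain dicts.

def compose(g, u):
    """(g ∘ u)(a) = g(u(a)), for morphisms encoded as dicts."""
    return {a: g[u[a]] for a in u}

def is_slice_morphism(f, g, u):
    """Does u make the triangle over X commute, i.e. g ∘ u == f?"""
    return compose(g, u) == f

# Two objects over X = {0, 1}:
f = {"a1": 0, "a2": 1}           # f: {a1, a2} -> X
g = {"b1": 0, "b2": 1, "b3": 1}  # g: {b1, b2, b3} -> X

u_ok = {"a1": "b1", "a2": "b3"}  # preserves fibers, so g ∘ u == f
u_bad = {"a1": "b2", "a2": "b1"} # lands in the wrong fibers
```

The "fiber-preserving" reading is exactly why morphisms in Open(X)/U reduce to inclusions of open subsets of U in the site example above.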
Overcategory
Mathematics
610
1,769,486
https://en.wikipedia.org/wiki/Lunar%20Receiving%20Laboratory
The Lunar Receiving Laboratory (LRL) was a facility at NASA's Lyndon B. Johnson Space Center (Building 37) that was constructed to quarantine astronauts and material brought back from the Moon during the Apollo program to reduce the risk of back-contamination. After recovery at sea, crews from Apollo 11, Apollo 12, and Apollo 14 walked from their helicopter to the Mobile Quarantine Facility on the deck of an aircraft carrier and were brought to the LRL for quarantine. Samples of rock and regolith that the astronauts collected and brought back were flown directly to the LRL and initially analyzed in glovebox vacuum chambers. The quarantine requirement was dropped for Apollo 15 and later missions. The LRL was used for study, distribution, and safe storage of the lunar samples. Between 1969 and 1972, six Apollo space flight missions brought back 382 kilograms (842 pounds) of lunar rocks, core samples, pebbles, sand, and dust from the lunar surface—in all, 2,200 samples from six exploration sites. Other lunar samples were returned to Earth by three automated Soviet spacecraft, Luna 16 in 1970, Luna 20 in 1972, and Luna 24 in 1976, which returned samples totaling 300 grams (about 3/4 pound). In 1976, some of the samples were moved to Brooks Air Force Base in San Antonio, Texas, for second-site storage. In 1979, a Lunar Sample Laboratory Facility was built to serve as the chief repository for the Apollo samples: permanent storage in a physically secure and non-contaminating environment. The facility includes vaults for the samples and records, and laboratories for sample preparation and study. The Lunar Receiving Laboratory building was later occupied by NASA's Life Sciences division, contained biomedical and environment labs, and was used for experiments involving human adaptation to microgravity. In September 2019, NASA announced that the Lunar Receiving Laboratory had not been used for two years and would be demolished. 
See also Moon rock Lunar Sample Laboratory Facility Notes External links Lunar Receiving Laboratory Project History NASA/CR–2004–208938, 2004 25 Years of Curating Moon Rocks, Judy Allton Apollo Lunar Quarantine Apollo program Astrobiology Johnson Space Center
Lunar Receiving Laboratory
Astronomy,Biology
443
30,864,962
https://en.wikipedia.org/wiki/Counterforce
In nuclear strategy, a counterforce target is one that has a military value, such as a launch silo for intercontinental ballistic missiles, an airbase at which nuclear-armed bombers are stationed, a homeport for ballistic missile submarines, or a command and control installation. The intent of a counterforce strategy (attacking counterforce targets with nuclear weapons) is to conduct a preemptive nuclear strike which has as its aim to disarm an adversary by destroying its nuclear weapons before they can be launched. That would minimize the impact of a retaliatory second strike. However, counterforce attacks are possible in a second strike as well, especially with weapons like UGM-133 Trident II. A counterforce target is distinguished from a countervalue target, which includes an adversary's population, knowledge, economic, or political resources. In short, a counterforce strike is directed against an adversary's military capabilities, while a countervalue strike is directed against an adversary's civilian-centered institutions. A closely related tactic is the decapitation strike, which destroys an enemy's nuclear command and control facilities and similarly has a goal to eliminate or reduce the enemy's ability to launch a second strike. Counterforce targets are almost always near to civilian population centers, which would not be spared in the event of a counterforce strike. Theory In nuclear warfare, enemy targets are divided into two types: counterforce and countervalue. A counterforce target is an element of the military infrastructure, usually either specific weapons or the bases that support them. A counterforce strike is an attack that targets those elements but leaving the civilian infrastructure, the countervalue targets, as undamaged as possible. Countervalue refers to the targeting of an opponent's cities and civilian populations. Counterforce weapons may be seen to provide more credible deterrence in future conflict by providing options for leaders. 
One option considered by the Soviet Union in the 1970s was basing missiles in orbit. Cold War Counterforce is a type of attack which was originally proposed during the Cold War. Because of the low accuracy (circular error probable) of early generation intercontinental ballistic missiles (and especially submarine-launched ballistic missiles), counterforce strikes were initially possible only against very large, undefended targets like bomber airfields and naval bases. Later-generation missiles, with much-improved accuracy, made possible counterforce attacks against the opponent's hardened military facilities, like missile silos and command and control centers. Both sides in the Cold War took steps to protect at least some of their nuclear forces from counterforce attacks. At one point, the US kept B-52 Stratofortress bombers permanently in flight so that they would remain operational after any counterforce strike. Other bombers were kept ready for launch on short notice, allowing them to escape their bases before intercontinental ballistic missiles, launched from land, could destroy them. The deployment of nuclear weapons on ballistic missile submarines changed the equation considerably, as submarines launching from positions off the coast would likely destroy airfields before bombers could launch, which would reduce their ability to survive an attack. Submarines themselves, however, are largely immune from counterforce strikes unless they are moored at their naval bases, and both sides fielded many such weapons during the Cold War. A counterforce exchange was one scenario mooted for a possible limited nuclear war. The concept was that one side might launch a counterforce strike against the other; the victim would recognize the limited nature of the attack and respond in kind. That would leave the military capability of both sides largely destroyed. 
The war might then come to an end because both sides would recognize that any further action would lead to attacks on the civilian population from the remaining nuclear forces, a countervalue strike. Critics of that idea claimed that even a counterforce strike would kill millions of civilians, since some strategic military facilities like bomber airbases were often located near large cities, making it unlikely that escalation to a full-scale countervalue war could be prevented. MIRVed land-based ICBMs are considered destabilizing because they tend to put a premium on striking first. For example, suppose that each side has 100 missiles, with five warheads each, and each side has a 95 percent chance of neutralizing the opponent's missiles in their silos by firing two warheads at each silo. In that case, the side that strikes first can reduce the enemy ICBM force from 100 missiles to about five by firing 40 missiles with 200 warheads and keeping the remaining 60 missiles in reserve. For such an attack to be successful, the warheads would have to strike their targets before the enemy launched a counterattack (see second strike and launch on warning). This type of weapon was therefore banned under the START II agreement, which was never ratified and therefore never took effect. Counterforce disarming first-strike weapons Ababeel. MIRV nuclear-capable ballistic missile developed by Pakistan in response to India's development of a Ballistic Missile Defence system. R-36M (SS-18 Satan). Deployed in 1976, this counterforce MIRV ICBM had single (20 Mt) or ten MIRV (550-750 kt each) warheads, with a circular error probable (CEP) of . Targeted against Minuteman III silos as well as CONUS command, control, and communications facilities. Has sufficient throw-weight to carry up to 10 RVs and 40 penaids. Still in service. RSD-10 (SS-20 Saber).
Deployed in 1978, this counterforce MIRV IRBM could hide behind the Urals in Asian Russia, and launch its highly accurate three warhead payload (150 kt each, with a CEP) against NATO command, control, and communications installations, bunkers, air fields, air defense sites, and nuclear facilities in Europe. Extremely short flight time ensured NATO would be unable to respond prior to weapon impact. Triggered development and deployment of the Pershing II by NATO in 1983. Peacekeeper (MX Missile). Deployed in 1986, this missile boasted ten MIRV warheads each with a 300 kt yield, CEP . Decommissioned. Pershing II. Deployed in 1983, this single warhead MRBM boasted 50 m CEP with terminal active radar homing/DSMAC guidance. Short, seven-minute flight-time (which makes launch on warning much harder), variable yield warhead of 5-50 kt, and range of , allowed this weapon to strike command, control, and communications installations, bunkers, air fields, air defense sites, and ICBM silos in the European part of the Soviet Union with scarcely any warning. Decommissioned. RT-23 Molodets (SS-24 Scalpel). Deployed in 1987, this MIRV ICBM carried ten warheads, each with 300-550 kt yield and a CEP of . UGM-133 Trident II. Deployed in 1990, this intercontinental-range SLBM carries up to eight RVs with CEP of and yield of 100/475 kt. Main purpose is second strike countervalue retaliation, but the excellent CEP and much shorter flight-time due to submarine launch (reducing the possibility of launch on warning) makes it an excellent first-strike weapon. However, that any nuclear power would be willing to place its nuclear submarines close to enemy shores during times of strategic tension is highly questionable. Has sufficient throw-weight to deploy up to twelve warheads, but the post-boost vehicle is only capable of deploying eight, and on average about four are deployed in current practice. 
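The first-strike arithmetic in the MIRV example earlier in this section can be checked directly:

```python
# The MIRV exchange example from the text: 100 silos per side, 5
# warheads per missile, and a 95% chance of destroying a silo by
# firing two warheads at it.
silos = 100
warheads_per_missile = 5
warheads_per_silo = 2
p_kill_per_silo = 0.95

warheads_needed = silos * warheads_per_silo               # 200 warheads
missiles_fired = warheads_needed // warheads_per_missile  # 40 missiles
missiles_in_reserve = 100 - missiles_fired                # 60 held back
expected_survivors = silos * (1 - p_kill_per_silo)        # ~5 enemy ICBMs
```

This is why the text calls such weapons destabilizing: a 40-missile salvo expends less than half the attacker's force yet is expected to leave the defender with only about five missiles.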
See also Balance of power (international relations) Balance of terror Deterrence theory Limited first strike Peace through strength References Military strategy Nuclear warfare Nuclear strategy Cold War terminology
Counterforce
Chemistry
1,576
4,601,032
https://en.wikipedia.org/wiki/Shore%20durometer
The Shore durometer is a device for measuring the hardness of a material, typically of polymers. Higher numbers on the scale indicate a greater resistance to indentation and thus harder materials. Lower numbers indicate less resistance and softer materials. The term is also used to describe a material's rating on the scale, as in an object having a "Shore durometer of 90". The scale was defined by Albert Ferdinand Shore, who developed a suitable device to measure hardness in the 1920s. It was neither the first hardness tester nor the first to be called a durometer (ISV duro- and -meter; attested since the 19th century), but today that name usually refers to Shore hardness; other devices use other measures, which return corresponding results, such as for Rockwell hardness. Durometer scales There are several scales of durometer, used for materials with different properties. The two most common scales, using slightly different measurement systems, are the ASTM D2240 type A and type D scales. The A scale is for softer materials, while the D scale is for harder ones. However, the ASTM D2240-00 testing standard calls for a total of 12 scales, depending on the intended use: types A, B, C, D, DO, E, M, O, OO, OOO, OOO-S, and R. Each scale results in a value between 0 and 100, with higher values indicating a harder material. Method of measurement Durometer, like many other hardness tests, measures the depth of an indentation in the material created by a given force on a standardized presser foot. This depth is dependent on the hardness of the material, its viscoelastic properties, the shape of the presser foot, and the duration of the test. ASTM D2240 durometers allow for a measurement of the initial hardness, or the indentation hardness after a given period of time. The basic test requires applying the force in a consistent manner, without shock, and measuring the hardness (depth of the indentation).
If a timed hardness is desired, force is applied for the required time and then read. The material under test should be a minimum of 6 mm (0.25 inches) thick. The theoretical background of the test is considered in Stoßprobleme in Physik, Technik und Medizin: Grundlagen und Anwendungen. The ASTM D2240 standard recognizes twelve different durometer scales using combinations of specific spring forces and indentor configurations. These scales are properly referred to as durometer types; i.e., a durometer type is specifically designed to determine a specific scale, and the scale does not exist separately from the durometer. The table below provides details for each of these types, with the exception of Type R. Note: Type R is a designation, rather than a true "type". The R designation specifies a presser foot of 18 ± 0.5 mm (0.71 ± 0.02 in) in diameter (hence the R, for radius; obviously D could not be used), while the spring forces and indenter configurations remain unchanged. The R designation is applicable to any D2240 Type, with the exception of Type M; the R designation is expressed as Type xR, where x is the D2240 type, e.g., aR, dR, etc.; the R designation also mandates the employment of an operating stand. Some conditions and procedures that have to be met, according to the DIN ISO 7619-1 standard, are:
For measuring Shore A the foot indents the material, while for Shore D the foot penetrates the surface of the material.
Material for testing needs to be in laboratory climate storage at least one hour before testing.
Measuring time is 15 s.
Force is 1 kg ± 0.1 kg for Shore A, and 5 kg ± 0.5 kg for Shore D.
Five measurements need to be taken.
The durometer is calibrated once per week with elastomer blocks of different hardness.
The final value of the hardness depends on the depth of the indenter after it has been applied for 15 seconds on the material. If the indenter penetrates 2.54 mm (0.100 inch) or more into the material, the durometer is 0 for that scale.
If it does not penetrate at all, then the durometer is 100 for that scale. It is for this reason that multiple scales exist. If the measured hardness is below 10 °Sh or above 90 °Sh, the result is not to be trusted and the measurement must be redone with an adjacent scale type. Durometer is a dimensionless quantity, and there is no simple relationship between a material's durometer in one scale and its durometer in any other scale, or by any other hardness test. ASTM D2240 hardness and elastic modulus Using linear elastic indentation hardness, a relation between the ASTM D2240 hardness and the Young's modulus for elastomers has been derived by Gent. Gent's relation has the form E = 0.0981(56 + 7.62336 S) / [0.137505(254 − 2.54 S)], where E is the Young's modulus in MPa and S is the ASTM D2240 type A hardness. This relation gives a value of E = ∞ at S = 100 but departs from experimental data for S > 40. Mix and Giacomin derive comparable equations for all 12 scales that are standardized by ASTM D2240. Another relation, which fits the experimental data slightly better, is S = 100 erf(3.186 × 10−4 √E), where erf is the error function, and E is in units of Pa. A first-order estimate of the relation between ASTM D2240 type D hardness (for a conical indenter with a 15° half-cone angle) and the elastic modulus of the material being tested is SD = 100 − 20(−78.188 + √(6113.36 + 781.88 E))/E, where SD is the ASTM D2240 type D hardness, and E is in MPa. Another Neo-Hookean linear relation between the ASTM D2240 hardness value and material elastic modulus has the form ln(E) = 0.0235 S − 0.6403, where S = SA (the ASTM D2240 type A hardness) for 20 < SA < 80, S = SD + 50 (with SD the ASTM D2240 type D hardness) for 30 < SD < 85, and E is the Young's modulus in MPa. Patents See also References External links Comparison Chart Reference Guide Растеряев Ю.К., Агальцов Г.Н.
Связь между твёрдостью и модулем упругости резин (Relationship between the hardness and elastic modulus of rubbers) Shore Hardness Converter Dimensionless numbers of physics Hardness tests Rubber properties
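The hardness–modulus conversions discussed above can be sketched in Python. The numerical constants are those of the published Gent and error-function fits as commonly quoted, supplied here as assumptions rather than taken from this text:

```python
import math

def gent_modulus_mpa(shore_a):
    """Young's modulus (MPa) from ASTM D2240 type A hardness via Gent's relation.
    The relation diverges at S = 100 and departs from data above roughly S = 40."""
    return (0.0981 * (56.0 + 7.62336 * shore_a)) / (
        0.137505 * (254.0 - 2.54 * shore_a))

def shore_a_from_modulus(e_pa):
    """Inverse estimate: type A hardness from Young's modulus in Pa,
    using the error-function fit S = 100 * erf(3.186e-4 * sqrt(E))."""
    return 100.0 * math.erf(3.186e-4 * math.sqrt(e_pa))
```

For example, a type A hardness of 55 maps to roughly 3 MPa under Gent's relation, and feeding that modulus (in Pa) back through the error-function fit returns a hardness of the same order, illustrating that the two fits broadly agree in the mid-range of the scale.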
Shore durometer
Materials_science
1,447
51,437,144
https://en.wikipedia.org/wiki/Well%20of%20Dina%20Nath
The Well of Dina Nath was intended to be a water well in the Wazir Khan Chowk in Lahore, Pakistan. The well's construction in the 19th century by a Sikh nobleman sparked controversy, given its location in the immediate vicinity of the Wazir Khan Mosque. History The well was commissioned by Raja Dina Nath in the mid 19th century under the reign of Ranjit Singh. The well was not built as an open well, but is instead enclosed within a walled structure. Legend It is said that Nath wished to build his well near the site of a well dug by the Sufi saint Said Soaf, despite strong objections from local Muslim leaders who viewed construction of a second well as antagonistic to the saint's memory. Disregarding their warnings and objections, Dina Nath ordered construction to begin on the site. After 200 metres of digging, labourers could not tap a water source, and refused to dig any further, much to the embarrassment of Dina Nath. The well has remained dry ever since, and remains a local monument. Restoration The well fell into disrepair, and was eventually surrounded by illegally constructed shops which had encroached upon the Wazir Khan Chowk. In 2012, the Aga Khan Trust for Culture and the Government of Punjab launched restoration efforts which have since removed the illegal shops, restoring public access to the well. See also Haveli Dina Nath References Buildings and structures in Lahore Walled City of Lahore Sikh architecture Water wells
Well of Dina Nath
Chemistry,Engineering,Environmental_science
299
4,317,081
https://en.wikipedia.org/wiki/Axostyle
An axostyle is a sheet of microtubules found in certain protists. It arises from the bases of the flagella, sometimes projecting beyond the end of the cell, and is often flexible or contractile; it may therefore be involved in movement, and it provides support for the cell. Axostyles originate in association with a flagellar microtubular root and occur in two groups, the oxymonads and parabasalids; they have different structures and are not homologous. Within trichomonads the axostyle has been theorised to participate in locomotion and cell adhesion, as well as in karyokinesis during cell division. References Cell biology
Axostyle
Biology
143
7,663,090
https://en.wikipedia.org/wiki/Test%20effort
In software development, test effort refers to the expenses for tests still to come. It is related to test costs and failure costs (direct costs, indirect costs, and costs for fault correction). Some factors which influence test effort are: maturity of the software development process, quality and testability of the test object, test infrastructure, skills of staff members, quality goals and test strategy. Methods for estimation of the test effort Analysing all factors is difficult, because most of the factors influence each other. The following approaches can be used for the estimation: top-down estimation and bottom-up estimation. The top-down techniques are formula based and relative to the expenses for development: Function Point Analysis (FPA) and Test Point Analysis (TPA), amongst others. Bottom-up techniques are based on detailed information and often involve experts. The following techniques belong here: Work Breakdown Structure (WBS) and Wide Band Delphi (WBD). We can also use the following techniques for estimating the test effort: Conversion of software size into person hours of effort directly using a conversion factor. For example, we assign 2 person hours of testing effort per one Function Point of software size, or 4 person hours of testing effort per one use case point, or 3 person hours of testing effort per one Software Size Unit. Conversion of software size into testing project size, such as Test Points or Software Test Units, using a conversion factor, and then conversion of testing project size into effort. Compute testing project size using Test Points or Software Test Units. Methodology for deriving the testing project size in Test Points is not well documented. However, methodology for deriving Software Test Units is defined in a paper by Murali. We can also derive software testing project size and effort using the Delphi Technique or Analogy Based Estimation technique.
Test efforts from literature In the literature, test efforts relative to total project costs range between 20% and 70%. These values depend, among other things, on project-specific conditions. Across the individual phases of the test process, the effort is unevenly distributed: about 40% each for test specification and test execution. References Andreas Spillner, Tilo Linz, Hans Schäfer (2006). Software Testing Foundations - A Study Guide for the Certified Tester Exam - Foundation Level - ISTQB compliant, 1st print. dpunkt.verlag GmbH, Heidelberg, Germany. Erik van Veenendaal (editor and co-author): The Testing Practitioner. 3rd edition. UTN Publishers, CN Den Bosch, the Netherlands, 2005. Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veendendal (2005). Certified Tester - Foundation Level Syllabus - Version 2005, International Software Testing Qualifications Board (ISTQB), Möhrendorf, Germany (PDF; 0.424 MB). Andreas Spillner, Tilo Linz, Thomas Roßner, Mario Winter: Praxiswissen Softwaretest - Testmanagement: Aus- und Weiterbildung zum Certified Tester: Advanced Level nach ISTQB-Standard. 1st edition. dpunkt.verlag GmbH, Heidelberg, 2006. External links Wide Band Delphi Test Effort Estimation Information technology management Software testing
Test effort
Technology,Engineering
691
46,279,024
https://en.wikipedia.org/wiki/Patoo%20Abraham
Patoo Abraham (born 1966) is a Nigerian prostitute and sex workers' rights activist advocating for the legalization of the sex work profession in Nigeria and the decriminalization of women in prostitution. As of 2014 she is leader of the African Sex Workers Alliance (ASWA) in Nigeria. She is also the President of the Women of Power Initiative (WOPI), an NGO formed for the purpose of improving sex work in Nigeria. She has staged a series of protests on the streets of Lagos against the abuse and disregard faced by sex workers. Activism Abraham was involved as a member and leader of two activist organizations which advocate for the rights of sex workers and prostitutes in Africa: the Nigerian chapter of the African Sex Workers Alliance and the Women of Power Initiative. In 2014, Abraham, as the leader of the Nigerian chapter of the African Sex Workers Alliance, led multiple protests advocating for the rights and protections of sex workers in Nigeria. Abraham's goal as a leader of the African Sex Workers Alliance was to make sure all African sex workers obtain the equal rights and respect that any other profession or job would be afforded. In an interview during the Lagos street protests Abraham led, she stated how sex workers, such as herself, feel about their lack of rights and respect: "We are tired of dying in silence. We want to be able to practice our profession with pride like every other person. We want an end to name-calling and stigmatization." In addition to leading multiple street protests in Lagos, Abraham continued to advocate for the legalization of African sex workers' professions through her involvement in the Nigerian chapter of the ASWA and the Women of Power Initiative (WOPI). Abraham served as a leader and the president of the Women of Power Initiative. The Women of Power Initiative was a non-governmental organization which aimed to support the profession of sex workers.
African Sex Workers Alliance (ASWA) Abraham was the leader of the African Sex Workers Alliance's Nigerian branch in 2014. The African Sex Workers Alliance (ASWA) is run by sex workers; their mission is to support sex workers' rights, and to "advocate for and advance the health and human rights of female, male, and transgender sex workers." ASWA was first established in 2009, and fused together groups who desired to support sex worker rights. The groups who formed ASWA include a combination of sex workers, activists, and non-governmental organizations. As a leader of the Nigerian branch of ASWA, Abraham helped advance the goal of speaking out for equality of African sex workers by, most notably and most recalled in news outlets, leading and participating in street protests such as those in Lagos. ASWA, the alliance that Abraham was a leader of, has six primary guiding principles which guide their work. The six lead values they follow are: Accountability and Transparency, Equality and Justice, Voice and Agency, Respect, Diversity and Inclusion, and Solidarity. Accountability and Transparency means that ASWA does its best to make sure everything is truthful and presented to its members as accurately as possible. Equality and Justice relates to the way ASWA strives to ensure all members are treated equally and with respect. The Voice and Agency principle is set in place to make sure members have the ability to interact with ASWA in ways that are meaningful to them. The Respect principle means that ASWA respects all sex workers, and one of its core values is to make sure this respect is extended to and understood by all. Diversity and Inclusion has to do with intersectionality, and with joining other movements that want to dismantle exclusion and mistreatment of individuals. Solidarity, the last value stated by ASWA, aims to ensure and remind members that they are there to unite and to support one another as sex workers and as activists.
These six guidelines serve as a template for everything the ASWA does and stands for. Abraham was a primary leader of ASWA, and during her time as a leader in the organization, a qualitative study of African sex workers and feminism was conducted by author Ntokozo Yingwana. In Yingwana's qualitative research investigation, the ASWA was used as one of the two primary research groups being surveyed. Yingwana's journal article was published in 2018, but the data was gathered during 2014 and 2015, which was during the time Abraham served as a leader of ASWA. The qualitative research study was published by Duke University Press, and was conducted in order to understand what it truly means to be an "African sex worker feminist." ASWA was open to engaging in this study since they hoped it would aid in the unity and agreement among feminists who may still be uncertain as to whether sex work is something they support. The qualitative research study used ASWA participants to describe what they individually felt described the meaning of the following terms: African, sex-worker, and feminist. Additionally, each participant expressed what those terms felt like in relation to themselves. Using ASWA as primary interview participants in the study, Yingwana was able to convey the lived experiences of actual African sex workers. Yingwana expressed that their research was conducted in order to showcase different social movements, and therefore strengthen the connection and unity of a variety of different fields of activism and social movements. Lagos street protests Abraham was the primary leader of the street protests in Lagos. With Abraham's leadership, protestors were able to use clothing as one of the primary forms of their protest.
All the protestors wore t-shirts inscribed with the words, "Sex work is work, we need our rights." Additionally, Abraham's protesters were photographed carrying red umbrellas, in homage to the Red Umbrella Project, in which sex workers in Italy demonstrated in response to inhumane and cruel conditions for sex workers. The red umbrella, symbolic of sex workers' refusal to accept discrimination and unfairness in their work, was a central part of the visuals in Abraham's street protests, and was documented by the news outlet Aljazeera. The street protests were covered by a variety of different news outlets and sources based in Africa, such as Aljazeera, Legit, and The Daily Post. Abraham is pictured at the front of the protest, leading the rest of the participants as they march for sex workers' rights. Even after Abraham led multiple Lagos street protests in 2014, the Nigerian Criminal Code still treats prostitution as an illegal activity. References 1966 births Living people Nigerian sex worker activists Nigerian prostitutes Nigerian activists Sex positivism
Patoo Abraham
Biology
1,323
34,845,914
https://en.wikipedia.org/wiki/Peacock%20Clock
The Peacock Clock is a large automaton featuring three life-sized mechanical birds. It was manufactured by the entrepreneur James Cox in the second half of the 18th century, and through the influence of Grigory Potemkin, it was acquired by Catherine the Great in 1781. Today, it is a prominent exhibit in the collections of the Hermitage Museum in Saint Petersburg. The clock is also shown daily on the Russian TV channel Russia-K. References Yuna Zek, Antonina Balina, Mikhail Guryev, Yuri Semionov: The Peacock Clock – photos, history and description of the Peacock Clock at hermitagemuseum.org (website of the Hermitage Museum, archived version) Peacock Clock 18th-century robots Collection of the Hermitage Museum Automata (mechanical)
Peacock Clock
Engineering
157
9,930,635
https://en.wikipedia.org/wiki/Lubrication%20theory
In fluid dynamics, lubrication theory describes the flow of fluids (liquids or gases) in a geometry in which one dimension is significantly smaller than the others. An example is the flow above air hockey tables, where the thickness of the air layer beneath the puck is much smaller than the dimensions of the puck itself. Internal flows are those where the fluid is fully bounded. Internal flow lubrication theory has many industrial applications because of its role in the design of fluid bearings. Here a key goal of lubrication theory is to determine the pressure distribution in the fluid volume, and hence the forces on the bearing components. The working fluid in this case is often termed a lubricant. Free film lubrication theory is concerned with the case in which one of the surfaces containing the fluid is a free surface. In that case, the position of the free surface is itself unknown, and one goal of lubrication theory is then to determine this. Examples include the flow of a viscous fluid over an inclined plane or over topography. Surface tension may be significant, or even dominant. Issues of wetting and dewetting then arise. For very thin films (thickness less than one micrometre), additional intermolecular forces, such as Van der Waals forces or disjoining forces, may become significant. Theoretical basis Mathematically, lubrication theory can be seen as exploiting the disparity between two length scales. The first is the characteristic film thickness, H, and the second is a characteristic substrate length scale L. The key requirement for lubrication theory is that the ratio ε = H/L is small, that is, ε ≪ 1. The Navier–Stokes equations (or Stokes equations, when fluid inertia may be neglected) are expanded in this small parameter, and the leading-order equations are then ∂p/∂z = 0 and ∂p/∂x = μ ∂²u/∂z², where x and z are coordinates in the direction of the substrate and perpendicular to it respectively.
Here p is the fluid pressure, u is the fluid velocity component parallel to the substrate, and μ is the fluid viscosity. The equations show, for example, that pressure variations across the gap are small, and that those along the gap are proportional to the fluid viscosity. A more general formulation of the lubrication approximation would include a third dimension, and the resulting differential equation is known as the Reynolds equation. Further details can be found in the literature or in the textbooks given in the bibliography. Applications An important application area is lubrication of machinery components such as fluid bearings and mechanical seals. Coating is another major application area including the preparation of thin films, printing, painting and adhesives. Biological applications have included studies of red blood cells in narrow capillaries and of liquid flow in the lung and eye. Notes References Aksel, N.; Schörner M. (2018) "Films over topography: from creeping flow to linear stability, theory, and experiments, a review", Acta Mechanica 229: 1453–1482 Batchelor, G. K. (1976), An Introduction to Fluid Mechanics, Cambridge University Press. . Hinton E. M.; Hogg A. J.; Huppert H. E. (2019), "Interaction of viscous free-surface flows with topography", Journal of Fluid Mechanics 876: 912–938 Lister J. R. (1992) "Viscous flows down an inclined plane from point and line sources", Journal of Fluid Mechanics 242: 631–653. Panton, R. L. (2005), Incompressible Flow (3rd ed.), New York: Wiley. . San Andres, L. (2010) MEEN334 Mechanical Systems Course Notes via Internet Archive Fluid dynamics Microfluidics Tribology
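To illustrate how lubrication theory determines bearing pressures, the following sketch solves the one-dimensional Reynolds equation d/dx(h³ dp/dx) = 6μU dh/dx for a linear slider bearing by finite differences, with ambient (zero) pressure at both ends. The geometry, discretization, and parameter values are illustrative assumptions, not taken from the text:

```python
import numpy as np

def slider_pressure(h1, h2, L, U, mu, n=201):
    """Pressure in a 1-D slider bearing whose film thickness varies
    linearly from h1 (inlet) to h2 (outlet) over length L, with the
    lower surface moving at speed U and lubricant viscosity mu."""
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    h = h1 + (h2 - h1) * x / L          # linear film-thickness profile
    hm = 0.5 * (h[:-1] + h[1:])          # h at cell midpoints (i + 1/2)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0                        # boundary condition p(0) = 0
    A[-1, -1] = 1.0                      # boundary condition p(L) = 0
    for i in range(1, n - 1):
        # conservative discretization of d/dx(h^3 dp/dx), times dx^2
        A[i, i - 1] = hm[i - 1] ** 3
        A[i, i] = -(hm[i - 1] ** 3 + hm[i] ** 3)
        A[i, i + 1] = hm[i] ** 3
        # right-hand side 6*mu*U*dh/dx, times dx^2 (central difference)
        b[i] = 6.0 * mu * U * (h[i + 1] - h[i - 1]) * dx / 2.0
    return x, np.linalg.solve(A, b)
```

For a gap converging in the direction of motion (h1 > h2, U > 0), the computed pressure is positive everywhere between the ends, which is the classical source of load capacity in a slider bearing.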
Lubrication theory
Chemistry,Materials_science,Engineering
761
76,828,449
https://en.wikipedia.org/wiki/Cristina%20Dalle%20Ore
Cristina Morea Dalle Ore (born 1958) is a hyperspectral imaging and remote sensing expert, originally from Italy. After many years as an astronomer and planetary scientist, she has shifted her interests to Earth-based agricultural applications of remote sensing, as Head of Remote Science and Geospatial Intelligence for Bayer Crop Science. Her work in astronomy studied the chemical composition of objects in the far reaches of the Solar System, with a special focus on tholins, and included the discovery of ammonia on Pluto, suggesting the possibility of liquid water there as well. Education and career Dalle Ore is originally from Treviso; astronomy was a shared interest with her father, heart surgeon Mario Morea. She earned a laurea in astronomy, the Italian equivalent of a master's degree, from the University of Padua, in 1983. Next, she began graduate studies with Sandra Faber at the University of California, Santa Cruz, but was pulled away to Boston by her new husband's job there. After spending nine years raising three children and studying spectroscopy at Harvard University, she returned to UC Santa Cruz to complete her Ph.D. Her 1993 dissertation, A critical examination of stellar atmosphere theory for metal-poor K-giant stars, was supervised by Faber. Despite her early research focus on stars, a chance social connection with planetary scientist Dale Cruikshank led her to a position studying the Solar System as a research scientist for the SETI Institute and NASA Ames Research Center. She worked there beginning in 1996, with a stint as a lecturer at UC Santa Cruz from 2007 to 2008, until taking her present position at Bayer Crop Science. Recognition Minor planets 25945 Moreadalleore and 151351 Dalleore are named for Dalle Ore. References External links 1958 births Living people Italian astronomers American astronomers Women astronomers Planetary scientists Women planetary scientists University of Padua alumni University of California, Santa Cruz alumni
Cristina Dalle Ore
Astronomy
383
18,308
https://en.wikipedia.org/wiki/Lanthanide
The lanthanide () or lanthanoid () series of chemical elements comprises at least the 14 metallic chemical elements with atomic numbers 57–70, from lanthanum through ytterbium. In the periodic table, they fill the 4f orbitals. Lutetium (element 71) is also sometimes considered a lanthanide, despite being a d-block element and a transition metal. The informal chemical symbol Ln is used in general discussions of lanthanide chemistry to refer to any lanthanide. All but one of the lanthanides are f-block elements, corresponding to the filling of the 4f electron shell. Lutetium is a d-block element (thus also a transition metal), and on this basis its inclusion has been questioned; however, like its congeners scandium and yttrium in group 3, it behaves similarly to the other 14. The term rare-earth element or rare-earth metal is often used to include the stable group 3 elements Sc, Y, and Lu in addition to the 4f elements. All lanthanide elements form trivalent cations, Ln3+, whose chemistry is largely determined by the ionic radius, which decreases steadily from lanthanum (La) to lutetium (Lu). These elements are called lanthanides because the elements in the series are chemically similar to lanthanum. Because "lanthanide" means "like lanthanum", it has been argued that lanthanum cannot logically be a lanthanide, but the International Union of Pure and Applied Chemistry (IUPAC) acknowledges its inclusion based on common usage. In presentations of the periodic table, the f-block elements are customarily shown as two additional rows below the main body of the table. This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table inserts the 4f and 5f series in their proper places, as parts of the table's sixth and seventh rows (periods), respectively. The 1985 IUPAC "Red Book" (p. 45) recommends using lanthanoid instead of lanthanide, as the ending normally indicates a negative ion. 
However, owing to widespread current use, lanthanide is still allowed. Etymology The term "lanthanide" was introduced by Victor Goldschmidt in 1925. Despite their abundance, the technical term "lanthanides" is interpreted to reflect a sense of elusiveness on the part of these elements, as it comes from the Greek λανθανειν (lanthanein), "to lie hidden". Rather than referring to their natural abundance, the word reflects their property of "hiding" behind each other in minerals. The term derives from lanthanum, first discovered in 1838, at that time a so-called new rare-earth element "lying hidden" or "escaping notice" in a cerium mineral, and it is an irony that lanthanum was later identified as the first in an entire series of chemically similar elements and gave its name to the whole series. Together with the stable elements of group 3, scandium, yttrium, and lutetium, the trivial name "rare earths" is sometimes used to describe the set of lanthanides. The "earth" in the name "rare earths" arises from the minerals from which they were isolated, which were uncommon oxide-type minerals. However, these elements are neither rare in abundance nor "earths" (an obsolete term for water-insoluble strongly basic oxides of electropositive metals incapable of being smelted into metal using late 18th century technology). Group 2 is known as the alkaline earth elements for much the same reason. The "rare" in the name "rare earths" has more to do with the difficulty of separating the individual elements than the scarcity of any of them. Element 66, dysprosium, was similarly named, by way of the Greek dysprositos for "hard to get at". The elements 57 (La) to 71 (Lu) are very similar chemically to one another and frequently occur together in nature. Often a mixture of three to all 15 of the lanthanides (along with yttrium as a 16th) occurs in minerals, such as monazite and samarskite (for which samarium is named).
These minerals can also contain group 3 elements, and actinides such as uranium and thorium. A majority of the rare earths were discovered at the same mine in Ytterby, Sweden and four of them are named (yttrium, ytterbium, erbium, terbium) after the village and a fifth (holmium) after Stockholm; scandium is named after Scandinavia, thulium after the old name Thule, and the immediately-following group 4 element (number 72) hafnium is named for the Latin name of the city of Copenhagen. The properties of the lanthanides arise from the order in which the electron shells of these elements are filled—the outermost (6s) has the same configuration for all of them, and a deeper (4f) shell is progressively filled with electrons as the atomic number increases from 57 towards 71. For many years, mixtures of more than one rare earth were considered to be single elements, such as neodymium and praseodymium being thought to be the single element didymium. Very small differences in solubility are used in solvent and ion-exchange purification methods for these elements, which require repeated application to obtain a purified metal. The diverse applications of refined metals and their compounds can be attributed to the subtle and pronounced variations in their electronic, electrical, optical, and magnetic properties. Physical properties of the elements (Table notes: * between initial Xe and final 6s2 electronic shells; ** Sm has a close-packed structure like most of the lanthanides, but with an unusual 9-layer repeat.) Gschneider and Daane (1988) attribute the trend in melting point, which increases across the series (lanthanum 920 °C to lutetium 1622 °C), to the extent of hybridization of the 6s, 5d, and 4f orbitals.
The hybridization is believed to be at its greatest for cerium, which has the lowest melting point of all, 795 °C. The lanthanide metals are soft; their hardness increases across the series. Europium stands out, as it has the lowest density in the series at 5.24 g/cm3 and the largest metallic radius in the series at 208.4 pm. It can be compared to barium, which has a metallic radius of 222 pm. It is believed that the metal contains the larger Eu2+ ion and that there are only two electrons in the conduction band. Ytterbium also has a large metallic radius, and a similar explanation is suggested. The resistivities of the lanthanide metals are relatively high, ranging from 29 to 134 μΩ·cm. These values can be compared to a good conductor such as aluminium, which has a resistivity of 2.655 μΩ·cm. With the exceptions of La, Yb, and Lu (which have no unpaired f electrons), the lanthanides are strongly paramagnetic, and this is reflected in their magnetic susceptibilities. Gadolinium becomes ferromagnetic below 16 °C (its Curie point). The other heavier lanthanides – terbium, dysprosium, holmium, erbium, thulium, and ytterbium – become ferromagnetic at much lower temperatures. Chemistry and compounds (Table note: * not including the initial [Xe] core.) f → f transitions are symmetry forbidden (or Laporte-forbidden), which is also true of transition metals. However, transition metals are able to use vibronic coupling to break this rule. The valence orbitals in lanthanides are almost entirely non-bonding and as such little effective vibronic coupling takes place, hence the spectra from f → f transitions are much weaker and narrower than those from d → d transitions. In general this makes the colors of lanthanide complexes far fainter than those of transition metal complexes. Effect of 4f orbitals Viewing the lanthanides from left to right in the periodic table, the seven 4f atomic orbitals become progressively more filled (see above).
The electronic configuration of most neutral gas-phase lanthanide atoms is [Xe]6s24fn, where n is 56 less than the atomic number Z. Exceptions are La, Ce, Gd, and Lu, which have 4fn−15d1 (though even then 4fn is a low-lying excited state for La, Ce, and Gd; for Lu, the 4f shell is already full, and the fifteenth electron has no choice but to enter 5d). With the exception of lutetium, the 4f orbitals are chemically active in all lanthanides and produce profound differences between lanthanide chemistry and transition metal chemistry. The 4f orbitals penetrate the [Xe] core and are isolated, and thus they do not participate much in bonding. This explains why crystal field effects are small and why they do not form π bonds. As there are seven 4f orbitals, the number of unpaired electrons can be as high as 7, which gives rise to the large magnetic moments observed for lanthanide compounds. Measuring the magnetic moment can be used to investigate the 4f electron configuration, and this is a useful tool in providing an insight into the chemical bonding. The lanthanide contraction, i.e. the reduction in size of the Ln3+ ion from La3+ (103 pm) to Lu3+ (86.1 pm), is often explained by the poor shielding of the 5s and 5p electrons by the 4f electrons. The chemistry of the lanthanides is dominated by the +3 oxidation state, and in LnIII compounds the 6s electrons and (usually) one 4f electron are lost and the ions have the configuration [Xe]4f(n−1). All the lanthanide elements exhibit the oxidation state +3. In addition, Ce3+ can lose its single f electron to form Ce4+ with the stable electronic configuration of xenon. Also, Eu3+ can gain an electron to form Eu2+ with the f7 configuration that has the extra stability of a half-filled shell. Other than Ce(IV) and Eu(II), none of the lanthanides are stable in oxidation states other than +3 in aqueous solution. 
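The configuration rule given above ([Xe]6s2 4fn with n = Z − 56, with La, Ce, Gd, and Lu instead carrying one electron in 5d) can be sketched as a small Python function; the function name and string format are illustrative choices, not standard notation from any library:

```python
def lanthanide_configuration(z):
    """Ground-state configuration of a neutral gas-phase lanthanide atom
    (Z = 57 to 71), following the rule [Xe]6s2 4f^n with n = Z - 56.
    Exceptions La, Ce, Gd (4f^(n-1) 5d^1) and Lu (full 4f shell, so the
    fifteenth electron enters 5d) are handled explicitly."""
    if not 57 <= z <= 71:
        raise ValueError("not a lanthanide")
    n = z - 56
    if z in (57, 58, 64, 71):        # La, Ce, Gd, Lu: one 5d electron
        return f"[Xe]6s2 4f{n - 1} 5d1" if n > 1 else "[Xe]6s2 5d1"
    return f"[Xe]6s2 4f{n}"
```

For example, gadolinium (Z = 64) comes out as [Xe]6s2 4f7 5d1, reflecting the extra stability of the half-filled 4f shell, while ytterbium (Z = 70) is [Xe]6s2 4f14.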
In terms of reduction potentials, the Ln0/3+ couples are nearly the same for all lanthanides, ranging from −1.99 (for Eu) to −2.35 V (for Pr). Thus these metals are highly reducing, with reducing power similar to alkaline earth metals such as Mg (−2.36 V). Lanthanide oxidation states The ionization energies for the lanthanides can be compared with aluminium. In aluminium the sum of the first three ionization energies is 5139 kJ·mol−1, whereas the lanthanides fall in the range 3455 – 4186 kJ·mol−1. This correlates with the highly reactive nature of the lanthanides. The sum of the first two ionization energies for europium, 1632 kJ·mol−1, can be compared with that of barium, 1468.1 kJ·mol−1, and europium's third ionization energy is the highest of the lanthanides. The sum of the first two ionization energies for ytterbium is the second lowest in the series and its third ionization energy is the second highest. The high third ionization energies for Eu and Yb correlate with the half filling 4f7 and complete filling 4f14 of the 4f subshell, and the stability afforded by such configurations due to exchange energy. Europium and ytterbium form salt-like compounds with Eu2+ and Yb2+, for example the salt-like dihydrides. Both europium and ytterbium dissolve in liquid ammonia forming solutions of Ln2+(NH3)x, again demonstrating their similarities to the alkaline earth metals. The relative ease with which the 4th electron can be removed in cerium and (to a lesser extent) praseodymium indicates why Ce(IV) and Pr(IV) compounds can be formed; for example, CeO2 is formed rather than Ce2O3 when cerium reacts with oxygen. Also Tb has a well-known IV state, as removing the 4th electron in this case produces a half-full 4f7 configuration. The additional stable valences for Ce and Eu mean that their abundances in rocks sometimes vary significantly relative to the other rare earth elements: see cerium anomaly and europium anomaly.
Separation of lanthanides

The similarity in ionic radius between adjacent lanthanide elements makes it difficult to separate them from each other in naturally occurring ores and other mixtures. Historically, the very laborious processes of cascading and fractional crystallization were used. Because the lanthanide ions have slightly different radii, the lattice energies of their salts and the hydration energies of the ions differ slightly, leading to a small difference in solubility. Salts of the formula Ln(NO3)3·2NH4NO3·4H2O can be used. Industrially, the elements are separated from each other by solvent extraction. Typically an aqueous solution of nitrates is extracted into kerosene containing tri-n-butylphosphate. The strength of the complexes formed increases as the ionic radius decreases, so solubility in the organic phase increases. Complete separation can be achieved continuously by use of countercurrent exchange methods. The elements can also be separated by ion-exchange chromatography, making use of the fact that the stability constant for formation of EDTA complexes increases from log K ≈ 15.5 for [La(EDTA)]− to log K ≈ 19.8 for [Lu(EDTA)]−.

Coordination chemistry and catalysis

When in the form of coordination complexes, lanthanides exist overwhelmingly in their +3 oxidation state, although particularly stable 4f configurations can also give +4 (Ce, Pr, Tb) or +2 (Sm, Eu, Yb) ions. All of these forms are strongly electropositive and thus lanthanide ions are hard Lewis acids. The oxidation states are also very stable; with the exceptions of SmI2 and cerium(IV) salts, lanthanides are not used for redox chemistry. 4f electrons have a high probability of being found close to the nucleus and are thus strongly affected as the nuclear charge increases across the series; this results in a corresponding decrease in ionic radii, referred to as the lanthanide contraction.
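The ion-exchange separation described above works because of the steady rise in EDTA stability constants across the series. A back-of-envelope sketch of the selectivity, assuming log K rises roughly evenly over the 14 steps from La to Lu (an assumption for illustration; the real increments are uneven):

```python
# Selectivity implied by the quoted EDTA stability constants:
# log K ~ 15.5 for [La(EDTA)]- and ~ 19.8 for [Lu(EDTA)]-.
logK_La, logK_Lu = 15.5, 19.8

overall = 10 ** (logK_Lu - logK_La)            # K(Lu)/K(La) between end members
per_step = 10 ** ((logK_Lu - logK_La) / 14)    # average adjacent-pair ratio

print(f"K(Lu)/K(La) ≈ {overall:.2e}")          # ≈ 2.00e+04
print(f"adjacent-pair selectivity ≈ {per_step:.2f}")  # ≈ 2.03
```

A factor of ~2 between neighbours is modest, which is why many theoretical plates (a long column or countercurrent cascade) are needed for complete separation.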
The low probability of the 4f electrons existing at the outer region of the atom or ion permits little effective overlap between the orbitals of a lanthanide ion and any binding ligand. Thus lanthanide complexes typically have little or no covalent character and are not influenced by orbital geometries. The lack of orbital interaction also means that varying the metal typically has little effect on the complex (other than size), especially when compared to transition metals. Complexes are held together by weaker electrostatic forces which are omni-directional and thus the ligands alone dictate the symmetry and coordination of complexes. Steric factors therefore dominate, with coordinative saturation of the metal being balanced against inter-ligand repulsion. This results in a diverse range of coordination geometries, many of which are irregular, and also manifests itself in the highly fluxional nature of the complexes. As there is no energetic reason to be locked into a single geometry, rapid intramolecular and intermolecular ligand exchange will take place. This typically results in complexes that rapidly fluctuate between all possible configurations. Many of these features make lanthanide complexes effective catalysts. Hard Lewis acids are able to polarise bonds upon coordination and thus alter the electrophilicity of compounds, with a classic example being the Luche reduction. The large size of the ions coupled with their labile ionic bonding allows even bulky coordinating species to bind and dissociate rapidly, resulting in very high turnover rates; thus excellent yields can often be achieved with loadings of only a few mol%. The lack of orbital interactions combined with the lanthanide contraction means that the lanthanides change in size across the series but that their chemistry remains much the same. 
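The "loadings of only a few mol%" remark above translates directly into turnover numbers. A trivial illustration with hypothetical figures (the loading and yield below are assumed, not taken from any specific reaction):

```python
# Turnover number (TON): product formed per catalyst molecule.
# Both numbers below are illustrative assumptions.
loading_mol_pct = 2.0    # hypothetical 2 mol% lanthanide catalyst
yield_pct = 95.0         # hypothetical yield of product

ton = yield_pct / loading_mol_pct
print(ton)               # 47.5 turnovers per metal centre
```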
This allows for easy tuning of the steric environments, and examples exist where this has been used to improve the catalytic activity of the complex and change the nuclearity of metal clusters. Despite this, the use of lanthanide coordination complexes as homogeneous catalysts is largely restricted to the laboratory, and there are currently few examples of them being used on an industrial scale. Lanthanides exist in many forms other than coordination complexes, and many of these are industrially useful. In particular, lanthanide metal oxides are used as heterogeneous catalysts in various industrial processes.

Ln(III) compounds

The trivalent lanthanides mostly form ionic salts. The trivalent ions are hard acceptors and form more stable complexes with oxygen-donor ligands than with nitrogen-donor ligands. The larger ions are 9-coordinate in aqueous solution, [Ln(H2O)9]3+, but the smaller ions are 8-coordinate, [Ln(H2O)8]3+. There is some evidence that the later lanthanides have more water molecules in the second coordination sphere. Complexation with monodentate ligands is generally weak because it is difficult to displace water molecules from the first coordination sphere. Stronger complexes are formed with chelating ligands because of the chelate effect, such as the tetra-anion derived from 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA).

Ln(II) and Ln(IV) compounds

The most common divalent derivatives of the lanthanides are those of Eu(II), which achieves the favorable f7 configuration. Divalent halide derivatives are known for all of the lanthanides. They are either conventional salts or Ln(III) "electride"-like salts. The simple salts include YbI2, EuI2, and SmI2. The electride-like salts, described as Ln3+, 2I−, e−, include LaI2, CeI2 and GdI2. Many of the iodides form soluble complexes with ethers, e.g. TmI2(dimethoxyethane)3. Samarium(II) iodide is a useful reducing agent. Ln(II) complexes can be synthesized by transmetalation reactions.
The normal range of oxidation states can be expanded via the use of sterically bulky cyclopentadienyl ligands; in this way many lanthanides can be isolated as Ln(II) compounds. Ce(IV), as in ceric ammonium nitrate, is a useful oxidizing agent; it is the exception owing to its tendency to attain the stable empty 4f shell. Otherwise, tetravalent lanthanides are rare. However, recently Tb(IV) and Pr(IV) complexes have been shown to exist.

Hydrides

Lanthanide metals react exothermically with hydrogen to form dihydrides, LnH2. With the exception of Eu and Yb, which resemble the Ba and Ca hydrides (non-conducting, transparent, salt-like compounds), they form black, pyrophoric, conducting compounds where the metal sub-lattice is face-centred cubic and the H atoms occupy tetrahedral sites. Further hydrogenation produces a trihydride which is non-stoichiometric, non-conducting and more salt-like. The formation of the trihydride is associated with an increase in volume of 8–10%, and this is linked to greater localization of charge on the hydrogen atoms, which become more anionic (H− hydride anion) in character.

Halides

The only known tetrahalides are the tetrafluorides of cerium, praseodymium, terbium, neodymium and dysprosium, the last two known only under matrix-isolation conditions. All of the lanthanides form trihalides with fluorine, chlorine, bromine and iodine. They are all high-melting and predominantly ionic in nature. The fluorides are only slightly soluble in water and are not sensitive to air; this contrasts with the other halides, which are air-sensitive, readily soluble in water and react at high temperature to form oxohalides. The trihalides were important because pure metal can be prepared from them. In the gas phase the trihalides are planar or approximately planar; the lighter lanthanides have a lower percentage of dimers, the heavier lanthanides a higher proportion. The dimers have a similar structure to Al2Cl6. Some of the dihalides are conducting while the rest are insulators.
The conducting forms can be considered as LnIII electride compounds, where the electron is delocalised into a conduction band, Ln3+ (X−)2(e−). All of the diiodides have relatively short metal–metal separations. The CuTi2 structure of the lanthanum, cerium and praseodymium diiodides, along with HP-NdI2, contains 4⁴ nets of metal and iodine atoms with short metal–metal bonds (393–386 pm for La–Pr). These compounds should be considered to be two-dimensional metals (two-dimensional in the same way that graphite is). The salt-like dihalides include those of Eu, Dy, Tm, and Yb. The formation of a relatively stable +2 oxidation state for Eu and Yb is usually explained by the stability (exchange energy) of the half-filled (4f7) and fully filled (4f14) shells. GdI2 possesses the layered MoS2 structure, is ferromagnetic and exhibits colossal magnetoresistance.

The sesquihalides Ln2X3 and the Ln7I12 compounds listed in the table contain metal clusters: discrete Ln6I12 clusters in Ln7I12 and condensed clusters forming chains in the sesquihalides. Scandium forms a similar cluster compound with chlorine, Sc7Cl12. Unlike many transition metal clusters, these lanthanide clusters do not have strong metal–metal interactions, owing to the low number of valence electrons involved; instead they are stabilised by the surrounding halogen atoms. LaI and TmI are the only known monohalides. LaI, prepared from the reaction of LaI3 and La metal, has a NiAs-type structure and can be formulated La3+ (I−)(e−)2. TmI is a true Tm(I) compound; however, it has not been isolated in a pure state.

Oxides and hydroxides

All of the lanthanides form sesquioxides, Ln2O3. The lighter/larger lanthanides adopt a hexagonal 7-coordinate structure while the heavier/smaller ones adopt a cubic 6-coordinate "C-M2O3" structure. All of the sesquioxides are basic, and absorb water and carbon dioxide from air to form carbonates, hydroxides and hydroxycarbonates. They dissolve in acids to form salts.
Cerium forms a stoichiometric dioxide, CeO2, where cerium has an oxidation state of +4. CeO2 is basic and dissolves with difficulty in acid to form Ce4+ solutions, from which CeIV salts can be isolated, for example the hydrated nitrate Ce(NO3)4·5H2O. CeO2 is used as an oxidation catalyst in catalytic converters. Praseodymium and terbium form non-stoichiometric oxides containing LnIV, although more extreme reaction conditions can produce stoichiometric (or near-stoichiometric) PrO2 and TbO2. Europium and ytterbium form salt-like monoxides, EuO and YbO, which have a rock salt structure. EuO is ferromagnetic at low temperatures and is a semiconductor with possible applications in spintronics. A mixed EuII/EuIII oxide, Eu3O4, can be produced by reducing Eu2O3 in a stream of hydrogen. Neodymium and samarium also form monoxides, but these are shiny conducting solids, although the existence of samarium monoxide is considered dubious.

All of the lanthanides form hydroxides, Ln(OH)3. With the exception of lutetium hydroxide, which has a cubic structure, they have the hexagonal UCl3 structure. The hydroxides can be precipitated from solutions of LnIII. They can also be formed by the reaction of the sesquioxide, Ln2O3, with water, but although this reaction is thermodynamically favorable, it is kinetically slow for the heavier members of the series. Fajans' rules indicate that the smaller Ln3+ ions will be more polarizing and their salts correspondingly less ionic. The hydroxides of the heavier lanthanides become less basic; for example, Yb(OH)3 and Lu(OH)3 are still basic hydroxides but will dissolve in hot concentrated NaOH.

Chalcogenides (S, Se, Te)

All of the lanthanides form Ln2Q3 (Q = S, Se, Te). The sesquisulfides can be produced by reaction of the elements or (with the exception of Eu2S3) by sulfidizing the oxide (Ln2O3) with H2S. The sesquisulfides, Ln2S3, generally lose sulfur when heated and can form a range of compositions between Ln2S3 and Ln3S4.
The sesquisulfides are insulators, but some of the Ln3S4 are metallic conductors (e.g. Ce3S4), formulated (Ln3+)3 (S2−)4 (e−), while others (e.g. Eu3S4 and Sm3S4) are semiconductors. Structurally, the sesquisulfides adopt structures that vary according to the size of the Ln metal: the lighter and larger lanthanides favor 7-coordinate metal atoms, the heaviest and smallest lanthanides (Yb and Lu) favor 6-coordination, and the rest adopt structures with a mixture of 6- and 7-coordination. Polymorphism is common amongst the sesquisulfides. The colors of the sesquisulfides vary from metal to metal and depend on the polymorphic form. The colors of the γ-sesquisulfides are La2S3, white/yellow; Ce2S3, dark red; Pr2S3, green; Nd2S3, light green; Gd2S3, sand; Tb2S3, light yellow; and Dy2S3, orange. The shade of γ-Ce2S3 can be varied by doping with Na or Ca, with hues ranging from dark red to yellow, and Ce2S3-based pigments are used commercially and are seen as low-toxicity substitutes for cadmium-based pigments.

All of the lanthanides form monochalcogenides, LnQ (Q = S, Se, Te). The majority of the monochalcogenides are conducting, indicating a formulation Ln3+Q2−(e−), where the electron is in a conduction band. The exceptions are SmQ, EuQ and YbQ, which are semiconductors or insulators but exhibit a pressure-induced transition to a conducting state. Compounds LnQ2 are known, but these do not contain LnIV; they are LnIII compounds containing polychalcogenide anions.

Oxysulfides Ln2O2S are well known. They all have the same structure with 7-coordinate Ln atoms, with 3 sulfur and 4 oxygen atoms as near neighbours. Doping these with other lanthanide elements produces phosphors. As an example, gadolinium oxysulfide, Gd2O2S, doped with Tb3+ produces visible photons when irradiated with high-energy X-rays and is used as a scintillator in flat panel detectors.
When mischmetal, an alloy of lanthanide metals, is added to molten steel to remove oxygen and sulfur, stable oxysulfides are produced that form an immiscible solid.

Pnictides (group 15)

All of the lanthanides form a mononitride, LnN, with the rock salt structure. The mononitrides have attracted interest because of their unusual physical properties. SmN and EuN are reported as being "half metals". NdN, GdN, TbN and DyN are ferromagnetic; SmN is antiferromagnetic. Applications in the field of spintronics are being investigated. CeN is unusual, as it is a metallic conductor, contrasting with the other nitrides and also with the other cerium pnictides. A simple description is Ce4+N3− (e–), but the interatomic distances are a better match for the trivalent state rather than for the tetravalent state. A number of different explanations have been offered. The nitrides can be prepared by the reaction of lanthanide metals with nitrogen. Some nitride is produced along with the oxide when lanthanide metals are ignited in air. Alternative methods of synthesis are a high-temperature reaction of lanthanide metals with ammonia or the decomposition of lanthanide amides, Ln(NH2)3. Achieving pure stoichiometric compounds, and crystals with low defect density, has proved difficult. The lanthanide nitrides are sensitive to air and hydrolyse, producing ammonia.

The other pnictogens, phosphorus, arsenic, antimony and bismuth, also react with the lanthanide metals to form monopnictides, LnQ, where Q = P, As, Sb or Bi. Additionally, a range of other compounds can be produced with varying stoichiometries, such as LnP2, LnP5, LnP7, Ln3As, Ln5As3 and LnAs2.

Carbides

Carbides of varying stoichiometries are known for the lanthanides. Non-stoichiometry is common. All of the lanthanides form LnC2 and Ln2C3, which both contain C2 units. The dicarbides, with the exception of EuC2, are metallic conductors with the calcium carbide structure and can be formulated as Ln3+C22−(e–).
The C–C bond length is longer than that in CaC2, which contains the C22− anion, indicating that the antibonding orbitals of the C22− anion are involved in the conduction band. These dicarbides hydrolyse to form hydrogen and a mixture of hydrocarbons. EuC2, and to a lesser extent YbC2, hydrolyse differently, producing a higher percentage of acetylene (ethyne). The sesquicarbides, Ln2C3, can be formulated as Ln4(C2)3. These compounds adopt the Pu2C3 structure, which has been described as having C22− anions in bisphenoid holes formed by eight nearby Ln neighbours. The lengthening of the C–C bond is less marked in the sesquicarbides than in the dicarbides, with the exception of Ce2C3. Other carbon-rich stoichiometries are known for some lanthanides: Ln3C4 (Ho–Lu) containing C, C2 and C3 units; Ln4C7 (Ho–Lu) containing C atoms and C3 units; and Ln4C5 (Gd–Ho) containing C and C2 units. Metal-rich carbides contain interstitial C atoms and no C2 or C3 units. These are Ln4C3 (Tb and Lu), Ln2C (Dy, Ho, Tm) and Ln3C (Sm–Lu).

Borides

All of the lanthanides form a number of borides. The "higher" borides (LnBx where x > 12) are insulators/semiconductors, whereas the lower borides are typically conducting. The lower borides have stoichiometries of LnB2, LnB4, LnB6 and LnB12. The range of borides formed by the lanthanides can be compared to those formed by the transition metals. The boron-rich borides are typical of the lanthanides (and groups 1–3), whereas the transition metals tend to form metal-rich, "lower" borides. The lanthanide borides are typically grouped together with the group 3 metals, with which they share many similarities of reactivity, stoichiometry and structure. Collectively these are then termed the rare earth borides.
Many methods of producing lanthanide borides have been used; amongst them are direct reaction of the elements, the reduction of Ln2O3 with boron, reduction of boron oxide (B2O3) and Ln2O3 together with carbon, and reduction of metal oxide with boron carbide, B4C. Producing high-purity samples has proved to be difficult. Single crystals of the higher borides have been grown in a low-melting metal (e.g. Sn, Cu, Al).

Diborides, LnB2, have been reported for Sm, Gd, Tb, Dy, Ho, Er, Tm, Yb and Lu. All have the same AlB2 structure, containing a graphitic layer of boron atoms. Low-temperature ferromagnetic transitions occur for the Tb, Dy, Ho and Er diborides; TmB2 is ferromagnetic at 7.2 K.

Tetraborides, LnB4, have been reported for all of the lanthanides except Eu; all have the same UB4 structure. The structure has a boron sub-lattice consisting of chains of octahedral B6 clusters linked by boron atoms. The unit cell decreases in size successively from LaB4 to LuB4. The tetraborides of the lighter lanthanides melt with decomposition to LnB6. Attempts to make EuB4 have failed. The LnB4 are good conductors and typically antiferromagnetic.

Hexaborides, LnB6, have been reported for all of the lanthanides. They all have the CaB6 structure, containing B6 clusters. They are non-stoichiometric due to cation defects. The hexaborides of the lighter lanthanides (La – Sm) melt without decomposition, EuB6 decomposes to boron and metal, and the heavier lanthanides decompose to LnB4, with the exception of YbB6, which decomposes forming YbB12. The stability has in part been correlated to differences in volatility between the lanthanide metals. In EuB6 and YbB6 the metals have an oxidation state of +2, whereas in the rest of the lanthanide hexaborides it is +3. This rationalises the differences in conductivity: the extra electrons in the LnIII hexaborides enter conduction bands. EuB6 is a semiconductor and the rest are good conductors.
LaB6 and CeB6 are thermionic emitters, used, for example, in scanning electron microscopes. Dodecaborides, LnB12, are formed by the heavier, smaller lanthanides, but not by the lighter, larger metals La – Eu. With the exception of YbB12 (where Yb takes an intermediate valence and is a Kondo insulator), the dodecaborides are all metallic compounds. They all have the UB12 structure, containing a three-dimensional framework of cuboctahedral B12 clusters.

The higher boride LnB66 is known for all lanthanide metals. The composition is approximate, as the compounds are non-stoichiometric. They all have a similar complex structure with over 1600 atoms in the unit cell. The boron cubic sub-lattice contains super-icosahedra made up of a central B12 icosahedron surrounded by 12 others, B12(B12)12. Other complex higher borides LnB50 (Tb, Dy, Ho, Er, Tm, Lu) and LnB25 (Gd, Tb, Dy, Ho, Er) are known, and these contain boron icosahedra in the boron framework.

Organometallic compounds

Lanthanide–carbon σ bonds are well known; however, as the 4f electrons have a low probability of existing at the outer region of the atom, there is little effective orbital overlap, resulting in bonds with significant ionic character. As such, organo-lanthanide compounds exhibit carbanion-like behavior, unlike the behavior of transition metal organometallic compounds. Because of their large size, lanthanides tend to form more stable organometallic derivatives with bulky ligands, giving compounds such as Ln[CH(SiMe3)2]3. Analogues of uranocene are derived from dilithiocyclooctatetraene, Li2C8H8. Organic lanthanide(II) compounds are also known, such as Cp*2Eu.

Physical properties

Magnetic and spectroscopic

All the trivalent lanthanide ions, except lanthanum and lutetium, have unpaired f electrons. (Ligand-to-metal charge transfer can nonetheless produce a nonzero f-occupancy even in La(III) compounds.) However, the magnetic moments deviate considerably from the spin-only values because of strong spin–orbit coupling.
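The deviation from spin-only moments can be made concrete with Hund's rules and the Landé g-factor, μ_eff = g_J √(J(J+1)) in Bohr magnetons. This sketch reproduces the standard free-ion values for Gd3+ (4f7: S = 7/2, L = 0, J = 7/2) and Dy3+ (4f9: S = 5/2, L = 5, J = 15/2):

```python
from math import sqrt

# Effective magnetic moment of a free Ln3+ ion, in Bohr magnetons:
# mu_eff = g_J * sqrt(J(J+1)), with the Lande g-factor
# g_J = 3/2 + [S(S+1) - L(L+1)] / [2 J(J+1)].
def lande_moment(S: float, L: float, J: float) -> float:
    gJ = 1.5 + (S * (S + 1) - L * (L + 1)) / (2 * J * (J + 1))
    return gJ * sqrt(J * (J + 1))

print(round(lande_moment(3.5, 0, 3.5), 2))   # Gd3+: 7.94
print(round(lande_moment(2.5, 5, 7.5), 2))   # Dy3+: 10.65
```

Because L = 0 for Gd3+, its g-factor is 2 and the spin-only and Landé values coincide; for the other ions the orbital contribution pushes the moment away from the spin-only prediction.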
The maximum number of unpaired electrons is 7, in Gd3+, with a magnetic moment of 7.94 B.M., but the largest magnetic moments, at 10.4–10.7 B.M., are exhibited by Dy3+ and Ho3+. However, in Gd3+ all the electrons have parallel spin, and this property is important for the use of gadolinium complexes as contrast agents in MRI scans. Crystal field splitting is rather small for the lanthanide ions and is less important than spin–orbit coupling in regard to energy levels. Transitions of electrons between f orbitals are forbidden by the Laporte rule. Furthermore, because of the "buried" nature of the f orbitals, coupling with molecular vibrations is weak. Consequently, the spectra of lanthanide ions are rather weak and the absorption bands correspondingly narrow. Glass containing holmium oxide and holmium oxide solutions (usually in perchloric acid) have sharp optical absorption peaks in the spectral range 200–900 nm, can be used as a wavelength calibration standard for optical spectrophotometers, and are available commercially. As f–f transitions are Laporte-forbidden, once an electron has been excited, decay to the ground state will be slow. This makes lanthanide ions suitable for use in lasers, as it makes population inversion easy to achieve. The Nd:YAG laser is one that is widely used. Europium-doped yttrium vanadate was the first red phosphor to enable the development of color television screens. Lanthanide ions have notable luminescent properties due to their unique 4f orbitals. Laporte-forbidden f–f transitions can be activated by excitation of a bound "antenna" ligand. This leads to sharp emission bands throughout the visible, NIR, and IR and relatively long luminescence lifetimes.

Occurrence

Samarskite and similar minerals contain lanthanides in association with elements such as tantalum, niobium, hafnium, zirconium, vanadium, and titanium, from group 4 and group 5, often in similar oxidation states.
Monazite is a phosphate of numerous group 3 + lanthanide + actinide metals and is mined especially for its thorium content and for specific rare earths, mainly lanthanum, yttrium and cerium. Cerium and lanthanum, as well as other members of the rare-earth series, are often produced as a metal called mischmetal, containing a variable mixture of these elements with cerium and lanthanum predominating; it has direct uses, such as lighter flints and other spark sources, which do not require extensive purification of one of these metals. There are also lanthanide-bearing minerals based on group-2 elements, such as yttrocalcite, yttrocerite and yttrofluorite, which vary in content of yttrium, cerium, lanthanum and others. Other lanthanide-bearing minerals include bastnäsite, florencite, chernovite, perovskite, xenotime, cerite, gadolinite, lanthanite, fergusonite, polycrase, blomstrandine, håleniusite, miserite, loparite, lepersonnite and euxenite, all of which show a range of relative element concentrations and may be denoted by the predominating one, as in monazite-(Ce). Group 3 elements do not occur as native-element minerals in the fashion of gold, silver, tantalum and many others on Earth, but may occur in lunar soil. Very rare halides of cerium, lanthanum, and presumably other lanthanides, as well as feldspars and garnets, are also known to exist.

The lanthanide contraction is responsible for the great geochemical divide that splits the lanthanides into light- and heavy-lanthanide-enriched minerals, the latter being almost inevitably associated with and dominated by yttrium. This divide is reflected in the first two "rare earths" that were discovered: yttria (1794) and ceria (1803). The geochemical divide has put more of the light lanthanides in the Earth's crust, but more of the heavy members in the Earth's mantle. The result is that although large rich ore-bodies are found that are enriched in the light lanthanides, correspondingly large ore-bodies for the heavy members are few.
The principal ores are monazite and bastnäsite. Monazite sands usually contain all the lanthanide elements, but the heavier elements are lacking in bastnäsite. The lanthanides obey the Oddo–Harkins rule – odd-numbered elements are less abundant than their even-numbered neighbors. Three of the lanthanide elements have radioactive isotopes with long half-lives (138La, 147Sm and 176Lu) that can be used to date minerals and rocks from Earth, the Moon and meteorites. Promethium is effectively a man-made element, as all its isotopes are radioactive with half-lives shorter than 20 years.

Applications

Industrial

Lanthanide elements and their compounds have many uses, but the quantities consumed are relatively small in comparison to other elements. About 15,000 tonnes per year of the lanthanides are consumed as catalysts and in the production of glasses; this corresponds to about 85% of lanthanide production. From the perspective of value, however, applications in phosphors and magnets are more important. The devices lanthanide elements are used in include superconductors, samarium–cobalt and neodymium–iron–boron high-flux rare-earth magnets, magnesium alloys, electronic polishers, refining catalysts and hybrid car components (primarily batteries and magnets). Lanthanide ions are used as the active ions in luminescent materials used in optoelectronics applications, most notably the Nd:YAG laser. Erbium-doped fiber amplifiers are significant devices in optical-fiber communication systems. Phosphors with lanthanide dopants are also widely used in cathode-ray tube technology such as television sets. The earliest color television CRTs had a poor-quality red; europium as a phosphor dopant made good red phosphors possible. Yttrium iron garnet (YIG) spheres can act as tunable microwave resonators. Lanthanide oxides are mixed with tungsten to improve their high-temperature properties for TIG welding, replacing thorium, which was mildly hazardous to work with.
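The long-lived isotopes mentioned above underpin radiometric dating. A sketch of the decay arithmetic for the 147Sm → 143Nd system (half-life ≈ 1.06 × 10^11 yr); the sample ratio used below is hypothetical, chosen only to illustrate the calculation:

```python
from math import log

# Age from accumulated radiogenic daughter: t = ln(1 + D*/P) / lambda,
# where D* is radiogenic 143Nd and P is 147Sm remaining today.
half_life = 1.06e11            # years, 147Sm -> 143Nd
lam = log(2) / half_life       # decay constant, 1/yr

def age_from_ratio(daughter_over_parent: float) -> float:
    return log(1 + daughter_over_parent) / lam

# Hypothetical sample where D*/P = 0.0305:
print(f"{age_from_ratio(0.0305):.2e} years")   # ~ 4.6 billion years
```

Because the half-life is so long, only a few percent of the parent has decayed even over the age of the Solar System, which is exactly what makes 147Sm (and 176Lu, 138La) suitable for dating ancient rocks.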
Many defense-related products also use lanthanide elements, such as night-vision goggles and rangefinders. The SPY-1 radar used in some Aegis-equipped warships and some hybrid propulsion systems all use rare earth magnets in critical capacities. The price of lanthanum oxide used in fluid catalytic cracking rose from $5 per kilogram in early 2010 to $140 per kilogram in June 2011. Most lanthanides are widely used in lasers, and as (co-)dopants in doped-fiber optical amplifiers; for example, in Er-doped fiber amplifiers, which are used as repeaters in the terrestrial and submarine fiber-optic transmission links that carry internet traffic. These elements deflect ultraviolet and infrared radiation and are commonly used in the production of sunglass lenses. Other applications are summarized in the following table: The complex Gd(DOTA) is used in magnetic resonance imaging.

Life science

Lanthanide complexes can be used for optical imaging. Applications are limited by the lability of the complexes. Some applications depend on the unique luminescence properties of lanthanide chelates or cryptates. These are well-suited for this application due to their large Stokes shifts and extremely long emission lifetimes (from microseconds to milliseconds) compared to more traditional fluorophores (e.g., fluorescein, allophycocyanin, phycoerythrin, and rhodamine). The biological fluids or serum commonly used in these research applications contain many compounds and proteins which are naturally fluorescent. Therefore, the use of conventional, steady-state fluorescence measurement presents serious limitations in assay sensitivity. Long-lived fluorophores, such as lanthanides, combined with time-resolved detection (a delay between excitation and emission detection) minimize prompt fluorescence interference.
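The advantage of time-resolved detection described above comes from the enormous gap between lifetimes. A sketch with order-of-magnitude lifetimes (the values below are typical assumed magnitudes, not measured data for any specific probe):

```python
from math import exp

# After a gate delay t, an emitter with lifetime tau still emits a
# fraction exp(-t/tau) of its initial intensity.
def surviving_fraction(delay_s: float, tau_s: float) -> float:
    return exp(-delay_s / tau_s)

delay = 100e-6                                   # 100 microsecond gate delay
prompt = surviving_fraction(delay, 5e-9)         # conventional dye, ~5 ns
ln_probe = surviving_fraction(delay, 0.5e-3)     # Eu/Tb chelate, ~0.5 ms

print(prompt)                  # underflows to 0.0: background is gone
print(round(ln_probe, 3))      # 0.819 of the lanthanide signal remains
```

With a delay of only 100 µs, the nanosecond-lifetime background has decayed completely while most of the lanthanide emission is still available, which is the basis of the assay sensitivity gain.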
Time-resolved fluorometry (TRF) combined with Förster resonance energy transfer (FRET) offers a powerful tool for drug discovery researchers: time-resolved Förster resonance energy transfer, or TR-FRET. TR-FRET combines the low-background aspect of TRF with the homogeneous assay format of FRET. The resulting assay provides an increase in flexibility, reliability and sensitivity, in addition to higher throughput and fewer false positive/negative results. This method involves two fluorophores: a donor and an acceptor. Excitation of the donor fluorophore (in this case, the lanthanide ion complex) by an energy source (e.g. a flash lamp or laser) produces an energy transfer to the acceptor fluorophore if they are within a given proximity to each other (known as the Förster radius). The acceptor fluorophore in turn emits light at its characteristic wavelength. The two most commonly used lanthanides in life science assays are shown below along with their corresponding acceptor dye as well as their excitation and emission wavelengths and the resultant Stokes shift (separation of excitation and emission wavelengths).

Possible medical uses

Currently there is research showing that lanthanide elements can be used as anticancer agents. The main role of the lanthanides in these studies is to inhibit proliferation of the cancer cells. Specifically, cerium and lanthanum have been studied for their role as anti-cancer agents. One of the specific elements from the lanthanide group that has been tested and used is cerium (Ce). There have been studies that use a protein–cerium complex to observe the effect of cerium on the cancer cells. The hope was to inhibit cell proliferation and promote cytotoxicity. Transferrin receptors in cancer cells, such as those in breast cancer cells and epithelial cervical cells, promote the cell proliferation and malignancy of the cancer.
Transferrin is a protein used to transport iron into the cells and is needed to aid the cancer cells in DNA replication. Transferrin acts as a growth factor for the cancerous cells and is dependent on iron. Cancer cells have much higher levels of transferrin receptors than normal cells and are very dependent on iron for their proliferation. Lanthanide complexes with coumarin and related compounds have also demonstrated photobiological, anticancer, anti-leukemia, and anti-HIV activities. Cerium has shown results as an anti-cancer agent due to its similarities in structure and biochemistry to iron. Cerium may bind in place of iron on to the transferrin and then be brought into the cancer cells by transferrin-receptor-mediated endocytosis. The cerium binding to the transferrin in place of the iron inhibits the transferrin activity in the cell. This creates a toxic environment for the cancer cells and causes a decrease in cell growth. This is the proposed mechanism for cerium's effect on cancer cells, though the real mechanism of how cerium inhibits cancer cell proliferation may be more complex. Specifically, in HeLa cancer cells studied in vitro, cell viability was decreased after 48 to 72 hours of cerium treatment. Cells treated with just cerium had decreases in cell viability, but cells treated with both cerium and transferrin showed more significant inhibition of cellular activity. Another specific element that has been tested and used as an anti-cancer agent is lanthanum, more specifically lanthanum chloride (LaCl3). The lanthanum ion is used to affect the levels of the microRNAs let-7a and miR-34a in a cell throughout the cell cycle.
When the lanthanum ion was introduced to the cell in vivo or in vitro, it inhibited the rapid growth and induced apoptosis of the cancer cells (specifically cervical cancer cells). This effect was caused by the regulation of these microRNAs by the lanthanum ions. The mechanism for this effect is still unclear, but it is possible that the lanthanum acts in a similar way to cerium, binding to a ligand necessary for cancer cell proliferation. In the field of magnetic resonance imaging (MRI), compounds containing gadolinium are utilized extensively. Biological effects Due to their sparse distribution in the earth's crust and low aqueous solubility, the lanthanides have a low availability in the biosphere, and for a long time were not known to naturally form part of any biological molecules. In 2007, a novel methanol dehydrogenase that strictly uses lanthanides as enzymatic cofactors was discovered in a bacterium from the phylum Verrucomicrobiota, Methylacidiphilum fumariolicum. This bacterium was found to survive only if there are lanthanides present in the environment. The same nutritional requirement has also been observed in Methylorubrum extorquens and Methylobacterium radiotolerans. Compared to most other nondietary elements, non-radioactive lanthanides are classified as having low toxicity. See also Actinides, the heavier congeners of the lanthanides Group 3 element Lanthanide probes Notes References Cited sources External links lanthanide Sparkle Model, used in the computational chemistry of lanthanide complexes USGS Rare Earths Statistics and Information Ana de Bettencourt-Dias: Chemistry of the lanthanides and lanthanide-containing materials Eric Scerri, 2007, The periodic table: Its story and its significance, Oxford University Press, New York, Periodic table
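As a numeric illustration of the distance dependence described in the TR-FRET section above, the standard Förster relation E = 1 / (1 + (r/R0)^6) gives the transfer efficiency as a function of donor-acceptor separation r and the Förster radius R0. This is a minimal sketch; the function name and the value chosen for R0 are illustrative, not taken from the text.

```python
def fret_efficiency(r, r0):
    """Förster energy-transfer efficiency for a donor-acceptor distance r
    and Förster radius r0 (both in the same units, e.g. nanometres)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

R0 = 5.0  # illustrative Förster radius in nm (assumed value)
for r in (2.5, 5.0, 10.0):
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r, R0):.3f}")
# At r = R0 the efficiency is exactly 0.5; it falls off steeply
# (as r to the sixth power) at larger separations, which is why
# transfer occurs only within a given proximity of the donor.
```

The sixth-power dependence is what makes FRET useful as a "molecular ruler" on nanometre scales.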
Lanthanide
Chemistry
11,085
1,522,379
https://en.wikipedia.org/wiki/Alpha%20Pavonis
Alpha Pavonis (α Pavonis, abbreviated Alpha Pav, α Pav), formally named Peacock, is a binary star in the southern constellation of Pavo, near the border with the constellation Telescopium. Nomenclature α Pavonis (Latinised to Alpha Pavonis) is the star's Bayer designation. The historical name Peacock was assigned by His Majesty's Nautical Almanac Office in the late 1930s during the creation of the Air Almanac, a navigational almanac for the Royal Air Force. Of the fifty-seven stars included in the new almanac, two had no classical names: Alpha Pavonis and Epsilon Carinae. The RAF insisted that all of the stars must have names, so new names were invented. Alpha Pavonis was named "Peacock" ('pavo' is Latin for 'peacock') whilst Epsilon Carinae was called "Avior". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Peacock for this star and Avior for Epsilon Carinae. In Chinese astronomy, as a result of the adaptation of the European southern-hemisphere constellations into the Chinese system, an asterism meaning Peacock consists of α Pavonis, η Pavonis, π Pavonis, ν Pavonis, λ Pavonis, κ Pavonis, δ Pavonis, β Pavonis, ζ Pavonis, ε Pavonis and γ Pavonis; α Pavonis itself accordingly bears a corresponding Chinese name within this asterism. Properties At an apparent magnitude of 1.94, this is the brightest star in Pavo. Its distance from the Earth has been determined from parallax measurements. It has an estimated six times the Sun's mass and six times the Sun's radius, and 2,200 times the luminosity of the Sun. The effective temperature of the photosphere is 17,700 K, which gives the star a blue-white hue. It has a stellar classification of B3 V, although older studies have often given it a subgiant luminosity class. It is classified as B2.5 IV in the Bright Star Catalogue.
Stars with the mass of Alpha Pavonis are believed not to have a convection zone near their surface. Hence the material found in the outer atmosphere is not processed by the nuclear fusion occurring at the core. This means that the surface abundance of elements should be representative of the material out of which it originally formed. In particular, the surface abundance of deuterium should not change during the star's main sequence lifetime. The measured ratio of deuterium to hydrogen in this star is anomalously low, which suggests this star may have formed in a region with an unusually low abundance of deuterium, or else the deuterium was consumed by some means. A possible scenario for the latter is that the deuterium was burned through while Alpha Pavonis was a pre-main-sequence star. The system is likely to be a member of the Tucana-Horologium association, whose members share a common motion through space. The estimated age of this association is 45 million years. α Pavonis has a measurable peculiar velocity relative to its neighbors. Companions Three stars have been listed as visual companions to α Pavonis: two ninth magnitude stars at about four arc minutes; and a 12th magnitude F5 main sequence star at about one arc minute. The two ninth magnitude companions are only 17 arc seconds from each other. α Pavonis A is a spectroscopic binary consisting of a pair of stars that orbit around each other with a period of 11.753 days. However, in part because the two stars have not been individually resolved, little is known about the companion beyond a lower limit on its mass. One attempt to model a composite spectrum estimated components with spectral types of B0.5 and B2, and a brightness difference between the two components of 1.3 magnitudes. References External links Peacock - Jim Kaler's Stars Pavonis, Alpha B-type subgiants 193924 100751 7790 Pavo (constellation) Spectroscopic binaries Peacock Durchmusterung objects
Alpha Pavonis
Astronomy
906
2,607,325
https://en.wikipedia.org/wiki/Project%20DReaM
Project DReaM was a Sun Microsystems project aimed at developing an open interoperable DRM architecture that implements standardized interfaces. Its primary goal was the creation of a royalty-free digital rights management industry standard. On 22 August 2005, Sun announced that it was opening up Project DReaM, which had started as an internal research project, as part of their Open Media Commons initiative. It was released under the Common Development and Distribution License (CDDL). Due to inactivity on the project, it was closed and archived in August 2008. DReaM is an acronym that stands for "DRM everywhere/available". Project DReaM included a Java Stream Assembly API to support digital video management and distribution, a hardware- and operating system-independent interoperable DRM standard called DRM-OPERA, and the Sun Streaming Server to stream video and audio over IP. The key characteristics of Project DReaM were as follows: Network identity focus: Project DReaM approached DRM (and CAS) from a network identity management-focused perspective, rather than a device-centric approach. Interoperability: Project DReaM used an open approach and fully specified everything necessary to build heterogeneous, interoperable, vendor-neutral implementations. No reliance on security through obscurity: Project DReaM's architecture did not follow the traditional model of security through obscurity, which must maintain a closed source code base in order to operate securely. Royalty-free design model: Project DReaM was designed to be royalty free, allowing developers to avoid encumbered technology that carries onerous licensing costs. Project DReaM technology required the software code to be signed and run on trusted computing hardware, on which unauthorized or unsigned code could not be run. This approach was criticized by journalist Cory Doctorow, who characterized Project DReaM as crippleware.
Project DReaM was favorably mentioned by Mike Linksvayer in a 2008 article discussing its support for fair use and Creative Commons-licensed content. See also Open Media Commons References External links OpenMediaCommons.org website Project DReaM press release Sun Microsystems Digital rights management standards Software using Common Development and Distribution License
Project DReaM
Technology
422
38,907,916
https://en.wikipedia.org/wiki/Fbsp%20wavelet
In applied mathematics, fbsp wavelets are frequency B-spline wavelets. These are complex wavelets whose spectrum is a spline, defined by fbsp^{(m, f_b, f_c)}(t) = \sqrt{f_b} \, \operatorname{sinc}^m\!\left(\frac{f_b t}{m}\right) e^{2\pi i f_c t}, where sinc is the sinc function that appears in the Shannon sampling theorem, m > 1 is the order of the spline, f_b is a bandwidth parameter, and f_c is the wavelet center frequency. The Shannon wavelet (sinc wavelet) is then clearly a special case of fbsp. References S.G. Mallat, A Wavelet Tour of Signal Processing, Academic Press, 1999. C.S. Burrus, R.A. Gopinath, H. Guo, Introduction to Wavelets and Wavelet Transforms: A Primer, Prentice-Hall, 1988. O. Cho, M-J. Lai, A Class of Compactly Supported Orthonormal B-Spline Wavelets in: Splines and Wavelets, Athens 2005, G Chen and M-J Lai Editors, pp. 123–151. M. Unser, Ten Good Reasons for Using Spline Wavelets, Proc. SPIE, Vol. 3169, Wavelets Applications in Signal and Image Processing, 1997, pp. 422–431. Continuous wavelets
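The fbsp definition above can be sketched numerically in plain Python. This is a minimal stdlib-only sketch; the function and parameter names are illustrative and not from any wavelet library.

```python
import cmath
import math

def sinc(x):
    """Normalized sinc, sin(pi x)/(pi x), with sinc(0) = 1 --
    the same function that appears in the Shannon sampling theorem."""
    if x == 0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

def fbsp(t, m=2, fb=1.0, fc=1.0):
    """Frequency B-spline wavelet of spline order m, bandwidth fb and
    center frequency fc: sqrt(fb) * sinc(fb*t/m)**m * exp(2*pi*i*fc*t)."""
    return math.sqrt(fb) * sinc(fb * t / m) ** m * cmath.exp(2j * math.pi * fc * t)

# At t = 0 the sinc factor and the complex exponential both equal 1,
# so the wavelet value reduces to sqrt(fb).
print(fbsp(0.0, m=2, fb=1.0, fc=0.5))  # (1+0j)
```

With m = 1 the sinc envelope is not raised to a higher power, which recovers the Shannon (sinc) wavelet mentioned above as a special case.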
Fbsp wavelet
Mathematics
252
46,328,125
https://en.wikipedia.org/wiki/Ganoderma%20sessile
Ganoderma sessile is a species of polypore fungus in the Ganodermataceae family. There has been taxonomic uncertainty surrounding this fungus since its circumscription in 1902. This wood decay fungus is found commonly in Eastern North America, and is associated with declining or dead hardwoods. Taxonomy Murrill described 17 new Ganoderma species in his treatises of North American polypores, including, for example, G. oregonense, G. sessile, G. tsugae, G. tuberculosum and G. zonatum. Most notable and controversial was the typification of Ganoderma sessile, which was described from various hardwoods only in the United States. The specific epithet "sessile" comes from the sessile (without typical stem) nature of this species when found growing in a natural setting. Ganoderma sessile was distinguished based on a sessile fruiting habit, common on hardwood substrates and occasionally having a reduced, eccentric or "wanting" stipe. In 1908, Atkinson considered G. tsugae and G. sessile as synonyms of G. lucidum, but erected the species G. subperforatum from a single collection in Ohio on the basis of having "smooth" spores. Although he did not recognize the genus Ganoderma, instead keeping taxa in the genus Polyporus, Overholts considered G. sessile a synonym of the European G. lucidum. In a 1920 report on Polyporaceae of North America, Murrill conceded that G. sessile was closely related to the European G. lucidum. Approximately a decade later, Haddow considered G. sessile a unique taxon, but suggested Atkinson's G. subperforatum was a synonym of G. sessile, on the basis of the "smooth" spores that had been Atkinson's original justification for the species in 1908. Until this point, all identifications of Ganoderma taxa were based on fruiting body morphology, geography, host, and spore characters. In 1948, and in amended form in 1965, Nobles characterized the cultural characteristics of numerous wood-inhabiting hymenomycetes, including Ganoderma taxa.
Her work laid the foundation for culture-based identifications in this group of fungi. Nobles recognized that there were differences in cultural characteristics between G. oregonense, G. sessile, and G. tsugae. Although Nobles recognized G. lucidum in her 1948 publication as a correct name for the taxon from North American isolates that produce numerous broadly ovoid to elongate chlamydospores (12–21 x 7.5–10.5 μm), she corrected this misnomer in 1968 by amending the name to G. sessile. Others agreed with Haddow's distinction between G. lucidum and G. sessile on the basis of smooth spores, but synonymized G. sessile with G. resinaceum, a previously described European taxon. Others demonstrated the similarity in culture morphology and that vegetative compatibility was successful between the North American taxon recognized as ‘G. lucidum’ and the European G. resinaceum. In the monograph of North American Polypores written in 1986, which is still the only comprehensive treatise on this group of fungi unique for North America, the authors did not recognize G. sessile, but rather the five species present in the U.S.: G. colossum (Fr.) C.F. Baker (current name: Tomophagus colossus (Fr.) Murrill), G. curtisii, G. lucidum, G. oregonense, and G. tsugae. Molecular taxonomy In a multilocus phylogeny, the authors revealed that the global diversity of the laccate Ganoderma species included three highly supported major lineages that separated G. oregonense/G. tsugae from G. zonatum and from G. curtisii/G. sessile, and these lineages were not correlated to geographical separation. These results agree with several of the earlier works focusing mostly on morphology, geography and host preference showing genetic affinity of G. resinaceum and G. sessile, but with statistical support separating the European and North American taxa. Also, Ganoderma curtisii and G. 
sessile were separated with high levels of statistical support, although there was not enough information to say they were from distinct lineages. Lastly, G. sessile was not sister to G. lucidum. The phylogeny supported G. tsugae and G. oregonense as sister taxa to the European taxon G. lucidum sensu stricto. Description Fruiting bodies annual and sessile (without a stipe) or pseudostipitate (very small stipe). Fruiting bodies found growing on trunks or root flares of living or dead hardwood trees. Mature fruiting bodies are laccate and reddish-brown, often with a wrinkled margin if dry. Fruiting bodies are shelf-like if on stumps, or overlapping clusters of fan-shaped (flabelliform) fruiting bodies if growing from underground roots, and vary in diameter. Hymenium white, bruising brown, and poroid with irregular pores that can range in shape from circular to angular. The context tissue is cream colored and can be thin to thick and on average the same length as the tubes. Black resinous deposits are never found embedded in the context tissue, but concentric zones are often found. Spores appear smooth, or nearly so, due to the fine (thin) echinulations from the endosporium. The spores can be used to differentiate the species from other common Eastern North American species such as Ganoderma curtisii (Berk.) Murrill. Elliptical to obovate to obpyriform chlamydospores formed in vegetative mycelium, and are abundant in cultures. Distribution Very common taxon, being found in practically every state east of the Rocky Mountains within the United States. Uses For centuries, laccate (varnished or polished) Ganoderma species have been used in traditional Chinese medicine. These species are often sold as G. lucidum, although genetic testing has shown that traditional Chinese medicine uses multiple species, such as G. lingzhi, G. multipileum, and G. sichuanense.
References External links Ganoderma sessile images at Mushroom Observer Fungi described in 1902 Fungi of North America Fungal plant pathogens and diseases sessile Fungus species
Ganoderma sessile
Biology
1,389
2,795,762
https://en.wikipedia.org/wiki/Object%20orgy
In computer programming, an object orgy is a situation in which objects are insufficiently encapsulated via information hiding, allowing unrestricted access to their internals. This is a common failure (or anti-pattern) in object-oriented design or object-oriented programming, and it can lead to increased maintenance needs and problems, and even unmaintainable complexity. Consequences The results of an object orgy are mainly a loss of the benefits of encapsulation, including: Unrestricted access makes it hard for a reader to reason about the behaviour of an object. This is because direct access to its internal state means any other part of the system can manipulate it, increasing the amount of code to examine, and creating means for future abuse. As a consequence of the difficulty of reasoning, design by contract is effectively impossible. If much code takes advantage of the lack of encapsulation, the result is a scarcely maintainable maze of interactions, commonly known as a rat's nest or spaghetti code. The original design is obscured by the excessively broad interfaces to objects. The broad interfaces make it harder to re-implement a class without disturbing the rest of the system. This is especially hard when clients of a class are developed by a different team or organisation. Forms Encapsulation may be weakened in several ways, including: By declaring internal members public, or by providing free access to data via public mutator methods (setters). By providing overly permissive non-public access; for example, see Java access modifiers and accessibility levels in C#. In C++, via some of the above means, and by declaring friend classes or functions. An object may also make its internal data accessible by passing references to them as arguments to methods or constructors of other classes, which may retain references. In contrast, objects holding references to one another, though sometimes described as a form of object orgy, do not by themselves breach encapsulation.
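To make the contrast concrete, here is a small hypothetical Python sketch (class and attribute names invented for illustration): the first class exposes its internals to any caller, inviting the unrestricted access described above, while the second restricts access to a narrow interface.

```python
class LeakyAccount:
    """Under-encapsulated: internals are open to an 'object orgy'."""
    def __init__(self):
        self.balance = 0.0   # public attribute: any code may mutate it freely

class Account:
    """Encapsulated: state is reachable only through a narrow interface."""
    def __init__(self):
        self._balance = 0.0  # internal; no public setter is exposed

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    @property
    def balance(self):       # read-only view of the internal state
        return self._balance

leaky = LeakyAccount()
leaky.balance = -1_000_000   # nothing stops a client from breaking invariants

safe = Account()
safe.deposit(50.0)
print(safe.balance)          # 50.0
# safe.balance = -1 would raise AttributeError, since no setter is defined
```

Reasoning about `Account` requires reading only its own methods; reasoning about `LeakyAccount` requires reading every client that touches it.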
Causes Members may be declared public to avoid the effort or syntactic overhead of providing proper accessors for them. This may increase the readability of the class, but at the cost of the consequences described above. For some languages, a member intended to be readable by other objects can be made modifiable because the language has no convenient construct for read-only access. An object orgy may be a symptom of coding to an immature and anemic design, when a designer has insufficiently analysed the interactions between objects. It can also arise from laziness or haste in implementing a design, especially if a programmer does not communicate enough with a designer, or from reluctance to revise a design when problems arise, which also encourages many other anti-patterns. Many programmers view objects as anemic data repositories and manipulate them directly, violating the principles of information hiding, encapsulation and design by contract. Solutions In general, encapsulation is broken because the design of other classes requires it, and a redesign is needed. If that is not the case, it may be sufficient to re-code the system according to best practices. Once the interfaces are published irrevocably, it may be too late to fix them. References External links PerlDesignPatterns.com Anti-patterns
Object orgy
Technology
669
76,850,085
https://en.wikipedia.org/wiki/AT2018hyz
AT2018hyz is a tidal disruption event (TDE) that was discovered in 2018 by the All Sky Automated Survey for SuperNovae (ASAS-SN). History In 2022, astronomers announced the discovery of radio emission from AT2018hyz using the Very Large Array (VLA), MeerKAT, and the Australia Telescope Compact Array (ATCA), despite no radio emission having been detected earlier. The emission is still rising rapidly, and has been interpreted as an outflow of material that was "burped" several years after the initial TDE from the accretion disk of the supermassive black hole, traveling at up to half the speed of light. Alternately, it has been proposed that the delayed radio emission from AT2018hyz could be due to an off-axis astrophysical jet, which launched promptly when the star was disrupted (similar to the TDE Swift J1644+57), and whose emission only became visible later when it entered our line of sight. Host galaxy The host galaxy for AT2018hyz is 2MASS J10065085+0141342, known as LEDA 3119592 or 2dFGRS TGN421Z052, located at redshift z = 0.04573. It is classified as a dormant post-starburst galaxy, or type E+A galaxy. At the host galaxy's redshift it has an absolute g-band magnitude of −20.2, and the galaxy contains a low-mass black hole of about 10^6 M⊙. See also AT2019qiz RX J1242-11 References Tidal disruption events black holes Sextans
AT2018hyz
Physics,Astronomy
351
25,153,936
https://en.wikipedia.org/wiki/Performance-based%20building%20design
Performance-Based Building Design is an approach to the design of any complexity of building, from single-detached homes up to and including high-rise apartments and office buildings. A building constructed in this way is required to meet certain measurable or predictable performance requirements, such as energy efficiency or seismic load, without a specific prescribed method by which to attain those requirements. This is in contrast to traditional prescriptive building codes, which mandate specific construction practices, such as stud size and distance between studs in wooden frame construction. Such an approach provides the freedom to develop tools and methods to evaluate the entire life cycle of the building process, from the business dealings, to procurement, through construction and the evaluation of results. Background One of the first implementations of performance-based building design requirements was in Hammurabi's Code (c. 1795 to 1750 BC), where it is stated that "a house should not collapse and kill anybody". This concept is also described in Vitruvius's "De architectura libri decem" ("The Ten Books of Architecture") in the first century BC. In modern times, the first definition of performance-based building design was introduced in 1965 in France by Blachère with the Agrément system. Despite this, the building process remained relatively conventional for the next 50 years, based solely on experience and on codes and regulations prescribed by law, which stifled innovation and change. The prescriptive approach is a technical procedure based on past experience which consists of comparing the proposed design with standardized codes, so no simulation or verification tools are needed for the design and building process.
A new approach began to emerge during the second half of the 20th century, when many local building markets began to show that they needed greater flexibility in procurement procedures to facilitate the exchange of building goods between countries and to improve the speed of procedures and innovations in the building process. This innovative approach to the procurement, design, contracting, management and maintenance of buildings was performance-based building design (PBBD). The clearest definition of the performance-based building approach was given in 1982 by the CIB W60 Commission in report No. 64, where Gibson stated that "first and foremost, the performance approach is [...] the practice of thinking and working in terms of ends rather than means. [...] It is concerned with what a building or building product is required to do, and not with prescribing how it is to be constructed". Many research establishments have studied the implementation of PBBD during the last fifty years. A majority of areas of building design remain open to innovation. During 1998-2001, the CIB Board and Programme Committee initiated the Proactive Programme on Performance-Based Building in order to practically implement technical developments of performance-based building. This programme was followed by the establishment of the Performance-Based Building (PeBBu) Thematic Network, running from October 2001 to October 2005, thanks to funds from the European Commission (EC) Fifth Framework Programme. The PeBBu Network had a broad and varied programme and set of activities, and produced many papers to aid in the implementation of such a vision. PeBBu Thematic Network The PeBBu Thematic Network was managed by the CIB General Secretariat (International Council for Research and Innovation in Building Construction), particularly by the CIB Development Foundation (CIBdf). The PeBBu Network started working in 2001 and completed in 2005.
In the PeBBu Network, 73 organisations, including CIBdf (the coordinating contractor), BBRI (Belgium), VTT (Finland), CSTB (France), EGM (Netherlands), TNO (Netherlands) and BRE (UK), cooperated on this project, bringing people together to share their work, information and knowledge. The objective of the Network was to stimulate and facilitate international dissemination and implementation of Performance Based Building in the building and construction sector, maximising the contribution to this by the international research and development community. The PeBBu Thematic Network's results are described and explained in 26 final reports, which included three reports with an overall PBB scope, a multitude of research reports from the PeBBu Domains, User Platforms and Regional Platforms, a final management report and four practice reports providing practical support for the actual application of the PBB concept in the building and construction sector. PBB: Conceptual framework A conceptual framework for implementing a PBB market was identified while reviewing various viewpoints during the compilation of the 2nd International State of the Art Report for the PeBBu Thematic Network (Becker and Foliente 2005). The building facility is a multi-component system with a generally very long life cycle. The system's design agenda as a whole, and the more specific design objectives of its parts, originate from relevant user requirements. These requirements evolve into a comprehensive set of Performance Requirements that should be established by a large number of stakeholders (the users, entrepreneur/owner, regulatory framework, design team, and manufacturers).
The main steps in a Performance Based Building Design process are: identifying and formulating the relevant User Requirements; transforming the User Requirements identified into Performance Requirements and quantitative performance criteria; and using reliable design and evaluation tools to assess whether proposed solutions meet the stated criteria at a satisfactory level. Performance concept In a performance-based approach, the focus of all decisions is on the performance required in use for the business processes and the needs of the users, and then on the evaluation and verification of the resulting building assets. The performance approach can be used whether the process concerns existing or new assets. It is applicable to the procurement of constructed assets and to any phase of the whole life cycle Building Process, such as strategic planning, asset management, briefing/programming, design and construction, operation and maintenance, management and use, renovations and alterations, and codes, regulations and standards. It includes many topics and criteria, which can be categorized as physical, functional, environmental, financial, economic, psychological, social, facility-related, and others. These criteria are specific to each project, according to its context and situation. Two key characteristics of the performance concept The performance concept is based on two key characteristics: the use of two languages, one for the clients'/users' requirements and the other for the supplied performance; and the need for validation and verification of results against performance targets. Two languages The performance concept requires two languages: the language of demand requirements and the language of the performance that must be capable of fulfilling that demand. It is important to recognize that these languages are different.
Szigeti and Davis (Performance Based Building: Conceptual Framework, 2005) explain that "the dialog between client and supplier can be described as two halves of a "hamburger bun", with the statement of the requirement in functional or performance language (FC - functional concept) matched to a solution (SC - solution concept) in more technical language, and the matching, verification / validation that needs to occur in between". In a 2005 paper, Ang, Groosman, and Scholten explain that the functional concept represents the set of unquantified objectives and scopes to be satisfied by the supply solutions, related to performance requirements. The solution concept represents a technical realization that satisfies at least the required performance. A design decision is the development of a solution concept. Assessing result – match and compare Building performance evaluation is the process of systematically comparing and matching the performance in use of building assets with explicitly documented or implicit criteria for their expected performance. In the PBB approach, matching and comparing demand and supply is essential. This can be done using a validation method based on measurement, calculation, or testing. Tools and methods are used to permit some form of measurement or testing of the requirements, and the related measurement of the capability of assets to perform. There are many types of in-depth specialized technical evaluations and audits. These validations generally require time, a major effort by the customer group, and a high level of funding. Normally, the most valuable methods and tools are comprehensive scans which are performance based and include metrics that can easily be measured without lab-type instruments. Evaluations and reviews are an integral part of asset and portfolio management, design, construction, and commissioning.
Evaluations can be used for different purposes, depending on the requirements being considered: for example, they could be used in support of funding decisions; they could include a condition assessment to ensure that the level of degradation or obsolescence is known; or they could include an assessment of utilization, or of the capability of the resulting product to meet the expected functional requirements. Such evaluations can be used at any time during the life cycle of the asset. PBB evaluations should be done routinely; in practice, however, they are often done only as part of commissioning or shortly thereafter, or when there is a problem. There are two different kinds of performance verification. Performance evaluations rate the physical asset according to a set of existing criteria and indicators of capability, and match the results against the required levels of performance. Occupant satisfaction surveys record the perceptions of the users, usually through a scale of satisfaction measurements. Both types of evaluation complement each other. Tools Innovative decision-support methodologies are emerging in the building sector. There are some tools explicitly based on the demand and supply concepts, and others which employ standardized performance metrics that for the first time link facility condition to the functional requirements of organizations and their customers. Projects can be planned, prioritized, and budgeted using a multi-criteria approach that is transparent, comprehensive and auditable. One of the methodologies that can be used is a gap analysis based on calibrated scales that measure both the levels of requirements and the capability of the asset that is either already in use, being designed, or offered for purchase or lease. This methodology is an ASTM and American National Standards Institute (ANSI) standard and is currently being considered as an ISO standard.
It is particularly useful when the information about the "gap", if any, can be presented in support of funding decisions and actions. There are a large number of verification methodologies (e.g. POEs, CRE-FM), and all of these need to refer back to explicit statements of requirements to be able to compare with expected performance. To evaluate the result of a building asset against the expected performance requirements, it is necessary to establish the tools used during the process. These tools serve as the reference for the whole life-cycle building process, so organizations use key performance indicators (KPIs) to prove that they are meeting the targets that have been set by senior management. At the same time, performance measurement (PM) becomes central to managing organizations, their operations and logistic support. These methodologies include the feedback loop that links a facility in use to the requirements and capabilities that are compared and matched whenever decisions are needed. Performance approach and prescriptive approach A prescriptive approach describes the way a building asset must be constructed, rather than the end result of the building process, and is related to the type and quality of materials used, the method of construction, and the workmanship. This type of approach is strictly mandated by a combination of law, codes, standards, and regulations, and is based on past experience and consolidated know-how. The content of prescriptive codes and standards usually arises as a consequence of an accident causing injury or death that requires a remedy to avoid a repeat, of some hazardous situation, or of some recognized social need. In many countries, in both the public and private sector, research is taking place into a different set of codes, methods and tools based on performance criteria to complement the traditional prescriptive codes.
In the 1970s, this search produced the "Nordic Model" (NKB 1978), which became the reference model for subsequent performance-based codes. This model links directly to one of the key characteristics of the performance approach: the dialogue between the why, the what, and the how. Using a performance-based approach does not preclude the use of prescriptive specifications. Although the benefits of adopting a PBB approach are significant, it is recognized that employing a performance-based approach at any stage in the building process is more complex and expensive than using the simpler prescriptive route, so the application of this approach should not be regarded as an end in itself. When simple buildings are concerned, or well-proven technologies are used, prescriptive codes are more effective, efficient, faster, or less costly, so prescriptive specifications will continue to be useful in many situations. For complex projects, on the other hand, use of the performance-based route at every stage is indispensable, particularly during the design and evaluation phases. It is not likely that a facility will be planned, procured, delivered, maintained, used, and renovated using solely performance-based documents at each step of the way, down the supply chain, to the procurement of products and materials, because there is not yet enough experience with the performance-based building approach. At the same time, the prescriptive approach can stifle change and innovation, so the best way to set up the building process is to blend the two approaches. Statements of Requirements (SoR) The Statements of Requirements represent a reference for the whole life-cycle management of facilities; they are the core of the conceptual framework that emerged from the PeBBu Thematic Network, and they constitute the key to implementation of PBB in the construction sector. 
The SoR is a document prepared by clients, or a set of verbal statements communicated to suppliers, based on the users' functional needs. These user requirements are converted into performance requirements, which can be explicit or implicit. Such a document should include information about what is essential to the client. SoRs take different forms depending on the kind of client, what is being procured, and at what phase of the life cycle or where in the supply chain the document is being used. The SoR should be dynamic, not static, and should include more and more detail as the project proceeds. The document should be prepared at different levels of granularity; how detailed the documentation is at each stage depends on the complexity of the project and on the procurement route chosen. The SoR is a very important part of a continuous process of communication between clients (demand) and their project team (supply); it is updated and managed using computerized tools and contains all requirements throughout the life of the facility. This process is called "briefing" in UK and Commonwealth English, and "programming" in American English. An SoR is normally prepared for any project, whether it is a PBB project or not. Assembling such a document usually leads to a more appropriate match between the needs of clients and users and the constructed assets. Statements of Requirements have to be stated very carefully so that it is easy to verify that a proposed solution can meet them. High-level statements of requirements need to be paired with indicators of capability so that design solutions can be evaluated before they are built, in order to avoid mistakes. 
In the SoR it is important to take into account design aspects like flexibility indicators, because constructed assets need to change during their life cycle: uses and activities can change very rapidly, so it is essential to test different ways that the spaces might be used against anticipated changes. SoRs, as understood in ISO 9000, include not only what the client requires and is prepared to pay for, but also the process and indicators that will provide the means to verify, and validate, that the product or service delivered meets those stated requirements. As part of the worldwide movement to implement a PBB approach and to develop tools that will make it easier to shift to PBB, the International Alliance for Interoperability (IAI) set up projects to map the processes that are part of whole life-cycle management, such as Portfolio and Asset Management: Performance (PAMPeR) and Early Design (ED). The IAI efforts are complemented by many other efforts to create standards for the information to be captured and analyzed to verify performance-in-use. Performance requirements (PR) Performance requirements translate user requirements into more precise, quantitative, measurable, and technical terms, usually for a specific purpose. The supply team prepares a document that includes objectives and goals, performance requirements, and criteria. It is important to include indicators of performance so that results can be measured against explicit requirements, whether qualitative or quantitative. Performance indicators need to be easily understood by the users and the evaluators. To validate the indicators and verify that the required performance-in-use has been achieved, it is necessary to use appropriate methods and tools. Levels of performance requirements can be stated as part of the preparation of SoRs, as part of project programs, or as part of requests for proposals and procurement contracts. 
It is preferable to adopt a flexible approach to the expression and comparison of performance levels, so that required and achieved performance can be expressed not as single values but as bands between upper and lower limits. In consequence, in performance terms the criteria can be expressed as graduated scales, divided into broad bands. Performance based codes In the building and construction industry, until 25–30 years ago, prescriptive codes, regulations, and standards made innovation and change difficult and costly to implement, and created technical restrictions to trade. These concerns have been the major drivers towards the use of a performance-based approach to codes, regulations, and standards. Performance-based building regulations have been implemented or are being developed in many countries, but they have not yet reached their full potential. In part, this can be attributed to the fact that the overall regulatory system has not yet been fully addressed, and gaps exist in several key areas. Bringing the regulatory and non-regulatory models together is probably the best way to work. This is shown in the "Total Performance System Models" diagram (Meacham et al. 2002), which maps the flow of decision-making from society and business objectives to construction solutions. The difference between the regulatory and non-regulatory parts of the Total Performance System Models is that the former is mandated by codes and regulations based on the law, while the other functional requirements, included in Statements of Requirements, are an integral part of what the client requires and is willing to pay for. Consequences relating to procedure For procurements in the public sector and for publicly traded corporations, it is important that decisions and choices are transparent and explicit, regardless of the specific procurement route. All procurement processes can be either prescriptive or performance based. 
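Picking up the earlier point about expressing performance criteria as graduated scales divided into broad bands, the idea can be sketched as follows. The quantity chosen (airborne sound insulation, in dB) and the band boundaries are invented for illustration and are not taken from any code or standard.

```python
# Illustrative sketch of a graduated performance scale divided into
# broad bands, with a requirement stated as a band rather than a point
# value. All boundaries and labels are invented for illustration.

def classify(value, bands):
    """Return the label of the band containing the measured value.

    bands: list of (upper_limit, label) pairs, sorted by upper_limit;
    the last band should have an upper limit of infinity.
    """
    for upper, label in bands:
        if value <= upper:
            return label

sound_scale = [(35.0, "poor"), (45.0, "adequate"),
               (55.0, "good"), (float("inf"), "excellent")]

required_band = "adequate"          # requirement expressed as a band
measured = 41.0                     # measured sound insulation, dB
meets_requirement = classify(measured, sound_scale) == required_band
```

Because both the requirement and the achieved result are expressed on the same graduated scale, comparison reduces to checking which band a measurement falls into, rather than matching an exact value.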
Design-build, public-private partnerships (PPP), the private finance initiative (PFI), and similar procurement procedures are particularly suited to a strong application of Performance Based Building. If the expected performance is not stated explicitly and verifiably, these procurement methods are more likely to lead to disappointments and legal problems. To obtain the benefits of these procurement approaches, it is essential to organize the services of the supply chain so as to obtain innovative, less costly, or better solutions by shifting decisions about "how" to the integrated team.
References
Regulatory
ISO 6240:1980, Performance standards in building – Contents and presentation
ISO 6241:1984, Performance standards in building – Principles for their preparation and factors to be considered
ISO 6242-1:1992, Building construction – Expression of user's requirements – Part 1: Thermal requirements
ISO 6242-2:1992, Building construction – Expression of user's requirements – Part 2: Air purity requirements
ISO 6242-3:1992, Building construction – Expression of user's requirements – Part 3: Acoustical requirements
ISO 6243:1997, Climatic data for building design: proposed systems of symbols
ISO 7162:1992, Performance standards in building – Contents and format of standards for evaluation of performance
ISO 19208:2016, Framework for specifying performance in buildings
ISO 9836:1992, Performance standards in building – Definition and calculation of area and space indicators
ISO 9000:2000, Quality management systems – Fundamentals and vocabulary
ISO 9001:2000, Quality management systems – Requirements
CEN (2002). EN 12152:2002 Curtain Walling – Air Permeability – Performance Requirements and Classification. CEN, European Committee for Standardization, Brussels.
CEN (2002–2007). Structural Eurocodes (EN 1990 – Eurocode: Basis of structural design. EN 1991 – Eurocode 1: Actions on structures. EN 1992 – Eurocode 2: Design of concrete structures. 
EN 1993 – Eurocode 3: Design of steel structures. EN 1994 – Eurocode 4: Design of composite steel and concrete structures. EN 1995 – Eurocode 5: Design of timber structures. EN 1996 – Eurocode 6: Design of masonry structures. EN 1997 – Eurocode 7: Geotechnical design. EN 1998 – Eurocode 8: Design of structures for earthquake resistance. EN 1999 – Eurocode 9: Design of aluminium structures). CEN, European Committee for Standardization, Brussels.
CEN (2004). EN 13779:2004 – Ventilation for Non-residential Buildings – Performance Requirements for Ventilation and Room-Conditioning Systems. CEN, European Committee for Standardization, Brussels.
UNI 8290-1:1981 + A122:1983, Residential building. Building elements. Classification and terminology
UNI 8290-2:1983, Residential building. Building elements. Analysis of requirements
UNI 8290-3:1987, Residential building. Building elements. Agents list
UNI 8289:1981, Building. Functional requirements of final users. Classification
UNI 10838:1999, Building. 
Terminology for users, performances, quality and building process
See also
Evidence-based design
Feedback loop
Post-occupancy evaluation
References
BAKENS W., PeBBu Finalized, CIB News Article, January 2006
BECKER R., Fundamentals of Performance-Based Building Design, Faculty of Civil and Environmental Engineering, Technion – Israel Institute of Technology, Haifa, November 2008
FOLIENTE G., HUOVILA P., ANG G., SPEKKINK D., BAKENS W., Performance Based Building R&D Roadmap, PeBBu Final Report, CIBdf, Rotterdam, 2005
SZIGETI F., The PeBBuCo Study: Compendium of Performance Based (PB) Statements of Requirements (SoR), International Center for Facilities (ICF), Ottawa, 2005
SZIGETI F., DAVIS G., Performance Based Building: Conceptual Framework, PeBBu Final Report, CIBdf, Rotterdam, October 2005
Further reading
BECKER R., FOLIENTE G., Performance Based International State of the Art, PeBBu 2nd International SotA Report, CIBdf, Rotterdam, 2005
BLACHERE G., General consideration of standards, agreement and the assessment of fitness for use, paper presented at the 3rd CIB Congress on Towards Industrialised Building, Copenhagen, Denmark, 1965
BLACHERE G., Building Principles, Commission of the European Communities, Industrial Processes, Building and Civil Engineering, Directorate General, Internal Market and Industrial Affairs, EUR 11320 EN, 1987
GIBSON E.J., Working with the Performance Approach in Building, CIB Report Publication No. 64, Rotterdam, 1982
GROSS J.G., Developments in the application of the performance concept in building, Proceedings of the 3rd symposium of CIB-ASTM-ISO-RILEM, National Building Research Institute, Israel, 1996
External links
BRE – Building Research Establishment
CIB – International Council for Research and Innovation in Building and Construction
CSTB – Centre Scientifique et Technique du Bâtiment
IAI – International Alliance for Interoperability
Building engineering
Methodology
Performance-based building design
Engineering
https://en.wikipedia.org/wiki/River%20Tigris%20%28constellation%29
River Tigris or Tigris (named after the Tigris river) was a constellation, introduced in 1612 by Petrus Plancius. One end was near the shoulder of Ophiuchus and the other was near Pegasus; in between it passed through the area now occupied by Vulpecula, flowing between Cygnus and Aquila. It did not appear on Hevelius' atlas of 1687 or Johann Bode's Uranographia atlas of 1801, and was quickly forgotten.
See also
Obsolete constellations
References
External links
River Tigris
Former constellations
Constellations listed by Petrus Plancius
River Tigris (constellation)
Astronomy
https://en.wikipedia.org/wiki/Bellman%20equation
A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's "principle of optimality" prescribes. The equation applies to algebraic structures with a total ordering; for algebraic structures with a partial ordering, the generic Bellman's equation can be used. The Bellman equation was first applied to engineering control theory and to other topics in applied mathematics, and subsequently became an important tool in economic theory, though the basic concepts of dynamic programming are prefigured in John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior and Abraham Wald's sequential analysis. The term "Bellman equation" usually refers to the dynamic programming equation (DPE) associated with discrete-time optimization problems. In continuous-time optimization problems, the analogous equation is a partial differential equation called the Hamilton–Jacobi–Bellman equation. In discrete time, any multi-stage optimization problem can be solved by analyzing the appropriate Bellman equation, which can be found by introducing new state variables (state augmentation). However, the resulting augmented-state multi-stage optimization problem has a higher-dimensional state space than the original problem, an issue that can potentially render the augmented problem intractable due to the "curse of dimensionality". 
Alternatively, it has been shown that if the cost function of the multi-stage optimization problem satisfies a "backward separable" structure, then the appropriate Bellman equation can be found without state augmentation. Analytical concepts in dynamic programming To understand the Bellman equation, several underlying concepts must be understood. First, any optimization problem has some objective: minimizing travel time, minimizing cost, maximizing profits, maximizing utility, etc. The mathematical function that describes this objective is called the objective function. Dynamic programming breaks a multi-period planning problem into simpler steps at different points in time. Therefore, it requires keeping track of how the decision situation is evolving over time. The information about the current situation that is needed to make a correct decision is called the "state". For example, to decide how much to consume and spend at each point in time, people would need to know (among other things) their initial wealth. Therefore, wealth would be one of their state variables, but there would probably be others. The variables chosen at any given point in time are often called the control variables. For instance, given their current wealth, people might decide how much to consume now. Choosing the control variables now may be equivalent to choosing the next state; more generally, the next state is affected by other factors in addition to the current control. For example, in the simplest case, today's wealth (the state) and consumption (the control) might exactly determine tomorrow's wealth (the new state), though typically other factors will affect tomorrow's wealth too. The dynamic programming approach describes the optimal plan by finding a rule that tells what the controls should be, given any possible value of the state. For example, if consumption (c) depends only on wealth (W), we would seek a rule that gives consumption as a function of wealth. 
Such a rule, determining the controls as a function of the states, is called a policy function. Finally, by definition, the optimal decision rule is the one that achieves the best possible value of the objective. For example, if someone chooses consumption, given wealth, in order to maximize happiness (assuming happiness $H$ can be represented by a mathematical function, such as a utility function, and is something defined by wealth), then each level of wealth will be associated with some highest possible level of happiness, $H(W)$. The best possible value of the objective, written as a function of the state, is called the value function. Bellman showed that a dynamic optimization problem in discrete time can be stated in a recursive, step-by-step form known as backward induction by writing down the relationship between the value function in one period and the value function in the next period. The relationship between these two value functions is called the "Bellman equation". In this approach, the optimal policy in the last time period is specified in advance as a function of the state variable's value at that time, and the resulting optimal value of the objective function is thus expressed in terms of that value of the state variable. Next, the next-to-last period's optimization involves maximizing the sum of that period's period-specific objective function and the optimal value of the future objective function, giving that period's optimal policy contingent upon the value of the state variable as of the next-to-last period decision. This logic continues recursively back in time, until the first period decision rule is derived, as a function of the initial state variable value, by optimizing the sum of the first-period-specific objective function and the value of the second period's value function, which gives the value for all the future periods. Thus, each period's decision is made by explicitly acknowledging that all future decisions will be optimally made. 
Derivation A dynamic decision problem Let $x_t$ be the state at time $t$. For a decision that begins at time 0, we take as given the initial state $x_0$. At any time, the set of possible actions depends on the current state; we express this as $a_t \in \Gamma(x_t)$, where a particular action $a_t$ represents particular values for one or more control variables, and $\Gamma(x_t)$ is the set of actions available to be taken at state $x_t$. It is also assumed that the state changes from $x$ to a new state $T(x,a)$ when action $a$ is taken, and that the current payoff from taking action $a$ in state $x$ is $F(x,a)$. Finally, we assume impatience, represented by a discount factor $0 < \beta < 1$. Under these assumptions, an infinite-horizon decision problem takes the following form: $$V(x_0) = \max_{\{a_t\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t F(x_t, a_t),$$ subject to the constraints $$a_t \in \Gamma(x_t), \qquad x_{t+1} = T(x_t, a_t), \qquad \forall t = 0, 1, 2, \dots$$ Notice that we have defined notation $V(x_0)$ to denote the optimal value that can be obtained by maximizing this objective function subject to the assumed constraints. This function is the value function. It is a function of the initial state variable $x_0$, since the best value obtainable depends on the initial situation. Bellman's principle of optimality The dynamic programming method breaks this decision problem into smaller subproblems. Bellman's principle of optimality describes how to do this: Principle of Optimality: An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. (See Bellman, 1957, Chap. III.3.) In computer science, a problem that can be broken apart like this is said to have optimal substructure. In the context of dynamic game theory, this principle is analogous to the concept of subgame perfect equilibrium, although what constitutes an optimal policy in this case is conditioned on the decision-maker's opponents choosing similarly optimal policies from their points of view. 
As suggested by the principle of optimality, we will consider the first decision separately, setting aside all future decisions (we will start afresh from time 1 with the new state $x_1$). Collecting the future decisions in brackets on the right, the above infinite-horizon decision problem is equivalent to: $$\max_{a_0} \left\{ F(x_0, a_0) + \beta \left[ \max_{\{a_t\}_{t=1}^{\infty}} \sum_{t=1}^{\infty} \beta^{t-1} F(x_t, a_t) \right] \right\}$$ subject to the constraints $$a_0 \in \Gamma(x_0), \quad x_1 = T(x_0, a_0), \quad a_t \in \Gamma(x_t), \quad x_{t+1} = T(x_t, a_t), \quad \forall t \ge 1.$$ Here we are choosing $a_0$, knowing that our choice will cause the time 1 state to be $x_1 = T(x_0, a_0)$. That new state will then affect the decision problem from time 1 on. The whole future decision problem appears inside the square brackets on the right. The Bellman equation So far it seems we have only made the problem uglier by separating today's decision from future decisions. But we can simplify by noticing that what is inside the square brackets on the right is the value $V(x_1)$ of the time 1 decision problem, starting from state $x_1 = T(x_0, a_0)$. Therefore, the problem can be rewritten as a recursive definition of the value function: $$V(x_0) = \max_{a_0} \{ F(x_0, a_0) + \beta V(x_1) \},$$ subject to the constraints: $$a_0 \in \Gamma(x_0), \qquad x_1 = T(x_0, a_0).$$ This is the Bellman equation. It may be simplified even further if the time subscripts are dropped and the value of the next state is plugged in: $$V(x) = \max_{a \in \Gamma(x)} \{ F(x, a) + \beta V(T(x, a)) \}.$$ The Bellman equation is classified as a functional equation, because solving it means finding the unknown function $V$, which is the value function. Recall that the value function describes the best possible value of the objective, as a function of the state $x$. By calculating the value function, we will also find the function $a(x)$ that describes the optimal action as a function of the state; this is called the policy function. In a stochastic problem In the deterministic setting, other techniques besides dynamic programming can be used to tackle the above optimal control problem. However, the Bellman equation is often the most convenient method of solving stochastic optimal control problems. For a specific example from economics, consider an infinitely-lived consumer with initial wealth endowment $a_0$ at period 0. 
They have an instantaneous utility function $u(c)$, where $c$ denotes consumption, and they discount the next period's utility at a rate of $0 < \beta < 1$. Assume that what is not consumed in period $t$ carries over to the next period with interest rate $r$. Then the consumer's utility maximization problem is to choose a consumption plan $\{c_t\}$ that solves $$\max_{\{c_t\}} \sum_{t=0}^{\infty} \beta^t u(c_t)$$ subject to $$a_{t+1} = (1 + r)(a_t - c_t), \qquad c_t \ge 0,$$ and $$\lim_{t \to \infty} a_t \ge 0.$$ The first constraint is the capital accumulation/law of motion specified by the problem, while the second constraint is a transversality condition that the consumer does not carry debt at the end of their life. The Bellman equation is $$V(a) = \max_{0 \le c \le a} \bigl\{ u(c) + \beta V\bigl((1 + r)(a - c)\bigr) \bigr\}.$$ Alternatively, one can treat the sequence problem directly using, for example, the Hamiltonian equations. Now, if the interest rate varies from period to period, the consumer is faced with a stochastic optimization problem. Let the interest $r$ follow a Markov process with probability transition function $Q(r, d\mu_r)$, where $d\mu_r$ denotes the probability measure governing the distribution of the interest rate next period if the current interest rate is $r$. In this model the consumer decides their current period consumption after the current period interest rate is announced. Rather than simply choosing a single sequence $\{c_t\}$, the consumer now must choose a sequence $\{c_t\}$ for each possible realization of a sequence $\{r_t\}$ in such a way that their lifetime expected utility is maximized: $$\max_{\{c_t\}} \; \mathbb{E} \Bigl[ \sum_{t=0}^{\infty} \beta^t u(c_t) \Bigr].$$ The expectation is taken with respect to the appropriate probability measure given by $Q$ on the sequences of $r$'s. Because $r$ is governed by a Markov process, dynamic programming simplifies the problem significantly. Then the Bellman equation is simply: $$V(a, r) = \max_{0 \le c \le a} \Bigl\{ u(c) + \beta \int V\bigl((1 + r)(a - c), r'\bigr) \, Q(r, d\mu_{r'}) \Bigr\}.$$ Under some reasonable assumptions, the resulting optimal policy function $g(a, r)$ is measurable. For a general stochastic sequential optimization problem with Markovian shocks and where the agent is faced with their decision ex-post, the Bellman equation takes a very similar form: $$V(x, z) = \max_{c \in \Gamma(x, z)} \Bigl\{ F(x, c, z) + \beta \int V\bigl(T(x, c), z'\bigr) \, d\mu_z(z') \Bigr\}.$$ Solution methods The method of undetermined coefficients, also known as 'guess and verify', can be used to solve some infinite-horizon, autonomous Bellman equations. 
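As an illustration of guess-and-verify, the standard log-utility special case of the consumer problem works out in closed form. This is a textbook example under the assumptions $u(c) = \ln c$ and the law of motion $a' = (1+r)(a-c)$, not a result specific to the sources cited here:

```latex
V(a) = \max_{0 \le c \le a}\bigl\{\ln c + \beta V\bigl((1+r)(a-c)\bigr)\bigr\},
\qquad \text{guess } V(a) = A + B \ln a .
```

The first-order condition gives $1/c = \beta B/(a-c)$, so $c = a/(1+\beta B)$; matching the coefficients on $\ln a$ requires $B = 1 + \beta B$, hence $B = 1/(1-\beta)$, and the optimal policy is $c = (1-\beta)\,a$: the consumer spends a constant fraction of wealth each period.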
The Bellman equation can be solved by backwards induction, either analytically in a few special cases, or numerically on a computer. Numerical backwards induction is applicable to a wide variety of problems, but may be infeasible when there are many state variables, due to the curse of dimensionality. Approximate dynamic programming has been introduced by D. P. Bertsekas and J. N. Tsitsiklis with the use of artificial neural networks (multilayer perceptrons) for approximating the Bellman function. This is an effective mitigation strategy for reducing the impact of dimensionality, since it replaces the memorization of the complete function mapping for the whole space domain with the memorization of the neural network parameters alone. In particular, for continuous-time systems, an approximate dynamic programming approach that combines policy iteration with neural networks was introduced. In discrete time, an approach to solving the HJB equation combining value iteration and neural networks was introduced. By calculating the first-order conditions associated with the Bellman equation, and then using the envelope theorem to eliminate the derivatives of the value function, it is possible to obtain a system of difference equations or differential equations called the 'Euler equations'. Standard techniques for the solution of difference or differential equations can then be used to calculate the dynamics of the state variables and the control variables of the optimization problem. 
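A minimal sketch of numerical backward induction for a finite-horizon consumption problem, on a discretized wealth grid with log utility. The discount factor, gross return, grid, and horizon below are arbitrary illustrative choices, not values from the article.

```python
import math

# Numerical backward induction: starting from the known last-period
# value function, solve each earlier period's maximization given the
# already-computed value function of the following period.
beta, R, T = 0.95, 1.02, 10
grid = [1.0 + 0.5 * i for i in range(20)]   # admissible wealth levels

# Last period: the consumer simply eats all remaining wealth.
V_next = [math.log(w) for w in grid]

for t in range(T - 2, -1, -1):              # step backwards in time
    V_now = []
    for w in grid:
        best = -math.inf
        for j, w_next in enumerate(grid):   # choose next-period wealth
            c = w - w_next / R              # implied consumption today
            if c > 0:
                best = max(best, math.log(c) + beta * V_next[j])
        V_now.append(best)
    V_next = V_now
```

The triple loop (time, state, action) makes the curse of dimensionality concrete: with several state variables the grid becomes a product of grids, and the inner search blows up accordingly.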
The solution to Merton's theoretical model, one in which investors chose between income today and future income or capital gains, is a form of Bellman's equation. Because economic applications of dynamic programming usually result in a Bellman equation that is a difference equation, economists refer to dynamic programming as a "recursive method" and a subfield of recursive economics is now recognized within economics. Nancy Stokey, Robert E. Lucas, and Edward Prescott describe stochastic and nonstochastic dynamic programming in considerable detail, and develop theorems for the existence of solutions to problems meeting certain conditions. They also describe many examples of modeling theoretical problems in economics using recursive methods. This book led to dynamic programming being employed to solve a wide range of theoretical problems in economics, including optimal economic growth, resource extraction, principal–agent problems, public finance, business investment, asset pricing, factor supply, and industrial organization. Lars Ljungqvist and Thomas Sargent apply dynamic programming to study a variety of theoretical questions in monetary policy, fiscal policy, taxation, economic growth, search theory, and labor economics. Avinash Dixit and Robert Pindyck showed the value of the method for thinking about capital budgeting. Anderson adapted the technique to business valuation, including privately held businesses. Using dynamic programming to solve concrete problems is complicated by informational difficulties, such as choosing the unobservable discount rate. There are also computational issues, the main one being the curse of dimensionality arising from the vast number of possible actions and potential state variables that must be considered before an optimal strategy can be selected. For an extensive discussion of computational issues, see Miranda and Fackler, and Meyn 2007. 
Example In Markov decision processes, a Bellman equation is a recursion for expected rewards. For example, the expected reward for being in a particular state $s$ and following some fixed policy $\pi$ has the Bellman equation: $$V^{\pi}(s) = R(s, \pi(s)) + \gamma \sum_{s'} P(s' \mid s, \pi(s)) \, V^{\pi}(s').$$ This equation describes the expected reward for taking the action prescribed by the policy $\pi$. The equation for the optimal policy is referred to as the Bellman optimality equation: $$V^{*}(s) = \max_{a} \Bigl\{ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a) \, V^{*}(s') \Bigr\},$$ where $\pi^{*}$ is the optimal policy and $V^{*}$ refers to the value function of the optimal policy. The equation above describes the reward for taking the action giving the highest expected return.
See also
References
Equations
Dynamic programming
Control theory
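The Bellman optimality equation for a Markov decision process can be iterated numerically (value iteration). A sketch on a tiny two-state, two-action MDP follows; the rewards and transition probabilities are invented for illustration and do not come from any referenced model.

```python
# Value iteration: repeatedly apply the Bellman optimality equation
#   V(s) = max_a [ R(s, a) + gamma * sum_s' P(s' | s, a) * V(s') ]
# until the value function converges, then read off the greedy policy.

gamma = 0.9
# P[s][a]: list of (probability, next_state); R[s][a]: immediate reward.
P = {0: {0: [(0.8, 0), (0.2, 1)], 1: [(1.0, 1)]},
     1: {0: [(1.0, 0)], 1: [(0.5, 0), (0.5, 1)]}}
R = {0: {0: 0.0, 1: 1.0},
     1: {0: 0.0, 1: 2.0}}

V = {s: 0.0 for s in P}
for _ in range(500):  # a contraction: each sweep shrinks the error by gamma
    V = {s: max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in P[s])
         for s in P}

# Greedy policy with respect to the converged value function.
policy = {s: max(P[s], key=lambda a: R[s][a]
                 + gamma * sum(p * V[s2] for p, s2 in P[s][a]))
          for s in P}
```

Because the Bellman operator is a contraction with modulus gamma, the iterates converge to the unique fixed point regardless of the starting guess.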
Bellman equation
Mathematics
https://en.wikipedia.org/wiki/Digital%20zombie
A digital zombie, as defined by the University of Sydney, is a person so engaged with digital technology and/or social media that they are unable to separate themselves from a persistent online presence. University of Sydney researcher Andrew Campbell has also expressed concern over whether such an individual can truly live a full and healthy life while preoccupied with the digital world. Other commentators have also begun associating certain types of behaviour with being a digital zombie. Stefanie Valentic, managing editor of EHS Today, uses the term for people hunting digital creatures through their smartphones in public spaces, perpetually fixed on their phones. Looking at the origins of the word "Zonbi" on Haitian slave plantations, it has been noted that the term also implies control of the physical body, here by technology. The University of Warwick has used the term in arguing that further research needs to be done on people who exist in digital form after death, to help people grieve their loss. Modern applications Distracted walking The term digital zombie can refer to a person performing distracted walking, which has been labelled dangerous by the American Academy of Orthopaedic Surgeons. The Academy created the "Digital Deadwalkers" campaign after physicians became aware of the risks of walking across intersections and sidewalks while paying attention only to smartphones and not to one's surroundings, stating that the name derives from the fact that "they're oblivious to everyone else, so it's like they're dead-walking, sleepwalking." Living through media The Department of Sociology at the University of Warwick has also used the term digital zombie to refer to an individual who has died but is digitally resurrected, reanimated, and socially active. These digital zombies do things in death they did not do when they were alive, as they "live" again through a digital self on a digital medium. 
Dead celebrities sometimes become digital zombies when they are reanimated to appear in commercial advertisements (such as Audrey Hepburn and Bob Monkhouse). Other accidental digital zombies include Tupac Shakur and Michael Jackson, who were both digitally resurrected and recreated to perform "live" on stage years after their deaths. Researchers at the University of Warwick have carried out research in the area of human-computer interaction in an effort to understand the effect these digital zombies have on grief and bereavement. Mobile gaming Writing for EHS Today, Stefanie Valentic has commented on the mobile video game Pokémon Go, which lets players hunt and collect digital creatures called Pokémon through their smartphones in the real world. Players can be observed gazing at their phones while obliviously walking around their environments looking for Pokémon. Valentic calls these individuals "digital zombies", since they walk around with no awareness of their surroundings while engaged with their phones. Health risks Heavy use of technology Research by the University of Sydney has begun looking at how new technology such as digital media and smartphones affects our lives, and questioning whether it can create new compulsions and obsessions. The research suggests that heavy technology use can have negative health consequences similar to those of drugs, smoking, and alcohol. Marcel O'Gorman, an associate professor of English at the University of Waterloo, has commented on the body of research examining how technology affects cognition, stating that there is currently no empirical evidence to support theories that technology can damage memory and attention span. Heightened risk to children Manfred Spitzer, a German psychiatrist, has raised concerns about providing digital devices to children. 
During early childhood, while their brains are rapidly growing, increased exposure to digital devices may deprive children of the stimulation necessary for brain development. These concerns are also shared by Korean doctors, who believe that giving digital devices such as smartphones to children limits their cognitive development. See also Smartphone zombie References Behavioral addiction Text messaging Social media
Digital zombie
Technology
772
9,765
https://en.wikipedia.org/wiki/Equuleus
Equuleus is a faint constellation located just north of the celestial equator. Its name is Latin for "little horse", a foal. It was one of the 48 constellations listed by the 2nd century astronomer Ptolemy, and remains one of the 88 modern constellations. It is the second smallest of the modern constellations (only Crux is smaller), spanning only 72 square degrees. It is also very faint, having no stars brighter than the fourth magnitude. Notable features Stars The brightest star in Equuleus is α Equulei, traditionally called Kitalpha, a yellow star of magnitude 3.9, 186 light-years from Earth. Its traditional name means "the section of the horse". There are few variable stars in Equuleus. Only around 25 are known, most of which are faint. γ Equulei is an α2 CVn variable star, ranging between magnitudes 4.58 and 4.77 over a period of around 12½ minutes. It is a white star 115 light-years from Earth, and has an optical companion of magnitude 6.1, 6 Equulei. It is divisible in binoculars. 6 Equulei is an astrometric binary system itself, with an apparent magnitude of 6.07. R Equulei is a Mira variable that ranges between magnitudes 8.0 and 15.7 over nearly 261 days. It has a spectral type of M3e-M4e and has an average B-V colour index of +1.41. Equuleus contains some double stars of interest. γ Equulei consists of a primary star with a magnitude around 4.7 (slightly variable) and a secondary star of magnitude 11.6, separated by 2 arcseconds. ε Equulei is a triple star also designated 1 Equulei. The system, 197 light-years away, has a primary of magnitude 5.4 that is itself a binary star; its components are of magnitude 6.0 and 6.3 and have a period of 101 years. The secondary is of magnitude 7.4 and is visible in small telescopes. The components of the primary are drawing closer together and will no longer be divisible in amateur telescopes from 2015 onward.
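The quoted magnitudes for ε Equulei are mutually consistent, since stellar fluxes add while magnitudes are logarithmic. The following is an illustrative sketch (the function name is ours, not from the source) of combining two component magnitudes:

```python
import math

def combined_magnitude(m1, m2):
    # fluxes add linearly; a magnitude m corresponds to a flux of 10^(-0.4*m)
    total_flux = 10 ** (-0.4 * m1) + 10 ** (-0.4 * m2)
    return -2.5 * math.log10(total_flux)

# the two components of the primary of ε Equulei, magnitudes 6.0 and 6.3,
# combine to roughly the quoted primary magnitude of 5.4
print(round(combined_magnitude(6.0, 6.3), 1))  # 5.4
```

This recovers the 5.4 figure quoted above for the combined primary.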
δ Equulei is a binary star with an orbital period of 5.7 years, which at one time was the shortest known orbital period for an optical binary. The two components of the system are never more than 0.35 arcseconds apart. Deep-sky objects Due to its small size and its distance from the plane of the Milky Way, Equuleus is rather devoid of deep sky objects. Some very faint galaxies in the NGC catalog between magnitudes 13 and 15 include NGC 7015, NGC 7040, and NGC 7046. NGC 7045 is a triple star that was mistaken as a nebula by its discoverer, John Herschel. Other faint galaxies in the IC Catalog include IC 1360, IC 1361, IC 1364, IC 1367, IC 1375, and IC 5083. IC 1365 is a group of galaxies. The magnitudes of these objects vary from 14.5 to 15.5, making them hard to see in even the largest of amateur telescopes. Mythology In Greek mythology, one myth associates Equuleus with the foal Celeris (meaning "swiftness" or "speed"), who was the offspring or brother of the winged horse Pegasus. Celeris was given to Castor by Mercury. Other myths say that Equuleus is the horse struck from Poseidon's trident, during the contest between him and Athena when deciding which would be the superior. Because this section of stars rises before Pegasus, it is often called Equus Primus, or the First Horse. Equuleus is also linked to the story of Philyra and Saturn. Created by Hipparchus and included by Ptolemy, it abuts Pegasus; unlike the larger horse, it is depicted as a horse's head alone. Equivalents In Chinese astronomy, the stars that correspond to Equuleus are located within the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ). See also Equuleus (Chinese astronomy) References Burnham, Robert (1978). Burnham's Celestial Handbook: An observer's guide to the universe beyond the solar system, vol 2. Dover Publications Hoffleit+ (1991) V/50 The Bright Star Catalogue, 5th revised ed, Yale University Observatory, Strasbourg astronomical Data Center Ian Ridpath & Wil Tirion (2007). 
Stars and Planets Guide, Collins, London; Princeton University Press, Princeton. External links The Deep Photographic Guide to the Constellations: Equuleus The clickable Equuleus Star Tales – Equuleus Warburg Institute Iconographic Database (medieval and early modern images of Equuleus) Constellations Northern constellations Constellations listed by Ptolemy
Equuleus
Astronomy
1,025
4,030,732
https://en.wikipedia.org/wiki/121P/Shoemaker%E2%80%93Holt
121P/Shoemaker–Holt, also known as Shoemaker-Holt 2, is a periodic comet in the Solar System with an orbital period of about 8 years. The comet was discovered by Carolyn S. Shoemaker, Eugene M. Shoemaker, and Henry E. Holt on 9 March 1989. The comet then had an apparent magnitude of 13, was diffuse and had a tail about 2 arcminutes long. It was recovered by James V. Scotti on 29 August 1995 in images obtained as part of the Spacewatch survey. The nucleus of the comet is estimated to have a radius of 3.87 km based on infrared imaging by the Spitzer Space Telescope, when the comet displayed dust emission. Observations of the comet from the Isaac Newton Telescope indicate an effective radius of 3.61 kilometers. The rotational period was calculated to be 10 hours, but with high uncertainty. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 121P/Shoemaker-Holt 2 – Seiichi Yoshida @ aerith.net 121P at Kronk's Cometography Periodic comets
121P/Shoemaker–Holt
Astronomy
241
71,523,018
https://en.wikipedia.org/wiki/BG%20Canis%20Minoris
BG Canis Minoris is a binary star system in the equatorial constellation of Canis Minor, abbreviated BG CMi. With an apparent visual magnitude that fluctuates around 14.5, it is much too faint to be visible to the naked eye. Parallax measurements provide a distance estimate of approximately 2,910 light years from the Sun. In 1981, I. M. McHardy and associates included the X-ray source '3A 0729+103' in their Ariel 5 satellite 3A catalogue. The team used a localized search of those coordinates with the Einstein Observatory to isolate an X-ray source that matched the location of a blue-hued star with a visual magnitude of 14.5. The light curve for this star proved quite similar to other intermediate polars that had been identified as X-ray sources. The overall brightness variation of 3A 0729+103 matched a binary system with an orbital period of 194.1 minutes. It displays a more rapid variation with a period of 913 seconds, which was interpreted as related to a spin period. The standard model for this category of variable star consists of a magnetized white dwarf in a close orbit with a cool main sequence secondary star. The Roche lobe of the secondary is overflowing, and this stream of matter is falling onto an accretion disk in orbit around the primary. X-ray observations with the EXOSAT observatory in 1984–1985 demonstrated there are two regions of emission. One of these is believed to be at the magnetic poles of the white dwarf component, while the second is located where the accretion stream is striking the white dwarf's magnetosphere. The emission at the pole is partially eclipsed by the rotation of the white dwarf. In 1987, cyclotron radiation was discovered based on the circular polarization of its near infrared output, the first conclusive identification of this behavior for an intermediate polar. This emission confirmed the model of a magnetic white dwarf that is accreting mass. Measurements suggested a magnetic field strength of . 
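The parallax-based distance quoted above follows from the standard reciprocal relation between parallax and distance. As a hedged sketch (the function and constant names are ours; the milliarcsecond input convention is an assumption for illustration):

```python
LIGHT_YEARS_PER_PARSEC = 3.26156

def parallax_to_light_years(parallax_mas):
    # distance in parsecs is the reciprocal of the parallax in arcseconds;
    # parallax_mas is the measured parallax in milliarcseconds
    parsecs = 1.0 / (parallax_mas / 1000.0)
    return parsecs * LIGHT_YEARS_PER_PARSEC

# a parallax of about 1.12 mas corresponds to roughly 2,910 light years
print(round(parallax_to_light_years(1.12)))  # 2912
```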
Changes in the rotation period over time indicate that the white dwarf is slowly being spun up due to torque from accreted matter. It has an estimated 78% of the mass of the Sun, while the donor companion has about 38%. References Further reading White dwarfs Red dwarfs Intermediate polars Astronomical X-ray sources Canis Minor Canis Minoris, BG
BG Canis Minoris
Astronomy
488
83,942
https://en.wikipedia.org/wiki/Courtyard
A courtyard or court is a circumscribed area, often surrounded by a building or complex, that is open to the sky. Courtyards are common elements in both Western and Eastern building patterns and have been used by both ancient and contemporary architects as a typical and traditional building feature. Such spaces in inns and public buildings were often the primary meeting places for some purposes, leading to the other meanings of court. Both of the words court and yard derive from the same root, meaning an enclosed space. See yard and garden for the relation of this set of words. In universities courtyards are often known as quadrangles. Historic use Courtyards—private open spaces surrounded by walls or buildings—have been in use in residential architecture for almost as long as people have lived in constructed dwellings. The courtyard house makes its first appearance –6000 BC (calibrated), in the Neolithic Yarmukian site at Sha'ar HaGolan, in the central Jordan Valley, on the northern bank of the Yarmouk River, giving the site a special significance in architectural history. Courtyards have historically been used for many purposes including cooking, sleeping, working, playing, gardening, and even places to keep animals. Before courtyards, open fires were kept burning in a central place within a home, with only a small hole in the ceiling overhead to allow smoke to escape. Over time, these small openings were enlarged and eventually led to the development of the centralized open courtyard we know today. Courtyard homes have been designed and built throughout the world with many variations. Courtyard homes are more prevalent in temperate climates, as an open central court can be an important aid to cooling the house in warm weather. However, courtyard houses have been found in harsher climates as well for centuries. The comforts offered by a courtyard—air, light, privacy, security, and tranquility—are properties nearly universally desired in human housing.
Almost all courtyards use natural elements. Comparison throughout the world Middle East Courtyards were widely used in the ancient Middle East. Middle Eastern courtyard houses reflect the nomadic influences of the region. Instead of rooms being officially designated for cooking, sleeping, etc., these activities were relocated throughout the year as appropriate to accommodate the changes in temperature and the position of the sun. Often the flat rooftops of these structures were used for sleeping in warm weather. In some Islamic cultures, private courtyards provided the only outdoor space for women to relax unobserved. Convective cooling through transition spaces between multiple-courtyard buildings in the Middle East has also been observed. In Ur, c. 2000 BC, two-storey houses built of fired brick were constructed around an open square. Kitchen, working, and public spaces were located on the ground floor, with private rooms located upstairs. Europe The central uncovered area in a Roman domus was referred to as an atrium. Today, we generally use the term courtyard to refer to such an area, reserving the word atrium to describe a glass-covered courtyard. Roman atrium houses were built side by side along the street. They were one-storey homes without windows that took in light from the entrance and from the central atrium. The hearth, which had previously occupied the centre of the home, was relocated, and the Roman atrium most often contained a central pool used to collect rainwater, called an impluvium. These homes frequently incorporated a second open-air area, the garden, which would be surrounded by Greek-style colonnades, forming a peristyle. This created a colonnaded walkway around the perimeter of the courtyard, which influenced monastic structures centuries later.
The medieval European farmhouse embodies what we think of today as one of the most archetypal examples of a courtyard house—four buildings arranged around a square courtyard with a steep roof covered by thatch. The central courtyard was used for working, gathering, and sometimes keeping small livestock. An elevated walkway frequently ran around two or three sides of the courtyards in the houses. Such structures afforded protection, and could even be made defensible. China The traditional Chinese courtyard house, (e.g. siheyuan), is an arrangement of several individual houses around a square. Each house belongs to a different family member, and additional houses are created behind this arrangement to accommodate additional family members as needed. The Chinese courtyard is a place of privacy and tranquility, almost always incorporating a garden and water feature. In some cases, houses are constructed with multiple courtyards that increase in privacy as they recede from the street. Strangers would be received in the outermost courtyard, with the innermost ones being reserved for close friends and family members. In a more contemporary version of the Chinese model, a courtyard can also be used to separate a home into wings; for example, one wing of the house may be for entertaining/dining, and the other wing may be for sleeping/family/privacy. This is exemplified by the Hooper House in Baltimore, Maryland. United States A courtyard apartment building type appeared in Chicago in the early 1890s and flourished into the 1920s. They are characterized primarily by a low height, a structure along three sides of a rectangular or square lot, and an open court extending perpendicular to the street. The courtyards are generally deeper than they are wide, but many finer ones are wider than they are deep.
Influenced by the privacy and domesticity of a standalone house as much as by strict health codes, the architectural style provided outdoor access and ventilation unseen in earlier multi-unit housing in the United States. Relevance today More and more, architects are investigating ways that courtyards can play a role in the development of today's homes and cities. In densely populated areas, a courtyard in a home can provide privacy for a family, a break from the frantic pace of everyday life, and a safe place for children to play. With space at a premium, architects are experimenting with courtyards as a way to provide outdoor space for small communities of people at a time. A courtyard surrounded by 12 houses, for example, would provide a shared park-like space for those families, who could take pride in ownership of the space. Though this might sound like a modern-day solution to an inner city problem, the grouping of houses around a shared courtyard was common practice among the Incas as far back as the 13th century. In San Francisco, the floor plans of "marina style" houses often include a central patio, a miniature version of an open courtyard, sometimes covered with glass or a translucent material. Central patios provide natural light to common areas and space for potted outdoor plants. In Gilgit/Baltistan, Pakistan, courtyards were traditionally used for public gatherings where village related issues were discussed. These were different from jirgahs, which are a tradition of the tribal regions of Pakistan. Gallery See also Hakka walled village Yaodong Tsubo-niwa References Atrium: Five Thousand Years of Open Courtyards, by Werner Blaser 1985, Wepf & Co. Atrium Buildings: Development and Design, by Richard Saxon 1983, The Architectural Press, London A History of Architecture, by Spiro Kostof 1995, The Oxford Press. External links Home Architectural elements
Courtyard
Technology,Engineering
1,459
34,270,640
https://en.wikipedia.org/wiki/MC21-B
MC21-B is an antibiotic isolated from the O-BC30T strain of a marine bacterium, Pseudoalteromonas phenolica. MC21-B is cytotoxic to human leukaemia cells and human normal dermal fibroblasts. See also MC21-A References Antibiotics Biphenyls Dicarboxylic acids Benzoic acids Bromoarenes Halogen-containing natural products
MC21-B
Biology
91
54,221,153
https://en.wikipedia.org/wiki/Network%20Performance%20Monitoring%20Solution
Network Performance Monitor (NPM) in Operations Management Suite, a component of Microsoft Azure, monitors network performance between office sites, data centers, clouds and applications in near real-time. It helps a network administrator locate and troubleshoot bottlenecks like network delay, data loss and availability of any network link across on-premises networks, Microsoft Azure VNets, Amazon Web Services VPCs, hybrid networks, VPNs or even public internet links. Network Performance Monitor Network Performance Monitor (NPM) is a network monitoring capability of the Operations Management Suite. NPM monitors the availability and quality of connectivity between multiple locations, within and across campuses and private and public clouds. It uses synthetic transactions to test for reachability and can be used on any IP network irrespective of the make and model of network routers or switches deployed. Features A dashboard is generated to display summarized information about the network, including network health events, suspected unhealthy network links, and the subnetwork links with the most loss and most latency. Custom dashboards can also be created to show the state of the network at a past point in time. An interactive topology map is also generated to show the routes between nodes. A network administrator can use it to identify the unhealthy path and find the root cause of an issue. Alerts can be configured to send e-mails to stakeholders when a threshold is reached. Use Cases Two on-premises networks: Monitor connectivity between two office sites which could be connected using an MPLS WAN link or VPN Multiple sites: Monitor connectivity to a central site from multiple sites.
For example, scenarios where users from multiple office locations are accessing applications hosted at a central location Hybrid Networks: Monitor connectivity between on-premises and Azure VNets that could be connected using S2S VPN or ExpressRoute Multiple Virtual Networks in Cloud: Monitor connectivity between multiple VNets in the same or different Azure regions. These could be peered VNets or VNets connected using a VPN. Any Cloud: Monitor connectivity between Amazon Web Services and on-premises networks, and also between Amazon Web Services and Azure VNets. Operation NPM does not require any access to network devices. The Microsoft Monitoring Agent (MMA) or the OMS extension (valid only for virtual machines hosted in Azure) is installed on the servers in the subnetworks that are to be monitored. The OMS agent automatically downloads the Network Monitoring Intelligence Pack, which spawns an NPM agent that detects the subnets it is connected to; this information is sent to OMS. The NPM agent obtains the list of IP addresses of other agents from OMS and starts active probes using Internet Control Message Protocol (ICMP) or Transmission Control Protocol (TCP) ping, and the round-trip time for a ping between two nodes is used to calculate network performance metrics such as packet loss and link latency. This data is pushed to OMS, where it is used to create a customizable dashboard. A video-based demo of NPM is available online. Synthetic transactions NPM uses synthetic transactions to test for reachability and calculate network performance metrics across the network. Tests are performed using either TCP or ICMP, and users have the option of choosing between these protocols. Users must evaluate their environments and weigh the pros and cons of the protocols. The following is a summary of the differences. TCP provides more accurate results compared to ICMP ECHO because routers and switches assign lower priority to ICMP ECHO packets compared to TCP ping.
TCP requires the network firewall and the local firewall on the computers where agents are installed to be configured to allow traffic on the default port 8084; another port can also be chosen. ICMP does not require firewall configuration, but it needs more agents to provide information about all the paths between two subnets. Consequently, the OMS agent must be installed on more machines in the subnet than when TCP is used. Timeline February 27, 2017 NPM Solution became generally available (GA). The launch was picked up by eWeek July 27, 2016 NPM solution was announced in the Public Preview Operating systems supported Windows Server 2008 SP 1 or later Linux distributions CentOS Linux 7 RedHat Enterprise Linux 7.2 Ubuntu 14.04 LTS, 15.04, 16.04 LTS Debian 8 SUSE Linux Enterprise Server 12 Client operating systems Windows 7 SP1 or later Availability in regions Network Performance Monitor is available in the following Azure regions: Eastern US Western Europe South East Asia South East Australia West Central US UK South US Gov Virginia Data collection frequency TCP handshakes every 5 seconds, data sent every 3 minutes References Servers (computing) Network performance Network software Computer performance
Network Performance Monitoring Solution
Technology,Engineering
981
632,899
https://en.wikipedia.org/wiki/Low-level%20waste
Low-level waste (LLW) or low-level radioactive waste (LLRW) is a category of nuclear waste. The definition of low-level waste is set by the nuclear regulators of individual countries, though the International Atomic Energy Agency (IAEA) provides recommendations. LLW includes items that have become contaminated with radioactive material or have become radioactive through exposure to neutron radiation. This waste typically consists of contaminated protective shoe covers and clothing, wiping rags, mops, filters, reactor water treatment residues, equipment and tools, luminous dials, medical tubes, swabs, injection needles, syringes, and laboratory animal carcasses and tissues. LLW in the United Kingdom In the UK, LLW is defined as waste with specific activities below 12 gigabecquerels per tonne (GBq/t) of beta/gamma activity and below 4 GBq/t of alpha-emitting nuclides. Waste with specific activities above these thresholds is categorised as either intermediate-level waste (ILW) or high heat generating waste, depending upon the heat output of the waste. Very Low Level Waste (VLLW) is a sub-category of LLW. VLLW is LLW that is suitable for disposal with regular household or industrial waste at specially permitted landfill facilities. The major components of VLLW from nuclear sites are building rubble, soil and steel items. These arise from the dismantling and demolition of nuclear reactors and facilities. LLW in the United States LLW in the United States is defined as nuclear waste that does not fit into the categorical definitions: high-level waste (HLW), spent nuclear fuel (SNF), transuranic waste (TRU), or certain byproduct materials known as 11e(2) wastes, such as uranium mill tailings. In essence, it is a definition by exclusion, and LLW is that category of radioactive wastes that do not fit into the other categories.
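The UK thresholds above lend themselves to a simple rule-of-thumb classifier. The following is an illustrative sketch only (names are ours; it ignores the VLLW sub-category and does not model the heat-output distinction between ILW and high heat generating waste):

```python
def uk_category(beta_gamma_gbq_per_t, alpha_gbq_per_t):
    # UK LLW rule from the text: specific activity below 12 GBq/t for
    # beta/gamma emitters AND below 4 GBq/t for alpha-emitting nuclides
    if beta_gamma_gbq_per_t < 12 and alpha_gbq_per_t < 4:
        return "LLW"
    # above either threshold: ILW or high heat generating waste,
    # depending on heat output (not modelled in this sketch)
    return "ILW or high heat generating waste"

print(uk_category(5.0, 0.5))   # LLW
print(uk_category(30.0, 0.5))  # ILW or high heat generating waste
```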
If LLW is mixed with hazardous wastes as classified by RCRA, then it has a special status as mixed low-level waste (MLLW) and must satisfy treatment, storage, and disposal regulations both as LLW and as hazardous waste. While the bulk of LLW is not highly radioactive, the definition of LLW does not include references to its activity, and some LLW may be quite radioactive, as in the case of radioactive sources used in industry and medicine. It is notable that U.S. regulations do not define the category intermediate-level waste, and thus many wastes which would fall into this category under other regulatory regimes are instead classified as LLW. This also means that the radioactivity of LLW in the US can range from just above background levels found in nature to very highly radioactive in certain cases, such as parts from inside the reactor vessel in a nuclear power plant. Disposal Depending on who "owns" the waste, its handling and disposal is regulated differently. All nuclear facilities, whether they are a utility or a disposal site, have to comply with Nuclear Regulatory Commission (NRC) regulations. The four low-level waste facilities in the U.S. are Barnwell, South Carolina; Richland, Washington; Clive, Utah; and as of June 2013, Andrews County, Texas. The Barnwell and the Clive locations are operated by EnergySolutions, the Richland location is operated by U.S. Ecology, and the Andrews County location is operated by Waste Control Specialists. Barnwell, Richland, and Andrews County accept Classes A through C of low-level waste, whereas Clive only accepts Class A LLW. The DOE has dozens of LLW sites under management. The largest of these exist at DOE Reservations around the country (e.g. the Hanford Reservation, Savannah River Site, Nevada Test Site, Los Alamos National Laboratory, Oak Ridge National Laboratory, Idaho National Laboratory, to name the most significant). Classes of wastes are detailed in 10 C.F.R.
§ 61.55 Waste Classification, enforced by the Nuclear Regulatory Commission, reproduced in the table below. These are not all the isotopes disposed of at these facilities, just the ones that are of most concern for the long-term monitoring of the sites. Waste is divided into three classes, A through C, where A is the least radioactive and C is the most radioactive. Class A LLW is able to be deposited near the surface, whereas Classes B and C LLW have to be buried progressively deeper. In 10 C.F.R. § 20.2002, the NRC reserves the right to grant a free release of radioactive waste. The overall activity of such a disposal cannot exceed 1 mrem/yr, and the NRC regards requests on a case-by-case basis. Low-level waste passing such strict regulations is then disposed of in a landfill with other garbage. Items allowed to be disposed of in this way include glow-in-the-dark watches (radium) and smoke detectors (americium). LLW should not be confused with high-level waste (HLW) or spent nuclear fuel (SNF). Class C low-level waste has a limit of 100 nano-Curies per gram of alpha-emitting transuranic nuclides with a half life greater than 5 years; any more than 100 nCi, and it must be classified as transuranic waste (TRU). These require different disposal pathways. TRU wastes from the U.S. nuclear weapons complex are currently disposed of at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico, though other sites also are being considered for on-site disposal of particularly difficult to manage TRU wastes. See also Low Level Waste Repository Mixed waste (radioactive/hazardous) Radioactive waste Spent nuclear fuel Transuranic waste References Notes General references Fentiman, Audeen W. and James H. Saling. Radioactive Waste Management. New York: Taylor & Francis, 2002. Second ed. Jorge L.
Contreras, "In the Village Square: Risk Misperception and Decisionmaking in the Regulation of Low-Level Radioactive Waste", 19 Ecology Law Quarterly 481 (1992) (SSRN) External links NRC description of low-level waste Radioactive waste
Low-level waste
Chemistry,Technology
1,287
25,264,092
https://en.wikipedia.org/wiki/Slow-growing%20hierarchy
In computability theory, computational complexity theory and proof theory, the slow-growing hierarchy is an ordinal-indexed family of slowly increasing functions gα: N → N (where N is the set of natural numbers, {0, 1, ...}). It contrasts with the fast-growing hierarchy. Definition Let μ be a large countable ordinal such that a fundamental sequence is assigned to every limit ordinal less than μ. The slow-growing hierarchy of functions gα: N → N, for α < μ, is then defined as follows: g0(n) = 0; gα+1(n) = gα(n) + 1; and gα(n) = gα[n](n) for limit ordinal α. Here α[n] denotes the nth element of the fundamental sequence assigned to the limit ordinal α. The article on the fast-growing hierarchy describes a standardized choice of fundamental sequences for all α < ε0. Example Since ω[n] = n, we have gω(n) = gn(n) = n. Relation to fast-growing hierarchy The slow-growing hierarchy grows much more slowly than the fast-growing hierarchy. Even gε0 is only equivalent to f3, and gα only attains the growth of fε0 (the first function that Peano arithmetic cannot prove total in the hierarchy) when α is the Bachmann–Howard ordinal. However, Girard proved that the slow-growing hierarchy eventually catches up with the fast-growing one. Specifically, there exists an ordinal α such that for all integers n, gα(n) < fα(n) < gα(n + 1), where fα are the functions in the fast-growing hierarchy. He further showed that the first α for which this holds is the ordinal of the theory ID<ω of arbitrary finite iterations of an inductive definition. However, for another assignment of fundamental sequences the first match-up occurs at the level ε0. For Buchholz-style tree ordinals it could be shown that the first match-up even occurs at . Extensions of the result proved to considerably larger ordinals show that there are very few ordinals below the ordinal of transfinitely iterated -comprehension where the slow- and fast-growing hierarchy match up. The slow-growing hierarchy depends extremely sensitively on the choice of the underlying fundamental sequences.
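The recursion in the definition can be made concrete for ordinals below ω^ω. The following is a minimal sketch under an assumed tuple-of-coefficients encoding (all names and the encoding are ours, chosen for illustration), using the standard fundamental sequences:

```python
# ordinals below ω^ω encoded as coefficient tuples (c0, c1, c2, ...)
# representing c0 + c1·ω + c2·ω² + ...

def is_zero(a):
    return all(c == 0 for c in a)

def is_successor(a):
    return a[0] > 0

def pred(a):
    # predecessor of a successor ordinal
    return (a[0] - 1,) + a[1:]

def fund_seq(a, n):
    # fundamental sequence a[n] for a limit ordinal: replace the lowest
    # nonzero term ω^k·c by ω^k·(c-1) + ω^(k-1)·n
    k = next(i for i, c in enumerate(a) if c > 0)  # k >= 1 for a limit
    b = list(a)
    b[k] -= 1
    b[k - 1] = n
    return tuple(b)

def g(a, n):
    # g_0(n) = 0, g_{α+1}(n) = g_α(n) + 1, g_λ(n) = g_{λ[n]}(n)
    count = 0
    while not is_zero(a):
        if is_successor(a):
            count += 1
            a = pred(a)
        else:
            a = fund_seq(a, n)
    return count

print(g((0, 1), 5))     # g_ω(5) = 5
print(g((0, 0, 1), 4))  # g_{ω²}(4) = 4² = 16
```

Unwinding the recursion this way makes the slow growth visible: gω is the identity and gω² is just squaring.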
References See especially "A Glimpse at Hierarchies of Fast and Slow Growing Functions", pp. 59–64 of linked version. Notes Computability theory Proof theory Hierarchy of functions
Slow-growing hierarchy
Mathematics
492
35,892,247
https://en.wikipedia.org/wiki/C12H14N2O3
{{DISPLAYTITLE:C12H14N2O3}} The molecular formula C12H14N2O3 (molar mass: 234.25 g/mol, exact mass: 234.1004 u) may refer to: α-methyl-5-hydroxytryptophan Cyclopentobarbital Diproqualone
C12H14N2O3
Chemistry
78
795,072
https://en.wikipedia.org/wiki/Binder%20clip
A binder clip (also known as a foldback clip, paper clamp, banker's clip, foldover clip, bobby clip, or clasp) is a simple device for binding sheets of paper together. It leaves the paper intact and can be removed quickly and easily, unlike the staple. It is also sometimes referred to as a handbag clip because of its resemblance to a handbag when its handles are folded up. Characteristics and methods of use A binder clip is a strip of spring steel bent into the shape of an isosceles triangle with loops at the apex. Tension along the base of the triangle forces the two sides closed, and the loops prevent the sharp steel edges from cutting into the paper. The loops also serve to hold two pieces of stiff wire, which are used as handles and allow the clip to be opened. The two slots cut in each loop are shaped so that the wire handles can be folded down once the clip has been attached, and the spring force of the wire holds them down on the surface of the paper. This holds the clip relatively flat, for easier stacking of paper. One handle can also be folded down while the other remains up to allow the stack of papers to be hung up. The handles can also be removed altogether by squeezing them sideways and pulling them out, allowing for more permanent binding. As compared to a paper clip, the binder clip is able to bind sheets of paper more securely, and is also resistant to rust. There are several sizes of binder clips, ranging from a base size of 5 millimetres (0.2 in) to 50 mm (1.97 in). The sheet steel portion is customarily black oxide coated, but a variety of decorative painted color schemes are also available. The sheet steel portion is occasionally made of stainless steel; the more typical spring steel can also be finished in nickel, silver or gold. The handles are normally nickel-plated. Uses The binder clip is in common use in the modern office. It can hold a few to many sheets of paper, and is usually used in place of the paper clip for large volumes of paper.
Various practical (and sometimes whimsical) alternative uses have been proposed. These include holding pieces of quilt together, creating a "beer pyramid" in a refrigerator with wire shelves, serving as a bookmark, a cheap alternative to a money clip or preventing computer cables from slipping behind desks. Smaller sized clips have been commonly used as "quick fix" fitting and sizing solutions in the fashion industry. In 1966, test pilot Joseph F. Cotton used the shiny metal portion of such a clip to short-circuit an electrical circuit panel to force the landing gear of the XB-70 bomber on a flight. The object is such a common sight in offices that, in late 2020, when restrictions due to the COVID-19 pandemic were lowered and people started returning to their offices, some missed and enjoyed the feeling of using a binder clip, after months without doing so. At around the same time, healthcare workers in at least one medical center used extra large binder clips as a cheap way to secure a plastic curtain between workers and patients infected with the airborne virus. History In 1909, the method of binding sheets of paper together was either to sew them together or to punch holes in them and tie them together with string. It was therefore time-consuming to remove a single sheet of paper from multiple bound sheets. The binder clip was invented and patented in 1910 by Washington, D.C. area resident Louis E. Baltzley, to help his father, a writer and inventor, hold his manuscripts together more easily. While similar designs have since been patented five times, the most produced version remains the . Louis Baltzley produced the clips through the L.E.B. Manufacturing Company, and these early clips are stamped "L.E.B." on one side. Manufacturing rights were later licensed to other companies. Gallery See also Bulldog clip Clipboard Treasury tag References Fasteners Office equipment Stationery Products introduced in 1910 ja:バインダークリップ
Binder clip
Engineering
839
23,579,317
https://en.wikipedia.org/wiki/C15H26O
The molecular formula C15H26O may refer to: Bisabolol (Levomenol) α-Cadinol δ-Cadinol τ-Cadinol Carotol Cedrol Cubebol Farnesol Guaiol Indonesiol Junenol Ledol Nerolidol Patchouli alcohol Viridiflorol See also Cadinol
C15H26O
Chemistry
91
3,948,604
https://en.wikipedia.org/wiki/Hyperbolic%20equilibrium%20point
In the study of dynamical systems, a hyperbolic equilibrium point or hyperbolic fixed point is a fixed point that does not have any center manifolds. Near a hyperbolic point the orbits of a two-dimensional, non-dissipative system resemble hyperbolas. This fails to hold in general. Strogatz notes that "hyperbolic is an unfortunate name—it sounds like it should mean 'saddle point'—but it has become standard." Several properties hold about a neighborhood of a hyperbolic point, notably: a stable manifold and an unstable manifold exist; shadowing occurs; the dynamics on the invariant set can be represented via symbolic dynamics; a natural measure can be defined; and the system is structurally stable. Maps If T : Rn → Rn is a C1 map and p is a fixed point, then p is said to be a hyperbolic fixed point when the Jacobian matrix DT(p) has no eigenvalues on the complex unit circle. One example of a map whose only fixed point is hyperbolic is Arnold's cat map, (x, y) ↦ (2x + y, x + y) (mod 1). Since the eigenvalues of its defining matrix are given by λ1 = (3 + √5)/2 and λ2 = (3 − √5)/2, with |λ1| > 1 > |λ2|, we know that the Lyapunov exponents are ±ln((3 + √5)/2). Therefore it is a saddle point. Flows Let F : Rn → Rn be a C1 vector field with a critical point p, i.e., F(p) = 0, and let J denote the Jacobian matrix of F at p. If the matrix J has no eigenvalues with zero real parts, then p is called hyperbolic. Hyperbolic fixed points may also be called hyperbolic critical points or elementary critical points. The Hartman–Grobman theorem states that the orbit structure of a dynamical system in a neighbourhood of a hyperbolic equilibrium point is topologically equivalent to the orbit structure of the linearized dynamical system. Example Consider the nonlinear system . (0, 0) is the only equilibrium point. The Jacobian matrix of the linearization at the equilibrium point is . The eigenvalues of this matrix are . For all values of α ≠ 0, the eigenvalues have non-zero real part. Thus, this equilibrium point is a hyperbolic equilibrium point. The linearized system will behave similarly to the non-linear system near (0, 0).
When α = 0, the system has a nonhyperbolic equilibrium at (0, 0). Comments In the case of an infinite dimensional system—for example systems involving a time delay—the notion of the "hyperbolic part of the spectrum" refers to the above property. See also Anosov flow Hyperbolic set Normally hyperbolic invariant manifold Notes References Limit sets Stability theory
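The eigenvalue criteria above (no eigenvalue on the unit circle for a map; no eigenvalue with zero real part for a flow) are easy to check numerically. Below is a minimal sketch in plain Python; the helper names are ours, not from the literature, and the 2x2 eigenvalue formula is the usual trace/determinant one:

```python
import cmath
import math

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 matrix [[a, b], [c, d]] via trace and determinant."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def hyperbolic_map_fixed_point(a, b, c, d, tol=1e-12):
    """Fixed point of a map: hyperbolic iff no eigenvalue lies on the unit circle."""
    return all(abs(abs(lam) - 1.0) > tol for lam in eig2(a, b, c, d))

def hyperbolic_flow_equilibrium(a, b, c, d, tol=1e-12):
    """Equilibrium of a flow: hyperbolic iff no eigenvalue has zero real part."""
    return all(abs(lam.real) > tol for lam in eig2(a, b, c, d))

# Arnold's cat map (x, y) -> (2x + y, x + y) mod 1: eigenvalues (3 +/- sqrt 5)/2,
# one inside and one outside the unit circle, so the origin is a hyperbolic saddle.
print(hyperbolic_map_fixed_point(2, 1, 1, 1))  # True
# The Lyapunov exponents are the logs of the eigenvalue moduli: +/- ln((3 + sqrt 5)/2).
print([round(math.log(abs(l)), 4) for l in eig2(2, 1, 1, 1)])  # [0.9624, -0.9624]
```

The same flow test applied to a purely imaginary eigenvalue pair (as in the α = 0 case above) correctly reports a non-hyperbolic equilibrium.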
Hyperbolic equilibrium point
Mathematics
523
37,431,806
https://en.wikipedia.org/wiki/Diisopropanolamine
Diisopropanolamine is a chemical compound with the molecular formula C6H15NO2, used as an emulsifier, stabilizer, and chemical intermediate. Diisopropanolamine can be prepared by the reaction of isopropanolamine or ammonia with propylene oxide. References Amines Diols
Diisopropanolamine
Chemistry
68
1,262,180
https://en.wikipedia.org/wiki/Bathybius%20haeckelii
Bathybius haeckelii was a substance that British biologist Thomas Henry Huxley discovered and initially believed to be a form of primordial matter, a source of all organic life. He later admitted his mistake when it proved to be just the product of an inorganic chemical process (precipitation). In 1868 Huxley studied an old sample of mud from the Atlantic seafloor taken in 1857. When he first examined it, he had found only protozoan cells and placed the sample into a jar of alcohol to preserve it. Now he noticed that the sample contained an albuminous slime that appeared to be criss-crossed with veins. Huxley thought he had discovered a new organic substance and named it Bathybius haeckelii, in honor of German biologist Ernst Haeckel. Haeckel had theorized about Urschleim ("primordial slime"), a protoplasm from which all life had originated. Huxley thought Bathybius could be that protoplasm, a missing link (in modern terms) between inorganic matter and organic life. Huxley published a description of Bathybius that year and also wrote to Haeckel to tell him about it. Haeckel was impressed and flattered and procured a sample for himself. In the next edition of his textbook The History of Creation Haeckel suggested that the substance was constantly coming into being at the bottom of the sea, "monera" arising from nonliving matter due to "physicochemical causes." Huxley asserted in a speech given to the Royal Geographical Society in 1870 that Bathybius undoubtedly formed a continuous mat of living protoplasm that covered the whole ocean floor for thousands of square miles, probably a continuous sheet around the Earth. Sir Charles Wyville Thomson examined some samples in 1869 and regarded them as analogous to mycelium; "no trace of differentiation of organs", "an amorphous sheet of a protein compound, irritable to a low degree and capable of assimilating food... a diffused formless protoplasm." Other scientists were less enthusiastic. 
George Charles Wallich claimed that Bathybius was a product of chemical disintegration. In 1872 the Challenger expedition began; it spent three years studying the oceans. The expedition also took soundings at 361 ocean stations. They did not find any sign of Bathybius, despite the claim that it was a nearly universal substance. In 1875 ship's chemist John Young Buchanan analyzed a substance that looked like Bathybius from a sample collected earlier. He determined that it was a precipitate of calcium sulfate, formed when the seawater reacted with the preservative liquid (alcohol) to form a gelatinous ooze which clung to particles as if ingesting them. Buchanan suspected that all the Bathybius samples had been prepared the same way and notified Sir Charles Thomson, now the leader of the expedition. Thomson sent a polite letter to Huxley and told him about the discovery. Huxley realized that he had been too eager and made a mistake. He published part of the letter in Nature and recanted his previous views. Later, during the 1879 meeting of the British Association for the Advancement of Science, he stated that he was ultimately responsible for spreading the theory and convincing others. Most biologists accepted this acknowledgement of error. Haeckel, however, did not want to abandon the idea of Bathybius because it was so close to proof of his own theories about Urschleim. He claimed without foundation that Bathybius "had been observed" in the Atlantic. Haeckel drew a series of pictures of the evolution of his Urschleim, supposedly based on observations. He continued to support this position until 1883. Huxley's rival George Charles Wallich claimed that Huxley had committed deliberate fraud and also accused Haeckel of falsifying data. Other opponents of evolution, including George Campbell, 8th Duke of Argyll, tried to use the case as an argument against evolution.
The entire affair was a blow to the proponents of evolution, who had posited Bathybius as the long-sought origin of life from nonliving chemistry by natural processes, without the necessity of divine intervention. In retrospect, their error was in dismissing the necessary role of photosynthesis in supporting the entire food chain of life, and the corresponding requirement for sunlight, abundant at the surface but absent on the ocean floor. References Notes External links History of evolutionary biology Obsolete biology theories
Bathybius haeckelii
Biology
930
15,448,308
https://en.wikipedia.org/wiki/Yeosuana
Yeosuana aromativorans is a species of non-motile aerobic marine bacterium that can degrade benzopyrene. It was first isolated from Gwangyang Bay and forms yellow-brown colonies requiring chlorides of both magnesium and calcium. References Hydrocarbon-degrading bacteria Bacteria described in 2006
Yeosuana
Biology
67
43,920,749
https://en.wikipedia.org/wiki/International%20Organization%20for%20Biological%20Crystallization
The International Organization for Biological Crystallization (IOBCr) is a non-profit, scientific organization for scientists who study the crystallization of biological macromolecules and develop crystallographic methodologies for their study. It was founded in 2002 to create a permanent organ for the organization of the International Conferences on the Crystallization of Biological Macromolecules (ICCBM). The ICCBM conferences are organized biennially, with venues that change regularly to maintain an international character. The objective of the IOBCr is to promote the exchange of research results and to encourage practical applications of biological crystallization. It organizes and supports interdisciplinary workshops. The attendance at the ICCBM meetings includes bio-crystallographers, biochemists, physicists, and engineers. ICCBM15 was held in Hamburg, Germany, in September 2014.
ICCBM meeting locations
ICCBM17 Shanghai, China, October 29 – November 2, 2018 (Organisers: Zhi-Jie Liu & Da-Chuan Yin)
ICCBM16 Prague, Czech Republic, July 2–7, 2016 (Organiser: I. Kutá Smatanová)
ICCBM15 Hamburg, Germany, September 17–20, 2014 (Organisers: C. Betzel & J.R. Mesters)
ICCBM14 Huntsville, Alabama, USA, September 23–28, 2012 (Organisers: J. Ng & M. Pusey)
ICCBM13 Dublin, Ireland, September 12–16, 2010 (Organiser: M. Caffrey)
ICCBM-12 Cancun, Mexico, 6–9 May 2008 (Organiser: A. Moreno)
ICCBM-11 Quebec, Canada, 16–21 August 2006 (Organiser: S.-X. Lin)
ICCBM-10 Beijing, China, 5–8 June 2004 (Organiser: Z. Rao)
ICCBM-9 Jena, Germany, 23–28 March 2002 (Organiser: R. Hilgenfeld)
ICCBM-8 San Destin, Florida, USA, May 14–19, 2000 (Organisers: L. DeLucas, A. Chernov)
ICCBM-7 Granada, Spain, 3–8 May 1998 (Organiser: J. Garcia-Ruiz)
ICCBM-6 Hiroshima, Japan, 12–17 November 1995 (Organisers: T. Ashida, H. Komatsu)
ICCBM-5 San Diego, California, USA, 8–13 August 1993 (Organisers: E.A. Stura, J. Sowadski, E.
Villafranca)
ICCBM-4 Freiburg, Germany, 18–24 August 1991 (Organisers: J. Stezowski and W. Littke)
ICCBM-3 Washington, DC, USA, 13–19 August 1989 (Organiser: K. Ward)
ICCBM-2 Bischenberg, Strasbourg, France, 19–25 July 1987 (Organisers: R. Giege, A. Ducruix, J. Fontecilla-Camps)
ICCBM-1 Stanford, California, USA, 14–16 August 1985 (Organiser: R. Feigelson)
ICCBM Proceedings
ICCBM-14 Crystal Growth & Design, Volume vi, Issue 10 (September 2012)
ICCBM-13 Crystal Growth & Design, Volume vi, Issue 7 (September 2012)
ICCBM-12 Crystal Growth & Design, Volume 8, Issue 12, p. 4193 (November 2008)
ICCBM-11 Crystal Growth & Design, Volume 7, Issue 11, pp. 2123–2371 (November 2007)
ICCBM-10 Acta Crystallographica D, Volume 61, Part 6 (June 2005)
ICCBM-9 Acta Crystallographica D, Volume 58, Part 10 (October 2002)
ICCBM-8 Journal of Crystal Growth, Volume 232, Issues 1–4, pp. 1–647 (November 2001)
ICCBM-7 Journal of Crystal Growth, Volume 196, Issues 2–4 (January 1999)
ICCBM-6 Journal of Crystal Growth, Volume 168, Issues 1–4, pp. 1–328 (June 1996)
ICCBM-5 Acta Crystallographica D, Volume 50, Part 4 (July 1994)
ICCBM-4 Journal of Crystal Growth, Volume 122, Issues 1–4, pp. 1–405 (August 1992)
ICCBM-3 Journal of Crystal Growth, Volume 110, Issues 1–2, pp. 1–338 (March 1991)
ICCBM-2 Journal of Crystal Growth, Volume 90, Issues 1–3, pp. 1–374 (May 1988)
ICCBM-1 Journal of Crystal Growth, Volume 76, Issue 3, pp. 529–715 (May 1986)
References International scientific organizations Crystallography organizations International organizations based in the Czech Republic 2002 establishments in the Czech Republic
International Organization for Biological Crystallization
Chemistry,Materials_science
980
25,269,133
https://en.wikipedia.org/wiki/Malcev%20Lie%20algebra
In mathematics, a Malcev Lie algebra, or Mal'tsev Lie algebra, is a generalization of a rational nilpotent Lie algebra, and Malcev groups are similar. Both were introduced by , based on the work of . Definition According to a Malcev Lie algebra is a rational Lie algebra together with a complete, descending -vector space filtration , such that: the associated graded Lie algebra is generated by elements of degree one. Applications Relation to Hopf algebras showed that Malcev Lie algebras and Malcev groups are both equivalent to complete Hopf algebras, i.e., Hopf algebras H endowed with a filtration so that H is isomorphic to . The functors involved in these equivalences are as follows: a Malcev group G is mapped to the completion (with respect to the augmentation ideal) of its group ring QG, with inverse given by the group of grouplike elements of a Hopf algebra H, essentially those elements 1 + x such that . From complete Hopf algebras to Malcev Lie algebras one gets by taking the (completion of) primitive elements, with inverse functor given by the completion of the universal enveloping algebra. This equivalence of categories was used by to prove that, after tensoring with Q, relative K-theory K(A, I), for a nilpotent ideal I, is isomorphic to relative cyclic homology HC(A, I). This theorem was a pioneering result in the area of trace methods. Hodge theory Malcev Lie algebras also arise in the theory of mixed Hodge structures. References Hodge theory Lie algebras
Malcev Lie algebra
Engineering
346
7,247,215
https://en.wikipedia.org/wiki/Meltwater
Meltwater (or melt water) is water released by the melting of snow or ice, including glacial ice, tabular icebergs and ice shelves over oceans. Meltwater is often found during early spring when snow packs and frozen rivers melt with rising temperatures, and in the ablation zone of glaciers where snow cover is diminishing. Meltwater can be produced during volcanic eruptions, in a similar way to that in which the more dangerous lahars form. It can also be produced by the heat generated by the flow itself. When meltwater pools on the surface rather than flowing, it forms melt ponds. As the weather gets colder, meltwater will often re-freeze. Meltwater can also collect or melt under the ice's surface. These pools of water, known as subglacial lakes, can form due to geothermal heat and friction. Melt ponds may also form above and below Arctic sea ice, decreasing its albedo and causing the formation of thin underwater ice layers or false bottoms. Water source Meltwater provides drinking water for a large proportion of the world's population, as well as providing water for irrigation and hydroelectric plants. This meltwater can originate from seasonal snowfall or from the melting of more permanent glaciers. Climate change threatens snowfall and is shrinking the volume of glaciers. Some cities around the world have large lakes that collect snow melt to supplement water supply. Others have artificial reservoirs that collect water from rivers, which receive large influxes of meltwater from their higher elevation tributaries. Meltwater that is not captured flows onward into the oceans, contributing to rising sea levels. Snow melt hundreds of miles away can contribute to river replenishment. Snowfall can also replenish groundwater in a highly variable process. Cities that indirectly source water from meltwater include Melbourne, Canberra, Los Angeles, and Las Vegas, among others.
In North America, 78% of meltwater flows west of the Continental Divide, and 22% flows east of the Continental Divide. Agriculture in Wyoming and Alberta relies on water sources made more stable during the growing season by glacial meltwater. The Tian Shan region in China once had such significant glacial runoff that it was known as the "Green Labyrinth", but glacier volume there fell significantly from 1964 to 2004 and the region has become more arid, already affecting the sustainability of its water sources. In tropical regions, there is much seasonal variability in the flow of mountainous rivers, and glacial meltwater provides a buffer for this variability, giving more water security year-round, but this is threatened by climate change and aridification. Cities that rely heavily on glacial meltwater include La Paz and El Alto in Bolivia, which draw roughly 30% of their water supply from it. Changes in the glacial meltwater are a concern in more remote highland regions of the Andes, where the proportion of water from glacial melt is much greater than in lower elevations. In parts of the Bolivian Andes, surface water contributions from glaciers are as high as 31-65% in the wet season and 39-71% in the dry season. Glacial meltwater Glacial meltwater comes from glacial melt due to external forces or by pressure and geothermal heat. Often, there will be rivers flowing through glaciers into lakes. These brilliantly blue lakes get their color from "rock flour", sediment that has been transported through the rivers to the lakes. This sediment comes from rocks grinding together underneath the glacier. The fine powder is then suspended in the water and absorbs and scatters varying colors of sunlight, giving a milky turquoise appearance. Meltwater also acts as a lubricant in the basal sliding of glaciers. GPS measurements of ice flow have revealed that glacial movement is greatest in summer when meltwater levels are highest. Glacial meltwater can also affect important fisheries, such as in the Kenai River, Alaska.
Rapid changes Meltwater can be an indication of abrupt climate change. One instance of a large meltwater body is the region of a tributary of the Bindschadler Ice Stream, West Antarctica, where rapid vertical motion of the ice sheet surface has suggested shifting of a subglacial water body. Meltwater can also destabilize glacial lakes, leading to sudden floods, and destabilize snowpack, causing avalanches. Dammed glacial meltwater from a moraine-dammed lake that is released suddenly can result in floods, such as those that created the granite chasms in Purgatory Chasm State Reservation. Global warming In a report published in June 2007, the United Nations Environment Programme estimated that global warming could lead to 40% of the world population being affected by the loss of glaciers, snow and the associated meltwater in Asia. The predicted trend of glacial melt portends seasonal climate extremes in these regions of Asia. Historically, Meltwater pulse 1A was a prominent feature of the last deglaciation and took place 14.7-14.2 thousand years ago. The snow of glaciers in the central Andes has melted rapidly due to heatwaves, increasing the proportion of exposed darker-coloured rock. With alpine glacier volume in decline, much of the environment is affected. Dark pollution particles are recognized for their propensity to change the albedo – or reflectance – of a glacier. Pollution particles affect albedo by preventing sun energy from bouncing off a glacier's white, gleaming surface, instead absorbing the heat and causing the glacier to melt. See also Extreme Ice Survey Groundwater Kryal Moulin (geology) Snowmelt Surface water False bottom (sea ice) In the media June 4, 2007, BBC: UN warning over global ice loss References External links United Nations Environment Program: Global Outlook for Ice and Snow Drinking water Water supply Glaciology
Meltwater
Chemistry,Engineering,Environmental_science
1,129
36,544,153
https://en.wikipedia.org/wiki/Sodium%20triacetoxyborohydride
Sodium triacetoxyborohydride, also known as sodium triacetoxyhydroborate, commonly abbreviated STAB, is a chemical compound with the formula NaBH(OCOCH3)3. Like other borohydrides, it is used as a reducing agent in organic synthesis. This colourless salt is prepared by protonolysis of sodium borohydride with acetic acid: NaBH4 + 3 CH3CO2H → NaBH(OCOCH3)3 + 3 H2 Comparison with related reagents Sodium triacetoxyborohydride is a milder reducing agent than sodium borohydride or even sodium cyanoborohydride. It reduces aldehydes but not most ketones. It is especially suitable for reductive aminations of aldehydes and ketones. However, unlike sodium cyanoborohydride, the triacetoxyborohydride hydrolyzes readily and is not compatible with methanol. It reacts only slowly with ethanol and isopropanol, so it can be used in these solvents. NaBH(OAc)3 may also be used for reductive alkylation of secondary amines with aldehyde-bisulfite adducts. Monoacetoxyborohydride The combination of sodium borohydride with carboxylic acids results in the formation of acyloxyborohydride species other than sodium triacetoxyborohydride. These modified species can perform a variety of reductions not normally associated with borohydride chemistry, such as alcohols to hydrocarbons and nitriles to primary amines. See also Sodium cyanoborohydride - a slightly stronger reductant, but amenable to protic solvents Sodium borohydride - a stronger, cheaper reductant Tetramethylammonium triacetoxyborohydride References Sodium compounds Borohydrides Reducing agents Acetates
Sodium triacetoxyborohydride
Chemistry
382
184,308
https://en.wikipedia.org/wiki/Titanic%20acid
Titanic acid is a general name for a family of chemical compounds of the elements titanium, hydrogen, and oxygen, with the general formula . Various simple titanic acids have been claimed, mainly in the older literature. No crystallographic and little spectroscopic support exists for these materials. Some older literature refers to hydrated titanium dioxide as titanic acid, and the dioxide forms an unstable hydrate when TiCl4 hydrolyzes. Metatitanic acid (H2TiO3). Orthotitanic acid (H4TiO4, or Ti(OH)4) is described as a white salt-like powder. Peroxotitanic acid () has also been described as resulting from the treatment of titanium dioxide in sulfuric acid with hydrogen peroxide. The resulting yellow solid decomposes with loss of . Pertitanic acid () References Further reading Titanium(IV) compounds Hydroxides Transition metal oxoacids
Titanic acid
Chemistry
208
39,550,107
https://en.wikipedia.org/wiki/Mt.%20Torry%20Furnace
Mt. Torry Furnace, also known as Virginia Furnace, is a historic iron furnace located at Sherando, Augusta County, Virginia. It was built in 1804, and is a stone square trapezoid measuring 30 feet at the base and 40 feet tall. The original cold-blast charcoal stack was converted for hot blast in 1853. It shut down in 1855, then was reactivated in 1863 to support the Confederate States Army. The furnace was destroyed in June 1864 during the American Civil War by Brigadier General Alfred N. Duffié, then rebuilt in January 1865. It operated until 1884. It was listed on the National Register of Historic Places in 1974. References Industrial buildings and structures on the National Register of Historic Places in Virginia Industrial buildings completed in 1804 Buildings and structures in Augusta County, Virginia National Register of Historic Places in Augusta County, Virginia Industrial furnaces George Washington and Jefferson National Forests
Mt. Torry Furnace
Chemistry
179
984,081
https://en.wikipedia.org/wiki/Space%20rendezvous
A space rendezvous is a set of orbital maneuvers during which two spacecraft, one of which is often a space station, arrive at the same orbit and approach to a very close distance (e.g. within visual contact). Rendezvous requires a precise match of the orbital velocities and position vectors of the two spacecraft, allowing them to remain at a constant distance through orbital station-keeping. Rendezvous may or may not be followed by docking or berthing, procedures which bring the spacecraft into physical contact and create a link between them. The same rendezvous technique can be used for spacecraft "landing" on natural objects with a weak gravitational field; e.g. landing on one of the Martian moons would require the same matching of orbital velocities, followed by a "descent" that shares some similarities with docking. History In its first human spaceflight program, Vostok, the Soviet Union launched pairs of spacecraft from the same launch pad, one or two days apart (Vostok 3 and 4 in 1962, and Vostok 5 and 6 in 1963). In each case, the launch vehicles' guidance systems inserted the two craft into nearly identical orbits; however, this was not nearly precise enough to achieve rendezvous, as the Vostok lacked maneuvering thrusters to adjust its orbit to match that of its twin. The initial separation distances were in the range of , and slowly diverged to thousands of kilometers (over a thousand miles) over the course of the missions. In early 1964, the Soviet Union was able to guide two unmanned satellites, designated Polyot 1 and Polyot 2, to within 5 km of each other, and the craft were able to establish radio communication. In 1963 Buzz Aldrin submitted his doctoral thesis, titled Line-Of-Sight Guidance Techniques For Manned Orbital Rendezvous. As a NASA astronaut, Aldrin worked to "translate complex orbital mechanics into relatively simple flight plans for my colleagues."
First attempt failed NASA's first attempt at rendezvous was made on June 3, 1965, when US astronaut Jim McDivitt tried to maneuver his Gemini 4 craft to meet its spent Titan II launch vehicle's upper stage. McDivitt was unable to get close enough to achieve station-keeping, due to depth-perception problems, and stage propellant venting which kept moving it around. However, the Gemini 4 attempts at rendezvous were unsuccessful largely because NASA engineers had yet to learn the orbital mechanics involved in the process. Simply pointing the active vehicle's nose at the target and thrusting was unsuccessful. If the target is ahead in the orbit and the tracking vehicle increases speed, its altitude also increases, actually moving it away from the target. The higher altitude then increases orbital period due to Kepler's third law, putting the tracker not only above, but also behind the target. The proper technique requires changing the tracking vehicle's orbit to allow the rendezvous target to either catch up or be caught up with, and then at the correct moment changing to the same orbit as the target with no relative motion between the vehicles (for example, putting the tracker into a lower orbit, which has a shorter orbital period allowing it to catch up, then executing a Hohmann transfer back to the original orbital height). First successful rendezvous Rendezvous was first successfully accomplished by US astronaut Wally Schirra on December 15, 1965. Schirra maneuvered the Gemini 6 spacecraft within of its sister craft Gemini 7. The spacecraft were not equipped to dock with each other, but maintained station-keeping for more than 20 minutes. Schirra later commented: Schirra used another metaphor to describe the difference between the two nations' achievements: First docking The first docking of two spacecraft was achieved on March 16, 1966 when Gemini 8, under the command of Neil Armstrong, rendezvoused and docked with an uncrewed Agena Target Vehicle. 
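The catch-up technique described above (put the chaser in a lower, faster orbit, then transfer up at the right moment) follows directly from Kepler's third law. Here is a minimal numeric sketch: the gravitational parameter and Earth radius are standard values, while the 300 km target / 280 km chaser scenario is purely an illustrative assumption.

```python
import math

MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_371_000.0       # m, mean Earth radius

def circular_speed(r):
    """Circular orbital speed at radius r."""
    return math.sqrt(MU_EARTH / r)

def period(r):
    """Orbital period from Kepler's third law: T = 2*pi*sqrt(r^3 / mu)."""
    return 2 * math.pi * math.sqrt(r ** 3 / MU_EARTH)

target = R_EARTH + 300_000.0   # target in a 300 km circular orbit
chaser = R_EARTH + 280_000.0   # chaser parked 20 km below

# The lower orbit is faster and has a shorter period ...
print(circular_speed(chaser) > circular_speed(target))   # True
# ... so the chaser gains phase angle on the target every revolution.
gain_deg = 360.0 * (period(target) / period(chaser) - 1.0)
print(round(gain_deg, 2), "degrees of phase gained per chaser orbit")
```

This is also why simply thrusting toward a target ahead fails: the speed increase raises the orbit, lengthening the period and dropping the chaser behind.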
Gemini 6 was to have been the first docking mission, but had to be cancelled when that mission's Agena vehicle was destroyed during launch. The Soviets carried out the first automated, uncrewed docking between Cosmos 186 and Cosmos 188 on October 30, 1967. The first Soviet cosmonaut to attempt a manual docking was Georgy Beregovoy, who unsuccessfully tried to dock his Soyuz 3 craft with the uncrewed Soyuz 2 in October 1968. Automated systems brought the craft to within , while Beregovoy brought this closer with manual control. The first successful crewed docking occurred on January 16, 1969 when Soyuz 4 and Soyuz 5 docked, collecting the two crew members of Soyuz 5, who had to perform an extravehicular activity to reach Soyuz 4. In March 1969 Apollo 9 achieved the first internal transfer of crew members between two docked spacecraft. The first rendezvous of two spacecraft from different countries took place in 1975, when an Apollo spacecraft docked with a Soyuz spacecraft as part of the Apollo–Soyuz mission. The first multiple space docking took place when both Soyuz 26 and Soyuz 27 were docked to the Salyut 6 space station during January 1978. Uses [Image: Damaged solar arrays on Mir's Spektr module following a collision with an uncrewed Progress resupply spacecraft in June 1997 during Shuttle-Mir. In this space rendezvous gone wrong, the Progress collided with Mir, beginning a depressurization that was halted by closing the hatch to Spektr.] A rendezvous takes place each time a spacecraft brings crew members or supplies to an orbiting space station. The first spacecraft to do this was Soyuz 11, which successfully docked with the Salyut 1 station on June 7, 1971.
Human spaceflight missions have successfully made rendezvous with six Salyut stations, with Skylab, with Mir and with the International Space Station (ISS). Currently, Soyuz spacecraft are used at approximately six-month intervals to transport crew members to and from the ISS. With the introduction of NASA's Commercial Crew Program, the US can transport crews with its own launch vehicles as well as with the Soyuz, using Crew Dragon, an updated version of SpaceX's Cargo Dragon. Robotic spacecraft are also used to rendezvous with and resupply space stations. Soyuz and Progress spacecraft have automatically docked with both Mir and the ISS using the Kurs docking system; Europe's Automated Transfer Vehicle also used this system to dock with the Russian segment of the ISS. Several uncrewed spacecraft use NASA's berthing mechanism rather than a docking port. The Japanese H-II Transfer Vehicle (HTV), SpaceX Dragon, and Orbital Sciences' Cygnus spacecraft all maneuver to a close rendezvous and maintain station-keeping, allowing the ISS Canadarm2 to grapple and move the spacecraft to a berthing port on the US segment. However, the updated version of Cargo Dragon will no longer need to berth, but will instead autonomously dock directly to the space station. The Russian segment only uses docking ports, so it is not possible for HTV, Dragon and Cygnus to find a berth there. Space rendezvous has been used for a variety of other purposes, including recent service missions to the Hubble Space Telescope. Historically, for the missions of Project Apollo that landed astronauts on the Moon, the ascent stage of the Apollo Lunar Module would rendezvous and dock with the Apollo Command/Service Module in lunar orbit rendezvous maneuvers. Also, the STS-49 crew rendezvoused with and attached a rocket motor to the Intelsat VI F-3 communications satellite to allow it to make an orbital maneuver.
Possible future rendezvous may be made by a yet-to-be-developed automated Hubble Robotic Vehicle (HRV), and by the CX-OLEV, which is being developed for rendezvous with a geosynchronous satellite that has run out of fuel. The CX-OLEV would take over orbital station-keeping and/or finally bring the satellite to a graveyard orbit, after which the CX-OLEV could possibly be reused for another satellite. Gradual transfer from the geostationary transfer orbit to the geosynchronous orbit would take a number of months, using Hall-effect thrusters. Alternatively, the two spacecraft may already be together, and simply undock and dock in a different way: for example, Soyuz spacecraft moving from one docking point to another on the ISS or Salyut. In the Apollo spacecraft, a maneuver known as transposition, docking, and extraction was performed an hour or so after trans-lunar injection on the stack consisting of the third stage of the Saturn V rocket, the LM inside the LM adapter, and the CSM (in order from bottom to top at launch, also the order from back to front with respect to the current motion), with the CSM crewed and the LM at this stage uncrewed: the CSM separated, while the four upper panels of the LM adapter were disposed of; the CSM turned 180 degrees (from engine backward, toward the LM, to forward); the CSM connected to the LM while the LM was still connected to the third stage; and the CSM/LM combination then separated from the third stage. NASA sometimes refers to "Rendezvous, Proximity-Operations, Docking, and Undocking" (RPODU) for the set of all spaceflight procedures that are typically needed around spacecraft operations where two spacecraft work in proximity to one another with intent to connect to one another. Phases and methods The standard technique for rendezvous and docking is to dock an active vehicle, the "chaser", with a passive "target". This technique has been used successfully for the Gemini, Apollo, Apollo/Soyuz, Salyut, Skylab, Mir, ISS, and Tiangong programs.
To properly understand spacecraft rendezvous it is essential to understand the relation between spacecraft velocity and orbit. A spacecraft in a certain orbit cannot arbitrarily alter its velocity. Each orbit correlates to a certain orbital velocity. If the spacecraft fires thrusters and increases (or decreases) its velocity it will obtain a different orbit, one with a higher or lower altitude. In circular orbits, higher orbits have a lower orbital velocity. Lower orbits have a higher orbital velocity. For orbital rendezvous to occur, both spacecraft must be in the same orbital plane, and the phase of the orbit (the position of the spacecraft in the orbit) must be matched. For docking, the speed of the two vehicles must also be matched. The "chaser" is placed in a slightly lower orbit than the target. The lower the orbit, the higher the orbital velocity. The difference in orbital velocities of chaser and target is therefore such that the chaser is faster than the target, and catches up with it. Once the two spacecraft are sufficiently close, the chaser's orbit is synchronized with the target's orbit. That is, the chaser will be accelerated. This increase in velocity carries the chaser to a higher orbit. The increase in velocity is chosen such that the chaser approximately assumes the orbit of the target. Stepwise, the chaser closes in on the target, until proximity operations (see below) can be started. In the very final phase, the closure rate is reduced by use of the active vehicle's reaction control system. Docking typically occurs at a rate of to . Rendezvous phases Space rendezvous of an active, or "chaser", spacecraft with an (assumed) passive spacecraft may be divided into several phases, and typically starts with the two spacecraft in separate orbits, typically separated by more than : A variety of techniques may be used to effect the translational and rotational maneuvers necessary for proximity operations and docking. 
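The altitude-velocity relationship described above can be sketched numerically. The gravitational parameter and Earth radius are standard values; the altitudes are purely illustrative:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def circular_velocity(altitude_m):
    """Orbital speed of a circular orbit at the given altitude."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

def orbital_period(altitude_m):
    """Orbital period (s) of a circular orbit at the given altitude."""
    r = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(r**3 / MU_EARTH)

# A chaser 10 km below a target at 400 km altitude: the lower orbit is
# faster and has a shorter period, so the chaser catches up each revolution.
v_target = circular_velocity(400e3)
v_chaser = circular_velocity(390e3)
assert v_chaser > v_target
assert orbital_period(390e3) < orbital_period(400e3)
print(f"target {v_target:.1f} m/s, chaser {v_chaser:.1f} m/s")
```

The difference of only a few metres per second, accumulated over many revolutions, is what closes the along-track gap between the two spacecraft.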
Methods of approach The two most common methods of approach for proximity operations are in-line with the flight path of the spacecraft (called V-bar, as it is along the velocity vector of the target) and perpendicular to the flight path along the line of the radius of the orbit (called R-bar, as it is along the radial vector, with respect to Earth, of the target). The chosen method of approach depends on safety, spacecraft / thruster design, mission timeline, and, especially for docking with the ISS, on the location of the assigned docking port. V-bar approach The V-bar approach is an approach of the "chaser" horizontally along the passive spacecraft's velocity vector. That is, from behind or from ahead, and in the same direction as the orbital motion of the passive target. The motion is parallel to the target's orbital velocity. In the V-bar approach from behind, the chaser fires small thrusters to increase its velocity in the direction of the target. This, of course, also drives the chaser to a higher orbit. To keep the chaser on the V-vector, other thrusters are fired in the radial direction. If this is omitted (for example due to a thruster failure), the chaser will be carried to a higher orbit, which is associated with an orbital velocity lower than the target's. Consequently, the target moves faster than the chaser and the distance between them increases. This is called a natural braking effect, and is a natural safeguard in case of a thruster failure. STS-104 was the third Space Shuttle mission to conduct a V-bar arrival at the International Space Station. The V-bar, or velocity vector, extends along a line directly ahead of the station. Shuttles approach the ISS along the V-bar when docking at the PMA-2 docking port. R-bar approach The R-bar approach consists of the chaser moving below or above the target spacecraft, along its radial vector. The motion is orthogonal to the orbital velocity of the passive spacecraft. 
When below the target the chaser fires radial thrusters to close in on the target. This increases its altitude. However, the orbital velocity of the chaser remains unchanged (thruster firings in the radial direction have no effect on the orbital velocity). Now in a slightly higher position, but with an orbital velocity that does not correspond to the local circular velocity, the chaser slightly falls behind the target. Small rocket pulses in the orbital velocity direction are necessary to keep the chaser along the radial vector of the target. If these rocket pulses are not executed (for example due to a thruster failure), the chaser will move away from the target. This is a natural braking effect. For the R-bar approach, this effect is stronger than for the V-bar approach, making the R-bar approach the safer of the two. Generally, the R-bar approach from below is preferable, as the chaser is in a lower (faster) orbit than the target, and thus "catches up" with it. For the R-bar approach from above, the chaser is in a higher (slower) orbit than the target, and thus has to wait for the target to approach it. Astrotech proposed meeting ISS cargo needs with a vehicle that would approach the station "using a traditional nadir R-bar approach." The nadir R-bar approach is also used for flights to the ISS of H-II Transfer Vehicles, and of SpaceX Dragon vehicles. Z-bar approach An approach of the active, or "chaser", spacecraft horizontally from the side and orthogonal to the orbital plane of the passive spacecraft (that is, from the side and out-of-plane of the orbit of the passive spacecraft) is called a Z-bar approach. Surface rendezvous Apollo 12, the second crewed lunar landing, performed the first ever rendezvous outside of low Earth orbit by landing close to Surveyor 3 and taking parts of it back to Earth.
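The drift behavior underlying the natural braking effect can be sketched with the linearized relative-motion (Clohessy-Wiltshire) equations. The mean motion value below is an approximation for a ~400 km orbit, and the initial offset is illustrative:

```python
import math

def cw_relative_motion(x0, y0, vx0, vy0, n, t):
    """Closed-form Clohessy-Wiltshire relative motion in the target's local
    frame: x radial (positive outward), y along-track; n is the target's
    mean motion (rad/s)."""
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = (6 * (s - n * t)) * x0 + y0 + (2 / n) * (c - 1) * vx0 \
        + (1 / n) * (4 * s - 3 * n * t) * vy0
    return x, y

n = 0.00113  # approximate mean motion of a ~400 km LEO orbit, rad/s
# A chaser starting 100 m radially above the target with zero relative
# velocity drifts behind it (y < 0): the natural braking effect described
# for the R-bar approach from above.
x, y = cw_relative_motion(100.0, 0.0, 0.0, 0.0, n, 600.0)
assert y < 0
```

Without corrective pulses, the radially displaced chaser falls behind the target rather than colliding with it, which is exactly the passive safety property exploited in R-bar approaches.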
See also Androgynous Peripheral Attach System Clohessy-Wiltshire equations for co-orbit analysis Common Berthing Mechanism Deliberate crash landings on extraterrestrial bodies Flyby (spaceflight) Lunar orbit rendezvous Mars orbit rendezvous Nodal precession of orbits around the Earth's axis Path-constrained rendezvous – the process of moving an orbiting object from its current position to a desired position, in such a way that no orbiting obstacles are contacted along the way Soyuz Kontakt Notes References External links Analysis of a New Nonlinear Solution of Relative Orbital Motion by T. Alan Lovell The Visitors (rendezvous) Handbook Automated Rendezvous and Docking of Spacecraft by Wigbert Fehse Docking system agreement key to global space policy – October 20, 2010 Astrodynamics Orbital maneuvers 1965 introductions Projects established in 1965
Space rendezvous
Engineering
3,372
49,718
https://en.wikipedia.org/wiki/Poynting%20vector
In physics, the Poynting vector (or Umov–Poynting vector) represents the directional energy flux (the energy transfer per unit area, per unit time) or power flow of an electromagnetic field. The SI unit of the Poynting vector is the watt per square metre (W/m2); kg/s3 in base SI units. It is named after its discoverer John Henry Poynting, who first derived it in 1884. Nikolay Umov is also credited with formulating the concept. Oliver Heaviside also discovered it independently in the more general form that recognises the freedom of adding the curl of an arbitrary vector field to the definition. The Poynting vector is used throughout electromagnetics in conjunction with Poynting's theorem, the continuity equation expressing conservation of electromagnetic energy, to calculate the power flow in electromagnetic fields. Definition In Poynting's original paper and in most textbooks, the Poynting vector is defined as the cross product S = E × H, where bold letters represent vectors and E is the electric field vector; H is the magnetic field's auxiliary field vector or magnetizing field. This expression is often called the Abraham form and is the most widely used. The Poynting vector is usually denoted by S or N. In simple terms, the Poynting vector S depicts the direction and rate of transfer of energy, that is power, due to electromagnetic fields in a region of space that may or may not be empty. More rigorously, it is the quantity that must be used to make Poynting's theorem valid. Poynting's theorem essentially says that the difference between the electromagnetic energy entering a region and the electromagnetic energy leaving a region must equal the energy converted or dissipated in that region, that is, turned into a different form of energy (often heat). So if one accepts the validity of the Poynting vector description of electromagnetic energy transfer, then Poynting's theorem is simply a statement of the conservation of energy.
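The definition S = E × H can be exercised directly. For a linearly polarized plane wave in free space the field magnitudes are related by |H| = |E|/η0; the field value of 100 V/m is illustrative:

```python
import numpy as np

ETA0 = 376.730313668                      # impedance of free space, ohms
E = np.array([100.0, 0.0, 0.0])           # electric field along x, V/m
H = np.array([0.0, 100.0 / ETA0, 0.0])    # magnetizing field along y, A/m

S = np.cross(E, H)  # instantaneous Poynting vector, W/m^2
# S = E x H points along +z, the direction of propagation.
assert S[0] == 0.0 and S[1] == 0.0 and S[2] > 0.0
```

The resulting flux, E²/η0 ≈ 26.5 W/m², flows along +z, perpendicular to both fields, as expected for a propagating wave.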
If electromagnetic energy is not gained from or lost to other forms of energy within some region (e.g., mechanical energy, or heat), then electromagnetic energy is locally conserved within that region, yielding a continuity equation as a special case of Poynting's theorem: where is the energy density of the electromagnetic field. This frequent condition holds in the following simple example in which the Poynting vector is calculated and seen to be consistent with the usual computation of power in an electric circuit. Example: Power flow in a coaxial cable Although problems in electromagnetics with arbitrary geometries are notoriously difficult to solve, we can find a relatively simple solution in the case of power transmission through a section of coaxial cable analyzed in cylindrical coordinates as depicted in the accompanying diagram. We can take advantage of the model's symmetry: no dependence on θ (circular symmetry) nor on Z (position along the cable). The model (and solution) can be considered simply as a DC circuit with no time dependence, but the following solution applies equally well to the transmission of radio frequency power, as long as we are considering an instant of time (during which the voltage and current don't change), and over a sufficiently short segment of cable (much smaller than a wavelength, so that these quantities are not dependent on Z). The coaxial cable is specified as having an inner conductor of radius R1 and an outer conductor whose inner radius is R2 (its thickness beyond R2 doesn't affect the following analysis). In between R1 and R2 the cable contains an ideal dielectric material of relative permittivity εr and we assume conductors that are non-magnetic (so μ = μ0) and lossless (perfect conductors), all of which are good approximations to real-world coaxial cable in typical situations. 
The center conductor is held at voltage V and draws a current I toward the right, so we expect a total power flow of P = V · I according to basic laws of electricity. By evaluating the Poynting vector, however, we are able to identify the profile of power flow in terms of the electric and magnetic fields inside the coaxial cable. The electric fields are of course zero inside of each conductor, but in between the conductors () symmetry dictates that they are strictly in the radial direction and it can be shown (using Gauss's law) that they must obey the following form: W can be evaluated by integrating the electric field from to which must be the negative of the voltage V: so that: The magnetic field, again by symmetry, can only be non-zero in the θ direction, that is, a vector field looping around the center conductor at every radius between R1 and R2. Inside the conductors themselves the magnetic field may or may not be zero, but this is of no concern since the Poynting vector in these regions is zero due to the electric field's being zero. Outside the entire coaxial cable, the magnetic field is identically zero since paths in this region enclose a net current of zero (+I in the center conductor and −I in the outer conductor), and again the electric field is zero there anyway. Using Ampère's law in the region from R1 to R2, which encloses the current +I in the center conductor but with no contribution from the current in the outer conductor, we find at radius r: Now, from an electric field in the radial direction, and a tangential magnetic field, the Poynting vector, given by the cross-product of these, is only non-zero in the Z direction, along the direction of the coaxial cable itself, as we would expect. Again only a function of r, we can evaluate S(r): where W is given above in terms of the center conductor voltage V. 
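The field expressions just obtained can be checked numerically. Between the conductors they take the explicit forms E_r = V/(r ln(R2/R1)) and H_θ = I/(2πr), so integrating S_z = E_r·H_θ over the annular cross section should recover the expected circuit-theory power P = V·I. The voltage, current, and radii below are purely illustrative:

```python
import math

V, I = 12.0, 2.0          # illustrative line voltage (V) and current (A)
R1, R2 = 1e-3, 3e-3       # illustrative inner/outer conductor radii, m

def S_z(r):
    """Axial Poynting vector between the conductors, S_z = E_r * H_theta."""
    E_r = V / (r * math.log(R2 / R1))  # radial E field from the line voltage
    H_theta = I / (2 * math.pi * r)    # azimuthal H field from Ampere's law
    return E_r * H_theta

# Midpoint-rule integration of S_z over the annulus, dA = 2*pi*r dr:
n = 50000
dr = (R2 - R1) / n
P = sum(S_z(R1 + (k + 0.5) * dr) * 2 * math.pi * (R1 + (k + 0.5) * dr) * dr
        for k in range(n))
assert abs(P - V * I) < 1e-6   # matches P = V*I from circuit theory
```

Because S_z·2πr ∝ 1/r, the integral evaluates analytically to V·I, and the numerical quadrature agrees to well below a microwatt.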
The total power flowing down the coaxial cable can be computed by integrating over the entire cross section A of the cable in between the conductors: Substituting the earlier solution for the constant W we find: that is, the power given by integrating the Poynting vector over a cross section of the coaxial cable is exactly equal to the product of voltage and current as one would have computed for the power delivered using basic laws of electricity. Other similar examples in which the P = V · I result can be analytically calculated are: the parallel-plate transmission line, using Cartesian coordinates, and the two-wire transmission line, using bipolar cylindrical coordinates. Other forms In the "microscopic" version of Maxwell's equations, this definition must be replaced by a definition in terms of the electric field E and the magnetic flux density B (described later in the article). It is also possible to combine the electric displacement field D with the magnetic flux B to get the Minkowski form of the Poynting vector, or use D and H to construct yet another version. The choice has been controversial: Pfeifer et al. summarize and to a certain extent resolve the century-long dispute between proponents of the Abraham and Minkowski forms (see Abraham–Minkowski controversy). The Poynting vector represents the particular case of an energy flux vector for electromagnetic energy. However, any type of energy has its direction of movement in space, as well as its density, so energy flux vectors can be defined for other types of energy as well, e.g., for mechanical energy. The Umov–Poynting vector discovered by Nikolay Umov in 1874 describes energy flux in liquid and elastic media in a completely generalized view. 
Interpretation The Poynting vector appears in Poynting's theorem (see that article for the derivation), an energy-conservation law: where Jf is the current density of free charges and u is the electromagnetic energy density for linear, nondispersive materials, given by where E is the electric field; D is the electric displacement field; B is the magnetic flux density; H is the magnetizing field. The first term in the right-hand side represents the electromagnetic energy flow into a small volume, while the second term subtracts the work done by the field on free electrical currents, which thereby exits from electromagnetic energy as dissipation, heat, etc. In this definition, bound electrical currents are not included in this term and instead contribute to S and u. For light in free space, the linear momentum density is For linear, nondispersive and isotropic (for simplicity) materials, the constitutive relations can be written as where ε is the permittivity of the material; μ is the permeability of the material. Here ε and μ are scalar, real-valued constants independent of position, direction, and frequency. In principle, this limits Poynting's theorem in this form to fields in vacuum and nondispersive linear materials. A generalization to dispersive materials is possible under certain circumstances at the cost of additional terms. One consequence of the Poynting formula is that for the electromagnetic field to do work, both magnetic and electric fields must be present. The magnetic field alone or the electric field alone cannot do any work. Plane waves In a propagating electromagnetic plane wave in an isotropic lossless medium, the instantaneous Poynting vector always points in the direction of propagation while rapidly oscillating in magnitude. 
This can be simply seen given that in a plane wave, the magnitude of the magnetic field H(r,t) is given by the magnitude of the electric field vector E(r,t) divided by η, the intrinsic impedance of the transmission medium: where |A| represents the vector norm of A. Since E and H are at right angles to each other, the magnitude of their cross product is the product of their magnitudes. Without loss of generality let us take X to be the direction of the electric field and Y to be the direction of the magnetic field. The instantaneous Poynting vector, given by the cross product of E and H will then be in the positive Z direction: Finding the time-averaged power in the plane wave then requires averaging over the wave period (the inverse frequency of the wave): where Erms is the root mean square (RMS) electric field amplitude. In the important case that E(t) is sinusoidally varying at some frequency with peak amplitude Epeak, Erms is , with the average Poynting vector then given by: This is the most common form for the energy flux of a plane wave, since sinusoidal field amplitudes are most often expressed in terms of their peak values, and complicated problems are typically solved considering only one frequency at a time. However, the expression using Erms is totally general, applying, for instance, in the case of noise whose RMS amplitude can be measured but where the "peak" amplitude is meaningless. In free space the intrinsic impedance η is simply given by the impedance of free space η0 ≈377Ω. In non-magnetic dielectrics (such as all transparent materials at optical frequencies) with a specified dielectric constant εr, or in optics with a material whose refractive index , the intrinsic impedance is found as: In optics, the value of radiated flux crossing a surface, thus the average Poynting vector component in the direction normal to that surface, is technically known as the irradiance, more often simply referred to as the intensity (a somewhat ambiguous term). 
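The time-averaged relations above can be sketched numerically; η0 is the impedance of free space, and the 1000 W/m² target flux is an illustrative value of roughly solar magnitude:

```python
import math

ETA0 = 376.730313668  # impedance of free space, ohms

def avg_poynting(E_peak, eta=ETA0):
    """Time-averaged Poynting vector magnitude, E_rms^2 / eta."""
    E_rms = E_peak / math.sqrt(2)
    return E_rms ** 2 / eta

# Peak field amplitude that carries 1000 W/m^2 in free space:
E_peak = math.sqrt(2 * ETA0 * 1000.0)
assert abs(avg_poynting(E_peak) - 1000.0) < 1e-9
print(f"E_peak = {E_peak:.0f} V/m")  # about 868 V/m
```

Inverting the formula this way is a common back-of-the-envelope use of the time-averaged Poynting vector: an irradiance is measured and the corresponding field amplitude inferred.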
Formulation in terms of microscopic fields The "microscopic" (differential) version of Maxwell's equations admits only the fundamental fields E and B, without a built-in model of material media. Only the vacuum permittivity and permeability are used, and there is no D or H. When this model is used, the Poynting vector is defined as S = E × B / μ0, where μ0 is the vacuum permeability; E is the electric field vector; B is the magnetic flux density. This is actually the general expression of the Poynting vector. The corresponding form of Poynting's theorem is where J is the total current density and the energy density u is given by where ε0 is the vacuum permittivity. It can be derived directly from Maxwell's equations in terms of total charge and current and the Lorentz force law only. The two alternative definitions of the Poynting vector are equal in vacuum or in non-magnetic materials, where . In all other cases, they differ in that and the corresponding u are purely radiative, since the dissipation term covers the total current, while the E × H definition has contributions from bound currents which are then excluded from the dissipation term. Since only the microscopic fields E and B occur in the derivation of and the energy density, assumptions about any material present are avoided. The Poynting vector and theorem and expression for energy density are universally valid in vacuum and all materials. Time-averaged Poynting vector The above form for the Poynting vector represents the instantaneous power flow due to instantaneous electric and magnetic fields. More commonly, problems in electromagnetics are solved in terms of sinusoidally varying fields at a specified frequency. The results can then be applied more generally, for instance, by representing incoherent radiation as a superposition of such waves at different frequencies and with fluctuating amplitudes.
We would thus not be considering the instantaneous and used above, but rather a complex (vector) amplitude for each which describes a coherent wave's phase (as well as amplitude) using phasor notation. These complex amplitude vectors are not functions of time, as they are understood to refer to oscillations over all time. A phasor such as is understood to signify a sinusoidally varying field whose instantaneous amplitude follows the real part of where is the (radian) frequency of the sinusoidal wave being considered. In the time domain, it will be seen that the instantaneous power flow will be fluctuating at a frequency of 2ω. But what is normally of interest is the average power flow in which those fluctuations are not considered. In the math below, this is accomplished by integrating over a full cycle . The following quantity, still referred to as a "Poynting vector", is expressed directly in terms of the phasors as: where ∗ denotes the complex conjugate. The time-averaged power flow (according to the instantaneous Poynting vector averaged over a full cycle, for instance) is then given by the real part of . The imaginary part is usually ignored; however, it signifies "reactive power" such as the interference due to a standing wave or the near field of an antenna. In a single electromagnetic plane wave (rather than a standing wave which can be described as two such waves travelling in opposite directions), and are exactly in phase, so is simply a real number according to the above definition. The equivalence of to the time-average of the instantaneous Poynting vector can be shown as follows. The average of the instantaneous Poynting vector S over time is given by: The second term is the double-frequency component having an average value of zero, so we find: According to some conventions, the factor of 1/2 in the above definition may be left out.
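The phasor form ½ Re(E × H∗) can be exercised with illustrative amplitudes for a plane wave travelling in +z; the common phase factor carried on both E and H cancels in E × H∗:

```python
import numpy as np

ETA0 = 376.730313668  # impedance of free space, ohms

phase = np.exp(1j * 0.3)                          # arbitrary common phase
E = np.array([100.0 * phase, 0.0, 0.0])           # phasor amplitude, V/m
H = np.array([0.0, (100.0 / ETA0) * phase, 0.0])  # phasor amplitude, A/m

S = 0.5 * np.real(np.cross(E, np.conj(H)))  # time-averaged Poynting vector
# For E and H in phase this equals E_peak^2 / (2 * eta0) along +z:
assert abs(S[2] - 100.0 ** 2 / (2 * ETA0)) < 1e-9
```

Since E and H here are exactly in phase, E × H∗ is purely real and the imaginary ("reactive") part vanishes, as the text describes for a single travelling plane wave.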
Multiplication by 1/2 is required to properly describe the power flow since the magnitudes of and refer to the peak fields of the oscillating quantities. If rather the fields are described in terms of their root mean square (RMS) values (which are each smaller by the factor ), then the correct average power flow is obtained without multiplication by 1/2. Resistive dissipation If a conductor has significant resistance, then, near the surface of that conductor, the Poynting vector would be tilted toward and impinge upon the conductor. Once the Poynting vector enters the conductor, it is bent to a direction that is almost perpendicular to the surface. This is a consequence of Snell's law and the very slow speed of light inside a conductor. The definition and computation of the speed of light in a conductor can be given. Inside the conductor, the Poynting vector represents energy flow from the electromagnetic field into the wire, producing resistive Joule heating in the wire. For a derivation that starts with Snell's law see Reitz page 454. Radiation pressure The density of the linear momentum of the electromagnetic field is S/c2 where S is the magnitude of the Poynting vector and c is the speed of light in free space. The radiation pressure exerted by an electromagnetic wave on the surface of a target is given by Uniqueness of the Poynting vector The Poynting vector occurs in Poynting's theorem only through its divergence , that is, it is only required that the surface integral of the Poynting vector around a closed surface describe the net flow of electromagnetic energy into or out of the enclosed volume. This means that adding a solenoidal vector field (one with zero divergence) to S will result in another field that satisfies this required property of a Poynting vector field according to Poynting's theorem. 
Since the divergence of any curl is zero, one can add the curl of any vector field to the Poynting vector and the resulting vector field S′ will still satisfy Poynting's theorem. However even though the Poynting vector was originally formulated only for the sake of Poynting's theorem in which only its divergence appears, it turns out that the above choice of its form is unique. The following section gives an example which illustrates why it is not acceptable to add an arbitrary solenoidal field to E × H. Static fields The consideration of the Poynting vector in static fields shows the relativistic nature of the Maxwell equations and allows a better understanding of the magnetic component of the Lorentz force, . To illustrate, the accompanying picture is considered, which describes the Poynting vector in a cylindrical capacitor, which is located in an H field (pointing into the page) generated by a permanent magnet. Although there are only static electric and magnetic fields, the calculation of the Poynting vector produces a clockwise circular flow of electromagnetic energy, with no beginning or end. While the circulating energy flow may seem unphysical, its existence is necessary to maintain conservation of angular momentum. The momentum of an electromagnetic wave in free space is equal to its power divided by c, the speed of light. Therefore, the circular flow of electromagnetic energy implies an angular momentum. If one were to connect a wire between the two plates of the charged capacitor, then there would be a Lorentz force on that wire while the capacitor is discharging due to the discharge current and the crossed magnetic field; that force would be tangential to the central axis and thus add angular momentum to the system. That angular momentum would match the "hidden" angular momentum, revealed by the Poynting vector, circulating before the capacitor was discharged. 
See also Wave vector References Further reading Electromagnetic radiation Optical quantities Vectors (mathematics and physics)
Poynting vector
Physics,Mathematics
3,921
58,006,810
https://en.wikipedia.org/wiki/Abramov%27s%20algorithm
In mathematics, particularly in computer algebra, Abramov's algorithm computes all rational solutions of a linear recurrence equation with polynomial coefficients. The algorithm was published by Sergei A. Abramov in 1989. Universal denominator The main concept in Abramov's algorithm is a universal denominator. Let be a field of characteristic zero. The dispersion of two polynomials is defined as where denotes the set of non-negative integers. Therefore the dispersion is the maximum such that the polynomial and the -times shifted polynomial have a common factor. It is if such a does not exist. The dispersion can be computed as the largest non-negative integer root of the resultant . Let be a recurrence equation of order with polynomial coefficients , polynomial right-hand side and rational sequence solution . It is possible to write for two relatively prime polynomials . Let and where denotes the falling factorial of a function. Then divides . So the polynomial can be used as a denominator for all rational solutions and hence it is called a universal denominator. Algorithm Let again be a recurrence equation with polynomial coefficients and a universal denominator. After substituting for an unknown polynomial and setting the recurrence equation is equivalent to As the cancel, this is a linear recurrence equation with polynomial coefficients which can be solved for an unknown polynomial solution . There are algorithms to find polynomial solutions. The solutions for can then be used again to compute the rational solutions .

algorithm rational_solutions is
    input: Linear recurrence equation .
    output: The general rational solution if there are any solutions, otherwise false.
    Compute a universal denominator
    Substitute the denominator into the recurrence equation
    Solve for general polynomial solution
    if solution exists then
        return general solution
    else
        return false
    end if

Example The homogeneous recurrence equation of order over has a rational solution.
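The dispersion computation can be sketched in code. Abramov's method obtains candidate shifts as the non-negative integer roots of a resultant; the sketch below instead scans shifts up to a caller-supplied bound and tests for a non-trivial polynomial gcd, which gives the same answer within that range. The example polynomial and all names are illustrative, not from the article:

```python
from fractions import Fraction
from math import comb

def shift(q, k):
    """Coefficients (low to high) of q(x + k), by binomial expansion."""
    out = [Fraction(0)] * len(q)
    for i, c in enumerate(q):
        for j in range(i + 1):  # c*(x+k)^i = c * sum_j C(i,j) k^(i-j) x^j
            out[j] += Fraction(c) * comb(i, j) * Fraction(k) ** (i - j)
    return out

def polymod(a, b):
    """Remainder of polynomial division a mod b (exact, over Q)."""
    a = a[:]
    while a and len(a) >= len(b):
        f, s = a[-1] / b[-1], len(a) - len(b)
        for i, bc in enumerate(b):
            a[s + i] -= f * bc
        while a and a[-1] == 0:
            a.pop()
    return a

def polygcd(a, b):
    """Euclidean polynomial gcd over Q (up to a constant factor)."""
    a, b = list(a), list(b)
    while b:
        a, b = b, polymod(a, b)
    return a

def dispersion(p, q, bound):
    """Largest k in [0, bound] with gcd(p(x), q(x+k)) non-trivial, else -1."""
    p = [Fraction(c) for c in p]
    best = -1
    for k in range(bound + 1):
        if len(polygcd(p, shift(q, k))) >= 2:  # gcd of degree >= 1
            best = k
    return best

# p(x) = x*(x - 3) = x^2 - 3x shares the factor x with its own 3-shift,
# so dis(p, p) = 3:
assert dispersion([0, -3, 1], [0, -3, 1], bound=10) == 3
```

In the full algorithm this dispersion feeds the construction of the universal denominator; the gcd-based scan is only practical because the dispersion is bounded by the spread of the integer root differences.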
It can be computed by considering the dispersion. This yields the universal denominator. Multiplying the original recurrence equation with the universal denominator and substituting leads to a transformed recurrence equation. This equation has a polynomial solution for an arbitrary constant, from which the general rational solution follows for an arbitrary constant. References Computer algebra
Abramov's algorithm
Mathematics,Technology
429
35,576,646
https://en.wikipedia.org/wiki/MetaLab%2C%20Ltd.
MetaLab is an interface design firm headquartered in Victoria, British Columbia that provides product management, software engineering and UX research services. MetaLab was founded in 2006 by Andrew Wilkinson. In January 2017, MetaLab became a subsidiary of Tiny. MetaLab also founded Pixel Union, which provides themes for platforms like Tumblr, Shopify and WordPress. Its clients include Slack, Google, Uber, and Amazon. Products MetaLab has produced the following products: Ballpark, an online application that allows users to send invoices, receive payments, and bid on projects. Flow, a task management platform to create, organize, discuss, and accomplish tasks. In March 2011, MetaLab announced the launch of Flow for the web, iPhone and iPad. A redesign of Flow was released on 25 September 2013. Pixel Union, a collaboration between MetaLab and 45royale that creates curated internet themes for web pages and web platforms. Mozilla In March 2010, Andrew Wilkinson (co-founder of MetaLab) wrote a blog post claiming that Mozilla 'literally copied images straight off our site' for use in the design of their FlightDeck editor. In an updated blog post, Wilkinson stated that, "I just got off the phone with the team at Mozilla, who apologized and clarified a few things. The design which used our site’s design elements was a development build and according to them the design has been changed in newer builds. That said, it was used in their launch video as well as their blog post announcing the product. They told me that the team who put together the blog post and video was unaware of the similarities at the time of inclusion. We’ve asked for a public apology, and I’ll be doing a follow-up post tomorrow." References Design companies of Canada Companies based in Victoria, British Columbia Design companies established in 2006 Canadian companies established in 2006 2006 establishments in British Columbia
MetaLab, Ltd.
Engineering
391
41,972,103
https://en.wikipedia.org/wiki/Super-resolution%20optical%20fluctuation%20imaging
Super-resolution optical fluctuation imaging (SOFI) is a post-processing method for the calculation of super-resolved images from recorded image time series that is based on the temporal correlations of independently fluctuating fluorescent emitters. SOFI has been developed for super-resolution of biological specimens that are labelled with independently fluctuating fluorescent emitters (organic dyes, fluorescent proteins). In comparison to other super-resolution microscopy techniques such as STORM or PALM that rely on single-molecule localization and hence allow only one active molecule per diffraction-limited area (DLA) and timepoint, SOFI does not necessitate controlled photoswitching and/or photoactivation, nor long imaging times. Nevertheless, it still requires fluorophores that cycle through two distinguishable states, either real on-/off-states or states with different fluorescence intensities. In mathematical terms SOFI imaging relies on the calculation of cumulants, for which two distinct approaches exist. On the one hand, an image can be calculated via auto-cumulants, which by definition rely only on the information of each pixel itself; on the other hand, an improved method utilizes the information of different pixels via the calculation of cross-cumulants. Both methods can increase the final image resolution significantly, although the cumulant calculation has its limitations. SOFI is in fact able to increase the resolution in all three dimensions. Principle Like other super-resolution methods, SOFI is based on recording an image time series on a CCD or CMOS camera. In contrast to other methods, the recorded time series can be substantially shorter, since a precise localization of emitters is not required and therefore a larger quantity of activated fluorophores per diffraction-limited area is allowed.
The pixel values of a SOFI image of the n-th order are calculated from the values of the pixel time series in the form of an n-th order cumulant, where the final value assigned to a pixel can be imagined as the integral over a correlation function. The finally assigned pixel value intensities are a measure of the brightness and correlation of the fluorescence signal. Mathematically, the n-th order cumulant is related to the n-th order correlation function, but exhibits some advantages concerning the resulting resolution of the image. Since in SOFI several emitters per DLA are allowed, the photon count at each pixel results from the superposition of the signals of all activated nearby emitters. The cumulant calculation now filters the signal and leaves only highly correlated fluctuations. This provides a contrast enhancement and therefore a background reduction. As implied in the figure on the left, the fluorescence source distribution is convolved with the system's point spread function (PSF) U(r). Hence the fluorescence signal at time t and position is given by Within the above equations N is the number of emitters, located at the positions with a time-dependent molecular brightness where is a variable for the constant molecular brightness and is a time-dependent fluctuation function. The molecular brightness is just the average fluorescence count-rate divided by the number of molecules within a specific region. For simplification it has to be assumed that the sample is in a stationary equilibrium and therefore the fluorescence signal can be expressed as a zero-mean fluctuation: where denotes time-averaging. The auto-correlation (here, e.g., the second order) can then be written as follows for a certain time lag : From these equations it follows that the PSF of the optical system has to be taken to the power of the order of the correlation.
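A small synthetic experiment illustrates how the second-order cumulant (at zero time lag, the per-pixel variance of the zero-mean fluctuations) sharpens the image: two independently blinking emitters that are unresolved in the mean image become resolved in the variance image, because the cumulant is built from U²(r) rather than U(r). All numbers (emitter positions, PSF width, blinking statistics) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-dimensional toy "camera line": two independently blinking emitters
# 4 px apart under a Gaussian PSF (sigma = 2 px) -- unresolved on average.
x = np.arange(40.0)
psf = lambda x0: np.exp(-(x - x0) ** 2 / (2 * 2.0 ** 2))

T = 20000                                   # number of frames
blink = rng.random((T, 2)) < 0.5            # independent on/off states
frames = blink[:, :1] * psf(18.0) + blink[:, 1:] * psf(22.0)

mean_img = frames.mean(axis=0)
# Second-order auto-cumulant at zero time lag = per-pixel variance of the
# zero-mean fluctuations; cross terms of independent emitters average out.
sofi2 = frames.var(axis=0)

# The mean image shows a single hump (midpoint brighter than the emitter
# positions), while the cumulant image, built from psf^2, shows a dip
# between two resolved peaks.
assert mean_img[20] > mean_img[18]   # unresolved in the mean image
assert sofi2[20] < sofi2[18]         # resolved in the 2nd-order image
```

Because the variance image is a sum of squared PSF copies, its effective PSF is narrower by √2, which is exactly the second-order resolution gain discussed in the text.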
Thus in a second-order correlation the PSF would be reduced along all dimensions by a factor of \sqrt{2} (for a Gaussian PSF). As a result, the resolution of the SOFI images increases by this factor. Cumulants versus correlations Because the fluctuations of different emitters are independent in time, no cross-correlation terms between emitters contribute to the reassigned pixel value. Higher-order correlation functions, however, still contain contributions from lower-order correlations, for which reason it is superior to calculate cumulants, since in a cumulant all lower-order correlation terms vanish. Cumulant-calculation Auto-cumulants For computational reasons it is convenient to set all time lags in higher-order cumulants to zero, so that a general expression for the n-th order auto-cumulant can be found: \kappa_n(\vec{r}) = \sum_{k=1}^{N} U^n(\vec{r} - \vec{r}_k)\,\epsilon_k^n\, w_k(n), where w_k(n) is a specific correlation-based weighting function influenced by the order of the cumulant and mainly depending on the fluctuation properties of the emitters. Although there is no fundamental limitation to calculating very high orders of cumulants and thereby shrinking the FWHM of the PSF, there are practical limitations due to the weighting of the values assigned to the final image. Emitters with a higher molecular brightness show a strong increase of the assigned pixel cumulant value at higher orders, and the same behaviour can be expected from differences in the fluctuation characteristics of different emitters. A wide intensity range of the resulting image can therefore be expected, and as a result dim emitters can get masked by bright emitters in higher-order images. The calculation of auto-cumulants can be realized in a very attractive way in a mathematical sense. The n-th order cumulant can be calculated with a basic recursion from moments: K_n = m_n - \sum_{i=1}^{n-1} \binom{n-1}{i-1} K_i\, m_{n-i}, where K_i is the cumulant of order i and m_i likewise represents the raw moments. The term within the brackets indicates a binomial coefficient.
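The moment-to-cumulant recursion just given can be sketched as follows; a minimal NumPy illustration (the function name is my own) that returns the cumulants K_1..K_n of a single pixel time series from its raw moments:

```python
import numpy as np
from math import comb

def cumulants_from_moments(x, order):
    """Cumulants K_1..K_order of a time series x via the recursion
    K_n = m_n - sum_{i=1}^{n-1} C(n-1, i-1) * K_i * m_{n-i},
    where m_n = <x^n>_t are the raw moments."""
    x = np.asarray(x, dtype=float)
    m = [float(np.mean(x ** n)) for n in range(order + 1)]  # m[0] = 1
    K = [0.0] * (order + 1)
    for n in range(1, order + 1):
        K[n] = m[n] - sum(comb(n - 1, i - 1) * K[i] * m[n - i]
                          for i in range(1, n))
    return K[1:]
```

For a series alternating between 0 and 10 this gives K_1 = 5 (the mean), K_2 = 25 (the variance) and K_3 = 0, as expected for a symmetric two-state fluctuation.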
This way of computation is straightforward in comparison with calculating cumulants via the standard formulas. It allows the calculation of cumulants with little computing time and, well implemented, is even suitable for the calculation of high-order cumulants on large images. Cross-cumulants In a more advanced approach, cross-cumulants are calculated by taking the information of several pixels into account: the cumulant value assigned to the current pixel position i is computed from the time series of several contributing pixels (e.g. j, l and k for a fourth-order cumulant); all other quantities are as before. For a Gaussian PSF the n-th order cross-cumulant of the contributing pixel positions \vec{r}_1, \dots, \vec{r}_n (at zero time lag) factorizes as XC_n(\vec{r}_1, \dots, \vec{r}_n) = w(\vec{r}_1, \dots, \vec{r}_n)\,\sum_{k=1}^{N} U^n(\bar{\vec{r}} - \vec{r}_k)\,\epsilon_k^n\, w_k(n), where \bar{\vec{r}} is the centroid of the contributing pixel positions. The major difference in comparison with the equation for the auto-cumulants is the appearance of the weighting factor w(\vec{r}_1, \dots, \vec{r}_n), also termed the distance factor. This distance factor is PSF-shaped and depends on the distances between the cross-correlated pixels, in the sense that the contribution of each pixel decays with distance in a PSF-shaped manner; in principle this means that the distance factor is smaller for pixels that are further apart. The cross-cumulant approach can be used to create new, virtual pixels that reveal true information about the labelled specimen by reducing the effective pixel size. These pixels carry more information than pixels arising from simple interpolation. In addition, the cross-cumulant approach can be used to estimate the PSF of the optical system by making use of the intensity differences between the virtual pixels, which are due to the "loss" in cross-correlation mentioned above. Each virtual pixel can be re-weighted with the inverse of its distance factor, leading to a restoration of the true cumulant value. Finally, the PSF can be used to establish an n-fold resolution improvement for the n-th order cumulant by re-weighting the optical transfer function (OTF). This step can also be replaced by using the PSF for a deconvolution, which is associated with less computational cost.
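The decay of the distance factor with pixel separation can be made concrete for a Gaussian PSF U(r) = exp(-r^2 / (2 sigma^2)): the product of PSFs centred on the contributing pixels factorises into the n-th power of the PSF at their centroid, times a factor that depends only on the spread of the pixels. A small sketch under that Gaussian assumption (the function name is my own):

```python
import numpy as np

def distance_factor(positions, sigma):
    """Distance factor of a cross-cumulant for a Gaussian PSF: for contributing
    pixel positions r_1..r_n with centroid rbar, the product of PSFs equals
    U^n(rbar - r_k) times exp(-sum_a |r_a - rbar|^2 / (2 sigma^2)).
    Returns that second, emitter-independent factor."""
    r = np.asarray(positions, dtype=float)
    rbar = r.mean(axis=0)                     # centroid of contributing pixels
    return float(np.exp(-np.sum((r - rbar) ** 2) / (2.0 * sigma ** 2)))
```

Coincident pixels give a factor of 1; the factor shrinks as the pixels spread apart, which is why virtual pixels built from more distant originals appear dimmer and must be re-weighted with the inverse factor.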
Cross-cumulant calculation requires a computationally much more expensive formula that comprises the calculation of sums over partitions. This is owed to the combination of different pixels for assigning a new value; hence no fast recursive approach is usable at this point. For the calculation of cross-cumulants the following equation can be used: \kappa(F_1, \dots, F_n) = \sum_{P} (-1)^{|P|-1}\,(|P|-1)!\,\prod_{p \in P} \langle \prod_{i \in p} F_i \rangle_t. Here the sum runs over all possible partitions P, p denotes the different parts of each partition, i is the index for the different pixel positions taken into account during the calculation, and F_i is the image time stack of the corresponding contributing pixel. The cross-cumulant approach facilitates the generation of virtual pixels depending on the order of the cumulant, as previously mentioned. For a 4th-order cross-cumulant image these virtual pixels can be calculated in a particular pattern from the original pixels, as depicted in the lower image, part A. The pattern itself arises simply from the calculation of all possible combinations of the original image pixels A, B, C and D; here this was done by a scheme of "combinations with repetitions". Virtual pixels exhibit a loss in intensity that is due to the cross-correlation itself. Part B of the second image depicts this general dependency of the virtual pixels on the cross-correlation. To restore meaningful pixel values the image is smoothed by a routine that determines a distance factor for each pixel of the virtual pixel grid in a PSF-shaped manner and applies its inverse to all image pixels sharing the same distance factor.
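The partition sum described above can be sketched directly. This is a minimal NumPy illustration (function names are my own) of the standard joint-cumulant partition formula applied to a set of pixel time series:

```python
import math
import numpy as np

def partitions(items):
    """Yield every set partition of a list as a list of blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):           # put `first` into an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part               # or into a block of its own

def cross_cumulant(series):
    """Joint cumulant of n pixel time series (1-D arrays of equal length):
    kappa = sum_P (-1)^(|P|-1) (|P|-1)! prod_{p in P} <prod_{i in p} F_i>_t."""
    series = [np.asarray(s, dtype=float) for s in series]
    total = 0.0
    for part in partitions(list(range(len(series)))):
        term = math.prod(
            float(np.mean(np.prod([series[i] for i in block], axis=0)))
            for block in part)
        total += (-1) ** (len(part) - 1) * math.factorial(len(part) - 1) * term
    return total
```

With two copies of the same series this reduces to the variance; with an uncorrelated constant series the cross term vanishes, which is exactly the background suppression exploited by SOFI.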
https://en.wikipedia.org/wiki/Suillus%20quiescens
Suillus quiescens is a pored mushroom of the genus Suillus in the family Suillaceae. First collected in 2002 on Santa Cruz Island off the coast of California, in association with Bishop Pine (Pinus muricata), the species was scientifically described and named in 2010. In addition to its distribution in coastal California, it was also found forming ectomycorrhizae with the roots of pine seedlings in the eastern Sierra Nevada, coastal Oregon, and the southern Cascade Mountains. It resembles Suillus brevipes, but can be distinguished from that species by its paler-colored immature cap and by the tiny colored glands on the stipe that darken with age. Discovery Fruit bodies of the fungus were first collected in 2002 on Santa Cruz Island, in Santa Barbara County. They were named provisionally as a new species, Suillus quiescens, in conference proceedings published in 2005. The species was officially described and named in a 2010 Mycologia publication. The specific epithet quiescens refers to the organism's ability to wait dormant (quiescent) in the soil until it encounters pine roots. Phylogeny Based on phylogenetic analysis of the internal transcribed spacer region in the non-functional RNA of a number of Suillus species, S. quiescens is distinct from other morphologically similar species such as S. brevipes, S. volcanalis, and S. occidentalis. The S. quiescens sequences, which were obtained from fruit bodies and from mycorrhizal root tips, formed a clade. The analysis showed that the S. quiescens sequences matched some unidentified Suillus sequences obtained from mycorrhizae of pine seedlings collected in Oregon and California. Description The cap ranges in shape from hemispheric to broadly convex, and has a diameter of . The cap color is deep brown in mature specimens and lighter shades of brown in younger mushrooms. Young specimens have a sticky layer of gluten on the cap that dries out in maturity. The edge of the cap is rolled inwards in young specimens.
The flesh of the cap is whitish and does not change color when bruised or cut. The tubes on the underside of the cap are light yellow to bright orange-yellow; the tube mouths are usually less than 1 mm wide. The stipe is usually between long, less frequently reaching up to . It is either the same width throughout or slightly larger (bulbous) at the base. The color of the upper portion of the stipe is pale to light yellow, while the lower portion may be light brown or covered with streaks of glutinous material like that on the cap. The stipe surface is covered with fine glands that are initially slightly darker than the color of the stipe surface, but deepen to brown or nearly black after drying. The color of the spore print was not determined from the initial collections, but is thought to be yellow-brown to brown based on the accumulated spore deposit seen on the surface of the caps of neighboring fruit bodies. The elongate spores are oblong in face view, with dimensions of 6.1–14.7 by 2.4–3.7 μm. Most spores have a single large drop of oil in them. The spore-bearing cells, the basidia, are club-shaped, two- or four-spored, and measure 20.2–26.2 by 5.2–6.7 μm. Similar species With its short stipe and sticky cap, S. quiescens is similar to S. brevipes. It may be distinguished from the latter species by the color of the young (light-brown) cap, the glandular dots at the top of stipes in mature specimens, and the yellowish color at the top of the stipe. Habitat and distribution Fruit bodies grow together in small groups on the ground in association with Bishop Pine (Pinus muricata). It is the most common Suillus species on Santa Cruz Island, its type locality, and it has also been collected at Santa Rosa Island and at Point Reyes National Seashore in California. Santa Cruz and Santa Rosa, two of the four islands that make up the northern Channel Islands, have a Mediterranean climate with cool and wet winters, and warm and dry summers.
Most species of Suillus do not have spores that survive in the soil for extended periods of time, but the spores of S. quiescens can tolerate the dry conditions and heat typical of California. Another study showed that viable S. quiescens spores were present in steam-pasteurized soil planted in Oregon fields. The authors suggest that S. quiescens is an early successional species that fruits in young forests, and whose spores remain dormant in the soil for extended periods of time until the roots of a suitable pine host are encountered. External links Photos at Mushroom Observer
https://en.wikipedia.org/wiki/Coade%20stone
Coade stone, or Lithodipyra or Lithodipra, is stoneware that was often described as an artificial stone in the late 18th and early 19th centuries. It was used for moulding neoclassical statues, architectural decorations and garden ornaments of the highest quality that remain virtually weatherproof today. Coade stone features were produced by appointment to George III and the Prince Regent for St George's Chapel, Windsor; the Royal Pavilion, Brighton; Carlton House, London; the Royal Naval College, Greenwich; and the refurbishment of Buckingham Palace in the 1820s. Coade stone was prized by the most important architects of the day, among them John Nash (Buckingham Palace), Sir John Soane (Bank of England), Robert Adam (Kenwood House) and James Wyatt (Radcliffe Observatory). The product (originally known as Lithodipyra) was created around 1770 by Eleanor Coade, who ran Coade's Artificial Stone Manufactory, Coade and Sealy, and Coade in Lambeth, London, from 1769 until her death in 1821. It continued to be manufactured by her last business partner, William Croggon, until 1833. History In 1769, Mrs Coade bought Daniel Pincot's struggling artificial stone business at Kings Arms Stairs, Narrow Wall, Lambeth, a site now under the Royal Festival Hall. This business developed into Coade's Artificial Stone Manufactory with Coade in charge; within two years (1771) she had fired Pincot for "representing himself as the chief proprietor". Coade did not invent artificial stone. Various lesser-quality ceramic precursors to Lithodipyra had been both patented and manufactured over the forty (or sixty) years prior to the introduction of her product. She was, however, probably responsible for perfecting both the clay recipe and the firing process.
It is possible that Pincot's business was a continuation of that run nearby by Richard Holt, who had taken out two patents in 1722, one for a kind of liquid metal or stone and another for making china without the use of clay; but there were many start-up artificial stone businesses in the early 18th century, of which only Coade's succeeded. The company did well and boasted an illustrious list of customers such as George III and members of the English nobility. In 1799, Coade appointed her cousin John Sealy (son of her mother's sister, Mary), already working as a modeller, as a partner in her business. The business then traded as Coade and Sealy until his death in 1813, when it reverted to Coade. In 1799, she opened a showroom, Coade and Sealy's Gallery of Sculpture, on Pedlar's Acre at the Surrey end of Westminster Bridge Road, to display her products. (See adjacent "Coade and Sealy gallery" image) In 1813, Coade took on William Croggan from Grampound in Cornwall, a sculptor and distant relative by marriage (second cousin once removed). He managed the factory until her death eight years later in 1821, whereupon he bought the factory from the executors for c. £4000. Croggan supplied a lot of Coade stone for Buckingham Palace; however, he went bankrupt in 1833 and died two years later. Trade declined, and production came to an end in the early 1840s. Material Description Coade stone is a type of stoneware. Mrs Coade's own name for her products was Lithodipyra, a name constructed from ancient Greek words meaning 'stone-twice-fire', or 'twice-fired stone'. Its colours varied from light grey to light yellow (or even beige) and its surface is best described as having a matte finish. The ease with which the product could be moulded into complex shapes made it ideal for large statues, sculptures and sculptural façades. One-off commissions were expensive to produce, as they had to carry the entire cost of creating a mould.
Whenever possible, moulds were kept for many years of repeated use. Formula The recipe for Coade stone is claimed to be used today by Coade Ltd. Its manufacture required extremely careful control and skill in kiln firing over a period of days, difficult to achieve with the fuels and technology of the era. Coade's factory was the only really successful manufacturer. The formula used was:

10% grog
5–10% crushed flint
5–10% fine quartz
10% crushed soda lime glass
60–70% ball clay from Dorset and Devon

This mixture was also referred to as "fortified clay", which was kneaded before insertion into a kiln for firing over four days – a production technique very similar to brick manufacture. Depending on the size and fineness of detail in the work, a different size and proportion of Coade grog was used. In many pieces a combination of grogs was used, with fine grogged clay applied to the surface for detail, backed up by a more heavily grogged mixture for strength. Durability One of the more striking features of Coade stone is its high resistance to weathering, with the material often faring better than most types of natural stone in London's harsh environment. Prominent examples listed below have survived without apparent wear and tear for 150 years. There were, however, notable exceptions: a few works produced by Coade, mainly dating from the later period, have shown poor resistance to weathering due to a bad firing in the kiln, where the material was not brought up to a sufficient temperature. Demise Coade stone was superseded only after Mrs Coade's death in 1821, by products using naturally exothermic Portland cement as a binder. It appears to have been largely phased out by the 1840s. Examples Over 650 pieces are still in existence worldwide. Apsley House, No. 1, London. Duke of Wellington's house. The 1819 renovations by architect Benjamin Dean Wyatt included Scagliola ornamentation (that resembles marble inlays) in Coade stone.
() Athenry Abbey, Ireland. The last de Bermingham to be buried at Athenry was Lady Mathilda Bermingham (d. 1788). The tower collapsed around 1790. Lady Mathilda's tomb, a Coade stone monument, was broken into in 2002. () Banff, Aberdeenshire, Scotland. Duff House Mausoleum, Wrack Woods. James Duff, 2nd Earl Fife built the mausoleum for his family in 1791, possibly on the site of a Carmelite friary. Built before the Gothic Revival, this is an example of "Gothick" architecture. Typically Georgian – the carvings, including the monument to the first Earl, are in Coade stone. () Bargate, a Grade I listed medieval gatehouse in the city centre of Southampton. In 1809 a Coade stone statue of George III in Roman dress was added to the middle of the four windows of the southern side. It was a gift to the town from John Petty, 2nd Marquess of Lansdowne. () Bath, 8 Argyll Street – the Royal Arms of Queen Charlotte are above the entrance to A. H. Hale (Pharmacy), established 1826. () Battersea, St Mary's Church. The church includes several important monuments from the earlier church: John Camden (d. 1780), and his eldest daughter Elizabeth Neild (d. 1791), 'Girl by a funeral urn with a poetic eulogy', signed by Coade of Lambeth (1792). () Becconsall Old Church, Hesketh Bank, Lancashire. The baptismal font, dating from the 18th century, is in the form of a vase, and is made from Coade stone. () Birmingham Botanical Gardens, England. A Coade stone fountain lies west of the bandstand, which was presented in 1850 and was designed by the Birmingham architect, Charles Edge. () Birmingham Library. Displayed in the Library are two large Coade stone medallions, made in the 1770s and removed from the front of the city's Theatre Royal when it was demolished in 1956. These depict David Garrick and William Shakespeare. () Brighton, Royal Pavilion of King George IV. () Brighton and Hove Cemetery.
Anna Maria Crouch, actress, singer and mistress of George IV, has an elaborate, Grade II-listed, Coade stone table tomb with a carved memorial tablet, friezes with foliage patterns and Vitruvian scrolls, putti and a Classical-style urn. () Brighton, Stanmer Park, Sussex. Frankland Monument. A Coade stone statue of 1775 by Richard Hayward, erected to commemorate Frederick Meinhardt Frankland (c. 1694–1768), barrister-at-law, MP for Thirsk, and son of Sir Thomas Frankland, 2nd Baronet. Listed at Grade II by English Heritage (NHLE Code 1380952). It was erected at the expense of Thomas Pelham, 1st Earl of Chichester, who owned Stanmer House and the estate, and his wife Ann, who was Frankland's daughter. The plinth has three stone tortoises and a Latin inscription. The triangular column above has concave sides with oval panels and a cornice with a frieze and some egg-and-dart moulding, all topped by an urn. The monument stands on top of a hill in Stanmer Park. () Brogyntyn, near Oswestry, Shropshire. Benjamin Gummow designed a portico and other alterations for the Ormsby Gores, 1814–15. He used Coade stone ornamentation on the interior of the portico. () Broomhall House, Dunfermline, Scotland. A 1796 redesign by Thomas Harrison included a semi-circular bay on the south front decorated with three Coade stone panels depicting reclining figures. Buckingham Palace, London (in a section not open to the public). A frieze with vegetative scrollwork of Coade stone, balconies accessible from the first floor, and an attic with figural sculptures based on the Elgin Marbles. The west front overlooking the main garden features large Classical urns made of Coade stone. () Burnham Thorpe – Nelson's Memorial. () Burton Constable Hall in the East Riding of Yorkshire displays 3 figures and a number of 'medallions' above the doors and windows of the Orangerie. In 1966 this was designated as Grade II*. () Capesthorne Hall, Cheshire.
The Drawing Room features twin fireplaces made from Coade stone, dated to 1789, which originally belonged to the family's house in Belgravia, London. Both are carved, one depicting Faith, Hope and Charity, and the other the Aldobrandini Marriage. () Carlton House, London. () Castle Howard, North Yorkshire. () Charborough House, Dorset. The park wall, alongside the A31, is punctuated by Stag Gate at the northern extremity and Lion Lodge at the easternmost entrance, with heraldic symbols in Coade stone. These gateways are Grade II listed, as is a third one, East Almer Lodge, further to the west. A fourth gateway, Peacock Lodge, inside the estate, is Grade II* listed. () Chelmsford Cathedral, Essex. The nave partially collapsed in 1800, and was rebuilt by the County architect John Johnson, retaining the Perpendicular design, but using Coade stone piers and tracery, and a plaster ceiling. () Chichester – The Buttermarket. Designed by John Nash (coat of arms engraved with "Coade & Sealey 1808"). () Chiswick High Road, London. Presbytery of brown brick with Coade stone details, three storeys with double-hung sash windows; Grade II listed. () Chiswick House, London. A couple of large ornate urns in the Italian Garden. () Clerkenwell, St James's Church. Over the west door are the royal arms of George III. Made of Coade stone and dated 1792, they were formerly over the reredos. () Cottesbrooke, Northamptonshire. 'All Saints Church' contains a free-standing monument to Sir William Langham (d. 1812) in the nave, moulded in Coade stone by Bacon Junior. () Croome Court, Upton-upon-Severn in Worcestershire. The south face has a broad staircase, with Coade stone sphinxes on each side, leading to a south door topped with a cornice on consoles. () Culzean Castle, overlooking the Firth of Clyde, near Maybole, Scotland. The former home of the Marquess of Ailsa. "Cat Gates" – The original inner entrance with Coade stone cats (restored in 1995) surmounting the pillars.
The lodge cottages were demolished in the 1950s. (), (See Gallery "Cat gates at Culzean Castle") Daylesford House, Gloucestershire. The main front was originally to the west, at the centre of which is a projecting semicircular bay, with four Ionic pillars and French Neoclassical garland swags around the architrave, topped by a shallow dome with pointed Coade stone finial, and wings projecting to either side. () Doddington Hall, Cheshire. The country house was designed by Samuel Wyatt. An outer double staircase leads up to a doorway flanked by columns and under a blind arch containing a Coade stone medallion with a sign of the Zodiac. There are similar medallions over the first floor windows in the outer bays. () Edinburgh, Stockbridge. The "Statue of Hygieia" in the St Bernard's Well building by the Water of Leith "is made of coade stone". (). (See additional image in Coade stone Gallery below.) Edinburgh, Bonaly Tower. Statue of William Shakespeare in Coade stone. () Egyptian House, Penzance, Cornwall. There is some dispute over the architect and the date of build, but in 1973 it was acquired by the Landmark Trust; the elaborate mouldings are mainly Coade stone. () Exeter, 'Palace Gate' – Coade stone doorways on the terrace in 'Palace Gate' between the cathedral and South Street. Several late 18th-century houses near Exeter Cathedral have doorway surrounds decorated with a keystone face (chosen from a small range of moulds), and decorative blocks. () Fenstanton, Cambridgeshire, Church of St Peter and St Paul. Memorial to Frances Brown, daughter-in-law of Lancelot "Capability" Brown, in Coade stone. (). (See adjacent image on right) Great Yarmouth, Britannia Monument. Coade stone caryatids replaced by concrete copies. () Greenwich, Royal Naval College – Admiral Lord Nelson's Pediment in the King William Courtyard of the Old Royal Naval College was regarded by the Coade workers as the finest of all their work.
It was sculpted by Joseph Panzetta in 1813, as a public memorial after Nelson's death at the Battle of Trafalgar in 1805. It was based on a painting by Benjamin West depicting Nelson's body being offered to Britannia by a Winged Victory. It was cleaned in 2016. (), (See Nelson Pediment at Top of this article) Grey Coat Hospital, Westminster. The arms of Queen Anne as used after the 1707 Acts of Union with Scotland, with her 1702 motto semper eadem ("always the same"), executed in Coade stone. () Haberdashers' Hatcham College, Telegraph Hill, Lewisham. A Coade stone statue of Robert Aske stands in the forecourt of the college, formerly Haberdashers' Aske's Hatcham Boys' School, in Pepys Road. It dates from 1836 and shows him in the robes of the Haberdashers' Company, leaning on a plinth and holding in his hand the plans of the school built at that time in Hoxton, whence the statue was transferred in 1903. () Ham House, Richmond, on the River Thames near London, has a reclining statue of Father Thames by John Bacon in the entrance courtyard. Haldon Belvedere, Devon. Inside is a larger-than-life-size Coade stone statue of General Stringer Lawrence dressed as a Roman general; a copy of the marble statue of him by Peter Scheemakers (1691–1781). Hammerwood Park, East Grinstead. Coade stone plaques of scenes derived from the Borghese Vase adorn both porticos. () Harlow, Essex, The Gibberd Garden. Coade stone urns, originally from Coutts Bank, The Strand, now in the garden created by Sir Frederick Gibberd, who died in 1984. () Heaton Hall. A country house that was remodelled between 1772 and 1789 by James Wyatt. Further additions were made in 1823 by Lewis Wyatt. It is built in sandstone with dressings in Coade stone and is in Palladian style. () Herstmonceux Place, East Sussex. Circa 1932 it ceased to be a private house and was divided into flats. The north front of the house was built in the late 17th century. The south and east fronts were designed by Samuel Wyatt in 1778.
The white panels are made of Coade stone. (), (See "Herstmonceux Place" in Gallery below) Highclere Castle, Hampshire. 'London Lodge' (1793), brick but Coade stone dressed, and wings (1840). (), (See "Highclere Castle, London Lodge" in Gallery below) Horniman Museum, Forest Hill, London. The facade of the Pelican and British Empire Life Insurance Company at 70 Lombard Street in the City of London was rescued before demolition in 1915 and is now displayed in the museum. To adorn its building, Pelican added an allegorical sculptural group to the previously plain facade; the group was designed by Lady Diana Beauclerk and sculpted by John de Veere of the Coade factory. () Ifield, West Sussex - St Margaret's Church. There are several other large tombs from the 18th century in the churchyard, some of which are good examples of Coade stone. The George Hutchinson wall memorial in the chancel, designed by local sculptor Richard Joanes, includes Coade stone embellishments. () Imperial War Museum, London. Sculptural reliefs above the entrance. () Kensington Palace, Kensington High Street, London. The lion and unicorn statues on pillars at the entrance to Kensington Palace. (), (See "Lion and Unicorn gate" images in Gallery) Kew Gardens – The lion and unicorn statues over their respective gates into The Royal Botanical Gardens. (Lion Gate-)(Unicorn Gate-), (See "Kew Lion and Unicorn gates" images above) Kew Gardens, The Medici Vase, from a pair ordered by George IV. Lancaster Castle, Shire Hall and Crown Court were completed by 1798 by Thomas Harrison (architect). Six Gothic columns support a panelled vault covering the main part of the courtroom. Around the perimeter is an arcade, and the judge's bench has an elaborate canopy in Coade stone. () Lancaster, Royal Lancaster Infirmary. The hospital by Paley, Austin and Paley is in free Renaissance style, and built in sandstone with slate roofs. It has an octagonal entrance tower that is flanked by wings.
The tower has four stages, and above the entrance is a niche containing a Coade stone statue of the Good Samaritan. () Lawhitton, Cornwall. The parish church of St Michael includes two monuments, to R. Bennet (d. 1683) and in Coade stone to Richard Bennet-Coffin (d. 1796). () Lea Marston, Warwickshire. The Church of Saint John the Baptist contains numerous monuments to members of the Adderley family, including one from 1784 made of Coade stone. () Lewes, Lewes Crown Court. Located at the highest point of the old town is the Portland stone and Coade stone facade of the Crown Court (1808–12, by John Johnson). () Lincoln Castle, Coade stone bust of George III, relocated from atop the Dunston Pillar in 1940. () Liverpool. George Bullock (sculptor) statue of Horatio Nelson, 1st Viscount Nelson in Coade stone. (Location unclear) () Liverpool Town Hall. 1802 statue by Charles Rossi of Britannia or Minerva atop Liverpool Town Hall. The figure holds a spear, a common replacement for Britannia's trident, though that is usually in her right hand; Minerva, goddess of wisdom but also of strategic warfare, is commonly depicted with an owl, yet a spear would also suit her, and both are shown wearing Corinthian helmets. Neither Rossi's own list of commissions nor any contemporary Royal Academy list of his works is available, so both Historic England and Pevsner hedge their bets, saying "Britannia or Minerva". Lurgan, Northern Ireland. 42-46 High Street. Decorative stonework with Coade stone keys and sculpted heads. () Provenance unclear. Lyme Regis, Dorset – Eleanor Coade's country home at Belmont House decorated with Coade stone on its façade. (), (See image of Belmont House at Top of this article) Metropolitan Museum of Art ("The Met") - New York City. Faith, statue in 'overpainted Coade stone', after a model by John Bacon the Elder, 1791. (), (See image at start of this list of 'Examples' above.) Montreal – Nelson's Column, built 1809.
Montreal's pillar is the second-oldest "Nelson's Column" in the world, after the Nelson Monument in Glasgow. The statue and ornaments were shipped in parts to Montreal, arriving in April 1808. William Gilmore, a local stonemason who had contributed £7 towards its construction, was hired to assemble its seventeen parts, and the foundation base was laid on 17 August 1809. () Bank of Montreal. A series of relief panels based on designs by John Bacon (1740–1799), moulded in Coade stone by Joseph Panzetta and Thomas Dubbin in 1819. () The Octagon House or the John Tayloe III House in Washington, DC, built 1800 by William Thornton. () North Ockendon, Church of St Mary Magdalene (Havering). A Grade I listed building; the baptismal font and royal arms (made of Coade stone) were both made in 1842. () Paço de São Cristóvão (Palace of Saint Christopher), Rio de Janeiro, Brazil. In front of the palace is a decorative Coade stone portico, a gift sent by Hugh Percy, 2nd Duke of Northumberland, inspired by Robert Adam's porch for Syon House. () Pitzhanger Manor House, Ealing, was owned from 1800 to 1810 by the architect Sir John Soane, who radically rebuilt it. It features four Coade stone caryatids atop the columns of the east front, modelled after those that enclose the sanctuary of Pandrosus in Athens. (), (See Caryatid, Pitzhanger Manor in Gallery below) Plympton, Devon - St Mary's church, monument to W. Seymour (died 1801) in Coade stone. () Portman Square, London. About a third of the north side is in the statutory category scheme, Grade I. Nos. 11–15, built in 1773–1776 by the architect James Wyatt in cooperation with his brother Samuel Wyatt, were the first houses in which Coade stone was used.
(), (See Portman Square in Gallery below) Portmeirion, Horatio Nelson, 1st Viscount Nelson. (See "Portmeirion, Lord Nelson" section) Portobello, Edinburgh, Portobello Beach. Three Coade stone columns erected with Heritage Lottery funds in 2006 in a community garden at 70 Promenade (John Street), Portobello; rescued from the garden of Argyle House, Hope Lane, off Portobello High Street, and taken into Council storage in 1989 when a new extension was built onto the house. () Preston Hall, Midlothian. Significant features of the interior include four life-size female figures in the stairway, which are made from Coade stone, a type of ceramic used as an artificial stone. () Putney Old Burial Ground. The grave of the 18th-century novelist Harriet Thomson (c. 1719–1787) is made of Coade stone. () Reading, Berkshire. St Mary's Church, Castle Street. The frontage is rendered in stucco while the capitals of the portico are probably formed of Coade stone. () Radcliffe Observatory, Tower of the Winds (Oxford). The reliefs of the signs of the zodiac above the windows on the first floor are made of Coade stone by J. C. F. Rossi. () (See Tower of the Winds in Gallery) Richmond upon Thames. Two examples of the River God, one outside Ham House, the other in Terrace Gardens. (Ham House-) (Terrace Gardens-), (See image in Coade stone Gallery below.) Rio de Janeiro Zoo entrance. () Roscommon, Ireland. Entrance gate to former Mote Park demesne, The Lion Gate, built 1787, consisting of a Doric triumphal arch surmounted by a lion with screen walls linking it to a pair of identical lodges. () Saxham Hall, Suffolk, has an Umbrello (shelter) constructed of Coade stone in the grounds (), (See "Saxham Hall, Umbrello" in Gallery below) Schomberg House at 81–83 Pall Mall, London, was built for Meinhardt Schomberg, 3rd Duke of Schomberg in the late 17th century. The porch, framed by two Coade stone figures, was added in the late 18th century.
Note – The figures that framed the doorway of the original Coade's Gallery, on Pedlar's Acre at the Surrey end of Westminster Bridge Road, were made from the same moulds. (See "Schomberg House" in Gallery below)
Shrewsbury, Shropshire. Lord Hill's Column commemorates General Rowland Hill, 1st Viscount Hill, with a tall statue on a pillar. The statue was modelled in Lithodipyra (Coade stone) by Joseph Panzetta, who worked for Eleanor Coade.
South Bank Lion, at the south end of Westminster Bridge in central London, originally stood atop the old Lion Brewery on the Lambeth bank of the River Thames. The brewery was demolished in 1950 to make way for the South Bank Site of the 1951 Festival of Britain. Just before the demolition, King George VI ordered that both lions should be preserved:
- The lion which originally stood over one of the brewery gates is now painted gold and located at the west-gate entrance of Twickenham Stadium, the home of English rugby. (See Twickenham Stadium Lion section below)
- The lion from the roof of the brewery, now known as the "South Bank Lion", was moved to Station Approach, Waterloo, placed on a high plinth, and painted red as the symbol of British Rail. When removed, the initials of the sculptor William F. Woodington and the date, 24 May 1837, were discovered under one of its paws. In 1966 it was moved from outside Waterloo station to the south end of Westminster Bridge. (See South Bank Lion image at top of article)
Southwark – Statue of King Alfred the Great, Trinity Church Square. The statue of a king on the stone plinth in the square is Grade II listed. The provenance is unknown, but it may be either one of eight medieval statues from the north end towers of Westminster Hall (c. late 14th century) or, alternatively, one of a pair representing Alfred the Great and Edward, the Black Prince, made for the garden of Carlton House in the 18th century.
Analysis in 2021 showed that the top part was of Coade stone but the legs were Roman and of Bath stone. (See King Alfred the Great image in Gallery)
St Botolph-without-Bishopsgate Church Hall, London. A pair of statues of schoolchildren on the front of this former school house; replicas stand outside, with the listed originals now inside the Hall.
St Mary-at-Lambeth, Garden Museum, London. Captain Bligh's tomb in the churchyard of St Mary's, Lambeth.
Shugborough Hall, Staffordshire. A large country house; between 1760 and 1770 the house was remodelled by "Athenian" Stuart, and the giant portico was added to the front in 1794 by Samuel Wyatt. In front of the house is the portico, which has eight columns in wood faced with slate, with capitals in Coade stone. On the south front is another bowed bay.
St Mary Magdalene's Church, Stapleford, Leicestershire. In the west wall of the gallery is a Coade stone fireplace, above which are the Royal arms on a roundel.
Stourhead Gardens. The 'Temple of Flora' contains a replica of the Borghese Vase modelled in Coade stone, dating from 1770 to 1771.
Stowe Gardens, a Grade I listed landscape garden in Stowe, Buckinghamshire.
- 'The Oxford Gates'. The central piers were designed by William Kent in 1731; pavilions at either end were added in the 1780s to the design of the architect Vincenzo Valdrè. The piers have coats of arms in Coade stone.
- 'The Gothic Cross', erected in 1814 from Coade stone on the path linking the Doric Arch to the Temple of Ancient Virtue. It was erected by the 1st Duke of Buckingham and Chandos as a memorial to his mother, Lady Mary Nugent. It was demolished in the 1980s by a falling elm tree. The National Trust rebuilt the cross in 2017 using several of the surviving pieces of the monument.
- 'The Cobham Monument' is the tallest structure in the gardens. It incorporates a square plinth with corner buttresses, surmounted by Coade stone lions holding shields added in 1778.
- 'The Gothic Umbrello', also called the Conduit House, a small octagonal pavilion dating from the 1790s. The coat of arms of the Marquess of Buckingham, dated 1793 and made from Coade stone, is placed over the entrance door.
Teigngrace, Devon. James Templer (1748–1813), the builder of the Stover Canal, is commemorated by a Coade stone monument in Teigngrace church.
Tong, Shropshire – St Bartholomew's Church. The church's north door served as the "Door of Excommunication". A stoneworked version of the Royal Arms of George III, made of Coade stone, is located above the north door. The monument cost £60 in 1814, and was a present from George Jellicoe to celebrate the Peace of Paris and Napoleon's exile to Elba.
Towcester Racecourse on the Easton Neston estate. Main entrance gate decorated with an array of dogs, urns and vases surmounted by the Fermor arms, signed by William Croggon. (See "Towcester racecourse / Easton Neston House" images in Gallery)
Tremadog, Gwynedd, Wales. St Mary's Church lychgate. Tremadog was founded, planned, named for and built by William Madocks between 1798 and 1811. The lychgate to the churchyard is spanned by a decorative arch of Coade stone, containing boars, dragons, frogs, grimacing cherubs, owls, shrouded figures and squirrels, while the tops of the towers are surrounded by elephant heads.
Twickenham Stadium Lion gate (R.F.U.). The lion was sculpted in Coade stone by William F. Woodington in 1837 and paired with the "South Bank Lion" at the Lion Brewery on the Lambeth bank of the River Thames. It is now located above the central pillar of the Rowland Hill Memorial Gate (Gate 3) at Twickenham Stadium. It was covered with gold leaf prior to the 1991 Rugby World Cup held in England. The Lion Brewery was damaged by fire and closed in 1931, and then demolished in 1949 to make way for the Royal Festival Hall.
(See "Twickenham Stadium Lion" image at top of this article)
Twinings' first ever (and still operating) shop's frontispiece, in the Strand, London, opposite the Royal Courts of Justice, rediscovered under soot after a century.
University of Maryland, College Park, United States. The keystone, featuring a carving of the head of Silenus, above the entry to The Rossborough Inn.
University of East London, Stratford Campus. Statue of William Shakespeare. (See Shakespeare, University of East London image in Gallery)
Weymouth, Dorset. The King's Statue (Weymouth) is a tribute to George III on the seafront.
Weston Park, in Weston-under-Lizard, Staffordshire.
- Sundial, 1825. The sundial in the grounds of the hall is in Coade stone. It has a triangular plan with concave sides. At the bottom is a plinth with meander decoration on a circular base; the sides are moulded with festoons at the top; in the angles are caryatids; and at the top is a fluted frieze and an egg-and-dart cornice.
- Two urns and planting basin, 1825. The urns and planting basin are in Coade stone, and are to the southwest of the 'Temple of Diana'. The basin has a cabled rim to the kerb. The urns are on a base, and each has a short stem and a wide body with guilloché decoration and carvings of lions' heads.
Whiteford House, Cornwall. The stables and a garden folly (called Whiteford Temple) survive. The Temple is owned by the Landmark Trust and let as a holiday cottage. There are Coade stone plaques on the exterior.
Windsor Castle, St George's Chapel. Mrs Coade was commissioned by King George III to make the Gothic screen designed by Henry Emlyn, and possibly also to replace part of the ceiling of St George's Chapel.
Woodeaton Manor, Oxford. In 1775 John and Elizabeth Weyland had the old manor house demolished and the present Woodeaton Manor built.
In 1791 the architect Sir John Soane enhanced its main rooms with marble chimneypieces, and added an Ionic porch of Coade stone, a service wing and an ornate main hall.
Woodhall Park, a Grade I listed country house at Watton-at-Stone, Hertfordshire. Limited use of Coade stone in the park.
Woolverstone Hall, Ipswich. The house, now a school, is built of Woolpit brick, with Coade stone ornamentation.
Park Crescent, Worthing. A triumphal arch. The main archway, designed for carriages, contains the busts of four bearded men as atlantes. The two side arches, designed for pedestrians, each contain the busts of four young ladies as caryatids. The Coade stone busts were supplied by William Croggan, successor to Eleanor Coade.

Birkbeck Image library

In 2020 the library of Birkbeck, University of London, launched the Coade Stone image collection online, consisting of digitised slides of examples of Coade stone bequeathed by Alison Kelly, whose book Coade Stone was described by Caroline Stanford as "the most authoritative treatment on the subject".

Gallery

Modern replication claims

The recipe and techniques for producing Coade stone are claimed to have been rediscovered by Coade Ltd at its workshops in Wilton, Wiltshire. In 2000, Coade Ltd started producing statues, sculptures and architectural ornaments.

See also

Anthropic rock
Baluster
Cast stone
Pulhamite

Notes

References

Works cited

External links

In 2021 Historic England launched a crowd-sourced "Enrich the List" map of Coade stone in England.
Google – My Maps: Historic England, Eleanor Coade, and an interactive map of Coade stone sites
Anna Keay of the Landmark Trust discussing Mrs Coade and Coade stone
Birkbeck College Collections – Coade Stone Gallery of images
Plate 48: A view of Westminster Bridge, 1791; shows King's Arms Stairs in the foreground, (possibly) with a sign advertising Coade's factory.
Image of Coade's factory, circa 1800
Plate 38a: Coade's Artificial Stone Manufactory, 1801
Plate 39a: The entrance to Coade and Sealy's Gallery of Sculpture, Westminster Bridge, 1802
Coade stone factory, Narrow Wall, Lambeth, London, c. 1800
Coade and Sealy's Artificial Stone Factory, by Thomas Hosmer Shepherd
Thomason Cudworth, restorers of Coade stone
Coade Ltd, current makers and restorers of Coade stone

Artificial stone
Stoneware
Ceramic materials
Radon hexafluoride is a binary chemical compound of radon and fluorine with the chemical formula RnF6. It remains a hypothetical compound that has not yet been synthesized.

Potential properties

The compound is calculated to be less stable than radon difluoride. Radon hexafluoride is expected to have an octahedral molecular geometry, unlike the C3v geometry of xenon hexafluoride. The Rn–F bonds in radon hexafluoride are predicted to be shorter and more stable than the Xe–F bonds in xenon hexafluoride.

References

Radon compounds
Hexafluorides
Nonmetal halides
Hypothetical chemical compounds