Columns: id (int64, 580 to 79M), url (string, 31 to 175 chars), text (string, 9 to 245k chars), source (string, 1 to 109 chars), categories (string, 160 classes), token_count (int64, 3 to 51.8k)
862,898
https://en.wikipedia.org/wiki/Absolute%20space%20and%20time
Absolute space and time is a concept in physics and philosophy about the properties of the universe. In physics, absolute space and time may be a preferred frame.

Early concept

A version of the concept of absolute space (in the sense of a preferred frame) can be seen in Aristotelian physics. Robert S. Westman writes that a "whiff" of absolute space can be observed in Copernicus's De revolutionibus orbium coelestium, where Copernicus uses the concept of an immobile sphere of stars.

Newton

Originally introduced by Sir Isaac Newton in Philosophiæ Naturalis Principia Mathematica, the concepts of absolute time and space provided a theoretical foundation that facilitated Newtonian mechanics. According to Newton, absolute time and space respectively are independent aspects of objective reality:

Absolute, true and mathematical time, of itself, and from its own nature flows equably without regard to anything external, and by another name is called duration: relative, apparent and common time, is some sensible and external (whether accurate or unequable) measure of duration by the means of motion, which is commonly used instead of true time ...

According to Newton, absolute time exists independently of any perceiver and progresses at a consistent pace throughout the universe. Unlike relative time, Newton believed absolute time was imperceptible and could only be understood mathematically. According to Newton, humans are only capable of perceiving relative time, which is a measurement of perceivable objects in motion (like the Moon or Sun). From these movements, we infer the passage of time. These notions imply that absolute space and time do not depend upon physical events, but are a backdrop or stage setting within which physical phenomena occur. Thus, every object has an absolute state of motion relative to absolute space, so that an object must be either in a state of absolute rest, or moving at some absolute speed. 
To support his views, Newton provided some empirical examples: according to Newton, a solitary rotating sphere can be inferred to rotate about its axis relative to absolute space by observing the bulging of its equator, and a solitary pair of spheres tied by a rope can be inferred to be in absolute rotation about their center of gravity (barycenter) by observing the tension in the rope.

Differing views

Historically, there have been differing views on the concept of absolute space and time. Gottfried Leibniz was of the opinion that space made no sense except as the relative location of bodies, and time made no sense except as the relative movement of bodies. George Berkeley suggested that, lacking any point of reference, a sphere in an otherwise empty universe could not be conceived to rotate, and a pair of spheres could be conceived to rotate relative to one another, but not to rotate about their center of gravity, an example later raised by Albert Einstein in his development of general relativity.

A more recent form of these objections was made by Ernst Mach. Mach's principle proposes that mechanics is entirely about relative motion of bodies and, in particular, mass is an expression of such relative motion. So, for example, a single particle in a universe with no other bodies would have zero mass. According to Mach, Newton's examples simply illustrate relative rotation of spheres and the bulk of the universe.

When, accordingly, we say that a body preserves unchanged its direction and velocity in space, our assertion is nothing more or less than an abbreviated reference to the entire universe. —Ernst Mach

These views opposing absolute space and time may be seen from a modern stance as an attempt to introduce operational definitions for space and time, a perspective made explicit in the special theory of relativity. Even within the context of Newtonian mechanics, the modern view is that absolute space is unnecessary. 
Instead, the notion of inertial frame of reference has taken precedence, that is, a preferred set of frames of reference that move uniformly with respect to one another. The laws of physics transform from one inertial frame to another according to Galilean relativity, leading to the following objections to absolute space, as outlined by Milutin Blagojević: The existence of absolute space contradicts the internal logic of classical mechanics since, according to the Galilean principle of relativity, none of the inertial frames can be singled out. Absolute space does not explain inertial forces, since they are related to acceleration with respect to any one of the inertial frames. Absolute space acts on physical objects by inducing their resistance to acceleration, but it cannot be acted upon.

Newton himself recognized the role of inertial frames:

The motions of bodies included in a given space are the same among themselves, whether that space is at rest or moves uniformly forward in a straight line.

As a practical matter, inertial frames often are taken as frames moving uniformly with respect to the fixed stars. See Inertial frame of reference for more discussion on this.

Mathematical definitions

Space, as understood in Newtonian mechanics, is three-dimensional and Euclidean, with a fixed orientation. It is denoted E3. If some point O in E3 is fixed and defined as an origin, the position of any point P in E3 is uniquely determined by its radius vector (the origin of this vector coincides with the point O and its end coincides with the point P). The three-dimensional linear vector space R3 is the set of all radius vectors. The space R3 is endowed with a scalar product ⟨ , ⟩.

Time is a scalar which is the same in all space E3 and is denoted as t. The ordered set { t } is called a time axis.

Motion (also path or trajectory) is a function r : Δ → R3 that maps a point in the interval Δ from the time axis to a position (radius vector) in R3. 
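The three objects defined above (radius vectors, the scalar product, and a motion) can be made concrete in a few lines of code. This is only an illustration; the coordinates and velocity are made up:

```python
import numpy as np

# A radius vector of a point P relative to the fixed origin O, as an element of R^3.
r_P = np.array([1.0, 2.0, 2.0])

# The scalar product <,> on R^3 gives lengths (and angles): |r_P| = sqrt(<r_P, r_P>).
length_P = np.sqrt(np.dot(r_P, r_P))   # 3.0 for the vector above

def r(t):
    """A motion r : Delta -> R^3; here, uniform straight-line motion,
    i.e. a body at rest in one inertial frame and moving uniformly in another."""
    v = np.array([1.0, 0.0, 0.0])      # constant velocity
    return r_P + v * t
```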
The above four concepts are the "well-known" objects mentioned by Isaac Newton in his Principia: "I do not define time, space, place and motion, as being well known to all."

Special relativity

The concepts of space and time were separate in physical theory prior to the advent of special relativity, which connected the two and showed both to be dependent upon the reference frame's motion. In Einstein's theories, the ideas of absolute time and space were superseded by the notion of spacetime in special relativity, and curved spacetime in general relativity.

Absolute simultaneity refers to the concurrence of events in time at different locations in space in a manner agreed upon in all frames of reference. The theory of relativity does not have a concept of absolute time because there is a relativity of simultaneity. An event that is simultaneous with another event in one frame of reference may be in the past or future of that event in a different frame of reference, which negates absolute simultaneity.

Einstein

In his later papers, Einstein identified the term aether with "properties of space", a terminology that is not widely used. Einstein stated that in general relativity the "aether" is not absolute anymore, as the geodesics and therefore the structure of spacetime depend on the presence of matter.

General relativity

Special relativity eliminates absolute time (although Gödel and others suspect absolute time may be valid for some forms of general relativity) and general relativity further reduces the physical scope of absolute space and time through the concept of geodesics. There appears to be absolute space in relation to the distant stars because the local geodesics eventually channel information from these stars, but it is not necessary to invoke absolute space with respect to any system's physics, as its local geodesics are sufficient to describe its spacetime. 
Absolute space and time
Physics,Astronomy
1,600
26,683,958
https://en.wikipedia.org/wiki/Data%20validation%20and%20reconciliation
Industrial process data validation and reconciliation, or more briefly, process data reconciliation (PDR), is a technology that uses process information and mathematical methods to automatically validate and reconcile raw measurements from industrial processes. The use of PDR allows for extracting accurate and reliable information about the state of industry processes from raw measurement data and produces a single consistent set of data representing the most likely process operation.

Models, data and measurement errors

Industrial processes, for example chemical or thermodynamic processes in chemical plants, refineries, oil or gas production sites, or power plants, are often represented by two fundamental means: models that express the general structure of the processes, and data that reflect the state of the processes at a given point in time. Models can have different levels of detail: for example, one can incorporate simple mass or compound conservation balances, or more advanced thermodynamic models including energy conservation laws. Mathematically, the model can be expressed by a nonlinear system of equations F(y) = 0 in the variables y = (y_1, ..., y_n), which incorporates all the above-mentioned system constraints (for example the mass or heat balances around a unit). A variable could be the temperature or the pressure at a certain place in the plant.

Error types

Data originates typically from measurements taken at different places throughout the industrial site, for example temperature, pressure, volumetric flow rate measurements etc. To understand the basic principles of PDR, it is important to first recognize that plant measurements are never 100% correct, i.e. the raw measurement y is not a solution of the nonlinear system F(y) = 0. When using measurements without correction to generate plant balances, it is common to have incoherencies. 
Measurement errors can be categorized into two basic types: random errors due to intrinsic sensor accuracy, and systematic errors (or gross errors) due to sensor calibration or faulty data transmission. A random error means that the measurement is a random variable whose mean equals the true value, which is typically not known. A systematic error, on the other hand, is characterized by a measurement which is a random variable whose mean is not equal to the true value. For ease in deriving and implementing an optimal estimation solution, and based on arguments that errors are the sum of many factors (so that the central limit theorem has some effect), data reconciliation assumes these errors are normally distributed.

Other sources of errors when calculating plant balances include process faults such as leaks, unmodeled heat losses, incorrect physical properties or other physical parameters used in equations, and incorrect structure such as unmodeled bypass lines. Other errors include unmodeled plant dynamics such as holdup changes, and other instabilities in plant operations that violate steady-state (algebraic) models. Additional dynamic errors arise when measurements and samples are not taken at the same time, especially lab analyses.

The normal practice of using time averages for the data input partly reduces the dynamic problems. However, that does not completely resolve timing inconsistencies for infrequently sampled data like lab analyses. This use of average values, like a moving average, acts as a low-pass filter, so high-frequency noise is mostly eliminated. The result is that, in practice, data reconciliation is mainly making adjustments to correct systematic errors like biases.

Necessity of removing measurement errors

ISA-95 is the international standard for the integration of enterprise and control systems. It asserts that:

Data reconciliation is a serious issue for enterprise-control integration. 
The data have to be valid to be useful for the enterprise system. The data must often be determined from physical measurements that have associated error factors. This must usually be converted into exact values for the enterprise system. This conversion may require manual, or intelligent reconciliation of the converted values [...]. Systems must be set up to ensure that accurate data are sent to production and from production. Inadvertent operator or clerical errors may result in too much production, too little production, the wrong production, incorrect inventory, or missing inventory.

History

PDR has become increasingly important as industrial processes have grown more complex. PDR started in the early 1960s with applications aiming at closing material balances in production processes where raw measurements were available for all variables. At the same time, the problem of gross error identification and elimination was presented. In the late 1960s and 1970s, unmeasured variables were taken into account in the data reconciliation process. PDR also became more mature through the consideration of general nonlinear equation systems coming from thermodynamic models. Quasi-steady-state dynamics for filtering and simultaneous parameter estimation over time were introduced in 1977 by Stanley and Mah. Dynamic PDR was formulated as a nonlinear optimization problem by Liebman et al. in 1992.

Data reconciliation

Data reconciliation is a technique that aims at correcting measurement errors that are due to measurement noise, i.e. random errors. From a statistical point of view, the main assumption is that no systematic errors exist in the set of measurements, since they may bias the reconciliation results and reduce the robustness of the reconciliation. 
Given n measurements y_1, ..., y_n, data reconciliation can mathematically be expressed as an optimization problem of the following form:

minimize over y*, u:   sum_{i=1}^{n} ((y*_i − y_i) / σ_i)²
subject to:            F(y*, u) = 0,   y_min ≤ y* ≤ y_max,   u_min ≤ u ≤ u_max,

where y*_i is the reconciled value of the i-th measurement (i = 1, ..., n), y_i is the measured value of the i-th measurement, u_j is the j-th unmeasured variable (j = 1, ..., m), and σ_i is the standard deviation of the i-th measurement; F(y*, u) = 0 are the process equality constraints, and the inequalities are the bounds on the measured and unmeasured variables. The term ((y*_i − y_i)/σ_i)² is called the penalty of measurement i. The objective function is the sum of the penalties, which will be denoted in the following by f. In other words, one wants to minimize the overall correction (measured in the least squares term) that is needed in order to satisfy the system constraints. Additionally, each least squares term is weighted by the standard deviation of the corresponding measurement. The standard deviation is related to the accuracy of the measurement: for example, at a 95% confidence level, the standard deviation is about half the accuracy.

Redundancy

Data reconciliation relies strongly on the concept of redundancy to correct the measurements as little as possible in order to satisfy the process constraints. Here, redundancy is defined differently from redundancy in information theory. Instead, redundancy arises from combining sensor data with the model (algebraic constraints), sometimes more specifically called "spatial redundancy", "analytical redundancy", or "topological redundancy". Redundancy can be due to sensor redundancy, where sensors are duplicated in order to have more than one measurement of the same quantity. Redundancy also arises when a single variable can be estimated in several independent ways from separate sets of measurements at a given time or time-averaging period, using the algebraic constraints. Redundancy is linked to the concept of observability. A variable (or system) is observable if the models and sensor measurements can be used to uniquely determine its value (system state). 
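The weighted least-squares reconciliation problem described above can be sketched with a general-purpose optimizer. This is a minimal illustration, not an industrial solver: the process is a hypothetical splitter with a single mass balance x1 = x2 + x3, all three flows measured, and the numbers are made up:

```python
import numpy as np
from scipy.optimize import minimize

y = np.array([101.0, 45.0, 50.0])      # measured flows; they violate the balance by 6
sigma = np.array([1.0, 1.0, 1.0])      # standard deviations of the measurements

def objective(ystar):
    # Sum of penalties ((y*_i - y_i) / sigma_i)^2
    return float(np.sum(((ystar - y) / sigma) ** 2))

# Process equality constraint F(y*) = 0: the mass balance x1 - x2 - x3 = 0
balance = {"type": "eq", "fun": lambda ystar: ystar[0] - ystar[1] - ystar[2]}

res = minimize(objective, x0=y, constraints=[balance])
ystar = res.x   # reconciled values; with equal sigmas the imbalance of 6
                # is spread evenly: roughly [99, 47, 52]
```

With equal weights, the solution is the orthogonal projection of the raw measurements onto the constraint, which is why each value moves by the same amount.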
A sensor is redundant if its removal causes no loss of observability. Rigorous definitions of observability, calculability, and redundancy, along with criteria for determining them, were established by Stanley and Mah for these cases with set constraints such as algebraic equations and inequalities. Next, we illustrate some special cases:

Topological redundancy is intimately linked with the degrees of freedom (dof) of a mathematical system, i.e. the minimum number of pieces of information (i.e. measurements) that are required in order to calculate all of the system variables. For instance, in the example above the flow conservation requires that x_1 = x_2 + x_3. One needs to know the value of two of the 3 variables in order to calculate the third one. The degrees of freedom for the model in that case is equal to 2. At least 2 measurements are needed to estimate all the variables, and 3 would be needed for redundancy.

When speaking about topological redundancy we have to distinguish between measured and unmeasured variables. In the following let us denote by u the unmeasured variables and by y the measured variables. Then the system of the process constraints becomes F(y, u) = 0, which is a nonlinear system in y and u. If the system is calculable with the measurements given, then the level of topological redundancy is defined as red = n_y − dof, i.e. the number of additional measurements that are at hand on top of those measurements which are required in order to just calculate the system. Another way of viewing the level of redundancy is to use the definition of dof, which is the difference between the number of variables (measured and unmeasured) and the number of equations, dof = n_y + n_u − n_eq. Then one gets

red = n_y − dof = n_y − (n_y + n_u − n_eq) = n_eq − n_u,

i.e. the redundancy is the difference between the number of equations n_eq and the number of unmeasured variables n_u. The level of total redundancy is the sum of sensor redundancy and topological redundancy. We speak of positive redundancy if the system is calculable and the total redundancy is positive. 
One can see that the level of topological redundancy merely depends on the number of equations (the more equations the higher the redundancy) and the number of unmeasured variables (the more unmeasured variables, the lower the redundancy) and not on the number of measured variables. Simple counts of variables, equations, and measurements are inadequate for many systems, breaking down for several reasons: (a) Portions of a system might have redundancy, while others do not, and some portions might not even be possible to calculate, and (b) Nonlinearities can lead to different conclusions at different operating points. As an example, consider the following system with 4 streams and 2 units. Example of calculable and non-calculable systems We incorporate only flow conservation constraints and obtain and . It is possible that the system is not calculable, even though . If we have measurements for and , but not for and , then the system cannot be calculated (knowing does not give information about and ). On the other hand, if and are known, but not and , then the system can be calculated. In 1981, observability and redundancy criteria were proven for these sorts of flow networks involving only mass and energy balance constraints. After combining all the plant inputs and outputs into an "environment node", loss of observability corresponds to cycles of unmeasured streams. That is seen in the second case above, where streams a and b are in a cycle of unmeasured streams. Redundancy classification follows, by testing for a path of unmeasured streams, since that would lead to an unmeasured cycle if the measurement was removed. Measurements c and d are redundant in the second case above, even though part of the system is unobservable. 
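The simple counting rule discussed above can be written as a two-line helper. The function names are ours, and, as the text cautions, such counts can mislead for networks with unmeasured cycles:

```python
# Count-based topological redundancy:
#   dof = (measured + unmeasured variables) - equations
#   red = measured - dof, which simplifies to equations - unmeasured
def degrees_of_freedom(n_measured, n_unmeasured, n_equations):
    return n_measured + n_unmeasured - n_equations

def topological_redundancy(n_measured, n_unmeasured, n_equations):
    return n_measured - degrees_of_freedom(n_measured, n_unmeasured, n_equations)

# Three-stream balance x1 = x2 + x3 (one equation), all three flows measured:
red_all_measured = topological_redundancy(3, 0, 1)   # 1: one extra measurement
```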
Benefits

Redundancy can be used as a source of information to cross-check and correct the measurements, increasing their accuracy and precision. Further, the data reconciliation problem presented above also includes unmeasured variables. Based on information redundancy, estimates for these unmeasured variables can be calculated along with their accuracies. In industrial processes these unmeasured variables that data reconciliation provides are referred to as soft sensors or virtual sensors, where hardware sensors are not installed.

Data validation

Data validation denotes all validation and verification actions before and after the reconciliation step.

Data filtering

Data filtering denotes the process of treating measured data such that the values become meaningful and lie within the range of expected values. Data filtering is necessary before the reconciliation process in order to increase robustness of the reconciliation step. There are several ways of data filtering, for example taking the average of several measured values over a well-defined time period.

Result validation

Result validation is the set of validation or verification actions taken after the reconciliation process, and it takes into account measured and unmeasured variables as well as reconciled values. Result validation covers, but is not limited to, penalty analysis for determining the reliability of the reconciliation, or bound checks to ensure that the reconciled values lie in a certain range, e.g. the temperature has to be within some reasonable bounds.

Gross error detection

Result validation may include statistical tests to validate the reliability of the reconciled values, by checking whether gross errors exist in the set of measured values. These tests can be, for example, the chi square test (global test) and the individual test. 
If no gross errors exist in the set of measured values, then each penalty term in the objective function is a random variable that is normally distributed with mean equal to 0 and variance equal to 1. By consequence, the objective function f is a random variable which follows a chi-square distribution, since it is the sum of squares of normally distributed random variables. Comparing the value of the objective function f with a given percentile of a chi-square distribution (e.g. the 95th percentile for a 95% confidence) gives an indication of whether a gross error exists: if f is at most the 95th-percentile critical value, then no gross errors exist with 95% probability. The chi square test gives only a rough indication about the existence of gross errors, and it is easy to conduct: one only has to compare the value of the objective function with the critical value of the chi square distribution.

The individual test compares each penalty term in the objective function with the critical values of the normal distribution. If the i-th penalty term is outside the 95% confidence interval of the normal distribution, then there is reason to believe that this measurement has a gross error.

Advanced process data reconciliation

Advanced process data reconciliation (PDR) is an integrated approach combining data reconciliation and data validation techniques, which is characterized by: complex models incorporating, besides mass balances, also thermodynamics, momentum balances, equilibria constraints, hydrodynamics etc.; gross error remediation techniques to ensure meaningfulness of the reconciled values; and robust algorithms for solving the reconciliation problem.

Thermodynamic models

Simple models include mass balances only. When adding thermodynamic constraints such as energy balances to the model, its scope and the level of redundancy increase. Indeed, as we have seen above, the level of redundancy is the difference between the number of equations and the number of unmeasured variables. 
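The chi-square global test described above amounts to one percentile lookup and one comparison. A minimal sketch (function and argument names are ours):

```python
from scipy.stats import chi2

def global_test(objective_value, redundancy, confidence=0.95):
    """Chi-square global test: compare the reconciliation objective f with the
    critical value of the chi-square distribution, using the level of
    redundancy as the degrees of freedom. Returns True if no gross error
    is suspected at the given confidence level."""
    critical = chi2.ppf(confidence, df=redundancy)
    return objective_value <= critical

# With one degree of redundancy the 95% critical value is about 3.84,
# so an objective of 2.0 passes while an objective of 10.0 fails.
```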
Including energy balances means adding equations to the system, which results in a higher level of redundancy (provided that enough measurements are available, or equivalently, not too many variables are unmeasured).

Gross error remediation

Gross errors are systematic measurement errors that may bias the reconciliation results. Therefore, it is important to identify and eliminate these gross errors from the reconciliation process. After the reconciliation, statistical tests can be applied that indicate whether or not a gross error exists somewhere in the set of measurements. These techniques of gross error remediation are based on two concepts: gross error elimination and gross error relaxation.

Gross error elimination determines one measurement that is biased by a systematic error and discards this measurement from the data set. The determination of the measurement to be discarded is based on different kinds of penalty terms that express how much the measured values deviate from the reconciled values. Once the gross errors are detected, they are discarded from the measurements and the reconciliation can be done without these faulty measurements that would otherwise spoil the reconciliation process. If needed, the elimination is repeated until no gross error exists in the set of measurements.

Gross error relaxation aims at relaxing the estimate for the uncertainty of suspicious measurements so that the reconciled value lies in the 95% confidence interval. Relaxation typically finds application when it is not possible to determine which measurement around one unit is responsible for the gross error (equivalence of gross errors). In that case, the measurement uncertainties of the measurements involved are increased.

It is important to note that the remediation of gross errors reduces the quality of the reconciliation: either the redundancy decreases (elimination) or the uncertainty of the measured data increases (relaxation). 
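The iterative elimination loop described above (reconcile, test, discard the most suspicious measurement, repeat) can be sketched generically. This is an assumed workflow rather than any specific vendor algorithm; the reconciliation and test routines are passed in as callables:

```python
def eliminate_gross_errors(reconcile, global_test, measurements):
    """Repeatedly reconcile the active measurement set; while the global test
    fails, discard the measurement with the largest penalty term.
    `reconcile(active)` must return (penalties, objective_value);
    `global_test(objective_value)` must return True when no gross error
    is suspected."""
    active = list(measurements)
    while active:
        penalties, objective = reconcile(active)
        if global_test(objective):
            break                        # remaining measurements look clean
        worst = max(range(len(active)), key=lambda i: penalties[i])
        del active[worst]                # discard the most suspicious measurement
    return active
```

As the text notes, each removal lowers the redundancy, so a real implementation would also stop once redundancy is exhausted.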
Therefore, it can only be applied when the initial level of redundancy is high enough to ensure that the data reconciliation can still be done.

Workflow

Advanced PDR solutions offer an integration of the techniques mentioned above:
1. data acquisition from data historian, data base or manual inputs
2. data validation and filtering of raw measurements
3. data reconciliation of filtered measurements
4. result verification (range check)
5. gross error remediation (and go back to step 3)
6. result storage (raw measurements together with reconciled values)

The result of an advanced PDR procedure is a coherent set of validated and reconciled process data.

Applications

PDR finds application mainly in industry sectors where either measurements are not accurate or even non-existent, for example in the upstream sector where flow meters are difficult or expensive to position; or where accurate data is of high importance, for example for security reasons in nuclear power plants. Another field of application is performance and process monitoring in oil refining or in the chemical industry. As PDR enables the calculation of estimates even for unmeasured variables in a reliable way, the German Engineering Society (VDI Gesellschaft Energie und Umwelt) has accepted the technology of PDR as a means to replace expensive sensors in the nuclear power industry (see VDI norm 2048).

See also: Process simulation, Pinch analysis, Industrial processes, Chemical engineering

References

Alexander, Dave, Tannar, Dave & Wasik, Larry, "Mill Information System uses Dynamic Data Reconciliation for Accurate Energy Accounting", TAPPI Fall Conference 2007.
Rankin, J. & Wasik, L., "Dynamic Data Reconciliation of Batch Pulping Processes (for On-Line Prediction)", PAPTAC Spring Conference 2009.
S. Narasimhan, C. Jordache, Data Reconciliation and Gross Error Detection: An Intelligent Use of Process Data, Gulf Publishing Company, Houston, 2000.
V. Veverka, F. Madron, Material and Energy Balancing in the Process Industries, Elsevier Science BV, Amsterdam, 1997.
J. Romagnoli, M.C. Sanchez, Data Processing and Reconciliation for Chemical Process Operations, Academic Press, 2000.
Data validation and reconciliation
Technology
3,675
54,564,975
https://en.wikipedia.org/wiki/G18P
G18P or RVA/pigeon-wt/AUS/VIC/2016/G18P[17] is a strain of Rotavirus A infecting and killing domestic pigeons. This disease is found in Western Australia, Victoria, and South Australia.
G18P
Biology
68
61,515,200
https://en.wikipedia.org/wiki/C18H20INO4
The molecular formula C18H20INO4 (molar mass: 441.260 g/mol, exact mass: 441.0437 u) may refer to: 25I-NB34MD (NB34MD-2C-I) or 25I-NBMD.
C18H20INO4
Physics,Chemistry
77
24,169,694
https://en.wikipedia.org/wiki/Urethral%20intercourse
Urethral intercourse or coitus par urethra is sexual penetration of the female urethra by an object such as a penis or a finger. It is not to be confused with urethral sounding, the act of inserting a specialized medical tool into the urethra (for both males and females) as a form of sexual or fetishistic activity. The untrained insertion of foreign bodies into the urethra carries a significant risk that subsequent medical attention may be required. Documented cases of urethral intercourse appear to have occurred between heterosexual couples; a survey of the global medical literature available in 1965 reported accounts of thirteen separate cases. By 2014, 26 cases had been documented in the medical literature, many in people with Müllerian dysgenesis who were engaging in urethral intercourse unknowingly. However, the stretching of the urethra required by this form of intercourse has also reportedly resulted in a complete and permanent loss of urethral sphincter control (urinary incontinence); furthermore such intercourse presents a very high risk of bladder infection to the receptive partner. It can also lead to permanent dilation of the urethra and incontinence during intercourse. Presenting symptoms of unintentional urethral intercourse include primary infertility, dyspareunia (pain during intercourse), and incontinence. More serious consequences include evisceration via the urethra and bladder rupture.
Urethral intercourse
Biology
328
42,761,360
https://en.wikipedia.org/wiki/Trumpler%2014
Trumpler 14 (Tr 14) is an open cluster located within the inner regions of the Carina Nebula. Together with the nearby Trumpler 16, they are the main clusters of the Carina OB1 stellar association, which is the largest association in the Carina Nebula, although Trumpler 14 is not as massive or as large as Trumpler 16. The total mass of the cluster is estimated to be 4,300 solar masses.

Age

It is one of the youngest known star clusters, with age estimates ranging from 300 to 500 thousand years. For comparison, the massive super star cluster R136 is about 1 to 2 million years old, and the famous Pleiades is about 115 million years old.

Members

Due to its location within the inner parts of the Carina Nebula, Trumpler 14 is currently undergoing massive star formation. As a result, the star cluster exhibits many stars of late O to early A spectral type, which are very massive (at least 10 solar masses), short-lived and hot. The brightest member is HD 93129, a triple system consisting of three individual class O stars. It also contains HD 93128, an O3.5 V((fc))z star, an extremely hot and young main sequence star.

Future

In a few million years, as its stars die, it will trigger the formation of metal-rich stars, and in a few hundred million years Trumpler 14 will probably dissipate.
Trumpler 14
Astronomy
342
4,530,419
https://en.wikipedia.org/wiki/Genomic%20island
A genomic island (GI) is part of a genome that has evidence of horizontal origins. The term is usually used in microbiology, especially with regard to bacteria. A GI can code for many functions, can be involved in symbiosis or pathogenesis, and may help an organism's adaptation. Many sub-classes of GIs exist that are based on the function that they confer. For example, a GI associated with pathogenesis is often called a pathogenicity island (PAI), while GIs that contain many antibiotic resistance genes are referred to as antibiotic resistance islands. The same GI can occur in distantly related species as a result of various types of horizontal gene transfer (transformation, conjugation, transduction). This can be determined by base composition analysis, as well as phylogeny estimations.

Computational prediction

Various genomic island prediction programs have been developed. These tools can be broadly grouped into sequence based methods and comparative genomics/phylogeny based methods. Sequence based methods depend on the naturally occurring variation that exists between the genome sequence composition of different species. Genomic regions that show abnormal sequence composition (such as nucleotide bias or codon bias) may have been horizontally transferred. Two major problems with these methods are that false predictions can occur due to natural variation in the genome (sometimes due to highly expressed genes) and that horizontally transferred DNA will ameliorate (change to the host genome) over time, limiting predictions to only recently acquired GIs. Comparative genomics based methods try to identify regions that show signs that they have been horizontally transferred using information from several related species. For example, a genomic region that is present in one species, but is not present in several other related species, suggests that the region may have been horizontally transferred. 
The alternative explanations are (i) that the region was present in the common ancestor but has been lost in all the other species being compared, or (ii) that the region was absent in the common ancestor but was acquired through mutation and selection in the species in which it is still found. The argument for multiple deletions of the region would be strengthened if there is evidence from outgroups that the region was present in the common ancestor, or if the phylogeny implies relatively few actual deletion events would be required. The argument for acquisition via mutation would be strengthened if the species with the region is known to have diverged substantially from the other species, or if the region in question is small. The plausibility of either (i) or (ii) would be modified if taxon sampling omitted many extinct species that may have possessed the region, and particularly if extinction was correlated with the presence of the region. One example of a method that integrates several of the most accurate GI prediction methods is IslandViewer. Examples In bacteria, many type III and type IV secretion systems are located on genomic islands. These "islands" are characterised by their large size (>10 kb), their frequent association with tRNA-encoding genes and a different G+C content compared with the rest of the genome. Many genomic islands are flanked by repeat structures and carry fragments of other mobile elements such as phages and plasmids. Some genomic islands, including those adjacent to integrative and conjugative elements (ICEs), can excise themselves spontaneously from the chromosome and can be transferred to other suitable recipients. While excision is dependent on the ICE machinery present, integration is attributed to integrases present on the genomic islands. References External links Cell biology Mobile genetic elements
Genomic island
Biology
742
6,282,752
https://en.wikipedia.org/wiki/Cost%20escalation
Cost escalation can be defined as changes in the cost or price of specific goods or services in a given economy over a period of time. This is similar to the concepts of inflation and deflation except that escalation is specific to an item or class of items (not as general in nature), it is often not primarily driven by changes in the money supply, and it tends to be less sustained. While escalation includes general inflation related to the money supply, it is also driven by changes in technology, practices, and particularly supply-demand imbalances that are specific to a good or service in a given economy. For example, while general inflation (e.g., consumer price index) in the US was less than 5% in the 2003-2007 time period, steel prices increased (escalated) by over 50% because of supply-demand imbalance. Cost escalation may contribute to a project cost overrun but it is not synonymous with it. Over long periods of time, as market supply and demand imbalances are corrected, escalation will tend to more-or-less equal inflation unless there are sustained technology or efficiency changes in a market. Escalation is usually calculated by examining the changes in price index measures for a good or service. Future escalation can be forecast using econometrics. Unfortunately, because escalation (unlike inflation) may occur in a micro-market, and it may be hard to measure with surveys, indices can be difficult to find. For example, the Bureau of Labor Statistics has a price index for construction wages and compensation (what labor costs the construction contractor), but has none for the prices that owners must pay the construction contractor for their services. In cost engineering and project management usage, escalation and cost contingency are both considered risk funds that should be included in project estimates and budgets. When escalation is minimal, it is sometimes estimated together with contingency. 
However, this is not a best practice, particularly when escalation is significant. References See also Chemical plant cost indexes Cost engineering
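The index-based calculation described in the article reduces to a percentage change in the relevant price index over the period. The index values below are hypothetical, loosely echoing the steel example.

```python
# Escalation as the period-over-period percentage change in a price index.
# The index values are hypothetical illustrations, not real BLS data.

def escalation_rate(index_start, index_end):
    """Percentage change in a price index between two points in time."""
    return (index_end - index_start) / index_start * 100.0

steel_index_2003 = 100.0   # hypothetical base-period index value
steel_index_2007 = 152.0   # hypothetical end-of-period index value

print(f"{escalation_rate(steel_index_2003, steel_index_2007):.1f}%")  # 52.0%
```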
Cost escalation
Engineering
428
2,846,775
https://en.wikipedia.org/wiki/Alexander%27s%20trick
Alexander's trick, also known as the Alexander trick, is a basic result in geometric topology, named after J. W. Alexander. Statement Two homeomorphisms of the n-dimensional ball Dⁿ which agree on the boundary sphere are isotopic. More generally, two homeomorphisms of Dⁿ that are isotopic on the boundary are isotopic. Proof Base case: every homeomorphism which fixes the boundary is isotopic to the identity relative to the boundary. If f : Dⁿ → Dⁿ satisfies f(x) = x for all x with ‖x‖ = 1, then an isotopy connecting f to the identity is given by J(x, t) = t f(x/t) for 0 ≤ ‖x‖ < t, and J(x, t) = x for t ≤ ‖x‖ ≤ 1. Visually, the homeomorphism is 'straightened out' from the boundary, 'squeezing' f down to the origin. William Thurston calls this "combing all the tangles to one point". In the original 2-page paper, J. W. Alexander explains that for each t > 0 the transformation J_t replicates f at a different scale, on the disk of radius t, thus as t → 0 it is reasonable to expect that J_t merges to the identity. The subtlety is that at t = 0, f "disappears": the germ at the origin "jumps" from an infinitely stretched version of f to the identity. Each of the steps in the homotopy could be smoothed (smooth the transition), but the homotopy (the overall map) has a singularity at (x, t) = (0, 0). This underlines that the Alexander trick is a PL construction, but not smooth. General case: isotopic on boundary implies isotopic If f, g : Dⁿ → Dⁿ are two homeomorphisms that agree on the boundary sphere, then g⁻¹f is the identity on the boundary, so we have an isotopy J from the identity to g⁻¹f. The map gJ is then an isotopy from g to f. Radial extension Some authors use the term Alexander trick for the statement that every homeomorphism of the boundary sphere Sⁿ⁻¹ can be extended to a homeomorphism of the entire ball Dⁿ. However, this is much easier to prove than the result discussed above: it is called radial extension (or coning) and is also true piecewise-linearly, but not smoothly. Concretely, let f : Sⁿ⁻¹ → Sⁿ⁻¹ be a homeomorphism; then F(x) = ‖x‖ f(x/‖x‖) for x ≠ 0, with F(0) = 0, defines a homeomorphism of the ball. Exotic spheres The failure of smooth radial extension and the success of PL radial extension yield exotic spheres via twisted spheres. 
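In display form, the base-case isotopy is the standard Alexander formula for a boundary-fixing homeomorphism f:

```latex
% Alexander isotopy for f : D^n \to D^n with f|_{\partial D^n} = \mathrm{id}
J(x,t) =
  \begin{cases}
    t\,f\!\left(x/t\right), & 0 \le \lVert x \rVert < t,\\
    x,                      & t \le \lVert x \rVert \le 1,
  \end{cases}
\qquad J(\cdot,1) = f, \quad J(\cdot,0) = \mathrm{id}.
```

Two checks make it work: on the interface ‖x‖ = t the cases agree, because f fixes the boundary sphere, so t f(x/t) = t (x/t) = x; and each J(·, t) is a homeomorphism of the ball, since it rescales f on the disk of radius t and is the identity outside. The failure of smoothness is concentrated at (x, t) = (0, 0), where no choice of value makes the map differentiable.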
See also Clutching construction References Geometric topology Homeomorphisms
Alexander's trick
Mathematics
469
9,253,234
https://en.wikipedia.org/wiki/Keidel%20vacuum
The Keidel vacuum tube was a type of blood collecting device, first manufactured by Hynson, Wescott and Dunning in around 1922. This vacuum was one of the first evacuated systems, predating the better known Vacutainer. Its primary use was to test for syphilis and typhoid fever. Process Essentially, the Keidel vacuum consists of a sealed ampule with or without a culture medium. Connected to the ampule was a short rubber tube with a needle at the end, using a small glass tube as a cap. Inserting the needle into the vein crushes the ampule's seal, and the vacuum inside draws blood into the container. Typically, a prominent vein in the forearm such as the median cubital vein would suffice, although the Keidel vacuum can take blood from any prominent peripheral vein. This concept did not become popular until World War II, when quick and efficient first aid care was necessary on the battlefield. Subsequently, the Vacutainer became the leading device used for blood collection. See also Phlebotomy Fingerprick References History of medicine Blood tests Hematology
Keidel vacuum
Chemistry
235
22,051,699
https://en.wikipedia.org/wiki/S-Aminoethyl-L-cysteine
{{DISPLAYTITLE:S-Aminoethyl-L-cysteine}} S-Aminoethyl-L-cysteine, also known as thialysine, is a toxic analog of the amino acid lysine in which the second carbon of the amino acid's R-group (side chain) has been replaced with a sulfur atom. Strictly speaking, L-thialysine is actually considered an S-(2-aminoethyl) analogue of L-cysteine. This compound is known to have cytotoxic effects, as it inhibits protein synthesis and lysine 2,3-aminomutase. References External links H-Cys(aminoethyl)-OH·HCl at ChemImpex Thialysine at US Biological Alpha-Amino acids Sulfur amino acids Thioethers Non-proteinogenic amino acids Toxic amino acids
S-Aminoethyl-L-cysteine
Chemistry
186
3,505,075
https://en.wikipedia.org/wiki/Colocalization
In fluorescence microscopy, colocalization refers to observation of the spatial overlap between two (or more) different fluorescent labels, each having a separate emission wavelength, to see if the different "targets" are located in the same area of the cell or very near to one another. The definition can be split into two different phenomena: co-occurrence, which refers to the presence of two (possibly unrelated) fluorophores in the same pixel, and correlation, a much more significant statistical relationship between the fluorophores indicative of a biological interaction. This technique is important to many cell biological and physiological studies that demonstrate a relationship between pairs of bio-molecules. History The ability to demonstrate a correlation between a pair of bio-molecules was greatly enhanced by Erik Manders of the University of Amsterdam, who introduced Pearson's correlation coefficient (PCC) to microscopists, along with other coefficients, of which the "overlap coefficients" M1 and M2 have proved to be the most popular and useful. The purpose of using coefficients is to characterize the degree of overlap between images, usually two channels in a multidimensional microscopy image recorded at different emission wavelengths. A popular approach was introduced by Sylvain Costes, who utilized Pearson's correlation coefficient as a tool for setting the thresholds required by M1 and M2 in an objective fashion. The Costes approach assumes that only positive correlations are of interest, and does not provide a useful measurement of PCC. Although the use of coefficients can significantly improve the reliability of colocalization detection, it depends on a number of factors, including how the fluorescent samples were prepared and how the images were acquired and processed. Studies should be conducted with great caution, and after careful background reading. 
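The coefficients above can be sketched on two hypothetical single-channel intensity lists. Real analyses use 2-D images and careful threshold selection (e.g. the Costes procedure); the data and thresholds here are illustrative only.

```python
# Sketch of Pearson's correlation coefficient (PCC) and Manders'
# overlap coefficients M1/M2 on hypothetical pixel intensities.
import math

def pearson(r, g):
    """Pearson's correlation coefficient between two channels."""
    n = len(r)
    mr, mg = sum(r) / n, sum(g) / n
    num = sum((a - mr) * (b - mg) for a, b in zip(r, g))
    den = math.sqrt(sum((a - mr) ** 2 for a in r)
                    * sum((b - mg) ** 2 for b in g))
    return num / den

def manders(r, g, tr=0.0, tg=0.0):
    """M1: fraction of channel-1 intensity in pixels where channel 2
    exceeds its threshold; M2 is the converse."""
    m1 = sum(a for a, b in zip(r, g) if b > tg) / sum(r)
    m2 = sum(b for a, b in zip(r, g) if a > tr) / sum(g)
    return m1, m2

red   = [0, 10, 20, 30, 40]   # hypothetical intensities, channel 1
green = [0, 12, 18, 33, 38]   # hypothetical intensities, channel 2

print(round(pearson(red, green), 3))
print(manders(red, green))
```

With zero thresholds M1 and M2 simply report co-occurrence; the thresholds are what separate meaningful overlap from background, which is why their objective selection matters.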
Currently the field is dogged by confusion and a standardized approach is yet to be firmly established. Attempts to rectify this include re-examination and revision of some of the coefficients, application of a noise-correction factor (replicate-based noise-corrected correlations for accurate measurements of colocalization), and the proposal of further protocols, which were thoroughly reviewed by Bolte and Cordelieres (2006). In addition, because fluorescence images tend to contain a certain amount of out-of-focus signal, Poisson shot noise, and other noise, they usually require pre-processing prior to quantification. Careful image restoration by deconvolution removes noise and increases contrast in images, improving the quality of colocalization analysis results. Up to now, the most frequently used methods to quantify colocalization calculate the statistical correlation of pixel intensities in two distinct microscopy channels. More recent studies have shown that this can lead to high correlation coefficients even for targets that are known to reside in different cellular compartments. A more robust quantification of colocalization can be achieved by combining digital object recognition and the calculation of area overlap with a pixel-intensity correlation value. This led to the concept of an object-corrected Pearson's correlation coefficient. Examples of use Some membrane-impermeable fluorescent zinc dyes can detectably label the cytosol and nuclei of apoptotic and necrotic cells in each of four tissue types examined, namely the cerebral cortex, the hippocampus, the cerebellum, and the kidney, where colocalized detection of zinc increase and the well-accepted cell death indicator propidium iodide was also demonstrated. Using the principles of fluorescent colocalization, 
coincident detection of zinc accumulation and propidium iodide (a traditional cell death indicator) uptake in multiple cell types was demonstrated. Various examples of quantification of colocalization in the field of neuroscience can be found in a review. Detailed protocols on the quantification of colocalization can be found in a book chapter. Single-molecule resolution Colocalization is used in real-time single-molecule fluorescence microscopy to detect interactions between fluorescently labeled molecular species. In this case, one species (e.g. a DNA molecule) is typically immobilized on the imaging surface, and the other species (e.g. a DNA-binding protein) is supplied in solution. The two species are labeled with dyes of spectrally resolved (>50 nm) colors, e.g. cyanine-3 and cyanine-5. Fluorescence excitation is typically carried out in total internal reflection mode, which increases the signal-to-noise ratio for the molecules at the surface with respect to the molecules in bulk solution. The molecules are detected as spots appearing on the surface in real time, and their locations are found to within 10-20 nm by fitting of point-spread functions. Since typical sizes of biomolecules are on the order of 10 nm, this precision is usually sufficient for calling molecular interactions. Interpretation of results For the purpose of better interpretation of the results of qualitative and quantitative colocalization studies, it was suggested to use a set of five linguistic variables tied to the values of colocalization coefficients, such as very weak, weak, moderate, strong, and very strong, for describing them. The approach is based on the use of the fuzzy system model and computer simulation. When new coefficients are introduced, their values can be fitted into the set. 
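The single-molecule colocalization call described above amounts to pairing fitted spot positions from the two channels that fall within a distance comparable to the localization precision. The coordinates below (in nm) are hypothetical.

```python
# Sketch of single-molecule colocalization calling: spots detected in two
# spectrally separated channels are paired when their fitted positions lie
# within a distance threshold. All coordinates are hypothetical.
import math

def colocalized_pairs(spots_a, spots_b, max_dist_nm=20.0):
    """Pair each channel-A spot with any channel-B spot within max_dist_nm."""
    pairs = []
    for a in spots_a:
        for b in spots_b:
            if math.dist(a, b) <= max_dist_nm:
                pairs.append((a, b))
    return pairs

cy3_spots = [(100.0, 100.0), (500.0, 250.0)]   # e.g. immobilized DNA
cy5_spots = [(110.0, 112.0), (900.0, 900.0)]   # e.g. binding protein

# Only the first Cy3 spot has a Cy5 spot within 20 nm.
print(colocalized_pairs(cy3_spots, cy5_spots))
```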
Related techniques Förster resonance energy transfer (FRET): 10 nm proximity (light microscopy: only 250 nm resolution; no certainty of effective interaction) Immunoprecipitation (IP) pull-downs Yeast two-hybrid: protein interaction mapping Benchmark images The degree of colocalization in fluorescence microscopy images can be validated using the Colocalization Benchmark Source, a free collection of downloadable image sets with pre-defined values of colocalization. Software implementations Open source: Fiji (Fiji Is Just ImageJ), BioImageXD. Closed source: AxioVision Colocalization Module, Colocalization Research Software, CoLocalizer Pro, Nikon's NIS-Elements Colocalization Module, Scientific Volume Imaging's Huygens Colocalization Analyzer, Quorum Technology's Volocity, Media Cybernetics's Image-Pro, Bitplane's Imaris, arivis Vision4D. References Microscopy
Colocalization
Chemistry
1,281
44,059,352
https://en.wikipedia.org/wiki/Penveu
The penveu is a pen-like device developed by the American company Interphase for use with digital audio-visual presentations. History In 1991, SMART Technologies introduced an interactive whiteboard, on which the image from a projector was displayed. A USB connection allowed the presenter to virtually interact with the projection. Other after-market products aimed at working with existing projectors (Mimio and eBeam) came out after that. Interphase Corporation (Nasdaq: INPH) was founded in 1974, and filed for initial public offering in 1984. In 2010, penveu was invented. The company developed and tested the concepts for two years until unveiling the product at the DEMO conference in Santa Clara, California, on April 18, 2012. On May 30, 2014, the product was released to the market. Since the product is made of two electronic boards (one in the handheld pen and one in the base station), the engineering department named those two boards the PEN and the VEU (video enhancement unit). PENVEU became the name of the device. On September 30, 2015, Interphase Corporation announced it had ceased operations and commenced bankruptcy proceedings. Technology The core of the penveu technology is a camera located near the tip of the pen, implemented across the two parts of the penveu system: the PEN and the VEU. The VEU box intercepts the display signal to add visual targets that encode position information as changes in brightness. A digital signal processor (DSP) in the pen calculates the position of the tip based on the information retrieved from the detected targets. Since the positioning target system is included in the modified video signal, penveu may not require calibration. Awards Penveu has received several awards since its introduction to the market, including "Best of InfoComm 2012", the 2014 "Innovations in Design and Engineering" award and the International Society for Technology in Education (ISTE) inaugural "Best of Show" award. 
References Office equipment Display technology Educational technology companies of the United States Computing input devices Pointing devices
Penveu
Engineering
430
21,117,094
https://en.wikipedia.org/wiki/Amorphous%20silica-alumina
Amorphous silica-alumina is a synthetic substance that is used as a catalyst or catalyst support. It can be prepared in a number of ways, for example: Precipitation of hydrous alumina onto amorphous silica hydrogel Reacting a silica sol with an alumina sol Coprecipitation from sodium silicate / aluminium salt solution Water-soluble contaminants, e.g. sodium salts, are removed by washing. Some of the alumina is present in tetrahedral coordination, as shown by 29Si MAS NMR and 27Al NMR studies. Amorphous silica-alumina contains sites termed Brønsted acid (or protic) sites, with an ionizable hydrogen atom, and Lewis acid (aprotic), electron-accepting sites. These different types of acidic site can be distinguished by the ways in which a probe molecule such as pyridine attaches: on Lewis acid sites it forms complexes, and on the Brønsted sites it adsorbs as the pyridinium ion. As of 2000, examples of processes that use silica-alumina catalysts are the production of pyridine from crotonaldehyde, formaldehyde, steam, air and ammonia, and the cracking of hydrocarbons. References Catalysis Inorganic silicon compounds Acid catalysts
Amorphous silica-alumina
Chemistry
282
67,716,845
https://en.wikipedia.org/wiki/Web%20skimming
Web skimming, formjacking or a magecart attack is an attack in which the attacker injects malicious code into a website and extracts data from an HTML form that the user has filled in. That data is then submitted to a server under the control of the attacker. Mitigation Subresource Integrity or a Content Security Policy can be used to protect against formjacking, although this does not protect against supply chain attacks. A web application firewall can also be used. Prevalence A report in 2016 suggested as many as 6,000 e-commerce sites may have been compromised via this class of attack. In 2018, British Airways had 380,000 card details stolen via this class of attack. A similar attack affected Ticketmaster the same year, with 40,000 customers affected by maliciously injected code on payment pages. Magecart Magecart is software used by a range of hacking groups for injecting malicious code into e-commerce sites to steal payment details. As well as targeted attacks such as the one on Newegg, it has been used in combination with commodity Magento extension attacks. The 'Shopper Approved' e-commerce toolkit utilised on hundreds of e-commerce sites was also compromised by Magecart, as was the conspiracy site InfoWars. According to Malwarebytes, the Magecart software has tried to avoid detection by using the WebGL API to check whether a software renderer such as "swiftshader", "llvmpipe" or "virtualbox" is in use. That would indicate that the software is running in a virtual machine, probably used to detect the malware rather than to make a purchase. In October 2023, a Magecart version was reported to have been inserted into the 404 error pages of infected web sites. The default '404 Not Found' page is used to hide and load the card-stealing code. The site visitor enters sensitive details into, for example, an order form, then sees a fake "session timeout" error, while the information is sent to the attacker. 
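The Subresource Integrity mitigation mentioned above works by pinning a cryptographic digest of each script in the page: the browser refuses to run the script if the served bytes no longer match, which blocks in-place tampering with a hosted script (though not every supply-chain scenario). A sketch of how an integrity value is produced; the script content and filename are hypothetical stand-ins.

```python
# Computing a Subresource Integrity (SRI) value: the base64-encoded
# SHA-384 digest of the script bytes, prefixed with the hash name.
import base64
import hashlib

def sri_hash(content: bytes) -> str:
    """Return an SRI integrity value for the given script bytes."""
    digest = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Hypothetical script content; in practice, hash the exact file you serve.
checkout_js = b"console.log('checkout form handler');"
print(f'<script src="checkout.js" integrity="{sri_hash(checkout_js)}" '
      f'crossorigin="anonymous"></script>')
```

If an attacker modifies the hosted script, its digest changes and the pinned integrity attribute no longer matches, so the browser blocks it.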
References Hacking (computer security) Web security exploits Internet fraud Carding (fraud) Types of cyberattacks
Web skimming
Technology
441
12,917,747
https://en.wikipedia.org/wiki/Sulfophobococcus
Sulfophobococcus is a genus of the Desulfurococcaceae. See also List of Archaea genera References Further reading Scientific journals External links Monotypic archaea genera Thermoproteota
Sulfophobococcus
Biology
45
7,719,047
https://en.wikipedia.org/wiki/Hollander%20beater
A Hollander beater is a machine developed by the Dutch in 1680 to produce paper pulp from cellulose containing plant fibers. It replaced stamp mills for preparing pulp because the Hollander could produce in one day the same quantity of pulp it would take a stamp mill eight days to prepare. However, the wooden paddles and beating process of a stamp mill produced longer, more easily hydrated, and more fibrillated cellulose fibers; thus increasing the resulting paper's strength. The Hollander used metal blades and a macerating action to separate the raw material, resulting in shorter cellulose fibers and weaker paper. Further, the metal blades of the Hollander often introduced metal contaminants into the paper as one metal blade struck another. These contaminants often acted as catalysts for oxidation that have been implicated in browning of old paper called foxing. In turn, the Hollander was (partially) replaced by the conical refiner or Jordan refiner, named after its American inventor Joseph Jordan, who patented the device in 1858. A Hollander beater design consists of a circular or ovoid water raceway with a beater wheel at a single point along the raceway. The beater wheel is a centrifugal compressor or radial impeller cylinder parallel to a grooved plate, similar to the construction of a water wheel or timing pulley. Under power, the blades rotate to beat the fiber into a usable pulp slurry. The beater wheel and plate do not touch, as this would result in cutting. The distance between the two is adjusted to increase or decrease the pressure on the fibers when passing through the beater. The objective of using a beater (rather than another process like grinding, as many wood-pulp mills do) is to create longer, hydrated, fibrillated fibers. (Fibrillated fibers are abraded to the extent that many partially broken-off fibers extend from the main fiber, increasing the fiber's surface area, and hence its potential for hydrogen bonding). 
Grinding of fibers is not desirable. Therefore, the "blades" are not what might be thought of as "sharpened", and well-designed beaters make it possible to minimize the shear action of the rotating blades against the bottom of the water raceway. References External links The Paper Project Ontario Science Centre Tools for Paper Madgoose Press Medieval Paper Technology Book arts Crafts Dutch inventions Industrial processes Papermaking Pulp and paper industry Pulp and paper mills Science and technology in the Dutch Republic Stamp mills Industrial history of the Netherlands Economic history of the Dutch Republic
Hollander beater
Chemistry,Engineering
526
44,728,150
https://en.wikipedia.org/wiki/Filial%20therapy
Filial therapy is a type of psychotherapy designed to treat emotional and behavioral difficulties in children; it was formulated by Bernard Guerney in 1964. It is based on the principles of play therapy; however, it is distinct from it, in that it teaches parents (or other paraprofessionals) how to provide therapeutic interventions for children. With respect to older children, filial therapy uses techniques that are adapted to age-appropriate interests and activities. Therapeutic mechanism Although the exact mechanism by which therapeutic gains occur through the use of filial therapy is not definitively known, several hypotheses have been formulated to explain its positive effects. Many hypotheses hold that therapeutic gains are due to the same factors that make play therapy efficacious. Other hypotheses suggest that benefits from filial therapy may also stem from parental skill acquisition, a sense of parental self-efficacy, decreases in negative parent-child interactions, and an increase in positive parent-child interactions. Empirical support Filial therapy is supported by clinical studies and meta-analyses. Some research suggests that under certain conditions, filial therapy can have a greater therapeutic impact than play therapy. See also Virginia Axline References External links Association of Play Therapy National Institute of Relationship Enhancement and Center for Couples, Families and Children Child development Play (activity)
Filial therapy
Biology
275
31,441,686
https://en.wikipedia.org/wiki/Gladding%2C%20McBean
Gladding, McBean is a ceramics company located in Lincoln, California. It is one of the oldest companies in California, a pioneer in ceramics technology, and a company which has "contributed immeasurably" to the state's industrialization. During the heyday of architectural terra cotta, the company "dominated the industry in California and the Far West." History Founding Charles Gladding (1828–1894) was born in Buffalo, New York, served as a first lieutenant in the Union Army during the Civil War, and later moved to Chicago, where he engaged in the clay sewer pipe business. He came to California in 1874 looking for new business opportunities. While in California, he read an article in a San Francisco newspaper about a large clay deposit near the town of Lincoln, California. Investigating, Gladding verified that it was an "unusually fine deposit of white kaolin clay" located close to a railroad line, and selected the spot as the site for a new business. Gladding, along with Peter McGill McBean and George Chambers, established Gladding, McBean in 1875. Its original product was clay sewer pipe. By 1883, the company had grown to 75 employees, and it then evolved into a major manufacturer of architectural terra cotta. Peter McBean became president of the company after Charles Gladding's death in 1894, and his son Athol McBean later served as chairman of the board. Growth, acquisitions, and merger In June 1923, the company acquired the controlling stock of Tropico Potteries, Inc. of Los Angeles. In 1925, the company purchased all the holdings of the Northern Clay Products Company, including the Auburn, Washington terra cotta plant. In 1926, the company merged with the Los Angeles Pressed Brick Company. After this merger, the company had plants in Los Angeles, Santa Monica, Point Richmond, and Alberhill, California. The former Los Angeles Pressed Brick Company's plant at 922 Date Street became Gladding, McBean's Los Angeles plant. 
In 1927, the company acquired the holdings of the Denny-Renton Clay and Coal Company which included the terra cotta plant in Renton, Washington, the plant and mines in Taylor and Mica, Washington. The company closed their plant in Van Aselt, Washington in 1927. Tropico Potteries, Inc. filed for dissolution of the corporation in 1928 merging with Gladding, McBean. The former Tropico Potteries's plant at 2901 Los Feliz Boulevard became the company's Glendale plant. Due to the Great Depression, the Auburn plant closed in 1932. All operations were consolidated with the Renton plant. The Taylor coal and clay mines and the town were condemned by the Seattle Water Department in order to include the area inside an expanded watershed. In 1933, the company bought the "entire holdings on the Pacific Coast of the American Encaustic Tiling Company, Ltd., of New York". Since the demand for building materials dwindled, the company began to look for new products. The company expanded into tableware. In 1932, experimental work in Dinnerware began at the Glendale plant in Los Angeles. In 1934, Gladding, McBean introduced the Franciscan Pottery line of dinnerware and art ware, named after the Franciscan friars who established missions throughout California in the 18th and 19th centuries. The lines were very successful. In 1937, Gladding, McBean and Co. purchased the Catalina Clay Products Division of Santa Catalina Island Co. The company closed the pottery moving all molds and equipment to the Glendale plant. The company continued to use the tradename of Catalina Pottery on select dinnerware and art ware lines produced in the Glendale plant until 1942. In 1940, the company introduced the hand-painted embossed pattern Franciscan Apple, and in 1941 Desert Rose. Both patterns became the company's most popular patterns. The company introduced fine china dinnerware in 1942 and due to World War II, discontinued all art ware lines. 
By 1950, it was considered one of the "world's largest ceramics manufacturers". In 1957, they purchased Washington Brick and Lime and its factories located in Dishman, Washington and Clayton, Washington. The company was described at that time as "the West's largest ceramics firm" with seven plants in California and two in Washington, in addition to those acquired in that purchase. Because of "the importation of inexpensive Japanese ceramics", Gladding McBean's tableware sales declined in the post World War II period. This was a factor in Gladding-McBean's decision to seek a merger. In 1962, the company merged with the Lock Joint Pipe Company, which resulted in the creation of the International Pipe and Ceramics Corporation, shortened to Interpace Corp. in 1968. Decline and revival In 1976 Interpace Corp. "announced their intention to cease operations at the Lincoln plant" where Gladding, McBean began. Pacific Coast Building Products then purchased the Lincoln factory and restored the historic name of Gladding, McBean, which remains in business today. Interpace Corp. sold its Franciscan Ceramics division to Josiah Wedgwood and Sons Ltd. in 1979. In 1984 production was moved to Wedgwood's Stoke-on-Trent facility in England. The company now operates as a division of Pacific Coast Building Products Inc under the name Gladding, McBean, LLC. Hard hit by the recession, the company had 110 employees in 2010, "down from an average of 240 workers between 2001 and 2007". The company sponsors an annual "Feats of Clay" ceramic arts festival in Lincoln. Products and legacy From its base in clay sewer pipe and terra cotta, the company expanded into brick production and then branched out to dinnerware in the 1930s, with its Franciscan and Catalina lines. In 1959, the company was awarded a "subcontract in excess of $500,000 for the production of ceramic radomes." 
That year, a spokesman for the company "cited research in oxides and other rare earths as providing a solution to the high heat, speed and radiation problems of the space age," and identified the company's best selling products at that time as "dinnerware, tile, refractories, facebrick, clay pipe and conduit, and technical ceramics." The company now identifies its main products as clay roof tile, piazza floor tile, chimney tops and caps, terra cotta, garden pottery and clay sewer pipe. The California State Library now holds the company's job files from 1888 to 1966, documenting the use of its products to decorate thousands of buildings, including most major structures on the campus of Stanford University. The red roof tiles and architectural terra cotta details helped achieve the distinctive Spanish Colonial Revival style so common to coastal California. While many of the buildings throughout the west coast have been demolished, such as the Richfield Tower in Los Angeles, beautiful examples remain, including San Diego's Spreckels Theater and the Ventura County Courthouse. See also Roof tiles Ludowici Roof Tile California pottery Franciscan Ceramics References External links Official Gladding, McBean company website California State Library: Gladding McBean Company Archives Index Kurutz, Gary. . City of Ventura. July 25, 2013 Lecture at Ventura City Hall Centennial Celebration. Includes remarks by Charles Johnson, Museum of Ventura County Librarian-in-charge, introducing Gary Kurutz, author of The Architectural Terra Cotta of Gladding, McBean American Museum of Ceramic Art Charles Gladding Civil War letters, 1862-1900. Collection guide, California State Library, California History Room. Gladding, McBean and Company. Archives : job order documents, 1888-1966. Collection guide, California State Library, California History Room. 
Roof tiles American pottery Ceramics manufacturers of the United States Manufacturing companies based in California Architecture in California Terracotta Companies based in Placer County, California Manufacturing companies established in 1875 1875 establishments in California Manufacturer of architectural terracotta
Gladding, McBean
Engineering
1,633
14,179,322
https://en.wikipedia.org/wiki/ATF6
Activating transcription factor 6, also known as ATF6, is a protein that, in humans, is encoded by the ATF6 gene and is involved in the unfolded protein response. Function ATF6 is an endoplasmic reticulum (ER) stress-regulated transmembrane transcription factor that activates the transcription of ER molecular chaperones. Accumulation of misfolded proteins in the endoplasmic reticulum results in the proteolytic cleavage of ATF6. The cytosolic portion of ATF6 then moves to the nucleus, where it acts as a transcription factor to drive the transcription of ER chaperones. See also Activating transcription factor Interactions ATF6 has been shown to interact with YY1 and Serum response factor. References Further reading External links Transcription factors
ATF6
Chemistry,Biology
163
1,811,292
https://en.wikipedia.org/wiki/Functional%20selectivity
Functional selectivity (or "agonist trafficking", "biased agonism", "biased signaling", "ligand bias" and "differential engagement") is the ligand-dependent selectivity for certain signal transduction pathways relative to a reference ligand (often the endogenous hormone or peptide) at the same receptor. Functional selectivity can be present when a receptor has several possible signal transduction pathways. The degree to which each pathway is activated thus depends on which ligand binds to the receptor. Functional selectivity, or biased signaling, is most extensively characterized at G protein-coupled receptors (GPCRs). A number of biased agonists, such as those at muscarinic M2 receptors tested as analgesics or antiproliferative drugs, or those at opioid receptors that mediate pain, show potential at various receptor families to increase beneficial properties while reducing side effects. For example, pre-clinical studies with G protein-biased agonists at the μ-opioid receptor show equivalent efficacy for treating pain with reduced risk of addiction and respiratory depression. Studies within the chemokine receptor system also suggest that GPCR biased agonism is physiologically relevant. For example, a beta-arrestin-biased agonist of the chemokine receptor CXCR3 induced greater chemotaxis of T cells relative to a G protein-biased agonist. Functional vs. traditional selectivity Functional selectivity has been proposed to broaden conventional definitions of pharmacology. Traditional pharmacology posits that a ligand can be classified as either an agonist (full or partial), an antagonist or, more recently, an inverse agonist at a specific receptor subtype, and that this characteristic will be consistent across all effector (second messenger) systems coupled to that receptor.
While this dogma has been the backbone of ligand-receptor interactions for decades now, more recent data indicate that this classic definition of ligand-protein associations does not hold true for a number of compounds; such compounds may be termed mixed agonist-antagonists. Functional selectivity posits that a ligand may inherently produce a mix of the classic characteristics through a single receptor isoform, depending on the effector pathway coupled to that receptor. For instance, a ligand cannot always easily be classified as an agonist or antagonist, because it can be a little of both, depending on its preferred signal transduction pathways. Thus, such ligands must instead be classified on the basis of their individual effects in the cell, instead of simply as agonists or antagonists of a receptor. These observations were made in a number of different expression systems, and therefore functional selectivity is not just an epiphenomenon of one particular expression system. Examples One notable example of functional selectivity occurs with the 5-HT2A receptor, as well as the 5-HT2C receptor. Serotonin, the main endogenous ligand of 5-HT receptors, is a functionally selective agonist at this receptor, activating phospholipase C (which leads to inositol trisphosphate accumulation), but does not activate phospholipase A2, which would result in arachidonic acid signaling. However, the other endogenous compound dimethyltryptamine activates arachidonic acid signaling at the 5-HT2A receptor, as do many exogenous hallucinogens such as DOB and lysergic acid diethylamide (LSD). Notably, LSD does not activate IP3 signaling through this receptor to any significant extent. (Conversely, LSD, unlike serotonin, has negligible affinity for the 5-HT2C-VGV isoform, is unable to promote calcium release, and is, thus, functionally selective at 5-HT2C.) Receptor oligomers, specifically 5-HT2A–mGluR2 heteromers, mediate this effect.
This may explain why some direct 5-HT2 receptor agonists have psychedelic effects, whereas compounds that indirectly increase serotonin signaling at the 5-HT2 receptors generally do not, for example: selective serotonin reuptake inhibitors (SSRIs), monoamine oxidase inhibitors (MAOIs), and medications using 5-HT2A receptor agonists that do not have constitutive activity at the mGluR2 dimer, such as lisuride. Tianeptine, an atypical antidepressant, is thought to exhibit functional selectivity at the μ-opioid receptor to mediate its antidepressant effects. Oliceridine is a μ-opioid receptor agonist that has been described as functionally selective towards G protein and away from β-arrestin2 pathways. However, recent reports highlight that, rather than functional selectivity or 'G protein bias', this agonist has low intrinsic efficacy. In vivo, it has been reported to mediate pain relief without tolerance or gastrointestinal side effects. The delta opioid receptor agonists SNC80 and ARM390 demonstrate functional selectivity that is thought to be due to their differing capacity to cause receptor internalization. While SNC80 causes delta opioid receptors to internalize, ARM390 causes very little receptor internalization. Functionally, that means that the effects of SNC80 (e.g. analgesia) do not occur when a subsequent dose follows the first, whereas the effects of ARM390 persist. However, tolerance to ARM390's analgesia still occurs eventually after multiple doses, though through a mechanism that does not involve receptor internalization. Interestingly, the other effects of ARM390 (e.g. decreased anxiety) persist after tolerance to its analgesic effects has occurred. Functional selectivity that biases metabolism has also been demonstrated for the electron transfer protein cytochrome P450 reductase (POR), where binding of small-molecule ligands was shown to alter the protein conformation and the interaction of POR with its various redox partner proteins.
See also Signal transduction Second messenger system References Further reading Biased ligands Neurophysiology Pharmacodynamics Signal transduction
Functional selectivity
Chemistry,Biology
1,282
374,669
https://en.wikipedia.org/wiki/Standard%20library
In computer programming, a standard library is the library made available across implementations of a programming language. Often, a standard library is specified by its associated programming language specification; however, some are set in part or in whole by the more informal practices of a language community. Some languages define a core part of the standard library that must be made available in all implementations, while allowing other parts to be implemented optionally. The line between the core language and its standard library can be relatively subtle, and a programmer may confuse the two even though the language designers intentionally separate them. The line is further blurred in some languages that define core language constructs in terms of the standard library. For example, Java defines a string literal as an instance of the java.lang.String class. Smalltalk defines an anonymous function expression (a "block") as an instance of its library's BlockContext class. Scheme does not specify which portions must be implemented as core language vs. standard library. Contents Depending on the constructs available in the core language, a standard library may include: Subroutines Macro definitions Global variables Class definitions Templates Commonly provided functionality includes: Algorithms; such as sorting algorithms Data structures; such as list, tree, and hash table Interaction with external systems; input/output Interaction with the host operating system Philosophies Philosophies of standard library design vary widely. For example, Bjarne Stroustrup, designer of C++, writes: This suggests a relatively small standard library, containing only the constructs that "every programmer" might reasonably require when building a large collection of software. This is the philosophy that is used in the C and C++ standard libraries.
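The categories of commonly provided functionality listed above can be illustrated with Python's standard library; the following is a minimal sketch, and the particular modules chosen (bisect, collections, io, os) are just one representative selection:

```python
import bisect          # algorithms: binary search on sorted sequences
import collections     # data structures: Counter, deque, OrderedDict
import io              # input/output: in-memory text streams
import os              # interaction with the host operating system

# Algorithm: find the insertion point for 25 in a sorted list.
scores = [10, 20, 30, 40]
pos = bisect.bisect_left(scores, 25)        # 2

# Data structure: count character frequencies.
counts = collections.Counter("abracadabra")  # counts["a"] == 5

# Input/output: read a line from an in-memory stream.
buf = io.StringIO("hello\nworld\n")
first_line = buf.readline().strip()          # "hello"

# Operating system: query the current working directory.
cwd = os.getcwd()
```

Each of these ships with every conforming Python implementation, which is precisely what distinguishes a standard library from a third-party one.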
By contrast, Guido van Rossum, designer of Python, has embraced a much more inclusive vision of the standard library. Python attempts to offer an easy-to-code, object-oriented, high-level language. In the Python tutorial, he writes: Van Rossum goes on to list libraries for processing XML, XML-RPC, email messages, and localization, facilities that the C++ standard library omits. This other philosophy is often found in scripting languages (as in Python or Ruby) or languages that use a virtual machine, such as Java or the .NET Framework languages. In C++, such facilities are not part of the standard library, but instead are included in other libraries, such as Boost. Examples C standard library C++ standard library .NET Framework Class Library (FCL) Java Class Library (JCL) Factor standard library Ruby standard library Python standard library Common Language Infrastructure (CLI) standard libraries References Programming libraries Programming language standards
Standard library
Technology
558
65,742,191
https://en.wikipedia.org/wiki/Bilibili%20Video%20Satellite
Bilibili Video Satellite was launched by a Long March 11 rocket from the Yellow Sea on 15 September 2020. It was China's first satellite custom-made for an internet company, Bilibili. The satellite was developed by Chang Guang Satellite Technology Corporation. Mission Bilibili vice chairman and COO Li Ni (李旎) said in 2020 that the satellite would be able to capture remote-sensing video that could be used to make popular-science videos covering science, technology, the humanities and other subjects, encouraging the younger generation to remain curious and to explore. Farther into the future, the satellite is also planned to offer customized filming missions for Bilibili users, using the satellite to take orbital photos of the Earth. Specifications The satellite was equipped with two high-performance payload cameras, intended to obtain color video images with resolution better than covering an area of . History On 11 May 2020, the initial satellite was transported to Jiuquan Satellite Launch Center. In June 2020, Bilibili announced that the satellite was to have been launched in late June, saying it was "a Children's Day gift". In the event, the launch was delayed. On 10 July 2020, the initial Bilibili Video Satellite was launched from the Jiuquan Center by the first Kuaizhou 11 rocket. However, the rocket flew abnormally and failed to reach orbit. Bilibili said then that the satellite launch program would continue. In September 2020, another satellite was launched by a Long March 11 from the Yellow Sea and successfully reached orbit. See also Jilin-1 References Earth observation satellites of China Commercial Earth imaging satellites Satellite video
Bilibili Video Satellite
Astronomy
332
2,539,793
https://en.wikipedia.org/wiki/XML%20Information%20Set
XML Information Set (XML Infoset) is a W3C specification describing an abstract data model of an XML document in terms of a set of information items. The definitions in the XML Information Set specification are meant to be used in other specifications that need to refer to the information in a well-formed XML document. An XML document has an information set if it is well-formed and satisfies the namespace constraints. There is no requirement for an XML document to be valid in order to have an information set. An information set can contain up to eleven different types of information items: The Document Information Item (always present) Element Information Items Attribute Information Items Processing Instruction Information Items Unexpanded Entity Reference Information Items Character Information Items Comment Information Items The Document Type Declaration Information Item Unparsed Entity Information Items Notation Information Items Namespace Information Items XML was initially developed without a formal definition of its infoset. This was only formalised by later work beginning in 1999, first published as a separate W3C Working Draft at the end of December that year. Infoset recommendation Second Edition was adopted on 4 February, 2004. If a 2.0 version of the XML standard is ever published, it is likely that this would absorb the Infoset recommendation as an integral part of that standard. Infoset augmentation Infoset augmentation or infoset modification refers to the process of modifying the infoset during schema validation, for example by adding default attributes. The augmented infoset is called the post-schema-validation infoset, or PSVI. Infoset augmentation is somewhat controversial, with claims that it is a violation of modularity and tends to cause interoperability problems, since applications get different information depending on whether or not validation has been performed. Infoset augmentation is supported by XML Schema but not RELAX NG. 
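The item types listed above map closely onto the node types exposed by XML parsers; the article's "See also" section names the Document Object Model as one instance of the infoset. A minimal sketch using Python's xml.dom.minidom (the sample document, names and URI are illustrative):

```python
import xml.dom.minidom

# A small well-formed document exercising several information item
# types: document, comment, element, namespace, processing
# instruction, attribute, and character information items.
doc = xml.dom.minidom.parseString(
    "<!-- a comment --><root xmlns:ex='http://example.org/ns'>"
    "<?pi data?><ex:item id='1'>text</ex:item></root>"
)

# Children of the document information item: the comment and the
# document element.
for node in doc.childNodes:
    print(node.nodeType, node.nodeName)

# Children of the document element: the processing instruction and
# the ex:item element (whose child is a character/text node).
for node in doc.documentElement.childNodes:
    print(node.nodeType, node.nodeName)
```

DOM node-type constants (element = 1, processing instruction = 7, comment = 8, and so on) correspond roughly, though not one-to-one, to the eleven infoset item types.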
Serialization Typically, XML Information Set is serialized as XML. There are also serialization formats for Binary XML, CSV, and JSON. See also XML Information Set instances: Document Object Model XPath data model SXML References External links World Wide Web Consortium standards XML-based standards
XML Information Set
Technology
442
33,447,798
https://en.wikipedia.org/wiki/Canberra%20distance
The Canberra distance is a numerical measure of the distance between pairs of points in a vector space, introduced in 1966 and refined in 1967 by Godfrey N. Lance and William T. Williams. It is a weighted version of the L₁ (Manhattan) distance. The Canberra distance has been used as a metric for comparing ranked lists and for intrusion detection in computer security. It has also been used to analyze the gut microbiome in different disease states. Definition The Canberra distance d between vectors p and q in an n-dimensional real vector space is given as follows: d(p, q) = Σᵢ₌₁ⁿ |pᵢ − qᵢ| / (|pᵢ| + |qᵢ|), where p = (p₁, p₂, …, pₙ) and q = (q₁, q₂, …, qₙ) are vectors; terms with pᵢ = qᵢ = 0 are defined to be 0. The Canberra metric (Adkins form) divides the distance d by (n − Z), where Z is the number of attributes that are 0 for both p and q. See also Normed vector space Metric Manhattan distance Notes References Digital geometry Metric geometry Distance
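The definition can be implemented directly; a minimal sketch in Python (the function name is illustrative, and the 0/0 case is taken as zero per the usual convention for this metric):

```python
def canberra_distance(p, q):
    """Weighted L1 distance: each coordinate difference is
    normalized by the sum of the absolute coordinate values.
    Terms where both coordinates are zero contribute 0."""
    total = 0.0
    for p_i, q_i in zip(p, q):
        denom = abs(p_i) + abs(q_i)
        if denom != 0:
            total += abs(p_i - q_i) / denom
    return total

# Coordinate-wise: 0/2 + 2/2 + 3/9  ->  0 + 1 + 1/3
print(canberra_distance([1.0, 2.0, 3.0], [1.0, 0.0, 6.0]))  # ≈ 1.3333
```

Note that because every non-zero term is normalized, the Canberra distance is very sensitive to small changes near zero, which is one reason it is favored for comparing ranked lists.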
Canberra distance
Physics,Mathematics
166
662,173
https://en.wikipedia.org/wiki/Endoskeleton
An endoskeleton (From Greek ἔνδον, éndon = "within", "inner" + σκελετός, skeletos = "skeleton") is a structural frame (skeleton) on the inside of an animal, overlaid by soft tissues and usually composed of mineralized tissue. Endoskeletons serve as structural support against gravity and mechanical loads, and provide anchoring attachment sites for skeletal muscles to transmit force and allow movements and locomotion. Vertebrates and the closely related cephalochordates are the predominant animal clade with endoskeletons (made of mostly bone and sometimes cartilage), although invertebrates such as sponges also have evolved a form of "rebar" endoskeletons made of diffuse meshworks of calcite/silica structural elements called spicules, and echinoderms have a dermal calcite endoskeleton known as ossicles. Some coleoid cephalopods (squids and cuttlefish) have an internalized vestigial aragonite/calcite-chitin shell known as gladius or cuttlebone, which can serve as muscle attachments but the main function is often to maintain buoyancy rather than to give structural support, and their body shape is largely maintained by hydroskeleton. Compared to the exoskeletons of many invertebrates, endoskeletons allow much larger overall body sizes for the same skeletal mass, as most soft tissues and organs are positioned outside the skeleton rather than within it, thus unrestricted by the volume and internal capacity of the skeleton itself. Being more centralized in structure also means more compact volume, making it easier for the circulatory system to perfuse and oxygenate, as well as higher tissue density against stress. The external nature of muscle attachments also allows thicker and more diverse muscle architectures, as well as more versatile range of motions. Overview A true endoskeleton is derived from mesodermal tissue. In three phyla of animals, Chordata, Echinodermata and Porifera (sponges), endoskeletons of various complexity are found. 
An endoskeleton may function purely for structural support (as in the case of Porifera), but often also serves as an attachment site for muscles and a mechanism for transmitting muscular forces, as in chordates and echinoderms, which provides a means of locomotion. Compared to the exoskeleton structure in many invertebrates (particularly panarthropods), the endoskeleton has several advantages: The capacity for larger body sizes under the same skeletal mass, as the endoskeleton has a "flesh-over-bone" construct rather than a "flesh-in-bone" one as in exoskeletons. This means that the body's overall volume is not restricted by the endoskeleton itself, but by the weight of soft tissues that can be attached to and supported by it, while the capacity of an exoskeleton's internal cavity restricts how many organs and how much tissue can be supported. Because of skeletal rigidity, many invertebrates have to repeatedly moult (ecdysis) during the juvenile stages of life to grow bigger. Endoskeletons have a more concentrated layout due to their internalized nature, so a greater proportion of skeletal tissue can be recruited to handle mechanical loads. In contrast, exoskeletons are more "spread thin" over the exterior, meaning that when stress is applied to one area of the body, most of the remaining exoskeleton often just acts as "dead weight". Increasing the skeletal strength of a local area often means having to increase the cuticle thickness and density of an entire part of the body, which increases the overall weight significantly, especially at larger body sizes. Being internal means the skeletal tissue can be perfused and maintained from both inside (via nutrient arteries of the marrow) and outside (via periosteal arterioles). The tissue catchment volume that the circulatory system is required to cover is also smaller than that of exoskeletons, making it easier to maintain skeletal health.
Endoskeletons are typically cushioned from trauma by the overlying soft tissues, while exoskeletons are directly exposed to external insults. Having other tissues attached outside the skeleton means that endoskeletons can have more diverse muscular layouts as well as a bigger physiological cross-sectional area, which translates to greater contractile strength and adaptability. Having external muscles also means the potential for greater leverage, as the muscle can attach further down from a joint (comparatively, exoskeletal muscles cannot attach farther than the internal diameter of the corresponding joint cavity), although the muscles (especially flexors) themselves can sometimes physically hinder the joint's range of motion. Chordates Chordates, with the exception of the subphylum Tunicata (which are either soft-bodied or are supported by an exoskeleton known as a test), are developed along an axial endoskeleton. In the more basal subphylum Cephalochordata (lancelets), the endoskeleton consists solely of an elastic glycoprotein-collagen rod called the notochord, which stores energy like a spring and enables more energy-efficient swimming. In the crown group subphylum Vertebrata (vertebrates), the endoskeleton is greatly expanded, with the notochord replaced by a segmented vertebral column, and the skeletal elements develop further to form the cranium, rib cage and appendicular skeleton. The vertebrate endoskeleton is made up of two types of mineralized tissues, i.e. bone and cartilage, reinforced by collagenous ligaments. Vertebrates also evolved specialized striated muscles over their endoskeletons called skeletal muscles, which have serialized sarcomeres and parallel myofibrils bundled in fascicles to both generate greater force and optimize contraction speed. Echinoderms Echinoderms have a mesodermal skeleton in the dermis, composed of calcite-based plates known as ossicles, which form a porous structure known as stereom.
In sea urchins, the ossicles are fused together into a test, while in the arms of sea stars, brittle stars and crinoids (sea lilies) they articulate to form flexible joints. The ossicles may bear external projections in the form of spines, granules or warts that are supported by a tough epidermis. Echinoderm skeletal elements are sometimes deployed in specialized ways, such as the chewing organ in sea urchins called "Aristotle's lantern", the supportive stalks of crinoids, and the structural "lime ring" of sea cucumbers. Sponges The poriferan "skeleton" consists of a mesh-like network of microscopic spicules. The soft connective tissues of sponges are composed of gelatinous mesohyl reinforced by fibrous spongin, forming a composite matrix that has decent tensile strength but severely lacks the rigidity needed to resist deformation from ocean currents. The spicules act as structural elements that add much-needed compressive and shear strength to help maintain the sponge's shape (which is needed to ensure optimal filter feeding), much like the aggregates and rebar stirrups within reinforced concrete. Sponges can have spicules made of calcium carbonate (calcite or aragonite) or, more commonly, silica, which separate sponges into two main clades, the calcareous sponges and the siliceous sponges. There are, however, species (such as the bath sponge and the lake sponge) that have no or severely reduced spicules, which gives them an overall soft "spongy" structure. Coleoids The Coleoidea, a subclass of cephalopod molluscs which evolved an internalised shell, do not have a true endoskeleton in the physiological sense; instead, the internal shell has evolved into a buoyancy organ called the gladius or cuttlebone, which may provide muscle attachment but does not support the cephalopod's body shape (which is maintained solely by a hydroskeleton). Gallery See also Exoskeleton Hydrostatic skeleton References Animal anatomy Biomechanics Skeletal system Zoology
Endoskeleton
Physics,Biology
1,755
11,421,501
https://en.wikipedia.org/wiki/S-element
The S-element is an RNA element found in p42d and related plasmids. The S-element has multiple functions: it is believed to act as a negative regulator of repC transcription, to be required for efficient replication, and to act as a translational enhancer of repC. See also ctRNA References External links Cis-regulatory RNA elements
S-element
Chemistry
73
12,128,498
https://en.wikipedia.org/wiki/Arizona%20room
An Arizona room is a screened porch found frequently in homes in Arizona, based on similar concepts as the Florida room. Though often a patio or porch that has been covered and screened-in, creating an outdoor feeling while preventing excessive heat and keeping insects and animals out, many Arizona rooms are purpose built at the time the house is constructed. The room generally borders the backyard or side yard of the house and is often accessed directly from the living room, kitchen or other common room of the home. According to Phoenix newspaper The Arizona Republic, residents slept in their Arizona room during the summer months, before the advent of air conditioning, because the flow of cool night air made them more comfortable than in an enclosed bedroom. Arizona rooms are often decorated with Southwestern decor and furniture, and reflect the casual, informal style characteristic of the Southwest. See also Florida room Screened porch Sleeping porch References Room Rooms
Arizona room
Engineering
177
38,792,732
https://en.wikipedia.org/wiki/Prenda%20Law
Prenda Law, also known as Steele | Hansmeier PLLP and Anti-Piracy Law Group, was a Chicago-based law firm that ostensibly operated by undertaking litigation against copyright infringement. However, it was later characterized by the United States District Court for Central California in a May 2013 ruling as a "porno-trolling collective" whose business model "relie[d] on deception", and which resembled most closely a conspiracy and racketeering enterprise, referring in the judgment to RICO, the U.S. Federal anti-racketeering law. The firm ostensibly dissolved itself in July 2013 shortly after the adverse ruling although onlookers described Alpha Law Firm LLC as its apparent replacement. In 2014, the ABA Journal described the "Prenda Law saga" as having entered "legal folklore". In the 2013 civil ruling, Prenda Law and three named principals, John Steele, Paul Hansmeier, and Paul Duffy, were found to have undertaken vexatious litigation, identity theft, misrepresentation and calculated deception (including "fraudulent signature"), professional misconduct and to have shown moral turpitude. The principals were also deemed to have founded and been the de facto owners and officers of the shell company which plaintiff and their alleged "client" created to "give an appearance of legitimacy". The firm and its principals were fined; the matter was also referred to the United States Attorney for Central California (for criminal indictment consideration) and the IRS Criminal Investigation Division (for tax fraud consideration). A fourth attorney, Brett Gibbs, was also sanctioned, fined, and referred to a disciplinary committee by the court for false statements and his part in the subsequent cover-up, though described as their employee. (Gibbs' monetary sanctions were later vacated after turning whistleblower as part of his appeal.) 
Other Federal courts in various states later ruled in a similar manner against the firm and others linked to it, including a ruling of fraud upon the court in a Minnesota court review of five closed cases and a finding of "relentless willingness to lie" in an Illinois court. Criminal prosecutor referrals also occurred or were suggested in other jurisdictions. In 2019, Steele and Hansmeier both pleaded guilty in Federal court to a range of criminal charges related to and including extortion and fraudulent conduct, with substantial sums of their criminal proceeds to be reimbursed. Steele received a reduced sentence of 5 years due to his cooperation, and Hansmeier, who had not cooperated, was sentenced to 14 years. Modus operandi As described in the ruling and media coverage, the firm's modus operandi was to threaten individual members of the public with litigation (hence public exposure) over allegations that they breached copyright during pornographic downloads (based, according to the court, on a "statistical guess"). The firm would then offer to settle the case (silently) for a little below the cost of an active legal defense, in what was described by the court as an "extortion payment". Cases with robust defendants and non-profitable cases were dropped. At times the firm acted for client corporations whose filed papers contained discredited signatures and identities found false by the courts—commenters characterized the firm's plaintiffs as often (though not always) being shell companies operated for the firm's and/or attorneys' own benefit. A Federal District court described Prenda's principal attorneys in 2013 as "engaging [a] cloak of shell companies and fraud", while Seventh Circuit Court of Appeals judges commented on "shocking" and apparent "shell game activity" in light of the parties' inability or refusal to describe the relationships between the law firms, "clients" and principals.
An expert witness affidavit stated that IP addresses linked to Prenda's Minnesota and Florida offices and to John Steele had themselves been identified in 2013 as the initial "seeders" (sharers) of some pornographic media, tagged for "fast" sharing on file-sharing networks, which would be followed up by threat of legal action, with Prenda as the pornography producer or copyright purchaser, file sharer (offeror), plaintiff, and plaintiff's attorney; the suspect IP, linked to unauthorized media distribution, was confirmed as Steele and Hansmeier's by Comcast. In some cases hacking was alleged, or claims that the defendant was one of hundreds or thousands of "co-conspirators" for whom non-party subpoenas and discovery were sought; in one case the defendant testified he was in effect offered an ultimatum to act as a "sham" defendant and collusively agree to be sued. Commentators writing for Salon, Law360, JETLaw, Forbes, The Consumerist, and The Register described the business as a copyright troll or as "notorious" for trolling, while tech website Ars Technica described the judge as coming at the firm and its principals "like a tornado". In December 2016, Steele and Hansmeier were arrested by federal authorities and charged with 18 counts of running a multimillion-dollar extortion scheme between 2011 and 2014. (Duffy had died before that date.) That same month, Steele was indicted by a federal grand jury in Minneapolis for his role in the scheme. The following March, he pleaded guilty to federal charges of conspiracy to commit mail and wire fraud and conspiracy to launder money "from his role in an alleged shakedown scheme allegedly designed to entrap and extract millions of dollars in settlements from those accused of illegally downloading internet porn." He was subsequently disbarred by the Illinois Supreme Court in May 2017.
In July 2019 he was sentenced to five years in prison and ordered to pay, jointly and severally with Hansmeier, restitution in the amount of $1,541,527.37. Hansmeier initially pleaded not guilty, on the ground that a plaintiff was entitled to file a claim for damages, even on slim grounds, provided actual grounds might exist, and see if a court agreed or disagreed on its validity (hence arguing that his conduct had been within the law), but later changed his plea to guilty. On June 14, 2019, U.S. District Judge Joan Ericksen sentenced him to fourteen years in prison. In February 2021, the Court of Appeals for the Eighth Circuit upheld Judge Ericksen's order denying Hansmeier's motion to dismiss and to pay restitution in the amount of $1,541,527.37. History Prenda first came to prominence through the practice of identifying the IP addresses of Internet subscribers who, it claimed, had downloaded copyrighted pornographic videos or "hacked" into pornography-related clients. The firm would file copyright infringement lawsuits in federal court, in which it requested "up front" early discovery via "over-broad" subpoenas against the respective Internet service providers (ISPs), upon sometimes-deceptive grounds and at times with falsified signatures on key documents. These discoveries were used to obtain the names and addresses of hundreds or even thousands of subscribers said to be infringers or "co-conspirators" in some cases.
The subscribers concerned—who might have had no responsibility for any alleged download due to Prenda's lack of care over innocent targets—were then written to, and accused of copyright infringement or computer misuse, and threatened with over $150,000 in statutory penalties or told of the possibility of higher damages if the matter was decided in court, and that refusal to pay would cause the recipient's name, together with the names of alleged pornographic videos, to be entered on public court documents, publicly exposing the subscribers' supposed pornographic interest and trial for alleged downloading. That is, the recipient would be stigmatized and identified to the public (e.g., to friends, employers, spouse, children, coworkers, etc.) as someone who had illegally downloaded specific pornography titles on the internet, and been sued in court for doing so. At times, what were seen as veiled threats were also made: that if recipients continued not to pay, household members, neighbors, and visitors to the household would, as part of the firm's investigation, be formally asked whether they had been responsible for downloading the pornographic material on the defendant's network. The letters then offered to make the case go away "silently" for a fee—$4,000 was the price of silence offered to some. The amount demanded was usually slightly less than a typical attorney would charge to defend the case on its merits (as alluded to in the ruling on Ingenuity 13 (below)), so under the "American rule" for legal costs even the completely innocent would have a strong incentive to pay what Los Angeles-based U.S. District Judge Otis D. Wright II called an "extortion payment". Alan Cooper claimed in his 2013 testimony that an attorney ruled to be one of Prenda's principals had stated to him that "his goal was $10,000 a day, to have a mailing of these letters. ...
[t]hat he would just send out a letter stating that if they didn't send a check for a certain amount, that he would make it public to these people's family and friends what they were looking at". It was later testified and ruled in various courts that many (although not all) of the firm's purported "clients" or "plaintiffs" appeared to exist only as shell companies or "fronts" for the principals themselves, and that in effect the principals had been litigating on their own behalf. Signatures and representatives purported to be those of clients were ruled to be falsified, with at least one identity "stolen" and one purported "representative" excluded from court. In a 2013 Minnesota review of five closed cases, none of the documents used to show standing were found to be credible. At times the law firm itself was also alleged to have been behind the sharing of previously undistributed pornography on well-known video "piracy" websites – an apparent effort to induce litigable downloading. The rewards reaped by what Judge Wright called the "porno-trolling collective" ran into the millions. Steele told Forbes magazine that his firm had filed over 350 lawsuits against more than 20,000 people, resulting in "a little less than $15 million" in settlements. Brett Gibbs testified in 2013 that, based on documents shown to him, the firm's revenue from settlements was around $1.93 million in 2012, of which around 80% was ultimately distributed to the principals or their joint companies rather than to purported clients.

Alleged file downloader litigation cases

"Ingenuity 13" case

In November 2012, Morgan Pietz was engaged as defense attorney against a case brought by Ingenuity 13, a Prenda "client". He stated that, in the course of case preparation, he noticed various anomalies within Prenda's past litigation. In one Florida case (Sunlust vs.
Nguyen), decided in November 2012, a Prenda employee, Mark Lutz, had also self-identified as the "representative" of pornography producer Sunlust Pictures but had been unable to describe his principal or the client in any manner, or to state who paid him, and was ultimately excluded from the court case. Additionally, when Prenda attorney John Steele visited the court, Lutz kept whispering to him; the judge eventually asked Steele to identify himself, and Steele answered evasively, not disclosing his connection to the firm or the case. The case was dismissed as an "attempted fraud on the court", with the court also inviting a motion against Prenda principal Paul Duffy for "lack of candor". Pietz described these as "the first thing [he] started to wonder about". An attempt to investigate the offshore-registered plaintiff in his case, Ingenuity 13, also suggested that "someone had something to hide", as its owners and even the source of its copyrights had been heavily obscured. Pietz further discovered a recent claim in a Minnesota court by a man named Alan Cooper, who stated that his name and signature had been falsely used as owner-of-record for Ingenuity 13 and another Prenda client (a tape recording in which Steele threatened Cooper with litigation was played as evidence in court). Finally, an email dialogue between Pietz and Ingenuity 13's attorney Brett Gibbs, in which Pietz asked for clarification about the identity of "Alan Cooper" and sight of originals of documents certified as having been signed by "Alan Cooper", was stonewalled in what court onlookers described as a "rambling", "confused" and evasive manner. Brett Gibbs had also been one of the Prenda attorneys named in the Sunlust hearing.
Taken together, the circumstances caused Pietz to question before the court whether the claims and client relationships presented by the plaintiff were truthful and whether the plaintiffs were engaging in a fraud, and to collate and present his research into Prenda and its affiliates, which "detail[ed] mysterious signatures, company addresses that appeared to belong to Steele's family members, and one entity that appeared to be run by a friend of Hansmeier", and which overall suggested "possible systemic fraud, perjury, lack of standing, undisclosed financial interests, and improper fee splitting". The case was heard by Judge Otis Wright, who declined to accept the plaintiff's response to Pietz's defense and ordered Ingenuity 13 to show that the basis of its case, and its manner of identifying alleged offenders, was sufficient to safeguard innocent persons and subscribers from "simple coercion" and "harassment" (described as "the easy route"). Prenda in response tried and failed to have the judge removed and to sue Cooper for defamation, but "conspicuously avoid[ed] direct engagement" with the more serious counterclaims. Hansmeier eventually testified of one Prenda plaintiff company that its business was buying copyrights to litigate against their downloaders, and that it had never paid any tax. Prenda also sought to end the case by requesting dismissal, which was likewise declined. On February 7, 2013, the court instead ordered plaintiff's attorney Brett Gibbs to attend a hearing, to answer questions concerning the firm's conduct in the case. "More ominously", according to a legal onlooker, it also stated that "the Court perceives that Plaintiff may have defrauded the Court" based on Pietz's evidence, and that based upon testimony presented by and about the plaintiffs at the next hearing, "the Court will consider whether sanctions are appropriate, and if so, determine the proper punishment. This may include a monetary fine, incarceration, or other sanctions..."
As well as the plaintiffs and Gibbs as their attorney, Steele, Hansmeier and his brother (who acted as a Prenda forensic investigator), Duffy, Lutz, and Cooper were also among those ordered by the court to attend. On March 11, 2013, Judge Wright's courtroom manner was described by attorney-blogger and past prosecutor Ken White on Popehat, who had followed the case, as that of "a federal judge who was furious, intimately familiar with the case, and consummately prepared for the hearing... [and] made it explicitly, abundantly, frighteningly clear that he believes the principals of Prenda Law have engaged in misconduct – and that he means to get to the bottom of it." Alan Cooper testified first: that he had not signed or seen any papers related to Prenda; that the "Alan Cooper" signatures on documents were not his; that he looked after one of John Steele's houses; that Steele had stated he wanted to earn "$10,000 a day" litigating against downloaders; and that when Cooper had attempted to legally clarify that he was not the signatory of Prenda's documents, Steele began an escalating series of texts and voicemails which (to White) clearly appeared intended to intimidate Cooper into "back[ing] off". AT&T and Verizon, two major ISPs, testified that stays (court orders from other Prenda cases) had been ignored by Prenda, and that they had been kept unaware the stays had been issued. On the topic of Prenda's finances, Judge Wright made clear that he viewed the clients as shams that neither received settlement payments nor paid tax on any settlements, and that Prenda "basically prosecuted on their own behalf".
Of the four main Prenda attorneys, only Gibbs attended and testified, coming across to White as "a young attorney out of his depth who fell in with the wrong crowd", but whose testimony in his own defense was contradicted at the hearing (Gibbs stated that he had only limited involvement in Prenda client "AF Holdings", but attorney Jason Sweet, in the audience, then stated to the court that Gibbs had previously self-identified as AF Holdings' "national counsel"). On April 2, 2013, when Steele, Hansmeier and Duffy attended, the scrutiny of the court run by Judge Wright – who had sat on around 45 previous Prenda cases – had therefore long moved to "getting to the bottom" of the firm rather than the specific case, and Wright opened by stating, "It should be clear by now that this court's focus has now shifted dramatically from the area of protecting intellectual property rights to attorney misconduct[,] such misconduct which I think brings discredit to the profession. That is much more of a concern now to this court than what this litigation initially was about." The plaintiffs had also been threatened with imprisonment, and the three Prenda attorneys later described as "principals" had also appeared. However, the latter then chose to fall back upon their legal right to avoid self-incrimination under the Fifth Amendment rather than answer questions of fact from the judge about the ownership, operations, and finances of Prenda and its affiliates. Ken White, who had followed and documented the case, commented on April 2, 2013: Their invocation of their Fifth Amendment rights in the face of that order is utterly unprecedented in my experience as a lawyer. In effect, the responsible lawyers for a law firm conducting litigation before a court have refused to explain that litigation to the court on the grounds that doing so could expose them to criminal prosecution. However well grounded in ... individual rights ... the invocation eviscerates their credibility...
I expect that defense attorneys will file notice of it in every state and federal case Prenda Law has brought... The message will be stark: the attorneys directing this litigation just took the Fifth rather than answer another judge's questions about their conduct in this litigation campaign. Despite the ongoing case, by May 30, 2013, according to BusinessWeek, Prenda Law had changed its name to the Anti-Piracy Law Group and switched its continuing actions to state courts, BusinessWeek commenting that state court cases are not centrally listed or as easily found as Federal court cases.

Ruling and subsequent events in the Ingenuity 13 case

On May 6, 2013, Judge Wright sanctioned Prenda Law and its "principals" Steele, Hansmeier, and Duffy, along with Gibbs, whom he termed "attorneys with shattered law practices", $81,319.72 (of which half was punitive) for "brazen misconduct and relentless fraud", "vexatious litigation", "[stealing] the identity of Alan Cooper", and "representations about their operations, relationships, and financial interests [that] varied from feigned ignorance to misstatements to outright lies". Wright also referred the attorneys to the U.S. Attorney's office and the Internal Revenue Service-Criminal Investigation Division for possible criminal prosecution, and to various federal and state bars for "moral turpitude unbecoming of an officer of the court". He also noted that Steele, Hansmeier, and Duffy had pleaded the Fifth Amendment privilege against self-incrimination when questioned. Judge Wright also found that the attorneys had "purposely ignored" his orders quashing subpoenas to ISPs, so that ISPs would be "unaware of the vacatur and would turn over the requested subscriber information", and had created sham companies such as "AF Holdings LLC" and "Ingenuity 13 LLC" "to give an appearance of legitimacy" to their pursuit of "easy money".
In a footnote, Wright wryly observed that the punitive portion of the award "is calculated to be just below the cost of an effective appeal", a nod to his finding that plaintiffs' settlement demands were set "just below the cost of a bare-bones defense". Wright's order was also replete with Star Trek references. Prenda, and attorney Paul Hansmeier, filed an "emergency motion" in the U.S. Court of Appeals for the Ninth Circuit seeking a stay of Judge Wright's sanctions order; it was denied without prejudice to the sanctioned parties' right to request a stay in Judge Wright's court. They then filed an ex parte motion seeking a stay from Wright. On May 21, 2013, Wright responded by ordering each sanctioned party ("Steele, Duffy, Hansmeier, Gibbs, AF Holdings, Ingenuity 13, and Prenda") to pay an additional $1,000 per day (to the clerk of the court), on top of the previously ordered $81,319.72 payable to John Doe's attorneys, until all sanctions were fully paid. Hansmeier's application for admission to the bar of the Ninth Circuit was also provisionally denied by a court commissioner, citing his referral to professional bar and discipline committees based on Judge Wright's finding of "moral turpitude". Hansmeier had sought admission to the Ninth Circuit bar to represent his wife in her objection to a class-action settlement, but was told he might not do so. On May 20, 2013, attorneys Steele, Hansmeier, and Duffy secured and posted a bond of $101,650 on behalf of themselves, Prenda Law Inc., Ingenuity 13 LLC and AF Holdings LLC (but not Gibbs), to guarantee payment of Judge Wright's sanctions order if upheld on appeal. The amount of the bond was later raised to a total of $237,584 to cover a possible attorney fee award on appeal; the Prenda parties' appeal of this order was denied, 2-1, by a three-judge Ninth Circuit motions panel. The increased bond was to be filed by July 15, 2013.
Appeals against monetary sanctions

On October 17, 2013, Brett Gibbs offered a sworn statement, an unsworn motion, and alleged documents and claims concerning Prenda and its principals as part of an appeal against his sanction. The allegations and claims included:
- that Gibbs had openly and repeatedly testified at multiple cases and hearings, including testimony damaging to the firm's case, such as identifying the voice recordings in AF Holdings v. Patel as being John Steele;
- that Steele and Hansmeier had tried to coerce or buy his silence, and to gain his dishonest representations in support of their position in return for covering his sanction bond, and had also tried to intimidate or discredit him (via "virtually fact-free" Bar complaints), but that he had refused;
- that he included for the court what he claimed to be in-house emails showing private communications within the firm, and financial statements showing over $1.9 million in settlement income in 2012;
- that around 70% of all settlement monies (80% when other payments were added) had been paid to Steele and Hansmeier or their jointly-owned company, although this left Prenda with a loss of almost $0.5 million for the year, which was inconsistent with the claim that these were arm's length payments to previous owners;
- that "it appears that neither the Profit and Loss Detail nor the Balance Sheet Detail show any payments to AF Holdings, Ingenuity 13 or other Plaintiffs represented by Prenda [which] supports the conclusion that these companies were not independent entities, but rather alter egos of Steele and Hansmeier";
- that Duffy, not a known "old owner", was described and paid internally as an "old owner";
- that court orders had also been ignored in cases in which he was not involved;
- and a plea that his testimony had been valuable, that his errors were due to unintended ignorance and, in one case, personal error, and that he would testify on these matters if required.
The defendants claimed that while his wrongdoings were of a lesser scale, they were not trivial, and opposed lifting of the entire sanction. On November 7 Gibbs' request was granted and the monetary part of his sanctions was withdrawn ("vacated") due to his "dissociation". On November 18, 2013, the three principals appealed their sanctions, on the basis that the court had erred in combining elements of a civil and a criminal hearing, in effect choosing elements of each and thereby failing to meet the procedural requirements for a number of actions taken. Their claims included:
- that if fixed penalties were proposed or imposed, these would be criminal and not civil responses, and required contempt hearings;
- that the defense lawyer had acted in effect as a prosecutor but lacked the requisite disinterest;
- that no substantial evidence had been presented for the conclusions about plaintiff company ownership;
- that the abrupt termination of hearings and the refusal to hear certain persons had meant important testimony was not heard or cross-examined;
- that a number of the procedural failings were due to Gibbs and not the three principals;
- and that the court had generally exceeded its authority.

The appeal, styled Paul Hansmeier, Esq. v. John Doe, was heard before the Ninth Circuit Court of Appeals on May 4, 2015. A video recording of the oral argument is publicly available. The appeal was denied on June 10, 2016.

Responses to rulings

Attorney John Steele, who denied having any "ownership interest" in Prenda Law, told Adult Video News that "he has faith in the appellate process" and complained that his attorneys had not been permitted to present "evidence or testimony". He denied ever practicing law in California, claimed he had "satisfied" previous inquiries from the Illinois State Bar, and stated that Livewire Holdings, identified by Judge Wright as being a member of the Prenda family, "is filing multiple new cases this week (the week of May 6, 2013)".
Morgan Pietz, who was awarded $76,752.52 of the sanctions award for his fees and costs, described Prenda Law as "the courtroom equivalent of a common bully". Pietz credited the efforts of "a number of attorneys all over the country [who helped] unravel the puzzle pieces and reveal [Prenda Law] for what they are, profiteering copyright trolls who are abusing the law".

AF Holdings v. Navasca

AF Holdings v. Navasca (California) was another Prenda case decided in 2013. As with the signature by "Alan Cooper" in Ingenuity 13, during April and May 2013 the judge in Navasca expressed curiosity about a signature by "Salt Marsh" on behalf of Prenda client AF Holdings, ordering the original to be produced. Paul Duffy stated he did not know, and Mark Lutz stated he had signed for the client but no longer had the original. Salt Marsh, originally identified as an individual person, was later stated to be a trust for the benefit of Lutz' family; however, as plaintiff filings had variously stated that "Salt Marsh" was an "individual", and that the individual had read various documents and discussed dispute options, the court sought the identity of the person who had read, discussed, and signed for the client. Commentators identified that a man named "Saltmarsh" had multiple connections and shared residences with both Steele and other Prenda shell companies, but whether this was the same "Saltmarsh" was unknown. After the events of early 2013 in Ingenuity 13 v. Does (above), Prenda and AF Holdings sought dismissal of "numerous cases" in Californian Federal courts, including Navasca.
The judge denied the dismissal, ruling that Prenda's prior actions in Navasca seemed to be connected to a desire to prevent discovery of damaging evidence which might also impact the Ingenuity 13 case, noting Prenda's disinclination (as in other Prenda cases) to post a bond to support its case (the court ruled: "A plaintiff cannot invoke the benefits of the judicial system without being prepared to satisfy its obligations as a litigant"), and describing it as "telling that AF moved for a voluntary dismissal only two days after... problems related to its standing were explored and exposed by Mr. Navasca". The case was subsequently dismissed with prejudice by the court, as "AF's counsel has now substantially complied with the Court's order". Ken White commented on the ruling: This [with prejudice] order is a body blow to Prenda Law. Judge Chen... is openly suggesting that Prenda's conduct suggests malfeasance and evasion of potential negative rulings. He invited Navasca to file a separate motion for fees, and this order strongly suggests that he will grant such a motion. Judge Chen's dismissal of Prenda's 'it doesn't matter if Cooper's signature is forged' argument suggests that he suspects that Prenda's entire litigation strategy is premised on fraud – that Prenda has manufactured the dispute, and that AF Holdings is merely a front for Prenda Law lawyers.

Motion by defense

On July 4, 2013, defense attorney Nicholas Ranallo filed a motion seeking an award of costs and legal fees against John Steele and Paul Hansmeier personally, rather than against Prenda or the plaintiffs.
The motion summarized the known history and legal cases related to Prenda and its affiliates, and the findings of fraud upon the court, and stated the intent to "hold the individual attorneys that have knowingly committed fraud in this case responsible for their actions by making them jointly and severally liable for the costs and attorney fees incurred", since allegedly "[t]he evidence of fraud is clear and unrebutted, and the involvement of John Steele and Paul Hansmeier is likewise clear. The instant scheme was created by lawyers, for the benefit of lawyers, and it is entirely appropriate that these lawyers should bear the burden that their actions have caused". The motion cited evidence from several Prenda cases, including:

Evidence cited about Salt Marsh, Alan Cooper, and client signatures and identities
- The "Alan Cooper" and "Salt Marsh" signatures on court-filed documents, and Prenda's failure or inability to produce either the individuals responsible or the original signatures in both cases, despite having (per Hansmeier's testimony) obtained those signatures personally;
- Anthony Saltmarsh and Alan Cooper as named identifiable individuals having known personal connections to Steele and Prenda, Steele's change of a GoDaddy account nameholder from "Steele" to "Cooper", and Cooper's denial of being the signatory of the document signed "Alan Cooper";
- Mark Lutz's statements that he represented at least one Prenda shell company, although under oath he was unable to describe even basic details of the company or provide evidence of being its "representative", his exclusion as a "fraud upon the court" in Sunlust vs. Nguyen (see below), and transcripts from that case;
- Brett Gibbs' testimony that he was directed by Steele and Hansmeier and had no personal contact with their clients;
- Steele and Hansmeier's reassurances to Gibbs that these issues and claims of fraud were mere "conspiracy theories" irrelevant to the litigation in Navasca;
- Testimony by Hansmeier that Steele had said the signatures were genuine and that "the only person who knows who this Alan Cooper [is, is] John Steele", testimony that Steele had been asked by Lutz to find a "representative" for AF Holdings for the purposes of litigation, and Steele's denial of these claims in a written statement asserting it had been a favor done at Alan Cooper's request, which was in turn denied by Cooper;
- AF Holdings' filing listing "Salt Marsh" under "Individuals Likely to Have Discoverable Information", and statements that this "Salt Marsh" had undertaken actions such as reading documents, contradicting later statements that "Salt Marsh" was a trust, not an individual.

Other evidence cited
- Expert analysis of "a number of sources" implying that the file-sharing account "sharkmp4" identified in First Time Videos v. Oppold (below) had been linked forensically to Steele and Hansmeier's internet account, the expert conclusion that "the individual that controlled the GoDaddy accounts associated with John Steele... used the exact same IP address as the Pirate Bay user that posted links...on the Pirate Bay [...including...] links to Prenda Law works before those works were publicly available from any source", and the conclusion that "all three entities [John Steele, 6881 Forensics, and sharkmp4] appear under the same control";
- Judicial findings in Ingenuity 13 and other cases of fraud and deception, and the decision by three Prenda principals to plead the Fifth Amendment rather than explain how they conducted their litigation;
- Changing and contradictory stories by plaintiff-related parties.

Other cases involving findings of fraud, deception, or other irregularities

In Sunlust vs. Nguyen, at a November 27, 2012, hearing before U.S. District Judge Mary S. Scriven of the Middle District of Florida, paralegal Mark Lutz claimed to be a "corporate representative" of Prenda's "client" "Sunlust Pictures". However, when placed under oath and questioned by Judge Scriven, Lutz could not name any of Sunlust's officers or directors, nor could he recall who signed his paychecks. As a result, Lutz was excluded as a plaintiff representative by the court. Plaintiff's attorney John Torres stated on oath that he had been engaged by Prenda Law through Brett Gibbs as principal, although with no contract or other documents, and that he was unable to identify his general counsel; Paul Duffy of Prenda Law had denied involvement, stating that the firm was not engaged or a principal attorney in the case. John Steele, present in court and conferring with plaintiff attorneys, described himself as "an attorney but not involved in this case", to which defense counsel Syfert stated that in fact Lutz had previously worked for Steele and Prenda Law, and "he should have better information about the structure of Prenda Law" than he had stated. Following Lutz' exclusion and Torres' withdrawal from the case, Scriven stated she would entertain a motion for sanctions against Prenda and its attorneys for "attempted fraud on the Court", as well as against Duffy for "lack of candor" based upon Torres' testimony.
The defense filed multiple motions for sanctions, but withdrew them all on May 20, 2013; observers concluded an out-of-court settlement had probably been reached instead. On May 21, 2013, Hennepin County, Minnesota District Court Judge Ann L. Alton awarded no damages to Alan Cooper on his identity-theft claim against Steele and Prenda, but she ordered attorney Paul Hansmeier to "stop using Alan Cooper's name" and to "never, ever again send fraudulent demand letters." Alton said she would refer Hansmeier to the Minnesota Lawyers Professional Responsibility Board for [violating] "a whole lot of rules". On June 3, 2013, in another Florida case (First Time Videos v. Oppold), the defense filed a declaration by expert witness Delvan Neville which accused Prenda Law of "seeding" its own content (which was not otherwise available) in an effort to induce copyright infringement. Neville's declaration presented digital forensic evidence that someone with access to the GoDaddy.com account of John Steele was also sharing pornographic content through Pirate Bay user "sharkmp4"; following this, the Pirate Bay released its own logs of the user "sharkmp4", supporting the claims in the declaration. Syfert concluded: "Prenda Law's business structure is such that it is copyright-violating pirate, forensic pirate hunter, and attorney. It also appears that Prenda Law also wants to/has formed/is forming a corporate structure where it is: pornography producer, copyright holder, pornography pirate, forensic investigator, [law] firm, and debt collector." Prenda's connection with the suspect IP address (formally assigned by Comcast to 'Steele Hansmeier PLLC'), which had been used by "sharkmp4" and was linked to unlicensed pornographic media distribution, was subsequently confirmed by Comcast in August 2013 following a subpoena in another Prenda case, AF Holdings v. Patel. In November 2013 (Minnesota v.
Does), apparent Prenda shell AF Holdings was directed to repay settlements obtained from four alleged downloaders, in a ruling that stated there was no evidence that the claims made were truthful, or that the copyrights described were held, authentic, or legally assigned by correct signature, and that "The copyright-assignment agreements [for] each complaint in each of these five cases are not what they purport to be. Alan Cooper denies signing either agreement and also denies giving anyone else the authority to sign them on his behalf. AF Holdings failed to produce any credible evidence that the assignments were authentic. The Court has been the victim of a fraud perpetrated by AF Holdings..." The court, as in other cases, referred the issues of misconduct to "federal and state law enforcement at the direction of the United States Attorney, the Minnesota Attorney General and the Boards of Professional Responsibility", noting that "The Court expressly disbelieves Steele's testimony" explaining the use of Cooper's signature. The repayment order was reversed on appeal in March 2014, as the magistrate judge had exceeded the "inherent authority of the court" and the signatures had not been material to the outcome.

Other significant cases and legal events

AF Holdings v. Patel ("GoDaddy subpoena" case)

In July 2013, following an appearance by Andrew Norton, lead researcher of BitTorrent news site TorrentFreak, as a defense expert witness, a Georgia Federal court ordered discovery for both parties in a November 2012 case filed by AF Holdings, in which the plaintiff's attorney stated he was of counsel to Prenda Law and offered the law firm's telephone number and Brett Gibbs' email. The case became relevant in the context of other cases because of its discovery evidence, which was at times referred to in other cases and courts.
Information and audio records obtained from ISPs were alleged by the defendant to show: that an account in attorney John Steele's name was used to access a domain registered to "Alan Cooper", and that when re-registered to "Mark Lutz" the domain retained Steele's email address; that audio recordings of support requests for a domain registered to "Alan Cooper" seemed to be "[the] same voice", but the caller identified himself variously in the different calls as "Alan Cooper", "John", "John Steele" and "Mark Lutz"; that an IP address used to log into an account registered to "John Steele" was allegedly also involved in uploading copyrighted works for sharing; and that two plaintiff motions filed in the case contained metadata which, it was claimed, indicated they had been "authored by Paul Duffy" of Prenda. In February 2014, with evidence in the underlying case and the "cause" hearing stayed due to inclement weather, AF Holdings applied for Paul Duffy to be appointed as attorney pro hac vice (temporarily, "for this case only") after its prior attorney went back on active duty, and in March 2014 the case was ultimately dismissed with prejudice.

Litigation against Cooper, Cooper's attorney, and online bloggers ("chilling speech" cases)

In February 2013, Prenda Law, Steele and Duffy filed three similar lawsuits with identical titles in state courts in Illinois and Florida, each alleging defamation. The defendants in each case were Alan Cooper (who had commenced litigation over the fraudulent misuse of his name in Prenda cases such as Ingenuity 13), Alan Cooper's attorney Paul Godfread (who alerted the judiciary to those concerns), and "Does" (all persons who had used or merely read, during a two-year period from January 2011 to February 2013, two websites established by victims of online copyright trolling to oppose trolling and support trolling victims, along with the identities of the websites' founding bloggers).
- Prenda Law v. Godfread, et al. - IL case 3:2013cv00207, original filing ?13-L-001656 on February 12, 2013, moved to Federal Court March 1, 2013
- Paul Duffy v. Godfread, et al. - IL case 1:2013cv01569, original filing 13-L-001656 on February 15, 2013, moved to Federal Court February 28, 2013 — consolidated with the above into a single case June 28, 2013
- Steele v. Godfread, et al. - FL case 1:2013cv20741, original filing 13–6680 CA 4 on February 25, 2013, moved to Federal Court March 1, 2013 (no defense filed) — withdrawn via voluntary dismissal March 6, 2013, prior to defense

Subpoenas seeking website visitor information were issued to Automattic (owner of WordPress) and Wild West Domains (part of GoDaddy), which hosted and registered the two websites. Automattic responded that the request was "overly broad" and "legally deficient and objectionable for numerous reasons" and would not be entertained, and the non-profit Electronic Frontier Foundation (EFF) announced almost at the same time that it would offer the anonymous website users free legal defense. Prenda's request for a subpoena was quashed on May 16, 2013, for failure to file a response, and Steele's case was dismissed voluntarily at his own request on March 6, 2013. In a March 2014 ruling on the remaining lawsuits (now consolidated into a single case), Prenda and Duffy were ruled to have engaged in "unreasonable and vexatious" conduct and to have acted duplicitously; the ruling highlighted in addition a finding that "to fabricate what a federal judge said in a ruling before another court falls well outside the bounds of proper advocacy and demonstrates a serious disregard for the judicial process." Godfread and Cooper were granted their motion for sanctions.
The cases were characterized by Techdirt as "basically defamation lawsuits" and in effect SLAPP lawsuits, whose purpose was to chill (i.e., discourage and avert) legitimate public discussion of Prenda and its principals' activities, and to obtain disclosure of online critics' personal information.

Allegations of coercion of, and collusion with, a defendant who agreed to be sued

In Guava LLC v. Merkel (similar to another case, Lightspeed v. Doe), the law practice focused on only one defendant, who it asserted had "hacked" a website and was a member of a "conspiracy" that "colluded" with "multiple co-conspirators", using a "hacked" password to intercept and gain access to Guava's "financial information" and other confidential operational information about Guava's business. On the basis of that assertion, the plaintiffs and/or (according to defendant testimony) Prenda and its affiliates also sought to obtain details of numerous other potential targets, including ISP subpoenas concerning numerous other individuals, as well as damages from the defendant "in excess of $100,000". The defendant in Guava testified that he was contacted by Prenda and given an ultimatum: he could "agree to be sued" (in which case Prenda would suggest a choice of attorney, he would in return provide Prenda a copy of data concerning file-sharing activities, and his case would then be dismissed); he could pay $3,400 to close the case silently; or he could face the risk of heavy penalties (figures of $222,000 and $675,000 were cited from other cases) if the court found against him for file-sharing a pornographic item for which Guava claimed to be the copyright holder. The defendant's testimony included the statement that: "After subpoenas were served in the case against me, I learned of Guava LLC's and Prenda Law's practice of finding one John Doe to be a named defendant, and then discovering the names of and requesting settlement money from other John Does by issuing subpoenas to ISPs."
Once the lawsuit was filed, the defendant's attorney stipulated with Prenda that Prenda could issue subpoenas to the defendant's purported "co-conspirators." The defendant's attorney and Prenda's attorney then submitted an "agreed order" to the judge, who promptly signed it. Techdirt commented that "Judges in underfunded county courts are happy when defendant and plaintiff agree on something, and (often) endorse such agreements", and that this enabled Prenda and affiliates to issue subpoenas seeking the identities of untold hundreds or thousands of ISP subscribers, allowing Prenda to send out hundreds or thousands of new demands for "settlement". Ars Technica opined that in acting this way, Guava was in effect colluding with a "sham" defendant to obtain legal orders identifying other internet users who might be targeted, and that "fil[ing] fake lawsuits against defendants who were in bed with Prenda" was used by the plaintiffs to reduce the risk of a named defendant actually fighting the case in court; the defendant decided to testify about the arrangement after Prenda continued to demand money. The activist website "fightcopyrighttrolls.com" opined that the Guava LLC v. Skylar case, Arte de Oaxaca LLC v. Stacey Mullen [plaintiff: LW Holdings], and other cases with "a single mysterious defendant and...an 'agreed order' allowing unmasking [of] hundreds and thousands of ISP subscribers" might be examples of similar Prenda state court cases, in which plaintiffs and/or their affiliates might in effect have coerced individuals they claimed to be downloaders into "playing a defendant" in a "sham lawsuit", to obtain testimony and non-party subpoenas which could be similarly exploited through litigation. File sharing writer and former copyright enforcer Ben Jones opined about the case that "These [hacking] claims were, it seems, to get around the problems of having already sued and settled [Merkel's] copyright case.
Using state laws they could file in state courts, and keep a distance between this case and the Doe case filed in DC that Merkel settled." On January 22, 2013, four ISPs asked the court to quash subpoenas in Guava, on the grounds that "new information strongly indicates that the present action may be nothing more than a contrived lawsuit with no actual controversy", in which the defendant had agreed to act as a defendant "for the sole purpose of facilitating Guava's pursuit of non-party discovery..." The case was eventually dismissed with prejudice, and costs and legal fees of $63,367.52 were awarded against Guava LLC, Alpha Law Firm LLC (a further plaintiff with links to Prenda that had joined the case) and Guava's counsel of record.

U.S. legal background

Signatures in the cases

Attorney Cathy Gellis commented on the significance of the "Salt Marsh" and "Alan Cooper" signatures on legal papers in Navasca and Ingenuity 13: "[T]ransferring the copyright [to a Prenda "client"]... shows that someone has a copyright. It doesn't show that someone has standing to come into court to enforce it. Given that Prenda Law has been unable to substantiate who that someone is, all of these cases have become suspect on that basis". In a similar manner, "The 'Alan Cooper' problem... stems from certain paperwork allegedly 'signed' by a Mr. Cooper that doesn't seem to exist, thereby creating a fundamental standing issue for all these cases". Legally, the act of knowingly falsifying a signature on a court document, or asserting a false statement about standing, would potentially be a fraud upon the court. Judge Wright stated in Ingenuity 13 that "Although the recipient of a copyright assignment need not sign the document, a forgery is still a forgery. And trying to pass that forged document by the Court smacks of fraud".
Case law related to the purpose of judicial processes

Federal courts in the United States seek to procure the orderly, just and timely resolution of genuine controversies and disputes, which limits the purposes for which legal processes can be used, and their manner of use. In case law this has meant at times that motive becomes a distinction in US court cases, so that a case or motion perceived by the court to have an inappropriate motive or reason may be distinguished, and expedited subpoenas or discovery (against parties or non-parties) may be declined by a court if ill-suited to the circumstances or perceived to have an inappropriate purpose.

US law on costs of litigation

US law differs from that of some other countries on how legal fees and court costs are allocated following litigation, in a way that affects some kinds of litigation. Unlike most other common-law countries, the "American rule" stipulates that, while discretion and many exceptions exist, the general principle is that each party pays its own attorney's fees regardless of the outcome, unless legally stated otherwise. This can avoid a deterrent effect on a poor plaintiff, but can also give even a defendant who might defend successfully a good incentive to settle and pay, if doing so will cost less than the likely cost of defense and any appeals, or if they lack resources for the procedures that may be required.

External links

Prenda Law website snapshot from 2012 (Archive.org record for wefightpiracy.com 2012-03-20) – Prenda Law does not appear to have a current active website, as it dissolved in 2013.
"John Henry Lawyer's" Prenda Timeline
'PrendaWiki' – website created "for tracking the many Copyright cases collectively termed 'Copyright troll cases'... [with] a focus on documents, people, cases, data"
Prenda Law entries and posts on legal blog 'Popehat'
Prenda Law filings and case documents at Digital Media Law Project
https://en.wikipedia.org/wiki/Metaphenomics
Metaphenomics studies the phenome of plants or other organisms by means of meta-analysis. Its main goal is to establish dose-response relationships of a wide range of phenotypic traits for a large set of abiotic environmental factors.

Rationale

A popular way to study the effect of the environment on plants is to set up experiments where subgroups of individuals of a species of interest are exposed to different levels of one environmental factor (e.g. light, CO2), while all other factors are kept equal. These studies have yielded considerable insight into the way plants respond to the environment, but can be challenging to integrate by means of a classical meta-analysis. One reason is that phenotypic traits often respond to the environment in a non-linear way. Rather than evaluating the difference between 'low-CO2' and 'high-CO2' grown plants, it is better to derive dose-response curves which take into account the levels at which experiments were carried out. Metaphenomics uses a method to calculate such dose-response curves from a variety of experiments, and it is applicable to any phenotypic trait and many environmental variables.

Method

The core of the method used in metaphenomics is to scale all phenotypic data for a given species or genotype, across all levels of the environmental variable of interest (say CO2), to the value they have at a reference level of that environmental variable (for example, a CO2 concentration of 400 ppm). In this way, inherent variation among species or genotypes in the trait of interest is removed: for all experiments and species, the scaled value at 400 ppm is 1.0. Subsequently, general dose-response curves can be derived by fitting mathematical equations to the scaled data.
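The scaling step described above can be sketched in a few lines of Python. The data, species names and record layout below are invented purely for illustration (and a real analysis would need to interpolate when an experiment lacks a measurement at exactly the reference level); only the normalization logic reflects the method:

```python
# Illustrative sketch of the metaphenomics scaling step.
# All records below are hypothetical; only the normalization logic matters.

# Each record: (experiment id, species, CO2 level in ppm, trait value)
records = [
    ("exp1", "species A", 200, 2.0),
    ("exp1", "species A", 400, 3.0),
    ("exp1", "species A", 800, 3.6),
    ("exp2", "species B", 400, 10.0),
    ("exp2", "species B", 700, 12.5),
]

REFERENCE_CO2 = 400  # reference level of the environmental variable

def scale_to_reference(records, reference=REFERENCE_CO2):
    """Divide each trait value by the value measured in the same
    experiment/species at the reference CO2 level, so that every scaled
    series equals 1.0 at the reference and becomes comparable across
    species with very different absolute trait values."""
    ref = {}
    for exp, sp, co2, val in records:
        if co2 == reference:
            ref[(exp, sp)] = val
    return [
        (exp, sp, co2, val / ref[(exp, sp)])
        for exp, sp, co2, val in records
        if (exp, sp) in ref
    ]

scaled = scale_to_reference(records)
for row in scaled:
    print(row)  # every 400 ppm row is scaled to exactly 1.0
```

A general dose-response curve can then be fitted to the pooled scaled values (for example with scipy.optimize.curve_fit), since the inherent between-species differences in absolute trait values have been removed.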
Outcome

The results are typically a family of curves in which dose-response curves for one phenotypic trait are compared across a range of different environmental variables, or in which many different phenotypic traits are analysed for their response to one environmental factor. This provides a simple and quantitative overview of the many ways plants or other organisms respond to their environment.

See also

Dose-response relationship
Meta-analysis
Phenome
https://en.wikipedia.org/wiki/Stream%20restoration
Stream restoration or river restoration, also sometimes referred to as river reclamation, is work conducted to improve the environmental health of a river or stream, in support of biodiversity, recreation, flood management and/or landscape development. Stream restoration approaches can be divided into two broad categories: form-based restoration, which relies on physical interventions in a stream to improve its conditions; and process-based restoration, which advocates the restoration of hydrological and geomorphological processes (such as sediment transport or connectivity between the channel and the floodplain) to ensure a stream's resilience and ecological health. Form-based restoration techniques include deflectors; cross-vanes; weirs, step-pools and other grade-control structures; engineered log jams; bank stabilization methods and other channel-reconfiguration efforts. These induce immediate change in a stream, but sometimes fail to achieve the desired effects if degradation originates at a wider scale. Process-based restoration includes restoring lateral or longitudinal connectivity of water and sediment fluxes and limiting interventions within a corridor defined based on the stream's hydrology and geomorphology. The beneficial effects of process-based restoration projects may sometimes take time to be felt, since changes in the stream will occur at a pace that depends on the stream dynamics. Despite the significant number of stream-restoration projects worldwide, the effectiveness of stream restoration remains poorly quantified, partly due to insufficient monitoring. However, in response to growing environmental awareness, stream-restoration requirements are increasingly adopted in legislation in different parts of the world.

Definition, objectives and popularity

Stream restoration or river restoration, sometimes called river reclamation in the United Kingdom, is a set of activities that aim to improve the environmental health of a river or stream.
These activities aim to restore rivers and streams to their original state or to a reference state, in support of biodiversity, recreation, flood management, landscape development, or a combination of these objectives. Stream restoration is generally associated with environmental restoration and ecological restoration. In that sense, stream restoration differs from:

river engineering, a term which typically refers to physical alterations of a water body for purposes that include navigation, flood control or water supply diversion, and which are not necessarily related to ecological restoration;
waterway restoration, a term used in the United Kingdom describing alterations to a canal or river to improve navigability and related recreational amenities.

Improved stream health may be indicated by expanded habitat for diverse species (e.g. fish, aquatic insects, other wildlife) and reduced stream bank erosion, although bank erosion is increasingly recognized as contributing to the ecological health of streams. Enhancements may also include improved water quality (i.e., reduction of pollutant levels and increase of dissolved oxygen levels) and achieving a self-sustaining, resilient stream system that does not require periodic human intervention, such as dredging or construction of flood or erosion control structures. Stream restoration projects can also yield increased property values in adjacent areas. In the past decades, stream restoration has emerged as a significant discipline in the field of water-resources management, owing to the degradation of many aquatic and riparian ecosystems by human activities. In the U.S. alone, it was estimated in the early 2000s that more than one billion U.S. dollars were spent each year to restore rivers, and that close to 40,000 restoration projects had been conducted in the continental part of the country.
Restoration approaches and techniques

Stream restoration activities may range from the simple improvement or removal of a structure that inhibits natural stream functions (e.g. repairing or replacing a culvert, or removing barriers to fish passage such as weirs), to the stabilization of stream banks, or other interventions such as riparian zone restoration or the installation of stormwater-management facilities like constructed wetlands. The use of recycled water to augment stream flows that have been depleted as a result of human activities can also be considered a form of stream restoration. When present, navigation locks have the potential to be operated as vertical slot fishways, restoring fish passage to some extent for a wide range of fish, including poor swimmers. Stream-restoration projects normally begin with an assessment of the focal stream system, including climatic data, geology, watershed hydrology, stream hydraulics, sediment transport patterns, channel geometry, historical channel mobility, and flood records. Numerous systems exist to classify streams according to their geomorphology. This preliminary assessment helps in understanding the stream dynamics and determining the cause of the observed degradation to be addressed; it can also be used to define the target state for the intended restoration work, especially since the "natural" or undisturbed state is sometimes no longer achievable due to various constraints. Two broad approaches to stream restoration have been defined in the past decades: form-based restoration and process-based restoration. Whereas the former focuses on the restoration of structural features and/or patterns considered to be characteristic of the target stream system, the latter is based on the restoration of hydrological and geomorphological processes (such as sediment transport or connectivity between the channel and the floodplain) to ensure a stream's resilience and ecological health.
Form-based restoration

Form-based stream restoration promotes the modification of a stream channel to improve stream conditions. Targeted outcomes can include improved water quality, enhanced fish habitat and abundance, as well as increased bank and channel stability. This approach is widely used worldwide, and is supported by various government agencies, including the United States Environmental Protection Agency (U.S. EPA). Form-based restoration projects can be carried out at various scales, including the reach scale. They can include measures such as the installation of in-stream structures, bank stabilization, and more significant channel-reconfiguration efforts. Reconfiguration work may focus on channel shape (in terms of sinuosity and meander characteristics), cross-section or channel profile (slope along the channel bed). These alterations affect the dissipation of energy through a channel, which impacts flow velocity and turbulence, water-surface elevations, sediment transport, and scour, among other characteristics.

Installation of in-stream structures

Deflectors

Deflectors are generally wooden or rock structures installed at a bank toe and extending towards the center of a stream, in order to concentrate stream flow away from its banks. They can limit bank erosion and generate varying flow conditions in terms of depth and velocity, which can positively impact fish habitat.

Cross-vanes and related structures

Cross-vanes are U-shaped structures made of boulders or logs, built across the channel to concentrate stream flow in the center of the channel and thereby reduce bank erosion. They do not impact channel capacity and provide other benefits such as improved habitat for aquatic species. Similar structures used to dissipate stream energy include W-weirs and J-hook vanes.
Weirs, step pools and grade-control structures

These structures, which can be built with rocks or wood (logs or woody debris), gradually lower the elevation of the stream and dissipate flow energy, thereby reducing flow velocity. They can help limit bed degradation. They generate water accumulation upstream and fast-flowing conditions downstream, which can improve fish habitat. However, they can limit fish passage if they are too high.

Engineered log jams

An emerging stream-restoration technique is the installation of engineered log jams. Because of channelization and the removal of beaver dams and woody debris, many streams lack the hydraulic complexity that is necessary to maintain bank stabilization and healthy aquatic habitats. Reintroduction of large woody debris into streams is a method being tried in streams such as Lagunitas Creek in Marin County, California, and Thornton Creek in Seattle, Washington. Log jams add diversity to the water flow by creating riffles, pools, and temperature variations. Large wood pieces, both living and dead, play an important role in the long-term stability of engineered log jams. However, individual pieces of wood in log jams are rarely stable over long periods and are naturally transported downstream, where they can get trapped in other log jams, other stream features or human infrastructure, which can generate nuisances for human use.

Bank stabilization

Bank stabilization is a common objective of stream-restoration projects, although bank erosion is generally viewed as favorable for the sustainability and diversity of aquatic and riparian habitats. This technique may be employed where a stream reach is highly confined, or where infrastructure is threatened. Bank stabilization is achieved through the installation of riprap or gabions, or through the use of revegetation and/or bioengineering methods, which rely on the use of live plants to build bank-stabilizing structures.
As new plants sprout from the live branches, the roots anchor the soil and prevent erosion. This makes bioengineering structures more natural and more adaptable to evolving conditions than "hard" engineering structures. Bioengineering structures include fascines, brush mattresses, brush layers, and vegetated geogrids.

Other channel-reconfiguration techniques

Channel reconfiguration involves the physical modification of the stream. Depending on the scale of a project, a channel's cross-section can be modified, and meanders can be constructed through earthworks to achieve the target stream morphology. In the U.S., such work is frequently based on Natural Channel Design (NCD), a method developed in the 1990s. This method involves a classification of the stream to be restored based on parameters such as channel pattern and geometry, topography, slope, and bed material. This classification is followed by a design phase based on the NCD method, which includes 8 phases and 40 steps. The method relies on the construction of the desired morphology, and its stabilization with natural materials such as boulders and vegetation to limit erosion and channel mobility.

Criticisms of form-based restoration

Despite its popularity, form-based restoration has been criticized by the scientific community. Common criticisms are that the scale at which form-based restoration is applied is often much smaller than the spatial and temporal scales of the processes that cause the observed problems, and that the target state is frequently influenced by the social conception of what a stream should look like and does not necessarily take into account the stream's geomorphological context (e.g., meandering rivers tend to be viewed as more "natural" and more beautiful, whereas local conditions sometimes favour other patterns such as braided rivers).
Numerous criticisms have also been directed at the NCD method by fluvial geomorphologists, who claim that the method is a "cookbook" approach sometimes used by practitioners who do not have sufficient knowledge of fluvial geomorphology, resulting in project failures. Another criticism is the importance given to channel stability in the NCD method (and in some other form-based restoration methods), which can limit streams' alluvial dynamics and adaptability to evolving conditions. The NCD method has also been criticized for its improper application in the Washington, D.C. area to small-order, interior-forested, upper-headwater streams and wetlands, leading to loss of natural forest ecosystems.

Process-based restoration

Contrary to form-based restoration, which consists of improving a stream's conditions by modifying its structure, process-based restoration focuses on restoring the hydrological and geomorphological processes (or functions) that contribute to the stream's alluvial and ecological dynamics. This type of stream restoration has gained in popularity since the mid-1990s as a more ecosystem-centered approach. Process-based restoration includes restoring lateral connectivity (between the stream and its floodplain), longitudinal connectivity (along the stream) and water and/or sediment fluxes, which might be impacted by hydro-power dams, grade-control structures, erosion-control structures and flood-protection structures. Valley Floor Resetting epitomises process-based restoration by infilling the river channel and allowing the stream to carve its anastomosed channel anew, matching 'Stage Zero' on the Stream Evolution Model. In general, process-based restoration aims to maximize the resilience of the system and minimize maintenance requirements.
In some instances, form-based restoration methods might be coupled with process-based restoration to restore key structures and achieve quicker results while waiting for the restored processes to ensure adequate conditions in the long term.

Improving connectivity

The connectivity of streams to their adjacent floodplain along their entire length plays an important role in the equilibrium of the river system. Streams are shaped by the water and sediment fluxes from their watershed, and any alteration of these fluxes (in quantity, intensity or timing) will result in changes in equilibrium planform and cross-sectional geometry, as well as modifications of the aquatic and riparian ecosystem. Removal or modification of levees can allow a better connection between streams and their floodplain. Similarly, removing dams and grade-control structures can restore water and sediment fluxes and result in more diversified habitats, although impacts on fish communities can be difficult to assess. In streams where existing infrastructure cannot be removed or modified, it is also possible to optimize sediment and water management in order to maximize connectivity and achieve flow patterns that meet minimum ecosystem requirements. This can include releases from dams, but also delaying and/or treating water from agricultural and urban sources.

Implementing a minimum stream corridor width

Another method of ensuring the ecological health of streams while limiting impacts on human infrastructure is to delineate a corridor within which the stream is expected to migrate over time. This method is based on the concept of minimum intervention within this corridor, whose limits should be determined based on the stream's hydrology and geomorphology. Although this concept is often restricted to the lateral mobility of streams (related to bank erosion), some systems also integrate the space necessary for floods of various return periods.
This concept has been developed and adapted in various countries around the world, resulting in the notion of "stream corridor" or "river corridor" in the U.S., "room for the river" in the Netherlands, "freedom space" in France (where the concept of "erodible corridor" is also used) and Québec (Canada), "space reserved for water(courses)" in Switzerland, a similar concept in Italy, "fluvial territory" in Spain and "making space for water" in the United Kingdom. A cost-benefit analysis has shown that this approach could be beneficial in the long term due to lower stream stabilization and maintenance costs, lower damages resulting from erosion and flooding, and the ecological services rendered by restored streams. However, this approach cannot be implemented alone if watershed-scale stressors contribute to stream degradation.

Additional practices

In addition to the aforementioned restoration approaches and methods, additional measures can be implemented if stream degradation factors occur at the watershed scale. First, high-quality areas should be protected. Additional measures include revegetation/reforestation efforts (ideally with native species); the adoption of agricultural best management practices that minimize erosion and runoff; adequate treatment of sewage water and industrial discharge across the watershed; and improved stormwater management to delay/minimize the transport of water to the stream and minimize pollutant migration. Alternative stormwater-management facilities include the following options:

Bioretention systems and rain gardens
Constructed wetlands
Infiltration basins
Retention basins

Effectiveness of stream restoration projects

In the 2000s, a study of stream restoration efforts in the U.S. led to the creation of the National River Restoration Science Synthesis (NRRSS) database, which included information on over 35,000 stream restoration projects carried out in the U.S.
Synthesizing efforts are also carried out in other parts of the world, such as Europe. However, despite the large number of stream restoration projects carried out each year worldwide, the effectiveness of stream restoration projects remains poorly quantified. This situation appears to result from limited data on the restored streams' biophysical and geochemical contexts, from insufficient post-restoration monitoring work, and from the varying metrics used to evaluate project effectiveness. Depending on the objectives of the restoration project, the goals (restoration of fish populations, of alluvial dynamics, etc.) may take considerable time to be fully achieved. Therefore, whereas monitoring efforts should be proportional to the scale of the situation to be addressed, long-term monitoring is often necessary in order to fully evaluate a project's effectiveness. In general, project effectiveness has been found to depend on the selection of a restoration method appropriate to the nature, cause and scale of the degradation problem. As such, reach-scale projects generally fail at restoring conditions whose root cause lies at the watershed scale, such as water-quality issues. Furthermore, project failures have sometimes been attributed to designs based on insufficient scientific grounding; in some cases, restoration techniques may have been selected mainly for aesthetic reasons. Additional factors that can influence the effectiveness of river restoration projects include the selection of sites to be restored (for example, sites located near undisturbed reaches could be recolonized more effectively) and the amount of tree cutting and other destructive work necessary to carry out the restoration (which can have long-lasting detrimental effects on habitat quality). Although often viewed as a challenge, public involvement is generally considered a positive factor for the long-term success of stream restoration projects.
Introduction in legislation

Stream restoration is gradually being introduced into the legislative framework of various states. Examples include the European Water Framework Directive's commitment to restoring surface water bodies, the adoption of the concept of freedom space in French legislation, the inclusion in Swiss legislation of the notion of space reserved for watercourses and of the requirement to restore streams to a state close to their natural state, and the inclusion of river corridors in land use planning in the American states of Vermont and Washington. Although this evolution is generally viewed positively by the scientific community, a concern expressed by some is that it could lead to less flexibility and less room for innovation in a field that is still in development.

Informational resources

The River Restoration Centre, based at Cranfield University, is responsible for the National River Restoration Inventory, which is used to document best practice in river watercourse and floodplain restoration, enhancement and management efforts in the United Kingdom. Other established sources for information on stream restoration include the NRRSS in the U.S. and the European Centre for River Restoration (ECRR), which holds details of projects across Europe. ECRR and the LIFE+ RESTORE project have developed a wiki-based inventory of river restoration case studies.

See also

Daylighting (streams)
Environmental restoration
Land rehabilitation
Retrofit (environmental management)
Restoration ecology
Riparian zone restoration
Subterranean river

Notes

Federal Interagency Stream Restoration Working Group (United States) (2001). Stream Corridor Restoration: Principles, Processes, and Practices. GPO Item No. 0120-A; SuDocs No. A 57.6/2:EN 3/PT.653.
https://en.wikipedia.org/wiki/Ces%C3%A0ro%20summation
In mathematical analysis, Cesàro summation (also known as the Cesàro mean or Cesàro limit) assigns values to some infinite sums that are not necessarily convergent in the usual sense. The Cesàro sum is defined as the limit, as n tends to infinity, of the sequence of arithmetic means of the first n partial sums of the series. This special case of a matrix summability method is named for the Italian analyst Ernesto Cesàro (1859–1906). The term summation can be misleading, as some statements and proofs regarding Cesàro summation can be said to implicate the Eilenberg–Mazur swindle. For example, it is commonly applied to Grandi's series with the conclusion that the sum of that series is 1/2. Definition Let be a sequence, and let be its th partial sum. The sequence is called Cesàro summable, with Cesàro sum , if, as tends to infinity, the arithmetic mean of its first n partial sums tends to : The value of the resulting limit is called the Cesàro sum of the series If this series is convergent, then it is Cesàro summable and its Cesàro sum is the usual sum. Examples First example Let for . That is, is the sequence Let denote the series The series is known as Grandi's series. Let denote the sequence of partial sums of : This sequence of partial sums does not converge, so the series is divergent. However, Cesàro summable. Let be the sequence of arithmetic means of the first partial sums: Then and therefore, the Cesàro sum of the series is . Second example As another example, let for . That is, is the sequence Let now denote the series Then the sequence of partial sums is Since the sequence of partial sums grows without bound, the series diverges to infinity. The sequence of means of partial sums of G is This sequence diverges to infinity as well, so is Cesàro summable. 
In fact, for any sequence which diverges to (positive or negative) infinity, the Cesàro method leads to a sequence of means that diverges likewise, and hence such a series is not Cesàro summable.

(C, α) summation
In 1890, Ernesto Cesàro stated a broader family of summation methods which have since been called (C, α) for non-negative integers α. The (C, 0) method is just ordinary summation, and (C, 1) is Cesàro summation as described above. The higher-order methods can be described as follows: given a series ∑ a_n, define the quantities

A_n^{−1} = a_n,  A_n^{α} = ∑_{k=0}^{n} A_k^{α−1}

(where the upper indices do not denote exponents) and define E_n^{α} to be A_n^{α} for the series 1 + 0 + 0 + 0 + ... Then the (C, α) sum of ∑ a_n is denoted by (C, α)-∑ a_n and has the value

lim_{n→∞} A_n^{α}/E_n^{α}

if it exists. This description represents an α-times iterated application of the initial summation method and can be restated as

(C, α)-∑_{j=0}^∞ a_j = lim_{n→∞} ∑_{j=0}^{n} (C(n, j)/C(n + α, j)) a_j.

Even more generally, for α not a negative integer, let A_n^{α} be implicitly given by the coefficients of the series

∑_{n=0}^∞ A_n^{α} x^n = (∑_{n=0}^∞ a_n x^n) / (1 − x)^{1+α},

and E_n^{α} as above. In particular, the E_n^{α} are the binomial coefficients of power −1 − α. Then the (C, α) sum of ∑ a_n is defined as above. If ∑ a_n has a (C, α) sum, then it also has a (C, β) sum for every β > α, and the sums agree; furthermore we have a_n = o(n^α) if α > −1 (see little-o notation).

Cesàro summability of an integral
Let α ≥ 0. The integral ∫_0^∞ f(x) dx is (C, α) summable if

lim_{λ→∞} ∫_0^λ (1 − x/λ)^α f(x) dx

exists and is finite. The value of this limit, should it exist, is the (C, α) sum of the integral. Analogously to the case of the sum of a series, if α = 0, the result is convergence of the improper integral. In the case α = 1, (C, 1) convergence is equivalent to the existence of the limit

lim_{λ→∞} (1/λ) ∫_0^λ (∫_0^x f(y) dy) dx,

which is the limit of means of the partial integrals. As is the case with series, if an integral is (C, α) summable for some value of α ≥ 0, then it is also (C, β) summable for all β > α, and the value of the resulting limit is the same.
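The recurrence A_n^{−1} = a_n, A_n^{α} = ∑_{k≤n} A_k^{α−1} for integer α amounts to repeated cumulative summation, which suggests a short numerical sketch (the function name `c_alpha_partial` is my own). The classic test case 1 − 2 + 3 − 4 + ... is not (C, 1) summable, but its (C, 2) sum is 1/4:

```python
# Sketch of higher-order (C, alpha) sums for non-negative integer alpha,
# using the recurrence A_n^{-1} = a_n, A_n^alpha = sum_{k<=n} A_k^{alpha-1},
# with E_n^alpha the same quantity computed for the series 1 + 0 + 0 + ...
from itertools import accumulate

def c_alpha_partial(terms, alpha):
    """Return the sequence A_n^alpha / E_n^alpha for integer alpha >= 0."""
    a = list(terms)
    e = [1] + [0] * (len(a) - 1)        # the series 1 + 0 + 0 + ...
    for _ in range(alpha + 1):          # alpha + 1 cumulative sums
        a = list(accumulate(a))
        e = list(accumulate(e))
    return [x / y for x, y in zip(a, e)]

series = [(-1) ** n * (n + 1) for n in range(20_000)]   # 1, -2, 3, -4, ...
approx = c_alpha_partial(series, 2)[-1]
print(approx)   # close to 0.25
```

Here E_n^2 = C(n + 2, 2), matching the binomial-coefficient description above.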
See also Abel summation Abel's summation formula Abel–Plana formula Abelian and tauberian theorems Almost convergent sequence Borel summation Divergent series Euler summation Euler–Boole summation Fejér's theorem Hölder summation Lambert summation Perron's formula Ramanujan summation Riesz mean Silverman–Toeplitz theorem Stolz–Cesàro theorem Cauchy's limit theorem Summation by parts References Summability methods Means
Cesàro summation
Physics,Mathematics
875
12,965,053
https://en.wikipedia.org/wiki/Wigner%E2%80%93Seitz%20radius
The Wigner–Seitz radius r_s, named after Eugene Wigner and Frederick Seitz, is the radius of a sphere whose volume is equal to the mean volume per atom in a solid (for first group metals). In the more general case of metals having more valence electrons, r_s is the radius of a sphere whose volume is equal to the volume per free electron. This parameter is used frequently in condensed matter physics to describe the density of a system. Note that r_s is calculated for bulk materials.

Formula
In a 3-D system with N free valence electrons in a volume V, the Wigner–Seitz radius is defined by

(4/3) π r_s^3 = V/N = 1/n,

where n = N/V is the particle density. Solving for r_s we obtain

r_s = (3/(4π n))^{1/3}.

The radius can also be calculated as

r_s = (3M/(4π ρ z N_A))^{1/3},

where M is molar mass, z is the count of free valence electrons per particle, ρ is mass density and N_A is the Avogadro constant. This parameter is normally reported in atomic units, i.e., in units of the Bohr radius.

Assuming that each atom in a simple metal cluster occupies the same volume as in a solid, the radius of the cluster is given by

R = r_s n^{1/3},

where n is the number of atoms. Values of r_s for the first group metals, in units of the Bohr radius: Li 3.25, Na 3.93, K 4.86, Rb 5.20, Cs 5.62.

The Wigner–Seitz radius is related to the electronic density ρ by the formula

r_s = (3/(4π ρ))^{1/3},

where ρ can be regarded as the average electronic density in the outer portion of the Wigner-Seitz cell.

See also Wigner–Seitz cell Wigner crystal References Atomic radius
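The second formula above can be sketched numerically. A minimal example using sodium, assuming standard handbook values for its density and molar mass (the function name `wigner_seitz_radius` is my own):

```python
# Sketch: computing the Wigner-Seitz radius r_s = (3M/(4*pi*rho*z*N_A))**(1/3)
# from mass density, using sodium as an example. The density and molar mass
# below are standard handbook values (assumed here, not from the article).
import math

N_A = 6.02214076e23        # Avogadro constant, 1/mol
a0 = 0.529177210903e-10    # Bohr radius, m

def wigner_seitz_radius(density_kg_m3, molar_mass_kg_mol, z=1):
    """Return r_s in metres from mass density, molar mass and valence z."""
    n = density_kg_m3 * N_A * z / molar_mass_kg_mol   # electron density, 1/m^3
    return (3.0 / (4.0 * math.pi * n)) ** (1.0 / 3.0)

rs = wigner_seitz_radius(density_kg_m3=971.0, molar_mass_kg_mol=22.99e-3)
print(rs / a0)   # roughly 3.9-4.0 Bohr radii for sodium (z = 1)
```

The result is close to the tabulated value of 3.93 for sodium; the small difference comes from the density value used.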
Wigner–Seitz radius
Physics,Chemistry
291
96,423
https://en.wikipedia.org/wiki/Kumulipo
In Hawaiian religion, the Kumulipo is the creation chant, first recorded in the 18th century. It also includes a genealogy of the members of Hawaiian royalty and was created in honor of Kalaninuiamamao and passed down orally to his daughter Alapaiwahine.

Creation chant
In the Kumulipo the world was created over a cosmic night. This is not just one night, but many nights over time. The ancient Hawaiian kahuna and priests of the Hawaiian religion would recite the Kumulipo during the makahiki season, honoring the god Lono. In 1779, Captain James Cook arrived in Kealakekua Bay on the island of Hawaii during the season and was greeted by the Hawaiians reciting the Kumulipo. Some stories say Cook was mistaken for Lono, because of the type of sails on his ship and his pale skin tone. In 1889, King Kalākaua printed a sixty-page pamphlet of the Kumulipo. Attached to the pamphlet was a two-page paper on how the chant was originally composed and recited. Years later Queen Liliuokalani described the chant as a prayer of the development of the universe and the ancestry of the Hawaiians. Liliuokalani translated the chant under house arrest in Iolani Palace. The translation was published in 1897, then republished by Pueo Press in 1978. The Kumulipo is a total of 2,102 lines long, in honor of Kalaninuiamamao, who created peace for all when he was born. There had been much fighting between his ʻI and Keawe relatives, who were cousins, so his birth stopped the two families from feuding. The Kumulipo is a cosmogonic genealogy, which means that it relates to the creation of the universe and the descent of humans and other entities. Across its 2,102 lines, it has 16 "wā", which means era or age. In each wā, something is born, whether it is a human, plant, or other creature.

Divisions
The Kumulipo is divided into sixteen wā, or sections. The first seven wā fall under the section of pō (darkness), the age of spirit.
The Earth may or may not exist, but the events described do not take place in a physical universe. The words show the development of life as it goes through similar stages as a human child. All plants and animals of sea and land, earth and sky, male and female are created. Eventually, it leads to early mammals. These are the first twelve lines of the Kumulipo, in Hawaiian, in Liliuokalani's English translation and in Bastian's German translation. Two other significant English translations - Rock's translation of Bastian and Beckwith's translation - appear in Beckwith's 1951 book The Kumulipo. {| border="0" |- ! Hawaiian language ! English (Liliuokalani) ! German (Bastian) |- | O ke au i kahuli wela ka honua O ke au i kahuli lole ka lani O ke au i kukaʻiaka ka la E hoʻomalamalama i ka malama O ke au o Makaliʻi ka po O ka walewale hoʻokumu honua ia O ke kumu o ka lipo, i lipo ai O ke kumu o ka Pō, i po ai O ka lipolipo, o ka lipolipo O ka lipo o ka la, o ka lipo o ka po Po wale ho--ʻi Hānau ka pō | At the time that turned the heat of the earth, At the time when the heavens turned and changed, At the time when the light of the sun was subdued To cause light to break forth, At the time of the night of Makaliʻi (winter) Then began the slime which established the earth, The source of deepest darkness, of the depth of darkness, The source of Night, of the depth of night Of the depth of darkness, Of the darkness of the sun in the depth of night, Night is come, Born is Night | Hin dreht der Zeitumschwung zum Ausgebrannten der Welt, Zurück der Zeitumschwung nach aufwärts wieder, Noch sonnenlos die Zeit verhüllten Lichtes, Und schwankend nur im matten Mondgeschimmer Aus Makalii's nächt'gem Wolkenschleier Durchzittert schaftenhaft das Grundbild künft'ger Welt. 
Des Dunkels Beginn aus den Tiefen (Wurzeln) des Abgrunds, Der Uranfang von Nacht in Nacht, Von weitesten Fernen her, von weitesten Fernen, Weit aus den Fernen der Sonne, weit aus den Fernen der Nacht, Noch Nacht ringsumher. |} The second section, containing the remaining nine wā, is ao and is signaled by the arrival of light and the gods, who watch over the changing of animals into the first humans. After that is the complex genealogy of Kalaninuiamamao that goes all the way to the late 18th century. Births in each wā The births in each age include: In the first wā, the sea urchins and limu (seaweed) were born. The limu was connected through its name to the land ferns. Some of these limu and fern pairs include: ʻEkaha and ʻEkahakaha, Limu ʻAʻalaʻula and ʻalaʻalawainui mint, Limu Manauea and Kalo Maunauea upland taro, Limu Kala and ʻakala berry. These plants were born to protect their sea cousins. In the second wā, 73 types of fish. Some deep sea fish include Naiʻa (porpoise) and the Mano (shark). Also reef fish, including Moi and Weke. Certain plants that have similar names are related to these fish and are born as protectors of the fish. In the third wā, 52 types of flying creatures, which include birds of the sea such as ʻIwa (frigate or man-of-war bird), the Lupe, and the Noio (Hawaiian noddy tern). These sea birds have land relatives, such as Io (hawk), Nene (goose), and Pueo (owl). In this wā, insects were also born, such as Peʻelua (caterpillar) and the Pulelehua (butterfly). In the fourth wā, the creepy and crawly creatures are born. These include Honu (sea turtle), Ula (lobster), Moʻo (lizards), and Pololia (jellyfish). Their cousins on land include Kuhonua (maile vine) and ʻOheʻohe bamboo. In the fifth wā, Kalo (taro) is born. In the sixth wā, Uku (flea) and the ʻIole (rat) are born. In the seventh wā, ʻĪlio (dog) and the Peʻapeʻa (bat) are born. 
In the eighth wā, the four divinities are born: Laʻilaʻi (Female), Kiʻi (Male), Kāne (God), and Kanaloa (Octopus). In the ninth wā, Laʻilaʻi takes her eldest brother Kiʻi as a mate and the first humans are born from her brain. In the tenth wā, Laʻilaʻi takes her next brother Kāne as a mate after losing interest in Kiʻi; she then has four of Kāne's children: Laʻiʻoloʻolo, Kamahaʻina (Male), Kamamule (Male), Kamakalua (Female). Laʻilaʻi soon returns to Kiʻi, and three children are born: Haʻi (F), Haliʻa (F), and Hākea (M). Because they were born while their mother was with two men, they become "poʻolua" and claim the lineage of both fathers. The eleventh wā pays homage to the Moa. The twelfth wā honors the lineage of Wākea, whose son Hāloa is the ancestor of all people. The thirteenth wā honors the lineage of Hāloa's mother Papahānaumoku. In the fourteenth wā Liʻaikūhonua mates with Keakahulihonua, and they have their child Laka. The fifteenth wā refers to Haumeanuiʻāiwaiwa and her lineage; it also explains Māui's adventures and siblings. The sixteenth wā recounts all of Māui's lineage for forty-four generations, all the way down to the Moʻi of Māui, Piʻilani. In the 19th and early 20th centuries, anthropologists Adolf Bastian and Roland Burrage Dixon interpreted a recurring verse of the Kumulipo as describing the octopus as the sole survivor of a previous age of existence. In her 1951 translation of the Kumulipo, ethnographer Martha Warren Beckwith provided a different translation of the verse, although she does discuss the possibility that "octopus" is the correct translation and that the verse describes the god Kanaloa.
Comparative literature Comparisons may be made between marital partners (husband and wife often have synonymous names), between genealogical and flora-fauna names, and in other Polynesian genealogies.<ref>See Kumulipo spouse-names, terms for flora and fauna in the Kumulipo, and [http://00.gs/Maniapoto;Uriwera;Moriori;Hivaoa;Kumulipo.htm Maori and Rarotongan parallels with the Kumulipo]</ref> Cultural impact The supermassive black hole M87*, captured by the Event Horizon Telescope, was informally given the Hawaiian name "Pōwehi", a poetic description of generative darkness or the spirit world taken from the Kumulipo. In 2009, the poet Jamaica Heolimeleikalani Osorio performed her poem, Kumulipo, at a poetry event at the White House. Notes References External links The Kumulipo Another copy of "The Kumulipo" with commentary and translations by Martha Warren Beckwith. The Kumulipo: a Hawaiian creation chant Another online copy of the Beckwith book, Paperback edition 1981. University of Hawaii Press Into the Source Article about Kumulipo translations by Shannon Wianecki. Maui No Ka 'Oi Magazine'' Volume 12 Number 6 (November 2008). Hawaiian mythology Creation myths
Kumulipo
Astronomy
2,220
6,241,959
https://en.wikipedia.org/wiki/Nanocomputer
Nanocomputer refers to a computer smaller than the microcomputer, which is smaller than the minicomputer. Microelectronic components that are at the core of all modern electronic devices employ semiconductor transistors. The term nanocomputer is increasingly used to refer to general computing devices of size comparable to a credit card. Modern single-board computers such as the Raspberry Pi and Gumstix would fall under this classification. Arguably, smartphones and tablets would also be classified as nanocomputers.

Future computers with features smaller than 10 nanometers
Die shrink has been more or less continuous since around 1970. A few years later, the 6 μm process allowed the making of desktop computers, known as microcomputers. Moore's Law in the next 40 years brought features 1/100 the size, or ten thousand times as many transistors per square millimeter, putting smartphones in every pocket. Eventually computers will be developed with fundamental parts that are no bigger than a few nanometers. Nanocomputers might be built in several ways, using mechanical, electronic, biochemical, or quantum nanotechnology. There used to be consensus among hardware developers that it was unlikely that nanocomputers would be made of semiconductor transistors, as they seemed to perform significantly less well when shrunk to sizes under 100 nanometers. Nevertheless, developers reduced microprocessor features to 22 nm in April 2012. Moreover, Intel's 5 nanometer technology outlook predicted a 5 nm feature size by 2022. The International Technology Roadmap for Semiconductors in the 2010s gave an industrial consensus on feature scaling following Moore's Law. A silicon–silicon bond length is 235.2 pm, which means that a transistor 5 nm wide would be about 21 silicon atoms across.
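The closing claim is a one-line unit conversion; a quick arithmetic check:

```python
# Sanity check of the closing claim: with a Si-Si bond length of 235.2 pm,
# a 5 nm feature spans roughly 21 silicon-atom spacings.
bond_pm = 235.2
feature_pm = 5_000           # 5 nm expressed in picometres
atoms = feature_pm / bond_pm
print(atoms)                 # about 21.3
```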
See also Nanotechnology Quantum computer Starseed launcher – interstellar nanoprobes proposal References External links A spray-on computer is way to do IT Future Nanocomputer Technologies – diagram of possible technologies (electronic, organic, mechanical, quantum). Classes of computers Nanoelectronics
Nanocomputer
Materials_science,Technology
435
18,485,004
https://en.wikipedia.org/wiki/Efaroxan
Efaroxan is an α2-adrenergic receptor antagonist and antagonist of the imidazoline receptor. Synthesis The Darzens reaction between 2-fluorobenzaldehyde [57848-46-1] (1) and Ethyl 2-bromobutyrate [533-68-6] (2) gives ethyl 2-ethyl-3-(2-fluorophenyl)oxirane-2-carboxylate, CID:100942311 (3). A catalytic hydrogenation over Pd/C would give ethyl 2-[(2-fluorophenyl)methyl]-2-hydroxybutanoate, CID:77591056 (4). Saponification of the ester then gives 2-[(2-Fluorophenyl)methyl]-2-hydroxybutanoic acid, CID:53869347 (5). Treatment with 2 molar equivalents of sodium hydride apparently gives 2-Ethyl-2,3-dihydrobenzofuran-2-carboxylic acid [111080-50-3] (6). Treatment of the carboxylic acid with thionyl chloride then gives the acid chloride; subsequent treatment of this with ethylenediamine in the presence of trimethylaluminium completes the synthesis of efaroxan (8). See also Fluparoxan Idazoxan References External links Benzofurans Imidazolines Alpha-2 blockers
Efaroxan
Chemistry
329
29,549,852
https://en.wikipedia.org/wiki/Hilbert%E2%80%93Bernays%20provability%20conditions
In mathematical logic, the Hilbert–Bernays provability conditions, named after David Hilbert and Paul Bernays, are a set of requirements for formalized provability predicates in formal theories of arithmetic (Smith 2007:224). These conditions are used in many proofs of Kurt Gödel's second incompleteness theorem. They are also closely related to axioms of provability logic.

The conditions
Let T be a formal theory of arithmetic with a formalized provability predicate Prov(n), which is expressed as a formula of T with one free number variable. For each formula φ in the theory, let #(φ) be the Gödel number of φ. The Hilbert–Bernays provability conditions are:

1. If T proves a sentence φ, then T proves Prov(#(φ)).
2. For every sentence φ, T proves Prov(#(φ)) → Prov(#(Prov(#(φ)))).
3. T proves that Prov(#(φ → ψ)) and Prov(#(φ)) imply Prov(#(ψ)).

Note that Prov is a predicate of numbers, and it is a provability predicate in the sense that the intended interpretation of Prov(#(φ)) is that there exists a number that codes for a proof of φ. Formally, what is required of Prov is just the above three conditions.

In the more concise notation of provability logic, letting ⊢ φ denote "T proves φ" and □φ denote Prov(#(φ)):

1. If ⊢ φ, then ⊢ □φ.
2. ⊢ □φ → □□φ.
3. ⊢ □(φ → ψ) → (□φ → □ψ).

Use in proving Gödel's incompleteness theorems
The Hilbert–Bernays provability conditions, combined with the diagonal lemma, allow proving both of Gödel's incompleteness theorems succinctly. Indeed the main effort of Gödel's proofs lay in showing that these conditions (or equivalent ones) and the diagonal lemma hold for Peano arithmetic; once these are established, the proofs can be easily formalized. Using the diagonal lemma, there is a formula ρ such that T ⊢ ρ ↔ ¬Prov(#(ρ)).

Proving Gödel's first incompleteness theorem
For the first theorem only the first and third conditions are needed. The condition that T is ω-consistent is generalized by the condition that for every formula φ, if T proves Prov(#(φ)), then T proves φ.
Note that this indeed holds for an ω-consistent T, because Prov(#(φ)) means that there is a number coding for a proof of φ, and if T is ω-consistent then, going through all natural numbers, one can actually find such a particular number a, and then one can use a to construct an actual proof of φ in T.

Suppose T could have proven ρ. We then would have the following theorems in T:

1. ρ
2. ¬Prov(#(ρ)) (by construction of ρ and theorem 1)
3. Prov(#(ρ)) (by condition no. 1 and theorem 1)

Thus T proves both Prov(#(ρ)) and ¬Prov(#(ρ)). But if T is consistent, this is impossible, and we are forced to conclude that T does not prove ρ.

Now let us suppose T could have proven ¬ρ. We then would have the following theorems in T:

1. ¬ρ
2. Prov(#(ρ)) (by construction of ρ and theorem 1)
3. ρ (by ω-consistency, i.e. the generalized condition above, applied to theorem 2)

Thus T proves both ρ and ¬ρ. But if T is consistent, this is impossible, and we are forced to conclude that T does not prove ¬ρ. To conclude, T can prove neither ρ nor ¬ρ.

Using Rosser's trick
Using Rosser's trick, one need not assume that T is ω-consistent. However, one would need to show that the first and third provability conditions hold for Prov^R, Rosser's provability predicate, rather than for the naive provability predicate Prov. This follows from the fact that for every formula φ, Prov(#(φ)) holds if and only if Prov^R(#(φ)) holds. An additional condition used is that T proves that Prov^R(#(φ)) implies ¬Prov^R(#(¬φ)). This condition holds for every T that includes logic and very basic arithmetic (as elaborated in Rosser's trick#The Rosser sentence).

Using Rosser's trick, ρ is defined using Rosser's provability predicate, instead of the naive provability predicate. The first part of the proof remains unchanged, except that the provability predicate is replaced with Rosser's provability predicate there, too. The second part of the proof no longer uses ω-consistency, and is changed to the following:

Suppose T could have proven ¬ρ. We then would have the following theorems in T:

1. ¬ρ
2. Prov^R(#(ρ)) (by construction of ρ and theorem 1)
3. ¬Prov^R(#(¬ρ)) (by theorem 2 and the additional condition following the definition of Rosser's provability predicate)
4. Prov^R(#(¬ρ)) (by condition no. 1 and theorem 1)

Thus T proves both Prov^R(#(¬ρ)) and ¬Prov^R(#(¬ρ)).
But if T is consistent, this is impossible, and we are forced to conclude that T does not prove ¬ρ.

The second theorem
We assume that T proves its own consistency, i.e. that:

T ⊢ ¬Prov(#(1=0)).

For every formula φ:

T ⊢ ¬φ → (φ → (1=0)) (by negation elimination)

It is possible to show, by using condition no. 1 on the latter theorem, followed by repeated use of condition no. 3, that:

T ⊢ Prov(#(¬φ)) → (Prov(#(φ)) → Prov(#(1=0))).

And using T proving its own consistency it follows that:

T ⊢ Prov(#(¬φ)) → ¬Prov(#(φ)).

We now use this to show that T is not consistent:

1. T ⊢ Prov(#(¬Prov(#(ρ)))) → ¬Prov(#(Prov(#(ρ)))) (following T proving its own consistency, with φ = Prov(#(ρ)))
2. T ⊢ ρ → ¬Prov(#(ρ)) (by construction of ρ)
3. T ⊢ Prov(#(ρ → ¬Prov(#(ρ)))) (by condition no. 1 and theorem 2)
4. T ⊢ Prov(#(ρ)) → Prov(#(¬Prov(#(ρ)))) (by condition no. 3 and theorem 3)
5. T ⊢ Prov(#(ρ)) → ¬Prov(#(Prov(#(ρ)))) (by theorems 1 and 4)
6. T ⊢ Prov(#(ρ)) → Prov(#(Prov(#(ρ)))) (by condition no. 2)
7. T ⊢ ¬Prov(#(ρ)) (by theorems 5 and 6)
8. T ⊢ ¬Prov(#(ρ)) → ρ (by construction of ρ)
9. T ⊢ ρ (by theorems 7 and 8)
10. T ⊢ Prov(#(ρ)) (by condition no. 1 and theorem 9)

Thus T proves both Prov(#(ρ)) (theorem 10) and ¬Prov(#(ρ)) (theorem 7), hence T is inconsistent.

References
Smith, Peter (2007). An introduction to Gödel's incompleteness theorems. Cambridge University Press.

Mathematical logic Provability logic
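The ten-step derivation of the second theorem is purely syntactic, so it can be replayed mechanically. A minimal sketch (not from the article; the tuple encoding and the helper names `imp`, `neg`, `prov`, `chain`, `refute`, `mp`, `cons` are my own, and the helpers check only the syntactic side conditions each step actually uses):

```python
# Mechanical replay of the ten-step derivation: formulas are nested tuples,
# rho is the diagonal fixed point with rho <-> not Prov(#rho).

def imp(a, b): return ('imp', a, b)
def neg(a):    return ('not', a)
def prov(a):   return ('prov', a)

RHO = ('rho',)

def cond1(thm):                 # condition 1: from |- phi infer |- []phi
    return prov(thm)

def cond2(p):                   # condition 2: |- []phi -> [][]phi
    return imp(prov(p), prov(prov(p)))

def cond3(boxed_imp):           # condition 3: from |- [](phi -> psi)
    _, (_, a, b) = boxed_imp    # infer |- []phi -> []psi
    return imp(prov(a), prov(b))

def chain(i1, i2):              # from A -> B and B -> C infer A -> C
    assert i1[2] == i2[1]
    return imp(i1[1], i2[2])

def refute(i1, i2):             # from A -> not B and A -> B infer not A
    assert i1[2] == neg(i2[2]) and i1[1] == i2[1]
    return neg(i1[1])

def mp(i, a):                   # modus ponens
    assert i[1] == a
    return i[2]

def cons(p):                    # consistency lemma: Prov(#(not phi)) -> not Prov(#phi)
    return imp(prov(neg(p)), neg(prov(p)))

t1 = cons(prov(RHO))            # step 1, with phi = Prov(#rho)
t2 = imp(RHO, neg(prov(RHO)))   # step 2, construction of rho
t3 = cond1(t2)                  # step 3
t4 = cond3(t3)                  # step 4
t5 = chain(t4, t1)              # step 5
t6 = cond2(RHO)                 # step 6
t7 = refute(t5, t6)             # step 7: |- not Prov(#rho)
t8 = imp(neg(prov(RHO)), RHO)   # step 8, construction of rho
t9 = mp(t8, t7)                 # step 9: |- rho
t10 = cond1(t9)                 # step 10: |- Prov(#rho)
assert t7 == neg(t10)           # contradiction: T is inconsistent
print("contradiction derived:", t7 == neg(t10))
```

Each `assert` inside the helpers verifies that the formulas produced in earlier steps fit together exactly as the parenthetical justifications claim.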
Hilbert–Bernays provability conditions
Mathematics
1,030
69,136,119
https://en.wikipedia.org/wiki/List%20of%20fungi%20of%20South%20Africa%20%E2%80%93%20G
This is an alphabetical list of fungal taxa as recorded from South Africa. Currently accepted names have been appended. Ga Genus: Galera (Fr.) P. Kumm. 1871 Galera eatoni (Berk.) Sacc. Galera hypnorum (Batsch) Quél. 1872 Galera lateritia Quel. (sic) possibly (Fr.) P. Kumm. 1871 Galera peroxydata (Berk.) Sacc. 1887, accepted as Conocybe peroxydata (Berk.) D.A. Reid, (1975) Galera pygmaeo-affinis (Fr.) Quél. 1872, accepted as Conocybe pygmaeoaffinis (Fr.) Kühner, (1935) Galera spartea Quel. (sic) possibly (Fr.) P. Kumm. 1871, accepted as Conocybe spartea (Fr.) Konrad & Maubl., (1949) [1948] Galera sphagnorum Karst. (sic) possibly (Pers.) Sacc. 1887, accepted as Galerina sphagnorum (Pers.) Kühner, (1935) Galera tenera Quel. (sic) possibly (Schaeff.) P. Kumm. 1871, accepted as Conocybe tenera (Schaeff.) Kühner, (1935) Galera tenera var. siliginea Quel. (sic) possibly (Fr.) P. Kumm. 1871, accepted as Conocybe siliginea (Fr.) Kühner, (1935) Genus: Ganoderma P. Karst. 1881 Ganoderma africanum (Lloyd) Doidge 1950 Ganoderma alluaudi Pat. & Har. 1906 Ganoderma applanatum (Pers.) Pat. 1887 Ganoderma applanatum var. laccatum (Sacc.) Rea 1922 accepted as Ganoderma pfeifferi Bres., (1889) Ganoderma australe (Fr.) Pat. 1889 Ganoderma chilense (Fr.) Pat. 1889 Ganoderma colossus Bres. (sic) [as colossum] possibly (Fr.) C.F. Baker 1920 Ganoderma curtisii (Berk.) Murrill 1908 Ganoderma emini Henn. 1893 accepted as Humphreya eminii (Henn.) Ryvarden, (1980) Ganoderma fulvellum Bres. 1889 accepted as Fomes fulvellus (Bres.) Sacc., (1891) Ganoderma lobatum Atk. (sic) possibly Bres. 1889, accepted as Fomes fulvellus (Bres.) Sacc., (1891) Ganoderma lucidum (Curtis) P. Karst. 1881 Ganoderma mastoporum (Lév.) Pat. 1889 accepted as Ganoderma orbiforme (Fr.) Ryvarden [as orbiformum], (2000) Ganoderma mollicarnosum (Lloyd) Sacc. & Trotter 1925 accepted as Navisporus floccosus (Bres.) Ryvarden [as floccosa], (1980) Ganoderma obockense Pat. (1887) [as obockense] Ganoderma oerstedii Bres.
(sic) possibly (Fr.) Torrend 1902 Ganoderma oroflavum (Lloyd) C.J. Humphrey 1931 accepted as Ganoderma australe (Fr.) Pat., (1889) Ganoderma resinaceum Boud. 1889 Ganoderma rugosum Bres. (sic) possibly (Blume & T. Nees) Pat. 1889, or Ganoderma rugosum var. nigrozonatum Bres. 1915, both accepted as Sanguinoderma rugosum (Blume & T. Nees) Y.F. Sun, D.H. Costa & B.K. Cui, (2020) Genus: Gassicurtia Fée 1825 Gassicurtia silacea Fée 1834 [as Gassicourtia silicea] accepted as Coniothecium silaceum (Fée) Keissl., (1930) Family: Gasteromyceteae* Ge Genus: Geaster P. Micheli 1729 accepted as Geastrum Pers., (1794) Geaster affinis Colenso. accepted as Geastrum affine Colenso (1884) [1883] Geaster calceus Lloyd. accepted as Geastrum calceum Lloyd (1907) Geaster capensis Thuem. accepted as Geastrum capense Thüm. (1877) Geaster coliformis Dicks. accepted as Geastrum coliforme (Dicks.) Pers. (1801) Geaster coriaceus Colenso. accepted as Geastrum coriaceum Colenso (1890) [1889] Geaster coronatus Schroet. accepted as Geastrum coronatum Schaeff. ex J. Schröt. (1889) Geaster drummondii Berk. accepted as Geastrum drummondii Berk. 1845 Geaster fenestratum Fisch. (sic) probably accepted as Geastrum fenestratum (Batsch) Lloyd 1901 Geaster granulosus Fuck. accepted as Geastrum granulosum Fuckel 1860 Geaster macowani Kalchbr. accepted as Geastrum macowanii Kalchbr. 1882 Geaster minimus Fisch.* Geaster plicatus Berk.* Geaster schmidelii Vit. accepted as Geastrum schmidelii Vittad. 1842 Geaster schweinfurthii P.Henn. accepted as Geastrum schweinfurthii Henn. 1891 Geaster striatulus Kalchbr. accepted as Geastrum striatulum Kalchbr. 1880 Genus: Geasteropsis Hollós 1903 accepted as Geastrum Pers., (1794) Geasteropsis conrathii Hollós 1903 [as conrathi] accepted as Geastrum conrathii (Hollós) P. Ponce de León, (1968) Family: Geastreae Genus: Geastrum Pers. 1794 Geastrum ambiguum Mont. 1837 Geastrum arenarium Lloyd 1907 Geastrum bryantii Berk. 1860 accepted as Geastrum striatum DC. 
[as Geaster], (1805) Geastrum campestre Morgan 1887 Geastrum conrathii (Hollós) P. Ponce de León, (1968) recorded as Geasteropsis conrathii Hollós 1903 [as conrathi] Geastrum corollinum (Batsch) Hollós, (1904) reported as Geastrum mammosum Chevall. 1826 Geastrum dissimile Bottomley 1948. Geastrum fimbriatum Fr. 1829 Geastrum floriforme Vittad. 1842 Geastrum fornicatum Fr. (sic) possibly (Huds.) Hook. 1821 Geastrum hieronymi Henn. 1897 Geastrum hygrometricum Pers. 1801 accepted as Astraeus hygrometricus (Pers.) Morgan, (1889) Geastrum limbatum Fr. 1829 Geastrum limbatum var. ellipsostoma N.J.G.* Geastrum macowani Kalchbr. 1882. Geastrum mammosum Chevall. 1826 accepted as Geastrum corollinum (Batsch) Hollós, (1904) Geastrum minimum Schwein. 1822 Geastrum mirabile Mont. 1855 Geastrum nanum Pers. 1809 accepted as Geastrum striatum DC. [as Geaster], (1805) Geastrum pazschkeanum Henn. 1900 Geastrum pectinatum Pers. 1801 Geastrum quadrifidum Pers. 1794 Geastrum saccatum Fr. 1829 Geastrum triplex Jungh. 1840 Geastrum velutinum Morgan 1895 Family: Geoglossaceae Corda 1838 Genus: Geopyxis (Pers.) Sacc. 1889 Geopyxis aluticolor (Berk.) Sacc. 1889 Geopyxis ammophila Dur. & Mont. (sic) possibly Sacc. 1889, accepted as Peziza ammophila Durieu & Lév., (1848) Geopyxis cupularis (L.) Sacc. 1889 accepted as Tarzetta cupularis (L.) Lambotte, (1887) Genus: Geotrichum Link 1809 Geotrichum rugosum (Castell.) C.W. Dodge 1935 Geotrichum sp. Gi Genus: Gibellula Cavara 1894 Gibellula aranearum P. Syd. 1922 Gibellula haygarthii Van der Byl 1922 Genus: Gibbera Fr. 1825 Gibbera engleriana (Henn.) Van der Byl 1928 Gibbera tinctoria Massee 1911 Genus: Gibberella Sacc. 1877 accepted as Fusarium Link, (1809) Gibberella acuminata Wollenw. 1943 accepted as Fusarium acuminatum Ellis & Everh., (1895) Gibberella baccata (Wallr.) Sacc. 1878, accepted as Fusarium lateritium Nees, (1816) Gibberella fujikuroi Wollenw. (sic) possibly (Sawada) S. 
Ito 1919 accepted as Fusarium fujikuroi Nirenberg, (1976) Gibberella fujikuroi var. subglutinans E.T. Edwards 1933 accepted as Fusarium fujikuroi Nirenberg, (1976) Gibberella intricans Wollenw. 1930 accepted as Fusarium gibbosum Appel & Wollenw., (1910) Gibberella pulicaris (Kunze) Sacc. 1877 accepted as Fusarium roseum Link, (1809) Gibberella saubinetii (Mont.) Sacc. 1879 accepted as Fusarium graminearum Schwabe, (1839) Gl Genus: Gliocladium Corda 1840, accepted as Sphaerostilbella (Henn.) Sacc. & D. Sacc., (1905) Gliocladium deliquescens Sopp 1912 accepted as Trichoderma deliquescens (Sopp) Jaklitsch, (2011) Gliocladium penicillioides Corda 1840 [as penicilloides] accepted as Sphaerostilbella penicillioides (Corda) Rossman, L. Lombard & Crous, (2015) Gliocladium roseum Bainier 1907 accepted as Clonostachys rosea (Link) Schroers, Samuels, Seifert & W. Gams, (1999) Genus: Gloeodes Colby 1920 Gloeodes pomigena (Schwein.) Colby 1920 Genus: Gloeoporus Mont. 1842 Gloeoporus conchoides Mont. 1842 accepted as Gloeoporus thelephoroides (Hook.) G. Cunn., (1965) Gloeoporus dichrous (Fr.) Bres. 1912 accepted as Vitreoporus dichrous (Fr.) Zmitr., (2018) Gloeoporus thelephoroides (Hook.) G. Cunn., (1965) reported as Gloeoporus conchoides Mont. 1842 Genus: Gloeosoma Bres. 1920, accepted as Aleurodiscus Rabenh. ex J. Schröt., (1888) Gloeosoma capensis (Lloyd) Lloyd 1920, accepted as Aleurocystis capensis (Lloyd) Lloyd, (1920) Genus: Gloeosporium Desm. & Mont. 1849, accepted as Diplocarpon F.A. Wolf, (1912) Gloeosporium affineSacc. 1878 Gloeosporium ampelophagum de Bary (sic) possibly (Pass.) Sacc. 1878, accepted as Elsinoe ampelina Shear, (1929) Gloeosporium amygdalinum Brizi (sic) possibly (Pass.) Sacc. 1878, accepted as Elsinoe ampelina Shear, (1929) Gloeosporium cocophilum Wakef. 1913 accepted as Colletotrichum gloeosporioides (Penz.) Penz. & Sacc., (1884) Gloeosporium crocatum Sacc. 1891 Gloeosporium epicarpi Thüm. 1877 Gloeosporium fructigenum Berk. 
1856 accepted as Colletotrichum gloeosporioides (Penz.) Penz. & Sacc., (1884) Gloeosporium cydoniae Mont. 1851 Gloeosporium helichrysi G. Winter 1885 accepted as Colletotrichum helichrysi (G. Winter) Arx, (1957) Gloeosporium lagenaria (Pass.) Sacc. & Roum. [as lagenarium], (1880) accepted as Gloeosporium orbiculare (Berk.) Berk., (1876) Gloeosporium limetticola R.E. Clausen 1912, [as 'limetticolum'] accepted as Colletotrichum limetticola (R.E. Clausen) Damm, P.F. Cannon & Crous [as 'limetticolum'], (2012) Gloeosporium mangiferae Henn. 1898 accepted as Colletotrichum coccodes (Wallr.) S. Hughes, (1958) Gloeosporium musarum Cooke & Massee 1887 Gloeosporium myricae Dippen. 1931 Gloeosporium olivarum J.V. Almeida 1899 accepted as Colletotrichum coccodes (Wallr.) S. Hughes, (1958) Gloeosporium papayae Henn. 1895 Gloeosporium passiflorae Speg. 1898 accepted as Colletotrichum coccodes (Wallr.) S. Hughes, (1958) Gloeosporium psidii Delacr. 1903 accepted as Colletotrichum coccodes (Wallr.) S. Hughes, (1958) Gloeosporium ptychospermatis Henn. 1902 accepted as Phlyctema ptychospermatis (Henn.) Arx, (1957) Gloeosporium sansevieriae Verwoerd & du Plessis 1931 Gloeosporium sclerocaryae Pole Evans* Gloeosporium venetum Speg. 1879 Gloeosporium violae Berk. & Br. (sic) possibly Pass. 1849, accepted as Marssonina violae (Pass.) Magnus, (1906) Gloeosporium sp. Genus: Glomerella Spauld. & H. Schrenk 1903, accepted as Colletotrichum Corda, (1831) Glomerella cingulata (G.F. Atk.) Spauld. & H. Schrenk 1903 accepted as Colletotrichum gloeosporioides (Penz.) Penz. & Sacc., (1884) Glomerella glycines Lehman & F.A. Wolf 1926 accepted as Colletotrichum glycines Hori ex Hemmi, (1920) Glomerella gossypii Edgerton 1909 accepted as Colletotrichum gossypii Southw., (1891) Glomerella lindemuthiana Shear [as lindemuthianum], Bull. (1913) accepted as Colletotrichum lindemuthianum (Sacc. & Magnus) Briosi & Cavara 1889 Glomerella phacidiomorpha (Ces.) Petr. 1927 Glomerella psidii J. Sheld. 
1905 Glomerella rufomaculans (Berk.) Spauld. & H. Schrenk 1903 accepted as Colletotrichum gloeosporioides (Penz.) Penz. & Sacc., (1884) Genus: Gloniella Sacc. 1883 Gloniella multiseptata Doidge 1920 accepted as Gloniella natalensis Doidge, (1941) Gloniella natalensis Doidge, 1941 Genus: Glyphis Ach. 1814 (Lichens) Glyphis cicatricosa Ach. 1814 Glyphis cicatricosa var. confluens (Zenker) Zahlbr. 1927 accepted as Glyphis cicatricosa Ach. 1814 Glyphis cicatricosa var. simplicior (Vain.) Zahlbr. 1927 Glyphis confluens Zenker 1827 accepted as Glyphis cicatricosa Ach. 1814 Gn Genus: Gnomonia Ces. & De Not. 1863 Gnomonia leptostyla (Fr.) Ces. & De Not. 1863 Family: Gnomoniaceae G. Winter 1886 Go Genus: Gomphinaria Preuss 1851, accepted as Ramularia Unger, (1833) Gomphinaria pedrosoi (Brumpt) C.W. Dodge 1935 accepted as Fonsecaea pedrosoi (Brumpt) Negroni, (1936) Genus: Gorgoniceps (P. Karst.) P. Karst. 1871 Gorgoniceps kuitoensis Henn. 1903 Gr Genus: Grammothele Berk. & M.A. Curtis 1868 Grammothele mappa Berk. & M.A. Curtis 1868 accepted as Grammothele lineata Berk. & M.A. Curtis, (1868) Genus: Grandinia Fr. 1838 accepted as Hyphodontia J. Erikss., (1958) Grandinia bicolor P.H.B. Talbot 1948 accepted as Dendrodontia bicolor (P.H.B. Talbot) Hjortstam & Ryvarden, (1980) Grandinia rosea Henn. 1905 accepted as Roseograndinia rosea (Henn.) Hjortstam & Ryvarden, (2005) Genus: Graphina Müll. Arg. 1880 Graphina acharii (Fée) Müll. Arg. 1887 accepted as Allographa acharii (Fée) Lücking & Kalb, (2018) Graphina analoga (Nyl.) Zahlbr. 1927 Graphina atrofusca Müll. Arg. 1887 accepted as Glyphis atrofusca (Müll. Arg.) Lücking, (2009) Graphina bylii (Vain.) Zahlbr. 1932 Graphina bylii var. lividula (Vain.) Zahlbr. 1932 Graphina obtrita (Fée) Müll. Arg. 1887 Graphina pergracilis Zahlbr. 1932 Graphina platycarpa (Eschw.) Zahlbr. 1902 Graphina polycarpa Müll. Arg. 1887 Graphina sophistica (Nyl.) Müll. Arg. 1880 Family: Graphiolaceae Clem. & Shear 1931 Genus: Graphiola Poit. 
1824 Graphiola phoenicis (Moug. ex Fr.) Poit. 1824 Family: Graphidaceae Dumort. 1822 (Lichens) Genus: Graphis Adans. 1763 (Lichens) Graphis acharii Fée 1825 accepted as Allographa acharii (Fée) Lücking & Kalb, (2018) Graphis analoga Nyl. 1859 Graphis analoga f. tetraspora Stizenb.* Graphis atrofusca (Müll. Arg.) Stizenb. 1891 Graphis bylii Vain. 1926 Graphis bylii var. lividula Vain. 1926 Graphis caesiopruinosa (Fée) Kremp. 1875 accepted as Phaeographina caesiopruinosa (Fée) Müll. Arg., (1887) Graphis cicatricosa (Ach.) Vain. 1890 accepted as Glyphis cicatricosa Ach., (1814) Graphis cicatricosa var. confluens (Zenker) Vain. 1890 accepted as Glyphis cicatricosa Ach., (1814) Graphis cicatricosa var. simplicior Vain. 1890 Graphis denudans Vain. 1926 Graphis devestiens Nyl. 1891 Graphis diaphoroides Müll. Arg. 1886 Graphis intertexta Nyl. (sic) possibly Müll. Arg. 1895 Graphis intricata Fée 1825 Graphis inusta var. emergens Vain. ex Van der Byl 1931 Graphis mesographa Nyl. 1863 Graphis polycarpa (Müll. Arg.) Stizenb. 1891 Graphis scripta (L.) Ach. 1809 Graphis sophistica Nyl. 1863 Graphis striatula (Ach.) Spreng. 1827 accepted as Allographa striatula (Ach.) Lücking & Kalb, (2018) Graphis subfarinacea Nyl. 1891 Graphis subolivacea Zahlbr. 1926 Gu Genus: Guepinia Fr. 1825 Guepinia agariciformis Lloyd 1923 accepted as Dacryopinax spathularia (Schwein.) G.W. Martin, (1948) Guepinia fissa Berk. 1843 Guepinia flabellata Cooke 1884 accepted as Inflatostereum glabrum (Pat.) D.A. Reid, (1965) Guepinia palmiceps Berk. 1843 Guepinia petaloides Kalchbr. 1882 Guepinia sparassoides Kalchbr. 1882 Guepinia spathularia (Schwein.) Fr. 1828 accepted as Dacryopinax spathularia (Schwein.) G.W. Martin, (1948) Guepinia spathularia f. lata Lloyd.* Genus: Guignardia Viala & Ravaz 1892, accepted as Phyllosticta Pers., (1818) Guignardia bidwellii (Ellis) Viala & Ravaz 1892 accepted as Phyllosticta ampelicida (Engelm.) Aa, (1973) Gy Genus: Gyalecta Ach. 1808 Gyalecta thunbergiana Ach. 
1810 accepted as Diploschistes thunbergianus (Ach.) Lumbsch & Vězda, (1993) Family: Gyalectaceae Stizenb. 1862 Order: Gymnoascales G. Winter 1884 Family: Gymnoascaceae Imperfectae Family: Gymnocarpeae Genus: Gymnoglossum Massee 1891 Gymnoglossum radiatum (Lloyd) Bottomley 1948 accepted as Aroramyces radiatus (Lloyd) Castellano, Verbeken & Walleyn, (2000) Genus: Gyrodon Opat. 1836 Gyrodon capensis Sacc. 1896 Family: Gyrophoraceae Zenker 1827 Genus: Gyrophragmium Mont. 1843 Gyrophragmium delilei Mont. 1843 Gyrophragmium inquinans (Berk.) Lloyd 1904 Genus: Gyrostomum Fr. 1825, accepted as Glyphis Ach., (1814) (Lichens) Gyrostomum scyphuliferum (Ach.) Nyl. 1862 accepted as Glyphis scyphulifera (Ach.) Staiger, (2002) See also List of bacteria of South Africa List of Oomycetes of South Africa List of slime moulds of South Africa List of fungi of South Africa List of fungi of South Africa – A List of fungi of South Africa – B List of fungi of South Africa – C List of fungi of South Africa – D List of fungi of South Africa – E List of fungi of South Africa – F List of fungi of South Africa – G List of fungi of South Africa – H List of fungi of South Africa – I List of fungi of South Africa – J List of fungi of South Africa – K List of fungi of South Africa – L List of fungi of South Africa – M List of fungi of South Africa – N List of fungi of South Africa – O List of fungi of South Africa – P List of fungi of South Africa – Q List of fungi of South Africa – R List of fungi of South Africa – S List of fungi of South Africa – T List of fungi of South Africa – U List of fungi of South Africa – V List of fungi of South Africa – W List of fungi of South Africa – X List of fungi of South Africa – Y List of fungi of South Africa – Z References Sources Further reading South African biodiversity lists South Africa
List of fungi of South Africa – G
Biology
5,424
67,910,349
https://en.wikipedia.org/wiki/List%20of%20organisms%20named%20after%20famous%20people%20%28born%201950%E2%80%93present%29
In biological nomenclature, organisms often receive scientific names that honor a person. A taxon (e.g., species or genus; plural: taxa) named in honor of another entity is an eponymous taxon, and names specifically honoring a person or persons are known as patronyms. Scientific names are generally formally published in peer-reviewed journal articles or larger monographs along with descriptions of the named taxa and ways to distinguish them from other taxa. Following the ICZN's International Code of Zoological Nomenclature, based on Latin grammar, species or subspecies names derived from a man's name often end in -i or -ii if named for an individual, and -orum if named for a group of men or mixed-sex group, such as a family. Similarly, those named for a woman often end in -ae, or -arum for two or more women. This list is part of the list of organisms named after famous people, and includes organisms named after famous individuals born on or after 1 January 1950. It also includes ensembles (including bands and comedy troupes) in which at least one member was born after that date; but excludes companies, institutions, ethnic groups or nationalities, and populated places. It does not include organisms named for fictional entities, for biologists, paleontologists or other natural scientists, nor for associates or family members of researchers who are not otherwise notable (exceptions are made, however, for natural scientists who are much more famous for other aspects of their lives, such as, for example, rock musician Greg Graffin). 
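The suffix rules above can be sketched as a small helper. This is an illustrative simplification (the function name and category labels are made up for this example, and real epithets are first latinized, which can change the stem):

```python
def patronym(stem: str, honoree: str) -> str:
    """Form a species epithet from a personal name, following the ICZN
    suffix conventions described above. `honoree` is one of:
    'man', 'woman', 'men_or_mixed', 'women'.
    Simplified: ignores latinization of the underlying name."""
    suffixes = {
        "man": "i",             # one man, e.g. ...i / ...ii
        "woman": "ae",          # one woman
        "men_or_mixed": "orum", # group of men or mixed-sex group
        "women": "arum",        # two or more women
    }
    return stem.lower() + suffixes[honoree]

print(patronym("Attenborough", "man"))   # attenboroughi
print(patronym("Beyonce", "woman"))      # beyonceae
```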
Organisms named after famous people born earlier can be found in: List of organisms named after famous people (born before 1800) List of organisms named after famous people (born 1800–1899) List of organisms named after famous people (born 1900–1949) The scientific names are given as originally described (their basionyms): subsequent research may have placed species in different genera, or rendered them taxonomic synonyms of previously described taxa. Some of these names may be unavailable in the zoological sense or illegitimate in the botanical sense due to senior homonyms already having the same name. List (people born 1950–present) See also List of bacterial genera named after personal names List of rose cultivars named after people List of taxa named by anagrams List of organisms named after the Harry Potter series Notes References Named after celebrities 1950 Taxonomy (biology) Organisms 1950 Taxonomic lists
List of organisms named after famous people (born 1950–present)
Biology
495
66,489,058
https://en.wikipedia.org/wiki/Drepanopeziza%20sphaerioides
Drepanopeziza sphaerioides is a species of fungus belonging to the family Drepanopezizaceae. Synonym: Marssonina salicicola (Bres.) Magnus, 1906 References Helotiales Fungus species
Drepanopeziza sphaerioides
Biology
52
779,220
https://en.wikipedia.org/wiki/Mission%20control%20center
A mission control center (MCC, sometimes called a flight control center or operations center) is a facility that manages space flights, usually from the point of launch until landing or the end of the mission. It is part of the ground segment of spacecraft operations. A staff of flight controllers and other support personnel monitor all aspects of the mission using telemetry, and send commands to the vehicle using ground stations. Personnel supporting the mission from an MCC can include representatives of the attitude control system, power, propulsion, thermal, attitude dynamics, orbital operations and other subsystem disciplines. The training for these missions usually falls under the responsibility of the flight controllers, typically including extensive rehearsals in the MCC. NASA's Mission Control Center United States missions are, prior to liftoff, controlled from the Launch Control Center (LCC) located at NASA's Kennedy Space Center on Merritt Island, Florida. Responsibility for the booster and spacecraft remains with the Launch Control Center until the booster has cleared the launch tower. After liftoff, responsibility is handed over to NASA's Mission Control Center in Houston, Texas (abbreviated MCC-H, full name Christopher C. Kraft Jr. Mission Control Center), at the Lyndon B. Johnson Space Center. NASA's Mission Control Center in Houston also manages the U.S. portions of the International Space Station (ISS). RKA Mission Control Center The Mission Control Center of the Russian Federal Space Agency (), also known by its acronym ЦУП ("TsUP") is located in Korolyov, near the RKK Energia plant. It contains an active control room for the ISS. It also houses a memorial control room for the Mir where the last few orbits of Mir before it burned up in the atmosphere are shown on the display screens. ISRO Mission Control Centre The Mission Control Center of the Indian Space Research Organisation is located at Satish Dhawan Space Centre, Sriharikota, India. 
European Space Operations Centre European Space Operations Centre (ESOC) is responsible for ESA's satellites and space probes. It is located in Darmstadt, Germany. German Space Operations Center German Space Operations Center (GSOC) is responsible for DLR's satellites and other customers' missions. It is located in Oberpfaffenhofen near Munich, Germany. The Columbus Control Centre (Col-CC) at the German Aerospace Center (DLR) in Oberpfaffenhofen, Germany. It is the mission control center for the European Columbus research laboratory at the International Space Station. The Galileo Control Center (GCC) at the German Aerospace Center (DLR) in Oberpfaffenhofen, Germany. It is one of the mission control centers for the European Galileo Navigation System. French Space Operations Center The French National Centre for Space Studies (CNES) ATV Control Centre (ATV-CC) is located at the Toulouse Space Centre (CST) in Toulouse, France. It is the mission control center for the European Automated Transfer Vehicles, that regularly resupply ISS. Beijing Aerospace Command and Control Center Beijing Aerospace Command and Control Center is a command center for the Chinese space program which includes the Shenzhou missions. The building is inside a complex nicknamed Aerospace City. The city is located in a suburb northwest of Beijing. Spaceflight Operations Facility The Jet Propulsion Laboratory (JPL) in Pasadena, California manages all of NASA's uncrewed spacecraft outside Earth's orbit and several research probes within along with the Deep Space Network from the Space Flight Operations Facility. Other significant centers America Axiom Space Mission Control Center (MCC-A) in Houston, Texas. Boeing Satellite Development Center (SDC) Mission Control Center in El Segundo, California, US. In charge of several military satellites. Goddard Space Flight Center in Greenbelt, Maryland provides mission control for the Hubble Space Telescope. 
Lockheed Martin A2100 Space Operations Center (ASOC) in Newtown, Pennsylvania, US. In charge of several military satellites. Mercury Control Center was located on the Cape Canaveral Air Force Station and was used during Project Mercury. One of its still standing buildings now serves as a makeshift bunker for the media if a rocket explodes near the ground. Mobile Servicing System Control and Training at Saint-Hubert, Quebec, Canada. Supports Canadarm2 and "dextre" robotics operations. Space Systems/Loral Mission Control Center in Palo Alto, California, US. The MESSENGER and New Horizons missions were controlled from the Applied Physics Laboratory near Baltimore, Maryland. SpaceX Mission Control Center (MCC-X) in Hawthorne, California Multi-Mission Operations Center at the Ames Research Center Payload Operations and Integration Center at the Marshall Spaceflight Center in Huntsville, Alabama where science activities aboard the International Space Station are monitored around the clock. Asia JEM Control Center and the HTV Control Center at the Tsukuba Space Center (TKSC) in Tsukuba, Japan manages operations aboard JAXA's Kibo ISS research laboratory and the resupply flights of the H-II Transfer Vehicle. JAXAs satellite operations are also based here. Europe The ATV Control Centre (ATV-CC) is located at the Toulouse Space Centre (CST) in Toulouse, France. It is the mission control center for the European Automated Transfer Vehicles, that regularly resupply ISS. The Columbus Control Center (Col-CC) at the German Aerospace Center (DLR) in Oberpfaffenhofen, Germany. It is the mission control center for the European Columbus research laboratory at the International Space Station. The Rover Operations Control Centre (ROCC) is located in Turin, Italy. It will be the mission control center for the ExoMars rover Rosalind Franklin. Titov Main Test and Space Systems Control Centre, mission control center in Krasnoznamensk, Russia. 
See also Control room Ground segment Launch status check References External links Mission Control Centre for the Russian Federal Space Agency Space flight - Mission Control Center (English) Goddard Space Flight Center Jet Propulsion Laboratory Columbus Control Centre Automated Transfer Vehicle Control Centre European Space Operations Center Control center Rooms Spaceflight technology Command and control
Mission control center
Astronomy,Engineering
1,233
57,794
https://en.wikipedia.org/wiki/Fractal%20antenna
A fractal antenna is an antenna that uses a fractal, self-similar design to maximize the effective length, or increase the perimeter (on inside sections or the outer structure), of material that can receive or transmit electromagnetic radiation within a given total surface area or volume. Such fractal antennas are also referred to as multilevel and space filling curves, but the key aspect lies in their repetition of a motif over two or more scale sizes, or "iterations". For this reason, fractal antennas are very compact, multiband or wideband, and have useful applications in cellular telephone and microwave communications. A fractal antenna's response differs markedly from traditional antenna designs, in that it is capable of operating with good-to-excellent performance at many different frequencies simultaneously. Normally, standard antennas have to be "cut" for the frequency for which they are to be used—and thus the standard antennas only work well at that frequency. In addition, the fractal nature of the antenna shrinks its size, without the use of any extra components such as inductors or capacitors. Log-periodic antennas Log-periodic antennas are arrays invented in 1952 and commonly seen as TV antennas. This was long before Mandelbrot coined the word fractal in 1975. Some authors (for instance Cohen) consider log-periodic antennas to be an early form of fractal antenna due to their infinite self similarity at all scales. However, they have a finite length even in the theoretical limit with an infinite number of elements and therefore do not have a fractal dimension that exceeds their topological dimension – which is one way of defining fractals. More typically, (for instance Pandey) authors treat them as a separate but related class of antenna. 
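The "repetition of a motif over two or more scale sizes" can be illustrated with a Koch-curve generator. This is an illustrative sketch, not an antenna design tool: each iteration multiplies the conductor length by 4/3 while the overall footprint stays fixed, which is the packing effect the article describes.

```python
import cmath

def koch_iterate(points):
    """One Koch iteration: replace each straight segment with four segments
    of one-third the length, inserting a triangular bump (60-degree rotation).
    Points are complex numbers in the antenna plane."""
    out = []
    for a, b in zip(points, points[1:]):
        d = (b - a) / 3
        out += [a, a + d, a + d + d * cmath.exp(1j * cmath.pi / 3), a + 2 * d]
    out.append(points[-1])
    return out

def wire_length(points):
    return sum(abs(b - a) for a, b in zip(points, points[1:]))

pts = [0 + 0j, 1 + 0j]      # a plain 1-unit straight element
for _ in range(3):          # three iterations of the motif
    pts = koch_iterate(pts)
print(wire_length(pts))     # (4/3)**3 ~ 2.37: more wire in the same footprint
```

Each iteration keeps the endpoints fixed, so the antenna's bounding box never grows while its effective electrical length increases geometrically.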
Performance Antenna elements (as opposed to antenna arrays, which are usually not included as fractal antennas) made from self-similar shapes were first created by Nathan Cohen then a professor at Boston University, starting in 1988. Cohen's efforts with a variety of fractal antenna designs were first published in 1995, which marked the inaugural scientific publication on fractal antennas. Many fractal element antennas use the fractal structure as a virtual combination of capacitors and inductors. This makes the antenna so that it has many different resonances, which can be chosen and adjusted by choosing the proper fractal design. This complexity arises because the current on the structure has a complex arrangement caused by the inductance and self capacitance. In general, although their effective electrical length is longer, the fractal element antennas are themselves physically smaller, again due to this reactive loading. Thus, fractal element antennas are shrunken compared to conventional designs and do not need additional components, assuming the structure happens to have the desired resonant input impedance. In general, the fractal dimension of a fractal antenna is a poor predictor of its performance and application. Not all fractal antennas work well for a given application or set of applications. Computer search methods and antenna simulations are commonly used to identify which fractal antenna designs best meet the needs of the application. Studies during the 2000s showed advantages of the fractal element technology in real-life applications, such as RFID and cell phones. Fractals have been used commercially in antennas since the 2010s. Their advantages are good multiband performance, wide bandwidth, and small area. The gain with small size results from constructive interference with multiple current maxima, afforded by the electrically long structure in a small area. Some researchers have disputed that fractal antennas have superior performance. S.R. 
Best (2003) observed "that antenna geometry alone, fractal or otherwise, does not uniquely determine the electromagnetic properties of the small antenna". Hansen & Collin (2011) reviewed many papers on fractal antennas and concluded that they offer no advantage over fat dipoles, loaded dipoles, or simple loops, and that non-fractals are always better. Balanis (2011) reported on several fractal antennas and found them equivalent in performance to the electrically small antennas they were compared to. Log periodics, a form of fractal antenna, have their electromagnetic characteristics uniquely determined by geometry, via an opening angle. Frequency invariance and Maxwell's equations One different and useful attribute of some fractal element antennas is their self-scaling aspect. In 1957, V.H. Rumsey presented results that angle-defined scaling was one of the underlying requirements to make antennas invariant (have same radiation properties) at a number, or range, of frequencies. Work by Y. Mushiake in Japan starting in 1948 demonstrated similar results of frequency independent antennas having self-complementarity. It was believed that antennas had to be defined by angles for this to be true, but in 1999 it was discovered that self-similarity was one of the underlying requirements to make antennas frequency and bandwidth invariant. In other words, the self-similar aspect was the underlying requirement, along with origin symmetry, for frequency independence. Angle-defined antennas are self-similar, but other self-similar antennas are frequency independent although not angle-defined. This analysis, based on Maxwell's equations, showed fractal antennas offer a closed-form and unique insight into a key aspect of electromagnetic phenomena. To wit: the invariance property of Maxwell's equations. This is now known as the Hohlfeld-Cohen-Rumsey (HCR) Principle. 
Mushiake's earlier work on self complementarity was shown to be limited to impedance smoothness, as expected from Babinet's Principle, but not frequency invariance. Other uses In addition to their use as antennas, fractals have also found application in other antenna system components, including loads, counterpoises, and ground planes. Fractal inductors and fractal tuned circuits (fractal resonators) were also discovered and invented simultaneously with fractal element antennas. An emerging example of such is in metamaterials. A recent invention demonstrates using close-packed fractal resonators to make the first wideband metamaterial invisibility cloak at microwave frequencies. Fractal filters (a type of tuned circuit) are another example where the superiority of the fractal approach for smaller size and better rejection has been proven. As fractals can be used as counterpoises, loads, ground planes, and filters, all parts that can be integrated with antennas, they are considered parts of some antenna systems and thus are discussed in the context of fractal antennas. See also Waveguide (electromagnetism) References External links How to make a fractal antenna for HDTV or DTV CPW-fed H-tree fractal antenna for WLAN, WIMAX, RFID, C-band, HiperLAN, and UWB applications Video of a fractal antenna monopole using fractal metamaterials Radio frequency antenna types Antennas (radio) Fractals
Fractal antenna
Mathematics
1,451
50,968,757
https://en.wikipedia.org/wiki/Oligotyping%20%28taxonomy%29
Oligotyping is a diagnostic or molecular biological method for classification of organisms by short intervals of primary DNA sequence. Oligotyping 'systems' are sets of recognized target sequences which identify the members of the categories within the classification. The classification may be for the purpose of primary biological taxonomy, or for a functional classification. Classifying bacteria Oligotyping has been used for classifying bacteria, identifying bacterial antibiotic resistance genes, identifying genetic factors in human infectious disease, and performing histocompatibility tests for human blood or bone marrow donors/recipients. See also Oligotyping (sequencing) References DNA sequencing Molecular biology techniques Taxonomy (biology) Bacterial taxonomy 1998 in biotechnology
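The idea of an oligotyping 'system' — a set of recognized target sequences, each identifying one category — can be sketched minimally as follows. The target sequences and category names here are hypothetical, chosen only for illustration:

```python
# Hypothetical oligotyping 'system': category name -> recognized target sequence.
TARGETS = {
    "type_A": "GATTACA",
    "type_B": "CCGGTT",
}

def oligotype(sequence: str) -> list[str]:
    """Classify a DNA sequence: return every category whose short
    target interval occurs within the sequence."""
    return [name for name, target in TARGETS.items() if target in sequence]

print(oligotype("TTGATTACAGG"))   # ['type_A']
print(oligotype("AAAA"))          # [] -- matches no recognized target
```

A real system would match against a defined position in an alignment rather than scanning the whole sequence, but the classification principle — membership decided by short recognized subsequences — is the same.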
Oligotyping (taxonomy)
Chemistry,Biology
139
9,017,360
https://en.wikipedia.org/wiki/Jumeirah%20Beach%20Hotel
Jumeirah Beach Hotel is a luxury hotel in Dubai, United Arab Emirates. The hotel, which opened in 1997, is operated by the Dubai-based hotelier Jumeirah. The hotel contains 598 rooms and suites, 19 beachfront villas, and 20 restaurants and bars. This wave-shaped hotel complements the sail-shaped Burj Al Arab, which is adjacent to the Jumeirah Beach Hotel. Details of the hotel The hotel occupies a location on the beach. Visitors to the hotel have a total of of beach for their use. Beside the hotel is the Wild Wadi Water Park. All guests in the hotel have unlimited access to the waterpark. The beachfront area where the Burj Al Arab and Jumeirah Beach Hotel are located was previously called Chicago Beach. The hotel is located on an island of reclaimed land offshore of the beach of the former Chicago Beach Hotel. The locale's name had its origins in the Chicago Bridge & Iron Company which had floating oil storage tankers on the site long before Dubai started its current modernisation. The old name persisted after the old hotel was demolished in 1997 since Dubai Chicago Beach Hotel was the public project name for the construction phase of the Burj Al Arab Hotel until Sheikh Mohammed bin Rashid Al Maktoum announced the new name. When completed in 1997, the Jumeirah Beach Hotel was 93 metres high making it the 9th tallest building in Dubai. Today, it is ranked lower than the 100th tallest building. Despite its lower rankings, the hotel remains a Dubai landmark. Gallery See also Hotels in Dubai References External links Jumeirah Beach Hotel official website 1997 establishments in the United Arab Emirates Hotel buildings completed in 1997 Hotels established in 1997 Hotels in Dubai High-tech architecture Postmodern architecture
Jumeirah Beach Hotel
Engineering
357
48,472,119
https://en.wikipedia.org/wiki/32%20Tauri
32 Tauri is the Flamsteed designation for a solitary star in the zodiac constellation of Taurus. It has a visual magnitude of 5.64, making it visible to the naked eye from suburban skies (according to the Bortle scale). The position of this star near the ecliptic plane means that it is subject to occultations by the Moon. Parallax measurements put it at a distance of 144 light years from the Sun. It is drifting further away with a radial velocity of +31.9 km/s, having come to within some 759,000 years ago. The spectrum of this star matches a stellar classification of F2IVs, with the luminosity class of IV indicating that this star has reached the subgiant stage and is in the process of evolving into a giant star. It has twice the mass of the Sun and nearly three times the Sun's radius, but 15 times the Sun's luminosity and about half the Sun's age. The abundance of elements other than hydrogen and helium is lower in this star than in the Sun. The effective temperature of the star's outer atmosphere is , giving it the white-hued glow of an F-type star. References F-type subgiants Taurus (constellation) Tauri, 032 1218 BD+22 0605 024740 018471
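The quoted distance follows directly from the parallax: d [parsec] = 1/p [arcsec], with 1 pc ≈ 3.2616 ly. A quick check, using a parallax of about 22.65 milliarcseconds (an assumed value chosen to be consistent with the stated 144 light years, not a figure from the article):

```python
LY_PER_PARSEC = 3.2616

def parallax_to_ly(parallax_mas: float) -> float:
    """Distance in light years from an annual parallax in milliarcseconds,
    via d[pc] = 1 / p[arcsec]."""
    parsecs = 1.0 / (parallax_mas / 1000.0)
    return parsecs * LY_PER_PARSEC

print(round(parallax_to_ly(22.65)))   # ~144 light years
```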
32 Tauri
Astronomy
286
72,073,447
https://en.wikipedia.org/wiki/Hypersonic%20Attack%20Cruise%20Missile
The Hypersonic Attack Cruise Missile (HACM; pronounced Ha-sehm) is an Australian-American scramjet-powered hypersonic air-launched cruise missile project, the successor of the Hypersonic Air-breathing Weapon Concept (HAWC) and the SCIFiRE hypersonic programs. Technology developed for the HAWC demonstrator was used to influence the design of the HACM, a U.S. Air Force Program of Record to create a scramjet-powered hypersonic missile it could deploy as an operational weapon. In December 2021, Raytheon Technologies was awarded a $985 million contract to continue its HACM development. The contract to develop HACM further was awarded to Raytheon in September 2022. HACM will use a Northrop Grumman scramjet. It is designed to be smaller than the AGM-183 ARRW and able to fly along “vastly different trajectories” than the boost-glide ARRW. The system will give the US military "tactical flexibility to employ fighters to hold high-value, time-sensitive targets at risk, while maintaining bombers for other strategic targets." Following the U.S. Air Force's decision to not pursue procurement of ARRW in March 2023, the HACM became the service's only hypersonic weapon program. Though the USAF confirmed that they would not be purchasing any hypersonic weapons in FY 2024, the budget request for the upcoming fiscal year includes $380 million for R&D on the HACM, followed by a proposed $517 million in FY 2025. The United States hopes to have the missile in operational capacity by FY 2027. The United States Air Force has stated that Australian testing facilities will be used for testing of HACM. In Australian service, the projectile will become the fastest missile Australia has ever operated, and the first hypersonic missile. As of 2024, the fastest missile Australia has operated is the ATACMS ballistic missile, having a supersonic speed of Mach 3. 
Future Operators United States Air Force In future American service, it has been indicated that the F-15E Strike Eagle will be the sole carrier of the missile. Royal Australian Air Force Australia has indicated that their allocation of the future missiles will first be deployed on the F/A-18F Super Hornets, followed by usage on the EA-18G Growler, F-35A Lightning II and the P-8A Poseidon. See also Hypersonic Air Launched Offensive Anti-Surface 3M22 Zircon ASALM Kh-90 ASN4G References Air-launched cruise missiles Cruise missiles of the United States Proposed weapons of the United States RTX Corporation Hypersonic cruise missiles Scramjet-powered aircraft U.S. Air Force programs of record
Hypersonic Attack Cruise Missile
Astronomy
571
63,196,821
https://en.wikipedia.org/wiki/Pierre%20Peytier
Pierre Peytier - Jean Pierre Eugène Félicien Peytier (15 October 1793 – 1864), sometimes named Eugène Peytier - was a French officer, geographer, engineer, cartographer and painter. Life Pierre Peytier entered the École polytechnique in 1811, where he obtained his diploma (X1811). He was then integrated into the topographical service of the French army in the corps of the engineers-geographers in 1813. He was promoted to lieutenant in 1817, then to captain in 1827. In the Pyrenees (1825) An engineer-geographer and geodesist, he was one of the first geodesic officers charged in 1825 with the triangulation of the Pyrenees in order to establish the map of France, together with his colleague Paul-Michel Hossard. By necessity of service, and together with the officers Corabœuf and Testuhe, he was also one of the first pyreneists. He made the first ascents of the Pyrenean peaks Palas, Balaïtous and Saint-Barthélemy. These true exploits went completely unnoticed at the time, and many later climbers, believing they were making these ascents first, instead found traces of the geodesists' passage. That was the case for the explorer Charles Packe when he reached the summit of the Balaïtous. Scientific expedition of Morea (1829) Captain Pierre Peytier, of the topographic service of the French army, had already been invited to Greece by its Governor Ioannis Kapodistrias when the latter came to Paris in October 1827 to ask the French government for advisers and officers of the French army to organize the army of the newly founded Greek state (during the Greek War of Independence). Thus, on the recommendation of the French Ministry of War, Peytier and three other officers arrived in Greece in order to train young Greek engineers who would undertake surveying projects, while Peytier himself was to draw the plans for the city of Corinth and the map of the Peloponnese.
Then, when the scientific expedition of Morea landed at Navarino in the Peloponnese on 3 March 1829, Peytier was attached to it. As early as March, a base of 3,500 meters had been traced in the Argolis, from a corner of the ruins of Tiryns to the corner of a ruined house in the village of Aria. This was intended to serve as the point of departure for all the triangulation operations for topographic and geodesic readings in the Peloponnese. Peytier and Puillon-Boblaye carried out numerous verifications of the base and of the rulers used. The margin of error was thus reduced to 1 meter for every 15 kilometers. The longitude and latitude of the base point at Tiryns were read and checked, so that the margin of error was again reduced, as far as possible, to an estimated 0.2 seconds. 134 geodesic stations were set up on the peninsula's mountains, as well as on Aegina, Hydra and Nafplion. Thus, equilateral triangles whose sides measured about were drawn. The angles were measured with Gambey's theodolites. However, after the departure of the scientific mission from Greece, and although he fell ill with fever five times, Peytier remained there alone until 31 July 1831 to complete the trigonometric, topographic and statistical work for the establishment of the map of Morea. This « Map of 1832 », very precise, drawn at a 1/200,000 scale over 6 sheets (plus two sheets depicting some of the islands of the Cyclades), was the first map of the Greek territory ever constructed scientifically and geodesically. After a stay in France between 1831 and 1833, Peytier returned to Greece on 28 March 1833 and remained there until March 1836 to direct most of the work for the preparation of the complete map of the Kingdom of Greece at that time. This « Map of 1852 » was definitively published under his direction in 1852.
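The triangulation the expedition carried out rests on a simple principle: from a measured baseline and the two theodolite angles read at its ends, the distance to an unvisited third point follows from the law of sines. A schematic illustration (the numbers are made up, not the expedition's actual measurements):

```python
import math

def triangulate(baseline: float, angle_a_deg: float, angle_b_deg: float) -> float:
    """Given baseline AB and the interior angles observed at A and B (degrees),
    return the distance from A to the unseen point C, via the law of sines:
    AC / sin(B) = AB / sin(C), with C = 180 - A - B."""
    a = math.radians(angle_a_deg)
    b = math.radians(angle_b_deg)
    c = math.pi - a - b                      # angle at the unseen vertex C
    return baseline * math.sin(b) / math.sin(c)

# Equilateral case, like the roughly equilateral triangles the geodesists favoured:
print(triangulate(3500.0, 60.0, 60.0))   # 3500.0 -- all sides equal
```

Each newly fixed point can then serve as the end of a fresh baseline, which is how a single measured base at Tiryns could carry a network of 134 stations across the whole peninsula.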
Peytier also left an album which he himself composed with his pencil drawings, sepias and watercolours depicting city views, monuments, costumes and inhabitants of Greece at the time. He used an artistic style that avoided idealization for the benefit of scientific fidelity and precision, which fully revealed the topographer that he was. The last years Peytier returned definitively to France in 1836 and from 1839 he continued his work on the map of France in the cartographic section of the army. He became director of the war archives. He was promoted to the rank of colonel in 1852. He died in 1864 at the age of seventy. Gallery of drawings Bibliography Henri Beraldi, Cent ans aux Pyrénées, Paris, 1898–1904, sept volumes in-8°. Rééditions par « Les Amis du Livre Pyrénéen », Pau, 1977, puis par la « Librairie des Pyrénées et de Gascogne », Pau, 2001. Yiannis Saïtas et coll., L'œuvre de l'expédition scientifique de Morée 1829-1838, Edited by Yiannis Saïtas, Editions Melissa, 2011 (1re Partie) - 2017 (2nde Partie). Pierre Peytier, Émile Puillon Boblaye, Aristide-Camille Servier, Notice sur les opérations géodésiques exécutées en Morée, en 1829 et 1830, par MM.Peytier, Puillon-Boblaye et Servier ; suivie d’un catalogue des positions géographiques des principaux points déterminés par ces opérations (Gallica - BnF), Bulletin de la Société de géographie, v. 19 n°117–122 (Janvier – Juin 1833) Pierre Peytier, The Peytier Album, Liberated Greece and the Morea Scientific Expedition, in the Stephen Vagliano Collection, publié par la Banque Nationale de Grèce, Athènes, 1971. References École Polytechnique alumni 1864 deaths People from Ardèche Geodesy French cartographers French geographers Pyrénéistes 1793 births
Pierre Peytier
Mathematics
1,283
42,928,707
https://en.wikipedia.org/wiki/Isentropic%20expansion%20waves
In fluid dynamics, isentropic expansion waves are created when a supersonic flow is redirected along a curved surface. These waves are studied to obtain a relation between deflection angle and Mach number. Each wave in this case is a Mach wave, so it lies at the Mach angle

\mu = \sin^{-1}\left(\frac{1}{M}\right)

where M is the Mach number immediately before the wave. Expansion waves are divergent because as the flow expands the Mach number increases, thereby decreasing the Mach angle. In an isentropic wave, the speed changes from V to V + dV, with deflection d\theta. We orient the coordinate system orthogonal to the wave and write the basic equations (continuity, momentum and the first and second laws of thermodynamics) for this infinitesimal control volume.

Relation between θ, M and V

Assumptions: Steady flow. Negligible body forces. Adiabatic flow. No work terms. Negligible gravitational effect.

The continuity equation is

\frac{\partial}{\partial t}\int_{CV}\rho\,d\mathcal{V} + \int_{CS}\rho\,\vec{V}\cdot d\vec{A} = 0

The first term is zero by assumption (1). Now

\int_{CS}\rho\,\vec{V}\cdot d\vec{A} = 0

which can be rewritten as

\rho V_n = (\rho + d\rho)(V_n + dV_n) \quad (1.1)

Now we consider the momentum equation normal and tangential to the wave. For the tangential component,

F_{S_t} + F_{B_t} = \frac{\partial}{\partial t}\int_{CV} V_t\,\rho\,d\mathcal{V} + \int_{CS} V_t\,\rho\,\vec{V}\cdot d\vec{A}

The second term of the L.H.S. and the first term of the R.H.S. are zero by assumptions (2) and (1) respectively, and the tangential surface force vanishes because pressure acts normal to the control surfaces. Then

V_t\,(\rho V_n) = (V_t + dV_t)(\rho + d\rho)(V_n + dV_n)

or, using equation 1.1 (continuity), dV_t = 0: the tangential velocity component is unchanged across the wave. In terms of speed and deflection this reads

V\cos\mu = (V + dV)\cos(\mu + d\theta)

Expanding and simplifying [using the facts that, to the first order, in the limit as d\theta \to 0, \cos d\theta \to 1 and \sin d\theta \to d\theta], we obtain

V\,d\theta\,\sin\mu = dV\,\cos\mu

But \sin\mu = 1/M, so \cot\mu = \sqrt{M^2 - 1}, and

d\theta = \sqrt{M^2 - 1}\;\frac{dV}{V} \quad (1.4)

Derivation of Prandtl-Meyer supersonic expansion function

We skip the analysis of the normal component of the momentum and move on to the first law of thermodynamics, which is

\dot{Q} - \dot{W}_s - \dot{W}_{shear} - \dot{W}_{other} = \frac{\partial}{\partial t}\int_{CV} e\,\rho\,d\mathcal{V} + \int_{CS}\left(e + \frac{p}{\rho}\right)\rho\,\vec{V}\cdot d\vec{A}

The first term of the L.H.S., the next three terms of the L.H.S. and the first term of the R.H.S. are zero due to assumptions (3), (4) and (1) respectively, where, neglecting gravity (assumption (5)),

e + \frac{p}{\rho} = h + \frac{V^2}{2}

For our control volume we obtain

\left(h + \frac{V^2}{2}\right)\rho V_n = \left(h + dh + \frac{(V + dV)^2}{2}\right)(\rho + d\rho)(V_n + dV_n)

This may be simplified using continuity (1.1). Expanding and simplifying in the limit to first order, we get

dh + V\,dV = 0

If we confine ourselves to ideal gases, dh = c_p\,dT, so

c_p\,dT + V\,dV = 0 \quad (1.6)

The above equation relates the differential changes in velocity and temperature. We can derive a relation between M and V using V = Mc = M\sqrt{\gamma R T}. Differentiating (and dividing the left-hand side by V and the right by Mc),

\frac{dV}{V} = \frac{dM}{M} + \frac{1}{2}\frac{dT}{T}

Using equation (1.6),

\frac{dT}{T} = -(\gamma - 1)M^2\,\frac{dV}{V}

Hence,

\frac{dV}{V} = \frac{1}{1 + \frac{\gamma - 1}{2}M^2}\;\frac{dM}{M} \quad (1.7)

Combining (1.4) and (1.7),

d\theta = \frac{\sqrt{M^2 - 1}}{1 + \frac{\gamma - 1}{2}M^2}\;\frac{dM}{M}

We generally apply the above equation to flows turning in the negative \theta direction; let d\nu = -d\theta. We can integrate this between the initial and final Mach numbers of a given flow, but it will be more convenient to integrate from a reference state, the critical speed (M = 1), to Mach number M, with \nu arbitrarily set to zero at M = 1:

\nu = \int_1^M \frac{\sqrt{M^2 - 1}}{1 + \frac{\gamma - 1}{2}M^2}\;\frac{dM}{M}

Leading to the Prandtl-Meyer supersonic expansion function,

\nu(M) = \sqrt{\frac{\gamma + 1}{\gamma - 1}}\,\tan^{-1}\sqrt{\frac{\gamma - 1}{\gamma + 1}\left(M^2 - 1\right)} - \tan^{-1}\sqrt{M^2 - 1}

References Thermodynamic processes
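The closed-form Prandtl-Meyer function can be checked numerically against the integral it comes from. A minimal sketch in Python (function names are illustrative; γ = 1.4 for air is an assumption):

```python
import math

GAMMA = 1.4  # ratio of specific heats for air (assumption)

def prandtl_meyer(M, gamma=GAMMA):
    """Closed-form Prandtl-Meyer function nu(M), in radians, for M >= 1."""
    g = (gamma + 1.0) / (gamma - 1.0)
    return (math.sqrt(g) * math.atan(math.sqrt((M**2 - 1.0) / g))
            - math.atan(math.sqrt(M**2 - 1.0)))

def prandtl_meyer_numeric(M, gamma=GAMMA, steps=200_000):
    """Trapezoidal integration of d(nu) = sqrt(M^2-1)/((1+(g-1)/2 M^2) M) dM from 1 to M."""
    def f(m):
        return math.sqrt(max(m * m - 1.0, 0.0)) / ((1.0 + 0.5 * (gamma - 1.0) * m * m) * m)
    dM = (M - 1.0) / steps
    total = 0.0
    for i in range(steps):
        m0 = 1.0 + i * dM
        total += 0.5 * (f(m0) + f(m0 + dM)) * dM
    return total

M = 2.0
nu_closed = math.degrees(prandtl_meyer(M))
nu_num = math.degrees(prandtl_meyer_numeric(M))
print(nu_closed, nu_num)  # both ≈ 26.38 degrees for M = 2, gamma = 1.4
```

The agreement of the two values is a quick consistency check on the integration of the combined differential relation.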
Isentropic expansion waves
Physics,Chemistry
554
11,710,718
https://en.wikipedia.org/wiki/Well%20drainage
Well drainage means drainage of agricultural lands by wells. Agricultural land is drained by pumped wells (vertical drainage) to improve the soils by controlling water table levels and soil salinity.

Introduction

Subsurface (groundwater) drainage for water table and soil salinity control in agricultural land can be done by horizontal and vertical drainage systems. Horizontal drainage systems are drainage systems using open ditches (trenches) or buried pipe drains. Vertical drainage systems are drainage systems using pumped wells, either open dug wells or tube wells. Both systems serve the same purposes, namely water table control and soil salinity control. Both systems can facilitate the reuse of drainage water (e.g. for irrigation), but wells offer more flexibility. Reuse is only feasible if the quality of the groundwater is acceptable and the salinity is low.

Design

Although one well may be sufficient to solve groundwater and soil salinity problems in a few hectares, one usually needs a number of wells, because the problems may be widely spread. The wells may be arranged in a triangular, square or rectangular pattern. The design of the well field concerns depth, capacity, discharge, and spacing of the wells. The discharge is found from a water balance. The depth is selected in accordance with aquifer properties; the well filter must be placed in a permeable soil layer. The spacing can be calculated with a well spacing equation using discharge, aquifer properties, well depth and optimal depth of the water table. The determination of the optimum depth of the water table is the realm of drainage research.

Flow to wells

The basic, steady state, equation for flow to fully penetrating wells (i.e. wells reaching the impermeable base) in a regularly spaced well field in a uniform unconfined (phreatic) aquifer with a hydraulic conductivity that is isotropic is:

Q = \pi K\,\frac{(D_b - D_m)^2 - (D_b - D_w)^2}{\ln(R_i / r_w)}

where Q = safe well discharge - i.e. the steady state discharge at which no overdraught or groundwater depletion occurs - (m3/day), K = uniform hydraulic conductivity of the soil (m/day), D = depth below the soil surface, D_b = depth of the bottom of the well, equal to the depth of the impermeable base (m), D_m = depth of the water table midway between the wells (m), D_w = depth of the water level inside the well (m), R_i = radius of influence of the well (m) and r_w = radius of the well (m). The radius of influence of the wells depends on the pattern of the well field, which may be triangular, square, or rectangular. It can be found as:

R_i = \sqrt{\frac{A_t}{\pi N}}

where A_t = total surface area of the well field (m2) and N = number of wells in the well field. The safe well discharge (Q) can also be found from:

Q = \frac{q\,A_t}{N\,F_w}

where q is the safe yield or drainable surplus of the aquifer (m/day) and F_w is the operation intensity of the wells (hours/24 per day). Thus the basic equation can also be written as:

\frac{q\,A_t}{N\,F_w} = \pi K\,\frac{(D_b - D_m)^2 - (D_b - D_w)^2}{\ln(R_i / r_w)}

Well spacing

With a well spacing equation one can calculate various design alternatives to arrive at the most attractive or economical solution for water table control in agricultural land. The basic flow equation cannot be used for determining the well spacing in a partially penetrating well field in a non-uniform and anisotropic aquifer; for that one needs a numerical solution of more complicated equations. The costs of the most attractive solution can be compared with the costs of a horizontal drainage system - for which the drain spacing can be calculated with a drainage equation - serving the same purpose, to decide which system deserves preference. The well design proper is described in . An illustration of the parameters involved is shown in the figure. The hydraulic conductivity can be found from an aquifer test. 
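The combined relation above can be solved iteratively for a design variable such as the number of wells. A minimal sketch in Python, using the Dupuit-type well-field equation stated here (all parameter values are illustrative assumptions, not taken from the text):

```python
import math

def radius_of_influence(A_t, N):
    """R_i from the area served per well: A_t / N = pi * R_i**2."""
    return math.sqrt(A_t / (math.pi * N))

def dupuit_capacity(K, D_b, D_m, D_w, R_i, r_w):
    """Steady-state discharge one fully penetrating well can sustain (m3/day)."""
    h_m = D_b - D_m  # water-table height above the base, midway between wells
    h_w = D_b - D_w  # water level height above the base, inside the well
    return math.pi * K * (h_m**2 - h_w**2) / math.log(R_i / r_w)

def required_discharge(q, A_t, N, F_w):
    """Discharge each well must pump to remove the drainable surplus q (m/day)."""
    return q * A_t / (N * F_w)

# Illustrative numbers (assumptions): a 100 ha field, 1 mm/day drainable surplus,
# wells operated 12 h/day, K = 1 m/day, impermeable base at 30 m, target water
# table at 1.5 m, water level in the wells at 10 m, well radius 0.1 m.
A_t, q, F_w = 1_000_000.0, 0.001, 0.5
K, D_b, D_m, D_w, r_w = 1.0, 30.0, 1.5, 10.0, 0.1

# Smallest number of wells whose capacity meets the required discharge.
N = 1
while required_discharge(q, A_t, N, F_w) > dupuit_capacity(
        K, D_b, D_m, D_w, radius_of_influence(A_t, N), r_w):
    N += 1
print(N, radius_of_influence(A_t, N))
```

Increasing N lowers the required discharge per well roughly as 1/N while the per-well capacity falls only logarithmically, so a smallest feasible N exists; in practice one would compare its cost against a horizontal drain layout.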
Software The numerical computer program WellDrain for well spacing calculations takes into account fully and partially penetrating wells, layered aquifers, anisotropy (different vertical and horizontal hydraulic conductivity or permeability) and entrance resistance. Modelling With a groundwater model that includes the possibility to introduce wells, one can study the impact of a well drainage system on the hydrology of the project area. There are also models that make it possible to evaluate water quality. SahysMod is such a polygonal groundwater model; it permits assessment of the use of well water for irrigation and of the effects on soil salinity and on the depth of the water table. References External links Salinity Control and Reclamation Program (SCARP) using wells in the Indus valley of Pakistan. Website on waterlogging and land reclamation by horizontal and vertical drainage systems: Drainage Hydrology Hydrogeology Hydraulic engineering Land management Land reclamation Water and the environment
Well drainage
Physics,Chemistry,Engineering,Environmental_science
974
11,299,724
https://en.wikipedia.org/wiki/1%2C2-Bis%28dimethylarsino%29benzene
1,2-Bis(dimethylarsino)benzene (diars) is the organoarsenic compound with the formula C6H4(As(CH3)2)2. The molecule consists of two dimethylarsino groups attached to adjacent carbon centers of a benzene ring. It is a chelating ligand in coordination chemistry. This colourless oil is commonly abbreviated "diars." Coordination chemistry Related, but non-chelating, organoarsenic ligands include triphenylarsine and trimethylarsine. Work on diars preceded the development of the chelating diphosphine ligands such as dppe, which are now prevalent in homogeneous catalysis. Diars is a bidentate ligand used in coordination chemistry. It was first described in 1939, but was popularized by R. S. Nyholm for its ability to stabilize metal complexes with unusual oxidation states and coordination numbers, e.g. TiCl4(diars)2. High coordination numbers arise because diars is fairly compact and the As-M bonds are long, which relieves crowding at the metal center. In terms of stabilizing unusual oxidation states, diars stabilizes Ni(III), as in [NiCl2(diars)2]Cl. Of historical interest is the supposedly diamagnetic [Ni(diars)3](ClO4)2, obtained by heating nickel perchlorate with diars. Octahedral d8 complexes characteristically have triplet ground states, so the diamagnetism of this complex was puzzling. Later, by X-ray crystallography, the complex was shown to be pentacoordinate with the formula [Ni(triars)(diars)](ClO4)2, where triars is the tridentate ligand [C6H4As(CH3)2]2As(CH3), arising from the elimination of trimethylarsine. Preparation and handling Diars is prepared by the reaction of ortho-dichlorobenzene and sodium dimethylarsenide: C6H4Cl2 + 2 NaAs(CH3)2 → C6H4(As(CH3)2)2 + 2 NaCl It is a colorless liquid. Oxygen converts diars to the dioxide, C6H4(As(CH3)2O)2. References Phenylene compounds Chelating agents Cacodyl compounds
1,2-Bis(dimethylarsino)benzene
Chemistry
473
13,336,054
https://en.wikipedia.org/wiki/Powerset%20%28company%29
Powerset was an American company based in San Francisco, California, that, in 2006, was developing a natural language search engine for the Internet. On July 1, 2008, Powerset was acquired by Microsoft for an estimated $100 million. Powerset was working on building a natural language search engine that could find targeted answers to user questions (as opposed to keyword based search). For example, when confronted with a question like "Which U.S. state has the highest income tax?", conventional search engines ignore the question phrasing and instead do a search on the keywords "state", "highest", "income", and "tax". Powerset, on the other hand, attempts to use natural language processing to understand the nature of the question and return pages containing the answer. The company was in the process of "building a natural language search engine that reads and understands every sentence on the Web". The company licensed natural language technology from PARC, the former Xerox Palo Alto Research Center. On May 11, 2008, the company unveiled a tool for searching a fixed subset of English Wikipedia using conversational phrases rather than keywords. 
Powerlabs In a form of beta testing, Powerset opened an online community called Powerlabs on September 17, 2007. Business Week said: "The company hopes the site will marshal thousands of people to help build and improve its search engine before it goes public next year." Said The New York Times: "[Powerset Labs] goes far beyond the 'alpha' or 'beta' testing involved in most software projects, when users put a new product through rigorous testing to find its flaws. Powerset doesn’t have a product yet, but rather a collection of promising natural language technologies, which are the fruit of years of research at Xerox PARC." Powerlabs' initial search results were taken from Wikipedia. Notable people Barney Pell (born March 18, 1968, in Hollywood, California) was co-founder and CEO of Powerset. Pell received his Bachelor of Science degree in symbolic systems from Stanford University in 1989, where he graduated Phi Beta Kappa and was a National Merit Scholar. Pell received a PhD in computer science from Cambridge University in 1993, where he was a Marshall Scholar. He has worked at NASA, as chief strategist and vice president of business development at StockMaster.com (acquired by Red Herring in March 2000) and at Whizbang! Labs. Prior to joining Powerset, Pell was an Entrepreneur-in-Residence at Mayfield Fund, a venture capital firm in Silicon Valley. Pell is also a founder of Moon Express, Inc., a U.S. company awarded a $10M commercial lunar contract by NASA and a competitor in the Google Lunar X PRIZE. Steve Newcomb was the COO and co-founder of Powerset. 
Prior to joining Powerset, he was a co-founder of Loudfire, General Manager at Promptu, and was on the board of directors at Jaxtr. He left Powerset in October 2007 to form Virgance, a social startup incubator. Lorenzo Thione (born in Como, Italy) was the product architect and co-founder of Powerset. Prior to joining Powerset, he worked at FXPAL in natural language processing and related research fields. Thione earned his master's degree in software engineering from the University of Texas at Austin. Ronald Kaplan, former manager of research in Natural Language Theory and Technology at PARC, served as the company's CTO and CSO. Ryan Ferrier is a member of the founding team of Powerset. He managed personnel and internal operations. After 2008 he went on to co-found Serious Business, which made Facebook applications and was later bought by Zynga. Another Powerset alumnus, Alex Le, became CTO of Serious Business and went on to become an executive producer at Zynga when it bought the company. Siqi Chen founded a stealth startup in mobile computing after leaving Powerset. Tom Preston-Werner worked at Powerset and left after the acquisition to found GitHub. Investors Powerset attracted a wide range of investors, many of whom had considerable experience in the venture capital field. The company received $12.5 million in Series A funding during November 2007, co-led by the venture capital firms Foundation Capital and The Founders Fund. Among the better-known investors: Esther Dyson, founding chairman of ICANN, founder of the newsletter Release 1.0 and editor at Cnet Peter Thiel, founder and former CEO of PayPal Luke Nosek, founder of PayPal Todd Parker. 
Managing Partner, Hidden River Ventures Reid Hoffman, executive vice president of PayPal and founder of LinkedIn First Round Capital, seed-stage venture firm See also Bing (search engine) Apache HBase References External links Powerset main web site - redirects to Bing Powerset acquired by Microsoft Defunct internet search engines Companies based in San Francisco Natural language processing Microsoft acquisitions 2008 mergers and acquisitions
Powerset (company)
Technology
1,254
36,288,404
https://en.wikipedia.org/wiki/Anton%20Strauss
Anton Strauss (1858, Kamianka - ?) was a mining engineer, inventor, entrepreneur, partner of the famous architect Vladislav Gorodetsky, commodore of the Kiev Yacht Club, and actual state councilor. Family Anton Strauss descended from a noble family of German descent. Father – Emil Christian Dietrich Strauss, born in Witzenhausen (Hessen, Germany) in 1829 to Herman Karl Strauss and Sofie Badenhausen. Emil Strauss graduated as a doctor of medicine and a surgeon from the University of Göttingen. As a medical doctor he participated in the Crimean and Caucasian Wars on the side of the Russian Empire and was awarded the Order of St. Anna (3rd Class). Mother – Fanny Elizabeth Wiesel, a daughter of the medical doctor Bernhard Lorenz Wiesel (Poltava Governorate) and Rosalie Caroline Maier. Brother – Oscar Strauss, born in 1858. Oscar Strauss was a physicist, mathematician, electrical engineer and entrepreneur. He was a co-founder of the company “Savitsky and Strauss”, which launched the first power plant in Kiev in 1890. He was also a shareholder of cable and gunpowder factories, a tobacco factory in Kiev and a lumber mill in Chernigov. Oscar published memoirs about the Russian physicist Alexander Popov, who was the first person to demonstrate the practical application of electromagnetic radio waves. Cousin – Oscar Wiesel, born in Russia in 1864. He graduated as a lawyer and worked in Germany, Spitsbergen, and Switzerland as a Russian consul. Wiesel later served as consul general in Italy (Naples) in the rank of actual state councilor. Cousin – Emil Anton Joseph Wiesel (1 March 1866, Saint Petersburg – 2 May 1943, Leningrad) – a painter, museum curator and board member of the Imperial Academy of Arts (Russia), organizer of international art exhibitions, councilor of the Hermitage and the Russian Museum, and holder of the Legion of Honour. 
During Soviet times he was an expert in Russian and Western fine arts and sculpture in the Glavnauka museum department (central administrative board of science, science-artistic and museum institutions). Education Anton Strauss studied in the Kiev Realschule. He graduated from the Saint Petersburg Mining Institute in 1892. Activity Anton Strauss is famous for inventing an improved method of providing a water-tight vertical layer and simultaneously compressing portions of ground adjacent to the layer, for use in dams, dikes and like structures or in the ground. The technology was used for the first time during construction of buildings of the Russian South-West Railways in Kiev. Later the technology was applied to construct bridges, tunnels, ports and houses in Russia and abroad. Anton Strauss patented the technology on 18 May 1909 in the US, patent # 922,207. Anton Strauss supported various projects of his partner and close friend, the architect Vladislav Gorodetsky. Anton Strauss's innovative solutions technically implemented the ideas of his famous colleague. For instance, because of a height difference, to construct the legendary House with Chimaeras in Kiev the engineer had to build a special stepped foundation, piled on one side and strip on the other. As an engineer Anton Strauss participated in construction of the Museum of ancient history and arts (Grushevskogo str. 6), the Karaite Kenesa (Bolshaya Podvalnaya str. 7) and the St. Nicholas Roman Catholic Cathedral (Bolshaya Vasilkovskaya str. 75). Anton Strauss also owned a company (Kiev, B. Vladimirskaya str. 26, 1897) that specialized in constructing boreholes, pumps, pipes and filters, and in mining minerals and fossils. Like his brother Oscar, Anton Strauss was a board member of the “South-Russian gunpowder plant” (Kiev, Bolshaya Podvalnaya str. 8, 1911). Yacht club Yachting was among Anton Strauss's hobbies. He was a member of the Kiev Yacht Club and its commodore from 1911 till 1917 (?). 
The club was located on Trukhanov Island, with winter quarters at Fundukleevskaya str. 10 (nowadays the Khmelnitskogo str.). Grand Duke Alexander Mikhailovich of Russia patronized the club. The annual fee was 25 rubles. Each member was required to own a yacht. Address in Kiev Kreposti str. 4. No information about Anton Strauss is available after 1917. External links Большая иллюстрированная энциклопедия яхт-клубов Сайт, посвященный архитектору Павлу Алешину Статья "Отклубившийся Киев", автор Алексей Зотиков Горное профессиональное сообщество дореволюционной России Статья "Бывшее реальное училище" на сайте Kievstory Patent Ethnic German people from the Russian Empire 1858 births Inventors from the Russian Empire Businesspeople from the Russian Empire Mining engineers Year of death missing Saint Petersburg Mining University alumni Civil engineers from the Russian Empire
Anton Strauss
Engineering
1,150
577,340
https://en.wikipedia.org/wiki/California%20State%20University%2C%20Fullerton
California State University, Fullerton (CSUF or Cal State Fullerton) is a public research university in Fullerton, California. With a total enrollment of more than 41,000, it has the largest student body of the California State University (CSU) system, and its graduate student body of more than 5,000 is one of the largest in the CSU and in all of California. As of fall 2016, the school had 2,083 faculty, of whom 782 were on the tenure track. The university offers 109 degree programs: 55 undergraduate degrees and 54 graduate degrees, including 3 doctoral programs. Cal State Fullerton is classified among "R2: Doctoral Universities – High research activity". It is also a Hispanic-serving institution (HSI) and is eligible to be designated as an Asian American Native American Pacific Islander serving institution (AANAPISI). CSUF athletic teams compete in Division I of the NCAA and are collectively known as the CSUF Titans. They compete in the Big West Conference. History Founding In 1957, Orange County State College became the 12th state college in California to be authorized by the state legislature as a degree-granting institution. The following year, a site was designated for the campus to be established in northeast Fullerton. The property was purchased in 1959. The same year, William B. Langsdorf was appointed as founding president of the school. Classes began with 452 students in September 1959. The name of the school was changed to Orange State College in July 1962. In 1964, its name was changed to California State College at Fullerton. In June 1972, the final name change occurred and the school became California State University, Fullerton. Mascot The choice of the elephant as the university's mascot, dubbed Tuffy the Titan, dates to 1962, when the campus hosted "The First Intercollegiate Elephant Race in Human History." The May 11 event attracted 10,000 spectators, 15 pachyderm entrants, and worldwide news coverage. 
Campus violence The campus has seen three significant instances of violence with people killed. On July 12, 1976, Edward Charles Allaway, a campus janitor with paranoid schizophrenia, shot nine people, killing seven, in the University Library (now the Pollak Library) on the Cal State Fullerton campus. At the time, it was the worst mass shooting in Orange County history. On October 13, 1984, Edward Cooperman, a physics professor, was shot and killed by his former student, Minh Van Lam, in McCarthy Hall. On August 19, 2019, Steven Shek Keung Chan, a retired budget director working as a consultant in the international student affairs office, was found dead from multiple stab wounds in a campus parking lot. Chuyen Vo, a co-worker in the same office, was charged with murder. 2000s: Modern growth The university grew rapidly in the first decade of the 2000s. The Performing Arts Center was built in January 2006, and in the summer of 2008 the newly constructed Steven G. Mihaylo Hall and the new Student Recreation Center opened. In fall 2008, the Performing Arts Center was renamed the Joseph A.W. Clayes III Performing Arts Center, in honor of a $5 million pledge made to the university by the trustees of the Joseph A.W. Clayes III Charitable Trust. Since 1963, the curriculum has expanded to include many graduate programs, including multiple doctorate degrees, as well as numerous credential and certificate programs. In 2021, president of the university Framroze Virjee acknowledged the university's location on the lands of the Tongva and Acjachemen and pledged for the university to be more committed toward partnering with Indigenous peoples. Campus The campus is on the site of former citrus groves in northeast Fullerton. It is bordered on the east by the Orange Freeway (SR-57), on the west by State College Boulevard, on the north by Yorba Linda Boulevard, and on the south by Nutwood Avenue. 
Although established in the late 1950s, much of the initial construction on campus took place in the late 1960s, under the supervision of artist and architect Howard van Heuklyn, who gave the campus a striking, futuristic architecture (buildings like Pollak Library South, Titan Shops, Humanities, McCarthy Hall). This was in response to the numerous Googie buildings in the Fullerton community. The University Archives & Special Collections in the Pollak Library houses the Philip K. Dick papers and Frank Herbert papers as part of the Willis McNelly Science Fiction collection. Since 1993, the campus has added the College Park Building, Steven G. Mihaylo Hall, University Hall, the Titan Student Union, the Student Recreation Center, the Nutwood Parking Structure, the State College Parking Structure, Dan Black Hall, Joseph A.W. Clayes III Performing Arts Center West, Phase III Housing, the Grand Central Art Center, and Pollak Library. In order to generate power for the university and become more sustainable, the campus installed solar panels on top of a number of buildings. The panels, which generate up to 7–8 percent of the electrical power used daily, are atop the Eastside Parking Structure, Clayes Performing Arts Center and the Kinesiology and Health Science Building. In August 2011, the university added a $143 million housing complex, which included five new residence halls, a convenience store and a 565-seat dining hall called the Gastronome. El Dorado Ranch serves as the university president's residence. Satellite campus The university opened a satellite campus in Irvine, California in 1989, approximately south of the original Fullerton location. Amid the COVID-19 pandemic, the satellite campus closed in July 2021. Proposed expansion CSUF announced plans in May 2010 to buy the lot occupied by Hope International University, but this deal fell through. 
CSUF also announced plans in September 2010 to expand into the area south of Nutwood Avenue to construct a project called CollegeTown, which would integrate the surrounding residential areas and retail spaces into the campus. After community opposition, the Fullerton planning commission indefinitely postponed any action on the project in February 2016. Desert Studies Center The Desert Studies Center is a field station of the California State University located in Zzyzx, California in the Mojave Desert. The purpose of the center is to provide opportunities to conduct research, receive instruction and experience the Mojave Desert environment. It is officially operated by the California Desert Studies Consortium, a consortium of 7 CSU campuses: Fullerton, Cal Poly Pomona, Long Beach, San Bernardino, Northridge, Dominguez Hills and Los Angeles. Academics Admissions and enrollment Fall freshman statistics As of the fall 2013 semester, CSUF is the third most applied to CSU out of all 23 campuses receiving nearly 65,000 applications, including over 40,000 for incoming freshmen and nearly 23,000 transfer applications, the second highest in the CSU. Rankings and distinctions The 2024 edition of U.S. News & World Report ranked Fullerton tied for 2nd "Performers on Social Mobility," tied 70 in top public schools, tied 31 for best undergraduate teaching, 211 for best value schools, and the undergraduate engineering program tied for 40, tied 8 in computer engineering, tied 8 in civil engineering and tied 9 in electrical/electronic/communications, tied 201 in economics, and tied 154 for Nursing. Money magazine ranked Cal State Fullerton 34th in the country out of 739 schools evaluated for its 2020 "Best Colleges for Your Money" edition and 22nd in its list of the 50 best public schools in the U.S. Athletics CSUF participates in the NCAA Division I Big West Conference and MPSF. 
Cal State Fullerton Athletics boasts 31 national championships covering 11 sports and dating back to its first in 1967: 12 team national titles and 19 individual championships. The Titans became an NCAA Div. I program in the 1974-75 academic year and have since produced 11 national titles (6 team and 5 individual), four of them by the Titans' baseball team. Eighteen of the titles come from men's sports and 12 from women's. The 12 team national championships span eight different sports: 1970, women's basketball (CIAW); 1971, 1972 and 1974, men's gymnastics; 1971, cross country; 1973, women's fencing; 1979, women's gymnastics; 1979, 1984, 1995 and 2004, baseball; and 1986, softball. Their baseball team is a perennial national powerhouse with four national titles and dozens of players playing Major League Baseball. The CSUF Dance Team currently holds the most national titles at the school, with 15 national titles in UDA Division 1 Jazz (2000, 2001, 2002, 2003, 2004, 2006, 2007, 2008, 2010, 2011, 2012, 2013, 2014, 2015, 2016 and 2017) and one national title from UDA in Division 1 Hip Hop. The Dance Team also holds multiple titles from the United Spirit Association. CSUF holds the Ben Brown Invitational every track and field season. CSUF currently supports 21 club sports on top of its Division I varsity teams: archery, baseball, cycling, equestrian, grappling and jiu jitsu, ice hockey, men's lacrosse, women's lacrosse, Nazara Bollywood dance, men's rugby, women's rugby, roller hockey, salsa team, men's soccer, women's soccer, table tennis, tennis, ultimate Frisbee, men's volleyball, women's volleyball, skiing, and wushu. Because of its proximity to Long Beach State, the schools are considered rivals. The rivalry is especially heated in baseball, as Long Beach State also fields a competitive college baseball program. Student life CSUF was the first college in Orange County to have a Greek system, with its first fraternity founded in 1960. 
The Daily Titan, the official student newspaper of the university, also started in 1960. Other official student media includes Titan Radio. On April 23, 2014, Cal State Fullerton opened the Titan Dreamers Resource Center. The center was the first resource center for undocumented students in the CSU system. Notable alumni CSUF alumni include: an astronaut who is participating in her third trip to space; a speaker of the California Assembly; other politicians and Academy Award-winning directors, actors, producers, and cinematographers; award-winning journalists, authors, and screenwriters; nationally recognized teachers; presidents and CEOs of leading corporations; international opera stars, musicians, and Broadway stars; professional athletes and Olympians; doctors, scientists and researchers; and social activists. Titan alumni number more than 210,000. An active alumni association keeps them connected through numerous networking and social events, and also sponsors nationwide chapters. Notes References External links Cal State Fullerton Athletics website Universities and colleges established in 1957 Fullerton California State University, Fullerton Education in Fullerton, California Universities and colleges in Orange County, California Schools accredited by the Western Association of Schools and Colleges 1957 establishments in California Glassmaking schools
California State University, Fullerton
Materials_science,Engineering
2,225
36,045,138
https://en.wikipedia.org/wiki/Chamber%20of%20Computer%20Engineers%20of%20Turkey
Chamber of Computer Engineers of Turkey (, abbreviated BMO) was founded on 2 June 2012. Formerly, computer engineers in Turkey were members of the Chamber of Electrical Engineers of Turkey, but on 9 March 2011 computer engineers decided to form their own chamber. The regulatory board announced that each year about 6,500 new computer engineers (including graduates of related undergraduate programs) graduate from the universities. During the general assembly of the Union of Chambers of Turkish Engineers and Architects (UCTEA) on 2 June 2012, the request was approved, and the chamber became the 24th member of the union. References Engineering societies based in Turkey 2012 establishments in Turkey Organizations established in 2012 Computer engineering
Chamber of Computer Engineers of Turkey
Technology,Engineering
137
46,553,491
https://en.wikipedia.org/wiki/List%20of%20countries%20by%20refined%20petroleum%20exports
The following is a list of countries by exports of refined petroleum, including gasoline. Data is for 2023, in billions of United States dollars. Currently, the top 10 countries are listed according to the World's Top Exports ranking. References Refined petroleum Refined petroleum exports Exports, refined Petroleum economics
List of countries by refined petroleum exports
Chemistry
57
10,888,763
https://en.wikipedia.org/wiki/Hoffmann%20kiln
The Hoffmann kiln is a series of batch process kilns. Hoffmann kilns are the most common kilns used in the production of bricks and some other ceramic products. Patented by the German Friedrich Hoffmann for brickmaking in 1858, it was later used for lime-burning, and was known as the Hoffmann continuous kiln. Construction and operation A Hoffmann kiln consists of a main fire passage surrounded on each side by several small rooms. Each room contains a pallet of bricks. In the main fire passage there is a fire wagon that holds a fire which burns continuously. Each room is fired for a specific time, until the bricks are vitrified properly, after which the fire wagon is rolled on to the next room. Each room is connected to the next by a passageway carrying hot gases from the fire. In this way, the hottest gases are directed into the room that is currently being fired. The gases then pass into the adjacent room that is scheduled to be fired next, where they preheat the brick. As the gases pass through the kiln circuit, they gradually cool as they transfer heat to the brick being preheated and dried. This is essentially a counter-current heat exchanger, which makes for a very efficient use of heat and fuel; this efficiency is a principal advantage of the Hoffmann kiln and one of the reasons for its original development and continued use. In addition to the inner opening to the fire passage, each room also has an outside door, through which recently fired brick is removed and replaced with wet brick to be dried and then fired in the next firing cycle. In a classic Hoffmann kiln the fire may burn continuously for years, even decades; in Iran, there are kilns that are still active and have been working continuously for 35 years. Any fuel may be used in a Hoffmann kiln, including gasoline, natural gas, heavy petroleum and wood fuel.
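The counter-current heat exchange between fire gases and downstream brick can be illustrated with a small toy model. This sketch is not from the article: the temperatures, number of rooms, and the fixed per-room heat-transfer fraction are all hypothetical values chosen only to show the gases cooling monotonically as they preheat successive rooms.

```python
def gas_temperatures(t_fire=1000.0, t_brick=20.0, n_rooms=6, transfer=0.5):
    """Gas temperature after each room downstream of the fire.

    In each room the (hypothetical) brick charge absorbs a fixed
    fraction of the remaining gas/brick temperature difference,
    mimicking the counter-current preheating in a Hoffmann kiln.
    """
    temps = []
    t_gas = t_fire
    for _ in range(n_rooms):
        t_gas -= transfer * (t_gas - t_brick)  # heat given up to this room
        temps.append(round(t_gas, 1))
    return temps

# The gases leave the circuit close to ambient temperature, meaning most
# of the fuel's heat went into drying and preheating brick, not up the stack.
print(gas_temperatures())
```

The monotonic decline toward ambient temperature is the point: almost no heat is wasted, which is the efficiency advantage the text describes.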
The dimensions of a typical Hoffmann kiln are quite variable, but on average about 5 m (height) x 15 m (width) x 150 m (length). Hoffmann kiln expansion The first kiln of this class was put into operation on November 22, 1859 in Scholwin (since 1946, Skolwin), near Stettin, which was then part of Prussia. In 1867 there were already 250 of them, most in the Prussian part of Germany, fifty in England and three in France. In Italy, their expansion began in 1870, after being shown at the Paris Exhibition. In September 1870, the first brick factory according to Hoffmann's patent was inaugurated in Australia. The first continuous Hoffmann-system kilns in Spain were reportedly installed in 1880, near Madrid. In 1900 there were already more than 4,000 kilns of this type, distributed throughout Europe, Russia, the Americas, Africa and even the East Indies. In 1904, a kiln built to the patent of the Briton William Sercombe and based on the Hoffmann model began operating in Palmerston North, New Zealand. Hoffmann kilns are still in use for brick production in some parts of the world, especially in places where labor costs are low and modern technology is not easily accessible. Historic examples of Hoffmann kilns The Hoffmann kiln is used in almost every country. UK In the British Isles there are only a few Hoffmann kilns remaining, some of which have been preserved. The only ones with a chimney are at Prestongrange Industrial Heritage Museum and Llanymynech Heritage Area. The site at Llanymynech, close to Oswestry, was used for lime-burning and has recently been partially restored as part of an industrial archaeology conservation project supported by English Heritage and the Heritage Lottery Fund. Two examples in North Yorkshire, the Hoffmann lime-burning kiln at Meal Bank Quarry, Ingleton, and that at the former Craven and Murgatroyd lime works, Langcliffe, are scheduled ancient monuments.
There is an intact but abandoned Hoffmann kiln without a chimney at Minera Limeworks; the site is abandoned but all entrances to the kiln have been grated off, preventing access. The kiln is in a very poor state of repair, with trees growing out of the walls and the roof. Minera Quarry Trust hopes one day to develop the area into something of a tourist attraction. The Grade II listed Hoffmann brick kiln in Ilkeston, Derbyshire, is also badly neglected, although the recently installed fencing offers some protection for the building and for visitors. At Prestongrange Museum, outside Prestonpans in East Lothian, the Hoffmann kiln is still standing and visitors can listen to more about it via a mobile phone tour. There is a nearly complete kiln in Horeb, Carmarthenshire. There is still a working kiln at Kings Dyke in Peterborough, which is the last site of the London Brick Company, owned by Forterra PLC. Australia In Victoria, Australia, at the Brunswick brickworks, there are two surviving kilns converted to residences, and a chimney from a third kiln; there is another in Box Hill, also in Melbourne. In Adelaide, South Australia, the last remaining Hoffmann kiln in the state is at the old Hallett Brickworks site in Torrensville. There is one at St Peters in Sydney, New South Wales. In Western Australia, the kiln at the Maylands Brickworks in the Perth suburb of Maylands, which operated from 1927 to 1982, is the only remaining Hoffmann kiln in the state. Catalonia Bòbila de Bellamar at Calafell. Other countries There is a complete kiln in the restored Tsalapatas brick factory in Volos, Greece, that has been converted into an industrial museum. There are two in New Zealand. Kaohsiung City in Taiwan is also home to a Hoffmann kiln, built by the Japanese government in 1899. References External links History of Hoffman Preston Grange tour site Evaluation of Hoffman Kiln Technology RCAHMS Canmore Industrial processes Kilns Lime kilns Firing techniques
Hoffmann kiln
Chemistry,Engineering
1,235
24,344,478
https://en.wikipedia.org/wiki/GVA%20Consultants
GVA Consultants was a Swedish marine and offshore engineering company specialising in the design of offshore structures and semi-submersible platforms. History The experience of GVA as a marine engineering company stretches back more than 25 years. There is an even longer history of building ships of various types, semi-submersibles, jack-ups and offshore modules at the shipyard Götaverken Arendal AB, where GVA is based. From the late 1970s the yard focused mainly on the offshore industry. The company designed, constructed and delivered 14 semi-submersibles from its facilities in Gothenburg. Seven units of the GVA series have also been built by yards all over the world, in different sizes and for various requirements. The units built at the yard in Gothenburg were adapted for duties such as production, drilling, diving support and accommodation/service. The deliveries included six Pacesetter type units and eight GVA designs. The shipyard closed down in 1989 and GVA Consultants AB was established from the shipyard's technical departments. GVA started out as a small company focusing on engineering services and conceptual studies. The company grew considerably during the years when larger contracts such as Visund, Troll C, Åsgard B and the Kristin Gas Field were awarded to it. The company continued to grow and took on even larger projects such as the Thunder Horse and Atlantis oil fields. In 2001, Halliburton KBR (KBR), formerly Kellogg Brown & Root, acquired GVA Consultants AB from British Maritime Technology Ltd (BMT) for an undisclosed amount. Halliburton KBR was the engineering and construction segment of Halliburton, the world's largest provider of products and services to the petroleum and energy industries.
GVA Consultants' range of products and services included: Conceptual designs Basic designs for the GVA Series of semi-submersibles Basic designs for other vessels Basic design for conversions or upgrades Consultation and project management support Research & development GVA was a subsidiary of KBR but operated as a fully independent company providing specialized design services. The owner KBR shut down GVA Consultants in 2016. Vessels References External links GVA official website KBR official website Companies based in Gothenburg Engineering companies of Sweden Offshore engineering
GVA Consultants
Engineering
453
14,640,744
https://en.wikipedia.org/wiki/Turtle%20Island
Turtle Island is a name for Earth or North America, used by some American Indigenous peoples, as well as by some Indigenous rights activists. The name is based on a creation myth common to several indigenous peoples of the Northeastern Woodlands of North America. A number of contemporary works continue to use and/or tell the Turtle Island creation story. Lenape The Lenape story of the "Great Turtle" was first recorded by Europeans between 1678 and 1680 by Jasper Danckaerts. The story is shared by other Northeastern Woodlands tribes, notably the Iroquois peoples. The Lenape believe that before creation there was nothing, an empty dark space. However, in this emptiness, there existed a spirit of their creator, Kishelamàkânk. Eventually in that emptiness, he fell asleep. While he slept, he dreamt of the world as we know it today, the Earth with mountains, forests, and animals. He also dreamt up man, and he saw the ceremonies man would perform. Then he woke up from his dream to the same nothingness he was living in before. Kishelamàkânk then started to create the Earth as he had dreamt it. First, he created helper spirits, the Grandfathers of the North, East, and West, and the Grandmother of the South. Together, they created the Earth just as Kishelamàkânk had dreamt it. One of their final acts was creating a special tree. From the roots of this tree came the first man, and when the tree bent down and kissed the ground, woman sprang from it. All the animals and humans did their jobs on the Earth, until a problem eventually arose. There was a tooth of a giant bear that could give the owner magical powers, and the humans started to fight over it. Eventually, the wars got so bad that people moved away, and made new tribes and new languages. Kishelamàkânk saw this fighting and decided to send down a spirit, Nanapush, to bring everyone back together. 
He went on top of a mountain and started the first Sacred Fire, which gave off a smoke that caused all the people of the world to come investigate what it was. When they all came, Nanapush created a pipe with a sumac branch and a soapstone bowl, and the creator gave him Tobacco to smoke with. Nanapush then told the people that whenever they fought with each other, they should sit down and smoke tobacco in the pipe, and they would make decisions that were good for everyone. The same bear tooth later caused a fight between two evil spirits, a giant toad and an evil snake. The toad was in charge of all the waters, and amidst the fighting he ate the tooth and the snake. The snake then proceeded to bite his side, releasing a great flood upon the Earth. Nanapush saw this destruction and began climbing a mountain to avoid the flood, all the while grabbing animals that he saw and sticking them in his sash. At the top of the mountain there was a cedar tree that he started to climb, and as he climbed he broke off limbs of the tree. When he got to the top of the tree, he pulled out his bow, played it and sang a song that made the waters stop. Nanapush then asked which animal he could put the rest of the animals on top of in the water. The turtle volunteered, saying he'd float and they could all stay on him, and that's why they call the land Turtle Island. Nanapush then decided the turtle needed to be bigger for everyone to live on, so he asked the animals if one of them would dive down into the water to get some of the old Earth. The beaver tried first, but came up dead and Nanapush had to revive him. The loon tried second, but its attempt ended with the same fate. Lastly, the muskrat tried. He stayed down the longest, and came up dead as well, but he had some Earth on his nose that Nanapush put on the Turtle's back. Because of his accomplishment, Nanapush told the muskrat he was blessed and his kind would always thrive in the land.
Nanapush then took out his bow and again sang, and the turtle started to grow. It kept growing, and Nanapush sent out animals to try to get to the edge to see how long it had grown. First, he sent the bear, and the bear returned in two days saying he had reached the end. Next, he sent out the deer, who came back in two weeks saying he had reached the end. Finally, he sent the wolf, and the wolf never returned because the land had gotten so big. Lenape tradition says that wolves howl to call their ancestor back home. Haudenosaunee According to the oral tradition of the Haudenosaunee (or "Iroquois"), "the earth was the thought of [a ruler of] a great island which floats in space [and] is a place of eternal peace." Sky Woman fell down to the earth when it was covered with water, or more specifically, when there was a "great cloud sea". Various animals tried to swim to the bottom of the ocean to bring back dirt to create land. Muskrat succeeded in gathering dirt, which was placed on the back of a turtle. This dirt began to multiply and also caused the turtle to grow bigger. The turtle continued to grow bigger and bigger and the dirt continued to multiply until it became a huge expanse of land. Thus, when Iroquois cultures refer to the earth, they often call it Turtle Island. According to Converse and Parker, the Iroquois faith shared with other religions the "belief that the Earth is supported by a gigantic turtle." In the Seneca language, the mythical turtle is called Hah-nu-nah, while the name for an everyday turtle is ha-no-wa. In Susan M. Hill's version of the story, the muskrat or other animals die in their search for land for the Sky Woman (named Mature Flower in Hill's telling). This is a representation of the Haudenosaunee beliefs of death and chaos as forces of creation, as we all give our bodies to the land to become soil, which in turn continues to support life.
This concept plays out again when Mature Flower's daughter dies during childbirth, becoming the first person to be buried on the turtle's back and whose burial post helped grow various plants such as corn and strawberries. This, according to Hill, also shows how soil, and the land itself, has the ability to act and shape creation. Some tellings do not include this expanded episode as part of the Creation Story; however, these differences are important to note when considering Haudenosaunee traditions and relationships. Indigenous rights activism and environmentalism The name Turtle Island has been used by many Indigenous cultures in North America, and by both native and non-native activists, especially since the 1970s when the term came into wider usage. American author and ecologist Gary Snyder uses the term to refer to North America, writing that it synthesizes both indigenous and colonizer cultures by translating the indigenous name into the colonizer's languages (the Spanish "Isla Tortuga" having been proposed as a name as well). Snyder argues that understanding North America under the name of Turtle Island will help shift conceptions of the continent. Turtle Island has been used by writers and musicians, including Snyder for his Pulitzer Prize-winning book of poetry, Turtle Island; the Turtle Island Quartet jazz string quartet; Tofurky manufacturer Turtle Island Foods; and the Turtle Island Research Cooperative in Boise, Idaho. The Canadian Association of University Teachers has put into practice the acknowledgment of indigenous territory and claims, particularly at institutions located within unceded land or covered by perpetual decrees such as the Haldimand Tract. At Canadian universities, many courses, student and academic meetings, as well as convocation and other celebrations begin with a spoken acknowledgement of the traditional Indigenous territories in which they are taking place, sometimes including reference to Turtle Island.
Contemporary works There are a number of contemporary works which continue to use and/or tell the story of the Turtle Island creation story. The Truth About Stories by Thomas King Thomas King's book tells us that "the truth about stories is they're all we are." King's book explores the power of story both in native lives and in the lives of every person on this planet. Every chapter opens with a telling of the story of the world on the back of a turtle in space, and in each chapter, it is slightly altered to show how stories change through tellers and audiences. Their fluidity is itself a characteristic of the story as they traverse through time. King provides us with his own telling of the story using a woman named Charm as his Sky Woman. Charm is from a different planet and is described as being curious to a fault, often asking the animals of her planet questions they deem to be too nosy. When she becomes pregnant, she develops a craving for Red Fern Root, which can only be found underneath the oldest tree. While digging for the Red Fern Root she digs so deep she makes a hole in the planet, and in her curiosity falls through all the way to earth. King tells us that this is a young Earth from before land was created, and in order to save Charm from falling hard and fast into the water and upsetting the stillness of the water, all the water birds fly up to catch her. With no land to set her on they offer her the back of the turtle. When Charm is almost ready to give birth the animals fear that the turtle will be too crowded, so she asks the animals to dive down to find mud so that she can use its magic to build dry land. Many animals try but most fail, until the otter dives down for days before finally surfacing, passed out from exhaustion, clutching mud in its paws. Charm creates land from the mud, magic, and the turtle's back and gives birth to twins which keep the earth in balance. 
One twin flattened out the land, created light, and created woman, while the other made valleys and mountains, shadows, and man. King emphasizes that the Turtle Island creation story creates "a world in which creation is a shared activity...a world that begins in chaos and moves toward harmony." He explains that understanding and continuing to tell this story creates a world that values these ideas and relationships with nature. Without that understanding, we fail to uphold the relationships forged by Charm, the twins, and the animals that created the earth. Braiding Sweetgrass by Robin Wall Kimmerer Robin Wall Kimmerer's book, Braiding Sweetgrass, addresses the need for us to understand our reciprocal relationships with nature in order for us to understand and use ecology as a means to save the earth. Kimmerer's version of the story starts off with the Sky Woman falling from a hole in the sky, cradling something tightly in her hands. Geese rise up to soften her landing and place her on the back of a turtle so that she does not drown. All the animals congregate to help find dirt for the Sky Woman so that she can build her habitat, some giving their lives in the search. Finally, the muskrat surfaces, dead but clutching a handful of soil for the Sky Woman, who takes the offering gratefully and uses seeds from The Tree of Life to begin her garden using her gratitude and the gifts from the animals, thus creating Turtle Island as we know it. Through the Sky Woman story, Kimmerer tells us that we cannot "begin to move toward ecological and cultural sustainability if we cannot even imagine what the path feels like." Cherokee Stories of the Turtle Island Liars' Club by Christopher B. Teuton Christopher B. Teuton's book provides a comprehensive look into Cherokee oral traditions and art to bring them into the contemporary moment.
He put together his collection with three friends, also master storytellers, who get together to swap stories from around the 14 Cherokee states. The first chapter of the book, Beginnings, starts with a telling of the Sky Woman story. Notably, this telling of Turtle Island has the water beetle dive for the earth necessary for the sky woman, where often you will see a muskrat or otter. Turtle Island is a running theme throughout the book, as it is the beginning of life and story. We Are Water Protectors by Carole Lindstrom We Are Water Protectors is a children's storybook written by Carole Lindstrom in 2020 in response to the building of the Dakota Access Pipeline, represented as a large black snake in the book. The book says that water is the source of all life, and that it is our duty to protect our water sources so that we can preserve not only ourselves but also the animals and the environment. The story draws important meanings from the Turtle Island creation story, such as water as the origin of life, and closes with a drawing of the main character returning the turtle to the water, saying "We are stewards of the earth. Our spirits are not to be broken." See also Geographical renaming – the practice of political renaming Abya Yala – a similar name used by the Guna people and others to refer to the Americas as a whole Aotearoa – the Māori name for New Zealand Aztlán – the legendary ancestral home of the Aztec peoples Anahuac – Nahuatl name for the historical and cultural region of Mexico Cemanahuac – Nahuatl name used by the Mexica to refer to the larger region beyond their empire, between the Pacific and Atlantic Ocean Turtles in North American Indigenous Mythology World Turtle Discworld Turtle Island (Lake Erie) References Specific Bibliography External links Geography of North America Iroquois legendary creatures Legendary creatures of the indigenous peoples of North America Legendary turtles Mythological islands Native American toponymy Creation myths
Turtle Island
Astronomy
2,836
4,406,537
https://en.wikipedia.org/wiki/Sun%20SPOT
Sun SPOT (Sun Small Programmable Object Technology) was a sensor node for a wireless sensor network developed by Sun Microsystems and announced in 2007. The device used the IEEE 802.15.4 standard for its networking and, unlike other available sensor nodes, used the Squawk Java virtual machine. After the acquisition of Sun Microsystems by Oracle Corporation, the Sun SPOT platform was supported but its forum was shut down in 2012. A mirror of the old site is maintained for posterity. Hardware The completely assembled device fit in the palm of a hand. Its first processor board included a 32-bit ARM architecture CPU with an ARM920T core running at 180 MHz. It had 512 KB RAM and 4 MB flash memory. A 2.4 GHz IEEE 802.15.4 radio had an integrated antenna, and a USB interface was included. A sensor board included a three-axis accelerometer (with 2G and 6G range settings), temperature sensor, light sensor, 8 tri-color LEDs, analog and digital inputs, two momentary switches, and 4 high-current output pins. The unit used a 3.7 V rechargeable 750 mAh lithium-ion battery, had a 30 µA deep sleep mode, and had battery management provided by software. Software The device's use of Java device drivers was unusual, since Java is generally hardware-independent. Sun SPOT used Squawk, a small Java ME virtual machine that ran directly on the processor without an operating system. Both the Squawk VM and the Sun SPOT code are open source. Standard Java development environments such as NetBeans could be used to create Sun SPOT applications. The management and deployment of applications were handled by ant scripts, which could be called from a development environment, the command line, or the tool provided with the SPOT SDK, "solarium". The nodes communicate using the IEEE 802.15.4 standard, including the base-station approach to sensor networking. Protocols such as Zigbee can be built on 802.15.4. Sun Labs reported implementations of RSA and elliptic curve cryptography (ECC) optimized for small embedded devices.
Availability Sun Microsystems Laboratories started research on sensor networks around 2004. After some initial experience using "Motes" from Crossbow Technology, a project began under Roger Meike to design an integrated hardware and software system. Sun sponsored a project at the Art Center College of Design called Autonomous Light Air Vessels in 2005. The first limited-production run of Sun SPOT development kits were released April 2, 2007, after months of delays. This introduction kit included two Sun SPOT demo sensor boards, a Sun SPOT base station, the software development tools, and a USB cable. The software was compatible with Windows XP, Mac OS X 10.4, and common Linux distributions. Some demonstration code was provided. A developer from Sun gave a demonstration in September 2007. After investigating commercial use, Sun moved to focus on educational users. The entire project, hardware, operating environment, Java virtual machine, drivers and applications, was available as open source in January 2008. Oracle Corporation acquired Sun Microsystems in 2010 and continued Sun SPOT development, through release 8 of the hardware (with Sun-Oracle logo) by March 2011. The 2011 version included larger memories and a faster processor, but with fewer inputs. In 2012 the forum said it would be "down for maintenance" until "mid-June". A new forum was started on the Oracle Technology Network on May 7, 2013. David G. Simmons, one of the SunSPOT developers for Sun Microsystems, maintained a blog through the end of 2010. He opened an alternative developers forum in July 2013 not connected to Oracle. When the project was shut down, the lead hardware engineer for the SunSPOT project, Bob Alkire, archived the hardware design on his personal website. References External links Sun Microsystems hardware Wireless sensor network Sensors Smart materials
Sun SPOT
Materials_science,Technology,Engineering
784
56,563,719
https://en.wikipedia.org/wiki/Dragon%27s%20Breath%20%28dessert%29
Dragon's Breath is a frozen dessert made from cereal dipped in liquid nitrogen. When placed in the eater's mouth, it produces vapors which come out of the nose and mouth, giving the dessert its name. Description Dragon's Breath is made using colorful cereal balls described as having a flavor similar to Froot Loops. The cereal is dipped in liquid nitrogen and served in a cup. The eater uses a stick to skewer the balls. Once in the eater's mouth, the cold of the liquid nitrogen combines with the warmth of the mouth to release visible vapors out of the nose and mouth. According to Glutto Digest, Dragon's Breath was originally invented and served at a "minibar" by José Andrés in 2008. After Andrés stopped serving it at his LA restaurant, "The Bazaar", in 2009, it spread throughout Taiwan, Korea, and the Philippines over the following years. According to The Straits Times, Dragon's Breath first appeared in the Philippines and South Korea circa 2015, but gained popularity when the Los Angeles-based chain Chocolate Chair added it to its menu. Dragon's Breath is noted for the spectacle of its consumption more than its flavor, with several publications commenting on its compatibility with Instagram trends. Safety Liquid nitrogen is used in several foods and drinks to quickly freeze them or for the vapors it produces. Its consumption poses several dangers to humans. The extreme cold temperature can cause damage to human tissue, and the displacement of oxygen by nitrogen can cause asphyxiation. At a shop in Singapore in 2016, a woman was burned when the dessert stuck to her gums. In October 2017, two children at the Pensacola Interstate Fair in Florida were injured while handling or consuming Dragon's Breath. A 14-year-old suffered a burn on her thumb from contact with the frozen dessert. Another child suffered second-degree burns on the roof of her mouth.
Following complaints, the fair's general manager announced the vendor would not be allowed to sell Dragon's Breath at the next year's event. The smoking balls, which are puffy cereals infused with liquid nitrogen, are also sold as Heaven's Breath and Nitro Puff. In 2018, the FDA issued an alert against the delicacy. It warned of the colorful balls' danger to children afflicted with asthma, and of severe skin and internal-organ damage, burns, breathing difficulty and life-threatening injuries. External links References Frozen desserts Cryogenics
Dragon's Breath (dessert)
Physics
502
12,869,384
https://en.wikipedia.org/wiki/Astrobiology%20%28journal%29
Astrobiology is a peer-reviewed scientific journal covering research on the origin, evolution, distribution and future of life across the universe. The journal's scope includes astrobiology, astrophysics, astropaleontology, bioastronomy, cosmochemistry, ecogenomics, exobiology, extremophiles, geomicrobiology, gravitational biology, life detection technology, meteoritics, origins of life, planetary geoscience, planetary protection, prebiotic chemistry, space exploration technology and terraforming. Abstracting and indexing The journal is abstracted and indexed in a number of services. According to the Journal Citation Reports, the journal has a 2019 impact factor of 4.091. References External links Astrobiology journals Mary Ann Liebert academic journals Academic journals established in 2001 Bimonthly journals English-language journals
Astrobiology (journal)
Astronomy
168
59,829,183
https://en.wikipedia.org/wiki/Hazel%20Reeves
Hazel Reeves, MRSS SWA, is a British sculptor based in Sussex, England, who specialises in figure and portrait commissions in bronze. Her work has been shown widely across England and Wales. Public commissions can be found in Carlisle, London, Congleton and Manchester. Since 2021, Reeves' work has increasingly embraced soundscapes of nature and movement. Early life and education Reeves was born in Croydon, Surrey and now lives in Brighton, East Sussex. She attended Imberhorne School in East Grinstead, West Sussex, Kingston Business School and the London School of Economics and Political Science (LSE), where she studied for an MSc (Econ) in international development and gender equality. In 2003 she studied sculpture with Sylvia MacRae Brown at the University of Sussex and at the Heatherley School of Fine Art (London), and in 2009 at the Florence Academy of Art, Italy. Career Reeves' first quasi-public commission was of Sadako Sasaki for the Hedd Wen Peace Place, Llanfoist, Abergavenny, unveiled on the World Day of Peace, 21 September 2012. It tells the story of Sadako and her 1000 paper cranes, used worldwide in peace education. The statue of Sir Nigel Gresley, designer of the steam locomotives Flying Scotsman and Mallard, was Reeves' first major public commission. Her original design had included a mallard duck, but it was removed after objections from two relatives who thought it was demeaning. The statue was unveiled at London King's Cross railway station on 6 April 2016, the 75th anniversary of his death. On International Women's Day, 8 March 2018, Reeves' Cracker Packers statue was unveiled in Caldewgate, Carlisle, close to the pladis factory, where Carr's Table Water Biscuits are manufactured. The statue celebrates the lives of women biscuit factory workers from the Carr's factory in Carlisle. Based on former and current Cracker Packers, the statue is of two women factory workers, one from the past and one from the present, standing atop a giant Carr's Table Water Biscuit.
The statue was commissioned by Carlisle City Council and was one of hundreds nominated for Historic England's "Immortalised" season in 2018. In 2017, Reeves' winning design – Rise up, Women – was selected from a shortlist of six designs for a bronze statue of Emmeline Pankhurst, winning the public vote and being the unanimous choice of the WoManchester Statue Project selection panel. The statue of Emmeline Pankhurst was unveiled in St Peter's Square, Manchester (her hometown) on 14 December 2018. In 2021 it won the Public Statues and Sculpture Association (PSSA) Marsh Award for Excellence in Public Sculpture. Reeves' statue of Elizabeth Wolstenholme Elmy (1833-1918), a pioneering activist who fought for equality throughout her life, was unveiled in Congleton by Baroness Hale of Richmond on International Women's Day, 8 March 2022. Reeves seeks to redress the lack of representation of women in some of her public commissions as well as in private commissions, such as portrait sculptures of the disability rights activists Baroness Jane Campbell and Diane Kingston. Reeves has been appointed to sculpt Ada Nield Chew (1870-1945), the vocal factory worker who became a women's rights campaigner, for installation in Crewe. The 'Statue for Ada' campaign is coordinated by Cheshire Women's Collaboration. Sir John Manduell CBE, the Founding Principal who brought together two Manchester music schools to establish the Royal Northern College of Music (RNCM), will be honoured in June 2024 with a new bust created by Hazel Reeves. Reeves was artist-in-residence in 2021 at Knepp Estate, West Sussex, recording bird soundscapes to inspire movement. Her resultant Sculptural Murmurings project at Fabrica Gallery, Brighton, was funded by the National Lottery through Arts Council England, which is also funding Reeves' Soundscapes of Hope project in 2022/23, drawing on her field recordings at Knepp and the nature reserves of Svartådalan, Sweden.
Two sound events followed in 2023: Layback with Nature (Phoenix Art Space, Brighton) and Sculptural Murmurings (II) (Fabrica Gallery). In 2024, Reeves collaborated with pianist and composer Damian Montagu on the track Knepp Dawn, released on 5 May 2024 to mark International Dawn Chorus Day. The track celebrates the dawn chorus in the Knepp scrubland, which features bird species facing cataclysmic declines elsewhere, such as the nightingale, turtle dove, cuckoo and white stork. Reeves was elected to the Society of Women Artists (SWA) in 2009 and elected a member of the Royal Society of Sculptors (MRSS) in 2017. She teaches portrait sculpture workshops at Art Junction in Billingshurst, Phoenix Brighton, Morley College (London) and Masterclasses at the Art Academy (London). References External links Knepp Wildland Podcast - The Soundscape, with Hazel Reeves, Episode 24 Living people Year of birth missing (living people) 21st-century English sculptors 21st-century English women artists Alumni of Kingston University Alumni of the London School of Economics Alumni of the University of Sussex Artists from Brighton English women sculptors Artists from London People from Croydon 21st-century British women sculptors Sound artists Field recording
Hazel Reeves
Engineering
1,085
54,186,204
https://en.wikipedia.org/wiki/Normustine
Normustine, also known as bis(2-chloroethyl)carbamic acid, is a nitrogen mustard and alkylating antineoplastic agent (i.e., chemotherapy agent). It is a metabolite of a number of antineoplastic agents that have been developed for the treatment of tumors, including estramustine phosphate, alestramustine, cytestrol acetate, and ICI-85966 (stilbostat), of which only the first, estramustine phosphate, has actually been marketed. References Alkylating antineoplastic agents Carbamates Human drug metabolites Nitrogen mustards Organochlorides Chloroethyl compounds
Normustine
Chemistry
144
18,013,064
https://en.wikipedia.org/wiki/HD%2040307%20b
HD 40307 b is an extrasolar planet orbiting the star HD 40307, located 42 light-years away in the direction of the southern constellation Pictor. The planet was discovered by the radial velocity method, using the European Southern Observatory's HARPS apparatus, in June 2008. It is the second smallest of the planets orbiting the star, after HD 40307 e. The planet is of interest because its star has relatively low metallicity, supporting a hypothesis that the metallicity of protostars determines what kind of planets they will form. Discovery As with many other extrasolar planets, HD 40307 b was discovered by measuring variations in the radial velocity of the star it orbits. These measurements were made by the High Accuracy Radial Velocity Planet Searcher (HARPS) spectrograph at the Chile-based La Silla Observatory. The discovery was announced at the astrophysics conference that took place in Nantes, France between 16 and 18 June 2008. HD 40307 b was one of three planets around HD 40307 announced at the time. Orbit and mass HD 40307 b is the second lightest planet discovered in the system, with at least 4.2 times the mass of the Earth. The planet orbits the star HD 40307 every 4.3 Earth days, corresponding to its location approximately 0.047 astronomical units from the star. The eccentricity of the planet's orbit was found not to differ significantly from zero, meaning that there is insufficient data to distinguish the orbit from an entirely circular one. The star around which HD 40307 b orbits has a low metallicity compared to other planet-bearing stars. This supports a hypothesis that the metallicity of stars at their birth may determine whether a protostar's accretion disk forms gas giants or terrestrial planets. In 2009, a mathematical model by the astronomer Rory Barnes found that "Planet b's orbit must be more than 15° from face-on"; however, it cannot be much more than that. Characteristics HD 40307 b does not transit and has not been imaged. 
More specific characteristics, such as its radius, composition, and possible surface temperature, cannot be determined. With a lower mass bound of 4.2 times the mass of the Earth, HD 40307 b is presumably too small to be a jovian planet. This idea was challenged in a 2009 study, which stated that if HD 40307 b is terrestrial, the planet would be highly unstable and would be affected by tidal heating more strongly than Io, a volcanic satellite of Jupiter; restrictions that seem to bind terrestrial planets, however, do not restrict ice giant planets like Neptune or Uranus. As strong tidal forces often result in the destruction of larger natural satellites of planets orbiting close to a star, it is unlikely that HD 40307 b hosts any satellites. HD 40307 b, c, and d are presumed to have migrated into their present orbits. Trivia The planet was named "Good Planet" in the xkcd strip "Exoplanet Names" in August 2013. See also List of exoplanets References External links HD 40307 Pictor Super-Earths Exoplanets discovered in 2008 Exoplanets detected by radial velocity
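The quoted orbital period (about 4.3 days) and separation (about 0.047 AU) can be cross-checked with Kepler's third law. A minimal sketch; the stellar mass used below (roughly 0.77 solar masses for the K-type star HD 40307) is an assumed value not given in the article, and the function name is illustrative:

```python
# Kepler's third law in solar-system units: a^3 = M * P^2,
# with a in AU, P in years, and M in solar masses.
def semi_major_axis_au(period_days, stellar_mass_msun):
    period_years = period_days / 365.25
    return (stellar_mass_msun * period_years ** 2) ** (1 / 3)

# Assumed stellar mass for HD 40307 (not stated in the text): ~0.77 M_sun.
a = semi_major_axis_au(4.3, 0.77)
print(round(a, 3))  # 0.047 -- consistent with the quoted separation
```

Sanity check: for the Earth (period 365.25 days, one solar mass) the same formula returns 1 AU.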
HD 40307 b
Astronomy
663
40,925,603
https://en.wikipedia.org/wiki/Oliceridine
Oliceridine, sold under the brand name Olinvyk, is an opioid medication that is used for the treatment of moderate to severe acute pain in adults. It is given by intravenous (IV) injection. The most common side effects include nausea, vomiting, dizziness, headache, constipation, itchy skin and low oxygen levels in blood. It was approved for medical use in the United States in August 2020. Medical uses Oliceridine is indicated for short-term intravenous use in hospitals or other controlled clinical settings, such as during inpatient and outpatient procedures. It is not indicated for at-home use. Adverse effects The safety profile of oliceridine is similar to other opioids. As with other opioids, the most common side effects of oliceridine are nausea, vomiting, dizziness, headache and constipation. Prolonged use of opioid analgesics during pregnancy can result in neonatal opioid withdrawal syndrome. Olinvyk carries a boxed warning about addiction, abuse and misuse; life-threatening respiratory depression; neonatal opioid withdrawal syndrome; and risks from concomitant use with benzodiazepines or other central nervous system depressants. Unlike other opioids for intravenous administration, Olinvyk has a maximum recommended daily dose limit of 27 milligrams. Contraindications Oliceridine should not be given to people with significant respiratory depression; acute or severe bronchial asthma in an unmonitored setting or in the absence of resuscitative equipment; known or suspected gastrointestinal obstruction; or known hypersensitivity to the medication. Pharmacology Pharmacodynamics Oliceridine is a μ-opioid receptor biased agonist developed by Trevena. In cell-based (in vitro) research, oliceridine elicits robust G protein signaling, with potency and efficacy similar to that of morphine, but with less β-arrestin 2 recruitment and receptor internalization. 
It has been suggested that this might be due to its low intrinsic efficacy, rather than functional selectivity or 'G protein bias', although the validity of that conclusion has also been questioned. In vivo, it may have fewer adverse effects (including respiratory depression and constipation) compared with morphine. In general, in vitro potency does not guarantee any clinical relevance in humans. History A total of 1,535 participants with moderate to severe acute pain were treated with oliceridine in controlled and open-label trials. Its safety and efficacy were established by comparing oliceridine to placebo in randomized, controlled studies of participants who had undergone bunion surgery or abdominal surgery. Participants administered oliceridine reported decreased pain compared to placebo at the approved doses. The U.S. Food and Drug Administration (FDA) approved oliceridine based on evidence from three clinical trials (Trial 1/NCT02815709, Trial 2/NCT02820324 and Trial 3) of 1558 participants 18 to 89 years old who were in need of pain medication. The trials were conducted at 53 sites in the United States. Trial 1 enrolled participants who underwent bunion surgery. Participants with moderate to severe post-surgical pain were randomly assigned to receive oliceridine, placebo or an approved drug to treat pain (morphine) for 48 hours intravenously. Neither the participants nor the health care providers knew which treatment was being given until after the trial was completed. All participants were allowed to use a rescue pain medication, if the pain was not well controlled using the trial medications. Trial 2 enrolled participants who underwent surgical removal of abdominal wall fat (abdominoplasty) and had moderate to severe pain. Participants were randomly assigned to receive oliceridine, placebo or an approved drug to treat pain (morphine) for 24 hours intravenously. 
Neither the participants nor the health care providers knew which treatment was being given until after the trial was completed. All participants were allowed to use a rescue pain medication if the pain was not well controlled using the trial medications. To assess the benefits of oliceridine, participants used a numerical scale to score how severe the pain was after the surgery. The scores for the participants receiving oliceridine were compared to the scores for the participants who received placebo and those who received morphine. In the third trial, participants who had pain following various types of surgery or due to a medical condition received at least one dose of oliceridine. Data from this trial were used only to assess the side effects of oliceridine. Oliceridine was approved for medical use in the United States in August 2020. The FDA granted approval of Olinvyk to Trevena Inc. Society and culture Legal status An advisory committee of the U.S. Food and Drug Administration (FDA) voted against the approval of oliceridine in 2018, due to concerns that the benefit of the drug did not exceed the risk. The risks of oliceridine include prolongation of the QT interval on the ECG, and depression of the respiratory drive (which could cause a person to stop breathing). As a result of the committee's vote, the FDA declined to approve oliceridine, citing safety concerns. Oliceridine was approved for medical use in the United States in August 2020. The FDA granted approval of Olinvyk to Trevena Inc. The DEA issued an interim final rule on October 30, 2020, designating oliceridine as CSA Schedule II (DEA Code 9245). See also SHR9352 Tegileridine TRV734 References External links Analgesics Biased ligands Mu-opioid receptor agonists 2-Pyridyl compounds Spiro compounds Thiophenes
Oliceridine
Chemistry
1,224
48,781,047
https://en.wikipedia.org/wiki/Glossary%20of%20algebraic%20topology
This is a glossary of properties and concepts in algebraic topology in mathematics. See also: glossary of topology, list of algebraic topology topics, glossary of category theory, glossary of differential geometry and topology, Timeline of manifolds. Convention: Throughout the article, I denotes the unit interval, Sn the n-sphere and Dn the n-disk. Also, throughout the article, spaces are assumed to be reasonable; this can be taken to mean for example, a space is a CW complex or compactly generated weakly Hausdorff space. Similarly, no attempt is made to be definitive about the definition of a spectrum. A simplicial set is not thought of as a space; i.e., we generally distinguish between simplicial sets and their geometric realizations. Inclusion criterion: As there is no glossary of homological algebra in Wikipedia right now, this glossary also includes a few concepts in homological algebra (e.g., chain homotopy); some concepts in geometric topology and differential topology are also fair game. On the other hand, the items that appear in glossary of topology are generally omitted. Abstract homotopy theory and motivic homotopy theory are also outside the scope. Glossary of category theory covers (or will cover) concepts in theory of model categories. See the glossary of symplectic geometry for the topics in symplectic topology such as quantization. Notes References Lectures delivered by Michael Hopkins and Notes by Akhil Mathew, Harvard. (despite the title, it contains a significant amount of general results.) the 1970 MIT notes Further reading José I. Burgos Gil, The Regulators of Beilinson and Borel Lectures on groups of homotopy spheres by JP Levine B. I. Dundas, M. Levine, P. A. Østvær, O. Röndigs, and V. Voevodsky. Motivic homotopy theory. Universitext. Springer-Verlag, Berlin, 2007. Lectures from the Summer School held in Nordfjordeid, August 2002. 
External links Algebraic Topology: A guide to literature Algebraic topology Algebraic topology Wikipedia glossaries using description lists
Glossary of algebraic topology
Mathematics
475
38,195,771
https://en.wikipedia.org/wiki/NGC%205477
NGC 5477 is a dwarf galaxy located in the constellation of Ursa Major, 20 million light years away from Earth. It was discovered on April 14, 1789, by the astronomer William Herschel. References External links Dwarf galaxies Ursa Major 5477 09018 M101 Group 50262 Magellanic spiral galaxies
NGC 5477
Astronomy
68
871,458
https://en.wikipedia.org/wiki/Li%20Fan%20%28Han%20dynasty%29
Li Fan was a Chinese astronomer during the Han dynasty (202 BC – 220 AD). He noticed that the Moon does not move uniformly through its phases, by using background stars as reference. In AD 85, Li Fan and Bian Xin were tasked by Emperor Zhang with resolving inaccuracies in the Taichu calendar. He is also known to have worked with inflow clepsydras as opposed to earlier, typically less accurate outflow clepsydras. Measurements of the synodic periods of the planets are also attributed to him. An impact crater located in the Phaethontis quadrangle of Mars, at 47.2°S latitude and 153.2°W longitude, was named in his honor. The diameter of the crater is approximately 104.8 km. References Year of birth missing Year of death missing 1st-century Chinese astronomers
Li Fan (Han dynasty)
Astronomy
183
14,878,404
https://en.wikipedia.org/wiki/MTA3
Metastasis-associated protein MTA3 is a protein that in humans is encoded by the MTA3 gene. MTA3 protein localizes in the nucleus as well as in other cellular compartments. MTA3 is a component of the nucleosome remodeling and deacetylase (NuRD) complex and participates in gene expression. The expression pattern of MTA3 is opposite to that of MTA1 and MTA2 during mammary gland tumorigenesis. However, MTA3 is also overexpressed in a variety of human cancers. Discovery Mouse Mta3 was initially identified as a partial cDNA with open reading frames in a screening of a mouse keratinocyte cDNA library with a human MTA1 partial fragment by My G. Mahoney's research team. The full-length Mta3 cDNA was cloned through 5'-RACE methodology using RNA from C57BL/6J mouse skin. The deduced amino acid sequence and its comparison with the sequences in GenBank established MTA3 as the third MTA family member. Gene and spliced variants The Mta3 gene is localized on chromosome 12p in mice and MTA3 on 2p21 in humans. The human MTA3 gene contains 20 exons and 19 alternatively spliced transcripts. Of these, nine MTA3 transcripts are predicted to code for six proteins of 392, 514, 515, 537, 590 and 594 amino acids, and two MTA3 transcripts code for polypeptides of 18 and 91 amino acids. The remaining 10 transcripts are non-coding RNAs. The murine Mta3 gene contains nine transcripts, six of which are predicted to code for proteins ranging from 251 to 591 amino acids, while one transcript codes for a 40-amino-acid polypeptide. The murine Mta3 gene also contains two predicted non-coding RNAs. Structure The overall organization of MTA3 protein domains is similar to that of the other two family members, with a BAH (Bromo-Adjacent Homology) domain, an ELM2 (egl-27 and MTA1 homology) domain, a SANT (SWI, ADA2, N-CoR, TFIIIB-B) domain, a GATA-like zinc finger, and one predicted bipartite nuclear localization signal (NLS). The SH3 motif of Mta3 allows it to interact with Fyn and Grb2 – both SH3-containing signaling proteins. 
Function Functions of MTA3 are believed to be differentially regulated depending on the cancer type. For example, MTA3 expression is downregulated in breast cancer and endometrioid adenocarcinomas, but MTA3 is overexpressed in non-small cell lung cancer and in human placenta and chorionic carcinoma cells. In breast cancer, loss of MTA3 promotes EMT and invasiveness of breast cancer cells by upregulating Snail, which in turn represses the E-cadherin adhesion molecule. In the mammary epithelium and breast cancer cells, MTA3 is an estrogen-regulated gene and part of a larger regulatory network involving MTA1 and the other MTAs, all modifiers of hormone response, which participate in the processes involved in growth and differentiation. Accordingly, the MTA3-NuRD complex regulates the expression of Wnt4 in mammary epithelial cells and mice, and controls Wnt4-dependent ductal morphogenesis. In contrast to its repressive actions, MTA3 also stimulates the expression of HIF1α as well as its target genes under hypoxic conditions in trophoblasts and is thought to be involved in differentiation during pregnancy. The MTA3-NuRD complex and its downstream targets have been shown to participate in primitive hematopoiesis and angiogenesis in a zebrafish model system. As part of the BCL6 corepressor complex, MTA3 regulates BCL6-dependent repression of target genes, including PRDM1, and modulates the differentiation of B-cells. Regulation The estrogen receptor stimulates the expression of MTA3 in breast cancer cells. The SP1 transcription factor stimulates the transcription of MTA3. MicroRNA-495 inhibits the level of MTA3 mRNA as well as the growth and migration of non-small cell lung cancer cells. 
β-elemene, a compound used in traditional Chinese medicine, upregulates MTA3 expression in breast cancer cells. Targets The MTA3-NuRD complex represses Snail, a master regulator of epithelial-to-mesenchymal transition (EMT); Wnt4 expression in mammary epithelial cells; and BCL6-corepressor target genes. The MTA3-NuRD complex also interacts with GATA3 to regulate the expression of GATA3 downstream targets. In addition, MTA3 upregulates HIF1α and its transactivation activity under hypoxic conditions. Notes References External links Transcription factors
MTA3
Chemistry,Biology
1,053
71,890,151
https://en.wikipedia.org/wiki/Vercel
Vercel Inc., formerly ZEIT, is an American cloud platform as a service company. The company maintains the Next.js web development framework. Vercel's platform is built around composable architecture, and deployments are handled through Git repositories, the Vercel CLI, or the Vercel REST API. Vercel is a member of the MACH Alliance. History Vercel was founded by Guillermo Rauch in 2015 as ZEIT. Rauch had previously created the realtime event-driven communication library Socket.IO. ZEIT was rebranded to Vercel in April 2020, although it retained the company's triangular logo. In June 2021, Vercel raised $102 million in a Series C funding round. As of May 2024, the company is valued at $3.25 billion. Acquisitions On December 9, 2021, Vercel acquired Turborepo. On October 25, 2022, Vercel acquired Splitbee. Architecture Deployments through Vercel are handled through Git repositories, with support for GitHub, GitLab, and Bitbucket repositories. Deployments are automatically given a subdomain under the vercel.app domain, although Vercel offers support for custom domains for deployments. Vercel's infrastructure uses Amazon Web Services and Cloudflare. Reception Vercel's clientele includes Airbnb, Uber, GitHub, Nike, Ticketmaster, Carhartt, IBM, and McDonald's. References Bibliography Citations American companies established in 2015 Companies based in San Francisco Blog hosting services Cloud computing providers Cloud infrastructure Cloud platforms Content delivery networks Free web hosting services Git (software) Internet technology companies of the United States Web service providers Project hosting websites Software companies established in 2015
Vercel
Technology
376
5,143,623
https://en.wikipedia.org/wiki/Rabi%20problem
The Rabi problem concerns the response of an atom to an applied harmonic electric field, with an applied frequency very close to the atom's natural frequency. It provides a simple and generally solvable example of light–atom interactions and is named after Isidor Isaac Rabi. Classical Rabi problem In the classical approach, the Rabi problem can be represented by the solution to the driven damped harmonic oscillator with the electric part of the Lorentz force as the driving term: where it has been assumed that the atom can be treated as a charged particle (of charge e) oscillating about its equilibrium position around a neutral atom. Here xa is its instantaneous magnitude of oscillation, its natural oscillation frequency, and its natural lifetime: which has been calculated based on the dipole oscillator's energy loss from electromagnetic radiation. To apply this to the Rabi problem, one assumes that the electric field E is oscillatory in time and constant in space: and xa is decomposed into a part ua that is in-phase with the driving E field (corresponding to dispersion) and a part va that is out of phase (corresponding to absorption): Here x0 is assumed to be constant, but ua and va are allowed to vary in time. However, if the system is very close to resonance (), then these values will be slowly varying in time, and we can make the assumption that , and , . With these assumptions, the Lorentz force equations for the in-phase and out-of-phase parts can be rewritten as where we have replaced the natural lifetime with a more general effective lifetime T (which could include other interactions such as collisions) and have dropped the subscript a in favor of the newly defined detuning , which serves equally well to distinguish atoms of different resonant frequencies. Finally, the constant has been defined. These equations can be solved as follows: After all transients have died away, the steady-state solution takes the simple form where "c.c." 
stands for the complex conjugate of the opposing term. Two-level atom Semiclassical approach The classical Rabi problem gives some basic results and a simple-to-understand picture of the issue, but in order to understand phenomena such as inversion, spontaneous emission, and the Bloch–Siegert shift, a fully quantum-mechanical treatment is necessary. The simplest approach is through the two-level atom approximation, in which one only treats two energy levels of the atom in question. No atom with only two energy levels exists in reality, but a transition between, for example, two hyperfine states in an atom can be treated, to first approximation, as if only those two levels existed, assuming the drive is not too far off resonance. The convenience of the two-level atom is that any two-level system evolves in essentially the same way as a spin-1/2 system, in accordance with the Bloch equations, which define the dynamics of the pseudo-spin vector in an electric field: where we have made the rotating wave approximation in throwing out terms with high angular velocity (and thus small effect on the total spin dynamics over long time periods) and transformed into a set of coordinates rotating at a frequency . There is a clear analogy here between these equations and those that defined the evolution of the in-phase and out-of-phase components of oscillation in the classical case. Now, however, there is a third term w, which can be interpreted as the population difference between the excited and ground state (varying from −1 to represent completely in the ground state to +1, completely in the excited state). Keep in mind that for the classical case, there was a continuous energy spectrum that the atomic oscillator could occupy, while for the quantum case (as we've assumed) there are only two possible (eigen)states of the problem. 
These equations can also be stated in matrix form: It is noteworthy that these equations can be written as a vector precession equation: where is the pseudo-spin vector, and acts as an effective torque. As before, the Rabi problem is solved by assuming that the electric field E is oscillatory with constant magnitude E0: . In this case, the solution can be found by applying two successive rotations to the matrix equation above, of the form and where Here the frequency is known as the generalized Rabi frequency, which gives the rate of precession of the pseudo-spin vector about the transformed u axis (given by the first coordinate transformation above). As an example, if the electric field (or laser) is exactly on resonance (such that ), then the pseudo-spin vector will precess about the u axis at a rate of . If this (on-resonance) pulse is shone on a collection of atoms originally all in their ground state (w = −1) for a time , then after the pulse, the atoms will now all be in their excited state (w = +1) because of the (or 180°) rotation about the u axis. This is known as a -pulse and has the result of a complete inversion. The general result is given by The expression for the inversion w can be greatly simplified if the atom is assumed to be initially in its ground state (w0 = −1) with u0 = v0 = 0, in which case Rabi problem in time-dependent perturbation theory In the quantum approach, the periodic driving force can be considered as periodic perturbation and, therefore, the problem can be solved using time-dependent perturbation theory, with where is the time-independent Hamiltonian that gives the original eigenstates, and is the time-dependent perturbation. Assume at time , we can expand the state as where represents the eigenstates of the unperturbed states. For an unperturbed system, is a constant. Now, let's calculate under a periodic perturbation . 
Applying operator on both sides of the previous equation, we can get and then multiply both sides of the equation by : When the excitation frequency is at resonance between two states and , i.e. , it becomes a normal-mode problem of a two-level system, and it is easy to find that where The probability of being in the state m at time t is The value of depends on the initial condition of the system. An exact solution for a spin-1/2 system in an oscillating magnetic field was given by Rabi (1937). From that work, it is clear that the Rabi oscillation frequency is proportional to the magnitude of the oscillating magnetic field. Quantum field theory approach In Bloch's approach, the field is not quantized, and neither the resulting coherence nor the resonance is well explained. For the QFT approach, see mainly the Jaynes–Cummings model. See also Rabi cycle Rabi frequency Vacuum Rabi oscillation References Atomic physics Spintronics
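The generalized Rabi frequency and the π-pulse behaviour described above can be illustrated numerically. The formula below is the standard textbook result for the excited-state population of a two-level atom that starts in its ground state under a constant-amplitude drive; the function name is illustrative:

```python
import math

def excited_state_probability(omega, delta, t):
    """Excited-state population of a driven two-level atom.

    omega: Rabi frequency (drive strength), delta: detuning, t: time.
    Standard result: P_e = (omega / omega_gen)^2 * sin^2(omega_gen * t / 2),
    with generalized Rabi frequency omega_gen = sqrt(omega^2 + delta^2).
    """
    omega_gen = math.hypot(omega, delta)
    return (omega / omega_gen) ** 2 * math.sin(omega_gen * t / 2) ** 2

# On resonance (delta = 0), a pulse of duration t = pi/omega (a "pi-pulse")
# transfers the entire population to the excited state:
print(excited_state_probability(1.0, 0.0, math.pi))  # 1.0 (complete inversion)
```

Off resonance the oscillation speeds up but its amplitude drops: with delta equal to omega, the excited-state population never exceeds 1/2, which matches the prefactor (omega/omega_gen)² = 1/2.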
Rabi problem
Physics,Chemistry,Materials_science
1,444
13,207,749
https://en.wikipedia.org/wiki/Snow%20pillow
A snow pillow is a device for measuring snowpack, especially for automated reporting stations such as SNOTEL. The snow pillow measures the water equivalent of the snow pack based on the hydrostatic pressure created by the overlying snow. Any discrepancy due to bridging is minimized by the large dimension of the pillow, typically about 3 m × 3 m. Another application for snow pillows is to estimate the snow weight on a roof to warn of potential for roof collapse. Snow pillows were developed in the early 1960s. Set-up The large dimensions of the pillow (e.g. 3 m × 3 m) prevent any bridging that might occur from having an effect on the measurement readings. For snow pressure measurement on roofs, a smaller snow pillow (e.g. 1 m × 1 m) is the better choice because of the weight of the pillow's filling. See also Snowboard Snow gauge References Snow Telemetry Meteorological instrumentation and equipment
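The conversion from the measured pressure to snow water equivalent follows directly from the hydrostatic relation the pillow relies on. A minimal sketch; the function name and the sample value are illustrative, not part of any particular instrument's firmware:

```python
def swe_metres(pressure_pa):
    """Snow water equivalent (depth of water, in metres) implied by the
    hydrostatic pressure on the pillow.

    Hydrostatics: pressure = rho_water * g * SWE, so
    SWE = pressure / (rho_water * g).
    """
    RHO_WATER = 1000.0  # density of water, kg/m^3
    G = 9.81            # gravitational acceleration, m/s^2
    return pressure_pa / (RHO_WATER * G)

# 981 Pa of overburden pressure corresponds to 10 cm of water equivalent:
print(swe_metres(981.0))  # 0.1
```

Note that the reading is independent of the pillow's area: a larger pillow averages the pressure over more of the snowpack (reducing bridging error) but does not change the conversion.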
Snow pillow
Technology,Engineering
190
67,300,389
https://en.wikipedia.org/wiki/Allan%20David%20Stephen%20Barr
Allan David Stephen Barr (A. D. S. Barr) (11 September 1930 – 11 February 2018) was a British Chartered Engineer who was a professor at the University of Aberdeen. Barr was elected a Fellow of the Royal Society of Edinburgh in 1983. Family Allan David Stephen Barr was born on 11 September 1930, in Glasgow, Scotland, son of Allan and Agnes Barr. His father, Allan Barr, was a professor, a theologian and a moderator of the United Free Church of Scotland. (David, Barr, 2018) His grandfather, Rev James Barr, was elected as the first moderator of the United Free Church of Scotland in 1929. Education Barr was educated at the University of Edinburgh, where he studied mechanical engineering in the early 1950s. In May 1956 he submitted his PhD thesis under the title “Approximate theories for the flexural vibration of uniform beams and their derivation from the general elastic equations”. In this thesis, Barr described experimental work testing three deep rectangular-sectioned beams against theories of flexural vibration. 
He first explained how this work had been approached: “The approximate theories of flexural vibration dealt with in the thesis are those in which the problem is reduced to the solution of a differential equation with one dependent variable (the transverse displacement of the neutral line of the beam) by a process of making reasonable assumptions during the derivation of the equation. In order to facilitate comparisons of the effects of the various assumptions made in the different theories, the differential equations are derived from the general elastic equilibrium equations written in terms of the stress components.” He went on, “Particular attention is given to the equation which includes the effects of rotatory inertia and transverse shear (usually referred to as the Timoshenko equation) because of its interesting prediction for a finite beam of the possible existence of more than one natural frequency with the same number of nodes (i.e., a second spectrum of frequencies).” Barr then reported his research findings that “the predictions of the Timoshenko theory are closely followed, including the second spectrum frequencies. Third spectrum frequencies are detected very faintly for only one of the three beams, this is probably because lateral inertia is relatively unimportant for a cross-section of deep rectangular form”. Barr's research was supervised by R. N. Arnold and J. D. Robson, and this was acknowledged in his thesis. Career Barr worked as an academic and researcher in mechanical engineering at several universities, including the University of Edinburgh and the University of Dundee, serving as a lecturer, Reader, Head of the Department of Mechanical Engineering, and Dean of the Faculty of Engineering and Applied Science. When he retired from the University of Aberdeen in 1996, he was Emeritus Professor of Engineering. As an academic, Barr successfully supervised research students. 
For example, Haxton acknowledged his gratitude to Barr in his PhD thesis submitted to the University of Edinburgh in 1971, noting that "The author wishes to gratefully acknowledge the valuable guidance given by his supervisor, Dr. A.D.S. Barr of the Department of Mechanical Engineering, and the Studentship support of the Science Research Council, London, during the period of this research." Life Barr had a broad range of interests, including “flying as a member of the University Air Squadron, motorcycle racing, music, painting, fly fishing and nature conservation”. He died from dementia in Lincoln on 11 February 2018. References 20th-century Scottish engineers Mechanical engineers 1930 births 2018 deaths
Allan David Stephen Barr
Engineering
695
13,384,414
https://en.wikipedia.org/wiki/Bessel%27s%20correction
In statistics, Bessel's correction is the use of n − 1 instead of n in the formula for the sample variance and sample standard deviation, where n is the number of observations in a sample. This method corrects the bias in the estimation of the population variance. It also partially corrects the bias in the estimation of the population standard deviation. However, the correction often increases the mean squared error in these estimations. This technique is named after Friedrich Bessel. Formulation In estimating the population variance from a sample when the population mean is unknown, the uncorrected sample variance is the mean of the squares of deviations of sample values from the sample mean (i.e., using a multiplicative factor 1/n). In this case, the sample variance is a biased estimator of the population variance. Multiplying the uncorrected sample variance by the factor n/(n − 1) gives an unbiased estimator of the population variance. In some literature, this factor is called Bessel's correction. One can understand Bessel's correction as the degrees of freedom in the residuals vector (residuals, not errors, because the population mean is unknown): (x1 − x̄, ..., xn − x̄), where x̄ is the sample mean. While there are n independent observations in the sample, there are only n − 1 independent residuals, as they sum to 0. For a more intuitive explanation of the need for Bessel's correction, see the Source of bias section below. Generally, Bessel's correction is an approach to reduce the bias due to finite sample size. Such finite-sample bias correction is also needed for other estimates like skew and kurtosis, but in these the inaccuracies are often significantly larger. To fully remove such bias it is necessary to do a more complex multi-parameter estimation. For instance, a correct correction for the standard deviation depends on the kurtosis (normalized central 4th moment), but this again has a finite-sample bias and it depends on the standard deviation; i.e., both estimations have to be merged. 
Caveats There are three caveats to consider regarding Bessel's correction: It does not yield an unbiased estimator of standard deviation. The corrected estimator often has a higher mean squared error (MSE) than the uncorrected estimator. Furthermore, there is no population distribution for which it has the minimum MSE because a different scale factor can always be chosen to minimize MSE. It is only necessary when the population mean is unknown (and estimated as the sample mean). In practice, this generally happens. Firstly, while the sample variance (using Bessel's correction) is an unbiased estimator of the population variance, its square root, the sample standard deviation, is a biased estimate of the population standard deviation; because the square root is a concave function, the bias is downward, by Jensen's inequality. There is no general formula for an unbiased estimator of the population standard deviation, though there are correction factors for particular distributions, such as the normal; see unbiased estimation of standard deviation for details. An approximation for the exact correction factor for the normal distribution is given by using n − 1.5 in the formula: the bias decays quadratically (rather than linearly, as in the uncorrected form and Bessel's corrected form). Secondly, the unbiased estimator does not minimize mean squared error (MSE), and generally has worse MSE than the uncorrected estimator (this varies with excess kurtosis). MSE can be minimized by using a different factor. The optimal value depends on excess kurtosis, as discussed in mean squared error: variance; for the normal distribution this is optimized by dividing by n + 1 (instead of n − 1 or n). Thirdly, Bessel's correction is only necessary when the population mean is unknown, and one is estimating both population mean and population variance from a given sample, using the sample mean to estimate the population mean. 
In that case there are n degrees of freedom in a sample of n points, and simultaneous estimation of mean and variance means one degree of freedom goes to the sample mean and the remaining n − 1 degrees of freedom (the residuals) go to the sample variance. However, if the population mean is known, then the deviations of the observations from the population mean have n degrees of freedom (because the mean is not being estimated – the deviations are not residuals but errors) and Bessel's correction is not applicable. Source of bias Most simply, to understand the bias that needs correcting, think of an extreme case. Suppose the population is (0, 0, 0, 1, 2, 9), which has a population mean of 2 and a population variance of 31/3 ≈ 10.33. A sample of n = 1 is drawn; the best estimate of the population mean is then that single observed value. But what if we use the uncorrected formula (dividing by n) to estimate the variance? The estimate of the variance would be zero – and the estimate would be zero for any population and any sample of n = 1. The problem is that in estimating the sample mean, the process has already made our estimate of the mean close to the value we sampled – identical, for n = 1. In the case of n = 1, the variance just cannot be estimated, because there is no variability in the sample. But consider n = 2. Suppose the sample were (0, 2). Then the sample mean is 1 and the uncorrected sample variance is 1, but with Bessel's correction the estimate is 2, which is an unbiased estimate (if all possible samples of n = 2 are taken and this method is used, the average estimate will be 12.4, the same as the variance of the population values computed with Bessel's correction). To see this in more detail, consider the following example. Suppose the mean of the whole population is 2050, but the statistician does not know that, and must estimate it based on a small sample chosen randomly from the population. One may compute the sample average, which here is 2052. This may serve as an observable estimate of the unobservable population average, which is 2050. Now we face the problem of estimating the population variance. 
That is the average of the squares of the deviations from 2050. If we knew the population average of 2050, we could use it directly in the sum of squared deviations. But our estimate of the population average is the sample average, 2052. The actual average, 2050, is unknown. So the sample average, 2052, must be used, and the variance computed this way is smaller – and it (almost) always is. The only exception occurs when the sample average and the population average are the same. To understand why, consider that variance measures distance from a point, and within a given sample, the average is precisely the point which minimises the sum of squared distances. A variance calculation using any other value must produce a larger result. To see this algebraically, we use the simple identity (a + b)² = a² + 2ab + b², with a representing the deviation of an individual sample from the sample mean, and b representing the deviation of the sample mean from the population mean. Note that we have simply decomposed the actual deviation of an individual sample from the (unknown) population mean into two components: the deviation of the single sample from the sample mean, which we can compute, and the additional deviation of the sample mean from the population mean, which we cannot. Now, we apply this identity to the squares of deviations from the population mean, once for each of the five observations, and observe certain patterns. The sum of the entries in the middle column (2ab) must be zero, because the terms a summed across all 5 rows must equal zero: the a-terms contain the 5 individual samples, which – when added – naturally have the same sum as adding 5 times the sample mean of those 5 numbers (2052), so the subtraction of these two sums equals zero. The factor 2 and the term b in the middle column are equal for all rows, so the middle column can therefore be disregarded. 
The following statements explain the meaning of the remaining columns: The sum of the entries in the first column (a²) is the sum of the squares of the distances from the samples to the sample mean; the sum of the entries in the last column (b²) is the sum of squared distances between the measured sample mean and the correct population mean. Every single row now consists of a pair of a² (biased, because the sample mean is used) and b² (correction of the bias, because it takes the difference between the "real" population mean and the inaccurate sample mean into account). Therefore, the sum of all entries of the first and last columns now represents the correct variance, meaning that the sum of squared distances between the samples and the population mean is now used. The sum of the a²-column and the b²-column must be bigger than the sum of the entries of the a²-column alone, since all the entries in the b²-column are positive (except when the population mean is the same as the sample mean, in which case all of the numbers in the last column are 0). Therefore: The sum of squares of the distances from the samples to the population mean will always be bigger than the sum of squares of the distances to the sample mean, except when the sample mean happens to be the same as the population mean, in which case the two are equal. That is why the sum of squares of the deviations from the sample mean is too small to give an unbiased estimate of the population variance when the average of those squares is found. The smaller the sample size, the larger the difference between the sample variance and the population variance. Terminology This correction is so common that the terms "sample variance" and "sample standard deviation" are frequently used to mean the corrected estimators (the unbiased sample variance and the less-biased sample standard deviation), using n − 1. However, caution is needed: some calculators and software packages may provide both formulations, or only the less common, uncorrected one. 
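As a concrete illustration of this terminology caveat, Python's standard library exposes both conventions under distinct names (this example is mine, not from the article): statistics.pvariance divides by n, while statistics.variance applies Bessel's correction.

```python
import statistics

data = [0, 2]  # the two-point sample used earlier in the article

# "Population variance" formula: divides by n (uncorrected).
print(statistics.pvariance(data))

# "Sample variance" formula: divides by n - 1 (Bessel's correction).
print(statistics.variance(data))
```

For this sample the two calls return 1 and 2 respectively, so checking which convention a given package uses matters before comparing results.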
This article uses the following symbols and definitions: μ is the population mean; x̄ is the sample mean; σ² is the population variance; sn² is the biased sample variance (i.e., without Bessel's correction); s² is the unbiased sample variance (i.e., with Bessel's correction). The standard deviations will then be the square roots of the respective variances. Since the square root introduces bias, the terminology "uncorrected" and "corrected" is preferred for the standard deviation estimators: sn is the uncorrected sample standard deviation (i.e., without Bessel's correction); s is the corrected sample standard deviation (i.e., with Bessel's correction), which is less biased, but still biased. Formula The sample mean is given by x̄ = (x₁ + … + xₙ)/n. The biased sample variance is then written sn² = (1/n) Σᵢ (xᵢ − x̄)², and the unbiased sample variance is written s² = (1/(n − 1)) Σᵢ (xᵢ − x̄)². Proof Suppose that x₁, …, xₙ are independent and identically distributed random variables with expectation μ and variance σ². Knowing the values of the xᵢ at an outcome of the underlying sample space, we would like to get a good estimate for the variance σ², which is unknown. To this end, we construct a mathematical formula containing the xᵢ such that the expectation of this formula is precisely σ². This means that on average, this formula should produce the right answer. The educated, but naive, way of guessing such a formula would be sn² = (1/n) Σᵢ (xᵢ − x̄)², where x̄ = (1/n) Σᵢ xᵢ; this would be the variance if we had a discrete random variable that took the value xᵢ with probability 1/n. But let us calculate the expected value of this expression. Expanding the square and using E[xᵢ²] = μ² + σ² together with E[xᵢxⱼ] = μ² for i ≠ j (by independence and identical distributions), we have E[x̄²] = μ² + σ²/n, and therefore E[sn²] = E[(1/n) Σᵢ xᵢ² − x̄²] = (μ² + σ²) − (μ² + σ²/n) = ((n − 1)/n) σ². In contrast, E[s²] = (n/(n − 1)) E[sn²] = σ². Therefore, our initial guess was too small by the factor (n − 1)/n, and multiplying by n/(n − 1) is precisely Bessel's correction. 
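The expectation calculation in the proof can be verified exactly by enumeration with rational arithmetic, using the article's small population (the code and variable names are a sketch of my own). Samples of n = 2 drawn with replacement are i.i.d., matching the proof's assumptions, so the Bessel-corrected estimator averages exactly to σ² = 31/3 and the uncorrected one to ((n − 1)/n)σ²; drawing without replacement instead reproduces the 12.4 figure from the earlier example.

```python
from fractions import Fraction
from itertools import combinations, product

population = [0, 0, 0, 1, 2, 9]
N = len(population)
mu = Fraction(sum(population), N)                              # 2
sigma2 = sum((Fraction(x) - mu) ** 2 for x in population) / N  # 31/3

def variances(sample):
    """Return (uncorrected, Bessel-corrected) variance of a sample."""
    n = len(sample)
    m = Fraction(sum(sample), n)
    ss = sum((Fraction(x) - m) ** 2 for x in sample)
    return ss / n, ss / (n - 1)

# All ordered n = 2 samples with replacement (i.i.d., as in the proof).
wr = list(product(population, repeat=2))
avg_biased = sum(variances(s)[0] for s in wr) / len(wr)
avg_corrected = sum(variances(s)[1] for s in wr) / len(wr)
print(avg_corrected == sigma2)                # True: exactly unbiased
print(avg_biased == sigma2 * Fraction(1, 2))  # True: (n - 1)/n * sigma2

# All n = 2 samples without replacement: average corrected estimate is 12.4.
wor = list(combinations(population, 2))
print(sum(variances(s)[1] for s in wor) / len(wor))  # 62/5
```

Because everything is computed with Fraction, the equalities are exact rather than approximate, which makes the (n − 1)/n factor visible without simulation noise.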
See also Cochran's theorem Bias of an estimator Standard deviation Unbiased estimation of standard deviation Jensen's inequality Notes External links Animated experiment demonstrating the correction, at Khan Academy Statistical deviation and dispersion Estimation methods Articles containing proofs
Bessel's correction
Mathematics
2,490
13,092,651
https://en.wikipedia.org/wiki/Cell%20synchronization
Cell synchronization is a process by which cells in a culture at different stages of the cell cycle are brought to the same phase. Cell synchrony is a vital process in the study of cells progressing through the cell cycle, as it allows population-wide data to be collected rather than relying solely on single-cell experiments. The types of synchronization are broadly categorized into two groups: physical fractionation and chemical blockade. Physical Separation Physical fractionation is a process by which continuously dividing cells are separated into phase-enriched populations based on characteristics such as cell density; cell size; the presence of cell surface epitopes marked by antibodies; light scatter; and fluorescent emission by labeled cells. Given that cells take on varying morphologies and surface markers throughout the cell cycle, these traits can be used to separate cells by phase. There are two commonly used methods. Centrifugal Elutriation (previously called counter-streaming centrifugation) Centrifugal elutriation can be used to separate cells in different phases of the cell cycle based on their size and sedimentation velocity (related to sedimentation coefficient). Because of the consistent growth patterns throughout the cell cycle, centrifugal elutriation can separate cells into G1, S, G2, and M phases by increasing size (and increasing sedimentation coefficients), with diminished resolution between G2 and M phases due to cellular heterogeneity and the lack of a distinct size change. Larger cells sediment faster, so a cell in G2, which has experienced more growth time, will sediment faster than a cell in G1 and can therefore be fractionated out. Cells grown in suspension tend to be easier to elutriate given that they do not adhere to one another and have rounded, uniform shapes. However, some types of adherent cells can be treated with trypsin and resuspended for elutriation, as they will assume a more rounded shape in suspension. 
Flow Cytometry and Cell Sorting Flow cytometry allows for detection, counting, and measurement of the physical and chemical properties of cells. Cells are suspended in fluid and put through the flow cytometer. Cells are sent one at a time through a laser beam, and the light scatter is measured by a detector. Cells or their components can be labeled with fluorescent markers so that they emit different wavelengths of light in response to the laser, allowing for additional data collection. For quantitative cell cycle analysis, cells are usually fixed with ethanol and stained with DNA-binding dyes like propidium iodide, Hoechst 33342, DAPI, 7-aminoactinomycin D, mithramycin, DRAQ5, or TO-PRO-3, allowing for determination of phase by DNA quantity. However, if these cells have been fixed, they are dead and cannot be maintained for continued growth. Cells can also be resuspended in media and dyed with non-toxic dyes to maintain living cultures. Cells can also be genome-edited such that some cellular proteins are made with conjugated fluorescent tags such as GFP, mCherry, and luciferase that can be used to detect and quantify those components. For example, chimeric histone H2B-GFP constructs can be made and used to measure DNA content and determine replication status as a means of discerning cell phase. Light scatter measurements can be used to determine characteristics like size, allowing for distinction of cell phase without tagging. Flow cytometers can be used to collect multiparameter cytometry data, but cannot by themselves be used to separate or purify cells. Fluorescence-activated cell sorting (FACS) is a technique for sorting out cells based on differences that can be detected by light scatter (e.g. cell size) or fluorescence emission (from penetrated DNA, RNA, proteins, or antigens). The system works much like flow cytometry, but will also charge each cell droplet after it has been measured based on a defined parameter. 
The charged droplet will then encounter an electrostatic deflection system that will sort the cell into a different container based on that charge. This allows cells to be separated on the basis of fluorescent content or scatter. To summarize, flow cytometry alone can be used to gather quantitative data about cell cycle phase distribution, but flow cytometry in coordination with FACS can be used to gather quantitative data and separate cells by phase for further study. Limitations include: poor resolution between G2 and M for light scatter measurements (as with centrifugal elutriation); the inability to maintain living cultures when cells are fixed; possible disruption or mutagenesis of unfixed but dyed cells because of the dye treatment; some maintained population heterogeneity, as size separation may not always accurately indicate phase and not all cells may be at the same point within a phase (early G1 vs. late G1); and, for DNA-edited cells, a process that may take an extended period of time. Chemical blockade The addition of exogenous substrates can be used to block cells in certain phases of the cell cycle, frequently targeting cell cycle checkpoints. These techniques can be carried out in vitro and do not require removal from the culture environment. The most common type of chemical blockade is arrest-and-release, which involves treatment of a culture with a chemical block and subsequent release by washing or by addition of a neutralizing agent for the block. 
While chemical blockade is typically more effective and precise than physical separation, some methods can be imperfect for various reasons: the proportion of synchronized cells may be insufficient; chemical manipulations may disrupt cellular function and/or kill a portion of cells; and the treatment may be toxic and not applicable in vivo (only relevant if considering clinical application). Arrest in M Mitotic arrest can be achieved through many methods and at various points within M phase, including the G2/M transition, the metaphase/anaphase transition, and mitotic exit. Nocodazole Nocodazole is a rapidly reversible inhibitor of microtubule polymerization that can be used to arrest cells before anaphase at the spindle assembly checkpoint in the metaphase/anaphase transition. The microtubule poison works by blocking the formation of the mitotic spindles that attach to and pull apart sister chromatids in dividing cells. Cells will remain arrested until the nocodazole has been washed out. Nocodazole does not appear to disrupt interphase metabolism, and released cells return to normal cell cycle progression. Because microtubules are vital in other cellular functions, sustained use of nocodazole can result in disruption of those functions, causing cell death. Inhibition of CDK1 CDK1 is necessary for the transition from G2 to M phase. RO-3306 is a selective CDK1 inhibitor that can reversibly arrest cells at the G2/M border. RO-3306 synchronizes >95% of cycling cells (including cancer cells), and released cells rapidly enter mitosis. Roscovitine Roscovitine can be used to inhibit the activity of cyclin-dependent kinases (CDKs) by competing with ATP in the ATP-binding region of the kinase. Its effects are potent, arresting cells by knocking down the function of CDKs necessary for cell cycle progression. Roscovitine can be used to arrest cells at the G0/G1, G1/S, or G2/M transitions. 
Colchicine Colchicine, much like nocodazole, is a microtubule poison that prevents mitotic spindle formation and arrests cells in metaphase. It works by depolymerizing tubulin in microtubules, blocking progression to anaphase through sustained arrest at the spindle assembly checkpoint. Arrest in S (G1/S arrest) Arrest in S phase typically involves inhibition of DNA synthesis as the genome is being replicated. Most methods are reversible through washing. Double thymidine block High concentrations of thymidine interrupt the deoxynucleotide metabolism pathway through competitive inhibition, thus blocking DNA replication. A single treatment with thymidine arrests cells throughout S phase, so a double treatment acts to induce a more uniform block in early S phase. The process consists of a treatment with thymidine, washing of the culture, followed by another thymidine treatment. Hydroxyurea Hydroxyurea decreases the production of dNTPs by inhibiting the enzyme ribonucleotide reductase. This serves to halt DNA synthesis by depriving DNA polymerase of dNTPs at replication forks. Hydroxyurea is also used to treat certain types of cancer and blood disorders. Aphidicolin Aphidicolin is a fungus-derived tetracyclic diterpenoid that acts as a selective inhibitor of DNA polymerase α. This enzyme is necessary for replicative DNA synthesis, but the inhibitor does not disrupt DNA repair synthesis or mitochondrial DNA replication. Arrest in G1 A single commonly used chemical method exists for synchronization of cells in G1. It involves Lovastatin, a reversible competitive inhibitor of 3-hydroxy-3-methylglutaryl-coenzyme A reductase, an enzyme vital in the production of mevalonic acid. Mevalonic acid is a key intermediate in the mevalonate pathway responsible for the synthesis of cholesterol. 
Addition of cholesterol to Lovastatin-treated cells does not undo the arrest effect, so Lovastatin appears to inhibit the formation of some early intermediate in the pathway that is essential for progression through early G1. Other Methods of Synchronization Mitotic Selection Mitotic selection is a drug-free procedure for the selection of mitotic cells from a monolayer undergoing exponential growth. During mitosis, cells undergo changes in morphology, and mitotic selection takes advantage of this in adherent cells grown in a monolayer. The cells become more spherical, decreasing the surface area of cell membrane attached to the culture plate. Mitotic cells can therefore be completely detached by gentle shaking and collected from the supernatant. Nutrient/Serum Deprivation Elimination of serum from the culture medium for about 24 hours results in the accumulation of cells at the transition between G0 quiescence and early G1. This arrest is easily reversible through addition of serum or the deprived nutrient. Upon release, progression through the cell cycle is variable, as some cells remain quiescent while others proceed through the cell cycle at variable rates. Contact Inhibition Contact inhibition occurs when cells are allowed to grow to high or full confluence, maximizing cell-to-cell contact. This triggers arrest in early G1 in normal cells. Arrest is reversed by replating the cells at a lower density. Because of the proliferation-promoting mutations intrinsic to cancer, tumor cell lines are usually unable to undergo contact inhibition, though there are exceptions. External links Cell synchronization protocol References Cell biology
Cell synchronization
Biology
2,263
5,244,767
https://en.wikipedia.org/wiki/Clean%20configuration
Clean configuration is the flight configuration of a fixed-wing aircraft when its external equipment is retracted to minimize drag, and thus maximize airspeed for a given power setting. For most airplanes, clean configuration means simply that the wing flaps and landing gear are retracted, as their unstreamlined shapes cause substantial drag when extended. On more complex airplanes, it also means that other devices on the wings (such as slats, spoilers, and leading edge flaps) are retracted. Clean configuration is used for normal cruising at altitude, when the extra lift provided by high-lift devices is not needed. In military aviation, a clean configuration is generally one without external stores, which reduce maximum performance both through increased weight and, even more so, through increased drag. References Aerospace engineering
Clean configuration
Engineering
154
648,102
https://en.wikipedia.org/wiki/Caulk
Caulk (also known as caulking and calking) is a material used to seal joints or seams against leakage in various structures and piping. The oldest form of caulk consisted of fibrous materials driven into the wedge-shaped seams between boards on wooden boats or ships. Cast iron sewerage pipes were formerly caulked in a similar way. Riveted seams in ships and boilers were formerly sealed by hammering the metal. Modern caulking compounds are flexible sealing compounds used to close up gaps in buildings and other structures against water, air, dust, insects, or as a component in firestopping. In the tunneling industry, caulking is the sealing of joints in segmental precast concrete tunnels, commonly by using concrete. Historical uses Wooden shipbuilding Traditional caulking (also spelled calking) on wooden vessels uses fibers of cotton and oakum (hemp) soaked in pine tar. These fibers are driven into the wedge-shaped seam between planks, with a caulking mallet and a broad chisel-like tool called a caulking iron. The caulking is then covered over with a putty, in the case of hull seams, or else in deck seams with melted pine pitch, in a process referred to as paying, or "calefaction" (cf Dutch kalefateren). Those who carried out this work were known as caulkers. In the Hebrew Bible, the prophet Ezekiel refers to the caulking of ships as a specialist skill. Iron or steel shipbuilding In riveted steel or iron ship construction, caulking was a process of rendering seams watertight by driving a thick, blunt chisel-like tool into the plating adjacent to the seam. This had the effect of displacing the metal into a close fit with the adjoining piece. Boilermaking Caulking of iron and steel, of the same type described above for ship's hulls, was also used by boilermakers in the era of riveted boilers to make the joints watertight and steamtight. 
Modern use in construction Application For bulk use, caulk is generally distributed in disposable cartridges, which are rigid cylindrical cardboard or plastic tubes with an applicator tip at one end and a movable plunger at the far end. These are used in caulking guns, which typically have a trigger connected to a rod which pushes the plunger, with a ratchet to prevent backlash. The push rod may also be actuated by a motor or by compressed air. Similar mechanisms are used for grease guns. For smaller applications, caulk may be distributed in squeeze tubes. Backer rod Backer rod, also called backer material or back-up rod, is a flexible foam product used behind caulking to increase elasticity, reduce consumption, force the caulking into contact with the sides of the joint (creating a better bond), determine the thickness of the caulking, and define the hour-glass cross-section of the caulk. The backer rod also acts as a bond breaker to keep the caulking from sticking to the bottom of the opening (a situation called a three-sided bond). With the caulk adhering only to the sides of the opening in an hour-glass shape, it can flex more easily and is less likely to tear. Backer rods can also be used to reduce consumption of the caulking by filling part of the joint. Closed-cell foam does not absorb water and is impermeable. Closed-cell rods are less compressible and should not be compressed more than 25%. Closed-cell rod will also lose firmness and out-gas if damaged during installation, overcompressed, or bent sharply. The gases cannot pass through this backer rod, and as they escape they can deform, weaken, and even cause holes (leaks) in the caulk or sealant. Out-gassing is the reason that open-cell backer rod was developed. Open-cell foam is porous, so it will let gases through which could otherwise cause blistering of the caulk or sealant. 
Additionally, open-cell backer rod allows air to reach the back side of the caulk or sealant, which accelerates curing when used with air-cured sealants such as silicone. Open-cell rod is more compressible than closed-cell foam and should be compressed 25% to 75%. Energy efficiency According to the Consumer Federation of America, sealing unwanted leaks around homes is an excellent way to cut home energy costs and decrease the household carbon footprint. Also, sealing cracks and crevices around homes lessens the strain on home appliances and can save time, money, and hassle by preventing major repairs. Preventing infestation Sealing cracks and crevices prevents ingress by rodents. Types Acrylic latex The most common type of caulk is acrylic latex, for general-purpose use. Not only is acrylic latex inexpensive, but it is also the easiest type to apply smoothly and later paint if needed. Acrylic tile sealant Acrylic tile sealant usually comes in small tubes and is commonly used for wet applications. Polyurethane Polyurethane caulk is very durable and professional grade. Silicone Silicone caulk or sealant is water-, mold-, and mildew-resistant. Technically, when a joint material is silicone-based, it is considered a sealant rather than a caulk. See also References External links Building materials
Caulk
Physics,Engineering
1,158
51,350,263
https://en.wikipedia.org/wiki/Nitrolysis
Nitrolysis is a chemical reaction involving cleavage ("lysis") of a chemical bond concomitant with installation of a nitro group (NO2). Typical reagents for effecting this conversion are nitric acid and acetyl nitrate. A commercially important nitrolysis reaction is the conversion of hexamine to nitramide. Nitrolysis of hexamine is also used to produce RDX, (O2NNCH2)3, a trinitramine widely used as an explosive. References Chemical reactions Nitration reactions
Nitrolysis
Chemistry
117
78,549,514
https://en.wikipedia.org/wiki/NGC%201155
NGC 1155 is a lenticular galaxy located in the constellation Eridanus. It was discovered by Francis Leavenworth in 1886. The galaxy is classified as type S0 and has an apparent magnitude of 13.4. Observation NGC 1155 can be observed in the southern sky, located at a right ascension of 02h 58m 13.0s and a declination of −10° 21′ 01″. It is part of the Eridanus constellation and is visible with moderate-sized telescopes due to its magnitude. References External links SEDS - NGC 1155 In-The-Sky.org - NGC 1155 Lenticular galaxies Eridanus (constellation) 1155
NGC 1155
Astronomy
140
167,718
https://en.wikipedia.org/wiki/Porcelain
Porcelain, also called china () is a ceramic material made by heating raw materials, generally including kaolinite, in a kiln to temperatures between . The greater strength and translucence of porcelain, relative to other types of pottery, arise mainly from vitrification and the formation of the mineral mullite within the body at these high temperatures. End applications include tableware, decorative ware such as figurines, and products in technology and industry such as electrical insulators and laboratory ware. The manufacturing process used for porcelain is similar to that used for earthenware and stoneware, the two other main types of pottery, although it can be more challenging to produce. It has usually been regarded as the most prestigious type of pottery due to its delicacy, strength, and high degree of whiteness. It is frequently both glazed and decorated. Though definitions vary, porcelain can be divided into three main categories: hard-paste, soft-paste, and bone china. The categories differ in the composition of the body and the firing conditions. Porcelain slowly evolved in China and was finally achieved (depending on the definition used) at some point about 2,000 to 1,200 years ago. It slowly spread to other East Asian countries, then to Europe, and eventually to the rest of the world. The European name, porcelain in English, comes from the old Italian porcellana (cowrie shell) because of its resemblance to the surface of the shell. Porcelain is also referred to as china or fine china in some English-speaking countries, as it was first seen in imports from China during the 17th century. Properties associated with porcelain include low permeability and elasticity; considerable strength, hardness, whiteness, translucency, and resonance; and a high resistance to corrosive chemicals and thermal shock. 
Porcelain has been described as being "completely vitrified, hard, impermeable (even before glazing), white or artificially coloured, translucent (except when of considerable thickness), and resonant". However, the term "porcelain" lacks a universal definition and has "been applied in an unsystematic fashion to substances of diverse kinds that have only certain surface-qualities in common". Traditionally, East Asia classifies pottery only into low-fired wares (earthenware) and high-fired wares (often translated as porcelain), the latter also including what Europeans call "stoneware", which is high-fired but not generally white or translucent. Terms such as "proto-porcelain", "porcellaneous", or "near-porcelain" may be used in cases where the ceramic body approaches whiteness and translucency. In 2021, the global market for porcelain tableware was estimated to be worth US$22.1 billion. Types Hard paste Hard-paste porcelain was invented in China, and it was also used in Japanese porcelain. Most of the finest quality porcelain wares are made of this material. The earliest European porcelains were produced at the Meissen factory in the early 18th century; they were formed from a paste composed of kaolin and alabaster and fired at temperatures up to in a wood-fired kiln, producing a porcelain of great hardness, translucency, and strength. Later, the composition of the Meissen hard paste was changed, and the alabaster was replaced by feldspar and quartz, allowing the pieces to be fired at lower temperatures. Kaolinite, feldspar, and quartz (or other forms of silica) continue to constitute the basic ingredients for most continental European hard-paste porcelains. Soft paste Soft-paste porcelains date back to early attempts by European potters to replicate Chinese porcelain by using mixtures of clay and frit. Soapstone and lime are known to have been included in these compositions. 
These wares were not yet actual porcelain wares, as they were neither hard nor vitrified by firing kaolin clay at high temperatures. As these early formulations suffered from high pyroplastic deformation, or slumping in the kiln at high temperatures, they were uneconomic to produce and of low quality. Formulations were later developed based on kaolin with quartz, feldspars, nepheline syenite, or other feldspathic rocks. These are technically superior and continue to be produced. Soft-paste porcelains are fired at lower temperatures than hard-paste porcelains; therefore, these wares are generally less hard than hard-paste porcelains. Bone china Although originally developed in England in 1748 to compete with imported porcelain, bone china is now made worldwide, including in China. The English had read the letters of Jesuit missionary François Xavier d'Entrecolles, which described Chinese porcelain manufacturing secrets in detail. One writer has speculated that a misunderstanding of the text could possibly have been responsible for the first attempts to use bone-ash as an ingredient in English porcelain, although this is not supported by modern researchers and historians. Traditionally, English bone china was made from two parts of bone ash, one part of kaolin, and one part of china stone, although the latter has been replaced by feldspars from non-UK sources. Materials Kaolin is the primary material from which porcelain is made, even though clay minerals might account for only a small proportion of the whole. The word paste is an old term for both unfired and fired materials. A more common terminology for the unfired material is "body"; for example, when buying materials a potter might order an amount of porcelain body from a vendor. The composition of porcelain is highly variable, but the clay mineral kaolinite is often a raw material. Other raw materials can include feldspar, ball clay, glass, bone ash, steatite, quartz, petuntse and alabaster. 
The clays used are often described as being long or short, depending on their plasticity. Long clays are cohesive (sticky) and have high plasticity; short clays are less cohesive and have lower plasticity. In soil mechanics, plasticity is determined by measuring the increase in content of water required to change a clay from a solid state bordering on the plastic, to a plastic state bordering on the liquid, though the term is also used less formally to describe the ease with which a clay may be worked. Clays used for porcelain are generally of lower plasticity than many other pottery clays. They wet very quickly, meaning that small changes in the content of water can produce large changes in workability. Thus, the range of water content within which these clays can be worked is very narrow and consequently must be carefully controlled. Production Forming Porcelain can be made using all the shaping techniques for pottery. Glazing Biscuit porcelain is unglazed porcelain treated as a finished product, mostly for figures and sculpture. Unlike their lower-fired counterparts, porcelain wares do not need glazing to render them impermeable to liquids and for the most part are glazed for decorative purposes and to make them resistant to dirt and staining. Many types of glaze, such as the iron-containing glaze used on the celadon wares of Longquan, were designed specifically for their striking effects on porcelain. Decoration Porcelain often receives underglaze decoration using pigments that include cobalt oxide and copper, or overglaze enamels, allowing a wider range of colours. Like many earlier wares, modern porcelains are often biscuit-fired at a relatively low temperature, coated with glaze and then sent for a second glaze-firing at a higher temperature. Another early method is "once-fired", where the glaze is applied to the unfired body and the two fired together in a single operation.
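The plasticity measure described under Materials is what soil mechanics calls the Atterberg plasticity index: the liquid limit (water content at the plastic-to-liquid boundary) minus the plastic limit (water content at the solid-to-plastic boundary). The sketch below illustrates the narrow workable window of a low-plasticity porcelain body; all numeric values are hypothetical, not measurements from the source:

```python
# Atterberg plasticity index: the width of the water-content window within
# which a clay is workable. All limit values below are illustrative only.

def plasticity_index(liquid_limit, plastic_limit):
    """Both limits given as water content in % of dry mass."""
    if liquid_limit < plastic_limit:
        raise ValueError("liquid limit must be >= plastic limit")
    return liquid_limit - plastic_limit

def is_workable(water_content, liquid_limit, plastic_limit):
    """A clay is workable while its water content lies between the limits."""
    return plastic_limit <= water_content <= liquid_limit

# A "short", low-plasticity porcelain body has a narrow window ...
porcelain_pi = plasticity_index(liquid_limit=32.0, plastic_limit=25.0)
# ... compared with a "long", highly plastic ball clay.
ball_clay_pi = plasticity_index(liquid_limit=60.0, plastic_limit=30.0)
print(porcelain_pi, ball_clay_pi)  # 7.0 30.0
```

The smaller index for the porcelain body is the quantitative face of the statement that its water content "must be carefully controlled".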
Firing In this process, "green" (unfired) ceramic wares are heated to high temperatures in a kiln to permanently set their shapes and vitrify the body and the glaze. Porcelain is fired at a higher temperature than earthenware so that the body can vitrify and become non-porous. Many types of porcelain in the past have been fired twice or even three times, to allow decoration using less robust pigments in overglaze enamel. History Chinese porcelain Porcelain was invented in China over a centuries-long development period beginning with "proto-porcelain" wares dating from the Shang dynasty (1600–1046 BCE). By the time of the Eastern Han dynasty (25–220 CE) these early glazed ceramic wares had developed into porcelain, which the Chinese defined as high-fired ware. By the late Sui dynasty (581–618 CE) and early Tang dynasty (618–907 CE), the now-standard requirements of whiteness and translucency had been achieved, in types such as Ding ware. The wares were already exported to the Islamic world, where they were highly prized. Eventually, porcelain and the expertise required to create it began to spread into other areas of East Asia. During the Song dynasty (960–1279 CE), artistry and production had reached new heights. The manufacture of porcelain became highly organised, and the dragon kilns excavated from this period could fire as many as 25,000 pieces at a time, and over 100,000 by the end of the period. While Xing ware is regarded as among the greatest of the Tang dynasty porcelain, Ding ware became the premier porcelain of the Song dynasty. By the Ming dynasty, production of the finest wares for the court was concentrated in a single city, and Jingdezhen, whose kilns were originally owned by the imperial government, remains the centre of Chinese porcelain production. By the time of the Ming dynasty (1368–1644 CE), porcelain wares were being exported to Asia and Europe.
Some of the most well-known Chinese porcelain art styles arrived in Europe during this era, such as the coveted "blue-and-white" wares. The Ming dynasty controlled much of the porcelain trade, which was expanded to Asia, Africa and Europe via the Silk Road. In 1517, Portuguese merchants began direct trade by sea with the Ming dynasty, and in 1598, Dutch merchants followed. Some porcelains were more highly valued than others in imperial China. The most valued types can be identified by their association with the court, either as tribute offerings, or as products of kilns under imperial supervision. Since the Yuan dynasty, the largest and best centre of production has been Jingdezhen, home of Jingdezhen porcelain. During the Ming dynasty, Jingdezhen porcelain had become a source of imperial pride. The Yongle emperor erected a white porcelain brick-faced pagoda at Nanjing, and an exceptionally smoothly glazed type of white porcelain is peculiar to his reign. Jingdezhen porcelain's fame came to a peak during the Qing dynasty. Japanese porcelain Although the Japanese elite were keen importers of Chinese porcelain from early on, they were not able to make their own until the arrival of Korean potters who were taken captive during the Japanese invasions of Korea (1592–1598). They brought an improved type of kiln, and one of them spotted a source of porcelain clay near Arita, and before long several kilns had started in the region. At first their wares were similar to the cheaper and cruder Chinese porcelains with underglaze blue decoration that were already widely sold in Japan; this style was to continue for cheaper everyday wares until the 20th century. Exports to Europe began around 1660, through the Chinese and the Dutch East India Company, the only Europeans allowed a trading presence. Chinese exports had been seriously disrupted by civil wars as the Ming dynasty fell apart, and the Japanese exports increased rapidly to fill the gap.
At first the wares used European shapes and mostly Chinese decoration, as the Chinese had done, but gradually original Japanese styles developed. Nabeshima ware was produced in kilns owned by the families of feudal lords, and was decorated in the Japanese tradition, much of it related to textile design. This was not initially exported, but used for gifts to other aristocratic families. Imari ware and Kakiemon are broad terms for styles of export porcelain with overglaze "enamelled" decoration begun in the early period, both with many sub-types. A great range of styles and manufacturing centres were in use by the start of the 19th century, and as Japan opened to trade in the second half of the century, exports expanded hugely and quality generally declined. Much traditional porcelain continues to replicate older methods of production and styles, and there are several modern industrial manufacturers. By the early 1900s, Filipino artisans who had worked in Japanese porcelain centres for much of their lives introduced the craft to the native population of the Philippines, although oral literature from Cebu in the central Philippines notes that porcelain was already being produced locally during the time of Cebu's early rulers, before the arrival of colonizers in the 16th century. Korean porcelain Olive green glaze was introduced in the late Silla Dynasty. Ceramics from Silla are generally leaf-shaped, a very common shape in Korea. Korean celadon comes in a variety of colors, from turquoise to putty. Additionally, in the late 13th century, the inlay technique of expressing pigmented patterns by filling the hollow parts of pottery with white and red clay was frequently used. The main difference from those in China is that many specimens have inlay decoration under the glaze. Most Korean ceramics from the Joseon Dynasty (1392–1910) are of excellent decorative quality; a typical vessel has an asymmetrical melon shape.
European porcelain Imported Chinese porcelains were held in such great esteem in Europe that in English china became a commonly used synonym for the Italian-derived porcelain. The first mention of porcelain in Europe is in Il Milione by Marco Polo in the 13th century. Apart from copying Chinese porcelain in faience (tin glazed earthenware), the soft-paste Medici porcelain in 16th-century Florence was the first real European attempt to reproduce it, with little success. Early in the 16th century, Portuguese traders returned home with samples of kaolin, which they discovered in China to be essential in the production of porcelain wares. However, the Chinese techniques and composition used to manufacture porcelain were not yet fully understood. Countless experiments to produce porcelain had unpredictable results and met with failure. In the German state of Saxony, the search concluded in 1708 when Ehrenfried Walther von Tschirnhaus produced a hard, white, translucent type of porcelain specimen with a combination of ingredients, including kaolin and alabaster, mined from a Saxon mine in Colditz. It was a closely guarded trade secret of the Saxon enterprise. In 1712, many of the elaborate Chinese porcelain manufacturing secrets were revealed throughout Europe by the French Jesuit father Francois Xavier d'Entrecolles and soon published in the Lettres édifiantes et curieuses de Chine par des missionnaires jésuites. The secrets, which d'Entrecolles read about and witnessed in China, were now known and began seeing use in Europe. Meissen Von Tschirnhaus along with Johann Friedrich Böttger were employed by Augustus II, King of Poland and Elector of Saxony, who sponsored their work in Dresden and in the town of Meissen. Tschirnhaus had a wide knowledge of science and had been involved in the European quest to perfect porcelain manufacture when, in 1705, Böttger was appointed to assist him in this task. 
Böttger had originally been trained as a pharmacist; after he turned to alchemical research, he claimed to have known the secret of transmuting dross into gold, which attracted the attention of Augustus. Imprisoned by Augustus as an incentive to hasten his research, Böttger was obliged to work with other alchemists in the futile search for transmutation and was eventually assigned to assist Tschirnhaus. One of the first results of the collaboration between the two was the development of a red stoneware that resembled that of Yixing. A workshop note records that the first specimen of hard, white and vitrified European porcelain was produced in 1708. At the time, the research was still being supervised by Tschirnhaus; however, he died in October of that year. It was left to Böttger to report to Augustus in March 1709 that he could make porcelain. For this reason, credit for the European discovery of porcelain is traditionally ascribed to him rather than Tschirnhaus. The Meissen factory was established in 1710 after the development of a kiln and a glaze suitable for use with Böttger's porcelain, which required firing at very high temperatures to achieve translucence. Meissen porcelain was once-fired, or green-fired. It was noted for its great resistance to thermal shock; a visitor to the factory in Böttger's time reported having seen a white-hot teapot being removed from the kiln and dropped into cold water without damage. Although widely disbelieved, this has been replicated in modern times. Russian porcelain In 1744, Elizabeth of Russia signed an agreement to establish the first porcelain manufactory; previously it had to be imported. The technology of making "white gold" was carefully hidden by its creators. Peter the Great had tried to reveal the "big porcelain secret", sending an agent to the Meissen factory and finally hiring a porcelain master from abroad. Ultimately, however, the factory relied on the research of the Russian scientist Dmitry Ivanovich Vinogradov.
His development of porcelain manufacturing technology was not based on secrets learned through third parties, but was the result of painstaking work and careful analysis. Thanks to this, by 1760 the Imperial Porcelain Factory in Saint Petersburg had become a major European factory, producing tableware and later porcelain figurines. Eventually other factories opened: Gardner porcelain, Dulyovo (1832), Kuznetsovsky porcelain, Popovsky porcelain, and Gzhel. During the twentieth century, under Soviet governments, ceramics continued to be a popular artform, supported by the state, with an increasingly propagandist role. One artist, who worked at the Baranovsky Porcelain Factory and at the Experimental Ceramic and Artistic Plant in Kyiv, was Oksana Zhnikrup, whose porcelain figures of the ballet and the circus were widely known. Soft paste porcelain The pastes produced by combining clay and powdered glass (frit) were called Frittenporzellan in Germany and frita in Spain. In France they were known as pâte tendre and in England as "soft-paste". They appear to have been given this name because they do not easily retain their shape in the wet state, or because they tend to slump in the kiln under high temperature, or because the body and the glaze can be easily scratched. France Experiments at Rouen produced the earliest soft-paste in France, but the first important French soft-paste porcelain was made at the Saint-Cloud factory before 1702. Soft-paste factories were established with the Chantilly manufactory in 1730 and at Mennecy in 1750. The Vincennes porcelain factory was established in 1740, moving to larger premises at Sèvres in 1756. Vincennes soft-paste was whiter and freer of imperfections than any of its French rivals, which put Vincennes/Sèvres porcelain in the leading position in France and throughout the whole of Europe in the second half of the 18th century.
Italy Doccia porcelain of Florence was founded in 1735 and remains in production, unlike Capodimonte porcelain which was moved from Naples to Madrid by its royal owner, after producing from 1743 to 1759. After a gap of 15 years Naples porcelain was produced from 1771 to 1806, specializing in Neoclassical styles. All these were very successful, with large outputs of high-quality wares. In and around Venice, Francesco Vezzi was producing hard-paste from around 1720 to 1735; survivals of Vezzi porcelain are very rare, but less so than from the Hewelke factory, which only lasted from 1758 to 1763. The soft-paste Cozzi factory fared better, lasting from 1764 to 1812. The Le Nove factory produced from about 1752 to 1773, then was revived from 1781 to 1802. England The first soft-paste in England was demonstrated by Thomas Briand to the Royal Society in 1742 and is believed to have been based on the Saint-Cloud formula. In 1749, Thomas Frye took out a patent on a porcelain containing bone ash. This was the first bone china, subsequently perfected by Josiah Spode. William Cookworthy discovered deposits of kaolin in Cornwall, and his factory at Plymouth, established in 1768, used kaolin and china stone to make hard-paste porcelain with a body composition similar to that of the Chinese porcelains of the early 18th century. But the great success of English ceramics in the 18th century was based on soft-paste porcelain, and refined earthenwares such as creamware, which could compete with porcelain, and had devastated the faience industries of France and other continental countries by the end of the century. Most English porcelain from the late 18th century to the present is bone china. 
In the twenty-five years after Briand's demonstration, a number of factories were founded in England to make soft-paste tableware and figures: Chelsea (1743) Bow (1745) St James's (1748) Bristol porcelain (1748) Longton Hall (1750) Royal Crown Derby (1750 or 1757) Royal Worcester (1751) Lowestoft porcelain (1757) Wedgwood (1759) Spode (1767) Applications other than decorative and tableware Electric insulators Porcelain has been used for electrical insulators since at least 1878, with another source reporting earlier use of porcelain insulators on the telegraph line between Frankfurt and Berlin. It is widely used for insulators in electrical power transmission systems because its electrical, mechanical and thermal properties remain highly stable, even in harsh environments. A body for electrical porcelain typically contains varying proportions of ball clay, kaolin, feldspar, quartz, calcined alumina and calcined bauxite. A variety of secondary materials can also be used, such as binders which burn off during firing. UK manufacturers typically fired the porcelain to a maximum of 1200 °C in an oxidising atmosphere, whereas reduction firing is standard practice at Chinese manufacturers. In 2018, a porcelain bushing insulator manufactured by NGK in Handa, Aichi Prefecture, Japan was certified as the world's largest ceramic structure by Guinness World Records. It is 11.3 m in height and 1.5 m in diameter. The global market for high-voltage insulators was estimated to be worth US$4.95 billion in 2015, of which porcelain accounts for just over 48%. Chemical porcelain Chemical porcelain is characterised by low thermal expansion, high mechanical strength and high chemical resistance. It is used for laboratory ware, such as reaction vessels, combustion boats, evaporating dishes and Büchner funnels. Raw materials for the body include kaolin, quartz, feldspar, calcined alumina, and possibly also low percentages of other materials.
A number of international standards specify the properties of the porcelain, such as ASTM C515. Tiles A porcelain tile has been defined as 'a ceramic mosaic tile or paver that is generally made by the dust-pressed method of a composition resulting in a tile that is dense, fine-grained, and smooth with sharply formed face, usually impervious and having colors of the porcelain type which are usually of a clear, luminous type or granular blend thereof.' Manufacturers are found across the world with Italy being the global leader, producing over 380 million square metres in 2006. Historic examples of rooms decorated entirely in porcelain tiles can be found in several palaces including ones at Galleria Sabauda in Turin, Museo di Doccia in Sesto Fiorentino, Museo di Capodimonte in Naples, the Royal Palace of Madrid and the nearby Royal Palace of Aranjuez, and the Porcelain Tower of Nanjing. More recent examples include the Dakin Building in Brisbane, California and the Gulf Building in Houston, Texas, which when constructed in 1929 had a porcelain logo on its exterior. Sanitaryware Because of its durability, inability to rust and impermeability, glazed porcelain has been in use for personal hygiene since at least the third quarter of the 17th century. During this period, porcelain chamber pots were commonly found in higher-class European households, and the term "bourdaloue" was used as the name for the pot. Whilst modern sanitaryware, such as closets and washbasins, is made of ceramic materials, porcelain is no longer used and vitreous china is the dominant material. Bath tubs are not made of porcelain, but of enamel on a metal base, usually of cast iron. Porcelain enamel is a marketing term used in the US, and is not porcelain but vitreous enamel. Dental porcelain Dental porcelain is used for crowns, bridges and veneers. A formulation of dental porcelain is 70–85% feldspar, 12–25% quartz, 3–5% kaolin, up to 15% glass and around 1% colourants.
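The dental-porcelain formulation above is a set of component ranges; a concrete batch must choose one value inside each range, and the choices must total 100%. A small sketch of that consistency check — the ranges come from the text, but the example batch and function names are hypothetical:

```python
# Check that a concrete dental-porcelain batch falls inside the quoted
# component ranges and sums to 100%. Ranges are from the article; the
# example batch itself is hypothetical.

RANGES = {            # percent by mass: (low, high)
    "feldspar":   (70.0, 85.0),
    "quartz":     (12.0, 25.0),
    "kaolin":     (3.0, 5.0),
    "glass":      (0.0, 15.0),
    "colourants": (0.0, 1.0),
}

def valid_batch(batch, ranges=RANGES, tol=0.1):
    """True if every component is within its range and the batch sums to 100%."""
    in_range = all(ranges[k][0] <= v <= ranges[k][1] for k, v in batch.items())
    return in_range and abs(sum(batch.values()) - 100.0) <= tol

batch = {"feldspar": 75.0, "quartz": 16.0, "kaolin": 4.0,
         "glass": 4.0, "colourants": 1.0}
print(valid_batch(batch))  # True
```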
Manufacturers The Americas Brazil Germer Porcelanas Finas Porcelana Schmidt United States Blue Ridge CoorsTek, Inc. Franciscan Lenox Lotus Ware Pickard China Asia China Ding ware Jingdezhen porcelain Iran Maghsoud Group of Factories, (1993–present) Zarin Iran Porcelain Industries, (1881–present) Japan Hirado ware Kakiemon Nabeshima ware Narumi Noritake Malaysia Royal Selangor South Korea Haengnam Chinaware Hankook Chinaware Sri Lanka Dankotuwa Porcelain Noritake Lanka Porcelain Royal Fernwood Porcelain Taiwan Franz Collection Turkey Yildiz Porselen (1890–1936, 1994–present) Kütahya Porselen (1970–present) Güral Porselen (1989–present) Porland Porselen (1976–present) Istanbul Porselen (1963 – early 1990s) Sümerbank Porselen (1957–1994) United Arab Emirates RAK Porcelain Vietnam Minh Long I porcelain (1970–present) Bát Tràng porcelain (1352–present) Europe Austria Vienna Porcelain Manufactory, 1718–1864 Vienna Porcelain Manufactory Augarten, 1923–present Croatia Inkerpor (1953–present) Czech Republic Haas & Czjzek, Horní Slavkov (1792–2011) Thun 1794, Klášterec nad Ohří (1794–present) Český porcelán a.s., Dubí, Eichwelder Porzellan und Ofenfabriken Bloch & Co. Böhmen (1864–present) Rudolf Kämpf, Nové Sedlo (Sokolov District) (1907–present) Denmark Aluminia Bing & Grøndahl Denmark porcelain P. 
Ipsens Enke Kastrup Vaerk Kronjyden Porcelænshaven Royal Copenhagen (1775–present) GreenGate Finland Arabia France Saint-Cloud porcelain (1693–1766) Chantilly porcelain (1730–1800) Vincennes porcelain (1740–1756) Mennecy-Villeroy porcelain (1745–1765) Sèvres porcelain (1756–present) Revol porcelain (1789–present) Limoges porcelain Haviland porcelain Germany Current porcelain manufacturers in Germany Hungary Hollóháza Porcelain Manufactory (1777–present) Herend Porcelain Manufacture (1826–present) Zsolnay Porcelain Manufacture (1853–present) Italy Richard-Ginori 1735 Manifattura di Doccia (1735–present) Capodimonte porcelain (1743–1759) Naples porcelain (1771–1806) Manifattura Italiana Porcellane Artistiche Fabris (1922–1972) Mangani SRL, Porcellane d'Arte (Florence) Lithuania Jiesia Netherlands (1883–1916) Loosdrechts Porselein Weesp Porselein Norway Egersund porcelain Figgjo (1941–present) Herrebøe porcelain Porsgrund Stavangerflint Poland AS Ćmielów Fabryka Fajansu i Porcelany Polskie Fabryki Porcelany "Ćmielów" i "Chodzież" S.A. Kristoff Porcelana Lubiana S.A. Portugal Vista Alegre Sociedade Porcelanas de Alcobaça Costa Verde (company), located in the district of Aveiro Russia Imperial Porcelain Factory, Saint Petersburg (1744–present) Verbilki Porcelain (1766–present), Verbilki near Taldom Gzhel ceramics (1802–present), Gzhel Dulevo Farfor (1832–present), Likino-Dulyovo Spain Buen Retiro Royal Porcelain Factory (1760–1812) Real Fábrica de Sargadelos (1808–present, intermittently) Porvasal Sweden Rörstrand Gustavsberg porcelain Switzerland Suisse Langenthal United Kingdom Aynsley China (1775–present) Belleek (1884–present) Bow porcelain factory (1747–1776) Caughley porcelain Chelsea porcelain factory (c. 
1745; merged with Derby in 1770) Coalport porcelain Davenport Goss crested china Liverpool porcelain Longton Hall porcelain Lowestoft Porcelain Factory Mintons Ltd (1793–1968; merged with Royal Doulton) Nantgarw Pottery New Hall porcelain Plymouth Porcelain Rockingham Pottery Royal Crown Derby (1750/57–present) Royal Doulton (1815–2009; acquired by Fiskars) Royal Worcester (1751–2008; acquired by Portmeirion Pottery) Spode (1767–2008; acquired by Portmeirion Pottery) Saint James's Factory (or "Girl-in-a-Swing", 1750s) Swansea porcelain Vauxhall porcelain Wedgwood (factory 1759–present, porcelain 1812–1829 and modern; acquired by Fiskars) See also Blue and white porcelain List of porcelain manufacturers Ceramic materials Chinese culture Chinese inventions Dielectrics Materials with minor glass phase Pottery Tableware
https://en.wikipedia.org/wiki/Stephan%20K%C3%B6rner
Stephan Körner, FBA (26 September 1913 – 17 August 2000) was a British philosopher, who specialised in the work of Kant, the study of concepts, and in the philosophy of mathematics. Born to a Jewish family in what would soon become Czechoslovakia, Körner left that country to avoid certain death at the hands of the Nazis after the German occupation in 1939, and came to the United Kingdom as a refugee, where he began his study of philosophy; by 1952 he was a professor of philosophy at the University of Bristol, taking up a second professorship at Yale in 1970. He was married to Edith Körner, and was the father of the mathematician Thomas Körner and the biochemist, writer and translator Ann M. Körner. Early life Körner was born in Ostrava, then part of Austria-Hungary, on 26 September 1913. He was the only son of a teacher of classics and his wife. His father had studied classics in Vienna, while at the same time, winning prizes in mathematics to supplement his meagre income (a fellow student was a certain Leon Trotsky, who was frequently asked, "When is that great revolution that you are always talking about going to happen?"). Despite an early wish to study philosophy, Stephan was dissuaded by his father, who feared that his son would become a penniless academic; he was persuaded to study something more practical, and took his degree in law at Charles University in Prague, completing it in 1935. (He practised law only briefly but retained a strong interest, attending seminars at Yale Law School after his appointment as a visiting professor at Yale in the 1970s.) From 1936 to 1939 he carried out his military service, serving as an officer in the cavalry. After German troops moved into the country in March 1939, a schoolmate of his, an officer in the SS, warned the Jewish family that life in German-occupied Moravia was no longer safe. His parents refused to leave, believing that they had nothing to fear since they were not communists. 
His father died in 1939, most likely by his own hand, during deportation to Nisko, and his mother was murdered in 1941 after deportation to Minsk Ghetto, Belarus, on Transport F. His first cousin Ruth Maier was one of many other family members who were murdered at Auschwitz, after her arrest in and deportation from Norway in 1944. She is remembered as "Norway's Anne Frank". Stephan travelled with two friends, Otto Eisner and Willi Haas, through Poland to the United Kingdom, arriving as a refugee just as the Second World War began. In Britain, he rejoined the army of the émigré Czechoslovak government; he saw service with them during the Battle of France in 1940 before returning to Britain. He received a small grant to continue his education at the University of Cambridge, where he studied philosophy under R. B. Braithwaite at Trinity Hall; among others, he was taught by Ludwig Wittgenstein. Professor Braithwaite was exceedingly kind to his refugee student. On one occasion, Braithwaite invited him to his home saying, "Someone has given me a Hungarian salami; would you come to my house and show me how to eat it?" Such invitations were welcome since Stephan made little money as a waiter in a Greek restaurant and survived on "one fourpenny meat pie per day." In 1943 he was recalled to the Czechoslovak army, serving as a sergeant in the infantry during the push through France and into Germany. He would later say that he survived the fighting outside Dunkirk thanks to Dickens: while he was recuperating in hospital from a minor wound, a doctor refused to discharge him until he had had another day to finish his novel. As a result, he missed the heavy fighting the next day, when many of his close friends were killed.
He was awarded his PhD in 1944; shortly afterwards, he married Edith Laner ("Diti"; born Edita Leah Löwy; in 1938/39, her father changed the family name to Laner in a vain attempt to deceive the Nazis into thinking that he and his family were not Jewish), a fellow Czech refugee, whom he had met in London in 1941. He remained in the Czechoslovak army until 1946. Academic career After his army service, he worked at Cardiff University, tutoring students in German. He took up his first academic post in 1947, lecturing in philosophy at the University of Bristol. In 1952, he was appointed to the sole professorship and chairmanship of his department, which he would hold until 1979. In 1965 and 1966 he was Dean of the Faculty of Arts, and from 1968 to 1971 a Pro-Vice-Chancellor. During this time he worked as a visiting professor of philosophy at Brown University in 1957, Yale University in 1960, the University of North Carolina, Chapel Hill in 1963, University of Texas at Austin in 1964 and Indiana University in 1967. In 1970 he returned to Yale with a tenured visiting professorship in philosophy, holding it jointly with the Bristol post for nine years, and then as his sole post from 1979 to 1984. Bristol appointed him a professor emeritus on his retirement, and he subsequently held a visiting professorship at the University of Graz from 1980 to 1989. He received honorary doctorates from the Queen's University Belfast in 1981, and Graz in 1984, where he was appointed to an honorary professorship in 1986. Bristol appointed him an honorary fellow in 1987. Trinity Hall bestowed upon him the same honour in 1991. He was President of the British Society for the Philosophy of Science in 1965, the Aristotelian Society in 1967, the International Union of History and Philosophy of Science in 1969, and the Mind Association in 1973. He edited the journal Ratio from 1961 to 1980. He also served on the editorial board of Erkenntnis from 1974 to 1999. 
In 1967 he was elected a Fellow of the British Academy. Philosophical work In 1955 he published his first two major works. Kant, an introduction for non-specialists to Immanuel Kant's work, went through several impressions over the next three decades and is still regarded as a minor classic in the field; it was one of the first post-war books to reintroduce Kant to the English-speaking world. The fact that in this and later works Körner put forward the controversial view that Kant's categories apply directly to ordinary empirical science was little noticed by a public grateful for any short work covering all of Kant's philosophy. The second, Conceptual Thinking, was a more specialised study, examining the way in which people deal with "exact" and "inexact" concepts – exact concepts, like logical constructs or mathematical ideas, could be clearly defined, whilst inexact concepts, like 'colour', would always have unclear boundaries. In 1957 he expanded on this, editing Observation and Interpretation, a collection of papers arising from a seminar which brought together both philosophers and physicists to discuss these questions. His work led him into the philosophy of mathematics, on which he would publish a textbook in 1960; Philosophy of Mathematics took as its central theme the question of how applied mathematics can be metaphysically possible. He also wrote on the philosophy of science in Experience and Theory (1966), including work on theoretical incommensurability, the concept that two directly contradictory theories – such as classical mechanics and relativity – can coexist, without either being specifically "wrong". In 1969 he published What is Philosophy?, and in 1970 Categorial Frameworks, attempts to put forward his views to a general audience.
Experience and Conduct, published in 1979, discussed how we evaluate and develop our own preferences and value systems; his final work, Metaphysics: Its Structure and Function (1984) was a wide-ranging study of metaphysics. Personal life Körner was remembered by colleagues and pupils as "extraordinarily handsome with an astonishing Czech accent ... [with] a certain sense of grandeur about him". He retained an old-fashioned sense of manners, formal but courteous, as well as a formal appearance. Even on the hottest days, he was never seen without a tie and jacket. He lived a happy and contented home life; he and Edith were remembered by friends as exceptionally close and devoted to one another. In their early married life they fitted the conventional academic mould – whilst he worked incessantly at his studies, she raised the family, looked after the house, managed the finances – but after the children had grown and left she worked at her own career, eventually becoming the chairman of the magistrates' court in Bristol and overseeing the redevelopment of the National Health Service's information-management system. Edith managed their lives, as with everything else, in a practical, organised and forceful way, ensuring that he could work as freely as possible; he was fond of saying that "Diti does everything, but leaves the philosophy to me". The couple had two children – Thomas, a professor of mathematics, and Ann, a biochemist, writer and translator, who married Sidney Altman (a joint winner of the Nobel Prize in Chemistry in 1989). Following Edith's diagnosis with advanced cancer in the summer of 2000, they chose to die together in August of that year, on the 17th. They were survived by both children and by four grandchildren. Publications Books/monographs authored (1955) Kant. (1955) Conceptual Thinking (corrected republication, 1959) (1960) The Philosophy of Mathematics. 
Dover Publications, (1966) Experience and Theory – An Essay in the Philosophy of Science (1967) Kant's Conception of Freedom. Oxford University Press. (1969) What is Philosophy? – One Philosopher's Answer; later published as Fundamental Questions in Philosophy (1970) Categorial Frameworks. (1971) Abstraction in Science and Morals, 24th Eddington Memorial Lecture (Cambridge University Press) (1976) Experience and Conduct. (1984) Metaphysics: Its Structure and Function. Books edited (1957) Observation and Interpretation: a Symposium of Philosophers and Physicists. (1971) Practical Reason – Papers and Discussions (1976) Explanation – Papers and Discussions (1976) Philosophy of Logic – Papers and Discussions (Oxford, Blackwell, and California University Press) Select papers/chapters (1957) “Some Types of Philosophical Thinking” in C. A. Mace (ed.) British Moral Philosophy in the Mid-Century London: George Allen & Unwin, pp. 115–131 (1959) "Broad on Philosophical Method" in Paul Arthur Schilpp (ed.) The Philosophy of C.D. Broad (1968) "Kant’s Conception of Freedom" [Dawes Hicks lecture] Proceedings of the British Academy 53, 1967 (1970) "Description, Analysis and Metaphysics," in Joseph Bobik (ed.) The Nature of Philosophical Inquiry, (South Bend, Indiana, Notre Dame University Press) (1975) "On Some Relations Between Logic and Metaphysics," in The Logical Enterprise, (ed.) Alan R. Anderson et al. (New Haven, Yale University Press). (1976) "On the Subject Matter of Philosophy," in H. D. Lewis (ed.) Contemporary British Philosophy (London, Allen & Unwin) (1980) "Science and the Organization of Belief," in: Mellor, D. H., (ed.). Science, Belief, and Behaviour: Essays in Honour of R. B. Braithwaite. Cambridge [Eng.]; New York : Cambridge University Press. . (1986) "On Some Methods and Results of Philosophical Analysis" in Shanker, S.G. (ed.) Philosophy in Britain Today (1991) "On the relation between Common Sense, Science and Metaphysics," in A. 
Phillips Griffiths (ed.), A. J. Ayer: Memorial Essays (Royal Institute of Philosophy Supplements, pp. 89-104). Cambridge: Cambridge University Press. (1991) "On the Logic of Practical Evaluation," in: Peter Geach (ed.) Logic and Ethics, Nijhoff International Philosophy Series, vol 41. Springer, Dordrecht For more complete publication details see Körner's PhilPapers entry or 1987 bibliography. Festschrift (1987) Stephan Körner — Philosophical Analysis and Reconstruction, Jan T. J. Srzednicki (ed.) See also Schema (Kant) Notes References General References "Professor Stephan Korner". The Times, 23 August 2000 (obituary). External links Interview with Stephan Körner (1990) E-books by Stephan Körner available for loan at Open Library "Wittgenstein and the problem of universals" (video) - Korner and Renford Bambrough discuss 'the traditional problem of universals and Wittgenstein's contribution, in his later philosophy, towards solving the problem'. (Open University, 1972) 1913 births 2000 deaths People from Ostrava Philosophers of mathematics Fellows of the British Academy Alumni of Trinity Hall, Cambridge Fellows of Trinity Hall, Cambridge Jewish philosophers Czech Jews Czechoslovak refugees Jews who immigrated to the United Kingdom to escape Nazism 20th-century British philosophers Analytic philosophers British philosophers of science British metaphysicians Presidents of the Aristotelian Society Academics of the University of Bristol
Stephan Körner
Mathematics
2,698
48,592,395
https://en.wikipedia.org/wiki/Stirrup%20pump
A stirrup pump is a portable reciprocating water pump used to extinguish or control small fires. It is operated by hand. The operator places a foot on a stirrup-like bracket at the bottom of the pump to hold it steady, while the bottom of the suction cylinder is placed inside a bucket of water. References External links Fire watchers Pumps Appropriate technology Human power Firefighting equipment Fire suppression Active fire protection
Stirrup pump
Physics,Chemistry
90
1,007,613
https://en.wikipedia.org/wiki/Bell%20state
In quantum information science, the Bell states or EPR pairs are specific quantum states of two qubits that represent the simplest examples of quantum entanglement. The Bell states are a set of entangled, normalized basis vectors; normalization implies that the overall probability of the particles being in one of these states is 1. Entanglement is a basis-independent result of superposition. Due to this superposition, measurement of the qubit will "collapse" it into one of its basis states with a given probability. Because of the entanglement, measurement of one qubit will "collapse" the other qubit to a state whose measurement will yield one of two possible values, where the value depends on which Bell state the two qubits are in initially. Bell states can be generalized to certain quantum states of multi-qubit systems, such as the GHZ state for three or more subsystems. Understanding of Bell states is useful in the analysis of quantum communication, such as superdense coding and quantum teleportation. These mechanisms cannot transmit information faster than the speed of light, a result known as the no-communication theorem. Bell states The Bell states are four specific maximally entangled quantum states of two qubits. They are in a superposition of 0 and 1, that is, a linear combination of the two states. Their entanglement means the following: The qubit held by Alice (subscript "A") can be in a superposition of 0 and 1. If Alice measured her qubit in the standard basis, the outcome would be either 0 or 1, each with probability 1/2; if Bob (subscript "B") also measured his qubit, the outcome would be the same as for Alice. Thus, Alice and Bob would each seemingly obtain a random outcome. Through communication they would discover that, although their outcomes separately seemed random, they were perfectly correlated. 
This perfect correlation at a distance is special: maybe the two particles "agreed" in advance, when the pair was created (before the qubits were separated), which outcome they would show in case of a measurement. Hence, following Albert Einstein, Boris Podolsky, and Nathan Rosen in their famous 1935 "EPR paper", there is something missing in the description of the qubit pair given above, namely this "agreement", called more formally a hidden variable. In his famous paper of 1964, John S. Bell showed by simple probability theory arguments that these correlations (the one for the 0, 1 basis and the one for the +, − basis) cannot both be made perfect by the use of any "pre-agreement" stored in some hidden variables, but that quantum mechanics predicts perfect correlations. In a more refined formulation known as the Bell–CHSH inequality, it is shown that a certain correlation measure cannot exceed the value 2 if one assumes that physics respects the constraints of local "hidden-variable" theory (a sort of common-sense formulation of how information is conveyed), but certain systems permitted in quantum mechanics can attain values as high as 2√2. Thus, quantum theory violates the Bell inequality and the idea of local "hidden variables". Bell basis Four specific two-qubit states with this maximal value of 2√2 are designated as "Bell states". They are known as the four maximally entangled two-qubit Bell states and form a maximally entangled basis, known as the Bell basis, of the four-dimensional Hilbert space for two qubits: Creating Bell states via quantum circuits Although there are many possible ways to create entangled Bell states through quantum circuits, the simplest takes a computational basis state as the input, and contains a Hadamard gate and a CNOT gate (see picture). As an example, the pictured quantum circuit takes a two-qubit input and transforms it to the first Bell state. Explicitly, the Hadamard gate transforms the first qubit into an equal superposition of 0 and 1. 
This will then act as a control input to the CNOT gate, which only inverts the target (the second qubit) when the control (the first qubit) is 1. Thus, the CNOT gate transforms the second qubit accordingly. For the four basic two-qubit inputs, the circuit outputs the four Bell states (listed above). More generally, the circuit transforms the input |x, y⟩ into the Bell state (|0, y⟩ + (−1)^x |1, ¬y⟩)/√2, where ¬y is the negation of y. Properties of Bell states The result of a measurement of a single qubit in a Bell state is indeterminate, but upon measuring the first qubit in the z-basis, the result of measuring the second qubit is guaranteed to yield the same value (for the Φ Bell states) or the opposite value (for the Ψ Bell states). This implies that the measurement outcomes are correlated. John Bell was the first to prove that the measurement correlations in the Bell state are stronger than could ever exist between classical systems. This hints that quantum mechanics allows information processing beyond what is possible with classical mechanics. In addition, the Bell states form an orthonormal basis and can therefore be defined with an appropriate measurement. Because Bell states are entangled states, information on the entire system may be known, while withholding information on the individual subsystems. For example, the Bell state is a pure state, but the reduced density operator of the first qubit is a mixed state. The mixed state implies that not all the information on this first qubit is known. Bell states are either symmetric or antisymmetric with respect to the subsystems. Bell states are maximally entangled in the sense that their reduced density operators are maximally mixed; the multipartite generalization of Bell states in this spirit is called the absolutely maximally entangled (AME) state. 
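The Hadamard-then-CNOT construction described above is easy to verify numerically. The sketch below uses plain NumPy; the helper name `bell_state` is my own, not from any quantum library. It builds the Bell states from the four computational basis inputs and checks that the reduced density matrix of one qubit of a Bell pair is maximally mixed:

```python
import numpy as np

# Gates in the computational basis.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # control = first qubit

def bell_state(x, y):
    """Apply H (on the first qubit) then CNOT to the basis state |xy>."""
    basis = np.zeros(4)
    basis[2 * x + y] = 1.0                     # |xy> as a length-4 vector
    return CNOT @ np.kron(H, I2) @ basis

phi_plus = bell_state(0, 0)                    # (|00> + |11>)/sqrt(2)
print(np.round(phi_plus, 3))                   # [0.707 0.    0.    0.707]

# Partial trace over the second qubit: the reduced density matrix of the
# first qubit is I/2, i.e. a single qubit of a Bell pair looks maximally mixed.
rho = np.outer(phi_plus, phi_plus)
rho_A = np.array([[rho[0, 0] + rho[1, 1], rho[0, 2] + rho[1, 3]],
                  [rho[2, 0] + rho[3, 1], rho[2, 2] + rho[3, 3]]])
print(np.round(rho_A, 3))                      # prints I/2 (up to rounding)
```

Running `bell_state` on all four inputs reproduces the four orthonormal Bell basis vectors discussed above.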
Bell state measurement The Bell measurement is an important concept in quantum information science: It is a joint quantum-mechanical measurement of two qubits that determines which of the four Bell states the two qubits are in. A helpful example of quantum measurement in the Bell basis can be seen in quantum computing. If a CNOT gate is applied to qubits A and B, followed by a Hadamard gate on qubit A, a measurement can be made in the computational basis. The CNOT gate performs the act of un-entangling the two previously entangled qubits. This allows the information to be converted from quantum information to a measurement of classical information. Quantum measurement obeys two key principles. The first, the principle of deferred measurement, states that any measurement can be moved to the end of the circuit. The second principle, the principle of implicit measurement, states that at the end of a quantum circuit, measurement can be assumed for any unterminated wires. The following are applications of Bell state measurements: Bell state measurement is the crucial step in quantum teleportation. The result of a Bell state measurement is used by one's co-conspirator to reconstruct the original state of a teleported particle from half of an entangled pair (the "quantum channel") that was previously shared between the two ends. Experiments that utilize so-called "linear evolution, local measurement" techniques cannot realize a complete Bell state measurement. Linear evolution means that the detection apparatus acts on each particle independently of the state or evolution of the other, and local measurement means that each particle is localized at a particular detector registering a "click" to indicate that a particle has been detected. Such devices can be constructed from, for example, mirrors, beam splitters, and wave plates – and are attractive from an experimental perspective because they are easy to use and have a high measurement cross-section. 
For entanglement in a single qubit variable, only three distinct classes out of four Bell states are distinguishable using such linear optical techniques. This means two Bell states cannot be distinguished from each other, limiting the efficiency of quantum communication protocols such as teleportation. If a Bell state is measured from this ambiguous class, the teleportation event fails. Entangling particles in multiple qubit variables, such as (for photonic systems) polarization and a two-element subset of orbital angular momentum states, allows the experimenter to trace over one variable and achieve a complete Bell state measurement in the other. Leveraging so-called hyper-entangled systems thus has an advantage for teleportation. It also has advantages for other protocols such as superdense coding, in which hyper-entanglement increases the channel capacity. In general, for hyper-entanglement in n variables, one can distinguish between at most 2^(n+1) − 1 classes out of the 4^n Bell states using linear optical techniques. Bell state correlations Independent measurements made on two qubits that are entangled in Bell states correlate perfectly if each qubit is measured in the relevant basis. For the Φ+ state, this means selecting the same basis for both qubits. If an experimenter chose to measure both qubits in a Φ+ Bell state using the same basis, the qubits would appear positively correlated when measuring in the z-basis, anti-correlated in the y-basis, and partially (probabilistically) correlated in other bases. The Ψ− correlations can be understood by measuring both qubits in the same basis and observing perfectly anti-correlated results. More generally, the correlations of the other Bell states can be understood by measuring the two qubits in suitably related bases and observing perfectly correlated results. Applications Superdense coding Superdense coding allows two individuals to communicate two bits of classical information by only sending a single qubit. 
The basis of this phenomenon is the entangled states, or Bell states, of a two-qubit system. In this example, Alice and Bob are very far from each other, and have each been given one qubit of the entangled state. In this example, Alice is trying to communicate two bits of classical information, one of four two-bit strings: 00, 01, 10 or 11. If Alice chooses to send the two-bit message 01, she would apply the bit-flip gate X to her qubit. Similarly, if Alice wants to send 10, she would apply the phase flip Z; if she wanted to send 11, she would apply both gates (ZX) to her qubit; and finally, if Alice wanted to send the two-bit message 00, she would do nothing to her qubit. Alice performs these quantum gate transformations locally, transforming the initial entangled state into one of the four Bell states. The steps below show the necessary quantum gate transformations, and resulting Bell states, that Alice needs to apply to her qubit for each possible two-bit message she desires to send to Bob. After Alice applies the desired transformations to her qubit, she sends it to Bob. Bob then performs a measurement on the Bell state, which projects the entangled state onto one of the four two-qubit basis vectors, one of which will coincide with the original two-bit message Alice was trying to send. Quantum teleportation Quantum teleportation is the transfer of a quantum state over a distance. It is facilitated by entanglement between A, the giver, and B, the receiver of this quantum state. This process has become a fundamental research topic for quantum communication and computing. More recently, scientists have been testing its applications in information transfer through optical fibers. The process of quantum teleportation is defined as follows: Alice and Bob share an EPR pair and each took one qubit before they became separated. Alice must deliver a qubit of information to Bob, but she does not know the state of this qubit and can only send classical information to Bob. 
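The gate-per-message scheme of superdense coding can be checked with a short numerical sketch. The mapping below (00 → identity, 01 → X, 10 → Z, 11 → ZX) is one common convention, not the only one found in the literature, and the helper names `encode` and `decode` are mine:

```python
import numpy as np

# Pauli gates Alice may apply to her half of the shared pair (|00>+|11>)/sqrt(2).
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
GATES = {"00": I2, "01": X, "10": Z, "11": Z @ X}   # assumed convention

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

def encode(bits):
    """Alice acts locally on the first qubit; Bob's qubit is untouched."""
    return np.kron(GATES[bits], I2) @ phi_plus

def decode(state):
    """Bob measures in the Bell basis; the outcome recovers Alice's bits."""
    for bits in GATES:
        if abs(np.vdot(encode(bits), state)) > 0.99:  # |<b|state>| = 1 on match
            return bits

for msg in ["00", "01", "10", "11"]:
    assert decode(encode(msg)) == msg
print("all four messages recovered")
```

The four encoded states are mutually orthogonal, which is why Bob's single Bell-basis measurement distinguishes all four two-bit messages.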
It is performed step by step as follows: Alice sends her qubits through a CNOT gate. Alice then sends the first qubit through a Hadamard gate. Alice measures her qubits, obtaining one of four results, and sends this information to Bob. Given Alice's measurements, Bob performs one of four operations on his half of the EPR pair and recovers the original quantum state. The following quantum circuit describes teleportation: Quantum cryptography Quantum cryptography is the use of quantum mechanical properties to encode and send information safely. The theory behind this process is the fact that it is impossible to measure a quantum state of a system without disturbing the system. This can be used to detect eavesdropping within a system. The most common form of quantum cryptography is quantum key distribution. It enables two parties to produce a shared random secret key that can be used to encrypt messages. Its private key is created between the two parties through a public channel. Quantum cryptography can be considered a state of entanglement between two multi-dimensional systems, also known as two-qudit (quantum digit) entanglement. See also Bell test experiments Bell's inequality EPR paradox GHZ state Dicke state Superdense coding Quantum teleportation Quantum cryptography Quantum circuits Bell diagonal state Notes References Quantum information science Quantum states
Bell state
Physics
2,648
50,569,286
https://en.wikipedia.org/wiki/Limbitless%20Solutions
Limbitless Solutions is a 501(c)(3) non-profit organization founded in the United States that uses additive manufacturing (3D printing) to create accessible yet affordable personalized bionics and prosthetic partial arms for children with limb differences. The organization says their bionic arms are manufactured for under $400, 1% of the standard production cost. Headquartered on the University of Central Florida campus in Orlando, Florida, the organization was founded by a team of engineering students, led by CEO and Executive Director Albert Manero. History The idea of Limbitless Solutions came to life in 2014 after a team of engineering students at the University of Central Florida led an initiative to provide bionic 3D printed limbs to children. In their free time, the students took advantage of a donated Stratasys Dimension 3D printer in the engineering manufacturing lab on campus to create an affordable prosthetic that displayed their ideas of art and engineering all in one. Their method was the first of its kind and minimized the cost and time of traditional prosthetic manufacturing processes like CNC milling. The first 3D printed arm the students created was run with off-the-shelf servomechanisms and batteries which are activated by the electromyography muscle energy on the child's limb. Most prosthetic arms are mechanical, which presents a challenge for children without elbows because they have to open and close their mechanical prosthetic by bending their elbow. That led the Limbitless team to come up with the idea for an electronic arm with a muscle sensor that allows the child to open and close their prosthetic hand by flexing their biceps. Production Before creating the bionic arm, the child is measured carefully to ensure that the length, width, and size of their new 3D prosthetic is as similar to their residual arm as possible. The model of the arm is then appropriately scaled and adjusted using Fusion 360 before being printed, assembled, and fitted. 
Electromyography (EMG) sensors are then calibrated before the arm is ready for use. The time to create one bionic arm varies depending on several factors, the most significant being the type of limb difference the child has. Recipients Children who have been given bionics from Limbitless Solutions include a 7-year-old boy who received a 3D printed Iron Man themed arm, presented by actor Robert Downey Jr. (facilitated by Microsoft's The Collective Project), a 12-year-old from Vero Beach, Florida, who was the recipient of a bionic arm presented by the Blue Man Group at Universal Studios in Orlando, Florida, an 11-year-old girl originally from California who was presented a floral themed arm at the Clearwater Marine Aquarium, an 8-year-old boy from Seattle, Washington, who received his arm as part of the 12 Arms of Christmas delivery, a 10-year-old girl from Texas who was the recipient of a UCF themed arm, presented by the UCF Cheerleading team and Knightro, the UCF mascot, and a 22-year-old model from Hawaii who wore the arm she received on the runway. Other Limbitless projects Project Xavier Project Xavier is the name for the production of a wheelchair that is controlled by the same EMG sensors as the 3D printed arms. These EMG sensors are placed on the temporalis muscles, allowing those with limited or no hand dexterity to control the wheelchair by clenching their jaw in different ways. This wheelchair reduces the need for the user to be pushed around by someone, making tasks easier and less time-consuming for them. This increased independence enhances their quality of life immediately. The Bionic Kid comic book series In December 2018, Limbitless Solutions released a comic book entitled The Bionic Kid. The comic book was written by Zachary, one of the Limbitless Solutions bionic kids, his brother Christo, and their dad Niko. 
The visuals were created by student artists at the University of Central Florida with assistance from professors at the UCF School of Visual Arts and Design. The Bionic Kid is sold in order to support those with limb differences. This comic tells the story of Zachary, one of the bionic kids. They attend the 8-Bit-World Finals, where Zachary ends up playing the accessible video game Bash Bro against a bully named Norman. After both are electrocuted in an accident, they each receive special powers. In the comic, Zachary is referred to as The Bionic Kid, Norman is called Aquarius, and Limbitless Solutions Executive Director Albert Manero is a character as well. Accessible games Limbitless Solutions has also created custom video game controllers that utilize the same EMG input that is used to operate the prosthetic arms. Typically, traditional controllers have not fully considered disabled user-experience, but Limbitless is creating new accessibility tools for not only their Bionic Kids, but many others in the same situation. Inclusive gaming not only trains Bionic Kids, but empowers through creativity. References External links Organizations based in Orlando, Florida Non-profit technology Research organizations in the United States Health charities in the United States Children's health-related organizations Organizations established in 2014 Bionics 3D printer companies University of Central Florida Charities based in Florida Disability organizations in Florida
Limbitless Solutions
Technology,Engineering,Biology
1,077
991,459
https://en.wikipedia.org/wiki/Hopper%20crystal
A hopper crystal is a form of crystal, the shape of which resembles that of a pyramidal hopper container. The edges of hopper crystals are fully developed, but the interior spaces are not filled in. This results in what appears to be a hollowed-out step lattice formation, as if someone had removed interior sections of the individual crystals. In fact, the "removed" sections never filled in, because the crystal was growing so rapidly that there was not enough time (or material) to fill in the gaps. The interior edges of a hopper crystal still show the crystal form characteristic of the specific mineral, and so appear as a series of smaller and smaller stepped-down miniature versions of the original crystal. Hoppering occurs when electrical attraction is higher along the edges of the crystal; this causes faster growth at the edges than near the face centers. This attraction draws the mineral molecules more strongly than the interior sections of the crystal do, so the edges develop more quickly. However, the basic physics of this type of growth is the same as that of dendrites but, because the anisotropy in the solid–liquid interfacial energy is so large, the dendrite so produced exhibits a faceted morphology. Hoppering is common in many minerals, including lab-grown bismuth, galena, quartz (called skeletal or fenster crystals), gold, calcite, halite (salt), and water (ice). In 2017, Frito-Lay filed for (and later received) a patent for a salt cube hopper crystal. Because the shape increases the surface-area-to-volume ratio, it allows people to taste more salt relative to the amount actually consumed. References "Hopper crystals" in A New Kind of Science by Stephen Wolfram, p. 993. External links Images of hopper crystals, Glendale Community College Earth Science Image Archive Crystals
Hopper crystal
Chemistry,Materials_science
373
6,966
https://en.wikipedia.org/wiki/Chinese%20calendar
The traditional Chinese calendar, dating back to the Han dynasty, is a lunisolar calendar that blends solar, lunar, and other cycles for social and agricultural purposes. While modern China primarily uses the Gregorian calendar for official purposes, the traditional calendar remains culturally significant. It determines the timing of Chinese New Year, with traditions like the twelve animals of the Chinese Zodiac still widely observed. The traditional Chinese calendar uses the sexagenary cycle, a repeating system of Heavenly Stems and Earthly Branches, to mark years, months, and days. This system, along with astronomical observations and mathematical calculations, was developed to align solar and lunar cycles, though some approximations are necessary due to the natural differences between these cycles. Over centuries, the calendar was refined through advancements in astronomy and horology, with dynasties introducing variations to improve accuracy and meet cultural or political needs. While the Gregorian calendar has now become standard for civic daily use in China, the traditional lunisolar calendar continues to influence festivals, cultural practices, and zodiac-based customs. Beyond China, it has shaped other East Asian calendars, including the Korean, Vietnamese, and Japanese lunar systems, each adapting the same lunisolar principles while integrating local customs and terminology. Epochs, or fixed starting points for year counting, have played an essential role in the Chinese calendar's structure. Some epochs are based on historical figures, such as the inauguration of the Yellow Emperor (Huangdi), while others marked the rise of dynasties or significant political shifts. This system allowed for the numbering of years based on regnal eras, with the start of a ruler's reign often resetting the count. The Chinese calendar also tracks time in smaller units, including months, days, and double-hour periods called shichen. 
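The sexagenary stem-branch year count mentioned above reduces to simple modular arithmetic. A minimal sketch, assuming the conventional alignment in which 1984 CE began a cycle as a jiǎzǐ year, and ignoring that the traditional year actually starts at Chinese New Year rather than 1 January:

```python
# Pinyin names of the ten Heavenly Stems and twelve Earthly Branches.
STEMS = ["jia", "yi", "bing", "ding", "wu", "ji", "geng", "xin", "ren", "gui"]
BRANCHES = ["zi", "chou", "yin", "mao", "chen", "si", "wu", "wei",
            "shen", "you", "xu", "hai"]

def stem_branch(year):
    """Stem-branch label of a Gregorian year, using 1984 = jia-zi."""
    offset = year - 1984              # whole 60-year cycles drop out mod 10/12
    return STEMS[offset % 10] + "-" + BRANCHES[offset % 12]

print(stem_branch(1984))  # jia-zi
print(stem_branch(2024))  # jia-chen (2024 - 1984 = 40; 40 % 10 = 0, 40 % 12 = 4)
```

Because 60 is the least common multiple of 10 and 12, the combined label repeats every 60 years, which is what makes the cycle "sexagenary".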
These timekeeping methods have influenced broader fields of horology, with some principles, such as precise time subdivisions, still evident in modern scientific timekeeping. The continued use of the calendar today highlights its enduring cultural, historical, and scientific significance. Etymology The character for calendar was represented in earlier variant forms, ultimately derived from an ancient form (秝). That ancient form consists of two stalks of rice plant, arranged in parallel; the character represents order in space and also order in time. As its meaning became complex, a modern dedicated character was created to represent the meaning of calendar. Maintaining the correctness of calendars was an important task in maintaining the authority of rulers, being perceived as a way to measure the ability of a ruler. For example, someone seen as a competent ruler would foresee the coming of seasons and prepare accordingly. This understanding was also relevant in predicting abnormalities of the Earth and celestial bodies, such as lunar and solar eclipses. The significant relationship between authority and timekeeping helps to explain why there are 102 calendars in Chinese history, each trying to predict the correct courses of the sun, moon and stars, and to mark good times and bad times. Each calendar was given its own name and recorded in a dedicated calendar section in the history books of different eras. A ruler would issue an almanac before the commencement of each year. There were private almanac issuers, usually illegal, when a ruler lost control of some territories. Various modern Chinese calendar names resulted from the struggle between the introduction of the Gregorian calendar by the government and the preservation of customs by the public in the era of the Republic of China. 
The government wanted to abolish the Chinese calendar to force everyone to use the Gregorian calendar, and even abolished the Lunar New Year, but faced great opposition. The public needed the astronomical Chinese calendar to do things at a proper time, for example farming and fishing; also, a wide spectrum of festivals and customs observations have been based on the calendar. The government finally compromised and rebranded it as the agricultural calendar in 1947, relegating the calendar to merely agricultural use. Epochs An epoch is a point in time chosen as the origin of a particular calendar era, thus serving as a reference point from which subsequent time or dates are measured. The use of epochs in the Chinese calendar system allows for a chronological starting point from which to begin continuously numbering subsequent dates. Various epochs have been used. Similarly, nomenclature similar to that of the Christian era has occasionally been used. No reference date is universally accepted; the most popular is that of the Gregorian calendar. During the 17th century, the Jesuit missionaries tried to determine the epochal year of the Chinese calendar. In his Sinicae historiae decas prima (published in Munich in 1658), Martino Martini (1614–1661) dated the Yellow Emperor's ascension at 2697 BCE and began the Chinese calendar with the reign of Fuxi (which, according to Martini, began in 2952 BCE). Philippe Couplet's 1686 Chronological table of Chinese monarchs (Tabula chronologica monarchiae sinicae) gave the same date for the Yellow Emperor. The Jesuits' dates provoked interest in Europe, where they were used for comparison with Biblical chronology. Modern Chinese chronology has generally accepted Martini's dates, except that it usually places the reign of the Yellow Emperor at 2698 BCE and omits his predecessors Fuxi and Shennong as "too legendary to include". 
Publications began using the estimated birth date of the Yellow Emperor as the first year of the Han calendar in 1903, with newspapers and magazines proposing different dates. Jiangsu province counted 1905 as the year 4396 (using a year 1 of 2491 BCE), and the newspaper Ming Pao reckoned 1905 as 4603 (using a year 1 of 2698 BCE). Liu Shipei (1884–1919) created the Yellow Emperor Calendar, with year 1 as the birth of the emperor (which he determined as 2711 BCE). There is no evidence that this calendar was used before the 20th century. Liu calculated that the 1900 international expedition sent by the Eight-Nation Alliance to suppress the Boxer Rebellion entered Beijing in the 4611th year of the Yellow Emperor. Taoists later adopted the Yellow Emperor Calendar and named it the Tao Calendar. On 2 January 1912, Sun Yat-sen announced changes to the official calendar and era. 1 January was 14 Shíyīyuè 4609 in the Huángdì year count, assuming a year 1 of 2698 BCE. Many overseas Chinese communities, like San Francisco's Chinatown, adopted the change. The modern Chinese standard calendar uses the epoch of the Gregorian calendar, which is on 1 January of the year 1 CE. Calendar types Lunisolar Lunisolar calendars involve correlations of the cycles of the sun (solar) and the moon (lunar). Solar and agricultural A solar calendar (also called the Tung Shing, the Yellow Calendar or Imperial Calendar, both alluding to the Yellow Emperor) keeps track of the seasons as the earth and the sun move in the solar system relative to each other. A purely solar calendar may be useful in planning times for agricultural activities such as planting and harvesting. Solar calendars tend to use astronomically observable points of reference such as equinoxes and solstices, events which may be approximately predicted using fundamental methods of observation and basic mathematical analysis. 
Modern Chinese calendar and horology The topic of the Chinese calendar also includes variations of the modern Chinese calendar, influenced by the Gregorian calendar. Variations include the methodologies of the People's Republic of China and Taiwan. Modern calendars In China, the modern calendar is defined by the Chinese national standard GB/T 33661–2017, "Calculation and Promulgation of the Chinese Calendar", issued by the Standardization Administration of China on 12 May 2017. Influence of Gregorian calendar Although modern-day China uses the Gregorian calendar, the traditional Chinese calendar governs holidays, such as the Chinese New Year and Lantern Festival, in both China and overseas Chinese communities. It also provides the traditional Chinese nomenclature of dates within a year, which people use to select auspicious days for weddings, funerals, moving, or starting a business. The evening state-run news program Xinwen Lianbo in the People's Republic of China continues to announce the months and dates in both the Gregorian and the traditional lunisolar calendar. History The Chinese calendar system has a long history, which has traditionally been associated with specific dynastic periods. Various individual calendar types have been developed under different names. Some calendar variations are associated with dynastic changes, along a spectrum running from prehistorical and mythological times through well-attested historical dynastic periods. Many individuals have been associated with the development of the Chinese calendar, including researchers into the underlying astronomy; the development of observational instruments has also been historically important. Influences from India, the Islamic world, and the Jesuits also became significant. Phenology Early calendar systems were often closely tied to natural phenomena.
Phenology is the study of periodic events in biological life cycles and how these are influenced by seasonal and interannual variations in climate, as well as habitat factors (such as elevation). The plum-rains season (), the rainy season in late spring and early summer, begins on the first bǐng day after Mangzhong () and ends on the first wèi day after Xiaoshu (). The Three Fu () are three periods of hot weather, counted from the first gēng day after the summer solstice. The first fu () is 10 days long. The mid-fu () is 10 or 20 days long. The last fu () is 10 days, counted from the first gēng day after the beginning of autumn. The Shujiu cold days () are the 81 days after the winter solstice (divided into nine sets of nine days), and are considered the coldest days of the year. Each nine-day unit is known by its order in the set, followed by "nine" (). In traditional Chinese culture, "nine" represents infinity and is also the number of "Yang". According to one belief, each successive accumulation of "Yang" gradually reduces the "Yin", until finally the weather becomes warm. Names of months Lunar months were originally named according to natural phenomena. Current naming conventions use numbers as the month names. Every month is also associated with one of the twelve Earthly Branches. Gregorian dates are approximate and should be used with caution. Many years have intercalary months. Chinese astronomy The Chinese calendar developed through extensive observation and calculation of the apparent movements of the Sun, Moon, planets, and stars as seen from Earth. Chinese astronomers Many Chinese astronomers have contributed to the development of the Chinese calendar. Many were of the scholarly or shi class (), including writers of history such as Sima Qian.
Notable Chinese astronomers who have contributed to the development of the calendar include Gan De, Shi Shen, and Zu Chongzhi. Technology Early technological developments aiding in calendar development include the development of the gnomon. Later technological developments useful to the calendar system include the naming, numbering, and mapping of the sky; the development of analog computational devices such as the armillary sphere and the water clock; and the establishment of observatories. Chinese calendar names Ancient six calendars Six especially significant calendar systems are known to have begun development in the Warring States period (ending in 221 BCE). The modern names for these six ancient calendars developed later in history; they can be translated into English as Huangdi, Yin, Zhou, Xia, Zhuanxu, and Lu. Calendar variations There are various Chinese terms for calendar variations, including: Nongli Calendar (traditional Chinese: 農曆; simplified Chinese: 农历; pinyin: nónglì; lit. 'agricultural calendar') Jiuli Calendar (traditional Chinese: 舊曆; simplified Chinese: 旧历; pinyin: jiùlì; Jyutping: Gau6 Lik6; lit. 'former calendar') Laoli Calendar (traditional Chinese: 老曆; simplified Chinese: 老历; pinyin: lǎolì; lit. 'old calendar') Zhongli Calendar (traditional Chinese: 中曆; simplified Chinese: 中历; pinyin: zhōnglì; Jyutping: zung1 lik6; lit. 'Chinese calendar') Huali Calendar (traditional Chinese: 華曆; simplified Chinese: 华历; pinyin: huálì; Jyutping: waa4 lik6; lit. 'Chinese calendar') Solar calendars The traditional Chinese calendar was developed between 771 BCE and 476 BCE, during the Spring and Autumn period of the Eastern Zhou dynasty. Solar calendars were used before the Zhou dynasty period, along with the basic sexagenary system. Five-elements calendar One version of the solar calendar is the five-elements calendar (), which derives from the Wu Xing.
A 365-day year was divided into five phases of 73 days, with each phase corresponding to a Wu Xing element. A phase began with a governing-element day (), followed by six 12-day weeks. Each phase consisted of two three-week months, making each year ten months long. Years began on a jiǎzǐ () day (and a 72-day wood phase), followed by a bǐngzǐ () day and a 72-day fire phase; a wùzǐ () day and a 72-day earth phase; a gēngzǐ () day and a 72-day metal phase; and a rénzǐ () day followed by a water phase. Other days were tracked using the Yellow River Map (He Tu). Four-quarters calendar Another version is a four-quarters calendar (, or ). The weeks were ten days long, with one month consisting of three weeks. A year had 12 months, with a ten-day week intercalated in summer as needed to keep up with the tropical year. The 10 Heavenly Stems and 12 Earthly Branches were used to mark days. Balanced calendar A third version is the balanced calendar (). A year was 365.25 days, and a month was 29.5 days. After every 16th month, a half-month was intercalated. According to oracle bone records, the Shang dynasty calendar ( BCE) was a balanced calendar with 12 to 14 months in a year; the month after the winter solstice was Zhēngyuè. Lunisolar calendars by dynasty Six ancient calendars Modern historical knowledge and records are limited for the earlier calendars. These calendars are known as the six ancient calendars (), or quarter-remainder calendars (), since all calculate a year as days long. Months begin on the day of the new moon, and a year has 12 or 13 months. Intercalary months (a 13th month) are added to the end of the year. The Qiang and Dai calendars are modern versions of the Zhuanxu calendar, used by mountain peoples. Zhou dynasty The first lunisolar calendar was the Zhou calendar (), introduced under the Zhou dynasty (1046 BCE – 256 BCE). This calendar sets the beginning of the year at the day of the new moon before the winter solstice.
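The five-elements year arithmetic described above can be checked directly; this sketch simply encodes the stated structure (one governing-element day plus six 12-day weeks per phase, five phases per year):

```python
# Each phase: one governing-element day plus six 12-day weeks
# (the "72-day phase" in the text is the six weeks that follow
# the governing-element day).
phases = ["wood", "fire", "earth", "metal", "water"]
days_per_phase = 1 + 6 * 12

# Five 73-day phases make the 365-day year.
year_length = len(phases) * days_per_phase
print(days_per_phase, year_length)  # 73 365
```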
Competing Warring States calendars Several competing lunisolar calendars were introduced by states fighting Zhou control during the Warring States period (perhaps 475 BCE – 221 BCE). The state of Lu issued its own Lu calendar (). Jin issued the Xia calendar () with a year beginning on the day of the new moon nearest the March equinox. Qin issued the Zhuanxu calendar (), with a year beginning on the day of the new moon nearest the winter solstice. Song's Yin calendar () began its year on the day of the new moon after the winter solstice. Qin and early Han dynasties After Qin Shi Huang unified China under the Qin dynasty in 221 BCE, the Qin calendar () was introduced. It followed most of the rules governing the Zhuanxu calendar, but the month order was that of the Xia calendar; the year began with month 10 and ended with month 9, analogous to a Gregorian calendar beginning in October and ending in September. The intercalary month, known as the second Jiǔyuè (), was placed at the end of the year. The Qin calendar continued in use into the Han dynasty. Han dynasty Tàichū calendar Emperor Wu of Han introduced reforms in the seventh of the eleven named eras of his reign, Tàichū (), 104 BCE – 101 BCE. His Tàichū Calendar () defined a solar year as days (365;06:00:14.035), and the lunar month as days (29;12:44:26.67). The 19-year cycle used to place the 7 additional months was taken as exact, not as an approximation. This calendar introduced the 24 solar terms, dividing the year into 24 equal parts of 15° each. Solar terms were paired, with the 12 combined periods known as climate terms. The first solar term of the period was known as a pre-climate (节气), and the second was a mid-climate (中气). Months were named for the mid-climate to which they were closest, and a month without a mid-climate was an intercalary month.
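The statement that the Tàichū calendar treats the 19-year cycle as exact can be verified with exact rational arithmetic: 19 solar years of 365 + 385/1539 days equal precisely 235 synodic months (19 × 12 regular plus 7 intercalary) of 29 + 43/81 days. A sketch:

```python
from fractions import Fraction

solar_year = 365 + Fraction(385, 1539)  # Tàichū solar year, in days
lunar_month = 29 + Fraction(43, 81)     # Tàichū synodic month, in days

# 19 years contain 19*12 + 7 = 235 months, and the identity is exact:
months = 19 * 12 + 7
print(19 * solar_year == months * lunar_month)  # True
print(19 * solar_year)                          # 562120/81 days
```

This exactness is possible because 1539 = 81 × 19, so both periods share the same denominator over the cycle.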
The Taichu calendar established a framework for traditional calendars, with later calendars adding to the basic formula. Northern and Southern Dynasties Dàmíng calendar The Dàmíng Calendar (), created in the Northern and Southern Dynasties by Zu Chongzhi (429 CE – 500 CE), introduced the precession of the equinoxes. Tang dynasty Wùyín Yuán calendar The use of syzygy to determine the lunar month was first described in the Tang dynasty Wùyín Yuán Calendar (). Yuan dynasty Shòushí calendar The Yuan dynasty Shòushí calendar () used spherical trigonometry to find the length of the tropical year. The calendar had a 365.2425-day year, identical to the Gregorian calendar. Shíxiàn calendar The Shíxiàn (or Chongzhen) calendar was in use from 1645 to 1913. During the late Ming dynasty, the emperor appointed Xu Guangqi in 1629 to lead the Shíxiàn calendar reform. Assisted by Jesuits, he translated Western astronomical works and introduced new concepts, such as those of Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, and Tycho Brahe; however, the new calendar was not released before the end of the dynasty. In the early Qing dynasty, Johann Adam Schall von Bell submitted the calendar, which had been prepared under Xu Guangqi's leadership, to the Shunzhi Emperor. The Qing government issued it as the Shíxiàn (seasonal) calendar. In this calendar, the solar terms are 15° each along the ecliptic, so it can also be used as a solar calendar. However, the length of the climate term near the perihelion is less than 30 days, and there may be two mid-climate terms in a month. The Shíxiàn calendar changed the mid-climate-term rule to "decide the month in sequence, except the intercalary month." The present traditional calendar follows the Shíxiàn calendar, except: The baseline is Chinese Standard Time, rather than Beijing local time. (Modern) astronomical data, rather than mathematical calculations, are used.
Republic of China Although the Chinese calendar lost its place as the country's official calendar at the beginning of the 20th century, its use has continued. The Republic of China Calendar published by the Beiyang government of the Republic of China still listed the dates of the Chinese calendar in addition to the Gregorian calendar. In 1929, the Nationalist government tried to ban the traditional Chinese calendar. The Kuómín Calendar published by the government no longer listed the dates of the Chinese calendar. However, Chinese people were used to the traditional calendar, and many traditional customs were based on the Chinese calendar. The ban failed and was lifted in 1934. The latest Chinese calendar is the "New Edition of Wànniánlì, revised edition", edited by the Beijing Purple Mountain Observatory, People's Republic of China. To optimize the Chinese calendar, astronomers have proposed a number of changes. Kao Ping-tse (; 1888–1970), a Chinese astronomer who co-founded the Purple Mountain Observatory, proposed that month numbers be calculated before the new moon and that solar terms be rounded to the day. Since the intercalary month is determined by the first month without a mid-climate, and the mid-climate time varies by time zone, countries that adopted the calendar but calculate with their own time could vary from the time in China. Horology Horology, or chronometry, refers to the measurement of time. In the context of the Chinese calendar, horology involves the definition and mathematical measurement of elements such as observable astronomical movements and events associated with days, months, years, hours, and so on. These measurements are based upon objective, observable phenomena. Calendar accuracy depends upon the accuracy and precision of the measurements. The Chinese calendar is lunisolar, similar to the Hindu, Hebrew, and ancient Babylonian calendars.
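The time-zone dependence noted above is easy to demonstrate: a new moon falling near local midnight lands on different calendar days in different zones. Using the new moon of 16:29 UTC on 29 January 1968 (a figure cited in this article):

```python
from datetime import datetime, timedelta, timezone

# New moon of January 1968 at 16:29 UTC.
new_moon = datetime(1968, 1, 29, 16, 29, tzinfo=timezone.utc)

# The lunar month begins on the local calendar day containing the new
# moon, so the start date depends on the time standard being used.
utc7 = new_moon.astimezone(timezone(timedelta(hours=7)))  # e.g. Hanoi time
utc8 = new_moon.astimezone(timezone(timedelta(hours=8)))  # Beijing time

print(utc7.date())  # 1968-01-29
print(utc8.date())  # 1968-01-30
```

A one-hour offset here shifts the start of the lunar month, and hence the New Year, by a full day.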
The calendar is based in part on objective, observable phenomena and in part on mathematical analysis correlating those phenomena. Lunisolar calendars especially attempt to correlate the solar and lunar cycles, but other considerations can be agricultural and seasonal or phenological, religious, or even political. Basic horologic definitions include that days begin and end at midnight, and months begin on the day of the new moon. Years start on the second (or third) new moon after the winter solstice. Solar terms govern the beginning, middle, and end of each month. A sexagenary cycle, comprising the heavenly stems () and the earthly branches (), is used as identification alongside each year and month, including intercalary months or leap months. Months are also annotated as either long ( for months with 30 days) or short ( for months with 29 days). There are also other elements of the traditional Chinese calendar. Day Days are Sun-oriented, based upon divisions of the solar year. A day () is considered, both traditionally and currently, to be the time from one midnight to the next. Traditionally, days (including the night-time portion) were divided into 12 double-hours; in modern times, the 24-hour system has become standard. Month Months are Moon-oriented: a month () is the time from one new moon to the next. These synodic months are about days long. The date () specifies when a day occurs within the month; days are numbered in sequence from 1 to 29 (or 30). The calendar month () specifies when a month occurs within a year; some months may be repeated. Year A year () is based upon the time of one revolution of Earth around the Sun, rounded to whole days. Traditionally, the year is measured from the first day of spring (lunisolar year) or the winter solstice (solar year). A year is astronomically about days. This includes the calendar year (), for which it is authoritatively determined on which day one year ends and another begins.
The year usually begins on the new moon closest to Lichun, the first day of spring. This is typically the second, and sometimes third, new moon after the winter solstice. A calendar year is 353–355 or 383–385 days long. Also included is the zodiac: year, or 30° on the ecliptic. A zodiacal year is about days. Solar terms Solar term (): year, or 15° on the ecliptic. A solar term is about days. Planets The movements of the Sun, Moon, Mercury, Venus, Mars, Jupiter and Saturn (sometimes known as the seven luminaries) are the references for calendar calculations. The distance between Mercury and the sun is less than 30° (the sun's height at chénshí, 8:00 to 10:00 am), so Mercury was sometimes called the "chen star" (); it is more commonly known as the "water star" (). Venus appears at dawn and dusk and is known as the "bright star" () or "long star" (). Mars looks like fire and moves irregularly, and is known as the "fire star" ( or ). Mars is the punisher in Chinese mythology. When Mars is near Antares (), it is a bad omen and can forecast an emperor's death or a chancellor's removal (). Jupiter's revolution period is 11.86 years, so Jupiter is called the "age star" (); 30° of Jupiter's revolution is about a year on earth. Saturn's revolution period is about 28 years. Known as the "guard star" (), Saturn guards one of the 28 Mansions every year. Stars Big Dipper The Big Dipper is the celestial compass, and its handle's direction indicates (or, some said, determines) the season and month. 3 Enclosures and 28 Mansions The stars are divided into Three Enclosures and 28 Mansions according to their location in the sky relative to Ursa Minor, at the center. Each mansion is named with a character describing the shape of its principal asterism. The Three Enclosures are the Purple Forbidden (), Supreme Palace (), and Heavenly Market () enclosures. The eastern mansions are , , , , , , . Southern mansions are , , , , , , . Western mansions are , , , , , , . Northern mansions are , , , , , , .
The moon moves through about one lunar mansion per day, so the 28 mansions were also used to count days. In the Tang dynasty, Yuan Tiangang () matched the 28 mansions, seven luminaries, and yearly animal signs to yield combinations such as "horn-wood-flood dragon" (). List of lunar mansions The names and determinative stars of the mansions are: Descriptive mathematics Several coding systems are used to avoid ambiguity. The Heavenly Stems are a decimal system. The Earthly Branches, a duodecimal system, mark dual hours ( or ) and climatic terms. The 12 characters progress from the first day with the same branch as the month (the first Yín day () of Zhēngyuè; the first Mǎo day () of Èryuè), and count the days of the month. The stem-branches form a sexagesimal system. The Heavenly Stems and Earthly Branches make up 60 stem-branches. The stem-branches mark days and years. The five Wu Xing elements are assigned to each stem, branch, or stem-branch. Sexagenary system Twelve branches Day China has used the Western hour-minute-second system to divide the day since the Qing dynasty. Several era-dependent systems had been in use previously; systems using multiples of twelve and ten were popular, since they could be easily counted and aligned with the Heavenly Stems and Earthly Branches. Week As early as the Bronze Age Xia dynasty, days were grouped into nine- or ten-day weeks known as xún (). Months consisted of three xún. The first 10 days were the early xún (), the middle 10 the mid xún (), and the last nine (or 10) days the late xún (). Japan adopted this pattern, with 10-day weeks known as . In Korea, they were known as sun (). The structure of xún led to public holidays every five or ten days. Officials of the Han dynasty were legally required to rest every five days (twice a xún, or 5–6 times a month). The name of these breaks became huan (, "wash"). Grouping days into sets of ten is still used today in referring to specific natural events.
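The decimal, duodecimal, and sexagesimal systems described above mesh mechanically: a stem repeats every 10 steps and a branch every 12, so pairs repeat every lcm(10, 12) = 60 steps. A sketch using pinyin names, assuming the conventional starting pair jiǎzǐ at position 0:

```python
STEMS = ["jiǎ", "yǐ", "bǐng", "dīng", "wù",
         "jǐ", "gēng", "xīn", "rén", "guǐ"]
BRANCHES = ["zǐ", "chǒu", "yín", "mǎo", "chén", "sì",
            "wǔ", "wèi", "shēn", "yǒu", "xū", "hài"]

def stem_branch(n):
    """Stem-branch pair at position n (0-based) of the sexagenary cycle."""
    return STEMS[n % 10] + BRANCHES[n % 12]

# Only 60 of the 120 conceivable pairs occur, because the stem and
# branch indices always share the same parity.
print(stem_branch(0))                            # jiǎzǐ
print(len({stem_branch(n) for n in range(60)}))  # 60
```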
"Three Fu" (), a 29–30-day period which is the hottest of the year, reflects its three-xún length. After the winter solstice, nine sets of nine days were counted to calculate the end of winter. The seven-day week was adopted from the Hellenistic system by the 4th century CE, although its method of transmission into China is unclear. It was again transmitted to China in the 8th century by Manichaeans via Kangju (a Central Asian kingdom near Samarkand), and is the most-used system in modern China. Month Months are defined by the time between new moons, which averages approximately days. There is no specified length of any particular Chinese month, so the first month could have 29 days (short month, ) in some years and 30 days (long month, ) in other years. A 12-month-year using this system has 354 days, which would drift significantly from the tropical year. To fix this, traditional Chinese years have a 13-month year approximately once every three years. The 13-month version has the same long and short months alternating, but adds a 30-day leap month (). Years with 12 months are called common years, and 13-month years are known as long years. Although most of the above rules were used until the Tang dynasty, different eras used different systems to keep lunar and solar years aligned. The synodic month of the Taichu calendar was days long. The 7th-century, Tang-dynasty Wùyín Yuán Calendar was the first to determine month length by synodic month instead of the cycling method. Since then, month lengths have primarily been determined by observation and prediction. The days of the month are always written with two characters and numbered beginning with 1. Days one to 10 are written with the day's numeral, preceded by the character Chū (); Chūyī () is the first day of the month, and Chūshí () the 10th. Days 11 to 20 are written as regular Chinese numerals; Shíwǔ () is the 15th day of the month, and Èrshí () the 20th. 
Days 21 to 29 are written with the character Niàn () before the characters one through nine; Niànsān (), for example, is the 23rd day of the month. Day 30 (when applicable) is written as the numeral Sānshí (). History books use days of the month numbered with the 60 stem-branches. Because astronomical observation determines month length, dates on the calendar correspond to moon phases. The first day of each month is the new moon. On the seventh or eighth day of each month, the first-quarter moon is visible in the afternoon and early evening. On the 15th or 16th day of each month, the full moon is visible all night. On the 22nd or 23rd day of each month, the last-quarter moon is visible late at night and in the morning. Since the beginning of the month is determined by when the new moon occurs, other countries using this calendar use their own time standards to calculate it; this results in deviations. The first new moon in 1968 was at 16:29 UTC on 29 January. Since North Vietnam used UTC+07:00 to calculate the Vietnamese calendar and South Vietnam used UTC+08:00 (Beijing time), the new moon fell at 23:29 on 29 January in the north but at 00:29 on 30 January in the south, so North Vietnam began the Tết holiday on 29 January while South Vietnam began it on 30 January. The time difference allowed asynchronous attacks in the Tet Offensive. Names of months and lunar date conventions Current naming conventions use numbers as the month names, although lunar months were originally named according to natural phenomena (phenology). Each month is also associated with one of the twelve Earthly Branches. Correspondences with Gregorian dates are approximate and should be used with caution. Many years have intercalary months. Though the numbered month names are often used for the corresponding month number in the Gregorian calendar, it is important to realize that the numbered month names are not interchangeable with the Gregorian months when talking about lunar dates.
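The day-naming rules above (Chū for days 1 to 10, plain numerals for 11 to 20, Niàn for 21 to 29, Sānshí for 30) amount to a small lookup. A sketch using pinyin transliterations for readability:

```python
# Pinyin numerals 1-10, used as the final syllable of each day name.
UNITS = ["yī", "èr", "sān", "sì", "wǔ", "liù", "qī", "bā", "jiǔ", "shí"]

def day_name(day):
    """Traditional name of a lunar day-of-month (1-30), in pinyin."""
    if 1 <= day <= 10:
        return "Chū" + UNITS[day - 1]    # Chūyī ... Chūshí
    if 11 <= day <= 19:
        return "Shí" + UNITS[day - 11]   # Shíyī ... Shíjiǔ
    if day == 20:
        return "Èrshí"
    if 21 <= day <= 29:
        return "Niàn" + UNITS[day - 21]  # Niànyī ... Niànjiǔ
    if day == 30:
        return "Sānshí"
    raise ValueError("lunar days run from 1 to 30")

print(day_name(15))  # Shíwǔ
print(day_name(23))  # Niànsān
```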
Incorrect: The Dragon Boat Festival falls on 5 May in the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival, and Qixi Festival fall on 9 September, 15 January, and 7 July in the Lunar Calendar, respectively. Correct: The Dragon Boat Festival falls on Wǔyuè 5th (the 5th day of the fifth month) in the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival, and Qixi Festival fall on Jiǔyuè 9th (the 9th day of the ninth month), Zhēngyuè 15th (the 15th day of the first month), and Qīyuè 7th (the 7th day of the seventh month) in the Lunar Calendar, respectively. Alternate Chinese Zodiac correction: The Dragon Boat Festival falls on Horse Month 5th in the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival, and Qixi Festival fall on Dog Month 9th, Tiger Month 15th, and Monkey Month 7th in the Lunar Calendar, respectively. One may identify the heavenly stem and earthly branch corresponding to a particular day, to its month, and to its year in order to determine the Four Pillars of Destiny associated with it. The most convenient publication to consult for this is the Tung Shing, also referred to as the Chinese Almanac of the year, or the Huangli, which contains the essential information concerning Chinese astrology. Days rotate through a sexagenary cycle marked by the coordination of heavenly stems and earthly branches; hence the reference to the Four Pillars of Destiny as "Bazi", or "Birth Time Eight Characters", with each pillar consisting of a character for its heavenly stem and another for its earthly branch. Since Huangli days are sexagenary, their order is quite independent of their numeric order in each month, and of their numeric order within a week (referred to as True Animals in relation to the Chinese zodiac).
Arriving at the Four Pillars of Destiny for a particular date therefore requires painstaking calculation, which rarely outpaces the convenience of simply looking up the date's Gregorian equivalent in the Huangli. Solar term The solar year (), the time between winter solstices, is divided into 24 solar terms known as jié qì (節氣). Each term is a 15° portion of the ecliptic. These solar terms mark both Western and Chinese seasons, as well as equinoxes, solstices, and other Chinese events. The even solar terms (marked with "Z", for , Zhongqi) are considered the major terms, while the odd solar terms (marked with "J", for , Jieqi) are deemed minor. The solar terms qīng míng (清明) on 5 April and dōng zhì (冬至) on 22 December are both celebrated events in China. Solar year The calendar solar year, known as the suì (), begins on the December solstice and proceeds through the 24 solar terms. Since the speed of the Sun's apparent motion along the ecliptic is variable, the time between major solar terms is not fixed. This variation in time between major solar terms results in different solar year lengths. There are generally 11 or 12 complete months, plus two incomplete months around the winter solstice, in a solar year. The complete months are numbered from 0 to 10, and the incomplete months are considered the 11th month. If there are 12 complete months in the solar year, it is known as a leap solar year, or leap suì. Due to the inconsistencies in the length of the solar year, different versions of the traditional calendar might have different average solar year lengths. For example, one solar year of the 1st century BCE Tàichū calendar is (365.25016) days. A solar year of the 13th-century Shòushí calendar is (365.2425) days, identical to the Gregorian calendar. The additional .00766 day from the Tàichū calendar leads to a one-day shift every 130.5 years. Pairs of solar terms are climate terms, or solar months.
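The one-day-per-130.5-years figure follows directly from the two year lengths quoted above:

```python
taichu = 365.25016   # Tàichū calendar solar year, in days
shoushi = 365.2425   # Shòushí (and Gregorian) solar year, in days

excess = taichu - shoushi    # extra length of the Tàichū year
print(round(excess, 5))      # 0.00766 day per year
print(round(1 / excess, 1))  # 130.5 years per accumulated day
```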
The first solar term is the "pre-climate" (), and the second is the "mid-climate" (). If there are 12 complete months within a solar year, the first month without a mid-climate is the leap, or intercalary, month. In other words, the first month that does not include a major solar term is the leap month. Leap months are numbered with rùn, the character for "intercalary", plus the name of the month they follow. In 2017, the intercalary month after month six was called Rùn Liùyuè, or "intercalary sixth month" (), and written as 6i or 6+. The next intercalary month (in 2020, after month four) will be called Rùn Sìyuè () and written 4i or 4+. Lunisolar year The lunisolar year begins with the first spring month, Zhēngyuè (), and ends with the last winter month, Làyuè (). All other months are named for their number in the month order. See below on the timing of the Chinese New Year. Years were traditionally numbered by the reign in ancient China, but this was abolished after the founding of the People's Republic of China in 1949. For example, the year from 12 February 2021 to 31 January 2022 was a Xīnchǒu year () of 12 months or 354 days. The Tang dynasty used the Earthly Branches to mark the months from December 761 to May 762. Over this period, the year began with the winter solstice. Age reckoning In modern China, a person's official age is based on the Gregorian calendar. For traditional use, age is based on the Chinese sui calendar. A child is considered one year old at birth. After each Chinese New Year, one year is added to their traditional age. Their age is therefore the number of Chinese calendar years in which they have lived. Due to the potential for confusion, the age of infants is often given in months instead of years. After the Gregorian calendar was introduced in China, the traditional Chinese age was referred to as the "nominal age" () and the Gregorian age was known as the "real age" ().
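The traditional-age rule above (one at birth, plus one at each Chinese New Year) can be written directly. The New Year dates must be supplied as data, since they cannot be derived arithmetically; the dates below follow the 2021 year boundary quoted in the text, and the function name is illustrative:

```python
from datetime import date

def traditional_age(birth, today, new_year_dates):
    """Sui age: 1 at birth, plus 1 for every Chinese New Year lived through."""
    return 1 + sum(1 for ny in new_year_dates if birth < ny <= today)

# The text notes the Xīnchǒu year ran 12 Feb 2021 - 31 Jan 2022,
# so the following New Year fell on 1 Feb 2022.
cny = [date(2021, 2, 12), date(2022, 2, 1)]

# A child born 1 Feb 2021 is already 2 sui on 13 Feb 2021, having
# lived through the New Year of 12 Feb 2021.
print(traditional_age(date(2021, 2, 1), date(2021, 2, 13), cny))  # 2
```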
Year-numbering systems Eras Ancient China numbered years from an emperor's ascension to the throne or his declaration of a new era name. The first recorded reign title was Jiànyuán (), from 140 BCE; the last reign title was Xuāntǒng (), from 1908 CE. The era system was abolished in 1912, after which the current, or Republican, era was used. Stem-branches The 60 stem-branches have been used to mark the date since the Shang dynasty (1600 BCE – 1046 BCE). Astrologers knew that the orbital period of Jupiter is about 12 × 361 = 4332 days, which they divided into 12 years () of 361 days each. The stem-branches system solved the era system's problem of unequal reign lengths. Chinese New Year The date of the Chinese New Year accords with the patterns of the lunisolar calendar and hence varies from year to year. The invariant between years is that the winter solstice, Dongzhi, is required to fall in the eleventh month of the year. This means that Chinese New Year will be on the second new moon after the previous winter solstice, unless there is a leap month 11 or 12 in the previous year. This rule is accurate; however, there are two other mostly (but not completely) accurate rules that are commonly stated: The new year is on the new moon closest to Lichun (typically 4 February). The new year is on the first new moon after Dahan (typically 20 January). It has been found that Chinese New Year moves back by either 10, 11, or 12 days in most years. If it falls on or before 31 January, then it moves forward in the next year by either 18, 19, or 20 days.
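Stem-branch year labels such as the Xīnchǒu year mentioned earlier can be computed with a conventional alignment formula (1984 taken as a jiǎzǐ year, i.e. an offset of year − 4); this alignment is a common convention, not something defined in the text above:

```python
STEMS = ["Jiǎ", "Yǐ", "Bǐng", "Dīng", "Wù",
         "Jǐ", "Gēng", "Xīn", "Rén", "Guǐ"]
BRANCHES = ["zǐ", "chǒu", "yín", "mǎo", "chén", "sì",
            "wǔ", "wèi", "shēn", "yǒu", "xū", "hài"]

def year_stem_branch(gregorian_year):
    """Stem-branch of the lunisolar year beginning in `gregorian_year`,
    using the common alignment in which 1984 is a jiǎzǐ year."""
    n = gregorian_year - 4
    return STEMS[n % 10] + BRANCHES[n % 12]

print(year_stem_branch(2021))  # Xīnchǒu, as stated in the text
print(year_stem_branch(1984))  # Jiǎzǐ
```

Note that the label applies to the lunisolar year beginning in that Gregorian year; January and early-February dates belong to the previous stem-branch year.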
Holidays Various traditional and religious holidays shared by communities throughout the world use the Chinese (lunisolar) calendar: Holidays with the same day and same month The Chinese New Year (known as the Spring Festival/春節 in China) is on the first day of the first month and was traditionally called the Yuan Dan (元旦) or Zheng Ri (正日). In Vietnam it is known as Tết Nguyên Đán (). Traditionally it was the most important holiday of the year. It is an official holiday in China, Hong Kong, Macau, Taiwan, Vietnam, Korea, the Philippines, Malaysia, Singapore, Indonesia, and Mauritius. It is also a public holiday in Thailand's Narathiwat, Pattani, Yala and Satun provinces, and is an official public school holiday in New York City. The Double Third Festival is on the third day of the third month. The Dragon Boat Festival, or the Duanwu Festival (端午節), is on the fifth day of the fifth month and is an official holiday in China, Hong Kong, Macau, and Taiwan. It is also celebrated in Vietnam, where it is known as Tết Đoan Dương (節端陽). The Qixi Festival (七夕節) is celebrated in the evening of the seventh day of the seventh month. It is also celebrated in Vietnam, where it is known as Tết Ngâu. The Double Ninth Festival (重陽節) is celebrated on the ninth day of the ninth month. It is also celebrated in Vietnam, where it is known as Tết Trùng Cửu (節重九). Full moon holidays (holidays on the fifteenth day) The Lantern Festival is celebrated on the fifteenth day of the first month and was traditionally called the Yuan Xiao (元宵) or Shang Yuan Festival (上元節). In Vietnam, it is known as Rằm tháng giêng. The Zhong Yuan Festival is celebrated on the fifteenth day of the seventh month. In Vietnam, it is celebrated as Lễ Vu Lan (禮盂蘭). The Mid-Autumn Festival is celebrated on the fifteenth day of the eighth month. In Vietnam, it is celebrated as Tết Trung Thu (節中秋). The Xia Yuan Festival is celebrated on the fifteenth day of the tenth month. In Vietnam, it is celebrated as Lễ mừng lúa mới.
Celebrations of the twelfth month The Laba Festival is on the eighth day of the twelfth month. It is the enlightenment day of Sakyamuni Buddha and in Vietnam is known as Lễ Vía Phật Thích Ca thành đạo. The Kitchen God Festival is celebrated on the twenty-third day of the twelfth month in northern regions of China and on the twenty-fourth day of the twelfth month in southern regions of China. Chinese New Year's Eve, also known as the Chuxi Festival, is celebrated on the evening of the last day of the lunar calendar. It is celebrated wherever the lunar calendar is observed. Celebrations of solar-term holidays The Qingming Festival (清明节) is celebrated on the fifteenth day after the Spring Equinox. The Dongzhi Festival (冬至) is celebrated on the winter solstice. Religious holidays based on the lunar calendar East Asian Mahayana, Daoist, and some Cao Dai holidays and/or vegetarian observances are based on the lunar calendar. Celebrations in Japan Many of the above holidays of the lunar calendar are also celebrated in Japan, but since the Meiji era they have been held on the similarly numbered dates of the Gregorian calendar. Double celebrations due to intercalary months When there is a corresponding intercalary month, a holiday may be celebrated twice. For example, in the hypothetical situation in which there is an additional intercalary seventh month, the Zhong Yuan Festival will be celebrated in the seventh month, followed by another celebration in the intercalary seventh month. (The next such occasion will be in 2033, the first since the calendar reform of 1645.) Similar calendars Like Chinese characters, variants of the Chinese calendar have been used in different parts of the Sinosphere throughout history: this includes Vietnam, Korea, Singapore, Japan and Ryukyu, Mongolia, and elsewhere.
Outlying areas of China Calendars of ethnic groups in mountains and plateaus of southwestern China and grasslands of northern China are based on their phenology and algorithms of traditional calendars of different periods, particularly the Tang and pre-Qin dynasties. Non-Chinese areas Korea, Vietnam, and the Ryukyu Islands adopted the Chinese calendar. In the respective regions, the Chinese calendar has been adapted into the Korean, Vietnamese, and Ryukyuan calendars, with the main difference from the Chinese calendar being the use of different meridians due to geography, leading to some astronomical events — and calendar events based on them — falling on different dates. The traditional Japanese calendar was also derived from the Chinese calendar (based on a Japanese meridian), but Japan abolished its official use in 1873 after the Meiji Restoration reforms. Calendars in Mongolia and Tibet have absorbed elements of the traditional Chinese calendar but are not direct descendants of it. See also Chinese calendar correspondence table Chinese numerals East Asian age reckoning Guo Shoujing, an astronomer tasked with calendar reform during the 13th century List of festivals in Asia Metonic cycle of 19 years, used to reckon leap years with intercalary months in the Hebrew and Babylonian calendars Notes References Sources Further reading External links Calendars Chinese months Gregorian-Lunar calendar years (1901–2100) Chinese calendar and holidays Chinese calendar with Auspicious Events Chinese Calendar Online Calendar conversion 2000-year Chinese-Western calendar converter From 1 CE to 2100 CE. Useful for historical studies. To use, put the western year 年, month 月, and day 日 in the bottom row and click on 執行. Western-Chinese calendar converter Rules Mathematics of the Chinese Calendar The Structure of the Chinese Calendar Calendar Horology Lunisolar calendars Specific calendars
Chinese calendar
Physics
9,453
40,145,449
https://en.wikipedia.org/wiki/Shellite%20%28explosive%29
Shellite (known as Tridite in US service) is an explosive mixture of picric acid and dinitrophenol or picric acid and hexanitrodiphenylamine in a ratio of 70/30. It was typically used as a filling in Royal Navy armour-piercing shells during the early part of the 20th century. History Shellite originated after World War I as a development of lyddite (picric acid). During the war, lyddite-filled armour-piercing shells had been found to be shock-sensitive, with a tendency to detonate prematurely upon impact rather than after penetrating the target's armour plate. Shellite was less sensitive, and also had the advantage of a low melting point, which allowed it to be melted easily and poured into shell casings during manufacture. The first trials of shellite took place in 1921, when the British monitor experimentally fired different types of 15 inch (381 mm) shell at , point-blank range against the surrendered German battleship . During World War II, shellite continued to be used in naval shells. It was also used in the British Disney bomb, a type of concrete-piercing bomb. Legacy Shellite-filled munitions may still be encountered in the wrecks of sunken warships. They are considered hazardous because, over time, picric acid reacts to form crystals of metal picrates, such as iron picrate. These crystals are extremely shock-sensitive, and it is recommended that wrecks containing shellite munitions not be disturbed in any way. The hazard may lessen once the shells become corroded enough to admit seawater, as these materials are water-soluble. References Citations Bibliography Explosives Naval artillery
Shellite (explosive)
Chemistry
345
24,380,013
https://en.wikipedia.org/wiki/Chain%20rule%20%28probability%29
In probability theory, the chain rule (also called the general product rule) describes how to calculate the probability of the intersection of, not necessarily independent, events, or the joint distribution of random variables, using conditional probabilities. This rule allows one to express a joint probability in terms of only conditional probabilities. The rule is notably used in the context of discrete stochastic processes and in applications, e.g. the study of Bayesian networks, which describe a probability distribution in terms of conditional probabilities. Chain rule for events Two events For two events $A$ and $B$, the chain rule states that $\mathbb P(A \cap B) = \mathbb P(B \mid A)\,\mathbb P(A)$, where $\mathbb P(B \mid A)$ denotes the conditional probability of $B$ given $A$. Example An Urn A has 1 black ball and 2 white balls and another Urn B has 1 black ball and 3 white balls. Suppose we pick an urn at random and then select a ball from that urn. Let event $A$ be choosing the first urn, i.e. $\mathbb P(A) = \mathbb P(\overline A) = 1/2$, where $\overline A$ is the complementary event of $A$. Let event $B$ be the chance we choose a white ball. The chance of choosing a white ball, given that we have chosen the first urn, is $\mathbb P(B \mid A) = 2/3$. The intersection $A \cap B$ then describes choosing the first urn and a white ball from it. The probability can be calculated by the chain rule as follows: $\mathbb P(A \cap B) = \mathbb P(B \mid A)\,\mathbb P(A) = \tfrac{2}{3} \cdot \tfrac{1}{2} = \tfrac{1}{3}$. Finitely many events For events $A_1, \ldots, A_n$ whose intersection has nonzero probability, the chain rule states $\mathbb P(A_1 \cap \cdots \cap A_n) = \mathbb P(A_n \mid A_1 \cap \cdots \cap A_{n-1}) \cdots \mathbb P(A_2 \mid A_1)\,\mathbb P(A_1)$. Example 1 For $n = 4$, i.e. four events, the chain rule reads $\mathbb P(A_1 \cap A_2 \cap A_3 \cap A_4) = \mathbb P(A_4 \mid A_3 \cap A_2 \cap A_1)\,\mathbb P(A_3 \mid A_2 \cap A_1)\,\mathbb P(A_2 \mid A_1)\,\mathbb P(A_1)$. Example 2 We randomly draw 4 cards (one at a time) without replacement from a deck of 52 cards. What is the probability that we have picked 4 aces? First, we set $A_n := \{\text{draw an ace in the } n\text{th draw}\}$. We obtain the probabilities $\mathbb P(A_1) = \tfrac{4}{52}$, $\mathbb P(A_2 \mid A_1) = \tfrac{3}{51}$, $\mathbb P(A_3 \mid A_1 \cap A_2) = \tfrac{2}{50}$, $\mathbb P(A_4 \mid A_1 \cap A_2 \cap A_3) = \tfrac{1}{49}$. Applying the chain rule, $\mathbb P(A_1 \cap A_2 \cap A_3 \cap A_4) = \tfrac{4}{52} \cdot \tfrac{3}{51} \cdot \tfrac{2}{50} \cdot \tfrac{1}{49} = \tfrac{24}{6\,497\,400} = \tfrac{1}{270\,725}$. Statement of the theorem and proof Let $(\Omega, \mathcal A, \mathbb P)$ be a probability space. Recall that the conditional probability of an event $A \in \mathcal A$ given $B \in \mathcal A$ with $\mathbb P(B) > 0$ is defined as $\mathbb P(A \mid B) = \frac{\mathbb P(A \cap B)}{\mathbb P(B)}$. Then we have the following theorem.
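Both worked examples above can be checked mechanically. The following sketch multiplies the conditional probabilities from the chain rule using exact rational arithmetic:

```python
from fractions import Fraction

# Four-aces example: P(A1 ∩ A2 ∩ A3 ∩ A4) is the product of the
# conditional probabilities of drawing an ace at each step,
# without replacement.
p = Fraction(1)
aces, cards = 4, 52
for i in range(4):
    p *= Fraction(aces - i, cards - i)
print(p)  # 1/270725

# Urn example: P(A ∩ B) = P(B | A) * P(A) = 2/3 * 1/2
urn = Fraction(2, 3) * Fraction(1, 2)
print(urn)  # 1/3
```

Using `Fraction` rather than floats keeps the chain-rule products exact, so the results match the closed forms in the text with no rounding error.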
Chain rule for discrete random variables Two random variables For two discrete random variables $X, Y$, we use the events $A := \{X = x\}$ and $B := \{Y = y\}$ in the definition above, and find the joint distribution as $\mathbb P(X = x, Y = y) = \mathbb P(X = x \mid Y = y)\,\mathbb P(Y = y)$, or $\mathbb P_{(X,Y)}(x, y) = \mathbb P_{X \mid Y}(x \mid y)\,\mathbb P_Y(y)$, where $\mathbb P_Y(y)$ is the probability distribution of $Y$ and $\mathbb P_{X \mid Y}(x \mid y)$ the conditional probability distribution of $X$ given $Y$. Finitely many random variables Let $X_1, \ldots, X_n$ be random variables and $x_1, \ldots, x_n \in \mathbb R$. By the definition of the conditional probability and using the chain rule, we can find the joint distribution as $\mathbb P(X_1 = x_1, \ldots, X_n = x_n) = \prod_{j=1}^n \mathbb P\bigl(X_j = x_j \mid X_1 = x_1, \ldots, X_{j-1} = x_{j-1}\bigr)$, where the factor for $j = 1$ is understood as the unconditional probability $\mathbb P(X_1 = x_1)$. Example For $n = 3$, i.e. considering three random variables, the chain rule reads $\mathbb P(X_1 = x_1, X_2 = x_2, X_3 = x_3) = \mathbb P(X_3 = x_3 \mid X_2 = x_2, X_1 = x_1)\,\mathbb P(X_2 = x_2 \mid X_1 = x_1)\,\mathbb P(X_1 = x_1)$. Bibliography , p. 496. References Bayesian inference Bayesian statistics Mathematical identities Probability theory
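The two-variable factorization can be verified numerically on a small joint table. The joint distribution below is a made-up illustration (any table whose entries sum to 1 works); the check confirms that P(X = x, Y = y) equals P(X = x | Y = y)·P(Y = y) for every cell.

```python
from fractions import Fraction

# Hypothetical joint distribution of two binary random variables X, Y,
# stored as P(X=x, Y=y); the values are illustrative only.
joint = {
    (0, 0): Fraction(1, 4), (0, 1): Fraction(1, 4),
    (1, 0): Fraction(1, 6), (1, 1): Fraction(1, 3),
}
assert sum(joint.values()) == 1  # a valid probability table

def marginal_y(y):
    """P(Y = y), obtained by summing the joint over x."""
    return sum(p for (_, y_), p in joint.items() if y_ == y)

def conditional_x_given_y(x, y):
    """P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y)."""
    return joint[(x, y)] / marginal_y(y)

# Chain rule: P(X=x, Y=y) == P(X=x | Y=y) * P(Y=y) for every cell.
for (x, y), p in joint.items():
    assert conditional_x_given_y(x, y) * marginal_y(y) == p
print("factorization verified")
```

The same loop extends to more variables by conditioning each factor on all earlier ones, as in the finite product formula above.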
Chain rule (probability)
Mathematics
490
15,077,071
https://en.wikipedia.org/wiki/Myopalladin
Myopalladin is a protein that in humans is encoded by the MYPN gene. Myopalladin is a muscle protein responsible for tethering proteins at the Z-disc and for communicating between the sarcomere and the nucleus in cardiac and skeletal muscle. Structure Myopalladin is a 145.2 kDa protein composed of 1320 amino acids. Myopalladin has five Ig-like repeats within the protein, and a proline-rich domain. Myopalladin binds the Src homology domain of nebulette and nebulin and tethers it to alpha-actinin via its C-terminal domain binding to the EF hand domains of alpha-actinin. The N-terminal region of myopalladin binds to the nuclear protein CARP, known to regulate gene expression in muscle. It has also been shown to bind ANKRD23. Function Myopalladin has dual subcellular localization, residing in both the nucleus and the sarcomere/I-bands in muscle. Accordingly, myopalladin functions both in sarcomere assembly and in control of gene expression. Specifics of these functions were gleaned from studies involving MYPN mutants associated with various cardiomyopathies. The Q529X myopalladin mutant was unable to recruit key binding partners such as desmin, alpha-actinin and CARP to the Z-disc during myofibrillogenesis. In contrast, the Y20C mutant resulted in decreased expression of binding partners. Clinical significance Mutations in MYPN have been linked to nemaline myopathy, dilated cardiomyopathy, hypertrophic cardiomyopathy and restrictive cardiomyopathy. References Further reading Proteins Genes
Myopalladin
Chemistry
365
2,094,449
https://en.wikipedia.org/wiki/Cluster%20decay
Cluster decay, also named heavy particle radioactivity, heavy ion radioactivity or heavy cluster decay, is a rare type of nuclear decay in which an atomic nucleus emits a small "cluster" of neutrons and protons, more than in an alpha particle, but less than a typical binary fission fragment. Ternary fission into three fragments also produces products of cluster size. Description The loss of protons from the parent nucleus changes it to the nucleus of a different element, the daughter, with a mass number Ad = A − Ae and atomic number Zd = Z − Ze, where Ae = Ne + Ze. For example: 223Ra → 209Pb + 14C. According to "Ronen's golden rule" of cluster decay, the emitted nucleus tends to be one with a high binding energy per nucleon, and especially one with a magic number of nucleons. This type of rare decay mode was observed in radioisotopes that decay predominantly by alpha emission, and it occurs only in a small percentage of the decays for all such isotopes. The branching ratio with respect to alpha decay is rather small (see the Table below). Ta and Tc are the half-lives of the parent nucleus relative to alpha decay and cluster radioactivity, respectively. Cluster decay, like alpha decay, is a quantum tunneling process: in order to be emitted, the cluster must penetrate a potential barrier. This is a different process from the more random nuclear disintegration that precedes light fragment emission in ternary fission, which may be a result of a nuclear reaction, but can also be a type of spontaneous radioactive decay in certain nuclides, demonstrating that input energy is not necessarily needed for fission; fission remains a fundamentally different process mechanistically.
In the absence of any energy loss for fragment deformation and excitation, as in cold fission phenomena or in alpha decay, the total kinetic energy is equal to the Q-value and is divided between the particles in inverse proportion to their masses, as required by conservation of linear momentum: the kinetic energy of the emitted cluster is Ek = Q·Ad/A, where Ad is the mass number of the daughter, Ad = A − Ae. Cluster decay exists in an intermediate position between alpha decay (in which a nucleus spits out a 4He nucleus) and spontaneous fission, in which a heavy nucleus splits into two (or more) large fragments and an assorted number of neutrons. Spontaneous fission ends up with a probabilistic distribution of daughter products, which sets it apart from cluster decay. In cluster decay for a given radioisotope, the emitted particle is a light nucleus and the decay mode always emits this same particle. For heavier emitted clusters, there is otherwise practically no qualitative difference between cluster decay and spontaneous cold fission. History The first information about the atomic nucleus was obtained at the beginning of the 20th century by studying radioactivity. For a long period of time only three kinds of nuclear decay modes (alpha, beta, and gamma) were known. They illustrate three of the fundamental interactions in nature: strong, weak, and electromagnetic. Spontaneous fission became better studied soon after its discovery in 1940 by Konstantin Petrzhak and Georgy Flyorov because of both the military and the peaceful applications of induced fission, which had been discovered circa 1939 by Otto Hahn, Lise Meitner, and Fritz Strassmann. There are many other kinds of radioactivity, e.g. cluster decay, proton emission, various beta-delayed decay modes (p, 2p, 3p, n, 2n, 3n, 4n, d, t, alpha, f), fission isomers, particle-accompanied (ternary) fission, etc.
The height of the potential barrier, mainly of Coulomb nature, for emission of the charged particles is much higher than the observed kinetic energy of the emitted particles. The spontaneous decay can only be explained by quantum tunneling, in a similar way to the first application of quantum mechanics to nuclei given by G. Gamow for alpha decay. Usually a theory explains an already experimentally observed phenomenon. Cluster decay is one of the rare examples of phenomena predicted before experimental discovery. Theoretical predictions were made in 1980, four years before experimental discovery. Four theoretical approaches were used: fragmentation theory, by solving a Schrödinger equation with mass asymmetry as a variable to obtain the mass distributions of fragments; penetrability calculations similar to those used in the traditional theory of alpha decay; and superasymmetric fission models, numerical (NuSAF) and analytical (ASAF). Superasymmetric fission models are based on the macroscopic-microscopic approach, using the asymmetrical two-center shell model level energies as input data for the shell and pairing corrections. Either the liquid drop model or the Yukawa-plus-exponential model extended to different charge-to-mass ratios has been used to calculate the macroscopic deformation energy. Penetrability theory predicted eight decay modes: 14C, 24Ne, 28Mg, 32,34Si, 46Ar, and 48,50Ca from the following parent nuclei: 222,224Ra, 230,232Th, 236,238U, 244,246Pu, 248,250Cm, 250,252Cf, 252,254Fm, and 252,254No. The first experimental report was published in 1984, when physicists at Oxford University discovered that 223Ra emits one 14C nucleus among every billion (10⁹) decays by alpha emission. Theory The quantum tunneling may be calculated either by extending fission theory to a larger mass asymmetry or by treating the heavier emitted particle within alpha decay theory.
Both fission-like and alpha-like approaches are able to express the decay constant as a product of three model-dependent quantities, λ = νSPs, where ν is the frequency of assaults on the barrier per second, S is the preformation probability of the cluster at the nuclear surface, and Ps is the penetrability of the external barrier. In alpha-like theories S is an overlap integral of the wave functions of the three partners (parent, daughter, and emitted cluster). In a fission theory the preformation probability is the penetrability of the internal part of the barrier from the initial turning point Ri to the touching point Rt. Very frequently it is calculated by using the Wentzel-Kramers-Brillouin (WKB) approximation. A very large number, of the order of 10⁵, of parent-emitted cluster combinations were considered in a systematic search for new decay modes. The large amount of computation could be performed in a reasonable time by using the ASAF model developed by Dorin N. Poenaru, Walter Greiner, et al. The model was the first to be used to predict measurable quantities in cluster decay. More than 150 cluster decay modes had been predicted before any other kind of half-life calculations were reported. Comprehensive tables of half-lives, branching ratios, and kinetic energies have been published. Potential barrier shapes similar to that considered within the ASAF model have been calculated by using the macroscopic-microscopic method. Previously it was shown that even alpha decay may be considered a particular case of cold fission. The ASAF model may be used to describe in a unified manner cold alpha decay, cluster decay, and cold fission (see figure 6.7, p. 287 of Ref. [2]). One can obtain, to a good approximation, one universal curve (UNIV) for any kind of cluster decay mode with a mass number Ae, including alpha decay. On a logarithmic scale, the equation log T = f(log Ps) represents a single straight line which can be conveniently used to estimate the half-life.
A single universal curve for alpha decay and cluster decay modes results by expressing log T + log S = f(log Ps). The experimental data on cluster decay in three groups of even-even, even-odd, and odd-even parent nuclei are reproduced with comparable accuracy by both types of universal curves: the fission-like UNIV and the UDL derived using alpha-like R-matrix theory. In order to find the released energy, Q = (M − Md − Me)c², one can use the compilation of measured masses M, Md, and Me of the parent, daughter, and emitted nuclei; c is the speed of light. The mass excess is transformed into energy according to Einstein's formula E = mc². Experiments The main experimental difficulty in observing cluster decay comes from the need to identify a few rare events against a background of alpha particles. The quantities experimentally determined are the partial half-life, Tc, and the kinetic energy of the emitted cluster, Ek. There is also a need to identify the emitted particle. Detection of radiation is based on its interactions with matter, leading mainly to ionizations. Using a semiconductor telescope and conventional electronics to identify the 14C ions, Rose and Jones's experiment ran for about six months in order to obtain 11 useful events. With modern magnetic spectrometers (SOLENO and Enge-split pole), at Orsay and Argonne National Laboratory (see ch. 7 in Ref. [2], pp. 188–204), a very strong source could be used, so that results were obtained in a run of a few hours. Solid state nuclear track detectors (SSNTD), insensitive to alpha particles, and magnetic spectrometers, in which alpha particles are deflected by a strong magnetic field, have been used to overcome this difficulty. SSNTD are cheap and handy but they need chemical etching and microscope scanning. A key role in experiments on cluster decay modes performed in Berkeley, Orsay, Dubna, and Milano was played by P. Buford Price, Eid Hourany, Michel Hussonnois, Svetlana Tretyakova, A. A.
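The energy bookkeeping described above (nucleon and charge conservation, the Q-value from mass excesses, and the inverse-mass split of kinetic energy) can be sketched in a few lines. The mass-excess values below are approximate illustrative numbers for the 223Ra → 209Pb + 14C decay, not evaluated nuclear data.

```python
# Sketch of Q-value and kinetic-energy bookkeeping for a cluster decay
# A(Z) -> Ad(Zd) + Ae(Ze), using 223Ra -> 209Pb + 14C as the example.

def q_value(delta_parent, delta_daughter, delta_cluster):
    """Q = (M - Md - Me)c^2, expressed via mass excesses in MeV."""
    return delta_parent - delta_daughter - delta_cluster

def kinetic_energies(q, a_parent, a_daughter):
    """Momentum conservation splits Q in inverse proportion to mass:
    the light cluster carries Ek = Q * Ad / A, the daughter the rest."""
    e_cluster = q * a_daughter / a_parent
    return e_cluster, q - e_cluster

# Nucleon-number and charge conservation: 223 = 209 + 14, 88 = 82 + 6.
assert (223, 88) == (209 + 14, 82 + 6)

# Approximate mass excesses in MeV (illustrative; consult an evaluated
# mass table for precise values).
q = q_value(17.23, -17.61, 3.02)
e_c, e_d = kinetic_energies(q, 223, 209)
print(round(q, 2), round(e_c, 2), round(e_d, 2))
```

The split shows why almost all of the roughly 32 MeV released goes to the light 14C cluster, with only about 2 MeV left for the heavy 209Pb recoil.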
Ogloblin, Roberto Bonetti, and their coworkers. The main region of the 20 emitters experimentally observed until 2010 is above Z = 86: 221Fr, 221-224,226Ra, 223,225Ac, 228,230Th, 231Pa, 230,232-236U, 236,238Pu, and 242Cm. Only upper limits could be detected in the following cases: 12C decay of 114Ba, 15N decay of 223Ac, 18O decay of 226Th, 24,26Ne decays of 232Th and of 236U, 28Mg decays of 232,233,235U, 30Mg decay of 237Np, and 34Si decay of 240Pu and of 241Am. Some of the cluster emitters are members of the three natural radioactive families. Others should be produced by nuclear reactions. Up to now no odd-odd emitter has been observed. From the many decay modes with half-lives and branching ratios relative to alpha decay predicted with the analytical superasymmetric fission (ASAF) model, the following 11 have been experimentally confirmed: 14C, 20O, 23F, 22,24-26Ne, 28,30Mg, and 32,34Si. The experimental data are in good agreement with predicted values. A strong shell effect can be seen: as a rule the shortest half-life is obtained when the daughter nucleus has a magic number of neutrons (Nd = 126) and/or protons (Zd = 82). The known cluster emissions as of 2010 are as follows: Fine structure The fine structure in 14C radioactivity of 223Ra was discussed for the first time by M. Greiner and W. Scheid in 1986. The superconducting spectrometer SOLENO of IPN Orsay has been used since 1984 to identify 14C clusters emitted from 222–224,226Ra nuclei. Moreover, it was used to discover the fine structure by observing transitions to excited states of the daughter. A theoretically predicted transition involving an excited state of 14C has not yet been observed. Surprisingly, the experimentalists had seen a transition to the first excited state of the daughter stronger than that to the ground state. The transition is favoured if the uncoupled nucleon is left in the same state in both parent and daughter nuclei. Otherwise the difference in nuclear structure leads to a large hindrance.
The interpretation was confirmed: the main spherical component of the deformed parent wave function has an i11/2 character, i.e. the main component is spherical. References External links National Nuclear Data Center Nuclear physics Radioactivity
Cluster decay
Physics,Chemistry
2,530
60,842,586
https://en.wikipedia.org/wiki/NGC%203741
NGC 3741 is an irregular galaxy in the constellation Ursa Major. It was discovered by John Herschel on March 19, 1828. At a distance of about 10 million light-years (3.2 Mpc), it is located in the M94 Group. It is relatively undisturbed by other galaxies. NGC 3741 is an unusual galaxy in several respects. It has a disk of neutral hydrogen (H I) that is extremely wide, extending some 23,000 light-years (7 kpc). The disk is strongly but symmetrically warped. With a mass-to-light ratio of MT/LB ~ 149, it is extremely rich in dark matter. NGC 3741 has a central bar and a faint spiral arm rich in H I. The bar rotates slowly, likely due to interaction with the dark matter. The bar and spiral arms would make NGC 3741 a low-luminosity spiral galaxy. These unusual properties could be explained if NGC 3741 were a late-stage merger with a low-mass companion, or if it accreted mass from the intergalactic medium. References External links 3741 07768 Ursa Major Irregular galaxies 035878 M94 Group
NGC 3741
Astronomy
249
60,847,281
https://en.wikipedia.org/wiki/Jarman%E2%80%93Bell%20principle
The Jarman–Bell principle is a concept in ecology that the food quality of a herbivore's intake decreases as the size of the herbivore increases, while the amount of such food increases to counteract the low quality. It operates by observing the allometric (non-linear scaling) properties of herbivores. The principle was coined by P. J. Jarman (1968) and R. H. V. Bell (1971). Large herbivores can subsist on low-quality food. Their gut size is larger than that of smaller herbivores. The increased size allows for better digestive efficiency, and thus allows viable consumption of low-quality food. Small herbivores require more energy per unit of body mass compared to large herbivores. A smaller size, and thus smaller gut size and lower efficiency, implies that these animals need to select high-quality food to function. Their small gut limits the amount of space for food, so they eat small quantities of a high-quality diet. Some animals practice coprophagy, where they ingest fecal matter to recycle untapped/undigested nutrients. However, the Jarman–Bell principle is not without exception. Small herbivorous members of mammals, birds and reptiles have been observed to be inconsistent with the trend of small body mass being linked with high-quality food. There have also been disputes over the mechanism behind the Jarman–Bell principle: that a larger body size does not in itself increase digestive efficiency. The implication that larger herbivores can subsist on poor-quality food, unlike smaller herbivores, means that the Jarman–Bell principle may contribute evidence for Cope's rule. Furthermore, the Jarman–Bell principle is also important in providing evidence for the ecological framework of "resource partitioning, competition, habitat use and species packing in environments" and has been applied in several studies. Links with allometry Allometry refers to the non-linear scaling of one variable with respect to another.
The relationship between such variables is expressed as a power law, where the exponent is a value not equal to 1 (thereby implying a non-linear relationship). Allometric relationships can be mathematically expressed as y = a·BM^b, where BM = body mass and the exponent b ≠ 1. Kleiber's law Kleiber's law describes how larger animals use less energy relative to small animals. Max Kleiber developed a formula that estimates this phenomenon (the exact values are not always consistent): MR = 70·W^0.75, where MR = metabolic rate (kcal/day) and W = weight/body mass (kg). Gut capacity scales linearly with body size (gut capacity ∝ BM^1.0) but maintenance metabolism (energy required to maintain homeostasis) scales fractionally (∝ BM^0.75). Both of these factors are linked through the MR/GC (metabolic requirement to gut capacity) ratio, which therefore scales as BM^0.75/BM^1.0 = BM^−0.25. As body mass increases, this ratio falls, so large bodies display a lower MR/GC ratio relative to a small body. That is, smaller herbivores require more metabolic energy per unit of body mass than large ones. Retention time The allometric scaling of retention time (the time that food remains inside the digestive system) with respect to body mass: where Tr = retention time (hours), D = digestibility of the food, and W = weight/body mass (kg). This formula was refined from a previous iteration because the previous formula took into account the entire gut, rather than focusing on the fermentation site where cellulose (the fibrous substance) is broken down. Explanation Food intake The energy gained from food depends on the rate of digestion, retention time and the digestible content of the food. As herbivores, food intake is achieved through three main steps: ingestion, digestion, and absorption. Plant-based food is hard to digest and is digested with the help of symbiotic microbes in the gut of the herbivore. When food is passed through the digestive system (including multiple stomach chambers), it breaks down further through symbiotic microbes at fermentation site(s).
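The allometric argument above can be made concrete in a short sketch. It uses Kleiber's approximation MR ≈ 70·W^0.75 kcal/day and assumes, for illustration, a gut capacity that scales exactly linearly with body mass (the constant k is arbitrary and cancels out of the comparison).

```python
# Sketch of the MR/GC scaling behind the Jarman-Bell principle.

def metabolic_rate(mass_kg):
    """Kleiber's law: MR ≈ 70 * W^0.75 kcal/day."""
    return 70 * mass_kg ** 0.75

def gut_capacity(mass_kg, k=1.0):
    """Gut capacity scales linearly with body mass (∝ BM^1.0);
    k is an arbitrary illustrative constant."""
    return k * mass_kg

def mr_per_unit_gut(mass_kg):
    """Metabolic requirement per unit of gut capacity, ∝ BM^-0.25."""
    return metabolic_rate(mass_kg) / gut_capacity(mass_kg)

# A 10 kg herbivore needs far more energy per unit of gut capacity
# than a 1000 kg one, so it must select higher-quality food.
small, large = mr_per_unit_gut(10), mr_per_unit_gut(1000)
print(small > large)  # True
```

Because the ratio scales as BM^−0.25, a 16-fold increase in body mass halves the metabolic demand placed on each unit of gut capacity, which is the quantitative core of the principle.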
There exist different types of stomach plans: Ruminants: four-chambered stomach animals with fermentation occurring in the rumen (first stomach). Pseudoruminants: ruminants but with a three-chambered stomach. Monogastric: one stomach, but fermentation can occur in multiple places depending on the animal; places include the foregut, colon, caecum and hindgut. In order, the stomach plans represent the general level of efficiency when digesting plant-based food; ruminants are better compared to pseudoruminants and monogastrics. The development of the rumen not only provides a site for fermentation but also slows the passage of food (increasing retention time). However, a body mass ranging from 600 to 1200 kg is enough to cause sufficient digestion regardless of stomach plan. Link to the Jarman–Bell principle The Jarman–Bell principle implies that the food quality a herbivore consumes is inversely proportional to the size of the herbivore, but the quantity of such food is proportional. The principle relies on the allometric (non-linear) scaling of size and energy requirement. The metabolic rate per unit of body mass of large animals is low enough that they can subsist on a consistent flow of low-quality food. However, in small animals the rate is higher, and they cannot draw sufficient energy from low-quality food to live on. The length of the digestive tract scales proportionally to the size of the animal. A longer digestive tract allows for more retention time and hence increases the efficiency of digestion and absorption. Larger body mass Poorer-quality food selects for larger body size, and hence an increased digestive efficiency compared to smaller animals. Larger animals have a larger/longer digestive tract, allowing greater quantities of low-quality food to be processed (longer retention time).
Although herbivores can consume high-quality food, the relative abundance of low-quality food and other ecological factors, such as resource competition and predator presence, influence the foraging behavior of the animal toward primarily consuming low-quality food. Other factors include the size of the mouth constraining the selective ability of foraging, and the absolute energy large animals require compared to small ones (though smaller animals require more energy per unit body mass). Smaller body mass Smaller animals have a limited digestive tract relative to larger animals. As such, they have a shorter retention time of food and cannot digest and absorb food to the same degree as larger animals. To counteract this disadvantage, high-quality food is selected, with quantity being limited by the animal's gut size. Another method to counteract this is to practice coprophagy, where re-ingestion of fecal matter recycles untapped/undigested nutrients. However, there are also reports of larger animals, including primates and horses (under dietary restrictions), practicing coprophagy. Through the extra flexibility of subsisting on low-quality food, the Jarman–Bell principle suggests an evolutionary advantage of larger animals and hence provides evidence for Cope's rule. Exceptions The Jarman–Bell principle has some notable exceptions. Small herbivorous members of the classes Mammalia, Aves and Reptilia have been observed to be inconsistent with the trend of small body mass being linked with high-quality food. This discrepancy could be due to ecological factors which apply pressure and encourage an adaptive approach to the given environment, rather than taking on an optimal form of digestive physiology. Small rodents subjected to a low-quality diet were observed to increase food intake and increase the size of their cecum and intestine, counteracting their low-quality diet by allowing viable consumption of such food and hence refuting the link between diet quality and body size.
Refuting the mechanism of the Jarman–Bell principle Although the pattern of low food quality and body size appears consistent across multiple species, the explanation behind the principle (that bigger size allows better digestion via more retention time) has been disputed. M. Clauss et al. argue that retention time is not proportional to body mass above 500 grams. That is, smaller species (above 500 grams but not too large) have been observed to rival larger species in their mean retention time. Retention time being proportional to food intake was only observed in non-ruminant animals, not ruminants. Clauss et al. suggest that this is due to the diverse adaptations that support the rumen, such that the digestive efficiency of ruminants remains consistent and independent of body size and food intake. Applications and examples In addition to providing evidence for ecological frameworks such as "resource partitioning, competition, habitat use and species packing in environments" and Cope's rule, the Jarman–Bell principle has been applied to model primate behaviours and to explain sexual segregation in ungulates. Sexual segregation in polygynous ungulates Sexual segregation in Soay sheep (Ovis aries) has been observed. Soay sheep are polygynous in nature; males have multiple partners (as opposed to polygynandry). Two main hypotheses have been proposed to explain the observed phenomenon. Sexual dimorphism-body size hypothesis Male Soay sheep are morphologically larger than females. Larger overall size implies larger gut size, and hence digestive efficiency. As males are larger, they can subsist on lower-quality food. This leads to resource partitioning of males and females and thus sexual segregation on an intraspecies level. Activity budget hypothesis The time taken to process food depends on the food quality; poorer/higher-fibre food requires more time to process and ruminate.
This extra time influences behaviour and, over a group of ungulates, leads to segregation by food quality. Since males are larger and can handle low quality food, their feeding and ruminating activity will differ from that of females. The digestive efficiency between both sexes of Soay sheep Pérez-Barbería F. J., et al. (2008) tested the proposed hypotheses by feeding Soay sheep grass hay and observing the digestive efficiency of both sexes via their faecal output. Given that the supplied food is the same, more faecal matter implies less digestion and thus lower digestive effectiveness. Male Soay sheep produced less faecal matter than females. Although this result is consistent with the Jarman–Bell principle in that it observes the relationship between size and food quality, it does not adequately test the proposed hypotheses. For hypothesis (1), the sheep were kept in an environment where the food abundance and quality were controlled. There was no need for resources to be partitioned and segregation to occur. For hypothesis (2), there are many external factors which may influence behavioural changes in males, enough to induce sexual segregation, that are not explored in Pérez-Barbería et al.'s experiment. In the experiment, the sheep were kept in a controlled environment with a controlled diet (monitoring for digestive efficiency only). Males consume more food than females, thereby having a greater allowance of energy to expend. Activities such as predator lookout, migration or simply standing all use energy, and since males have more energy, there could be enough leeway to induce sexual segregation. However, the cost–benefit ratio of segregating from a group remains equivocal and hard to test. Size induced sexual segregation threshold By observing effective food digestibility in Soay sheep, the Jarman–Bell principle seems to apply at an intraspecific level.
The threshold at which this occurs was tested at 30%, but other studies (Ruckstuhl and Neuhaus 2002) have shown the threshold to be closer to 20%. Modelling primate behavior Primates are very diverse in their dietary range and in their general morphological and physiological adaptations. The Jarman–Bell principle was used to help organise these variables. It predicts a negative trend between body size and food quality. This trend is supported by observed primate adaptations and how they help the animals survive in their environment. It can also be used to hypothesise the general diet of newly discovered or little-researched primates by taking into account the animal's body size. For example, information about pygmy chimpanzees was scarce around the 1980s; however, they were expected to have a fruit-based diet. Steven J. C. Gaulin examined 102 primate species (from various scientific literature) for links between size and diet, and hence the Jarman–Bell principle. Omnivorous primates seemed inconsistent with the trend, likely due to the diversity of their diet. Carnivorous diets The aye-aye is a large primate for its nearly exclusive insectivorous diet. This seems inconsistent with the Jarman–Bell principle. However, specialised adaptations, such as large ears and elongated fingers for locating larvae by tapping, allow the aye-aye to subsist on such a diet. This supports the idea that the Jarman–Bell principle is not universal, and that depending on the circumstances (in this case, specialised adaptations), the expected trend is not followed. Herbivorous diets Colobines, which feed heavily on low-quality food, display ruminant-like qualities such as digestion via symbiotic microbes in a separate forestomach. Weasel sportive lemurs are extremely small folivores. They practice coprophagy to maximise nutrient extraction. The Western gorilla is large and highly herbivorous; its diet contains 90% herbivorous food.
Omnivorous diets Omnivorous cercopithecoids such as baboons and the patas monkey display the second largest average body mass. Humans have such a diverse diet that they do not rely on one particular food group. Both of the above omnivores, and the majority of primate omnivores, live in open ranges, particularly ecotonal regions (where two biomes meet). In these environments, food abundance is comparatively lower than in forest biomes. The diet would shift to a mixture of low amounts of high-quality food and high amounts of low-quality food to maximise forage and energy. The universality of the Jarman–Bell principle Deviations from the expected trend question the universality of the principle. Steven J. C. Gaulin notes that, when the principle is applied to offer any type of explanation, it is subject to numerous other phenomena that occur at the same time. For example, the habitat range constrains the size of an organism; large primates are too heavy to live in tree tops. Or perhaps the use of adaptations or even tools was enough to allow viable consumption of food of a quality that would not otherwise be sufficient. Gigantism in dinosaurs Extinct dinosaurs, particularly the large sauropods, can be studied primarily through two methods. Method one involves fossil records: bones and dentition. Method two involves drawing ideas from extant animals and how their body mass is linked with their diet. Comparing digestion in extant, herbivorous reptiles and mammals and relating this to sauropod gigantism Reptiles generally have a shorter retention time than mammals. However, this loss of digestive efficiency is offset by their ability to process food into smaller particles for digestion. Smaller particles are easier to digest and ferment. As sauropods are reptiles, they would be expected to have a similar retention time to extant reptiles. However, the lack of particle reduction mechanisms (e.g. gastric mills, chewing teeth) challenges this expectation.
Marcus Clauss et al. hypothesised that sauropods had a greatly enlarged gut capacity to account for this. Retention time is inversely proportional to intake amount; therefore, an enlarged gut cavity allows increased intake, and thus a shorter retention time similar to that of other herbivorous reptiles. Nutrient constraints D. M. Wilkinson and G. D. Ruxton considered the available nutrients as a driving factor for sauropod gigantism. Sauropods appeared during the late Triassic period and became extinct at the end of the Cretaceous period. During this time, plant matter such as conifers, ginkgos, cycads, ferns and horsetails may have been the dietary choice of sauropods. These plants have a high carbon-to-nitrogen ratio. Large amounts of this plant matter would have to be consumed to meet the body's nitrogen requirement; hence, more carbon is consumed than required. Clauss, Hummel et al. (2005), cited in D. M. Wilkinson and G. D. Ruxton's paper, argue that larger size does not necessarily improve digestive efficiency. Rather, it allows nutrient prioritisation. For example, if there exists a diet with high carbon but low nitrogen content, then meeting the nitrogen dietary requirement entails consuming a large amount of carbon. Since gut volume scales linearly with body mass, larger animals have more capacity to digest food.
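The nutrient-prioritisation argument above is essentially arithmetic: if intake is set by a fixed nitrogen requirement, a high carbon-to-nitrogen forage forces a large carbon surplus. The sketch below illustrates this with invented numbers; the C:N ratios and the nitrogen requirement are hypothetical values chosen for illustration, not data from Wilkinson and Ruxton.

```python
# Hypothetical numbers illustrating the nutrient-prioritisation
# argument: when intake is driven by a fixed nitrogen requirement,
# forage with a high carbon-to-nitrogen ratio forces the animal to
# ingest far more carbon than it needs. All values are invented for
# illustration only.

def carbon_surplus(cn_ratio_food: float, n_required_g: float,
                   cn_ratio_needed: float) -> float:
    """Excess carbon (g) eaten when intake is set by the nitrogen need."""
    carbon_eaten = n_required_g * cn_ratio_food    # C ingested with the N
    carbon_needed = n_required_g * cn_ratio_needed # C actually required
    return carbon_eaten - carbon_needed

# Conifer-like forage (C:N ~ 100) versus a notional requirement (C:N ~ 20):
surplus = carbon_surplus(cn_ratio_food=100, n_required_g=50, cn_ratio_needed=20)
print(f"Excess carbon ingested: {surplus} g per 50 g of nitrogen")
```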
Jarman–Bell principle
https://en.wikipedia.org/wiki/Hyperbolic%20geometric%20graph
A hyperbolic geometric graph (HGG) or hyperbolic geometric network (HGN) is a special type of spatial network where (1) latent coordinates of nodes are sprinkled according to a probability density function into a hyperbolic space of constant negative curvature and (2) an edge between two nodes is present if they are close according to a function of the metric (typically either a Heaviside step function resulting in deterministic connections between vertices closer than a certain threshold distance, or a decaying function of hyperbolic distance yielding the connection probability). A HGG generalizes a random geometric graph (RGG) whose embedding space is Euclidean. Mathematical formulation Mathematically, a HGG is a graph G(V, E) with a vertex set V (cardinality N = |V|) and an edge set E constructed by considering the nodes as points placed onto a 2-dimensional hyperbolic space of constant negative Gaussian curvature K = -ζ², and cut-off radius R, i.e. the radius of the Poincaré disk which can be visualized using a hyperboloid model. Each point i has hyperbolic polar coordinates (r_i, θ_i) with 0 ≤ r_i ≤ R and 0 ≤ θ_i < 2π. The hyperbolic law of cosines allows one to measure the distance d_ij between two points i and j: cosh(ζ d_ij) = cosh(ζ r_i) cosh(ζ r_j) − sinh(ζ r_i) sinh(ζ r_j) cos Δθ_ij. The angle Δθ_ij is the (smallest) angle between the two position vectors. In the simplest case, an edge is established iff (if and only if) two nodes are within a certain neighborhood radius r, i.e. d_ij ≤ r; this corresponds to an influence threshold. Connectivity decay function In general, a link will be established with a probability depending on the distance d_ij. A connectivity decay function γ(d) represents the probability of assigning an edge to a pair of nodes at distance d. In this framework, the simple case of a hard-coded neighborhood like in random geometric graphs is referred to as a truncation decay function. Generating hyperbolic geometric graphs Krioukov et al. describe how to generate hyperbolic geometric graphs with uniformly random node distribution (as well as generalized versions) on a disk of radius R in the hyperbolic plane.
These graphs yield a power-law distribution for the node degrees. The angular coordinate of each point/node is chosen uniformly at random from [0, 2π), while the radial coordinate r is chosen according to the probability density ρ(r) = α sinh(αr) / (cosh(αR) − 1) on [0, R]. The growth parameter α controls the distribution: for α = 1 (with ζ = 1), the distribution is uniform in the hyperbolic plane; for smaller values the nodes are distributed more towards the center of the disk and for bigger values more towards the border. In this model, edges between nodes u and v exist iff d(u, v) ≤ R, or with probability γ(d(u, v)) if a more general connectivity decay function is used. The average degree is controlled by the radius R of the hyperbolic disk. It can be shown that for α ≥ 1/2 (with ζ = 1) the node degrees follow a power law distribution with exponent β = 2α + 1. The image depicts randomly generated graphs for different values of α and R in the hyperbolic plane. It can be seen how α has an effect on the distribution of the nodes and R on the connectivity of the graph. The native representation where the distance variables have their true hyperbolic values is used for the visualization of the graph; therefore edges are straight lines. Quadratic complexity generator The naive algorithm for the generation of hyperbolic geometric graphs distributes the nodes on the hyperbolic disk by sampling the angular and radial coordinates of each point randomly. For every pair of nodes an edge is then inserted with the probability given by the value of the connectivity decay function at their respective distance. The pseudocode looks as follows:

for i = 1 to N do
    θ_i ← U[0, 2π)
    r_i ← sampled according to ρ(r)
for every pair (i, j) do
    if U[0, 1) < γ(d(i, j)) then
        E ← E ∪ {(i, j)}
return G(V, E)

N is the number of nodes to generate; the distribution of the radial coordinate according to the probability density function ρ(r) is achieved by using inverse transform sampling. U[a, b) denotes the uniform sampling of a value in the given interval. Because the algorithm checks for edges for all pairs of nodes, the runtime is quadratic. For applications where N is big, this is not viable any more and algorithms with subquadratic runtime are needed.
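The naive generator above can be sketched as a short, runnable program. The sketch below follows the Krioukov et al. model with ζ = 1 and a hard connection threshold d(u, v) ≤ R; the radial coordinate is drawn by inverse transform sampling of ρ(r). Function names and the parameter values in the demo call are illustrative choices, not prescribed by the text.

```python
import math
import random

# Naive O(N^2) generator for a hyperbolic geometric graph with a hard
# threshold, following the Krioukov et al. model (zeta = 1): angles
# uniform in [0, 2*pi), radii with density ~ alpha*sinh(alpha*r) on
# [0, R] via inverse transform sampling, and an edge iff d(u, v) <= R.

def sample_radius(R: float, alpha: float) -> float:
    """Inverse transform sampling of rho(r) = alpha*sinh(alpha*r)/(cosh(alpha*R)-1)."""
    u = random.random()
    return math.acosh(1.0 + u * (math.cosh(alpha * R) - 1.0)) / alpha

def hyperbolic_distance(p, q) -> float:
    """Distance via the hyperbolic law of cosines (curvature -1)."""
    (r1, t1), (r2, t2) = p, q
    dtheta = math.pi - abs(math.pi - abs(t1 - t2))   # smallest angle
    cosh_d = (math.cosh(r1) * math.cosh(r2)
              - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
    return math.acosh(max(cosh_d, 1.0))              # guard against rounding

def generate_hgg(n: int, R: float, alpha: float = 1.0):
    points = [(sample_radius(R, alpha), random.uniform(0.0, 2.0 * math.pi))
              for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if hyperbolic_distance(points[i], points[j]) <= R]
    return points, edges

random.seed(42)
points, edges = generate_hgg(200, R=6.0)
print(f"{len(points)} nodes, {len(edges)} edges")
```

The double loop over pairs is exactly the quadratic bottleneck that the band-partitioning generators described next are designed to avoid.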
Sub-quadratic generation To avoid checking for edges between every pair of nodes, modern generators use additional data structures that partition the graph into bands. A visualization of this shows a hyperbolic graph with the boundaries of the bands drawn in orange. In this case, the partitioning is done along the radial axis. Points are stored sorted by their angular coordinate in their respective band. For each point p, the limits of its hyperbolic circle of radius R can be (over-)estimated and used to only perform the edge-check for points that lie in a band that intersects the circle. Additionally, the sorting within each band can be used to further reduce the number of points to look at by only considering points whose angular coordinate lies in a certain range around that of p (this range is also computed by over-estimating the hyperbolic circle around p). Using this and other extensions of the algorithm, time complexities that are subquadratic in the number of nodes n and the number of edges m are possible with high probability. Findings For ζ = 1 (Gaussian curvature K = −1), HGGs form an ensemble of networks for which it is possible to express the degree distribution analytically in closed form for the limiting case of a large number of nodes. This is worth mentioning since this is not true for many ensembles of graphs. Applications HGGs have been suggested as a promising model for social networks, where the hyperbolicity appears through a competition between the similarity and popularity of an individual.
Hyperbolic geometric graph
https://en.wikipedia.org/wiki/NGC%205506
NGC 5506 is a spiral galaxy located in the constellation Virgo. It is located at a distance of about 75 million light years from Earth, which, given its apparent dimensions, means that NGC 5506 is about 80,000 light years across. It was discovered by William Herschel on April 15, 1787. It is a Seyfert galaxy. Characteristics NGC 5506 is a spiral galaxy seen edge-on, with dust lanes visible south of the nucleus. Active nucleus The nucleus of NGC 5506 has been found to be active and it has been categorised as a narrow line type I Seyfert galaxy, the brightest such nucleus. The classification of the active nucleus had been an issue of debate, as it lacked broad emission lines at visual wavelengths. However, broader lines were observed in the infrared, indicating that the broad line region is obscured in visual light. The most accepted theory for the energy source of active galactic nuclei is the presence of an accretion disk around a supermassive black hole. The mass of the black hole at the centre of NGC 5506 has been estimated based on the stellar velocity dispersion, the MBH–σ⋆ relation, and X-ray variability. NGC 5506 is a bright X-ray source, detected by all X-ray space observatories, starting with Uhuru. The X-ray spectrum indicates that there is both a Compton-thick and a Compton-thin absorber. The Compton-thick absorber is a dust torus around the supermassive black hole at a distance of around one parsec, while the Compton-thin absorber absorbs the softer X-rays emitted by the nucleus. The soft emission by the nucleus extends to a distance of about 350 pc and is attributed to reflection of the nuclear emission by photoionized gas. The inclination of the accretion disk is estimated to be between 40° and 50°. The iron line is complex, indicating emission by neutral and ionised iron. A broad component of the Fe Kα fluorescent emission line was observed by XMM-Newton. The galaxy also emits radio waves.
The galaxy exhibits a central source that accounts for 75% of the total emission, diffuse wing-like emission towards the north-west and east of the nucleus, and a low-surface-brightness halo measuring 2.75 arcseconds in diameter that surrounds these features. The features have no clear axis of symmetry. The galaxy has also been found to host a megamaser. Nearby galaxies NGC 5506 is the foremost galaxy in a galaxy group known as the NGC 5506 Group. Other members of the group include NGC 5507, while IC 978 lies a bit farther away. Garcia also identified the galaxies NGC 5496 and UGC 9057 as members of the group. NGC 5506 forms a pair with NGC 5507, which lies 4 arcminutes from it. The group is part of the Virgo III Groups, a very obvious chain of galaxy groups on the left side of the Virgo cluster, stretching across 40 million light years of space.
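The distance and diameter quoted above can be cross-checked with the small-angle approximation: an object of physical size D at distance d subtends roughly D/d radians. The sketch below applies this to the figures in the text; it is a consistency check, not a value from the cited sources.

```python
import math

# Consistency check on the figures quoted above: a galaxy about
# 80,000 light years across at a distance of about 75 million light
# years subtends roughly theta = D / d radians (small-angle
# approximation), i.e. a few arcminutes on the sky.

def angular_size_arcmin(diameter_ly: float, distance_ly: float) -> float:
    theta_rad = diameter_ly / distance_ly
    return math.degrees(theta_rad) * 60.0

theta = angular_size_arcmin(80_000, 75_000_000)
print(f"Apparent size: {theta:.1f} arcmin")  # about 3.7 arcmin
```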
NGC 5506
https://en.wikipedia.org/wiki/Industrial%20radiography
Industrial radiography is a modality of non-destructive testing that uses ionizing radiation to inspect materials and components with the objective of locating and quantifying defects and degradation in material properties that would lead to the failure of engineering structures. It plays an important role in the science and technology needed to ensure product quality and reliability. In Australia, industrial radiographic non-destructive testing is colloquially referred to as "bombing" a component with a "bomb". Industrial radiography uses either X-rays, produced with X-ray generators, or gamma rays generated by the natural radioactivity of sealed radionuclide sources. Neutrons can also be used. After crossing the specimen, photons are captured by a detector, such as a silver halide film, a phosphor plate, a flat panel detector or a CdTe detector. The examination can be performed in static 2D (named radiography), in real time 2D (fluoroscopy), or in 3D after image reconstruction (computed tomography or CT). It is also possible to perform tomography nearly in real time (4-dimensional computed tomography or 4DCT). Particular techniques such as X-ray fluorescence (XRF), X-ray diffractometry (XRD), and several others complete the range of tools that can be used in industrial radiography. Inspection techniques can be portable or stationary. Industrial radiography is used in welding, casting parts or composite pieces inspection, in food inspection and luggage control, in sorting and recycling, in EOD and IED analysis, aircraft maintenance, ballistics, turbine inspection, in surface characterisation, coating thickness measurement, in counterfeit drug control, etc. History Radiography started in 1895 with the discovery of X-rays (later also called Röntgen rays after Wilhelm Röntgen, the man who first described their properties in detail), a type of electromagnetic radiation. Soon after the discovery of X-rays, radioactivity was discovered.
By using radioactive sources such as radium, far higher photon energies could be obtained than those from normal X-ray generators. Soon these found various applications, with one of the earliest users being Loughborough College. X-rays and gamma rays were put to use very early, before the dangers of ionizing radiation were discovered. After World War II new isotopes such as caesium-137, iridium-192 and cobalt-60 became available for industrial radiography, and the use of radium and radon decreased. Applications Inspection of products Gamma radiation sources, most commonly iridium-192 and cobalt-60, are used to inspect a variety of materials. The vast majority of radiography concerns the testing and grading of welds on piping, pressure vessels, high-capacity storage containers, pipelines, and some structural welds. Other tested materials include concrete (locating rebar or conduit), welder's test coupons, machined parts, plate metal, or pipewall (locating anomalies due to corrosion or mechanical damage). Non-metal components such as ceramics used in the aerospace industries are also regularly tested. Theoretically, industrial radiographers could radiograph any solid, flat material (walls, ceilings, floors, square or rectangular containers) or any hollow cylindrical or spherical object. Inspection of welding The beam of radiation must be directed to the middle of the section under examination and must be normal to the material surface at that point, except in special techniques where known defects are best revealed by a different alignment of the beam. The length of weld under examination for each exposure shall be such that the thickness of the material at the diagnostic extremities, measured in the direction of the incident beam, does not exceed the actual thickness at that point by more than 6%. 
The specimen to be inspected is placed between the source of radiation and the detecting device, usually the film in a light-tight holder or cassette, and the radiation is allowed to penetrate the part for the required length of time to be adequately recorded. The result is a two-dimensional projection of the part onto the film, producing a latent image of varying densities according to the amount of radiation reaching each area. It is known as a radiograph, as distinct from a photograph produced by light. Because film is cumulative in its response (the exposure increasing as it absorbs more radiation), relatively weak radiation can be detected by prolonging the exposure until the film can record an image that will be visible after development. The radiograph is examined as a negative, without printing as a positive as in photography. This is because, in printing, some of the detail is always lost and no useful purpose is served. Before commencing a radiographic examination, it is always advisable to examine the component with one's own eyes, to eliminate any possible external defects. If the surface of a weld is too irregular, it may be desirable to grind it to obtain a smooth finish, but this is likely to be limited to those cases in which the surface irregularities (which will be visible on the radiograph) may make detecting internal defects difficult. After this visual examination, the operator will have a clear idea of the possibilities of access to the two faces of the weld, which is important both for the setting up of the equipment and for the choice of the most appropriate technique. Defects such as delaminations and planar cracks are difficult to detect using radiography, particularly to the untrained eye.
Without overlooking the negatives of radiographic inspection, radiography holds many significant benefits over ultrasonics, particularly in that a 'picture' is produced, keeping a semi-permanent record for the life cycle of the film, so that a more accurate identification of the defect can be made, and by more interpreters. This matters because most construction standards permit some level of defect acceptance, depending on the type and size of the defect. To the trained radiographer, subtle variations in visible film density provide the ability not only to accurately locate a defect, but to identify its type, size and location; an interpretation that can be physically reviewed and confirmed by others, possibly eliminating the need for expensive and unnecessary repairs. For purposes of inspection, including weld inspection, there exist several exposure arrangements. First, there is the panoramic, one of the four single-wall exposure/single-wall view (SWE/SWV) arrangements. This exposure is created when the radiographer places the source of radiation at the center of a sphere, cone, or cylinder (including tanks, vessels, and piping). Depending upon client requirements, the radiographer would then place film cassettes on the outside of the surface to be examined. This exposure arrangement is nearly ideal – when properly arranged and exposed, all portions of all exposed film will be of the same approximate density. It also has the advantage of taking less time than other arrangements, since the source must only penetrate the total wall thickness (WT) once and must only travel the radius of the inspection item, not its full diameter. The major disadvantage of the panoramic arrangement is that it may be impractical to reach the center of the item (enclosed pipe) or the source may be too weak to perform in this arrangement (large vessels or tanks).
The second SWE/SWV arrangement is an interior placement of the source in an enclosed inspection item without having the source centered up. The source does not come in direct contact with the item, but is placed a distance away, depending on client requirements. The third is an exterior placement with similar characteristics. The fourth is reserved for flat objects, such as plate metal, and is also radiographed without the source coming in direct contact with the item. In each case, the radiographic film is located on the opposite side of the inspection item from the source. In all four cases, only one wall is exposed, and only one wall is viewed on the radiograph. Of the other exposure arrangements, only the contact shot has the source located on the inspection item. This type of radiograph exposes both walls, but only resolves the image on the wall nearest the film. This exposure arrangement takes more time than a panoramic, as the source must first penetrate the WT twice and travel the entire outside diameter of the pipe or vessel to reach the film on the opposite side. This is a double wall exposure/single wall view DWE/SWV arrangement. Another is the superimposure (wherein the source is placed on one side of the item, not in direct contact with it, with the film on the opposite side). This arrangement is usually reserved for very small diameter piping or parts. The last DWE/SWV exposure arrangement is the elliptical, in which the source is offset from the plane of the inspection item (usually a weld in pipe) and the elliptical image of the weld furthest from the source is cast onto the film. Airport security Both hold luggage and carry-on hand luggage are normally examined by X-ray machines using X-ray radiography. See airport security for more details. Non-intrusive cargo scanning Gamma radiography and high-energy X-ray radiography are currently used to scan intermodal freight cargo containers in US and other countries. 
Also research is being done on adapting other types of radiography, like dual-energy X-ray radiography or muon radiography, for scanning intermodal cargo containers. Art The American artist Kathleen Gilje has painted copies of Artemisia Gentileschi's Susanna and the Elders and Gustave Courbet's Woman with a Parrot. Before, she painted in lead white similar pictures with differences: Susanna fights the intrusion of the elders; there is a nude Courbet beyond the woman he paints. Then she painted over them, reproducing the originals. Gilje's paintings are exhibited with radiographs that show the underpaintings, simulating the study of pentimentos and providing a comment on the old masters' work. Sources Many types of ionizing radiation sources exist for use in industrial radiography. X-ray generators X-ray generators produce X-rays by applying a high voltage between the cathode and the anode of an X-ray tube and by heating the tube filament to start the electron emission. The electrons are then accelerated in the resulting electric potential and collide with the anode, which is usually made of tungsten. The X-rays that are emitted by this generator are directed towards the object to be inspected. They cross it and are absorbed according to the attenuation coefficient of the object's material. The attenuation coefficient is compiled from the cross sections of all the interactions happening in the material. The three most important inelastic interactions with X-rays at those energy levels are the photoelectric effect, Compton scattering and pair production. After having crossed the object, the photons are captured by a detector, such as a silver halide film, a phosphor plate or a flat panel detector. When an object is too thick or too dense, or its effective atomic number is too high, a linac can be used. Linacs work in a similar way to produce X-rays, by electron collisions on a metal anode; the difference is that they use a much more complex method to accelerate the electrons.
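The attenuation described above follows the Beer–Lambert law, I = I₀ exp(−μx), which is what makes a radiograph informative: a void or thinner section attenuates less, so more radiation reaches the detector behind it. The sketch below illustrates this with an attenuation coefficient that is purely illustrative, not a tabulated value for any specific material or photon energy.

```python
import math

# Exponential attenuation (Beer-Lambert law) underlying radiographic
# contrast: I = I0 * exp(-mu * x). A void (e.g. porosity) in a weld
# shortens the attenuating path, so more radiation reaches the film
# behind it. The coefficient below is illustrative only.

def transmitted_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """Fraction of incident intensity transmitted through the material."""
    return math.exp(-mu_per_cm * thickness_cm)

mu = 0.5                                          # illustrative, 1/cm
sound_metal = transmitted_fraction(mu, 3.0)       # full 3 cm path
through_void = transmitted_fraction(mu, 2.5)      # 0.5 cm void in path
print(f"Contrast ratio behind the void: {through_void / sound_metal:.3f}")
```

The ratio exp(μ · Δx) depends only on the void size and the coefficient, which is why even small internal defects can produce a visible density difference on the film.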
Sealed radioactive sources Radionuclides are often used in industrial radiography. They have the advantage that they do not need a supply of electricity to function, but it also means that they cannot be turned off. The two most common radionuclides used in industrial radiography are iridium-192 and cobalt-60, but others are used in general industry as well. Am-241: Backscatter gauges, smoke detectors, fill height and ash content detectors. Sr-90: Thickness gauging for thick materials up to 3 mm. Kr-85: Thickness gauging for thin materials like paper, plastics, etc. Cs-137: Density and fill height level switches. Ra-226: Ash content. Cf-252: Ash content. Ir-192: Industrial radiography. Se-75: Industrial radiography. Yb-169: Industrial radiography. Co-60: Density and fill height level switches, industrial radiography. These isotopes emit radiation in a discrete set of energies, depending on the decay mechanism happening in the atomic nucleus. Each energy will have a different intensity depending on the probability of the particular decay interaction. The most prominent energies are 1.33 and 1.17 MeV for cobalt-60, and 0.31, 0.47 and 0.60 MeV for iridium-192. From a radiation safety point of view, this makes them more difficult to handle and manage. They always need to be enclosed in a shielded container, and because they are still radioactive after their normal life cycle, their ownership often requires a license and they are usually tracked by a governmental body. If this is the case, their disposal must be done in accordance with national policies. The radionuclides used in industrial radiography are chosen for their high specific activity. This high activity means that only a small sample is required to obtain a good radiation flux. However, higher activity often means a higher dose in the case of an accidental exposure. Radiographic cameras A series of different designs have been developed for radiographic "cameras".
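The specific activity mentioned above explains why gram-scale sources suffice: it follows directly from A = λ·N_A/M with λ = ln(2)/t½. The sketch below computes it for Co-60, using its well-known half-life of about 5.27 years; the formula and constants are standard physics, not figures taken from this text.

```python
import math

# Specific activity of a radionuclide: A = lambda * N_A / M, with
# lambda = ln(2) / t_half. Shown for Co-60 (half-life ~5.27 years,
# molar mass ~60 g/mol). The result, on the order of 4e13 Bq/g
# (~1100 Ci/g), is why a source of only a few grams gives a usable
# radiation flux for radiography.

N_A = 6.022e23          # Avogadro's number, 1/mol
YEAR_S = 3.156e7        # seconds per year

def specific_activity_bq_per_g(half_life_years: float,
                               molar_mass_g: float) -> float:
    decay_const = math.log(2) / (half_life_years * YEAR_S)  # 1/s
    return decay_const * N_A / molar_mass_g                 # Bq/g

a = specific_activity_bq_per_g(5.27, 60.0)
print(f"Co-60: {a:.2e} Bq/g ({a / 3.7e10:.0f} Ci/g)")
```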
Rather than the "camera" being a device that accepts photons to record a picture, the "camera" in industrial radiography is the radioactive photon source. Most industries are moving from film-based radiography to digital-sensor-based radiography, in much the same way that traditional photography has made this move. Since the amount of radiation emerging from the opposite side of the material can be detected and measured, variations in this amount (or intensity) of radiation are used to determine the thickness or composition of the material. Shutter design One design uses a moving shutter to expose the source. The radioactive source is placed inside a shielded box; a hinge allows part of the shielding to be opened, exposing the source and allowing photons to exit the radiography camera. Another design for a shutter is where the source is placed in a metal wheel, which can turn inside the camera to move between the expose and storage positions. Shutter-based devices require the entire device, including the heavy shielding, to be located at the exposure site. This can be difficult or impossible, so they have largely been replaced by cable-driven projectors. Projector design Modern projector designs use a cable drive mechanism to move the source along a hollow guide tube to the exposure location. The source is stored in a block of shielding that has an S-shaped tube-like hole through the block. In the safe position the source is in the center of the block. The source is attached to a flexible metal cable called a pigtail. To use the source, a guide tube is attached to one side of the device while a drive cable is attached to the pigtail. Using a hand-operated control, the source is then pushed out of the shield and along the source guide tube to the tip of the tube to expose the film, then cranked back into its fully shielded position. Neutrons In some rare cases, radiography is done with neutrons.
This type of radiography is called neutron radiography (NR, Nray, N-ray) or neutron imaging. Neutron radiography provides different images than X-rays, because neutrons can pass with ease through lead and steel but are stopped by plastics, water and oils. Neutron sources include radioactive (241Am/Be and Cf) sources, electrically driven D-T reactions in vacuum tubes and conventional critical nuclear reactors. It might be possible to use a neutron amplifier to increase the neutron flux. Safety Radiation safety is a very important part of industrial radiography. The International Atomic Energy Agency has published a report describing the best practices to lower the radiation dose workers are exposed to. It also provides a list of national competent authorities responsible for approvals and authorizations regarding the handling of radioactive material. Shielding Shielding can be used to protect the user from the harmful effects of ionizing radiation. The type of material used for shielding depends on the type of radiation being used. National radiation safety authorities usually regulate the design, commissioning, maintenance and inspection of industrial radiography installations. In the industry Industrial radiographers are in many locations required by governing authorities to use certain types of safety equipment and to work in pairs. Depending on location, industrial radiographers may be required to obtain permits, licenses and/or undertake special training. Prior to conducting any testing, the nearby area should always first be cleared of all other persons, and measures should be taken to ensure that workers do not accidentally enter an area that may expose them to dangerous levels of radiation. The safety equipment usually includes four basic items: a radiation survey meter (such as a Geiger/Mueller counter), an alarming dosimeter or rate meter, a gas-charged dosimeter, and a film badge or thermoluminescent dosimeter (TLD).
The easiest way to remember what each of these items does is to compare them to gauges on an automobile. The survey meter could be compared to the speedometer, as it measures the speed, or rate, at which radiation is being picked up. When properly calibrated, used, and maintained, it allows the radiographer to see the current exposure to radiation at the meter. It can usually be set for different intensities, and is used to prevent the radiographer from being overexposed to the radioactive source, as well as for verifying the boundary that radiographers are required to maintain around the exposed source during radiographic operations. The alarming dosimeter could be most closely compared with the tachometer, as it alarms when the radiographer "redlines" or is exposed to too much radiation. When properly calibrated, activated, and worn on the radiographer's person, it will emit an alarm when the meter measures a radiation level in excess of a preset threshold. This device is intended to prevent the radiographer from inadvertently walking up on an exposed source. The gas-charged dosimeter is like a trip meter in that it measures the total radiation received but can be reset. It is designed to help the radiographer measure his or her total periodic dose of radiation. When properly calibrated, recharged, and worn on the radiographer's person, it can tell the radiographer at a glance how much radiation it has been exposed to since it was last recharged. Radiographers in many states are required to log their radiation exposures and generate an exposure report. In many countries personal dosimeters are not required to be used by radiographers, as the dose rates they show are not always correctly recorded. The film badge or TLD is more like a car's odometer. It is actually a specialized piece of radiographic film in a rugged container. 
It is meant to measure the radiographer's total exposure over time (usually a month) and is used by regulating authorities to monitor the total exposure of certified radiographers in a certain jurisdiction. At the end of the month, the film badge is turned in and is processed. A report of the radiographer's total dose is generated and is kept on file. When these safety devices are properly calibrated, maintained, and used, it is virtually impossible for a radiographer to be injured by a radioactive overexposure. The elimination of just one of these devices can jeopardize the safety of the radiographer and all those who are nearby. Without the survey meter, the radiation received may be just below the threshold of the rate alarm, and it may be several hours before the radiographer checks the dosimeter, and up to a month or more before the film badge is developed to detect a low-intensity overexposure. Without the rate alarm, one radiographer may inadvertently walk up on the source exposed by the other radiographer. Without the dosimeter, the radiographer may be unaware of an overexposure, or even a radiation burn, which may take weeks to result in noticeable injury. And without the film badge, the radiographer is deprived of an important tool designed to protect him or her from the effects of a long-term overexposure to occupationally obtained radiation, and thus may suffer long-term health problems as a result. There are three ways a radiographer will ensure they are not exposed to higher than required levels of radiation: time, distance, and shielding. The less time a person is exposed to radiation, the lower their dose will be. The further a person is from a radioactive source, the lower the level of radiation they receive; this is largely due to the inverse square law. Lastly, the more a radioactive source is shielded, by either better or greater amounts of shielding, the lower the level of radiation that will escape from the testing area. 
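Time, distance, and shielding can be combined in a rough back-of-the-envelope dose estimate. The sketch below assumes a point source (so the inverse square law holds exactly), a known dose rate at one metre, and an illustrative half-value layer; all numbers are hypothetical, not properties of any specific source or material:

```python
def estimated_dose(rate_at_1m_mSv_h, hours, distance_m, shield_mm=0.0, hvl_mm=12.5):
    """Estimate dose from the three protection factors: the base rate is
    scaled by the inverse square law (distance), halved once per
    half-value layer of shielding (shielding), and integrated over the
    exposure duration (time)."""
    rate = rate_at_1m_mSv_h / distance_m ** 2   # inverse square law
    rate *= 0.5 ** (shield_mm / hvl_mm)         # shielding attenuation
    return rate * hours                         # accumulate over time

# Doubling the distance cuts the dose by a factor of four; one
# half-value layer of shielding halves it.
d_base    = estimated_dose(2.0, 1.0, 1.0)                  # 2.0 mSv
d_far     = estimated_dose(2.0, 1.0, 2.0)                  # 0.5 mSv
d_shielded = estimated_dose(2.0, 1.0, 1.0, shield_mm=12.5)  # 1.0 mSv
print(d_base, d_far, d_shielded)
```

Real boundary calculations account for scatter, buildup, and the source's actual gamma spectrum, so this kind of estimate is only a starting point, never a substitute for a survey meter reading.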
The most commonly used shielding materials are sand, lead (sheets or shot), steel, spent (non-radioactive) uranium, tungsten and, in suitable situations, water. Industrial radiography appears to have one of the worst safety profiles of the radiation professions, possibly because there are many operators using strong gamma sources (> 2 Ci) in remote sites with little supervision, compared with workers within the nuclear industry or within hospitals. Due to the levels of radiation present while they are working, many radiographers are also required to work late at night, when there are few other people present, as most industrial radiography is carried out 'in the open' rather than in purpose-built exposure booths or rooms. Fatigue, carelessness and lack of proper training are the three most common factors attributed to industrial radiography accidents. Many of the "lost source" accidents commented on by the International Atomic Energy Agency involve radiography equipment. Lost source accidents have the potential to cause a considerable loss of human life. One scenario is that a passerby finds the radiography source and, not knowing what it is, takes it home. The person shortly afterwards becomes ill and dies as a result of the radiation dose. The source remains in their home, where it continues to irradiate other members of the household. Such an event occurred in March 1984 in Casablanca, Morocco. This is similar to the more famous Goiânia accident, where a comparable chain of events caused members of the public to be exposed to radiation sources. 
List of standards International Organization for Standardization (ISO) ISO 4993, Steel and iron castings – Radiographic inspection ISO 5579, Non-destructive testing – Radiographic examination of metallic materials by X- and gamma-rays – Basic rules ISO 10675-1, Non-destructive testing of welds – Acceptance levels for radiographic testing – Part 1: Steel, nickel, titanium and their alloys ISO 11699-1, Non-destructive testing – Industrial radiographic films – Part 1: Classification of film systems for industrial radiography ISO 11699-2, Non-destructive testing – Industrial radiographic films – Part 2: Control of film processing by means of reference values ISO 14096-1, Non-destructive testing – Qualification of radiographic film digitisation systems – Part 1: Definitions, quantitative measurements of image quality parameters, standard reference film and qualitative control ISO 14096-2, Non-destructive testing – Qualification of radiographic film digitisation systems – Part 2: Minimum requirements ISO 17636-1: Non-destructive testing of welds. Radiographic testing. X- and gamma-ray techniques with film ISO 17636-2: Non-destructive testing of welds. Radiographic testing. 
X- and gamma-ray techniques with digital detectors ISO 19232, Non-destructive testing – Image quality of radiographs European Committee for Standardization (CEN) EN 444, Non-destructive testing; general principles for the radiographic examination of metallic materials using X-rays and gamma-rays EN 462-1: Non-destructive testing – image quality of radiographs – Part 1: Image quality indicators (wire type) – determination of image quality value EN 462-2, Non-destructive testing – image quality of radiographs – Part 2: image quality indicators (step/hole type) determination of image quality value EN 462-3, Non-destructive testing – Image quality of radiogrammes – Part 3: Image quality classes for ferrous metals EN 462-4, Non-destructive testing – Image quality of radiographs – Part 4: Experimental evaluation of image quality values and image quality tables EN 462-5, Non-destructive testing – Image quality of radiographs – Part 5: Image quality of indicators (duplex wire type), determination of image unsharpness value EN 584-1, Non-destructive testing – Industrial radiographic film – Part 1: Classification of film systems for industrial radiography EN 584-2, Non-destructive testing – Industrial radiographic film – Part 2: Control of film processing by means of reference values EN 1330-3, Non-destructive testing – Terminology – Part 3: Terms used in industrial radiographic testing EN 2002–21, Aerospace series – Metallic materials; test methods – Part 21: Radiographic testing of castings EN 10246-10, Non-destructive testing of steel tubes – Part 10: Radiographic testing of the weld seam of automatic fusion arc welded steel tubes for the detection of imperfections EN 12517-1, Non-destructive testing of welds – Part 1: Evaluation of welded joints in steel, nickel, titanium and their alloys by radiography – Acceptance levels EN 12517-2, Non-destructive testing of welds – Part 2: Evaluation of welded joints in aluminium and its alloys by radiography – Acceptance levels EN 
12679, Non-destructive testing – Determination of the size of industrial radiographic sources – Radiographic method EN 12681, Founding – Radiographic examination EN 13068, Non-destructive testing – Radioscopic testing EN 14096, Non-destructive testing – Qualification of radiographic film digitisation systems EN 14784-1, Non-destructive testing – Industrial computed radiography with storage phosphor imaging plates – Part 1: Classification of systems EN 14784-2, Non-destructive testing – Industrial computed radiography with storage phosphor imaging plates – Part 2: General principles for testing of metallic materials using X-rays and gamma rays ASTM International (ASTM) ASTM E 94, Standard Guide for Radiographic Examination ASTM E 155, Standard Reference Radiographs for Inspection of Aluminum and Magnesium Castings ASTM E 592, Standard Guide to Obtainable ASTM Equivalent Penetrameter Sensitivity for Radiography of Steel Plates 1/4 to 2 in. [6 to 51 mm] Thick with X Rays and 1 to 6 in. [25 to 152 mm] Thick with Cobalt-60 ASTM E 747, Standard Practice for Design, Manufacture and Material Grouping Classification of Wire Image Quality Indicators (IQI) Used for Radiology ASTM E 801, Standard Practice for Controlling Quality of Radiological Examination of Electronic Devices ASTM E 1030, Standard Test Method for Radiographic Examination of Metallic Castings ASTM E 1032, Standard Test Method for Radiographic Examination of Weldments ASTM E 1161, Standard Practice for Radiologic Examination of Semiconductors and Electronic Components ASTM E 1648, Standard Reference Radiographs for Examination of Aluminum Fusion Welds ASTM E 1735, Standard Test Method for Determining Relative Image Quality of Industrial Radiographic Film Exposed to X-Radiation from 4 to 25 MeV ASTM E 1815, Standard Test Method for Classification of Film Systems for Industrial Radiography ASTM E 1817, Standard Practice for Controlling Quality of Radiological Examination by Using Representative Quality Indicators 
(RQIs) ASTM E 2104, Standard Practice for Radiographic Examination of Advanced Aero and Turbine Materials and Components American Society of Mechanical Engineers (ASME) BPVC Section V, Nondestructive Examination: Article 2 Radiographic Examination American Petroleum Institute (API) API 1104, Welding of Pipelines and Related Facilities: 11.1 Radiographic Test Methods See also Collimator Industrial computed tomography Medical radiography Notes References External links NIST's XAAMDI: X-Ray Attenuation and Absorption for Materials of Dosimetric Interest Database NIST's XCOM: Photon Cross Sections Database NIST's FAST: Attenuation and Scattering Tables List of incidents UN information on the security of industrial sources Nondestructive testing Radiography Casting (manufacturing) Welding
Industrial radiography
Materials_science,Engineering
5,825
2,031,942
https://en.wikipedia.org/wiki/Vinyl%20acetate
Vinyl acetate is an organic compound with the formula CH3CO2CH=CH2. This colorless liquid is the precursor to polyvinyl acetate, ethylene-vinyl acetate copolymers, polyvinyl alcohol, and other important industrial polymers. Production The worldwide production capacity of vinyl acetate was estimated at 6,969,000 tonnes/year in 2007, with most capacity concentrated in the United States (1,585,000, all in Texas), China (1,261,000), Japan (725,000) and Taiwan (650,000). The average list price for 2008 was US$1600/tonne. Celanese is the largest producer (ca. 25% of the worldwide capacity), while other significant producers include China Petrochemical Corporation (7%), Chang Chun Group (6%), and LyondellBasell (5%). It is a key ingredient in furniture glue. Preparation Vinyl acetate is the acetate ester of vinyl alcohol. Since vinyl alcohol is highly unstable (with respect to acetaldehyde), the preparation of vinyl acetate is more complex than the synthesis of other acetate esters. The major industrial route involves the reaction of ethylene and acetic acid with oxygen in the presence of a palladium catalyst. This method has replaced the addition of acetic acid to acetylene. The main side reaction is the combustion of organic precursors. Mechanism Isotope labeling and kinetics experiments suggest that the mechanism involves PdCH2CH2OAc-containing intermediates. Beta-hydride elimination would generate vinyl acetate and a palladium hydride, which would be oxidized to give hydroxide. Alternative routes Vinyl acetate was once mainly prepared by hydroesterification, i.e., the addition of acetic acid to acetylene in the presence of metal catalysts. Using mercury(II) catalysts, vinyl acetate was first prepared by Fritz Klatte in 1912. Presently, zinc acetate is used as the catalyst: Approximately 1/3 of the world's production relies on this route, which, because it is environmentally messy, is mainly practiced in countries with relaxed environmental regulations, such as China. 
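As a quick consistency check, the formula CH3CO2CH=CH2 given above corresponds to C4H6O2, i.e. a molar mass of about 86.09 g/mol, which can be verified from standard atomic weights:

```python
# Standard atomic weights in g/mol (IUPAC conventional values)
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(composition):
    """Sum atomic weights over an element -> count mapping."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())

# Vinyl acetate, CH3CO2CH=CH2, i.e. C4H6O2
mw = molar_mass({"C": 4, "H": 6, "O": 2})
print(round(mw, 2))  # ≈ 86.09
```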
Another route to vinyl acetate involves thermal decomposition of ethylidene diacetate: Polymerization It can be polymerized to give polyvinyl acetate (PVAc). With other monomers it can be used to prepare various copolymers such as ethylene-vinyl acetate (EVA), vinyl acetate-acrylic acid (VA/AA), polyvinyl chloride acetate (PVCA), and polyvinylpyrrolidone (Vp/Va copolymer, used in hair gels). Due to the instability of the radical, attempts to control the polymerization by most "living/controlled" radical processes have proved problematic. However, RAFT (or more specifically, MADIX) polymerization offers a convenient method of controlling the synthesis of PVAc by the addition of a xanthate or a dithiocarbamate chain transfer agent. Other reactions Vinyl acetate is useful in organic synthesis. Transacetylation is used to obtain enantioenriched alcohols and esters. Iridium-catalyzed transacetylation has also been demonstrated: ROH + CH2=CHOAc → ROCH=CH2 + HOAc Transvinylation is also possible using vinyl acetate. It undergoes Diels-Alder reactions with dienes. Vinyl acetate undergoes many of the reactions anticipated for an alkene and an ester. Bromine adds to give the dibromide. Hydrogen halides add to give 1-haloethyl acetates, which cannot be generated by other methods because of the non-availability of the corresponding halo-alcohols. Acetic acid adds in the presence of palladium catalysts to give ethylidene diacetate, CH3CH(OAc)2. It undergoes transesterification with a variety of carboxylic acids. The alkene also undergoes Diels–Alder and 2+2 cycloadditions. Toxicity evaluation Tests suggest that vinyl acetate has low toxicity. The oral LD50 for rats is 2920 mg/kg. On January 31, 2009, the Government of Canada's final assessment concluded that exposure to vinyl acetate is not harmful to human health. 
This decision under the Canadian Environmental Protection Act (CEPA) was based on new information received during the public comment period, as well as more recent information from the risk assessment conducted by the European Union. In the context of large-scale release into the environment, it is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act, under which it "does not meet toxicity criteria[,] but because of its acute lethality, high production volume [or] known risk is considered a chemical of concern". By this law, it is subject to strict reporting requirements by facilities that produce, store, or use it in quantities greater than 1000 pounds. See also Polyvinyl alcohol Vinyl propionate References External links EPA health assessment information on vinyl acetate CDC - NIOSH Pocket Guide to Chemical Hazards Summary of risk assessment by the Government of Canada Hazardous air pollutants Monomers IARC Group 2B carcinogens Acetate esters Commodity chemicals Vinyl esters
Vinyl acetate
Chemistry,Materials_science
1,135
17,728,375
https://en.wikipedia.org/wiki/Image%20conversion
A large number of image file formats are available for storing graphical data, and, consequently, there are a number of issues associated with converting from one image format to another, most notably loss of image detail. Software compatibility Many image formats are native to one specific graphics application and are not offered as an export option in other software, due to proprietary considerations. An example of this is Adobe Photoshop's native PSD format (Photoshop Document), which cannot be opened in less sophisticated programs for image viewing or editing, such as Microsoft Paint. Most image editing software is capable of importing and exporting in a variety of formats, though, and a number of dedicated image converters exist. Loss due to compression Besides uncompressed formats and lossless compression formats that can usually be interconverted without any loss of detail, there are compressed formats such as JPEG, which lose detail each time the image is compressed. While a conversion from a compressed to an uncompressed format is in general without loss, this is not true the other way around. Even a compressed-uncompressed-compressed round trip without any image manipulation may incur some loss of detail. Loss due to format change Like any resampling operation, changing image size and bit depth is lossy in all cases of downsampling, such as 30-bit to 24-bit or 24-bit to 8-bit palette-based images. While increasing bit depth is usually lossless, increasing image size can introduce aliasing or other undesired artifacts. RAW images More expensive digital cameras usually offer the option to shoot in Raw image format. RAW is not a standardized format; in fact, RAW formats differ even between camera models from the same vendor. Data in a RAW file is structured according to the Bayer filter's pattern in cameras that use a single image sensor. Debayering, the process of obtaining bitmap data from a RAW image, is always a lossy operation. 
In addition, some downsampling is always performed, again reducing image information. See also Comparison of graphics file formats Image organizer Image viewer Graphics standards File conversion software
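The lossy round trip described under "Loss due to compression" above can be illustrated with a toy scheme that, like JPEG, discards precision on every encode. This is a deliberately simplified quantizer for illustration, not an actual JPEG codec:

```python
def toy_encode(pixels, step=16):
    """Quantize 8-bit pixel values: the lossy step that discards detail."""
    return [p // step for p in pixels]

def toy_decode(codes, step=16):
    """Reconstruct approximate pixel values from the quantized codes."""
    return [c * step + step // 2 for c in codes]

original = [12, 13, 200, 201, 255]
round_trip = toy_decode(toy_encode(original))
print(round_trip)              # [8, 8, 200, 200, 248] -- detail lost
assert round_trip != original  # the conversion is not reversible
```

Note that encoding the reconstructed values again produces the same codes, which mirrors why repeated saves of an unmodified JPEG eventually stop degrading: once detail is gone, there is nothing further to discard.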
Image conversion
Technology
432
54,388,605
https://en.wikipedia.org/wiki/Telluropyrylium
Telluropyrylium is an aromatic heterocyclic compound consisting of a six-membered ring with five carbon atoms and a positively charged tellurium atom. Derivatives of telluropyrylium are important in research on infrared dyes. Naming and numbering Formerly it was named tellurapyrylium. However this is misleading, as "tellura" indicates that tellurium substitutes for a carbon atom, but actually tellurium is substituted for the oxygen atom in pyrylium. In the Hantzsch-Widman system it is called tellurinium. This is the name used by Chemical Abstracts. Replacement nomenclature would call this telluroniabenzene. Numbering in telluropyrylium starts with 1 on the tellurium atom and counts up to 6 counter-clockwise on the carbon atoms. The positions adjacent to the chalcogen, numbered 2 and 6, can also be called α; the next two positions, 3 and 5, can be termed β; and the opposite carbon at position 4 can be called γ. Occurrence Because telluropyrylium is a positively charged cation, it takes the solid form as a salt with non-nucleophilic anions like perchlorate, tetrafluoroborate, or hexafluorophosphate. Properties The positive charge is not confined to the tellurium atom in telluropyrylium, but is distributed over the ring in several resonance structures, so that the α and γ positions carry some positive charge. A nucleophilic attack targets these carbon atoms. The shape of the telluropyrylium molecule is not a perfect hexagon, as the bond lengths to the tellurium atom are about 2.068 Å, compared to about 1.4 Å for the carbon-carbon bonds. The angle at the tellurium atom is also reduced, to about 94°; angles at the α and γ carbon atoms in the ring are about 122°, and at the β positions about 129°. The whole ring is bent so that it forms a boat shape with an angle of 8.7° on the Te-γ axis. (This was measured in the crystal structure of tetraphenyl telluropyrylium-pyrylium monomethine fluoroborate.) 
Related When the ring of telluropyrylium is fused with other aromatic rings, larger aromatic structures such as tellurochromenylium, telluroflavylium, and telluroxanthylium result. See also 6-membered aromatic rings with one carbon replaced by another group: borabenzene, silabenzene, germabenzene, stannabenzene, pyridine, phosphorine, arsabenzene, stibabenzene, bismabenzene, pyrylium, thiopyrylium, selenopyrylium References Extra reading Tellurium heterocycles Heterocyclic compounds with 1 ring Cations Six-membered rings
Telluropyrylium
Physics,Chemistry
624
744,914
https://en.wikipedia.org/wiki/C17H21NO4
{{DISPLAYTITLE:C17H21NO4}} The formula C17H21NO4 may refer to: Cocaine Cocaine reverse ester Fenoterol Hydromorphinol Hyoscine (scopolamine) Oxymorphol Molecular formulas
C17H21NO4
Physics,Chemistry
58
49,325,223
https://en.wikipedia.org/wiki/Art4d
art4d is a magazine for Architecture, Design, Arts (11 annual issues, bilingual Thai-English). It was founded in 1995 and has been based in Bangkok, Thailand, since then. It is published by Corporation 4d, a Bangkok-based publisher specializing in architecture and design in Thailand. The magazine is devoted to architecture, interior design, product design, graphic design and the arts. It focuses mainly on movements of design, artistic and creative professionals and participants in Thailand and Asia, especially Southeast Asia. Since its early years, the editors of the magazine have regularly organized, in collaboration with private and public organizations, a series of cultural events to promote the design and artistic community in Thailand as well as to connect the local community with the international scene. For example: Tomorrow Where Shall We Live? - a series of events held in Bangkok in 1996 with a lecture given by Toyo Ito and an architectural design workshop also guided by him; and Bangkok on the Move: Cities on the Move 6 - a series of multi-media exhibitions designed to stimulate discussion of social changes and universal issues in visual and architectural cultures, with curatorial concept by Hans Ulrich Obrist and Hou Hanru and scenography and co-curation by Ole Scheeren, held in Bangkok, Thailand, from 9 to 30 October 1999. Since 2004, the magazine has organized Designers' Saturday - an annual lecture series about design, interior design and architecture held in Bangkok, inviting international architects and designers who are well known for their particular experimental approach. 
In 2007, it launched the first edition of Bangkok Design Festival - a yearly event about architecture, design, fashion and the arts - together with Degree Show - an annual exhibition of selected final-year student works with an open call for projects in seven design fields - architecture, interior, product, graphic, animation and motion graphic, fashion, and jewelry - from all Thai universities. All three events are still held in Bangkok every year. See also List of architecture magazines References External links 1995 establishments in Thailand Architecture magazines Bilingual magazines Design magazines Magazines established in 1995 Magazines published in Thailand Mass media in Bangkok
Art4d
Engineering
426