id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
5,478,813 | https://en.wikipedia.org/wiki/T-stage | T-stage is a British term for a compressor used in a particular concept for a variable cycle combat engine. The T-stage is part of the HP rotor in this concept.
A US concept for a variable cycle combat engine also uses a similar compressor arrangement as part of the HP rotor. It is called a core driven fan stage (CDFS) by General Electric Aviation, whose Variable Cycle Engine (VCE) incorporating one ran in 1981.
Alternative concepts, including an LP-driven stage, are shown in the US patent "Variable Cycle Gas Turbine Engines", filed in 1975.
References
Jet engines | T-stage | [
"Technology"
] | 117 | [
"Jet engines",
"Engines"
] |
5,479,047 | https://en.wikipedia.org/wiki/Aortic%20orifice | The aortic orifice (aortic opening) is a circular opening, in front and to the right of the left atrioventricular orifice, from which it is separated by the anterior cusp of the bicuspid valve.
It is guarded by the aortic semilunar valve.
The portion of the ventricle immediately below the aortic orifice is termed the aortic vestibule, and has fibrous instead of muscular walls.
References
External links
Circulatory system | Aortic orifice | [
"Biology"
] | 109 | [
"Organ systems",
"Circulatory system"
] |
5,479,075 | https://en.wikipedia.org/wiki/Rapid%20modes%20of%20evolution | Rapid modes of evolution have been proposed by several notable biologists after Charles Darwin proposed his theory of evolutionary descent by natural selection. In his book On the Origin of Species (1859), Darwin stressed the gradual nature of descent, writing:
It may be said that natural selection is daily and hourly scrutinizing, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, whenever and wherever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life. We see nothing of these slow changes in progress, until the hand of time has marked the long lapses of ages, and then so imperfect is our view into long past geological ages, that we only see that the forms of life are now different from what they formerly were. (1859)
Evolutionary developmental biology
Work in developmental biology has identified dynamical and physical mechanisms of tissue morphogenesis that may underlie such abrupt morphological transitions. Consequently, consideration of mechanisms of phylogenetic change that are actually (not just apparently) non-gradual is increasingly common in the field of evolutionary developmental biology, particularly in studies of the origin of morphological novelty. A description of such mechanisms can be found in the multi-authored volume Origination of Organismal Form.
See also
Evolution
Evolutionary developmental biology
Otto Schindewolf
Punctuated equilibrium
Quantum evolution
Richard Goldschmidt
Saltationism
Industrial melanism
Peppered moth evolution
Bibliography
Darwin, C. (1859) On the Origin of Species London: Murray.
Goldschmidt, R. (1940) The Material Basis of Evolution. New Haven, Conn.: Yale University Press.
Gould, S. J. (1977) "The Return of Hopeful Monsters" Natural History 86 (June/July): 22-30.
Gould, S. J. (2002) The Structure of Evolutionary Theory. Cambridge MA: Harvard Univ. Press.
Müller, G. B. and Newman, S. A., eds. (2003) Origination of Organismal Form: Beyond the Gene in Developmental and Evolutionary Biology. Cambridge: The MIT Press.
Newman, S. A. and Bhat, R. (2009) "Dynamical patterning modules: a 'pattern language' for development and evolution of multicellular form". Int. J. Dev. Biol. 53: 693-705.
Schindewolf, O. H. (1963) "Neokatastrophismus?" Zeits. Deutsch. Geol. Ges. 114: 430-435.
Evolutionary biology | Rapid modes of evolution | [
"Biology"
] | 532 | [
"Evolutionary biology"
] |
5,479,275 | https://en.wikipedia.org/wiki/Zero-stage | Jet engines and other gas turbine engines are often uprated by adding a zero-stage, sometimes written '0' stage, to the front of a compressor.
At a given core size, adding a stage to the front of the compressor not only increases the cycle overall pressure ratio, but increases the core mass flow. A further uprating may be done by adding another stage in front of the previously-added zero stage, in which case the new one may be known as a zero-zero stage.
Zero-staging is also combined with other modifications to provide increased thrust or lower turbine temperature. It may be required for an existing aircraft weight increase, or for a new application, as shown by the following examples.
Examples
A comparison with other ways of uprating an existing engine, without drastically redesigning it, shows that in a particular case, the Rolls-Royce/SNECMA M45H, the thrust could have been increased by 25% with a zero-staged LP compressor, but only by 10% with either an improved HP turbine or with water injection.
A 15-stage Rolls-Royce Avon powered the Lightning F.1. A zero-stage, together with a new turbine, was added (total 16 stages) for the Caravelle III. A zero-zero stage was added (total 17 stages) for the Caravelle VI.
The 7-stage Snecma Atar D was used in the Mystere II. A zero-stage was added (total 8 stages) for the E and G used in the Vautour and Super Mystere B.2. A zero-zero stage (total 9 stages), together with a 2-stage turbine was added for the Atar 8 and 9 used in the Mirage III.
The Rolls-Royce/Snecma Olympus 593 started with a 6-stage LP compressor. As the Concorde increased in weight during the design phase the take-off thrust requirement increased. The engine was given a zero-stage to the compressor, a redesigned turbine and partial reheat.
Examples of zero-staging for land-based gas turbines are the aeroderivative GE LM2500+ and the heavy-duty GE MS5002B. An alternative to zero-staging used by some OEMs is supercharging the compressor with a fan driven by an electric motor.
Improved overall pressure ratio
Zero-staging is demonstrated by the following relationship:

$$W = \left( \frac{W \sqrt{\theta}}{\delta} \right) \frac{\delta}{\sqrt{\theta}}$$

where:

core mass flow = $W$

core size = $\dfrac{W \sqrt{\theta}}{\delta}$

core total head pressure ratio = $\delta$ (i.e. $P / P_{\mathrm{ref}}$)

inverse of core total head temperature ratio = $1/\theta$ (i.e. $T_{\mathrm{ref}} / T$)

core entry total pressure = $P$

core entry total temperature = $T$

So basically, at a fixed core size, increasing $\delta$ increases $W$.
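As a hedged illustration of the relationship above (the reference conditions and operating numbers below are hypothetical, not from the source), a minimal Python sketch:

```python
# Minimal sketch of the zero-staging relationship above (hypothetical numbers).
# At a fixed core size W*sqrt(theta)/delta, raising core entry pressure
# (and hence delta) raises the core mass flow W.
import math

P_REF, T_REF = 101.325e3, 288.15          # assumed sea-level reference conditions

def core_mass_flow(core_size, P_entry, T_entry):
    delta = P_entry / P_REF               # core total head pressure ratio
    theta = T_entry / T_REF               # core total head temperature ratio
    return core_size * delta / math.sqrt(theta)

base = core_mass_flow(1.0, P_entry=400e3, T_entry=450.0)
zero_staged = core_mass_flow(1.0, P_entry=480e3, T_entry=480.0)  # extra stage ahead of the core
print(zero_staged / base)                 # > 1: the same core passes more flow
```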
On the other hand, adding a stage to the rear of the compressor increases overall pressure ratio and decreases core size, but has no effect on core flow. This option also needs a turbine with a significantly smaller flow capacity to drive the compressor.
Zero-staging a compressor also implies an increase in shaft speed:

$$N = \left( \frac{N}{\sqrt{T}} \right) \sqrt{T}$$

where:

HP shaft speed = $N$

HP compressor "non-dimensional" speed (based on exit total temperature) = $N / \sqrt{T}$

HP compressor exit total temperature = $T$

So if the "non-dimensional" speed of the original compressor is to be maintained, increasing $T$ increases $N$. This implies an increase in both the blade and disc stress levels.
If the original shaft speed is maintained, then the increase in pressure ratio and mass flow from adding the zero stage will be severely reduced.
Although the above equations are written with zero-staging an HP compressor in mind, the same approach would apply to an LP or IP compressor.
References
Jet engines | Zero-stage | [
"Technology"
] | 724 | [
"Jet engines",
"Engines"
] |
5,479,431 | https://en.wikipedia.org/wiki/Universal%20quadratic%20form | In mathematics, a universal quadratic form is a quadratic form over a ring that represents every element of the ring. A non-singular form over a field which represents zero non-trivially is universal.
Examples
Over the real numbers, the form x² in one variable is not universal, as it cannot represent negative numbers: the two-variable form x² − y² over R is universal.
Lagrange's four-square theorem states that every positive integer is the sum of four squares. Hence the form x² + y² + z² + t² over Z is universal.
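A brute-force check of the four-square case, as a minimal sketch (the bound 200 is arbitrary):

```python
# Verify Lagrange's four-square theorem for every positive integer below a bound.
from itertools import product
from math import isqrt

def is_sum_of_four_squares(n: int) -> bool:
    r = isqrt(n)
    return any(a*a + b*b + c*c + d*d == n
               for a, b, c, d in product(range(r + 1), repeat=4))

assert all(is_sum_of_four_squares(n) for n in range(1, 200))
print("every n in 1..199 is a sum of four squares")
```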
Over a finite field, any non-singular quadratic form of dimension 2 or more is universal.
Forms over the rational numbers
The Hasse–Minkowski theorem implies that a form is universal over Q if and only if it is universal over Qp for all p (where we include p = ∞, letting Q∞ denote R). A form over R is universal if and only if it is not definite; a form over Qp is universal if it has dimension at least 4. One can conclude that all indefinite forms of dimension at least 4 over Q are universal.
See also
The 15 and 290 theorems give conditions for a quadratic form to represent all positive integers.
References
Field (mathematics)
Quadratic forms | Universal quadratic form | [
"Mathematics"
] | 248 | [
"Quadratic forms",
"Number theory"
] |
5,480,019 | https://en.wikipedia.org/wiki/Immunochemistry | Immunochemistry is the study of the chemistry of the immune system. This involves the study of the properties, functions, interactions and production of the chemical components (antibodies/immunoglobulins, toxins, epitopes of proteins like CD4, antitoxins, cytokines/chemokines, antigens) of the immune system. It also includes immune responses and the determination of immune materials/products by immunochemical assays.
In addition, immunochemistry is the study of the identities and functions of the components of the immune system. Immunochemistry is also used to describe the application of immune system components, in particular antibodies, to chemically labelled antigen molecules for visualization.
Various methods in immunochemistry have been developed and refined, and used in scientific study, from virology to molecular evolution. Immunochemical techniques include: enzyme-linked immunosorbent assay, immunoblotting (e.g., Western blot assay), precipitation and agglutination reactions, immunoelectrophoresis, immunophenotyping, immunochromatographic assay and flow cytometry.
One of the earliest examples of immunochemistry is the Wassermann test to detect syphilis. Svante Arrhenius was also one of the pioneers in the field; he published Immunochemistry in 1907, which described the application of the methods of physical chemistry to the study of the theory of toxins and antitoxins.
Immunochemistry is also studied from the aspect of using antibodies to label epitopes of interest in cells (immunocytochemistry) or tissues (immunohistochemistry).
References
Branches of immunology | Immunochemistry | [
"Biology"
] | 380 | [
"Branches of immunology"
] |
5,480,302 | https://en.wikipedia.org/wiki/Differentiation%20in%20Fr%C3%A9chet%20spaces | In mathematics, in particular in functional analysis and nonlinear analysis, it is possible to define the derivative of a function between two Fréchet spaces. This notion of differentiation, as it is the Gateaux derivative between Fréchet spaces, is significantly weaker than the derivative in a Banach space, even between general topological vector spaces. Nevertheless, it is the weakest notion of differentiation for which many of the familiar theorems from calculus hold. In particular, the chain rule is true. With some additional constraints on the Fréchet spaces and functions involved, there is an analog of the inverse function theorem called the Nash–Moser inverse function theorem, having wide applications in nonlinear analysis and differential geometry.
Mathematical details
Formally, the definition of differentiation is identical to the Gateaux derivative. Specifically, let $X$ and $Y$ be Fréchet spaces, $U \subseteq X$ be an open set, and $F : U \to Y$ be a function. The directional derivative of $F$ at $u \in U$ in the direction $v \in X$ is defined by

$$DF(u)v = \lim_{t \to 0} \frac{F(u + tv) - F(u)}{t}$$

if the limit exists. One says that $F$ is continuously differentiable, or $C^1$, if the limit exists for all $v \in X$ and the mapping

$$DF : U \times X \to Y$$

is a continuous map.

Higher order derivatives are defined inductively via

$$D^{k+1} F(u)\{v_1, \ldots, v_{k+1}\} = \lim_{t \to 0} \frac{D^k F(u + t v_{k+1})\{v_1, \ldots, v_k\} - D^k F(u)\{v_1, \ldots, v_k\}}{t}.$$

A function is said to be $C^k$ if $D^k F : U \times X \times \cdots \times X \to Y$ is continuous. It is $C^\infty$ or smooth if it is $C^k$ for every $k$.
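As an illustrative worked example (mine, not the article's), take the squaring map on smooth periodic functions; the defining limit can be computed directly:

```latex
% Example: F : C^\infty(S^1) \to C^\infty(S^1), \quad F(f) = f^2.
DF(f)v = \lim_{t \to 0} \frac{(f + tv)^2 - f^2}{t}
       = \lim_{t \to 0} \left( 2 f v + t v^2 \right)
       = 2 f v,
% and (f, v) \mapsto 2fv is continuous, so F is C^1 in the above sense.
```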
Properties
Let $X$ and $Y$ be Fréchet spaces. Suppose that $U$ is an open subset of $X$, $V$ is an open subset of $Y$, and $F : U \to V$, $G : V \to Y$ are a pair of $C^1$ functions. Then the following properties hold:

Fundamental theorem of calculus. If the line segment from $a$ to $b$ lies entirely within $U$, then
$$F(b) - F(a) = \int_0^1 DF(a + (b - a)t) \cdot (b - a)\, dt.$$

The chain rule. For all $u \in U$ and $v \in X$,
$$D(G \circ F)(u)v = DG(F(u)) DF(u)v.$$

Linearity. $DF(u)v$ is linear in $v$. More generally, if $F$ is $C^k$, then $D^k F(u)\{v_1, \ldots, v_k\}$ is multilinear in the $v_i$'s.

Taylor's theorem with remainder. Suppose that the line segment between $u$ and $u + h$ lies entirely within $U$. If $F$ is $C^k$ then
$$F(u + h) = F(u) + DF(u)h + \frac{1}{2!} D^2 F(u)\{h, h\} + \cdots + \frac{1}{(k-1)!} D^{k-1} F(u)\{h, \ldots, h\} + R_k$$
where the remainder term is given by
$$R_k = \int_0^1 \frac{(1 - t)^{k-1}}{(k-1)!} D^k F(u + th)\{h, \ldots, h\}\, dt.$$

Commutativity of directional derivatives. If $F$ is $C^k$, then
$$D^k F(u)\{v_1, \ldots, v_k\} = D^k F(u)\{v_{\sigma(1)}, \ldots, v_{\sigma(k)}\}$$
for every permutation $\sigma$ of $\{1, \ldots, k\}$.
The proofs of many of these properties rely fundamentally on the fact that it is possible to define the Riemann integral of continuous curves in a Fréchet space.
Smooth mappings
Surprisingly, a mapping between open subsets of Fréchet spaces is smooth (infinitely often differentiable) if it maps smooth curves to smooth curves; see Convenient analysis.
Moreover, smooth curves in spaces of smooth functions are just smooth functions of one variable more.
Consequences in differential geometry
The existence of a chain rule allows for the definition of a manifold modeled on a Fréchet space: a Fréchet manifold. Furthermore, the linearity of the derivative implies that there is an analog of the tangent bundle for Fréchet manifolds.
Tame Fréchet spaces
Frequently the Fréchet spaces that arise in practical applications of the derivative enjoy an additional property: they are tame. Roughly speaking, a tame Fréchet space is one which is almost a Banach space. On tame spaces, it is possible to define a preferred class of mappings, known as tame maps. On the category of tame spaces under tame maps, the underlying topology is strong enough to support a fully fledged theory of differential topology. Within this context, many more techniques from calculus hold. In particular, there are versions of the inverse and implicit function theorems.
See also
References
Banach spaces
Differential calculus
Euclidean geometry
Functions and mappings
Generalizations of the derivative
Topological vector spaces | Differentiation in Fréchet spaces | [
"Mathematics"
] | 650 | [
"Mathematical analysis",
"Functions and mappings",
"Vector spaces",
"Calculus",
"Mathematical objects",
"Space (mathematics)",
"Topological vector spaces",
"Mathematical relations",
"Differential calculus"
] |
5,480,422 | https://en.wikipedia.org/wiki/Field%20cycling | Field cycling is a measurement method which uses variable magnetic fields to measure the magnetization of a sample. Fast field cycling is the same method except with fast switchable magnetic fields.
Field cycling is either "mechanical" or "electrical". Mechanical field cycling moves the sample between two positions with different field strengths, and can therefore be done using static magnetic fields. Electrical field cycling requires switchable fields; the sample remains at its original position throughout.
Field cycling is used in fast field cycling relaxometry to measure specific physical and chemical properties of materials. For instance, nuclear magnetic resonance frequencies depend on the molecular environment. Furthermore, nuclear spin-lattice relaxation rates depend on local molecular mobility.
See also
NMR spectroscopy
References
Measurement | Field cycling | [
"Physics",
"Mathematics"
] | 151 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
5,480,561 | https://en.wikipedia.org/wiki/Recliner | A recliner is an armchair or sofa that reclines when the occupant lowers the chair's back and raises its front. It has a backrest that can be tilted back, and often a footrest that may be extended by means of a lever on the side of the chair, or may extend automatically when the back is reclined.
A recliner is also known as a reclining chair, lounger and an armchair.
Modern recliners often feature an adjustable headrest, lumbar support and an independent footstool that adjusts with the weight and angle of the user's legs to maximize comfort. Additional features include heat, massage and vibration. Some models are wheelchair accessible.
Recliners can also accommodate a near supine position for sleeping (making them multifunctional furniture), and are common in airplanes and trains, as well as in homes.
Etymology
The word "recline" was first used in the 1660s, derived ultimately from the Latin word reclinare . This Latin term itself combines the prefix re-, meaning "back," with clinare, meaning "to bend." Beginning in 1880, the word "recliner" was used to describe a type of chair.
History
Around 1850, the French introduced a reclining camp bed that could serve as a chair, a bed and a chaise longue. It was portable and featured padded arm rests and a steel frame. In the late 1800s, many designs were found for motion chairs that were made of wood with a padded seat and back. Designs from France and America included a document or book holder. The first reclining chair was reportedly owned by Napoleon III.
Knabush and Shoemaker, two American cousins, are credited with gaining a patent on a wooden recliner. The design was the same wooden bench recliner found in other designs. Issued in 1928, the patent led to the founding of La-Z-Boy. In 1930, Knabush and Shoemaker patented an upholstered model with a mechanical movement.
In 1959, Daniel F. Caldemeyer patented a recliner as owner of National Furniture Mfg. Co based in Evansville, Indiana. The design was based on the science of kinetics that he used while serving in the US Air Force. His design was used by NASA for the seats in Projects Mercury, Gemini and Apollo.
His chairs were used in the ready room for these missions and can be seen in the movie Apollo 13. The Secret Service bought 50 of them for President Lyndon Baines Johnson as a Christmas gift. A Life magazine photo of President Johnson, post gall bladder surgery, has the President lifting his shirt and showing his scar while sitting in one of these chairs. The Presidential Seal was embossed on these chairs with one currently in the Smithsonian Institution and another at the Lyndon Baines Johnson Library and Museum. With over 300 patents, Caldemeyer added the foot lift rest, heated seating and massage features to this chair and had the patent for the first entertainment center.
See also
Ergonomy
Massage chair
Sunlounger
References
Products introduced in 1928
Accessibility
Ergonomics | Recliner | [
"Engineering"
] | 633 | [
"Accessibility",
"Design"
] |
5,480,651 | https://en.wikipedia.org/wiki/Materialise%20Mimics | Materialise Mimics is an image processing software for 3D design and modeling, developed by Materialise NV, a Belgian company specializing in additive manufacturing software and technology for the medical, dental and additive manufacturing industries. Materialise Mimics is used to create 3D surface models from stacks of 2D image data. These 3D models can then be used for a variety of engineering applications. Mimics is an acronym for Materialise Interactive Medical Image Control System. It is developed in an ISO environment with CE and FDA 510(k) premarket clearance. Materialise Mimics is commercially available as part of the Materialise Mimics Innovation Suite, which also contains Materialise 3-matic, a design and meshing software for anatomical data. The current version is 24.0 (released in 2021), and it supports Windows 10, Windows 7, Vista and XP in x64.
Process
Materialise Mimics calculates surface 3D models from stacked image data such as Computed Tomography (CT), Micro CT, Magnetic Resonance Imaging (MRI), Confocal Microscopy, X-ray and Ultrasound, through image segmentation. The region of interest (ROI) selected in the segmentation process is converted to a 3D surface model using an adapted marching cubes algorithm that takes the partial volume effect into account, leading to very accurate 3D models. The 3D files are represented in the STL format.
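As a rough illustration of this pipeline (threshold segmentation followed by surface extraction), here is a minimal sketch using scikit-image rather than Mimics' own adapted algorithm; the file name and threshold are hypothetical:

```python
# Sketch: segment a stacked 3D image with a threshold mask, then extract a
# triangulated surface with marching cubes (partial-volume handling omitted).
import numpy as np
from skimage import measure

volume = np.load("ct_stack.npy")   # hypothetical (z, y, x) stack of 2D slices
mask = volume > 300                # hypothetical Hounsfield threshold, e.g. bone

verts, faces, normals, values = measure.marching_cubes(mask.astype(float), level=0.5)
print(verts.shape, faces.shape)    # a triangle mesh, ready to export as STL
```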
Uploading Data
DICOM data from CT or MRI images can be uploaded into Materialise Mimics in order to begin the segmentation process. From this data, 3 different views are present: the coronal, axial, and sagittal views. Another window is present to display 3D objects.
Mask Creation
The "New Mask" tool can be used to highlight specific anatomy from the DICOM data.
Printing Models
Models can be sent to 3D printers in the form of STLs.
Gallery
See also
3D modeling
3D Slicer
Computer representation of surfaces
Computed tomography
Medical imaging
References
External links
User community
Biomedical engineering
Windows graphics-related software
3D graphics software
Computer-aided design software | Materialise Mimics | [
"Engineering",
"Biology"
] | 406 | [
"Biological engineering",
"Medical technology",
"Biomedical engineering"
] |
5,480,870 | https://en.wikipedia.org/wiki/Hyperland | Hyperland is a 50-minute-long documentary film about hypertext and surrounding technologies. It was written by Douglas Adams and produced and directed by Max Whitby for BBC Two in 1990. It stars Douglas Adams as a computer user and Tom Baker, with whom Adams had already worked on Doctor Who, as a personification of a software agent.
In hindsight, what Hyperland describes and predicts is an approximation of today's World Wide Web.
Content
The self-proclaimed "fantasy documentary" begins with Adams asleep by the fireside with his television still on. In a dream that follows, Adams, fed up by game shows and generally passive, non-interactive linear content, takes his TV to a rubbish dump, where he meets Tom, played by Tom Baker. Tom is a software agent, who shows him the future of TV: interactive multimedia.
Much like Apple Inc's Knowledge Navigator concept, Tom acts as a butler within a virtual space populated with hypermedia: linked text, sound, pictures and movies represented by animated icons. The documentary is centred on Adams browsing these media and discovering their interconnectedness.
This process leads him, for example, from the topic Atlantic Ocean to literature about the sea to The Rime of the Ancient Mariner by Samuel Taylor Coleridge to the poem Kubla Khan by the same author to Xanadu and back to the topic of hypertext via Ted Nelson's Project Xanadu. The references to Coleridge and to Kubla Khan are rather knowing nods to Adams' own book Dirk Gently's Holistic Detective Agency, where they play significant roles in the plot. Dirk Gently was published in 1987 and also touches on the themes of interconnectedness, suggesting that this was a subject Adams had thought about at some length and for some time.
Many aspects of the documentary demonstrate Adams' noted enthusiasm for technology, and for Apple computers in particular. At the beginning of the documentary a Macintosh Portable can be seen, and most of the projects presented run on Apple hardware. Even the general design of the animated icons and environments featured in his dream is inspired by pre-OS X era Mac OS icons and design cues.
Multimedia
While Adams is browsing, many people and projects related to the general theme of hypertext and multimedia are presented:
Vannevar Bush and his Memex concept of a theoretical proto-hypertext information system are shown.
Ted Nelson explains hypertext and Project Xanadu.
Hans Peter Brøndmo talks about the concept of animated navigation icons, which he calls Micons.
Robert Winter talks about an interactive version of Beethoven's 9th Symphony.
An idea from Kurt Vonnegut's book Palm Sunday is presented: stories and narrative structures have shapes that can be represented mathematically as graphs.
Robert Abel shows his multimedia version of Pablo Picasso's Guernica.
Apple Multimedia Lab employees Steve Gano, Kristee Kreitman, Kristina Hooper, Michael Naimark and Fabrice Florin talk about a multimedia version of Life Story, a BBC TV film dramatisation of the 1953 discovery of the structure of DNA.
Amanda Goodenough presents Inigo Gets Out, an interactive story for children implemented with Hypercard.
Brad deGraf and Michael Wahrman talk about their digital puppet Mike Normal.
A NASA Ames Research Center scientist presents a prototype virtual reality helmet called Cyberiad.
Marc Canter makes a cameo (non-)appearance as an animated icon that isn't "clicked" by Adams; no interview with Canter is shown.
The dream (and the documentary) ends with a vision of how information might be accessed in 2005. In hindsight, Hyperland does describe a number of features of the modern web and, apart from some underestimates of graphics and processing power available, the documentary paints a not inaccurate picture of hypermedia and hypertext and how they are used today. This is especially noteworthy considering that it predates the public release of the first Web browser by about a year.
References
External links
Douglas Adams Homepage about Hyperland
Watch and download Hyperland at the Internet Archive.
BBC television documentaries
Hypertext
Electronic literature
Films about virtual reality
Multimedia
Television episodes written by Douglas Adams
1990 television specials
Documentary films about the Internet | Hyperland | [
"Technology"
] | 853 | [
"Multimedia"
] |
5,481,051 | https://en.wikipedia.org/wiki/Association%20for%20Women%20Geoscientists | The Association for Women Geoscientists (AWG) is an international professional organization which promotes the professional development of its members, provides geoscience outreach to girls, and encourages the participation of girls and women in the geosciences. Membership is open to all who support AWG's goals. Members include professional women and men from industry, government, museums and academia, students from a cross-section of colleges and universities, retirees, and others interested in supporting the society's goals.
History
AWG was founded in San Francisco in 1977. The original purpose of the society was to provide encouragement to women in the geosciences, a career choice where they were largely underrepresented at the time. Today, the purpose remains the same, although some advances have been made, as AWG membership approaches 1200 students and scientists, reflecting the increasing participation of women in the geosciences. AWG is a 501(c)(6) mutual benefit corporation with local chapters in many cities and at-Large members throughout the U.S. and around the world. AWG is a member society of the American Geological Institute, the umbrella organization of geological societies, and the Geological Society of America.
Notable members
Claudia Alexander
Gail Ashley
Denise Cox
Francisca Oboh Ikuenobe
Sharon Mosher
Sarah K. Noble
Sian Proctor
Activities
The society provides and sponsors several programs that strive to achieve the goals of the society:
The Association for Women Geoscientists Distinguished Lecturer Program is a Speakers Bureau of female geoscientists available to give AWG-funded talks or lectures on their areas of interest
Scholarships
AWG Outstanding Educator Award
GAEA bi-monthly newsletter
Geology field trips
References
External links
AWG San Francisco Bay Area Chapter
AWG Lone Star Chapter
AWG Puget Sound Chapter
AWG Minnesota Chapter
Geology societies
Organizations for women in science and technology
Organizations established in 1977
History of women in California | Association for Women Geoscientists | [
"Technology"
] | 385 | [
"Organizations for women in science and technology",
"Women in science and technology"
] |
5,481,056 | https://en.wikipedia.org/wiki/Analogical%20modeling | Analogical modeling (AM) is a formal theory of exemplar based analogical reasoning, proposed by Royal Skousen, professor of Linguistics and English language at Brigham Young University in Provo, Utah. It is applicable to language modeling and other categorization tasks. Analogical modeling is related to connectionism and nearest neighbor approaches, in that it is data-based rather than abstraction-based; but it is distinguished by its ability to cope with imperfect datasets (such as caused by simulated short term memory limits) and to base predictions on all relevant segments of the dataset, whether near or far. In language modeling, AM has successfully predicted empirically valid forms for which no theoretical explanation was known (see the discussion of Finnish morphology in Skousen et al. 2002).
Implementation
Overview
An exemplar-based model consists of a general-purpose modeling engine and a problem-specific dataset. Within the dataset, each exemplar (a case to be reasoned from, or an informative past experience) appears as a feature vector: a row of values for the set of parameters that define the problem. For example, in a spelling-to-sound task, the feature vector might consist of the letters of a word. Each exemplar in the dataset is stored with an outcome, such as a phoneme or phone to be generated. When the model is presented with a novel situation (in the form of an outcome-less feature vector), the engine algorithmically sorts the dataset to find exemplars that helpfully resemble it, and selects one, whose outcome is the model's prediction. The particulars of the algorithm distinguish one exemplar-based modeling system from another.
In AM, we think of the feature values as characterizing a context, and the outcome as a behavior that occurs within that context. Accordingly, the novel situation is known as the given context. Given the known features of the context, the AM engine systematically generates all contexts that include it (all of its supracontexts), and extracts from the dataset the exemplars that belong to each. The engine then discards those supracontexts whose outcomes are inconsistent (this measure of consistency will be discussed further below), leaving an analogical set of supracontexts, and probabilistically selects an exemplar from the analogical set with a bias toward those in large supracontexts. This multilevel search exponentially magnifies the likelihood of a behavior's being predicted as it occurs reliably in settings that specifically resemble the given context.
Analogical modeling in detail
AM performs the same process for each case it is asked to evaluate. The given context, consisting of n variables, is used as a template to generate supracontexts. Each supracontext is a set of exemplars in which one or more variables have the same values that they do in the given context, and the other variables are ignored. In effect, each is a view of the data, created by filtering for some criteria of similarity to the given context, and the total set of supracontexts exhausts all such views. Alternatively, each supracontext is a theory of the task or a proposed rule whose predictive power needs to be evaluated.
It is important to note that the supracontexts are not equal peers one with another; they are arranged by their distance from the given context, forming a hierarchy. If a supracontext specifies all of the variables that another one does and more, it is a subcontext of that other one, and it lies closer to the given context. (The hierarchy is not strictly branching; each supracontext can itself be a subcontext of several others, and can have several subcontexts.) This hierarchy becomes significant in the next step of the algorithm.
The engine now chooses the analogical set from among the supracontexts. A supracontext may contain exemplars that only exhibit one behavior; it is deterministically homogeneous and is included. It is a view of the data that displays regularity, or a relevant theory that has never yet been disproven. A supracontext may exhibit several behaviors, but contain no exemplars that occur in any more specific supracontext (that is, in any of its subcontexts); in this case it is non-deterministically homogeneous and is included. Here there is no great evidence that a systematic behavior occurs, but also no counterargument. Finally, a supracontext may be heterogeneous, meaning that it exhibits behaviors that are found in a subcontext (closer to the given context), and also behaviors that are not. Where the ambiguous behavior of the nondeterministically homogeneous supracontext was accepted, this is rejected because the intervening subcontext demonstrates that there is a better theory to be found. The heterogeneous supracontext is therefore excluded. This guarantees that we see an increase in meaningfully consistent behavior in the analogical set as we approach the given context.
With the analogical set chosen, each appearance of an exemplar (for a given exemplar may appear in several of the analogical supracontexts) is given a pointer to every other appearance of an exemplar within its supracontexts. One of these pointers is then selected at random and followed, and the exemplar to which it points provides the outcome. This gives each supracontext an importance proportional to the square of its size, and makes each exemplar likely to be selected in direct proportion to the sum of the sizes of all analogically consistent supracontexts in which it appears. Then, of course, the probability of predicting a particular outcome is proportional to the summed probabilities of all the exemplars that support it.
(Skousen 2002, in Skousen et al. 2002, pp. 11–25, and Skousen 2003, both passim)
Formulas
Given a context with $n$ elements, in which outcome $i$ occurs $n_i$ times:

total number of pairings: $n^2$

number of agreements for outcome $i$: $n_i^2$

number of disagreements for outcome $i$: $n_i (n - n_i)$

total number of agreements: $\sum_i n_i^2$

total number of disagreements: $\sum_i n_i (n - n_i)$
Example
This terminology is best understood through an example. In the example used in the second chapter of Skousen (1989), each context consists of three variables with potential values 0-3
Variable 1: 0,1,2,3
Variable 2: 0,1,2,3
Variable 3: 0,1,2,3
The two outcomes for the dataset are e and r, and the exemplars are:
3 1 0 e
0 3 2 r
2 1 0 r
2 1 2 r
3 1 1 r
We define a network of pointers like so:
The solid lines represent pointers between exemplars with matching outcomes; the dotted lines represent pointers between exemplars with non-matching outcomes.
The statistics for this example are as follows:
total number of pairings: $5^2 = 25$

number of agreements for outcome r: $4^2 = 16$

number of agreements for outcome e: $1^2 = 1$

number of disagreements for outcome r: $4(5 - 4) = 4$

number of disagreements for outcome e: $1(5 - 1) = 4$

total number of agreements: $16 + 1 = 17$

total number of disagreements: $4 + 4 = 8$

uncertainty or fraction of disagreement: $8/25 = 0.32$
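A minimal sketch of these counts (assuming, as the formulas above do, that every exemplar is paired with every exemplar, including itself):

```python
# Pointer statistics for the example dataset: 1 exemplar with outcome e, 4 with r.
from collections import Counter

outcomes = ["e", "r", "r", "r", "r"]
n = len(outcomes)
counts = Counter(outcomes)                                   # {'r': 4, 'e': 1}

pairings = n * n                                             # 25
agreements = {o: c * c for o, c in counts.items()}           # r: 16, e: 1
disagreements = {o: c * (n - c) for o, c in counts.items()}  # r: 4, e: 4

print(sum(agreements.values()))                              # 17
print(sum(disagreements.values()))                           # 8
print(sum(disagreements.values()) / pairings)                # 0.32
```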
Behavior can only be predicted for a given context; in this example, let us predict the outcome for the context "3 1 2". To do this, we first find all of the contexts containing the given context; these contexts are called supracontexts. We find the supracontexts by systematically eliminating the variables in the given context; with $m$ variables, there will generally be $2^m$ supracontexts. The following table lists each of the sub- and supracontexts; $\bar{x}$ means "not $x$", and - means "anything".
These contexts are shown in the Venn diagram below:
The next step is to determine which exemplars belong to which contexts in order to determine which of the contexts are homogeneous. The table below shows each of the subcontexts, their behavior in terms of the given exemplars, and the number of disagreements within the behavior:
Analyzing the subcontexts in the table above, we see that there is only 1 subcontext with any disagreements: "3 1 $\bar{2}$", which in the dataset consists of "3 1 0 e" and "3 1 1 r". There are 2 disagreements in this subcontext, 1 pointing from each of the exemplars to the other (see the pointer network pictured above). Therefore, only supracontexts containing this subcontext will contain any disagreements. We use a simple rule to identify the homogeneous supracontexts:
If the number of disagreements in the supracontext is greater than the number of disagreements in the contained subcontext, we say that it is heterogeneous; otherwise, it is homogeneous.
There are 3 situations that produce a homogeneous supracontext:
The supracontext is empty. This is the case for "3 - 2", which contains no data points. There can be no increase in the number of disagreements, and the supracontext is trivially homogeneous.
The supracontext is deterministic, meaning that only one type of outcome occurs in it. This is the case for "- 1 2" and "- - 2", which contain only data with the r outcome.
Only one subcontext contains any data. The subcontext does not have to be deterministic for the supracontext to be homogeneous. For example, the supracontexts "3 1 -" and "3 - -" each contain only the single non-empty subcontext "3 1 $\bar{2}$". This subcontext contains "3 1 0 e" and "3 1 1 r", making it non-deterministic. We say that this type of supracontext is unobstructed and non-deterministic.
The only two heterogeneous supracontexts are "- 1 -" and "- - -". In both of them, it is the combination of the non-deterministic "3 1 $\bar{2}$" with other subcontexts containing the r outcome which causes the heterogeneity.
There is actually a 4th type of homogeneous supracontext: it contains more than one non-empty subcontext and it is non-deterministic, but the frequency of outcomes in each sub-context is exactly the same. Analogical modeling does not consider this situation, however, for 2 reasons:
Determining whether this 4th situation has occurred requires a test. This is the only test of homogeneity that requires arithmetic, and ignoring it allows our tests of homogeneity to remain statistically free, which makes AM better for modeling human reasoning.
It is an extremely rare situation, and thus ignoring it can be expected not to have a large effect on the predicted outcome.
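The classification of the example's supracontexts can be sketched compactly (using the three homogeneity situations listed above; the helper names are illustrative, not Skousen's):

```python
# Classify the 2^m supracontexts of the given context "3 1 2" as homogeneous or
# heterogeneous: homogeneous if empty, deterministic, or containing only one
# non-empty subcontext (the three situations described above).
from itertools import product

data = [("310", "e"), ("032", "r"), ("210", "r"), ("212", "r"), ("311", "r")]
given = "312"

def agreement(ex):
    # Which positions of an exemplar match the given context.
    return tuple(c == g for c, g in zip(ex, given))

for mask in product([True, False], repeat=3):   # True = variable kept, False = ignored
    members = [(ex, o) for ex, o in data
               if all(a or not m for a, m in zip(agreement(ex), mask))]
    outcomes = {o for _, o in members}
    subcontexts = {agreement(ex) for ex, _ in members}   # finest partition of members
    homogeneous = not members or len(outcomes) == 1 or len(subcontexts) == 1
    label = "".join(g if m else "-" for g, m in zip(given, mask))
    print(label, [ex for ex, _ in members],
          "homogeneous" if homogeneous else "heterogeneous")
```

Run on the dataset above, this reproduces the analysis in the text: only "- 1 -" and "- - -" come out heterogeneous.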
Next we construct the analogical set, which consists of all of the pointers and outcomes from the homogeneous supracontexts.
The figure below shows the pointer network with the homogeneous contexts highlighted.
The pointers are summarized in the following table:
4 of the pointers in the analogical set are associated with the outcome e, and the other 9 are associated with r. In AM, a pointer is randomly selected and the outcome it points to is predicted. With a total of 13 pointers, the probability of the outcome e being predicted is 4/13 or 30.8%, and for outcome r it is 9/13 or 69.2%. We can create a more detailed account by listing the pointers for each of the occurrences in the homogeneous supracontexts:
We can then see the analogical effect of each of the instances in the data set.
Historical context
Analogy has been considered useful in describing language at least since the time of Saussure. Noam Chomsky and others have more recently criticized analogy as too vague to really be useful (Bańko 1991), an appeal to a deus ex machina. Skousen's proposal appears to address that criticism by proposing an explicit mechanism for analogy, which can be tested for psychological validity.
Applications
Analogical modeling has been employed in experiments ranging from phonology and morphology (linguistics) to orthography and syntax.
Problems
Though analogical modeling aims to create a model free from rules seen as contrived by linguists, in its current form it still requires researchers to select which variables to take into consideration. This is necessary because of the so-called "exponential explosion" of processing power requirements of the computer software used to implement analogical modeling. Recent research suggests that quantum computing could provide the solution to such performance bottlenecks (Skousen et al. 2002, see pp 45–47).
See also
Computational Linguistics
Connectionism
Instance-based learning
k-nearest neighbor algorithm
References
Skousen, Royal. (2003). Analogical Modeling: Exemplars, Rules, and Quantum Computing. Presented at the Berkeley Linguistics Society conference.
External links
Analogical Modeling Research Group Homepage
LINGUIST List Announcement of Analogical Modeling, Skousen et al. (2002)
Classification algorithms
Computational linguistics
Analogy | Analogical modeling | [
"Technology"
] | 2,812 | [
"Natural language and computing",
"Computational linguistics"
] |
5,481,270 | https://en.wikipedia.org/wiki/LetterWise | LetterWise and WordWise were predictive text entry systems developed by Eatoni Ergonomics (Eatoni) for handheld devices with ambiguous keyboards / keypads, typically non-smart traditional cellphones and portable devices with keypads. All patents covering those systems have expired. LetterWise used a prefix based predictive disambiguation method and can be demonstrated to have some advantages over the non-predictive Multi-tap technique that was in widespread use at the time that system was developed. WordWise was not a dictionary-based predictive system, but rather an extension of the LetterWise system to predict whole words from their linguistic components. It was designed to compete with dictionary-based predictive systems such as T9 and iTap which were commonly used with mobile phones with 12-key telephone keypads.
History
The court dismissed a claim that Eatoni Ergonomics came into being in spring 1998 as an orally agreed partnership between Howard Gutowitz, David A. Kosower and Eugene Skepner; the former pair had met as social acquaintances, and Skepner was noted for his programming skills. The Eatoni project had the objective of developing reduced-size keypads for portable devices. By August 1999 Kosower had stopped working on the project due to disagreements with Gutowitz over the terms for setting up the new company and over patents Gutowitz had filed or intended to file, which eventually resulted in a lawsuit. In September 1999 Gutowitz went on to form the Delaware limited liability company Eatoni Ergonomics LLC, and on 16 February 2000 formed the Delaware corporation Eatoni Ergonomics Inc. with Gutowitz as CEO.
Eatoni composed a conference paper for March 2001 on Linguistically Optimized Text Entry on a Mobile Phone but it was not accepted.
In November 2001, at the 14th annual ACM symposium on User Interface Software and Technology, a paper prepared by academic Scott MacKenzie and Hedy Kober, supported by three authors from Eatoni including Skepner, described experimental results comparing LetterWise against other schemes, though notably WordWise was absent from the presentation despite having been announced over a year previously.
By May 2002 Gutowitz admitted adoption by established cell phone manufacturers was proving difficult, although BenQ was taking up the technology.
Eatoni was involved in a series of lawsuits and countersuits with mobile phone manufacturer BlackBerry (RIM) between 2005 and 2012 over alleged patent infringement, leading to a 2007 settlement to jointly develop software for a reduced keyboard and to take equity stock in Eatoni.
In the 2010s Eatoni examined applying the cellphone keytap technology to threatened languages, in particular N'ko; Gutowitz said he had eventually given up trying to get it supported by cellphone manufacturers and begun to trial native-language applications instead.
Letterwise
Design
Unlike most if not all other predictive text entry systems, LetterWise does not depend on a word dictionary but is a prefix-based predictive system. For each letter in the word, the user taps the key associated with that letter on the keypad. If the letter shown is the one required, the user simply repeats the process for the next letter in the word; otherwise Next is tapped until the required letter appears. It is claimed this is a very simple and efficient system to use, with no Multi-tap style time-outs or dictionary limitations. In an instruction manual it can be described in the following single sentence: "Hit the key with the letter you want, if it doesn't come up, hit Next until it does."
LetterWise is not designed to be eyes-free; that is, the associated device display must be monitored to perform the next action. This contrasts with Multi-tap and some two-key systems, where skilled and expert users are able to input text without referring to the screen.
Example
Consider entering the word sirs, a word that strongly favours LetterWise. A Multi-tap timeout is typically a one-to-two-second wait for the cursor to move to the next letter, though this can be cut short by tapping the timeout-kill or advance button (often down).
A word such as mama would be more favourable to Multi-tap: only 4 taps and no timeouts would be required, far fewer than the 14 taps and 1 timeout required for sirs.
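The Multi-tap side of this comparison is mechanical enough to verify with a short sketch (a LetterWise count would additionally require its letter-probability database, so only Multi-tap is computed here):

```python
# Count Multi-tap taps and timeouts for a word on a standard 12-key layout.
MULTITAP = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
            '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}
KEY_OF = {c: k for k, letters in MULTITAP.items() for c in letters}

def multitap_cost(word):
    taps, timeouts, prev_key = 0, 0, None
    for c in word:
        key = KEY_OF[c]
        taps += MULTITAP[key].index(c) + 1   # press the key until the letter appears
        if key == prev_key:
            timeouts += 1                    # same key twice in a row needs a timeout
        prev_key = key
    return taps, timeouts

print(multitap_cost("sirs"))  # (14, 1)
print(multitap_cost("mama"))  # (4, 0)
```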
Software app versions
Despite not being included as a system keyboard, LetterWise was available in Email / Twitter / SMS / LiveJournal clients for Symbian and iOS, as well as Qualcomm's BREW platform (distributed by the Verizon Wireless Get It Now service).
Performance
Performance figures for predictive text systems typically depend on the use of natural language; use of SMS-language abbreviations and slang can reduce any advantage. For the tests done by Scott MacKenzie, a selection of words from the British National Corpus was used as a representative sample of the English language.
LetterWise uses the probability of letters occurring in a particular sequence to achieve performance.
One measure of performance for text entry systems is "keystrokes per character" (kspc). As a baseline, the full English PC keyboard has a kspc of 1, as precisely one keystroke is required per typed character. Scott MacKenzie and other academics, presenting with Eatoni, evaluated LetterWise to have a kspc of 1.15 for English. This typically equates to one extra tap per 6 letters compared to a standard keyboard. In contrast Multi-tap, where a key is repeatedly pressed until the desired letter is found whereupon no further taps are made until the cursor moves to the next letter, has been evaluated to have a kspc of about 2.03.
The pangram The quick brown fox jumps over the lazy dog is sometimes used for keyboard practice. The Eatoni website claims this 35-letter, nine-word phrase requires only 14 additional keystrokes with LetterWise compared to 42 additional keystrokes for Multi-tap.
Memory / storage requirements
Eatoni engineers claim LetterWise has relatively low storage requirements compared to dictionary based solutions. The Eatoni website claims in the storage space typically required for a single dictionary database (30–100kb) it would be possible to fit LetterWise databases for 10–20 different languages. The website says device random-access memory requirements are similarly low, typically under 2kb, and there has been an implementation for 200 bytes of available memory.
Experimental work
The LetterWise engine was also used in TongueWise, a tongue-computer interface for tetraplegics. Clinical evaluations showed it could offer an almost 50% increase in throughput compared to Multi-tap for English-language words.
Chinese LetterWise
Chinese LetterWise can be loosely described as a two-level version of alphabetic LetterWise. A phonetic character (e.g. Pinyin or Bopomofo) is entered on the first level and converted automatically to Hanzi ready for the second-level (Next Hanzi) key. The Eatoni website showed three related cordless or answerphone devices from the same manufacturer having adopted the technology.
WordWise
Eatoni Ergonomics also developed and patented the dictionary-based predictive text input system WordWise, announcing it in September 2000 with claims it was even faster than LetterWise. Wigdor and Balakrishnan indicated that WordWise performs similarly to earlier techniques but with subtle advantages, though as with all predictive techniques the efficiency relies essentially upon the use of natural language, with techniques such as abbreviations tending to nullify any advantage.
In addition to the standard version of WordWise, Eatoni's website also notes they developed a more advanced version termed Shift-WordWise. Shift-WordWise required use of a modified CHELNSTY keypad, with those letters selected by a shift key that could be allocated to the 1 button.
In his lawsuit, Kosower alleged he had some input into the development of the WordWise system during his time at Eatoni up to August 1999. It was designed to complement LetterWise and was targeted at keyboards on mobile devices.
Eatoni's website indicates that standard WordWise makes it possible to add additional words to the dictionary on the device; however, this capability is not mentioned in Iridium satellite phone manuals, so it might not be present on all versions of WordWise.
If WordWise is unable to suggest the required word either through it not being in the dictionary or due to a keying error the required word will need to be entered in another mode such as LetterWise which can be switched to relatively easily.
It has been suggested that WordWise is less sensitive to keystroke errors than competing T9 text prediction technology.
A multilingual WordWise implementation is included in Iridium satellite phones. Eatoni's website also indicated it was included in their SMS, Twitter and e-mail downloadable client applications for certain Symbian and Apple iOS based products.
Adoption
Despite intensive marketing attempts in the early 2000s, LetterWise and WordWise were not widely adopted by cell phone manufacturers, with the Multi-tap and T9 systems holding the market. LetterWise did find some adoption for DECT cordless phones, which were typically constrained by more limited resources, with Eatoni claiming over 20 million LetterWise-capable devices shipped. From 2009 certain Iridium satellite phone models were shipped with both LetterWise and WordWise, though not necessarily enabled by default; as of May 2019 some of these models appear to be current.
Notes and references
Notes
References
External links
LetterWise
Input methods for handheld devices | LetterWise | [
"Technology"
] | 1,896 | [
"Input methods for handheld devices"
] |
5,481,296 | https://en.wikipedia.org/wiki/Iterated%20binary%20operation | In mathematics, an iterated binary operation is an extension of a binary operation on a set S to a function on finite sequences of elements of S through repeated application. Common examples include the extension of the addition operation to the summation operation, and the extension of the multiplication operation to the product operation. Other operations, e.g., the set-theoretic operations union and intersection, are also often iterated, but the iterations are not given separate names. In print, summation and product are represented by special symbols; but other iterated operators often are denoted by larger variants of the symbol for the ordinary binary operator. Thus, the iterations of the four operations mentioned above are denoted
$\sum$, $\prod$, $\bigcup$, and $\bigcap$, respectively.
More generally, iteration of a binary function $f$ is generally denoted by a slash: iteration of $f$ over the sequence $(a_1, a_2, \ldots, a_n)$ is denoted by $f / (a_1, a_2, \ldots, a_n)$, following the notation for reduce in Bird–Meertens formalism.
In general, there is more than one way to extend a binary operation to operate on finite sequences, depending on whether the operator is associative, and whether the operator has identity elements.
Definition
Denote by $a_{j,k}$, with $j \ge 0$ and $k \ge j$, the finite sequence of length $k - j$ of elements of $S$, with members $(a_i)$, for $j \le i < k$. Note that if $k = j$, the sequence is empty.

For $f : S \times S \to S$, define a new function $F_l$ on finite nonempty sequences of elements of $S$, where

$$F_l(a_{j,j+1}) = a_j, \qquad F_l(a_{j,k}) = f(F_l(a_{j,k-1}), a_{k-1}) \text{ for } k > j + 1.$$

Similarly, define

$$F_r(a_{j,j+1}) = a_j, \qquad F_r(a_{j,k}) = f(a_j, F_r(a_{j+1,k})) \text{ for } k > j + 1.$$
If f has a unique left identity e, the definition of Fl can be modified to operate on empty sequences by defining the value of Fl on an empty sequence to be e (the previous base case on sequences of length 1 becomes redundant). Similarly, Fr can be modified to operate on empty sequences if f has a unique right identity.
If f is associative, then Fl equals Fr, and we can simply write F. Moreover, if an identity element e exists, then it is unique (see Monoid).
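A minimal Python sketch of the left and right iterations (the names fold_left and fold_right are mine, not the article's):

```python
# Left and right iteration (folds) of a binary operation f over a sequence.
from functools import reduce

def fold_left(f, seq):
    # F_l: f(f(...f(a0, a1)...), a_{n-1})
    return reduce(f, seq)

def fold_right(f, seq):
    # F_r: f(a0, f(a1, ... f(a_{n-2}, a_{n-1}) ...))
    return reduce(lambda acc, x: f(x, acc), reversed(seq))

add = lambda x, y: x + y   # associative: both folds agree
sub = lambda x, y: x - y   # non-associative: the folds differ

print(fold_left(add, [1, 2, 3, 4]), fold_right(add, [1, 2, 3, 4]))  # 10 10
print(fold_left(sub, [1, 2, 3, 4]), fold_right(sub, [1, 2, 3, 4]))  # -8 -2
```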
If f is commutative and associative, then F can operate on any non-empty finite multiset by applying it to an arbitrary enumeration of the multiset. If f moreover has an identity element e, then this is defined to be the value of F on an empty multiset. If f is idempotent, then the above definitions can be extended to finite sets.
If $S$ also is equipped with a metric or more generally with a topology that is Hausdorff, so that the concept of a limit of a sequence is defined in $S$, then an infinite iteration on a countable sequence in $S$ is defined exactly when the corresponding sequence of finite iterations converges. Thus, e.g., if $a_0, a_1, a_2, a_3, \ldots$ is an infinite sequence of real numbers, then the infinite product $\prod_{k=0}^{\infty} a_k$ is defined, and equal to $\lim_{n \to \infty} \prod_{k=0}^{n} a_k$, if and only if that limit exists.
Non-associative binary operation
The general, non-associative binary operation is given by a magma. The act of iterating on a non-associative binary operation may be represented as a binary tree.
Notation
Iterated binary operations are used to represent an operation that will be repeated over a set subject to some constraints. Typically the lower bound of a restriction is written under the symbol, and the upper bound over the symbol, though they may also be written as superscripts and subscripts in compact notation. Interpolation is performed over positive integers from the lower to upper bound, to produce the set which will be substituted into the index (below denoted as i) for the repeated operations.
Common notations include the big Sigma (repeated sum) and big Pi (repeated product) notations.
It is possible to specify set membership or other logical constraints in place of explicit indices, in order to implicitly specify which elements of a set shall be used:
Multiple conditions may be written either joined with a logical and or separately:
Less commonly, any binary operator such as exclusive or or set union may also be used. For example, if $S$ is a set of logical propositions, one may write $\bigwedge_{p \in S} p$, which is true iff all of the elements of $S$ are true.
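Illustrative instances of these notations in LaTeX (my examples, not the article's):

```latex
\sum_{i=1}^{n} a_i                         % big Sigma: repeated sum over an index range
\prod_{i=1}^{n} a_i                        % big Pi: repeated product
\sum_{x \in S} f(x)                        % set membership in place of explicit indices
\sum_{\substack{x \in S \\ x > 0}} f(x)    % multiple conditions written jointly
\bigwedge_{p \in S} p                      % iterated logical and over propositions
```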
See also
Unary operation
Unary function
Binary operation
Binary function
Ternary operation
References
External links
Bulk action
Parallel prefix operation
Nuprl iterated binary operations
Binary operations | Iterated binary operation | [
"Mathematics"
] | 867 | [
"Binary operations",
"Mathematical relations",
"Binary relations"
] |
5,481,404 | https://en.wikipedia.org/wiki/Scale-space%20axioms | In image processing and computer vision, a scale space framework can be used to represent an image as a family of gradually smoothed images. This framework is very general and a variety of scale space representations exist. A typical approach for choosing a particular type of scale space representation is to establish a set of scale-space axioms, describing basic properties of the desired scale-space representation and often chosen so as to make the representation useful in practical applications. Once established, the axioms narrow the possible scale-space representations to a smaller class, typically with only a few free parameters.
A set of standard scale space axioms, discussed below, leads to the linear Gaussian scale-space, which is the most common type of scale space used in image processing and computer vision.
Scale space axioms for the linear scale-space representation
The linear scale space representation $L(\cdot, t) = g(\cdot, t) * f$ of a signal $f$, obtained by smoothing with the Gaussian kernel $g(\cdot, t)$, satisfies a number of properties ('scale-space axioms') that make it a special form of multi-scale representation:

linearity
$$g(\cdot, t) * (a f_1 + b f_2) = a \, (g(\cdot, t) * f_1) + b \, (g(\cdot, t) * f_2)$$
where $f_1$ and $f_2$ are signals while $a$ and $b$ are constants,

shift invariance
$$g(\cdot, t) * (S_{\Delta x} f) = S_{\Delta x} (g(\cdot, t) * f)$$
where $S_{\Delta x}$ denotes the shift (translation) operator $(S_{\Delta x} f)(x) = f(x - \Delta x)$,

semi-group structure
$$g(\cdot, t_1) * g(\cdot, t_2) = g(\cdot, t_1 + t_2)$$
with the associated cascade smoothing property
$$L(\cdot, t_2) = g(\cdot, t_2 - t_1) * L(\cdot, t_1),$$

existence of an infinitesimal generator $A$
$$\partial_t L(x, t) = (A L)(x, t),$$

non-creation of local extrema (zero-crossings) in one dimension,

non-enhancement of local extrema in any number of dimensions:
$\partial_t L(x, t) \le 0$ at spatial maxima and $\partial_t L(x, t) \ge 0$ at spatial minima,

rotational symmetry
$$g(x, t) = h(|x|, t)$$
for some function $h$,

scale invariance
$$\hat{g}(\omega, t) = \hat{h}(\omega \, \varphi(t))$$
for some functions $\varphi$ and $\hat{h}$, where $\hat{g}$ denotes the Fourier transform of $g(\cdot, t)$,

positivity
$$g(x, t) \ge 0,$$

normalization
$$\int g(x, t) \, dx = 1.$$
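The semi-group axiom can be checked numerically; below is a minimal sketch with scipy (my example, not from the article), using the fact that the Gaussian t-parameters, i.e. variances, add under cascade smoothing; equality is only approximate for sampled kernels:

```python
# Numerical check of the semi-group / cascade smoothing axiom for the Gaussian:
# smoothing with t1 then t2 approximately equals smoothing once with t1 + t2,
# where t = sigma^2. Exact for the continuous kernel; approximate when sampled.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
f = rng.standard_normal((64, 64))                      # a test "image"

s1, s2 = 1.5, 2.0
cascade = gaussian_filter(gaussian_filter(f, s1), s2)  # g(., t1) then g(., t2)
direct = gaussian_filter(f, np.sqrt(s1**2 + s2**2))    # g(., t1 + t2): variances add

print(np.max(np.abs(cascade - direct)))                # small discretization error
```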
In fact, it can be shown that the Gaussian kernel is a unique choice given several different combinations of subsets of these scale-space axioms:
most of the axioms (linearity, shift-invariance, semigroup) correspond to scaling being a semigroup of shift-invariant linear operators, which is satisfied by a number of families of integral transforms, while "non-creation of local extrema" for one-dimensional signals or "non-enhancement of local extrema" for higher-dimensional signals are the crucial axioms which relate scale-spaces to smoothing (formally, parabolic partial differential equations), and hence select for the Gaussian.
The Gaussian kernel is also separable in Cartesian coordinates, i.e. . Separability is, however, not counted as a scale-space axiom, since it is a coordinate dependent property related to issues of implementation. In addition, the requirement of separability in combination with rotational symmetry per se fixates the smoothing kernel to be a Gaussian.
There exists a generalization of the Gaussian scale-space theory to more general affine and spatio-temporal scale-spaces. In addition to variabilities over scale, which original scale-space theory was designed to handle, this generalized scale-space theory also comprises other types of variabilities, including image deformations caused by viewing variations, approximated by local affine transformations, and relative motions between objects in the world and the observer, approximated by local Galilean transformations. In this theory, rotational symmetry is not imposed as a necessary scale-space axiom and is instead replaced by requirements of affine and/or Galilean covariance. The generalized scale-space theory leads to predictions about receptive field profiles in good qualitative agreement with receptive field profiles measured by cell recordings in biological vision.
In the computer vision, image processing and signal processing literature there are many other multi-scale approaches, using wavelets and a variety of other kernels, that do not exploit or satisfy the same requirements as scale-space descriptions do; please see the article on related multi-scale approaches. There has also been work on discrete scale-space concepts that carry the scale-space properties over to the discrete domain; see the article on scale space implementation for examples and references.
See also
Scale space implementation
References
Image processing
Computer vision | Scale-space axioms | [
"Engineering"
] | 806 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
573,644 | https://en.wikipedia.org/wiki/Loyd%20Blankenship | Loyd Blankenship (born 1965), better known by his pseudonym The Mentor, is an American computer hacker and writer. He has been active since the 1980s, when he was a member of the hacker groups Extasyy Elite and Legion of Doom.
Writings
Hacker Manifesto
He is the author of The Conscience of a Hacker (also known as The Hacker Manifesto); the essay was written after he was arrested and was published in the ezine Phrack. Since the essay's publication in 1986, it has been the subject of numerous panels and T-shirts.
Role-playing games
Blankenship was hired by Steve Jackson Games in 1989. He authored the cyberpunk role-playing sourcebook GURPS Cyberpunk, the manuscript of which was seized in a 1990 raid of Steve Jackson Games headquarters by the U.S. Secret Service. The raid resulted in the subsequent legal case Steve Jackson Games, Inc. v. United States Secret Service.
References
External links
ElfQrin.com Interview with The Mentor (July 31, 2000)
"The Conscience of a Hacker", published in Phrack Volume 1 Issue 7
1965 births
GURPS writers
Hackers
Legion of Doom (hacker group)
Living people | Loyd Blankenship | [
"Technology"
] | 249 | [
"Lists of people in STEM fields",
"Hackers"
] |
573,694 | https://en.wikipedia.org/wiki/Charles%20%C3%89douard%20Guillaume | Charles Édouard Guillaume (; 15 February 1861 – 13 May 1938) was a Swiss physicist who received the Nobel Prize in Physics in 1920 "for the service he had rendered to precision measurements in physics by his discovery of anomalies in nickel steel alloys". In 1919, he gave the fifth Guthrie Lecture at the Institute of Physics in London with the title "The Anomaly of the Nickel-Steels".
Personal life
Charles-Edouard Guillaume was born in Fleurier, Switzerland, on 15 February 1861. Guillaume received his early education in Neuchâtel, and obtained a doctoral degree in Physics at ETH Zurich in 1883.
Guillaume was married in 1888 to A. M. Taufflieb, with whom he had three children.
He died on 13 May 1938 at Sèvres, aged 77.
Scientific career
Guillaume was head of the International Bureau of Weights and Measures. He also worked with Kristian Birkeland, serving at the Observatoire de Paris – Section de Meudon. He conducted several experiments with thermostatic measurements at the observatory.
Nickel–steel alloy
Guillaume is known for his discovery of nickel–steel alloys he named invar, elinvar and a third, also known as red platinum. Invar has a near-zero coefficient of thermal expansion, making it useful in constructing precision instruments whose dimensions need to remain constant in spite of varying temperature. Elinvar has a near-zero thermal coefficient of the modulus of elasticity, making it useful in constructing instruments with springs that need to be unaffected by varying temperature, such as the marine chronometer. Elinvar is also non-magnetic, which is a secondary useful property for antimagnetic watches.
Space radiation
Guillaume is also known for the earliest estimation of the "radiation of the stars" in his 1896 article ("The Temperature of Space"). This publication made him a pioneer in plasma cosmology, the study of conditions far from any particular star. The concept was later known as the Cosmic microwave background. He was one of the first people in history to estimate the temperature of space, as 5–6 K.
Horology
As the son of a Swiss horologist, Guillaume took an interest in marine chronometers. For use as the compensation balance he developed a slight variation of the invar alloy which had a negative quadratic coefficient of expansion. The purpose of doing this was to eliminate the "middle-temperature" error of the balance wheel. The Guillaume balance (a type of balance wheel) in horology is named after him.
Publications
1886: Études thermométriques (Studies on Thermometry)
1889: Traité de thermométrie de Precision (Treatise on Thermometry) via Internet Archive
1894: Unités et Étalons (Units and Standards)
1896: Les rayons X et la Photographie a traves les corps opaques (X-Rays) via Internet Archive
1896:
1898: Recherches sur le nickel et ses alliages (Investigations on Nickel and its Alloys)
1899: La vie de la matière (The Life of Matter)
1902: La Convention du Mètre et le Bureau international des Poids et Mesures (Metrical Convention and the International Bureau of Weights and Measures)
1904: Les applications des aciers au nickel (Applications of Nickel-Steels) via Internet Archive
1907: Des états de la matière (States of Matter)
1909: Initiation à la Mécanique (Introduction to Mechanics) Hathi Trust record
1913: [1907] Les récents progrès du système métrique (Recent progress in the Metric System)
See also
Carlos Ibáñez e Ibáñez de Ibero – 1st president of the International Committee for Weights and Measures
Notes
References
Nobel Lectures, Physics 1901–1921, "Charles-Edouard Guillaume – Biography". Elsevier Publishing Company, Amsterdam.
Rupert Thomas Gould (1960) The Marine Chronometer: its history and development, Holland Press.
C. E. Guillaume in Nature 1934
Further reading
Robert W. Cahn (2005) "An Unusual Nobel Prize", Notes and Records 59(2).
External links
including the Nobel Lecture, December 11, 1920 Invar and Elinvar
1861 births
1938 deaths
20th-century Swiss physicists
People from Val-de-Travers District
Experimental physicists
ETH Zurich alumni
Nobel laureates in Physics
Swiss Nobel laureates
19th-century Swiss physicists
Swiss Protestants | Charles Édouard Guillaume | [
"Physics"
] | 903 | [
"Experimental physics",
"Experimental physicists"
] |
573,844 | https://en.wikipedia.org/wiki/Aeromancy | Aeromancy (from Greek ἀήρ aḗr, "air", and manteia, "divination") is divination that is conducted by interpreting atmospheric conditions. Alternate terms include "arologie", "aeriology", and "aërology".
Practice
Aeromancy uses cloud formations, wind currents, and cosmological events such as comets, to attempt to divine the past, present, or future. There are sub-types of this practice which are as follows: austromancy (wind divination), ceraunoscopy (observing thunder and lightning), chaomancy (aerial vision), meteormancy (meteors, AKA shooting stars), and nephomancy (cloud divination).
History
Variations on the concept have been used throughout history; the practice is thought to have been used by the ancient Babylonian priests, and is probably alluded to in the Bible.
Damascius, the last of the Neoplatonists, records an account of nephomancy in the 5th century CE, during the reign of Leo I:
Cultural influence
The ancient Etruscans produced guides to brontoscopic and fulgural divination of the future, based upon the omens that were supposedly displayed by thunder or lightning that occurred on particular days of the year, or in particular places.
Divination by clouds was condemned by Moses in Deuteronomy 18:10 and 18:14 in the Hebrew Bible. In contrast, English Christian Bibles typically translate the same Hebrew words as "soothsayers" and "conjurers" or the like.
In Renaissance magic, aeromancy was classified as one of the seven "forbidden arts", along with necromancy, geomancy, hydromancy, pyromancy, chiromancy (palmistry), and spatulamancy (scapulimancy). It was thus condemned by Albertus Magnus in Speculum Astronomiae as a derivative of necromancy. The practice was further debunked by Luis de Valladolid in his 1889 work Historia de vita et doctrina Alberti Magni.
See also
References
Divination
Weather prediction | Aeromancy | [
"Physics"
] | 460 | [
"Weather",
"Weather prediction",
"Physical phenomena"
] |
573,846 | https://en.wikipedia.org/wiki/Manne%20Siegbahn | Karl Manne Georg Siegbahn (; 3 December 1886 – 26 September 1978) was a Swedish physicist who received the Nobel Prize in Physics in 1924 "for his discoveries and research in the field of X-ray spectroscopy".
Biography
Siegbahn was born in Örebro, Sweden, the son of Georg Siegbahn and his wife, Emma Zetterberg.
He graduated in Stockholm in 1906 and began his studies at Lund University in the same year. During his education he was secretarial assistant to Johannes Rydberg. In 1908 he studied at the University of Göttingen. He obtained his doctorate (PhD) at Lund University in 1911; his thesis was titled Magnetische Feldmessungen (magnetic field measurements). He became acting professor when Rydberg's health was failing, and succeeded him as full professor in 1920. However, in 1922 he left Lund for a professorship at Uppsala University.
In 1937, Siegbahn was appointed Director of the Physics Department of the Nobel Institute of the Royal Swedish Academy of Sciences. In 1988 this was renamed the Manne Siegbahn Institute (MSI). The institute research groups have been reorganized since, but the name lives on in the Manne Siegbahn Laboratory hosted by Stockholm University.
X-ray spectroscopy
Manne Siegbahn began his studies of X-ray spectroscopy in 1914. Initially he used the same type of spectrometer as Henry Moseley had done for finding the relationship between the wavelength of some elements and their place at the periodic system. Shortly thereafter he developed improved experimental apparatus which allowed him to make very accurate measurements of the X-ray wavelengths produced by atoms of different elements. Also, he found that several of the spectral lines that Moseley had discovered consisted of more components. By studying these components and improving the spectrometer, Siegbahn got an almost complete understanding of the electron shell. He developed a convention for naming the different spectral lines that are characteristic to elements in X-ray spectroscopy, the Siegbahn notation. Siegbahn's precision measurements drove many developments in quantum theory and atomic physics.
Awards and honours
Siegbahn was awarded the Nobel Prize in Physics in 1924. He won the Hughes Medal 1934 and Rumford Medal 1940. In 1944, he patented the Siegbahn pump. Siegbahn was elected a Foreign Member of the Royal Society in 1954.
There is a street, Route Siegbahn, named after Siegbahn at CERN, on the Prévessin site in France.
Personal life
Siegbahn married Karin Högbom in 1914. They had two children: Bo Siegbahn (1915–2008), a diplomat and politician, and Kai Siegbahn (1918–2007), a physicist who received the Nobel Prize in Physics in 1981 for his contribution to the development of X-ray photoelectron spectroscopy.
Awards and decorations
Commander Grand Cross of the Order of the Polar Star (6 June 1947)
Nobel Prize in Physics (1924)
Hughes Medal (1934)
Rumford Medal (1940)
Works
The Spectroscopy of X-Rays (1925)
References
External links
including the Nobel Lecture, December 11, 1925 The X-ray Spectra and the Structure of the Atoms
1886 births
1978 deaths
20th-century Swedish physicists
People from Örebro
Experimental physicists
Lund University alumni
Nobel laureates in Physics
Swedish Nobel laureates
Academic staff of Uppsala University
Members of the French Academy of Sciences
Foreign members of the Royal Society
Foreign members of the USSR Academy of Sciences
Spectroscopists
Amanuenses
Commanders Grand Cross of the Order of the Polar Star
Presidents of the International Union of Pure and Applied Physics
Members of the Royal Society of Sciences in Uppsala | Manne Siegbahn | [
"Physics",
"Chemistry"
] | 746 | [
"Spectrum (physical sciences)",
"Physical chemists",
"Analytical chemists",
"Spectroscopists",
"Experimental physics",
"Spectroscopy",
"Experimental physicists"
] |
573,875 | https://en.wikipedia.org/wiki/Measurement%20in%20quantum%20mechanics | In quantum physics, a measurement is the testing or manipulation of a physical system to yield a numerical result. A fundamental feature of quantum theory is that the predictions it makes are probabilistic. The procedure for finding a probability involves combining a quantum state, which mathematically describes a quantum system, with a mathematical representation of the measurement to be performed on that system. The formula for this calculation is known as the Born rule. For example, a quantum particle like an electron can be described by a quantum state that associates to each point in space a complex number called a probability amplitude. Applying the Born rule to these amplitudes gives the probabilities that the electron will be found in one region or another when an experiment is performed to locate it. This is the best the theory can do; it cannot say for certain where the electron will be found. The same quantum state can also be used to make a prediction of how the electron will be moving, if an experiment is performed to measure its momentum instead of its position. The uncertainty principle implies that, whatever the quantum state, the range of predictions for the electron's position and the range of predictions for its momentum cannot both be narrow. Some quantum states imply a near-certain prediction of the result of a position measurement, but the result of a momentum measurement will be highly unpredictable, and vice versa. Furthermore, the fact that nature violates the statistical conditions known as Bell inequalities indicates that the unpredictability of quantum measurement results cannot be explained away as due to ignorance about "local hidden variables" within quantum systems.
Measuring a quantum system generally changes the quantum state that describes that system. This is a central feature of quantum mechanics, one that is both mathematically intricate and conceptually subtle. The mathematical tools for making predictions about what measurement outcomes may occur, and how quantum states can change, were developed during the 20th century and make use of linear algebra and functional analysis. Quantum physics has proven to be an empirical success and to have wide-ranging applicability. However, on a more philosophical level, debates continue about the meaning of the measurement concept.
Mathematical formalism
"Observables" as self-adjoint operators
In quantum mechanics, each physical system is associated with a Hilbert space, each element of which represents a possible state of the physical system. The approach codified by John von Neumann represents a measurement upon a physical system by a self-adjoint operator on that Hilbert space termed an "observable". These observables play the role of measurable quantities familiar from classical physics: position, momentum, energy, angular momentum and so on. The dimension of the Hilbert space may be infinite, as it is for the space of square-integrable functions on a line, which is used to define the quantum physics of a continuous degree of freedom. Alternatively, the Hilbert space may be finite-dimensional, as occurs for spin degrees of freedom. Many treatments of the theory focus on the finite-dimensional case, as the mathematics involved is somewhat less demanding. Indeed, introductory physics texts on quantum mechanics often gloss over mathematical technicalities that arise for continuous-valued observables and infinite-dimensional Hilbert spaces, such as the distinction between bounded and unbounded operators; questions of convergence (whether the limit of a sequence of Hilbert-space elements also belongs to the Hilbert space), exotic possibilities for sets of eigenvalues, like Cantor sets; and so forth. These issues can be satisfactorily resolved using spectral theory; the present article will avoid them whenever possible.
Projective measurement
The eigenvectors of a von Neumann observable form an orthonormal basis for the Hilbert space, and each possible outcome of that measurement corresponds to one of the vectors comprising the basis. A density operator is a positive-semidefinite operator on the Hilbert space whose trace is equal to 1. For each measurement that can be defined, the probability distribution over the outcomes of that measurement can be computed from the density operator. The procedure for doing so is the Born rule, which states that
$P(x_i) = \operatorname{tr}(\Pi_i \rho),$
where $\rho$ is the density operator, and $\Pi_i$ is the projection operator onto the basis vector corresponding to the measurement outcome $x_i$. The average of the eigenvalues of a von Neumann observable, weighted by the Born rule probabilities, is the expectation value of that observable. For an observable $A$, the expectation value given a quantum state $\rho$ is
$\langle A \rangle = \operatorname{tr}(A \rho).$
A density operator that is a rank-1 projection is known as a pure quantum state, and all quantum states that are not pure are designated mixed. Pure states are also known as wavefunctions. Assigning a pure state to a quantum system implies certainty about the outcome of some measurement on that system (i.e., $P(x) = 1$ for some outcome $x$). Any mixed state can be written as a convex combination of pure states, though not in a unique way. The state space of a quantum system is the set of all states, pure and mixed, that can be assigned to it.
The Born rule associates a probability with each unit vector in the Hilbert space, in such a way that these probabilities sum to 1 for any set of unit vectors comprising an orthonormal basis. Moreover, the probability associated with a unit vector is a function of the density operator and the unit vector, and not of additional information like a choice of basis for that vector to be embedded in. Gleason's theorem establishes the converse: all assignments of probabilities to unit vectors (or, equivalently, to the operators that project onto them) that satisfy these conditions take the form of applying the Born rule to some density operator.
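As a worked illustration of the Born rule above (an editorial addition, not from the article), the following NumPy sketch computes outcome probabilities tr(Πρ) and an expectation value tr(Aρ) for a made-up qubit density operator.

```python
import numpy as np

# Orthonormal basis vectors of a 2-dim Hilbert space and their projectors.
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
projectors = [np.outer(v, v.conj()) for v in basis]

# An illustrative density operator: positive semi-definite with unit trace.
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])

probs = [np.trace(P @ rho).real for P in projectors]
print(probs, sum(probs))         # Born rule probabilities; they sum to 1

# Expectation value of an observable with eigenvalues +1 and -1.
A = projectors[0] - projectors[1]
print(np.trace(A @ rho).real)    # <A> = tr(A rho) = 0.7 - 0.3 = 0.4
```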
Generalized measurement (POVM)
In functional analysis and quantum measurement theory, a positive-operator-valued measure (POVM) is a measure whose values are positive semi-definite operators on a Hilbert space. POVMs are a generalisation of projection-valued measures (PVMs) and, correspondingly, quantum measurements described by POVMs are a generalisation of quantum measurement described by PVMs. In rough analogy, a POVM is to a PVM what a mixed state is to a pure state. Mixed states are needed to specify the state of a subsystem of a larger system (see Schrödinger–HJW theorem); analogously, POVMs are necessary to describe the effect on a subsystem of a projective measurement performed on a larger system. POVMs are the most general kind of measurement in quantum mechanics, and can also be used in quantum field theory. They are extensively used in the field of quantum information.
In the simplest case, of a POVM with a finite number of elements acting on a finite-dimensional Hilbert space, a POVM is a set of positive semi-definite matrices $\{F_i\}$ on a Hilbert space $\mathcal{H}$ that sum to the identity matrix,
$\sum_{i=1}^n F_i = I.$
In quantum mechanics, the POVM element $F_i$ is associated with the measurement outcome $i$, such that the probability of obtaining it when making a measurement on the quantum state $\rho$ is given by
$P(i) = \operatorname{tr}(\rho F_i),$
where $\operatorname{tr}$ is the trace operator. When the quantum state being measured is a pure state $|\psi\rangle$ this formula reduces to
$P(i) = \operatorname{tr}(\rho F_i) = \langle\psi| F_i |\psi\rangle.$
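To make the definition concrete, here is an editorial NumPy sketch of the well-known "trine" POVM on a qubit: three rank-1 elements built from states whose Bloch vectors point 120° apart. The particular states and the input state are illustrative choices.

```python
import numpy as np

# Trine POVM: F_k = (2/3) |psi_k><psi_k| for three symmetric qubit states.
thetas = [0.0, 2*np.pi/3, 4*np.pi/3]
states = [np.array([np.cos(t/2), np.sin(t/2)]) for t in thetas]
F = [(2/3) * np.outer(v, v.conj()) for v in states]

print(np.allclose(sum(F), np.eye(2)))        # elements sum to the identity

rho = np.outer(states[0], states[0].conj())  # a pure state to be measured
probs = [np.trace(rho @ Fk).real for Fk in F]
print(probs, sum(probs))                     # [2/3, 1/6, 1/6], summing to 1
```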
State change due to measurement
A measurement upon a quantum system will generally bring about a change of the quantum state of that system. Writing a POVM does not provide the complete information necessary to describe this state-change process. To remedy this, further information is specified by decomposing each POVM element into a product:
$F_i = A_i^\dagger A_i.$
The Kraus operators $A_i$, named for Karl Kraus, provide a specification of the state-change process. They are not necessarily self-adjoint, but the products $A_i^\dagger A_i$ are. If upon performing the measurement the outcome $i$ is obtained, then the initial state $\rho$ is updated to
$\rho \to \rho' = \frac{A_i \rho A_i^\dagger}{\operatorname{tr}(A_i \rho A_i^\dagger)}.$
An important special case is the Lüders rule, named for Gerhart Lüders. If the POVM is itself a PVM, then the Kraus operators can be taken to be the projectors $\Pi_i$ onto the eigenspaces of the von Neumann observable:
$\rho \to \rho' = \frac{\Pi_i \rho \Pi_i}{\operatorname{tr}(\Pi_i \rho)}.$
If the initial state $\rho$ is pure, and the projectors $\Pi_i$ have rank 1, they can be written as projectors onto the vectors $|\psi\rangle$ and $|i\rangle$, respectively. The formula simplifies thus to
$\rho = |\psi\rangle\langle\psi| \to \rho' = |i\rangle\langle i|.$
Lüders rule has historically been known as the "reduction of the wave packet" or the "collapse of the wavefunction". The pure state $|i\rangle$ implies a probability-one prediction for any von Neumann observable that has $|i\rangle$ as an eigenvector. Introductory texts on quantum theory often express this by saying that if a quantum measurement is repeated in quick succession, the same outcome will occur both times. This is an oversimplification, since the physical implementation of a quantum measurement may involve a process like the absorption of a photon; after the measurement, the photon does not exist to be measured again.
We can define a linear, trace-preserving, completely positive map, by summing over all the possible post-measurement states of a POVM without the normalisation:
$\rho \to \sum_i A_i \rho A_i^\dagger.$
It is an example of a quantum channel, and can be interpreted as expressing how a quantum state changes if a measurement is performed but the result of that measurement is lost.
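A minimal editorial sketch of the Lüders update and of the corresponding measure-and-forget channel, for an assumed qubit state and the computational-basis PVM:

```python
import numpy as np

P0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # projector onto |0>
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])   # projector onto |1>
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])

# Luders rule: suppose outcome 0 is observed; renormalize P0 rho P0.
p0 = np.trace(P0 @ rho).real
rho_post = P0 @ rho @ P0 / p0
print(rho_post)                  # the pure state |0><0|

# Measure-and-forget channel: sum over outcomes without renormalizing.
rho_forget = P0 @ rho @ P0 + P1 @ rho @ P1
print(rho_forget)                # diagonal of rho survives; coherences vanish
```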
Examples
The prototypical example of a finite-dimensional Hilbert space is a qubit, a quantum system whose Hilbert space is 2-dimensional. A pure state for a qubit can be written as a linear combination of two orthogonal basis states $|0\rangle$ and $|1\rangle$ with complex coefficients:
$|\psi\rangle = \alpha |0\rangle + \beta |1\rangle.$
A measurement in the $(|0\rangle, |1\rangle)$ basis will yield outcome $|0\rangle$ with probability $|\alpha|^2$ and outcome $|1\rangle$ with probability $|\beta|^2$, so by normalization,
$|\alpha|^2 + |\beta|^2 = 1.$
An arbitrary state for a qubit can be written as a linear combination of the Pauli matrices, which provide a basis for $2 \times 2$ self-adjoint matrices:
$\rho = \frac{1}{2}\left(I + r_x \sigma_x + r_y \sigma_y + r_z \sigma_z\right),$
where the real numbers $(r_x, r_y, r_z)$ are the coordinates of a point within the unit ball and
$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$
POVM elements can be represented likewise, though the trace of a POVM element is not fixed to equal 1. The Pauli matrices are traceless and orthogonal to one another with respect to the Hilbert–Schmidt inner product, and so the coordinates $(r_x, r_y, r_z)$ of the state are the expectation values of the three von Neumann measurements defined by the Pauli matrices. If such a measurement is applied to a qubit, then by the Lüders rule, the state will update to the eigenvector of that Pauli matrix corresponding to the measurement outcome. The eigenvectors of $\sigma_z$ are the basis states $|0\rangle$ and $|1\rangle$, and a measurement of $\sigma_z$ is often called a measurement in the "computational basis." After a measurement in the computational basis, the outcome of a $\sigma_x$ or $\sigma_y$ measurement is maximally uncertain.
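The correspondence between Bloch-ball coordinates and Pauli expectation values can be verified directly; in this editorial sketch the Bloch vector is an arbitrary made-up point inside the unit ball.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

r = np.array([0.3, -0.4, 0.5])   # |r| < 1, so rho is a valid mixed state
rho = 0.5 * (np.eye(2) + r[0]*sx + r[1]*sy + r[2]*sz)

# The coordinates are recovered as expectation values tr(rho sigma_i).
print([np.trace(rho @ s).real for s in (sx, sy, sz)])   # [0.3, -0.4, 0.5]
print(np.linalg.eigvalsh(rho))   # eigenvalues lie in [0, 1]
```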
A pair of qubits together form a system whose Hilbert space is 4-dimensional. One significant von Neumann measurement on this system is that defined by the Bell basis, a set of four maximally entangled states:
$|\Phi^+\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right), \quad |\Phi^-\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle - |11\rangle\right), \quad |\Psi^+\rangle = \frac{1}{\sqrt{2}}\left(|01\rangle + |10\rangle\right), \quad |\Psi^-\rangle = \frac{1}{\sqrt{2}}\left(|01\rangle - |10\rangle\right).$
A common and useful example of quantum mechanics applied to a continuous degree of freedom is the quantum harmonic oscillator. This system is defined by the Hamiltonian
$H = \frac{p^2}{2m} + \frac{1}{2} m \omega^2 x^2,$
where $m$ is the mass, and the momentum operator $p$ and the position operator $x$ are self-adjoint operators on the Hilbert space of square-integrable functions on the real line. The energy eigenstates solve the time-independent Schrödinger equation:
$H |\psi_n\rangle = E_n |\psi_n\rangle.$
These eigenvalues can be shown to be given by
$E_n = \hbar \omega \left(n + \frac{1}{2}\right),$
and these values give the possible numerical outcomes of an energy measurement upon the oscillator. The set of possible outcomes of a position measurement on a harmonic oscillator is continuous, and so predictions are stated in terms of a probability density function $P(x)$ that gives the probability of the measurement outcome lying in the infinitesimal interval from $x$ to $x + dx$.
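The eigenvalue formula can be reproduced numerically by truncating the Fock basis; the editorial sketch below works in natural units (ħ = m = ω = 1, an assumption for convenience) and builds x and p from ladder operators before diagonalizing H.

```python
import numpy as np

hbar = m = omega = 1.0                 # natural units, chosen for convenience
N = 80                                 # truncated Fock-space dimension
a = np.diag(np.sqrt(np.arange(1, N)), k=1)          # annihilation operator
x = np.sqrt(hbar / (2*m*omega)) * (a + a.conj().T)
p = 1j * np.sqrt(hbar*m*omega/2) * (a.conj().T - a)

H = p @ p / (2*m) + 0.5 * m * omega**2 * (x @ x)
print(np.linalg.eigvalsh(H)[:5])       # ~ [0.5, 1.5, 2.5, 3.5, 4.5]
```

The low-lying eigenvalues match ħω(n + 1/2); only states near the truncation edge are distorted.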
History of the measurement concept
The "old quantum theory"
The old quantum theory is a collection of results from the years 1900–1925 which predate modern quantum mechanics. The theory was never complete or self-consistent, but was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include Planck's calculation of the blackbody radiation spectrum, Einstein's explanation of the photoelectric effect, Einstein and Debye's work on the specific heat of solids, Bohr and van Leeuwen's proof that classical physics cannot account for diamagnetism, Bohr's model of the hydrogen atom and Arnold Sommerfeld's extension of the Bohr model to include relativistic effects.
The Stern–Gerlach experiment, proposed in 1921 and implemented in 1922, became a prototypical example of a quantum measurement having a discrete set of possible outcomes. In the original experiment, silver atoms were sent through a spatially varying magnetic field, which deflected them before they struck a detector screen, such as a glass slide. Particles with non-zero magnetic moment are deflected, due to the magnetic field gradient, from a straight path. The screen reveals discrete points of accumulation, rather than a continuous distribution, owing to the particles' quantized spin.
Transition to the "new" quantum theory
A 1925 paper by Heisenberg, known in English as "Quantum theoretical re-interpretation of kinematic and mechanical relations", marked a pivotal moment in the maturation of quantum physics. Heisenberg sought to develop a theory of atomic phenomena that relied only on "observable" quantities. At the time, and in contrast with the later standard presentation of quantum mechanics, Heisenberg did not regard the position of an electron bound within an atom as "observable". Instead, his principal quantities of interest were the frequencies of light emitted or absorbed by atoms.
The uncertainty principle dates to this period. It is frequently attributed to Heisenberg, who introduced the concept in analyzing a thought experiment where one attempts to measure an electron's position and momentum simultaneously. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position-momentum uncertainty principle is due to Kennard, Pauli, and Weyl, and its generalization to arbitrary pairs of noncommuting observables is due to Robertson and Schrödinger.
Writing $x$ and $p$ for the self-adjoint operators representing position and momentum respectively, a standard deviation of position can be defined as
$\sigma_x = \sqrt{\langle x^2 \rangle - \langle x \rangle^2},$
and likewise for the momentum:
$\sigma_p = \sqrt{\langle p^2 \rangle - \langle p \rangle^2}.$
The Kennard–Pauli–Weyl uncertainty relation is
$\sigma_x \sigma_p \geq \frac{\hbar}{2}.$
This inequality means that no preparation of a quantum particle can imply simultaneously precise predictions for a measurement of position and for a measurement of momentum. The Robertson inequality generalizes this to the case of an arbitrary pair of self-adjoint operators $A$ and $B$. The commutator of these two operators is
$[A, B] = AB - BA,$
and this provides the lower bound on the product of standard deviations:
$\sigma_A \sigma_B \geq \frac{1}{2} \left| \langle [A, B] \rangle \right|.$
Substituting in the canonical commutation relation $[x, p] = i\hbar$, an expression first postulated by Max Born in 1925, recovers the Kennard–Pauli–Weyl statement of the uncertainty principle.
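The Robertson bound can be spot-checked numerically; this editorial sketch draws a random pure qubit state and compares σ_A σ_B with |⟨[A, B]⟩|/2 for the Pauli observables σ_x and σ_y (the state and seed are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(1)
psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi /= np.linalg.norm(psi)            # a random pure qubit state

A = np.array([[0, 1], [1, 0]], dtype=complex)    # sigma_x
B = np.array([[0, -1j], [1j, 0]])                # sigma_y

def stdev(O, psi):
    mean = (psi.conj() @ O @ psi).real
    mean_sq = (psi.conj() @ O @ O @ psi).real
    return np.sqrt(mean_sq - mean**2)

comm = A @ B - B @ A
lhs = stdev(A, psi) * stdev(B, psi)
rhs = 0.5 * abs(psi.conj() @ comm @ psi)
print(lhs, rhs, lhs >= rhs - 1e-12)   # the Robertson inequality holds
```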
From uncertainty to no-hidden-variables
The existence of the uncertainty principle naturally raises the question of whether quantum mechanics can be understood as an approximation to a more exact theory. Do there exist "hidden variables", more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory can provide? A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics.
Bell published the theorem now known by his name in 1964, investigating more deeply a thought experiment originally proposed in 1935 by Einstein, Podolsky and Rosen. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. If a Bell test is performed in a laboratory and the results are not thus constrained, then they are inconsistent with the hypothesis that local hidden variables exist. Such results would support the position that there is no way to explain the phenomena of quantum mechanics in terms of a more fundamental description of nature that is more in line with the rules of classical physics. Many types of Bell test have been performed in physics laboratories, often with the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell tests". To date, Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems behave.
Quantum systems as measuring devices
The Robertson–Schrödinger uncertainty principle establishes that when two observables do not commute, there is a tradeoff in predictability between them. The Wigner–Araki–Yanase theorem demonstrates another consequence of non-commutativity: the presence of a conservation law limits the accuracy with which observables that fail to commute with the conserved quantity can be measured. Further investigation in this line led to the formulation of the Wigner–Yanase skew information.
Historically, experiments in quantum physics have often been described in semiclassical terms. For example, the spin of an atom in a Stern–Gerlach experiment might be treated as a quantum degree of freedom, while the atom is regarded as moving through a magnetic field described by the classical theory of Maxwell's equations. But the devices used to build the experimental apparatus are themselves physical systems, and so quantum mechanics should be applicable to them as well. Beginning in the 1950s, Rosenfeld, von Weizsäcker and others tried to develop consistency conditions that expressed when a quantum-mechanical system could be treated as a measuring apparatus. One proposal for a criterion regarding when a system used as part of a measuring device can be modeled semiclassically relies on the Wigner function, a quasiprobability distribution that can be treated as a probability distribution on phase space in those cases where it is everywhere non-negative.
Decoherence
A quantum state for an imperfectly isolated system will generally evolve to be entangled with the quantum state for the environment. Consequently, even if the system's initial state is pure, the state at a later time, found by taking the partial trace of the joint system-environment state, will be mixed. This phenomenon of entanglement produced by system-environment interactions tends to obscure the more exotic features of quantum mechanics that the system could in principle manifest. Quantum decoherence, as this effect is known, was first studied in detail during the 1970s. (Earlier investigations into how classical physics might be obtained as a limit of quantum mechanics had explored the subject of imperfectly isolated systems, but the role of entanglement was not fully appreciated.) A significant portion of the effort involved in quantum computing is to avoid the deleterious effects of decoherence.
To illustrate, let $\rho_S$ denote the initial state of the system, $\rho_E$ the initial state of the environment and $H$ the Hamiltonian specifying the system-environment interaction. The density operator $\rho_E$ can be diagonalized and written as a linear combination of the projectors onto its eigenvectors:
$\rho_E = \sum_i p_i |\psi_i\rangle\langle\psi_i|.$
Expressing time evolution for a duration $t$ by the unitary operator $U = e^{-iHt/\hbar}$, the state for the system after this evolution is
$\rho_S' = \operatorname{tr}_E \, U \left(\rho_S \otimes \rho_E\right) U^\dagger,$
which evaluates to
$\rho_S' = \sum_{ij} \sqrt{p_i} \langle\psi_j| U |\psi_i\rangle \, \rho_S \, \sqrt{p_i} \langle\psi_i| U^\dagger |\psi_j\rangle.$
The quantities $\sqrt{p_i} \langle\psi_j| U |\psi_i\rangle$ surrounding $\rho_S$ can be identified as Kraus operators, and so this defines a quantum channel.
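A minimal editorial decoherence sketch: a system qubit in the superposition |+⟩ becomes entangled with an environment qubit through a CNOT-type interaction (an interaction chosen here purely for illustration), and the partial trace over the environment leaves a fully mixed, diagonal system state.

```python
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)
env0 = np.array([1.0, 0.0])
psi = np.kron(plus, env0)                    # joint state |+>|0>

# CNOT with the system as control: the environment "records" the system.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
out = U @ psi
rho = np.outer(out, out.conj())

# Partial trace over the environment (the second tensor factor).
rho_S = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_S)                                 # diag(0.5, 0.5): coherences gone
```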
Specifying a form of interaction between system and environment can establish a set of "pointer states," states for the system that are (approximately) stable, apart from overall phase factors, with respect to environmental fluctuations. A set of pointer states defines a preferred orthonormal basis for the system's Hilbert space.
Quantum information and computation
Quantum information science studies how information science and its application as technology depend on quantum-mechanical phenomena. Understanding measurement in quantum physics is important for this field in many ways, some of which are briefly surveyed here.
Measurement, entropy, and distinguishability
The von Neumann entropy is a measure of the statistical uncertainty represented by a quantum state. For a density matrix $\rho$, the von Neumann entropy is
$S(\rho) = -\operatorname{tr}(\rho \log \rho);$
writing $\rho$ in terms of its basis of eigenvectors,
$\rho = \sum_i \lambda_i |i\rangle\langle i|,$
the von Neumann entropy is
$S(\rho) = -\sum_i \lambda_i \log \lambda_i.$
This is the Shannon entropy of the set of eigenvalues $\{\lambda_i\}$ interpreted as a probability distribution, and so the von Neumann entropy is the Shannon entropy of the random variable defined by measuring in the eigenbasis of $\rho$. Consequently, the von Neumann entropy vanishes when $\rho$ is pure. The von Neumann entropy of $\rho$ can equivalently be characterized as the minimum Shannon entropy for a measurement given the quantum state $\rho$, with the minimization over all POVMs with rank-1 elements.
Many other quantities used in quantum information theory also find motivation and justification in terms of measurements. For example, the trace distance between quantum states is equal to the largest difference in probability that those two quantum states can imply for a measurement outcome:
$\frac{1}{2} \|\rho - \sigma\|_1 = \max_{0 \leq E \leq I} \left[\operatorname{tr}(E \rho) - \operatorname{tr}(E \sigma)\right].$
Similarly, the fidelity of two quantum states, defined by
$F(\rho, \sigma) = \left(\operatorname{tr} \sqrt{\sqrt{\rho} \sigma \sqrt{\rho}}\right)^2,$
expresses the probability that one state will pass a test for identifying a successful preparation of the other. The trace distance provides bounds on the fidelity via the Fuchs–van de Graaf inequalities:
$1 - \sqrt{F(\rho, \sigma)} \leq \frac{1}{2} \|\rho - \sigma\|_1 \leq \sqrt{1 - F(\rho, \sigma)}.$
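This editorial sketch computes the von Neumann entropy, the trace distance, and the fidelity for two made-up qubit states and checks the Fuchs–van de Graaf inequalities; it assumes SciPy's matrix square root is available.

```python
import numpy as np
from scipy.linalg import sqrtm

def entropy(rho):
    """Von Neumann entropy S(rho) = -tr(rho log rho), in nats."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]                   # 0 log 0 = 0 by convention
    return float(-np.sum(lam * np.log(lam)))

def fidelity(rho, sigma):
    """F(rho, sigma) = (tr sqrt(sqrt(rho) sigma sqrt(rho)))**2."""
    s = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2)

rho = np.diag([0.5, 0.5])                    # maximally mixed: entropy ln 2
sigma = np.diag([0.9, 0.1])                  # a less mixed state

print(entropy(rho), entropy(sigma))
T = 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()  # trace distance
F = fidelity(rho, sigma)
print(1 - np.sqrt(F) <= T <= np.sqrt(1 - F))  # Fuchs-van de Graaf: True
```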
Quantum circuits
Quantum circuits are a model for quantum computation in which a computation is a sequence of quantum gates followed by measurements. The gates are reversible transformations on a quantum mechanical analog of an n-bit register. This analogous structure is referred to as an n-qubit register. Measurements, drawn on a circuit diagram as stylized pointer dials, indicate where and how a result is obtained from the quantum computer after the steps of the computation are executed. Without loss of generality, one can work with the standard circuit model, in which the set of gates are single-qubit unitary transformations and controlled NOT gates on pairs of qubits, and all measurements are in the computational basis.
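As an editorial sketch of the circuit model, the statevector simulation below applies a Hadamard and then a CNOT to |00⟩ and samples measurement outcomes in the computational basis; the most-significant-qubit-first ordering is an assumed convention.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi = np.array([1, 0, 0, 0], dtype=complex)  # |00>
psi = CNOT @ np.kron(H, np.eye(2)) @ psi     # Bell state (|00>+|11>)/sqrt(2)

probs = np.abs(psi) ** 2                     # Born rule, computational basis
rng = np.random.default_rng(0)
print(probs)
print(rng.choice(['00', '01', '10', '11'], p=probs, size=8))  # only '00'/'11'
```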
Measurement-based quantum computation
Measurement-based quantum computation (MBQC) is a model of quantum computing in which the answer to a question is, informally speaking, created in the act of measuring the physical system that serves as the computer.
Quantum tomography
Quantum state tomography is a process by which, given a set of data representing the results of quantum measurements, a quantum state consistent with those measurement results is computed. It is named by analogy with tomography, the reconstruction of three-dimensional images from slices taken through them, as in a CT scan. Tomography of quantum states can be extended to tomography of quantum channels and even of measurements.
Quantum metrology
Quantum metrology is the use of quantum physics to aid the measurement of quantities that, generally, had meaning in classical physics, such as exploiting quantum effects to increase the precision with which a length can be measured. A celebrated example is the introduction of squeezed light into the LIGO experiment, which increased its sensitivity to gravitational waves.
Laboratory implementations
The range of physical procedures to which the mathematics of quantum measurement can be applied is very broad. In the early years of the subject, laboratory procedures involved the recording of spectral lines, the darkening of photographic film, the observation of scintillations, finding tracks in cloud chambers, and hearing clicks from Geiger counters. Language from this era persists, such as the description of measurement outcomes in the abstract as "detector clicks".
The double-slit experiment is a prototypical illustration of quantum interference, typically described using electrons or photons. The first interference experiment to be carried out in a regime where both wave-like and particle-like aspects of photon behavior are significant was G. I. Taylor's test in 1909. Taylor used screens of smoked glass to attenuate the light passing through his apparatus, to the extent that, in modern language, only one photon would be illuminating the interferometer slits at a time. He recorded the interference patterns on photographic plates; for the dimmest light, the exposure time required was roughly three months. In 1974, the Italian physicists Pier Giorgio Merli, Gian Franco Missiroli, and Giulio Pozzi implemented the double-slit experiment using single electrons and a television tube. A quarter-century later, a team at the University of Vienna performed an interference experiment with buckyballs, in which the buckyballs that passed through the interferometer were ionized by a laser, and the ions then induced the emission of electrons, emissions which were in turn amplified and detected by an electron multiplier.
Modern quantum optics experiments can employ single-photon detectors. For example, in the "BIG Bell test" of 2018, several of the laboratory setups used single-photon avalanche diodes. Another laboratory setup used superconducting qubits. The standard method for performing measurements upon superconducting qubits is to couple a qubit with a resonator in such a way that the characteristic frequency of the resonator shifts according to the state for the qubit, and detecting this shift by observing how the resonator reacts to a probe signal.
Interpretations of quantum mechanics
Despite the consensus among scientists that quantum physics is in practice a successful theory, disagreements persist on a more philosophical level. Many debates in the area known as quantum foundations concern the role of measurement in quantum mechanics. Recurring questions include which interpretation of probability theory is best suited for the probabilities calculated from the Born rule; and whether the apparent randomness of quantum measurement outcomes is fundamental, or a consequence of a deeper deterministic process. Worldviews that present answers to questions like these are known as "interpretations" of quantum mechanics; as the physicist N. David Mermin once quipped, "New interpretations appear every year. None ever disappear."
A central concern within quantum foundations is the "quantum measurement problem," though how this problem is delimited, and whether it should be counted as one question or multiple separate issues, are contested topics. Of primary interest is the seeming disparity between apparently distinct types of time evolution. Von Neumann declared that quantum mechanics contains "two fundamentally different types" of quantum-state change. First, there are those changes involving a measurement process, and second, there is unitary time evolution in the absence of measurement. The former is stochastic and discontinuous, writes von Neumann, and the latter deterministic and continuous. This dichotomy has set the tone for much later debate. Some interpretations of quantum mechanics find the reliance upon two different types of time evolution distasteful and regard the ambiguity of when to invoke one or the other as a deficiency of the way quantum theory was historically presented. To bolster these interpretations, their proponents have worked to derive ways of regarding "measurement" as a secondary concept and deducing the seemingly stochastic effect of measurement processes as approximations to more fundamental deterministic dynamics. However, consensus has not been achieved among proponents of the correct way to implement this program, and in particular how to justify the use of the Born rule to calculate probabilities. Other interpretations regard quantum states as statistical information about quantum systems, thus asserting that abrupt and discontinuous changes of quantum states are not problematic, simply reflecting updates of the available information. Of this line of thought, Bell asked, "Whose information? Information about what?" Answers to these questions vary among proponents of the informationally-oriented interpretations.
See also
Einstein's thought experiments
Holevo's theorem
Quantum error correction
Quantum limit
Quantum logic
Quantum Zeno effect
Schrödinger's cat
SIC-POVM
Notes
References
Further reading
Philosophy of physics | Measurement in quantum mechanics | [
"Physics"
] | 5,548 | [
"Philosophy of physics",
"Quantum measurement",
"Applied and interdisciplinary physics",
"Quantum mechanics"
] |
573,880 | https://en.wikipedia.org/wiki/Fine-tuned%20universe | The fine-tuned universe is the hypothesis that, because "life as we know it" could not exist if the constants of nature – such as the electron charge, the gravitational constant and others – had been even slightly different, the universe must be tuned specifically for life. In practice, this hypothesis is formulated in terms of dimensionless physical constants.
History
In 1913, chemist Lawrence Joseph Henderson wrote The Fitness of the Environment, one of the first books to explore fine tuning in the universe. Henderson discusses the importance of water and the environment to living things, pointing out that life as it exists on Earth depends entirely on Earth's very specific environmental conditions, especially the prevalence and properties of water.
In 1961, physicist Robert H. Dicke argued that certain forces in physics, such as gravity and electromagnetism, must be perfectly fine-tuned for life to exist in the universe. Fred Hoyle also argued for a fine-tuned universe in his 1983 book The Intelligent Universe, writing, "The list of anthropic properties, apparent accidents of a non-biological nature without which carbon-based and hence human life could not exist, is large and impressive".
Belief in the fine-tuned universe led to the expectation that the Large Hadron Collider would produce evidence of physics beyond the Standard Model, such as supersymmetry, but by 2012 it had not produced evidence for supersymmetry at the energy scales it was able to probe.
Motivation
Physicist Paul Davies said: "There is now broad agreement among physicists and cosmologists that the Universe is in several respects 'fine-tuned' for life. But the conclusion is not so much that the Universe is fine-tuned for life; rather it is fine-tuned for the building blocks and environments that life requires". He also said that "'anthropic' reasoning fails to distinguish between minimally biophilic universes, in which life is permitted, but only marginally possible, and optimally biophilic universes, in which life flourishes because biogenesis occurs frequently". Among scientists who find the evidence persuasive, a variety of natural explanations have been proposed, such as the existence of multiple universes introducing a survivorship bias under the anthropic principle.
The premise of the fine-tuned universe assertion is that a small change in several of the physical constants would make the universe radically different. Stephen Hawking observed: "The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. ... The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life".
For example, if the strong nuclear force were 2% stronger than it is (i.e. if the coupling constant representing its strength were 2% larger) while the other constants were left unchanged, diprotons would be stable; according to Davies, hydrogen would fuse into them instead of deuterium and helium. This would drastically alter the physics of stars, and presumably preclude the existence of life similar to what we observe on Earth. The diproton's existence would short-circuit the slow fusion of hydrogen into deuterium. Hydrogen would fuse so easily that it is likely that all the universe's hydrogen would be consumed in the first few minutes after the Big Bang. This "diproton argument" is disputed by other physicists, who calculate that as long as the increase in strength is less than 50%, stellar fusion could occur despite the existence of stable diprotons.
The precise formulation of the idea is made difficult by the fact that it is not yet known how many independent physical constants there are. The standard model of particle physics has 25 freely adjustable parameters and general relativity has one more, the cosmological constant, which is known to be nonzero but profoundly small in value. Because physicists have not developed an empirically successful theory of quantum gravity, there is no known way to combine quantum mechanics, on which the standard model depends, and general relativity.
Without knowledge of this more complete theory suspected to underlie the standard model, it is impossible to definitively count the number of truly independent physical constants. In some candidate theories, the number of independent physical constants may be as small as one. For example, the cosmological constant may be a fundamental constant but attempts have also been made to calculate it from other constants, and according to the author of one such calculation, "the small value of the cosmological constant is telling us that a remarkably precise and totally unexpected relation exists among all the parameters of the Standard Model of particle physics, the bare cosmological constant and unknown physics".
Examples
Martin Rees formulates the fine-tuning of the universe in terms of the following six dimensionless physical constants.
N, the ratio of the electromagnetic force to the gravitational force between a pair of protons, is approximately 10^36. According to Rees, if it were significantly smaller, only a small and short-lived universe could exist. If it were large enough, protons would repel one another so violently that larger atoms would never be generated.
Epsilon (ε), a measure of the nuclear efficiency of fusion from hydrogen to helium, is 0.007: when four nucleons fuse into helium, 0.007 (0.7%) of their mass is converted to energy (a rough numerical check of this figure is sketched after this list). The value of ε is in part determined by the strength of the strong nuclear force. If ε were 0.006, a proton could not bond to a neutron, only hydrogen could exist, and complex chemistry would be impossible. According to Rees, if it were above 0.008, no hydrogen would exist, as all the hydrogen would have been fused shortly after the Big Bang. Other physicists disagree, calculating that substantial hydrogen remains as long as the strong force coupling constant increases by less than about 50%.
Omega (Ω), commonly known as the density parameter, is the relative importance of gravity and expansion energy in the universe. It is the ratio of the mass density of the universe to the "critical density" and is approximately 1. If gravity were too strong compared with dark energy and the initial cosmic expansion rate, the universe would have collapsed before life could have evolved. If gravity were too weak, no stars would have formed.
Lambda (Λ), commonly known as the cosmological constant, describes the ratio of the density of dark energy to the critical energy density of the universe, given certain reasonable assumptions such as that dark energy density is a constant. In terms of Planck units, and as a natural dimensionless value, Λ is on the order of 10^−122. This is so small that it has no significant effect on cosmic structures that are smaller than a billion light-years across. A slightly larger value of the cosmological constant would have caused space to expand rapidly enough that stars and other astronomical structures would not be able to form.
Q, the ratio of the gravitational energy required to pull a large galaxy apart to the energy equivalent of its mass, is around 10^−5. If it is too small, no stars can form. If it is too large, no stars can survive because the universe is too violent, according to Rees.
D, the number of spatial dimensions in spacetime, is 3. Rees claims that life could not exist if there were 2 or 4 spatial dimensions. Rees argues this does not preclude the existence of ten-dimensional strings.
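A rough numerical check of the ε ≈ 0.007 figure from the list above (an editorial addition; the atomic masses are approximate tabulated values, assumed here for illustration):

```python
# Mass released when four hydrogen nuclei end up as one helium-4 nucleus,
# as a fraction of the initial mass (masses in unified atomic mass units).
m_H, m_He4 = 1.00783, 4.00260    # approximate atomic masses
eps = (4 * m_H - m_He4) / (4 * m_H)
print(round(eps, 4))             # ~0.0071: about 0.7% becomes energy
```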
Max Tegmark argued that if there is more than one time dimension, then physical systems' behavior could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, protons and electrons would be unstable and could decay into particles having greater mass than themselves. This is not a problem if the particles have a sufficiently low temperature.
Carbon and oxygen
An older example is the Hoyle state, the third-lowest energy state of the carbon-12 nucleus, with an energy of 7.656 MeV above the ground level. According to one calculation, if the state's energy level were lower than 7.3 or greater than 7.9 MeV, insufficient carbon would exist to support life. To explain the universe's abundance of carbon, the Hoyle state must be further tuned to a value between 7.596 and 7.716 MeV. A similar calculation, focusing on the underlying fundamental constants that give rise to various energy levels, concludes that the strong force must be tuned to a precision of at least 0.5%, and the electromagnetic force to a precision of at least 4%, to prevent either carbon production or oxygen production from dropping significantly.
Explanations
Some explanations of fine-tuning are naturalistic. First, the fine-tuning might be an illusion: more fundamental physics may explain the apparent fine-tuning in physical parameters in the current understanding by constraining the values those parameters are likely to take. As Lawrence Krauss put it, "certain quantities have seemed inexplicable and fine-tuned, and once we understand them, they don't seem to be so fine-tuned. We have to have some historical perspective". Some argue it is possible that a final fundamental theory of everything will explain the underlying causes of the apparent fine-tuning in every parameter.
Still, as modern cosmology developed, various hypotheses not presuming hidden order have been proposed. One is a multiverse, where fundamental physical constants are postulated to have different values outside of the known universe. On this hypothesis, separate parts of reality would have wildly different characteristics. In such scenarios, the appearance of fine-tuning is explained as a consequence of the weak anthropic principle and selection bias, specifically survivorship bias. Only those universes with fundamental constants hospitable to life, such as on Earth, could contain life forms capable of observing the universe who can contemplate the question of fine-tuning. Zhi-Wei Wang and Samuel L. Braunstein argue that the apparent fine-tuning of fundamental constants could be due to the lack of understanding of these constants.
Multiverse
If the universe is just one of many and possibly infinite universes, each with different physical phenomena and constants, it is unsurprising that there is a universe hospitable to intelligent life. Some versions of the multiverse hypothesis therefore provide a simple explanation for any fine-tuning, while the analysis of Wang and Braunstein challenges the view that this universe is unique in its ability to support life.
The multiverse idea has led to considerable research into the anthropic principle and has been of particular interest to particle physicists because theories of everything do apparently generate large numbers of universes in which the physical constants vary widely. Although there is no evidence for the existence of a multiverse, some versions of the theory make predictions of which some researchers studying M-theory and gravity leaks hope to see some evidence soon. According to Laura Mersini-Houghton, the WMAP cold spot could provide testable empirical evidence of a parallel universe. Variants of this approach include Lee Smolin's notion of cosmological natural selection, the ekpyrotic universe, and the bubble universe theory.
It has been suggested that invoking the multiverse to explain fine-tuning is a form of the inverse gambler's fallacy.
Top-down cosmology
Stephen Hawking and Thomas Hertog proposed that the universe's initial conditions consisted of a superposition of many possible initial conditions, only a small fraction of which contributed to the conditions seen today. According to their theory, the universe's "fine-tuned" physical constants are inevitable, because the universe "selects" only those histories that led to the present conditions. In this way, top-down cosmology provides an anthropic explanation for why this universe allows matter and life without invoking the multiverse.
Carbon chauvinism
Some forms of fine-tuning arguments about the formation of life assume that only carbon-based life forms are possible, an assumption sometimes called carbon chauvinism. Conceptually, alternative biochemistry or other forms of life are possible.
Simulation hypothesis
The simulation hypothesis holds that the universe is fine-tuned simply because the more technologically advanced simulation operator(s) programmed it that way.
No improbability
Graham Priest, Mark Colyvan, Jay L. Garfield, and others have argued against the presupposition that "the laws of physics or the boundary conditions of the universe could have been other than they are".
Religious apologetics
Some scientists, theologians, and philosophers, as well as certain religious groups, argue that providence or creation are responsible for fine-tuning. Christian philosopher Alvin Plantinga argues that random chance, applied to a single and sole universe, only raises the question as to why this universe could be so "lucky" as to have precise conditions that support life at least at some place (the Earth) and time (within millions of years of the present).
William Lane Craig, a philosopher and Christian apologist, cites this fine-tuning of the universe as evidence for the existence of God or some form of intelligence capable of manipulating (or designing) the basic physics that governs the universe. Philosopher and theologian Richard Swinburne reaches the design conclusion using Bayesian probability. Scientist and theologian Alister McGrath observed that the fine-tuning of carbon is even responsible for nature's ability to tune itself to any degree.
The entire biological evolutionary process depends upon the unusual chemistry of carbon, which allows it to bond to itself, as well as other elements, creating highly complex molecules that are stable over prevailing terrestrial temperatures, and are capable of conveying genetic information (especially DNA). [...] Whereas it might be argued that nature creates its own fine-tuning, this can only be done if the primordial constituents of the universe are such that an evolutionary process can be initiated. The unique chemistry of carbon is the ultimate foundation of the capacity of nature to tune itself.
Theoretical physicist and Anglican priest John Polkinghorne stated: "Anthropic fine tuning is too remarkable to be dismissed as just a happy accident". Theologian and philosopher Andrew Loke argues that there are only five possible categories of hypotheses concerning fine-tuning and order: (i) chance, (ii) regularity, (iii) combinations of regularity and chance, (iv) uncaused, and (v) design, and that only design gives an exclusively logical explanation of order in the universe. He argues that the Kalam Cosmological Argument strengthens the teleological argument by answering the question "Who designed the Designer?". Creationist Hugh Ross advances a number of fine-tuning hypotheses. One is the existence of what Ross calls "vital poisons", which are elemental nutrients that are harmful in large quantities but essential for animal life in smaller quantities.
Robin Collins argues that the universe is fine-tuned for scientific discoverability, and that this fine-tuning cannot be explained by the multiverse hypothesis. According to Collins, the universe's laws, fundamental parameters, and initial conditions must be just right for the universe to be as discoverable as ours. According to Collins, examples of fine-tuning for discoverability include:
The fine-structure constant is fine-tuned for energy usage. If it were stronger, there would be no practical way to harness energy. If it were weaker, fire would burn through wood too quickly and energy usage would be impractical.
The baryon-to-photon ratio allowed for the discovery of the big bang via the cosmic microwave background.
Many things in particle physics are within a narrow range required for discoverability, such as the mass of the Higgs boson.
See also
Fine-tuning (disambiguation)
God of the gaps
References
Further reading
John D. Barrow (2003). The Constants of Nature, Pantheon Books.
Bernard Carr, ed. (2007). Universe or Multiverse? Cambridge University Press.
Mark Colyvan, Jay L. Garfield, Graham Priest (2005). "Problems with the Argument from Fine Tuning". Synthese 145: 325–38.
Paul Davies (1982). The Accidental Universe, Cambridge University Press.
Paul Davies (2007). Cosmic Jackpot: Why Our Universe Is Just Right for Life, Houghton Mifflin Harcourt. Reprinted as: The Goldilocks Enigma: Why Is the Universe Just Right for Life?, 2008, Mariner Books.
Geraint F. Lewis and Luke A. Barnes (2016). A Fortunate Universe: Life in a finely tuned cosmos, Cambridge University Press.
Alister McGrath (2009). A Fine-Tuned Universe: The Quest for God in Science and Theology, Westminster John Knox Press.
Timothy J. McGrew, Lydia McGrew, Eric Vestrup (2001). "Probabilities and the Fine-Tuning Argument: A Sceptical View". Mind 110: 1027–37.
Simon Conway Morris (2003). Life's Solution: Inevitable Humans in a Lonely Universe. Cambridge Univ. Press.
Martin Rees (1999). Just Six Numbers, HarperCollins Publishers.
Victor J. Stenger (2011). The Fallacy of Fine-Tuning: Why the Universe Is Not Designed for Us. Prometheus Books.
Peter Ward and Donald Brownlee (2000). Rare Earth: Why Complex Life is Uncommon in the Universe. Springer Verlag.
Jeffrey Koperski (2015). The Physics of Theism: God, Physics, and the Philosophy of Science, John Wiley & Sons.
External links
Defense of fine-tuning
Anil Ananthaswamy: Is the Universe Fine-tuned for Life?
Francis Collins, Why I'm a man of science-and faith. National Geographic article.
Custom Universe, Documentary of fine-tuning with scientific experts.
Hugh Ross: Evidence for the Fine Tuning of the Universe
Interview with Charles Townes discussing science and religion.
Criticism of fine tuning
Bibliography of online Links to criticisms of the Fine-Tuning Argument. Secular Web.
Victor Stenger:
"A Case Against the Fine-Tuning of the Cosmos"
"Does the Cosmos Show Evidence of Purpose?"
"Is the Universe fine-tuned for us?"
Elliott Sober, "The Design Argument." An earlier version appeared in the Blackwell Companion to the Philosophy of Religion (2004).
Anthropic principle
Astronomical hypotheses
Intelligent design
Philosophical arguments
Physical cosmology | Fine-tuned universe | [
"Physics",
"Astronomy",
"Engineering"
] | 3,814 | [
"Astronomical hypotheses",
"Astronomical sub-disciplines",
"Philosophy of astronomy",
"Intelligent design",
"Theoretical physics",
"Astrophysics",
"Astronomical controversies",
"Design",
"Anthropic principle",
"Physical cosmology"
] |
573,943 | https://en.wikipedia.org/wiki/General%20protection%20fault | A general protection fault (GPF) in the x86 instruction set architectures (ISAs) is a fault (a type of interrupt) initiated by ISA-defined protection mechanisms in response to an access violation caused by some running code, either in the kernel or a user program. The mechanism is first described in Intel manuals and datasheets for the Intel 80286 CPU, which was introduced in 1983; it is also described in section 9.8.13 in the Intel 80386 programmer's reference manual from 1986. A general protection fault is implemented as an interrupt (vector number 13 (0Dh)). Some operating systems may also classify some exceptions not related to access violations, such as illegal opcode exceptions, as general protection faults, even though they have nothing to do with memory protection.

If a CPU detects a protection violation, it stops executing the code and sends a GPF interrupt. In most cases, the operating system removes the failing process from the execution queue, signals the user, and continues executing other processes. If, however, the operating system fails to catch the general protection fault, i.e. another protection violation occurs before the operating system returns from the previous GPF interrupt, the CPU signals a double fault, stopping the operating system. If yet another failure (triple fault) occurs, the CPU is unable to recover; since 80286, the CPU enters a special halt state called "Shutdown", which can only be exited through a hardware reset.

The IBM PC AT, the first PC-compatible system to contain an 80286, has hardware that detects the Shutdown state and automatically resets the CPU when it occurs. All descendants of the PC AT do the same, so in a PC, a triple fault causes an immediate system reset.
Specific behavior
Microsoft Windows
In Microsoft Windows, the general protection fault presents with varied language, depending on the product version.
In Windows 95, 98 and Me, there is an alternate error message, used mostly with Windows 3.x programs: "An error has occurred in your program. To keep working anyway, click Ignore and save your work in a new file. To quit this program, click Close. You will lose information you entered since your last save." Clicking "Close" results in the version-specific general protection fault message. "Ignore" sometimes does this too.
Unix
In Linux and other Unices, the errors are reported separately (e.g. segmentation fault for memory errors).
Memory errors
In memory errors, the faulting program accesses memory that it should not access. Examples include the following (a minimal demonstration appears after the list):
Attempting to write to a read-only portion of memory
Attempting to execute bytes in memory which are not designated as instructions
Attempting to read as data bytes in memory which are designated as instructions
Other miscellaneous conflicts between the designation of a part of memory and its use
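As an illustrative sketch (not part of the original article; the function name is ours), the following Python program deliberately performs an invalid memory write via ctypes. On Windows the operating system reports it as an access violation, the modern face of the general protection fault; on Linux it is delivered as a segmentation fault, consistent with the Unix behavior described below.

# Hypothetical demonstration: running this will crash the interpreter.
# The write below targets address 0, which no user process may access,
# so the CPU faults and the OS terminates the process (access violation
# on Windows; segmentation fault / SIGSEGV on Linux).
import ctypes

def trigger_invalid_write():
    ctypes.memset(0, 0, 1)   # attempt to write one byte at address 0

if __name__ == "__main__":
    trigger_invalid_write()  # never returns normally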
However, many modern operating systems implement their memory access-control schemes via paging instead of segmentation, so it is often the case that invalid memory references in operating systems such as Windows are reported via page faults instead of general protection faults. Operating systems typically provide an abstraction layer (such as exception handling or signals) that hides whatever internal processor mechanism was used to raise a memory access error from a program, for the purposes of providing a standard interface for handling many different types of processor-generated error conditions.
In terms of the x86 architecture, general protection faults are specific to segmentation-based protection when it comes to memory accesses. However, general protection faults are still used to report other protection violations (aside from memory access violations) when paging is used, such as the use of instructions not accessible from the current privilege level (CPL).
While it is theoretically possible for an operating system to utilize both paging and segmentation, for the most part, common operating systems typically rely on paging for the bulk of their memory access control needs.
Privilege errors
There are some things on a computer which are reserved for the exclusive use of the operating system. If a program which is not part of the operating system attempts to use one of these features, it may cause a general protection fault.
Additionally, there are storage locations which are reserved both for the operating system and the processor itself. As a consequence of their reservation, they are read-only and an attempt to write data to them by an unprivileged program produces an error.
Technical causes for faults
General protection faults are raised by the processor when a protected instruction is encountered which exceeds the permission level of the currently executing task, either because a user-mode program is attempting a protected instruction, or because the operating system has issued a request which would put the processor into an undefined state.
General protection faults are caught and handled by modern operating systems. Generally, if the fault originated in a user-mode program, the user-mode program is terminated. If, however, the fault originated in a core system driver or the operating system itself, the operating system usually saves diagnostic information either to a file or to the screen and stops operating. It either restarts the computer or displays an error screen, such as a Blue Screen of Death or kernel panic.
Segment limits exceeded
Segment limits can be exceeded:
with code segment (CS), data segment (DS), or ES, FS, or GS (extra segment) registers; or
accessing descriptor tables such as the Global Descriptor Table (GDT), the Interrupt descriptor table (IDT) and the Local Descriptor Table (LDT).
Segment permissions violated
Segment permissions can be violated by:
jumping to non-executable segments
writing to code segments, or read only segments
reading execute-only segments
Segments illegally loaded
This can occur when:
a stack segment (SS) is loaded with a segment selector for a read only, executable, null segment, or segment with descriptor privilege not matching the current privilege in CS
a code segment (CS) loaded with a segment selector for a data, system, or null segment
SS, DS, ES, FS, or GS are segments loaded with a segment selector for a system segment
SS, DS, ES, FS, or GS are segments loaded with a segment selector for an execute-only code segment
accessing memory using DS, ES, FS, or GS registers, when they contain a null selector
Switching
Faults can occur in the task state segment (TSS) structure when:
switching to a busy task during a call or jump instruction
switching to an available task during an interrupt return (IRET) instruction
using a segment selector on a switch pointing to a TSS descriptor in the LDT
Miscellaneous
Other causes of general protection faults are:
attempting to access an interrupt/exception handler from virtual 8086 mode when the handler's code segment descriptor privilege level (DPL) is greater than zero
attempting to write a one into the reserved bits of CR4
attempting to execute privileged instructions when the current privilege level (CPL) is not zero
attempting to execute a single instruction with a length greater than 15 bytes (possibly by prepending the instruction with superfluous prefixes)
writing to a reserved bit in an MSR instruction
accessing a gate containing a null segment selector
executing a software interrupt when the CPL is greater than the DPL set for the interrupt gate
the segment selector in a call, interrupt or trap gate does not point to a code segment
violating privilege rules
enabling paging whilst disabling protection
referencing the interrupt descriptor table following an interrupt or exception that is not an interrupt, trap, or a task gate
Legacy SSE: Memory operand is not 16-byte aligned.
References
Further reading
Intel Architecture Software Developer's Manual–Volume 3: System Programming
Operating system technology
Computer errors | General protection fault | [
"Technology"
] | 1,590 | [
"Computer errors"
] |
574,024 | https://en.wikipedia.org/wiki/Hilbert%20transform | In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function of a real variable, u(t), and produces another function of a real variable, H(u)(t). The Hilbert transform is given by the Cauchy principal value of the convolution with the function 1/(πt) (see § Definition). The Hilbert transform has a particularly simple representation in the frequency domain: It imparts a phase shift of ±90° (π/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency (see § Relationship with the Fourier transform). The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal u(t). The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions.
Definition
The Hilbert transform of u can be thought of as the convolution of u(t) with the function h(t) = 1/(πt), known as the Cauchy kernel. Because 1/t is not integrable across t = 0, the integral defining the convolution does not always converge. Instead, the Hilbert transform is defined using the Cauchy principal value (denoted here by p.v.). Explicitly, the Hilbert transform of a function (or signal) u(t) is given by

H(u)(t) = \frac{1}{\pi} \, \operatorname{p.v.} \int_{-\infty}^{\infty} \frac{u(\tau)}{t - \tau} \, d\tau,

provided this integral exists as a principal value. This is precisely the convolution of u with the tempered distribution p.v. 1/(πt). Alternatively, by changing variables, the principal-value integral can be written explicitly as

H(u)(t) = \frac{1}{\pi} \lim_{\varepsilon \to 0^{+}} \int_{\varepsilon}^{\infty} \frac{u(t - \tau) - u(t + \tau)}{\tau} \, d\tau.

When the Hilbert transform is applied twice in succession to a function u, the result is

H(H(u))(t) = -u(t),

provided the integrals defining both iterations converge in a suitable sense. In particular, the inverse transform is H^{-1} = -H. This fact can most easily be seen by considering the effect of the Hilbert transform on the Fourier transform of u(t) (see below).
For an analytic function in the upper half-plane, the Hilbert transform describes the relationship between the real part and the imaginary part of the boundary values. That is, if f(z) is analytic in the upper half complex plane {z : Im z > 0}, and u(t) = Re f(t), then Im f(t) = H(u)(t) up to an additive constant, provided this Hilbert transform exists.
Notation
In signal processing the Hilbert transform of u(t) is commonly denoted by \hat{u}(t). However, in mathematics, this notation is already extensively used to denote the Fourier transform of u(t). Occasionally, the Hilbert transform may be denoted by \tilde{u}(t). Furthermore, many sources define the Hilbert transform as the negative of the one defined here.
History
The Hilbert transform arose in Hilbert's 1905 work on a problem Riemann posed concerning analytic functions, which has come to be known as the Riemann–Hilbert problem. Hilbert's work was mainly concerned with the Hilbert transform for functions defined on the circle. Some of his earlier work related to the Discrete Hilbert Transform dates back to lectures he gave in Göttingen. The results were later published by Hermann Weyl in his dissertation. Schur improved Hilbert's results about the discrete Hilbert transform and extended them to the integral case. These results were restricted to the spaces L² and ℓ². In 1928, Marcel Riesz proved that the Hilbert transform can be defined for u in L^p(ℝ) (Lp space) for 1 < p < ∞, that the Hilbert transform is a bounded operator on L^p(ℝ) for 1 < p < ∞, and that similar results hold for the Hilbert transform on the circle as well as the discrete Hilbert transform. The Hilbert transform was a motivating example for Antoni Zygmund and Alberto Calderón during their study of singular integrals. Their investigations have played a fundamental role in modern harmonic analysis. Various generalizations of the Hilbert transform, such as the bilinear and trilinear Hilbert transforms are still active areas of research today.
Relationship with the Fourier transform
The Hilbert transform is a multiplier operator. The multiplier of H is σ_H(ω) = −i sgn(ω), where sgn is the signum function. Therefore:

\mathcal{F}(H(u))(\omega) = -i \operatorname{sgn}(\omega) \cdot \mathcal{F}(u)(\omega),

where \mathcal{F} denotes the Fourier transform. Since sgn(x) = sgn(2πx), it follows that this result applies to the three common definitions of \mathcal{F}.

By Euler's formula,

\sigma_H(\omega) = \begin{cases} i = e^{+i\pi/2}, & \omega < 0 \\ 0, & \omega = 0 \\ -i = e^{-i\pi/2}, & \omega > 0 \end{cases}

Therefore, H(u)(t) has the effect of shifting the phase of the negative frequency components of u(t) by +90° (π/2 radians) and the phase of the positive frequency components by −90°, and i·H(u)(t) has the effect of restoring the positive frequency components while shifting the negative frequency ones an additional +90°, resulting in their negation (i.e., a multiplication by −1).

When the Hilbert transform is applied twice, the phase of the negative and positive frequency components of u(t) are respectively shifted by +180° and −180°, which are equivalent amounts. The signal is negated; i.e., H(H(u)) = −u, because

(\sigma_H(\omega))^2 = e^{\pm i\pi} = -1 \quad \text{for } \omega \neq 0.
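The frequency-domain description translates directly into a numerical method. The following sketch (ours, not from the article; assumes NumPy, and the function name is illustrative) applies the multiplier −i·sgn(ω) bin by bin to the FFT of a sampled signal and checks it against the classical transform pair H(cos) = sin:

# Minimal numerical sketch: the Hilbert transform computed from its
# frequency-domain description, multiplying each Fourier bin by -i*sgn(omega).
import numpy as np

def hilbert_via_fft(u):
    """Discrete approximation of H(u) using the multiplier -i*sgn(omega)."""
    U = np.fft.fft(u)
    omega = np.fft.fftfreq(len(u))       # signed frequency of each bin
    U *= -1j * np.sign(omega)            # apply the Fourier multiplier
    return np.fft.ifft(U).real           # input was real, so take Re

# Check against the classical pair H(cos)(t) = sin(t), sampled over
# an integer number of periods so the FFT treatment is exact:
t = np.linspace(0, 20 * np.pi, 4096, endpoint=False)
err = np.max(np.abs(hilbert_via_fft(np.cos(t)) - np.sin(t)))
print(f"max deviation from sin(t): {err:.2e}")  # tiny, up to rounding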
Table of selected Hilbert transforms
In the following table, the frequency parameter is real.
Notes
An extensive table of Hilbert transforms is available.
Note that the Hilbert transform of a constant is zero.
Domain of definition
It is by no means obvious that the Hilbert transform is well-defined at all, as the improper integral defining it must converge in a suitable sense. However, the Hilbert transform is well-defined for a broad class of functions, namely those in L^p(ℝ) for 1 < p < ∞.

More precisely, if u is in L^p(ℝ) for 1 < p < ∞, then the limit defining the improper integral

H(u)(t) = \frac{1}{\pi} \lim_{\varepsilon \to 0^{+}} \int_{\varepsilon}^{\infty} \frac{u(t - \tau) - u(t + \tau)}{\tau} \, d\tau

exists for almost every t. The limit function is also in L^p(ℝ) and is in fact the limit in the mean of the improper integral as well. That is,

\frac{1}{\pi} \int_{\varepsilon}^{\infty} \frac{u(t - \tau) - u(t + \tau)}{\tau} \, d\tau \to H(u)(t)

as ε → 0 in the L^p norm, as well as pointwise almost everywhere, by the Titchmarsh theorem.

In the case p = 1, the Hilbert transform still converges pointwise almost everywhere, but may itself fail to be integrable, even locally. In particular, convergence in the mean does not in general happen in this case. The Hilbert transform of an L¹ function does converge, however, in L¹-weak, and the Hilbert transform is a bounded operator from L¹ to L^{1,w}. (In particular, since the Hilbert transform is also a multiplier operator on L², Marcinkiewicz interpolation and a duality argument furnishes an alternative proof that the Hilbert transform is bounded on L^p.)
Properties
Boundedness
If 1 < p < ∞, then the Hilbert transform on L^p(ℝ) is a bounded linear operator, meaning that there exists a constant C_p such that

\|Hu\|_p \le C_p \|u\|_p

for all u ∈ L^p(ℝ).

The best constant C_p is given by

C_p = \begin{cases} \tan\dfrac{\pi}{2p}, & 1 < p \le 2 \\ \cot\dfrac{\pi}{2p}, & 2 \le p < \infty \end{cases}

An easy way to find the best C_p for p being a power of 2 is through the so-called Cotlar's identity that (Hf)^2 = f^2 + 2H(fHf) for all real valued f. The same best constants hold for the periodic Hilbert transform.

The boundedness of the Hilbert transform implies the L^p(ℝ) convergence of the symmetric partial sum operator

S_R f(x) = \int_{-R}^{R} \hat f(\xi) e^{2\pi i x \xi} \, d\xi

to f in L^p(ℝ).
Anti-self adjointness
The Hilbert transform is an anti-self adjoint operator relative to the duality pairing between L^p(ℝ) and the dual space L^q(ℝ), where p and q are Hölder conjugates and 1 < p, q < ∞. Symbolically,

\langle Hu, v \rangle = \langle u, -Hv \rangle

for u ∈ L^p(ℝ) and v ∈ L^q(ℝ).
Inverse transform
The Hilbert transform is an anti-involution, meaning that

H(H(u)) = -u

provided each transform is well-defined. Since H preserves the space L^p(ℝ), this implies in particular that the Hilbert transform is invertible on L^p(ℝ), and that

H^{-1} = -H.
Complex structure
Because ("" is the identity operator) on the real Banach space of real-valued functions in the Hilbert transform defines a linear complex structure on this Banach space. In particular, when , the Hilbert transform gives the Hilbert space of real-valued functions in the structure of a complex Hilbert space.
The (complex) eigenstates of the Hilbert transform admit representations as holomorphic functions in the upper and lower half-planes in the Hardy space by the Paley–Wiener theorem.
Differentiation
Formally, the derivative of the Hilbert transform is the Hilbert transform of the derivative, i.e. these two linear operators commute:

H\left(\frac{du}{dt}\right) = \frac{d}{dt} H(u)

Iterating this identity,

H\left(\frac{d^k u}{dt^k}\right) = \frac{d^k}{dt^k} H(u)

This is rigorously true as stated provided u and its first k derivatives belong to L^p(ℝ). One can check this easily in the frequency domain, where differentiation becomes multiplication by iω.
Convolutions
The Hilbert transform can formally be realized as a convolution with the tempered distribution

h(t) = \operatorname{p.v.} \frac{1}{\pi t}

Thus formally,

H(u) = h * u

However, a priori this may only be defined for u a distribution of compact support. It is possible to work somewhat rigorously with this since compactly supported functions (which are distributions a fortiori) are dense in L^p. Alternatively, one may use the fact that h(t) is the distributional derivative of the function \log|t|/\pi; to wit

H(u)(t) = \frac{d}{dt} \left( \frac{1}{\pi} \left( u * \log|\cdot| \right)(t) \right)

For most operational purposes the Hilbert transform can be treated as a convolution. For example, in a formal sense, the Hilbert transform of a convolution is the convolution of the Hilbert transform applied on only one of either of the factors:

H(u * v) = H(u) * v = u * H(v)

This is rigorously true if u and v are compactly supported distributions since, in that case,

h * (u * v) = (h * u) * v = u * (h * v)

By passing to an appropriate limit, it is thus also true if u ∈ L^p(ℝ) and v ∈ L^q(ℝ) provided that

1 < \frac{1}{p} + \frac{1}{q}

from a theorem due to Titchmarsh.
Invariance
The Hilbert transform has the following invariance properties on L²(ℝ).
It commutes with translations. That is, it commutes with the operators T_a f(x) = f(x + a) for all a in ℝ.
It commutes with positive dilations. That is, it commutes with the operators M_λ f(x) = f(λx) for all λ > 0.
It anticommutes with the reflection R f(x) = f(−x).
Up to a multiplicative constant, the Hilbert transform is the only bounded operator on L² with these properties.
In fact there is a wider set of operators that commute with the Hilbert transform. The group SL(2, ℝ) acts by unitary operators U_g on the space L²(ℝ) by the formula

U_{g}^{-1} f(x) = \frac{1}{cx + d} \, f\left( \frac{ax + b}{cx + d} \right), \qquad g = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \quad ad - bc = 1.

This unitary representation is an example of a principal series representation of SL(2, ℝ). In this case it is reducible, splitting as the orthogonal sum of two invariant subspaces, Hardy space H²(ℝ) and its conjugate. These are the spaces of L² boundary values of holomorphic functions on the upper and lower halfplanes. H²(ℝ) and its conjugate consist of exactly those L² functions with Fourier transforms vanishing on the negative and positive parts of the real axis respectively. Since the Hilbert transform is equal to H = −i(2P − I), with P being the orthogonal projection from L²(ℝ) onto H²(ℝ) and I the identity operator, it follows that H²(ℝ) and its orthogonal complement are eigenspaces of H for the eigenvalues ∓i. In other words, H commutes with the operators U_g. The restrictions of the operators U_g to H²(ℝ) and its conjugate give irreducible representations of SL(2, ℝ) – the so-called limit of discrete series representations.
Extending the domain of definition
Hilbert transform of distributions
It is further possible to extend the Hilbert transform to certain spaces of distributions . Since the Hilbert transform commutes with differentiation, and is a bounded operator on , restricts to give a continuous transform on the inverse limit of Sobolev spaces:
The Hilbert transform can then be defined on the dual space of , denoted , consisting of distributions. This is accomplished by the duality pairing:
For define:
It is possible to define the Hilbert transform on the space of tempered distributions as well by an approach due to Gel'fand and Shilov, but considerably more care is needed because of the singularity in the integral.
Hilbert transform of bounded functions
The Hilbert transform can be defined for functions in as well, but it requires some modifications and caveats. Properly understood, the Hilbert transform maps to the Banach space of bounded mean oscillation (BMO) classes.
Interpreted naïvely, the Hilbert transform of a bounded function is clearly ill-defined. For instance, with , the integral defining diverges almost everywhere to . To alleviate such difficulties, the Hilbert transform of an function is therefore defined by the following regularized form of the integral
where as above and
The modified transform agrees with the original transform up to an additive constant on functions of compact support from a general result by Calderón and Zygmund. Furthermore, the resulting integral converges pointwise almost everywhere, and with respect to the BMO norm, to a function of bounded mean oscillation.
A deep result of Fefferman's work is that a function is of bounded mean oscillation if and only if it has the form for some
Conjugate functions
The Hilbert transform can be understood in terms of a pair of functions u(x) and v(x) such that the function

F(x) = u(x) + i v(x)

is the boundary value of a holomorphic function F(z) in the upper half-plane. Under these circumstances, if u and v are sufficiently integrable, then one is the Hilbert transform of the other.

Suppose that u ∈ L^p(ℝ). Then, by the theory of the Poisson integral, u admits a unique harmonic extension into the upper half-plane, and this extension is given by

u(x + iy) = u(x, y) = \frac{1}{\pi} \int_{-\infty}^{\infty} u(s) \frac{y}{(x - s)^2 + y^2} \, ds,

which is the convolution of u with the Poisson kernel

P(x, y) = \frac{y}{\pi (x^2 + y^2)}.

Furthermore, there is a unique harmonic function v defined in the upper half-plane such that F(z) = u(z) + i v(z) is holomorphic and

\lim_{y \to \infty} v(x + i y) = 0.

This harmonic function is obtained from u by taking a convolution with the conjugate Poisson kernel

Q(x, y) = \frac{x}{\pi (x^2 + y^2)}.

Thus

v(x, y) = \frac{1}{\pi} \int_{-\infty}^{\infty} u(s) \frac{x - s}{(x - s)^2 + y^2} \, ds.

Indeed, the real and imaginary parts of the Cauchy kernel are

\frac{i}{\pi z} = P(x, y) + i Q(x, y),

so that F = u + i v is holomorphic by Cauchy's integral formula.

The function v obtained from u in this way is called the harmonic conjugate of u. The (non-tangential) boundary limit of v(x, y) as y → 0 is the Hilbert transform of u. Thus, succinctly,

H(u)(x) = \lim_{y \to 0} v(x, y).
Titchmarsh's theorem
Titchmarsh's theorem (named for E. C. Titchmarsh who included it in his 1937 work) makes precise the relationship between the boundary values of holomorphic functions in the upper half-plane and the Hilbert transform. It gives necessary and sufficient conditions for a complex-valued square-integrable function on the real line to be the boundary value of a function in the Hardy space of holomorphic functions in the upper half-plane .
The theorem states that the following conditions for a complex-valued square-integrable function F : ℝ → ℂ are equivalent:
F(x) is the limit as z → x of a holomorphic function F(z) in the upper half-plane such that

\int_{-\infty}^{\infty} |F(x + i y)|^2 \, dx < K

for a constant K independent of y > 0.
The real and imaginary parts of F(x) are Hilbert transforms of each other.
The Fourier transform \mathcal{F}(F)(x) vanishes for x < 0.
A weaker result is true for functions of class for . Specifically, if is a holomorphic function such that
for all , then there is a complex-valued function in such that in the norm as (as well as holding pointwise almost everywhere). Furthermore,
where is a real-valued function in and is the Hilbert transform (of class ) of .
This is not true in the case . In fact, the Hilbert transform of an function need not converge in the mean to another function. Nevertheless, the Hilbert transform of does converge almost everywhere to a finite function such that
This result is directly analogous to one by Andrey Kolmogorov for Hardy functions in the disc. Although usually called Titchmarsh's theorem, the result aggregates much work of others, including Hardy, Paley and Wiener (see Paley–Wiener theorem), as well as work by Riesz, Hille, and Tamarkin
Riemann–Hilbert problem
One form of the Riemann–Hilbert problem seeks to identify pairs of functions and such that is holomorphic on the upper half-plane and is holomorphic on the lower half-plane, such that for along the real axis,
where is some given real-valued function of The left-hand side of this equation may be understood either as the difference of the limits of from the appropriate half-planes, or as a hyperfunction distribution. Two functions of this form are a solution of the Riemann–Hilbert problem.
Formally, if solve the Riemann–Hilbert problem
then the Hilbert transform of is given by
Hilbert transform on the circle
For a periodic function f the circular Hilbert transform is defined:

\tilde f(x) \triangleq \frac{1}{2\pi} \operatorname{p.v.} \int_0^{2\pi} f(t) \cot\left( \frac{x - t}{2} \right) dt

The circular Hilbert transform is used in giving a characterization of Hardy space and in the study of the conjugate function in Fourier series. The kernel,

\cot\left( \frac{x - t}{2} \right)

is known as the Hilbert kernel since it was in this form the Hilbert transform was originally studied.

The Hilbert kernel (for the circular Hilbert transform) can be obtained by making the Cauchy kernel 1/x periodic. More precisely, for x ≠ 0 (mod 2π)

\frac{1}{2} \cot\left( \frac{x}{2} \right) = \frac{1}{x} + \sum_{n=1}^{\infty} \left( \frac{1}{x + 2n\pi} + \frac{1}{x - 2n\pi} \right)
Many results about the circular Hilbert transform may be derived from the corresponding results for the Hilbert transform from this correspondence.
Another more direct connection is provided by the Cayley transform , which carries the real line onto the circle and the upper half plane onto the unit disk. It induces a unitary map
of onto The operator carries the Hardy space onto the Hardy space .
Hilbert transform in signal processing
Bedrosian's theorem
Bedrosian's theorem states that the Hilbert transform of the product of a low-pass and a high-pass signal with non-overlapping spectra is given by the product of the low-pass signal and the Hilbert transform of the high-pass signal, or

H\left( f_{LP}(t) \cdot f_{HP}(t) \right) = f_{LP}(t) \cdot H\left( f_{HP}(t) \right),

where f_{LP} and f_{HP} are the low- and high-pass signals respectively. A category of communication signals to which this applies is called the narrowband signal model. A member of that category is amplitude modulation of a high-frequency sinusoidal "carrier":

u(t) = u_m(t) \cdot \cos(\omega t + \varphi),

where u_m(t) is the narrow bandwidth "message" waveform, such as voice or music. Then by Bedrosian's theorem:

H(u)(t) = u_m(t) \cdot \sin(\omega t + \varphi).
Analytic representation
A specific type of conjugate function is:

u_a(t) \triangleq u(t) + i \cdot H(u)(t),

known as the analytic representation of u(t). The name reflects its mathematical tractability, due largely to Euler's formula. Applying Bedrosian's theorem to the narrowband model, the analytic representation is:

u_a(t) = u_m(t) \cdot e^{i(\omega t + \varphi)}.

A Fourier transform property indicates that this complex heterodyne operation can shift all the negative frequency components of u_m(t) above 0 Hz. In that case, the imaginary part of the result is a Hilbert transform of the real part. This is an indirect way to produce Hilbert transforms.
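SciPy's scipy.signal.hilbert returns exactly this analytic representation u(t) + i·H(u)(t) of a sampled signal. The short sketch below (ours; the test signal and parameters are illustrative) uses it to recover the envelope of an AM waveform, matching the narrowband model above:

# Envelope recovery via the analytic signal; |u_a(t)| = |u_m(t)| for a
# narrowband AM waveform with a positive message.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                        # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
message = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)    # slow envelope u_m(t) > 0
u = message * np.cos(2 * np.pi * 100 * t)          # AM carrier at 100 Hz

ua = hilbert(u)                  # analytic signal; np.imag(ua) is H(u)(t)
envelope = np.abs(ua)            # |u_a(t)| recovers the message envelope
print(np.max(np.abs(envelope - message)))   # tiny for this periodic example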
Angle (phase/frequency) modulation
The form:

u(t) = A \cdot \cos(\omega t + \varphi_m(t))

is called angle modulation, which includes both phase modulation and frequency modulation. The instantaneous frequency is \omega + \varphi_m'(t). For sufficiently large \omega, compared to \varphi_m':

H(u)(t) \approx A \cdot \sin(\omega t + \varphi_m(t))

and:

u_a(t) \approx A \cdot e^{i(\omega t + \varphi_m(t))}.
Single sideband modulation (SSB)
When u_m(t) in u(t) = u_m(t)·cos(ωt + φ) is also an analytic representation (of a message waveform), that is:

u_m(t) = m(t) + i \cdot \hat m(t),

the result is single-sideband modulation:

u_a(t) = \left( m(t) + i \cdot \hat m(t) \right) \cdot e^{i(\omega t + \varphi)},

whose transmitted component is:

u(t) = \operatorname{Re}\{ u_a(t) \} = m(t) \cdot \cos(\omega t + \varphi) - \hat m(t) \cdot \sin(\omega t + \varphi).
Causality
The function h(t) = 1/(πt) presents two causality-based challenges to practical implementation in a convolution (in addition to its undefined value at 0):
Its duration is infinite (technically infinite support). Finite-length windowing reduces the effective frequency range of the transform; shorter windows result in greater losses at low and high frequencies. See also quadrature filter.
It is a non-causal filter. So a delayed version, h(t − τ), is required. The corresponding output is subsequently delayed by τ. When creating the imaginary part of an analytic signal, the source (real part) must also be delayed by τ.
Discrete Hilbert transform
For a discrete function u[n], with discrete-time Fourier transform (DTFT) U(ω), and discrete Hilbert transform \hat u[n], the DTFT of \hat u[n] in the region −π < ω < π is given by:

\operatorname{DTFT}(\hat u) = U(\omega) \cdot (-i \cdot \operatorname{sgn}(\omega)).

The inverse DTFT, using the convolution theorem, is:

\hat u[n] = u[n] * h[n],

where

h[n] = \begin{cases} 0, & n \text{ even} \\ \dfrac{2}{\pi n}, & n \text{ odd} \end{cases}

which is an infinite impulse response (IIR).
Practical considerations
Method 1: Direct convolution of streaming u[n] data with an FIR approximation of h[n], which we will designate by \tilde h[n]. Examples of truncated \tilde h[n] are shown in figures 1 and 2. Fig 1 has an odd number of anti-symmetric coefficients and is called Type III. This type inherently exhibits responses of zero magnitude at frequencies 0 and Nyquist, resulting in a bandpass filter shape. A Type IV design (even number of anti-symmetric coefficients) is shown in Fig 2. It has a highpass frequency response. Type III is the usual choice, for these reasons:
A typical (i.e. properly filtered and sampled) sequence has no useful components at the Nyquist frequency.
The Type IV impulse response requires a sample shift in the sequence. That causes the zero-valued coefficients to become non-zero, as seen in Figure 2. So a Type III design is potentially twice as efficient as Type IV.
The group delay of a Type III design is an integer number of samples, which facilitates aligning with to create an analytic signal. The group delay of Type IV is halfway between two samples.
The abrupt truncation of h[n] creates a rippling (Gibbs effect) of the flat frequency response. That can be mitigated by use of a window function to taper \tilde h[n] to zero.
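As a concrete illustration of Method 1, the sketch below (ours, not from the article) designs an equiripple Type III FIR approximation with SciPy's Parks–McClellan routine, scipy.signal.remez with type='hilbert'; the tap count and band edges are arbitrary illustrative choices:

# Design a 65-tap (odd -> Type III, antisymmetric) FIR Hilbert transformer
# approximating unit magnitude over a band away from 0 and Nyquist (fs = 1).
import numpy as np
from scipy.signal import remez, freqz

numtaps = 65
taps = remez(numtaps, [0.05, 0.45], [1], type='hilbert', fs=1.0)

# Inspect the passband magnitude error of the resulting filter:
w, h = freqz(taps, worN=2048, fs=1.0)
band = (w > 0.05) & (w < 0.45)
print("max passband magnitude error:", np.max(np.abs(np.abs(h[band]) - 1)))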
Method 2: Piecewise convolution. It is well known that direct convolution is computationally much more intensive than methods like overlap-save that give access to the efficiencies of the Fast Fourier transform via the convolution theorem. Specifically, the discrete Fourier transform (DFT) of a segment of is multiplied pointwise with a DFT of the sequence. An inverse DFT is done on the product, and the transient artifacts at the leading and trailing edges of the segment are discarded. Over-lapping input segments prevent gaps in the output stream. An equivalent time domain description is that segments of length (an arbitrary parameter) are convolved with the periodic function:
When the duration of non-zero values of is the output sequence includes samples of outputs are discarded from each block of and the input blocks are overlapped by that amount to prevent gaps.
Method 3: Same as method 2, except the DFT of is replaced by samples of the distribution (whose real and imaginary components are all just or ) That convolves with a periodic summation:
for some arbitrary parameter, is not an FIR, so the edge effects extend throughout the entire transform. Deciding what to delete and the corresponding amount of overlap is an application-dependent design issue.
Fig 3 depicts the difference between methods 2 and 3. Only half of the antisymmetric impulse response is shown, and only the non-zero coefficients. The blue graph corresponds to method 2 where is truncated by a rectangular window function, rather than tapered. It is generated by a Matlab function, hilb(65). Its transient effects are exactly known and readily discarded. The frequency response, which is determined by the function argument, is the only application-dependent design issue.
The red graph corresponds to method 3. It is the inverse DFT of the −i·sgn(ω) distribution. Specifically, it is the function that is convolved with a segment of u[n] by the MATLAB function, hilbert(u,512). The real part of the output sequence is the original input sequence, so that the complex output is an analytic representation of u[n].
When the input is a segment of a pure cosine, the resulting convolution for two different values of is depicted in Fig 4 (red and blue plots). Edge effects prevent the result from being a pure sine function (green plot). Since is not an FIR sequence, the theoretical extent of the effects is the entire output sequence. But the differences from a sine function diminish with distance from the edges. Parameter is the output sequence length. If it exceeds the length of the input sequence, the input is modified by appending zero-valued elements. In most cases, that reduces the magnitude of the edge distortions. But their duration is dominated by the inherent rise and fall times of the impulse response.
Fig 5 is an example of piecewise convolution, using both methods 2 (in blue) and 3 (red dots). A sine function is created by computing the Discrete Hilbert transform of a cosine function, which was processed in four overlapping segments, and pieced back together. As the FIR result (blue) shows, the distortions apparent in the IIR result (red) are not caused by the difference between and (green and red in Fig 3). The fact that is tapered (windowed) is actually helpful in this context. The real problem is that it's not windowed enough. Effectively, whereas the overlap-save method needs
Number-theoretic Hilbert transform
The number theoretic Hilbert transform is an extension of the discrete Hilbert transform to integers modulo an appropriate prime number. In this it follows the generalization of discrete Fourier transform to number theoretic transforms. The number theoretic Hilbert transform can be used to generate sets of orthogonal discrete sequences.
See also
Analytic signal
Harmonic conjugate
Hilbert spectroscopy
Hilbert transform in the complex plane
Hilbert–Huang transform
Kramers–Kronig relation
Riesz transform
Single-sideband signal
Singular integral operators of convolution type
Notes
Page citations
References
Also available: http://www.fuchs-braun.com/media/d9140c7b3d5004fbffff8007fffffff0.pdf
Also available: https://www.dsprelated.com/freebooks/mdft/Analytic_Signals_Hilbert_Transform.html
Further reading
External links
Derivation of the boundedness of the Hilbert transform
Mathworld Hilbert transform — Contains a table of transforms
an entry level introduction to Hilbert transformation.
Harmonic functions
Integral transforms
Signal processing
Singular integrals
Schwartz distributions | Hilbert transform | [
"Technology",
"Engineering"
] | 4,887 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
574,169 | https://en.wikipedia.org/wiki/Blacklight%20paint | Black light paint or black light fluorescent paint is luminous paint that glows under a black light. It is based on pigments that respond to light in the ultraviolet segment of the electromagnetic spectrum. The paint may or may not be colorful under ordinary light. Black light paint should not be confused with phosphorescent (glow-in-the-dark) or daylight fluorescent paint.
History
The invention of black light paint is attributed to brothers Joseph and Robert Switzer in the 1930s. After a fall, Robert suffered a severe head injury that resulted in a severed optic nerve. His doctor confined him to a dark room while he waited for his sight to recover. Joseph, who was a chemistry major at the University of California, Berkeley, worked with Robert to investigate fluorescent compounds. They brought a black light into the storeroom of their father's drugstore looking for naturally fluorescing organic compounds and mixed those compounds with shellac to develop the first black light fluorescent paints. The first use of these paints was for Joseph's amateur magic shows.
The brothers founded the Fluor-S-Art Company, later named Day-Glo Color Corp., to develop and sell their products. Day-Glo is a registered trademark of the Day-Glo Color Corporation. The first commercial uses of black light fluorescent paints were for store displays and movie theaters. During World War II, black light fluorescent paints were used on U.S. naval carriers to allow planes to land at night.
Characteristics and uses
Black light paints and inks are commonly used in the production of black light posters. Under daylight, the poster may or may not be vibrant in color, but under black light (with little or no visible light present), the effect produced can be psychedelic. The inks are normally highly sensitive to direct sunlight and other powerful light sources. The fluorescent dyes cause a chemical reaction when exposed to high intensity light sources (HILS) and the visual result is a fading in the colors of the inks. With paper, significant visible change in the color saturation can typically be observed within 45 minutes to one hour of exposure to the HILS. To date, there is no absolute method to prevent this phenomenon, although certain laminations, lacquer coatings and glass or plastic protective sheets can effectively slow the fading characteristics of the dyes.
Other common usage of the black light pigments is in security features of money notes, various certificates printed on paper, meal coupons, tickets and similar things that represent a value (monetary or otherwise). The black light printed figures used for this purpose are usually invisible under normal lighting, even when they are exposed to direct sunlight (which contains ultraviolet light) but they show up glowing when exposed to black light source. This defeats simple and inexpensive attempts to counterfeit them by scanning the original using a high resolution scanner and printing them using an inexpensive high resolution printer (most if not all inexpensive printers do not allow using black light inks for printing) and no special equipment is needed to verify the presence and correctness of this feature (an inexpensive black light source being all that is required). Some coupons and tickets use colorful black light inks.
On many German locomotives the control panel labels were printed with black light paint and a black light source was provided in the cab. This left the driver with full night vision while still enabling him to distinguish between the different switches and levers to operate his locomotive.
Black light paints are sometimes used in the scenery of amusement park dark rides: a black light illuminates the vivid colors of the scenery, while the vehicle and other passengers remain dimly lit or barely visible. This can enhance the effect of being in a fantasy world.
Black light paints may be fluorescent or, more rarely, phosphorescent, containing a phosphor that continues to glow for a time after the black light has been removed. Black light paint can be mixed with similar shades of normal pigments, ‘brightening’ them when viewed in sunlight.
References
Paints
Inks
Printing materials | Blacklight paint | [
"Physics",
"Chemistry"
] | 817 | [
"Paints",
"Coatings",
"Materials",
"Printing materials",
"Matter"
] |
574,337 | https://en.wikipedia.org/wiki/Cochran%27s%20theorem | In statistics, Cochran's theorem, devised by William G. Cochran, is a theorem used to justify results relating to the probability distributions of statistics that are used in the analysis of variance.
Examples
Sample mean and sample variance
If X1, ..., Xn are independent normally distributed random variables with mean μ and standard deviation σ then

U_i = \frac{X_i - \mu}{\sigma}

is standard normal for each i. Note that the total Q is equal to the sum of the squared Us as shown here:

\sum_i Q_i = \sum_{jik} U_j B_{jk}^{(i)} U_k = \sum_{jk} U_j U_k \sum_i B_{jk}^{(i)} = \sum_j U_j^2,

which stems from the original assumption that B^{(1)} + B^{(2)} + \cdots = I.

So instead we will calculate this quantity and later separate it into Qi's. It is possible to write

\sum_{i=1}^n U_i^2 = \sum_{i=1}^n \left( \frac{X_i - \overline{X}}{\sigma} \right)^2 + n \left( \frac{\overline{X} - \mu}{\sigma} \right)^2

(here \overline{X} is the sample mean). To see this identity, multiply throughout by \sigma^2 and note that

\sum (X_i - \mu)^2 = \sum (X_i - \overline{X} + \overline{X} - \mu)^2

and expand to give

\sum (X_i - \mu)^2 = \sum (X_i - \overline{X})^2 + \sum (\overline{X} - \mu)^2 + 2 \sum (X_i - \overline{X})(\overline{X} - \mu).

The third term is zero because it is equal to a constant times

\sum (\overline{X} - X_i) = 0,

and the second term has just n identical terms added together. Thus

\sum (X_i - \mu)^2 = \sum (X_i - \overline{X})^2 + n (\overline{X} - \mu)^2,

and hence

\sum_{i=1}^n U_i^2 = \underbrace{\sum_{i=1}^n \left( \frac{X_i - \overline{X}}{\sigma} \right)^2}_{Q_1} + \underbrace{n \left( \frac{\overline{X} - \mu}{\sigma} \right)^2}_{Q_2} = Q_1 + Q_2.

Now B^{(2)} = \frac{J_n}{n}, with J_n the matrix of ones, which has rank 1. In turn B^{(1)} = I_n - \frac{J_n}{n}, given that B^{(1)} + B^{(2)} = I_n. This expression can be also obtained by expanding Q_1 in matrix notation. It can be shown that the rank of B^{(1)} is n - 1, as the addition of all its rows is equal to zero. Thus the conditions for Cochran's theorem are met.
Cochran's theorem then states that Q1 and Q2 are independent, with chi-squared distributions with n − 1 and 1 degree of freedom respectively. This shows that the sample mean and sample variance are independent. This can also be shown by Basu's theorem, and in fact this property characterizes the normal distribution – for no other distribution are the sample mean and sample variance independent.
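A small Monte Carlo sketch (ours, not from the article; NumPy assumed, and n, trials, μ, σ chosen arbitrarily) illustrating this conclusion: Q1 behaves as a chi-squared variable with n − 1 degrees of freedom, Q2 with 1 degree of freedom, and the two are uncorrelated:

# Simulate Q1 and Q2 from normal samples and check their means and
# correlation against the chi-squared predictions of Cochran's theorem.
import numpy as np

rng = np.random.default_rng(0)
n, trials, mu, sigma = 5, 100_000, 2.0, 3.0

X = rng.normal(mu, sigma, size=(trials, n))
xbar = X.mean(axis=1)
Q1 = ((X - xbar[:, None]) ** 2).sum(axis=1) / sigma**2   # ~ chi2(n - 1)
Q2 = n * (xbar - mu) ** 2 / sigma**2                     # ~ chi2(1)

print("mean of Q1:", Q1.mean(), "(expect", n - 1, ")")
print("mean of Q2:", Q2.mean(), "(expect 1)")
print("corr(Q1, Q2):", np.corrcoef(Q1, Q2)[0, 1])        # near 0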
Distributions
The result for the distributions is written symbolically as

\sum (X_i - \overline{X})^2 \sim \sigma^2 \chi^2_{n-1}, \qquad n (\overline{X} - \mu)^2 \sim \sigma^2 \chi^2_1.

Both these random variables are proportional to the true but unknown variance σ2. Thus their ratio does not depend on σ2 and, because they are statistically independent, the distribution of their ratio is given by

\frac{n (\overline{X} - \mu)^2}{\frac{1}{n-1} \sum (X_i - \overline{X})^2} \sim F_{1, n-1}

where F1,n − 1 is the F-distribution with 1 and n − 1 degrees of freedom (see also Student's t-distribution). The final step here is effectively the definition of a random variable having the F-distribution.
Estimation of variance
To estimate the variance σ2, one estimator that is sometimes used is the maximum likelihood estimator of the variance of a normal distribution

\widehat{\sigma}^2 = \frac{1}{n} \sum \left( X_i - \overline{X} \right)^2.

Cochran's theorem shows that

\frac{n \widehat{\sigma}^2}{\sigma^2} \sim \chi^2_{n-1}

and the properties of the chi-squared distribution show that

\operatorname{E}\left( \widehat{\sigma}^2 \right) = \frac{n-1}{n} \sigma^2.
Alternative formulation
The following version is often seen when considering linear regression. Suppose that Y \sim N_n(0, I_n) is a standard multivariate normal random vector (here I_n denotes the n-by-n identity matrix), and if A_1, \ldots, A_k are all n-by-n symmetric matrices with \sum_{i=1}^{k} A_i = I_n. Then, on defining r_i = \operatorname{rank}(A_i), any one of the following conditions implies the other two:
\sum_{i=1}^{k} r_i = n,
Y^T A_i Y \sim \chi^2_{r_i} (thus the A_i are positive semidefinite)
Y^T A_i Y is independent of Y^T A_j Y for i \neq j.
Statement
Let U1, ..., UN be i.i.d. standard normally distributed random variables, and U = [U_1, \ldots, U_N]^T. Let B^{(1)}, \ldots, B^{(k)} be symmetric matrices. Define r_i to be the rank of B^{(i)}. Define Q_i = U^T B^{(i)} U, so that the Qi are quadratic forms. Further assume \sum_i Q_i = U^T U.

Cochran's theorem states that the following are equivalent:
r_1 + \cdots + r_k = N,
the Qi are independent
each Qi has a chi-squared distribution with ri degrees of freedom.

Often it's stated as \sum_i A_i = A, where A is idempotent, and \sum_i r_i = N is replaced by \sum_i r_i = \operatorname{rank}(A). But after an orthogonal transform, A = \operatorname{diag}(I_M, 0), and so we reduce to the above theorem.
Proof
Claim: Let be a standard Gaussian in , then for any symmetric matrices , if and have the same distribution, then have the same eigenvalues (up to multiplicity).
Claim: .
Lemma: If , all symmetric, and have eigenvalues 0, 1, then they are simultaneously diagonalizable.
Now we prove the original theorem. We prove that the three cases are equivalent by proving that each case implies the next one in a cycle ().
See also
Cramér's theorem, on decomposing normal distribution
Infinite divisibility (probability)
References
Theorems in statistics
Characterization of probability distributions | Cochran's theorem | [
"Mathematics"
] | 788 | [
"Mathematical problems",
"Mathematical theorems",
"Theorems in statistics"
] |
574,491 | https://en.wikipedia.org/wiki/Pessary | A pessary is a prosthetic device inserted into the vagina for structural and pharmaceutical purposes. It is most commonly used to treat stress urinary incontinence to stop urinary leakage and to treat pelvic organ prolapse to maintain the location of organs in the pelvic region. It can also be used to administer medications locally in the vagina or as a method of contraception.
Pessaries come in different shapes and sizes, so it is important that individuals be fitted for them by health care professionals to avoid any complications. However, there are a few instances and circumstances that allow pessaries to be purchased without a prescription or without seeking help from a health care professional. Some side effects may occur if pessaries are not sized properly or regularly maintained, but with the appropriate care, pessaries are generally safe and well tolerated.
History
Early use of pessaries dates back to the ancient Egyptians, as they described using pessaries to treat pelvic organ prolapse. The term 'pessary' itself, is derived from the Ancient Greek word 'pessós', meaning round stone used for games. Pessaries are even mentioned in the oldest surviving copy of the Greek medical text, Hippocratic Oath, as something that physicians should never administer for the purposes of an abortion: "Similarly I will not give to a woman a pessary to cause abortion." The earliest documented pessaries were natural products. For example, Greek physicians Hippocrates and Soranus described inserting half of a pomegranate into the vagina to treat prolapse. It was not until the 16th century that the first purpose-made pessaries were made. For instance, in the late 1500s, Ambroise Paré was described as making oval pessaries from hammered brass and waxed cork. Nowadays, pessaries are generally made from silicone and are well tolerated and effective among patients who need them.
Medical uses
Pelvic organ prolapse
The most common use for pessaries is to treat pelvic organ prolapse. A pelvic organ prolapse can occur when the muscles and tissues surrounding the bladder, uterus, vagina, small bowel, and rectum stop working properly to hold the organs in place and the organs begin to drop outside the body. The most common cause of such prolapse is childbirth, usually multiple births. Obesity, long-term respiratory problems, constipation, pelvic organ cancers, and hysterectomies can all be causes for pelvic organ prolapses as well. Some signs and symptoms include feeling pressure in the pelvic area, lower back pain, painful intercourse, urinary incontinence, a feeling that something is out of place, constipation, or bleeding from the vagina. Pessaries are manual devices that are inserted into the vagina to help support and reposition descended pelvic organs, which helps to prevent the worsening of prolapse, helps with symptom relief, and can delay or prevent the need for surgery. Further, pessaries can be used for surgery preparation as a way to maintain prolapse without progression. This is especially useful when a surgery may need to be delayed.
Stress urinary incontinence
Stress urinary incontinence is leakage of urine that is caused by sudden pressure on the bladder. It occurs during activities that increase the amount of pressure on the bladder such as coughing, sneezing, laughing, and exercising. The pressure causes opening of the sphincter muscles which usually help prevent urine leakage. Stress urinary incontinence is a common medical problem especially in women as about 1 in 3 women are affected by this condition at some point in their lives. Pessaries are considered a safe non-surgical treatment option for stress urinary incontinence as it can control the urine leakage by pushing the urethra closed. Pessaries can be removed any time.
Other
Some additional uses for pessaries are for an incarcerated uterus, prevention of preterm birth and an incompetent cervix. In early pregnancy the uterus can be displaced, which can lead to pain and rectal and urinary complications. A pessary can be used to treat this condition and support the uterus. Preterm birth is when babies are born prematurely, which puts the baby at increased risk for complications and even death. Currently, the use of pessaries to help prevent preterm birth is an ongoing area of research. The use of pessaries for an incompetent cervix is not commonly practiced today, but they have been used in the past. Specifically, an incompetent cervix is when the cervix begins to open up prematurely. This can lead to a preterm birth or even a miscarriage. Pessaries can be used to correctly position the cervix, increasing the success of pregnancy.
Types of pessaries
Therapeutic pessaries
A therapeutic pessary is a medical device similar to the outer ring of a diaphragm. Therapeutic pessaries are used to support the uterus, vagina, bladder, or rectum. Pessaries are most commonly used for pelvic organ prolapse and considered a good treatment option for women who need or desire non-surgical management or future pregnancy. It is used to treat prolapse of uterine, vaginal wall (vaginal vault), bladder (cystocele), rectum (rectocele), or small bowel (enterocele). It is also used to treat stress urinary incontinence.
There are different types of pessaries but most of them are made out of silicone—a harmless and durable material. Pessaries are mainly categorized into two types, supporting pessaries and space-occupying pessaries. Support pessaries function by supporting the prolapse and space-occupying pessaries by filling the vaginal space. There are also lever type pessaries.
Support pessary
Ring with support pessaries are the supporting type. These are often used as a first-line treatment and used for earlier stage prolapse since individuals can easily insert and remove them on their own without a doctor's help. These can be easily folded in half for insertion.
Gellhorn pessaries are considered a type of supporting and space-occupying pessary. These resemble the shape of a mushroom and are used for more advanced pelvic organ prolapse. These are less preferred than ring with support pessaries due to difficulty with self-removal and insertion.
Marland pessaries are another type of supporting pessary. These are used to treat pelvic organ prolapse as well as stress urinary incontinence. These pessaries have a ring at their base and a wedge-shaped ridge on one side. Although these pessaries are less likely to fall out than standard ring with support pessaries, individuals find it difficult to insert or remove them on their own.
Space-occupying pessary
Donut pessaries are considered space-occupying pessaries. These are used for more advanced pelvic organ prolapse including cystocele or rectocele as well as a second or third-degree uterine prolapse. Due to its shape and size, it is one of the hardest ones to insert and remove.
Cube pessaries are space-occupying pessaries in the shape of a cube that are available in 7 sizes. The pessary is inserted into the vagina and kept in place by the suction of its 6 surfaces to the vaginal wall. Cube pessaries must be removed before sexual intercourse and replaced daily. Cube pessaries are generally used as a last resort only if the individuals cannot retain any other pessaries. This is due to undesirable side effects such as vaginal discharge and erosion of the vaginal wall. In order to remove the cube pessary, the suction must be broken by grasping the device.
Gehrung pessaries are space-occupying pessaries that are similar to the Gellhorn pessaries. They are silicone devices that are placed into the vagina and used for second or third degree (more severe) uterine prolapse. These contain metal and should be removed prior to any MRI, ultrasound or X-rays. They can also be used to help with stress urinary incontinence such as urine leaks during exercising or coughing. These types of pessaries need to be fitted by a health care professional to ensure proper size. Once placed it should not move when standing, sitting, or squatting. It should be cleaned with mild soap and warm water every day or two.
Lever pessary
Hodge pessaries are a type of lever pessary. Although these can be used for mild cystocele and stress urinary incontinence, they are not commonly used. Smith, and Risser pessaries are other types of lever pessaries and they differ in shape.
Pharmaceutical pessaries
Treating vaginal yeast infections is one of the most common uses of pharmaceutical pessaries. They are also known as vaginal suppositories, which are inserted into the vagina and are designed to dissolve at body temperature. They usually contain a single use antifungal agent such as clotrimazole. Oral antifungal agents are also available.
Pessaries can also be used in a similar way to help induce labor for women who have overdue expected delivery dates or who experience premature rupture of membranes. Prostaglandins are usually the medication used in these kinds of pessaries in order to relax the cervix and promote contractions.
According to Pliny the Elder, pessaries were used as birth control in ancient times.
Occlusive pessaries
Occlusive pessaries are most commonly used for contraception. Also known as a contraceptive cap, they work similar to a diaphragm as a barrier form of contraception. They are inserted into the vagina and block sperm from entering the uterus through the cervix. The cap must be used in conjunction with a spermicide in order to be effective in preventing pregnancy. When used correctly the cap is thought to be 92–96% effective. These caps are reusable but come in different sizes. It is recommended for anyone attempting this form of contraception to be fitted for the correct size by a trained health care professional.
Stem pessary
The stem pessary, a type of occlusive pessary, was an early form of the cervical cap. Shaped like a dome, it covered the cervix, and a central rod or "stem" entered the uterus through the external orifice of the uterus, also known as the cervical canal or the os, to hold it in place.
Side effects and complications
When pessaries are used correctly, they are tolerated well for pelvic organ prolapse or stress urinary incontinence. However, pessaries are still a foreign device that is inserted into the vagina, so side effects can occur. Some more common side effects include vaginal discharge and odor. Vaginal discharge and odor may be associated with bacterial vaginosis, characterized by an overgrowth of naturally occurring bacteria in the vagina. These symptoms can be treated with the appropriate medications.
More serious side effects include fistula formation between the vagina and rectum or the vagina and bladder, or erosion, or thinning, of the vaginal wall. Fistula formation is rare, but erosion of the vaginal wall occurs more frequently. Low estrogen production can also increase the risk of vaginal wall thinning. For individuals with pessaries that are not fitted for them, herniations of the cervix and uterus can occur through the opening of the pessary. This can lead to tissue necrosis in the cervix and uterus. To prevent these side effects, individuals can be fitted properly for their pessaries and undergo routine follow-up visits with their health care professionals to ensure the individual has the correct pessary size and no other complications. In addition, those with an increased risk of vaginal wall thinning can be prescribed estrogen to prevent erosion and prevent these complications.
If pessaries are not used properly or not maintained periodically, more serious complications can occur. For example, the pessary can become embedded into the vagina, which makes it harder to remove. Estrogen can decrease the inflammation of the vaginal walls and promote skin cells in the vagina to mature, so use of estrogen cream can allow removal of the pessary more easily. In rare cases, pessaries would need to be removed through surgical procedures.
To prevent complications, individuals should not use pessaries if they have characteristics that exclude them from this method of therapy. Contraindications to pessary use include current infections in the pelvis or vagina, or allergies to the material of the pessary (which can be silicone or latex). In addition, individuals should not be fitted for a pessary if they are less likely to properly maintain their pessary.
See also
United States v. One Package of Japanese Pessaries
Diaphragm (birth control)
Suppository
References
Dosage forms
Drug delivery devices
Implants (medicine)
Medical equipment
Vagina | Pessary | [
"Chemistry",
"Biology"
] | 2,834 | [
"Pharmacology",
"Drug delivery devices",
"Medical equipment",
"Medical technology"
] |
574,544 | https://en.wikipedia.org/wiki/Circular%20motion | In physics, circular motion is movement of an object along the circumference of a circle or rotation along a circular arc. It can be uniform, with a constant rate of rotation and constant tangential speed, or non-uniform with a changing rate of rotation. The rotation around a fixed axis of a three-dimensional body involves the circular motion of its parts. The equations of motion describe the movement of the center of mass of a body, which remains at a constant distance from the axis of rotation. In circular motion, the distance between the body and a fixed point on its surface remains the same, i.e., the body is assumed rigid.
Examples of circular motion include: special satellite orbits around the Earth (circular orbits), a ceiling fan's blades rotating around a hub, a stone that is tied to a rope and is being swung in circles, a car turning through a curve in a race track, an electron moving perpendicular to a uniform magnetic field, and a gear turning inside a mechanism.
Since the object's velocity vector is constantly changing direction, the moving object is undergoing acceleration by a centripetal force in the direction of the center of rotation. Without this acceleration, the object would move in a straight line, according to Newton's laws of motion.
Uniform circular motion
In physics, uniform circular motion describes the motion of a body traversing a circular path at a constant speed. Since the body describes circular motion, its distance from the axis of rotation remains constant at all times. Though the body's speed is constant, its velocity is not constant: velocity, a vector quantity, depends on both the body's speed and its direction of travel. This changing velocity indicates the presence of an acceleration; this centripetal acceleration is of constant magnitude and directed at all times toward the axis of rotation. This acceleration is, in turn, produced by a centripetal force which is also constant in magnitude and directed toward the axis of rotation.
In the case of rotation around a fixed axis of a rigid body that is not negligibly small compared to the radius of the path, each particle of the body describes a uniform circular motion with the same angular velocity, but with velocity and acceleration varying with the position with respect to the axis.
Formula
For motion in a circle of radius $r$, the circumference of the circle is $C = 2\pi r$. If the period for one rotation is $T$, the angular rate of rotation, also known as angular velocity, $\omega$, is:
$\omega = \frac{2\pi}{T} = \frac{d\theta}{dt}$
and the units are radians/second.
The speed of the object traveling the circle is:
$v = \frac{2\pi r}{T} = \omega r$
The angle $\theta$ swept out in a time $t$ is:
$\theta = 2\pi\frac{t}{T} = \omega t$
The angular acceleration, $\alpha$, of the particle is:
$\alpha = \frac{d\omega}{dt}$
In the case of uniform circular motion, $\alpha$ will be zero.
The acceleration due to change in the direction is:
$a = \frac{v^2}{r} = \omega^2 r$
The centripetal and centrifugal force can also be found using acceleration:
$F_c = ma = \frac{mv^2}{r}$
The vector relationships are shown in Figure 1. The axis of rotation is shown as a vector $\boldsymbol{\Omega}$ perpendicular to the plane of the orbit and with a magnitude $\omega = d\theta/dt$. The direction of $\boldsymbol{\Omega}$ is chosen using the right-hand rule. With this convention for depicting rotation, the velocity is given by a vector cross product as
$\mathbf{v} = \boldsymbol{\Omega} \times \mathbf{r},$
which is a vector perpendicular to both $\boldsymbol{\Omega}$ and $\mathbf{r}(t)$, tangential to the orbit, and of magnitude $\omega r$. Likewise, the acceleration is given by
$\mathbf{a} = \boldsymbol{\Omega} \times \mathbf{v} = \boldsymbol{\Omega} \times \left(\boldsymbol{\Omega} \times \mathbf{r}\right),$
which is a vector perpendicular to both $\boldsymbol{\Omega}$ and $\mathbf{v}$ of magnitude $\omega|\mathbf{v}| = \omega^2 r$ and directed exactly opposite to $\mathbf{r}(t)$.
In the simplest case the speed, mass, and radius are constant.
Consider a body of one kilogram, moving in a circle of radius one metre, with an angular velocity of one radian per second.
The speed is 1 metre per second.
The inward acceleration is 1 metre per square second, $a = v^2/r = \omega^2 r$.
It is subject to a centripetal force of 1 kilogram metre per square second, which is 1 newton.
The momentum of the body is 1 kg·m·s−1.
The moment of inertia is 1 kg·m2.
The angular momentum is 1 kg·m2·s−1.
The kinetic energy is 0.5 joule.
The circumference of the orbit is 2π (~6.283) metres.
The period of the motion is 2π (~6.283) seconds.
The frequency is (2π)−1 (~0.159) hertz.
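As an illustrative aside (not part of the original article), the quantities in the worked example above can be recomputed in a few lines of Python; all variable names and printed output here are our own assumptions.

```python
import math

# Parameters of the worked example above.
m = 1.0      # mass, kg
r = 1.0      # radius, m
omega = 1.0  # angular velocity, rad/s

v = omega * r            # speed: 1 m/s
a = omega**2 * r         # inward (centripetal) acceleration: 1 m/s^2
F = m * a                # centripetal force: 1 N
p = m * v                # momentum: 1 kg*m/s
I = m * r**2             # moment of inertia: 1 kg*m^2
L = I * omega            # angular momentum: 1 kg*m^2/s
KE = 0.5 * m * v**2      # kinetic energy: 0.5 J
C = 2 * math.pi * r      # circumference: ~6.283 m
T = 2 * math.pi / omega  # period: ~6.283 s
f = 1 / T                # frequency: ~0.159 Hz

print(v, a, F, p, I, L, KE, C, T, f)
```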
In polar coordinates
During circular motion, the body moves on a curve that can be described in the polar coordinate system as a fixed distance $R$ from the center of the orbit taken as the origin, oriented at an angle $\theta(t)$ from some reference direction. See Figure 4. The displacement vector $\mathbf{r}$ is the radial vector from the origin to the particle location:
$\mathbf{r}(t) = R\,\hat{u}_R(t),$
where $\hat{u}_R(t)$ is the unit vector parallel to the radius vector at time $t$ and pointing away from the origin. It is convenient to introduce the unit vector orthogonal to $\hat{u}_R$ as well, namely $\hat{u}_\theta$. It is customary to orient $\hat{u}_\theta$ to point in the direction of travel along the orbit.
The velocity is the time derivative of the displacement:
$\mathbf{v}(t) = \frac{d\mathbf{r}(t)}{dt} = R\,\frac{d\hat{u}_R}{dt}.$
Because the radius of the circle is constant, the radial component of the velocity is zero. The unit vector $\hat{u}_R$ has a time-invariant magnitude of unity, so as time varies its tip always lies on a circle of unit radius, with an angle $\theta$ the same as the angle of $\mathbf{r}(t)$. If the particle displacement rotates through an angle $d\theta$ in time $dt$, so does $\hat{u}_R$, describing an arc on the unit circle of magnitude $d\theta$. See the unit circle at the left of Figure 4. Hence:
$\frac{d\hat{u}_R}{dt} = \frac{d\theta}{dt}\,\hat{u}_\theta,$
where the direction of the change must be perpendicular to $\hat{u}_R$ (or, in other words, along $\hat{u}_\theta$) because any change $d\hat{u}_R$ in the direction of $\hat{u}_R$ would change the size of $\hat{u}_R$. The sign is positive because an increase in $d\theta$ implies the object and $\hat{u}_R$ have moved in the direction of $\hat{u}_\theta$.
Hence the velocity becomes:
$\mathbf{v}(t) = R\,\frac{d\hat{u}_R}{dt} = R\,\frac{d\theta}{dt}\,\hat{u}_\theta = R\,\omega\,\hat{u}_\theta.$
The acceleration of the body can also be broken into radial and tangential components. The acceleration is the time derivative of the velocity:
$\mathbf{a}(t) = \frac{d\mathbf{v}(t)}{dt} = R\left(\frac{d^2\theta}{dt^2}\,\hat{u}_\theta + \frac{d\theta}{dt}\,\frac{d\hat{u}_\theta}{dt}\right).$
The time derivative of $\hat{u}_\theta$ is found the same way as for $\hat{u}_R$. Again, $\hat{u}_\theta$ is a unit vector and its tip traces a unit circle with an angle that is $\pi/2 + \theta$. Hence, an increase in angle $d\theta$ by $\mathbf{r}(t)$ implies $\hat{u}_\theta$ traces an arc of magnitude $d\theta$, and as $\hat{u}_\theta$ is orthogonal to $\hat{u}_R$, we have:
$\frac{d\hat{u}_\theta}{dt} = -\frac{d\theta}{dt}\,\hat{u}_R,$
where a negative sign is necessary to keep $d\hat{u}_\theta$ orthogonal to $\hat{u}_\theta$. (Otherwise, the angle between $\hat{u}_\theta$ and $\hat{u}_R$ would decrease with an increase in $d\theta$.) See the unit circle at the left of Figure 4. Consequently, the acceleration is:
$\mathbf{a}(t) = R\,\frac{d^2\theta}{dt^2}\,\hat{u}_\theta - R\left(\frac{d\theta}{dt}\right)^2\hat{u}_R.$
The centripetal acceleration is the radial component, which is directed radially inward:
$\mathbf{a}_R(t) = -R\left(\frac{d\theta}{dt}\right)^2\hat{u}_R = -\omega^2 R\,\hat{u}_R,$
while the tangential component changes the magnitude of the velocity:
$\mathbf{a}_\theta(t) = R\,\frac{d^2\theta}{dt^2}\,\hat{u}_\theta = \frac{dv}{dt}\,\hat{u}_\theta.$
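As a sanity check on this decomposition, the following sketch (an added illustration that assumes the sympy library and our own symbol names) differentiates $\mathbf{r}(t) = R\,\hat{u}_R$ symbolically and recovers the tangential velocity together with the radial and tangential acceleration components derived above.

```python
import sympy as sp

t = sp.symbols('t')
R = sp.symbols('R', positive=True)
theta = sp.Function('theta')(t)

# Unit vectors expressed in Cartesian components.
u_R = sp.Matrix([sp.cos(theta), sp.sin(theta)])
u_theta = sp.Matrix([-sp.sin(theta), sp.cos(theta)])

r_vec = R * u_R
v_vec = sp.simplify(r_vec.diff(t))   # equals R*theta'(t) * u_theta
a_vec = sp.simplify(v_vec.diff(t))   # R*theta'' * u_theta - R*theta'^2 * u_R

# Project the acceleration onto the unit vectors to isolate its components.
a_radial = sp.simplify(a_vec.dot(u_R))          # -R*theta'(t)**2  (centripetal)
a_tangential = sp.simplify(a_vec.dot(u_theta))  # R*theta''(t)
print(a_radial, a_tangential)
```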
Using complex numbers
Circular motion can be described using complex numbers. Let the $x$ axis be the real axis and the $y$ axis be the imaginary axis. The position of the body can then be given as $z$, a complex "vector":
$z = x + iy = R\left(\cos\theta(t) + i\sin\theta(t)\right) = Re^{i\theta(t)},$
where $i$ is the imaginary unit, and $\theta(t)$ is the argument of the complex number as a function of time, $t$.
Since the radius is constant:
$\dot{R} = \ddot{R} = 0,$
where a dot indicates differentiation with respect to time.
With this notation, the velocity becomes:
$v = \dot{z} = \frac{d\left(Re^{i\theta(t)}\right)}{dt} = iR\,\dot{\theta}\,e^{i\theta(t)} = i\omega\,Re^{i\theta(t)} = i\omega z$
and the acceleration becomes:
$a = \dot{v} = i\dot{\omega}z + i\omega\dot{z} = \left(i\dot{\omega} - \omega^2\right)z = -\omega^2 Re^{i\theta(t)} + i\dot{\omega}\,Re^{i\theta(t)}.$
The first term is opposite in direction to the displacement vector and the second is perpendicular to it, just like the earlier results.
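The complex-number description translates directly into code. The sketch below is an added illustration with assumed parameter values; it restricts itself to uniform rotation (so $\dot{\omega} = 0$) and checks numerically that $v = i\omega z$ agrees with a finite-difference derivative of the position.

```python
import cmath

R, omega = 2.0, 3.0  # assumed radius (m) and angular velocity (rad/s)
t = 0.7              # an arbitrary instant, s

z = R * cmath.exp(1j * omega * t)  # position as a complex "vector"
v = 1j * omega * z                 # velocity: perpendicular to z, magnitude omega*R
a = -(omega**2) * z                # acceleration: opposite to z, magnitude omega^2*R

# Finite-difference check of the analytic velocity (assumed step size).
h = 1e-6
z_h = R * cmath.exp(1j * omega * (t + h))
print(abs((z_h - z) / h - v))      # ~0: numerical and analytic derivatives agree
```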
Velocity
Figure 1 illustrates velocity and acceleration vectors for uniform motion at four different points in the orbit. Because the velocity $\mathbf{v}$ is tangent to the circular path, no two velocities point in the same direction. Although the object has a constant speed, its direction is always changing. This change in velocity is caused by an acceleration $\mathbf{a}$, whose magnitude is (like that of the velocity) held constant, but whose direction also is always changing. The acceleration points radially inwards (centripetally) and is perpendicular to the velocity. This acceleration is known as centripetal acceleration.
For a path of radius $r$, when an angle $\theta$ is swept out, the distance traveled on the periphery of the orbit is $s = r\theta$. Therefore, the speed of travel around the orbit is
$v = r\frac{d\theta}{dt} = r\omega,$
where the angular rate of rotation is $\omega$. (By rearrangement, $\omega = v/r$.) Thus, $v$ is a constant, and the velocity vector $\mathbf{v}$ also rotates with constant magnitude $v$, at the same angular rate $\omega$.
Relativistic circular motion
In this case, the three-acceleration vector is perpendicular to the three-velocity vector,
$\mathbf{u} \cdot \mathbf{a} = 0,$
and the square of proper acceleration, expressed as a scalar invariant, the same in all reference frames,
$\alpha^2 = \gamma^4 a^2 + \gamma^6 \left(\mathbf{u} \cdot \mathbf{a}\right)^2,$
becomes the expression for circular motion,
$\alpha^2 = \gamma^4 a^2,$
or, taking the positive square root and using the three-acceleration, we arrive at the proper acceleration for circular motion:
$\alpha = \gamma^2 \frac{v^2}{r}.$
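For a rough numerical feel, the short sketch below (an added illustration; the speed and radius are arbitrary assumed values) evaluates the proper acceleration $\alpha = \gamma^2 v^2/r$ and compares it with the coordinate acceleration $v^2/r$.

```python
import math

c = 299_792_458.0  # speed of light, m/s
v = 0.9 * c        # assumed speed
r = 10.0           # assumed radius of the circular path, m

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
coordinate_accel = v**2 / r                 # acceleration in the lab frame
proper_accel = gamma**2 * coordinate_accel  # invariant proper acceleration

print(gamma, coordinate_accel, proper_accel)
```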
Acceleration
The left-hand circle in Figure 2 is the orbit showing the velocity vectors at two adjacent times. On the right, these two velocities are moved so their tails coincide. Because speed is constant, the velocity vectors on the right sweep out a circle as time advances. For a swept angle $d\theta = \omega\,dt$ the change in $\mathbf{v}$ is a vector at right angles to $\mathbf{v}$ and of magnitude $v\,d\theta$, which in turn means that the magnitude of the acceleration is given by
$a_c = v\frac{d\theta}{dt} = v\omega = \frac{v^2}{r}.$
Non-uniform circular motion
In non-uniform circular motion, an object moves in a circular path with varying speed. Since the speed is changing, there is tangential acceleration in addition to normal acceleration.
The net acceleration is directed towards the interior of the circle (but does not pass through its center).
The net acceleration may be resolved into two components: tangential acceleration and centripetal acceleration. Unlike tangential acceleration, centripetal acceleration is present in both uniform and non-uniform circular motion.
In non-uniform circular motion, the normal force does not always point to the opposite direction of weight.
The normal force is actually the sum of the radial and tangential forces. The component of weight force is responsible for the tangential force (when we neglect friction). The centripetal force is due to the change in the direction of velocity.
The normal force and weight may also point in the same direction. Both forces can point downwards, yet the object will remain in a circular path without falling down.
The normal force can point downwards. Considering that the object is a person sitting inside a plane moving in a circle, the two forces (weight and normal force) will point down only when the plane reaches the top of the circle. The reason for this is that the normal force is the sum of the tangential force and centripetal force. The tangential force is zero at the top (as no work is performed when the motion is perpendicular to the direction of force). Since weight is perpendicular to the direction of motion of the object at the top of the circle and the centripetal force points downwards, the normal force will point down as well.
From a logical standpoint, a person travelling in that plane will be upside down at the top of the circle. At that moment, the person's seat is actually pushing down on the person, which is the normal force.
The reason why an object does not fall down when subjected to only downward forces is a simple one. Once an object is thrown into the air, there is only the downward gravitational force that acts on the object. That does not mean that once an object is thrown into the air, it will fall instantly. The velocity of the object keeps it up in the air. The first of Newton's laws of motion states that an object's inertia keeps it in motion; since the object in the air has a velocity, it will tend to keep moving in that direction.
A varying angular speed for an object moving in a circular path can also be achieved if the rotating body does not have a homogeneous mass distribution.
One can deduce the formulae for speed, acceleration and jerk by assuming that all of the variables depend on $t$; further transformations may involve these quantities and their corresponding derivatives.
Applications
Solving applications dealing with non-uniform circular motion involves force analysis. With a uniform circular motion, the only force acting upon an object traveling in a circle is the centripetal force. In a non-uniform circular motion, there are additional forces acting on the object due to a non-zero tangential acceleration. Although there are additional forces acting upon the object, the sum of all the forces acting on the object will have to be equal to the centripetal force.
Radial acceleration is used when calculating the total force. Tangential acceleration is not used in calculating total force because it is not responsible for keeping the object in a circular path. The only acceleration responsible for keeping an object moving in a circle is the radial acceleration. Since the sum of all forces is the centripetal force, drawing centripetal force into a free body diagram is not necessary and usually not recommended.
Using $F_\text{net} = ma_c$, we can draw free body diagrams to list all the forces acting on an object and then set the sum equal to $ma_c = \frac{mv^2}{r}$. Afterward, we can solve for whatever is unknown (this can be mass, velocity, radius of curvature, coefficient of friction, normal force, etc.). For example, the visual above showing an object at the top of a semicircle would be expressed as $N + mg = \frac{mv^2}{r}$.
In a uniform circular motion, the total acceleration of an object in a circular path is equal to the radial acceleration. Due to the presence of tangential acceleration in non-uniform circular motion, that does not hold true any more. To find the total acceleration of an object in non-uniform circular motion, find the vector sum of the tangential acceleration and the radial acceleration.
Radial acceleration is still equal to $\frac{v^2}{r}$. Tangential acceleration is simply the derivative of the speed at any given point: $a_t = \frac{dv}{dt}$. This root sum of squares of separate radial and tangential accelerations, $a = \sqrt{a_r^2 + a_t^2}$, is only correct for circular motion; for general motion within a plane with polar coordinates $(r, \theta)$, the Coriolis term $a_c = 2\dot{r}\dot{\theta}$ should be added to $a_t$, whereas radial acceleration then becomes $a_r = -\frac{v^2}{r} + \ddot{r}$.
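The following sketch (an added illustration; the car-on-a-curve numbers are assumptions) combines the radial and tangential components into the total acceleration as described above.

```python
import math

# Assumed values: a car speeding up while rounding a curve.
v = 20.0      # instantaneous speed, m/s
r = 50.0      # radius of the circular path, m
dv_dt = 3.0   # rate of change of speed, m/s^2

a_radial = v**2 / r                           # centripetal component, v^2/r
a_tangential = dv_dt                          # tangential component, dv/dt
a_total = math.hypot(a_radial, a_tangential)  # magnitude of the vector sum

print(a_radial, a_tangential, a_total)
```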
See also
Angular momentum
Equations of motion for circular motion
Fictitious force
Geostationary orbit
Geosynchronous orbit
Pendulum (mechanics)
Reactive centrifugal force
Reciprocating motion
Sling (weapon)
References
External links
Physclips: Mechanics with animations and video clips from the University of New South Wales
Circular Motion – a chapter from an online textbook, Mechanics, by Benjamin Crowell (2019)
Circular Motion Lecture – a video lecture on CM
– an online textbook with different analysis for circular motion
Rotation
Classical mechanics
Motion (physics)
Circles | Circular motion | [
"Physics",
"Mathematics"
] | 2,834 | [
"Physical phenomena",
"Classical mechanics",
"Rotation",
"Motion (physics)",
"Mechanics",
"Space",
"Spacetime",
"Circles",
"Pi"
] |
574,570 | https://en.wikipedia.org/wiki/Aerial%20lift | An aerial lift, also known as a cable car or ropeway, is a means of cable transport in which cabins, cars, gondolas, or open chairs are hauled above the ground by means of one or more cables. Aerial lift systems are frequently employed in a mountainous territory where roads are relatively difficult to build and use, and have seen extensive use in mining. Aerial lift systems are relatively easy to move and have been used to cross rivers and ravines. In more recent times, the cost-effectiveness and flexibility of aerial lifts have seen an increase of gondola lift being integrated into urban public transport systems.
Types
Cable car
A cable car (British English) or an aerial tramway, aerial tram (American English), uses one or two stationary ropes for support while a separate moving rope provides propulsion. The grip of an aerial tramway is permanently fixed onto the propulsion rope. Aerial trams used for urban transport include the Roosevelt Island Tramway in New York City, as well as the Portland Aerial Tram.
Gondola lift
A gondola lift consists of a continuously circulating cable that is strung between two or more stations, over intermediate supporting towers. The cable is driven by a bullwheel in a terminal, which is connected to an engine or electric motor. Multiple gondola cabins are attached to the cable, usually with detachable grips, enabling them to slow down in the stations to facilitate safe boarding. Fixed grip variants exist, although these are considerably less common. Lifts with a single cable are sometimes referred to as "mono-cable" gondola lifts. Depending on the design of the individual lift, the capacity, cost, and functionality of a gondola lift will differ dramatically. Because of the proliferation of such systems in the Alpine regions of Europe, the French language name of télécabine is also used in an English language context. Gondola lifts are also used for urban transportation. Examples include the Singapore Cable Car, Metrocable (Medellín), Metrocable (Caracas), Mi Teleférico (La Paz), and London Cable Car.
Bicable and tricable gondola lifts
Gondola lifts which feature one stationary 'support' rope and one haul rope are known as bi-cable gondola lifts, while lifts that feature two support ropes and one haul rope are known as tri-cable gondola lifts. Examples include Ngong Ping Skyrail (Hong Kong) and the Peak 2 Peak Gondola (Canada).
Funitel
A funitel differs from a standard gondola lift through the use of two overhead arms, attached to two parallel haul cables, providing more stability in high winds. The name funitel is a blend of the French words funiculaire and telepherique. Systems may sometimes be referred to as "double monocable" (DMC), where two separate haul cables are used, or "double loop monocable" (DLM) where a single haul cable is looped round twice.
Funitels combine a short time between successive cabins with a high capacity (20 to 30 people) per cabin.
Funifor
A funifor is a type of cable car with two support ropes and a haul rope, looped around. Each system is composed of a single cabin shuttling back-and-forth. Many installations are built with two parallel, but independent, lines. The funifor design was developed by the Italian manufacturer, Hölzl, which later merged with Doppelmayr Italia. Today, the design is therefore patented by Doppelmayr Garaventa Group.
At the top of each track, the haul rope loops back to the bottom instead of looping over to serve the other track, as would occur with a normal aerial tramway. This is shown in the diagram below. This feature allows for a single cabin operation when traffic warrants. The independent drive also allows for evacuations to occur by means of a bridge connection between adjacent cabins.
The main advantage of the funifor system is its stability in high wind conditions owing to the horizontal distance between the two support ropes.
Chairlift
Chairlifts are continuously circulating systems carrying chairs, which usually enable skiers to board without removing skis. They are a common type of lift at most ski areas and in mountainous areas. They can also be found at some amusement parks and tourist attractions.
Detachable chairlifts usually move far faster than fixed-grip chairlifts, typically compared with . Because the cable moves faster than most passengers could safely disembark and load, each chair is connected to the cable by a powerful spring-loaded cable grip which detaches at terminals, allowing the chair to slow considerably for convenient loading and unloading at a typical speed of , a speed slower even than fixed-grip. Chairs may be fitted with a "bubble" canopy to offer weather protection.
Hybrid lift
A hybrid lift is a fusion of a gondola lift and a chair lift. The company Leitner refers to it as telemix, while Doppelmayr uses the term combination lift. An example is Ski Arlberg's Weibermahd lift in Vorarlberg (Austria) which alternates between 8-person chairlifts and 10-person gondolas.
Hand-powered
In undeveloped areas with rough terrain, simple hand-powered cable-cars may be used for crossing rivers, such as the tuin used in Nepal.
Material ropeways
A material ropeway or ropeway conveyor is an aerial lift from which containers for goods rather than passenger cars are suspended. These are usually monocable or bicable gondola lifts.
Material ropeways are typically found around large mining concerns, and can be of considerable length. The COMILOG Cableway, which ran from Moanda in Gabon to Mbinda in the Republic of the Congo, was over 75 km in length. The Norsjö aerial tramway in Sweden had a length of 96 kilometers.
Abbreviations
The following abbreviations are frequently used in the industry:
See also
Blondin (quarry equipment)
Cable car (railway)
Funicular
List of aerial lift manufacturers
Space elevator
Surface lift, another transportation technology
References
Vertical transport devices
Articles containing video clips | Aerial lift | [
"Technology"
] | 1,254 | [
"Vertical transport devices",
"Transport systems"
] |
574,587 | https://en.wikipedia.org/wiki/KIO | KIO (KDE Input/Output) is a system library incorporated into KDE Frameworks and KDE Software Compilation 4. It provides access to files, web sites and other resources through a single consistent API. Applications, such as Konqueror and Dolphin, which are written using this framework, can operate on files stored on remote servers in exactly the same way as they operate on those stored locally, effectively making KDE network-transparent. This allows for an application like Konqueror to be both a file manager as well as a web browser.
KIO Slaves (renamed to KIO Workers during the development of KDE Frameworks 6) are libraries that provide support for individual protocols (e.g. WebDAV, FTP, SMB, SSH, FISH, SFTP, SVN, TAR).
The KDE manual app KHelpCenter has a KIOSlaves (KIOWorkers in Frameworks 6) section that lists the available protocols with a short description of each.
See also
GIO and GVfs – provides equivalent functionality for GNOME, XFCE and Cinnamon
References
External links
KIO API documentation
Source Code
A Quick and Easy Guide to KDE KIO slaves, by Tavis J. Hampton
KDE Frameworks
KDE Platform | KIO | [
"Technology"
] | 261 | [
"KDE Platform",
"KDE Frameworks",
"Computing platforms"
] |
574,759 | https://en.wikipedia.org/wiki/Feed%20forward%20%28control%29 | A feed forward (sometimes written feedforward) is an element or pathway within a control system that passes a controlling signal from a source in its external environment to a load elsewhere in its external environment. This is often a command signal from an external operator.
In control engineering, a feedforward control system is a control system that uses sensors to detect disturbances affecting the system and then applies an additional input to minimize the effect of the disturbance. This requires a mathematical model of the system so that the effect of disturbances can be properly predicted.
A control system which has only feed-forward behavior responds to its control signal in a pre-defined way without responding to the way the system reacts; it is in contrast with a system that also has feedback, which adjusts the input to take account of how it affects the system, and how the system itself may vary unpredictably.
In a feed-forward system, the control variable adjustment is not error-based. Instead it is based on knowledge about the process in the form of a mathematical model of the process and knowledge about, or measurements of, the process disturbances.
Some prerequisites are needed for a control scheme to be reliable by pure feed-forward without feedback: the external command or controlling signal must be available, and the effect of the output of the system on the load should be known (which usually means that the load must be predictably unchanging with time). Sometimes pure feed-forward control without feedback is called 'ballistic', because once a control signal has been sent, it cannot be further adjusted; any corrective adjustment must be by way of a new control signal. In contrast, 'cruise control' adjusts the output in response to the load that it encounters, by a feedback mechanism.
These systems could relate to control theory, physiology, or computing.
Overview
With feed-forward or feedforward control, the disturbances are measured and accounted for before they have time to affect the system. In the house example, a feed-forward system may measure the fact that the door is opened and automatically turn on the heater before the house can get too cold. The difficulty with feed-forward control is that the effects of the disturbances on the system must be accurately predicted, and there must not be any unmeasured disturbances. For instance, if a window was opened that was not being measured, the feed-forward-controlled thermostat might let the house cool down.
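As a toy illustration of the thermostat example, the sketch below (entirely hypothetical names and gains, not a real controller design) applies a feed-forward correction as soon as the measured disturbance, the open door, is detected, before the room temperature has had time to drop.

```python
def heater_output(setpoint, door_open, base_gain=1.0, door_feedforward=5.0):
    """Pure feed-forward control: act on the measured disturbance,
    not on the resulting temperature error (illustrative model only)."""
    command = base_gain * setpoint
    if door_open:
        # The predicted heat loss is compensated before it affects the room.
        command += door_feedforward
    return command

# An unmeasured disturbance (e.g. an open window) is simply never
# compensated, which is the weakness discussed above.
print(heater_output(setpoint=21.0, door_open=True))
```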
The term has specific meaning within the field of CPU-based automatic control. The discipline of feedforward control as it relates to modern, CPU based automatic controls is widely discussed, but is seldom practiced due to the difficulty and expense of developing or providing for the mathematical model required to facilitate this type of control. Open-loop control and feedback control, often based on canned PID control algorithms, are much more widely used.
There are three types of control systems: open loop, feed-forward, and feedback. An example of a pure open loop control system is manual non-power-assisted steering of a motor car; the steering system does not have access to an auxiliary power source and does not respond to varying resistance to turning of the direction wheels; the driver must make that response without help from the steering system. In comparison, power steering has access to a controlled auxiliary power source, which depends on the engine speed. When the steering wheel is turned, a valve is opened which allows fluid under pressure to turn the driving wheels. A sensor monitors that pressure so that the valve only opens enough to cause the correct pressure to reach the wheel turning mechanism. This is feed-forward control where the output of the system, the change in direction of travel of the vehicle, plays no part in the system. See Model predictive control.
If the driver is included in the system, then they provide a feedback path by observing the direction of travel and compensating for errors by turning the steering wheel. In that case the combination is a feedback system, and the block labeled System in Figure(c) is a feed-forward system.
In other words, systems of different types can be nested, and the overall system regarded as a black-box.
Feedforward control is distinctly different from open loop control and teleoperator systems. Feedforward control requires a mathematical model of the plant (process and/or machine being controlled) and the plant's relationship to any inputs or feedback the system might receive. Neither open loop control nor teleoperator systems require the sophistication of a mathematical model of the physical system or plant being controlled. Control based on operator input without integral processing and interpretation through a mathematical model of the system is a teleoperator system and is not considered feedforward control.
History
Historically, the use of the term feedforward is found in works by Harold S. Black in US patent 1686792 (invented 17 March 1923) and D. M. MacKay as early as 1956. While MacKay's work is in the field of biological control theory, he speaks only of feedforward systems. MacKay does not mention feedforward control or allude to the discipline of feedforward controls. MacKay and other early writers who use the term feedforward are generally writing about theories of how human or animal brains work. Black also has US patent 2102671 invented 2 August 1927 on the technique of feedback applied to electronic systems.
The discipline of feedforward controls was largely developed by professors and graduate students at Georgia Tech, MIT, Stanford and Carnegie Mellon. Feedforward is not typically hyphenated in scholarly publications. Meckl and Seering of MIT and Book and Dickerson of Georgia Tech began the development of the concepts of Feedforward Control in the mid-1970s. The discipline of Feedforward Controls was well defined in many scholarly papers, articles and books by the late 1980s.
Benefits
The benefits of feedforward control are significant and can often justify the extra cost, time and effort required to implement the technology. Control accuracy can often be improved by as much as an order of magnitude if the mathematical model is of sufficient quality and implementation of the feedforward control law is well thought out. Energy consumption by the feedforward control system and its driver is typically substantially lower than with other controls. Stability is enhanced such that the controlled device can be built of lower cost, lighter weight, springier materials while still being highly accurate and able to operate at high speeds. Other benefits of feedforward control include reduced wear and tear on equipment, lower maintenance costs, higher reliability and a substantial reduction in hysteresis. Feedforward control is often combined with feedback control to optimize performance.
Model
The mathematical model of the plant (machine, process or organism) used by the feedforward control system may be created and input by a control engineer or it may be learned by the control system. Control systems capable of learning and/or adapting their mathematical model have become more practical as microprocessor speeds have increased. The discipline of modern feedforward control was itself made possible by the invention of microprocessors.
Feedforward control requires integration of the mathematical model into the control algorithm such that it is used to determine the control actions based on what is known about the state of the system being controlled. In the case of control for a lightweight, flexible robotic arm, this could be as simple as compensating between when the robot arm is carrying a payload and when it is not. The target joint angles are adjusted to place the payload in the desired position based on knowing the deflections in the arm from the mathematical model's interpretation of the disturbance caused by the payload. Systems that plan actions and then pass the plan to a different system for execution do not satisfy the above definition of feedforward control. Unless the system includes a means to detect a disturbance or receive an input and process that input through the mathematical model to determine the required modification to the control action, it is not true feedforward control.
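A minimal sketch of the payload-compensation idea follows, assuming a hypothetical one-joint arm whose deflection is linear in payload mass; the function and parameter names are illustrative and not taken from any cited controller.

```python
def commanded_angle(target_angle, payload_kg, compliance_rad_per_kg=0.01):
    """Feed-forward compensation: the deflection predicted by a simple
    arm model is added to the command so the tip lands on target."""
    predicted_deflection = compliance_rad_per_kg * payload_kg
    return target_angle + predicted_deflection

# With a 3 kg payload the joint is commanded slightly past the target
# to cancel the predicted sag.
print(commanded_angle(target_angle=1.2, payload_kg=3.0))
```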
Open system
In systems theory, an open system is a feed-forward system that does not have any feedback loop to control its output. In contrast, a closed system relies on a feedback loop to control the operation of the system. In an open system, the output of the system is not fed back into the input of the system for control or operation.
Applications
Physiological feed-forward system
In physiology, feed-forward control is exemplified by the normal anticipatory regulation of heartbeat in advance of actual physical exertion by the central autonomic network. Feed-forward control can be likened to learned anticipatory responses to known cues (predictive coding). Feedback regulation of the heartbeat provides further adaptiveness to the running eventualities of physical exertion. Feedforward systems are also found in biological control of other variables by many regions of animals brains.
Even in the case of biological feedforward systems, such as in the human brain, knowledge or a mental model of the plant (body) can be considered to be mathematical as the model is characterized by limits, rhythms, mechanics and patterns.
A pure feed-forward system is different from a homeostatic control system, which has the function of keeping the body's internal environment 'steady' or in a 'prolonged steady state of readiness.' A homeostatic control system relies mainly on feedback (especially negative), in addition to the feedforward elements of the system.
Gene regulation and feed-forward
Feed-forward loops (FFLs), three-node graphs of the form "A affects B and C, and B affects C", are frequently observed in transcription networks in several organisms including E. coli and S. cerevisiae, suggesting that they perform functions that are important for these organisms. In the extensively studied transcription networks of E. coli and S. cerevisiae, FFLs occur approximately three times more frequently than expected in random (Erdős–Rényi) networks.
Edges in transcription networks are directed and signed, as they represent activation (+) or repression (-). The sign of a path in a transcription network can be obtained by multiplying the signs of the edges in the path, so a path with an odd number of negative signs is negative. There are eight possible three-node FFLs as each of the three arrows can be either repression or activation, which can be classified into coherent or incoherent FFLs. Coherent FFLs have the same sign for both the paths from A to C, and incoherent FFLs have different signs for the two paths.
The temporal dynamics of FFLs show that coherent FFLs can act as sign-sensitive delays that filter input into the circuit. Consider the differential equations for a Type-1 coherent FFL, where all the arrows are positive:
$\frac{dB}{dt} = f_B(A) - \alpha_B B, \qquad \frac{dC}{dt} = f_C(A, B) - \alpha_C C,$
where $f_B$ and $f_C$ are increasing functions in $A$ and $B$ representing production, and $\alpha_B$ and $\alpha_C$ are rate constants representing degradation or dilution of $B$ and $C$ respectively. $f_C$ can represent an AND gate, where $f_C = 0$ if either $A < K_A$ or $B < K_B$, for instance $f_C(A, B) = \theta(A - K_A)\,\theta(B - K_B)$ where $\theta$ denotes a step function. In this case the FFL creates a time-delay between a sustained on-signal, i.e. an increase in $A$, and the output increase in $C$. This is because production of $A$ must first induce production of $B$, which is then needed to induce production of $C$. However, there is no time-delay in $C$ for an off-signal because a reduction of $A$ immediately results in a decrease in the production term $f_C(A, B)$. This system therefore filters out fluctuations in the on-signal and detects persistent signals. This is particularly relevant in settings with stochastically fluctuating signals. In bacteria these circuits create time delays ranging from a few minutes to a few hours.
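To visualise the sign-sensitive delay, the sketch below (an added illustration; the step-function AND gate and all rate constants are assumptions) integrates the two equations with a simple Euler scheme: $C$ rises only after $B$ has crossed its threshold, but decays immediately once the ON signal is removed.

```python
def simulate(a_signal, K_A=0.5, K_B=0.5, alpha_B=1.0, alpha_C=1.0, dt=0.01):
    """Euler integration of a type-1 coherent FFL with an AND gate."""
    B = C = 0.0
    trace = []
    for A in a_signal:
        production_B = 1.0 if A > K_A else 0.0
        production_C = 1.0 if (A > K_A and B > K_B) else 0.0  # AND gate
        B += dt * (production_B - alpha_B * B)
        C += dt * (production_C - alpha_C * C)
        trace.append((B, C))
    return trace

steps = 500
trace = simulate([1.0] * steps + [0.0] * steps)  # sustained ON step, then OFF
# C lags the ON step (B must first exceed K_B) but decays immediately at OFF.
print(trace[10], trace[steps - 1], trace[steps + 10])
```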
Similarly, an inclusive-OR gate, in which $C$ is activated by either $A$ or $B$, is a sign-sensitive delay with no delay after the ON step but with a delay after the OFF step. This is because an ON pulse immediately activates $B$ and $C$, but an OFF step does not immediately result in deactivation of $C$ because $B$ can still be active. This can protect the system from fluctuations that result in the transient loss of the ON signal and can also provide a form of memory. Kalir, Mangan, and Alon, 2005 show that the regulatory system for flagella in E. coli is regulated with a Type 1 coherent feedforward loop.
For instance, the regulation of the shift from one carbon source to another in diauxic growth in E. coli can be controlled via a type-1 coherent FFL. In diauxic growth, cells grow using two carbon sources by first rapidly consuming the preferred carbon source, and then slowing growth in a lag phase before consuming the second, less preferred carbon source. In E. coli, glucose is preferred over both arabinose and lactose. The absence of glucose is signaled via a small molecule, cAMP. Diauxic growth in glucose and lactose is regulated by a simple regulatory system involving cAMP and the lac operon. However, growth in arabinose is regulated by a feedforward loop with an AND gate which confers an approximately 20 minute time delay between the ON-step, in which cAMP concentration increases as glucose is depleted, and the expression of arabinose transporters. There is no time delay with the OFF signal, which occurs when glucose is present. This prevents the cell from shifting to growth on arabinose based on short-term fluctuations in glucose availability.
Additionally, feedforward loops can facilitate cellular memory. Doncic and Skotheim (2003) show this effect in the regulation of mating in yeast, where extracellular mating pheromone induces mating behavior, including preventing cells from entering the cell cycle. The mating pheromone activates the MAPK pathway, which then activates the cell-cycle inhibitor Far1 and the transcription factor Ste12, which in turn increases the synthesis of inactive Far1. In this system, the concentration of active Far1 depends on the time integral of a function of the external mating pheromone concentration. This dependence on past levels of mating pheromone is a form of cellular memory, and this system simultaneously allows for stability and reversibility.
Incoherent feedforward loops, in which the two paths from the input to the output node have different signs, result in short pulses in response to an ON signal. In this system, input A simultaneously directly increases and indirectly decreases synthesis of output node C. If the indirect path to C (via B) is slower than the direct path, a pulse of output is produced in the time period before levels of B are high enough to inhibit synthesis of C. Response to epidermal growth factor (EGF) in dividing mammalian cells is an example of a Type-1 incoherent FFL.
The frequent observation of feed-forward loops in various biological contexts across multiple scales suggests that they have structural properties that are highly adaptive in many contexts. Several theoretical and experimental studies including those discussed here show that FFLs create a mechanism for biological systems to process and store information, which is important for predictive behavior and survival in complex dynamically changing environments.
Feed-forward systems in computing
In computing, feed-forward normally refers to a perceptron network in which the outputs from all neurons go to following but not preceding layers, so there are no feedback loops. The connections are set up during a training phase, which in effect is when the system is a feedback system.
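A minimal sketch of such a network (illustrative weights and layer sizes; the training phase is not shown) makes the forward-only flow of signals explicit.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer with a logistic activation."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

x = [0.5, -1.0]                                            # input vector
hidden = layer(x, [[0.1, 0.8], [-0.4, 0.3]], [0.0, 0.1])   # inputs -> hidden
output = layer(hidden, [[1.2, -0.7]], [0.05])              # hidden -> output
print(output)  # activations flow forward only; there is no feedback loop
```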
Long distance telephony
In the early 1970s, intercity coaxial transmission systems, including L-carrier, used feed-forward amplifiers to diminish linear distortion. This more complex method allowed wider bandwidth than earlier feedback systems. Optical fiber, however, made such systems obsolete before many were built.
Automation and machine control
Feedforward control is a discipline within the field of automatic controls used in automation.
Parallel feed-forward compensation with derivative (PFCD)
This method is a relatively new technique that changes the phase of an open-loop transfer function of a non-minimum phase system into minimum phase.
See also
Black box
Smith predictor
References
Further reading
S. Mangan A. Zaslaver & U. Alon, "The coherent feed-forward loop serves as a sign-sensitive delay element in transcription networks", J. Molecular Biology 334:197-204 (2003).
Foss, S., Foss, K., & Trapp. (2002). Contemporary Perspectives on Rhetoric (3rd ed.). Waveland Press, Inc.
Book, W.J. and Cetinkunt, S., "Optimum Control of Flexible Robot Arms OR Fixed Paths", IEEE Conference on Decision and Control. December 1985.
Meckl, P.H. and Seering, W.P., "Feedforward Control Techniques Achieve Fast Settling Time in Robots", Automatic Control Conference Proceedings. 1986, pp 58–64.
Sakawa, Y., Matsuno, F. and Fukushima, S., "Modeling and Feedback Control of a Flexible Arm", Journal of Robotic Systems. August 1985, pp 453–472.
Truckenbrodt, A., "Modeling and Control of Flexible Manipulator Structures", 4th CISM-IFToMM Symp., Warszawa, 1981.
Leu, M.C., Dukovski, V. and Wang, K.K., "An Analytical and Experimental Study of the Stiffness of Robot Manipulators with Parallel Mechanisms", 1985 ASME Winter Annual Meeting PRD-Vol. 15 Robotics and Manufacturing Automation, pp. 137–144
Asada, H., Youcef-Toumi, K. and Ramirez, R.B., "Designing of the MIT Direct Drive Arm", Int. Symp. on Design and Synthesis, Japan, July 1984.
Ramirez, R.B., Design of a High Speed Graphite Composite Robot Arm, M.S. Thesis, M.E. Dept., MIT, Feb. 1984.
Balas, M.J., "Feedback Control of Flexible Systems", IEEE Trans. on Automatic Control, Vol.AC-23, No.4, Aug. 1978, pp. 673–679.
Balas, M.J., "Active Control of Flexible Systems", J. of Optim. Th. and App., Vol.25, No.3, July 1978,
Book, W.J., Maizzo Neto, 0. and Whitney, D.E., "Feedback Control of Two Beam, Two Joint Systems With Distributed Flexibility", Journal of Dynamic Systems, Measurement and Control, Vol.97, No.4, December 1975, pp. 424–430.
Book, W.J., "Analysis of Massless Elastic Chains With Servo Controlled Joints", Journal of Dynamic Systems, Measurement and Control, Vol.101, September 1979, pp. 187–192.
Book, W.J., "Recursive Lagrangian Dynamics of Flexible Manipulator Arms Via Transformation Matrices", Carnegie-Mellon University Robotics Institute Technical Report, CMU-RI-TR-8323, Dec. 1983.
Hughes, P.C., "Dynamics of a Flexible Manipulator Arm for the Space Shuttle", AAS/AIAA Astrodynamics Conference, September 1977, Jackson Lake Lodge, Wyoming.
Hughes, P.C., "Dynamics of a Chain of Flexible Bodies", Journal of Astronautical Sciences, 27,4, Oct.-Dec. 1979, pp. 359–380.
Meirovitch, L., "Modeling and control of Distributed Structures" Proc. of the Workshop on Application of Distributed System Theory to Large Space Structures, JPL/CIT, NTIS #N83- 36064, July 1, 1983.
Schmitz, E., "Experiments on the End-point Position Control of a Very Flexible One Link Manipulator", Ph.D. Dissertation, Stanford Univ., Dept. of Aero & Astro., June 1985.
Martin, G.D., On the Control of Flexible Mechanical Systems, Ph.D. Dissertation, Stanford Univ., Dept. of E.E., May 1978.
Zalucky, A. and Hardt, D.E., "Active Control of Robot Structure Deflections", J. of Dynamic Systems, Measurement and Control, Vol. 106, March 1984, pp. 63–69.
Sangveraphunsiri, V., The Optimal Control and Design of a Flexible Manipulator Arm, Ph.D. Dissertation, Dept. of Mech. Eng., Georgia Inst. of Tech., 1984–1985.
Nemir, D. C, Koivo, A. J., and Kashyap, R. L., "Pseudolinks and the Self-Tuning Control of a Nonrigid Link Mechanism", Purdue University, Advance copy submitted for publication, 1987.
Widmann, G. R. and Ahmad, S., "Control of Industrial Robots with Flexible Joints", Purdue University, Advance copy submitted for publication, 1987.
Hollars, M. G., Uhlik, C. R., and Cannon, R. H., "Comparison of Decoupled and Exact Computed Torque Control for Robots with Elastic Joints", Advance copy submitted for publication, 1987.
Cannon, R. H. and Schmitz, E., "Initial Experiments on the End- Point Control of a Flexible One Link Robot", International Journal of Robotics Research, November 1983.
Oosting, K.W. and Dickerson, S.L., "Low-Cost, High Speed Automated Inspection", 1991, Industry Report
Oosting, K.W. and Dickerson, S.L., "Feed Forward Control for Stabilization", 1987, ASME
Oosting, K.W. and Dickerson, S.L., "Control of a Lightweight Robot Arm", 1986, IEEE International Conference on Industrial Automation
Oosting, K.W., "Actuated Feedforward Controlled Solar Tracking System", 2009, Patent Pending
Oosting, K.W., "Feedforward Control System for a Solar Tracker", 2009, Patent Pending
Oosting, K.W., "Smart Solar Tracking", July, 2010, InterSolar NA Presentation
Control theory
Artificial neural networks
Neuroethology concepts | Feed forward (control) | [
"Mathematics",
"Engineering"
] | 4,596 | [
"Applied mathematics",
"Control theory",
"Automation",
"Control engineering",
"Dynamical systems"
] |
574,775 | https://en.wikipedia.org/wiki/Abstraction%20layer | In computing, an abstraction layer or abstraction level is a way of hiding the working details of a subsystem. Examples of software models that use layers of abstraction include the OSI model for network protocols, OpenGL, and other graphics libraries, which allow the separation of concerns to facilitate interoperability and platform independence.
In computer science, an abstraction layer is a generalization of a conceptual model or algorithm, away from any specific implementation. These generalizations arise from broad similarities that are best encapsulated by models that express similarities present in various specific implementations. The simplification provided by a good abstraction layer allows for easy reuse by distilling a useful concept or design pattern so that situations where it may be accurately applied can be quickly recognized. Merely composing lower-level elements into a construct does not count as an abstraction layer unless it shields users from its underlying complexity.
A layer is considered to be on top of another if it depends on it. Every layer can exist without the layers above it, and requires the layers below it to function. Frequently abstraction layers can be composed into a hierarchy of abstraction levels. The OSI model comprises seven abstraction layers. Each layer of the model encapsulates and addresses a different part of the needs of digital communications, thereby reducing the complexity of the associated engineering solutions.
A famous aphorism of David Wheeler is, "All problems in computer science can be solved by another level of indirection." This is often deliberately misquoted with "abstraction" substituted for "indirection." It is also sometimes misattributed to Butler Lampson. Kevlin Henney's corollary to this is, "...except for the problem of too many layers of indirection."
Computer architecture
In a computer architecture, a computer system is usually represented as consisting of several abstraction levels such as:
software
programmable logic
hardware
Programmable logic is often considered part of the hardware, while the logical definitions are also sometimes seen as part of a device's software or firmware. Firmware may include only low-level software, but can also include all software, including an operating system and applications. The software layers can be further divided into hardware abstraction layers, physical and logical device drivers, repositories such as filesystems, operating system kernels, middleware, applications, and others. A distinction can also be made between low-level programming languages such as VHDL, machine language, and assembly language, and higher-level layers such as compiled languages, interpreters, and scripting languages.
Input and output
In the Unix operating system, most types of input and output operations are considered to be streams of bytes read from a device or written to a device. This stream of bytes model is used for file I/O, socket I/O, and terminal I/O in order to provide device independence. In order to read and write to a device at the application level, the program calls a function to open the device, which may be a real device such as a terminal or a virtual device such as a network port or a file in a file system. The device's physical characteristics are mediated by the operating system which in turn presents an abstract interface that allows the programmer to read and write bytes from/to the device. The operating system then performs the actual transformation needed to read and write the stream of bytes to the device.
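A short Python sketch illustrates this device independence (the file path and host below are placeholders): the same byte-stream read works whether the underlying object is a file on disk or a network socket.

```python
import socket

def read_some_bytes(stream, n=64):
    """Works on any object exposing the abstract byte-stream interface."""
    return stream.read(n)

with open("/etc/hostname", "rb") as f:  # a file on disk
    print(read_some_bytes(f))

conn = socket.create_connection(("example.com", 80))
conn.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
with conn.makefile("rb") as net_stream:  # a network socket viewed as a stream
    print(read_some_bytes(net_stream))
conn.close()
```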
Graphics
Most graphics libraries such as OpenGL provide an abstract graphical device model as an interface. The library is responsible for translating the commands provided by the programmer into the specific device commands needed to draw the graphical elements and objects. The specific device commands for a plotter are different from the device commands for a CRT monitor, but the graphics library hides the implementation and device-dependent details by providing an abstract interface which provides a set of primitives that are generally useful for drawing graphical objects.
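The following sketch mirrors that idea with hypothetical class and method names: callers draw through one abstract interface, and each backend translates the primitives into its own device-specific commands.

```python
from abc import ABC, abstractmethod

class GraphicsDevice(ABC):
    """Abstract drawing interface; callers never see device commands."""
    @abstractmethod
    def draw_line(self, x1, y1, x2, y2): ...

class PlotterDevice(GraphicsDevice):
    def draw_line(self, x1, y1, x2, y2):
        print(f"PEN DOWN; MOVE {x1},{y1} -> {x2},{y2}; PEN UP")

class CRTDevice(GraphicsDevice):
    def draw_line(self, x1, y1, x2, y2):
        print(f"rasterize segment ({x1},{y1})-({x2},{y2}) into framebuffer")

def draw_triangle(dev: GraphicsDevice):
    # Device-independent code: only abstract primitives are used.
    dev.draw_line(0, 0, 1, 0)
    dev.draw_line(1, 0, 0, 1)
    dev.draw_line(0, 1, 0, 0)

for device in (PlotterDevice(), CRTDevice()):
    draw_triangle(device)
```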
See also
Application programming interface (API)
Application binary interface (ABI)
Compiler, a tool for abstraction between source code and machine code
Hardware abstraction
Information hiding
Layer (object-oriented design)
Namespace violation
Protection ring
Operating system, an abstraction layer between a program and computer hardware
Software engineering
References
Computer architecture
Abstraction | Abstraction layer | [
"Technology",
"Engineering"
] | 845 | [
"Computers",
"Computer engineering",
"Computer architecture"
] |
574,917 | https://en.wikipedia.org/wiki/Autocrine%20signaling | Autocrine signaling is a form of cell signaling in which a cell secretes a hormone or chemical messenger (called the autocrine agent) that binds to autocrine receptors on that same cell, leading to changes in the cell. This can be contrasted with paracrine signaling, intracrine signaling, or classical endocrine signaling.
Examples
An example of an autocrine agent is the cytokine interleukin-1 in monocytes. When interleukin-1 is produced in response to external stimuli, it can bind to cell-surface receptors on the same cell that produced it.
Another example occurs in activated T cell lymphocytes, i.e., when a T cell is induced to mature by binding to a peptide:MHC complex on a professional antigen-presenting cell and by the B7:CD28 costimulatory signal. Upon activation, "low-affinity" IL-2 receptors are replaced by "high-affinity" IL-2 receptors consisting of α, β, and γ chains. The cell then releases IL-2, which binds to its own new IL-2 receptors, causing self-stimulation and ultimately a monoclonal population of T cells. These T cells can then go on to perform effector functions such as macrophage activation, B cell activation, and cell-mediated cytotoxicity.
Cancer
Tumor development is a complex process that requires cell division, growth, and survival. One approach used by tumors to upregulate growth and survival is through autocrine production of growth and survival factors. Autocrine signaling plays critical roles in cancer activation and also in providing self-sustaining growth signals to tumors.
In the Wnt pathway
Normally, the Wnt signaling pathway leads to stabilization of β-catenin through inactivation of a protein complex containing the tumor suppressors APC and Axin. This destruction complex normally triggers β-catenin phosphorylation, inducing its degradation. De-regulation of the autocrine Wnt signaling pathway via mutations in APC and Axin has been linked to activation of various types of human cancer. Genetic alterations that lead to de-regulation of the autocrine Wnt pathway result in transactivation of epidermal growth factor receptor (EGFR) and other pathways, in turn contributing to proliferation of tumor cells. In colorectal cancer, for example, mutations in APC, axin, or β-catenin promote β-catenin stabilization and transcription of genes encoding cancer-associated proteins. Furthermore, in human breast cancer, interference with the de-regulated Wnt signaling pathway reduces proliferation and survival of cancer cells. These findings suggest that interference with Wnt signaling at the ligand-receptor level may improve the effectiveness of cancer therapies.
IL-6
Interleukin 6 (acronym: IL-6) is a cytokine that is important for many aspects of cellular biology including immune responses, cell survival, apoptosis, as well as proliferation. Several studies have outlined the importance of autocrine IL-6 signaling in lung and breast cancers. For example, one group found a positive correlation between persistently activated tyrosine-phosphorylated STAT3 (pSTAT3), found in 50% of lung adenocarcinomas, and IL-6. Further investigation revealed that mutant EGFR could activate the oncogenic STAT3 pathway via upregulated IL-6 autocrine signaling.
Similarly, HER2 overexpression occurs in approximately a quarter of breast cancers and correlates with poor prognosis. Recent research revealed that IL-6 secretion induced by HER2 overexpression activated STAT3 and altered gene expression, resulting in an autocrine loop of IL-6/STAT3 expression. Both mouse and human in vivo models of HER2-overexpressing breast cancers relied critically on this HER2–IL-6–STAT3 signaling pathway. Another group found that high serum levels of IL-6 correlated with poor outcome in breast cancer tumors. Their research showed that autocrine IL-6 signaling induced malignant features in Notch-3 expressing mammospheres.
IL-7
A study demonstrates how autocrine production of the IL-7 cytokine by T-cell acute lymphoblastic leukemia (T-ALL) cells can be involved in the oncogenic development of T-ALL and offers novel insights into T-ALL spreading.
VEGF
Another agent involved in autocrine cancer signaling is vascular endothelial growth factor (VEGF). VEGF, produced by carcinoma cells, acts through paracrine signaling on endothelial cells and through autocrine signaling on carcinoma cells. Evidence shows that autocrine VEGF is involved in two major aspects of invasive carcinoma: survival and migration. Moreover, it was shown that tumor progression selects for cells that are VEGF-dependent, challenging the belief that VEGF's role in cancer is limited to angiogenesis. Instead, this research suggests that VEGF receptor-targeted therapeutics may impair cancer survival and invasion as well as angiogenesis.
Promotion of metastasis
Metastasis is a major cause of cancer deaths, and strategies to prevent or halt invasion are lacking. One study showed that autocrine PDGFR signaling plays an essential role in epithelial-mesenchymal transition (EMT) maintenance in vitro, which is known to correlate well with metastasis in vivo. The authors showed that the metastatic potential of oncogenic mammary epithelial cells required an autocrine PDGF/PDGFR signaling loop, and that cooperation of autocrine PDGFR signaling with oncogenic signaling was required for survival during EMT. Autocrine PDGFR signaling also contributes to maintenance of EMT, possibly through activation of STAT1 and other distinct pathways. In addition, expression of PDGFRα and -β correlated with invasive behavior in human mammary carcinomas. This indicates the numerous pathways through which autocrine signaling can regulate metastatic processes in a tumor.
Development of therapeutic targets
The growing knowledge behind the mechanism of autocrine signaling in cancer progression has revealed new approaches for therapeutic treatment. For example, autocrine Wnt signaling could provide a novel target for therapeutic intervention by means of Wnt antagonists or other molecules that interfere with ligand-receptor interactions of the Wnt pathway. In addition, VEGF-A production and VEGFR-2 activation on the surface of breast cancer cells indicates the presence of a distinct autocrine signaling loop that enables breast cancer cells to promote their own growth and survival by phosphorylation and activation of VEGFR-2. This autocrine loop is another example of an attractive therapeutic target.
In HER2 overexpressing breast cancers, the HER2–IL-6–STAT3 signaling relationship could be targeted to develop new therapeutic strategies. HER2 kinase inhibitors, such as lapatinib, have also demonstrated clinical efficacy in HER2 overexpressing breast cancers by disrupting a neuregulin-1 (NRG1)-mediated autocrine loop.
In the case of PDGFR signalling, overexpression of a dominant-negative PDGFR and application of the cancer drug STI571 are both approaches being explored to interfere therapeutically with metastasis in mice.
In addition, drugs may be developed that activate autocrine signaling in cancer cells that would not otherwise occur. For example, a small-molecule mimetic of Smac/Diablo that counteracts the inhibition of apoptosis has been shown to enhance apoptosis caused by chemotherapeutic drugs through autocrine-secreted tumor necrosis factor alpha (TNFα). In response to autocrine TNFα signaling, the Smac mimetic promotes formation of a RIPK1-dependent caspase-8-activating complex, leading to apoptosis.
Role in drug resistance
Recent studies have reported the ability of drug-resistant cancer cells to acquire mitogenic signals from previously neglected autocrine loops, causing tumor recurrence.
For example, despite widespread expression of epidermal growth factor receptors (EGFRs) and EGF family ligands in non-small-cell lung cancer (NSCLC), EGFR-specific tyrosine kinase inhibitors such as gefitinib have shown limited therapeutic success. This resistance is proposed to be because autocrine growth signaling pathways distinct from EGFR are active in NSCLC cells. Gene expression profiling revealed the prevalence of specific fibroblast growth factors (FGFs) and FGF receptors in NSCLC cell lines, and found that FGF2, FGF9 and their receptors encompass a growth factor autocrine loop that is active in a subset of gefitinib-resistant NSCLC cell lines.
In breast cancer, the acquisition of tamoxifen resistance is another major therapeutic problem. It has been shown that phosphorylation of STAT3 and RANTES expression are increased in response to tamoxifen in human breast cancer cells. In a recent study, one group showed that STAT3 and RANTES contribute to the maintenance of drug resistance by upregulating anti-apoptotic signals and inhibiting caspase cleavage. These mechanisms of STAT3-RANTES autocrine signaling suggest a novel strategy for management of patients with tamoxifen-resistant tumors.
See also
Paracrine signaling is a form of cell-cell communication in which a cell produces a signal to induce changes in nearby cells, altering their behavior or differentiation.
Intracrine
Local hormone
Endocrine system
References
External links
"Autocrine versus juxtacrine signaling modes" - illustration at sysbio.org
Signal transduction | Autocrine signaling | [
"Chemistry",
"Biology"
] | 2,035 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
575,176 | https://en.wikipedia.org/wiki/Antifungal | An antifungal medication, also known as an antimycotic medication, is a pharmaceutical fungicide or fungistatic used to treat and prevent mycosis such as athlete's foot, ringworm, candidiasis (thrush), serious systemic infections such as cryptococcal meningitis, and others. Such drugs are usually obtained by a doctor's prescription, but a few are available over the counter (OTC). The evolution of antifungal resistance is a growing threat to health globally.
Routes of administration
Ocular
Indicated when the fungal infection is located in the eye. Only one ocular antifungal, natamycin, is currently available, although various other antifungal agents can be compounded into an ophthalmic formulation.
Intrathecal
Used occasionally when there is an infection of the central nervous system and other systemic options cannot reach the concentration required in that region for therapeutic benefit. Example(s): amphotericin B.
Vaginal
This may be used to treat some fungal infections of the vaginal region. An example of a condition they are sometimes used for is candidal vulvovaginitis, which is treated with intravaginal clotrimazole.
Topical
This is sometimes indicated when there's a fungal infection on the skin. An example is tinea pedis; this is sometimes treated with topical terbinafine.
Oral
If the antifungal has good bioavailability, this is a common route for treating a fungal infection. An example is the use of ketoconazole to treat coccidioidomycosis.
Intravenous
Like the oral route, this will reach the bloodstream and distribute throughout the body. However, it is faster and a good option if the drug has poor bioavailability. An example of this is IV amphotericin B for the treatment of coccidioidomycosis.
Classes
The available classes of antifungal drugs are still limited, but as of 2021, novel classes of antifungals are being developed and are undergoing various stages of clinical trials to assess their performance.
Polyenes
A polyene is a molecule with multiple conjugated double bonds. A polyene antifungal is a macrocyclic polyene with a heavily hydroxylated region on the ring opposite the conjugated system, which makes polyene antifungals amphiphilic. The polyene antimycotics bind with sterols in the fungal cell membrane, principally ergosterol. This changes the transition temperature (Tg) of the cell membrane, placing the membrane in a less fluid, more crystalline state. (In ordinary circumstances, membrane sterols increase the packing of the phospholipid bilayer, making the plasma membrane denser.) As a result, the cell's contents, including monovalent ions (K+, Na+, H+, and Cl−) and small organic molecules, leak out; this is regarded as one of the primary ways a cell dies. Animal cells contain cholesterol instead of ergosterol and so are much less susceptible. However, at therapeutic doses, some amphotericin B may bind to animal membrane cholesterol, increasing the risk of human toxicity. Amphotericin B is nephrotoxic when given intravenously. As a polyene's hydrophobic chain is shortened, its sterol-binding activity increases. Therefore, further reduction of the hydrophobic chain may result in binding to cholesterol, making the polyene toxic to animals.
Amphotericin B
Candicidin
Filipin – 35 carbons, binds to cholesterol (toxic)
Hamycin
Natamycin – 33 carbons, binds well to ergosterol
Nystatin
Rimocidin
Azoles
Azole antifungals inhibit the conversion of lanosterol to ergosterol by inhibiting lanosterol 14α-demethylase. These compounds have a five-membered ring containing two or three nitrogen atoms. The imidazole antifungals contain a 1,3-diazole (imidazole) ring (two nitrogen atoms), whereas the triazole antifungals have a ring with three nitrogen atoms.
Imidazoles
Bifonazole
Butoconazole
Clotrimazole
Econazole
Fenticonazole
Isoconazole
Ketoconazole
Luliconazole
Miconazole
Omoconazole
Oxiconazole
Sertaconazole
Sulconazole
Tioconazole
Triazoles
Albaconazole
Cyproconazole
Efinaconazole
Epoxiconazole
Fluconazole
Isavuconazole
Itraconazole
Posaconazole
Propiconazole
Ravuconazole
Terconazole
Voriconazole
Thiazoles
Abafungin
Allylamines
Allylamines inhibit squalene epoxidase, another enzyme required for ergosterol synthesis. Examples include butenafine, naftifine, and terbinafine.
Echinocandins
Echinocandins inhibit the creation of glucan in the fungal cell wall by inhibiting 1,3-Beta-glucan synthase:
Anidulafungin
Caspofungin
Micafungin
Echinocandins are administered intravenously, particularly for the treatment of resistant Candida species.
Triterpenoids
Ibrexafungerp
Others
Acrisorcin
Amorolfine – a morpholine derivative used topically in dermatophytosis
Aurones – possess antifungal properties
Benzoic acid – has antifungal properties, such as in Whitfield's ointment, Friar's Balsam, and Balsam of Peru
Carbol fuchsin (Castellani's paint)
Ciclopirox (ciclopirox olamine) – a hydroxypyridone antifungal that interferes with active membrane transport, cell membrane integrity, and fungal respiratory processes. It is most useful against tinea versicolor.
Clioquinol
Coal tar
Copper(II) sulfate
Crystal violet – a triarylmethane dye. It has antibacterial, antifungal, and anthelmintic properties and was formerly important as a topical antiseptic.
Chlorhexidine – a topical antibacterial and antifungal commonly used in hospitals as an antiseptic. It is much more strongly antibacterial than antifungal, requiring at least a tenfold higher concentration to kill yeast than gram-negative bacteria.
Chlorophetanol
Diiodohydroxyquinoline (Iodoquinol)
Flucytosine (5-fluorocytosine) – an antimetabolite pyrimidine analog
Fumagillin
Griseofulvin – binds to microtubules and inhibits mitosis
Haloprogin – discontinued due to the emergence of antifungals with fewer side effects
Miltefosine – works by damaging fungal cell membranes
Nikkomycin – blocks formation of chitin, a component of the fungal cell wall.
Orotomide (F901318) – pyrimidine synthesis inhibitor
Piroctone olamine
Pentanenitrile
Potassium iodide – preferred treatment for lymphocutaneous sporotrichosis and subcutaneous zygomycosis (basidiobolomycosis). The mode of action is obscure.
Potassium permanganate – for use only on thicker, less sensitive skin such as the soles of the feet.
Selenium disulfide
Sodium thiosulfate
Sulfur
Tolnaftate – a thiocarbamate antifungal, which inhibits fungal squalene epoxidase (similar mechanism to allylamines like terbinafine)
Triacetin – hydrolysed to acetic acid by fungal esterases
Undecylenic acid – an unsaturated fatty acid derived from natural castor oil; fungistatic, antibacterial, antiviral, and inhibits Candida morphogenesis
Zinc pyrithione
Side effects
The incidence of liver injury or failure with modern antifungal medicines is very low to non-existent. However, some can cause allergic reactions in people.
There are also many drug interactions, so patients should read the enclosed data sheet(s) of any medicine in detail. For example, the azole antifungals such as ketoconazole or itraconazole can be both substrates and inhibitors of P-glycoprotein, which (among other functions) excretes toxins and drugs into the intestines. Azole antifungals are also both substrates and inhibitors of the cytochrome P450 enzyme CYP3A4, causing increased plasma concentrations of co-administered drugs such as calcium channel blockers, immunosuppressants, chemotherapeutic drugs, benzodiazepines, tricyclic antidepressants, macrolides and SSRIs.
Before oral antifungal therapies are used to treat nail disease, a confirmation of the fungal infection should be made. Approximately half of suspected cases of fungal infection in nails have a non-fungal cause. The side effects of oral treatment are significant and people without an infection should not take these drugs.
Azoles are the group of antifungals which act on the cell membrane of fungi. They inhibit the enzyme 14-alpha-sterol demethylase, a microsomal CYP, which is required for the biosynthesis of ergosterol for the cytoplasmic membrane. This leads to the accumulation of 14-alpha-methylsterols resulting in impairment of function of certain membrane-bound enzymes and disruption of close packing of acyl chains of phospholipids, thus inhibiting growth of the fungi. Some azoles directly increase permeability of the fungal cell membrane.
Resistance
Antifungal resistance is a subset of antimicrobial resistance, that specifically applies to fungi that have become resistant to antifungals. Resistance to antifungals can arise naturally, for example by genetic mutation or through aneuploidy. Extended use of antifungals leads to the development of antifungal resistance through various mechanisms.
Some fungi exhibit intrinsic resistance to certain antifungal drugs or classes (e.g. Candida krusei to fluconazole), whereas other species develop antifungal resistance under external pressures. Antifungal resistance is a One Health concern, driven by multiple extrinsic factors, including extensive fungicide use, overuse of clinical antifungals, environmental change and host factors.
Like resistance to antibacterials, antifungal resistance can be driven by antifungal use in agriculture. Currently there is no regulation on the use of similar antifungal classes in agriculture and the clinic.
The emergence of Candida auris as a potential human pathogen that sometimes exhibits multi-class antifungal drug resistance is concerning and has been associated with several outbreaks globally. The WHO has released a priority fungal pathogen list, including pathogens with antifungal resistance.
References
External links
Antifungal Drugs – Detailed information on antifungals from the Fungal Guide written by R. Thomas and K. Barber
Anti-infective agents | Antifungal | [
"Chemistry",
"Biology"
] | 2,387 | [
"Anti-infective agents",
"Fungicides",
"Chemicals in medicine",
"Biocides"
] |
575,207 | https://en.wikipedia.org/wiki/1862%20Apollo | 1862 Apollo is a stony asteroid, approximately 1.5 kilometers in diameter, classified as a near-Earth object (NEO). It was discovered by German astronomer Karl Reinmuth at Heidelberg Observatory on 24 April 1932, but lost and not recovered until 1973.
It is the namesake and the first recognized member of the Apollo asteroids, a subgroup of NEOs which are Earth-crossers, that is, they cross the orbit of the Earth when viewed perpendicularly to the ecliptic plane (crossing an orbit is a more general term than actually intersecting it). In addition, since Apollo's orbit is highly eccentric, it crosses the orbits of Venus and Mars and is therefore called a Venus-crosser and Mars-crosser as well.
Although Apollo was the first Apollo asteroid to be discovered, its official IAU number (1862) is higher than those of some other Apollo asteroids, such as 1566 Icarus, because it was a lost asteroid for more than 40 years, during which other bodies were numbered. The analysis of its rotation provided observational evidence of the YORP effect.
It is named after the Greek god Apollo, the god of the Sun and child of Zeus and Leto, after whom the minor planets 5731 Zeus and 68 Leto are named.
Satellite
On November 4, 2005, it was announced that an asteroid moon, or satellite of Apollo, had been detected by radar observations from Arecibo Observatory, Puerto Rico, October 29 – November 2, 2005. The announcement is contained in the International Astronomical Union Circular (IAUC) 8627. The satellite is only across and orbits Apollo just away from the asteroid itself. From the surface of Apollo, S/2005 (1862) 1 would have an angular diameter of about 2.0835 degrees.
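The quoted angular size follows from elementary geometry: a sphere of physical diameter d viewed from distance D subtends an angle of 2·arctan(d/2D). Below is a minimal Python sketch of that calculation; because the satellite's size and orbital separation are not restated here, the input values are illustrative placeholders rather than the measured figures.

```python
import math

def angular_diameter_deg(diameter_km: float, distance_km: float) -> float:
    """Angular diameter, in degrees, of a sphere of the given physical
    diameter seen from the given distance."""
    return math.degrees(2.0 * math.atan(diameter_km / (2.0 * distance_km)))

# Placeholder inputs only -- the measured size and separation of
# S/2005 (1862) 1 are omitted in the text above.
print(f"{angular_diameter_deg(0.075, 2.0):.3f} degrees")  # ~2.148 degrees
```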
Potentially hazardous object
1862 Apollo is a potentially hazardous asteroid (PHA) because its minimum orbit intersection distance (MOID) is less than 0.05 AU and its diameter is greater than 150 meters. Apollo's Earth MOID is . Its orbit is well-determined for the next several hundred years. On 17 May 2075 it will pass from Venus.
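Because the PHA definition reduces to two numeric thresholds, it is straightforward to encode. The sketch below illustrates the stated criteria and is not an official implementation; the MOID input is hypothetical, since Apollo's actual figure is omitted above, while the ~1.5 km diameter comes from the article itself.

```python
def is_potentially_hazardous(earth_moid_au: float, diameter_m: float) -> bool:
    """Apply the two PHA thresholds stated above: Earth MOID below
    0.05 AU and a diameter greater than 150 meters."""
    return earth_moid_au < 0.05 and diameter_m > 150.0

# Diameter from the article (~1.5 km); the MOID value is hypothetical.
print(is_potentially_hazardous(earth_moid_au=0.025, diameter_m=1500.0))  # True
```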
See also
Lost asteroid
Notes
References
External links
Lightcurve plot of 1862 Apollo, Palmer Divide Observatory, B. D. Warner (2005)
Asteroids with Satellites, Robert Johnston, johnstonsarchive.net
Asteroid Lightcurve Database (LCDB), query form (info )
Dictionary of Minor Planet Names, Google books
Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend
Discoveries by Karl Wilhelm Reinmuth
Named minor planets
Earth-crossing asteroids
Venus-crossing asteroids
Recovered astronomical objects | 1862 Apollo | [
"Astronomy"
] | 564 | [
"Recovered astronomical objects",
"Astronomical objects"
] |
575,388 | https://en.wikipedia.org/wiki/Hand-kissing | Hand-kissing is a greeting gesture that indicates courtesy, politeness, respect, admiration, affection or even devotion by one person toward another. A hand-kiss is considered a respectful way for a gentleman to greet a lady. Today, non-ritual hand-kissing is rare, taking place mostly in conservative upper-class or diplomatic contexts, and has largely been replaced by a kiss on the cheek or a handshake.
A non-ritual hand-kiss can be initiated by the lady, who would hold out her right hand with the back of the hand facing upward; or by the gentleman extending his right hand with the palm facing upward to invite the lady to put her right hand lightly on it facing downward. The gentleman may bow towards the offered hand and (often symbolically) would touch her knuckles with his lips, while lightly holding the offered hand. However, the lips do not actually touch the hand in modern tradition, especially in a formal environment where any intimate or romantic undertones could be considered inappropriate. The gesture is short, lasting less than a second.
Around the world
In the Arab world, Iran, Turkey, Malaysia, Indonesia, and Brunei, hand-kissing is a common way to greet elder people of all genders, primarily the closest relatives (both parents, grandparents, and uncles or aunts) and teachers. Occasionally, after kissing the hand, the greeter will draw the hand to his own forehead. In the Philippines, the gesture has evolved into merely touching the hand to the forehead; hand-kissing itself has become a separate kind of gesture that has merged with the European custom concerning when it may be used.
In Southern Italy, especially Sicily, the verbal greeting "I kiss the hands." () derives from this usage. Similarly, in Hungary the verbal greeting "I kiss your hand." (Hungarian: "Kezét csókolom.") is sometimes used, especially when greeting elders and in rural communities; the shortened version "I kiss it." (Hungarian: "Csókolom.") is more widespread. A similar expression also exists in Poland (Polish: "Całuję rączki", meaning "I kiss [your] little hands"), although it is now considered obsolete.
In Romania the gesture is reserved for priests and women, and it is a common greeting when first being introduced to a woman in parts of the country. The verbal expression towards women is "I kiss your hand" (Romanian: "sarut mana", sometimes shortened to "saru-mana"). Towards priests it is sometimes changed to "I kiss your right", owing to the belief that the right hand of the priest is holy and blessed regardless of the priest himself and any of his shortcomings. In the past, both parents also had their hands kissed, and the gesture was seen as a type of blessing; however, the expression is now almost exclusively directed towards women.
Chivalrous gesture
A hand-kiss was considered a respectful way for a gentleman to greet a lady. The practice originated in the Polish–Lithuanian Commonwealth and the Spanish courts of the 17th and 18th centuries. The gesture is still at times observed in Central Europe, namely in Poland, Austria and Hungary, among others.
Traditionally, the hand-kiss was initiated by a woman, who offered her hand to a man to kiss. The lady offering her hand was expected to be of the same or higher social status than the man. It was a gesture of courtesy and extreme politeness, and it was considered impolite and even rude to refuse an offered hand. Today, the practice is very uncommon in many European countries, and has been largely replaced by a kiss on the cheek or a handshake.
Kissing the ring
Kissing the hand, or particularly a ring on the hand was also a gesture of formal submission or pledge of allegiance of man to man, or as a diplomatic gesture. The gesture would indicate submission by kissing the signet ring (a form of seal worn as a jewelry ring), the person's symbol of authority. The gesture was common in the European upper class throughout the 18th and 19th centuries. It started to disappear in the 20th century, to be replaced by the egalitarian handshake. However, former French president Jacques Chirac made hand-kissing his trademark and the gesture is still encountered in diplomatic situations.
Religious usage
In the Catholic Church, a Catholic meeting the Pope or a Cardinal, or even a lower-ranking prelate, will kiss the ring on his hand. This has become uncommon in circles not used to formal protocol, even often dispensed with amongst clergy. Sometimes, the devout Catholic combines the hand kissing with kneeling on the left knee as an even stronger expression of filial respect for the clerically high-ranking father. The cleric may then in a fatherly way lay his other hand on the kisser's head or even bless him/her by a manual cross sign. In the Catholic Church, it is also traditional for the laity to kiss the hands of a newly-ordained priest after his inaugural mass, in veneration of the Body of Christ, which is held in the priest's hands during the Holy Eucharist. In May 2014, Pope Francis kissed the hands of six Holocaust survivors to honour the six million Jews killed in the Holocaust.
In the Eastern Orthodox Church, and Oriental Orthodox Churches, it is appropriate and common for laity to greet clergy, whether priests or bishops, by making a profound bow and saying, "Father, bless" (to a priest) or "Master, bless" (to a bishop) while placing their right hand, palm up, in front of their bodies. The priest then blesses them with the sign of the cross and then places his hand in theirs, offering the opportunity to kiss his hand. Orthodox Christians kiss their priest's hands not only to honor their spiritual father confessor, but in veneration of the Body of Christ which the priest handles during the Divine Liturgy as he prepares Holy Communion. It is also a common practice when writing a letter to a priest to begin with the words "Father Bless" rather than "Dear Father" and end the letter with the words "Kissing your right hand" rather than "Sincerely."
During liturgical services, altar servers and lower clergy will kiss the hand of a priest when handing him something in the course of their duties, such as a censer, when he receives it in his right hand, and a bishop when he receives it in either hand since a bishop bestows blessings with both hands.
There are records of hand-kissing in the Islamic Caliphate as early as the 7th century. Hand-kissing known as Taqbil, as a respect for nobility, is practiced by the Hadharem of Yemen.
In popular culture
The hand-kiss is used quite prominently in The Godfather series, as a way to indicate the person who is the Don. It also features in period films, such as Dangerous Liaisons.
See also
Greeting
Salute
Kissing hands
Mano (gesture)
References
External links
Catholics kissing prelate's hands on a church watching blog
Gestures
Kissing
Gestures of respect
Bowing
Hand | Hand-kissing | [
"Biology"
] | 1,441 | [
"Behavior",
"Gestures",
"Human behavior"
] |
575,454 | https://en.wikipedia.org/wiki/Interleukin | Interleukins (ILs) are a group of cytokines (secreted proteins and signal molecules) that are expressed and secreted by white blood cells (leukocytes) as well as some other body cells. The human genome encodes more than 50 interleukins and related proteins.
The function of the immune system primarily depends on interleukins, and rare deficiencies of a number of them have been described, all featuring autoimmune diseases or immune deficiency. The majority of interleukins are synthesized by CD4 helper T-lymphocytes, as well as through monocytes, macrophages, and endothelial cells. They promote the development and differentiation of T and B lymphocytes, and hematopoietic cells.
Interleukin receptors on astrocytes in the hippocampus are also known to be involved in the development of spatial memories in mice.
History and name
The name "interleukin" was chosen in 1979, to replace the various different names used by different research groups to designate interleukin 1 (lymphocyte activating factor, mitogenic protein, T-cell replacing factor III, B-cell activating factor, B-cell differentiation factor, and "Heidikine") and interleukin 2 (TSF, etc.). This decision was taken during the Second International Lymphokine Workshop in Switzerland (27–31 May 1979 in Ermatingen).
The term interleukin derives from inter- ("as a means of communication") and -leukin ("deriving from the fact that many of these proteins are produced by leukocytes and act on leukocytes"). The name is something of a relic; it has since been found that interleukins are produced by a wide variety of body cells. The term was coined by Dr Vern Paetkau, University of Victoria.
Some interleukins are classified as lymphokines, lymphocyte-produced cytokines that mediate immune responses.
Common families
Interleukin 1
Interleukin 1 alpha and interleukin 1 beta (IL1 alpha and IL1 beta) are cytokines that participate in the regulation of immune responses, inflammatory reactions, and hematopoiesis. Two types of IL-1 receptor, each with three extracellular immunoglobulin (Ig)-like domains, limited sequence similarity (28%) and different pharmacological characteristics have been cloned from mouse and human cell lines: these have been termed type I and type II receptors. The receptors both exist in transmembrane (TM) and soluble forms: the soluble IL-1 receptor is thought to be post-translationally derived from cleavage of the extracellular portion of the membrane receptors.
Both IL-1 receptors (CD121a/IL1R1, CD121b/IL1R2) appear to be well conserved in evolution, and map to the same chromosomal location. The receptors can both bind all three forms of IL-1 (IL-1 alpha, IL-1 beta and IL-1 receptor antagonist).
The crystal structures of IL1A and IL1B have been solved, showing them to share the same 12-stranded beta-sheet structure as both the heparin binding growth factors and the Kunitz-type soybean trypsin inhibitors. The beta-sheets are arranged in 4 similar lobes around a central axis, 8 strands forming an anti-parallel beta-barrel. Several regions, especially the loop between strands 4 and 5, have been implicated in receptor binding.
Interleukin 1 beta is generated by the proteolytic cleavage of an inactive precursor molecule. A complementary DNA encoding the protease that carries out this cleavage has been cloned, and recombinant expression enables cells to process precursor interleukin 1 beta to the mature form.
Interleukin 1 also plays a role in the central nervous system. Research indicates that mice with a genetic deletion of the type I IL-1 receptor display markedly impaired hippocampal-dependent memory functioning and long-term potentiation, although memories that do not depend on the integrity of the hippocampus seem to be spared. However, when mice with this genetic deletion have wild-type neural precursor cells injected into their hippocampus and these cells are allowed to mature into astrocytes containing the interleukin-1 receptors, the mice exhibit normal hippocampal-dependent memory function, and partial restoration of long-term potentiation.
Interleukin 2
T lymphocytes regulate the growth and differentiation of T cells and certain B cells through the release of secreted protein factors. These factors, which include interleukin 2 (IL2), are secreted by lectin- or antigen-stimulated T cells, and have various physiological effects. IL2 is a lymphokine that induces the proliferation of responsive T cells. In addition, it acts on some B cells, via receptor-specific binding, as a growth factor and antibody production stimulant. The protein is secreted as a single glycosylated polypeptide, and cleavage of a signal sequence is required for its activity. Solution NMR suggests that the structure of IL2 comprises a bundle of 4 helices (termed A-D), flanked by 2 shorter helices and several poorly defined loops. Residues in helix A, and in the loop region between helices A and B, are important for receptor binding. Secondary structure analysis has suggested similarity to IL4 and granulocyte-macrophage colony stimulating factor (GMCSF).
Interleukin 3
Interleukin 3 (IL3) is a cytokine that regulates hematopoiesis by controlling the production, differentiation and function of granulocytes and macrophages. The protein, which exists in vivo as a monomer, is produced in activated T cells and mast cells, and is activated by the cleavage of an N-terminal signal sequence.
IL3 is produced by T lymphocytes and T-cell lymphomas only after stimulation with antigens, mitogens, or chemical activators such as phorbol esters. However, IL3 is constitutively expressed in the myelomonocytic leukaemia cell line WEHI-3B. It is thought that the genetic change of the cell line to constitutive production of IL3 is the key event in development of this leukaemia.
Interleukin 4
Interleukin 4 (IL4) is produced by CD4+ T cells specialized in providing help to B cells to proliferate and to undergo class switch recombination and somatic hypermutation. Th2 cells, through production of IL-4, have an important function in B-cell responses that involve class switch recombination to the IgG1 and IgE isotypes.
Interleukin 5
Interleukin 5 (IL5), also known as eosinophil differentiation factor (EDF), is a lineage-specific cytokine for eosinophilopoiesis. It regulates eosinophil growth and activation, and thus plays an important role in diseases associated with increased levels of eosinophils, including asthma. IL5 has a similar overall fold to other cytokines (e.g., IL2, IL4 and GCSF), but while these exist as monomeric structures, IL5 is a homodimer. The fold contains an anti-parallel 4-alpha-helix bundle with a left-handed twist, connected by a 2-stranded anti-parallel beta-sheet. The monomers are held together by 2 interchain disulphide bonds.
Interleukin 6
Interleukin 6 (IL6), also referred to as B-cell stimulatory factor-2 (BSF-2) and interferon beta-2, is a cytokine involved in a wide variety of biological functions. It plays an essential role in the final differentiation of B cells into immunoglobulin-secreting cells, as well as inducing myeloma/plasmacytoma growth, nerve cell differentiation, and, in hepatocytes, acute-phase reactants.
A number of other cytokines may be grouped with IL6 on the basis of sequence similarity. These include granulocyte colony-stimulating factor (GCSF) and myelomonocytic growth factor (MGF). GCSF acts in hematopoiesis by affecting the production, differentiation, and function of two related white cell groups in the blood. MGF also acts in hematopoiesis, stimulating proliferation and colony formation of normal and transformed avian cells of the myeloid lineage.
Cytokines of the IL6/GCSF/MGF family are glycoproteins of about 170 to 180 amino acid residues that contain four conserved cysteine residues involved in two disulphide bonds. They have a compact, globular fold (similar to other interleukins), stabilised by the two disulphide bonds. One half of the structure is dominated by a 4-alpha-helix bundle with a left-handed twist; the helices are anti-parallel, with two overhand connections, which fall into a double-stranded anti-parallel beta-sheet. The fourth alpha-helix is important to the biological activity of the molecule.
Interleukin 7
Interleukin 7 (IL-7) is a cytokine that serves as a growth factor for early lymphoid cells of both B- and T-cell lineages.
Interleukin 8
Interleukin 8 is a chemokine produced by macrophages and other cell types such as epithelial cells, airway smooth muscle cells and endothelial cells. Endothelial cells store IL-8 in their storage vesicles, the Weibel-Palade bodies. In humans, the interleukin-8 protein is encoded by the CXCL8 gene. IL-8 is initially produced as a precursor peptide of 99 amino acids which then undergoes cleavage to create several active IL-8 isoforms. In culture, a 72 amino acid peptide is the major form secreted by macrophages.
There are many receptors on the surface membrane capable of binding IL-8; the most frequently studied types are the G protein-coupled serpentine receptors CXCR1 and CXCR2. Expression and affinity for IL-8 differs between the two receptors (CXCR1 > CXCR2). Through a chain of biochemical reactions, IL-8 is secreted and is an important mediator of the immune reaction in the innate immune system response.
Interleukin 9
Interleukin 9 (IL-9) is a cytokine that supports IL-2-independent and IL-4-independent growth of helper T cells. Early studies had indicated that interleukins 9 and 7 seem to be evolutionarily related, and Pfam, InterPro and PROSITE entries exist for an interleukin 7/interleukin 9 family. However, a more recent study has shown that IL-9 is, in fact, much closer to both IL-2 and IL-15 than to IL-7. Moreover, the study showed irreconcilable structural differences between IL-7 and all of the other cytokines signalling through the γc receptor (the γc family comprising IL-2, IL-4, IL-7, IL-9, IL-15 and IL-21).
Interleukin 10
Interleukin 10 (IL-10) is a protein that inhibits the synthesis of a number of cytokines, including IFN-gamma, IL-2, IL-3, TNF, and GM-CSF, produced by activated macrophages and by helper T cells. In structure, IL-10 is a protein of about 160 amino acids that contains four conserved cysteines involved in disulphide bonds. IL-10 is highly similar to the Human herpesvirus 4 (Epstein–Barr virus) BCRF1 protein, which inhibits the synthesis of gamma-interferon, and to the Equid herpesvirus 2 (equine herpesvirus 2) protein E7. It is also similar, to a lesser degree, to the human protein mda-7, a protein that has antiproliferative properties in human melanoma cells. Mda-7 contains only two of the four cysteines of IL-10.
Interleukin 11
Interleukin 11 (IL-11) is a secreted protein that stimulates megakaryocytopoiesis, initially thought to lead to an increased production of platelets (it has since been shown to be redundant to normal platelet formation), as well as activating osteoclasts, inhibiting epithelial cell proliferation and apoptosis, and inhibiting macrophage mediator production. These functions may be particularly important in mediating the hematopoietic, osseous and mucosal protective effects of interleukin 11.
Interleukin 12
Interleukin 12 (IL-12) is a disulphide-bonded heterodimer consisting of a 35kDa alpha subunit and a 40kDa beta subunit. It is involved in the stimulation and maintenance of Th1 cellular immune responses, including the normal host defence against various intracellular pathogens, such as Leishmania, Toxoplasma, Measles virus, and Human immunodeficiency virus 1 (HIV). IL-12 also has an important role in enhancing the cytotoxic function of NK cells and role in pathological Th1 responses, such as in inflammatory bowel disease and multiple sclerosis. Suppression of IL-12 activity in such diseases may have therapeutic benefit. On the other hand, administration of recombinant IL-12 may have therapeutic benefit in conditions associated with pathological Th2 responses.
Interleukin 13
Interleukin 13 (IL-13) is a pleiotropic cytokine that may be important in the regulation of the inflammatory and immune responses. It inhibits inflammatory cytokine production and synergises with IL-2 in regulating interferon-gamma synthesis. The sequences of IL-4 and IL-13 are distantly related.
Interleukin 15
Interleukin 15 (IL-15) is a cytokine that possesses a variety of biological functions, including stimulation and maintenance of cellular immune responses. IL-15 stimulates the proliferation of T lymphocytes, which requires interaction of IL-15 with IL-15R alpha and components of IL-2R, including IL-2R beta and IL-2R gamma (common gamma chain, γc), but not IL-2R alpha.
Interleukin 17
Interleukin 17 (IL-17) is a potent proinflammatory cytokine produced by activated memory T cells. This cytokine is characterized by its proinflammatory properties, role in recruiting neutrophils, and importance in innate and adaptive immunity. Not only does IL-17 play a key role in inflammation of many autoimmune diseases, such as RA, allergies, asthma, psoriasis, and more, but it also plays a key role in the pathogenesis of these diseases. Additionally, some studies have found that IL-17 plays a role in tumorigenesis (initial formation of a tumor) and transplant rejection. The IL-17 family is thought to represent a distinct signaling system that appears to have been highly conserved across vertebrate evolution.
In humans
International nonproprietary names for analogues and derivatives
References
External links
Cytokines & Cells Online Pathfinder Encyclopedia
Cytokines | Interleukin | [
"Chemistry"
] | 3,349 | [
"Cytokines",
"Signal transduction"
] |
575,603 | https://en.wikipedia.org/wiki/Five%20Suns | In creation myths, the term "Five Suns" refers to the belief of certain Nahua cultures and Aztec peoples that the world has gone through five distinct cycles of creation and destruction, with the current era being the fifth. It is primarily derived from a combination of myths, cosmologies, and eschatological beliefs that were originally held by pre-Columbian peoples in the Mesoamerican region, including central Mexico, and it is part of a larger mythology of Fifth World or Fifth Sun beliefs.
The late Postclassic Aztecs created and developed their own version of the "Five Suns" myth, which incorporated and transformed elements from previous Mesoamerican creation myths, while also introducing new ideas that were specific to their culture.
In the Aztec and other Nahua creation myths, it was believed that the universe had gone through four iterations before the current one, and each of these prior worlds had been destroyed by the gods due to the behavior of its inhabitants.
The current world is a product of the Aztecs' self-imposed mission to provide Tlazcaltiliztli to the sun, giving it the nourishment it needs to stay in existence and ensuring that the entire universe remains in balance. Thus, the Aztecs’ sacrificial rituals were essential to the functioning of the world, and ultimately to its continued survival.
Legend
According to the legend, from the void that was the rest of the universe, the first god, Ometeotl, created itself. The nature of Ometeotl, the "God of duality" was both male and female, shared by Ometecuhtli, "Lord of duality," and Omecihuatl, "Lady of duality". Ometeotl gave birth to four children, the four Tezcatlipocas, who each preside over one of the four cardinal directions. Over the West presides the White Tezcatlipoca, Quetzalcoatl, the god of light, mercy and wind. Over the South presides the Blue Tezcatlipoca, Huitzilopochtli, the god of war. Over the East presides the Red Tezcatlipoca, Xipe Totec, the god of gold, farming and spring time. And over the North presides the Black Tezcatlipoca, also called simply Tezcatlipoca, the god of judgment, night, deceit, sorcery and the Earth.
The Aztecs believed that the gods created the universe at Teotihuacan. The name was given by the Nahuatl-speaking Aztecs centuries after the fall of the city around 550 CE. The term has been glossed as "birthplace of the gods", or "place where gods were born", reflecting Nahua creation myths that were said to occur in Teotihuacan.
First sun
It was four gods who eventually created all the other gods and the world we know today, but before they could create they had to destroy, for every time they attempted to create something, it would fall into the water beneath them and be eaten by Cipactli, the giant earth crocodile, who swam through the water with mouths at every one of her joints. From the four Tezcatlipocas descended the first people, who were giants. They created the other gods, the most important of whom were the water gods: Tlaloc, the god of rain and fertility, and Chalchiuhtlicue, the goddess of lakes, rivers and oceans and also the goddess of beauty. To give light, they needed a god to become the sun, and the Black Tezcatlipoca was chosen, but either because he had lost a leg or because he was god of the night, he only managed to become half a sun. The world continued on in this way for some time, but a sibling rivalry grew between Quetzalcoatl and his brother the mighty sun, whom Quetzalcoatl eventually decided to knock from the sky with a stone club. With no sun, the world was totally black, and in his anger, Tezcatlipoca commanded his jaguars to eat all the people.
Second sun
The gods created humans who were of normal stature, with Quetzalcoatl serving as the sun for the new civilization, as an attempt to bring balance to the world, but their attempts ultimately failed as humans began to drift away from the beliefs and teachings of the gods and instead embraced greed and corruption.
As a consequence, Tezcatlipoca showcased his dominance and strength as a god of magic and justice by transforming the human-like people into monkeys. Quetzalcoatl, who had held the flawed people in great regard, was greatly distressed and sent away the monkeys with a powerful hurricane. After they were banished, Quetzalcoatl stepped down from his role as the sun and crafted a new, more perfect race of humans.
Third sun
Tlaloc was crowned the new sun, but Tezcatlipoca, the mischievous god, tricked and deceived him, snatching away the love of his life, Xochiquetzal, the deity of beauty, flowers, and corn.
Tlaloc had become so consumed by his own grief and sorrow that he was no longer able to fulfil his duties as the sun; therefore, a great drought befell the people of the world. People desperately prayed for rain and begged for mercy, but their pleas fell on deaf ears.
In a fit of rage, Tlaloc unleashed a rain of fire upon the earth, completely destroying it and leaving nothing but ashes in its wake. Following this cataclysmic event, the gods then worked together to create a new earth, allowing life to be reborn from the seemingly lifeless and barren land.
Fourth sun
The next sun and also Tlaloc's new wife, was Chalchiuhtlicue. She was very loving towards the people, but Tezcatlipoca was not. Both the people and Chalchiuhtlicue felt his judgement when he told the water goddess that she was not truly loving and only faked kindness out of selfishness to gain the people's praise. Chalchiuhtlicue was so crushed by these words that she cried blood for the next fifty-two years, causing a horrific flood that drowned everyone on Earth. Humans became fish in order to survive.
Fifth sun
Quetzalcoatl would not accept the destruction of his people and went to the underworld where he stole their bones from the god Mictlantecuhtli. He dipped these bones in his own blood to resurrect his people, who reopened their eyes to a sky illuminated by the current sun, Huitzilopochtli.
The Centzonhuītznāhua, or the stars of the south, became jealous of their brighter, more important brother Huitzilopochtli. Their leader, Coyolxauhqui, goddess of the moon, led them in an assault on the sun and every night they come close to victory when they shine throughout the sky, but are beaten back by the mighty Huitzilopochtli who rules the daytime sky. To aid this all-important god in his continuing war, the Aztecs offer him the nourishment of human sacrifices. They also offer human sacrifices to Tezcatlipoca in fear of his judgment, offer their own blood to Quetzalcoatl, who opposes fatal sacrifices, in thanks of his blood sacrifice for them and give offerings to many other gods for many purposes. Should these sacrifices cease, or should mankind fail to please the gods for any other reason, this fifth sun will go black, the world will be shattered by a catastrophic earthquake, and the Tzitzimitl will slay Huitzilopochtli and all of humanity.
Variations and alternative myths
Most of what is known about the ancient Aztecs comes from the few codices to survive the Spanish conquest. Their myths can be confusing because of the lack of documentation and also because there are many popular myths that seem to contradict one another. This happened due to the fact that they were originally passed down by word of mouth and because the Aztecs adopted many of their gods from other tribes, both assigning their own new aspects to these gods and endowing them with those of similar gods from various other cultures. Older myths can be very similar to newer myths while contradicting one another by claiming that a different god performed the same action, probably because myths changed in correlation to the popularity of each of the gods at a given time.
Other variations on this myth state that Coatlicue, the earth goddess, was the mother of the four Tezcatlipocas and the Tzitzimitl. Some versions say that Quetzalcoatl was born to her first, while she was still a virgin, often mentioning his twin brother Xolotl, the guide of the dead and god of fire. Tezcatlipoca was then born to her by an obsidian knife, followed by the Tzitzimitl and then Huitzilopochtli. The most popular variation including Coatlicue depicts her giving birth first to the Tzitzimitl. Much later she gave birth to Huitzilopochtli when a mysterious ball of feathers appeared to her. The Tzitzimitl then decapitated the pregnant Coatlicue, believing it to be insulting that she had given birth to another child. Huitzilopochtli then sprang forth from her womb wielding a serpent of fire and began his epic war with the Tzitzimitl, who were also referred to as the Centzon Huitznahuas. Sometimes he is said to have decapitated Coyolxauhqui and either used her head to make the moon or thrown it into a canyon. Further variations depict the ball of feathers as being the father of Huitzilopochtli or the father of Quetzalcoatl and sometimes Xolotl.
Other variations of this myth claim that only Quetzalcoatl and Tezcatlipoca were born to Ometeotl, who was replaced by Coatlicue in this myth, probably because Ometeotl had no worshipers or temples by the time the Spanish arrived. It is sometimes said that the male aspect of Ometeotl is named Ometecuhtli and that the female aspect is named Omecihuatl. Further variations on this myth state that it was only Quetzalcoatl and Tezcatlipoca who pulled apart Cipactli, also known as Tlaltecuhtli, and that Xipe Totec and Huitzilopochtli then constructed the world from her body. Some versions claim that Tezcatlipoca actually used his leg as bait for Cipactli before dismembering her.
The order of the first four suns varies as well, though the above version is the most common. Each world's end correlates consistently to the god that was the sun at the time throughout all variations of the myth, though the loss of Xochiquetzal is not always identified as Tlaloc's reason for the rain of fire, which is not otherwise given and it is sometimes said that Chalchiuhtlicue flooded the world on purpose, without the involvement of Tezcatlipoca. It is also said that Tezcatlipoca created half a sun, which his jaguars then ate before eating the giants.
The fifth sun, however, is sometimes said to be a god named Nanauatzin. In this version of the myth, the gods convened in darkness to choose a new sun, who was to sacrifice himself by jumping into a gigantic bonfire. The two volunteers were the young son of Tlaloc and Chalchiuhtlicue, Tecuciztecatl, and the old Nanauatzin. It was believed that Nanauatzin was too old to make a good sun, but both were given the opportunity to jump into the bonfire. Tecuciztecatl tried first but was not brave enough to walk through the heat near the flames and turned around. Nanauatzin then walked slowly towards and then into the flames and was consumed; Tecuciztecatl then followed. The braver Nanauatzin became what is now the sun, and Tecuciztecatl became the much less spectacular moon. A god that bridges the gap between Nanauatzin and Huitzilopochtli is Tonatiuh, who was sick but rejuvenated himself by burning himself alive; he then became the warrior sun and wandered through the heavens with the souls of those who died in battle, refusing to move if not offered enough sacrifices.
Brief summation
Nāhui-Ocēlōtl (Jaguar Sun) – Inhabitants were giants who were devoured by jaguars. The world was destroyed.
Nāhui-Ehēcatl (Wind Sun) – Inhabitants were transformed into monkeys. This world was destroyed by hurricanes.
Nāhui-Quiyahuitl (Rain Sun) – Inhabitants were destroyed by rain of fire. Only birds survived (or inhabitants survived by becoming birds).
Nāhui-Ātl (Water Sun) – This world was flooded turning the inhabitants into fish. A couple escaped but were transformed into dogs.
Nāhui-Olīn (Earthquake Sun) – Current humans are the inhabitants of this world. Should the gods be displeased, this world will be destroyed by earthquakes (or one large earthquake) and the Tzitzimimeh will annihilate all its inhabitants.
In popular culture
The version of the myth with Nanahuatzin serves as a framing device for the 1991 Mexican film In Necuepaliztli in Aztlan (Retorno a Aztlán), by Juan Mora Catlett.
The version of the myth with Nanahuatzin is in the 1996 film, The Five Suns: A Sacred History of Mexico, by Patricia Amlin.
Rage Against the Machine refers to intercultural violence as "the fifth sunset" in their song "People of the Sun", on the album Evil Empire.
Thomas Harlan's science fiction series "In the Time of the Sixth Sun" uses this myth as a central plot point, where an ancient star-faring civilization ("people of the First Sun") had disappeared and left the galaxy with many dangerous artifacts.
The Shadowrun role-playing game takes place in the "Sixth World."
The concept of the five suns is alluded to in Onyx Equinox, where Quetzalcoatl claims that the gods made humanity four times before. Tezcatlipoca seeks to end the current human era, since he believes humans are too greedy and waste their blood in battle rather than as sacrifices.
The final episode of Victor and Valentino is called "The Fall of the Fifth Sun", and also features Tezcatlipoca in a central role.
See also
Aztec mythology
Aztec religion
Aztec philosophy
Fifth World (mythology)
Mesoamerican creation accounts
Sun stone
Thirteen Heavens
References
Further reading
Creation myths
Eschatology
Aztec philosophy | Five Suns | [
"Astronomy"
] | 3,096 | [
"Cosmogony",
"Creation myths"
] |
575,613 | https://en.wikipedia.org/wiki/Moons%20of%20Jupiter | There are 95 moons of Jupiter with confirmed orbits. This number does not include a number of meter-sized moonlets thought to be shed from the inner moons, nor hundreds of possible kilometer-sized outer irregular moons that were only briefly captured by telescopes. All together, Jupiter's moons form a satellite system called the Jovian system. The most massive of the moons are the four Galilean moons: Io, Europa, Ganymede, and Callisto, which were independently discovered in 1610 by Galileo Galilei and Simon Marius and were the first objects found to orbit a body that was neither Earth nor the Sun. Much more recently, beginning in 1892, dozens of far smaller Jovian moons have been detected and have received the names of lovers (or other sexual partners) or daughters of the Roman god Jupiter or his Greek equivalent Zeus. The Galilean moons are by far the largest and most massive objects to orbit Jupiter, with the remaining 91 known moons and the rings together comprising just 0.003% of the total orbiting mass.
Of Jupiter's moons, eight are regular satellites with prograde and nearly circular orbits that are not greatly inclined with respect to Jupiter's equatorial plane. The Galilean satellites are nearly spherical in shape due to their planetary mass, and are just massive enough that they would be considered major planets if they were in direct orbit around the Sun. The other four regular satellites, known as the inner moons, are much smaller and closer to Jupiter; these serve as sources of the dust that makes up Jupiter's rings. The remainder of Jupiter's moons are outer irregular satellites whose prograde and retrograde orbits are much farther from Jupiter and have high inclinations and eccentricities. The largest of these moons were likely asteroids that were captured from solar orbits by Jupiter before impacts with other small bodies shattered them into many kilometer-sized fragments, forming collisional families of moons sharing similar orbits. Jupiter is expected to have about 100 irregular moons larger than in diameter, plus around 500 more smaller retrograde moons down to diameters of . Of the 87 known irregular moons of Jupiter, 38 have not yet been officially given names.
Characteristics
The physical and orbital characteristics of the moons vary widely. The four Galileans are all over in diameter; the largest Galilean, Ganymede, is the ninth largest object in the Solar System, after the Sun and seven of the planets, Ganymede being larger than Mercury. All other Jovian moons are less than in diameter, with most barely exceeding . Their orbital shapes range from nearly perfectly circular to highly eccentric and inclined, and many revolve in the direction opposite to Jupiter's rotation (retrograde motion).
Origin and evolution
Jupiter's regular satellites are believed to have formed from a circumplanetary disk, a ring of accreting gas and solid debris analogous to a protoplanetary disk. They may be the remnants of a score of Galilean-mass satellites that formed early in Jupiter's history.
Simulations suggest that, while the disk had a relatively high mass at any given moment, over time a substantial fraction (several tens of percent) of the mass of Jupiter captured from the solar nebula passed through it. However, only 2% of the proto-disk mass of Jupiter is required to explain the existing satellites. Thus, several generations of Galilean-mass satellites may have formed and been lost early in Jupiter's history. Each generation of moons might have spiraled into Jupiter, because of drag from the disk, with new moons then forming from the new debris captured from the solar nebula. By the time the present (possibly fifth) generation formed, the disk had thinned so that it no longer greatly interfered with the moons' orbits. The current Galilean moons were still affected, falling into and being partially protected by an orbital resonance with each other, which still exists for Io, Europa, and Ganymede: they are in a 1:2:4 resonance. Ganymede's larger mass means that it would have migrated inward at a faster rate than Europa or Io. Tidal dissipation in the Jovian system is still ongoing and Callisto will likely be captured into the resonance in about 1.5 billion years, creating a 1:2:4:8 chain.
The outer, irregular moons are thought to have originated as captured asteroids, caught while the protolunar disk was still massive enough to absorb much of their momentum and thus capture them into orbit. Many are believed to have been broken up by mechanical stresses during capture, or afterward by collisions with other small bodies, producing the moons we see today.
History and discovery
Visual observations
Chinese historian Xi Zezong claimed that the earliest record of a Jovian moon (Ganymede or Callisto) was a note by Chinese astronomer Gan De of an observation around 364 BC regarding a "reddish star". However, the first certain observations of Jupiter's satellites were those of Galileo Galilei in 1609. By January 1610, he had sighted the four massive Galilean moons with his 20× magnification telescope, and he published his results in March 1610.
Simon Marius had independently discovered the moons one day after Galileo, although he did not publish his book on the subject until 1614. Even so, the names Marius assigned are used today: Ganymede, Callisto, Io, and Europa. No additional satellites were discovered until E. E. Barnard observed Amalthea in 1892.
Photographic and spacecraft observations
With the aid of telescopic photography with photographic plates, further discoveries followed quickly over the course of the 20th century. Himalia was discovered in 1904, Elara in 1905, Pasiphae in 1908, Sinope in 1914, Lysithea and Carme in 1938, Ananke in 1951, and Leda in 1974.
By the time that the Voyager space probes reached Jupiter, around 1979, thirteen moons had been discovered, not including Themisto, which had been observed in 1975, but was lost until 2000 due to insufficient initial observation data. The Voyager spacecraft discovered an additional three inner moons in 1979: Metis, Adrastea, and Thebe.
Digital telescopic observations
No additional moons were discovered until two decades later, with the fortuitous discovery of Callirrhoe by the Spacewatch survey in October 1999. During the 1990s, photographic plates phased out as digital charge-coupled device (CCD) cameras began emerging in telescopes on Earth, allowing for wide-field surveys of the sky at unprecedented sensitivities and ushering in a wave of new moon discoveries. Scott Sheppard, then a graduate student of David Jewitt, demonstrated this extended capability of CCD cameras in a survey conducted with the Mauna Kea Observatory's UH88 telescope in November 2000, discovering eleven new irregular moons of Jupiter including the previously lost Themisto with the aid of automated computer algorithms.
From 2001 onward, Sheppard and Jewitt alongside other collaborators continued surveying for Jovian irregular moons with the Canada-France-Hawaii Telescope (CFHT), discovering an additional eleven in December 2001, one in October 2002, and nineteen in February 2003. At the same time, another independent team led by Brett J. Gladman also used the CFHT in 2003 to search for Jovian irregular moons, discovering four and co-discovering two with Sheppard. From the start to end of these CCD-based surveys in 2000–2004, Jupiter's known moon count had grown from 17 to 63. All of these moons discovered after 2000 are faint and tiny, with apparent magnitudes between 22–23 and diameters less than . As a result, many could not be reliably tracked and ended up becoming lost.
Beginning in 2009, a team of astronomers, namely Mike Alexandersen, Marina Brozović, Brett Gladman, Robert Jacobson, and Christian Veillet, began a campaign to recover Jupiter's lost irregular moons using the CFHT and Palomar Observatory's Hale Telescope. They discovered two previously unknown Jovian irregular moons during recovery efforts in September 2010, prompting further follow-up observations to confirm these by 2011. One of these moons, S/2010 J 2 (now Jupiter LII), has an apparent magnitude of 24 and a diameter of only , making it one of the faintest and smallest confirmed moons of Jupiter. Meanwhile, in September 2011, Scott Sheppard, now a faculty member of the Carnegie Institution for Science, discovered two more irregular moons using the institution's Magellan Telescopes at Las Campanas Observatory, raising Jupiter's known moon count to 67. Although Sheppard's two moons were followed up and confirmed by 2012, both became lost due to insufficient observational coverage.
In 2016, while surveying for distant trans-Neptunian objects with the Magellan Telescopes, Sheppard serendipitously observed a region of the sky located near Jupiter, enticing him to search for Jovian irregular moons as a detour. In collaboration with Chadwick Trujillo and David Tholen, Sheppard continued surveying around Jupiter from 2016 to 2018 using the Cerro Tololo Observatory's Víctor M. Blanco Telescope and Mauna Kea Observatory's Subaru Telescope. In the process, Sheppard's team recovered several lost moons of Jupiter from 2003 to 2011 and reported two new Jovian irregular moons in June 2017. Then in July 2018, Sheppard's team announced ten more irregular moons confirmed from 2016 to 2018 observations, bringing Jupiter's known moon count to 79. Among these was Valetudo, which has an unusually distant prograde orbit that crosses paths with the retrograde irregular moons. Several more unidentified Jovian irregular satellites were detected in Sheppard's 2016–2018 search, but were too faint for follow-up confirmation.
From November 2021 to January 2023, Sheppard discovered twelve more irregular moons of Jupiter and confirmed them in archival survey imagery from 2003 to 2018, bringing the total count to 92. Among these was S/2018 J 4, a highly inclined prograde moon that is now known to be in same orbital grouping as the moon Carpo, which was previously thought to be solitary. On 22 February 2023, Sheppard announced three more moons discovered in a 2022 survey, now bringing Jupiter's total known moon count to 95. In a February 2023 interview with NPR, Sheppard noted that he and his team are currently tracking even more moons of Jupiter, which should place Jupiter's moon count over 100 once confirmed over the next two years.
Many more irregular moons of Jupiter will inevitably be discovered in the future, especially after the beginning of deep sky surveys by the upcoming Vera C. Rubin Observatory and Nancy Grace Roman Space Telescope in the mid-2020s. The Rubin Observatory's aperture telescope and 3.5 square-degree field of view will probe Jupiter's irregular moons down to diameters of at apparent magnitudes of 24.5, with the potential of increasing the known population by up to tenfold. Likewise, the Roman Space Telescope's aperture and 0.28 square-degree field of view will probe Jupiter's irregular moons down to diameters of at magnitude 27.7, with the potential of discovering approximately 1,000 Jovian moons above this size. Discovering these many irregular satellites will help reveal their population's size distribution and collisional histories, which will place further constraints to how the Solar System formed.
Naming
The Galilean moons of Jupiter (Io, Europa, Ganymede, and Callisto) were named by Simon Marius soon after their discovery in 1610. However, these names fell out of favor until the 20th century. The astronomical literature instead simply referred to "Jupiter I", "Jupiter II", etc., or "the first satellite of Jupiter", "Jupiter's second satellite", and so on. The names Io, Europa, Ganymede, and Callisto became popular in the mid-20th century, whereas the rest of the moons remained unnamed and were usually numbered in Roman numerals V (5) to XII (12). Jupiter V was discovered in 1892 and given the name Amalthea by a popular though unofficial convention, a name first used by French astronomer Camille Flammarion.
The other moons were simply labeled by their Roman numeral (e.g. Jupiter IX) in the majority of astronomical literature until the 1970s. Several different suggestions were made for names of Jupiter's outer satellites, but none were universally accepted until 1975 when the International Astronomical Union's (IAU) Task Group for Outer Solar System Nomenclature granted names to satellites V–XIII, and provided for a formal naming process for future satellites still to be discovered. The practice was to name newly discovered moons of Jupiter after lovers and favorites of the god Jupiter (Zeus) and, since 2004, also after their descendants. All of Jupiter's satellites from XXXIV (Euporie) onward are named after descendants of Jupiter or Zeus, except LIII (Dia), named after a lover of Jupiter. Names ending with "a" or "o" are used for prograde irregular satellites (the latter for highly inclined satellites), and names ending with "e" are used for retrograde irregulars. With the discovery of smaller, kilometre-sized moons around Jupiter, the IAU has established an additional convention to limit the naming of small moons with absolute magnitudes greater than 18 or diameters smaller than . Some of the most recently confirmed moons have not received names.
Some asteroids share the same names as moons of Jupiter: 9 Metis, 38 Leda, 52 Europa, 85 Io, 113 Amalthea, 239 Adrastea. Two more asteroids previously shared the names of Jovian moons until spelling differences were made permanent by the IAU: Ganymede and asteroid 1036 Ganymed; and Callisto and asteroid 204 Kallisto.
Groups
Regular satellites
These have prograde and nearly circular orbits of low inclination and are split into two groups:
Inner satellites or Amalthea group: Metis, Adrastea, Amalthea, and Thebe. These orbit very close to Jupiter; the innermost two orbit in less than a Jovian day. The latter two are respectively the fifth and seventh largest moons in the Jovian system. Observations suggest that at least the largest member, Amalthea, did not form on its present orbit, but farther from the planet, or that it is a captured Solar System body. These moons, along with a number of seen and as-yet-unseen inner moonlets (see Amalthea moonlets), replenish and maintain Jupiter's faint ring system. Metis and Adrastea help to maintain Jupiter's main ring, whereas Amalthea and Thebe each maintain their own faint outer rings.
Main group or Galilean moons: Io, Europa, Ganymede and Callisto. They are some of the largest objects in the Solar System outside the Sun and the eight planets in terms of mass, larger than any known dwarf planet. Ganymede exceeds (and Callisto nearly equals) even the planet Mercury in diameter, though they are less massive. They are respectively the fourth-, sixth-, first-, and third-largest natural satellites in the Solar System, containing approximately 99.997% of the total mass in orbit around Jupiter, while Jupiter is almost 5,000 times more massive than the Galilean moons. The inner three (Io, Europa, and Ganymede) are locked in a 1:2:4 orbital resonance. Models suggest that they formed by slow accretion in the low-density Jovian subnebula—a disc of the gas and dust that existed around Jupiter after its formation—which lasted up to 10 million years in the case of Callisto. Europa, Ganymede, and Callisto are suspected of having subsurface water oceans, and Io may have a subsurface magma ocean.
Irregular satellites
The irregular satellites are substantially smaller objects with more distant and eccentric orbits. They form families with shared similarities in orbit (semi-major axis, inclination, eccentricity) and composition; it is believed that these are at least partially collisional families that were created when larger (but still small) parent bodies were shattered by impacts from asteroids captured by Jupiter's gravitational field. These families bear the names of their largest members. The identification of satellite families is tentative, but the following are typically listed:
Prograde satellites:
Themisto is the innermost irregular moon and is not part of a known family.
The Himalia group is confined within semi-major axes between , inclinations between 27 and 29°, and eccentricities between 0.12 and 0.21. It has been suggested that the group could be a remnant of the break-up of an asteroid from the asteroid belt. The largest two members, Himalia and Elara, are respectively the sixth- and eighth-largest Jovian moons.
The Carpo group includes two known moons with very high orbital inclinations of about 50° and semi-major axes between . Due to their exceptionally high inclinations, the moons of the Carpo group are subject to gravitational perturbations that induce the Lidov–Kozai resonance in their orbits, which causes their eccentricities and inclinations to periodically oscillate in correspondence with each other. The Lidov–Kozai resonance can significantly alter the orbits of these moons: for example, the eccentricity and inclination of the group's namesake Carpo can fluctuate between 0.19–0.69 and 44–59°, respectively.
Valetudo is the outermost prograde moon and is not part of a known family. Its prograde orbit crosses paths with several moons that have retrograde orbits and may in the future collide with them.
Retrograde satellites:
The Carme group is tightly confined within semi-major axes between , inclinations between 164 and 166°, and eccentricities between 0.25 and 0.28. It is very homogeneous in color (light red) and is believed to have originated as collisional fragments from a D-type asteroid progenitor, possibly a Jupiter trojan.
The Ananke group has a relatively wider spread than the previous groups, with semi-major axes between , inclinations between 144 and 156°, and eccentricities between 0.09 and 0.25. Most of the members appear gray, and are believed to have formed from the breakup of a captured asteroid.
The Pasiphae group is quite dispersed, with semi-major axes spread over , inclinations between 141° and 157°, and higher eccentricities between 0.23 and 0.44. The colors also vary significantly, from red to grey, which might be the result of multiple collisions. Sinope, sometimes included in the Pasiphae group, is red and, given the difference in inclination, it could have been captured independently; Pasiphae and Sinope are also trapped in secular resonances with Jupiter.
Based on their survey discoveries in 2000–2003, Sheppard and Jewitt predicted that Jupiter should have approximately 100 irregular satellites larger than in diameter, or brighter than magnitude 24. Survey observations by Alexandersen et al. in 2010–2011 agreed with this prediction, estimating that approximately 40 Jovian irregular satellites of this size remained undiscovered in 2012.
In September 2020, researchers from the University of British Columbia identified 45 candidate irregular moons from an analysis of archival images taken in 2010 by the CFHT. These candidates were mainly small and faint, down to magnitude 25.7. From the number of candidate moons detected within a sky area of one square degree, the team extrapolated that the population of retrograde Jovian moons brighter than magnitude 25.7 is around within a factor of 2. Although the team considers their characterized candidates to be likely moons of Jupiter, they all remain unconfirmed due to insufficient observation data for determining reliable orbits. The known population of Jovian irregular moons is likely complete down to magnitude 23.2, at diameters over .
List
The moons of Jupiter are listed below by orbital period. Moons massive enough for their surfaces to have collapsed into a spheroid are highlighted in bold. These are the four Galilean moons, which are comparable in size to the Moon. The other moons are much smaller; the least massive Galilean moon is still more than 7,000 times as massive as the most massive of the other moons. The irregular captured moons are shaded light gray and orange when prograde and yellow, red, and dark gray when retrograde.
The orbits and mean distances of the irregular moons are highly variable over short timescales due to frequent planetary and solar perturbations, so proper orbital elements which are averaged over a period of time are preferably used. The proper orbital elements of the irregular moons listed here are averaged over a 400-year numerical integration by the Jet Propulsion Laboratory: for the above reasons, they may strongly differ from osculating orbital elements provided by other sources. Otherwise, recently discovered irregular moons without published proper elements are temporarily listed here with inaccurate osculating orbital elements that are italicized to distinguish them from other irregular moons with proper orbital elements. Some of the irregular moons' proper orbital periods in this list may not scale accordingly with their proper semi-major axes due to the aforementioned perturbations. The irregular moons' proper orbital elements are all based on the reference epoch of 1 January 2000.
Some irregular moons have only been observed briefly for a year or two, but their orbits are known accurately enough that they will not be lost to positional uncertainties.
Exploration
Nine spacecraft have visited Jupiter. The first were Pioneer 10 in 1973, and Pioneer 11 a year later, taking low-resolution images of the four Galilean moons and returning data on their atmospheres and radiation belts. The Voyager 1 and Voyager 2 probes visited Jupiter in 1979, discovering the volcanic activity on Io and the presence of water ice on the surface of Europa. Ulysses further studied Jupiter's magnetosphere in 1992 and then again in 2000.
The Galileo spacecraft was the first to enter orbit around Jupiter, arriving in 1995 and studying it until 2003. During this period, Galileo gathered a large amount of information about the Jovian system, making close approaches to all of the Galilean moons and finding evidence for thin atmospheres on three of them, as well as the possibility of liquid water beneath the surfaces of Europa, Ganymede, and Callisto. It also discovered a magnetic field around Ganymede.
Then the Cassini probe to Saturn flew by Jupiter in 2000 and collected data on interactions of the Galilean moons with Jupiter's extended atmosphere. The New Horizons spacecraft flew by Jupiter in 2007 and made improved measurements of its satellites' orbital parameters.
In 2016, the Juno spacecraft imaged the Galilean moons from above their orbital plane as it approached Jupiter orbit insertion, creating a time-lapse movie of their motion. With a mission extension, Juno has since begun close flybys of the Galileans, flying by Ganymede in 2021 followed by Europa and Io in 2022. It flew by Io again in late 2023 and once more in early 2024.
See also
Jupiter's moons in fiction
Satellite system (astronomy)
Notes
References
External links
Scott S. Sheppard: Moons of Jupiter
Scott S. Sheppard: The Jupiter Satellite and Moon Page
Jupiter Moons by NASA's Solar System Exploration
Archive of Jupiter System Articles in Planetary Science Research Discoveries
Tilmann Denk: Outer Moons of Jupiter
Lists of moons
Solar System | Moons of Jupiter | [
"Astronomy"
] | 4,765 | [
"Outer space",
"Solar System"
] |
575,641 | https://en.wikipedia.org/wiki/Casting%20out%20nines | Casting out nines is any of three arithmetical procedures:
Adding the decimal digits of a positive whole number, while optionally ignoring any 9s or digits which sum to 9 or a multiple of 9. The result of this procedure is a number which is smaller than the original whenever the original has more than one digit, leaves the same remainder as the original after division by nine, and may be obtained from the original by subtracting a multiple of 9 from it. The name of the procedure derives from this latter property.
Repeated application of this procedure to the results obtained from previous applications until a single-digit number is obtained. This single-digit number is called the "digital root" of the original. If a number is divisible by 9, its digital root is 9. Otherwise, its digital root is the remainder it leaves after being divided by 9.
A sanity test in which the above-mentioned procedures are used to check for errors in arithmetical calculations. The test is carried out by applying the same sequence of arithmetical operations to the digital roots of the operands as are applied to the operands themselves. If no mistakes are made in the calculations, the digital roots of the two resultants will be the same. If they are different, therefore, one or more mistakes must have been made in the calculations.
Digit sums
To "cast out nines" from a single number, its decimal digits can be simply added together to obtain its so-called digit sum. The digit sum of 2946, for example is 2 + 9 + 4 + 6 = 21. Since 21 = 2946 − 325 × 9, the effect of taking the digit sum of 2946 is to "cast out" 325 lots of 9 from it. If the digit 9 is ignored when summing the digits, the effect is to "cast out" one more 9 to give the result 12.
More generally, when casting out nines by summing digits, any set of digits which add up to 9, or a multiple of 9, can be ignored. In the number 3264, for example, the digits 3 and 6 sum to 9. Ignoring these two digits, therefore, and summing the other two, we get 2 + 4 = 6. Since 6 = 3264 − 362 × 9, this computation has resulted in casting out 362 lots of 9 from 3264.
For an arbitrary number, $10^n a_n + 10^{n-1} a_{n-1} + \cdots + a_0$, normally represented by the sequence of decimal digits $a_n a_{n-1} \ldots a_0$, the digit sum is $a_n + a_{n-1} + \cdots + a_0$. The difference between the original number and its digit sum is
$$\bigl(10^n a_n + \cdots + a_0\bigr) - \bigl(a_n + \cdots + a_0\bigr) = (10^n - 1)a_n + (10^{n-1} - 1)a_{n-1} + \cdots + 9 a_1.$$
Because numbers of the form $10^i - 1$ are always divisible by 9 (since $10^i - 1 = 9 \times (10^{i-1} + 10^{i-2} + \cdots + 1)$), replacing the original number by its digit sum has the effect of casting out
$$\frac{(10^n - 1)a_n + (10^{n-1} - 1)a_{n-1} + \cdots + 9 a_1}{9} = \frac{10^n - 1}{9}\,a_n + \cdots + 1 \cdot a_1$$
lots of 9.
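A minimal Python sketch of the digit-sum step (the function name is ours for illustration, not from any standard library):

def digit_sum(n):
    """Return the sum of the decimal digits of a non-negative integer."""
    return sum(int(d) for d in str(n))

# Each application subtracts a multiple of 9 from the original ("casts out" nines):
assert digit_sum(2946) == 21
assert (2946 - digit_sum(2946)) // 9 == 325   # 325 lots of 9 were cast out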
Digital roots
If the procedure described in the preceding paragraph is repeatedly applied to the result of each previous application, the eventual result will be a single-digit number from which all 9s, with the possible exception of one, have been "cast out". The resulting single-digit number is called the digital root of the original. The exception occurs when the original number has a digital root of 9, whose digit sum is itself, and therefore will not be cast out by taking further digit sums.
The number 12565, for instance, has digit sum 1+2+5+6+5 = 19, which, in turn, has digit sum 1+9=10, which, in its turn has digit sum 1+0=1, a single-digit number. The digital root of 12565 is therefore 1, and its computation has the effect of casting out (12565 - 1)/9 = 1396 lots of 9 from 12565.
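The repeated procedure is a short loop on top of digit_sum from the sketch above; for positive n the result also satisfies the closed form 1 + (n − 1) mod 9, which the last line checks:

def digital_root(n):
    """Replace n by its digit sum until a single digit remains."""
    while n > 9:
        n = digit_sum(n)
    return n

assert digital_root(12565) == 1
assert digital_root(12565) == 1 + (12565 - 1) % 9   # closed form for positive n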
Checking calculations by casting out nines
To check the result of an arithmetical calculation by casting out nines, each number in the calculation is replaced by its digital root and the same calculations applied to these digital roots. The digital root of the result of this calculation is then compared with that of the result of the original calculation. If no mistake has been made in the calculations, these two digital roots must be the same. Examples in which casting-out-nines has been used to check addition, subtraction, multiplication, and division are given below.
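Expressed in code, the test reduces both sides of a claimed calculation to digital roots (a sketch building on digital_root above; a match does not prove correctness, but a mismatch proves an error):

import operator

def check_by_nines(a, b, claimed, op=operator.add):
    """Compare digital roots of a claimed calculation; valid for
    addition and multiplication as written."""
    return digital_root(op(digital_root(a), digital_root(b))) == digital_root(claimed)

assert check_by_nines(2946, 3264, 6210)               # correct sum passes
assert not check_by_nines(2946, 3264, 6211)           # off-by-one error is caught
assert check_by_nines(12, 34, 408, op=operator.mul)   # 12 × 34 = 408 passes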
Examples
Addition
In each addend, cross out all 9s and all pairs of digits that total 9, then add the remaining digits of that addend together, repeating until a single digit is reached; these single-digit values are called excesses. Finally, compute the excess of the sum in the same way and check that it matches the result of combining the addends' excesses.
Subtraction
Multiplication
For example, if the two factors reduce to excesses of 8 and 8: 8 times 8 is 64; 6 and 4 are 10; 1 and 0 are 1; so the product's excess should be 1.
Division
How it works
The method works because the original numbers are written in decimal (base 10), the modulus 9 is chosen to differ from the base by 1, and casting out is equivalent to taking a digit sum. In general, any two 'large' integers, x and y, reduced in any smaller modulus as x' and y' (for example, modulo 7) will always have the same sum, difference or product as their originals, modulo that modulus. This property is also preserved by the 'digit sum' where the base and the modulus differ by 1.
If a calculation was correct before casting out, casting out on both sides will preserve correctness. However, it is possible that two previously unequal integers will be identical modulo 9 (on average, a ninth of the time).
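This congruence property is easy to spot-check with a throwaway randomized test (standard library only):

import random

for _ in range(1000):
    x, y = random.randrange(10**9), random.randrange(10**9)
    assert (x + y) % 9 == ((x % 9) + (y % 9)) % 9
    assert (x - y) % 9 == ((x % 9) - (y % 9)) % 9   # Python's % keeps both sides in [0, 9)
    assert (x * y) % 9 == ((x % 9) * (y % 9)) % 9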
The operation does not work on fractions, since a given fractional number does not have a unique representation.
A variation on the explanation
A trick to learn to add with nines is to add ten to the digit and to count back one. Since we are adding 1 to the tens digit and subtracting one from the units digit, the sum of the digits should remain the same. For example, 9 + 2 = 11 with 1 + 1 = 2. When adding 9 to itself, we would thus expect the sum of the digits to be 9 as follows: 9 + 9 = 18, (1 + 8 = 9) and 9 + 9 + 9 = 27, (2 + 7 = 9). Let us look at a simple multiplication: 5 × 7 = 35, (3 + 5 = 8). Now consider (7 + 9) × 5 = 16 × 5 = 80, (8 + 0 = 8) or 7 × (9 + 5) = 7 × 14 = 98, (9 + 8 = 17), (1 + 7 = 8).
Any non-negative integer can be written as 9×n + a, where 'a' is a single digit from 0 to 8, and 'n' is some non-negative integer.
Thus, using the distributive rule, (9×n + a) × (9×m + b) = 9×9×n×m + 9×(a×m + b×n) + a×b. Since the first two terms are multiples of 9, they are cast out, leaving only the product 'ab'. In our example, 'a' was 7 and 'b' was 5. We would expect that in any base system, the number one less than the base would behave just like the nine.
Limitation to casting out nines
While extremely useful, casting out nines does not catch all errors made while doing calculations. For example, the casting-out-nines method would not recognize the error in a calculation of 5 × 7 which produced any of the erroneous results 8, 17, 26, etc. (that is, any result congruent to 8 modulo 9). In particular, casting out nines does not catch transposition errors, such as 1324 instead of 1234. In other words, the method only catches erroneous results whose digital root differs from that of the correct result; a random error slips through about one time in nine.
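Concretely, using digital_root from the sketch above:

# Transposing digits never changes the digit sum, so the check cannot detect it:
assert digital_root(1234) == digital_root(1324) == 1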
History
A form of casting out nines known to ancient Greek mathematicians was described by the Roman bishop Hippolytus (170–235) in The Refutation of all Heresies, and more briefly by the Syrian Neoplatonist philosopher Iamblichus (c.245–c.325) in his commentary on the Introduction to Arithmetic of Nicomachus of Gerasa. Both Hippolytus's and Iamblichus's descriptions, though, were limited to an explanation of how repeated digital sums of Greek numerals were used to compute a unique "root" between 1 and 9. Neither of them displayed any awareness of how the procedure could be used to check the results of arithmetical computations.
The earliest known surviving work which describes how casting out nines can be used to check the results of arithmetical computations is the Mahâsiddhânta, written around 950 by the Indian mathematician and astronomer Aryabhata II (c.920–c.1000).
Writing about 1020, the Persian polymath Ibn Sina (Avicenna; c. 980–1037) also gave full details of what he called the "Hindu method" of checking arithmetical calculations by casting out nines.
The procedure was described by Fibonacci in his Liber Abaci.
Generalization
This method can be generalized to determine the remainders of division by certain prime numbers.
Since 3·3 = 9, $x \bmod 3 = (x \bmod 9) \bmod 3$.
So we can use the remainder from casting out nines to get the remainder of division by three.
Casting out ninety-nines is done by adding groups of two digits instead of just one digit.
Since 11·9 = 99, $x \bmod 11 = (x \bmod 99) \bmod 11$.
So we can use the remainder from casting out ninety-nines to get the remainder of division by eleven. This is called casting out elevens. The same result can also be calculated directly by alternately adding and subtracting the digits that make up $x$: eleven divides $x$ if and only if eleven divides that alternating sum.
Casting out nine hundred ninety-nines is done by adding groups of three digits.
Since 37·27 = 999, $x \bmod 37 = (x \bmod 999) \bmod 37$.
So we can use the remainder from casting out nine hundred ninety-nines to get the remainder of division by thirty-seven.
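All of these tricks are instances of one procedure: summing k-digit groups of n yields a value congruent to n modulo 10**k − 1. A sketch (cast_out is our name for it):

def cast_out(n, k):
    """Sum base-10 digit groups of length k until at most k digits remain.
    The result is congruent to n modulo 10**k - 1 (it may equal 10**k - 1
    itself rather than 0, which is still congruent)."""
    m = 10 ** k
    while n >= m:
        total = 0
        while n:
            total += n % m   # peel off the lowest k-digit group
            n //= m
        n = total
    return n

assert cast_out(123456, 1) % 3 == 123456 % 3     # casting out nines, read modulo 3
assert cast_out(123456, 2) % 11 == 123456 % 11   # casting out ninety-nines
assert cast_out(123456, 3) % 37 == 123456 % 37   # casting out 999s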
Notes
References
External links
"Numerology" by R. Buckminster Fuller
"Paranormal Numbers" by Paul Niquette
Arithmetic
Error detection and correction | Casting out nines | [
"Mathematics",
"Engineering"
] | 2,077 | [
"Arithmetic",
"Reliability engineering",
"Number theory",
"Error detection and correction"
] |
575,697 | https://en.wikipedia.org/wiki/Cheminformatics | Cheminformatics (also known as chemoinformatics) refers to the use of physical chemistry theory with computer and information science techniques—so called "in silico" techniques—in application to a range of descriptive and prescriptive problems in the field of chemistry, including in its applications to biology and related molecular fields. Such in silico techniques are used, for example, by pharmaceutical companies and in academic settings to aid and inform the process of drug discovery, for instance in the design of well-defined combinatorial libraries of synthetic compounds, or to assist in structure-based drug design. The methods can also be used in chemical and allied industries, and such fields as environmental science and pharmacology, where chemical processes are involved or studied.
History
Cheminformatics has been an active field in various guises since the 1970s and earlier, with activity in academic departments and commercial pharmaceutical research and development departments. The term chemoinformatics was defined in its application to drug discovery by F.K. Brown in 1998: "Chemoinformatics is the mixing of those information resources to transform data into information and information into knowledge for the intended purpose of making better decisions faster in the area of drug lead identification and optimization." Since then, both terms, cheminformatics and chemoinformatics, have been used, although, lexicographically, cheminformatics appears to be more frequently used, despite academics in Europe declaring for the variant chemoinformatics in 2006. In 2009, a prominent Springer journal in the field, the Journal of Cheminformatics, was founded by transatlantic executive editors.
Background
Cheminformatics combines the scientific working fields of chemistry, computer science, and information science—for example in the areas of topology, chemical graph theory, information retrieval and data mining in the chemical space. Cheminformatics can also be applied to data analysis in industries such as paper and pulp, dyes, and allied sectors.
Applications
Storage and retrieval
A primary application of cheminformatics is the storage, indexing, and search of information relating to chemical compounds. The efficient search of such stored information includes topics that are dealt with in computer science, such as data mining, information retrieval, information extraction, and machine learning. Related research topics include:
Digital libraries
Unstructured data
Structured data mining
Database mining
Graph mining
Molecule mining
Sequence mining
Tree mining
File formats
The in silico representation of chemical structures uses specialized formats such as the simplified molecular-input line-entry system (SMILES) or the XML-based Chemical Markup Language. These representations are often used for storage in large chemical databases. While some formats are suited for visual representations in two or three dimensions, others are more suited for studying physical interactions, modeling and docking studies.
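For instance, a SMILES string can be parsed and canonicalized in a few lines with an open-source toolkit such as RDKit (a sketch assuming RDKit is installed; the molecule, aspirin, is an arbitrary example, not taken from the text):

from rdkit import Chem

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
print(Chem.MolToSmiles(mol))   # canonical SMILES, suitable as a database key
print(mol.GetNumAtoms())       # heavy-atom count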
Virtual libraries
Chemical data can pertain to real or virtual molecules. Virtual libraries of compounds may be generated in various ways to explore chemical space and hypothesize novel compounds with desired properties. Virtual libraries of classes of compounds (drugs, natural products, diversity-oriented synthetic products) were recently generated using the FOG (fragment optimized growth) algorithm. This was done by using cheminformatic tools to train transition probabilities of a Markov chain on authentic classes of compounds, and then using the Markov chain to generate novel compounds that were similar to the training database.
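The underlying idea of training transition probabilities and then sampling new strings can be illustrated with a toy first-order Markov chain over characters. This is a conceptual sketch only, not the published FOG algorithm, and the training corpus below is invented:

import random
from collections import defaultdict

corpus = ["CCO", "CCN", "CCCO", "CCOC"]   # hypothetical training strings
transitions = defaultdict(list)
for s in corpus:
    for a, b in zip(s, s[1:]):
        transitions[a].append(b)           # empirical transition table

def grow(start="C", max_len=6):
    """Sample a new string by walking the learned transitions."""
    out = [start]
    while len(out) < max_len and transitions[out[-1]]:
        out.append(random.choice(transitions[out[-1]]))
    return "".join(out)

print(grow())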
Virtual screening
In contrast to high-throughput screening, virtual screening involves computationally screening in silico libraries of compounds, by means of various methods such as docking, to identify members likely to possess desired properties such as biological activity against a given target. In some cases, combinatorial chemistry is used in the development of the library to increase the efficiency in mining the chemical space. More commonly, a diverse library of small molecules or natural products is screened.
Quantitative structure-activity relationship (QSAR)
This is the calculation of quantitative structure–activity relationship and quantitative structure property relationship values, used to predict the activity of compounds from their structures. In this context there is also a strong relationship to chemometrics. Chemical expert systems are also relevant, since they represent parts of chemical knowledge as an in silico representation. There is a relatively new concept of matched molecular pair analysis, or prediction-driven MMPA, which is coupled with a QSAR model in order to identify activity cliffs.
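At its simplest, a QSAR model is a regression from numerical molecular descriptors to an activity value. A minimal sketch with scikit-learn follows; the descriptor columns and activity values are invented purely for illustration:

import numpy as np
from sklearn.linear_model import LinearRegression

# Rows are molecules; columns are hypothetical descriptors
# (e.g., molecular weight, logP, hydrogen-bond donor count).
X = np.array([[180.2, 1.2, 1],
              [206.3, 2.1, 0],
              [151.2, 0.9, 2],
              [194.2, 1.8, 1]])
y = np.array([5.1, 6.0, 4.2, 5.6])   # invented activities (e.g., pIC50)

model = LinearRegression().fit(X, y)
print(model.predict(np.array([[200.0, 1.5, 1]])))  # prediction for a new compound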
See also
Bioinformatics
Chemical file format
Chemicalize.org
Cheminformatics toolkits
Chemogenomics
Computational chemistry
Information engineering
Journal of Chemical Information and Modeling
Journal of Cheminformatics
Materials informatics
Molecular design software
Molecular graphics
Molecular Informatics
Molecular modelling
Nanoinformatics
Software for molecular modeling
WorldWide Molecular Matrix
Molecular descriptor
References
Further reading
External links
Computational chemistry
Drug discovery
Computational fields of study
Applied statistics | Cheminformatics | [
"Chemistry",
"Mathematics",
"Technology",
"Biology"
] | 968 | [
"Computational fields of study",
"Life sciences industry",
"Drug discovery",
"Applied mathematics",
"Theoretical chemistry",
"Computational chemistry",
"Computing and society",
"Cheminformatics",
"nan",
"Medicinal chemistry",
"Applied statistics"
] |
575,749 | https://en.wikipedia.org/wiki/Moons%20of%20Saturn | The moons of Saturn are numerous and diverse, ranging from tiny moonlets only tens of meters across to the enormous Titan, which is larger than the planet Mercury. There are 146 moons with confirmed orbits, the most of any planet in the Solar System. This number does not include the many thousands of moonlets embedded within Saturn's dense rings, nor hundreds of possible kilometer-sized distant moons that have been observed on single occasions. Seven Saturnian moons are large enough to have collapsed into a relaxed, ellipsoidal shape, though only one or two of those, Titan and possibly Rhea, are currently in hydrostatic equilibrium. Three moons are particularly notable. Titan is the second-largest moon in the Solar System (after Jupiter's Ganymede), with a nitrogen-rich Earth-like atmosphere and a landscape featuring river networks and hydrocarbon lakes. Enceladus emits jets of ice from its south-polar region and is covered in a deep layer of snow. Iapetus has contrasting black and white hemispheres as well as an extensive ridge of equatorial mountains among the tallest in the solar system.
Twenty-four of the known moons are regular satellites; they have prograde orbits not greatly inclined to Saturn's equatorial plane, with the exception of Iapetus, which has a prograde but highly inclined orbit, an unusual characteristic for a regular moon. They include the seven major satellites, four small moons that exist in a trojan orbit with larger moons, and five that act as shepherd moons, of which two are mutually co-orbital. Two tiny moons orbit within Saturn's B and G rings. The relatively large Hyperion is locked in an orbital resonance with Titan. The remaining regular moons orbit near the outer edges of the dense A Ring and the narrow F Ring, and between the major moons Mimas and Enceladus. The regular satellites are traditionally named after Titans and Titanesses or other figures associated with the mythological Saturn.
The remaining 122, with mean diameters ranging from , orbit much farther from Saturn. They are irregular satellites, having high orbital inclinations and eccentricities and a mix of prograde and retrograde orbital directions. These moons are probably captured minor planets, or fragments from the collisional breakup of such bodies after they were captured, creating collisional families. Saturn is expected to have around 150 irregular satellites larger than in diameter, plus many hundreds more that are even smaller. The irregular satellites are classified by their orbital characteristics into the prograde Inuit and Gallic groups and the large retrograde Norse group, and their names are chosen from the corresponding mythologies (with the Gallic group corresponding to Celtic mythology). The sole exception is Phoebe, the largest irregular Saturnian moon, discovered at the end of the 19th century; it is part of the Norse group but named for a Greek Titaness.
The rings of Saturn are made up of objects ranging in size from microscopic to moonlets hundreds of meters across, each in its own orbit around Saturn. Thus an absolute number of Saturnian moons cannot be given, because there is no consensus on a boundary between the countless small unnamed objects that form Saturn's ring system and the larger objects that have been named as moons. Over 150 moonlets embedded in the rings have been detected by the disturbance they create in the surrounding ring material, though this is thought to be only a small sample of the total population of such objects.
There are 83 designated moons that are still unnamed; all but one (the designated B-ring moonlet S/2009 S 1) are irregular. (There are many other undesignated ring moonlets.) If named, most of the irregulars will receive names from Gallic, Norse and Inuit mythology based on the orbital group of which they are a member.
Discovery
Early observations
Before the advent of telescopic photography, eight moons of Saturn were discovered by direct observation using optical telescopes. Saturn's largest moon, Titan, was discovered in 1655 by Christiaan Huygens using an objective lens on a refracting telescope of his own design. Tethys, Dione, Rhea and Iapetus (the "Sidera Lodoicea") were discovered between 1671 and 1684 by Giovanni Domenico Cassini. Mimas and Enceladus were discovered in 1789 by William Herschel. Hyperion was discovered in 1848 by W. C. Bond, G. P. Bond and William Lassell.
The use of long-exposure photographic plates made possible the discovery of additional moons. The first to be discovered in this manner, Phoebe, was found in 1899 by W. H. Pickering. In 1966 the tenth satellite of Saturn was discovered by Audouin Dollfus, when the rings were observed edge-on near an equinox. It was later named Janus. A few years later it was realized that all observations of 1966 could only be explained if another satellite had been present and that it had an orbit similar to that of Janus. This object is now known as Epimetheus, the eleventh moon of Saturn. It shares the same orbit with Janus—the only known example of co-orbitals in the Solar System. In 1980, three additional Saturnian moons were discovered from the ground and later confirmed by the Voyager probes. They are trojan moons of Dione (Helene) and Tethys (Telesto and Calypso).
Observations by spacecraft
The study of the outer planets has since been revolutionized by the use of uncrewed space probes. The arrival of the Voyager spacecraft at Saturn in 1980–1981 resulted in the discovery of three additional moons—Atlas, Prometheus and Pandora—bringing the total to 17. In addition, Epimetheus was confirmed as distinct from Janus. In 1990, Pan was discovered in archival Voyager images.
The Cassini mission, which arrived at Saturn in July 2004, initially discovered three small inner moons: Methone and Pallene between Mimas and Enceladus, and the second trojan moon of Dione, Polydeuces. It also observed three suspected but unconfirmed moons in the F Ring. Cassini scientists later announced that the structure of Saturn's rings indicates the presence of several more moons orbiting within the rings, although only one, Daphnis, had been visually confirmed at the time. In 2007, Anthe was announced. In 2008, it was reported that Cassini observations of a depletion of energetic electrons in Saturn's magnetosphere near Rhea might be the signature of a tenuous ring system around Saturn's second largest moon. In 2009, Aegaeon, a moonlet within the G Ring, was announced. In July of the same year, S/2009 S 1, the first moonlet within the B Ring, was observed. In April 2014, the possible beginning of a new moon within the A Ring was reported.
Outer moons
Study of Saturn's moons has also been aided by advances in telescope instrumentation, primarily the introduction of digital charge-coupled devices, which replaced photographic plates. Throughout the 20th century, Phoebe stood alone among Saturn's known moons with its highly irregular orbit. Then, starting in 2000, three dozen additional irregular moons were discovered using ground-based telescopes. A survey starting in late 2000 and conducted using three medium-size telescopes found thirteen new moons orbiting Saturn at a great distance, in eccentric orbits that are highly inclined to both the equator of Saturn and the ecliptic. They are probably fragments of larger bodies captured by Saturn's gravitational pull. In 2005, astronomers using the Mauna Kea Observatory announced the discovery of twelve more small outer moons; in 2006, astronomers using the Subaru 8.2 m telescope reported the discovery of nine more irregular moons; in 2007, Tarqeq (S/2007 S 1) was announced, and in May of the same year S/2007 S 2 and S/2007 S 3 were reported. In 2019, twenty new irregular satellites of Saturn were reported, resulting in Saturn overtaking Jupiter as the planet with the most known moons for the first time since 2000.
In 2019, researchers Edward Ashton, Brett Gladman, and Matthew Beaudoin conducted a survey of Saturn's Hill sphere using the 3.6-meter Canada–France–Hawaii Telescope and discovered about 80 new Saturnian irregular moons. Follow-up observations of these new moons took place over 2019–2021, eventually leading to S/2019 S 1 being announced in November 2021 and an additional 62 moons being announced from 3–16 May 2023. These discoveries brought Saturn's total number of confirmed moons up to 145, making it the first planet known to have over 100 moons. Yet another moon, S/2006 S 20, was announced on 23 May 2023, bringing Saturn's total count of moons to 146. All of these new moons are small and faint, with diameters over and apparent magnitudes of 25–27. The researchers found that the Saturnian irregular moon population is more abundant at smaller sizes, suggesting that they are likely fragments from a collision that occurred a few hundred million years ago. The researchers extrapolated that the true population of Saturnian irregular moons larger than in diameter amounts to , approximately three times the number of Jovian irregular moons down to the same size. If this size distribution applies to even smaller diameters, Saturn would therefore intrinsically have more irregular moons than Jupiter.
Naming
The modern names for Saturnian moons were suggested by John Herschel in 1847. He proposed to name them after mythological figures associated with the Roman god of agriculture and harvest, Saturn (equated to the Greek Cronus). In particular, the then known seven satellites were named after Titans, Titanesses and Giants – brothers and sisters of Cronus. The idea was similar to Simon Marius' mythological naming scheme for the moons of Jupiter.
As Saturn devoured his children, his family could not be assembled around him, so that the choice lay among his brothers and sister, the Titans and Titanesses. The name Iapetus seemed indicated by the obscurity and remoteness of the exterior satellite, Titan by the superior size of the Huyghenian, while the three female appellations [Rhea, Dione, and Tethys] class together the three intermediate Cassinian satellites. The minute interior ones seemed appropriately characterized by a return to male appellations [Enceladus and Mimas] chosen from a younger and inferior (though still superhuman) brood. [Results of the Astronomical Observations made ... at the Cape of Good Hope, p. 415]
In 1848, Lassell proposed that the eighth satellite of Saturn be named Hyperion after another Titan. When in the 20th century the names of Titans were exhausted, the moons were named after different characters of the Greco-Roman mythology or giants from other mythologies. All the irregular moons (except Phoebe, discovered about a century before the others) are named after Inuit and Gallic gods and after Norse ice giants.
Some asteroids share the same names as moons of Saturn: 55 Pandora, 106 Dione, 577 Rhea, 1809 Prometheus, 1810 Epimetheus, and 4450 Pan. In addition, three more asteroids would share the names of Saturnian moons but for spelling differences made permanent by the International Astronomical Union (IAU): Calypso and asteroid 53 Kalypso; Helene and asteroid 101 Helena; and Gunnlod and asteroid 657 Gunlöd.
Physical characteristics
Saturn's satellite system is very lopsided: one moon, Titan, comprises more than 96% of the mass in orbit around the planet. The six other planemos (ellipsoidal moons) constitute roughly 4% of the mass, and the remaining small moons, together with the rings, comprise only 0.04%.
Orbital groups
Although the boundaries may be somewhat vague, Saturn's moons can be divided into ten groups according to their orbital characteristics. Many of them, such as Pan and Daphnis, orbit within Saturn's ring system and have orbital periods only slightly longer than the planet's rotation period. The innermost moons and most regular satellites all have mean orbital inclinations ranging from less than a degree to about 1.5 degrees (except Iapetus, which has an inclination of 7.57 degrees) and small orbital eccentricities. On the other hand, irregular satellites in the outermost regions of Saturn's moon system, in particular the Norse group, have orbital radii of millions of kilometers and orbital periods lasting several years. The moons of the Norse group also orbit in the opposite direction to Saturn's rotation.
Inner moons
Ring moonlets
During late July 2009, a moonlet, S/2009 S 1, was discovered in the B Ring, 480 km from the outer edge of the ring, by the shadow it cast. It is estimated to be 300 m in diameter. Unlike the A Ring moonlets (see below), it does not induce a 'propeller' feature, probably due to the density of the B Ring.
In 2006, four tiny moonlets were found in Cassini images of the A Ring. Before this discovery only two larger moons had been known within gaps in the A Ring: Pan and Daphnis. These are large enough to clear continuous gaps in the ring. In contrast, a moonlet is only massive enough to clear two small—about 10 km across—partial gaps in the immediate vicinity of the moonlet itself creating a structure shaped like an airplane propeller. The moonlets themselves are tiny, ranging from about 40 to 500 meters in diameter, and are too small to be seen directly.
In 2007, the discovery of 150 more moonlets revealed that they (with the exception of two that have been seen outside the Encke gap) are confined to three narrow bands in the A Ring between 126,750 and 132,000 km from Saturn's center. Each band is about a thousand kilometers wide, which is less than 1% the width of Saturn's rings. This region is relatively free from the disturbances caused by resonances with larger satellites, although other areas of the A Ring without disturbances are apparently free of moonlets. The moonlets were probably formed from the breakup of a larger satellite. It is estimated that the A Ring contains 7,000–8,000 propellers larger than 0.8 km in size and millions larger than 0.25 km. In April 2014, NASA scientists reported the possible consolidation of a new moon within the A Ring, implying that Saturn's present moons may have formed in a similar process in the past when Saturn's ring system was much more massive.
Similar moonlets may reside in the F Ring. There, "jets" of material may be due to collisions, initiated by perturbations from the nearby small moon Prometheus, of these moonlets with the core of the F Ring. One of the largest F Ring moonlets may be the as-yet unconfirmed object S/2004 S 6. The F Ring also contains transient "fans" which are thought to result from even smaller moonlets, about 1 km in diameter, orbiting near the F Ring core.
One recently discovered moon, Aegaeon, resides within the bright arc of the G Ring and is trapped in the 7:6 mean-motion resonance with Mimas. This means that it makes exactly seven revolutions around Saturn while Mimas makes exactly six. The moon is the largest among the population of bodies that are sources of dust in this ring.
Ring shepherds
Shepherd satellites are small moons that orbit within, or just beyond, a planet's ring system. They have the effect of sculpting the rings: giving them sharp edges, and creating gaps between them. Saturn's shepherd moons are Pan (Encke gap), Daphnis (Keeler gap), Prometheus (F Ring), Janus (A Ring), and Epimetheus (A Ring). These moons probably formed as a result of accretion of the friable ring material on preexisting denser cores. The cores with sizes from one-third to one-half the present-day moons may be themselves collisional shards formed when a parental satellite of the rings disintegrated.
Janus and Epimetheus are co-orbital moons. They are of similar size, with Janus being somewhat larger than Epimetheus. They have orbits with less than a 100-kilometer difference in semi-major axis, close enough that they would collide if they attempted to pass each other. Instead of colliding, their gravitational interaction causes them to swap orbits every four years.
Other inner moons
Other inner moons that are neither ring shepherds nor ring moonlets include Atlas and Pandora.
Inner large
The innermost large moons of Saturn orbit within its tenuous E Ring, along with three smaller moons of the Alkyonides group.
Mimas is the smallest and least massive of the inner round moons, although its mass is sufficient to alter the orbit of Methone. It is noticeably ovoid-shaped, having been made shorter at the poles and longer at the equator (by about 20 km) by the effects of Saturn's gravity. Mimas has a large impact crater one-third its diameter, Herschel, situated on its leading hemisphere. Mimas has no known past or present geologic activity, and its surface is dominated by impact craters, though it does have a water ocean 20–30 km beneath the surface. The only tectonic features known are a few arcuate and linear troughs, which probably formed when Mimas was shattered by the Herschel impact.
Enceladus is one of the smallest of Saturn's moons that is spherical in shape—only Mimas is smaller—yet is the only small Saturnian moon that is currently endogenously active, and the smallest known body in the Solar System that is geologically active today. Its surface is morphologically diverse; it includes ancient heavily cratered terrain as well as younger smooth areas with few impact craters. Many plains on Enceladus are fractured and intersected by systems of lineaments. The area around its south pole was found by Cassini to be unusually warm and cut by a system of fractures about 130 km long called "tiger stripes", some of which emit jets of water vapor and dust. These jets form a large plume off its south pole, which replenishes Saturn's E ring and serves as the main source of ions in the magnetosphere of Saturn. The gas and dust are released with a rate of more than 100 kg/s. Enceladus may have liquid water underneath the south-polar surface. The source of the energy for this cryovolcanism is thought to be a 2:1 mean-motion resonance with Dione. The pure ice on the surface makes Enceladus one of the brightest known objects in the Solar System—its geometrical albedo is more than 140%.
Tethys is the third largest of Saturn's inner moons. Its most prominent features are a large (400 km diameter) impact crater named Odysseus on its leading hemisphere and a vast canyon system named Ithaca Chasma extending at least 270° around Tethys. The Ithaca Chasma is concentric with Odysseus, and these two features may be related. Tethys appears to have no current geological activity. A heavily cratered hilly terrain occupies the majority of its surface, while a smaller and smoother plains region lies on the hemisphere opposite to that of Odysseus. The plains contain fewer craters and are apparently younger. A sharp boundary separates them from the cratered terrain. There is also a system of extensional troughs radiating away from Odysseus. The density of Tethys (0.985 g/cm3) is less than that of water, indicating that it is made mainly of water ice with only a small fraction of rock.
Dione is the second-largest inner moon of Saturn. It has a higher density than the geologically dead Rhea, the largest inner moon, but lower than that of active Enceladus. While the majority of Dione's surface is heavily cratered old terrain, this moon is also covered with an extensive network of troughs and lineaments, indicating that in the past it had global tectonic activity. The troughs and lineaments are especially prominent on the trailing hemisphere, where several intersecting sets of fractures form what is called "wispy terrain". The cratered plains have a few large impact craters reaching 250 km in diameter. Smooth plains with low impact-crater counts are also present on a small fraction of its surface. They were probably tectonically resurfaced relatively late in the geological history of Dione. At two locations within smooth plains strange landforms (depressions) resembling oblong impact craters have been identified, both of which lie at the centers of radiating networks of cracks and troughs; these features may be cryovolcanic in origin. Dione may be geologically active even now, although on a scale much smaller than the cryovolcanism of Enceladus. This follows from Cassini magnetic measurements that show Dione is a net source of plasma in the magnetosphere of Saturn, much like Enceladus.
Alkyonides
Three small moons orbit between Mimas and Enceladus: Methone, Anthe, and Pallene. Named after the Alkyonides of Greek mythology, they are some of the smallest moons in the Saturn system. Anthe and Methone have very faint ring arcs along their orbits, whereas Pallene has a faint complete ring. Of these three moons, only Methone has been photographed at close range, showing it to be egg-shaped with very few or no craters.
Trojan
Trojan moons are a unique feature only known from the Saturnian system. A trojan body orbits at either the leading L4 or trailing L5 Lagrange point of a much larger object, such as a large moon or planet. Tethys has two trojan moons, Telesto (leading) and Calypso (trailing), and Dione also has two, Helene (leading) and Polydeuces (trailing). Helene is by far the largest trojan moon, while Polydeuces is the smallest and has the most chaotic orbit. These moons are coated with dusty material that has smoothed out their surfaces.
Outer large
These moons all orbit beyond the E Ring. They are:
Rhea is the second-largest of Saturn's moons. It is even slightly larger than Oberon, the second-largest moon of Uranus. In 2005, Cassini detected a depletion of electrons in the plasma wake of Rhea, which forms when the co-rotating plasma of Saturn's magnetosphere is absorbed by the moon. The depletion was hypothesized to be caused by the presence of dust-sized particles concentrated in a few faint equatorial rings. Such a ring system would make Rhea the only moon in the Solar System known to have rings. Subsequent targeted observations of the putative ring plane from several angles by Cassini's narrow-angle camera turned up no evidence of the expected ring material, leaving the origin of the plasma observations unresolved. Otherwise, Rhea has a rather typical heavily cratered surface, with the exceptions of a few large Dione-type fractures (wispy terrain) on the trailing hemisphere and a very faint "line" of material at the equator that may have been deposited by material deorbiting from present or former rings. Rhea also has two very large impact basins on its anti-Saturnian hemisphere, which are about 400 and 500 km across. The first, Tirawa, is roughly comparable to the Odysseus basin on Tethys. There is also a 48 km-diameter impact crater called Inktomi at 112°W that is prominent because of an extended system of bright rays, which may be one of the youngest craters on the inner moons of Saturn. No evidence of any endogenic activity has been discovered on the surface of Rhea.
Titan, at 5,149 km diameter, is the second largest moon in the Solar System and Saturn's largest. Out of all the large moons, Titan is the only one with a dense (surface pressure of 1.5 atm), cold atmosphere, primarily made of nitrogen with a small fraction of methane. The dense atmosphere frequently produces bright white convective clouds, especially over the south pole region. On 6 June 2013, scientists at the IAA-CSIC reported the detection of polycyclic aromatic hydrocarbons in the upper atmosphere of Titan. On 23 June 2014, NASA claimed to have strong evidence that nitrogen in the atmosphere of Titan came from materials in the Oort cloud, associated with comets, and not from the materials that formed Saturn in earlier times. The surface of Titan, which is difficult to observe due to persistent atmospheric haze, shows only a few impact craters and is probably very young. It contains a pattern of light and dark regions, flow channels and possibly cryovolcanoes. Some dark regions are covered by longitudinal dune fields shaped by tidal winds, where sand is made of frozen water or hydrocarbons. Titan is the only body in the Solar System besides Earth with bodies of liquid on its surface, in the form of methane–ethane lakes in Titan's north and south polar regions. The largest lake, Kraken Mare, is larger than the Caspian Sea. Like Europa and Ganymede, it is believed that Titan has a subsurface ocean made of water mixed with ammonia, which can erupt to the surface of the moon and lead to cryovolcanism. On 2 July 2014, NASA reported the ocean inside Titan may be "as salty as the Earth's Dead Sea".
Hyperion is Titan's nearest neighbor in the Saturn system. The two moons are locked in a 4:3 mean-motion resonance with each other, meaning that while Titan makes four revolutions around Saturn, Hyperion makes exactly three. With an average diameter of about 270 km, Hyperion is smaller and lighter than Mimas. It has an extremely irregular shape, and a very odd, tan-colored icy surface resembling a sponge, though its interior may be partially porous as well. The average density of about 0.55 g/cm3 indicates that the porosity exceeds 40% even assuming it has a purely icy composition. The surface of Hyperion is covered with numerous impact craters—those with diameters 2–10 km are especially abundant. It is the only moon besides the small moons of Pluto known to have a chaotic rotation, which means Hyperion has no well-defined poles or equator. While on short timescales the satellite approximately rotates around its long axis at a rate of 72–75° per day, on longer timescales its axis of rotation (spin vector) wanders chaotically across the sky. This makes the rotational behavior of Hyperion essentially unpredictable.
Iapetus is the third-largest of Saturn's moons. Orbiting the planet at km, it is by far the most distant of Saturn's large moons, and also has the largest orbital inclination, at 15.47°. Iapetus has long been known for its unusual two-toned surface; its leading hemisphere is pitch-black and its trailing hemisphere is almost as bright as fresh snow. Cassini images showed that the dark material is confined to a large near-equatorial area on the leading hemisphere called Cassini Regio, which extends approximately from 40°N to 40°S. The pole regions of Iapetus are as bright as its trailing hemisphere. Cassini also discovered a 20 km tall equatorial ridge, which spans nearly the moon's entire equator. Otherwise both dark and bright surfaces of Iapetus are old and heavily cratered. The images revealed at least four large impact basins with diameters from 380 to 550 km and numerous smaller impact craters. No evidence of any endogenic activity has been discovered. A clue to the origin of the dark material covering part of Iapetus's starkly dichromatic surface may have been found in 2009, when NASA's Spitzer Space Telescope discovered a vast, nearly invisible disk around Saturn, just inside the orbit of the moon Phoebe – the Phoebe ring. Scientists believe that the disk originates from dust and ice particles kicked up by impacts on Phoebe. Because the disk particles, like Phoebe itself, orbit in the opposite direction to Iapetus, Iapetus collides with them as they drift in the direction of Saturn, darkening its leading hemisphere slightly. Once a difference in albedo, and hence in average temperature, was established between different regions of Iapetus, a thermal runaway process of water ice sublimation from warmer regions and deposition of water vapor onto colder regions ensued. Iapetus's present two-toned appearance results from the contrast between the bright, primarily ice-coated areas and regions of dark lag, the residue left behind after the loss of surface ice.
Irregular
Irregular moons are small satellites with large-radii, inclined, and frequently retrograde orbits, believed to have been acquired by the parent planet through a capture process. They often occur as collisional families or groups. The precise sizes and albedos of the irregular moons are not known for sure because the moons are too small to be resolved by a telescope, although the albedo is usually assumed to be quite low—around 6% (albedo of Phoebe) or less. The irregulars generally have featureless visible and near infrared spectra dominated by water absorption bands. They are neutral or moderately red in color—similar to C-type, P-type, or D-type asteroids, though they are much less red than Kuiper belt objects.
Inuit
The Inuit group includes thirteen prograde outer moons that are similar enough in their distances from the planet (190–300 radii of Saturn), their orbital inclinations (45–50°) and their colors that they can be considered a group. The Inuit group is further split into three distinct subgroups at different semi-major axes, and are named after their respective largest members. Ordered by increasing semi-major axis, these subgroups are the Kiviuq group, the Paaliaq group, and the Siarnaq group. The Kiviuq group includes five members: Kiviuq, Ijiraq, S/2005 S 4, S/2019 S 1, and S/2020 S 1. The Siarnaq group includes seven members: Siarnaq, Tarqeq, S/2004 S 31, S/2019 S 14, S/2020 S 3, S/2019 S 6, and S/2020 S 5. In contrast to the Kiviuq and Siarnaq subgroups, the Paaliaq subgroup does not contain any other known members besides Paaliaq itself. Of the entire Inuit group, Siarnaq is the largest member with an estimated size of about 39 km.
Gallic
The Gallic group includes seven prograde outer moons that are similar enough in their distance from the planet (200–300 radii of Saturn), their orbital inclination (35–40°) and their color that they can be considered a group. They are Albiorix, Bebhionn, Erriapus, Tarvos, Saturn LX, S/2007 S 8, and S/2020 S 4. The largest of these moons is Albiorix with an estimated diameter of about 29 km.
Norse
All 100 retrograde outer moons of Saturn are broadly classified into the Norse group. They are Aegir, Angrboda, Alvaldi, Beli, Bergelmir, Bestla, Eggther, Farbauti, Fenrir, Fornjot, Geirrod, Gerd, Greip, Gridr, Gunnlod, Hati, Hyrrokkin, Jarnsaxa, Kari, Loge, Mundilfari, Narvi, Phoebe, Skathi, Skoll, Skrymir, Surtur, Suttungr, Thiazzi, Thrymr, Ymir, and 69 unnamed satellites. After Phoebe, Ymir is the largest of the known retrograde irregular moons, with an estimated diameter of only 22 km.
Phoebe, at about 213 km in diameter, is by far the largest of Saturn's irregular satellites. It has a retrograde orbit and rotates on its axis every 9.3 hours. Phoebe was the first moon of Saturn to be studied in detail by Cassini, in June 2004; during this encounter Cassini was able to map nearly 90% of the moon's surface. Phoebe has a nearly spherical shape and a relatively high density of about 1.6 g/cm3. Cassini images revealed a dark surface scarred by numerous impacts—there are about 130 craters with diameters exceeding 10 km. Such impacts may have ejected fragments of Phoebe into orbit around Saturn—two of these may be S/2006 S 20 and S/2006 S 9, whose orbits are similar to Phoebe's. Spectroscopic measurement showed that the surface is made of water ice, carbon dioxide, phyllosilicates, organics and possibly iron-bearing minerals. Phoebe is believed to be a captured centaur that originated in the Kuiper belt. It also serves as a source of material for the largest known ring of Saturn, which darkens the leading hemisphere of Iapetus (see above).
Outlier prograde satellites
Two prograde moons of Saturn do not definitively belong to either the Inuit or Gallic groups. S/2004 S 24 and S/2006 S 12 have orbital inclinations similar to those of the Gallic group, but much more distant orbits, with semi-major axes of ~400 Saturn radii and ~340 Saturn radii, respectively.
List
Confirmed
The Saturnian moons are listed here by orbital period (or semi-major axis), from shortest to longest. Moons massive enough for their surfaces to have collapsed into a spheroid are highlighted in bold and marked with a blue background, while the irregular moons are listed on red, orange, green, and gray backgrounds. The orbits and mean distances of the irregular moons are strongly variable over short timescales due to frequent planetary and solar perturbations, so the orbital elements of irregular moons listed here are averaged over a 5,000-year numerical integration by the Jet Propulsion Laboratory. These may sometimes strongly differ from the osculating orbital elements provided by other sources. Their orbital elements are all based on a reference epoch of 1 January 2000.
Unconfirmed
These F Ring moonlets listed in the following table (observed by Cassini) have not been confirmed as solid bodies. It is not yet clear if these are real satellites or merely persistent clumps within the F Ring.
Spurious
Two moons were claimed to be discovered by different astronomers but never seen again. Both moons were said to orbit between Titan and Hyperion.
Chiron, which was supposedly sighted by Hermann Goldschmidt in 1861, but never observed by anyone else.
Themis was allegedly discovered in 1905 by astronomer William Pickering, but never seen again. Nevertheless, it was included in numerous almanacs and astronomy books until the 1960s.
Hypothetical
In 2022, scientists of the Massachusetts Institute of Technology proposed the hypothetical former moon Chrysalis, using data from the Cassini–Huygens mission. Chrysalis would have orbited between Titan and Iapetus, but its orbit would have gradually become more eccentric until it was torn apart by Saturn. 99% of its mass would have been absorbed by Saturn, while the remaining 1% would have formed Saturn's rings.
Temporary
As with Jupiter, asteroids and comets infrequently make close approaches to Saturn, and even more infrequently are captured into orbit around the planet. The comet P/2020 F1 (Leonard) is calculated to have made a close approach to Saturn on 8 May 1936, passing closer to the planet than the orbit of Titan, on an orbit of very low eccentricity. The comet may have been orbiting Saturn prior to this as a temporary satellite, but the difficulty of modelling non-gravitational forces makes it uncertain whether it was indeed a temporary satellite.
Other comets and asteroids may have temporarily orbited Saturn at some point, but none are presently known to do so.
Formation
It is thought that the Saturnian system of Titan, mid-sized moons, and rings developed from a set-up closer to the Galilean moons of Jupiter, though the details are unclear. It has been proposed either that a second Titan-sized moon broke up, producing the rings and inner mid-sized moons, or that two large moons fused to form Titan, with the collision scattering icy debris that formed the mid-sized moons. On 23 June 2014, NASA claimed to have strong evidence that nitrogen in the atmosphere of Titan came from materials in the Oort cloud, associated with comets, and not from the materials that formed Saturn in earlier times. Studies based on Enceladus's tidal-based geologic activity and the lack of evidence of extensive past resonances in Tethys, Dione, and Rhea's orbits suggest that the moons up to and including Rhea may be only 100 million years old.
See also
List of natural satellites
Notes
References
External links
Scott S. Sheppard: Saturn Moons
Rotate and Spin Maps of 7 Moons at The New York Times
Planetary Society blog post (2017-05-17) by Emily Lakdawalla with images giving comparative sizes of the moons
Tilmann Denk: Outer Moons of Saturn
Lists of moons
Solar System | Moons of Saturn | [
"Astronomy"
] | 7,689 | [
"Outer space",
"Solar System"
] |
575,754 | https://en.wikipedia.org/wiki/Pierre-Gilles%20de%20Gennes | Pierre-Gilles de Gennes (24 October 1932 – 18 May 2007) was a French physicist and the Nobel Prize laureate in physics in 1991.
Education and early life
He was born in Paris, France, and was home-schooled to the age of 12. By the age of 13, he had adopted adult reading habits and was visiting museums.
Later, de Gennes studied at the École Normale Supérieure. After leaving the École in 1955, he became a research engineer at the Saclay center of the Commissariat à l'Énergie Atomique, working mainly on neutron scattering and magnetism, with advice from Anatole Abragam and Jacques Friedel. He defended his Ph.D. in 1957 at the University of Paris.
Career and research
In 1959, he was a postdoctoral research visitor with Charles Kittel at the University of California, Berkeley, and then spent 27 months in the French Navy. In 1961, he was assistant professor in Orsay and soon started the Orsay group on superconductors. In 1968, he switched to studying liquid crystals.
In 1971, he became professor at the Collège de France, and participated in STRASACOL (a joint action of Strasbourg, Saclay and Collège de France) on polymer physics. From 1980 on, he became interested in interfacial problems: the dynamics of wetting and adhesion.
He worked on granular materials and on the nature of memory objects in the brain.
Awards and honours
Awarded the Fernand Holweck Medal and Prize in 1968.
He was awarded the Harvey Prize in 1988, and the Lorentz Medal and the Wolf Prize in 1990. In 1991, he received the Nobel Prize in Physics. He was then director of the École Supérieure de Physique et de Chimie Industrielles de la Ville de Paris (ESPCI), a post he held from 1976 until his retirement in 2002.
De Gennes also received the F.A. Cotton Medal for Excellence in Chemical Research of the American Chemical Society in 1997; the Holweck Prize from the joint French and British Physical Society; the Ampere Prize, French Academy of Science; the gold medal from the French CNRS; the Matteucci Medal, Italian Academy; the Harvey Prize, Israel; and polymer awards from both APS and ACS.
He was awarded the above-mentioned Nobel Prize for discovering that "methods developed for studying order phenomena in simple systems can be generalized to more complex forms of matter, in particular to liquid crystals and polymers".
The Royal Society of Chemistry awards the De Gennes Prize biennially, in his honour. He was elected a Foreign Member of the Royal Society (ForMemRS) in 1984. He was awarded A. Cemal Eringen Medal in 1998.
Personal life
He married Anne-Marie Rouet (born in 1933) in June 1954. They remained married until his death and had three children together: Christian (born 9 December 1954), Dominique (born 6 May 1956) and Marie-Christine (born 11 January 1958).
He also has four children with physicist Françoise Brochard-Wyart (born in 1944) who was one of his former doctoral students and then colleague and co-author. The children are: Claire Wyart (born 16 February 1977), Matthieu Wyart (born 24 May 1978), Olivier Wyart (born 3 August 1984) and Marc de Gennes (born 16 January 1991).
Professors John Goodby and George Gray noted in an obituary: "Pierre was a man of great charm and humour, capable of making others believe they, too, were wise. We will remember him as an inspirational lecturer and teacher, an authority on Shakespeare, an expert skier who attended conference lectures appropriately attired with skis to hand, and, robed in red, at the Bordeaux liquid crystal conference in 1978, took great delight in being inaugurated as a Vignoble de St Émilion."
In 2003 he was one of 22 Nobel Laureates who signed the Humanist Manifesto.
On 22 May 2007, his death was made public as official messages and tributes poured in.
Publications
Books
References
External links
including the Nobel Lecture, 9 December 1991: Soft Matter
1932 births
2007 deaths
École Normale Supérieure alumni
University of California, Berkeley staff
Experimental physicists
Academic staff of the Collège de France
Foreign associates of the National Academy of Sciences
Foreign members of the Russian Academy of Sciences
Foreign members of the Royal Society
French physicists
Members of the Brazilian Academy of Sciences
Members of the French Academy of Sciences
Nobel laureates in Physics
French Nobel laureates
Lorentz Medal winners
Wolf Prize in Physics laureates
Academic staff of ESPCI Paris
Liquid crystals
Lycée Saint-Louis alumni
Fellows of the Australian Academy of Science
Scientists from Paris
Fellows of the American Physical Society
Paris-Saclay University people
Paris-Saclay University alumni
Recipients of the Matteucci Medal
French materials scientists
Presidents of the Société Française de Physique | Pierre-Gilles de Gennes | [
"Physics"
] | 1,011 | [
"Experimental physics",
"Experimental physicists"
] |
575,776 | https://en.wikipedia.org/wiki/Lethal%20dose | In toxicology, the lethal dose (LD) is an indication of the lethal toxicity of a given substance or type of radiation. Because resistance varies from one individual to another, the "lethal dose" represents a dose (usually recorded as dose per kilogram of subject body weight) at which a given percentage of subjects will die. The lethal concentration is a lethal dose measurement used for gases or particulates. The LD may be based on the standard person concept, a theoretical individual that has perfectly "normal" characteristics, and thus not apply to all sub-populations.
Median lethal dose (LD50)
The median lethal dose, LD50 (abbreviation for "lethal dose, 50%"), LC50 (lethal concentration, 50%) or LCt50 (lethal concentration and time) of a toxin, radiation, or pathogen is the dose required to kill half the members of a tested population after a specified test duration. LD50 figures are frequently used as a general indicator of a substance's acute toxicity. A lower LD50 is indicative of increased toxicity.
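As a rough illustration of how an LD50 can be read off dose-response data, the following Python sketch interpolates hypothetical dose-mortality measurements on a log-dose scale. Real studies fit probit or logistic models rather than interpolating linearly, and all numbers here are invented:

```python
import numpy as np

# Hypothetical dose-mortality data: dose in mg/kg, observed fraction killed.
doses = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
mortality = np.array([0.05, 0.20, 0.45, 0.80, 0.95])

# Interpolate on a log-dose scale to find the dose at 50% mortality.
log_ld50 = np.interp(0.5, mortality, np.log(doses))
print(f"Estimated LD50 = {np.exp(log_ld50):.1f} mg/kg")
```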
History
The test was created by J.W. Trevan in 1927. The term "semilethal dose" is occasionally used with the same meaning, in particular in translations from non-English-language texts, but can also refer to a sublethal dose; because of this ambiguity, it is usually avoided. LD50 is usually determined by tests on animals such as laboratory mice. In 2011 the US Food and Drug Administration approved alternative methods to LD50 for testing the cosmetic drug Botox without animal tests.
Units and measurement
The LD50 is usually expressed as the mass of substance administered per unit mass of test subject, typically as milligrams of substance per kilogram of body mass, but stated as nanograms (suitable for botulinum), micrograms, milligrams, or grams (suitable for paracetamol) per kilogram. Stating it this way allows the relative toxicity of different substances to be compared, and normalizes for the variation in the size of the animals exposed, although toxicity does not always scale simply with body mass.
The choice of 50% lethality as a benchmark avoids the potential for ambiguity of making measurements in the extremes and reduces the amount of testing required. However, this also means that LD50 is not the lethal dose for all subjects; some may be killed by much less, while others survive doses far higher than the LD50. Measures such as "LD1" and "LD99" (dosage required to kill 1% or 99%, respectively, of the test population) are occasionally used for specific purposes.
Lethal dosage often varies depending on the method of administration; for instance, many substances are less toxic when administered orally than when intravenously administered. For this reason, LD50 figures are often qualified with the mode of administration, e.g., "LD50 i.v."
The related quantities LD50/30 or LD50/60 are used to refer to a dose that without treatment will be lethal to 50% of the population within (respectively) 30 or 60 days. These measures are used more commonly with radiation, as survival beyond 60 days usually results in recovery.
Estimation using model organisms
LD values for humans are best estimated by extrapolating results from human cell cultures. One form of measuring LD is to use model organisms, particularly animals like mice or rats, converting to dosage per kilogram of biomass, and extrapolating to human norms. The degree of error from animal-extrapolated LD values is large. The biology of test animals differs in important aspects to that of humans. For instance, mouse tissue is approximately fifty times less responsive than human tissue to the venom of the Sydney funnel-web spider. The square–cube law also complicates the scaling relationships involved. Researchers are shifting away from animal-based LD measurements in some instances. The U.S. Food and Drug Administration has begun to approve more non-animal methods in response to animal welfare concerns.
Median infective dose
The median infective dose (ID50) is the number of organisms received by a person or test animal qualified by the route of administration (e.g., 1,200 org/man per oral). Because of the difficulties in counting actual organisms in a dose, infective doses may be expressed in terms of biological assay, such as the number of LD50's to some test animal. In biological warfare, infective dosage is the number of infective doses per minute per cubic meter (e.g., ICt50 is 100 median doses - min/m3).
Lowest lethal dose
The lowest lethal dose (LDLo) is the least amount of drug that can produce death in a given animal species under controlled conditions. The dosage is given per unit of bodyweight (typically stated in milligrams per kilogram) of a substance known to have resulted in fatality in a particular species. When quoting an LDLo, the particular species and method of administration (e.g. ingested, inhaled, intravenous) are typically stated.
Median lethal concentration
For gases and aerosols, lethal concentration (given in mg/m3 or ppm, parts per million) is the analogous concept, although this also depends on the duration of exposure, which has to be included in the definition. The term incipient lethal level is used to describe an LC50 value that is independent of time.
A comparable measurement is LCt50, which relates to lethal dosage from exposure, where C is concentration and t is time. It is often expressed in terms of mg-min/m3. ICt50 is the dose that will cause incapacitation rather than death. These measures are commonly used to indicate the comparative efficacy of chemical warfare agents, and dosages are typically qualified by rates of breathing (e.g., resting = 10 L/min) for inhalation, or degree of clothing for skin penetration. The concept of Ct was first proposed by Fritz Haber and is sometimes referred to as Haber's law, which assumes that exposure to 1 minute of 100 mg/m3 is equivalent to 10 minutes of 10 mg/m3 (1 × 100 = 100, as does 10 × 10 = 100).
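A minimal sketch of the Ct product behind Haber's law, comparing two exposures by their concentration-time dose; the values mirror the example in the text:

```python
def ct_dose(concentration_mg_m3: float, minutes: float) -> float:
    """Return the Haber Ct product (mg·min/m3) for a constant exposure."""
    return concentration_mg_m3 * minutes

# Under Haber's law these two exposures are assumed equally harmful.
print(ct_dose(100, 1))   # 100 mg·min/m3
print(ct_dose(10, 10))   # 100 mg·min/m3
```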
Some chemicals, such as hydrogen cyanide, are rapidly detoxified by the human body, and do not follow Haber's Law. So, in these cases, the lethal concentration may be given simply as LC50 and qualified by a duration of exposure (e.g., 10 minutes). The material safety data sheets for toxic substances frequently use this form of the term even if the substance does follow Haber's Law.
Lowest lethal concentration
The LCLo is the lowest concentration of a chemical, given over a period of time, that results in the fatality of an individual animal. LCLo is typically for an acute (<24 hour) exposure. It is related to the LC50, the median lethal concentration. The LCLo is used for gases and aerosolized material.
Limitations
As a measure of toxicity, lethal dose is somewhat unreliable and results may vary greatly between testing facilities due to factors such as the genetic characteristics of the sample population, animal species tested, environmental factors and mode of administration.
There can be wide variability between species as well; what is relatively safe for rats may very well be extremely toxic for humans (cf. paracetamol toxicity), and vice versa. For example, chocolate, comparatively harmless to humans, is known to be toxic to many animals. When used to test venom from venomous creatures, such as snakes, LD50 results may be misleading due to the physiological differences between mice, rats, and humans. Many venomous snakes are specialized predators of mice, and their venom may be adapted specifically to incapacitate mice; and mongooses may be exceptionally resistant. While most mammals have a very similar physiology, LD50 results may or may not have equal bearing upon every mammal species, including humans.
Animal rights concerns
Animal-rights and animal-welfare groups, such as Animal Rights International, have campaigned against LD50 testing on animals in particular as, in the case of some substances, causing the animals to die slow, painful deaths. Several countries, including the UK, have taken steps to ban the oral LD50, and the Organisation for Economic Co-operation and Development (OECD) abolished the requirement for the oral test in 2001.
See also
References
Causes of death
Concentration indicators
Medical emergencies
Toxicology | Lethal dose | [
"Environmental_science"
] | 1,806 | [
"Toxicology"
] |
575,893 | https://en.wikipedia.org/wiki/Spacecraft%20Event%20Time | Spacecraft Event Time (SCET) is the spacecraft-local time for events that happen at the spacecraft. SCET is used for command programs that control the timing of spacecraft operations and to identify when specific events occur on the spacecraft relative to Earth time.
SCET versus Earth time
Since signals between the spacecraft and Earth are limited to the speed of light, there is a delay between the time an event happens on the spacecraft (such as the transmission of data taken from an instrument reading) and the time that a signal reporting the event reaches Earth. Similarly, there is a delay between when instructions are sent from Earth and when the spacecraft receives the instructions. The length of delay is related to the distance between the sending and receiving points. Failure to take this delay into account could result in inaccurate data or mistakes in spacecraft control.
Calculating SCET
Determining the Spacecraft Event Time involves taking the time at Earth and adding or subtracting the signal travel time, depending on whether the signal is being sent to or received from the spacecraft. For events transmitted from the spacecraft to Earth, the SCET of an event on the spacecraft can be defined as equal to the ERT (Earth-Received Time) minus the OWLT (One-Way Light Time). For events transmitted from Earth to the spacecraft, the calculation is TRM (transmission time) plus OWLT. For example, if a signal were received on Earth at exactly 11:00 UTC from a spacecraft showing that it had just completed a maneuvering thrust, but the spacecraft was four light-hours away from Earth (the distance of the New Horizons spacecraft at one point as it approaches Pluto), the SCET time of the thrust maneuver would have been four hours earlier, at 07:00.
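The relation can be captured in a few lines. The sketch below assumes times are handled as timezone-aware UTC datetimes and that the one-way light time has already been computed from the spacecraft's distance; the function names and the date are illustrative, not a real mission API:

```python
from datetime import datetime, timedelta, timezone

def scet_from_ert(ert: datetime, owlt: timedelta) -> datetime:
    """Spacecraft Event Time for a signal received on Earth: SCET = ERT - OWLT."""
    return ert - owlt

def scet_from_trm(trm: datetime, owlt: timedelta) -> datetime:
    """Spacecraft Event Time for a command sent from Earth: SCET = TRM + OWLT."""
    return trm + owlt

# Example from the text: signal received at 11:00 UTC, spacecraft 4 light-hours away.
ert = datetime(2015, 7, 1, 11, 0, tzinfo=timezone.utc)
print(scet_from_ert(ert, timedelta(hours=4)))  # 07:00 UTC the same day
```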
Spacecraft Event Time in UTC is also known as Orbiter UTC, and Earth-received time as Ground UTC.
Spacecraft control
Since it takes time for a radio transmission to reach a spacecraft from Earth, the usual operation of a spacecraft is controlled with an uploaded command script containing SCET markers to ensure a certain timeline of events. Because of the delay between the sending of instructions from Earth and their receipt and execution by the spacecraft, real-time commanding of robotic spacecraft is done rarely: usually only in response to an emergency event, when changes in spacecraft operations must be made as soon as possible. For example, a spacecraft could be instructed to go into safe mode to protect it during a coronal mass ejection (CME) from the Sun.
Presentation format
Spacecraft event times stored in relation to instrument data from spacecraft events (e.g. images) are generally presented in ISO 8601 using one of the following formats:
CCYY-MM-DDTHH:MM:SS.sssZ (preferred format)
CCYY-DDDTHH:MM:SS.sssZ
However, the trailing Z (which indicates that the time is given in UTC) is often assumed/omitted.
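As an illustration, a UTC timestamp can be rendered in the preferred format (millisecond precision with a trailing Z) as follows; this is a sketch, truncating microseconds to milliseconds, and the timestamp itself is arbitrary:

```python
from datetime import datetime, timezone

t = datetime(2004, 6, 11, 19, 33, 37, 125000, tzinfo=timezone.utc)
# CCYY-MM-DDTHH:MM:SS.sssZ; strftime has no millisecond code, so trim microseconds.
print(t.strftime("%Y-%m-%dT%H:%M:%S.") + f"{t.microsecond // 1000:03d}Z")
# -> 2004-06-11T19:33:37.125Z
```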
Notes
References
Basics of Space Flight Glossary, JPL/NASA
Data Standards, PDS/NASA
Spaceflight concepts
Time scales | Spacecraft Event Time | [
"Physics",
"Astronomy"
] | 615 | [
"Physical quantities",
"Time",
"Astronomical coordinate systems",
"Spacetime",
"Time scales"
] |
575,912 | https://en.wikipedia.org/wiki/Research%20chemical | Research chemicals are chemical substances scientists use for medical and scientific research purposes. One characteristic of a research chemical is that it is for laboratory research use only; a research chemical is not intended for human or veterinary use. This distinction is required on the labels of research chemicals and exempts them from regulation under parts 100-740 in Title 21 of the Code of Federal Regulations (21CFR).
Background
Agricultural research chemicals
Research agrochemicals are created and evaluated to select effective substances for commercial off-the-shelf end-user products. Many research agrochemicals are never publicly marketed. Agricultural research chemicals often use sequential code names.
References
Drug culture
Drug discovery
Medicinal chemistry | Research chemical | [
"Chemistry",
"Biology"
] | 136 | [
"Life sciences industry",
"Drug discovery",
"nan",
"Medicinal chemistry",
"Biochemistry"
] |
575,920 | https://en.wikipedia.org/wiki/Oncomouse | The OncoMouse or Harvard mouse is a type of laboratory mouse (Mus musculus) that has been genetically modified using modifications designed by Philip Leder and Timothy A Stewart of Harvard University to carry a specific gene called an activated oncogene (v-Ha-ras under the control of the mouse mammary tumor virus promoter). The activated oncogene significantly increases the mouse's susceptibility to cancer, and thus makes the mouse a suitable model for cancer research.
OncoMouse was not the first transgenic mouse to be developed for use in cancer research. Ralph L. Brinster and Richard Palmiter had developed such mice previously. However, OncoMouse was the first mammal to be patented. Because DuPont had funded Philip Leder's research, Harvard University agreed to give DuPont exclusive rights to any inventions commercialized as a result of the funding. Patent applications on the OncoMouse were filed in the mid-1980s in numerous countries, such as in the United States, in Canada, in Europe through the European Patent Office (EPO) and in Japan. Initially the rights to the OncoMouse invention were owned by DuPont. However, in 2011 the USPTO decided that the final patent actually expired in 2005, which meant that the OncoMouse became free for use by other parties (although the name is not, as "OncoMouse" is a registered trademark).
The patenting of OncoMouse had a significant effect on mouse geneticists, who had previously shared their information and mice from their colonies openly. Once a strain of mice had been first described in published research, mice were stored and acquired through Jackson Laboratory, a nonprofit research institute. The patenting of OncoMouse, and the breadth of the claims made in those patents, were considered to be unreasonable by many of their contemporaries. More broadly, the patenting of OncoMouse was a first step in shifting academic research away from a culture of open and free (or very inexpensive) shared resources towards a commercial culture of expensive proprietary purchase and licensing requirements. This shift was felt far beyond the mouse genetics community. Harvard later said that it regretted the handling of the OncoMouse patents.
Patent procedures
Canada
In Canada, the Supreme Court in 2002 rejected the patent in Harvard College v. Canada (Commissioner of Patents), overturning a Federal Court of Appeal verdict which ruled in favor of the patent. However, on 7 October 2003, Canadian patent 1,341,442 was granted to Harvard College. The patent was amended to omit the "composition of matter" claims on the transgenic mice. The Supreme Court had rejected the entire patent application on the basis of these claims, but Canadian patent law allowed the amended claims to grant under rules that predated the General Agreement on Tariffs and Trade, and the patent was valid until 2020.
Europe (through the EPO)
European patent application 85304490.7 was filed in June 1985 by "The President and Fellows of Harvard College". It was initially refused in 1989 by an Examining Division of the European Patent Office (EPO), among other things on the grounds that the European Patent Convention (EPC) excludes patentability of animals per se. The decision was appealed, and the Board of Appeal held that animal varieties were excluded from patentability by the EPC (and especially its Article 53(b)), while animals as such were not excluded from patentability. The Examining Division then granted the patent in 1992.
The European patent was then opposed by several third parties, more precisely by 17 opponents, notably on the grounds laid out in Article 53(a) EPC, according to which inventions whose publication or exploitation would be contrary to "ordre public" or morality are excluded from patentability. After oral proceedings took place in November 2001, the patent was maintained in amended form. This decision was then appealed, and the appeal decision was taken on July 6, 2004. The case was eventually remitted to the first instance, i.e. the Opposition Division, with the order to maintain the patent in a newly amended form. However, revocation of the patent was eventually published on August 16, 2006, more than 20 years after the filing date (the normal term of a European patent), for failure to pay the fees and to file the translations of the amended claims.
United States
In 1988, the United States Patent and Trademark Office (USPTO) granted a patent (filed Jun 22, 1984, issued Apr 12, 1988, expired April 12, 2005) to Harvard College claiming "a transgenic non-human mammal whose germ cells and somatic cells contain a recombinant activated oncogene sequence introduced into said mammal..." The claim explicitly excluded humans, apparently reflecting moral and legal concerns about patents on human beings, and about modification of the human genome. Remarkably, no US courts were called to decide on the validity of this patent. Two separate patents were issued to Harvard College covering methods for providing a cell culture from a transgenic non-human animal (filed Mar 22, 1988, issued Feb 11, 1992, expired Feb 11, 2009) and testing methods using transgenic mice expressing an oncogene (filed Sep 19, 1991, issued Jul 20, 1999, expired July 20, 2016). Both these patents were found to expire in 2005 by the USPTO due to a terminal disclaimer. DuPont brought suit in the Eastern District of Virginia.
See also
Biobreeding rat
Biological patent
Knockout mouse
Animal testing
References
Further reading
Bioethics
Genetically modified organisms
Patent law
Laboratory mouse strains
Harvard University
Cancer research | Oncomouse | [
"Technology",
"Engineering",
"Biology"
] | 1,149 | [
"Bioethics",
"Genetic engineering",
"Genetically modified organisms",
"Ethics of science and technology"
] |
576,108 | https://en.wikipedia.org/wiki/Parametric%20equation | In mathematics, a parametric equation expresses several quantities, such as the coordinates of a point, as functions of one or several variables called parameters.
In the case of a single parameter, parametric equations are commonly used to express the trajectory of a moving point, in which case, the parameter is often, but not necessarily, time, and the point describes a curve, called a parametric curve. In the case of two parameters, the point describes a surface, called a parametric surface. In all cases, the equations are collectively called a parametric representation, or parametric system, or parameterization (alternatively spelled as parametrisation) of the object.
For example, the equations
form a parametric representation of the unit circle, where is the parameter: A point is on the unit circle if and only if there is a value of such that these two equations generate that point. Sometimes the parametric equations for the individual scalar output variables are combined into a single parametric equation in vectors:
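A minimal numerical check of this parametrization, assuming NumPy is available (the number of sample points is arbitrary):

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 1000)   # parameter values
x, y = np.cos(t), np.sin(t)               # parametric representation

# Every generated point satisfies the implicit equation x^2 + y^2 = 1.
assert np.allclose(x**2 + y**2, 1.0)
```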
Parametric representations are generally nonunique (see the "Examples in two dimensions" section below), so the same quantities may be expressed by a number of different parameterizations.
In addition to curves and surfaces, parametric equations can describe manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.).
Parametric equations are commonly used in kinematics, where the trajectory of an object is represented by equations depending on time as the parameter. Because of this application, a single parameter is often labeled $t$; however, parameters can represent other physical quantities (such as geometric variables) or can be selected arbitrarily for convenience. Parameterizations are non-unique; more than one set of parametric equations can specify the same curve.
Implicitization
Converting a set of parametric equations to a single implicit equation involves eliminating the variable $t$ from the simultaneous equations $x = f(t),\ y = g(t).$ This process is called implicitization. If one of these equations can be solved for $t$, the expression obtained can be substituted into the other equation to obtain an equation involving $x$ and $y$ only: Solving $y = g(t)$ for $t$ to obtain $t = g^{-1}(y)$ and using this in $x = f(t)$ gives the explicit equation $x = f(g^{-1}(y)),$ while more complicated cases will give an implicit equation of the form $h(x, y) = 0.$
If the parametrization is given by rational functions
$x = \frac{p(t)}{r(t)}, \quad y = \frac{q(t)}{r(t)},$
where $p$, $q$, and $r$ are set-wise coprime polynomials, a resultant computation allows one to implicitize. More precisely, the implicit equation is the resultant with respect to $t$ of $x\,r(t) - p(t)$ and $y\,r(t) - q(t)$.
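As a concrete illustration, here is a sketch using SymPy's resultant to implicitize the rational parametrization of the unit circle given later in this article; the result recovers the Cartesian equation up to a constant factor:

```python
from sympy import symbols, resultant, factor

t, x, y = symbols("t x y")

# Rational parametrization of the unit circle: x = (1-t^2)/(1+t^2), y = 2t/(1+t^2).
f = x * (1 + t**2) - (1 - t**2)   # x*r(t) - p(t)
g = y * (1 + t**2) - 2 * t        # y*r(t) - q(t)

# Eliminating t yields 4*(x**2 + y**2 - 1), i.e. x^2 + y^2 = 1 up to a factor.
print(factor(resultant(f, g, t)))
```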
In higher dimensions (either more than two coordinates or more than one parameter), the implicitization of rational parametric equations may be done with Gröbner basis computation.
To take the example of the circle of radius $a$, the parametric equations
$x = a \cos t, \quad y = a \sin t$
can be implicitized in terms of $x$ and $y$ by way of the Pythagorean trigonometric identity. With
$\cos t = \frac{x}{a}$
and
$\sin t = \frac{y}{a},$
we get
$\left(\frac{x}{a}\right)^2 + \left(\frac{y}{a}\right)^2 = \cos^2 t + \sin^2 t = 1,$
and thus
$x^2 + y^2 = a^2,$
which is the standard equation of a circle centered at the origin.
Parametric plane curves
Parabola
The simplest equation for a parabola,
$y = x^2,$
can be (trivially) parameterized by using a free parameter $t$, and setting
$x = t, \quad y = t^2 \quad \text{for } -\infty < t < \infty.$
Explicit equations
More generally, any curve given by an explicit equation
$y = f(x)$
can be (trivially) parameterized by using a free parameter $t$, and setting
$x = t, \quad y = f(t).$
Circle
A more sophisticated example is the following. Consider the unit circle which is described by the ordinary (Cartesian) equation
$x^2 + y^2 = 1.$
This equation can be parameterized as follows:
$(x, y) = (\cos t, \sin t) \quad \text{for } 0 \le t < 2\pi.$
With the Cartesian equation it is easier to check whether a point lies on the circle or not. With the parametric version it is easier to obtain points on a plot.
In some contexts, parametric equations involving only rational functions (that is fractions of two polynomials) are preferred, if they exist. In the case of the circle, such a rational parameterization is
$x = \frac{1 - t^2}{1 + t^2}, \quad y = \frac{2t}{1 + t^2}.$
With this pair of parametric equations, the point $(-1, 0)$ is not represented by a real value of $t$, but by the limit of $x$ and $y$ when $t$ tends to infinity.
Ellipse
An ellipse in canonical position (center at origin, major axis along the $x$-axis) with semi-axes $a$ and $b$ can be represented parametrically as
$x = a\cos t, \quad y = b\sin t.$
An ellipse in general position can be expressed as
$x = X_c + a\cos t\,\cos\varphi - b\sin t\,\sin\varphi,$
$y = Y_c + a\cos t\,\sin\varphi + b\sin t\,\cos\varphi,$
as the parameter $t$ varies from $0$ to $2\pi$. Here $(X_c, Y_c)$ is the center of the ellipse, and $\varphi$ is the angle between the $x$-axis and the major axis of the ellipse.
Both parameterizations may be made rational by using the tangent half-angle formula and setting $\tan\frac{t}{2} = u.$
Lissajous curve
A Lissajous curve is similar to an ellipse, but the $x$ and $y$ sinusoids are not in phase. In canonical position, a Lissajous curve is given by
$x = a\cos(k_x t), \quad y = b\sin(k_y t),$
where $k_x$ and $k_y$ are constants describing the number of lobes of the figure.
Hyperbola
An east-west opening hyperbola can be represented parametrically by
$x = h + a\sec t, \quad y = k + b\tan t,$
or, rationally,
$x = h + a\,\frac{1 + t^2}{1 - t^2}, \quad y = k + b\,\frac{2t}{1 - t^2}.$
A north-south opening hyperbola can be represented parametrically as
$x = h + b\tan t, \quad y = k + a\sec t,$
or, rationally,
$x = h + b\,\frac{2t}{1 - t^2}, \quad y = k + a\,\frac{1 + t^2}{1 - t^2}.$
In all these formulae $(h, k)$ are the center coordinates of the hyperbola, $a$ is the length of the semi-major axis, and $b$ is the length of the semi-minor axis. Note that in the rational forms of these formulae, the points $(h - a, k)$ and $(h, k - a)$, respectively, are not represented by a real value of $t$, but are the limit of $x$ and $y$ as $t$ tends to infinity.
Hypotrochoid
A hypotrochoid is a curve traced by a point attached to a circle of radius $r$ rolling around the inside of a fixed circle of radius $R$, where the point is at a distance $d$ from the center of the interior circle.
The parametric equations for the hypotrochoids are:
$x(\theta) = (R - r)\cos\theta + d\cos\left(\frac{R - r}{r}\,\theta\right),$
$y(\theta) = (R - r)\sin\theta - d\sin\left(\frac{R - r}{r}\,\theta\right).$
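A sketch generating points on a hypotrochoid directly from these equations; the radii and pen distance chosen here are arbitrary:

```python
import numpy as np

R, r, d = 5.0, 3.0, 5.0          # fixed circle, rolling circle, pen distance
theta = np.linspace(0.0, 6.0 * np.pi, 2000)

x = (R - r) * np.cos(theta) + d * np.cos((R - r) / r * theta)
y = (R - r) * np.sin(theta) - d * np.sin((R - r) / r * theta)
# (x, y) now trace the hypotrochoid and can be plotted, e.g. with matplotlib.
```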
Parametric space curves
Helix
Parametric equations are convenient for describing curves in higher-dimensional spaces. For example:
$x(t) = a\cos(t), \quad y(t) = a\sin(t), \quad z(t) = bt$
describes a three-dimensional curve, the helix, with a radius of $a$ and rising by $2\pi b$ units per turn. The equations are identical in the plane to those for a circle.
Such expressions as the one above are commonly written as
$\mathbf{r}(t) = (x(t), y(t), z(t)) = (a\cos(t), a\sin(t), bt),$
where $\mathbf{r}$ is a three-dimensional vector.
Parametric surfaces
A torus with major radius $R$ and minor radius $r$ may be defined parametrically as
$x = \cos(t)\left(R + r\cos(u)\right),$
$y = \sin(t)\left(R + r\cos(u)\right),$
$z = r\sin(u),$
where the two parameters $t$ and $u$ both vary between $0$ and $2\pi$.
As $u$ varies from $0$ to $2\pi$ the point on the surface moves about a short circle passing through the hole in the torus. As $t$ varies from $0$ to $2\pi$ the point on the surface moves about a long circle around the hole in the torus.
Straight line
The parametric equation of the line through the point $(x_0, y_0)$ and parallel to the vector $(a, b)$ is
$x = x_0 + a t, \quad y = y_0 + b t.$
Applications
Kinematics
In kinematics, objects' paths through space are commonly described as parametric curves, with each spatial coordinate depending explicitly on an independent parameter (usually time). Used in this way, the set of parametric equations for the object's coordinates collectively constitute a vector-valued function for position. Such parametric curves can then be integrated and differentiated termwise. Thus, if a particle's position is described parametrically as
$\mathbf{r}(t) = (x(t), y(t), z(t)),$
then its velocity can be found as
$\mathbf{v}(t) = \mathbf{r}'(t) = (x'(t), y'(t), z'(t)),$
and its acceleration as
$\mathbf{a}(t) = \mathbf{r}''(t) = (x''(t), y''(t), z''(t)).$
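Termwise differentiation of a parametric position is mechanical; here is a sketch applying SymPy to the helix from the earlier example:

```python
from sympy import symbols, cos, sin, Matrix

t, a, b = symbols("t a b")
r = Matrix([a * cos(t), a * sin(t), b * t])  # position

v = r.diff(t)       # velocity:     (-a*sin(t), a*cos(t), b)
acc = v.diff(t)     # acceleration: (-a*cos(t), -a*sin(t), 0)
print(v.T, acc.T)
```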
Computer-aided design
Another important use of parametric equations is in the field of computer-aided design (CAD). For example, consider the following three representations, all of which are commonly used to describe planar curves: the explicit representation $y = f(x)$; the implicit representation $h(x, y) = 0$; and the parametric representation $x = x(t),\ y = y(t)$.
Each representation has advantages and drawbacks for CAD applications.
The explicit representation may be very complicated, or even may not exist. Moreover, it does not behave well under geometric transformations, and in particular under rotations. On the other hand, as a parametric equation and an implicit equation may easily be deduced from an explicit representation, when a simple explicit representation exists, it has the advantages of both other representations.
Implicit representations may make it difficult to generate points on the curve, and even to decide whether there are real points. On the other hand, they are well suited for deciding whether a given point is on a curve, or whether it is inside or outside of a closed curve.
Such decisions may be difficult with a parametric representation, but parametric representations are best suited for generating points on a curve, and for plotting it.
Integer geometry
Numerous problems in integer geometry can be solved using parametric equations. A classical such solution is Euclid's parametrization of right triangles such that the lengths of their sides $a$, $b$ and their hypotenuse $c$ are coprime integers. As $a$ and $b$ are not both even (otherwise $a$, $b$ and $c$ would not be coprime), one may exchange them to have $a$ even, and the parameterization is then
$a = 2mn, \quad b = m^2 - n^2, \quad c = m^2 + n^2,$
where the parameters $m$ and $n$ are positive coprime integers that are not both odd.
By multiplying $a$, $b$ and $c$ by an arbitrary positive integer, one gets a parametrization of all right triangles whose three sides have integer lengths.
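A short sketch enumerating primitive Pythagorean triples from this parametrization; the search ranges are arbitrary:

```python
from math import gcd

# Euclid's parametrization: a = 2mn, b = m^2 - n^2, c = m^2 + n^2.
for m in range(2, 6):
    for n in range(1, m):
        if gcd(m, n) == 1 and (m - n) % 2 == 1:   # coprime, not both odd
            a, b, c = 2 * m * n, m * m - n * n, m * m + n * n
            print((a, b, c), a * a + b * b == c * c)
```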
Underdetermined linear systems
A system of $m$ linear equations in $n$ unknowns is underdetermined if it has more than one solution. This occurs when the matrix of the system and its augmented matrix have the same rank $r$ and $r < n$. In this case, one can select $n - r$ unknowns as parameters and represent all solutions as a parametric equation where all unknowns are expressed as linear combinations of the selected ones. That is, if the unknowns are $x_1, \ldots, x_n,$ one can reorder them for expressing the solutions as
$x_i = \beta_i + \sum_{j=r+1}^{n} \alpha_{i,j}\, x_j \quad \text{for } i = 1, \ldots, r,$
where the unknowns $x_{r+1}, \ldots, x_n$ may take arbitrary values and serve as the parameters.
Such a parametric equation is called a parametric form of the solution of the system.
The standard method for computing a parametric form of the solution is to use Gaussian elimination for computing a reduced row echelon form of the augmented matrix. Then the unknowns that can be used as parameters are the ones that correspond to columns not containing any leading entry (that is the left most non zero entry in a row or the matrix), and the parametric form can be straightforwardly deduced.
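A sketch of computing such a parametric form with SymPy; the system here is invented for illustration:

```python
from sympy import symbols, linsolve

x1, x2, x3 = symbols("x1 x2 x3")

# One equation, three unknowns: rank 1 < 3, so two unknowns become parameters.
solutions = linsolve([x1 + 2 * x2 - x3 - 4], [x1, x2, x3])
print(solutions)   # {(-2*x2 + x3 + 4, x2, x3)}: x2 and x3 are free parameters
```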
See also
Curve
Parametric estimating
Position vector
Vector-valued function
Parametrization by arc length
Parametric derivative
Notes
External links
Web application to draw parametric curves on the plane
Multivariable calculus
Equations
Geometry processing | Parametric equation | [
"Mathematics"
] | 1,994 | [
"Multivariable calculus",
"Mathematical objects",
"Equations",
"Calculus"
] |
576,120 | https://en.wikipedia.org/wiki/Balzan%20Prize | The International Balzan Prize Foundation awards four annual monetary prizes to people or organizations who have made outstanding achievements in the fields of humanities, natural sciences, culture, as well as for endeavours for peace and the brotherhood of man.
History
The assets behind the foundation were established by the Italian Eugenio Balzan (1874–1953), a part-owner of the newspaper Corriere della Sera, who had invested his assets in Switzerland and in 1933 had left Italy in protest against fascism. He left a substantial inheritance to his daughter Angela Lina Balzan (1892–1956), who at the time was suffering from an incurable disease. Before her death, she left instructions for the foundation, and since then it has had two headquarters: the Prize is administered from Milan, the Fund from Zurich.
The first award was in fact one million Swiss francs to the Nobel Foundation in 1961. After 1962, a gap of 16 years followed before prizes recommenced with an award of half a million Swiss francs to Mother Teresa. Award ceremonies alternate between Bern and the Accademia dei Lincei in Rome, and winners have frequently gone on to win a Nobel Prize.
Procedure
All awards are decided by a single committee. The Balzan Prize committee comprises twenty members of the prestigious learned societies of Europe.
Each year the foundation chooses the fields eligible for the next year's prizes, and determines the prize amount. These are generally announced in May, with the winners announced in September of the following year.
Rewards and assets
Since 2001 the prize money has increased to 1 million Swiss Francs per prize, on condition that half the money is used for projects involving young researchers.
As of 2017, the amount of each of the four Balzan Prizes is now 750,000 Swiss francs (approx. €760,000; $750,000; £660,000).
Categories
Four prizes have been awarded annually since 1978. The award fields vary each year and can be related to either a specific or an interdisciplinary field. The prizes go beyond the traditional subjects both in the humanities (literature, the moral sciences and the arts) and in the sciences (medicine and the physical, mathematical and natural sciences), with an emphasis on innovative research.
In different fields the prize is considered a significant prize, for example in sociology.
Every 3 to 7 years the foundation also awards the Prize for humanity, peace and brotherhood among peoples. It was last awarded in 2014 to Vivre en Famille.
Recipients
See: List of Balzan Prize recipients
See also
List of general science and technology awards
List of astronomy awards
References
External links
The Balzan Foundation – List of Balzan prizewinners
Prizes named after people
Awards established in 1961
Science and technology awards
Peace awards
Academic awards
Astronomy prizes | Balzan Prize | [
"Astronomy",
"Technology"
] | 538 | [
"Science and technology awards",
"Astronomy prizes"
] |
576,142 | https://en.wikipedia.org/wiki/Amyloplast | Amyloplasts are a type of plastid, double-enveloped organelles in plant cells that are involved in various biological pathways. Amyloplasts are specifically a type of leucoplast, a subcategory for colorless, non-pigment-containing plastids. Amyloplasts are found in roots and storage tissues, and they store and synthesize starch for the plant through the polymerization of glucose. Starch synthesis relies on the transportation of carbon from the cytosol, the mechanism of which is currently under debate.
Starch synthesis and storage also takes place in chloroplasts, a type of pigmented plastid involved in photosynthesis. Amyloplasts and chloroplasts are closely related, and amyloplasts can turn into chloroplasts; this is for instance observed when potato tubers are exposed to light and turn green.
Role in gravity sensing
Amyloplasts are thought to play a vital role in gravitropism. Statoliths, a specialized starch-accumulating amyloplast, are denser than cytoplasm, and are able to settle to the bottom of the gravity-sensing cell, called a statocyte. This settling is a vital mechanism in plant's perception of gravity, triggering the asymmetrical distribution of auxin that causes the curvature and growth of stems against the gravity vector, as well as growth of roots along the gravity vector. A plant lacking in phosphoglucomutase (pgm), for example, is a starchless mutant plant, thus preventing the settling of the statoliths. This mutant shows a significantly weaker gravitropic response as compared to a non-mutant plant. A normal gravitropic response can be rescued with hypergravity.
In roots, gravity is sensed in the root cap, a section of tissue at the very tip of the root. Upon removal of the root cap, the root loses its ability to sense gravity. However, if the root cap is regrown, the root's gravitropic response will recover.
In stems, gravity is sensed in the endodermal cells of the shoots.
References
Organelles
Plant cells
Plant physiology
Cell anatomy | Amyloplast | [
"Biology"
] | 475 | [
"Plant physiology",
"Plants"
] |
576,155 | https://en.wikipedia.org/wiki/Chromoplast | Chromoplasts are plastids, heterogeneous organelles responsible for pigment synthesis and storage in specific photosynthetic eukaryotes. It is thought (according to symbiogenesis) that like all other plastids including chloroplasts and leucoplasts they are descended from symbiotic prokaryotes.
Function
Chromoplasts are found in fruits, flowers, roots, and stressed and aging leaves, and are responsible for their distinctive colors. This is always associated with a massive increase in the accumulation of carotenoid pigments. The conversion of chloroplasts to chromoplasts in ripening is a classic example.
They are generally found in mature tissues and are derived from preexisting mature plastids. Fruits and flowers are the most common structures for the biosynthesis of carotenoids, although other reactions occur there as well including the synthesis of sugars, starches, lipids, aromatic compounds, vitamins, and hormones. The DNA in chloroplasts and chromoplasts is identical. One subtle difference in DNA was found after a liquid chromatography analysis of tomato chromoplasts was conducted, revealing increased cytosine methylation.
Chromoplasts synthesize and store pigments such as orange carotene, yellow xanthophylls, and various other red pigments. As such, their color varies depending on what pigment they contain. The main evolutionary purpose of chromoplasts is probably to attract pollinators or eaters of colored fruits, which help disperse seeds. However, they are also found in roots such as carrots and sweet potatoes. They allow the accumulation of large quantities of water-insoluble compounds in otherwise watery parts of plants.
When leaves change color in the autumn, it is due to the loss of green chlorophyll, which unmasks preexisting carotenoids. In this case, relatively little new carotenoid is produced—the change in plastid pigments associated with leaf senescence is somewhat different from the active conversion to chromoplasts observed in fruit and flowers.
There are some species of flowering plants that contain little to no carotenoids. In such cases, there are plastids present within the petals that closely resemble chromoplasts and are sometimes visually indistinguishable. Anthocyanins and flavonoids located in the cell vacuoles are responsible for other colors of pigment.
The term "chromoplast" is occasionally used to include any plastid that has pigment, mostly to emphasize the difference between them and the various types of leucoplasts, plastids that have no pigments. In this sense, chloroplasts are a specific type of chromoplast. Still, "chromoplast" is more often used to denote plastids with pigments other than chlorophyll.
Structure and classification
Using a light microscope chromoplasts can be differentiated and are classified into four main types. The first type is composed of proteic stroma with granules. The second is composed of protein crystals and amorphous pigment granules. The third type is composed of protein and pigment crystals. The fourth type is a chromoplast which only contains crystals.
An electron microscope reveals even more, allowing for the identification of substructures such as globules, crystals, membranes, fibrils and tubules. The substructures found in chromoplasts are not found in the mature plastid that it divided from.
The presence, frequency and identification of substructures using an electron microscope has led to further classification, dividing chromoplasts into five main categories: Globular chromoplasts, crystalline chromoplasts, fibrillar chromoplasts, tubular chromoplasts and membranous chromoplasts. It has also been found that different types of chromoplasts can coexist in the same organ. Some examples of plants in the various categories include mangoes, which have globular chromoplasts, and carrots which have crystalline chromoplasts.
Although some chromoplasts are easily categorized, others have characteristics from multiple categories that make them hard to place. Tomatoes accumulate carotenoids, mainly lycopene crystalloids in membrane-shaped structures, which could place them in either the crystalline or membranous category.
Evolution
The pigments stored in chromoplasts help determine which pollinators visit a flower, as specific colors attract specific pollinators. White flowers tend to attract beetles, bees are most often attracted to violet and blue flowers, and butterflies are often attracted to warmer colors like yellows and oranges.
Research
Chromoplasts are not widely studied and are rarely the main focus of scientific research. They often play a role in research on the tomato plant (Solanum lycopersicum). Lycopene is responsible for the red color of a ripe fruit in the cultivated tomato, while the yellow color of the flowers is due to xanthophylls violaxanthin and neoxanthin.
Carotenoid biosynthesis occurs in both chromoplasts and chloroplasts. In the chromoplasts of tomato flowers, carotenoid synthesis is regulated by the genes Psy1, Pds, Lcy-b, and Cyc-b. These genes, in addition to others, are responsible for the formation of carotenoids in organs and structures. For example, the Lcy-e gene is highly expressed in leaves, which results in the production of the carotenoid lutein.
White flowers are caused by a recessive allele in tomato plants. They are less desirable in cultivated crops because they have a lower pollination rate. In one study, it was found that chromoplasts are still present in white flowers. The lack of yellow pigment in their petals and anthers is due to a mutation in the CrtR-b2 gene which disrupts the carotenoid biosynthesis pathway.
The entire process of chromoplast formation is not yet completely understood on the molecular level. However, electron microscopy has revealed part of the transformation from chloroplast to chromoplast. The transformation starts with remodeling of the internal membrane system with the lysis of the intergranal thylakoids and the grana. New membrane systems form in organized membrane complexes called thylakoid plexus. The new membranes are the site of the formation of carotenoid crystals. These newly synthesized membranes do not come from the thylakoids, but rather from vesicles generated from the inner membrane of the plastid. The most obvious biochemical change would be the downregulation of photosynthetic gene expression which results in the loss of chlorophyll and stops photosynthetic activity.
In oranges, the synthesis of carotenoids and the disappearance of chlorophyll causes the color of the fruit to change from green to yellow. The orange color is often added artificially—light yellow-orange is the natural color created by the actual chromoplasts.
Valencia oranges (Citrus sinensis L.) are a cultivated orange grown extensively in the state of Florida. In the winter, Valencia oranges reach their optimum orange-rind color, reverting to a green color in the spring and summer. While it was originally thought that chromoplasts were the final stage of plastid development, in 1966 it was proved that chromoplasts can revert to chloroplasts, which causes the oranges to turn back to green.
Compare plastids
Plastid
Chloroplast and etioplast
Chromoplast
Lycopene: red color of tomato
Capasanthin: red color of peppers
β-Carotene: red color of carrot
Xanthophyll: yellow coloration
Anthocyanin: purple, red, blue, or black coloration
Leucoplast
Amyloplast
Elaioplast
Proteinoplast (aleuroplast)
References
External links
http://www.daviddarling.info/encyclopedia/C/chromoplast.html
http://www.thefreedictionary.com/chromoplasts
Plastids | Chromoplast | [
"Chemistry"
] | 1,792 | [
"Photosynthesis",
"Plastids"
] |
576,159 | https://en.wikipedia.org/wiki/Isochrony | Isochrony is a linguistic analysis or hypothesis assuming that any spoken language's utterances are divisible into equal rhythmic portions of some kind. Under this assumption, languages are proposed to broadly fall into one of two categories based on rhythm or timing: syllable-timed or stress-timed languages (or, in some analyses, a third category: mora-timed languages). However, empirical studies have been unable to directly or fully support the hypothesis, so the concept remains controversial in linguistics.
History
Rhythm is an aspect of prosody, others being intonation, stress, and tempo of speech. Isochrony refers to rhythmic division of time into equal portions by a language. The idea was first expressed thus by Kenneth L. Pike in 1945, though the concept of language naturally occurring in chronologically and rhythmically equal measures is found at least as early as 1775 (in Prosodia Rationalis). Soames (1889) attributed the idea to Curwen. This has implications for linguistic typology: D. Abercrombie claimed "As far as is known, every language in the world is spoken with one kind of rhythm or with the other ... French, Telugu and Yoruba ... are syllable-timed languages, ... English, Russian and Arabic ... are stress-timed languages."
While many linguists find the idea of different rhythm types appealing, empirical studies have not been able to find acoustic correlates of the postulated types, calling into question the validity of these types. However, when viewed as a matter of degree, relative differences in the variability of syllable duration across languages have been found.
Alternative division of time
Three alternative ways in which a language can divide time are postulated:
The duration of every syllable is equal (syllable-timed);
The duration of every mora is equal (mora-timed).
The interval between two stressed syllables is equal (stress-timed).
Syllable timing
In a syllable-timed language, every syllable is perceived as taking up roughly the same amount of time, though the absolute length of time depends on the prosody. Syllable-timed languages tend to give syllables approximately equal prominence and generally lack reduced vowels.
French, Italian, Spanish, Romanian, Brazilian Portuguese, Icelandic, Singlish, Cantonese, Mandarin Chinese, Armenian, Turkish and Korean are commonly quoted as examples of syllable-timed languages. This type of rhythm was originally metaphorically referred to as "machine-gun rhythm" because each underlying rhythmical unit is of the same duration, similar to the transient bullet noise of a machine gun.
Since the 1950s, speech scientists have tried to show the existence of equal syllable durations in the acoustic speech signal without success. More recent research claims that the duration of consonantal and vocalic intervals is responsible for syllable-timed perception.
Mora timing
Some languages like Japanese, Gilbertese, Slovak and Ganda also have regular pacing but are mora-timed, rather than syllable-timed. In Japanese, a V or CV syllable takes up one timing unit. Japanese does not have diphthongs but double vowels, so CVV takes roughly twice the time as CV. A final /N/ also takes roughly as much time as a CV syllable, as does the extra length of a geminate consonant.
Ancient Greek and Vedic Sanskrit were also strictly mora-timed. Classical Persian was also mora-timed, though most modern dialects are not. Mora-timing is still common when reciting classical Persian poetry and music.
Stress timing
In a stress-timed language, syllables may last different amounts of time, but there is perceived to be a fairly constant amount of time (on average) between consecutive stressed syllables. Consequently, unstressed syllables between stressed syllables tend to be compressed to fit into the time interval: if two stressed syllables are separated by a single unstressed syllable, as in delicious tea, the unstressed syllable will be relatively long, while if a larger number of unstressed syllables intervenes, as in tolerable tea, the unstressed syllables will be shorter.
Stress-timing is sometimes called Morse-code rhythm, but any resemblance between the two is only superficial. Stress-timing is strongly related to vowel reduction processes. English, Thai, Lao, German, Russian, Danish, Swedish, Norwegian, Faroese, Dutch, European Portuguese, and Iranian Persian are typical stress-timed languages. Some stress-timed languages (for example Arabic) retain unreduced vowels.
Degrees of durational variability
Despite the relative simplicity of the classifications above, in the real world languages do not fit quite so easily into such precise categories. Languages exhibit degrees of durational variability both in relation to other languages and to other standards of the same language.
There can be varying degrees of stress-timing within the various standards of a language. Some southern dialects of Italian, a syllable-timed language, are effectively stress-timed. English, a stress-timed language, has become so widespread that some standards tend to be more syllable-timed than the British or North American standards, an effect which comes from the influence of other languages spoken in the relevant region. Indian English, for example, tends toward syllable-timing. This does not necessarily mean the language standard itself is to be classified as syllable-timed, of course, but rather that this feature is more pronounced. A subtle example is that to a native English speaker, for example, some accents from Wales may sound more syllable-timed.
A better-documented case of these varying degrees of stress-timing in a language comes from Portuguese. European Portuguese is more stress-timed than the Brazilian standard. The latter has mixed characteristics and varies according to speech rate, sex and dialect. At fast speech rates, Brazilian Portuguese is more stress-timed, while in slow speech rates, it can be more syllable-timed. The accents of rural, southern Rio Grande do Sul and the Northeast (especially Bahia) are considered to sound more syllable-timed than the others, while the southeastern dialects such as the mineiro, in central Minas Gerais, the paulistano, of the northern coast and eastern regions of São Paulo, and the fluminense, along Rio de Janeiro, Espírito Santo and eastern Minas Gerais as well the Federal District, are most frequently essentially stress-timed. Also, male speakers of Brazilian Portuguese speak faster than female speakers and speak in a more stress-timed manner.
Linguist Peter Ladefoged has proposed (citing work by Grabe and Low) that, since languages differ from each other in terms of the amount of difference between the durations of vowels in adjacent syllables, it is possible to calculate a Pairwise Variability Index (PVI) from measured vowel durations to quantify the differences. The data show that, for example, Dutch (traditionally classed as a stress-timed language) exhibits a higher PVI than Spanish (traditionally a syllable-timed language).
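As a sketch of the metric, Grabe and Low's normalised PVI is commonly given in the following form (the notation here is ours: d_k is the duration of the k-th vocalic interval and m is the number of intervals measured):

\mathrm{nPVI} = \frac{100}{m-1} \sum_{k=1}^{m-1} \left| \frac{d_k - d_{k+1}}{(d_k + d_{k+1})/2} \right|

Higher values indicate greater durational contrast between neighbouring intervals, which is characteristic of stress-timed speech.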
The stress-timing–syllable-timing distinction as a continuum
Given the lack of solid evidence for a clear-cut categorical distinction between the two rhythmical types, it seems reasonable to suggest instead that all languages (and all their accents) display both types of rhythm to a greater or lesser extent. T. F. Mitchell claimed that there is no language which is totally syllable-timed or totally stress-timed; rather, all languages display both sorts of timing. Languages will, however, differ in which type of timing predominates. This view was developed by Dauer in such a way that a metric was provided allowing researchers to place any language on a scale from maximally stress-timed to maximally syllable-timed. Examples of this approach in use are Dimitrova's study of Bulgarian and Olivo's study of the rhythm of Ashanti Twi.
According to Dafydd Gibbon and Briony Williams, Welsh is neither syllable-timed nor stress-timed, as syllable length varies less than in stress-timed languages.
See also
Stress and vowel reduction in English
References
External links
Roach, Peter (1998). Language Myths, "Some Languages are Spoken More Quickly Than Others", eds. L. Bauer and P. Trudgill, Penguin, 1998, pp. 150–8
Étude sur la discrimination des langues par la prosodie ("Study on the discrimination of languages by prosody"; pdf document, in French)
Languages' rhythm and language acquisition (pdf document)
Supra-segmental Phonology (rhythm, intonation and stress-timing)
Phonetics
Rhythm and meter | Isochrony | [
"Physics"
] | 1,724 | [
"Spacetime",
"Rhythm and meter",
"Physical quantities",
"Time"
] |
576,245 | https://en.wikipedia.org/wiki/Pavel%20Alexandrov | Pavel Sergeyevich Alexandrov (), sometimes romanized Paul Alexandroff (7 May 1896 – 16 November 1982), was a Soviet mathematician. He wrote roughly three hundred papers, making important contributions to set theory and topology. In topology, the Alexandroff compactification and the Alexandrov topology are named after him.
Biography
Alexandrov attended Moscow State University where he was a student of Dmitri Egorov and Nikolai Luzin. Together with Pavel Urysohn, he visited the University of Göttingen in 1923 and 1924. After getting his Ph.D. in 1927, he continued to work at Moscow State University and also joined the Steklov Institute of Mathematics.
He was made a member of the Russian Academy of Sciences in 1953.
Personal life
Luzin challenged Alexandrov to determine whether the continuum hypothesis is true. The problem, later shown to be independent of the standard axioms of set theory, was too much for Alexandrov, and he suffered a creative crisis at the end of 1917. The failure was a heavy blow for him: "It became clear to me that the work on the continuum problem ended in a serious disaster. I also felt that I could no longer move on to mathematics and, so to speak, to the next tasks, and that some decisive turning point must come in my life."
Alexandrov went to Chernihiv, where he participated in the organization of the drama theater. "I met L. V. Sobinov there, who was at that time the head of the Department of Arts of the Ukrainian People's Commissariat of Education."
During this period, Alexandrov visited Denikin prison and was ill with typhus.
In 1921, he married Ekaterina Romanovna Eiges (1890–1958), a poet and memoirist, library worker and mathematician.
In 1955, he signed the "Letter of Three Hundred" with criticism of Lysenkoism.
Alexandrov made lifelong friends with Andrey Kolmogorov, about whom he said: "In 1979 this friendship [with Kolmogorov] celebrated its fiftieth anniversary and over the whole of this half century there was not only never any breach in it, there was also never any quarrel, in all this time there was never any misunderstanding between us on any question, no matter how important for our lives and our philosophy; even when our opinions on one of these questions differed, we showed complete understanding and sympathy for the views of each other." Researchers have since conjectured that the two men were in a secret gay relationship.
He was buried at the Kavezinsky cemetery of the Pushkinsky district of the Moscow region.
Scientific activity
Alexandrov's main works are on topology, set theory, theory of functions of a real variable, geometry, calculus of variations, mathematical logic, and foundations of mathematics.
He introduced the new concept of compactness (Alexandrov himself called it "bicompactness", applying the term "compact" only to countably compact spaces, as was customary before him). Together with P. S. Urysohn, Alexandrov showed the full significance of this concept; in particular, he proved the first general metrization theorem and the famous theorem that any locally compact Hausdorff space can be compactified by adding a single point.
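As a brief sketch of the construction now named after him (the standard modern formulation, not a quotation from his papers): given a topological space X with topology \tau, the one-point (Alexandroff) compactification adjoins a single point \infty:

X^* = X \cup \{\infty\}, \qquad \tau^* = \tau \cup \{\, (X \setminus K) \cup \{\infty\} : K \subseteq X \text{ compact and closed} \,\}

X^* is always compact, and it is Hausdorff exactly when X is a locally compact Hausdorff space.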
From 1923 P. S. Alexandrov began to study combinatorial topology, and he managed to combine this branch of topology with general topology and significantly advance the resulting theory, which became the basis for modern algebraic topology. It was he who introduced one of the basic concepts of algebraic topology: the concept of an exact sequence. Alexandrov also introduced the notion of the nerve of a covering, which led him (independently of E. Čech) to the discovery of Alexandrov–Čech cohomology.
In 1924, Alexandrov proved that every open cover of a separable metric space admits a locally finite open refinement (this notion, one of the key concepts in general topology, was first introduced by Alexandrov). In effect, this proved that separable metric spaces are paracompact (although the term "paracompact space" was introduced by Jean Dieudonné in 1944, and in 1948 Arthur Harold Stone showed that the separability requirement can be dropped).
He significantly advanced the theory of dimension; in particular, he founded the homological theory of dimension, whose basic concepts he defined in 1932. He developed methods for the combinatorial study of general topological spaces and proved a number of basic laws of topological duality. In 1927, he generalized Alexander's theorem to the case of an arbitrary closed set.
Alexandrov and P. S. Urysohn were the founders of the Moscow topological school, which received international recognition. A number of concepts and theorems of topology bear Alexandrov's name: the Alexandrov compactification, the Alexandrov–Hausdorff theorem on the cardinality of A-sets (analytic sets), the Alexandrov topology, and Alexandrov–Čech homology and cohomology.
His books played an important role in the development of science and mathematics education in Russia: Introduction to the General Theory of Sets and Functions, Combinatorial Topology, Lectures on Analytical Geometry, Dimension Theory (together with B. A. Pasynkov) and Introduction to Homological Dimension Theory.
The textbook Topologie I, written together with Heinz Hopf in German (Alexandroff P., Hopf H. (1935) Topologie Band 1 — Berlin), became the classic topology course of its time.
The Luzin Affair
In 1936, Alexandrov was an active participant in the political offensive against his former mentor Luzin that is known as the Luzin affair.
Although P. S. Alexandrov was a student of N. N. Luzin and a member of Lusitania, he became one of the most active persecutors of his former teacher during the affair. Relations between Luzin and Alexandrov remained very strained until the end of Luzin's life, and Alexandrov became an academician only after Luzin's death.
Students
Among the students of P. S. Alexandrov, the most famous are Lev Pontryagin, Andrey Tychonoff and Aleksandr Kurosh. The older generation of his students includes L. A. Tumarkin, V. V. Nemytsky, A. N. Cherkasov, N. B. Vedenisov, G. S. Chogoshvili. The group of "Forties" includes Yu. M. Smirnov, K. A. Sitnikov, O. V. Lokutsievsky, E. F. Mishchenko, M. R. Shura-Bura. The generation of the fifties includes A.V. Arkhangelsky, B. A. Pasynkov, V. I. Ponomarev, as well as E. G. Sklyarenko and A. A. Maltsev, who were in graduate school under Yu.M. Smirnov and K. A. Sitnikov, respectively. The group of the youngest students is formed by V. V. Fedorchuk, V. I. Zaitsev and E. V. Shchepin.
Honours and awards
Hero of Socialist Labour
Stalin Prize
Order of Lenin, six times (1946, 1953, 1961, 1966, 1969 and 1975)
Order of the October Revolution
Order of the Red Banner of Labour
Order of the Badge of Honour
Member of the American Philosophical Society (1946)
Member of the United States National Academy of Sciences (1947)
Books
Alexandroff P., Hopf H. Topologie Bd.1 — B:, 1935
Books In Russian
Notes
External links
The 1936 Luzin affair – from the MacTutor History of Mathematics archive
Lorentz G.G., Mathematics and Politics in the Soviet Union from 1928 to 1953
Kutateladze S.S., The Tragedy of Mathematics in Russia
1896 births
1982 deaths
20th-century Russian mathematicians
People from Noginsk
Academicians of the USSR Academy of Pedagogical Sciences
Foreign associates of the National Academy of Sciences
Full Members of the USSR Academy of Sciences
Imperial Moscow University alumni
Members of the Austrian Academy of Sciences
Members of the German Academy of Sciences at Berlin
Academic staff of Moscow State University
Heroes of Socialist Labour
Recipients of the Stalin Prize
Recipients of the Order of Lenin
Recipients of the Order of the Red Banner of Labour
Soviet mathematicians
Topologists
Members of the Göttingen Academy of Sciences and Humanities
Members of the American Philosophical Society
Recipients of the Cothenius Medal | Pavel Alexandrov | [
"Mathematics"
] | 1,791 | [
"Topologists",
"Topology"
] |
576,246 | https://en.wikipedia.org/wiki/Pribnow%20box | The Pribnow box (also known as the Pribnow-Schaller box) is a sequence of TATAAT of six nucleotides (thymine, adenine, thymine, etc.) that is an essential part of a promoter site on DNA for transcription to occur in bacteria. It is an idealized or consensus sequence—that is, it shows the most frequently occurring base at each position in many promoters analyzed; individual promoters often vary from the consensus at one or more positions. It is also commonly called the -10 sequence or element, because it is centered roughly ten base pairs upstream from the site of initiation of transcription.
The Pribnow box has a function similar to the TATA box that occurs in promoters in eukaryotes and archaea: it is recognized and bound by a subunit of RNA polymerase during initiation of transcription. This region of the DNA is also the first place where base pairs separate during prokaryotic transcription to allow access to the template strand. The AT-richness is important to allow this separation, since adenine and thymine are easier to break apart (not only due to fewer hydrogen bonds, but also due to weaker base stacking effects).
It is named after David Pribnow and Heinz Schaller.
Probability of occurrence of each nucleotide in E. coli
In fiction
The term "Pribnow box" is used in episode 13 of Neon Genesis Evangelion, in reference to the chamber holding simulation Evangelions for testing purposes.
See also
TATA box
References
Regulatory sequences | Pribnow box | [
"Chemistry"
] | 316 | [
"Gene expression",
"Regulatory sequences"
] |
576,344 | https://en.wikipedia.org/wiki/Technorealism | Technorealism is an attempt to expand the middle ground between techno-utopianism and Neo-Luddism by assessing the social and political implications of technologies so that people might all have more control over the shape of their future. An account cited that technorealism emerged in the early 1990s and was introduced by Douglas Rushkoff and Andrew Shapiro. In the Technorealism manifesto, which described the term as a new generation of cultural criticism, it was stated that the goal was not to promote or dismiss technology but to understand it so the application could be aligned with basic human values. Technorealism suggests that a technology, however revolutionary it may seem, remains a continuation of similar revolutions throughout human history.
Approach
The technorealist approach involves a continuous critical examination of how technologies might help or hinder people in the struggle to improve the quality of their lives, their communities, and their economic, social, and political structures. In addition, instead of policy wonks, experts, and the elite, it is the technology critic who assumes center stage in the discourse on technology policy issues.
Although technorealism began with a focus on U.S.-based concerns about information technology, it has evolved into an international intellectual movement with a variety of interests such as biotechnology and nanotechnology.
See also
Technocriticism
History of science and technology
Ethics
Bioethics
Infoethics
Neuroethics
Nanoethics
Roboethics
Technoethics
Technology
References
External links
technorealism.org, historical site
Applied ethics
Ethics of science and technology
Technology neologisms
Philosophy of technology
Political theories
Technology systems | Technorealism | [
"Technology",
"Engineering",
"Biology"
] | 322 | [
"Systems engineering",
"Behavior",
"Technology systems",
"Philosophy of technology",
"Science and technology studies",
"nan",
"Ethics of science and technology",
"Human behavior",
"Applied ethics"
] |
576,354 | https://en.wikipedia.org/wiki/142857 | 142,857 is the natural number following 142,856 and preceding 142,858. It is a Kaprekar number.
Cyclic number
142857 is the best-known cyclic number in base 10, being the six repeating digits of 1/7 (0.142857142857...).
If 142857 is multiplied by 2, 3, 4, 5 or 6, the answer will be a cyclic permutation of itself, and will correspond to the repeating digits of 2/7, 3/7, 4/7, 5/7 or 6/7 respectively:
1 × 142,857 = 142,857
2 × 142,857 = 285,714
3 × 142,857 = 428,571
4 × 142,857 = 571,428
5 × 142,857 = 714,285
6 × 142,857 = 857,142
7 × 142,857 = 999,999
If multiplying by an integer greater than 7, there is a simple process to get to a cyclic permutation of 142857. By adding the rightmost six digits (ones through hundred thousands) to the remaining digits and repeating this process until only six digits are left, the result will be a cyclic permutation of 142857:
142857 × 8 = 1142856
1 + 142856 = 142857
142857 × 815 = 116428455
116 + 428455 = 428571
142857² = 142857 × 142857 = 20408122449
20408 + 122449 = 142857
Multiplying by a multiple of 7 will result in 999999 through this process:
142857 × 7⁴ = 342999657
342 + 999657 = 999999
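This folding procedure is easy to mechanise. A minimal C sketch (our own illustration; the function name is arbitrary) that repeatedly adds the rightmost six decimal digits to the remaining digits:

#include <stdio.h>

/* Repeatedly add the low six decimal digits to the remaining high
   digits until at most six digits are left. */
unsigned long long fold142857(unsigned long long n) {
    while (n > 999999ULL)
        n = n / 1000000ULL + n % 1000000ULL;
    return n;
}

int main(void) {
    printf("%llu\n", fold142857(142857ULL * 8));      /* prints 142857 */
    printf("%llu\n", fold142857(142857ULL * 815));    /* prints 428571 */
    printf("%llu\n", fold142857(142857ULL * 142857)); /* prints 142857 */
    printf("%llu\n", fold142857(142857ULL * 2401));   /* 2401 = 7^4, prints 999999 */
    return 0;
}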
Squaring the last three digits and subtracting the square of the first three digits also yields a cyclic permutation of the number.
857² = 734449
142² = 20164
734449 − 20164 = 714285
It is the repeating part in the decimal expansion of the rational number 1/7 = 0.142857142857.... Thus, multiples of 1/7 are simply repeated copies of the corresponding multiples of 142857:
Connection to the enneagram
The 142857 number sequence is used in the enneagram figure, a symbol of the Gurdjieff Work used to explain and visualize the dynamics of the interaction between the two great laws of the Universe (according to G. I. Gurdjieff), the Law of Three and the Law of Seven. The movement of the digits of 142857 in the decimal expansions of 1/7, 2/7, etc., and the subsequent movement of the enneagram, are portrayed in Gurdjieff's sacred dances known as the movements.
Other properties
The 142857 number sequence is also found in several decimals in which the denominator has a factor of 7. In the examples below, the numerators are all 1, although this need not be the case.
For example, consider the fractions and equivalent decimal values listed below:
1/7 = 0.142857142857...
1/14 = 0.0714285714285...
1/28 = 0.03571428571428...
1/35 = 0.0285714285714...
1/56 = 0.017857142857142...
1/70 = 0.0142857142857...
The above decimals follow the 142857 rotational sequence. Some fractions in which the denominator has a factor of 7 do not follow this sequence and have other values in their decimal digits.
References
Integers | 142857 | [
"Mathematics"
] | 733 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
576,387 | https://en.wikipedia.org/wiki/Finite%20field%20arithmetic | In mathematics, finite field arithmetic is arithmetic in a finite field (a field containing a finite number of elements) contrary to arithmetic in a field with an infinite number of elements, like the field of rational numbers.
There are infinitely many different finite fields. Their number of elements is necessarily of the form p^n where p is a prime number and n is a positive integer, and two finite fields of the same size are isomorphic. The prime p is called the characteristic of the field, and the positive integer n is called the dimension of the field over its prime field.
Finite fields are used in a variety of applications, including in classical coding theory in linear block codes such as BCH codes and Reed–Solomon error correction, in cryptography algorithms such as the Rijndael (AES) encryption algorithm, in tournament scheduling, and in the design of experiments.
Effective polynomial representation
The finite field with p^n elements is denoted GF(p^n) and is also called the Galois field of order p^n, in honor of the founder of finite field theory, Évariste Galois. GF(p), where p is a prime number, is simply the ring of integers modulo p. That is, one can perform operations (addition, subtraction, multiplication) using the usual operation on integers, followed by reduction modulo p. For instance, in GF(5), 4 + 3 = 7 is reduced to 2 modulo 5. Division is multiplication by the inverse modulo p, which may be computed using the extended Euclidean algorithm.
A particular case is GF(2), where addition is exclusive OR (XOR) and multiplication is AND. Since the only invertible element is 1, division is the identity function.
Elements of GF(p^n) may be represented as polynomials of degree strictly less than n over GF(p). Operations are then performed modulo m(x) where m(x) is an irreducible polynomial of degree n over GF(p), for instance using polynomial long division. Addition is the usual addition of polynomials, but the coefficients are reduced modulo p. Multiplication is also the usual multiplication of polynomials, but with coefficients multiplied modulo p and polynomials multiplied modulo the polynomial m(x). This representation in terms of polynomial coefficients is called a monomial basis (a.k.a. 'polynomial basis').
There are other representations of the elements of GF(p^n); some are isomorphic to the polynomial representation above and others look quite different (for instance, using matrices). Using a normal basis may have advantages in some contexts.
When the prime is 2, it is conventional to express elements of GF(2^n) as binary numbers, with the coefficient of each term in a polynomial represented by one bit in the corresponding element's binary expression. Braces ( "{" and "}" ) or similar delimiters are commonly added to binary numbers, or to their hexadecimal equivalents, to indicate that the value gives the coefficients of a basis of a field, thus representing an element of the field. For example, the following are equivalent representations of the same value in a characteristic 2 finite field:
Polynomial: x^6 + x^4 + x + 1
Binary: {01010011}
Hexadecimal: {53}
Primitive polynomials
There are many irreducible polynomials (sometimes called reducing polynomials) that can be used to generate a finite field, but they do not all give rise to the same representation of the field.
A monic irreducible polynomial of degree n having coefficients in the finite field GF(q), where q = p^t for some prime p and positive integer t, is called a primitive polynomial if all of its roots are primitive elements of GF(q^n). In the polynomial representation of the finite field, this implies that x is a primitive element. There is at least one irreducible polynomial for which x is a primitive element. In other words, for a primitive polynomial, the powers of x generate every nonzero value in the field.
In the following examples it is best not to use the polynomial representation, as the meaning of x changes between the examples. The monic irreducible polynomial x^8 + x^4 + x^3 + x + 1 over GF(2) is not primitive. Let λ be a root of this polynomial (in the polynomial representation this would be x), that is, λ^8 + λ^4 + λ^3 + λ + 1 = 0. Now λ^51 = 1, so λ is not a primitive element of GF(2^8) and generates a multiplicative subgroup of order 51. The monic irreducible polynomial x^8 + x^4 + x^3 + x^2 + 1 over GF(2) is primitive, and all 8 of its roots are generators of the multiplicative group of GF(2^8).
GF(2^8) has a total of 128 generators (see Number of primitive elements), and for a primitive polynomial, 8 of them are roots of the reducing polynomial. Having x as a generator for a finite field is beneficial for many computational mathematical operations.
Addition and subtraction
Addition and subtraction are performed by adding or subtracting two of these polynomials together, and reducing the result modulo the characteristic.
In a finite field with characteristic 2, addition modulo 2, subtraction modulo 2, and XOR are identical. Thus,
(x^6 + x^4 + x + 1) + (x^7 + x^6 + x^3 + x) = x^7 + x^4 + x^3 + 1
Under regular addition of polynomials, the sum would contain a term 2x^6. This term becomes 0x^6 and is dropped when the answer is reduced modulo 2.
Here is a table with both the normal algebraic sum and the characteristic 2 finite field sum of a few polynomials:
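The table itself is not reproduced here; a few rows computed for illustration (the example polynomials are our own):

(x^3 + x + 1) + (x^3 + x^2): normal sum 2x^3 + x^2 + x + 1; characteristic 2 sum x^2 + x + 1
(x^2 + x) + (x + 1): normal sum x^2 + 2x + 1; characteristic 2 sum x^2 + 1
(x + 1) + (x + 1): normal sum 2x + 2; characteristic 2 sum 0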
In computer science applications, the operations are simplified for finite fields of characteristic 2, also called GF(2^n) Galois fields, making these fields especially popular choices.
Multiplication
Multiplication in a finite field is multiplication modulo an irreducible reducing polynomial used to define the finite field. (I.e., it is multiplication followed by division using the reducing polynomial as the divisor—the remainder is the product.) The symbol "•" may be used to denote multiplication in a finite field.
Rijndael's (AES) finite field
Rijndael (standardised as AES) uses the characteristic 2 finite field with 256 elements, which can also be called the Galois field GF(2^8). It employs the following reducing polynomial for multiplication:
x^8 + x^4 + x^3 + x + 1.
For example, {53} • {CA} = {01} in Rijndael's field because
(x^6 + x^4 + x + 1)(x^7 + x^6 + x^3 + x)
= (x^13 + x^12 + x^9 + x^7) + (x^11 + x^10 + x^7 + x^5) + (x^8 + x^7 + x^4 + x^2) + (x^7 + x^6 + x^3 + x)
= x^13 + x^12 + x^9 + x^11 + x^10 + x^5 + x^8 + x^4 + x^2 + x^6 + x^3 + x
= x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + x^3 + x^2 + x
and
x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + x^3 + x^2 + x mod (x^8 + x^4 + x^3 + x + 1)
= (11111101111110 mod 100011011)
= {3F7E mod 11B}
= {01}
= 1 (decimal)
The latter can be demonstrated through long division (shown using binary notation, since it lends itself well to the task; notice that exclusive OR is applied in the example, and not arithmetic subtraction as one might use in grade-school long division):
11111101111110 (mod) 100011011
^100011011
01110000011110
^100011011
0110110101110
^100011011
010101110110
^100011011
00100011010
^100011011
000000001
(The elements {53} and {CA} are multiplicative inverses of one another since their product is 1.)
Multiplication in this particular finite field can also be done using a modified version of the "peasant's algorithm". Each polynomial is represented using the same binary notation as above. Eight bits is sufficient because only degrees 0 to 7 are possible in the terms of each (reduced) polynomial.
This algorithm uses three variables (in the computer programming sense), each holding an eight-bit representation. a and b are initialized with the multiplicands; p accumulates the product and must be initialized to 0.
At the start and end of the algorithm, and the start and end of each iteration, this invariant is true: a • b + p is the product. This is obviously true when the algorithm starts. When the algorithm terminates, a or b will be zero so p will contain the product.
Run the following loop eight times (once per bit). It is OK to stop when a or b is zero before an iteration:
If the rightmost bit of b is set, exclusive OR the product p by the value of a. This is polynomial addition.
Shift b one bit to the right, discarding the rightmost bit, and making the leftmost bit have a value of zero. This divides the polynomial by x, discarding the x^0 term.
Keep track of whether the leftmost bit of a is set to one and call this value carry.
Shift a one bit to the left, discarding the leftmost bit, and making the new rightmost bit zero. This multiplies the polynomial by x, but we still need to take account of carry, which represented the coefficient of x^7.
If carry had a value of one, exclusive or a with the hexadecimal number 0x1b (00011011 in binary). 0x1b corresponds to the irreducible polynomial with the high term eliminated. Conceptually, the high term of the irreducible polynomial and carry add modulo 2 to 0.
p now has the product
This algorithm generalizes easily to multiplication over other fields of characteristic 2, changing the lengths of a, b, and p and the value 0x1b appropriately.
Multiplicative inverse
The multiplicative inverse for an element a of a finite field can be calculated a number of different ways:
By multiplying a by every number in the field until the product is one. This is a brute-force search.
Since the nonzero elements of GF(p^n) form a finite group with respect to multiplication, a^(p^n − 1) = 1 (for a ≠ 0), thus the inverse of a is a^(p^n − 2). This algorithm is a generalization of the modular multiplicative inverse based on Fermat's little theorem (see the sketch after this list).
The multiplicative inverse based on Fermat's little theorem can also be interpreted using the multiplicative norm function in the finite field. This viewpoint leads to an inverse algorithm based on the additive trace function in the finite field.
By using the extended Euclidean algorithm.
By making logarithm and exponentiation tables for the finite field, subtracting the logarithm from p^n − 1 and exponentiating the result.
By making a modular multiplicative inverse table for the finite field and doing a lookup.
By mapping to a composite field where inversion is simpler, and mapping back.
By constructing a special integer (in case of a finite field of a prime order) or a special polynomial (in case of a finite field of a non-prime order) and dividing it by a.
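As a minimal sketch of the Fermat-based method for GF(2^8) (assuming the Rijndael reducing polynomial and the gmul routine from the C programming example later in this article; ginv is our own name):

#include <stdint.h>

uint8_t gmul(uint8_t a, uint8_t b); /* as defined in the C programming example below */

/* Multiplicative inverse in GF(2^8) via Fermat's little theorem:
   a^(2^8 - 1) = 1 for a != 0, hence a^(-1) = a^(2^8 - 2) = a^254.
   Computed by square-and-multiply. */
uint8_t ginv(uint8_t a) {
    uint8_t result = 1;
    uint8_t base = a;
    unsigned exp = 254;              /* p^n - 2 with p = 2, n = 8 */
    while (exp != 0) {
        if (exp & 1)
            result = gmul(result, base);
        base = gmul(base, base);     /* square */
        exp >>= 1;
    }
    return result;                   /* ginv(0) yields 0; zero has no inverse */
}

Consistent with the worked example above, ginv(0x53) evaluates to 0xCA.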
Implementation tricks
Generator based tables
When developing algorithms for Galois field computation on small Galois fields, a common performance optimization approach is to find a generator g and use the identity:
a • b = g^(log_g(a) + log_g(b))
to implement multiplication as a sequence of table look-ups for the log_g(a) and g^y functions and an integer addition operation. This exploits the property that every finite field contains generators. In the Rijndael field example, the polynomial x + 1 (or {03}) is one such generator. A necessary but not sufficient condition for a polynomial to be a generator is to be irreducible.
An implementation must test for the special case of a or b being zero, as the product will also be zero.
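A minimal C sketch of this technique for the Rijndael field, assuming the generator {03} and the slow gmul routine from the C programming example below (table and function names are our own):

#include <stdint.h>

uint8_t gmul(uint8_t a, uint8_t b); /* slow multiply, as in the C programming example below */

static uint8_t exp_table[255];  /* exp_table[i] = g^i for the generator g = {03} */
static uint8_t log_table[256];  /* log_table[a] = log_g(a); log_table[0] is unused */

void init_tables(void) {
    uint8_t x = 1;
    for (int i = 0; i < 255; i++) {
        exp_table[i] = x;
        log_table[x] = (uint8_t)i;
        x = gmul(x, 0x03);      /* step to the next power of the generator */
    }
}

/* Table-based multiply: a*b = g^(log_g(a) + log_g(b)), with zero special-cased. */
uint8_t gmul_lut(uint8_t a, uint8_t b) {
    if (a == 0 || b == 0)
        return 0;
    return exp_table[(log_table[a] + log_table[b]) % 255];
}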
This same strategy can be used to determine the multiplicative inverse with the identity:
a^(−1) = g^(|g| − log_g(a))
Here, the order of the generator, |g|, is the number of non-zero elements of the field. In the case of GF(2^8) this is 2^8 − 1 = 255. That is to say, for the Rijndael example: (x + 1)^255 = 1. So this can be performed with two look-up tables and an integer subtraction. Using this idea for exponentiation also derives benefit:
a^n = g^((n × log_g(a)) mod |g|)
This requires two table look ups, an integer multiplication and an integer modulo operation. Again a test for the special case must be performed.
However, in cryptographic implementations, one has to be careful with such implementations since the cache architecture of many microprocessors leads to variable timing for memory access. This can lead to implementations that are vulnerable to a timing attack.
Carryless multiply
For binary fields GF(2^n), field multiplication can be implemented using a carryless multiply such as the CLMUL instruction set, which is good for n ≤ 64. A multiplication uses one carryless multiply to produce a product (up to 2n − 1 bits), another carryless multiply of a pre-computed inverse of the field polynomial to produce a quotient = ⌊product / (field polynomial)⌋, a multiply of the quotient by the field polynomial, then an xor: result = product ⊕ ((field polynomial) ⌊product / (field polynomial)⌋). The last 3 steps (pclmulqdq, pclmulqdq, xor) are used in the Barrett reduction step for fast computation of CRC using the x86 pclmulqdq instruction.
Composite exponent
When k is a composite number, there will exist isomorphisms from a binary field GF(2^k) to an extension field of one of its subfields, that is, GF((2^m)^n) where k = m n. Utilizing one of these isomorphisms can simplify the mathematical considerations, as the degree of the extension is smaller, with the trade-off that the elements are now represented over a larger subfield. To reduce gate count for hardware implementations, the process may involve multiple nesting, such as mapping from GF(2^8) to GF(((2^2)^2)^2).
Program examples
C programming example
Here is some C code which will add and multiply numbers in the characteristic 2 finite field of order 2^8, used for example by the Rijndael algorithm or Reed–Solomon, using the Russian peasant multiplication algorithm:
/* Add two numbers in the GF(2^8) finite field */
uint8_t gadd(uint8_t a, uint8_t b) {
return a ^ b;
}
/* Multiply two numbers in the GF(2^8) finite field defined
* by the modulo polynomial relation x^8 + x^4 + x^3 + x + 1 = 0
* (the other way being to do carryless multiplication followed by a modular reduction)
*/
uint8_t gmul(uint8_t a, uint8_t b) {
uint8_t p = 0; /* accumulator for the product of the multiplication */
while (a != 0 && b != 0) {
if (b & 1) /* if the polynomial for b has a constant term, add the corresponding a to p */
p ^= a; /* addition in GF(2^m) is an XOR of the polynomial coefficients */
if (a & 0x80) /* GF modulo: if a has a nonzero term x^7, then must be reduced when it becomes x^8 */
a = (a << 1) ^ 0x11b; /* subtract (XOR) the primitive polynomial x^8 + x^4 + x^3 + x + 1 (0b1_0001_1011) – you can change it but it must be irreducible */
else
a <<= 1; /* equivalent to a*x */
b >>= 1;
}
return p;
}
This example has cache, timing, and branch prediction side-channel leaks, and is not suitable for use in cryptography.
D programming example
This D program will multiply numbers in Rijndael's finite field and generate a PGM image:
/**
Multiply two numbers in the GF(2^8) finite field defined
by the polynomial x^8 + x^4 + x^3 + x + 1.
*/
ubyte gMul(ubyte a, ubyte b) pure nothrow {
ubyte p = 0;
foreach (immutable ubyte counter; 0 .. 8) {
p ^= -(b & 1) & a;
auto mask = -((a >> 7) & 1);
// 0b1_0001_1011 is x^8 + x^4 + x^3 + x + 1.
a = cast(ubyte)((a << 1) ^ (0b1_0001_1011 & mask));
b >>= 1;
}
return p;
}
void main() {
import std.stdio, std.conv;
enum width = ubyte.max + 1, height = width;
auto f = File("rijndael_finite_field_multiplication.pgm", "wb");
f.writefln("P5\n%d %d\n255", width, height);
foreach (immutable y; 0 .. height)
foreach (immutable x; 0 .. width) {
immutable char c = gMul(x.to!ubyte, y.to!ubyte);
f.write(c);
}
}
This example does not use any branches or table lookups in order to avoid side channels and is therefore suitable for use in cryptography.
See also
Zech's logarithm
References
Sources
(reissued in 1984 by Cambridge University Press ).
External links
Wikiversity: Reed–Solomon for Coders – Finite Field Arithmetic
Arithmetic
Arithmetic
Articles with example D code
Articles with example C code | Finite field arithmetic | [
"Mathematics"
] | 3,877 | [
"Arithmetic",
"Number theory"
] |
576,487 | https://en.wikipedia.org/wiki/Time%20%28Unix%29 | In computing, time is a command in Unix and Unix-like operating systems. It is used to determine the duration of execution of a particular command.
Overview
time(1) can exist as a standalone program (such as GNU time) or, more commonly, as a shell builtin (e.g. in sh, bash, tcsh or zsh).
User time vs system time
The total CPU time is the combination of the amount of time the CPU or CPUs spent performing some action for a program and the amount of time they spent performing system calls for the kernel on the program's behalf. When a program loops through an array, it is accumulating user CPU time. Conversely, when a program executes a system call such as exec or fork, it is accumulating system CPU time.
Real time vs CPU time
The term "real time" in this context refers to elapsed wall-clock time, like using a stop watch. The total CPU time (user time + sys time) may be more or less than that value. Because a program may spend some time waiting and not executing at all (whether in user mode or system mode) the real time may be greater than the total CPU time. Because a program may fork children whose CPU times (both user and sys) are added to the values reported by the time command, but on a multicore system these tasks are run in parallel, the total CPU time may be greater than the real time.
Usage
To use the command, one simply precedes any command by the word time, such as:
$ time ls
When the command completes, time will report how long it took to execute the ls command in terms of user CPU time, system CPU time, and real time. The output format varies between different versions of the command, and some give additional statistics, as in this example:
$ time host wikipedia.org
wikipedia.org has address 103.102.166.224
wikipedia.org mail is handled by 50 mx2001.wikimedia.org.
wikipedia.org mail is handled by 10 mx1001.wikimedia.org.
host wikipedia.org 0.04s user 0.02s system 7% cpu 0.780 total
$
time (either the standalone program, or the Bash builtin when the shell is running in POSIX mode and it is invoked as time -p) reports to standard error output.
time -p
Portable scripts should use time -p mode, which uses a different output format, but which is consistent with various implementations:
$ time -p sha256sum /bin/ls
12477deb0e25209768cbd79328f943a7ea8533ece70256cdea96fae0ae34d1cc /bin/ls
real 0.00
user 0.00
sys 0.00
$
Implementations
GNU time
Current versions of GNU time, report more than just a time by default:
$ /usr/bin/time sha256sum /bin/ls
12477deb0e25209768cbd79328f943a7ea8533ece70256cdea96fae0ae34d1cc /bin/ls
0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 2156maxresident)k
0inputs+0outputs (0major+96minor)pagefaults 0swaps
$
The format of the output of GNU time can be adjusted using the TIME environment variable, and it can include information other than the execution time (e.g. memory usage). This behavior is not available in general POSIX-compliant time, or when executing as time -p.
Documentation of this can usually be accessed using man 1 time.
Method of operation
According to the source code of the GNU implementation of time, most information shown by time is derived from the wait3 system call. On systems that do not have a wait3 call that returns status information, the times system call is used instead.
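A heavily simplified sketch in C of this method of operation (our own illustration, not GNU time's actual source; the output loosely mimics time -p and goes to standard error, as described above):

#define _DEFAULT_SOURCE   /* wait3() is a BSD interface; glibc may require this */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/wait.h>

int main(int argc, char *argv[]) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    struct timeval start, end;
    gettimeofday(&start, NULL);        /* wall-clock start */

    pid_t pid = fork();
    if (pid == 0) {                    /* child: run the command */
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }

    int status;
    struct rusage ru;
    wait3(&status, 0, &ru);            /* reaps the child, fills in its CPU usage */
    gettimeofday(&end, NULL);          /* wall-clock end */

    double real = (end.tv_sec - start.tv_sec)
                + (end.tv_usec - start.tv_usec) / 1e6;
    fprintf(stderr, "real %.2f\n", real);
    fprintf(stderr, "user %.2f\n", ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6);
    fprintf(stderr, "sys %.2f\n",  ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6);
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}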
Bash
In the popular Unix shell Bash, time is a special keyword that can be put before a pipeline (or a single command). It measures the time of the entire pipeline, not just the first command, uses a different default format, and prints an empty line before reporting the times:
$ time seq 10000000 | wc -l
10000000
real 0m0.078s
user 0m0.116s
sys 0m0.029s
$
The reported time is the combined time used by both seq and wc -l. The format of the output can be adjusted using the TIMEFORMAT variable.
The time keyword is not a builtin but a special keyword, and cannot be treated as a function or command. It also ignores pipeline redirections (even when executed as time -p, unless the entire Bash shell is run in "POSIX mode").
Documentation of this can be accessed using man 1 bash, or within bash itself using help time.
See also
System time
Cron process for scheduling jobs to run at a particular time
TIME (command)
References
Unix SUS2008 utilities
Unix process- and task-management-related software
Inferno (operating system) commands | Time (Unix) | [
"Technology"
] | 1,115 | [
"Computing commands",
"Inferno (operating system) commands"
] |
576,510 | https://en.wikipedia.org/wiki/Durham%20tube | Durham tubes are used in microbiology to detect production of gas by microorganisms. They are simply smaller test tubes inserted upside down in another test tube so they are freely movable. The culture media to be tested is then added to the larger tube and sterilized, which also eliminates the initial air gap produced when the tube is inserted upside down. The culture media typically contains a single substance to be tested with the organism, such as to determine whether an organism can ferment a particular carbohydrate. After inoculation and incubation, any gas that is produced will form a visible gas bubble inside the small tube. Litmus solution can also be added to the culture media to give a visual representation of pH changes that occur during the production of gas. The method was first reported in 1898 by British microbiologist Herbert Durham.
One limitation of the Durham tube is that it does not allow precise determination of the type of gas produced within the inner tube, or measurement of the quantity of gas produced. However, Durham argued that quantitative measurements are of limited value because the culture solution absorbs some of the gas in unknown, variable proportions. Additionally, using Durham tubes to provide evidence of fermentation may fail to detect slow- or weakly-fermenting organisms when the resultant carbon dioxide diffuses back into the solution as quickly as it is formed, so a negative test using Durham tubes does not carry decisive physiological significance.
References
Microbiology equipment | Durham tube | [
"Biology"
] | 306 | [
"Microbiology equipment"
] |
576,557 | https://en.wikipedia.org/wiki/Bacteriological%20water%20analysis | Bacteriological water analysis is a method of analysing water to estimate the numbers of bacteria present and, if needed, to find out what sort of bacteria they are. It represents one aspect of water quality. It is a microbiological analytical procedure which uses samples of water and from these samples determines the concentration of bacteria. It is then possible to draw inferences about the suitability of the water for use from these concentrations. This process is used, for example, to routinely confirm that water is safe for human consumption or that bathing and recreational waters are safe to use.
The interpretation and the action trigger levels for different waters vary depending on the use made of the water. Whilst very stringent levels apply to drinking water, more relaxed levels apply to marine bathing waters, where much lower volumes of water are expected to be ingested by users.
Approach
The common feature of all these routine screening procedures is that the primary analysis is for indicator organisms rather than the pathogens that might cause concern. Indicator organisms are bacteria such as non-specific coliforms, Escherichia coli and Pseudomonas aeruginosa that are very commonly found in the human or animal gut and which, if detected, may suggest the presence of sewage. Indicator organisms are used because even when a person is infected with a more pathogenic bacterium, they will still be excreting many millions of times more indicator organisms than pathogens. It is therefore reasonable to surmise that if indicator organism levels are low, then pathogen levels will be very much lower or absent. Judgements as to the suitability of water for use are based on very extensive precedents and relate to the probability of any sample population of bacteria being able to be infective at a reasonable statistical level of confidence.
Analysis is usually performed using culture, biochemical and sometimes optical methods. When indicator organisms levels exceed pre-set triggers, specific analysis for pathogens may then be undertaken and these can be quickly detected (where suspected) using specific culture methods or molecular biology.
Methodologies
The most reliable methods are the direct plate count method and the membrane filtration method. mEndo agar is used in membrane filtration, while VRBA (violet red bile agar) is used in the direct plate count method. VRBA contains bile salts, which promote the growth of Gram-negative organisms while partially, though not completely, inhibiting Gram-positive ones.
These media contain lactose, which is fermented by lactose-fermenting bacteria to produce colonies that can be identified and characterised: lactose fermenters produce colored colonies, while non-fermenters produce colorless ones. Because the analysis is always based on a very small sample taken from a very large volume of water, all methods rely on statistical principles.
Multiple tube method
One of the oldest methods is called the multiple tube method. In this method a measured sub-sample (perhaps 10 ml) is diluted with 100 ml of sterile growth medium and an aliquot of 10 ml is then decanted into each of ten tubes. The remaining 10 ml is then diluted again and the process repeated. At the end of 5 dilutions this produces 50 tubes covering the dilution range of 1:10 through to 1:10000.
The tubes are then incubated at a pre-set temperature for a specified time and at the end of the process the number of tubes with growth in is counted for each dilution. Statistical tables are then used to derive the concentration of organisms in the original sample. This method can be enhanced by using indicator medium which changes colour when acid forming species are present and by including a tiny inverted tube called a Durham tube in each sample tube. The Durham inverted tube catches any gas produced. The production of gas at 37 degrees Celsius is a strong indication of the presence of Escherichia coli.
ATP testing
An ATP test is the process of rapidly measuring active microorganisms in water through detection of adenosine triphosphate (ATP). ATP is a molecule found only in and around living cells, and as such it gives a direct measure of biological concentration and health. ATP is quantified by measuring the light produced through its reaction with the naturally occurring enzyme firefly luciferase using a luminometer. The amount of light produced is directly proportional to the amount of biological energy present in the sample.
Second generation ATP tests are specifically designed for water, wastewater and industrial applications where, for the most part, samples contain a variety of components that can interfere with the ATP assay.
Plate count
The plate count method relies on bacteria growing a colony on a nutrient medium so that the colony becomes visible to the naked eye and the number of colonies on a plate can be counted. To be effective, the dilution of the original sample must be arranged so that on average between 30 and 300 colonies of the target bacterium are grown. Fewer than 30 colonies makes the interpretation statistically unsound whilst greater than 300 colonies often results in overlapping colonies and imprecision in the count. To ensure that an appropriate number of colonies will be generated several dilutions are normally cultured. This approach is widely utilised for the evaluation of the effectiveness of water treatment by the inactivation of representative microbial contaminants such as E. coli following ASTM D5465.
The laboratory procedure involves making serial dilutions of the sample (1:10, 1:100, 1:1000, etc.) in sterile water and cultivating these on nutrient agar in a dish that is sealed and incubated. Typical media include plate count agar for a general count or MacConkey agar to count Gram-negative bacteria such as E. coli. Typically one set of plates is incubated at 22 °C and for 24 hours and a second set at 37 °C for 24 hours. The composition of the nutrient usually includes reagents that resist the growth of non-target organisms and make the target organism easily identified, often by a colour change in the medium. Some recent methods include a fluorescent agent so that counting of the colonies can be automated. At the end of the incubation period the colonies are counted by eye, a procedure that takes a few moments and does not require a microscope as the colonies are typically a few millimetres across.
Membrane filtration
Most modern laboratories use a refinement of the total plate count, in which serial dilutions of the sample are vacuum filtered through purpose-made membrane filters and these filters are themselves laid on nutrient medium within sealed plates. The methodology is otherwise similar to conventional total plate counts. Membranes have a millimetre grid printed on them and can be reliably used to count the number of colonies under a binocular microscope.
Pour plate method
When the analysis is looking for bacterial species that grow poorly in air, the initial analysis is done by mixing serial dilutions of the sample in liquid nutrient agar which is then poured into bottles which are then sealed and laid on their sides to produce a sloping agar surface. Colonies that develop in the body of the medium can be counted by eye after incubation.
The total number of colonies is referred to as the total viable count (TVC). The unit of measurement is cfu/ml (or colony forming units per millilitre) and relates to the original sample. Calculation of this is a multiple of the counted number of colonies multiplied by the dilution used.
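As a worked example with hypothetical figures: if 150 colonies are counted on a plate inoculated with 1 ml of a 1:10,000 dilution, the estimated concentration in the original sample is 150 × 10,000 = 1,500,000 cfu/ml.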
Pathogen analysis
When samples show elevated levels of indicator bacteria, further analysis is often undertaken to look for specific pathogenic bacteria. Species commonly investigated in the temperate zone include Salmonella typhi and Salmonella Typhimurium.
Depending on the likely source of contamination investigation may also extend to organisms such as Cryptosporidium spp.
In tropical areas analysis of Vibrio cholerae is also routinely undertaken.
Types of nutrient media used in analysis
MacConkey agar is culture medium designed to grow Gram-negative bacteria and stain them for lactose fermentation. It contains bile salts (to inhibit most Gram-positive bacteria), crystal violet dye (which also inhibits certain Gram-positive bacteria), neutral red dye (which stains microbes fermenting lactose), lactose and peptone. Alfred Theodore MacConkey developed it while working as a bacteriologist for the Royal Commission on Sewage Disposal in the United Kingdom.
Endo agar contains peptone, lactose, dipotassium phosphate, agar, sodium sulfite, basic fuchsin and was originally developed for the isolation of Salmonella typhi, but is now commonly used in water analysis. As in MacConkey agar, coliform organisms ferment the lactose, and the colonies become red. Non-lactose-fermenting organisms produce clear, colourless colonies against the faint pink background of the medium.
mFC medium is used in membrane filtration and contains selective and differential agents. These include rosolic acid to inhibit bacterial growth in general, except for fecal coliforms, bile salts inhibit non-enteric bacteria and aniline blue indicates the ability of fecal coliforms to ferment lactose to acid that causes a pH change in the medium.
TYEA medium contains tryptone, yeast extract, common salt and L-arabinose per liter of glass-distilled water, and is a non-selective medium usually cultivated at two temperatures (22 and 36 °C) to determine a general level of contamination (a.k.a. colony count).
See also
Water testing
Water quality
References
Aquatic ecology
Water
Microbiology techniques
Water quality indicators | Bacteriological water analysis | [
"Chemistry",
"Biology",
"Environmental_science"
] | 1,945 | [
"Hydrology",
"Water pollution",
"Microbiology techniques",
"Water quality indicators",
"Ecosystems",
"Water",
"Aquatic ecology"
] |
576,635 | https://en.wikipedia.org/wiki/Port%20scanner | A port scanner is an application designed to probe a server or host for open ports. Such an application may be used by administrators to verify security policies of their networks and by attackers to identify network services running on a host and exploit vulnerabilities.
A port scan or portscan is a process that sends client requests to a range of server port addresses on a host, with the goal of finding an active port; this is not a nefarious process in and of itself. The majority of uses of a port scan are not attacks, but rather simple probes to determine services available on a remote machine.
To portsweep is to scan multiple hosts for a specific listening port. The latter is typically used to search for a specific service, for example, an SQL-based computer worm may portsweep looking for hosts listening on TCP port 1433.
TCP/IP basics
The design and operation of the Internet is based on the Internet Protocol Suite, commonly also called TCP/IP. In this system, network services are referenced using two components: a host address and a port number. There are 65535 distinct and usable port numbers, numbered 1 … 65535. (Port zero is not a usable port number.) Most services use one, or at most a limited range of, port numbers.
Some port scanners scan only the most common port numbers, or ports most commonly associated with vulnerable services, on a given host.
The result of a scan on a port is usually generalized into one of three categories:
Open or Accepted: The host sent a reply indicating that a service is listening on the port.
Closed or Denied or Not Listening: The host sent a reply indicating that connections will be denied to the port.
Filtered, Dropped or Blocked: There was no reply from the host.
Open ports present two vulnerabilities of which administrators must be wary:
Security and stability concerns associated with the program responsible for delivering the service - Open ports.
Security and stability concerns associated with the operating system that is running on the host - Open or Closed ports.
Filtered ports do not tend to present vulnerabilities.
Assumptions
All forms of port scanning rely on the assumption that the targeted host is compliant with RFC 793, the TCP specification. Although this is the case most of the time, there is still a chance a host might send back strange packets or even generate false positives when the TCP/IP stack of the host is non-RFC-compliant or has been altered. This is especially true for less common scan techniques that are OS-dependent (FIN scanning, for example). The TCP/IP stack fingerprinting method also relies on these types of different network responses to a specific stimulus to guess the type of operating system the host is running.
Types of scans
TCP scanning
The simplest port scanners use the operating system's network functions and are generally the next option to go to when SYN is not a feasible option (described next). Nmap calls this mode connect scan, named after the Unix connect() system call. If a port is open, the operating system completes the TCP three-way handshake, and the port scanner immediately closes the connection to avoid performing a Denial-of-service attack. Otherwise an error code is returned. This scan mode has the advantage that the user does not require special privileges. However, using the OS network functions prevents low-level control, so this scan type is less common. This method is "noisy", particularly if it is a "portsweep": the services can log the sender IP address and Intrusion detection systems can raise an alarm.
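A bare-bones sketch of a connect scan in C (illustrative only; a real scanner such as Nmap adds non-blocking sockets, timeouts and parallelism, and the target address here is a placeholder from the documentation range):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    const char *target = "192.0.2.1";  /* placeholder; substitute the host to probe */
    for (int port = 1; port <= 1024; port++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            continue;

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, target, &addr.sin_addr);

        /* A completed three-way handshake means the port is open. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0)
            printf("port %d open\n", port);

        close(fd);  /* close immediately, as described above */
    }
    return 0;
}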
SYN scanning
SYN scan is another form of TCP scanning. Rather than using the operating system's network functions, the port scanner generates raw IP packets itself, and monitors for responses. This scan type is also known as "half-open scanning", because it never actually opens a full TCP connection. The port scanner generates a SYN packet. If the target port is open, it will respond with a SYN-ACK packet. The scanner host responds with an RST packet, closing the connection before the handshake is completed. If the port is closed but unfiltered, the target will instantly respond with an RST packet.
The use of raw networking has several advantages, giving the scanner full control of the packets sent and the timeout for responses, and allowing detailed reporting of the responses. There is debate over which scan is less intrusive on the target host. SYN scan has the advantage that the individual services never actually receive a connection. However, the RST during the handshake can cause problems for some network stacks, in particular simple devices like printers. There are no conclusive arguments either way.
UDP scanning
UDP scanning is also possible, although there are technical challenges. UDP is a connectionless protocol so there is no equivalent to a TCP SYN packet. However, if a UDP packet is sent to a port that is not open, the system will respond with an ICMP port unreachable message. Most UDP port scanners use this scanning method, and use the absence of a response to infer that a port is open. However, if a port is blocked by a firewall, this method will falsely report that the port is open. If the port unreachable message is blocked, all ports will appear open. This method is also affected by ICMP rate limiting.
An alternative approach is to send application-specific UDP packets, hoping to generate an application layer response. For example, sending a DNS query to port 53 will result in a response, if a DNS server is present. This method is much more reliable at identifying open ports. However, it is limited to scanning ports for which an application specific probe packet is available. Some tools (e.g., Nmap, Unionscan) generally have probes for less than 20 UDP services, while some commercial tools have as many as 70. In some cases, a service may be listening on the port, but configured not to respond to the particular probe packet.
ACK scanning
ACK scanning is one of the more unusual scan types, as it does not exactly determine whether the port is open or closed, but whether the port is filtered or unfiltered. This is especially good when attempting to probe for the existence of a firewall and its rulesets. Simple packet filtering will allow established connections (packets with the ACK bit set), whereas a more sophisticated stateful firewall might not.
Window scanning
Rarely used because of its outdated nature, window scanning is fairly untrustworthy in determining whether a port is opened or closed. It generates the same packet as an ACK scan, but checks whether the window field of the packet has been modified. When the packet reaches its destination, a design flaw attempts to create a window size for the packet if the port is open, flagging the window field of the packet with 1's before it returns to the sender. Using this scanning technique with systems that no longer support this implementation returns 0's for the window field, labeling open ports as closed.
FIN scanning
Since SYN scans are not surreptitious enough, firewalls are, in general, configured to scan for and block incoming SYN packets. FIN packets can bypass such firewalls without modification. Closed ports reply to a FIN packet with the appropriate RST packet, whereas open ports ignore the packet. This is typical behavior due to the nature of TCP, and is in some ways an inescapable downfall.
Other scan types
Some more unusual scan types exist. These have various limitations and are not widely used. Nmap supports most of these.
X-mas and Null Scan - are similar to FIN scanning, but:
X-mas sends packets with FIN, URG and PUSH flags turned on like a Christmas tree
Null sends a packet with no TCP flags set
Protocol scan - determines what IP level protocols (TCP, UDP, GRE, etc.) are enabled.
Proxy scan - a proxy (SOCKS or HTTP) is used to perform the scan. The target will see the proxy's IP address as the source. This can also be done using some FTP servers.
Idle scan - Another method of scanning without revealing one's IP address, taking advantage of the predictable IP ID flaw.
CatSCAN - Checks ports for erroneous packets.
ICMP scan - determines if a host responds to ICMP requests, such as echo (ping), netmask, etc.
Port filtering by ISPs
Many Internet service providers restrict their customers' ability to perform port scans to destinations outside of their home networks. This is usually covered in the terms of service or acceptable use policy to which the customer must agree. Some ISPs implement packet filters or transparent proxies that prevent outgoing service requests to certain ports. For example, if an ISP provides a transparent HTTP proxy on port 80, port scans of any address will appear to have port 80 open, regardless of the target host's actual configuration.
Security
The information gathered by a port scan has many legitimate uses including network inventory and the verification of the security of a network. Port scanning can, however, also be used to compromise security. Many exploits rely upon port scans to find open ports and send specific data patterns in an attempt to trigger a condition known as a buffer overflow. Such behavior can compromise the security of a network and the computers therein, resulting in the loss or exposure of sensitive information and the ability to do work.
The threat level caused by a port scan can vary greatly according to the method used to scan, the kind of port scanned, its number, the value of the targeted host and the administrator who monitors the host. But a port scan is often viewed as a first step for an attack, and is therefore taken seriously because it can disclose much sensitive information about the host.
Despite this, the probability of a port scan alone followed by a real attack is small. The probability of an attack is much higher when the port scan is associated with a vulnerability scan.
Legal implications
Because of the inherently open and decentralized architecture of the Internet, lawmakers have struggled since its creation to define legal boundaries that permit effective prosecution of cybercriminals. Cases involving port scanning activities are an example of the difficulties encountered in judging violations. Although these cases are rare, most of the time the legal process involves proving that an intent to commit a break-in or unauthorized access existed, rather than just the performance of a port scan.
In June 2003, an Israeli, Avi Mizrahi, was accused by the Israeli authorities of the offense of attempting the unauthorized access of computer material. He had port scanned the Mossad website. He was acquitted of all charges on February 29, 2004. The judge ruled that these kinds of actions should not be discouraged when they are performed in a positive way.
A 17-year-old Finn was accused of attempted computer break-in by a major Finnish bank. On April 9, 2003, he was convicted of the charge by the Supreme Court of Finland and ordered to pay US$12,000 for the expense of the forensic analysis made by the bank. In 1998, he had port scanned the bank network in an attempt to access the closed network, but failed to do so.
In 2006, the UK Parliament passed an amendment to the Computer Misuse Act 1990 such that a person is guilty of an offence who "makes, adapts, supplies or offers to supply any article knowing that it is designed or adapted for use in the course of or in connection with an offence under section 1 or 3 [of the CMA]". Nevertheless, the scope of this amendment is unclear, and it has been widely criticized by security experts on that account.
Germany, with Strafgesetzbuch §§ 202a, 202b and 202c, has a similar law, and the Council of the European Union has issued a press release stating that it plans to pass a similar, albeit more precise, one.
United States
Moulton v. VC3
In December 1999, Scott Moulton was arrested by the FBI and accused of attempted computer trespassing under Georgia's Computer Systems Protection Act and the United States' Computer Fraud and Abuse Act. At this time, his IT service company had an ongoing contract with Cherokee County of Georgia to maintain and upgrade the 911 center security. He performed several port scans on Cherokee County servers to check their security and eventually port scanned a web server monitored by another IT company, provoking a dispute which ended up before a tribunal. He was acquitted in 2000, with judge Thomas Thrash ruling in Moulton v. VC3 (N.D.Ga. 2000) that there was no damage impairing the integrity and availability of the network.
See also
Content Vectoring Protocol
List of TCP and UDP port numbers
Service scan
References
External links
Teo, Lawrence (December 2000). Network Probes Explained: Understanding Port Scans and Ping Sweeps. Linux Journal. Retrieved September 5, 2009, from Linuxjournal.com
Computer security software
Computer security exploits
Internet Protocol based network software
Network analyzers
Port scanners | Port scanner | [
"Technology",
"Engineering"
] | 2,677 | [
"Cybersecurity engineering",
"Computer security software",
"Computer security exploits"
] |
576,646 | https://en.wikipedia.org/wiki/2.5D | 2.5D (basic pronunciation two-and-a-half dimensional) perspective refers to gameplay or movement in a video game or virtual reality environment that is restricted to a two-dimensional (2D) plane with little to no access to a third dimension in a space that otherwise appears to be three-dimensional and is often simulated and rendered in a 3D digital environment.
This is similar but different from pseudo-3D perspective (sometimes called three-quarter view when the environment is portrayed from an angled top-down perspective), which refers to 2D graphical projections and similar techniques used to cause images or scenes to simulate the appearance of being three-dimensional (3D) when in fact they are not.
By contrast, games, spaces or perspectives that are simulated and rendered in 3D and used in 3D level design are said to be true 3D, and 2D rendered games made to appear as 2D without approximating a 3D image are said to be true 2D.
Common in video games, 2.5D projections have also been useful in geographic visualization (GVIS) to help understand visual-cognitive spatial representations or 3D visualization.
The terms three-quarter perspective and three-quarter view trace their origins to the three-quarter profile in portraiture and facial recognition, which depicts a person's face that is partway between a frontal view and a side view.
Computer graphics
Axonometric and oblique projection
In axonometric projection and oblique projection, two forms of parallel projection, the viewpoint is rotated slightly to reveal other facets of the environment than what are visible in a top-down perspective or side view, thereby producing a three-dimensional effect. An object is "considered to be in an inclined position resulting in foreshortening of all three axes", and the image is a "representation on a single plane (as a drawing surface) of a three-dimensional object placed at an angle to the plane of projection." Lines perpendicular to the plane become points, lines parallel to the plane have true length, and lines inclined to the plane are foreshortened.
They are popular camera perspectives among 2D video games, most commonly those released for 16-bit or earlier and handheld consoles, as well as in later strategy and role-playing video games. The advantage of these perspectives is that they combine the visibility and mobility of a top-down game with the character recognizability of a side-scrolling game. Thus the player can be presented an overview of the game world in the ability to see it from above, more or less, and with additional details in artwork made possible by using an angle: Instead of showing a humanoid in top-down perspective, as a head and shoulders seen from above, the entire body can be drawn when using a slanted angle; turning a character around would reveal how it looks from the sides, the front and the back, while the top-down perspective will display the same head and shoulders regardless.
There are three main divisions of axonometric projection: isometric (equal measure), dimetric (symmetrical and unsymmetrical), and trimetric (single-view or only two sides). The most common of these drawing types in engineering drawing is isometric projection. This projection is tilted so that all three axes create equal angles at intervals of 120 degrees. The result is that all three axes are equally foreshortened. In video games, a form of dimetric projection with a 2:1 pixel ratio is more common due to the problems of anti-aliasing and square pixels found on most computer monitors.
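The 2:1 dimetric mapping reduces to two lines of integer arithmetic when converting grid tiles to screen pixels. A sketch with illustrative tile dimensions:

```python
def grid_to_screen(gx, gy, gz=0, tile_w=64, tile_h=32):
    """Classic 2:1 "isometric" game projection: grid cell to screen pixel.

    gz lifts the point vertically (e.g. elevated terrain); tile_w:tile_h = 2:1.
    """
    sx = (gx - gy) * (tile_w // 2)
    sy = (gx + gy) * (tile_h // 2) - gz
    return sx, sy

print(grid_to_screen(0, 0))  # (0, 0)
print(grid_to_screen(1, 0))  # (32, 16): one tile "east", half a tile down-right
print(grid_to_screen(0, 1))  # (-32, 16): one tile "south", half a tile down-left
```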
In oblique projection typically all three axes are shown without foreshortening. All lines parallel to the axes are drawn to scale, and diagonals and curved lines are distorted. One tell-tale sign of oblique projection is that the face pointed toward the camera retains its right angles with respect to the image plane.
Two examples of oblique projection are Ultima VII: The Black Gate and Paperboy. Examples of axonometric projection include SimCity 2000, and the role-playing games Diablo and Baldur's Gate.
Billboarding
In three-dimensional scenes, the term billboarding is applied to a technique in which objects are sometimes represented by two-dimensional images applied to a single polygon which is typically kept perpendicular to the line of sight. The name refers to the fact that objects are seen as if drawn on a billboard. This technique was commonly used in early 1990s video games when consoles did not have the hardware power to render fully 3D objects. This is also known as a backdrop. This can be used to good effect for a significant performance boost when the geometry is sufficiently distant that it can be seamlessly replaced with a 2D sprite. In games, this technique is most frequently applied to objects such as particles (smoke, sparks, rain) and low-detail vegetation. It has since become mainstream, and is found in many games such as Rome: Total War, where it is exploited to simultaneously display thousands of individual soldiers on a battlefield. Early examples include early first-person shooters like Marathon Trilogy, Wolfenstein 3D, Doom, Hexen and Duke Nukem 3D as well as racing games like Carmageddon and Super Mario Kart and platformers like Super Mario 64.
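A screen-aligned billboard is usually built by spanning a quad along the camera's right and up axes, so the sprite faces the viewer regardless of its position. A minimal sketch, assuming the axis vectors are already normalized:

```python
def billboard_corners(center, cam_right, cam_up, width, height):
    """Four corners of a camera-facing quad, in counter-clockwise order."""
    cx, cy, cz = center
    rx, ry, rz = cam_right
    ux, uy, uz = cam_up
    hw, hh = width / 2.0, height / 2.0
    return [
        (cx - rx*hw - ux*hh, cy - ry*hw - uy*hh, cz - rz*hw - uz*hh),  # bottom-left
        (cx + rx*hw - ux*hh, cy + ry*hw - uy*hh, cz + rz*hw - uz*hh),  # bottom-right
        (cx + rx*hw + ux*hh, cy + ry*hw + uy*hh, cz + rz*hw + uz*hh),  # top-right
        (cx - rx*hw + ux*hh, cy - ry*hw + uy*hh, cz - rz*hw + uz*hh),  # top-left
    ]
```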
Skyboxes and skydomes
Skyboxes and skydomes are methods used to easily create a background to make a game level look bigger than it really is. If the level is enclosed in a cube, the sky, distant mountains, distant buildings, and other unreachable objects are rendered onto the cube's faces using a technique called cube mapping, thus creating the illusion of distant three-dimensional surroundings. A skydome employs the same concept but uses a sphere or hemisphere instead of a cube.
As a viewer moves through a 3D scene, it is common for the skybox or skydome to remain stationary with respect to the viewer. This technique gives the skybox the illusion of being very far away since other objects in the scene appear to move, while the skybox does not. This imitates real life, where distant objects such as clouds, stars and even mountains appear to be stationary when the viewpoint is displaced by relatively small distances. Effectively, everything in a skybox will always appear to be infinitely distant from the viewer. This consequence of skyboxes dictates that designers should take care not to include images of discrete objects in the textures of a skybox, since the viewer may be able to perceive the inconsistencies of those objects' sizes as the scene is traversed.
Scaling along the Z axis
In some games, sprites are scaled larger or smaller depending on their distance from the player, producing the illusion of motion along the Z (forward) axis. Sega's 1986 video game Out Run, which runs on the Sega OutRun arcade system board, is a good example of this technique.
In Out Run, the player drives a Ferrari into the depth of the game window. The palms on the left and right side of the street are the same bitmap, but have been scaled to different sizes, creating the illusion that some are closer than others. The angles of movement are "left and right" and "into the depth" (while still technically capable of it, this game did not allow making a U-turn or going into reverse, therefore moving "out of the depth", as this did not suit the high-speed gameplay and tense time limit). The view is comparable to that which a driver would have in reality when driving a car. The position and size of any billboard is generated by a (complete 3D) perspective transformation, as are the vertices of the poly-line representing the center of the street. Often the center of the street is stored as a spline and sampled in a way that on straight streets every sampling point corresponds to one scan-line on the screen. Hills and curves lead to multiple points on one line, and one of them has to be chosen; alternatively, a line may be left without any point and has to be interpolated linearly from the adjacent lines. Very memory-intensive billboards are used in Out Run to draw corn-fields and water waves which are wider than the screen even at the largest viewing distance, and also in Test Drive to draw trees and cliffs.
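The arithmetic behind such sprite scaling is an ordinary pinhole projection: both a sprite's screen position and its size shrink in proportion to 1/z. A sketch with illustrative constants, not taken from any actual game:

```python
def project(world_x, world_y, world_z, focal=256, cx=160, cy=100):
    """Project a point at depth world_z onto the screen; returns (x, y, scale)."""
    s = focal / world_z        # scale factor: doubling the depth halves the size
    return cx + world_x * s, cy + world_y * s, s

# The same palm-tree bitmap drawn at three depths:
for z in (256, 512, 1024):
    x, y, s = project(-80, 40, z)
    print(f"z={z}: draw at ({x:.0f}, {y:.0f}) scaled to {s:.2f}x")
```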
Drakkhen was notable for being among the first role-playing video games to feature a three-dimensional playing field. However, it did not employ a conventional 3D game engine, instead emulating one using character-scaling algorithms. The player's party travels overland on a flat terrain made up of vectors, on which 2D objects are zoomed. Drakkhen features an animated day-night cycle, and the ability to wander freely about the game world, both rarities for a game of its era. This type of engine was later used in the game Eternam.
Some mobile games that were released on the Java ME platform, such as the mobile version of Asphalt: Urban GT and Driver: L.A. Undercover, used this method for rendering the scenery. While the technique is similar to some of Sega's arcade games, such as Thunder Blade and Cool Riders and the 32-bit version of Road Rash, it uses polygons instead of sprite scaling for buildings and certain objects though it looks flat shaded. Later mobile games (mainly from Gameloft), such as Asphalt 4: Elite Racing and the mobile version of Iron Man 2, uses a mix of sprite scaling and texture mapping for some buildings and objects.
Parallax scrolling
Parallaxing refers to when a collection of 2D sprites or layers of sprites are made to move independently of each other and/or the background to create a sense of added depth. This depth cue is created by relative motion of layers. The technique grew out of the multiplane camera technique used in traditional animation since the 1940s. This type of graphical effect was first used in the 1982 arcade game Moon Patrol.
Examples include the skies in Rise of the Triad, the arcade version of Rygar, Sonic the Hedgehog, Street Fighter II, Shadow of the Beast and Dracula X Chronicles, as well as Super Mario World.
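In code, parallax scrolling amounts to translating each layer by a fraction of the camera's motion, with distant layers given smaller factors. The factors below are illustrative:

```python
def layer_offsets(camera_x, depth_factors=(0.2, 0.5, 1.0)):
    """Horizontal scroll offset per layer; smaller factors read as farther away."""
    return [camera_x * f for f in depth_factors]

print(layer_offsets(100))  # [20.0, 50.0, 100.0] for sky, hills, foreground
```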
Mode 7
Mode 7, a display system effect that included rotation and scaling, allowed for a 3D effect while moving in any direction without any actual 3D models, and was used to simulate 3D graphics on the SNES.
Ray casting
Ray casting is a first person pseudo-3D technique in which a ray for every vertical slice of the screen is sent from the position of the camera. These rays shoot out until they hit an object or wall, and that part of the wall is rendered in that vertical screen slice. Due to the limited camera movement and internally 2D playing field, this is often considered 2.5D.
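A deliberately naive sketch of the idea follows; real ray casters such as the one in Wolfenstein 3D step along grid boundaries (a DDA) rather than in small fixed increments:

```python
import math

GRID = ["#####",
        "#...#",
        "#.#.#",
        "#...#",
        "#####"]   # '#' = wall, '.' = empty floor

def cast_ray(px, py, angle, step=0.02, max_dist=10.0):
    """March along one ray until it enters a wall cell; the distance
    determines the height of the wall slice drawn in that screen column."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if GRID[int(y)][int(x)] == "#":
            return dist
        dist += step
    return max_dist

# One ray per column of a 20-column "screen", fanned across a 60-degree view:
fov, columns = math.pi / 3, 20
heights = []
for col in range(columns):
    a = -fov / 2 + fov * col / columns   # facing along +x from an open cell
    heights.append(int(10 / cast_ray(1.5, 1.5, a)))  # nearer walls: taller slices
print(heights)
```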
Bump, normal and parallax mapping
Bump mapping, normal mapping and parallax mapping are techniques applied to textures in 3D rendering applications such as video games to simulate bumps and wrinkles on the surface of an object without using more polygons. To the end user, this means that textures such as stone walls will have more apparent depth and thus greater realism with less of an influence on the performance of the simulation.
Bump mapping is achieved by perturbing the surface normals of an object and using a grayscale image and the perturbed normal during illumination calculations. The result is an apparently bumpy surface rather than a perfectly smooth surface although the surface of the underlying object is not actually changed. Bump mapping was introduced by Blinn in 1978.
In normal mapping, the unit vector from the shading point to the light source is dotted with the unit vector normal to that surface, and the dot product is the intensity of the light on that surface. Imagine a polygonal model of a sphere—you can only approximate the shape of the surface. By using a 3-channel bitmapped image textured across the model, more detailed normal vector information can be encoded. Each channel in the bitmap corresponds to a spatial dimension (x, y and z). These spatial dimensions are relative to a constant coordinate system for object-space normal maps, or to a smoothly varying coordinate system (based on the derivatives of position with respect to texture coordinates) in the case of tangent-space normal maps. This adds much more detail to the surface of a model, especially in conjunction with advanced lighting techniques.
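The per-texel work is exactly the dot product described above. A sketch that decodes an RGB normal-map texel using the common 0..255 to −1..1 convention and lights it (values are illustrative):

```python
def decode_normal(r, g, b):
    """Map an 8-bit RGB texel to a roughly unit normal in [-1, 1]^3."""
    return (r / 127.5 - 1.0, g / 127.5 - 1.0, b / 127.5 - 1.0)

def shade(normal, light_dir):
    """Lambertian intensity: clamped dot product of normal and light direction."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    return max(0.0, nx * lx + ny * ly + nz * lz)

print(shade(decode_normal(128, 128, 255), (0.0, 0.0, 1.0)))  # ~1.0: faces the light
print(shade(decode_normal(200, 128, 180), (0.0, 0.0, 1.0)))  # < 1.0: tilted away
```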
Parallax mapping (also called offset mapping or virtual displacement mapping) is an enhancement of the bump mapping and normal mapping techniques implemented by displacing the texture coordinates at a point on the rendered polygon by a function of the view angle in tangent space (the angle relative to the surface normal) and the value of the height map at that point. At steeper view-angles, the texture coordinates are displaced more, giving the illusion of depth due to parallax effects as the view changes.
Film and animation techniques
The term is also used to describe an animation effect commonly used in music videos and, more frequently, title sequences. Brought to wide attention by the motion picture The Kid Stays in the Picture, an adaptation of film producer Robert Evans's memoir, it involves the layering and animating of two-dimensional pictures in three-dimensional space. Earlier examples of this technique include Liz Phair's music video "Down" (directed by Rodney Ascher) and "A Special Tree" (directed by musician Giorgio Moroder).
On a larger scale, the 2018 movie In Saturn's Rings used over 7.5 million separate two-dimensional images, captured in space or by telescopes, which were composited and moved using multi-plane animation techniques.
Graphic design
The term also refers to an often-used effect in the design of icons and graphical user interfaces (GUIs), where a slight 3D illusion is created by the presence of a virtual light source to the left (or in some cases right) side, and above a person's computer monitor. The light source itself is always invisible, but its effects are seen in the lighter colors for the top and left side, simulating reflection, and the darker colours to the right and below of such objects, simulating shadow.
An advanced version of this technique can be found in some specialised graphic design software, such as Pixologic's ZBrush. The idea is that the program's canvas represents a normal 2D painting surface, but that the data structure that holds the pixel information is also able to store information with respect to a z-index, as well material settings, specularity, etc. Again, with this data it is thus possible to simulate lighting, shadows, and so forth.
History
The first video games that used pseudo-3D were primarily arcade games, the earliest known examples dating back to the mid-1970s, when they began using microprocessors. In 1975, Taito released Interceptor, an early first-person shooter and combat flight simulator that involved piloting a jet fighter, using an eight-way joystick to aim with a crosshair and shoot at enemy aircraft that move in formations of two and increase/decrease in size depending on their distance to the player. In 1976, Sega released Moto-Cross, an early black-and-white motorbike racing video game, based on the motocross competition, that was most notable for introducing an early three-dimensional third-person perspective. Later that year, Sega-Gremlin re-branded the game as Fonz, as a tie-in for the popular sitcom Happy Days. Both versions of the game displayed a constantly changing forward-scrolling road and the player's bike in a third-person perspective where objects nearer to the player are larger than those nearer to the horizon, and the aim was to steer the vehicle across the road, racing against the clock, while avoiding any on-coming motorcycles or driving off the road. That same year also saw the release of two arcade games that extended the car driving subgenre into three dimensions with a first-person perspective: Sega's Road Race, which displayed a constantly changing forward-scrolling S-shaped road with two obstacle race cars moving along the road that the player must avoid crashing while racing against the clock, and Atari's Night Driver, which presented a series of posts by the edge of the road though there was no view of the road or the player's car. Games using vector graphics had an advantage in creating pseudo-3D effects. 1979's Speed Freak recreated the perspective of Night Driver in greater detail.
In 1979, Nintendo debuted Radar Scope, a shoot 'em up that introduced a three-dimensional third-person perspective to the genre, imitated years later by shooters such as Konami's Juno First and Activision's Beamrider. In 1980, Atari's Battlezone was a breakthrough for pseudo-3D gaming, recreating a 3D perspective with unprecedented realism, though the gameplay was still planar. It was followed up that same year by Red Baron, which used scaling vector images to create a forward scrolling rail shooter.
Sega's arcade shooter Space Tactics, released in 1980, allowed players to take aim using crosshairs and shoot lasers into the screen at enemies coming towards them, creating an early 3D effect. It was followed by other arcade shooters with a first-person perspective during the early 1980s, including Taito's 1981 release Space Seeker, and Sega's Star Trek in 1982. Sega's SubRoc-3D in 1982 also featured a first-person perspective and introduced the use of stereoscopic 3-D through a special eyepiece. Sega's Astron Belt in 1983 was the first laserdisc video game, using full-motion video to display the graphics from a first-person perspective. Third-person rail shooters were also released in arcades at the time, including Sega's Tac/Scan in 1982, Nippon's Ambush in 1983, Nichibutsu's Tube Panic in 1983, and Sega's 1982 release Buck Rogers: Planet of Zoom, notable for its fast pseudo-3D scaling and detailed sprites.
In 1981, Sega's Turbo was the first racing game to use sprite scaling with full-colour graphics. Pole Position by Namco is one of the first racing games to use the trailing camera effect that is now so familiar. In this particular example, the effect was produced by linescroll—the practice of scrolling each line independently in order to warp an image. In this case, the warping would simulate curves and steering. To make the road appear to move towards the player, per-line color changes were used, though many console versions opted for palette animation instead.
Zaxxon, a shooter introduced by Sega in 1982, was the first game to use isometric axonometric projection, from which its name is derived. Though Zaxxon's playing field is semantically 3D, the game has many constraints which classify it as 2.5D: a fixed point of view, scene composition from sprites, and movements such as bullet shots restricted to straight lines along the axes. It was also one of the first video games to display shadows. The following year, Sega released the first pseudo-3D isometric platformer, Congo Bongo. Another early pseudo-3D platform game released that year was Konami's Antarctic Adventure, where the player controls a penguin in a forward-scrolling third-person perspective while having to jump over pits and obstacles. It was one of the earliest pseudo-3D games available on a computer, released for the MSX in 1983. That same year, Irem's Moon Patrol was a side-scrolling run & gun platform-shooter that introduced the use of layered parallax scrolling to give a pseudo-3D effect. In 1985, Space Harrier introduced Sega's "Super Scaler" technology that allowed pseudo-3D sprite-scaling at high frame rates, with the ability to scale 32,000 sprites and fill a moving landscape with them.
The first original home console game to use pseudo-3D, and also the first to use multiple camera angles mirrored on television sports broadcasts, was Intellivision World Series Baseball (1983) by Don Daglow and Eddie Dombrower, published by Mattel. Its television sports style of display was later adopted by 3D sports games and is now used by virtually all major team sports titles. In 1984, Sega ported several pseudo-3D arcade games to the Sega SG-1000 console, including a smooth conversion of the third-person pseudo-3D rail shooter Buck Rogers: Planet of Zoom.
By 1989, 2.5D representations were surfaces drawn with depth cues and a part of graphic libraries like GINO. 2.5D was also used in terrain modeling with software packages such as ISM from Dynamic Graphics, GEOPAK from Uniras and the Intergraph DTM system. 2.5D surface techniques gained popularity within the geography community because of their ability to visualize the normal thickness-to-area ratio used in many geographic models; this ratio was very small and reflected the thinness of the object in relation to its width, which made the object realistic in a specific plane. These representations were axiomatic in that the entire subsurface domain was not used, or the entire domain could not be reconstructed; therefore, they used only a surface, and a surface is one aspect of an object, not its full 3D identity.
The specific term "two-and-a-half-D" was used as early as 1994 by Warren Spector in an interview in the North American premiere issue of PC Gamer magazine. At the time, the term was understood to refer specifically to first-person shooters like Wolfenstein 3D and Doom, to distinguish them from System Shock's "true" 3D engine.
With the advent of consoles and computer systems that were able to handle several thousand polygons (the most basic element of 3D computer graphics) per second and the usage of 3D specialized graphics processing units, pseudo-3D became obsolete. But even today, there are computer systems in production, such as cellphones, which are often not powerful enough to display true 3D graphics, and therefore use pseudo-3D for that purpose. Many games from the 1980s' pseudo-3D arcade era and 16-bit console era are ported to these systems, giving the manufacturers the possibility to earn revenues from games that are several decades old.
The resurgence of 2.5D or visual analysis, in natural and earth science, has increased the role of computer systems in the creation of spatial information in mapping. GVIS has made real the search for unknowns, real-time interaction with spatial data, and control over map display and has paid particular attention to three-dimensional representations. Efforts in GVIS have attempted to expand higher dimensions and make them more visible; most efforts have focused on "tricking" vision into seeing three dimensions in a 2D plane. Much like 2.5D displays where the surface of a three-dimensional object is represented but locations within the solid are distorted or not accessible.
Technical aspects and generalizations
The reason for using pseudo-3D instead of "real" 3D computer graphics is that the system that has to simulate a 3D-looking graphic is not powerful enough to handle the calculation-intensive routines of 3D computer graphics, yet is capable of using tricks of modifying 2D graphics like bitmaps. One of these tricks is to stretch a bitmap more and more, therefore making it larger with each step, as to give the effect of an object coming closer and closer towards the player.
Even simple shading and size of an image could be considered pseudo-3D, as shading makes it look more realistic. If the light in a 2D game were 2D, it would only be visible on the outline, and because outlines are often dark, they would not be very clearly visible. However, any visible shading would indicate the usage of pseudo-3D lighting and that the image uses pseudo-3D graphics. Changing the size of an image can cause the image to appear to be moving closer or further away, which could be considered simulating a third dimension.
Dimensions are the variables of the data and can be mapped to specific locations in space; 2D data can be given 3D volume by adding a value to the x, y, or z plane. "Assigning height to 2D regions of a topographic map" associating every 2D location with a height/elevation value creates a 2.5D projection; this is not considered a "true 3D representation", however is used like 3D visual representation to "simplify visual processing of imagery and the resulting spatial cognition".
See also
3D computer graphics
Bas-relief
Cel-shaded animation
Flash animation
Head-coupled perspective
Isometric graphics in video games
Limited animation
List of stereoscopic video games
Live2D
Ray casting
Trompe-l'œil
Vector graphics
References
Video game development
Video game graphics
Dimension | 2.5D | [
"Physics"
] | 5,047 | [
"Geometric measurement",
"Dimension",
"Physical quantities",
"Theory of relativity"
] |
576,681 | https://en.wikipedia.org/wiki/Speedometer | A speedometer or speed meter is a gauge that measures and displays the instantaneous speed of a vehicle. Now universally fitted to motor vehicles, they started to be available as options in the early 20th century, and as standard equipment from about 1910 onwards. Other vehicles may use devices analogous to the speedometer with different means of sensing speed, eg. boats use a pit log, while aircraft use an airspeed indicator.
Charles Babbage is credited with creating an early type of speedometer, which was usually fitted to locomotives.
The electric speedometer was invented by the Croat Josip Belušić in 1888 and was originally called a velocimeter.
History
The speedometer was originally patented by Josip Belušić (Giuseppe Bellussich) in 1888. He presented his invention at the 1889 Exposition Universelle in Paris. His invention had a pointer and a magnet, using electricity to work.
German inventor Otto Schultze patented his version (which, like Belušić's, ran on eddy currents) on 7 October 1902.
Operation
Mechanical
Many speedometers use a rotating flexible cable driven by gearing linked to the vehicle's transmission. The early Volkswagen Beetle and many motorcycles, however, use a cable driven from a front wheel.
Some early mechanical speedometers operated on the governor principle where a rotating weight acting against a spring moved further out as the speed increased, similar to the governor used on steam engines. This movement was transferred to the pointer to indicate speed.
This was followed by the chronometric speedometer, in which the distance traveled was measured over a precise interval of time (some Smiths speedometers used 3/4 of a second) measured by an escapement. This was transferred to the speedometer pointer. The chronometric speedometer is tolerant of vibration and was used in motorcycles up to the 1970s.
When the vehicle is in motion, a speedometer gear assembly turns a speedometer cable, which then turns the speedometer mechanism itself. A small permanent magnet affixed to the speedometer cable interacts with a small aluminium cup (called a speedcup) attached to the shaft of the pointer on the analogue speedometer instrument. As the magnet rotates near the cup, the changing magnetic field produces eddy current in the cup, which itself produces another magnetic field. The effect is that the magnet exerts a torque on the cup, "dragging" it, and thus the speedometer pointer, in the direction of its rotation with no mechanical connection between them.
The pointer shaft is held toward zero by a fine torsion spring. The torque on the cup increases with the speed of rotation of the magnet. Thus an increase in the speed of the car will twist the cup and speedometer pointer against the spring. The cup and pointer will turn until the torque of the eddy currents on the cup is balanced by the opposing torque of the spring, and then stop. Given that the torque on the cup is proportional to the car's speed, and the spring's deflection is proportional to the torque, the angle of the pointer is also proportional to the speed, so that equally spaced markers on the dial can be used for equal gaps in speed. At a given speed, the pointer will remain motionless and point to the appropriate number on the speedometer's dial.
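The equilibrium can be written in one line: eddy drag torque, proportional to rotation speed, balances spring torque, proportional to deflection, so the pointer angle is linear in speed. The constants below are illustrative, not taken from any real instrument:

```python
def pointer_angle_deg(cable_rpm, k_eddy=0.05, k_spring=1.0):
    """At rest: k_eddy * rpm == k_spring * angle, so angle is linear in speed."""
    return k_eddy * cable_rpm / k_spring

print(pointer_angle_deg(1000))  # 50.0 degrees of needle deflection
```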
The return spring is calibrated such that a given revolution speed of the cable corresponds to a specific speed indication on the speedometer. This calibration must take into account several factors, including ratios of the tail shaft gears that drive the flexible cable, the final drive ratio in the differential, and the diameter of the driven tires.
One of the key disadvantages of the eddy current speedometer is that it cannot show the vehicle speed when running in reverse gear since the cup would turn in the opposite direction – in this scenario, the needle would be driven against its mechanical stop pin on the zero position.
Electronic
Many modern speedometers are electronic. In designs derived from earlier eddy-current models, a rotation sensor mounted in the transmission delivers a series of electronic pulses whose frequency corresponds to the (average) rotational speed of the driveshaft, and therefore the vehicle's speed, assuming the wheels have full traction. The sensor is typically a set of one or more magnets mounted on the output shaft or (in transaxles) differential crown wheel, or a toothed metal disk positioned between a magnet and a magnetic field sensor. As the part in question turns, the magnets or teeth pass beneath the sensor, each time producing a pulse in the sensor as they affect the strength of the magnetic field it is measuring. Alternatively, particularly in vehicles with multiplex wiring, some manufacturers use the pulses coming from the ABS wheel sensors which communicate to the instrument panel via the CAN Bus. Most modern electronic speedometers have the additional ability over the eddy current type to show the vehicle's speed when moving in reverse gear.
A computer converts the pulses to a speed and displays this speed on an electronically controlled, analogue-style needle or a digital display. Pulse information is also used for a variety of other purposes by the ECU or full-vehicle control system, e.g. triggering ABS or traction control, calculating average trip speed, or increment the odometer in place of it being turned directly by the speedometer cable.
Another early form of electronic speedometer relies upon the interaction between a precision watch mechanism and a mechanical pulsator driven by the car's wheel or transmission. The watch mechanism endeavours to push the speedometer pointer toward zero, while the vehicle-driven pulsator tries to push it toward infinity. The position of the speedometer pointer reflects the relative magnitudes of the outputs of the two mechanisms.
Virtual speedometer
A virtual speedometer is a computer-generated tool that displays the current speed of a vehicle or object. The virtual speedometer typically calculates the object's speed based on the distance it travels over time. Such speedometers are built with web technologies such as HTML, CSS, and JavaScript, and use the mobile device's GPS module.
Consistent use of the GPS module on mobile devices can result in faster battery drain. Furthermore, virtual speedometers calculate speed by measuring the distance and time between two points using GPS signals. However, various environmental factors such as weather conditions, terrain, and obstructions can interfere with the accuracy of these signals and result in inaccurate speed readings.
Bicycle speedometers
Typical bicycle speedometers measure the time between each wheel revolution and give a readout on a small, handlebar-mounted digital display. The sensor is mounted on the bike at a fixed location, pulsing when the spoke-mounted magnet passes by. In this way, it is analogous to an electronic car speedometer using pulses from an ABS sensor, but with a much cruder time/distance resolution – typically one pulse/display update per revolution, or as seldom as once every 2–3 seconds at low speed with a wheel. However, this is rarely a critical problem, and the system provides frequent updates at higher road speeds where the information is of more importance. The low pulse frequency also has little impact on measurement accuracy, as these digital devices can be programmed by wheel size, or additionally by wheel or tire circumference to make distance measurements more accurate and precise than a typical motor vehicle gauge. However, these devices carry some minor disadvantages in requiring power from batteries that must be replaced every so often in the receiver (and sensor, for wireless models), and, in wired models, the signal is carried by a thin cable that is much less robust than that used for brakes, gears, or cabled speedometers.
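The computation in such a cycle computer is elementary: with one pulse per wheel revolution, speed is the programmed circumference divided by the pulse period. A sketch with a typical 700c road-wheel circumference:

```python
def bike_speed_kmh(circumference_m, seconds_per_rev):
    """Speed from one magnet pulse per wheel revolution."""
    return circumference_m / seconds_per_rev * 3.6

print(round(bike_speed_kmh(2.1, 0.3), 1))  # 25.2 km/h
```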
Other, usually older bicycle speedometers are cable driven from one or other wheel, as in the motorcycle speedometers described above. These do not require battery power, but can be relatively bulky and heavy, and may be less accurate. The turning force at the wheel may be provided either from a gearing system at the hub (making use of the presence of e.g. a hub brake, cylinder gear, or dynamo) as per a typical motorcycle, or with a friction wheel device that pushes against the outer edge of the rim (same position as rim brakes, but on the opposite edge of the fork) or the sidewall of the tire itself. The former type is quite reliable and low maintenance but needs a gauge and hub gearing properly matched to the rim and tire size, whereas the latter requires little or no calibration for a moderately accurate readout (with standard tires, the "distance" covered in each wheel rotation by a friction wheel set against the rim should scale fairly linearly with wheel size, almost as if it were rolling along the ground itself) but are unsuitable for off-road use, and must be kept properly tensioned and clean of road dirt to avoid slipping or jamming.
Error
Most speedometers have tolerances of some ±10%, mainly due to variations in tire diameter. Sources of error due to tire diameter variations are wear, temperature, pressure, vehicle load, and nominal tire size. Vehicle manufacturers usually calibrate speedometers to read high by an amount equal to the average error, to ensure that their speedometers never indicate a lower speed than the actual speed of the vehicle, to ensure they are not liable for drivers violating speed limits.
Excessive speedometer errors after manufacture can come from several causes, but most commonly are due to nonstandard tire diameter, in which case the error is:
percentage error = 100 × (standard diameter / new diameter − 1)
Nearly all tires now have their size shown on the sidewall in the form "T/A RW" (see: Tire code), where T is the tread width in millimetres, A is the aspect ratio (section height as a percentage of the width) and W is the wheel rim diameter in inches, so the overall tire diameter in millimetres is 2 × T × A/100 + W × 25.4.
For example, a standard tire is "185/70R14" with diameter = 2*185*(70/100)+(14*25.4) = 614.6 mm (185x70/1270 + 14 = 24.20 in). Another is "195/50R15" with 2*195*(50/100)+(15*25.4) = 576.0 mm (195x50/1270 + 15 = 22.68 in). Replacing the first tire (and wheels) with the second (on 15" = 381 mm wheels), a speedometer reads 100 * ((614.6/576) - 1) = 100 * (24.20/22.68 - 1) = 6.7% higher than the actual speed. At an actual speed of 100 km/h (60 mph), the speedometer will indicate 100 x 1.067 = 106.7 km/h (60 * 1.067 = 64.02 mph), approximately.
In the case of wear, a new "185/70R14" tire of 620 mm (24.4 inch) diameter will have ≈8 mm tread depth; at the legal limit this reduces to 1.6 mm, the difference being 12.8 mm in diameter (0.5 inches), which is about 2% of 620 mm (24.4 inches).
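The diameter formula and the resulting over-read are easy to check in code; the tire sizes below are the ones from the worked example above:

```python
def tire_diameter_mm(width_mm, aspect_pct, rim_in):
    """Overall diameter of a tire coded width/aspect R rim, e.g. 185/70R14."""
    return 2 * width_mm * aspect_pct / 100 + rim_in * 25.4

def speedo_error_pct(standard, replacement):
    """Percentage over-read of a speedometer calibrated for the standard tire
    when the replacement tire is fitted (negative would mean under-read)."""
    return 100 * (tire_diameter_mm(*standard) / tire_diameter_mm(*replacement) - 1)

print(round(tire_diameter_mm(185, 70, 14), 1))                   # 614.6
print(round(speedo_error_pct((185, 70, 14), (195, 50, 15)), 1))  # 6.7
```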
International agreements
In many countries the legislated error in speedometer readings is ultimately governed by the United Nations Economic Commission for Europe (UNECE) Regulation 39, which covers those aspects of vehicle type approval that relate to speedometers. The main purpose of the UNECE regulations is to facilitate trade in motor vehicles by agreeing on uniform type approval standards rather than requiring a vehicle model to undergo different approval processes in each country where it is sold.
European Union member states must also grant type approval to vehicles meeting similar EU standards. The ones covering speedometers are similar to the UNECE regulation in that they specify that:
The indicated speed must never be less than the actual speed, i.e. it should not be possible to inadvertently speed because of an incorrect speedometer reading.
The indicated speed must not be more than 110 percent of the true speed plus 4 km/h at specified test speeds. For example, at a true speed of 80 km/h, the indicated speed must be no more than 92 km/h.
The standards specify both the limits on accuracy and many of the details of how it should be measured during the approvals process. For example, the test measurements should be made (for most vehicles) at specified test speeds, and at a particular ambient temperature and road surface. There are slight differences between the different standards, for example in the minimum accuracy of the equipment measuring the true speed of the vehicle.
The UNECE regulation relaxes the requirements for vehicles mass-produced following type approval. At Conformity of Production audits, the upper limit on indicated speed is increased to 110 percent plus a further fixed margin for cars, buses, trucks, and similar vehicles, and a slightly larger margin for two- or three-wheeled vehicles above specified maximum-speed or cylinder-capacity thresholds. European Union Directive 2000/7/EC, which relates to two- and three-wheeled vehicles, provides similar slightly relaxed limits in production.
Australia
There were no Australian Design Rules in place for speedometers in Australia before July 1988. They had to be introduced when speed cameras were first used. This means there are no legally accurate speedometers for these older vehicles. All vehicles manufactured on or after 1 July 2007, and all models of vehicle introduced on or after 1 July 2006, must conform to UNECE Regulation 39.
The speedometers in vehicles manufactured before these dates but after 1 July 1995 (or 1 January 1995 for forward control passenger vehicles and off-road passenger vehicles) must conform to the previous Australian design rule. This specifies that they need only display the speed to an accuracy of ±10% at speeds above 40 km/h, and there is no specified accuracy at all for speeds below 40 km/h.
All vehicles manufactured in Australia or imported for supply to the Australian market must comply with the Australian Design Rules. The state and territory governments may set policies for the tolerance of speed over the posted speed limits that may be lower than the 10% in the earlier versions of the Australian Design Rules permitted, such as in Victoria. This has caused some controversy since it would be possible for a driver to be unaware that they are speeding should their vehicle be fitted with an under-reading speedometer.
United Kingdom
The amended Road Vehicles (Construction and Use) Regulations 1986 permits the use of speedometers that meet either the requirements of EC Council Directive 75/443 (as amended by Directive 97/39) or UNECE Regulation 39.
The Motor Vehicles (Approval) Regulations 2001 permits single vehicles to be approved. As with the UNECE regulation and the EC Directives, the speedometer must never show an indicated speed less than the actual speed. However, it differs slightly from them in specifying that for all actual speeds between 25 mph and 70 mph (or the vehicles' maximum speed if it is lower than this), the indicated speed must not exceed 110% of the actual speed, plus 6.25 mph.
For example, if the vehicle is actually traveling at 50 mph, the speedometer must not show more than 61.25 mph or less than 50 mph.
United States
Federal standards in the United States allow a maximum 5 mph error at a speed of 50 mph on speedometer readings for commercial vehicles. Aftermarket modifications, such as different tire and wheel sizes or different differential gearing, can cause speedometer inaccuracy.
Regulation in the US
Starting with U.S. automobiles manufactured on or after 1 September 1979, the NHTSA required speedometers to have a special emphasis on 55 mph (90 km/h) and display no more than a maximum speed of 85 mph (136 km/h). On 25 March 1982, the NHTSA revoked the rule because no "significant safety benefits" could come from maintaining the standard.
GPS
GPS devices can measure speeds in two ways:
The first and simpler method is based on how far the receiver has moved since the last measurement. Such speed calculations are not subject to the same sources of error as the vehicle's speedometer (wheel size, transmission/drive ratios). Instead, the GPS's positional accuracy, and therefore the accuracy of its calculated speed, is dependent on the satellite signal quality at the time. Speed calculations will be more accurate at higher speeds when the ratio of positional error to positional change is lower. The GPS software may also use a moving average calculation to reduce error. Some GPS devices do not take into account the vertical position of the car so will under-report the speed by the road's gradient.
Alternatively, the GPS may take advantage of the Doppler effect to estimate its velocity. In ideal conditions, the accuracy for commercial devices is within 0.2–0.5 km/h, but it may worsen if the signal quality degrades.
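A sketch of the first, positional method, using the haversine great-circle distance between consecutive fixes; the coordinates and timing are made up for illustration:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
    """Great-circle distance in metres between two GPS fixes."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2)**2 + cos(p1) * cos(p2) * sin(dl / 2)**2
    return 2 * r * asin(sqrt(a))

def speed_kmh(fix_a, fix_b, dt_s):
    """Positional speed estimate: distance moved between fixes over time.
    Ignores altitude change, so it under-reports on gradients."""
    return haversine_m(*fix_a, *fix_b) / dt_s * 3.6

# Two fixes one second apart, about 27.8 m apart: roughly 100 km/h.
print(round(speed_kmh((48.0, 11.0), (48.00025, 11.0), 1.0), 1))
```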
As mentioned in the satnav article, GPS data has been used to overturn a speeding ticket; the GPS logs showed the defendant traveling below the speed limit when they were ticketed. That the data came from a GPS device was likely less important than the fact that it was logged; logs from the vehicle's speedometer could likely have been used instead, had they existed.
See also
Airspeed indicator
Hubometer
Tachometer
Taximeter
References
External links
Autoblog: Gauging changes
Speedometer
Measuring instruments
Speed sensors
Vehicle parts
Vehicle technology
Croatian inventions | Speedometer | [
"Technology",
"Engineering"
] | 3,517 | [
"Vehicle parts",
"Measuring instruments",
"Vehicle technology",
"Mechanical engineering by discipline",
"Speed sensors",
"Components"
] |
576,694 | https://en.wikipedia.org/wiki/C-terminus | The C-terminus (also known as the carboxyl-terminus, carboxy-terminus, C-terminal tail, carboxy tail, C-terminal end, or COOH-terminus) is the end of an amino acid chain (protein or polypeptide), terminated by a free carboxyl group (-COOH). When the protein is translated from messenger RNA, it is created from N-terminus to C-terminus. The convention for writing peptide sequences is to put the C-terminal end on the right and write the sequence from N- to C-terminus.
Chemistry
Each amino acid has a carboxyl group and an amine group. Amino acids link to one another to form a chain by a dehydration reaction which joins the amine group of one amino acid to the carboxyl group of the next. Thus polypeptide chains have an end with an unbound carboxyl group, the C-terminus, and an end with an unbound amine group, the N-terminus. Proteins are naturally synthesized starting from the N-terminus and ending at the C-terminus.
Function
C-terminal retention signals
While the N-terminus of a protein often contains targeting signals, the C-terminus can contain retention signals for protein sorting. The most common ER retention signal is the amino acid sequence -KDEL (Lys-Asp-Glu-Leu) or -HDEL (His-Asp-Glu-Leu) at the C-terminus. This keeps the protein in the endoplasmic reticulum and prevents it from entering the secretory pathway.
Peroxisomal targeting signal
The sequence -SKL (Ser-Lys-Leu) or similar near the C-terminus serves as peroxisomal targeting signal 1, directing the protein into the peroxisome.
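Both C-terminal signals above are simple enough to check by suffix matching, although real predictors weigh far more context than the last few residues; the sequences in this sketch are invented for illustration:

```python
def predict_localization(seq):
    """Toy lookup of the C-terminal sorting signals described above."""
    if seq.endswith(("KDEL", "HDEL")):
        return "ER retention"
    if seq.endswith("SKL"):
        return "peroxisome (PTS1)"
    return "no C-terminal signal recognized"

print(predict_localization("MKTAYIAKQRKDEL"))  # ER retention
print(predict_localization("MLSRAVCGTSKL"))    # peroxisome (PTS1)
```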
C-terminal modifications
The C-terminus of proteins can be modified posttranslationally, most commonly by the addition of a lipid anchor to the C-terminus that allows the protein to be inserted into a membrane without having a transmembrane domain.
Prenylation
One form of C-terminal modification is prenylation. During prenylation, a farnesyl- or geranylgeranyl-isoprenoid membrane anchor is added to a cysteine residue near the C-terminus. Small, membrane-bound G proteins are often modified this way.
GPI anchors
Another form of C-terminal modification is the addition of a phosphoglycan, glycosylphosphatidylinositol (GPI), as a membrane anchor. The GPI anchor is attached to the C-terminus after proteolytic cleavage of a C-terminal propeptide. The most prominent example for this type of modification is the prion protein.
Methylation
C-terminal leucine is methylated at the carboxyl group by the enzyme leucine carboxyl methyltransferase 1 in vertebrates, forming a methyl ester.
C-terminal domain
The C-terminal domain of some proteins has specialized functions. In humans, the CTD of RNA polymerase II typically consists of up to 52 repeats of the sequence Tyr-Ser-Pro-Thr-Ser-Pro-Ser. This allows other proteins to bind to the C-terminal domain of RNA polymerase in order to activate polymerase activity. These domains are then involved in the initiation of DNA transcription, the capping of the RNA transcript, and attachment to the spliceosome for RNA splicing.
See also
N-terminus
TopFIND, a scientific database covering proteases, their cleavage site specificity, substrates, inhibitors and protein termini originating from their activity
References
Post-translational modification
Protein structure | C-terminus | [
"Chemistry"
] | 780 | [
"Gene expression",
"Biochemical reactions",
"Post-translational modification",
"Structural biology",
"Protein structure"
] |
576,704 | https://en.wikipedia.org/wiki/N-terminus | The N-terminus (also known as the amino-terminus, NH2-terminus, N-terminal end or amine-terminus) is the start of a protein or polypeptide, referring to the free amine group (-NH2) located at the end of a polypeptide. Within a peptide, the amine group is bonded to the carboxylic group of another amino acid, making it a chain. That leaves a free carboxylic group at one end of the peptide, called the C-terminus, and a free amine group on the other end called the N-terminus. By convention, peptide sequences are written N-terminus to C-terminus, left to right (in LTR writing systems). This correlates the translation direction to the text direction, because when a protein is translated from messenger RNA, it is created from the N-terminus to the C-terminus, as amino acids are added to the carboxyl end of the protein.
Chemistry
Each amino acid has an amine group and a carboxylic group. Amino acids link to one another by peptide bonds which form through a dehydration reaction that joins the carboxyl group of one amino acid to the amine group of the next in a head-to-tail manner to form a polypeptide chain. The chain has two ends – an amine group, the N-terminus, and an unbound carboxyl group, the C-terminus.
When a protein is translated from messenger RNA, it is created from N-terminus to C-terminus. The amino end of an amino acid (on a charged tRNA) during the elongation stage of translation, attaches to the carboxyl end of the growing chain. Since the start codon of the genetic code codes for the amino acid methionine, most protein sequences start with a methionine (or, in bacteria, mitochondria and chloroplasts, the modified version N-formylmethionine, fMet). However, some proteins are modified posttranslationally, for example, by cleavage from a protein precursor, and therefore may have different amino acids at their N-terminus.
Function
N-terminal targeting signals
The N-terminus is the first part of the protein that exits the ribosome during protein biosynthesis. It often contains signal peptide sequences, "intracellular postal codes" that direct delivery of the protein to the proper organelle. The signal peptide is typically removed at the destination by a signal peptidase. The N-terminal amino acid of a protein is an important determinant of its half-life (likelihood of being degraded). This is called the N-end rule.
Signal peptide
The N-terminal signal peptide is recognized by the signal recognition particle (SRP) and results in the targeting of the protein to the secretory pathway. In eukaryotic cells, these proteins are synthesized at the rough endoplasmic reticulum. In prokaryotic cells, the proteins are exported across the cell membrane. In chloroplasts, signal peptides target proteins to the thylakoids.
Mitochondrial targeting peptide
The N-terminal mitochondrial targeting peptide (mtTP) allows the protein to be imported into the mitochondrion.
Chloroplast targeting peptide
The N-terminal chloroplast targeting peptide (cpTP) allows for the protein to be imported into the chloroplast.
N-terminal modifications
Protein N-termini can be modified co- or post-translationally. Modifications include the removal of the initiator methionine (iMet) by aminopeptidases, attachment of small chemical groups such as acetyl, propionyl and methyl groups, and the addition of membrane anchors, such as palmitoyl and myristoyl groups.
N-terminal acetylation
N-terminal acetylation is a form of protein modification that can occur in both prokaryotes and eukaryotes. It has been suggested that N-terminal acetylation can prevent a protein from following a secretory pathway.
N-Myristoylation
The N-terminus can be modified by the addition of a myristoyl anchor. Proteins that are modified this way contain a consensus motif at their N-terminus as a modification signal.
N-Acylation
The N-terminus can also be modified by the addition of a fatty acid anchor to form N-acylated proteins. The most common form of such modification is the addition of a palmitoyl group.
See also
C-terminus
TopFIND, a scientific database covering proteases, their cleavage site specificity, substrates, inhibitors and protein termini originating from their activity
References
Post-translational modification
Proteins
Protein structure | N-terminus | [
"Chemistry"
] | 990 | [
"Biomolecules by chemical classification",
"Gene expression",
"Biochemical reactions",
"Post-translational modification",
"Structural biology",
"Molecular biology",
"Proteins",
"Protein structure"
] |
576,713 | https://en.wikipedia.org/wiki/Congruence%20of%20squares | In number theory, a congruence of squares is a congruence commonly used in integer factorization algorithms.
Derivation
Given a positive integer n, Fermat's factorization method relies on finding numbers x and y satisfying the equality
x² − y² = n
We can then factor n = x² − y² = (x + y)(x − y). This algorithm is slow in practice because we need to search many such numbers, and only a few satisfy the equation. However, n may also be factored if we can satisfy the weaker congruence of squares conditions:
x² ≡ y² (mod n)
x ≢ ±y (mod n)
From here we easily deduce
x² − y² ≡ 0 (mod n)
This means that n divides the product (x + y)(x − y). The second (non-triviality) condition guarantees that n does not divide (x + y) nor (x − y) individually. Thus (x + y) and (x − y) each contain some, but not all, factors of n, and the greatest common divisors gcd(x + y, n) and gcd(x − y, n) will give us these factors. This can be done quickly using the Euclidean algorithm.
Most algorithms for finding congruences of squares do not actually guarantee non-triviality; they only make it likely. There is a chance that a congruence found will be trivial, in which case we need to continue searching for another x and y.
Congruences of squares are extremely useful in integer factorization algorithms. Conversely, because finding square roots modulo a composite number turns out to be probabilistic polynomial-time equivalent to factoring that number, any integer factorization algorithm can be used efficiently to identify a congruence of squares.
Using a factor base
A technique pioneered by Dixon's factorization method and improved by continued fraction factorization, the quadratic sieve, and the general number field sieve, is to construct a congruence of squares using a factor base.
Instead of looking for one such pair (x, y) directly, we find many "relations" x² ≡ y (mod n) in which the y have only small prime factors (they are smooth numbers), and multiply some of them together to get a square on the right-hand side.
The set of small primes which all the y factor into is called the factor base. Construct a logical matrix where each row describes one y, each column corresponds to one prime in the factor base, and the entry is the parity (even or odd) of the number of times that factor occurs in y. Our goal is to select a subset of rows whose sum is the all-zero row. This corresponds to a set of y values whose product is a square number, i.e. one whose factorization has only even exponents. The products of x and y values together form a congruence of squares.
This is a classic system of linear equations problem, and can be efficiently solved using Gaussian elimination as soon as the number of rows exceeds the number of columns. Some additional rows are often included to ensure that several solutions exist in the nullspace of our matrix, in case the first solution produces a trivial congruence.
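As an illustration of this linear-algebra step, the following hedged sketch reduces the parity vectors over GF(2), tracking which rows combine to the zero vector; it uses dense bit arithmetic for clarity (production sieves use sparse methods such as block Lanczos or block Wiedemann), and the test data comes from the worked 1649 example below:

```python
def find_square_subset(exponent_vectors):
    """Given one prime-exponent vector per relation, return indices of a
    subset of relations whose exponents sum to even numbers (a square),
    or None if no dependency exists yet. Tags record row combinations."""
    pivots = {}  # leading bit -> (parity bits, tag of contributing rows)
    for i, vec in enumerate(exponent_vectors):
        bits = sum((e % 2) << j for j, e in enumerate(vec))
        tag = 1 << i
        while bits:
            lead = bits.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = (bits, tag)   # new pivot row, keep searching
                break
            pbits, ptag = pivots[lead]
            bits, tag = bits ^ pbits, tag ^ ptag  # eliminate the leading bit
        if bits == 0:  # dependency found: tagged rows sum to the zero row
            return [k for k in range(len(exponent_vectors)) if tag >> k & 1]
    return None

# Relations for n = 1649 over the factor base {2, 5, 23}:
# 41² ≡ 32 = 2⁵,  42² ≡ 115 = 5·23,  43² ≡ 200 = 2³·5² (mod 1649)
print(find_square_subset([[5, 0, 0], [0, 1, 1], [3, 2, 0]]))  # [0, 2]
```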
A great advantage of this technique is that the search for relations is embarrassingly parallel; a large number of computers can be set to work searching different ranges of x values and trying to factor the resultant ys. Only the found relations need to be reported to a central computer, and there is no particular hurry to do so. The searching computers do not even have to be trusted; a reported relation can be verified with minimal effort.
There are numerous elaborations on this technique. For example, in addition to relations where y factors completely in the factor base, the "large prime" variant also collects "partial relations" where y factors completely except for one larger factor. A second partial relation with the same larger factor can be multiplied by the first to produce a "complete relation".
Examples
Factorize 35
We take n = 35 and find that

6² = 36 ≡ 1 = 1² (mod 35).

We thus factor as

gcd(6 − 1, 35) · gcd(6 + 1, 35) = 5 · 7 = 35.
Factorize 1649
Using n = 1649, as an example of finding a congruence of squares built up from the products of non-squares (see Dixon's factorization method), first we obtain several congruences:

41² ≡ 32 (mod 1649)
42² ≡ 115 (mod 1649)
43² ≡ 200 (mod 1649)

Of these, the first and third have only small primes as factors (32 = 2⁵ and 200 = 2³ · 5²), and a product of these has an even power of each small prime, and is therefore a square:

32 · 200 = 2⁸ · 5² = (2⁴ · 5)² = 80²

yielding the congruence of squares

80² ≡ 32 · 200 ≡ 41² · 43² ≡ 114² (mod 1649)

since 41 · 43 = 1763 ≡ 114 (mod 1649). So using the values of 80 and 114 as our x and y gives factors

gcd(114 − 80, 1649) · gcd(114 + 80, 1649) = 17 · 97 = 1649.
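This example can be checked numerically in a few lines, using only Python built-ins:

```python
from math import gcd

n = 1649
# the three relations: 41² ≡ 32, 42² ≡ 115, 43² ≡ 200 (mod n)
assert [pow(a, 2, n) for a in (41, 42, 43)] == [32, 115, 200]
x, y = (41 * 43) % n, 80              # 32·200 = 6400 = 80²
assert pow(x, 2, n) == pow(y, 2, n)   # 114² ≡ 80² (mod 1649)
print(gcd(x - y, n), gcd(x + y, n))   # 17 97
```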
See also
Congruence relation
References
Equivalence (mathematics)
Integer factorization algorithms
Modular arithmetic
Squares in number theory | Congruence of squares | [
"Mathematics"
] | 934 | [
"Arithmetic",
"Modular arithmetic",
"Number theory",
"Squares in number theory"
] |
576,855 | https://en.wikipedia.org/wiki/Binary%20decision%20diagram | In computer science, a binary decision diagram (BDD) or branching program is a data structure that is used to represent a Boolean function. On a more abstract level, BDDs can be considered as a compressed representation of sets or relations. Unlike other compressed representations, operations are performed directly on the compressed representation, i.e. without decompression.
Similar data structures include negation normal form (NNF), Zhegalkin polynomials, and propositional directed acyclic graphs (PDAG).
Definition
A Boolean function can be represented as a rooted, directed, acyclic graph, which consists of several (decision) nodes and two terminal nodes. The two terminal nodes are labeled 0 (FALSE) and 1 (TRUE). Each (decision) node is labeled by a Boolean variable and has two child nodes called low child and high child. The edge from a node to its low (or high) child represents an assignment of the value FALSE (or TRUE, respectively) to that node's variable. Such a BDD is called 'ordered' if different variables appear in the same order on all paths from the root. A BDD is said to be 'reduced' if the following two rules have been applied to its graph:
Merge any isomorphic subgraphs.
Eliminate any node whose two children are isomorphic.
In popular usage, the term BDD almost always refers to Reduced Ordered Binary Decision Diagram (ROBDD in the literature, used when the ordering and reduction aspects need to be emphasized). The advantage of an ROBDD is that it is canonical (unique up to isomorphism) for a particular function and variable order. This property makes it useful in functional equivalence checking and other operations like functional technology mapping.
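As a concrete illustration (not taken from the article), the following Python sketch builds an ROBDD by Shannon expansion over a fixed variable order; interning nodes in a table implements the merge rule, and the mk helper implements the elimination rule. All names are illustrative, and the brute-force expansion suits only small functions:

```python
_unique = {}  # interning table: merges isomorphic subgraphs (reduction rule 1)

def mk(var, low, high):
    if low is high:            # reduction rule 2: children identical, drop the test
        return low
    return _unique.setdefault((var, low, high), (var, low, high))

def build(f, nvars, var=0, partial=()):
    """ROBDD of a Python predicate f over nvars Boolean inputs, built by
    Shannon expansion in the fixed order x0 < x1 < ...; exponential in
    nvars, so only suitable for small examples."""
    if var == nvars:
        return f(*partial)     # terminals are the Python Booleans
    low = build(f, nvars, var + 1, partial + (False,))
    high = build(f, nvars, var + 1, partial + (True,))
    return mk(var, low, high)

# Majority of three inputs; equal functions yield the identical root object,
# which is the canonicity property described above.
maj = build(lambda a, b, c: (a and b) or (a and c) or (b and c), 3)
```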
A path from the root node to the 1-terminal represents a (possibly partial) variable assignment for which the represented Boolean function is true. When the path descends to the low (or high) child of a node, that node's variable is assigned 0 (respectively 1).
Example
The left figure below shows a binary decision tree (the reduction rules are not applied), and a truth table, each representing the function f(x1, x2, x3). In the tree on the left, the value of the function can be determined for a given variable assignment by following a path down the graph to a terminal. In the figures below, dotted lines represent edges to a low child, while solid lines represent edges to a high child. Therefore, to find f(0, 1, 1), begin at x1, traverse down the dotted line to x2 (since x1 has an assignment to 0), then down two solid lines (since x2 and x3 each have an assignment to 1). This leads to the terminal 1, which is the value of f(0, 1, 1).
The binary decision tree of the left figure can be transformed into a binary decision diagram by maximally reducing it according to the two reduction rules. The resulting BDD is shown in the right figure.
Another notation for writing this Boolean function is .
Complemented edges
An ROBDD can be represented even more compactly, using complemented edges, also known as complement links. The resulting BDD is sometimes known as a typed BDD or signed BDD.
Complemented edges are formed by annotating low edges as complemented or not. If an edge is complemented, then it refers to the negation of the Boolean function that corresponds to the node that the edge points to (the Boolean function represented by the BDD with root that node). High edges are not complemented, in order to ensure that the resulting BDD representation is a canonical form. In this representation, BDDs have a single leaf node, for reasons explained below.
Two advantages of using complemented edges when representing BDDs are:
computing the negation of a BDD takes constant time
space usage (i.e., required memory) is reduced (by a factor at most 2)
However, Knuth argues otherwise:
Although such links are used by all the major BDD packages, they are hard to recommend because the computer programs become much more complicated. The memory saving is usually negligible, and never better than a factor of 2; furthermore, the author's experiments show little gain in running time.
A reference to a BDD in this representation is a (possibly complemented) "edge" that points to the root of the BDD. This is in contrast to a reference to a BDD in the representation without use of complemented edges, which is the root node of the BDD. The reason why a reference in this representation needs to be an edge is that for each Boolean function, the function and its negation are represented by an edge to the root of a BDD, and a complemented edge to the root of the same BDD. This is why negation takes constant time. It also explains why a single leaf node suffices: FALSE is represented by a complemented edge that points to the leaf node, and TRUE is represented by an ordinary edge (i.e., not complemented) that points to the leaf node.
For example, assume that a Boolean function is represented with a BDD represented using complemented edges. To find the value of the Boolean function for a given assignment of (Boolean) values to the variables, we start at the reference edge, which points to the BDD's root, and follow the path that is defined by the given variable values (following a low edge if the variable that labels a node equals FALSE, and following the high edge if the variable that labels a node equals TRUE), until we reach the leaf node. While following this path, we count how many complemented edges we have traversed. If when we reach the leaf node we have crossed an odd number of complemented edges, then the value of the Boolean function for the given variable assignment is FALSE, otherwise (if we have crossed an even number of complemented edges), then the value of the Boolean function for the given variable assignment is TRUE.
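This evaluation walk can be sketched directly; the following hedged Python fragment (the encoding and all names are illustrative) tracks the parity of complemented edges along the path and shows constant-time negation:

```python
LEAF = "leaf"  # the single terminal node in this representation

def evaluate(edge, assignment):
    """Walk a BDD with complemented edges. An edge is a (complemented?,
    node) pair; a node is (variable, low_edge, high_edge) or LEAF. The
    result is TRUE iff an even number of complement marks was crossed."""
    complemented, node = edge
    parity = complemented
    while node is not LEAF:
        var, low_edge, high_edge = node
        c, node = high_edge if assignment[var] else low_edge
        parity ^= c            # count complemented edges modulo 2
    return not parity

# f(x0) = x0: the low edge to the leaf is complemented (yielding FALSE).
node = (0, (True, LEAF), (False, LEAF))
f = (False, node)
print(evaluate(f, {0: False}), evaluate(f, {0: True}))  # False True
print(evaluate((True, node), {0: True}))  # negation in constant time: False
```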
An example diagram of a BDD in this representation is shown on the right, and represents the same Boolean expression as shown in diagrams above, i.e., . Low edges are dashed, high edges solid, and complemented edges are signified by a circle at their source. The node with the @ symbol represents the reference to the BDD, i.e., the reference edge is the edge that starts from this node.
History
The basic idea from which the data structure was created is the Shannon expansion. A switching function is split into two sub-functions (cofactors) by assigning one variable (cf. if-then-else normal form). If such a sub-function is considered as a sub-tree, it can be represented by a binary decision tree. Binary decision diagrams (BDDs) were introduced by C. Y. Lee, and further studied and made known by Sheldon B. Akers and Raymond T. Boute. Independently of these authors, a BDD under the name "canonical bracket form" was realized by Yu. V. Mamrukov in a CAD system for analysis of speed-independent circuits. The full potential for efficient algorithms based on the data structure was investigated by Randal Bryant at Carnegie Mellon University: his key extensions were to use a fixed variable ordering (for canonical representation) and shared sub-graphs (for compression). Applying these two concepts results in an efficient data structure and algorithms for the representation of sets and relations. By extending the sharing to several BDDs, i.e. one sub-graph is used by several BDDs, the data structure Shared Reduced Ordered Binary Decision Diagram is defined. The notion of a BDD is now generally used to refer to that particular data structure.
In his video lecture Fun With Binary Decision Diagrams (BDDs), Donald Knuth calls BDDs "one of the only really fundamental data structures that came out in the last twenty-five years" and mentions that Bryant's 1986 paper was for some time one of the most-cited papers in computer science.
Adnan Darwiche and his collaborators have shown that BDDs are one of several normal forms for Boolean functions, each induced by a different combination of requirements. Another important normal form identified by Darwiche is decomposable negation normal form or DNNF.
Applications
BDDs are extensively used in CAD software to synthesize circuits (logic synthesis) and in formal verification. There are several lesser known applications of BDD, including fault tree analysis, Bayesian reasoning, product configuration, and private information retrieval.
Every arbitrary BDD (even if it is not reduced or ordered) can be directly implemented in hardware by replacing each node with a 2 to 1 multiplexer; each multiplexer can be directly implemented by a 4-LUT in an FPGA. It is not so simple to convert from an arbitrary network of logic gates to a BDD (unlike the and-inverter graph).
BDDs have been applied in efficient Datalog interpreters.
Variable ordering
The size of the BDD is determined both by the function being represented and by the chosen ordering of the variables. There exist Boolean functions for which, depending upon the ordering of the variables, the number of nodes in the graph is linear (in n) at best and exponential at worst (e.g., a ripple carry adder). Consider the Boolean function f(x1, ..., x2n) = x1x2 + x3x4 + ... + x2n−1x2n. Using the variable ordering x1 < x3 < ... < x2n−1 < x2 < x4 < ... < x2n, the BDD needs 2^(n+1) nodes to represent the function. Using the ordering x1 < x2 < x3 < x4 < ... < x2n−1 < x2n, the BDD consists of 2n + 2 nodes.
It is of crucial importance to care about variable ordering when applying this data structure in practice. The problem of finding the best variable ordering is NP-hard. For any constant c > 1 it is even NP-hard to compute a variable ordering resulting in an OBDD with a size that is at most c times larger than an optimal one. However, there exist efficient heuristics to tackle the problem.
There are functions for which the graph size is always exponential—independent of variable ordering. This holds e.g. for the multiplication function. In fact, the function computing the middle bit of the product of two -bit numbers does not have an OBDD smaller than vertices. (If the multiplication function had polynomial-size OBDDs, it would show that integer factorization is in P/poly, which is not known to be true.)
Researchers have suggested refinements on the BDD data structure giving way to a number of related graphs, such as BMD (binary moment diagrams), ZDD (zero-suppressed decision diagrams), FBDD (free binary decision diagrams), FDD (functional decision diagrams), PDD (parity decision diagrams), and MTBDDs (multiple terminal BDDs).
Logical operations on BDDs
Many logical operations on BDDs can be implemented by polynomial-time graph manipulation algorithms:
conjunction
disjunction
negation
However, repeating these operations several times, for example forming the conjunction or disjunction of a set of BDDs, may in the worst case result in an exponentially big BDD. This is because any of the preceding operations for two BDDs may result in a BDD with a size proportional to the product of the BDDs' sizes, and consequently for several BDDs the size may be exponential in the number of operations. Variable ordering needs to be considered afresh; what may be a good ordering for (some of) the set of BDDs may not be a good ordering for the result of the operation. Also, since constructing the BDD of a Boolean function solves the NP-complete Boolean satisfiability problem and the co-NP-complete tautology problem, constructing the BDD can take exponential time in the size of the Boolean formula even when the resulting BDD is small.
Computing existential abstraction over multiple variables of reduced BDDs is NP-complete.
Model-counting, counting the number of satisfying assignments of a Boolean formula, can be done in polynomial time for BDDs. For general propositional formulas the problem is ♯P-complete and the best known algorithms require an exponential time in the worst case.
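To illustrate, here is a hedged sketch of model counting over the tuple-encoded ROBDD from the earlier sketch (an assumption of this article's editing, not a library API); each edge that skips variable levels contributes a factor of two per skipped variable:

```python
from functools import lru_cache

def count_solutions(root, nvars):
    """Model counting on an ROBDD encoded as nested (var, low, high)
    tuples with Boolean terminals, matching the earlier sketch. Runs in
    time linear in the number of distinct nodes thanks to memoization."""
    @lru_cache(maxsize=None)
    def count(node, level):
        if node is True:
            return 2 ** (nvars - level)   # remaining variables are free
        if node is False:
            return 0
        var, low, high = node
        skipped = 2 ** (var - level)      # variables no node tests on this edge
        return skipped * (count(low, var + 1) + count(high, var + 1))
    return count(root, 0)

# With `maj` from the ROBDD sketch above: count_solutions(maj, 3) == 4,
# since exactly 4 of the 8 assignments make the 3-input majority true.
```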
See also
Boolean satisfiability problem, the canonical NP-complete computational problem
L/poly, a complexity class that strictly contains the set of problems with polynomially sized BDDs
Model checking
Radix tree
Barrington's theorem
Hardware acceleration
Karnaugh map, a method of simplifying Boolean algebra expressions
Zero-suppressed decision diagram
Algebraic decision diagram, a generalization of BDDs from two-element to arbitrary finite sets
Sentential Decision Diagram, a generalization of OBDDs
Influence diagram
References
Further reading
Complete textbook available for download.
External links
Fun With Binary Decision Diagrams (BDDs), lecture by Donald Knuth
List of BDD software libraries for several programming languages.
Diagrams
Graph data structures
Model checking
Boolean algebra
Knowledge compilation | Binary decision diagram | [
"Mathematics"
] | 2,662 | [
"Boolean algebra",
"Fields of abstract algebra",
"Mathematical logic"
] |
576,974 | https://en.wikipedia.org/wiki/Mother%20ship | A mother ship, mothership or mother-ship is a large vehicle that leads, serves, or carries other smaller vehicles. A mother ship may be a maritime ship, aircraft, or spacecraft.
Examples include bombers converted to carry experimental aircraft to altitudes where they can conduct their research (such as the B-52 carrying the X-15), or ships that carry small submarines to an area of ocean to be explored (such as the Atlantis II carrying the Alvin).
A mother ship may also be used to recover smaller craft, or go its own way after releasing them. A smaller vessel serving or caring for larger craft is usually called a tender.
Maritime craft
During World War II, the German Type XIV submarine or Milchkuh (Milk cow) was a type of large submarine used to resupply the U-boats.
Mother ships can carry small submersibles and submarines to an area of ocean to be explored (such as the Atlantis II carrying the DSV Alvin).
Somali pirates use mother ships to extend their reach in the Indian Ocean. For example, the FV Win Far 161 was captured and used as a mother ship in the Maersk Alabama hijacking.
Aircraft
In aviation, motherships have been used in the airborne aircraft carrier, air launch and captive carry roles. Some large long-range aircraft act as motherships to parasite aircraft. A mothership may also form the larger component of a composite aircraft.
Airborne aircraft carriers
During the age of the great airships, the United States built two rigid airships, USS Akron (ZRS-4) and USS Macon (ZRS-5), with onboard hangars able to house a number of Curtiss F9C Sparrowhawk biplane fighters. These airborne aircraft carriers operated successfully for several years. These airships utilized an internal hangar bay using a "trapeze" to hold the aircraft.
Air launch
In the air launch role, a large carrier aircraft or mother ship carries a smaller payload aircraft to a launch point before releasing it.
During World War II the Japanese Mitsubishi G4M bomber was used to carry the rocket-powered Yokosuka MXY7 Ohka aircraft, used for kamikaze attacks, within range of a target ship. Germany also planned a jet-carrying bomber, called the Daimler-Benz Project C.
In the US, NASA has used converted bombers as launch platforms for experimental aircraft. Notable among these was the use during the 1960s of a modified Boeing B-52 Stratofortress for the repeated launching of the North American X-15.
Captive carry
In a captive carry arrangement the payload craft, such as a rocket, missile, aeroplane or spaceplane, does not separate from the carrier aircraft. Experiments on air launching the Shuttle were carried out with the test vehicle Enterprise, but none of the Space Shuttle fleet was launched in this way once the Space Shuttle program commenced.
Captive carry is typically used to conduct initial testing on a new airframe or system, before it is ready for free flight.
Captive carry is sometimes also used to transport an aircraft or spacecraft on a ferry flight. Notable examples include:
A pair of modified Boeing 747s, known as the Shuttle Carrier Aircraft, were used by NASA to transport the Space Shuttle orbiter and to launch the orbiter for flight tests.
The Soviet Union developed and used the Antonov An-225 Mriya to ferry the Buran spacecraft.
Parasite carriers
Some large long-range aircraft have been modified as motherships in order to carry parasite aircraft which support the mothership by extending its role, for example for reconnaissance, or acting in a support role such as fighter defence.
The first experiments with rigid airships to launch and recover fighters were carried out during World War I.
The British experimented with the 23-class airships from that time. Then in the 1920s, as part of the "Airship Development Programme", they used the R33 for experiments. A de Havilland Humming Bird light aeroplane fitted with a hook was slung beneath it. In October 1925 Squadron Leader Rollo Haig was released from the R33 and then reattached. Later that year, the attempt was repeated and the Humming Bird remained attached until the airship landed.
In 1926, it carried two Gloster Grebe fighters, releasing them at the Pulham and Cardington airship stations.
In the U.S., USS Los Angeles (ZR-3) was used for prototype testing for the Akron and Macon airborne aircraft carriers.
During World War II the Soviet Tupolev-Vakhmistrov Zveno project developed converted Tupolev TB-1 and TB-3 aircraft to carry and launch up to five smaller craft, typically in roles such as fighter escort or fighter-bomber.
During the early days of the jet age, fighter aircraft could not fly long distances and still match point defence fighters or interceptors in dogfighting. The solution was long-range bombers that would carry or tow their escort fighters.
B-29 Superfortress and B-36 Peacemaker bombers were tested as carriers for the RF-84K Thunderflash (FICON project) and XF-85 Goblin fighters.
In November 2014, the U.S. Defense Advanced Research Projects Agency (DARPA) requested industry proposals for a system in which small unmanned aerial vehicles (UAVs) would be launched and recovered by their existing conventional large aircraft, including the B-52 Stratofortress and B-1 Lancer bombers and C-130 Hercules and C-17 Globemaster III transports.
Composites
In a composite aircraft, two or more component aircraft take off as a single unit and later separate. The British Short S.21 Maia experimental flying boat served as the mother ship component of the Short Mayo Composite two-plane maritime trans-Atlantic project design in the 1930s.
Spacecraft
The mother ship concept was used in Moon landings performed in the 1960s. Both the 1962 American Ranger and the 1966 Soviet Luna uncrewed landers were spherical capsules designed to be ejected at the last moment from mother ships that carried them to the Moon, and crashed onto its surface. In the crewed Apollo program, astronauts in the Lunar Module left the Command/Service Module mother ship in lunar orbit, descended to the surface, and returned to dock in a lunar orbit rendezvous with the mother ship once more for the return to Earth.
The Scaled Composites White Knight series of aircraft are designed to launch spacecraft which they carry underneath them.
In popular culture
UFO lore
There have been numerous sightings of unidentified flying objects (UFOs) claimed to be mother ships, many in the U.S. during the summer of 1947. A woman in Palmdale, California, was quoted by contemporary press as describing a "mother saucer (with a) bunch of little saucers playing around it". The term mothership was also popularized in UFO lore by contactee George Adamski, who claimed in the 1950s to sometimes see large cigar-shaped Venusian motherships, out of which flew smaller-sized flying saucer scout ships. Adamski claimed to have met and befriended the pilots of these scout ships, including a Venusian named Orthon.
Science fiction
The concept of a mother ship also occurs in science fiction, extending the idea to spaceships that serve as the equivalent of flagships among a fleet. In this context, mother ship is often spelled as one word: mothership.
A mothership may be large enough that its body contains a station for the rest of the fleet. Examples include the large craft in Close Encounters of the Third Kind and Battlestar Galactica.
In other languages
In many Asian languages, such as Chinese, Japanese, Korean and Indonesian, the word for mothership (literally "mother" + "(war)ship") typically refers to an aircraft carrier, which is rendered as "aircraft/aviation mothership".
See also
Submarine aircraft carrier
Fictional airborne aircraft carriers
References
Transport systems
Ship types
Naval ships
Aircraft by design configuration
Spacecraft | Mother ship | [
"Physics",
"Technology"
] | 1,597 | [
"Physical systems",
"Transport",
"Transport systems"
] |
577,053 | https://en.wikipedia.org/wiki/Association%20rule%20learning | Association rule learning is a rule-based machine learning method for discovering interesting relations between variables in large databases. It is intended to identify strong rules discovered in databases using some measures of interestingness. In any given transaction with a variety of items, association rules are meant to discover the rules that determine how or why certain items are connected.
Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets. For example, the rule {onions, potatoes} ⇒ {burger} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as, e.g., promotional pricing or product placements.
In addition to the above example from market basket analysis, association rules are employed today in many application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions.
The association rule algorithm itself consists of various parameters that can make it difficult for those without some expertise in data mining to execute, with many rules that are arduous to understand.
Definition
Following the original definition by Agrawal, Imieliński and Swami, the problem of association rule mining is defined as:
Let I = {i1, i2, ..., in} be a set of n binary attributes called items.
Let D = {t1, t2, ..., tm} be a set of transactions called the database.
Each transaction t in D has a unique transaction ID and contains a subset of the items in I.
A rule is defined as an implication of the form:
X ⇒ Y, where X, Y ⊆ I.
In Agrawal, Imieliński, Swami a rule is defined only between a set and a single item, X ⇒ ij for ij ∈ I.
Every rule is composed of two different sets of items, also known as itemsets, X and Y, where X is called the antecedent or left-hand-side (LHS) and Y the consequent or right-hand-side (RHS). The antecedent is the itemset found in the data, while the consequent is the itemset found in combination with the antecedent. The statement X ⇒ Y is often read as if X then Y, where the antecedent (X) is the if and the consequent (Y) is the then. This simply implies that, in theory, whenever X occurs in a dataset, Y will as well.
Process
Association rules are made by searching data for frequent if-then patterns and by using certain criteria under Support and Confidence to define what the most important relationships are. Support is the evidence of how frequently an item appears in the given data, while Confidence is defined by how many times the if-then statements are found true. However, there is a third criterion that can be used; it is called Lift, and it can be used to compare the expected Confidence with the actual Confidence. Lift will show how many times the if-then statement is expected to be found true.
Association rules are calculated from itemsets, which consist of two or more items. If rules were built from analyzing all possible itemsets in the data, there would be so many rules that they would have no meaning. That is why association rules are typically made from rules that are well represented by the data.
There are many different data mining techniques that can be used to find certain analytics and results, for example, Classification analysis, Clustering analysis, and Regression analysis. Which technique to use depends on what you are looking for in your data. Association rules are primarily used to find analytics and to predict customer behavior. Classification analysis would most likely be used to question, make decisions, and predict behavior. Clustering analysis is primarily used when no assumptions are made about the likely relationships within the data. Regression analysis is used when you want to predict the value of a continuous dependent variable from a number of independent variables.
Benefits
There are many benefits of using Association rules like finding the pattern that helps understand the correlations and co-occurrences between data sets. A very good real-world example that uses Association rules would be medicine. Medicine uses Association rules to help diagnose patients. When diagnosing patients there are many variables to consider as many diseases will share similar symptoms. With the use of the Association rules, doctors can determine the conditional probability of an illness by comparing symptom relationships from past cases.
Downsides
However, Association rules also lead to many different downsides such as finding the appropriate parameter and threshold settings for the mining algorithm. But there is also the downside of having a large number of discovered rules. The reason is that this does not guarantee that the rules will be found relevant, but it could also cause the algorithm to have low performance. Sometimes the implemented algorithms will contain too many variables and parameters. For someone that doesn’t have a good concept of data mining, this might cause them to have trouble understanding it.
Thresholds

When using Association rules, you are most likely to use only Support and Confidence. However, this means you have to satisfy both a user-specified minimum support and a user-specified minimum confidence. Usually, Association rule generation is split into two different steps that need to be applied:
A minimum Support threshold to find all the frequent itemsets that are in the database.
A minimum Confidence threshold to the frequent itemsets found to create rules.
The Support Threshold is 30%, Confidence Threshold is 50%
The Table on the left is the original unorganized data and the table on the right is organized by the thresholds. In this case, Item C exceeds the thresholds for both Support and Confidence, which is why it comes first. Item A is second because its values exactly meet the thresholds. Item D has met the threshold for Support but not Confidence. Item B has not met the threshold for either Support or Confidence, which is why it is last.
To find all the frequent itemsets in a database is not an easy task, since it involves going through all the data to find all possible item combinations from all possible itemsets. The set of possible itemsets is the power set over I and has size 2^n − 1 (excluding the empty set, which is not considered a valid itemset). Although the size of the power set grows exponentially in the number of items n, an efficient search is possible by using the downward-closure property of support (also called anti-monotonicity), which guarantees that all subsets of a frequent itemset are also frequent, and thus that no frequent itemset can have an infrequent subset. Exploiting this property, efficient algorithms (e.g., Apriori and Eclat) can find all frequent itemsets.
Useful Concepts
To illustrate the concepts, we use a small example from the supermarket domain. Table 2 shows a small database containing the items where, in each entry, the value 1 means the presence of the item in the corresponding transaction, and the value 0 represents the absence of an item in that transaction. The set of items is I = {milk, bread, butter, beer, diapers, eggs, fruit}.
An example rule for the supermarket could be {butter, bread} ⇒ {milk}, meaning that if butter and bread are bought, customers also buy milk.
In order to select interesting rules from the set of all possible rules, constraints on various measures of significance and interest are used. The best-known constraints are minimum thresholds on support and confidence.
Let X, Y be itemsets, X ⇒ Y an association rule and T a set of transactions of a given database.
Note: this example is extremely small. In practical applications, a rule needs a support of several hundred transactions before it can be considered statistically significant, and datasets often contain thousands or millions of transactions.
Support
Support is an indication of how frequently the itemset appears in the dataset.
In our example, it can be easier to explain support in terms of two separate itemsets A and B that occur together in the same transaction.
Using Table 2 as an example, the itemset has a support of 1/5 = 0.2 since it occurs in 20% of all transactions (1 out of 5 transactions). The argument of support of X is a set of preconditions, and thus becomes more restrictive as it grows (instead of more inclusive).
Furthermore, the itemset has a support of 1/5 = 0.2 as it appears in 20% of all transactions as well.
When using antecedents and consequents, it allows a data miner to determine the support of multiple items being bought together in comparison to the whole data set. For example, Table 2 shows that the rule "if milk is bought, then bread is bought" has a support of 0.4 or 40%. This is because in 2 out of 5 of the transactions, milk as well as bread are bought. In smaller data sets like this example, it is harder to see a strong correlation when there are few samples, but when the data set grows larger, support can be used to find correlation between two or more products in the supermarket example.
Minimum support thresholds are useful for determining which itemsets are preferred or interesting.
If we set the support threshold to ≥0.4 in Table 3, then any itemset that does not meet the minimum threshold of 0.4 would be removed. Minimum thresholds are used to remove samples where there is not a strong enough support or confidence to deem the sample important or interesting in the dataset.
Another way of finding interesting samples is to find the value of (support)×(confidence); this allows a data miner to see the samples where support and confidence are high enough to be highlighted in the dataset and prompt a closer look at the sample to find more information on the connection between the items.
Support can be beneficial for finding the connection between products in comparison to the whole dataset, whereas confidence looks at the connection between one or more items and another item. Below is a table that shows the comparison and contrast between support and support × confidence, using the information from Table 4 to derive the confidence values.
The support of X with respect to T is defined as the proportion of transactions in the dataset which contain the itemset X. Denoting a transaction by (i, t), where i is the unique identifier of the transaction and t is its itemset, the support may be written as:

supp(X) = |{(i, t) ∈ T : X ⊆ t}| / |T|
This notation can be used when defining more complicated datasets where the items and itemsets may not be as easy as our supermarket example above. Other examples of where support can be used is in finding groups of genetic mutations that work collectively to cause a disease, investigating the number of subscribers that respond to upgrade offers, and discovering which products in a drug store are never bought together.
Confidence
Confidence is the percentage of all transactions satisfying X that also satisfy Y.
With respect to T, the confidence value of an association rule, often denoted as conf(X ⇒ Y), is the ratio of transactions containing both X and Y to the total number of transactions containing X, where X is the antecedent and Y is the consequent.
Confidence can also be interpreted as an estimate of the conditional probability P(Y | X), the probability of finding the RHS of the rule in transactions under the condition that these transactions also contain the LHS.
It is commonly depicted as:
conf(X ⇒ Y) = supp(X ∪ Y) / supp(X)
The equation illustrates that confidence can be computed by calculating the co-occurrence of transactions X and Y within the dataset in ratio to transactions containing only X. This means that the number of transactions containing both X and Y is divided by those containing just X.
For example, Table 2 shows the rule {butter, bread} ⇒ {milk}, which has a confidence of 1/1 = 1.0 in the dataset, denoting that every time a customer buys butter and bread, they also buy milk. This particular example demonstrates the rule being correct 100% of the time for transactions containing both butter and bread. The rule {fruit} ⇒ {eggs}, however, has a confidence of 2/3 ≈ 0.67. This suggests that eggs are bought 67% of the times that fruit is bought. Within this particular dataset, fruit is purchased a total of 3 times, with two of those purchases including eggs.
For larger datasets, a minimum threshold, or a percentage cutoff, for the confidence can be useful for determining item relationships. When applying this method to some of the data in Table 2, information that does not meet the requirements is removed. Table 4 shows association rule examples where the minimum threshold for confidence is 0.5 (50%). Any data that does not have a confidence of at least 0.5 is omitted. Generating thresholds allows the association between items to become stronger as the data is further researched, by emphasizing those that co-occur the most. The table uses the confidence information from Table 3 to implement the Support × Confidence column, where the relationship between items via both their confidence and support, instead of just one concept, is highlighted. Ranking the rules by Support × Confidence multiplies the confidence of a particular rule by its support and is often implemented for a more in-depth understanding of the relationship between the items.
Overall, using confidence in association rule mining is a great way to bring awareness to data relations. Its greatest benefit is highlighting the relationship between particular items within the set, as it compares co-occurrences of items to the total occurrence of the antecedent in the specific rule. However, confidence is not the optimal method for every concept in association rule mining. The disadvantage of using it is that it does not offer multiple different outlooks on the associations. Unlike support, for instance, confidence does not provide the perspective of relationships between certain items in comparison to the entire dataset, so while milk and bread, for example, may occur 100% of the time for confidence, it only has a support of 0.4 (40%). This is why it is important to look at other viewpoints, such as Support × Confidence, instead of relying solely on one concept to define the relationships.
Lift
The lift of a rule is defined as:
lift(X ⇒ Y) = supp(X ∪ Y) / (supp(X) × supp(Y))
or the ratio of the observed support to that expected if X and Y were independent.
For example, the rule {milk, bread} ⇒ {butter} has a lift of 0.2 / (0.4 × 0.4) = 1.25.
If the rule had a lift of 1, it would imply that the probability of occurrence of the antecedent and that of the consequent are independent of each other. When two events are independent of each other, no rule can be drawn involving those two events.
If the lift is > 1, that lets us know the degree to which those two occurrences are dependent on one another, and makes those rules potentially useful for predicting the consequent in future data sets.
If the lift is < 1, that lets us know the items are substitutes for each other. This means that the presence of one item has a negative effect on the presence of the other item and vice versa.
The value of lift is that it considers both the support of the rule and the overall data set.
Conviction
The conviction of a rule is defined as conv(X ⇒ Y) = (1 − supp(Y)) / (1 − conf(X ⇒ Y)).
For example, the rule {milk, bread} ⇒ {butter} has a conviction of (1 − 0.4) / (1 − 0.5) = 1.2, and can be interpreted as the ratio of the expected frequency that X occurs without Y (that is to say, the frequency that the rule makes an incorrect prediction) if X and Y were independent, divided by the observed frequency of incorrect predictions. In this example, the conviction value of 1.2 shows that the rule would be incorrect 20% more often (1.2 times as often) if the association between X and Y were purely random chance.
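Since Table 2 itself is not reproduced in this extract, here is a hedged Python sketch of the four measures over an assumed toy database; the transaction rows below are illustrative, chosen only to be consistent with the supermarket items mentioned in the text:

```python
def support(itemset, transactions):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs, transactions):
    return support(lhs | rhs, transactions) / support(lhs, transactions)

def lift(lhs, rhs, transactions):
    return support(lhs | rhs, transactions) / (
        support(lhs, transactions) * support(rhs, transactions))

def conviction(lhs, rhs, transactions):
    return (1 - support(rhs, transactions)) / (
        1 - confidence(lhs, rhs, transactions))

# Illustrative transactions (an assumption, not the article's Table 2):
T = [{"milk", "bread"}, {"butter"}, {"beer", "diapers"},
     {"milk", "bread", "butter"}, {"bread"}]
print(support({"milk", "bread"}, T))                 # 0.4
print(confidence({"butter", "bread"}, {"milk"}, T))  # 1.0
print(lift({"milk"}, {"bread"}, T))                  # ~1.67
print(conviction({"bread"}, {"milk"}, T))            # 1.8
```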
Alternative measures of interestingness
In addition to confidence, other measures of interestingness for rules have been proposed. Some popular measures are:
All-confidence
Collective strength
Leverage
Several more measures are presented and compared by Tan et al. and by Hahsler. Looking for techniques that can model what the user has known (and using these models as interestingness measures) is currently an active research trend under the name of "Subjective Interestingness."
History
The concept of association rules was popularized particularly due to the 1993 article of Agrawal et al., which has acquired more than 23,790 citations according to Google Scholar, as of April 2021, and is thus one of the most cited papers in the Data Mining field. However, what is now called "association rules" was already introduced in the 1966 paper on GUHA, a general data mining method developed by Petr Hájek et al.
An early (circa 1989) use of minimum support and confidence to find all association rules is the Feature Based Modeling framework, which found all rules with support and confidence greater than user-defined constraints.
Statistically sound associations
One limitation of the standard approach to discovering associations is that by searching massive numbers of possible associations to look for collections of items that appear to be associated, there is a large risk of finding many spurious associations. These are collections of items that co-occur with unexpected frequency in the data, but only do so by chance. For example, suppose we are considering a collection of 10,000 items and looking for rules containing two items in the left-hand-side and 1 item in the right-hand-side. There are approximately 1,000,000,000,000 such rules. If we apply a statistical test for independence with a significance level of 0.05 it means there is only a 5% chance of accepting a rule if there is no association. If we assume there are no associations, we should nonetheless expect to find 50,000,000,000 rules. Statistically sound association discovery controls this risk, in most cases reducing the risk of finding any spurious associations to a user-specified significance level.
Algorithms
Many algorithms for generating association rules have been proposed.
Some well-known algorithms are Apriori, Eclat and FP-Growth, but they only do half the job, since they are algorithms for mining frequent itemsets. Another step needs to be done after to generate rules from frequent itemsets found in a database.
Apriori algorithm
Apriori was introduced by R. Agrawal and R. Srikant in 1994 for frequent item set mining and association rule learning. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often. The name of the algorithm is Apriori because it uses prior knowledge of frequent itemset properties.
Overview: Apriori uses a "bottom up" approach, where frequent subsets are extended one item at a time (a step known as candidate generation), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found. Apriori uses breadth-first search and a Hash tree structure to count candidate item sets efficiently. It generates candidate item sets of length from item sets of length . Then it prunes the candidates which have an infrequent sub pattern. According to the downward closure lemma, the candidate set contains all frequent -length item sets. After that, it scans the transaction database to determine frequent item sets among the candidates.
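A compact, hedged sketch of this level-wise generate-and-prune loop follows; the names are illustrative and the hash-tree optimization is omitted, so this is a plain reference version rather than the original authors' implementation:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise frequent itemset mining. min_support is an absolute
    count; candidates are pruned by the downward-closure property before
    each database scan (one scan per itemset length)."""
    items = sorted({i for t in transactions for i in t})
    level = [frozenset([i]) for i in items]
    frequent = []
    while level:
        counts = {c: sum(c <= t for t in transactions) for c in level}
        survivors = {c for c, n in counts.items() if n >= min_support}
        frequent += sorted(survivors, key=sorted)
        # join step: unions one item larger; prune step: all subsets frequent
        level = [c for c in {a | b for a in survivors for b in survivors
                             if len(a | b) == len(a) + 1}
                 if all(frozenset(s) in survivors
                        for s in combinations(c, len(c) - 1))]
    return frequent

T = [{"a", "b"}, {"a", "c"}, {"a", "b", "d"}]
print(apriori(T, 2))  # [frozenset({'a'}), frozenset({'b'}), frozenset({'a', 'b'})]
```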
Example: Assume that each row is a cancer sample with a certain combination of mutations labeled by a character in the alphabet. For example a row could have {a, c} which means it is affected by mutation 'a' and mutation 'c'.
Now we will generate the frequent item set by counting the number of occurrences of each character. This is also known as finding the support values. Then we will prune the item set by picking a minimum support threshold. For this pass of the algorithm we will pick 3.
Since all support values are three or above there is no pruning. The frequent item set is {a}, {b}, {c}, and {d}. After this we will repeat the process by counting pairs of mutations in the input set.
Now we will make our minimum support value 4 so only {a, d} will remain after pruning. Now we will use the frequent item set to make combinations of triplets. We will then repeat the process by counting occurrences of triplets of mutations in the input set.
Since we only have one item the next set of combinations of quadruplets is empty so the algorithm will stop.
Advantages and Limitations:
Apriori has some limitations. Candidate generation can result in large candidate sets. For example, 10^4 frequent 1-itemsets will generate 10^7 candidate 2-itemsets. The algorithm also needs to scan the database frequently, specifically n + 1 scans, where n is the length of the longest pattern. Apriori is slower than the Eclat algorithm. However, Apriori performs well compared to Eclat when the dataset is large, because in the Eclat algorithm the tid-lists become too large for memory if the dataset is too large. FP-growth outperforms both Apriori and Eclat. This is due to the FP-growth algorithm not having candidate generation or testing, using a compact data structure, and needing only two database scans (see below).
Eclat algorithm
Eclat (alt. ECLAT, stands for Equivalence Class Transformation) is a backtracking algorithm, which traverses the frequent itemset lattice graph in a depth-first search (DFS) fashion. Whereas the breadth-first search (BFS) traversal used in the Apriori algorithm will end up checking every subset of an itemset before checking it, DFS traversal checks larger itemsets and can save on checking the support of some of their subsets by virtue of the downward-closure property. Furthermore it will almost certainly use less memory as DFS has a lower space complexity than BFS.
To illustrate this, let there be a frequent itemset {a, b, c}. A DFS may check the nodes in the frequent itemset lattice in the following order: {a} → {a, b} → {a, b, c}, at which point it is known that {b}, {c}, {a, c}, {b, c} all satisfy the support constraint by the downward-closure property. BFS would explore each subset of {a, b, c} before finally checking it. As the size of an itemset increases, the number of its subsets undergoes combinatorial explosion.
It is suitable for both sequential as well as parallel execution with locality-enhancing properties.
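A hedged sketch of the DFS traversal over vertical tid-lists follows; all names are illustrative, and real Eclat implementations add diffsets and item-ordering heuristics:

```python
def eclat(prefix, items, min_support, out):
    """DFS over the itemset lattice using vertical tid-sets: the support
    of an extended itemset is the size of a set intersection, so no
    further database scans are needed after the vertical layout is built."""
    while items:
        item, tids = items.pop()
        if len(tids) >= min_support:   # infrequent branches are never entered
            out.append((prefix | {item}, len(tids)))
            suffix = [(other, tids & otids) for other, otids in items]
            eclat(prefix | {item}, suffix, min_support, out)
    return out

# Vertical layout: item -> ids of the transactions that contain it.
vertical = [("a", {0, 1, 2}), ("b", {0, 2}), ("c", {1})]
print(eclat(frozenset(), vertical, 2, []))
# [(frozenset({'b'}), 2), (frozenset({'a', 'b'}), 2), (frozenset({'a'}), 3)]
```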
FP-growth algorithm
FP stands for frequent pattern.
In the first pass, the algorithm counts the occurrences of items (attribute-value pairs) in the dataset of transactions, and stores these counts in a 'header table'. In the second pass, it builds the FP-tree structure by inserting transactions into a trie.
Items in each transaction have to be sorted by descending order of their frequency in the dataset before being inserted so that the tree can be processed quickly.
Items in each transaction that do not meet the minimum support requirement are discarded.
If many transactions share most frequent items, the FP-tree provides high compression close to tree root.
Recursive processing of this compressed version of the main dataset grows frequent item sets directly, instead of generating candidate items and testing them against the entire database (as in the apriori algorithm).
Growth begins from the bottom of the header table, i.e. the item with the smallest support, by finding all sorted transactions that end in that item. Call this item I.
A new conditional tree is created which is the original FP-tree projected onto I. The supports of all nodes in the projected tree are re-counted, with each node getting the sum of its children counts. Nodes (and hence subtrees) that do not meet the minimum support are pruned. Recursive growth ends when no individual items conditional on I meet the minimum support threshold. The resulting paths from root to I will be frequent itemsets. After this step, processing continues with the next least-supported header item of the original FP-tree.
Once the recursive process has completed, all frequent item sets will have been found, and association rule creation begins.
Others
ASSOC
The ASSOC procedure is a GUHA method which mines for generalized association rules using fast bitstring operations. The association rules mined by this method are more general than those output by apriori; for example, "items" can be connected both with conjunction and disjunctions, and the relation between antecedent and consequent of the rule is not restricted to setting minimum support and confidence as in apriori: an arbitrary combination of supported interest measures can be used.
OPUS search
OPUS is an efficient algorithm for rule discovery that, in contrast to most alternatives, does not require either monotone or anti-monotone constraints such as minimum support. Initially used to find rules for a fixed consequent it has subsequently been extended to find rules with any item as a consequent. OPUS search is the core technology in the popular Magnum Opus association discovery system.
Lore
A famous story about association rule mining is the "beer and diaper" story. A purported survey of behavior of supermarket shoppers discovered that customers (presumably young men) who buy diapers tend also to buy beer. This anecdote became popular as an example of how unexpected association rules might be found from everyday data. There are varying opinions as to how much of the story is true. Daniel Powers says:
In 1992, Thomas Blischok, manager of a retail consulting group at Teradata, and his staff prepared an analysis of 1.2 million market baskets from about 25 Osco Drug stores. Database queries were developed to identify affinities. The analysis "did discover that between 5:00 and 7:00 p.m. that consumers bought beer and diapers". Osco managers did NOT exploit the beer and diapers relationship by moving the products closer together on the shelves.
Other types of association rule mining
Multi-Relation Association Rules (MRAR): These are association rules where each item may have several relations. These relations indicate indirect relationships between the entities. Consider the following MRAR where the first item consists of three relations live in, nearby and humid: “Those who live in a place which is nearby a city with humid climate type and also are younger than 20 their health condition is good”. Such association rules can be extracted from RDBMS data or semantic web data.
Contrast set learning is a form of associative learning. Contrast set learners use rules that differ meaningfully in their distribution across subsets.
Weighted class learning is another form of associative learning where weights may be assigned to classes to give focus to a particular issue of concern for the consumer of the data mining results.
High-order pattern discovery facilitates the capture of high-order (polythetic) patterns or event associations that are intrinsic to complex real-world data.
K-optimal pattern discovery provides an alternative to the standard approach to association rule learning which requires that each pattern appear frequently in the data.
Approximate Frequent Itemset mining is a relaxed version of Frequent Itemset mining that allows some of the items in some of the rows to be 0.
Generalized Association Rules hierarchical taxonomy (concept hierarchy)
Quantitative Association Rules categorical and quantitative data
Interval Data Association Rules e.g. partition the age into 5-year-increment ranged
Sequential pattern mining discovers subsequences that are common to more than minsup (minimum support threshold) sequences in a sequence database, where minsup is set by the user. A sequence is an ordered list of transactions.
Subspace Clustering, a specific type of clustering high-dimensional data, is in many variants also based on the downward-closure property for specific clustering models.
Warmr, shipped as part of the ACE data mining suite, allows association rule learning for first order relational rules.
See also
Sequence mining
Production system (computer science)
Learning classifier system
Rule-based machine learning
References
Bibliographies
Annotated Bibliography on Association Rules by M. Hahsler
Data management
Data mining | Association rule learning | [
"Technology"
] | 5,657 | [
"Data management",
"Data"
] |
577,097 | https://en.wikipedia.org/wiki/Quadtree | A quadtree is a tree data structure in which each internal node has exactly four children. Quadtrees are the two-dimensional analog of octrees and are most often used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions. The data associated with a leaf cell varies by application, but the leaf cell represents a "unit of interesting spatial information".
The subdivided regions may be square or rectangular, or may have arbitrary shapes. This data structure was named a quadtree by Raphael Finkel and J.L. Bentley in 1974. A similar partitioning is also known as a Q-tree.
All forms of quadtrees share some common features:
They decompose space into adaptable cells.
Each cell (or bucket) has a maximum capacity. When maximum capacity is reached, the bucket splits.
The tree directory follows the spatial decomposition of the quadtree.
A tree-pyramid (T-pyramid) is a "complete" tree; every node of the T-pyramid has four child nodes except leaf nodes; all leaves are on the same level, the level that corresponds to individual pixels in the image. The data in a tree-pyramid can be stored compactly in an array as an implicit data structure similar to the way a complete binary tree can be stored compactly in an array.
Types
Quadtrees may be classified according to the type of data they represent, including areas, points, lines and curves. Quadtrees may also be classified by whether the shape of the tree is independent of the order in which data is processed. The following are common types of quadtrees.
Region quadtree
The region quadtree represents a partition of space in two dimensions by decomposing the region into four equal quadrants, subquadrants, and so on with each leaf node containing data corresponding to a specific subregion. Each node in the tree either has exactly four children, or has no children (a leaf node). The height of quadtrees that follow this decomposition strategy (i.e. subdividing subquadrants as long as there is interesting data in the subquadrant for which more refinement is desired) is sensitive to and dependent on the spatial distribution of interesting areas in the space being decomposed. The region quadtree is a type of trie.
A region quadtree with a depth of n may be used to represent an image consisting of 2n × 2n pixels, where each pixel value is 0 or 1. The root node represents the entire image region. If the pixels in any region are not entirely 0s or 1s, it is subdivided. In this application, each leaf node represents a block of pixels that are all 0s or all 1s. Note the potential savings in terms of space when these trees are used for storing images; images often have many regions of considerable size that have the same colour value throughout. Rather than store a big 2-D array of every pixel in the image, a quadtree can capture the same information potentially many divisive levels higher than the pixel-resolution sized cells that we would otherwise require. The tree resolution and overall size are bounded by the pixel and image sizes.
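As an illustration (the nested-list image and tuple-based nodes below are assumptions of this sketch, not taken from the article), a region quadtree for a binary image can be built recursively:

```python
def build_region_quadtree(image, x, y, size):
    """Recursively encode a 2^n x 2^n binary image: a leaf stores the
    uniform pixel value of its block, an internal node stores its four
    quadrants (NW, NE, SW, SE)."""
    first = image[y][x]
    if all(image[y + dy][x + dx] == first
           for dy in range(size) for dx in range(size)):
        return first                        # uniform block -> leaf
    half = size // 2
    return (build_region_quadtree(image, x, y, half),                 # NW
            build_region_quadtree(image, x + half, y, half),          # NE
            build_region_quadtree(image, x, y + half, half),          # SW
            build_region_quadtree(image, x + half, y + half, half))   # SE

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 1, 1, 1],
         [1, 1, 1, 1]]
print(build_region_quadtree(image, 0, 0, 4))  # (0, 1, (0, 1, 1, 1), 1)
```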
A region quadtree may also be used as a variable resolution representation of a data field. For example, the temperatures in an area may be stored as a quadtree, with each leaf node storing the average temperature over the subregion it represents.
Point quadtree
The point quadtree is an adaptation of a binary tree used to represent two-dimensional point data. It shares the features of all quadtrees but is a true tree as the center of a subdivision is always on a point. It is often very efficient in comparing two-dimensional, ordered data points, usually operating in O(log n) time. Point quadtrees are worth mentioning for completeness, but they have been surpassed by k-d trees as tools for generalized binary search.
Point quadtrees are constructed as follows. Given the next point to insert, we find the cell in which it lies and add it to the tree. The new point is added such that the cell that contains it is divided into quadrants by the vertical and horizontal lines that run through the point. Consequently, cells are rectangular but not necessarily square. In these trees, each node contains one of the input points.
Since the division of the plane is decided by the order of point-insertion, the tree's height is sensitive to and dependent on insertion order. Inserting in a "bad" order can lead to a tree of height linear in the number of input points (at which point it becomes a linked-list). If the point-set is static, pre-processing can be done to create a tree of balanced height.
Node structure for a point quadtree
A node of a point quadtree is similar to a node of a binary tree, with the major difference being that it has four pointers (one for each quadrant) instead of two ("left" and "right") as in an ordinary binary tree. Also a key is usually decomposed into two parts, referring to x and y coordinates. Therefore, a node contains the following information (see the sketch after this list):
four pointers: quad[‘NW’], quad[‘NE’], quad[‘SW’], and quad[‘SE’]
point; which in turn contains:
key; usually expressed as x, y coordinates
value; for example a name
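A minimal Python sketch of this node structure and the insertion rule described above; the tie-breaking convention for points on a boundary is an arbitrary assumption:

```python
class PointQuadNode:
    """One node of a point quadtree: the stored point splits its cell
    into four quadrants. Field names follow the list above; 'value'
    carries the payload (e.g. a name)."""
    def __init__(self, x, y, value):
        self.x, self.y, self.value = x, y, value
        self.quad = {"NW": None, "NE": None, "SW": None, "SE": None}

def insert(node, x, y, value):
    if node is None:
        return PointQuadNode(x, y, value)
    # choose the quadrant of (x, y) relative to the node's point
    # (points exactly on a boundary go south/east here by convention)
    direction = ("N" if y > node.y else "S") + ("E" if x >= node.x else "W")
    node.quad[direction] = insert(node.quad[direction], x, y, value)
    return node

root = None
for px, py, name in [(5, 5, "a"), (2, 8, "b"), (8, 2, "c")]:
    root = insert(root, px, py, name)   # "b" lands NW, "c" lands SE
```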
Point-region (PR) quadtree
Point-region (PR) quadtrees are very similar to region quadtrees. The difference is the type of information stored about the cells. In a region quadtree, a uniform value is stored that applies to the entire area of the cell of a leaf. The cells of a PR quadtree, however, store a list of points that exist within the cell of a leaf. As mentioned previously, for trees following this decomposition strategy the height depends on the spatial distribution of the points. Like the point quadtree, the PR quadtree may also have a linear height when given a "bad" set.
Edge quadtree
Edge quadtrees (much like PM quadtrees) are used to store lines rather than points. Curves are approximated by subdividing cells to a very fine resolution, specifically until there is a single line segment per cell. Near corners/vertices, edge quadtrees will continue dividing until they reach their maximum level of decomposition. This can result in extremely unbalanced trees which may defeat the purpose of indexing.
Polygonal map (PM) quadtree
The polygonal map quadtree (or PM Quadtree) is a variation of quadtree which is used to store collections of polygons that may be degenerate (meaning that they have isolated vertices or edges).
A big difference between PM quadtrees and edge quadtrees is that the cell under consideration is not subdivided if the segments meet at a vertex in the cell.
There are three main classes of PM Quadtrees, which vary depending on what information they store within each black node. PM3 quadtrees can store any number of non-intersecting edges and at most one point. PM2 quadtrees are the same as PM3 quadtrees except that all edges must share the same end point. Finally, PM1 quadtrees are similar to PM2, except that a black node can contain either a point and its edges or just a set of edges that share a point; it cannot contain a point together with a set of edges that do not pass through that point.
Compressed quadtrees
This section summarizes a subsection from a book by Sariel Har-Peled.
If we were to store every node corresponding to a subdivided cell, we may end up storing a lot of empty nodes. We can cut down on the size of such sparse trees by only storing subtrees whose leaves have interesting data (i.e. "important subtrees"). We can actually cut down on the size even further. When we only keep important subtrees, the pruning process may leave long paths in the tree where the intermediate nodes have degree two (a link to one parent and one child). It turns out that we only need to store the node at the beginning of this path (and associate some meta-data with it to represent the removed nodes) and attach the subtree rooted at its end to this stored node. It is still possible for these compressed trees to have a linear height when given "bad" input points.
Although we trim a lot of the tree when we perform this compression, it is still possible to achieve logarithmic-time search, insertion, and deletion by taking advantage of Z-order curves. The Z-order curve maps each cell of the full quadtree (and hence even the compressed quadtree) in O(1) time to a one-dimensional line (and maps it back in O(1) time too), creating a total order on the elements. Therefore, we can store the quadtree in a data structure for ordered sets (in which we store the nodes of the tree).
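The Z-order (Morton) mapping is plain bit interleaving of the cell coordinates, which is what makes both directions constant-time on machine words; a minimal sketch (Python, illustrative names):

def morton(x, y, bits=16):
    # Interleave the bits of x and y; x takes the even positions, y the odd ones.
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

# Cells sorted by Morton code follow the Z-shaped visiting order of the quadtree.
cells = sorted(((x, y) for x in range(4) for y in range(4)),
               key=lambda c: morton(c[0], c[1]))
print(cells[:4])  # [(0, 0), (1, 0), (0, 1), (1, 1)]: one quadrant comes first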
We must state a reasonable assumption before we continue: we assume that given two real numbers expressed as binary, we can compute in O(1) time the index of the first bit in which they differ. We also assume that we can compute in O(1) time the lowest common ancestor of two points/cells in the quadtree and establish their relative Z-ordering, and we can compute the floor function in O(1) time.
With these assumptions, point location of a given point q (i.e. determining the cell that would contain q), insertion, and deletion operations can all be performed in O(log n) time (i.e. the time it takes to do a search in the underlying ordered set data structure).
To perform a point location for q (i.e. find its cell in the compressed tree):
Find the existing cell in the compressed tree that comes before q in the Z-order. Call this cell v.
If q is in v, return v.
Else, find what would have been the lowest common ancestor of the point q and the cell v in an uncompressed quadtree. Call this ancestor cell u.
Find the existing cell in the compressed tree that comes before u in the Z-order and return it.
Without going into specific details, to perform insertions and deletions we first do a point location for the thing we want to insert/delete, and then insert/delete it. Care must be taken to reshape the tree as appropriate, creating and removing nodes as needed.
Some common uses of quadtrees
Image representation
Image processing
Mesh generation
Spatial indexing, point location queries, and range queries
Efficient collision detection in two dimensions
View frustum culling of terrain data
Storing sparse data, such as a formatting information for a spreadsheet or for some matrix calculations
Solution of multidimensional fields (computational fluid dynamics, electromagnetism)
Conway's Game of Life simulation program.
State estimation
Quadtrees are also used in the area of fractal image analysis
Maximum disjoint sets
Image processing using quadtrees
Quadtrees, particularly the region quadtree, have lent themselves well to image processing applications. We will limit our discussion to binary image data, though region quadtrees and the image processing operations performed on them are just as suitable for colour images.
Image union / intersection
One of the advantages of using quadtrees for image manipulation is that the set operations of union and intersection can be done simply and quickly.
Given two binary images, the image union (also called overlay) produces an image wherein a pixel is black if either of the input images has a black pixel in the same location. That is, a pixel in the output image is white only when the corresponding pixel in both input images is white, otherwise the output pixel is black. Rather than do the operation pixel by pixel, we can compute the union more efficiently by leveraging the quadtree's ability to represent multiple pixels with a single node. For the purposes of discussion below, if a subtree contains both black and white pixels we will say that the root of that subtree is coloured grey.
The algorithm works by traversing the two input quadtrees (T1 and T2) while building the output quadtree T. Informally, the algorithm is as follows. Consider the nodes v1 and v2 corresponding to the same region in the images.
If v1 or v2 is black, the corresponding node is created in T and is coloured black. If only one of them is black and the other is grey, the grey node will contain a subtree underneath. This subtree need not be traversed.
If v1 (respectively, v2) is white, v2 (respectively, v1) and the subtree underneath it (if any) is copied to T.
If both v1 and v2 are grey, then the corresponding children of v1 and v2 are considered.
While this algorithm works, it does not by itself guarantee a minimally sized quadtree. For example, consider the result if we were to union a checkerboard (where every tile is a pixel) of size 2^k × 2^k with its complement. The result is a giant black square which should be represented by a quadtree with just the root node (coloured black), but instead the algorithm produces a full 4-ary tree of depth k. To fix this, we perform a bottom-up traversal of the resulting quadtree where we check if the four children nodes have the same colour, in which case we replace their parent with a leaf of the same colour.
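A minimal sketch of the union with this bottom-up collapsing pass (Python; the 'B'/'W' leaf encoding and tuple-of-children representation are assumptions made for the sketch, not the article's notation):

# Leaves are 'B' or 'W'; a grey node is a tuple of four children (NW, NE, SW, SE).
def union(v1, v2):
    if v1 == 'B' or v2 == 'B':
        return 'B'                       # black wins; any grey subtree is skipped
    if v1 == 'W':
        return v2                        # copy the non-white side verbatim
    if v2 == 'W':
        return v1
    merged = tuple(union(a, b) for a, b in zip(v1, v2))
    if all(c == merged[0] for c in merged) and merged[0] in ('B', 'W'):
        return merged[0]                 # bottom-up pass: collapse uniform nodes
    return merged

checker = ('B', 'W', 'W', 'B')
print(union(checker, ('W', 'B', 'B', 'W')))  # 'B': collapsed to a single leaf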
The intersection of two images is almost the same algorithm. One way to think about the intersection of the two images is that we are doing a union with respect to the white pixels. As such, to perform the intersection we swap the mentions of black and white in the union algorithm.
Connected component labelling
Consider two neighbouring black pixels in a binary image. They are adjacent if they share a bounding horizontal or vertical edge. In general, two black pixels are connected if one can be reached from the other by moving only to adjacent pixels (i.e. there is a path of black pixels between them where each consecutive pair is adjacent). Each maximal set of connected black pixels is a connected component. Using the quadtree representation of images, Samet showed how we can find and label these connected components in time proportional to the size of the quadtree. This algorithm can also be used for polygon colouring.
The algorithm works in three steps:
establish the adjacency relationships between black pixels
process the equivalence relations from the first step to obtain one unique label for each connected component
label the black pixels with the label associated with their connected component
To simplify the discussion, let us assume the children of a node in the quadtree follow the Z-order (SW, NW, SE, NE). Since we can count on this structure, for any cell we know how to navigate the quadtree to find the adjacent cells in the different levels of the hierarchy.
Step one is accomplished with a post-order traversal of the quadtree. For each black leaf v we look at the node or nodes representing cells that are Northern neighbours and Eastern neighbours (i.e. the Northern and Eastern cells that share edges with the cell of v). Since the tree is organized in Z-order, we have the invariant that the Southern and Western neighbours have already been taken care of and accounted for. Let the Northern or Eastern neighbour currently under consideration be u. If u represents black pixels:
If only one of u or v has a label, assign that label to the other cell
If neither of them has a label, create one and assign it to both of them
If u and v have different labels, record this label equivalence and move on
Step two can be accomplished using the union-find data structure. We start with each unique label as a separate set. For every equivalence relation noted in the first step, we union the corresponding sets. Afterwards, each distinct remaining set will be associated with a distinct connected component in the image.
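Step two needs nothing beyond a standard union–find; a minimal sketch (Python, illustrative names):

# Union-find with path compression, enough to merge label equivalences.
parent = {}

def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path compression
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

# Equivalences recorded in step one, e.g. labels 1~2, 3~4, then 2~3:
for a, b in [(1, 2), (3, 4), (2, 3)]:
    union(a, b)
print(find(1) == find(4))  # True: all four labels name one connected component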
Step three performs another post-order traversal. This time, for each black node v we use the union-find's find operation (with the old label of v) to find and assign its new label (associated with the connected component of which v is part).
Mesh generation using quadtrees
This section summarizes a chapter from a book by Har-Peled and de Berg et al.
Mesh generation is essentially the triangulation of a point set for which further processing may be performed. As such, it is desirable for the resulting triangulation to have certain properties (like non-uniformity, triangles that are not "too skinny", large triangles in sparse areas and small triangles in dense ones, etc.) to make further processing quicker and less error-prone. Quadtrees built on the point set can be used to create meshes with these desired properties.
Consider a leaf of the quadtree and its corresponding cell c. We say c is balanced (for mesh generation) if the cell's sides are intersected by the corner points of neighbouring cells at most once on each side. This means that the quadtree levels of leaves adjacent to c differ by at most one from the level of c. When this is true for all leaves, we say the whole quadtree is balanced (for mesh generation).
Consider the cell c and the 5 × 5 neighbourhood of same-sized cells centred at c. We call this neighbourhood the extended cluster. We say the quadtree is well-balanced if it is balanced, and for every leaf that contains a point of the point set, its extended cluster is also in the quadtree and the extended cluster contains no other point of the point set.
Creating the mesh is done as follows:
Build a quadtree on the input points.
Ensure the quadtree is balanced. For every leaf, if there is a neighbour that is too large, subdivide the neighbour. This is repeated until the tree is balanced. We also make sure that for a leaf with a point in it, the nodes for each leaf's extended cluster are in the tree.
For every leaf node that contains a point, if the extended cluster contains another point, we further subdivide the tree and rebalance as necessary. If we needed to subdivide, for each child of the subdivided leaf we ensure the nodes of that child's extended cluster are in the tree (and re-balance as required).
Repeat the previous step until the tree is well-balanced.
Transform the quadtree into a triangulation.
We consider the corner points of the tree cells as vertices in our triangulation. Before the transformation step we have a bunch of boxes with points in some of them. The transformation step is done in the following manner: for each point, warp the closest corner of its cell to meet it and triangulate the resulting four quadrangles to make "nice" triangles (the interested reader is referred to chapter 12 of Har-Peled for more details on what makes "nice" triangles).
The remaining squares are triangulated according to some simple rules. For each regular square (no points within and no corner points in its sides), introduce the diagonal. Due to the way in which we separated points with the well-balancing property, no square with a corner intersecting a side is one that was warped. As such, we can triangulate squares with intersecting corners as follows. If there is one intersected side, the square becomes three triangles by adding the long diagonals connecting the intersection with opposite corners. If there are four intersected sides, we split the square in half by adding an edge between two of the four intersections, and then connect these two endpoints to the remaining two intersection points. For the other squares, we introduce a point in the middle and connect it to all four corners of the square as well as each intersection point.
At the end of it all, we have a nice triangulated mesh of our point set built from a quadtree.
Pseudocode
The following pseudo code shows one means of implementing a quadtree which handles only points. There are other approaches available.
Prerequisites
It is assumed these structures are used.
// Simple coordinate object to represent points and vectors
struct XY
{
float x;
float y;
function __construct(float _x, float _y) {...}
}
// Axis-aligned bounding box with half dimension and center
struct AABB
{
XY center;
float halfDimension;
function __construct(XY _center, float _halfDimension) {...}
function containsPoint(XY point) {...}
function intersectsAABB(AABB other) {...}
}
QuadTree class
This class represents both one quad tree and the node where it is rooted.
class QuadTree
{
// Arbitrary constant to indicate how many elements can be stored in this quad tree node
constant int QT_NODE_CAPACITY = 4;
// Axis-aligned bounding box stored as a center with half-dimensions
// to represent the boundaries of this quad tree
AABB boundary;
// Points in this quad tree node
Array of XY [size = QT_NODE_CAPACITY] points;
// Children
QuadTree* northWest;
QuadTree* northEast;
QuadTree* southWest;
QuadTree* southEast;
// Methods
function __construct(AABB _boundary) {...}
function insert(XY p) {...}
function subdivide() {...} // create four children that fully divide this quad into four quads of equal area
function queryRange(AABB range) {...}
}
Insertion
The following method inserts a point into the appropriate quad of a quadtree, splitting if necessary.
class QuadTree
{
...
// Insert a point into the QuadTree
function insert(XY p)
{
// Ignore objects that do not belong in this quad tree
if (!boundary.containsPoint(p))
return false; // object cannot be added
// If there is space in this quad tree and it doesn't have subdivisions, add the object here
if (points.size < QT_NODE_CAPACITY && northWest == null)
{
points.append(p);
return true;
}
// Otherwise, subdivide and then add the point to whichever node will accept it
if (northWest == null)
subdivide();
// We have to add the points/data contained in this quad array to the new quads if we only want
// the last node to hold the data
if (northWest->insert(p)) return true;
if (northEast->insert(p)) return true;
if (southWest->insert(p)) return true;
if (southEast->insert(p)) return true;
// Otherwise, the point cannot be inserted for some unknown reason (this should never happen)
return false;
}
}
Query range
The following method finds all points contained within a range.
class QuadTree
{
...
// Find all points that appear within a range
function queryRange(AABB range)
{
// Prepare an array of results
Array of XY pointsInRange;
// Automatically abort if the range does not intersect this quad
if (!boundary.intersectsAABB(range))
return pointsInRange; // empty list
// Check objects at this quad level
for (int p = 0; p < points.size; p++)
{
if (range.containsPoint(points[p]))
pointsInRange.append(points[p]);
}
// Terminate here, if there are no children
if (northWest == null)
return pointsInRange;
// Otherwise, add the points from the children
pointsInRange.appendArray(northWest->queryRange(range));
pointsInRange.appendArray(northEast->queryRange(range));
pointsInRange.appendArray(southWest->queryRange(range));
pointsInRange.appendArray(southEast->queryRange(range));
return pointsInRange;
}
}
See also
Adaptive mesh refinement
Binary space partitioning
Binary tiling
k-d tree
Octree
R-tree
UB-tree
Spatial database
Subpaving
Z-order curve
References
Surveys by Aluru and Samet give a nice overview of quadtrees.
Notes
General references
Chapter 14: Quadtrees: pp. 291–306.
Trees (data structures)
Database index techniques
Geometric data structures
Rectangular subdivisions | Quadtree | [
"Physics"
] | 5,031 | [
"Tessellation",
"Rectangular subdivisions",
"Symmetry"
] |
577,162 | https://en.wikipedia.org/wiki/Relativistic%20wave%20equations | In physics, specifically relativistic quantum mechanics (RQM) and its applications to particle physics, relativistic wave equations predict the behavior of particles at high energies and velocities comparable to the speed of light. In the context of quantum field theory (QFT), the equations determine the dynamics of quantum fields.
The solutions to the equations, universally denoted as ψ or Ψ (Greek psi), are referred to as "wave functions" in the context of RQM, and "fields" in the context of QFT. The equations themselves are called "wave equations" or "field equations", because they have the mathematical form of a wave equation or are generated from a Lagrangian density and the field-theoretic Euler–Lagrange equations (see classical field theory for background).
In the Schrödinger picture, the wave function or field is the solution to the Schrödinger equation,

i\hbar\frac{\partial \psi}{\partial t} = \hat{H}\psi

one of the postulates of quantum mechanics. All relativistic wave equations can be constructed by specifying various forms of the Hamiltonian operator Ĥ describing the quantum system. Alternatively, Feynman's path integral formulation uses a Lagrangian rather than a Hamiltonian operator.
More generally – the modern formalism behind relativistic wave equations is Lorentz group theory, wherein the spin of the particle has a correspondence with the representations of the Lorentz group.
History
Early 1920s: Classical and quantum mechanics
The failure of classical mechanics applied to molecular, atomic, and nuclear systems and smaller induced the need for a new mechanics: quantum mechanics. The mathematical formulation was led by De Broglie, Bohr, Schrödinger, Pauli, Heisenberg, and others, around the mid-1920s, and at that time was analogous to that of classical mechanics. The Schrödinger equation and the Heisenberg picture resemble the classical equations of motion in the limit of large quantum numbers and as the reduced Planck constant ħ, the quantum of action, tends to zero. This is the correspondence principle. At this point, special relativity was not fully combined with quantum mechanics, so the Schrödinger and Heisenberg formulations, as originally proposed, could not be used in situations where the particles travel near the speed of light, or when the number of each type of particle changes (this happens in real particle interactions; the numerous forms of particle decays, annihilation, matter creation, pair production, and so on).
Late 1920s: Relativistic quantum mechanics of spin-0 and spin-1/2 particles
A description of quantum mechanical systems which could account for relativistic effects was sought by many theoretical physicists from the late 1920s to the mid-1940s. The first basis for relativistic quantum mechanics, i.e. special relativity applied together with quantum mechanics, was found by all those who discovered what is frequently called the Klein–Gordon equation:

\left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 + \frac{m^2 c^2}{\hbar^2}\right)\psi = 0 \qquad (1)

by inserting the energy operator and momentum operator into the relativistic energy–momentum relation:

E^2 - (pc)^2 = (mc^2)^2 \qquad (2)

The solutions to (1) are scalar fields. The KG equation is undesirable due to its prediction of negative energies and probabilities, as a result of the quadratic nature of (2) – inevitable in a relativistic theory. This equation was initially proposed by Schrödinger, and he discarded it for such reasons, only to realize a few months later that its non-relativistic limit (what is now called the Schrödinger equation) was still of importance. Nevertheless, (1) is applicable to spin-0 bosons.
Neither the non-relativistic nor relativistic equations found by Schrödinger could predict the fine structure in the hydrogen spectral series. The mysterious underlying property was spin. The first two-dimensional spin matrices (better known as the Pauli matrices) were introduced by Pauli in the Pauli equation; the Schrödinger equation with a non-relativistic Hamiltonian including an extra term for particles in magnetic fields, but this was phenomenological. Weyl found a relativistic equation in terms of the Pauli matrices; the Weyl equation, for massless spin-1/2 fermions. The problem was resolved by Dirac in the late 1920s, when he furthered the application of equation (2) to the electron – by various manipulations he factorized the equation into the form:

\left(\frac{E}{c} - \boldsymbol{\alpha}\cdot\mathbf{p} - \beta mc\right)\psi = 0 \qquad (3A)

\left(\frac{E}{c} + \boldsymbol{\alpha}\cdot\mathbf{p} + \beta mc\right)\psi = 0 \qquad (3B)
and one of these factors is the Dirac equation (see below), upon inserting the energy and momentum operators. For the first time, this introduced new four-dimensional spin matrices α and β in a relativistic wave equation, and explained the fine structure of hydrogen. The solutions to (3A) and (3B) are multi-component spinor fields, and each component satisfies (1). A remarkable result of spinor solutions is that half of the components describe a particle while the other half describe an antiparticle; in this case the electron and positron. The Dirac equation is now known to apply for all massive spin-1/2 fermions. In the non-relativistic limit, the Pauli equation is recovered, while the massless case results in the Weyl equation.
Although a landmark in quantum theory, the Dirac equation is only true for spin-1/2 fermions, and still predicts negative energy solutions, which caused controversy at the time (in particular – not all physicists were comfortable with the "Dirac sea" of negative energy states).
1930s–1960s: Relativistic quantum mechanics of higher-spin particles
The natural problem became clear: to generalize the Dirac equation to particles with any spin; both fermions and bosons, and in the same equations their antiparticles (possible because of the spinor formalism introduced by Dirac in his equation, and then-recent developments in spinor calculus by van der Waerden in 1929), and ideally with positive energy solutions.
This was introduced and solved by Majorana in 1932, by a deviated approach to Dirac. Majorana considered one "root" of (3A):

\left(\frac{E}{c} - \boldsymbol{\alpha}\cdot\mathbf{p} - \beta mc\right)\psi = 0

where ψ is a spinor field, now with infinitely many components, irreducible to a finite number of tensors or spinors, to remove the indeterminacy in sign. The matrices α and β are infinite-dimensional matrices, related to infinitesimal Lorentz transformations. He did not demand that each component of ψ satisfy equation (1); instead he regenerated the equation using a Lorentz-invariant action, via the principle of least action, and application of Lorentz group theory.
Majorana produced other important contributions that were unpublished, including wave equations of various dimensions (5, 6, and 16). They were anticipated later (in a more involved way) by de Broglie (1934), and Duffin, Kemmer, and Petiau (around 1938–1939) see Duffin–Kemmer–Petiau algebra. The Dirac–Fierz–Pauli formalism was more sophisticated than Majorana's, as spinors were new mathematical tools in the early twentieth century, although Majorana's paper of 1932 was difficult to fully understand; it took Pauli and Wigner some time to understand it, around 1940.
Dirac in 1936, and Fierz and Pauli in 1939, built equations from irreducible spinors A and B, symmetric in all indices, for a massive particle of spin n + 1/2 for integer n (see Van der Waerden notation for the meaning of the dotted indices), where p is the momentum as a covariant spinor operator. For n = 0, the equations reduce to the coupled Dirac equations, and A and B together transform as the original Dirac spinor. Eliminating either A or B shows that A and B each fulfill (1). The direct derivation of the Dirac–Fierz–Pauli equations using the Bargmann–Wigner operators is given in the literature.
In 1941, Rarita and Schwinger focussed on spin-3/2 particles and derived the Rarita–Schwinger equation, including a Lagrangian to generate it, and later generalized the equations analogous to spin n + 1/2 for integer n. In 1945, Pauli suggested Majorana's 1932 paper to Bhabha, who returned to the general ideas introduced by Majorana in 1932. Bhabha and Lubanski proposed a completely general set of equations by replacing the mass terms in (3A) and (3B) by an arbitrary constant, subject to a set of conditions which the wave functions must obey.
Finally, in the year 1948 (the same year as Feynman's path integral formulation was cast), Bargmann and Wigner formulated the general equation for massive particles which could have any spin, by considering the Dirac equation with a totally symmetric finite-component spinor, and using Lorentz group theory (as Majorana did): the Bargmann–Wigner equations. In the early 1960s, a reformulation of the Bargmann–Wigner equations was made by H. Joos and Steven Weinberg, the Joos–Weinberg equation. Various theorists at this time did further research in relativistic Hamiltonians for higher spin particles.
1960s–present
The relativistic description of particles with spin has been a difficult problem in quantum theory. It is still an area of present-day research, because the problem is only partially solved; including interactions in the equations is problematic, and paradoxical predictions (even from the Dirac equation) are still present.
Linear equations
The following equations have solutions which satisfy the superposition principle, that is, the wave functions are additive.
Throughout, the standard conventions of tensor index notation and Feynman slash notation are used, including Greek indices which take the values 1, 2, 3 for the spatial components and 0 for the timelike component of the indexed quantities. The wave functions are denoted ψ, and ∂μ are the components of the four-gradient operator.
In matrix equations, the Pauli matrices are denoted by σ^μ, in which μ = 0, 1, 2, 3, where σ^0 is the 2 × 2 identity matrix:

\sigma^0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}

and the other matrices have their usual representations. The expression

\sigma^\mu \partial_\mu \equiv \sigma^0 \partial_0 + \sigma^1 \partial_1 + \sigma^2 \partial_2 + \sigma^3 \partial_3

is a 2 × 2 matrix operator which acts on 2-component spinor fields.
The gamma matrices are denoted by γ^μ, in which again μ = 0, 1, 2, 3, and there are a number of representations to select from. The matrix γ^0 is not necessarily the identity matrix. The expression

\gamma^\mu \partial_\mu \equiv \gamma^0 \partial_0 + \gamma^1 \partial_1 + \gamma^2 \partial_2 + \gamma^3 \partial_3

is a 4 × 4 matrix operator which acts on 4-component spinor fields.
Note that terms such as "mc" scalar multiply an identity matrix of the relevant dimension, the common sizes are 2 × 2 or 4 × 4, and are conventionally not written for simplicity.
Linear gauge fields
The Duffin–Kemmer–Petiau equation is an alternative equation for spin-0 and spin-1 particles:

(i\hbar \beta^{\mu} \partial_{\mu} - mc)\psi = 0
Constructing RWEs
Using 4-vectors and the energy–momentum relation
Start with the standard special relativity (SR) 4-vectors:
4-position X = (ct, \mathbf{x})
4-velocity U = \gamma(c, \mathbf{u})
4-momentum P = (E/c, \mathbf{p})
4-wavevector K = (\omega/c, \mathbf{k})
4-gradient \partial = (\partial_t/c, -\nabla)
Note that each 4-vector is related to another by a Lorentz scalar:
U = dX/d\tau, where \tau is the proper time
P = m_0 U, where m_0 is the rest mass
K = P/\hbar, which is the 4-vector version of the Planck–Einstein relation & the de Broglie matter wave relation
\partial = -iK, which is the 4-gradient version of complex-valued plane waves
Now, just apply the standard Lorentz scalar product rule to each one:

U \cdot U = c^2
P \cdot P = (m_0 c)^2
K \cdot K = (m_0 c/\hbar)^2
\partial \cdot \partial = -(m_0 c/\hbar)^2

The last equation is a fundamental quantum relation.
When applied to a Lorentz scalar field \psi, one gets the Klein–Gordon equation, the most basic of the quantum relativistic wave equations.

[\partial \cdot \partial + (m_0 c/\hbar)^2]\psi = 0: in 4-vector format
[\partial^\mu \partial_\mu + (m_0 c/\hbar)^2]\psi = 0: in tensor format
[\partial^\mu - i(m_0 c/\hbar)][\partial_\mu + i(m_0 c/\hbar)]\psi = 0: in factored tensor format
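As a quick symbolic check (an illustration added here, not part of the original text): a plane wave whose frequency obeys the dispersion relation ω² = c²|k|² + (m c²/ħ)² satisfies the Klein–Gordon equation, which SymPy confirms directly.

import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
c, hbar, m = sp.symbols('c hbar m', positive=True)
kx, ky, kz = sp.symbols('k_x k_y k_z', real=True)

# Dispersion relation fixes omega in terms of the wavevector and the mass.
omega = c * sp.sqrt(kx**2 + ky**2 + kz**2 + (m*c/hbar)**2)
psi = sp.exp(sp.I * (kx*x + ky*y + kz*z - omega*t))

# Klein-Gordon operator: (1/c^2) d^2/dt^2 - laplacian + (m c / hbar)^2
kg = (sp.diff(psi, t, 2) / c**2
      - sp.diff(psi, x, 2) - sp.diff(psi, y, 2) - sp.diff(psi, z, 2)
      + (m*c/hbar)**2 * psi)

print(sp.simplify(kg))  # 0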
The Schrödinger equation is the low-velocity limiting case (|v| ≪ c) of the Klein–Gordon equation.
When the relation is applied to a four-vector field A^\mu instead of a Lorentz scalar field \psi, then one gets the Proca equation (in Lorenz gauge):

[\partial \cdot \partial + (m_0 c/\hbar)^2] A^\mu = 0

If the rest mass term is set to zero (light-like particles), then this gives the free Maxwell equation (in Lorenz gauge):

(\partial \cdot \partial) A^\mu = 0
Representations of the Lorentz group
Under a proper orthochronous Lorentz transformation x → Λx in Minkowski space, all one-particle quantum states ψ of spin j with spin z-component σ locally transform under some representation D of the Lorentz group:

\psi(x) \rightarrow D(\Lambda)\,\psi(\Lambda^{-1} x)

where D(Λ) is some finite-dimensional representation, i.e. a matrix. Here ψ is thought of as a column vector containing components with the allowed values of σ. The quantum numbers j and σ as well as other labels, continuous or discrete, representing other quantum numbers are suppressed. One value of j may occur more than once depending on the representation. Representations with several possible values for j are considered below.
The irreducible representations are labeled by a pair of half-integers or integers (A, B). From these all other representations can be built up using a variety of standard methods, like taking tensor products and direct sums. In particular, space-time itself constitutes a 4-vector representation so that (A, B) = (1/2, 1/2). To put this into context; Dirac spinors transform under the (1/2, 0) ⊕ (0, 1/2) representation. In general, the representation space has subspaces that under the subgroup of spatial rotations, SO(3), transform irreducibly like objects of spin j, where each allowed value:

j = A + B, A + B − 1, ..., |A − B|

occurs exactly once. In general, tensor products of irreducible representations are reducible; they decompose as direct sums of irreducible representations.
The representations D^(j, 0) and D^(0, j) can each separately represent particles of spin j. A state or quantum field in such a representation would satisfy no field equation except the Klein–Gordon equation.
Non-linear equations
There are equations which have solutions that do not satisfy the superposition principle.
Nonlinear gauge fields
Yang–Mills equation: describes a non-abelian gauge field
Yang–Mills–Higgs equations: describes a non-abelian gauge field coupled with a massive spin-0 particle
Spin 2
Einstein field equations: describe interaction of matter with the gravitational field (massless spin-2 field); the solution is a metric tensor field, rather than a wave function.
See also
List of equations in nuclear and particle physics
List of equations in quantum mechanics
Lorentz transformation
Mathematical descriptions of the electromagnetic field
Quantization of the electromagnetic field
Minimal coupling
Scalar field theory
Status of special relativity
References
Further reading
Equations of physics
Quantum field theory
Quantum mechanics
Wave equations
Waves | Relativistic wave equations | [
"Physics",
"Mathematics"
] | 2,892 | [
"Quantum field theory",
"Physical phenomena",
"Equations of physics",
"Theoretical physics",
"Mathematical objects",
"Quantum mechanics",
"Equations",
"Special relativity",
"Waves",
"Motion (physics)",
"Theory of relativity"
] |
577,240 | https://en.wikipedia.org/wiki/Alap | The Alap (; ) is the opening section of a typical North Indian classical performance. It is a form of melodic improvisation that introduces and develops a raga. In dhrupad singing the alap is unmetered, improvised (within the raga) and unaccompanied (except for the tanpura drone), and started at a slow tempo.
For people unfamiliar with the raga form, it introduces the thaat to the listener. It defines the raga, its mood, and the emphasized notes and notes with a secondary role.
Instead of wholly free improvisation, many musicians perform alap schematically, for example by way of vistar, where the notes of the raga are introduced one at a time, so that phrases never travel further than one note above or below what has been covered before. In such cases, the first reach into a new octave can be a powerful event.
In instrumental music, when a steady pulse is introduced into the alap, it is called jor; when the tempo has been greatly increased, or when the rhythmic element overtakes the melodic, it is called jhala (dhrupad: nomtom). The jor and jhala can be seen as separate sections of the performance, or as parts of the alap; in the same way, jhala can be seen as a part of jor.
Classifications
Several musicologists have proposed much more complicated classifications and descriptions of alap. In the same way as traditional four-part compositions have a sthai, antara, sanchar and abhog, some treat alap with a four-part scheme using the same names. Bengali researcher Bimalakanto Raychoudhuri in his Bharatiya Sangeetkosh suggests classification both by length (aochar being the shortest, followed by bandhan, kayed and vistar) and by performance style (according to the four ancient vanis or singing styles – Gohar, Nauhar, Dagar and Khandar), and proceeds to list thirteen stages:
Vilambit
Madhya laya
Drut
Jhala
Thok
Lari/Ladi
Larguthav
Larlapet
Paran
Sath
Dhuya
Matha
Paramatha
Even though Raychoudhuri admits the 13th stage is wholly extinct, the list shows that jhala is reached already at the fourth stage; the sthai-to-abhog movement is all part of the first stage (vilambit). Stages six and up are for instrumentalists only. Other authorities have put forward other classifications. For example, when alap is sung with lyrics or at least syllables, as in dhrupad, it is called sakshar, as opposed to anakshar.
See also
Alapana
Hindustani classical music
Buka
References
Hindustani music terminology
Formal sections in music analysis | Alap | [
"Technology"
] | 577 | [
"Components",
"Formal sections in music analysis"
] |
577,296 | https://en.wikipedia.org/wiki/Mass%20concentration%20%28astronomy%29 | In astronomy, astrophysics and geophysics, a mass concentration (or mascon) is a region of a planet's or moon's crust that contains a large positive gravity anomaly. In general, the word "mascon" can be used as a noun to refer to an excess distribution of mass on or beneath the surface of an astronomical body (compared to some suitable average), such as is found around Hawaii on Earth. However, this term is most often used to describe a geologic structure that has a positive gravitational anomaly associated with a feature (e.g. depressed basin) that might otherwise have been expected to have a negative anomaly, such as the "mascon basins" on the Moon.
Lunar mascons
The Moon is the most gravitationally "lumpy" major body known in the Solar System. Its largest mascons can cause a plumb bob to hang about a third of a degree off vertical, pointing toward the mascon, and increase the force of gravity by one-half percent.
Typical examples of mascon basins on the Moon are the Imbrium, Serenitatis, Crisium and Orientale impact basins, all of which exhibit significant topographic depressions and positive gravitational anomalies. Examples of mascon basins on Mars are the Argyre, Isidis, and Utopia basins. Theoretical considerations imply that a topographic low in isostatic equilibrium would exhibit a slight negative gravitational anomaly. Thus, the positive gravitational anomalies associated with these impact basins indicate that some form of positive density anomaly must exist within the crust or upper mantle that is currently supported by the lithosphere. One possibility is that these anomalies are due to dense mare basaltic lavas, which might reach up to 6 kilometers in thickness for the Moon. While these lavas certainly contribute to the observed gravitational anomalies, uplift of the crust-mantle interface is also required to account for their magnitude. Indeed, some mascon basins on the Moon do not appear to be associated with any signs of volcanic activity. Theoretical considerations in either case indicate that all the lunar mascons are super-isostatic (that is, supported above their isostatic positions). The huge expanse of mare basaltic volcanism associated with Oceanus Procellarum does not possess a positive gravitational anomaly.
Origin of lunar mascons
Since their identification in 1968 by Doppler tracking of the five Lunar Orbiter spacecraft, the origin of the mascons beneath the surface of the Moon has been subject to much debate, but they are now regarded as being the result of the impact of asteroids during the Late Heavy Bombardment.
Effect of lunar mascons on satellite orbits
Lunar mascons alter the local gravity above and around them sufficiently that low and uncorrected lunar orbits of satellites around the Moon are unstable on a timescale of months or years. The small perturbations in the orbits accumulate and eventually distort the orbit enough for the satellite to impact the surface.
Because of its mascons, the Moon has only four "frozen orbit" inclination zones where a lunar satellite can stay in a low orbit indefinitely. Lunar subsatellites were released on two of the last three Apollo crewed lunar landing missions in 1971 and 1972; the subsatellite PFS-2 released from Apollo 16 was expected to stay in orbit for one and a half years, but lasted only 35 days before crashing into the lunar surface since it had to be deployed in a much lower orbit than initially planned. It was only in 2001 that the mascons were mapped and the frozen orbits were discovered.
The Luna 10 orbiter was the first artificial object to orbit the Moon, and it returned tracking data indicating that the lunar gravitational field caused larger than expected perturbations, presumably due to "roughness" of the lunar gravitational field. The lunar mascons were discovered by Paul M. Muller and William L. Sjogren of the NASA Jet Propulsion Laboratory (JPL) in 1968 from a new analytic method applied to the highly precise navigation data from the uncrewed pre-Apollo Lunar Orbiter spacecraft. This discovery rested on the consistent 1:1 correlation between very large positive gravity anomalies and depressed circular basins on the Moon. This fact places key limits on models attempting to follow the history of the Moon's geological development and explain the current lunar internal structures.
At that time, one of NASA's highest priority "tiger team" projects was to explain why the Lunar Orbiter spacecraft being used to test the accuracy of Project Apollo navigation were experiencing errors in predicted position of ten times the mission specification (2 kilometers instead of 200 meters). This meant that the predicted landing areas were 100 times as large as those being carefully defined for reasons of safety. Lunar orbital effects principally resulting from the strong gravitational perturbations of the mascons were ultimately revealed as the cause. William Wollenhaupt and Emil Schiesser of the NASA Manned Spacecraft Center in Houston then worked out the "fix" that was first applied to Apollo 12 and permitted its landing within 163 m (535 ft) of the target, the previously landed Surveyor 3 spacecraft.
Mapping
In May 2013 a NASA study was published with results from the twin GRAIL probes, that mapped mass concentrations on Earth's Moon.
China's Chang’e 5T1 mission also mapped Moon's mascons.
Earth's mascons
Mascons on Earth are often measured by means of satellite gravimetry, such as the GRACE satellites.
Mascons are often reported in terms of a derived physical quantity called "equivalent water thickness", "equivalent water height", or "water equivalent height", obtained dividing the surface mass density redistribution by the density of water.
Mercurian mascons
Mascons exist on Mercury. They were mapped by the MESSENGER spacecraft which orbited the planet from 2011 to 2015. Two are shown in the image at right, at Caloris Planitia and at Sobkou Planitia.
See also
References
Further reading
Gravimetry
Geophysics
Lunar science | Mass concentration (astronomy) | [
"Physics"
] | 1,234 | [
"Applied and interdisciplinary physics",
"Geophysics"
] |
577,301 | https://en.wikipedia.org/wiki/Magnitude%20%28mathematics%29 | In mathematics, the magnitude or size of a mathematical object is a property which determines whether the object is larger or smaller than other objects of the same kind. More formally, an object's magnitude is the displayed result of an ordering (or ranking) of the class of objects to which it belongs. Magnitude as a concept dates to Ancient Greece and has been applied as a measure of distance from one object to another. For numbers, the absolute value of a number is commonly applied as the measure of units between a number and zero.
In vector spaces, the Euclidean norm is a measure of magnitude used to define a distance between two points in space. In physics, magnitude can be defined as quantity or distance. An order of magnitude is typically defined as a unit of distance between one number and another's numerical places on the decimal scale.
History
Ancient Greeks distinguished between several types of magnitude, including:
Positive fractions
Line segments (ordered by length)
Plane figures (ordered by area)
Solids (ordered by volume)
Angles (ordered by angular magnitude)
They proved that the first two could not be the same, or even isomorphic systems of magnitude. They did not consider negative magnitudes to be meaningful, and magnitude is still primarily used in contexts in which zero is either the smallest size or less than all possible sizes.
Numbers
The magnitude of any number x is usually called its absolute value or modulus, denoted by |x|.
Real numbers
The absolute value of a real number r is defined by:

|r| = r, if r ≥ 0
|r| = −r, if r < 0
Absolute value may also be thought of as the number's distance from zero on the real number line. For example, the absolute value of both 70 and −70 is 70.
Complex numbers
A complex number z may be viewed as the position of a point P in a 2-dimensional space, called the complex plane. The absolute value (or modulus) of z may be thought of as the distance of P from the origin of that space. The formula for the absolute value of z = a + bi is similar to that for the Euclidean norm of a vector in a 2-dimensional Euclidean space:

|z| = \sqrt{a^2 + b^2}

where the real numbers a and b are the real part and the imaginary part of z, respectively. For instance, the modulus of −3 + 4i is \sqrt{(-3)^2 + 4^2} = 5. Alternatively, the magnitude of a complex number z may be defined as the square root of the product of itself and its complex conjugate \bar{z}, where for any complex number z = a + bi, its complex conjugate is \bar{z} = a − bi:

|z| = \sqrt{z\bar{z}} = \sqrt{(a + bi)(a - bi)} = \sqrt{a^2 + b^2}

(where i^2 = −1).
Vector spaces
Euclidean vector space
A Euclidean vector represents the position of a point P in a Euclidean space. Geometrically, it can be described as an arrow from the origin of the space (vector tail) to that point (vector tip). Mathematically, a vector x in an n-dimensional Euclidean space can be defined as an ordered list of n real numbers (the Cartesian coordinates of P): x = [x1, x2, ..., xn]. Its magnitude or length, denoted by \|x\|, is most commonly defined as its Euclidean norm (or Euclidean length):

\|\mathbf{x}\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}

For instance, in a 3-dimensional space, the magnitude of [3, 4, 12] is 13 because \sqrt{3^2 + 4^2 + 12^2} = \sqrt{169} = 13.
This is equivalent to the square root of the dot product of the vector with itself:

\|\mathbf{x}\| = \sqrt{\mathbf{x} \cdot \mathbf{x}}

The Euclidean norm of a vector is just a special case of Euclidean distance: the distance between its tail and its tip. Two similar notations are used for the Euclidean norm of a vector x: \|\mathbf{x}\| and |\mathbf{x}|.
A disadvantage of the second notation is that it can also be used to denote the absolute value of scalars and the determinants of matrices, which introduces an element of ambiguity.
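A minimal numerical illustration (Python; euclidean_norm is a hypothetical helper, the rest is the standard library):

import math

def euclidean_norm(v):
    # ||v|| = sqrt(v . v): the square root of the dot product of v with itself.
    return math.sqrt(sum(c * c for c in v))

print(euclidean_norm([3, 4, 12]))  # 13.0
print(abs(complex(-3, 4)))         # 5.0, the modulus of -3 + 4i
print(abs(-70))                    # 70, the absolute value of a real number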
Normed vector spaces
By definition, all Euclidean vectors have a magnitude (see above). However, a vector in an abstract vector space does not possess a magnitude.
A vector space endowed with a norm, such as the Euclidean space, is called a normed vector space. The norm of a vector v in a normed vector space can be considered to be the magnitude of v.
Pseudo-Euclidean space
In a pseudo-Euclidean space, the magnitude of a vector is the value of the quadratic form for that vector.
Logarithmic magnitudes
When comparing magnitudes, a logarithmic scale is often used. Examples include the loudness of a sound (measured in decibels), the brightness of a star, and the Richter scale of earthquake intensity. Logarithmic magnitudes can be negative. In the natural sciences, a logarithmic magnitude is typically referred to as a level.
Order of magnitude
Orders of magnitude denote differences in numeric quantities, usually measurements, by a factor of 10—that is, a difference of one digit in the location of the decimal point.
Other mathematical measures
See also
Number sense
Vector notation
Set size
References
Elementary mathematics
Unary operations | Magnitude (mathematics) | [
"Mathematics"
] | 964 | [
"Functions and mappings",
"Unary operations",
"Mathematical objects",
"Elementary mathematics",
"Mathematical relations"
] |
577,317 | https://en.wikipedia.org/wiki/Cal%20%28command%29 | is a command-line utility on a number of computer operating systems including Unix, Plan 9, Inferno and Unix-like operating systems such as Linux that prints an ASCII calendar of the given month or year. If the user does not specify any command-line options, cal will print a calendar of the current month. The command is a standard program on Unix and specified in the Single UNIX Specification.
Implementations
The cal command was present in 1st Edition Unix. A cal command is also part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2. It is also available for FreeDOS. This implementation only supports the Gregorian calendar (New Style) and may be distributed freely, with or without source. The FreeDOS version was developed by Charles Dye.
Examples
$ cal
   February 2024
Su Mo Tu We Th Fr Sa
             1  2  3
 4  5  6  7  8  9 10
11 12 13 14 15 16 17
18 19 20 21 22 23 24
25 26 27 28 29
$ cal -3 (shows the previous, current and next month)
     June 2022             July 2022            August 2022
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
          1  2  3  4                  1  2      1  2  3  4  5  6
 5  6  7  8  9 10 11   3  4  5  6  7  8  9   7  8  9 10 11 12 13
12 13 14 15 16 17 18  10 11 12 13 14 15 16  14 15 16 17 18 19 20
19 20 21 22 23 24 25  17 18 19 20 21 22 23  21 22 23 24 25 26 27
26 27 28 29 30        24 25 26 27 28 29 30  28 29 30 31
                      31
$ cal 2023
                              2023
      January               February               March
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
 1  2  3  4  5  6  7            1  2  3  4            1  2  3  4
 8  9 10 11 12 13 14   5  6  7  8  9 10 11   5  6  7  8  9 10 11
15 16 17 18 19 20 21  12 13 14 15 16 17 18  12 13 14 15 16 17 18
22 23 24 25 26 27 28  19 20 21 22 23 24 25  19 20 21 22 23 24 25
29 30 31              26 27 28              26 27 28 29 30 31

       April                  May                   June
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
                   1      1  2  3  4  5  6               1  2  3
 2  3  4  5  6  7  8   7  8  9 10 11 12 13   4  5  6  7  8  9 10
 9 10 11 12 13 14 15  14 15 16 17 18 19 20  11 12 13 14 15 16 17
16 17 18 19 20 21 22  21 22 23 24 25 26 27  18 19 20 21 22 23 24
23 24 25 26 27 28 29  28 29 30 31           25 26 27 28 29 30
30

        July                 August             September
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
                   1         1  2  3  4  5                  1  2
 2  3  4  5  6  7  8   6  7  8  9 10 11 12   3  4  5  6  7  8  9
 9 10 11 12 13 14 15  13 14 15 16 17 18 19  10 11 12 13 14 15 16
16 17 18 19 20 21 22  20 21 22 23 24 25 26  17 18 19 20 21 22 23
23 24 25 26 27 28 29  27 28 29 30 31        24 25 26 27 28 29 30
30 31

      October               November              December
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
 1  2  3  4  5  6  7            1  2  3  4                  1  2
 8  9 10 11 12 13 14   5  6  7  8  9 10 11   3  4  5  6  7  8  9
15 16 17 18 19 20 21  12 13 14 15 16 17 18  10 11 12 13 14 15 16
22 23 24 25 26 27 28  19 20 21 22 23 24 25  17 18 19 20 21 22 23
29 30 31              26 27 28 29 30        24 25 26 27 28 29 30
                                            31
$ cal 6 2023
     June 2023
Su Mo Tu We Th Fr Sa
             1  2  3
 4  5  6  7  8  9 10
11 12 13 14 15 16 17
18 19 20 21 22 23 24
25 26 27 28 29 30
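The same grids can be reproduced from Python's standard calendar module (an aside for illustration; cal itself is a C program):

import calendar

calendar.setfirstweekday(calendar.SUNDAY)  # cal starts weeks on Sunday
print(calendar.month(2023, 6))    # same layout as `cal 6 2023`
print(calendar.calendar(2023))    # whole year, three months per row, like `cal 2023`

Note that Python's calendar is proleptic Gregorian, so unlike cal it will not reproduce the September 1752 quirk shown below.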
Quirks (1752)
$ cal 9 1752
   September 1752
 S  M Tu  W Th  F  S
       1  2 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30
The Gregorian calendar reform was adopted by the Kingdom of Great Britain, including its possessions in North America (later to become eastern USA and Canada), in September 1752. As a result, the September 1752 cal shows the adjusted days missing. This month was the official (British) adoption of the Gregorian calendar from the previously used Julian calendar. This has been documented in the man pages for Sun Solaris as follows. "An unusual calendar is printed for September 1752. That is the month when 11 days were skipped to make up for lack of leap year adjustments." The Plan 9 from Bell Labs manual states: "Try cal sep 1752." Date of adoption of the reform differs widely between countries so, for some users, this feature may be a bug. Special handling of 1752 is known to have appeared as early as the first edition of the Unix Programmer's Manual in 1971.
See also
Cron – process for scheduling jobs to run on a particular date
List of Unix commands
References
Sources
External links
Source of explanation of cal 9 1752 phenomena (humor)
Calendaring software
Cal
Unix SUS2008 utilities
Plan 9 commands
Inferno (operating system) commands | Cal (command) | [
"Technology"
] | 1,112 | [
"Computing commands",
"Standard Unix programs",
"Plan 9 commands",
"Inferno (operating system) commands"
] |
577,340 | https://en.wikipedia.org/wiki/California%20State%20University%2C%20Fullerton | California State University, Fullerton (CSUF or Cal State Fullerton) is a public research university in Fullerton, California. With a total enrollment of more than 41,000, it has the largest student body of the California State University (CSU) system, and its graduate student body of more than 5,000 is one of the largest in the CSU and in all of California. As of fall 2016, the school had 2,083 faculty, of whom 782 were on the tenure track. The university offers 109 degree programs: 55 undergraduate degrees and 54 graduate degrees, including 3 doctoral programs.
Cal State Fullerton is classified among "R2: Doctoral Universities – High research activity". It is also a Hispanic-serving institution (HSI) and is eligible to be designated as an Asian American Native American Pacific Islander serving institution (AANAPISI).
CSUF athletic teams compete in Division I of the NCAA and are collectively known as the CSUF Titans. They compete in the Big West Conference.
History
Founding
In 1957, Orange County State College became the 12th state college in California to be authorized by the state legislature as a degree-granting institution. The following year, a site was designated for the campus to be established in northeast Fullerton. The property was purchased in 1959. The same year, William B. Langsdorf was appointed as founding president of the school.
Classes began with 452 students in September 1959. The name of the school was changed to Orange State College in July 1962. In 1964, its name was changed to California State College at Fullerton. In June 1972, the final name change occurred and the school became California State University, Fullerton.
Mascot
The choice of the elephant as the university's mascot, dubbed Tuffy the Titan, dates to 1962, when the campus hosted "The First Intercollegiate Elephant Race in Human History." The May 11 event attracted 10,000 spectators, 15 pachyderm entrants, and worldwide news coverage.
Campus violence
The campus has seen three significant instances of violence with people killed. On July 12, 1976, Edward Charles Allaway, a campus janitor with paranoid schizophrenia, shot nine people, killing seven, in the University Library (now the Pollak Library) on the Cal State Fullerton campus. At the time, it was the worst mass shooting in Orange County history.
On October 13, 1984, Edward Cooperman, a physics professor, was shot and killed by his former student, Minh Van Lam, in McCarthy Hall.
On August 19, 2019, Steven Shek Keung Chan, a retired budget director working as a consultant in the international student affairs office, was found dead from multiple stab wounds in a campus parking lot. Chuyen Vo, a co-worker in the same office, was charged with murder.
2000s: Modern growth
The university grew rapidly in the first decade of the 2000s. The Performing Arts Center was built in January 2006, and in the summer of 2008 the newly constructed Steven G. Mihaylo Hall and the new Student Recreation Center opened.
In fall 2008, the Performing Arts Center was renamed the Joseph A.W. Clayes III Performing Arts Center, in honor of a $5 million pledge made to the university by the trustees of the Joseph A.W. Clayes III Charitable Trust. Since 1963, the curriculum has expanded to include many graduate programs, including multiple doctorate degrees, as well as numerous credential and certificate programs.
In 2021, president of the university Framroze Virjee acknowledged the university's location on the lands of the Tongva and Acjachemen and pledged for the university to be more committed toward partnering with Indigenous peoples.
Campus
The campus is on the site of former citrus groves in northeast Fullerton. It is bordered on the east by the Orange Freeway (SR-57), on the west by State College Boulevard, on the north by Yorba Linda Boulevard, and on the south by Nutwood Avenue.
Although established in the late 1950s, much of the initial construction on campus took place in the late 1960s, under the supervision of artist and architect Howard van Heuklyn, who gave the campus a striking, futuristic architecture (buildings like Pollak Library South, Titan Shops, Humanities, McCarthy Hall). This was in response to the numerous Googie buildings in the Fullerton community.
The University Archives & Special Collections in the Pollak Library houses the Philip K. Dick papers and Frank Herbert papers as part of the Willis McNelly Science Fiction collection.
Since 1993, the campus has added the College Park Building, Steven G. Mihaylo Hall, University Hall, the Titan Student Union, the Student Recreation Center, the Nutwood Parking Structure, the State College Parking Structure, Dan Black Hall, Joseph A.W. Clayes III Performing Arts Center West, Phase III Housing, the Grand Central Art Center, and Pollak Library. In order to generate power for the university and become more sustainable, the campus installed solar panels on top of a number of buildings. The panels, which generate up to 7–8 percent of the electrical power used daily, are atop the Eastside Parking Structure, Clayes Performing Arts Center and the Kinesiology and Health Science Building.
In August 2011, the university added a $143 million housing complex, which included five new residence halls, a convenience store and a 565-seat dining hall called the Gastronome.
El Dorado Ranch serves as the university president's residence.
Satellite campus
The university opened a satellite campus in Irvine, California in 1989, approximately south of the original Fullerton location. Amid the COVID-19 pandemic, the satellite campus closed in July 2021.
Proposed expansion
CSUF announced plans in May 2010 to buy the lot occupied by Hope International University, but this deal fell through.
CSUF also announced plans in September 2010 to expand into the area south of Nutwood Avenue to construct a project called CollegeTown, which would integrate the surrounding residential areas and retail spaces into the campus. After community opposition, the Fullerton planning commission indefinitely postponed any action on the project in February 2016.
Desert Studies Center
The Desert Studies Center is a field station of the California State University located in Zzyzx, California in the Mojave Desert. The purpose of the center is to provide opportunities to conduct research, receive instruction and experience the Mojave Desert environment. It is officially operated by the California Desert Studies Consortium, a consortium of 7 CSU campuses: Fullerton, Cal Poly Pomona, Long Beach, San Bernardino, Northridge, Dominguez Hills and Los Angeles.
Academics
Admissions and enrollment
Fall freshman statistics
As of the fall 2013 semester, CSUF is the third most applied to CSU out of all 23 campuses receiving nearly 65,000 applications, including over 40,000 for incoming freshmen and nearly 23,000 transfer applications, the second highest in the CSU.
Rankings and distinctions
The 2024 edition of U.S. News & World Report ranked Fullerton tied for 2nd "Performers on Social Mobility," tied 70 in top public schools, tied 31 for best undergraduate teaching, 211 for best value schools, and the undergraduate engineering program tied for 40, tied 8 in computer engineering, tied 8 in civil engineering and tied 9 in electrical/electronic/communications, tied 201 in economics, and tied 154 for Nursing.
Money magazine ranked Cal State Fullerton 34th in the country out of 739 schools evaluated for its 2020 "Best Colleges for Your Money" edition and 22nd in its list of the 50 best public schools in the U.S.
Athletics
CSUF participates in the NCAA Division I Big West Conference and the MPSF. Cal State Fullerton Athletics boasts 31 national championships covering 11 sports and dating back to its first in 1967: 12 team national titles and 19 individual championships. The Titans became an NCAA Division I program for the 1974–75 academic year and have since produced 11 national titles (6 team and 5 individual), four of them by the Titans' baseball team. Eighteen of the titles come from men's sports, 12 from women's. The 12 team national championships span eight different sports: women's basketball (CIAW, 1970); men's gymnastics (1971, 1972, 1974); cross country (1971); women's fencing (1973); women's gymnastics (1979); baseball (1979, 1984, 1995, 2004); and softball (1986). Their baseball team is a perennial national powerhouse with four national titles and dozens of players who have gone on to Major League Baseball. The CSUF Dance Team currently holds the most national titles at the school, with 15 national titles in UDA Division 1 Jazz (2000, 2001, 2002, 2003, 2004, 2006, 2007, 2008, 2010, 2011, 2012, 2013, 2014, 2015, 2016 and 2017) and one national title in UDA Division 1 Hip Hop. The Dance Team also holds multiple titles from the United Spirit Association.
CSUF holds the Ben Brown Invitational every track and field season. CSUF currently supports 21 club sports in addition to its Division I varsity teams: archery, baseball, cycling, equestrian, grappling and jiu jitsu, ice hockey, men's lacrosse, women's lacrosse, nazara Bollywood dance, men's rugby, women's rugby, roller hockey, salsa team, men's soccer, women's soccer, table tennis, tennis, ultimate Frisbee, men's volleyball, women's volleyball, skiing, and wushu.
Because of their proximity, CSUF and Long Beach State are considered rivals. The rivalry is especially heated in baseball, as Long Beach State also fields a competitive college baseball program.
Student life
CSUF was the first college in Orange County to have a Greek system, with its first fraternity founded in 1960. The Daily Titan, the official student newspaper of the university, also started in 1960. Other official student media includes Titan Radio.
On April 23, 2014, Cal State Fullerton opened the Titan Dreamers Resource Center. The center was the first resource center for undocumented students in the CSU system.
Notable alumni
CSUF alumni include: an astronaut who is participating in her third trip to space; a speaker of the California Assembly; other politicians and Academy Award-winning directors, actors, producers, and cinematographers; award-winning journalists, authors, and screenwriters; nationally recognized teachers; presidents and CEOs of leading corporations; international opera stars, musicians, and Broadway stars; professional athletes and Olympians; doctors, scientists and researchers; and social activists.
Titan alumni number more than 210,000. An active alumni association keeps them connected through numerous networking and social events, and also sponsors nationwide chapters.
Notes
References
External links
Cal State Fullerton Athletics website
Universities and colleges established in 1957
Fullerton
California State University, Fullerton
Education in Fullerton, California
Universities and colleges in Orange County, California
Schools accredited by the Western Association of Schools and Colleges
1957 establishments in California
Glassmaking schools | California State University, Fullerton | [
"Materials_science",
"Engineering"
] | 2,225 | [
"Glass engineering and science",
"Glassmaking schools"
] |
577,366 | https://en.wikipedia.org/wiki/Banach%E2%80%93Alaoglu%20theorem | In functional analysis and related branches of mathematics, the Banach–Alaoglu theorem (also known as Alaoglu's theorem) states that the closed unit ball of the dual space of a normed vector space is compact in the weak* topology.
A common proof identifies the unit ball with the weak-* topology as a closed subset of a product of compact sets with the product topology.
As a consequence of Tychonoff's theorem, this product, and hence the unit ball within, is compact.
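For illustration, the identification can be written out as follows; the symbols X, B', and the closed disks D are supplied here and do not appear in the text above. A functional f in the closed unit ball B' of the dual X' satisfies |f(x)| ≤ ‖x‖ for every x in X, so the map

\[
B' \hookrightarrow \prod_{x \in X} \overline{D}\bigl(0, \|x\|\bigr), \qquad f \mapsto \bigl(f(x)\bigr)_{x \in X},
\]

sends B' into a product of compact disks in the scalar field. The product is compact by Tychonoff's theorem, the image of B' is closed in it (linearity and the bound |f(x)| ≤ ‖x‖ survive pointwise limits), and the product topology restricts to the weak-* topology on B', so B' is weak-* compact.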
This theorem has applications in physics when one describes the set of states of an algebra of observables, namely that any state can be written as a convex linear combination of so-called pure states.
History
According to Lawrence Narici and Edward Beckenstein, the Alaoglu theorem is a “very important result—maybe the most important fact about the weak-* topology—[that] echoes throughout functional analysis.”
In 1912, Helly proved that the unit ball of the continuous dual space of is countably weak-* compact.
In 1932, Stefan Banach proved that the closed unit ball in the continuous dual space of any separable normed space is sequentially weak-* compact (Banach only considered sequential compactness).
The proof for the general case was published in 1940 by the mathematician Leonidas Alaoglu.
According to Pietsch [2007], there are at least twelve mathematicians who can lay claim to this theorem or an important predecessor to it.
The Bourbaki–Alaoglu theorem is a generalization of the original theorem by Bourbaki to dual topologies on locally convex spaces.
This theorem is also called the Banach–Alaoglu theorem or the weak-* compactness theorem and it is commonly called simply the Alaoglu theorem.
Statement
If is a vector space over the field then will denote the algebraic dual space of and these two spaces are henceforth associated with the bilinear defined by
where the triple forms a dual system.
If is a topological vector space (TVS) then its continuous dual space will be denoted by where always holds.
Denote the weak-* topology on by and denote the weak-* topology on by
The weak-* topology is also called the topology of pointwise convergence because given a map and a net of maps the net converges to in this topology if and only if for every point in the domain, the net of values converges to the value
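In symbols (with the functional f, the net (f_i), and the domain X named here for illustration), weak-* convergence is exactly pointwise convergence:

\[
f_i \xrightarrow{\;\sigma(X',\, X)\;} f \quad \Longleftrightarrow \quad f_i(x) \to f(x) \ \text{ for every } x \in X .
\]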
Proof involving duality theory
If is a normed vector space, then the polar of a neighborhood is closed and norm-bounded in the dual space.
In particular, if is the open (or closed) unit ball in then the polar of is the closed unit ball in the continuous dual space of (with the usual dual norm).
Consequently, this theorem can be specialized to:
When the continuous dual space of is an infinite dimensional normed space then it is never possible for the closed unit ball in to be a compact subset when has its usual norm topology.
This is because the unit ball in the norm topology is compact if and only if the space is finite-dimensional (cf. F. Riesz theorem).
This theorem is one example of the utility of having different topologies on the same vector space.
It should be cautioned that despite appearances, the Banach–Alaoglu theorem does not imply that the weak-* topology is locally compact.
This is because the closed unit ball is only a neighborhood of the origin in the strong topology, but is usually not a neighborhood of the origin in the weak-* topology, as it has empty interior in the weak* topology, unless the space is finite-dimensional.
In fact, it is a result of Weil that all locally compact Hausdorff topological vector spaces must be finite-dimensional.
Elementary proof
The following elementary proof does not utilize duality theory and requires only basic concepts from set theory, topology, and functional analysis.
What is needed from topology is a working knowledge of net convergence in topological spaces and familiarity with the fact that a linear functional is continuous if and only if it is bounded on a neighborhood of the origin (see the articles on continuous linear functionals and sublinear functionals for details).
Also required is a proper understanding of the technical details of how the space of all functions of the form is identified as the Cartesian product and the relationship between pointwise convergence, the product topology, and subspace topologies they induce on subsets such as the algebraic dual space and products of subspaces such as
An explanation of these details is now given for readers who are interested.
For every real will denote the closed ball of radius centered at and for any
Identification of functions with tuples
The Cartesian product is usually thought of as the set of all -indexed tuples but, since tuples are technically just functions from an indexing set, it can also be identified with the space of all functions having prototype as is now described:
: A function belonging to is identified with its (-indexed) ""
: A tuple in is identified with the function defined by ; this function's "tuple of values" is the original tuple
This is the reason why many authors write, often without comment, the equality
and why the Cartesian product is sometimes taken as the definition of the set of maps (or conversely).
However, the Cartesian product, being the (categorical) product in the category of sets (which is a type of inverse limit), also comes equipped with associated maps that are known as its (coordinate) projections.
The projection at a given point is the function
where under the above identification, sends a function to
Stated in words, for a point and function "plugging into " is the same as "plugging into ".
In particular, suppose that are non-negative real numbers.
Then where under the above identification of tuples with functions, is the set of all functions such that for every
If a subset partitions into then the linear bijection
canonically identifies these two Cartesian products; moreover, this map is a homeomorphism when these products are endowed with their product topologies.
In terms of function spaces, this bijection could be expressed as
Notation for nets and function composition with nets
A net in is by definition a function from a non-empty directed set
Every sequence in which by definition is just a function of the form is also a net.
As with sequences, the value of a net at an index is denoted by ; however, for this proof, this value may also be denoted by the usual function parentheses notation
Similarly for function composition, if is any function then the net (or sequence) that results from "plugging into " is just the function although this is typically denoted by (or by if is a sequence).
In the proofs below, this resulting net may be denoted by any of the following notations
depending on whichever notation is cleanest or most clearly communicates the intended information.
In particular, if is continuous and in then the conclusion commonly written as may instead be written as or
Topology
The set is assumed to be endowed with the product topology. It is well known that the product topology is identical to the topology of pointwise convergence.
This is because given and a net where and every is an element of then the net converges in the product topology if and only if
for every the net converges in
where because and
this happens if and only if
for every the net converges in
Thus converges to in the product topology if and only if it converges to pointwise on
This proof will also use the fact that the topology of pointwise convergence is preserved when passing to topological subspaces.
This means, for example, that if for every is some (topological) subspace of then the topology of pointwise convergence (or equivalently, the product topology) on is equal to the subspace topology that the set inherits from
And if is closed in for every then is a closed subset of
Characterization of
An important fact used by the proof is that for any real
where denotes the supremum and
As a side note, this characterization does not hold if the closed ball is replaced with the open ball (and replacing with the strict inequality will not change this; for counter-examples, consider and the identity map on ).
The essence of the Banach–Alaoglu theorem can be found in the next proposition, from which the Banach–Alaoglu theorem follows.
Unlike the Banach–Alaoglu theorem, this proposition does not require the vector space to be endowed with any topology.
Before proving the proposition above, it is first shown how the Banach–Alaoglu theorem follows from it (unlike the proposition, Banach–Alaoglu assumes that is a topological vector space (TVS) and that is a neighborhood of the origin).
The conclusion that the set is closed can also be reached by applying the following more general result, this time proved using nets, to the special case and
Observation: If is any set and if is a closed subset of a topological space then is a closed subset of in the topology of pointwise convergence.
Proof of observation: Let and suppose that is a net in that converges pointwise to It remains to show that which by definition means For any because in and every value belongs to the closed (in ) subset so too must this net's limit belong to this closed set; thus which completes the proof.
Let and suppose that is a net in the converges to in
To conclude that it must be shown that is a linear functional. So let be a scalar and let
For any let denote
Because in which has the topology of pointwise convergence, in for every
By using in place of it follows that each of the following nets of scalars converges in
Proof that
Let be the "multiplication by " map defined by
Because is continuous and in it follows that where the right hand side is and the left hand side is
which proves that Because also and limits in are unique, it follows that as desired.
Proof that
Define a net by letting for every
Because and it follows that in
Let be the addition map defined by
The continuity of implies that in where the right hand side is and the left hand side is
which proves that Because also it follows that as desired.
The lemma above actually also follows from its corollary below since is a Hausdorff complete uniform space and any subset of such a space (in particular ) is closed if and only if it is complete.
Because the underlying field is a complete Hausdorff locally convex topological vector space, the same is true of the product space
A closed subset of a complete space is complete, so by the lemma, the space is complete.
The above elementary proof of the Banach–Alaoglu theorem actually shows that if is any subset that satisfies (such as any absorbing subset of ), then is a weak-* compact subset of
As a side note, with the help of the above elementary proof, it may be shown (see this footnote)
that there exist -indexed non-negative real numbers such that
where these real numbers can also be chosen to be "minimal" in the following sense:
using (so as in the proof) and defining the notation for any if
then and for every
which shows that these numbers are unique; indeed, this infimum formula can be used to define them.
In fact, if denotes the set of all such products of closed balls containing the polar set
then
where denotes the intersection of all sets belonging to
This implies (among other things)
that the unique least element of with respect to this may be used as an alternative definition of this (necessarily convex and balanced) set.
The function is a seminorm and it is unchanged if is replaced by the convex balanced hull of (because ).
Similarly, because is also unchanged if is replaced by its closure in
Sequential Banach–Alaoglu theorem
A special case of the Banach–Alaoglu theorem is the sequential version of the theorem, which asserts that the closed unit ball of the dual space of a separable normed vector space is sequentially compact in the weak-* topology.
In fact, the weak* topology on the closed unit ball of the dual of a separable space is metrizable, and thus compactness and sequential compactness are equivalent.
Specifically, let be a separable normed space and the closed unit ball in Since is separable, let be a countable dense subset.
Then the following defines a metric, where for any
in which denotes the duality pairing of with
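One standard choice of such a metric, assuming (x_n) denotes the countable dense sequence above and writing the duality pairing as ⟨·,·⟩ (a reconstruction consistent with the surrounding description), is

\[
\rho(f, g) \;=\; \sum_{n=1}^{\infty} 2^{-n} \, \frac{\bigl|\langle f - g,\; x_n \rangle\bigr|}{1 + \bigl|\langle f - g,\; x_n \rangle\bigr|} ,
\]

which is a metric on the closed unit ball because each summand is bounded by 2^{-n} and density of the x_n forces f = g whenever ρ(f, g) = 0.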
Sequential compactness of in this metric can be shown by a diagonalization argument similar to the one employed in the proof of the Arzelà–Ascoli theorem.
Due to the constructive nature of its proof (as opposed to the general case, which is based on the axiom of choice), the sequential Banach–Alaoglu theorem is often used in the field of partial differential equations to construct solutions to PDE or variational problems.
For instance, if one wants to minimize a functional on the dual of a separable normed vector space one common strategy is to first construct a minimizing sequence which approaches the infimum of use the sequential Banach–Alaoglu theorem to extract a subsequence that converges in the weak* topology to a limit and then establish that is a minimizer of
The last step often requires to obey a (sequential) lower semi-continuity property in the weak* topology.
When is the space of finite Radon measures on the real line (so that is the space of continuous functions vanishing at infinity, by the Riesz representation theorem), the sequential Banach–Alaoglu theorem is equivalent to the Helly selection theorem.
Consequences
Consequences for normed spaces
Assume that is a normed space and endow its continuous dual space with the usual dual norm.
The closed unit ball in is weak-* compact. So if is infinite dimensional then its closed unit ball is necessarily not compact in the norm topology, by F. Riesz's theorem (despite being weak-* compact).
A Banach space is reflexive if and only if its closed unit ball is -compact; this is known as James' theorem.
If is a reflexive Banach space, then every bounded sequence in has a weakly convergent subsequence.
(This follows by applying the Banach–Alaoglu theorem to a weakly metrizable subspace of ; or, more succinctly, by applying the Eberlein–Šmulian theorem.)
For example, suppose that is the Lp space where and let satisfy
Let be a bounded sequence of functions in
Then there exists a subsequence and an such that
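With the usual conventions made explicit (1 < p < ∞, 1/p + 1/q = 1, and a bounded sequence (f_n) in L^p; these symbols are supplied here for illustration), the conclusion reads: there exist a subsequence (f_{n_k}) and an f in L^p such that

\[
\int f_{n_k}\, g \;\longrightarrow\; \int f\, g \qquad \text{for every } g \in L^q .
\]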
The corresponding result for is not true, as is not reflexive.
Consequences for Hilbert spaces
In a Hilbert space, every bounded and closed set is weakly relatively compact, hence every bounded net has a weakly convergent subnet (Hilbert spaces are reflexive).
As norm-closed, convex sets are weakly closed (Hahn–Banach theorem), norm-closures of convex bounded sets in Hilbert spaces or reflexive Banach spaces are weakly compact.
Closed and bounded sets in are precompact with respect to the weak operator topology (the weak operator topology is weaker than the ultraweak topology which is in turn the weak-* topology with respect to the predual of the trace class operators). Hence bounded sequences of operators have a weak accumulation point.
As a consequence, has the Heine–Borel property, if equipped with either the weak operator or the ultraweak topology.
Relation to the axiom of choice and other statements
The Banach–Alaoglu theorem may be proven by using Tychonoff's theorem, which under the Zermelo–Fraenkel set theory (ZF) axiomatic framework is equivalent to the axiom of choice.
Most mainstream functional analysis relies on ZF + the axiom of choice, which is often denoted by ZFC.
However, the theorem does not rely upon the axiom of choice in the separable case (see above): in this case there actually exists a constructive proof.
In the general case of an arbitrary normed space, the ultrafilter lemma, which is strictly weaker than the axiom of choice and equivalent to Tychonoff's theorem for compact Hausdorff spaces, suffices for the proof of the Banach–Alaoglu theorem, and is in fact equivalent to it.
The Banach–Alaoglu theorem is equivalent to the ultrafilter lemma, which implies the Hahn–Banach theorem for real vector spaces (HB) but is not equivalent to it (said differently, Banach–Alaoglu is also strictly stronger than HB).
However, the Hahn–Banach theorem is equivalent to the following weak version of the Banach–Alaoglu theorem for normed spaces, in which the conclusion of compactness (in the weak-* topology of the closed unit ball of the dual space) is replaced with the conclusion of convex compactness.
Compactness implies convex compactness because a topological space is compact if and only if every family of closed subsets having the finite intersection property (FIP) has non-empty intersection.
The definition of convex compactness is similar to this characterization of compact spaces in terms of the FIP, except that it only involves those closed subsets that are also convex (rather than all closed subsets).
See also
Notes
Proofs
Citations
References
See Theorem 3.15, p. 68.
Further reading
Articles containing proofs
Compactness theorems
Theorems in functional analysis
Topological vector spaces
Linear functionals | Banach–Alaoglu theorem | [
"Mathematics"
] | 3,600 | [
"Compactness theorems",
"Theorems in mathematical analysis",
"Vector spaces",
"Topological vector spaces",
"Space (mathematics)",
"Theorems in topology",
"Theorems in functional analysis",
"Articles containing proofs"
] |
577,438 | https://en.wikipedia.org/wiki/Minisatellite | In genetics, a minisatellite is a tract of repetitive DNA in which certain DNA motifs (ranging in length from 10–60 base pairs) are typically repeated two to several hundred times. Minisatellites occur at more than 1,000 locations in the human genome and they are notable for their high mutation rate and high diversity in the population. Minisatellites are prominent in the centromeres and telomeres of chromosomes, the latter protecting the chromosomes from damage. The name "satellite" refers to the early observation that centrifugation of genomic DNA in a test tube separates a prominent layer of bulk DNA from accompanying "satellite" layers of repetitive DNA. Minisatellites are small sequences of DNA that do not encode proteins but appear throughout the genome hundreds of times, with many repeated copies lying next to each other.
Minisatellites and their shorter cousins, the microsatellites, together are classified as VNTR (variable number of tandem repeats) DNA. Confusingly, minisatellites are often referred to as VNTRs, and microsatellites are often referred to as short tandem repeats (STRs) or simple sequence repeats (SSRs).
Structure
Minisatellites consist of repetitive, generally GC-rich, motifs that range in length from 10 to over 100 base pairs. These variant repeats are tandemly intermingled. Some minisatellites contain a central sequence (or "core unit") of nucleobases "GGGCAGGANG" (where N can be any base) or more generally consist of sequence motifs of purines (adenine (A) and guanine (G)) and pyrimidines (cytosine (C) and thymine (T)).
Hypervariable minisatellites have core units 9–64 bp long and are found mainly at the centromeric regions.
In humans, 90% of minisatellites are found at the sub-telomeric region of chromosomes. The human telomere sequence itself is a tandem repeat: TTAGGG TTAGGG TTAGGG ...
Function
Minisatellites have been implicated as regulators of gene expression (e.g. at levels of transcription, alternative splicing, or imprint control). They are generally non-coding DNA but sometimes are part of possible genes.
Minisatellites also constitute the chromosomal telomeres, which protect the ends of a chromosome from deterioration or from fusion with neighbouring chromosomes.
Mutability
Minisatellites have been associated with chromosome fragile sites and are proximal to a number of recurrent translocation breakpoints.
Some human minisatellites (~1%) have been demonstrated to be hypermutable, with average germline mutation rates ranging from 0.5% to over 20%, making them the most unstable regions in the human genome known to date. While other genomes (mouse, rat and pig) contain minisatellite-like sequences, none was found to be hypermutable. Since all hypermutable minisatellites contain internal variants, they provide extremely informative systems for analyzing the complex turnover processes that occur at this class of tandem repeat. Minisatellite variant repeat mapping by PCR (MVR-PCR) has been extensively used to chart the interspersion patterns of variant repeats along the array, which provides details on the structure of the alleles before and after mutation.
Studies have revealed distinct mutation processes operating in somatic and germline cells. Somatic instability detected in blood DNA shows simple and rare intra-allelic events two to three orders of magnitude lower than in sperm. In contrast, complex inter-allelic conversion-like events occur in the germline.
Additional analyses of DNA sequences flanking human minisatellites have also revealed an intense and highly localized meiotic crossover hotspot that is centered upstream of the unstable side of minisatellite arrays. Repeat turnover therefore appears to be controlled by recombinational activity in DNA that flanks the repeat array and results in a polarity of mutation. These findings have suggested that minisatellites most probably evolved as bystanders of localized meiotic recombination hotspots in the human genome.
It has been proposed that minisatellite sequences encourage chromosomes to swap DNA. In alternative models, it is the presence of neighbouring double-strand hotspots which is the primary cause of minisatellite repeat copy number variations. Somatic changes are suggested to result from replication difficulties (which might include replication slippage, among other phenomena).
Studies have shown that the evolutionary fate of minisatellites tends towards an equilibrium distribution in the size of alleles, until mutations in the flanking DNA affect the recombinational activity of a minisatellite by suppressing DNA instability. Such an event would ultimately lead to the extinction of a hypermutable minisatellite by meiotic drive.
History
The first human minisatellite was discovered in 1980 by A.R. Wyman and R. White. After discovering their high level of variability, Sir Alec Jeffreys developed DNA fingerprinting based on minisatellites, solving the first immigration case by DNA in 1985, and the first forensic murder case, the Enderby murders in the United Kingdom, in 1986. Minisatellites were subsequently also used as genetic markers in linkage analysis and population studies, but were soon replaced by microsatellite profiling in the 1990s.
The term satellite DNA originates from the observation in the 1960s of a fraction of sheared DNA that showed a distinct buoyant density, detectable as a "satellite peak" in density gradient centrifugation, and that was subsequently identified as large centromeric tandem repeats. When shorter (10–30-bp) tandem repeats were later identified, they came to be known as minisatellites. Finally, with the discovery of tandem iterations of simple sequence motifs, the term microsatellites was coined.
External links
Search tools:
SERF De Novo Genome Analysis and Tandem Repeats Finder
TRF Tandem Repeats Finder
See also
Microsatellite
Tandem repeat
Telomere
References
Repetitive DNA sequences | Minisatellite | [
"Biology"
] | 1,279 | [
"Molecular genetics",
"Repetitive DNA sequences"
] |
577,441 | https://en.wikipedia.org/wiki/Compact%20operator | In functional analysis, a branch of mathematics, a compact operator is a linear operator , where are normed vector spaces, with the property that maps bounded subsets of to relatively compact subsets of (subsets with compact closure in ). Such an operator is necessarily a bounded operator, and so continuous. Some authors require that are Banach, but the definition can be extended to more general spaces.
Any bounded operator that has finite rank is a compact operator; indeed, the class of compact operators is a natural generalization of the class of finite-rank operators in an infinite-dimensional setting. When is a Hilbert space, it is true that any compact operator is a limit of finite-rank operators, so that the class of compact operators can be defined alternatively as the closure of the set of finite-rank operators in the norm topology. Whether this was true in general for Banach spaces (the approximation property) was an unsolved question for many years; in 1973 Per Enflo gave a counter-example, building on work by Alexander Grothendieck and Stefan Banach.
The origin of the theory of compact operators is in the theory of integral equations, where integral operators supply concrete examples of such operators. A typical Fredholm integral equation gives rise to a compact operator K on function spaces; the compactness property is shown by equicontinuity. The method of approximation by finite-rank operators is basic in the numerical solution of such equations. The abstract idea of Fredholm operator is derived from this connection.
Equivalent formulations
A linear map between two topological vector spaces is said to be compact if there exists a neighborhood of the origin in such that is a relatively compact subset of .
Let be normed spaces and a linear operator. Then the following statements are equivalent, and some of them are used as the principal definition by different authors:
is a compact operator;
the image of the unit ball of under is relatively compact in ;
the image of any bounded subset of under is relatively compact in ;
there exists a neighbourhood of the origin in and a compact subset such that ;
for any bounded sequence in , the sequence contains a converging subsequence.
If in addition is Banach, these statements are also equivalent to:
the image of any bounded subset of under is totally bounded in .
If a linear operator is compact, then it is continuous.
Properties
In the following, are Banach spaces, is the space of bounded operators under the operator norm, and denotes the space of compact operators . denotes the identity operator on , , and .
is a closed subspace of (in the norm topology). Equivalently,
given a sequence of compact operators mapping (where are Banach) and given that converges to with respect to the operator norm, is then compact.
Conversely, if are Hilbert spaces, then every compact operator from is the limit of finite rank operators. Notably, this "approximation property" is false for general Banach spaces X and Y.
where the composition of sets is taken element-wise. In particular, forms a two-sided ideal in .
Any compact operator is strictly singular, but not vice versa.
A bounded linear operator between Banach spaces is compact if and only if its adjoint is compact (Schauder's theorem).
If is bounded and compact, then:
the closure of the range of is separable.
if the range of is closed in Y, then the range of is finite-dimensional.
If is a Banach space and there exists an invertible bounded compact operator then is necessarily finite-dimensional.
Now suppose that is a Banach space and is a compact linear operator, and is the adjoint or transpose of T.
For any , is a Fredholm operator of index 0. In particular, is closed. This is essential in developing the spectral properties of compact operators. One can notice the similarity between this property and the fact that, if and are subspaces of where is closed and is finite-dimensional, then is also closed.
If is any bounded linear operator then both and are compact operators.
If then the range of is closed and the kernel of is finite-dimensional.
If then the following are finite and equal:
The spectrum of is compact, countable, and has at most one limit point, which would necessarily be the origin.
If is infinite-dimensional then .
If and then is an eigenvalue of both and .
For every the set is finite, and for every non-zero the range of is a proper subset of X.
Origins in integral equation theory
A crucial property of compact operators is the Fredholm alternative, which asserts that the existence of solutions of linear equations of the form
(where K is a compact operator, f is a given function, and u is the unknown function to be solved for) behaves much as it does in finite dimensions. The spectral theory of compact operators then follows, and it is due to Frigyes Riesz (1918). It shows that a compact operator K on an infinite-dimensional Banach space has a spectrum that is either a finite subset of C which includes 0, or a countably infinite subset of C which has 0 as its only limit point. Moreover, in either case the non-zero elements of the spectrum are eigenvalues of K with finite multiplicities (so that K − λI has a finite-dimensional kernel for all complex λ ≠ 0).
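In one common formulation (with λ a nonzero scalar and I the identity operator, notation supplied here for illustration), the equation takes the form

\[
(\lambda I - K)\, u = f ,
\]

and the alternative states that either this equation has a unique solution u for every f, or the homogeneous equation (λI − K)u = 0 has a nontrivial solution, in which case solvability holds only for those f satisfying finitely many linear constraints.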
An important example of a compact operator is compact embedding of Sobolev spaces, which, along with the Gårding inequality and the Lax–Milgram theorem, can be used to convert an elliptic boundary value problem into a Fredholm integral equation. Existence of the solution and spectral properties then follow from the theory of compact operators; in particular, an elliptic boundary value problem on a bounded domain has infinitely many isolated eigenvalues. One consequence is that a solid body can vibrate only at isolated frequencies, given by the eigenvalues, and arbitrarily high vibration frequencies always exist.
The compact operators from a Banach space to itself form a two-sided ideal in the algebra of all bounded operators on the space. Indeed, the compact operators on an infinite-dimensional separable Hilbert space form a maximal ideal, so the quotient algebra, known as the Calkin algebra, is simple. More generally, the compact operators form an operator ideal.
Compact operator on Hilbert spaces
For Hilbert spaces, another equivalent definition of compact operators is given as follows.
An operator on an infinite-dimensional Hilbert space ,
,
is said to be compact if it can be written in the form
,
where and are orthonormal sets (not necessarily complete), and is a sequence of positive numbers with limit zero, called the singular values of the operator, and the series on the right hand side converges in the operator norm. The singular values can accumulate only at zero. If the sequence becomes stationary at zero, that is for some and every , then the operator has finite rank, i.e., a finite-dimensional range, and can be written as
.
An important subclass of compact operators is the trace-class or nuclear operators, i.e., such that . While all trace-class operators are compact operators, the converse is not necessarily true. For example tends to zero for while .
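Written out with notation supplied here for illustration (σ_n for the singular values, (f_n) and (g_n) the orthonormal sets), the canonical form and the trace-class condition read

\[
T x \;=\; \sum_{n=1}^{\infty} \sigma_n \,\langle x, f_n \rangle\, g_n, \qquad \sigma_n \to 0; \qquad T \text{ is trace-class} \iff \sum_{n=1}^{\infty} \sigma_n < \infty .
\]

A compact operator that is not trace-class is obtained by taking σ_n = 1/n: the singular values tend to zero, so T is compact, but the harmonic series ∑ 1/n diverges.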
Completely continuous operators
Let X and Y be Banach spaces. A bounded linear operator T : X → Y is called completely continuous if, for every weakly convergent sequence from X, the sequence is norm-convergent in Y. Compact operators on a Banach space are always completely continuous. If X is a reflexive Banach space, then every completely continuous operator T : X → Y is compact.
Somewhat confusingly, compact operators are sometimes referred to as "completely continuous" in older literature, even though they are not necessarily completely continuous by the definition of that phrase in modern terminology.
Examples
Every finite rank operator is compact.
For and a sequence (tn) converging to zero, the multiplication operator (Tx)n = tn xn is compact; a sketch of the compactness argument is given after this list.
For some fixed g ∈ C([0, 1]; R), define the linear operator T from C([0, 1]; R) to C([0, 1]; R) by That the operator T is indeed compact follows from the Ascoli theorem.
More generally, if Ω is any domain in Rn and the integral kernel k : Ω × Ω → R is a Hilbert–Schmidt kernel, then the operator T on L2(Ω; R) defined by is a compact operator.
By Riesz's lemma, the identity operator is a compact operator if and only if the space is finite-dimensional.
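A minimal sketch of the compactness argument for the multiplication operator above, assuming the underlying space is a sequence space such as ℓ^p (the truncations T_N are introduced here for illustration): set

\[
(T_N x)_n = \begin{cases} t_n x_n, & n \le N, \\ 0, & n > N, \end{cases}
\qquad\text{so that}\qquad
\|T - T_N\| \;\le\; \sup_{n > N} |t_n| \;\xrightarrow[N \to \infty]{}\; 0 ,
\]

so T is an operator-norm limit of finite-rank operators and hence compact.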
See also
Notes
References
(Section 7.5)
Compactness (mathematics)
Linear operators
Operator theory | Compact operator | [
"Mathematics"
] | 1,794 | [
"Mathematical objects",
"Functions and mappings",
"Mathematical relations",
"Linear operators"
] |
577,446 | https://en.wikipedia.org/wiki/Satellite%20DNA | Satellite DNA consists of very large arrays of tandemly repeating, non-coding DNA. Satellite DNA is the main component of functional centromeres, and form the main structural constituent of heterochromatin.
The name "satellite DNA" refers to the phenomenon that repetitions of a short DNA sequence tend to produce a different frequency of the bases adenine, cytosine, guanine, and thymine, and thus have a different density from bulk DNA such that they form a second or "satellite" band(s) when genomic DNA is separated along a cesium chloride density gradient using buoyant density centrifugation.
Sequences with a greater ratio of A+T display a lower density while those with a greater ratio of G+C display a higher density than the bulk of genomic DNA. Some repetitive sequences are ~50% G+C/A+T and thus have buoyant densities the same as bulk genomic DNA. These satellites are called "cryptic" satellites because they form a band hidden within the main band of genomic DNA. "Isopycnic" is another term used for cryptic satellites.
Satellite DNA families in humans
Satellite DNA, together with minisatellite and microsatellite DNA, constitute the tandem repeats. The size of satellite DNA arrays varies greatly between individuals.
The major satellite DNA families in humans include α (alphoid) satellite DNA, β satellite DNA, and satellites 1, 2 and 3.
Length
A repeated pattern can range from 1 base pair (bp) (a mononucleotide repeat) to several thousand base pairs in length, and the total size of a satellite DNA block can be several megabases without interruption. Long repeat units have been described containing domains of shorter repeated segments and mononucleotides (1-5 bp), arranged in clusters of microsatellites, wherein differences among individual copies of the longer repeat units were clustered. Most satellite DNA is localized to the telomeric or the centromeric region of the chromosome. The nucleotide sequence of the repeats is fairly well conserved across species. However, variation in the length of the repeat is common.
Low-resolution sequencing-based studies have demonstrated variation in human population satellite array lengths as well as in the frequency of certain sequence and structural variations. However, due to a lack of full centromere assemblies, base-level understanding of satellite array variation and evolution has remained weak. For example, minisatellite DNA is a short region (1-5 kb) of repeating elements with unit lengths greater than 9 nucleotides, whereas microsatellite repeat units are considered to have a length of 1-8 nucleotides. The difference in how many of the repeats are present in the region (the length of the region) is the basis for DNA profiling.
Origin
Microsatellites are thought to have originated by polymerase slippage during DNA replication. This comes from the observation that microsatellite alleles usually are length polymorphic; specifically, the length differences observed between microsatellite alleles are generally multiples of the repeat unit length.
Structure
Satellite DNA can adopt higher-order three-dimensional structures, as shown in a naturally occurring complex satellite DNA from the land crab Gecarcinus lateralis, whose genome contains 3% of a GC-rich satellite band consisting of a ~2100 bp "repeat unit" sequence motif called RU. The RU was arranged in long tandem arrays with approximately 16,000 copies per genome. Several RU sequences were cloned and sequenced to reveal conserved regions of conventional DNA sequences over stretches greater than 550 bp, interspersed with five "divergent domains" within each copy of RU.
Four divergent domains consisted of microsatellite repeats, biased in base composition, with purines on one strand and pyrimidines on the other. Some contained mononucleotide repeats of C:G base pairs approximately 20 bp in length. These strand-biased microsatellite domains ranged in length from approximately 20 bp to greater than 250 bp. The most prevalent repeated sequences in the embedded microsatellite regions were CT:AG, CCT:AGG, CCCT:AGGG, and CGCAC:GTGCG. These repeating sequences were shown to adopt altered structures including triple-stranded DNA, Z-DNA, stem-loop, and other conformations under superhelical stress.
Between the strand-biased microsatellite repeats and C:G mononucleotide repeats, all sequence variations retained one or two base pairs with A (purine) interrupting the pyrimidine-rich strand and T (pyrimidine) interrupting the purine-rich strand. These interruptions in compositional bias adopted highly distorted conformations as shown by their response to structural nuclease enzymes including S1, P1, and mung bean nucleases.
The most complex compositionally-biased microsatellite domain of RU included the sequence TTAA:TTAA as well as a mirror repeat. It produced the strongest signal in response to nucleases compared to all other altered structures in experimental observations. That particular strand-biased divergent domain was subcloned and its altered helical structure was studied in greater detail.
A fifth divergent domain in the RU sequence was characterized by variations of a symmetrical DNA sequence motif of alternating purines and pyrimidines shown to adopt a left-handed Z-DNA or stem-loop structure under superhelical stress. The conserved symmetrical Z-DNA was abbreviated Z4Z5NZ15NZ5Z4, where Z represents alternating purine/pyrimidine sequences. A stem-loop structure was centered in the Z15 element at the highly conserved palindromic sequence CGCACGTGCG:CGCACGTGCG and was flanked by extended palindromic Z-DNA sequences over a 35 bp region. Many RU variants showed deletions of at least 10 bp outside the Z4Z5NZ15NZ5Z4 structural element, while others had additional Z-DNA sequences lengthening the alternating purine and pyrimidine domain to over 50 bp.
One extended RU sequence (EXT) was shown to have six tandem copies of a 142 bp amplified (AMPL) sequence motif inserted into a region bordered by inverted repeats where most copies contained just one AMPL sequence element. There were no nuclease-sensitive altered structures or significant sequence divergence in the relatively conventional AMPL sequence. A truncated RU sequence (TRU), 327 bp shorter than most clones, arose from a single base change leading to a second EcoRI restriction site in TRU.
Another crab, the hermit crab Pagurus pollicaris, was shown to have a family of AT-rich satellites with inverted repeat structures that comprised 30% of the entire genome. Another cryptic satellite from the same crab, with the sequence CCTA:TAGG, was found inserted into some of the palindromes.
See also
Buoyant density centrifugation
DNA profiling
DNA supercoil
Eukaryotic chromosome fine structure
Gene expression
Polymerase chain reaction
Tengiz Beridze, scientist who discovered satellite DNA in plants
References
Further reading
External links
Search tools:
SERF De Novo Genome Analysis and Tandem Repeats Finder
TRF Tandem Repeats Finder
DNA
Repetitive DNA sequences | Satellite DNA | [
"Biology"
] | 1,490 | [
"Molecular genetics",
"Repetitive DNA sequences"
] |
577,454 | https://en.wikipedia.org/wiki/Center%20frequency | In electrical engineering and telecommunications, the center frequency of a filter or channel is a measure of a central frequency between the upper and lower cutoff frequencies. It is usually defined as either the arithmetic mean or the geometric mean of the lower cutoff frequency and the upper cutoff frequency of a band-pass system or a band-stop system.
Typically, the geometric mean is used in systems based on certain transformations of lowpass filter designs, where the frequency response is constructed to be symmetric on a logarithmic frequency scale. The geometric center frequency corresponds to a mapping of the DC response of the prototype lowpass filter, which is a resonant frequency sometimes equal to the peak frequency of such systems, for example as in a Butterworth filter.
The arithmetic definition is used in more general situations, such as in describing passband telecommunication systems, where filters are not necessarily symmetric but are treated on a linear frequency scale for applications such as frequency-division multiplexing.
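As a worked comparison (the band edges f_1 = 300 Hz and f_2 = 3000 Hz are chosen arbitrarily for illustration):

\[
f_{\text{geometric}} = \sqrt{f_1 f_2} = \sqrt{300 \times 3000} \approx 948.7\ \text{Hz}, \qquad
f_{\text{arithmetic}} = \frac{f_1 + f_2}{2} = 1650\ \text{Hz} .
\]

The geometric center is symmetric on a logarithmic scale, since f_geometric / f_1 = f_2 / f_geometric = √10, whereas the arithmetic center sits midway on a linear scale.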
References
External links
Calculations and comparisons between the geometric mean and the arithmetic mean
Electrical engineering
Telecommunication theory
Frequency-domain analysis | Center frequency | [
"Physics",
"Engineering"
] | 217 | [
"Frequency-domain analysis",
"Electrical engineering",
"Spectrum (physical sciences)"
] |
577,486 | https://en.wikipedia.org/wiki/Simple%20algebra%20%28universal%20algebra%29 | In universal algebra, an abstract algebra A is called simple if and only if it has no nontrivial congruence relations, or equivalently, if every homomorphism with domain A is either injective or constant.
As congruences on rings are characterized by their ideals, this notion is a straightforward generalization of the notion from ring theory: a ring is simple in the sense that it has no nontrivial ideals if and only if it is simple in the sense of universal algebra. The same remark applies with respect to groups and normal subgroups; hence the universal notion is also a generalization of a simple group (it is a matter of convention whether a one-element algebra should be or should not be considered simple, hence only in this special case the notions might not match).
A theorem by Roberto Magari in 1969 asserts that every variety contains a simple algebra.
See also
Simple group
Simple ring
Central simple algebra
References
Algebras
Ring theory | Simple algebra (universal algebra) | [
"Mathematics"
] | 195 | [
"Mathematical structures",
"Algebras",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures"
] |
577,500 | https://en.wikipedia.org/wiki/Jor%20%28music%29 | In Hindustani classical music, the jor (Hindi: जोर, ; also spelt jod and jhor) is a formal section of composition in the long elaboration (alap) of a raga that forms the beginning of a performance. It comes after alap and precedes jhala, the climax. Jor is the instrumental equivalent of nomtom in the dhrupad vocal style of Indian music. Both have a simple pulse but no well-defined rhythmic cycle.
Origin and terminology
Jor (or jod) is an instrumental interpretation of nomtom, an introductory style characterised by its modest rhythm and lack of rhythmic cycle (tal). Jor is present in most Hindustani classical music through the raga, as an articulated and rapid pulse that the alap transitions into, followed by jhala.
In Hindustani music
Indian classical music is divided into two traditions: Hindustani and Carnatic music. Both musical styles embody core traditions of Indian culture and are regarded as among the most prestigious types of music. These two traditional types of music are both described by the Sanskrit term "sangita", which refers to the fusion of all the elements of song, instrumental music and dance. Hindustani music is primarily found in the northern areas of India, Bangladesh and Pakistan. The oral nature of North Indian music allows for expressive communication between the performers and audience. Raga and dhrupad are the two main forms of Hindustani classical music and form the prominent structure of Indian classical music. The musical section of jor is prominent in raga and follows the cyclical and linear progression of Hindustani music.
Raga
The concept of Raga (Rag) can be divided into 5 different components as proposed by Nazir Ali Jairazbhoy:
Scale
Ascending and descending line
Transilience
Emphasised notes and register
Intonation and obligatory embellishments
Raga is derived from Sanskrit, a classical language from South Asia, and is defined as "the act of colouring and dyeing." Amongst Indian Classical Music, raga is identified as the basic melodic framework and acts as a communication medium for two musicians. In the Raga there is a constant interplay between what is learnt from the performance, what is known and what is continued through improvisation. A raga utilises a particular scale and combines it with prototypical melodic patterns, creating combinations of tonic intervals which evoke unique emotions.
A typical raga composition is shown as a sequence of events, starting with the alap and followed by the gat. A performance of raga depends on the balance between the melody and the way the audience and performer engage with the material. The jor is situated between the alap and jhala, in what is commonly known as the instrumental alap-jor-jhala-gat format. This framework details the unmetered instrumental structure of the raga, which is performed with a regular pulse and over a wider melodic range. This format is the foundation of dhrupad as it was introduced into the West in the 20th century.
In a full performance, the raga can be split into three sections: the alap, jor and jhala. Jor and alap act as equivalents to one another, and the jhala is a fast and unaccompanied part in which the jor is accelerated to reach a peak or climax. Within the jor and jhala, a pulse can be heard throughout. These sections, especially jor, are described as neither a beat nor a rhythm but a movement that helps the raga gain momentum at the beginning of the piece. The frameworks and methods employed may vary according to whether the raga is performed by a vocalist or an instrumentalist.
Dhrupad
Dhrupad is another form of raga that is older and restricts the alap, jor and jhala sections in such a way that it is heard more frequently in the present day. This genre of Indian music formed the foundation for the alap-jor-jhala-gat structure to be welcomed into the West in the 20th century. In dhrupad, the distinctive feature is the climactic beginning, in comparison to raga's ascending composition. The word dhrupad, meaning "fixed verse", refers to the complex opening section (alap) allowing the other sections to grow and expand.
It is common in dhrupad for the alap to be extended and unaccompanied, similar to most instrumental genres in North Indian music. It also focuses on a longer and more structured version of the alap-jor section. In comparison to khayal, there is a clear structural division between the opening raga-alap and the jor. The jor section in dhrupad can be recognised by its increasingly articulated and rapid pulse. Within this section, jor follows the most common rhythmic cycle in dhrupad, the twelve-beat cycle. The theme of intensification is prominent, as the subsequent switch from the alap to the jor is identified as more rhythmic once the performance reaches the jor section. A common instrument utilised in the jor throughout dhrupad is the rudra vina, a string instrument that evokes a melodic rhythm.
A study conducted by Napier portrays the end point of the dhrupad as the articulation of the last note played in the alap during the jor section. It also notes that there is a large sectionalisation in the dhrupad that is obvious in jor, as consistency of rhythmic composition is hard to recognise in dhrupad.
Structure
Rhythm
An instrumental performance of jor features an instrumentalist, such as a sitarist, plucking certain notes on their instrument with a consistent rhythm. A vocal performance of jor will show a vocalist singing each phrase in equal time. A recording of two raga performances conducted by Dhaeambir Singh Dadhyalla indicated that jor, along with alap, lasts for 5 minutes, which reduced the amount of tempo change throughout the beginning section.
An experiment conducted by Will, Clayton, Wertheim, Leante, and Berg shows that the pulse progression in the jor and alap sections of a raga is distinctively different: the alap shows at least nine different pulse rates, whilst the jor has only three. These characteristics that distinguish the jor from the other sections are what create different responses from audiences.
In the three-section format of the raga (alap-jor-jhala), jor and alap share similarities in composition and rhythmic style. However, jor contains distinctive features that make it stand out in the raga. Jor, in comparison to the alap, is usually slow in its introduction into the raga, but continuously builds until it reaches a fast tempo. This allows a steady transition into the jhala, which continues the quickened beat set by the jor. Alap is defined solely by its free rhythm, whereas jor is limited to a regular pulse following a simple beat pattern, which can be elaborated in some cases. As the raga progresses, jor acts as the link between the alap and jhala, as it takes the melody introduced in the alap and expands it through to the jhala. Jor also makes use of gamak, a pattern of three notes exploring a wider range or octave. Another difference between the two opening sections is the freedom granted to jor as it moves between different pulses and speeds, whilst still focusing on certain smaller parts within the song.
Transition
The transition between the three sections, alap-jor-jhala, is continuous, and each part builds from its predecessor. Jor (literally, "join") acts as the second introduction, after the alap, within a raga performance. It follows a similar structure to the alap, with a shift in rhythmic style. As the raga transitions into the jor, the pulse is introduced by the melody instrumentalist. The jor utilises the features of scale and pattern from the previous section (the alap) and improvises to create new variations of these features. During jor, the performance must maintain a steady pulse with the exclusion of drums, which remains the case throughout alap, jor and jhala.
In musical notation, jor follows the same notes as the alap, with a constant steady beat between each. Narayan states that jor is the "faster portion of alap, with rhythm," but it deviates from the alap through its ability to concentrate on smaller sections or notes throughout the raga. The distinction between alap and jor is made by the increase in regularity in the jor compared with the preceding alap section. Similarly, the basic sound sequence in this section is formed by chikari events that evoke a prominent timbral-rhythmic pattern. The relationship between the two forms a bridge connecting the light characteristics of the alap with the controlled design of the raga, where the drums decide the join into the arrangement. The theme formed in the introduction of the alap is continued into the jor, where drums and rhythmic beats are excluded, and the chosen melodic instrument is strummed at an accelerated pace or the performer quickens the phrasing of each syllable.
References
Formal sections in music analysis
Hindustani music terminology | Jor (music) | [
"Technology"
] | 1,930 | [
"Components",
"Formal sections in music analysis"
] |
577,559 | https://en.wikipedia.org/wiki/List%20of%20schools%20of%20mines | A school of mines (or mining school) is an engineering school, often established in the 18th and 19th centuries, that originally focused on mining engineering and applied science. Most have been integrated within larger constructs such as mineral engineering, some no longer focusing primarily on mining subjects, while retaining the name.
Universities offering degrees in mining engineering
Africa
Asia
Europe
North America
Oceania
South America
See also
List of colleges of natural resources
Ranking QS 2019, Subject: Engineering - Mineral and Mining.
References
Mine
Sc | List of schools of mines | [
"Engineering"
] | 100 | [
"Schools of mines",
"Engineering universities and colleges"
] |
8,638,963 | https://en.wikipedia.org/wiki/Superferromagnetism | Superferromagnetism is the magnetism of an ensemble of magnetically interacting super-moment-bearing material particles that would be superparamagnetic if they were not interacting. Nanoparticles of iron oxides, such as ferrihydrite (nominally FeOOH), often cluster and interact magnetically. These interactions change the magnetic behaviours of the nanoparticles (both above and below their blocking temperatures) and lead to an ordered low-temperature phase with non-randomly oriented particle super-moments.
Discovery
The phenomenon appears to have been first described, and the term "superferromagnetism" introduced, by Bostanjoglo and Röhkel for a metallic film system. A decade later, the same phenomenon was rediscovered and described to occur in small-particle systems. The discovery is attributed as such in the scientific literature.
References
Magnetic ordering | Superferromagnetism | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 181 | [
"Magnetic ordering",
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
8,639,701 | https://en.wikipedia.org/wiki/Chemical%20Physics%20Letters | Chemical Physics Letters is a biweekly peer-reviewed scientific journal covering research in chemical physics and physical chemistry. It was established in 1967 and is published by Elsevier. The editors-in-chief are David C. Clary, B. Dietzek, K-L. Han, and A. Karton.
External links
Chemical physics journals
Academic journals established in 1967
Elsevier academic journals
English-language journals | Chemical Physics Letters | [
"Chemistry"
] | 83 | [
"Chemical physics journals"
] |
8,640,320 | https://en.wikipedia.org/wiki/Dioptric%20correction | Dioptric correction is the expression for the adjustment of the optical instrument to the varying visual acuity of a person's eyes. It is the adjustment of one lens to provide compatible focus when the viewer's eyes have differing visual capabilities. One result is less strain on the eyes that allow for optimal viewing and depth and contrast focusing when composing a photograph or viewing an item through a device made of lenses or lens elements.
References
Optics | Dioptric correction | [
"Physics",
"Chemistry"
] | 89 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
8,640,413 | https://en.wikipedia.org/wiki/Beta-glucan | Beta-glucans, β-glucans comprise a group of β-D-glucose polysaccharides (glucans) naturally occurring in the cell walls of cereals, bacteria, and fungi, with significantly differing physicochemical properties dependent on source. Typically, β-glucans form a linear backbone with 1–3 β-glycosidic bonds but vary with respect to molecular mass, solubility, viscosity, branching structure, and gelation properties, causing diverse physiological effects in animals.
At dietary intake levels of at least 3 g per day, oat fiber β-glucan decreases blood levels of LDL cholesterol and so may reduce the risk of cardiovascular diseases. β-glucans are natural gums and are used as texturing agents in various nutraceutical and cosmetic products, and as soluble fiber supplements.
History
Cereal and fungal products have been used for centuries for medicinal and cosmetic purposes; however, the specific role of β-glucan was not explored until the 20th century. β-glucans were first discovered in lichens, and shortly thereafter in barley. A particular interest in oat β-glucan arose after a cholesterol lowering effect from oat bran reported in 1981.
In 1997, the FDA approved a claim that intake of at least 3.0 g of β-glucan from oats per day decreased absorption of dietary cholesterol and reduced the risk of coronary heart disease. The approved health claim was later amended to include these sources of β-glucan: rolled oats (oatmeal), oat bran, whole oat flour, oatrim (the soluble fraction of alpha-amylase hydrolyzed oat bran or whole oat flour), whole grain barley and barley beta-fiber. An example of an allowed label claim: "Soluble fiber from foods such as oatmeal, as part of a diet low in saturated fat and cholesterol, may reduce the risk of heart disease. A serving of oatmeal supplies 0.75 grams of the 3.0 g of β-glucan soluble fiber necessary per day to have this effect." The claim language is in the Federal Register 21 CFR 101.81 Health Claims: "Soluble fiber from certain foods and risk of coronary heart disease (CHD)".
Structure
Glucans are arranged in six-sided D-glucose rings connected linearly at varying carbon positions depending on the source, although most commonly β-glucans include a 1-3 glycosidic link in their backbone. Although technically β-glucans are chains of D-glucose polysaccharides linked by β-type glycosidic bonds, by convention not all β-D-glucose polysaccharides are categorized as β-glucans. Cellulose is not conventionally considered a β-glucan, as it is insoluble and does not exhibit the same physicochemical properties as other cereal or yeast β-glucans.
Some β-glucan molecules have branching glucose side-chains attached to other positions on the main D-glucose chain, which branch off the β-glucan backbone. In addition, these side-chains can be attached to other types of molecules, like proteins, as in polysaccharide-K.
The most common forms of β-glucans are those comprising D-glucose units with β-1,3 links. Yeast and fungal β-glucans contain 1-6 side branches, while cereal β-glucans contain both β-1,3 and β-1,4 backbone bonds, but no β-1,3 branching. Seaweed β-glucans consist of a backbone that is primarily β-1,3-glucan, but with some β-1,6-glucan in the backbone as well as in side chains.
The frequency, location, and length of the side-chains may play a role in immunomodulation. Differences in molecular weight, shape, and structure of β-glucans dictate the differences in biological activity.
In general, β-1,3 linkages are created by 1,3-beta-glucan synthase, and β-1,4 linkages are created by cellulose synthase. The process leading to β-1,6 linkages is poorly understood: although genes important in the process have been identified, not much is known about what each of them do.
β-glucan types
β-glucans form a natural component of the cell walls of bacteria, fungi, yeast, and cereals such as oat and barley. Each type of beta-glucan comprises a different molecular backbone, level of branching, and molecular weight which affects its solubility and physiological impact. One of the most common sources of β(1,3)D-glucan for supplement use is derived from the cell wall of baker's yeast (Saccharomyces cerevisiae). β-glucans found in the cell walls of yeast contain a 1,3 glucose backbone with elongated 1,6 glucose branches. Other sources include seaweed, and various mushrooms, such as lingzhi, shiitake, chaga, and maitake, which are under preliminary research for their potential immune effects.
Fermentable fiber
In the diet, β-glucans are a source of soluble, fermentable fiber – also called prebiotic fiber – which provides a substrate for microbiota within the large intestine, increasing fecal bulk and producing short-chain fatty acids as byproducts with wide-ranging physiological activities. This fermentation impacts the expression of many genes within the large intestine, which further affects digestive function and cholesterol and glucose metabolism, as well as the immune system and other systemic functions.
Cereal
Cereal β-glucans from oat, barley, wheat, and rye have been studied for their effects on cholesterol levels in people with normal cholesterol levels and in those with hypercholesterolemia. Intake of oat β-glucan at daily amounts of at least 3 grams lowers total and low-density lipoprotein cholesterol levels by 5 to 10% in people with normal or elevated blood cholesterol levels.
Oats and barley differ in the ratio of trimer and tetramer 1-4 linkages. Barley has more 1-4 linkages with a degree of polymerization higher than 4. However, the majority of barley blocks remain trimers and tetramers. In oats, β-glucan is found mainly in the endosperm of the oat kernel, especially in the outer layers of that endosperm.
β-glucan absorption
Enterocytes facilitate the transportation of β(1,3)-glucans and similar compounds across the intestinal cell wall into the lymph, where they begin to interact with macrophages to activate immune function. Radiolabeled studies have verified that both small and large fragments of β-glucans are found in the serum, which indicates that they are absorbed from the intestinal tract. M cells within the Peyer's patches physically transport the insoluble whole glucan particles into the gut-associated lymphoid tissue.
(1,3)-β-D-glucan medical application
An assay to detect the presence of (1,3)-β-D-glucan in blood is marketed as a means of identifying invasive or disseminated fungal infections. This test should be interpreted within the broader clinical context, however, as a positive test does not render a diagnosis, and a negative test does not rule out infection. False positives may occur because of fungal contaminants in the antibiotics amoxicillin-clavulanate, and piperacillin/tazobactam. False positives can also occur with contamination of clinical specimens with the bacteria Streptococcus pneumoniae, Pseudomonas aeruginosa, and Alcaligenes faecalis, which also produce (1→3)β-D-glucan. This test can aid in the detection of Aspergillus, Candida, and Pneumocystis jirovecii. This test cannot be used to detect Mucor or Rhizopus, the fungi responsible for mucormycosis, as they do not produce (1,3)-beta-D-glucan.
See also
Prebiotic (nutrition)
Resistant starch
Xylooligosaccharides
Zymosan
References
External links
Edible thickening agents
Food additives
Medicinal fungi
Natural gums
Polysaccharides | Beta-glucan | [
"Chemistry"
] | 1,828 | [
"Carbohydrates",
"Polysaccharides"
] |
8,640,428 | https://en.wikipedia.org/wiki/Product%20fit%20analysis | A Product fit analysis (PFA) is a form of requirements analysis of the gap between an IT product's functionality and required functions. It is a document which consists of all the business requirements which are mapped to the product or application.
Requirements are specifically mentioned and the application is designed accordingly.
A PFA document is designed covering all the functionality required by the business and how it is addressed in the application.
It covers all the data inputs, data processing and data outputs.
References
Performing a Product FIT Analysis
Design | Product fit analysis | [
"Engineering"
] | 104 | [
"Design"
] |
8,641,308 | https://en.wikipedia.org/wiki/Disk%20buffer | In computer storage, a disk buffer (often ambiguously called a disk cache or a cache buffer) is the embedded memory in a hard disk drive (HDD) or solid-state drive (SSD) acting as a buffer between the rest of the computer and the physical hard disk platter or flash memory that is used for storage. Modern hard disk drives come with 8 to 256 MiB of such memory, and solid-state drives come with up to 4 GB of cache memory.
Since the late 1980s, nearly all disks sold have embedded microcontrollers and either an ATA, Serial ATA, SCSI, or Fibre Channel interface. The drive circuitry usually has a small amount of memory, used to store the data going to and coming from the disk platters.
The disk buffer is physically distinct from and is used differently from the page cache typically kept by the operating system in the computer's main memory. The disk buffer is controlled by the microcontroller in the hard disk drive, and the page cache is controlled by the computer to which that disk is attached. The disk buffer is usually quite small, ranging from 8 MB to 4 GB, and the page cache is generally all unused main memory. While data in the page cache is reused multiple times, the data in the disk buffer is rarely reused. In this sense, the terms disk cache and cache buffer are misnomers; the embedded controller's memory is more appropriately called disk buffer.
Note that disk array controllers, as opposed to disk controllers, usually have normal cache memory of around 0.5–8 GiB.
Uses
Read-ahead/read-behind
When a disk's controller executes a physical read, the actuator moves the read/write head to (or near to) the correct cylinder. After some settling time, and possibly fine-actuation, the read head begins to pick up track data, and all that is left to do is wait until platter rotation brings the requested data.
The data read ahead of the request during this wait is unrequested but comes for free, so it is typically saved in the disk buffer in case it is requested later.
Similarly, data can be read for free behind the requested one if the head can stay on track because there is no other read to execute or the next actuating can start later and still complete in time.
If several requested reads are on the same track (or close by on a spiral track), most unrequested data between them will be both read ahead and behind.
Speed matching
The speed of the disk's I/O interface to the computer almost never matches the speed at which the bits are transferred to and from the hard disk platter. The disk buffer is used so that both the I/O interface and the disk read/write head can operate at full speed.
Write acceleration
The disk's embedded microcontroller may signal the main computer that a disk write is complete immediately after receiving the write data, before the data is actually written to the platter. This early signal allows the main computer to continue working even though the data has not actually been written yet. This can be somewhat dangerous, because if power is lost before the data is permanently fixed in the magnetic media, the data will be lost from the disk buffer, and the file system on the disk may be left in an inconsistent state.
On some disks, this vulnerable period between signaling the write complete and fixing the data can be arbitrarily long, as the write can be deferred indefinitely by newly arriving requests. For this reason, the use of write acceleration can be controversial. Consistency can be maintained, however, by using a battery-backed memory system for caching data, although this is typically only found in high-end RAID controllers.
Alternatively, the caching can simply be turned off when the integrity of data is deemed more important than write performance. Another option is to send data to disk in a carefully managed order and to issue "cache flush" commands in the right places, which is usually referred to as the implementation of write barriers.
Command queuing
Newer SATA and most SCSI disks can accept multiple commands while any one command is in operation through "command queuing" (see NCQ and TCQ). These commands are stored by the disk's embedded controller until they are completed. One benefit is that the commands can be re-ordered to be processed more efficiently, so that commands affecting the same area of a disk are grouped together. Should a read reference the data at the destination of a queued write, the to-be-written data will be returned.
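As a rough illustration of why grouping nearby requests helps, the reordering can be sketched as an elevator-style sweep over queued block addresses. This is a toy sketch, not actual drive firmware; the function and its inputs are hypothetical:

def reorder_queue(queued_lbas, head_pos):
    # Toy elevator-style scheduler: serve queued logical block addresses
    # in ascending order starting at the current head position, then wrap
    # around for the remainder. Grouping nearby addresses reduces seek
    # distance compared with first-come, first-served order.
    ahead = sorted(lba for lba in queued_lbas if lba >= head_pos)
    behind = sorted(lba for lba in queued_lbas if lba < head_pos)
    return ahead + behind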
NCQ is usually used in combination with enabled write buffering. In case of a read/write FPDMA command with Force Unit Access (FUA) bit set to 0 and enabled write buffering, an operating system may see the write operation finished before the data is physically written to the media. In case of FUA bit set to 1 and enabled write buffering, write operation returns only after the data is physically written to the media.
Cache control from the host
Cache flushing
Data that was accepted into the write cache of a disk device will eventually be written to disk platters, provided that no starvation condition occurs as a result of a firmware flaw, and that the disk power supply is not interrupted before cached writes are forced to disk platters. In order to control the write cache, the ATA specification included the FLUSH CACHE (E7h) and FLUSH CACHE EXT (EAh) commands. These commands cause the disk to complete writing data from its cache, and the disk returns good status after the data in the write cache has been written to disk media. In addition, when the drive receives a STANDBY IMMEDIATE command, a hard disk drive parks its heads, while a flash device saves its FTL mapping table.
An operating system will send FLUSH CACHE and STANDBY IMMEDIATE commands to hard disk drives during the shutdown process.
Mandatory cache flushing is used in Linux for write barriers in some filesystems (for example, ext4), together with Force Unit Access write command for journal commit blocks.
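From an application's perspective, these flushes are normally requested through the operating system rather than by issuing ATA commands directly. A minimal sketch in Python follows; whether the data truly reaches the platter depends on the filesystem and drive configuration:

import os

def durable_write(path, data: bytes):
    # Write data and ask the kernel to push it to stable storage.
    # On Linux, fsync flushes the page cache for the file and, on most
    # filesystems with correctly behaving drives, also causes a disk
    # cache flush (e.g. an ATA FLUSH CACHE) before returning.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)
    finally:
        os.close(fd)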
Force Unit Access (FUA)
Force Unit Access (FUA) is an I/O write command option that forces written data all the way to stable storage. FUA write commands (WRITE DMA FUA EXT 3Dh, WRITE DMA QUEUED FUA EXT 3Eh, WRITE MULTIPLE FUA EXT CEh), in contrast to corresponding commands without FUA, write data directly to the media, regardless of whether write caching in the device is enabled or not. FUA write command will not return until data is written to media, thus data written by a completed FUA write command is on permanent media even if the device is powered off before issuing a FLUSH CACHE command.
FUA appeared in the SCSI command set, and was later adopted by SATA with NCQ. FUA is more fine-grained as it allows a single write operation to be forced to stable media and thus has smaller overall performance impact when compared to commands that flush the entire disk cache, such as the ATA FLUSH CACHE family of commands.
Windows (Vista and up) supports FUA as part of Transactional NTFS, but only for SCSI or Fibre Channel disks where support for FUA is common. It is not known whether a SATA drive that supports FUA write commands will actually honor the command and write data to disk platters as instructed; thus, Windows 8 and Windows Server 2012 instead send commands to flush the disk write cache after certain write operations.
Although the Linux kernel gained support for NCQ around 2007, SATA FUA remains disabled by default because of regressions that were found in 2012 when the kernel's support for FUA was tested. The Linux kernel supports FUA at the block layer level.
See also
Hybrid array
Hybrid drive
References
Computer storage devices
Hard disk computer storage
Solid-state computer storage | Disk buffer | [
"Technology"
] | 1,605 | [
"Computer storage devices",
"Recording devices"
] |
8,641,870 | https://en.wikipedia.org/wiki/Weinstein%E2%80%93Aronszajn%20identity | In mathematics, the Weinstein–Aronszajn identity states that if $A$ and $B$ are matrices of size $m \times n$ and $n \times m$ respectively (either or both of which may be infinite) then,
$\det(I_m + AB) = \det(I_n + BA),$
provided $AB$ (and hence, also $BA$) is of trace class,
where $I_k$ is the $k \times k$ identity matrix.
It is closely related to the matrix determinant lemma and its generalization. It is the determinant analogue of the Woodbury matrix identity for matrix inverses.
Proof
The identity may be proved as follows.
Let $K$ be a matrix consisting of the four blocks $I_m$, $A$, $B$ and $I_n$:
$K = \begin{pmatrix} I_m & A \\ B & I_n \end{pmatrix}.$
Because $I_m$ is invertible, the formula for the determinant of a block matrix gives
$\det K = \det(I_m) \det\left(I_n - B I_m^{-1} A\right) = \det(I_n - BA).$
Because $I_n$ is invertible, the formula for the determinant of a block matrix gives
$\det K = \det(I_n) \det\left(I_m - A I_n^{-1} B\right) = \det(I_m - AB).$
Thus
$\det(I_m - AB) = \det(I_n - BA).$
Substituting $-A$ for $A$ then gives the Weinstein–Aronszajn identity.
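As a quick numerical sanity check of the finite-dimensional case (the dimensions and random seed below are arbitrary choices for illustration):

import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 2
A = rng.standard_normal((m, n))   # m x n
B = rng.standard_normal((n, m))   # n x m

lhs = np.linalg.det(np.eye(m) + A @ B)
rhs = np.linalg.det(np.eye(n) + B @ A)
assert np.isclose(lhs, rhs)       # det(I_m + AB) == det(I_n + BA)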
Applications
Let $\lambda \in \mathbb{C} \setminus \{0\}$. The identity can be used to show the somewhat more general statement that
$\det(AB - \lambda I_m) = (-\lambda)^{m-n} \det(BA - \lambda I_n).$
It follows that the non-zero eigenvalues of $AB$ and $BA$ are the same.
This identity is useful in developing a Bayes estimator for multivariate Gaussian distributions.
The identity also finds applications in random matrix theory by relating determinants of large matrices to determinants of smaller ones.
References
Determinants
Matrix theory
Theorems in linear algebra | Weinstein–Aronszajn identity | [
"Mathematics"
] | 253 | [
"Theorems in algebra",
"Theorems in linear algebra"
] |
8,642,422 | https://en.wikipedia.org/wiki/Language%20identification | In natural language processing, language identification or language guessing is the problem of determining which natural language given content is in. Computational approaches to this problem view it as a special case of text categorization, solved with various statistical methods.
Overview
There are several statistical approaches to language identification using different techniques to classify the data. One technique is to compare the compressibility of the text to the compressibility of texts in a set of known languages. This approach is known as mutual information based distance measure. The same technique can also be used to empirically construct family trees of languages which closely correspond to the trees constructed using historical methods. Mutual information based distance measure is essentially equivalent to more conventional model-based methods and is not generally considered to be either novel or better than simpler techniques.
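A minimal sketch of the compression-based idea, in the spirit of the normalized compression distance of Cilibrasi and Vitányi cited below, might look as follows; zlib and the references mapping are illustrative choices, not the method of any specific paper:

import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance approximated with zlib:
    # similar texts compress better together than dissimilar ones.
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def guess_language(text: str, references: dict) -> str:
    # references: language name -> reference text in that language (assumed inputs)
    doc = text.encode("utf-8")
    return min(references,
               key=lambda lang: ncd(doc, references[lang].encode("utf-8")))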
Another technique, as described by Cavnar and Trenkle (1994) and Dunning (1994) is to create a language n-gram model from a "training text" for each of the languages. These models can be based on characters (Cavnar and Trenkle) or encoded bytes (Dunning); in the latter, language identification and character encoding detection are integrated. Then, for any piece of text needing to be identified, a similar model is made, and that model is compared to each stored language model. The most likely language is the one with the model that is most similar to the model from the text needing to be identified. This approach can be problematic when the input text is in a language for which there is no model. In that case, the method may return another, "most similar" language as its result. Also problematic for any approach are pieces of input text that are composed of several languages, as is common on the Web.
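A minimal sketch of the Cavnar and Trenkle character n-gram method follows; the profile size, the maximum n-gram length, and the training_texts mapping are illustrative assumptions rather than the published parameters:

from collections import Counter

def ngram_profile(text, n_max=3, top=300):
    # Rank the most frequent character n-grams (lengths 1..n_max).
    counts = Counter()
    for n in range(1, n_max + 1):
        counts.update(text[i:i + n] for i in range(len(text) - n + 1))
    return {g: rank for rank, (g, _) in enumerate(counts.most_common(top))}

def out_of_place(doc_profile, lang_profile):
    # Cavnar-Trenkle "out-of-place" measure: sum of rank differences,
    # with a maximum penalty for n-grams absent from the language profile.
    penalty = len(lang_profile)
    return sum(abs(rank - lang_profile.get(g, penalty))
               for g, rank in doc_profile.items())

def identify(text, training_texts):
    # training_texts: language name -> training text (assumed inputs)
    profiles = {lang: ngram_profile(t) for lang, t in training_texts.items()}
    doc = ngram_profile(text)
    return min(profiles, key=lambda lang: out_of_place(doc, profiles[lang]))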
For a more recent method, see Řehůřek and Kolkus (2009). This method can detect multiple languages in an unstructured piece of text and works robustly on short texts of only a few words: something that the n-gram approaches struggle with.
An older statistical method by Grefenstette was based on the prevalence of certain function words (e.g., "the" in English).
A common non-statistical intuitive approach (though highly uncertain) is to look for common letter combinations, or distinctive diacritics or punctuation.
Identifying similar languages
One of the great bottlenecks of language identification systems is to distinguish between closely related languages. Similar languages like Bulgarian and Macedonian or Indonesian and Malay present significant lexical and structural overlap, making it challenging for systems to discriminate between them.
In 2014 the DSL shared task has been organized providing a dataset (Tan et al., 2014) containing 13 different languages (and language varieties) in six language groups: Group A (Bosnian, Croatian, Serbian), Group B (Indonesian, Malaysian), Group C (Czech, Slovak), Group D (Brazilian Portuguese, European Portuguese), Group E (Peninsular Spanish, Argentine Spanish), Group F (American English, British English). The best system reached performance of over 95% results (Goutte et al., 2014). Results of the DSL shared task are described in Zampieri et al. 2014.
Software
Apache OpenNLP includes char n-gram based statistical detector and comes with a model that can distinguish 103 languages
Apache Tika contains a language detector for 18 languages
See also
Native Language Identification
Algorithmic information theory
Artificial grammar learning
Family name affixes
Kolmogorov complexity
Language Analysis for the Determination of Origin
Machine translation
Translation
References
Benedetto, D., E. Caglioti and V. Loreto. Language trees and zipping. Physical Review Letters, 88:4 (2002), Complexity theory.
Cavnar, William B. and John M. Trenkle. "N-Gram-Based Text Categorization". Proceedings of SDAIR-94, 3rd Annual Symposium on Document Analysis and Information Retrieval (1994).
Cilibrasi, Rudi and Paul M.B. Vitanyi. "Clustering by compression". IEEE Transactions on Information Theory 51(4), April 2005, 1523–1545.
Dunning, T. (1994) "Statistical Identification of Language". Technical Report MCCS 94-273, New Mexico State University, 1994.
Goodman, Joshua. (2002) Extended comment on "Language Trees and Zipping". Microsoft Research, Feb 21 2002. (This is a criticism of the data compression in favor of the Naive Bayes method.)
Goutte, C.; Leger, S.; Carpuat, M. (2014) The NRC System for Discriminating Similar Languages. Proceedings of the Coling 2014 workshop "Applying NLP Tools to Similar Languages, Varieties and Dialects"
Grefenstette, Gregory. (1995) Comparing two language identification schemes. Proceedings of the 3rd International Conference on the Statistical Analysis of Textual Data (JADT 1995).
Poutsma, Arjen. (2001) Applying Monte Carlo techniques to language identification. SmartHaven, Amsterdam. Presented at CLIN 2001.
Tan, L.; Zampieri, M.; Ljubešić, N.; Tiedemann, J. (2014) Merging Comparable Data Sources for the Discrimination of Similar Languages: The DSL Corpus Collection. Proceedings of the 7th Workshop on Building and Using Comparable Corpora (BUCC). Reykjavik, Iceland. p. 6-10
The Economist. (2002) "The elements of style: Analysing compressed data leads to impressive results in linguistics"
Radim Řehůřek and Milan Kolkus. (2009) "Language Identification on the Web: Extending the Dictionary Method" Computational Linguistics and Intelligent Text Processing.
Zampieri, M.; Tan, L.; Ljubešić, N.; Tiedemann, J. (2014) A Report on the DSL Shared Task 2014. Proceedings of the 1st Workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects (VarDial). Dublin, Ireland. p. 58-67.
Computational linguistics
Natural language processing
Translation
Tasks of natural language processing | Language identification | [
"Technology"
] | 1,250 | [
"Machine translation",
"Natural language processing",
"Natural language and computing",
"Computational linguistics"
] |
8,642,531 | https://en.wikipedia.org/wiki/Machine%20translation%20software%20usability | The sections below give objective criteria for evaluating the usability of machine translation software output.
Stationarity or canonical form
Do repeated translations converge on a single expression in both languages? I.e. does the translation method show stationarity or produce a canonical form? Does the translation become stationary without losing the original meaning? This metric has been criticized as not being well correlated with BLEU (BiLingual Evaluation Understudy) scores.
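The check can be sketched as a fixed-point iteration; here translate(text, source, target) is a hypothetical machine translation function, not a real API:

def round_trip_fixed_point(text, translate, src="en", dst="fr", max_iters=10):
    # Repeatedly translate src -> dst -> src and report whether the text
    # converges to a canonical form (stationarity) within max_iters.
    previous = text
    for _ in range(max_iters):
        text = translate(translate(text, src, dst), dst, src)
        if text == previous:
            return text, True    # reached a fixed point
        previous = text
    return text, False           # did not converge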
Adaptive to colloquialism, argot or slang
Is the system adaptive to colloquialism, argot or slang? The French language has many rules for creating words in the speech and writing of popular culture. Two such rules are: (a) The reverse spelling of words such as femme to meuf. (This is called verlan.) (b) The attachment of the suffix -ard to a noun or verb to form a proper noun. For example, the noun faluche means "student hat". The word faluchard formed from faluche colloquially can mean, depending on context, "a group of students", "a gathering of students" and "behavior typical of a student". The Google translator as of 28 December 2006 does not derive constructed words such as those formed by rule (b), as shown here:
Il y a une chorale falucharde mercredi, venez nombreux, les faluchards chantent des paillardes! ==> There is a choral society falucharde Wednesday, come many, the faluchards sing loose-living women!
French argot has three levels of usage:
familier or friendly, acceptable among friends, family and peers but not at work
grossier or swear words, acceptable among friends and peers but not at work or in family
verlan or ghetto slang, acceptable among lower classes but not among middle or upper classes
The United States National Institute of Standards and Technology conducts annual evaluations of machine translation systems based on the BLEU-4 criterion. A combined method called IQmt, which incorporates BLEU and the additional metrics NIST, GTM, ROUGE and METEOR, has been implemented by Gimenez and Amigo.
Well-formed output
Is the output grammatical or well-formed in the target language? Using an interlingua should be helpful in this regard, because with a fixed interlingua one should be able to write a grammatical mapping to the target language from the interlingua. Consider the following Arabic language input and English language translation result from the Google translator as of 27 December 2006. This Google translator output doesn't parse using a reasonable English grammar:
وعن حوادث التدافع عند شعيرة رمي الجمرات -التي كثيرا ما يسقط فيها العديد من الضحايا- أشار الأمير نايف إلى إدخال "تحسينات كثيرة في جسر الجمرات ستمنع بإذن الله حدوث أي تزاحم".
==>
And incidents at the push Carbuncles-throwing ritual, which often fall where many of the victims - Prince Nayef pointed to the introduction of "many improvements in bridge Carbuncles God would stop the occurrence of any competing."
Semantics preservation
Do repeated re-translations preserve the semantics of the original sentence? For example, consider the following English input passed multiple times into and out of French using the Google translator as of 27 December 2006:
Better a day earlier than a day late. ==>
Améliorer un jour plus tôt qu'un jour tard. ==>
To improve one day earlier than a day late. ==>
Pour améliorer un jour plus tôt qu'un jour tard. ==>
To improve one day earlier than a day late.
As noted above, this kind of round-trip translation is a very unreliable method of evaluation.
Trustworthiness and security
An interesting peculiarity of Google Translate as of 24 January 2008 (corrected as of 25 January 2008) is the following result when translating from English to Spanish, which shows an embedded joke in the English-Spanish dictionary which has some added poignancy given recent events:
Heath Ledger is dead ==>
Tom Cruise está muerto
This raises the issue of trustworthiness when relying on a machine translation system embedded in a Life-critical system in which the translation system has input to a Safety Critical Decision Making process. Conjointly it raises the issue of whether in a given use the software of the machine translation system is safe from hackers.
It is not known whether this feature of Google Translate was the result of a joke/hack or perhaps an unintended consequence of the use of a method such as statistical machine translation. Reporters from CNET Networks asked Google for an explanation on January 24, 2008; Google said only that it was an "internal issue with Google Translate". The mistranslation was the subject of much hilarity and speculation on the Internet.
If it is an unintended consequence of the use of a method such as statistical machine translation, and not a joke/hack, then this event is a demonstration of a potential source of critical unreliability in the statistical machine translation method.
In human translations, in particular on the part of interpreters, selectivity on the part of the translator in performing a translation is often commented on when one of the two parties being served by the interpreter knows both languages.
This leads to the issue of whether a particular translation could be considered verifiable. In this case, a converging round-trip translation would be a kind of verification.
See also
Comparison of machine translation applications
Evaluation of machine translation
Round-trip translation
Translation
Notes
References
Gimenez, Jesus and Enrique Amigo. (2005) IQmt: A framework for machine translation evaluation.
NIST. Annual machine translation system evaluations and evaluation plan.
Papineni, Kishore, Salim Roukos, Todd Ward and Wei-Jing Zhu. (2002) BLEU: A Method for automatic evaluation of machine translation. Proc. 40th Annual Meeting of the ACL, July, 2002, pp. 311–318.
Computational linguistics
Natural language processing | Machine translation software usability | [
"Technology"
] | 1,351 | [
"Natural language processing",
"Natural language and computing",
"Computational linguistics"
] |
8,642,567 | https://en.wikipedia.org/wiki/Process%20integration | Process integration is a term in chemical engineering which has two possible meanings.
A holistic approach to process design which emphasizes the unity of the process and considers the interactions between different unit operations from the outset, rather than optimising them separately. This can also be called integrated process design or process synthesis. El-Halwagi (1997 and 2006) and Smith (2005) describe the approach well. An important first step is often product design (Cussler and Moggridge 2003) which develops the specification for the product to fulfil its required purpose.
Pinch analysis, a technique for designing a process to minimise energy consumption and maximise heat recovery, also known as heat integration, energy integration or pinch technology. The technique calculates thermodynamically attainable energy targets for a given process and identifies how to achieve them. A key insight is the pinch temperature, which is the most constrained point in the process. The most detailed explanation of the techniques is by Linnhoff et al. (1982), Shenoy (1995), Kemp (2006) and Kemp and Lim (2020), and it also features strongly in Smith (2005). This definition reflects the fact that the first major success for process integration was the thermal pinch analysis addressing energy problems and pioneered by Linnhoff and co-workers. Later, other pinch analyses were developed for several applications such as mass-exchange networks (El-Halwagi and Manousiouthakis, 1989), water minimization (Wang and Smith, 1994), and material recycle (El-Halwagi et al., 2003). A very successful extension was "Hydrogen Pinch", which was applied to refinery hydrogen management (Nick Hallale et al., 2002 and 2003). This allowed refiners to minimise the capital and operating costs of hydrogen supply to meet ever stricter environmental regulations and also increase hydrotreater yields.
Description
In the context of chemical engineering, process integration can be defined as a holistic approach to process design and optimization, which exploits the interactions between different units in order to employ resources effectively and minimize costs.
Process integration is not limited to the design of new plants, but it also covers retrofit design (e.g. new units to be installed in an old plant) and the operation of existing systems. Nick Hallale (2001) explains that with process integration, industries are making more money from their raw materials and capital assets while becoming cleaner and more sustainable.
The main advantage of process integration is to consider a system as a whole (i.e. integrated or holistic approach) in order to improve their design and/or operation. In contrast, an analytical approach would attempt to improve or optimize process units separately without necessarily taking advantage of potential interactions among them.
For instance, by using process integration techniques it might be possible to identify that a process can use the heat rejected by another unit and reduce the overall energy consumption, even if the units are not running at optimum conditions on their own. Such an opportunity would be missed with an analytical approach, as it would seek to optimize each unit, and thereafter it wouldn’t be possible to re-use the heat internally.
Typically, process integration techniques are employed at the beginning of a project (e.g. a new plant or the improvement of an existing one) to screen out promising options to optimize the design and/or operation of a process plant.
Also it is often employed, in conjunction with simulation and mathematical optimization tools to identify opportunities in order to better integrate a system (new or existing) and reduce capital and/or operating costs.
Most process integration techniques employ Pinch analysis or Pinch Tools to evaluate several processes as a whole system. Therefore, strictly speaking, both concepts are not the same, even if in certain contexts they are used interchangeably. The review by Nick Hallale (2001) explains that several trends are to be expected in the field. First, it seems probable that the boundary between targets and design will be blurred and that these will be based on more structural information regarding the process network. Second, it is likely that we will see a much wider range of applications of process integration. There is still much work to be carried out in the area of separation, not only in complex distillation systems, but also in mixed types of separation systems. This includes processes involving solids, such as flotation and crystallization. The use of process integration techniques for reactor design has seen rapid progress, but is still in its early stages. Third, a new generation of software tools is expected. The emergence of commercial software for process integration is fundamental to its wider application in process design.
References
Cussler, E.L. and Moggridge, G.D. (2001). Chemical Product Design. Cambridge University Press (Cambridge Series in Chemical Engineering).
El-Halwagi, M. M., (2006) "Process Integration", Elsevier
El-Halwagi, M. M., (1997) "Pollution Prevention through Process Integration", Academic Press
El-Halwagi, M. M., F. Gabriel, and D. Harell, (2003) “Rigorous Graphical Targeting for Resource Conservation via Material Recycle/Reuse Networks”, Ind. Eng. Chem. Res., 42, 4319-4328
El-Halwagi, M. M., and Manousiouthakis, V. (1989). Synthesis of mass exchange networks. AIChE J. 35(8), 1233-1244.
Hallale, Nick, (2001), "Burning Bright: Trends in Process Integration", Chemical Engineering Progress, July 2001
Hallale, N. Ian Moore, Dennis Vauk, "Hydrogen optimization at minimal investment", Petroleum Technology Quarterly (PTQ), Spring (2003)
Kemp, I.C. (2006). Pinch Analysis and Process Integration: A User Guide on Process Integration for the Efficient Use of Energy, 2nd edition. Butterworth-Heinemann. . Includes downloadable spreadsheet software.
Kemp, I.C. and Lim, J.S. (2020). Pinch Analysis for Energy and Carbon Footprint Reduction: A User Guide on Process Integration for the Efficient Use of Energy, 3rd edition. Includes downloadable spreadsheet software. Butterworth-Heinemann. .
Linnhoff, B., D.W. Townsend, D. Boland, G.F. Hewitt, B.E.A. Thomas, A.R. Guy and R.H. Marsland, (1982) “A User Guide on Process Integration for the Efficient Use of Energy," IChemE, UK.
Shenoy, U.V. (1995). "Heat Exchanger Network Synthesis: Process Optimization by Energy and Resource Analysis". Includes two computer disks. Gulf Publishing Company, Houston, TX, USA. .
Smith, R. (2005). Chemical Process Design and Integration. John Wiley and Sons.
Wang, Y. P. and R. Smith (1994). Wastewater Minimisation. Chem. Eng. Sci., 49, 981-1006
Mechanical engineering
Chemical process engineering
Building engineering
Process engineering | Process integration | [
"Physics",
"Chemistry",
"Engineering"
] | 1,479 | [
"Process engineering",
"Applied and interdisciplinary physics",
"Building engineering",
"Chemical engineering",
"Civil engineering",
"Mechanical engineering by discipline",
"Mechanical engineering",
"Chemical process engineering",
"Architecture"
] |
8,642,593 | https://en.wikipedia.org/wiki/Pinch%20analysis | Pinch analysis is a methodology for minimising energy consumption of chemical processes by calculating thermodynamically feasible energy targets (or minimum energy consumption) and achieving them by optimising heat recovery systems, energy supply methods and process operating conditions. It is also known as process integration, heat integration, energy integration or pinch technology.
The process data is represented as a set of energy flows, or streams, as a function of heat load (product of specific enthalpy and mass flow rate; SI unit W) against temperature (SI unit K). These data are combined for all the streams in the plant to give composite curves, one for all hot streams (releasing heat) and one for all cold streams (requiring heat). The point of closest approach between the hot and cold composite curves is the pinch point (or just pinch) with a hot stream pinch temperature and a cold stream pinch temperature. This is where the design is most constrained. Hence, by finding this point and starting the design there, the energy targets can be achieved using heat exchangers to recover heat between hot and cold streams in two separate systems, one for temperatures above pinch temperatures and one for temperatures below pinch temperatures. In practice, during the pinch analysis of an existing design, often cross-pinch exchanges of heat are found between a hot stream with its temperature above the pinch and a cold stream below the pinch. Removal of those exchangers by alternative matching makes the process reach its energy target.
History
In 1971, Ed Hohmann stated in his PhD thesis that 'one can compute the least amount of hot and cold utilities required for a process without knowing the heat exchanger network that could accomplish it. One also can estimate the heat exchange area required'.
In late 1977, Ph.D. student Bodo Linnhoff under the supervision of Dr John Flower at the University of Leeds showed the existence in many processes of a heat integration bottleneck, ‘the pinch’, which laid the basis for the technique, known today as pinch-analysis. At that time he had joined Imperial Chemical Industries (ICI) where he led practical applications and further method development.
Bodo Linnhoff developed the 'Problem Table', an algorithm for calculating the energy targets and worked out the basis for a calculation of the surface area required, known as ‘the spaghetti network’. These algorithms enabled practical application of the technique.
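A minimal sketch of the Problem Table cascade follows, assuming each stream is given as (supply temperature, target temperature, heat capacity flow rate CP) together with a global ΔTmin; this is an illustrative simplification, not Linnhoff's original formulation:

def problem_table(streams, dt_min=10.0):
    # streams: list of (t_supply, t_target, cp); hot streams cool down
    # (t_supply > t_target), cold streams heat up. Returns the minimum
    # hot and cold utility targets via the Problem Table cascade.
    shifted = []
    for ts, tt, cp in streams:
        shift = -dt_min / 2 if ts > tt else dt_min / 2   # hot down, cold up
        shifted.append((ts + shift, tt + shift, cp))
    bounds = sorted({t for ts, tt, _ in shifted for t in (ts, tt)}, reverse=True)
    cascade, heat = [0.0], 0.0
    for hi, lo in zip(bounds, bounds[1:]):
        net_cp = sum(cp if ts > tt else -cp
                     for ts, tt, cp in shifted
                     if max(ts, tt) >= hi and min(ts, tt) <= lo)
        heat += net_cp * (hi - lo)       # interval heat surplus (+) or deficit (-)
        cascade.append(heat)
    q_hot = max(0.0, -min(cascade))      # minimum hot utility target
    q_cold = q_hot + cascade[-1]         # overall balance gives cold utility
    return q_hot, q_cold

For example, a single hot stream (150, 50, 1.0) matched against a single cold stream (40, 140, 1.0) with dt_min=10 returns (0.0, 0.0), since full heat recovery is thermodynamically feasible at exactly ΔTmin.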
In 1982 he joined University of Manchester Institute of Technology (UMIST, present day University of Manchester) to continue the work. In 1983 he set up a consultation firm known as Linnhoff March International later acquired by KBC Energy Services.
Many refinements have been developed since and used in a wide range of industries, including extension to heat and power systems and non-process situations. The most detailed explanation of the techniques is by Linnhoff et al. (1982), Shenoy (1995), Kemp (2006) and Kemp and Lim (2020), while Smith (2005) includes several chapters on them. Both detailed and simplified (spreadsheet) programs are now available to calculate the energy targets. See Pinch Analysis Software below.
In recent years, Pinch analysis has been extended beyond energy applications. It now includes:
Mass Exchange Networks (El-Halwagi and Manousiouthakis, 1989)
Water pinch (Yaping Wang and Robin Smith, 1994; Nick Hallale, 2002; Prakash and Shenoy, 2005)
Hydrogen pinch (Nick Hallale et al., 2003; Agrawal and Shenoy, 2006)
Carbon pinch (referenced in Kemp and Lim, 2020)
Weaknesses
Classical pinch-analysis primarily calculates the energy costs for the heating and cooling utility. At the pinch point, where the hot and cold streams are the most constrained, large heat exchangers are required to transfer heat between the hot and cold streams. Large heat exchangers entail high investment costs. In order to reduce capital cost, in practice a minimum temperature difference (ΔTmin) at the pinch point is demanded, e.g., 10 °F. It is possible to estimate the heat exchanger area and capital cost, and hence the optimal ΔTmin value. However, the cost curve is quite flat and the optimum may be affected by "topology traps". The pinch method is not always appropriate for simple networks or where severe operating constraints exist. Kemp (2006) and Kemp and Lim (2020) discuss these aspects in detail.
Recent developments
The problem of integrating heat between hot and cold streams, and finding the optimal network, in particular in terms of costs, may today be solved with numerical algorithms. The network can be formulated as a so-called mixed integer non-linear programming (MINLP) problem and solved with an appropriate numerical solver. Nevertheless, large-scale MINLP problems can still be hard to solve for today's numerical algorithms. Alternatively, some attempts have been made to reformulate the MINLP problems as mixed integer linear problems, in which possible networks are then screened and optimized. For simple networks of a few streams and heat exchangers, hand design methods with simple targeting software are often adequate, and aid the engineer in understanding the process.
See also
References
El-Halwagi, M. M. and V. Manousiouthakis, 1989, "Synthesis of Mass Exchange Networks", AIChE J., 35(8), 1233–1244.
Kemp, I.C. (2006). Pinch Analysis and Process Integration: A User Guide on Process Integration for the Efficient Use of Energy, 2nd edition. Includes spreadsheet software. Butterworth-Heinemann. . (1st edition: Linnhoff et al., 1982).
Kemp, I.C. and Lim, J.S. (2020). Pinch Analysis for Energy and Carbon Footprint Reduction: A User Guide on Process Integration for the Efficient Use of Energy, 3rd edition. Includes spreadsheet software. Butterworth-Heinemann. .
Linnhoff, B., D.W. Townsend, D. Boland, G.F. Hewitt, B.E.A. Thomas, A.R. Guy and R.H. Marsland, (1982) A User Guide on Process Integration for the Efficient Use of Energy. IChemE, UK.
Shenoy, U.V. (1995). Heat Exchanger Network Synthesis: Process Optimization by Energy and Resource Analysis. Includes two computer disks. Gulf Publishing Company, Houston, TX, USA. .
Smith, R. (2005). Chemical Process Design and Integration. John Wiley and Sons.
Hallale, Nick. (2002). A New Graphical Targeting Method for Water Minimisation. Advances in Environmental Research. 6(3): 377-390
Nick Hallale, Ian Moore, Dennis Vauk, "Hydrogen optimization at minimal investment", Petroleum Technology Quarterly (PTQ), Spring (2003)
Agrawal, V. and U. V. Shenoy, 2006, "Unified Conceptual Approach to Targeting and Design of Water and Hydrogen Networks", AIChE J., 52(3), 1071–1082.
Wang, Y. P. and Smith, R. (1994). Wastewater Minimisation. Chemical Engineering Science. 49: 981-1006
Prakash, R. and Shenoy, U.V. (2005) Targeting and Design of Water Networks for Fixed Flowrate and Fixed Contaminant Load Operations. Chemical Engineering Science. 60(1), 255-268
de Klerk, LW, de Klerk, MP and van der Westhuizen, D "Improvements in hydrometallurgical uranium circuit capital and operating costs by water management and integration of utility and process energy targets" AusImm Conference, U 2015
External links
PinCH - Software for continuous and batch processes including indirect heat recovery loops and energy storages. Free manuals, tutorials, case studies and success stories available
HeatIT - Free (light) version of Pinch Analysis software that runs in Excel - developed by Pinchco, a consultancy company offering expert advice on energy related matters
Simulis Pinch - Tool from ProSim SA that can be used directly in Excel and that is dedicated to the diagnosis and the energy integration of the processes.
Integration - A practical and low-cost process integration computation tool developed by CanmetENERGY, Canada's leading research and technology organization in the field of clean energy.
Pinch Analysis Tool - TLK-Energy's online pinch analysis software enables you to swiftly identify efficiency potential in unused waste heat flows and optimally integrate heat pumps for your industrial operation.
Mechanical engineering
Chemical process engineering
Heat exchangers
Building engineering
Energy recovery
Analysis | Pinch analysis | [
"Physics",
"Chemistry",
"Engineering"
] | 1,755 | [
"Applied and interdisciplinary physics",
"Building engineering",
"Chemical equipment",
"Chemical engineering",
"Civil engineering",
"Heat exchangers",
"Mechanical engineering",
"Chemical process engineering",
"Architecture"
] |
8,643,269 | https://en.wikipedia.org/wiki/DPVweb | DPVweb is a database for virologists working on plant viruses combining taxonomic, bioinformatic and symptom data.
Description
DPVweb is a central web-based source of information about viruses, viroids and satellites of plants, fungi and protozoa.
It provides comprehensive taxonomic information, including brief descriptions of each family and genus, and classified lists of virus sequences. It makes use of a large database that also holds detailed, curated information for all sequences of viruses, viroids and satellites of plants, fungi and protozoa that are complete or that contain at least one complete gene. There are currently about 10,000 such sequences. For comparative purposes, DPVweb also contains a representative sequence of all other fully sequenced virus species with an RNA or single-stranded DNA genome. For each curated sequence the database contains the start and end positions of each feature (gene, non-translated region, etc.), and these have been checked for accuracy. As far as possible, the nomenclature for genes and proteins is standardized within genera and families. Sequences of features (either as DNA or amino acid sequences) can be directly downloaded from the website in FASTA format.
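FASTA is a simple line-oriented text format, so downloaded feature sequences are easy to process; a minimal reader (the file path is a placeholder) could look like this:

def read_fasta(path):
    # Minimal FASTA reader: yields (header, sequence) pairs.
    header, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line[1:], []
            elif line:
                seq.append(line)
    if header is not None:
        yield header, "".join(seq)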
The sequence information can also be accessed via client software for personal computers.
History
The Descriptions of Plant Viruses (DPVs) were first published by the Association of Applied Biologists in 1970 as a series of leaflets, each one written by an expert describing a particular plant virus. In 1998 all 354 DPVs published on paper were scanned, converted into an electronic format in a database, and distributed on CD-ROM. In 2001 the descriptions were made available on the new DPVweb site, providing open access to the now 400+ DPVs (currently 415) as well as taxonomic and sequence data on all plant viruses.
Uses
DPVweb is an aid to researchers in the field of plant virology as well as an educational resource for students of virology and molecular biology.
The site provides a single point of access for all known plant virus genome sequences making it easy to collect these sequences together for further analysis and comparison. Sequence data from the DPVweb database have proved valuable for a number of projects:
survey of codon usage bias amongst all plant viruses,
two-way comparisons between comprehensive sets of sequences from the families Flexiviridae and Potyviridae that have helped inform taxonomy and clarify genus and species discrimination criteria,
a survey and verification of the polyprotein cleavage sites within the family Potyviridae.
See also
Transmission of plant viruses
References
Citation by Danish Institute of Agricultural Sciences
Citation by the John Innes Centre, United Kingdom
External links
DPVweb EDAM bioinformatics ontology
Molecular biology
Plant taxonomy
Biological databases
Viral plant pathogens and diseases
Virology
Ontology (information science)
Information science | DPVweb | [
"Chemistry",
"Biology"
] | 587 | [
"Plants",
"Bioinformatics",
"Plant taxonomy",
"Molecular biology",
"Biochemistry",
"Biological databases"
] |
8,643,466 | https://en.wikipedia.org/wiki/Protocol%20pipelining | Protocol pipelining is a technique in which multiple requests are written out to a single socket without waiting for the corresponding responses. Pipelining can be used in various application layer network protocols, like HTTP/1.1, SMTP and FTP.
The pipelining of requests results in a dramatic improvement in protocol performance, especially over high latency connections (such as satellite Internet connections), because it eliminates the round-trip delay otherwise spent waiting for each response before the next request is sent.
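As an illustration, two HTTP/1.1 requests can be written to one socket before any response is read; example.com is a placeholder host, and real servers are not guaranteed to accept pipelined requests:

import socket

host = "example.com"   # placeholder host for illustration
request = ("GET / HTTP/1.1\r\n"
           f"Host: {host}\r\n"
           "\r\n").encode()

with socket.create_connection((host, 80)) as sock:
    sock.sendall(request + request)   # second request sent before first response
    data = b""
    while b"\r\n\r\n" not in data:    # read at least the first response headers
        chunk = sock.recv(4096)
        if not chunk:
            break
        data += chunk
    print(data.decode(errors="replace"))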
See also
HTTP pipelining
References
External links
HTTP/1.1 Pipelining FAQ at mozilla.org
"Network Performance Effects of HTTP/1.1, CSS1, and PNG" at w3.org
FTP pipelining
SMTP Service Extension for Command Pipelining (STD 60)
Network protocols | Protocol pipelining | [
"Technology"
] | 154 | [
"Computing stubs",
"Computer network stubs"
] |
8,644,089 | https://en.wikipedia.org/wiki/Cassareep | Cassareep is a thick black liquid made from cassava root, which is used as a base for many sauces and especially in Guyanese pepperpot. Besides use as a flavoring and browning agent, it is commonly regarded as a food preservative although laboratory testing is inconclusive.
Production
Cassareep is made from the juice of the bitter cassava root, which is poisonous, as it contains acetone cyanohydrin, a compound which decomposes to the highly toxic hydrogen cyanide on contact with water. Hydrogen cyanide, traditionally called "prussic acid", is volatile and quickly dissipates when heated. Nevertheless, improperly cooked cassava has been blamed for a number of deaths. Indigenous peoples in Guyana reportedly made an antidote by steeping chili peppers in rum.
To make cassareep, the juice is boiled until it is reduced by half in volume, to the consistency of molasses and flavored with spices—including cloves, cinnamon, salt, sugar, and cayenne pepper. Traditionally, cassareep was boiled in a soft pot, the actual "pepper pot", which would absorb the flavors and also impart them (even if dry) to foods such as rice and chicken cooked in it.
Most cassareep is exported from Guyana. The natives of Guyana traditionally brought the product to town in bottles, and it is available on the US market in bottled form. Though the cassava root traveled from Brazil to Africa, where the majority of cassava is grown, there is no production of cassareep in Africa.
Culinary use
Cassareep is used for two distinct goals, that originate from two important aspects of the ingredient: its particular flavor, and its preservative quality.
Cassareep is essential in the preparation of pepperpot, and gives the dish its "distinctive bittersweet flavor." Cassareep can also be used as an added flavoring to dishes, "imparting upon them the richness and flavour of strong beef-soup."
A peculiar quality of cassareep, which works as an antiseptic, is that it allows food to be kept "on the back of the stove" for indefinite lengths of time, as long as additional cassareep is added every time meat is added. According to legend, Betty Mascoll of Grenada had a pepperpot that was maintained like this for more than a century. Dutch planters in Suriname reportedly had pepperpots in daily use that they kept cooking for many years, as did "businessmen's clubs" in the Caribbean.
Medical application
The antiseptic qualities of cassareep are well known—so well known, in fact, that the Reverend J.G. Wood, who published his Wanderings in South America in 1879, was criticized for not mentioning the "antiseptic properties of cassava juice (cassareep), which enables the Indian on a canoe voyage to take with him a supply of meat for several days."
In the mid- to late nineteenth century, as reports of adventures by English explorers became widely read in England, statements about cassareep and its antiseptic qualities became easily available; an early example was a publication in The Pharmaceutical Journal from 1847, and similar references can be found throughout the late nineteenth century, such as in the work of Irish naturalist and explorer Thomas Heazle Parke and in pharmaceutical and trade journals. Professor Attfield, professor of practical chemistry for the Royal Pharmaceutical Society of Great Britain, however, in the 1870 edition of the Year-book of Pharmacy, claimed that his laboratory studies proved no effectiveness whatsoever. Still, pharmaceutical journals and handbooks began to report of the possible use of cassareep, and suggested it might be helpful in the treatment of, for instance, eye afflictions such as corneal ulcers and conjunctivitis.
References
Further reading
Cassareep recipe.
Food ingredients
Guyanese cuisine | Cassareep | [
"Technology"
] | 809 | [
"Food ingredients",
"Components"
] |
8,644,283 | https://en.wikipedia.org/wiki/Membrane%20oxygenator | A membrane oxygenator is a device used to add oxygen to, and remove carbon dioxide from the blood. It can be used in two principal modes: to imitate the function of the lungs in cardiopulmonary bypass (CPB), and to oxygenate blood in longer term life support, termed extracorporeal membrane oxygenation (ECMO). A membrane oxygenator consists of a thin gas-permeable membrane separating the blood and gas flows in the CPB circuit; oxygen diffuses from the gas side into the blood, and carbon dioxide diffuses from the blood into the gas for disposal.
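To first order, the gas transfer rate across the membrane follows a simple permeation relation; the symbols here are generic illustrations, not taken from a specific source. For a membrane of area $S$, thickness $\delta$ and permeability $P$ under a partial-pressure difference $\Delta p$, the transfer rate is approximately
$Q \approx \frac{P \, S \, \Delta p}{\delta},$
which is why oxygenator designs maximize membrane area (hence hollow fibres) and minimize membrane thickness.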
History
The history of the oxygenator, or artificial lung, dates back to 1885, with the first demonstration of a disc oxygenator, on which blood was exposed to the atmosphere on rotating discs by Von Frey and Gruber. These pioneers noted the dangers of blood streaming, foaming and clotting. In the 1920s and 30s, research into developing extracorporeal oxygenation continued. Working independently, Brukhonenko in the USSR and John Heysham Gibbon in the US demonstrated the feasibility of extracorporeal oxygenation. Brukhonenko used excised dog lungs, while Gibbon used a direct-contact drum-type oxygenator, perfusing cats for up to 25 minutes in the 1930s.
Gibbon's pioneering work was rewarded in May 1953 with the first successful cardiopulmonary bypass operation. The oxygenator was of the stationary film type, in which oxygen was exposed to a film of blood as it flowed over a series of stainless steel plates.
The disadvantages of direct contact between the blood and air were well recognized, and the less traumatic membrane oxygenator was developed to overcome these. The first membrane artificial lung was demonstrated in 1955 by the group led by Willem Kolff, and in 1956 the first disposable-membrane oxygenator removed the need for time-consuming cleaning before re-use. No patent was filed, as Kolff believed that doctors should make technology available to all, without regard to profit.
The first membrane artificial lungs were composed of large flat sheets of thin silicone rubber used to separate blood and gas. Dr. Kolff recognized the need for a more compact lung design and constructed the first coiled lung design using polyethylene. However, these first designs were impractical due to high resistance and large priming volume. Inspired by Kolff's design, Theodor Kolobow designed the first successful spiral coil membrane lung in the laboratory of George Henry Alexander Clowes using a vinyl fiberglass screen to allow gas to more easily flow in the tube. For these and other innovations, including applying slight suction to form a tight seal and prevent hypobaric gas emboli, NIH was issued a patent in 1970 for the silicon rubber spiral coil membrane lung invented by Dr. Kolobow.
Kolobow, with the assistance of Dr. Warren Zapol and NIH veterinarian Joseph Price, attempted the first in vivo experiments using the spiral membrane artificial lung on canines and lambs. The team went on to invent the first artificial placenta in 1967.
The early artificial lungs used relatively impermeable polyethylene or Teflon homogeneous membranes, and it was not until more highly permeable silicone rubber membranes were introduced in the 1960s (and as hollow fibres in 1971) that the membrane oxygenator became commercially successful. The introduction of microporous hollow fibres with very low resistance to mass transfer revolutionized the design of membrane modules, as the limiting factor to oxygenator performance became the blood resistance. Current designs of oxygenator typically use an extraluminal flow regime, where the blood flows outside the gas-filled hollow fibers, for short term life support, while only the homogeneous membranes are approved for long term use.
See also
Bubble oxygenator
Extracorporeal circulation
E. Converse Peirce, made refinements to membrane oxygenator
Experiments in the Revival of Organisms
References
Dorson, W.J. and Loria, J.B., "Heart Lung Machines", in: Webster's Encyclopaedia of Medical Devices and Instrumentation, Vol. 3 (1988), Wiley, New York: 1440–1457.
Galletti, P.M., "Cardiopulmonary Bypass: A Historical Perspective", Artificial Organs 17:8 (1993), 675–686.
Gibbon, J.H., Chairman's address to the American Society for Artificial Internal Organs, Transactions of the American Society for Artificial Internal Organs, 1 (1955), 58–62.
Kolff, W.J., and Balzer R., "The Artificial Coil Lung", Transactions of the American Society for Artificial Internal Organs, 1 (1955), 39–42.
Kolff, W.J., and Effler, D.B., "Disposable Membrane Oxygenator (Heart-Lung Machine) and its use in Experimental and Clinical Surgery while the Heart is Arrested with Potassium Citrate According to the Melrose Technique", Transactions of the American Society for Artificial Internal Organs, 2 (1956), 13–17.
Kolobow, T., and Bowman, R.L., "Construction and Evaluation of an Alveolar Membrane Artificial Heart-Lung", Transactions of the American Society for Artificial Internal Organs, 9 (1963), 238–241.
Dutton, R.C., et al., "Development and Evaluation of a New Hollow Fibre Membrane Oxygenator", Transactions of the American Society for Artificial Internal Organs, 17 (1971), 331–336.
Gaylor, J.D.S., "Membrane Oxygenators: Current Developments in Design and Application", Journal of Biomedical Engineering 10 (1988), 541–547.
External links
Oxygenator summary in Cardiac Surgery in the Adult
Medical equipment
Membrane technology | Membrane oxygenator | [
"Chemistry",
"Biology"
] | 1,219 | [
"Medical technology",
"Membrane technology",
"Medical equipment",
"Separation processes"
] |
8,646,237 | https://en.wikipedia.org/wiki/Polloc%20and%20Govan%20Railway | The Polloc and Govan Railway was an early mineral railway near Glasgow in Scotland, constructed to bring coal and iron from William Dixon's collieries and ironworks to the River Clyde for onward transportation.
When the Clydesdale Junction Railway was projected in the nineteenth century, it used part of the alignment of the Polloc line to reach Glasgow from Rutherglen, and that part of the route is in use today as the main access to Glasgow Central station from the Motherwell direction.
John Dixon: first waggonway
John Dixon came from Sunderland to Glasgow and established coal pits at Knightswood and Gartnavel, in what are now the western suburbs of Glasgow. About 1750 he purchased a glassworks at Dumbarton and, to transport his coal to the works, built a wooden waggonway from the pit mouth to Yoker. The coal was loaded into barges, which went down with the ebb tide to Leven. By 1785 the glassworks was the largest in the United Kingdom, consuming 1,500 tons of coal per annum.
A newspaper correspondent wrote in 1852:
The coal from the pits of the Woodside district about the middle of the last century was mostly consumed at the glass works at Dumbarton. My informant says that there was at this time a wooden tram road commencing at the Woodside coal pits which crossed the Dumbarton Road, and extended to a quay situated on the river, nearly opposite to Renfrew, from which quay the coals were shipped by gabberts to Dumbarton. I do not think that this tram road existed in my day, but about 70 years ago, I walked on the tram road from the Little Govan Coal Works to the Coal Quay, then situated on the south banks of the river at the grounds lately of Todd and Higginbotham, and I rather think that the Dumbarton Glass Works Company were at that time interested in the Little Govan Coal Works as well as the Woodside Coal Works.
The Govan Waggonway
The Knightswood pit became exhausted and Dixon acquired mineral rights in the Little Govan estate. Between 1775 and 1778, his son William Dixon built a line from the Govan coal pits to Springfield on the south bank of the Clyde. At that time "Govan" extended to the south-east of the city; the coal pits were in the area bounded by the present-day M74, Polmadie Road and Aikenhead Road. "Springfield" was a quay on the south bank of the Clyde, immediately west of West Street, although Wherry Wharf was the actual quay used. The alignment of the waggonway was broadly south-east to north-west, skirting round the south of the built-up area of the time, and the approach to the Clyde was along what became West Street. Privately built and not requiring Parliamentary authority, this became known as the Govan Waggonway.
Dixon built it on the principle familiar to him from Tyneside, with timber cross-sleepers and timber rails, and wagons with flanged wheels were pulled by horses.
In 1810 the Glasgow, Paisley and Johnstone Canal was nearing completion, with its Glasgow termination at Port Eglinton; this faced the west side of Eglinton Street immediately south of, and opposite, the Cumberland Street junction; the area is long since built over. According to Paterson (page 207), "On 1 August 1811, William Dixon (Junior), coalmaster, bought 1,242 square yards of ground from the Corporation of Glasgow for building a tramway on which to convey coal from his Govan pits to the Ardrossan Canal basin at Port Eglinton." The main line of the waggonway was of course already long established, and this must refer to Dixon's intention to build a short connecting branch to the canal basin.
Upgrading the line
Dixon later built an ironworks a little to the west of the Govan coal pit, in the area immediately east of the point where Cathcart Road crosses the M74. From the flames issuing from the furnaces the works became known as Dixon's Blazes. The Govan coal pits had expanded with surface equipment over a wide area; the ironworks was connected to the pits by local tramways, but the coal and iron needed to be transported further afield. The Govan Waggonway, with wooden rails and horse traction, was technologically inadequate. By 1830 railways using stone block sleepers and cast iron rails were well-established technology, and Dixon commissioned Thomas Grainger and John Miller to design a conversion of his waggonway to a railway. Grainger and Miller had been responsible for several of the "coal railways" in central Scotland, notably the Monkland and Kirkintilloch Railway, opened in 1826. The track gauge was 4 ft 6 in, which Grainger and Miller had adopted on most of the other lines.
On 29 May 1830, the Polloc and Govan Railway was authorised as a public company by an act of Parliament (11 Geo. 4 & 1 Will. 4. c. lxii), with capital of £10,000 and authorised borrowing of £5,000.
At the eastern end the terminal was in lands in the ownership of the Trustees of Hutcheson's Hospital, "whereby the fair advantage which the measure was calculated to produce might be secured to the institution". Robertson also shows a short westward spur from Eglinton Street towards Shields Bridge, which he calls the "Polloc Estate branch". The total length of the lines authorised was 0.85 miles (main line) and 0.34 miles (branches), nearly 2 km in all.
The line opens
The line opened on 22 August 1840, "from Rutherglen to the Broomielaw Harbour", after two further acts of Parliament (1 & 2 Will. 4. c. lviii and 7 Will. 4 & 1 Vict. c. cxviii) authorised considerably more capital: £36,000 in share value. Cobb suggests that the 1840 opening was from Polmadie Bridge, i.e. Dixon's ironworks and coal pits, with an eastward extension to a station at Rutherglen in 1842.
The Clydesdale Junction Railway
The Caledonian Railway (CR) opened its main line from Glasgow in 1849; the route was from Townhead over the Glasgow, Garnkirk and Coatbridge Railway (GG&CR), an extended successor to the earlier Garnkirk and Glasgow Railway, which had been built as a coal line. The GG&CR had been upgraded but the route was roundabout. A shorter route between Motherwell and Glasgow had been promoted earlier; it obtained authority by an act of Parliament, the Clydesdale Junction Railway Act 1845 (8 & 9 Vict. c. clx), on 31 July 1845, and was called the Clydesdale Junction Railway. The CR made provisional arrangements to lease the Polloc and Govan line on 29 January 1845, and soon afterwards to lease the Clydesdale Junction line itself. The CR purchased the Polloc and Govan Railway on 18 August 1846, and William Dixon received 2,400 CR shares in payment. The CR upgraded the Polloc and Govan and regauged it to standard gauge, and used its alignment for part of the route: it formed an end-on junction with the line at Rutherglen. At Eglinton Street the new line diverged to the north and terminated at the Southside railway station, which was shared with the Glasgow, Barrhead and Neilston Direct Railway.
On 30 March 1849 the General Terminus opened; it was a large goods handling depot on the River Clyde, immediately to the west of the Polloc and Govan's "Broomielaw" terminal at Windmillcroft, which it superseded. The obsolete rails in West Street remained in place for another eighteen years: on 14 March 1867 an act of Parliament was obtained authorising the lifting of the part of the line in West Street down to the River Clyde.
The Clydesdale Junction Railway was absorbed by the Caledonian Railway.
Links to other lines
Clydesdale Junction Railway (end-to-end link)
General Terminus and Glasgow Harbour Railway
Notes
References
Sources
Cameron, Jim (compiler) (2006). Glasgow Central: Central to Glasgow. Boat of Garten: Strathwood.
Robertson, C. J. A. (1983). The Origins of the Scottish Railway System: 1722–1844. Edinburgh: John Donald.
Thomas, John (1971). A Regional History of the Railways of Great Britain, Volume 6, Scotland: The Lowlands and the Borders. Newton Abbot: David & Charles.
See also
Caledonian Railway
Early Scottish railway companies
Mining railways
Horse-drawn railways
Pre-grouping British railway companies
Transport in Glasgow
Railway companies established in 1830
Railway lines opened in 1840
Railway companies disestablished in 1846
Standard gauge railways in Scotland
1830 establishments in Scotland
British companies established in 1830
British companies disestablished in 1846 | Polloc and Govan Railway | [
"Engineering"
] | 1,849 | [
"Mining equipment",
"Mining railways"
] |