Morphogenesis (from the Greek morphê shape and genesis creation, literally "the generation of form") is the biological process that causes a cell , tissue or organism to develop its shape. It is one of three fundamental aspects of developmental biology along with the control of tissue growth and patterning of cellular differentiation . The process controls the organized spatial distribution of cells during the embryonic development of an organism . Morphogenesis can take place also in a mature organism, such as in the normal maintenance of tissue by stem cells or in regeneration of tissues after damage. Cancer is an example of highly abnormal and pathological tissue morphogenesis. Morphogenesis also describes the development of unicellular life forms that do not have an embryonic stage in their life cycle. Morphogenesis is essential for the evolution of new forms. Morphogenesis is a mechanical process involving forces that generate mechanical stress, strain, and movement of cells, [ 1 ] and can be induced by genetic programs according to the spatial patterning of cells within tissues. Abnormal morphogenesis is called dysmorphogenesis . Some of the earliest ideas and mathematical descriptions on how physical processes and constraints affect biological growth, and hence natural patterns such as the spirals of phyllotaxis , were written by D'Arcy Wentworth Thompson in his 1917 book On Growth and Form [ 2 ] [ 3 ] [ note 1 ] and Alan Turing in his The Chemical Basis of Morphogenesis (1952). [ 6 ] Where Thompson explained animal body shapes as being created by varying rates of growth in different directions, for instance to create the spiral shell of a snail , Turing correctly predicted a mechanism of morphogenesis, the diffusion of two different chemical signals, one activating and one deactivating growth, to set up patterns of development, decades before the formation of such patterns was observed. [ 7 ] The fuller understanding of the mechanisms involved in actual organisms required the discovery of the structure of DNA in 1953, and the development of molecular biology and biochemistry . [ citation needed ] Several types of molecules are important in morphogenesis. Morphogens are soluble molecules that can diffuse and carry signals that control cell differentiation via concentration gradients. Morphogens typically act through binding to specific protein receptors . An important class of molecules involved in morphogenesis are transcription factor proteins that determine the fate of cells by interacting with DNA . These can be coded for by master regulatory genes , and either activate or deactivate the transcription of other genes; in turn, these secondary gene products can regulate the expression of still other genes in a regulatory cascade of gene regulatory networks . At the end of this cascade are classes of molecules that control cellular behaviors such as cell migration , or, more generally, their properties, such as cell adhesion or cell contractility. For example, during gastrulation , clumps of stem cells switch off their cell-to-cell adhesion, become migratory, and take up new positions within an embryo where they again activate specific cell adhesion proteins and form new tissues and organs. Developmental signaling pathways implicated in morphogenesis include Wnt , Hedgehog , and ephrins . [ 8 ] At a tissue level, ignoring the means of control, morphogenesis arises because of cellular proliferation and motility. 
[ 9 ] Morphogenesis also involves changes in the cellular structure [ 10 ] or how cells interact in tissues. These changes can result in tissue elongation, thinning, folding, invasion or separation of one tissue into distinct layers. The latter case is often referred as cell sorting . Cell "sorting out" consists of cells moving so as to sort into clusters that maximize contact between cells of the same type. The ability of cells to do this has been proposed to arise from differential cell adhesion by Malcolm Steinberg through his differential adhesion hypothesis . Tissue separation can also occur via more dramatic cellular differentiation events during which epithelial cells become mesenchymal (see Epithelial–mesenchymal transition ). Mesenchymal cells typically leave the epithelial tissue as a consequence of changes in cell adhesive and contractile properties. Following epithelial-mesenchymal transition, cells can migrate away from an epithelium and then associate with other similar cells in a new location. [ 11 ] In plants, cellular morphogenesis is tightly linked to the chemical composition and the mechanical properties of the cell wall. [ 12 ] [ 13 ] During embryonic development, cells are restricted to different layers due to differential affinities. One of the ways this can occur is when cells share the same cell-to- cell adhesion molecules . For instance, homotypic cell adhesion can maintain boundaries between groups of cells that have different adhesion molecules. Furthermore, cells can sort based upon differences in adhesion between the cells, so even two populations of cells with different levels of the same adhesion molecule can sort out. In cell culture cells that have the strongest adhesion move to the center of a mixed aggregates of cells. Moreover, cell-cell adhesion is often modulated by cell contractility, which can exert forces on the cell-cell contacts so that two cell populations with equal levels of the same adhesion molecule can sort out. The molecules responsible for adhesion are called cell adhesion molecules (CAMs). Several types of cell adhesion molecules are known and one major class of these molecules are cadherins . There are dozens of different cadherins that are expressed on different cell types. Cadherins bind to other cadherins in a like-to-like manner: E-cadherin (found on many epithelial cells) binds preferentially to other E-cadherin molecules. Mesenchymal cells usually express other cadherin types such as N-cadherin. [ 14 ] [ 15 ] The extracellular matrix (ECM) is involved in keeping tissues separated, providing structural support or providing a structure for cells to migrate on. Collagen , laminin , and fibronectin are major ECM molecules that are secreted and assembled into sheets, fibers, and gels. Multisubunit transmembrane receptors called integrins are used to bind to the ECM. Integrins bind extracellularly to fibronectin, laminin, or other ECM components, and intracellularly to microfilament -binding proteins α-actinin and talin to link the cytoskeleton with the outside. Integrins also serve as receptors to trigger signal transduction cascades when binding to the ECM. A well-studied example of morphogenesis that involves ECM is mammary gland ductal branching. [ 16 ] [ 17 ] Tissues can change their shape and separate into distinct layers via cell contractility. Just as in muscle cells, myosin can contract different parts of the cytoplasm to change its shape or structure. 
Myosin-driven contractility in embryonic tissue morphogenesis is seen during the separation of germ layers in the model organisms Caenorhabditis elegans , Drosophila and zebrafish . There are often periodic pulses of contraction in embryonic morphogenesis. A model called the cell state splitter involves alternating cell contraction and expansion, initiated by a bistable organelle at the apical end of each cell. The organelle consists of microtubules and microfilaments in mechanical opposition. It responds to local mechanical perturbations caused by morphogenetic movements. These then trigger traveling embryonic differentiation waves of contraction or expansion over presumptive tissues that determine cell type and is followed by cell differentiation. The cell state splitter was first proposed to explain neural plate morphogenesis during gastrulation of the axolotl [ 18 ] and the model was later generalized to all of morphogenesis. [ 19 ] [ 20 ] In the development of the lung a bronchus branches into bronchioles forming the respiratory tree . [ 21 ] The branching is a result of the tip of each bronchiolar tube bifurcating, and the process of branching morphogenesis forms the bronchi, bronchioles, and ultimately the alveoli. [ 22 ] Branching morphogenesis is also evident in the ductal formation of the mammary gland . [ 23 ] [ 17 ] Primitive duct formation begins in development , but the branching formation of the duct system begins later in response to estrogen during puberty and is further refined in line with mammary gland development. [ 17 ] [ 24 ] [ 25 ] Cancer can result from disruption of normal morphogenesis, including both tumor formation and tumor metastasis . [ 26 ] Mitochondrial dysfunction can result in increased cancer risk due to disturbed morphogen signaling. [ 26 ] During assembly of the bacteriophage (phage) T4 virion , the morphogenetic proteins encoded by the phage genes interact with each other in a characteristic sequence. Maintaining an appropriate balance in the amounts of each of these proteins produced during viral infection appears to be critical for normal phage T4 morphogenesis. [ 27 ] Phage T4 encoded proteins that determine virion structure include major structural components, minor structural components and non-structural proteins that catalyze specific steps in the morphogenesis sequence. [ 28 ] Phage T4 morphogenesis is divided into three independent pathways: the head, the tail and the long tail fibres as detailed by Yap and Rossman. [ 29 ] An approach to model morphogenesis in computer science or mathematics can be traced to Alan Turing 's 1952 paper, "The chemical basis of morphogenesis", [ 30 ] a model now known as the Turing pattern . Another famous model is the so-called French flag model , developed in the sixties. [ 31 ] Improvements in computer performance in the twenty-first century enabled the simulation of relatively complex morphogenesis models. In 2020, such a model was proposed where cell growth and differentiation is that of a cellular automaton with parametrized rules. As the rules' parameters are differentiable, they can be trained with gradient descent , a technique which has been highly optimized in recent years due to its use in machine learning . [ 32 ] This model was limited to the generation of pictures, and is thus bi-dimensional. 
A model similar to the one described above was subsequently extended to generate three-dimensional structures and was demonstrated in the video game Minecraft , whose block-based nature makes it particularly well suited to the simulation of 3D cellular automata. [ 33 ]
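As a concrete illustration of the two-signal mechanism Turing proposed (one diffusing chemical effectively promoting local pattern growth, the other suppressing it), the sketch below simulates a reaction-diffusion system with Gray-Scott kinetics in NumPy. It is a minimal illustration only; the grid size, rate constants and seeding are assumptions made for this example and are not taken from any of the cited works.

```python
# Minimal reaction-diffusion (Turing-pattern) simulation with Gray-Scott kinetics.
import numpy as np

def laplacian(Z):
    # Five-point stencil with periodic boundary conditions.
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def simulate(n=128, steps=5000, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    U = np.ones((n, n))     # "substrate" chemical
    V = np.zeros((n, n))    # "activator" chemical
    c = n // 2              # seed a small square perturbation in the centre
    U[c - 5:c + 5, c - 5:c + 5] = 0.25
    V[c - 5:c + 5, c - 5:c + 5] = 0.50
    for _ in range(steps):
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + f * (1 - U)
        V += Dv * laplacian(V) + uvv - (f + k) * V
    return U, V

if __name__ == "__main__":
    U, V = simulate()
    # Spots, stripes or labyrinths emerge as spatial structure in V,
    # depending on the feed rate f and kill rate k.
    print("V:", float(V.min()), "to", float(V.max()))
```

Varying only the two rate parameters f and k switches the outcome between spots, stripes and labyrinth-like patterns, which is the essence of how a single pair of diffusing signals can generate qualitatively different patterns of development.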
https://en.wikipedia.org/wiki/Morphogenesis
In the developmental biology of the early twentieth century, a morphogenetic field is a research hypothesis and a discrete region of cells in an embryo . [ 1 ] [ 2 ] The term morphogenetic field conceptualizes the scientific experimental finding that an embryonic group of cells , for example a forelimb bud, could be transplanted to another part of the embryo and in ongoing individual development still give rise to a forelimb at an odd place of the organism. And it describes a group of embryonic cells able to respond to localized biochemical signals − called field − leading to the genesis of morphological structures: tissues , organs , or parts of an organism . [ 3 ] [ 4 ] The spatial and temporal extents of such a region of embryonic stem cells are dynamic, and within it is a collection of interacting cells out of which a particular tissue, organ, or body part is formed. [ 5 ] As a group, the cells within a morphogenetic field in an embryo are constrained: thus, cells in a limb field will become a limb tissue, those in a heart field will become heart tissue. [ 6 ] Individual cells within a morphogenetic field in an embryo are flexible: thus, cells in a cardiac field can be redirected via cell-to-cell signaling to replace damaged or missing cells. [ 6 ] The Imaginal disc in larvae is an example of a discrete morphogenetic field region of cells in an insect embryo. [ 7 ] The concept of the morphogenetic field was first introduced in 1910 by Alexander G. Gurwitsch . [ 1 ] Experimental support was provided by Ross Granville Harrison 's experiments transplanting fragments of a newt embryo into different locations. [ 8 ] Harrison was able to identify "fields" of cells producing organs such as limbs, tail and gills and to show that these fields could be fragmented or have undifferentiated cells added and a complete normal final structure would still result. It was thus considered that it was the "field" of cells, rather than individual cells, that were patterned for subsequent development of particular organs. The field concept was developed further by Harrison's friend Hans Spemann , and then by Paul Weiss and others. [ 5 ] The concept was similar to the meaning of the term entelechy of vitalists like Hans Adolf Eduard Driesch (1867–1941). Thus the field hypothesis of ontogeny became fundamental in the early twentieth century to the study of embryological development. By the 1930s, however, the work of geneticists, especially Thomas Hunt Morgan , revealed the importance of chromosomes and genes for controlling development, and the rise of the new synthesis in evolutionary biology lessened the perceived importance of the field hypothesis . Morgan was a particularly harsh critic of fields since the gene and the field were perceived as competitors for recognition as the basic unit of ontogeny . [ 5 ] With the discovery and mapping of master control genes, such as the homeobox genes which were first discovered in 1983, the pre-eminence of genes seemed assured. In the late twentieth century the field concept of ontogenesis was "rediscovered" as a useful part of developmental biology. It was found, for example, that different mutations could cause the same malformations, suggesting that the mutations were affecting a complex of structures as a unit, a unit that might correspond to the field of early 20th century embryology. In 1996 Scott F. Gilbert proposed that the morphogenetic field was a middle ground between genes and evolution. 
[ 5 ] That is, genes act upon fields , which then act upon the developing organism. [ 5 ] Then in 2000 Jessica Bolker described morphogenetic fields not merely as incipient structures or organs, but as dynamic entities with their own localized development processes, which are central to the emerging field of Evolutionary developmental biology ("evo-devo"). [ 9 ] In 2005, Sean B. Carroll and colleagues mention morphogenetic fields merely as a concept proposed by early embryologists to explain the finding that a forelimb bud could be transplanted and still give rise to a forelimb; they define "field" simply as "a discrete region" in an embryo. [ 2 ]
https://en.wikipedia.org/wiki/Morphogenetic_field
Morphogenetic robotics [ 1 ] generally refers to methodologies that address challenges in robotics inspired by biological morphogenesis . [ 2 ] [ 3 ] This field overlaps with morphogenetic engineering, which is situated within amorphous computation . Morphogenetic robotics is related to, but differs from, epigenetic robotics . The main difference between morphogenetic robotics and epigenetic robotics is that the former focuses on self-organization , self-reconfiguration, self-assembly and self-adaptive control of robots using genetic and cellular mechanisms inspired by biological early morphogenesis ( activity-independent development ), during which the body and controller of the organisms are developed simultaneously, whereas the latter emphasizes the development of robots' cognitive capabilities, such as language, emotion and social skills, through experience during the lifetime ( activity-dependent development ). Morphogenetic robotics is closely connected to developmental biology and systems biology , whilst epigenetic robotics is related to developmental cognitive neuroscience , which emerged from cognitive science , developmental psychology and neuroscience . Morphogenetic robotics includes, but is not limited to, the following main topics:
https://en.wikipedia.org/wiki/Morphogenetic_robotics
Morphological antialiasing ( MLAA ) is a technique for minimizing the distortion artifacts known as aliasing when representing a high-resolution image at a lower resolution. Unlike multisample anti-aliasing (MSAA), which does not work for deferred rendering , MLAA is a post-processing filter that detects borders in the resulting image and then finds specific patterns in them. Anti-aliasing is achieved by blending pixels in these borders, according to the pattern they belong to and their position within the pattern. [ 1 ] [ 2 ] [ 3 ] Enhanced subpixel morphological antialiasing, or SMAA, is an image-based, GPU-accelerated implementation of MLAA [ 4 ] developed by Universidad de Zaragoza and Crytek . [ 5 ]
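As a rough illustration of the detect-then-blend idea (not the actual MLAA or SMAA algorithm, which classifies edge patterns such as L-, Z- and U-shapes and computes per-pixel coverage weights), a drastically simplified post-process filter might look like the sketch below; the threshold and blend factor are arbitrary assumptions.

```python
# Toy post-process anti-aliasing: find luma discontinuities, then blend across them.
# Real MLAA/SMAA do far more: pattern classification and area-based blending weights.
import numpy as np

def luma(img):
    # img: float array of shape (H, W, 3) with values in [0, 1].
    return img @ np.array([0.299, 0.587, 0.114])

def simple_post_aa(img, threshold=0.1, blend=0.25):
    L = luma(img)
    out = img.copy()
    vert = np.abs(np.diff(L, axis=0)) > threshold   # edge between rows y and y+1
    horiz = np.abs(np.diff(L, axis=1)) > threshold  # edge between columns x and x+1
    ys, xs = np.nonzero(vert)
    out[ys, xs] = (1 - blend) * img[ys, xs] + blend * img[ys + 1, xs]
    out[ys + 1, xs] = (1 - blend) * img[ys + 1, xs] + blend * img[ys, xs]
    ys, xs = np.nonzero(horiz)
    out[ys, xs] = (1 - blend) * img[ys, xs] + blend * img[ys, xs + 1]
    out[ys, xs + 1] = (1 - blend) * img[ys, xs + 1] + blend * img[ys, xs]
    return out
```

Because it operates purely on the final rendered image, a filter of this general kind (like MLAA proper) remains applicable to deferred rendering pipelines where MSAA is not.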
https://en.wikipedia.org/wiki/Morphological_antialiasing
Morphology in architecture is the study of the evolution of form within the built environment. Often used in reference to a particular vernacular language of building, this concept describes changes in the formal syntax of buildings and cities as their relationship to people evolves and changes. Often morphology describes processes, such as in the evolution of a design concept from first conception to production, but can also be understood as the categorical study in the change of buildings and their use from a historical perspective. Similar to genres of music, morphology concertizes 'movements' and arrives at definitions of architectural 'styles' or typologies. Paradoxically morphology can also be understood to be the qualities of a built space which are style-less or irreducible in quality. Some ideological influences on morphology which are usually cultural or philosophical in origin include: Indigenous architecture , Classical architecture , Baroque architecture , Modernism , Postmodernism , Deconstructionism , Brutalism , and Futurism . Recent contemporary advances in analytic and cross platform tools such as 3d printing , virtual reality , and building information modeling make the current contemporary typology formally difficult to pinpoint into one holistic definition. Advances in the study of Architectural (formal) morphology have the potential to influence or foster new fields of study in the realms of the arts, cognitive science, psychology, behavioral science, neurology, mapping, linguistics, and other as yet unknown cultural spatial practices or studies based upon social and environmental knowledge games. [ 1 ] Often architectural morphologies are reflexive or indicative of political influences of their time and perhaps more importantly, place. Other influences on the morphological form of the urban environment include architects, builders, developers, and the social demographic of the particular location [ 2 ] Urban morphology provides an understanding of the form, establishment and reshaping processes, spatial structure and character of human settlements through an analysis of historical development processes and the constituent parts that compose settlements. Urban morphology is used as a method of determining transformation processes of urban fabrics by which buildings (both residential and commercial), architects, streets and monuments act as elements of a multidimensional form in a dynamic relationship where built structures shape and are shaped by the open space around them. [ citation needed ] Urban places act as evolutionary open systems that are continually shaped and transformed by social and political events and by the market forces. [ 3 ] Urban morphology as an organised scientific study began formally in the 19th century [ citation needed ] due to the expansion of reliable topographic maps and reproducible plans. [ 4 ] However, architecture has existed far longer, with the first surviving written work on the subject known as De architectura , written by Roman architect and engineer Marcus Vitruvius Pollio between 30 and 15 BCE. [ 5 ] Here, Vitruvius detailed the three principals of architecture firmitas, utilitas, venustas. Translated to modern English as durability, utility and beauty. Archaeologists have studied the ruins of ancient cities such as Mesopotamia , Egypt and Minoan to show evidence that urban planning dates back to the Bronze Age , through visible patterns of paved streets lined in right angles. 
Originally, town planning was used as a mechanism for armed defence. The decline of the Roman Empire in 395 AD saw cities in Europe expand incoherently without formal urban planning. The Renaissance era saw urban hubs expand with enlarged extensions to allow for advancements that took place during the industrial age . It was then that urban planning became a formalised study, used to provide citizens with healthier working conditions. The theory of morphology extends across many different disciplines, including architectural theory branching from the philosophy of art, engineering, linguistics, culture and sociology. [ 6 ] However, advances in information technology, and the development of a globalised 24-hour economy through the three leading world cities New York , London and Tokyo and their smaller counterparts, have profoundly influenced the world's urban systems. [ 7 ] Currently, there are more than 300 metropolitan regions that house more than one million people, and with steady global population increases metropolitan regions will continue to grow in size. [ 8 ] Rapid urbanisation commonly causes challenges within urban places, for example, traffic jams, high cost of living, lack of green space , biodiversity loss, air pollution and other anthropogenic environmental effects. [ 9 ] As a result, urban planners, geographers and architects have put forth numerous theoretical models with the aim of improving understanding of the functionality, aesthetic nature and environmental sustainability of cities. [ 9 ] However, it is widely accepted that there are four theoretical explanations of the morphological pattern of a city. Although many models of assorted complexity have been developed in the geography and urban planning fields, one which retains its widely used and early status is the concentric zone model. [ 10 ] The concentric zone model provided a stylized description of the urban form, derived from Ernest Burgess 's 1920s idea: the bid-rent curve . This implied that the core central zone of a city becomes used as the Central Business District , surrounded in turn by a zone of transition between areas of profession and that of working-class suburbs. [ 11 ] This is then followed by middle-class suburbs, and finally, situated on the outside ring of the city, is a zone of commuters. Critics have argued that this is a normative model that presents an idealistic and only hypothetical model of a city, when in fact current land use is a part of a more complicated three-dimensional system. [ 12 ] This model led to the acceptance of modern-day terms such as “inner city” and the “suburban ring”. [ 13 ] The sectoral model, proposed by economist Homer Hoyt in 1939, explores the notion that the development of cities is centralised around transportation lines, where wedges of residential land are concentrated by social class . Although this model acts as a highly generalised theory, it does not equally represent urban morphologies of the developing world . Leaders in the field state that the twenty-first-century global economy's tendency towards capital reduces the demand for labour, in turn causing growing unemployment. [ 13 ] This leads to the notion that innovations in transport play a significant role in the accessibility of a city's surface. [ 13 ] For example, the wide implementation of suburban railway networks and motor buses was expected to produce proportionately higher increases in periphery land prices.
Contrastingly, it is expected that in periods of low innovation within the transport sector, land values would have been proportionately higher closer to the city centre. [ 13 ] However, opponents of this theory point out that transport innovations are typically closely associated with increases in building and development activity, which in itself is known to raise land values, and thus it is difficult to assess the relative strengths and underlying influences. [ 15 ] Thus, the commercial, industrial and residential components within this model's urban land-use pattern are subject to locational analysis and to their attempt to maximise profits by substituting higher rent for increased transport costs from the city centre, and vice versa. [ 15 ] This acts as an alternative to the ‘original’ urban morphology model by Burgess, which proposed the city as a concentric form that kept each land-use activity within its proposed ring. However, in reality it is often the addition of varying transport options within differing locations that allows for the formation of individual sectors for each land-use task. [ 15 ] The multiple nuclei model, developed by Harris and Ullman in their 1945 work “The Nature of Cities”, [ 16 ] further expands on previous morphological theories to provide a model which encompasses the issues that arise over a large territorial expanse and a growing population. Where previous theories held that a city must have only one Central Business District (CBD), the multiple nuclei model has more than one CBD. Rather, this model postulates that there are a number of different growth nuclei, each of which influences the distribution of people, activities and land uses within the area. [ 17 ] Each nucleus is a highly specialised subset of the city's urban form and activities, as each area is marketed to the demographic within the area, from retail, manufacturing, education and health to residential areas. [ 17 ] This allows for highly diversified economic functions over an extensive geographical area. The result is a kind of urban mosaic in which the city's spatial geometry is made up of differing nuclei that are no longer organised around a single centre, although the CBD still acts as a functional and important nucleus of the city. One commonly cited weakness of the multiple nuclei model is its premise that the boundaries between nuclei zones are distinct borders, which is rarely the case in practice. [ 18 ] This model requires the use of four basic principles. [ 17 ] One real-world example of this model is the multiple nuclei form of New York City, New York. The cluster of high-end retail institutions within the city led to the naming of New York's “ diamond district ”, which is one specialised subset of the city's economy and function. [ 19 ] It is argued that urban structure develops irrespective of attempts to control its development, whether through government legislation or by developers who attempt to control the shape of urban structure. [ 20 ] Each agent within the urban landscape – developers, architects, builders, urban planners, and politicians – attempts to pursue their own goals for both the functional and aesthetic qualities of land use. Initially, an action of change is set in motion by a developer. It is argued that a significant challenge of the twenty-first century will be to establish effective links between urban morphology and the efficiency and resilience of urban places in the future.
Urban resilience refers to the ability of an urban place to adapt, grow and evolve when it is placed under stress. Most climate scientists agree that warmer Earth surface temperatures increase the frequency and severity of natural disasters, with these events becoming five times as common in recent years. [ 21 ] With this, urban microclimates have endured extreme changes, largely due to the combination of global warming and rapid urbanisation. [ 22 ] It is known that nearly a quarter of the world's population resides in areas where heat exposure is rising dramatically, in some cases to the point where habitability is questioned. [ 21 ] Thus, in order to maintain environmental sustainability into the future, adaptation of housing and the wider urban fabric is deemed essential. One method for this is the retrofitting of housing, using advancements in engineering to provide innovation and increase the suitability of architecture.
https://en.wikipedia.org/wiki/Morphology_(architecture_and_engineering)
Morphology (from Ancient Greek μορφή (morphḗ) "form", and λόγος (lógos) "word, study, research") is the study of the form and structure of organisms and their specific structural features. [ 1 ] This includes aspects of the outward appearance (shape, structure, color, pattern, size), as well as the form and structure of internal parts like bones and organs , i.e., anatomy . This is in contrast to physiology , which deals primarily with function. Morphology is a branch of life science dealing with the study of the overall structure of an organism or taxon and its component parts. The etymology of the word "morphology" is from the Ancient Greek μορφή ( morphḗ ), meaning "form", and λόγος ( lógos ), meaning "word, study, research". [ 2 ] [ 3 ] While the concept of form in biology, opposed to function , dates back to Aristotle (see Aristotle's biology ), the field of morphology was developed by Johann Wolfgang von Goethe (1790) and independently by the German anatomist and physiologist Karl Friedrich Burdach (1800). [ 4 ] Among other important theorists of morphology are Lorenz Oken , Georges Cuvier , Étienne Geoffroy Saint-Hilaire , Richard Owen , Carl Gegenbaur and Ernst Haeckel . [ 5 ] [ 6 ] In 1830, Cuvier and Saint-Hilaire engaged in a famous debate , which is said to exemplify the two major deviations in biological thinking at the time – whether animal structure was due to function or evolution. [ 7 ] Most taxa differ morphologically from other taxa. Typically, closely related taxa differ much less than more distantly related ones, but there are exceptions to this. Cryptic species are species which look very similar, or perhaps even outwardly identical, but are reproductively isolated. Conversely, sometimes unrelated taxa acquire a similar appearance as a result of convergent evolution or even mimicry . In addition, there can be morphological differences within a species, such as in Apoica flavissima where queens are significantly smaller than workers. A further problem with relying on morphological data is that what may appear morphologically to be two distinct species may in fact be shown by DNA analysis to be a single species. The significance of these differences can be examined through the use of allometric engineering in which one or both species are manipulated to phenocopy the other species. A step relevant to the evaluation of morphology between traits/features within species, includes an assessment of the terms: homology and homoplasy . Homology between features indicates that those features have been derived from a common ancestor. [ 10 ] Alternatively, homoplasy between features describes those that can resemble each other, but derive independently via parallel or convergent evolution . [ 11 ] The invention and development of microscopy enabled the observation of 3-D cell morphology with both high spatial and temporal resolution. The dynamic processes of this cell morphology which are controlled by a complex system play an important role in varied important biological processes, such as immune and invasive responses. [ 12 ] [ 13 ]
https://en.wikipedia.org/wiki/Morphology_(biology)
Morphometrics (from Greek μορΦή morphe , "shape, form", and -μετρία metria , "measurement") or morphometry [ 5 ] refers to the quantitative analysis of form , a concept that encompasses size and shape. Morphometric analyses are commonly performed on organisms, and are useful in analyzing their fossil record, the impact of mutations on shape, developmental changes in form, covariances between ecological factors and shape, as well for estimating quantitative-genetic parameters of shape. Morphometrics can be used to quantify a trait of evolutionary significance, and by detecting changes in the shape, deduce something of their ontogeny , function or evolutionary relationships. A major objective of morphometrics is to statistically test hypotheses about the factors that affect shape. "Morphometrics", in the broader sense, is also used to precisely locate certain areas of organs such as the brain, [ 6 ] [ 7 ] and in describing the shapes of other things. Three general approaches to form are usually distinguished: traditional morphometrics, landmark-based morphometrics and outline-based morphometrics. Traditional morphometrics analyzes lengths, widths, masses, angles, ratios and areas. [ 8 ] In general, traditional morphometric data are measurements of size. A drawback of using many measurements of size is that most will be highly correlated; as a result, there are few independent variables despite the many measurements. For instance, tibia length will vary with femur length and also with humerus and ulna length and even with measurements of the head. Traditional morphometric data are nonetheless useful when either absolute or relative sizes are of particular interest, such as in studies of growth. These data are also useful when size measurements are of theoretical importance such as body mass and limb cross-sectional area and length in studies of functional morphology. However, these measurements have one important limitation: they contain little information about the spatial distribution of shape changes across the organism. They are also useful when determining the extent to which certain pollutants have affected an individual. These indices include the hepatosomatic index, gonadosomatic index and also the condition factors (shakumbila, 2014). In landmark-based geometric morphometrics, the spatial information missing from traditional morphometrics is contained in the data, because the data are coordinates of landmarks : discrete anatomical loci that are arguably homologous in all individuals in the analysis (i.e. they can be regarded as the "same" point in each specimens in the study). For example, where two specific sutures intersect is a landmark, as are intersections between veins on an insect wing or leaf, or foramina , small holes through which veins and blood vessels pass. Landmark-based studies have traditionally analyzed 2D data, but with the increasing availability of 3D imaging techniques, 3D analyses are becoming more feasible even for small structures such as teeth. [ 9 ] Finding enough landmarks to provide a comprehensive description of shape can be difficult when working with fossils or easily damaged specimens. That is because all landmarks must be present in all specimens, although coordinates of missing landmarks can be estimated. The data for each individual consists of a configuration of landmarks. There are three recognized categories of landmarks. [ 10 ] Type 1 landmarks are defined locally, i.e. 
in terms of structures close to that point; for example, an intersection between three sutures, or intersections between veins on an insect wing are locally defined and surrounded by tissue on all sides. Type 3 landmarks , in contrast, are defined in terms of points far away from the landmark, and are often defined in terms of a point "furthest away" from another point. Type 2 landmarks are intermediate; this category includes points such as the tip of a structure, or local minima and maxima of curvature. They are defined in terms of local features, but they are not surrounded on all sides. In addition to landmarks, there are semilandmarks , points whose position along a curve is arbitrary but which provide information about curvature in two [ 11 ] or three dimensions. [ 12 ] Shape analysis begins by removing the information that is not about shape. By definition, shape is not altered by translation, scaling or rotation. [ 13 ] Thus, to compare shapes, the non-shape information is removed from the coordinates of landmarks. There is more than one way to do these three operations. One method is to fix the coordinates of two points to (0,0) and (0,1), which are the two ends of a baseline. In one step, the shapes are translated to the same position (the same two coordinates are fixed to those values), the shapes are scaled (to unit baseline length) and the shapes are rotated. [ 10 ] An alternative, and preferred, method is Procrustes superimposition . This method translates the centroid of the shapes to (0,0); the x coordinate of the centroid is the average of the x coordinates of the landmarks, and the y coordinate of the centroid is the average of the y coordinates. Shapes are scaled to unit centroid size, which is the square root of the summed squared distances of each landmark to the centroid. The configuration is rotated to minimize the deviation between it and a reference, typically the mean shape. In the case of semilandmarks, variation in position along the curve is also removed. Because shape space is curved, analyses are done by projecting shapes onto a space tangent to shape space. Within the tangent space, conventional multivariate statistical methods such as multivariate analysis of variance and multivariate regression can be used to test statistical hypotheses about shape. Procrustes-based analyses have some limitations. One is that the Procrustes superimposition uses a least-squares criterion to find the optimal rotation; consequently, variation that is localized to a single landmark will be smeared out across many. This is called the 'Pinocchio effect'. Another is that the superimposition may itself impose a pattern of covariation on the landmarks. [ 14 ] [ 15 ] Additionally, any information that cannot be captured by landmarks and semilandmarks cannot be analyzed, including classical measurements like "greatest skull breadth". Moreover, there are criticisms of Procrustes-based methods that motivate an alternative approach to analyzing landmark data. Diffeomorphometry [ 16 ] focuses on the comparison of shapes and forms with a metric structure based on diffeomorphisms, and is central to the field of computational anatomy . [ 17 ]
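Before turning to diffeomorphic approaches, the generalized Procrustes superimposition described above can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions (2-D landmarks, a fixed number of alignment iterations, no projection to tangent space); the function names are invented for the example.

```python
# Minimal sketch of generalized Procrustes superimposition for 2-D landmark data.
import numpy as np

def center_and_scale(X):
    # X: (k, 2) array of k landmark coordinates for one specimen.
    X = X - X.mean(axis=0)              # translate the centroid to (0, 0)
    size = np.sqrt((X ** 2).sum())      # centroid size
    return X / size                     # scale to unit centroid size

def rotate_onto(X, ref):
    # Rotation R minimizing ||X R - ref|| (orthogonal Procrustes), via SVD.
    U, _, Vt = np.linalg.svd(X.T @ ref)
    R = U @ Vt
    if np.linalg.det(R) < 0:            # keep a pure rotation, no reflection
        Vt[-1, :] *= -1
        R = U @ Vt
    return X @ R

def generalized_procrustes(shapes, iterations=10):
    # shapes: list of (k, 2) landmark configurations, one per specimen.
    shapes = [center_and_scale(s) for s in shapes]
    ref = shapes[0]
    for _ in range(iterations):
        shapes = [rotate_onto(s, ref) for s in shapes]
        ref = center_and_scale(np.mean(shapes, axis=0))  # update the mean shape
    return shapes, ref
```

After such a superimposition, the remaining variation among the aligned coordinates is approximately pure shape variation, which is what the multivariate analyses mentioned above operate on.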
Diffeomorphic registration, [ 18 ] introduced in the 90s, is now an important player: existing code bases organized around ANTS, [ 19 ] DARTEL, [ 20 ] DEMONS, [ 21 ] LDDMM [ 22 ] and StationaryLDDMM [ 23 ] are examples of actively used computational codes for constructing correspondences between coordinate systems based on sparse features and dense images. Voxel-based morphometry (VBM) is an important technology built on many of these principles. Methods based on diffeomorphic flows are used in a number of these settings. For example, deformations could be diffeomorphisms of the ambient space, resulting in the LDDMM ( Large Deformation Diffeomorphic Metric Mapping ) framework for shape comparison. [ 24 ] Defined on such deformations is the right-invariant metric of computational anatomy, which generalizes the metric of incompressible Eulerian flows to include the Sobolev norm, ensuring smoothness of the flows; [ 25 ] metrics have also now been defined that are associated with Hamiltonian controls of diffeomorphic flows. [ 26 ] Outline analysis is another approach to analyzing shape. What distinguishes outline analysis is that coefficients of mathematical functions are fitted to points sampled along the outline. There are a number of ways of quantifying an outline. Older techniques such as the "fit to a polynomial curve" [ 27 ] and principal components quantitative analysis [ 28 ] have been superseded by the two main modern approaches: eigenshape analysis [ 29 ] and elliptic Fourier analysis (EFA), [ 30 ] using hand- or computer-traced outlines. The former involves fitting a preset number of semilandmarks at equal intervals around the outline of a shape, recording the deviation of each step from semilandmark to semilandmark from what the angle of that step would be were the object a simple circle. [ 31 ] The latter defines the outline as the sum of the minimum number of ellipses required to mimic the shape. [ 32 ] Both methods have their weaknesses; the most dangerous (and easily overcome) is their susceptibility to noise in the outline. [ 33 ] Likewise, neither compares homologous points, and global change is always given more weight than local variation (which may have large biological consequences). Eigenshape analysis requires an equivalent starting point to be set for each specimen, which can be a source of error. EFA also suffers from redundancy in that not all variables are independent. [ 33 ] On the other hand, it is possible to apply these methods to complex curves without having to define a centroid; this makes removing the effect of location, size and rotation much simpler. [ 33 ] The perceived failings of outline morphometrics are that it does not compare points of a homologous origin, and that it oversimplifies complex shapes by restricting itself to considering the outline and not internal changes. Also, since it works by approximating the outline by a series of ellipses, it deals poorly with pointed shapes. [ 34 ] One criticism of outline-based methods is that they disregard homology – a famous example of this disregard being the ability of outline-based methods to compare a scapula to a potato chip. [ 35 ] Such a comparison would not be possible if the data were restricted to biologically homologous points. An argument against that critique is that, if landmark approaches to morphometrics can be used to test biological hypotheses in the absence of homology data, it is inappropriate to fault outline-based approaches for enabling the same types of studies.
[ 36 ] Multivariate statistical methods can be used to test statistical hypotheses about factors that affect shape and to visualize their effects. To visualize the patterns of variation in the data, the data need to be reduced to a comprehensible (low-dimensional) form. Principal component analysis (PCA) is a commonly employed tool to summarize the variation. Simply put, the technique projects as much of the overall variation as possible into a few dimensions. Each axis on a PCA plot is an eigenvector of the covariance matrix of shape variables. The first axis accounts for maximum variation in the sample, with further axes representing further ways in which the samples vary. The pattern of clustering of samples in this morphospace represents similarities and differences in shapes, which can reflect phylogenetic relationships . As well as exploring patterns of variation, multivariate statistical methods can be used to test statistical hypotheses about factors that affect shape and to visualize their effects, although PCA is not needed for this purpose unless the method requires inverting the variance-covariance matrix. Landmark data allow the difference between population means, or the deviation of an individual from its population mean, to be visualized in at least two ways. One depicts vectors at landmarks that show the magnitude and direction in which that landmark is displaced relative to the others. The second depicts the difference via thin plate splines , an interpolation function that models change between landmarks from the data of changes in coordinates of landmarks. This function produces what look like deformed grids; where regions are relatively elongated, the grid will look stretched, and where regions are relatively shortened, the grid will look compressed. D'Arcy Thompson in 1917 suggested that shapes in many different species could also be related in this way. In the case of shells and horns he gave a fairly precise analysis... But he also drew various pictures of fishes and skulls, and argued that they were related by deformations of coordinates. [ 37 ] Shape analysis is widely used in ecology and evolutionary biology to study plasticity, [ 38 ] [ 39 ] [ 40 ] evolutionary changes in shape [ 41 ] [ 42 ] [ 43 ] [ 44 ] and in evolutionary developmental biology to study the evolution of the ontogeny of shape, [ 45 ] [ 46 ] [ 47 ] as well as the developmental origins of developmental stability, canalization and modularity. [ 48 ] [ 49 ] [ 50 ] [ 51 ] [ 52 ] Many other applications of shape analysis in ecology and evolutionary biology can be found in the introductory text: Zelditch, ML; Swiderski, DL; Sheets, HD (2012). Geometric Morphometrics for Biologists: A Primer . London: Elsevier/Academic Press. In neuroimaging , the shape and structure of the brains of living creatures can be measured using magnetic resonance imaging . The most common variants are voxel-based morphometry , which measures the volume of brain structures, deformation-based morphometry , which measures differences in shape from a template brain, and surface-based morphometry , which quantifies the shape of the cerebral cortex . Histomorphometry of bone involves obtaining a bone biopsy specimen and processing it in the laboratory to obtain estimates of the proportional volumes and surfaces occupied by different components of bone.
First the bone is broken down by baths in highly concentrated ethanol and acetone . The bone is then embedded and stained so that it can be visualized/analyzed under a microscope . [ 53 ] Obtaining a bone biopsy is accomplished by using a bone biopsy trephine. [ 54 ] ^1 from Greek: "morph", meaning shape or form, and "metron", measurement
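As a companion to the superimposition sketch above, the morphospace ordination described earlier (principal component analysis of aligned shape coordinates) might look as follows; the array layout and function name are assumptions made for this example.

```python
# Sketch of a morphospace ordination: PCA of Procrustes-aligned landmark coordinates.
import numpy as np

def shape_pca(aligned_shapes):
    # aligned_shapes: (n_specimens, k_landmarks, 2) array of superimposed coordinates.
    n = aligned_shapes.shape[0]
    X = aligned_shapes.reshape(n, -1)     # flatten each shape into a row vector
    Xc = X - X.mean(axis=0)               # centre on the mean shape
    cov = Xc.T @ Xc / (n - 1)             # covariance matrix of shape variables
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]     # sort axes by explained variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    scores = Xc @ eigvecs                 # specimen positions in morphospace
    explained = eigvals / eigvals.sum()
    return scores, eigvecs, explained
```

Each column of the returned scores corresponds to one PCA axis (an eigenvector of the covariance matrix of shape variables), matching the description of morphospace plots above.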
https://en.wikipedia.org/wiki/Morphometrics
The Morphs collaboration was a coordinated study to determine the morphologies of galaxies in distant clusters and to investigate the evolution of galaxies as a function of environment and epoch. Eleven clusters were examined and a detailed ground-based and space-based study was carried out. The project was begun in 1997 based upon the earlier observations by two groups [ 1 ] [ 2 ] using data from images derived from the pre-refurbished Hubble Space Telescope . It was a collaboration of Alan Dressler and Augustus Oemler Jr., at Observatory of the Carnegie Institute of Washington [ broken anchor ] , Warrick J. Couch at the University of New South Wales , Richard Ellis at Caltech , Bianca Poggianti at the University of Padua , Amy Barger at the University of Hawaii's Institute for Astronomy , Harvey Butcher at ASTRON , and Ray M. Sharples and Ian Smail at Durham University . Results were published through 2000. The collaboration sought answers to the differences in the origins of the various galaxy types — elliptical , lenticular , and spiral . The studies found that elliptical galaxies were the oldest and formed from the violent merger of other galaxies about two to three billion years after the Big Bang . Star formation in elliptical galaxies ceased about that time. On the other hand, new stars are still forming in the spiral arms of spiral galaxies. Lenticular galaxies (SO) are intermediate between the first two. They contain structures similar to spiral arms, but devoid of the gas and new stars of the spiral galaxies. Lenticular galaxies are the prevalent form in rich galaxy clusters, which suggests that spirals may be transformed into lenticular galaxies as time progresses. The exact process may be related to high galactic density, or to the total mass in a rich cluster's central core. The Morphs collaboration found that one of the principal mechanisms of this transformation involves the interaction among spiral galaxies, as they fall toward the core of the cluster. The Inamori Magellan Areal Camera and Spectrograph (IMACS) Cluster Building Survey is the follow-on project to the Morphs collaboration. [ 3 ]
https://en.wikipedia.org/wiki/Morphs_collaboration
Morrie's law is a special trigonometric identity ,

\cos(20^\circ)\cdot\cos(40^\circ)\cdot\cos(80^\circ)=\frac{1}{8}.

Its name is due to the physicist Richard Feynman , who used to refer to the identity under that name. Feynman picked that name because he learned it during his childhood from a boy with the name Morrie Jacobs and afterwards remembered it for all of his life. [ 1 ] It is a special case of the more general identity

\prod_{k=0}^{n-1}\cos(2^{k}\alpha)=\frac{\sin(2^{n}\alpha)}{2^{n}\sin(\alpha)}

with n = 3 and α = 20°, and the fact that

\frac{\sin(160^\circ)}{8\sin(20^\circ)}=\frac{1}{8},

since \sin(160^\circ)=\sin(180^\circ-20^\circ)=\sin(20^\circ). A similar identity for the sine function also holds:

\sin(20^\circ)\cdot\sin(40^\circ)\cdot\sin(80^\circ)=\frac{\sqrt{3}}{8}.

Moreover, dividing the second identity by the first, the following identity is evident:

\tan(20^\circ)\cdot\tan(40^\circ)\cdot\tan(80^\circ)=\sqrt{3}=\tan(60^\circ).

For a geometric proof, consider a regular nonagon ABCDEFGHI with side length 1 and let M be the midpoint of AB, L the midpoint of BF and J the midpoint of BD. The inner angles of the nonagon equal 140°, and furthermore γ = ∠FBM = 80°, β = ∠DBF = 40° and α = ∠CBD = 20°. Applying the cosine definition in the right-angled triangles △BFM, △BDL and △BCJ then yields the proof for Morrie's law: [ 2 ]

\cos(80^\circ)=\frac{BM}{BF}=\frac{1/2}{BF},\qquad \cos(40^\circ)=\frac{BL}{BD}=\frac{BF/2}{BD},\qquad \cos(20^\circ)=\frac{BJ}{BC}=\frac{BD/2}{1},

so that \cos(20^\circ)\cos(40^\circ)\cos(80^\circ)=\frac{BD}{2}\cdot\frac{BF}{2\,BD}\cdot\frac{1}{2\,BF}=\frac{1}{8}.

For an algebraic proof, recall the double angle formula for the sine function,

\sin(2\alpha)=2\sin(\alpha)\cos(\alpha).

Solve for \cos(\alpha):

\cos(\alpha)=\frac{\sin(2\alpha)}{2\sin(\alpha)}.

It follows that:

\cos(2\alpha)=\frac{\sin(4\alpha)}{2\sin(2\alpha)},\qquad \cos(4\alpha)=\frac{\sin(8\alpha)}{2\sin(4\alpha)},\qquad \ldots,\qquad \cos(2^{n-1}\alpha)=\frac{\sin(2^{n}\alpha)}{2\sin(2^{n-1}\alpha)}.

Multiplying all of these expressions together yields:

\cos(\alpha)\cos(2\alpha)\cos(4\alpha)\cdots\cos(2^{n-1}\alpha)=\frac{\sin(2\alpha)}{2\sin(\alpha)}\cdot\frac{\sin(4\alpha)}{2\sin(2\alpha)}\cdot\frac{\sin(8\alpha)}{2\sin(4\alpha)}\cdots\frac{\sin(2^{n}\alpha)}{2\sin(2^{n-1}\alpha)}.

The intermediate numerators and denominators cancel, leaving only the first denominator, a power of 2 and the final numerator. Note that there are n terms on both sides of the expression. Thus,

\prod_{k=0}^{n-1}\cos(2^{k}\alpha)=\frac{\sin(2^{n}\alpha)}{2^{n}\sin(\alpha)},

which is equivalent to the generalization of Morrie's law.
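The identities above are easy to check numerically; the following short script is an illustrative verification only (it assumes Python 3.8+ for math.prod).

```python
# Numerical check of Morrie's law and its generalization.
import math

def cos_product(alpha_deg, n):
    # Left-hand side: product of cos(2^k * alpha) for k = 0 .. n-1.
    return math.prod(math.cos(math.radians(2 ** k * alpha_deg)) for k in range(n))

def general_rhs(alpha_deg, n):
    # Right-hand side: sin(2^n * alpha) / (2^n * sin(alpha)).
    a = math.radians(alpha_deg)
    return math.sin(2 ** n * a) / (2 ** n * math.sin(a))

print(cos_product(20, 3))   # 0.12499999...  (Morrie's law, = 1/8)
print(general_rhs(20, 3))   # same value, from the general identity
print(math.prod(math.sin(math.radians(x)) for x in (20, 40, 80)))  # ~0.21651 = sqrt(3)/8
```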
https://en.wikipedia.org/wiki/Morrie's_law
Morris N. Beitman was born in December 1911 in Cook County, Illinois. Started as an engineer with the US Army Signal Corps , he used his experience and abilities to form two career paths. He became a teacher in the Chicago public schools high school system as a radio instructor. Soon after he got into technical publishing as Supreme Publications. As publisher his goal was to support the radio and television servicing industry with easy to understand reference materials. He was married to Rose Rissman. [ 1 ] [ 2 ] Morris Beitman received his BS in Mathematics at Illinois Institute of Technology and later served in the military as an engineer with the US Army Signal Corps. He later became an associate member with the Institute of Radio Engineers (now called Institute of Electrical and Electronics Engineers or IEEE). His known pre- and post-military career was a teacher in the Chicago public high schools as radio instructor. [ 3 ] By the mid-1930s the growth in ownership of radio receivers in the United States spun off other business opportunities. One of them was the repair of radio and radio-phonograph sets and eventually, television. [ 4 ] Hugo Gernsback was an early publisher of repair manuals. Soon others were publishing. John F. Rider in the early 1930s began to compile complete volumes of radio servicing diagrams of many radio manufacturers called the Perpetual Troubleshooter's Manual . [ 5 ] In time these servicing manuals became quite large and contained information on radios that were not common or were produced in small quantities. Although Rider's Perpetual Troubleshooter's Manual became a standard reference during the 1930s, the size and bulk of these yearly volumes could become a hindrance. Rider manuals contained information on common and rare models. Service businesses were paying extra for brands they rarely or never encountered. This "opened the door" for an alternative path and Morris Beitman was able to exploit this business opportunity. In 1940-1941 Morris N. Beitman under the name Supreme Publications (located at 328 S. Jefferson St. in Chicago) produced a series of books called, "Most-Often-Needed Radio Diagrams and Servicing Information" and later "Most-Often-Needed Television Servicing Information" for successive years. These books offered radio and television repair businesses a condensed version of Rider's "Perpetual Troubleshooter's Manual", by only providing models that were common or made in large quantities. The radio series started with 1926-1938 models in one volume. Each year after that represented a new volume until 1969 when the last volume was published. The television series started in 1946 and continued into the early 1970s. Their last known location was 1760 Balsam Road, Highland Park, Illinois. [ 6 ] From 1940 to 1960 Supreme published other volume sets for record players, tape recorders, wire recorders and other specialized consumer electronics using the "most often needed" title. These references like radios and televisions were in yearly bound sets but were not consistent year-to-year sets. These volumes never reached the quantity, consistency or longevity of the radio or television titles. Beitman under the Supreme Publication name was author to a number of other publications devoted to radio and television. Dates given are the earliest known date of first publication. [ 7 ] Little can be found on Morris N. Beitman's career activities and Supreme Publications. 
It is also possible that Supreme Publications was a side business for Morris Beitman since he was also a radio instructor in the Chicago high schools. In the 1950s, like John F. Rider , Beitman had to revise many of his publications due to rapid changes in technology used in consumer electronics. In the 1960s his son Hartford assisted him in Supreme Publications by continuing the yearly service volumes for radio and television. Many of Morris Beitman's publications either had no copyright or they had expired. Hartford spent time to straighten these matters and compiled the "1967-1969 Most Often Needed Radio Diagram and Servicing Information" book. [ 6 ] Supreme would stop publishing by the early 1970s. One reason was the declining need for repairing consumer electronics due to the increased reliability and the low cost of electronics (cost less to buy one than repair it). He died in February 1980, in Highland Park, Illinois. Radio and television servicing is no longer a significant business segment in small or independent business. The rise in interest is in preserving old consumer technology. Interest in Morris Beitman's "Most Often Needed Radio Diagrams" resurfaced in the early 1980s with the rise of restoring antique and collectible radios made before the 1940s. Vintage Radio, founded by Morgan E. McMahon, was a publishing company specializing in preserving early radio and television technology. [ 8 ] They reprinted "Most Often Needed 1926-1938 Radio Diagrams and Servicing Information" and sold it to radio collectors. [ 9 ] From 1986 to 1989, Hartford Beitman and Kristina Hund Beitman made an effort to resurrect the "Most Often Needed" series by compiling past servicing information from 1926 to 1950 based on brand name. This series of books was published by A R S Enterprises. No more new copies are available and A R S Enterprises publishing current status has not been found. Hartford Beitman's other books are Recorded Sound (1981), Seattle Seating (1984), Aggressive Investment Marketing (1985), Radio Hobbyist Handbook (1988) and Financial Services Marketing (1990). [ 10 ] Supreme's "Most Often Needed" series have become public domain. There are groups interested in technology preservation. These groups, through scanning or digitizing the thousands of pages of the series, have made them available online for non-commercial use. [ 11 ] Radio collecting and restoration books suggest using Supreme's "Most-Often-Needed" volumes as a starting source to find information. [ 12 ] Preserving old technology is often called "dead technology", [ 13 ] in other words technology that is no longer advancing or used in mass production. [ 14 ] The American Radio History website offers the complete set of Supreme Publications "Most Often Need Radio Servicing Information" and "Most Often Needed Television Servicing Information" to be viewed online. Selecting a title and year allows one to scan through the entire manual. [ 15 ]
https://en.wikipedia.org/wiki/Morris_Beitman
In applied statistics , the Morris method for global sensitivity analysis is a so-called one-factor-at-a-time method , meaning that in each run only one input parameter is given a new value. It facilitates a global sensitivity analysis by making a number r of local changes at different points x(1 → r) of the possible range of input values. The finite distribution of elementary effects associated with the i-th input factor is obtained by randomly sampling different x from Ω , and is denoted by F_i . [ 1 ] In the original work of Morris, the two sensitivity measures proposed were respectively the mean, μ , and the standard deviation, σ , of F_i . However, this choice has the drawback that, if the distribution F_i contains negative elements, which occurs when the model is non-monotonic, some effects may cancel each other out when computing the mean. Thus, the measure μ on its own is not reliable for ranking factors in order of importance. It is necessary to consider at the same time the values of μ and σ , as a factor with elementary effects of different signs (that cancel each other out) would have a low value of μ but a considerable value of σ , which avoids underestimating the factor. [ 1 ] When the goal is to rank factors in order of importance by making use of a single sensitivity measure, scientific advice is to use μ* , which, by making use of the absolute value, avoids the occurrence of effects of opposite signs. [ 1 ] In the revised Morris method, μ* is used to detect input factors with an important overall influence on the output, and σ is used to detect factors involved in interaction with other factors or whose effect is non-linear. [ 1 ] The method starts by sampling a set of start values within the defined ranges of possible values for all input variables and calculating the subsequent model outcome. The second step changes the values for one variable (all other inputs remaining at their start values) and calculates the resulting change in model outcome compared to the first run. Next, the values for another variable are changed (the previous variable is kept at its changed value and all other ones kept at their start values) and the resulting change in model outcome compared to the second run is calculated. This goes on until all input variables are changed. This procedure is repeated r times (where r is usually taken between 5 and 15), each time with a different set of start values, which leads to a number of r(k + 1) runs, where k is the number of input variables. Such a number is very efficient compared to more demanding methods for sensitivity analysis . [ 2 ] A sensitivity analysis method widely used to screen factors in models of large dimensionality is the design proposed by Morris.
[ 3 ] The Morris method deals efficiently with models containing hundreds of input factors without relying on strict assumptions about the model, such as additivity or monotonicity of the model input-output relationship. The Morris method is simple to understand and implement, and its results are easily interpreted. Furthermore, it is economical in the sense that it requires a number of model evaluations that is linear in the number of model factors. The method can be regarded as global because the final measure is obtained by averaging a number of local measures (the elementary effects), computed at different points of the input space. [ 2 ]
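The elementary-effects screening described above can be sketched in a few lines of Python. The sketch below is illustrative rather than a reference implementation: the toy model, the number of grid levels, the step size delta, and the restriction to steps in the positive direction are all assumptions made for this example, and the k input factors are taken to vary on the unit cube.

import numpy as np

def morris_screening(model, k, r=10, levels=4, seed=0):
    # Minimal Morris elementary-effects screening on [0, 1]^k.
    # model : callable mapping a length-k array to a scalar output
    # r     : number of trajectories (usually taken between 5 and 15)
    # Returns (mu, mu_star, sigma), each an array of length k.
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))              # standard step size for a p-level grid
    grid = np.arange(levels) / (levels - 1)
    starts = grid[grid <= 1.0 - delta + 1e-12]         # start levels for which x + delta stays in [0, 1]
    ee = np.empty((r, k))                              # one elementary effect per factor per trajectory
    for t in range(r):
        x = rng.choice(starts, size=k)                 # random start point of the trajectory
        y_prev = model(x)                              # first run of the trajectory
        for i in rng.permutation(k):                   # change one factor at a time, in random order
            x_next = x.copy()
            x_next[i] = x[i] + delta
            y_next = model(x_next)                     # each new run is compared with the previous one
            ee[t, i] = (y_next - y_prev) / delta
            x, y_prev = x_next, y_next
    mu = ee.mean(axis=0)
    mu_star = np.abs(ee).mean(axis=0)                  # revised measure based on absolute effects
    sigma = ee.std(axis=0, ddof=1)
    return mu, mu_star, sigma

# Toy non-monotonic model with k = 3 factors; the screening uses r*(k + 1) = 40 model runs.
toy = lambda x: np.sin(2.0 * np.pi * x[0]) + 2.0 * x[1] + 0.1 * x[2]
mu, mu_star, sigma = morris_screening(toy, k=3, r=10)

For the non-monotonic first factor, mu is close to zero while mu_star and sigma are large, which is exactly the cancellation behaviour the revised measure is meant to expose.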
https://en.wikipedia.org/wiki/Morris_method
The Morse/Long-range potential ( MLR potential ) is an interatomic interaction model for the potential energy of a diatomic molecule . Because the regular Morse potential is so simple (it has only three adjustable parameters), it is very limited in its applicability in modern spectroscopy . The MLR potential is a modern version of the Morse potential which has the correct theoretical long-range form of the potential naturally built into it. [ 1 ] It has been an important tool for spectroscopists to represent experimental data, verify measurements, and make predictions. It is useful for its extrapolation capability when data for certain regions of the potential are missing, its ability to predict energies with accuracy often better than the most sophisticated ab initio techniques, and its ability to determine precise empirical values for physical parameters such as the dissociation energy , equilibrium bond length , and long-range constants. Cases of particular note include: The MLR potential is based on the classic Morse potential which was first introduced in 1929 by Philip M. Morse . A primitive version of the MLR potential was first introduced in 2006 by Robert J. Le Roy and colleagues for a study on N 2 . [ 7 ] This primitive form was used on Ca 2 , [ 8 ] KLi [ 6 ] and MgH , [ 9 ] [ 10 ] [ 11 ] before the more modern version was introduced in 2009. [ 1 ] A further extension of the MLR potential referred to as the MLR3 potential was introduced in a 2010 study of Cs 2 , [ 12 ] and this potential has since been used on HF , [ 13 ] [ 14 ] HCl , [ 13 ] [ 14 ] HBr [ 13 ] [ 14 ] and HI . [ 13 ] [ 14 ] The Morse/Long-range potential energy function is of the form V ( r ) = D e ( 1 − u ( r ) u ( r e ) e − β ( r ) y p r e q ( r ) ) 2 {\displaystyle V(r)={\mathfrak {D}}_{e}\left(1-{\frac {u(r)}{u(r_{e})}}e^{-\beta (r)y_{p}^{r_{\rm {eq}}}(r)}\right)^{2}} where for large r {\displaystyle r} , V ( r ) ≃ D e − u ( r ) + u ( r ) 2 4 D e , {\displaystyle V(r)\simeq {\mathfrak {D}}_{e}-u(r)+{\frac {u(r)^{2}}{4{\mathfrak {D}}_{e}}},} so u ( r ) {\displaystyle u(r)} is defined according to the theoretically correct long-range behavior expected for the interatomic interaction. D e {\displaystyle {\mathfrak {D}}_{e}} is the depth of the potential at equilibrium. This long-range form of the MLR model is guaranteed because the argument of the exponent is defined to have long-range behavior: β ( r ) y p r r e f ( r ) ≃ β ∞ = ln ⁡ ( 2 D e u ( r e ) ) , {\displaystyle \beta (r)y_{p}^{r_{\rm {ref}}}(r)\simeq \beta _{\infty }=\ln \left({\frac {2{\mathfrak {D}}_{e}}{u(r_{e})}}\right),} where r e {\displaystyle r_{e}} is the equilibrium bond length. There are a few ways in which this long-range behavior can be achieved, the most common being to make β ( r ) {\displaystyle \beta (r)} a polynomial that is constrained to become β ∞ {\displaystyle \beta _{\infty }} at long range: β ( r ) = ( 1 − y p r ref ( r ) ) ∑ i = 0 N β β i y q r ref ( r ) i + y p r ref ( r ) β ∞ , {\displaystyle \beta (r)=\left(1-y_{p}^{r_{\textrm {ref}}}(r)\right)\sum _{i=0}^{N_{\beta }}\beta _{i}y_{q}^{r_{\textrm {ref}}}(r)^{i}+y_{p}^{r_{\textrm {ref}}}(r)\beta _{\infty },} y n r x ( r ) = r n − r x n r n + r x n , {\displaystyle y_{n}^{r_{x}}(r)={\frac {r^{n}-r_{x}^{n}}{r^{n}+r_{x}^{n}}},} where n is an integer greater than 1, whose value is determined by the model chosen for the long-range potential u LR ( r ) {\displaystyle u_{\text{LR}}(r)} . It is easy to see that: lim r → ∞ β ( r ) = β ∞ .
{\displaystyle \lim _{r\to \infty }\beta (r)=\beta _{\infty }.} The MLR potential has successfully summarized all experimental spectroscopic data (and/or virial data ) for a number of diatomic molecules, including: N 2 , [ 7 ] Ca 2 , [ 8 ] KLi, [ 6 ] MgH, [ 9 ] [ 10 ] [ 11 ] several electronic states of Li 2 , [ 1 ] [ 2 ] [ 15 ] [ 3 ] [ 10 ] Cs 2 , [ 16 ] [ 12 ] Sr 2 , [ 17 ] ArXe, [ 10 ] [ 18 ] LiCa, [ 19 ] LiNa, [ 20 ] Br 2 , [ 21 ] Mg 2 , [ 22 ] HF, [ 13 ] [ 14 ] HCl, [ 13 ] [ 14 ] HBr, [ 13 ] [ 14 ] HI, [ 13 ] [ 14 ] MgD, [ 9 ] Be 2 , [ 23 ] BeH, [ 24 ] and NaH. [ 25 ] More sophisticated versions are used for polyatomic molecules. It has also become customary to fit ab initio points to the MLR potential, to achieve a fully analytic ab initio potential and to take advantage of the MLR's ability to incorporate the correct theoretically known short- and long-range behavior into the potential (the latter usually being of higher accuracy than the molecular ab initio points themselves because it is based on atomic ab initio calculations rather than molecular ones, and because features like spin-orbit coupling which are difficult to incorporate into molecular ab initio calculations can more easily be treated in the long-range). MLR has been used to represent ab initio points for KLi [ 26 ] and KBe. [ 27 ]
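As a numerical sketch of the functional form defined above, the following Python snippet evaluates an MLR potential with a single inverse-power long-range term u(r) = C6/r^6. The choice of u(r), the powers p and q, the beta coefficients and all parameter values are assumptions made for illustration; fitted MLR potentials for real molecules use long-range functions with several terms and coefficients determined from data.

import numpy as np

def y(r, n, r_x):
    # Dimensionless radial variable y_n^{r_x}(r) = (r^n - r_x^n) / (r^n + r_x^n).
    return (r**n - r_x**n) / (r**n + r_x**n)

def mlr_potential(r, De, re, beta_coeffs, C6, p=5, q=3, r_ref=None):
    # Morse/Long-range potential with u(r) = C6 / r^6 (illustrative long-range model).
    if r_ref is None:
        r_ref = re
    u = C6 / r**6                          # long-range tail u(r)
    u_e = C6 / re**6                       # u(r) evaluated at the equilibrium distance
    beta_inf = np.log(2.0 * De / u_e)      # limiting value enforcing the correct long-range form
    yp_ref = y(r, p, r_ref)
    yq_ref = y(r, q, r_ref)
    poly = sum(b * yq_ref**i for i, b in enumerate(beta_coeffs))
    beta = (1.0 - yp_ref) * poly + yp_ref * beta_inf   # constrained polynomial exponent function
    yp_eq = y(r, p, re)                    # radial variable referenced to re, used in the exponent
    return De * (1.0 - (u / u_e) * np.exp(-beta * yp_eq))**2

# Illustrative, made-up parameters (arbitrary units), not fitted to any molecule.
r = np.linspace(2.0, 20.0, 200)
V = mlr_potential(r, De=1.0, re=4.0, beta_coeffs=[0.1, -0.05], C6=50.0)
# V(re) = 0 at the minimum and V -> De - u(r) + u(r)^2/(4 De) at large r, as required above.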
https://en.wikipedia.org/wiki/Morse/Long-range_potential
Morse Micro is a Sydney-based developer of Wi-Fi HaLow microprocessors: chips that enable high data rates with long range and low power consumption. [ 1 ] [ 2 ] Amongst all Wi-Fi HaLow systems on a chip , Morse Micro processors are reported to be the smallest, fastest and longest-range, with the lowest power use. [ 3 ] [ 4 ] The main application of the technology is machine-to-machine communications. With the Internet of things expected to extend to 30 billion devices by 2025, this represents a steeply growing number of users of the technology. [ 5 ] The founders plan to be part of "expanding Wi-Fi so it can go into everything, every smoke alarm, every camera." [ 1 ] The firm has its global HQ in Sydney, which is also its main base for R&D, with additional centres in the United States, China, India, the United Kingdom and, from 2024, an operations centre in Taiwan. [ 6 ] [ 7 ] [ 2 ] [ 8 ] As of 2022, Morse Micro was producing more semiconductors than any other Australian-based tech company. [ 9 ] After eight years' development, the company's Wi-Fi HaLow processor was reported to deliver 10 times the range of conventional Wi-Fi technology, and to be able to function for several years before needing a battery change. [ 10 ] The microprocessor allows for a range of data rates, depending on the modulation and coding scheme (MCS) used. [ 4 ] These range from as low as 150 kilobits per second (kbps), using MCS 10 with BPSK modulation, to a top rate of 4 megabits per second (Mbps), using MCS 9 with 256 quadrature amplitude modulation . [ 11 ] The chip uses low-bandwidth wireless network protocols, operating in the 1 GHz spectrum, while providing a communications range of 1,000 metres. [ 12 ] In one field test, researchers found the technology could sustain high-speed data transmission between a device placed by the north end of Sydney Harbour Bridge and a device across the harbour at Sydney Opera House . [ 2 ] The company claims its chip provides 10 times the range, 100 times the area and 1000 times the volume of data offered by traditional Wi-Fi. [ 13 ] To enable networked communications between machines, a single Wi-Fi HaLow access point can securely connect up to 8,191 devices. [ 14 ] Applications for the Wi-Fi HaLow technology include the Internet of things , which may include solutions in the home (such as lighting, monitoring and smart door locks) and in industry (such as vehicle management, high-end security and supply chain asset tracking). [ 15 ] [ 16 ] [ 17 ] Looking at its scalability, one American technical review made this assessment: That's ample capacity to connect every LED bulb, light switch, smart door lock, motorized window shade, thermostat, smoke detector, solar panel, security camera, or any imaginable smart-home device for the foreseeable future. [ 4 ] Physically, the company's microchip is one-fifth the size of a traditional Wi-Fi processor. [ 12 ] It uses very little energy, consuming a fraction of the power of traditional chips, which is achieved by periodically waking and reporting. [ 18 ] [ 12 ] As such, the chips can operate for several years on a single coin-size battery. In 2020, the first generation of Morse Micro microchips went into production in Taiwan. [ 19 ] The company has onshore design and fabrication of composite semiconductors in Australia, which has been assessed as a strategic capability.
[ 20 ] [ 21 ] As of late 2022, the market for Wi-Fi HaLow products appeared to be expanding, driven by those developing industrial IoT in the Japanese market who "deploy thousands of devices in warehouses which use sensors and actuators." [ 17 ] "Wi-Fi was invented over 20 years ago in Australia and over that time we have seen it go into every laptop, phone and tablet, and all of that came from people in Australia. Today we are opening it up and expanding Wi-Fi so it can go into everything, every smoke alarm, every camera." — Andrew Terry, founder, speaking to The Sydney Morning Herald in 2017 [ 1 ] The founding partners of Morse Micro, Andrew Terry and Michael De Nil, met while working for Broadcom , the largest supplier of integrated circuits for communications. [ 1 ] De Nil said they noticed that chips designed for phones and laptops were being used for machine-to-machine communication and "that wasn't working very well." [ 22 ] They decided to create a new kind of microprocessor, specifically for the Internet of things . [ 12 ] Morse Micro Pty Ltd was established as a private company, limited by guarantee, in August 2016. The founders were later joined by several significant engineers. By late 2023, the company employed 180 people across Australia, the United States, China, India, the UK, Singapore and Taiwan. [ 25 ] [ 26 ] [ 6 ] From this point the focus of market expansion became Japan, through its Japanese investor MegaChips. [ 25 ] Security cameras became a key application, which was recognised with the global industry award, the IoT Product of the Year, in 2022 and 2023. [ 27 ] The Australian Financial Review reported in 2024 that Morse Micro was mitigating geopolitical risk by maintaining two supply chains for chips and components, one from mainland China and the other from Taiwan, with assembly and warehousing in Singapore. [ 6 ] The Singapore facility began operations in August 2023, and had produced over 2 million chips by November of that year. [ 7 ] The Australian Government provided the founders with seed funding in 2017, believing Morse Micro had the "first WiFi HaLow silicon chip that securely connects smart devices over long distances." [ 28 ] It is reported to be among the best-funded Wi-Fi HaLow technology companies, with large investors from Japan, the United States and a spread of Australian retirement funds. [ 29 ] [ 30 ] By 15 February 2023, the company had an estimated value of US$700 million, just over A$1 billion. [ 31 ] In May 2019, Series A funding was provided by a suite of investors. These included the Clean Energy Innovation Fund and CSIRO Innovation Fund, part of the Australian scientific research agency credited with inventing Wi-Fi in 1997. [ 32 ] [ 33 ] Investment also came from American entrepreneur Ray Stata of Analog Devices , Blackbird Ventures, Main Sequence Ventures, Right Click Capital, Kim Jackson and her husband Scott Farquhar through Skip Capital, Lucy and Malcolm Turnbull , and Uniseed, the venture fund of UniSuper . This tranche totalled A$42 million. [ 2 ] [ 33 ] By September 2022 the company had announced its Series B round of A$140 million, later extended to A$170 million, attracting intense investor interest.
[ 34 ] [ 35 ] The investment round was led by the Japanese chip design and manufacturing giant MegaChips, with further investment from its incumbent investors, which are known to include several Australian superannuation groups, such as TelstraSuper, HESTA, Hostplus and NGS (managed by Blackbird Ventures) and UniSuper (managed by Uniseed). [ 25 ] [ 36 ] [ 10 ]
https://en.wikipedia.org/wiki/Morse_Micro
In mathematics , specifically in differential topology , Morse theory enables one to analyze the topology of a manifold by studying differentiable functions on that manifold. According to the basic insights of Marston Morse , a typical differentiable function on a manifold will reflect the topology quite directly. Morse theory allows one to find CW structures and handle decompositions on manifolds and to obtain substantial information about their homology . Before Morse, Arthur Cayley and James Clerk Maxwell had developed some of the ideas of Morse theory in the context of topography . Morse originally applied his theory to geodesics ( critical points of the energy functional on the space of paths). These techniques were used in Raoul Bott 's proof of his periodicity theorem . The analogue of Morse theory for complex manifolds is Picard–Lefschetz theory . To illustrate, consider a mountainous landscape surface M {\displaystyle M} (more generally, a manifold ). If f {\displaystyle f} is the function M → R {\displaystyle M\to \mathbb {R} } giving the elevation of each point, then the inverse image of a point in R {\displaystyle \mathbb {R} } is a contour line (more generally, a level set ). Each connected component of a contour line is either a point, a simple closed curve , or a closed curve with double point(s) . Contour lines may also have points of higher order (triple points, etc.), but these are unstable and may be removed by a slight deformation of the landscape. Double points in contour lines occur at saddle points , or passes, where the surrounding landscape curves up in one direction and down in the other. Imagine flooding this landscape with water. When the water reaches elevation a {\displaystyle a} , the underwater surface is M a = def f − 1 ( − ∞ , a ] {\displaystyle M^{a}\,{\stackrel {\text{def}}{=}}\,f^{-1}(-\infty ,a]} , the points with elevation a {\displaystyle a} or below. Consider how the topology of this surface changes as the water rises. It appears unchanged except when a {\displaystyle a} passes the height of a critical point , where the gradient of f {\displaystyle f} is 0 {\displaystyle 0} (more generally, the Jacobian matrix acting as a linear map between tangent spaces does not have maximal rank ). In other words, the topology of M a {\displaystyle M^{a}} does not change except when the water either (1) starts filling a basin, (2) covers a saddle (a mountain pass ), or (3) submerges a peak. To these three types of critical points —basins, passes, and peaks (i.e. minima, saddles, and maxima)—one associates a number called the index, the number of independent directions in which f {\displaystyle f} decreases from the point. More precisely, the index of a non-degenerate critical point p {\displaystyle p} of f {\displaystyle f} is the dimension of the largest subspace of the tangent space to M {\displaystyle M} at p {\displaystyle p} on which the Hessian of f {\displaystyle f} is negative definite. The indices of basins, passes, and peaks are 0 , 1 , {\displaystyle 0,1,} and 2 , {\displaystyle 2,} respectively. Considering a more general surface, let M {\displaystyle M} be a torus oriented as in the picture, with f {\displaystyle f} again taking a point to its height above the plane. One can again analyze how the topology of the underwater surface M a {\displaystyle M^{a}} changes as the water level a {\displaystyle a} rises. 
Starting from the bottom of the torus, let p , q , r , {\displaystyle p,q,r,} and s {\displaystyle s} be the four critical points of index 0 , 1 , 1 , {\displaystyle 0,1,1,} and 2 {\displaystyle 2} corresponding to the basin, two saddles, and peak, respectively. When a {\displaystyle a} is less than f ( p ) = 0 , {\displaystyle f(p)=0,} then M a {\displaystyle M^{a}} is the empty set. After a {\displaystyle a} passes the level of p , {\displaystyle p,} when 0 < a < f ( q ) , {\displaystyle 0<a<f(q),} then M a {\displaystyle M^{a}} is a disk , which is homotopy equivalent to a point (a 0-cell) which has been "attached" to the empty set. Next, when a {\displaystyle a} exceeds the level of q , {\displaystyle q,} and f ( q ) < a < f ( r ) , {\displaystyle f(q)<a<f(r),} then M a {\displaystyle M^{a}} is a cylinder, and is homotopy equivalent to a disk with a 1-cell attached (image at left). Once a {\displaystyle a} passes the level of r , {\displaystyle r,} and f ( r ) < a < f ( s ) , {\displaystyle f(r)<a<f(s),} then M a {\displaystyle M^{a}} is a torus with a disk removed, which is homotopy equivalent to a cylinder with a 1-cell attached (image at right). Finally, when a {\displaystyle a} is greater than the critical level of s , {\displaystyle s,} M a {\displaystyle M^{a}} is a torus, i.e. a torus with a disk (a 2-cell) removed and re-attached. This illustrates the following rule: the topology of M a {\displaystyle M^{a}} does not change except when a {\displaystyle a} passes the height of a critical point; at this point, a γ {\displaystyle \gamma } -cell is attached to M a {\displaystyle M^{a}} , where γ {\displaystyle \gamma } is the index of the point. This does not address what happens when two critical points are at the same height, which can be resolved by a slight perturbation of f . {\displaystyle f.} In the case of a landscape or a manifold embedded in Euclidean space , this perturbation might simply be tilting slightly, rotating the coordinate system. One must take care to make the critical points non-degenerate. To see what can pose a problem, let M = R {\displaystyle M=\mathbb {R} } and let f ( x ) = x 3 . {\displaystyle f(x)=x^{3}.} Then 0 {\displaystyle 0} is a critical point of f , {\displaystyle f,} but the topology of M a {\displaystyle M^{a}} does not change when a {\displaystyle a} passes 0. {\displaystyle 0.} The problem is that the second derivative is f ″ ( 0 ) = 0 {\displaystyle f''(0)=0} —that is, the Hessian of f {\displaystyle f} vanishes and the critical point is degenerate. This situation is unstable, since by slightly deforming f {\displaystyle f} to f ( x ) = x 3 + ϵ x {\displaystyle f(x)=x^{3}+\epsilon x} , the degenerate critical point is either removed ( ϵ > 0 {\displaystyle \epsilon >0} ) or breaks up into two non-degenerate critical points ( ϵ < 0 {\displaystyle \epsilon <0} ). For a real-valued smooth function f : M → R {\displaystyle f:M\to \mathbb {R} } on a differentiable manifold M , {\displaystyle M,} the points where the differential of f {\displaystyle f} vanishes are called critical points of f {\displaystyle f} and their images under f {\displaystyle f} are called critical values . If at a critical point p {\displaystyle p} the matrix of second partial derivatives (the Hessian matrix ) is non-singular, then p {\displaystyle p} is called a non-degenerate critical point ; if the Hessian is singular then p {\displaystyle p} is a degenerate critical point . 
For the functions f ( x ) = a + b x + c x 2 + d x 3 + ⋯ {\displaystyle f(x)=a+bx+cx^{2}+dx^{3}+\cdots } from R {\displaystyle \mathbb {R} } to R , {\displaystyle \mathbb {R} ,} f {\displaystyle f} has a critical point at the origin if b = 0 , {\displaystyle b=0,} which is non-degenerate if c ≠ 0 {\displaystyle c\neq 0} (that is, f {\displaystyle f} is of the form a + c x 2 + ⋯ {\displaystyle a+cx^{2}+\cdots } ) and degenerate if c = 0 {\displaystyle c=0} (that is, f {\displaystyle f} is of the form a + d x 3 + ⋯ {\displaystyle a+dx^{3}+\cdots } ). A less trivial example of a degenerate critical point is the origin of the monkey saddle . The index of a non-degenerate critical point p {\displaystyle p} of f {\displaystyle f} is the dimension of the largest subspace of the tangent space to M {\displaystyle M} at p {\displaystyle p} on which the Hessian is negative definite . This corresponds to the intuitive notion that the index is the number of directions in which f {\displaystyle f} decreases. The degeneracy and index of a critical point are independent of the choice of the local coordinate system used, as shown by Sylvester's Law . Let p {\displaystyle p} be a non-degenerate critical point of f : M → R . {\displaystyle f\colon M\to \mathbb {R} .} Then there exists a chart ( x 1 , x 2 , … , x n ) {\displaystyle \left(x_{1},x_{2},\ldots ,x_{n}\right)} in a neighborhood U {\displaystyle U} of p {\displaystyle p} such that x i ( p ) = 0 {\displaystyle x_{i}(p)=0} for all i {\displaystyle i} and f ( x ) = f ( p ) − x 1 2 − ⋯ − x γ 2 + x γ + 1 2 + ⋯ + x n 2 {\displaystyle f(x)=f(p)-x_{1}^{2}-\cdots -x_{\gamma }^{2}+x_{\gamma +1}^{2}+\cdots +x_{n}^{2}} throughout U . {\displaystyle U.} Here γ {\displaystyle \gamma } is equal to the index of f {\displaystyle f} at p {\displaystyle p} . As a corollary of the Morse lemma, one sees that non-degenerate critical points are isolated . (Regarding an extension to the complex domain see Complex Morse Lemma . For a generalization, see Morse–Palais lemma ). A smooth real-valued function on a manifold M {\displaystyle M} is a Morse function if it has no degenerate critical points. A basic result of Morse theory says that almost all functions are Morse functions. Technically, the Morse functions form an open, dense subset of all smooth functions M → R {\displaystyle M\to \mathbb {R} } in the C 2 {\displaystyle C^{2}} topology. This is sometimes expressed as "a typical function is Morse" or "a generic function is Morse". As indicated before, we are interested in the question of when the topology of M a = f − 1 ( − ∞ , a ] {\displaystyle M^{a}=f^{-1}(-\infty ,a]} changes as a {\displaystyle a} varies. Half of the answer to this question is given by the following theorem. It is also of interest to know how the topology of M a {\displaystyle M^{a}} changes when a {\displaystyle a} passes a critical point. The following theorem answers that question. These results generalize and formalize the 'rule' stated in the previous section. Using the two previous results and the fact that there exists a Morse function on any differentiable manifold, one can prove that any differentiable manifold is a CW complex with an n {\displaystyle n} -cell for each critical point of index n . {\displaystyle n.} To do this, one needs the technical fact that one can arrange to have a single critical point on each critical level, which is usually proven by using gradient-like vector fields to rearrange the critical points. 
Morse theory can be used to prove some strong results on the homology of manifolds. The number of critical points of index γ {\displaystyle \gamma } of f : M → R {\displaystyle f:M\to \mathbb {R} } is equal to the number of γ {\displaystyle \gamma } cells in the CW structure on M {\displaystyle M} obtained from "climbing" f . {\displaystyle f.} Using the fact that the alternating sum of the ranks of the homology groups of a topological space is equal to the alternating sum of the ranks of the chain groups from which the homology is computed, then by using the cellular chain groups (see cellular homology ) it is clear that the Euler characteristic χ ( M ) {\displaystyle \chi (M)} is equal to the sum ∑ ( − 1 ) γ C γ = χ ( M ) {\displaystyle \sum (-1)^{\gamma }C^{\gamma }\,=\chi (M)} where C γ {\displaystyle C^{\gamma }} is the number of critical points of index γ . {\displaystyle \gamma .} Also by cellular homology, the rank of the n {\displaystyle n} th homology group of a CW complex M {\displaystyle M} is less than or equal to the number of n {\displaystyle n} -cells in M . {\displaystyle M.} Therefore, the rank of the γ {\displaystyle \gamma } th homology group, that is, the Betti number b γ ( M ) {\displaystyle b_{\gamma }(M)} , is less than or equal to the number of critical points of index γ {\displaystyle \gamma } of a Morse function on M . {\displaystyle M.} These facts can be strengthened to obtain the Morse inequalities : C γ − C γ − 1 ± ⋯ + ( − 1 ) γ C 0 ≥ b γ ( M ) − b γ − 1 ( M ) ± ⋯ + ( − 1 ) γ b 0 ( M ) . {\displaystyle C^{\gamma }-C^{\gamma -1}\pm \cdots +(-1)^{\gamma }C^{0}\geq b_{\gamma }(M)-b_{\gamma -1}(M)\pm \cdots +(-1)^{\gamma }b_{0}(M).} In particular, for any γ ∈ { 0 , … , n = dim ⁡ M } , {\displaystyle \gamma \in \{0,\ldots ,n=\dim M\},} one has C γ ≥ b γ ( M ) . {\displaystyle C^{\gamma }\geq b_{\gamma }(M).} This gives a powerful tool to study manifold topology. Suppose on a closed manifold there exists a Morse function f : M → R {\displaystyle f:M\to \mathbb {R} } with precisely k critical points. In what way does the existence of the function f {\displaystyle f} restrict M {\displaystyle M} ? The case k = 2 {\displaystyle k=2} was studied by Georges Reeb in 1952; the Reeb sphere theorem states that M {\displaystyle M} is homeomorphic to a sphere S n . {\displaystyle S^{n}.} The case k = 3 {\displaystyle k=3} is possible only in a small number of low dimensions, and M is homeomorphic to an Eells–Kuiper manifold . In 1982 Edward Witten developed an analytic approach to the Morse inequalities by considering the de Rham complex for the perturbed operator d t = e − t f d e t f . {\displaystyle d_{t}=e^{-tf}de^{tf}.} [ 1 ] [ 2 ] Morse theory has been used to classify closed 2-manifolds up to diffeomorphism. If M {\displaystyle M} is oriented, then M {\displaystyle M} is classified by its genus g {\displaystyle g} and is diffeomorphic to a sphere with g {\displaystyle g} handles: thus if g = 0 , {\displaystyle g=0,} M {\displaystyle M} is diffeomorphic to the 2-sphere; and if g > 0 , {\displaystyle g>0,} M {\displaystyle M} is diffeomorphic to the connected sum of g {\displaystyle g} 2-tori. If N {\displaystyle N} is unorientable, it is classified by a number g > 0 {\displaystyle g>0} and is diffeomorphic to the connected sum of g {\displaystyle g} real projective spaces R P 2 . {\displaystyle \mathbf {RP} ^{2}.} In particular two closed 2-manifolds are homeomorphic if and only if they are diffeomorphic. 
[ 3 ] [ 4 ] Morse homology is a particularly easy way to understand the homology of smooth manifolds . It is defined using a generic choice of Morse function and Riemannian metric . The basic theorem is that the resulting homology is an invariant of the manifold (that is, independent of the function and metric) and isomorphic to the singular homology of the manifold; this implies that the Morse and singular Betti numbers agree and gives an immediate proof of the Morse inequalities. An infinite dimensional analog of Morse homology in symplectic geometry is known as Floer homology . The notion of a Morse function can be generalized to consider functions that have nondegenerate manifolds of critical points. A Morse–Bott function is a smooth function on a manifold whose critical set is a closed submanifold and whose Hessian is non-degenerate in the normal direction. (Equivalently, the kernel of the Hessian at a critical point equals the tangent space to the critical submanifold.) A Morse function is the special case where the critical manifolds are zero-dimensional (so the Hessian at critical points is non-degenerate in every direction, that is, has no kernel). The index is most naturally thought of as a pair ( i − , i + ) , {\displaystyle \left(i_{-},i_{+}\right),} where i − {\displaystyle i_{-}} is the dimension of the unstable manifold at a given point of the critical manifold, and i + {\displaystyle i_{+}} is equal to i − {\displaystyle i_{-}} plus the dimension of the critical manifold. If the Morse–Bott function is perturbed by a small function on the critical locus, the index of all critical points of the perturbed function on a critical manifold of the unperturbed function will lie between i − {\displaystyle i_{-}} and i + . {\displaystyle i_{+}.} Morse–Bott functions are useful because generic Morse functions are difficult to work with; the functions one can visualize, and with which one can easily calculate, typically have symmetries. They often lead to positive-dimensional critical manifolds. Raoul Bott used Morse–Bott theory in his original proof of the Bott periodicity theorem . Round functions are examples of Morse–Bott functions, where the critical sets are (disjoint unions of) circles. Morse homology can also be formulated for Morse–Bott functions; the differential in Morse–Bott homology is computed by a spectral sequence . Frederic Bourgeois sketched an approach in the course of his work on a Morse–Bott version of symplectic field theory, but this work was never published due to substantial analytic difficulties.
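As a concrete check of the Morse inequalities stated above, take the height function on the torus from the earlier example, which has one critical point of index 0, two of index 1 and one of index 2; comparing with the standard Betti numbers of the torus (b_0 = 1, b_1 = 2, b_2 = 1) gives:

% Critical point counts: C^0 = 1, C^1 = 2, C^2 = 1; Betti numbers of the torus: b_0 = 1, b_1 = 2, b_2 = 1.
\begin{align*}
C^{0} = 1 \;&\geq\; b_{0} = 1,\\
C^{1} - C^{0} = 1 \;&\geq\; b_{1} - b_{0} = 1,\\
C^{2} - C^{1} + C^{0} = 0 \;&\geq\; b_{2} - b_{1} + b_{0} = 0,\\
\sum_{\gamma}(-1)^{\gamma}C^{\gamma} = 1 - 2 + 1 \;&=\; 0 \;=\; \chi(T^{2}).
\end{align*}

Every inequality is in fact an equality here, so the height function on the torus is a perfect Morse function.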
https://en.wikipedia.org/wiki/Morse_inequalities
The Morse potential , named after physicist Philip M. Morse , is a convenient interatomic interaction model for the potential energy of a diatomic molecule . It is a better approximation for the vibrational structure of the molecule than the quantum harmonic oscillator because it explicitly includes the effects of bond breaking, such as the existence of unbound states. It also accounts for the anharmonicity of real bonds and the non-zero transition probability for overtone and combination bands . The Morse potential can also be used to model other interactions such as the interaction between an atom and a surface. Due to its simplicity (only three fitting parameters), it is not used in modern spectroscopy. However, its mathematical form inspired the MLR ( Morse/Long-range ) potential, which is the most popular potential energy function used for fitting spectroscopic data. The Morse potential energy function is of the form V ( r ) = D e ( 1 − e − a ( r − r e ) ) 2 {\displaystyle V(r)=D_{e}\left(1-e^{-a(r-r_{e})}\right)^{2}} Here r {\displaystyle r} is the distance between the atoms, r e {\displaystyle r_{e}} is the equilibrium bond distance, D e {\displaystyle D_{e}} is the well depth (defined relative to the dissociated atoms), and a {\displaystyle a} controls the 'width' of the potential (the smaller a {\displaystyle a} is, the larger the well). The dissociation energy of the bond can be calculated by subtracting the zero point energy E 0 {\displaystyle E_{0}} from the depth of the well. The force constant (stiffness) of the bond can be found by Taylor expansion of V ′ ( r ) {\displaystyle V'(r)} around r = r e {\displaystyle r=r_{e}} to the second derivative of the potential energy function, from which it can be shown that the parameter, a {\displaystyle a} , is a = k e / 2 D e {\displaystyle a={\sqrt {k_{e}/(2D_{e})}}} where k e {\displaystyle k_{e}} is the force constant at the minimum of the well. Since the zero of potential energy is arbitrary , the equation for the Morse potential can be rewritten any number of ways by adding or subtracting a constant value. When it is used to model the atom-surface interaction, the energy zero can be redefined so that the Morse potential becomes V ′ ( r ) = D e ( ( 1 − e − a ( r − r e ) ) 2 − 1 ) {\displaystyle V'(r)=D_{e}\left(\left(1-e^{-a(r-r_{e})}\right)^{2}-1\right)} which is usually written as V ′ ( r ) = D e ( e − 2 a ( r − r e ) − 2 e − a ( r − r e ) ) {\displaystyle V'(r)=D_{e}\left(e^{-2a(r-r_{e})}-2e^{-a(r-r_{e})}\right)} where r {\displaystyle r} is now the coordinate perpendicular to the surface. This form approaches zero at infinite r {\displaystyle r} and equals − D e {\displaystyle -D_{e}} at its minimum, i.e. r = r e {\displaystyle r=r_{e}} . It clearly shows that the Morse potential is the combination of a short-range repulsion term (the former) and a long-range attractive term (the latter), analogous to the Lennard-Jones potential . Like the quantum harmonic oscillator , the energies and eigenstates of the Morse potential can be found using operator methods. [ 1 ] One approach involves applying the factorization method to the Hamiltonian. To write the stationary states on the Morse potential, i.e. solutions Ψ n ( r ) {\displaystyle \ \Psi _{n}(r)\ } and E n {\displaystyle \ E_{n}\ } of the following Schrödinger equation : it is convenient to introduce the new variables: Then, the Schrödinger equation takes the simplified form: Its eigenvalues (reduced by D e {\displaystyle \ D_{e}\ } ) and eigenstates can be written as: [ 2 ] where with ⌊ x ⌋ {\displaystyle \ \lfloor x\rfloor \ } denoting the largest integer smaller than x , {\displaystyle \ x\ ,} and where z ≡ 2 λ e − ( x − x e ) {\displaystyle ~z\equiv 2\ \lambda \ e^{-\left(x-x_{e}\right)}~} and N n ≡ n !
( 2 λ − 2 n − 1 ) a Γ ( 2 λ − n ) {\displaystyle ~N_{n}\equiv {\sqrt {{\frac {\ n!\left(2\lambda -2n-1\right)\ a\ }{\ \Gamma (2\lambda -n)\ }}\ }}~} which satisfies the normalization condition and where L n ( α ) ( z ) {\displaystyle \ L_{n}^{(\alpha )}(z)\ } is a generalized Laguerre polynomial : There also exists the following analytical expression for matrix elements of the coordinate operator: [ 3 ] which is valid for m > n {\displaystyle ~m>n~} and N = λ − 1 2 . {\displaystyle ~N=\lambda -{\tfrac {1}{2}}~.} The eigenenergies in the initial variables have the form E n = h ν 0 ( n + 1 2 ) − [ h ν 0 ( n + 1 2 ) ] 2 4 D e {\displaystyle E_{n}=h\nu _{0}\left(n+{\tfrac {1}{2}}\right)-{\frac {\left[h\nu _{0}\left(n+{\tfrac {1}{2}}\right)\right]^{2}}{4D_{e}}}} where n {\displaystyle \ n\ } is the vibrational quantum number and ν 0 {\displaystyle \ \nu _{0}\ } has units of frequency. The latter is mathematically related to the particle mass, m , {\displaystyle \ m\ ,} and the Morse constants via ν 0 = a 2 π 2 D e / m . {\displaystyle \nu _{0}={\frac {a}{2\pi }}{\sqrt {2D_{e}/m}}.} Whereas the energy spacing between vibrational levels in the quantum harmonic oscillator is constant at h ν 0 , {\displaystyle \ h\ \nu _{0}\ ,} the energy between adjacent levels decreases with increasing v {\displaystyle \ v\ } in the Morse oscillator. Mathematically, the spacing of Morse levels is E n + 1 − E n = h ν 0 − ( n + 1 ) ( h ν 0 ) 2 2 D e . {\displaystyle E_{n+1}-E_{n}=h\nu _{0}-(n+1){\frac {(h\nu _{0})^{2}}{2D_{e}}}.} This trend matches the anharmonicity found in real molecules. However, this equation fails above some value of n m {\displaystyle \ n_{m}\ } where E ( n m + 1 ) − E ( n m ) {\displaystyle \ E(n_{m}+1)-E(n_{m})\ } is calculated to be zero or negative. Specifically, This failure is due to the finite number of bound levels in the Morse potential, and some maximum n m {\displaystyle \ n_{m}\ } that remains bound. For energies above n m , {\displaystyle \ n_{m}\ ,} all the possible energy levels are allowed and the equation for E n {\displaystyle \ E_{n}\ } is no longer valid. Below n m , {\displaystyle \ n_{m}\ ,} E n {\displaystyle \ E_{n}\ } is a good approximation for the true vibrational structure in non-rotating diatomic molecules. In fact, real molecular spectra are generally fit to the form E n = h c [ ω e ( n + 1 2 ) − ω e χ e ( n + 1 2 ) 2 ] {\displaystyle E_{n}=hc\left[\omega _{e}\left(n+{\tfrac {1}{2}}\right)-\omega _{e}\chi _{e}\left(n+{\tfrac {1}{2}}\right)^{2}\right]} in which the constants ω e {\displaystyle \ \omega _{e}\ } and ω e χ e {\displaystyle \ \omega _{e}\ \chi _{e}\ } can be directly related to the parameters for the Morse potential. Specifically, and Note that if ω e {\displaystyle \ \omega _{e}\ } and ω e χ e {\displaystyle \ \omega _{e}\ \chi _{e}\ } are given in c m − 1 , {\displaystyle \ {\mathsf {cm}}^{-1}\ ,} c {\displaystyle \ c\ } is in cm/s (not m/s), m {\displaystyle \ m\ } is in kg, and h {\displaystyle \ h\ } is in J · s ; in which case a {\displaystyle \ a\ } will be in m − 1 {\displaystyle \ {\mathsf {m}}^{-1}\ } and D e {\displaystyle \ D_{e}\ } will be in c m − 1 . {\displaystyle {\mathsf {cm}}^{-1}~.} As is clear from dimensional analysis , for historical reasons the last equation uses spectroscopic notation in which ω e {\displaystyle \ \omega _{e}\ } represents a wavenumber obeying E = h c ω , {\displaystyle \ E=h\ c\ \omega \ ,} and not an angular frequency given by E = ℏ ω . {\displaystyle \ E=\hbar \ \omega ~.} An extension of the Morse potential that made the Morse form useful for modern (high-resolution) spectroscopy is the MLR ( Morse/Long-range ) potential. [ 4 ] The MLR potential is used as a standard for representing spectroscopic and/or virial data of diatomic molecules by a potential energy curve.
It has been used on N 2 , [ 5 ] Ca 2 , [ 6 ] KLi, [ 7 ] MgH, [ 8 ] [ 9 ] [ 10 ] several electronic states of Li 2 , [ 4 ] [ 11 ] [ 12 ] [ 13 ] [ 9 ] Cs 2 , [ 14 ] [ 15 ] Sr 2 , [ 16 ] ArXe, [ 9 ] [ 17 ] LiCa, [ 18 ] LiNa, [ 19 ] Br 2 , [ 20 ] Mg 2 , [ 21 ] HF, [ 22 ] [ 23 ] HCl, [ 22 ] [ 23 ] HBr, [ 22 ] [ 23 ] HI, [ 22 ] [ 23 ] MgD, [ 8 ] Be 2 , [ 24 ] BeH, [ 25 ] and NaH. [ 26 ] More sophisticated versions are used for polyatomic molecules.
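As a numerical illustration of the level structure discussed above, the short Python sketch below evaluates the Morse vibrational energies E n = h ν 0 ( n + 1/2 ) − [ h ν 0 ( n + 1/2 ) ]² / ( 4 D e ) and their spacings. The values of D e, a and m are made up for the example and do not correspond to any particular molecule.

import numpy as np

h = 6.62607015e-34   # Planck constant, J*s

def morse_levels(De, a, m):
    # Bound vibrational energies (in J) of a Morse oscillator with
    # well depth De (J), width parameter a (1/m) and reduced mass m (kg).
    nu0 = (a / (2.0 * np.pi)) * np.sqrt(2.0 * De / m)   # harmonic frequency at the well bottom
    # Keep levels while the spacing h*nu0 - (n + 1)*(h*nu0)^2/(2*De) stays positive (approximately).
    n_max = int(np.floor(2.0 * De / (h * nu0) - 1.0))
    n = np.arange(n_max + 1)
    return h * nu0 * (n + 0.5) - (h * nu0 * (n + 0.5))**2 / (4.0 * De)

# Illustrative, made-up parameters of roughly molecular magnitude.
De = 7.0e-19    # well depth, about 4.4 eV
a = 2.0e10      # width parameter, 1/m
m = 1.6e-27     # reduced mass, about one atomic mass unit
E = morse_levels(De, a, m)
spacings = np.diff(E)   # decreases with n, unlike the constant spacing of a harmonic oscillator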
https://en.wikipedia.org/wiki/Morse_potential
The MORSE system is a communication system developed by RACOM . The system was primarily designed for narrow-band radio modems , but it has since been extended to additional communication channels: IP (any network using UDP/IP , e.g. the Internet), GPRS , EDGE and UMTS . The system is used primarily for communication in SCADA & telemetry , fleet management and transaction & financial networks . More than 70 automation protocols are implemented in the MORSE system . Some of them are implemented in "cache mode", in which the central modem keeps a mirror of the data from all RTUs, so that the response time for the SCADA system is very short.
https://en.wikipedia.org/wiki/Morse_system
In dynamical systems theory , an area of pure mathematics , a Morse–Smale system is a smooth dynamical system whose non-wandering set consists of finitely many hyperbolic equilibrium points and hyperbolic periodic orbits and which satisfies a transversality condition on the stable and unstable manifolds . Morse–Smale systems are structurally stable and form one of the simplest and best studied classes of smooth dynamical systems. They are named after Marston Morse , the creator of Morse theory , and Stephen Smale , who emphasized their importance for smooth dynamics and algebraic topology . Consider a smooth and complete vector field X defined on a compact differentiable manifold M of dimension n . The flow defined by this vector field is a Morse–Smale system if its non-wandering set consists of a finite number of singular points and periodic orbits, all of them hyperbolic, and the stable and unstable manifolds of these critical elements intersect transversally.
https://en.wikipedia.org/wiki/Morse–Smale_system
A Morton's fork is a type of false dilemma in which contradictory observations lead to the same conclusion. It is named after the 15th-century English prelate John Morton , who used such reasoning to rationalise the collection of a benevolence . The earliest known use of the term dates from the mid-19th century; the only known earlier mention of the reasoning is a claim by Francis Bacon that such a tradition already existed. [ 1 ] Under Henry VII , John Morton was made Archbishop of Canterbury in 1486 and Lord Chancellor in 1487. He rationalised requiring the payment of a benevolence (tax) to King Henry by reasoning that someone living modestly must be saving money and therefore could afford the benevolence, whereas someone living extravagantly was obviously rich and therefore could also afford the benevolence. [ 1 ] [ 2 ] Morton's fork may have been invented by another of Henry's supporters, Richard Foxe . [ 3 ] The " Morton's fork coup " is a manoeuvre in the game of bridge that uses the principle of Morton's fork. [ 4 ] [ 5 ]
https://en.wikipedia.org/wiki/Morton's_fork
In fluid dynamics , the Morton number ( Mo ) is a dimensionless number used together with the Eötvös number or Bond number to characterize the shape of bubbles or drops moving in a surrounding fluid or continuous phase, c . [ 1 ] It is named after Rose Morton , who described it with W. L. Haberman in 1953. [ 2 ] [ 3 ] The Morton number is defined as M o = g μ c 4 Δ ρ ρ c 2 σ 3 {\displaystyle \mathrm {Mo} ={\frac {g\,\mu _{c}^{4}\,\Delta \rho }{\rho _{c}^{2}\,\sigma ^{3}}}} where g is the acceleration of gravity, μ c {\displaystyle \mu _{c}} is the viscosity of the surrounding fluid, ρ c {\displaystyle \rho _{c}} the density of the surrounding fluid, Δ ρ {\displaystyle \Delta \rho } the difference in density of the phases, and σ {\displaystyle \sigma } is the surface tension coefficient. For the case of a bubble with a negligible inner density the Morton number can be simplified to M o = g μ c 4 ρ c σ 3 . {\displaystyle \mathrm {Mo} ={\frac {g\,\mu _{c}^{4}}{\rho _{c}\,\sigma ^{3}}}.} The Morton number can also be expressed by using a combination of the Weber number , Froude number and Reynolds number : M o = W e 3 F r 2 R e 4 . {\displaystyle \mathrm {Mo} ={\frac {\mathrm {We} ^{3}}{\mathrm {Fr} ^{2}\,\mathrm {Re} ^{4}}}.} The Froude number in the above expression is defined as F r = V g d {\displaystyle \mathrm {Fr} ={\frac {V}{\sqrt {gd}}}} where V is a reference velocity and d is the equivalent diameter of the drop or bubble.
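For a quick numerical check of the definition above, the short Python calculation below evaluates Mo for an air bubble rising in water at room temperature; the property values are typical textbook figures assumed for illustration.

# Morton number for an air bubble in water (illustrative property values).
g = 9.81          # gravitational acceleration, m/s^2
mu_c = 1.0e-3     # viscosity of water, Pa*s
rho_c = 998.0     # density of water, kg/m^3
rho_d = 1.2       # density of air, kg/m^3
sigma = 0.072     # air-water surface tension, N/m

delta_rho = rho_c - rho_d
Mo = g * mu_c**4 * delta_rho / (rho_c**2 * sigma**3)
print(Mo)   # roughly 2.6e-11, the very small value typical of clean air-water systems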
https://en.wikipedia.org/wiki/Morton_number
Mosaicism or genetic mosaicism is a condition in which a multicellular organism possesses more than one genetic line as the result of genetic mutation . [ 1 ] [ 2 ] This means that the various genetic lines resulted from a single fertilized egg . Mosaicism is one of several possible causes of chimerism , wherein a single organism is composed of cells with more than one distinct genotype . Genetic mosaicism can result from many different mechanisms including chromosome nondisjunction , anaphase lag , and endoreplication . [ 3 ] Anaphase lagging is the most common way by which mosaicism arises in the preimplantation embryo. [ 3 ] Mosaicism can also result from a mutation in one cell during development , in which case the mutation will be passed on only to its daughter cells (and will be present only in certain adult cells). [ 4 ] Somatic mosaicism is not generally inheritable as it does not generally affect germ cells. [ 2 ] In 1929, Alfred Sturtevant studied mosaicism in Drosophila , a genus of fruit fly. [ 5 ] H. J. Muller in 1930 demonstrated that mosaicism in Drosophila is always associated with chromosomal rearrangements , and Schultz in 1936 showed that, in all cases studied, these rearrangements were associated with heterochromatic inert regions. Several hypotheses on the nature of such mosaicism were proposed. One hypothesis assumed that mosaicism appears as the result of a break and loss of chromosome segments. Curt Stern in 1935 proposed that the structural changes in the chromosomes took place as a result of somatic crossing over, producing mutations or small chromosomal rearrangements in somatic cells. On this view, the inert region causes an increase in the frequency of mutations or small chromosomal rearrangements in active segments adjacent to inert regions. [ 6 ] In the 1930s, Stern demonstrated that genetic recombination , normal in meiosis , can also take place in mitosis . [ 7 ] [ 8 ] When it does, it results in somatic (body) mosaics. These organisms contain two or more genetically distinct types of tissue. [ 9 ] The term somatic mosaicism was used by CW Cotterman in 1956 in his seminal paper on antigenic variation . [ 10 ] In 1944, M. L. Belgovskii proposed that mosaicism could not account for certain mosaic expressions caused by chromosomal rearrangements involving heterochromatic inert regions. The associated weakening of biochemical activity led to what he called a genetic chimera . [ 6 ] Germline or gonadal mosaicism is a particular form of mosaicism wherein some gametes —i.e., sperm or oocytes —carry a mutation, but the rest are normal. [ 11 ] [ 12 ] The cause is usually a mutation that occurred in an early stem cell that gave rise to all or part of the gametes. Somatic mosaicism (also known as clonal mosaicism) occurs when the somatic cells of the body are of more than one genotype. In the more common mosaics, different genotypes arise from a single fertilized egg cell, due to mitotic errors at the first or later cleavages. Somatic mutation leading to mosaicism is prevalent in the beginning and end stages of human life. [ 10 ] Somatic mosaics are common in embryogenesis due to retrotransposition of long interspersed nuclear element-1 (LINE-1 or L1) and Alu transposable elements . [ 10 ] In early development, DNA from undifferentiated cell types may be more susceptible to mobile element invasion due to long, unmethylated regions in the genome. [ 10 ] Further, the accumulation of DNA copy errors and damage over a lifetime leads to greater occurrences of mosaic tissues in aging humans.
As longevity has increased dramatically over the last century, the human genome may not have had time to adapt to the cumulative effects of mutagenesis . [ 10 ] Thus, cancer research has shown that somatic mutations are increasingly present throughout a lifetime and are responsible for most leukemias , lymphomas , and solid tumors. [ 13 ] The most common form of mosaicism found through prenatal diagnosis involves trisomies . Although most forms of trisomy are due to problems in meiosis and affect all cells of the organism, some cases occur where the trisomy occurs in only a selection of the cells. This may be caused by a nondisjunction event in an early mitosis, resulting in a loss of a chromosome from some trisomic cells. [ 14 ] Generally, this leads to a milder phenotype than in nonmosaic patients with the same disorder. In rare cases, intersex conditions can be caused by mosaicism where some cells in the body have XX and others XY chromosomes ( 46, XX/XY ). [ 15 ] [ 16 ] In the fruit fly Drosophila melanogaster , where a fly possessing two X chromosomes is a female and a fly possessing a single X chromosome is a sterile male, a loss of an X chromosome early in embryonic development can result in sexual mosaics, or gynandromorphs . [ 5 ] [ 17 ] Likewise, a loss of the Y chromosome can result in XY/X mosaic males. [ 18 ] An example of this is one of the milder forms of Klinefelter syndrome , called 46,XY/47,XXY mosaic, wherein some of the patient's cells contain XY chromosomes and some contain XXY chromosomes. The 46/47 annotation indicates that the XY cells have the normal number of 46 total chromosomes, and the XXY cells have a total of 47 chromosomes. Monosomies can also present with some form of mosaicism. The only non-lethal full monosomy occurring in humans is the one causing Turner's syndrome . Around 30% of Turner's syndrome cases demonstrate mosaicism, while complete monosomy (45, X) occurs in about 50–60% of cases. Mosaicism is not necessarily deleterious, though. Revertant somatic mosaicism is a rare recombination event with a spontaneous correction of a mutant, pathogenic allele . [ 19 ] In revertant mosaicism, the healthy tissue formed by mitotic recombination can outcompete the original, surrounding mutant cells in tissues such as blood and epithelia that regenerate often. [ 19 ] In the skin disorder ichthyosis with confetti , normal skin spots appear early in life and increase in number and size over time. [ 19 ] Other endogenous factors can also lead to mosaicism, including mobile elements , DNA polymerase slippage, and unbalanced chromosome segregation . [ 10 ] Exogenous factors include nicotine and UV radiation . [ 10 ] Somatic mosaics have been created in Drosophila using X‑ray treatment, and the use of irradiation to induce somatic mutation has been a useful technique in the study of genetics. [ 20 ] True mosaicism should not be mistaken for the phenomenon of X-inactivation , where all cells in an organism have the same genotype, but a different copy of the X chromosome is expressed in different cells. The latter is the case in normal (XX) female mammals, although it is not always visible from the phenotype (as it is in calico cats ). However, all multicellular organisms are likely to be somatic mosaics to some extent. [ 21 ] Gonosomal mosaicism is a type of somatic mosaicism that occurs very early in the organism's development and is thus present within both germline and somatic cells.
[ 1 ] [ 22 ] Somatic mosaicism is not generally inheritable as it does not usually affect germ cells. In the instance of gonosomal mosaicism, however, organisms have the potential to pass the genetic alteration on to potential offspring, because the altered allele is present in both somatic and germline cells. [ 22 ] A frequent type of neuronal genomic mosaicism is copy number variation . Possible sources of such variation were suggested to be incorrect repairs of DNA damage and somatic recombination . [ 23 ] One basic mechanism that can produce mosaic tissue is mitotic recombination or somatic crossover . It was first discovered by Curt Stern in Drosophila in 1936. The amount of tissue that is mosaic depends on where in the tree of cell division the exchange takes place. A phenotypic character called "twin spot" seen in Drosophila is a result of mitotic recombination. However, it also depends on the allelic status of the genes undergoing recombination. Twin spot occurs only if the heterozygous genes are linked in repulsion, i.e. the trans phase. The recombination needs to occur between the centromere and the adjacent gene. This gives an appearance of yellow patches on the wild-type background in Drosophila . Another example of mitotic recombination is Bloom's syndrome, which results from a mutation in the BLM gene. The resulting BLM protein, a RecQ helicase, is defective; the faulty unwinding of DNA during replication that follows is associated with the occurrence of this disease. [ 24 ] [ 25 ] Genetic mosaics are a particularly powerful tool when used in the commonly studied fruit fly , where specially selected strains frequently lose an X [ 17 ] or a Y [ 18 ] chromosome in one of the first embryonic cell divisions. These mosaics can then be used to analyze such things as courtship behavior [ 17 ] and female sexual attraction. [ 26 ] More recently, the use of a transgene incorporated into the Drosophila genome has made the system far more flexible. The flip recombinase (or FLP ) is a gene from the commonly studied yeast Saccharomyces cerevisiae that recognizes "flip recombinase target" (FRT) sites, which are short sequences of DNA, and induces recombination between them. FRT sites have been inserted transgenically near the centromere of each chromosome arm of D. melanogaster . The FLP gene can then be induced selectively, commonly using either the heat shock promoter or the GAL4/UAS system . The resulting clones can be identified either negatively or positively. In negatively marked clones, the fly is transheterozygous for a gene encoding a visible marker (commonly the green fluorescent protein ) and an allele of a gene to be studied (both on chromosomes bearing FRT sites). After induction of FLP expression, cells that undergo recombination will have progeny homozygous for either the marker or the allele being studied. Therefore, the cells that do not carry the marker (which are dark) can be identified as carrying a mutation. Using negatively marked clones is sometimes inconvenient, especially when generating very small patches of cells, where seeing a dark spot on a bright background is more difficult than a bright spot on a dark background. Creating positively marked clones is possible using the so-called MARCM ("mosaic analysis with a repressible cell marker") system, developed by Liqun Luo , a professor at Stanford University , and his postdoctoral student Tzumin Lee, who now leads a group at Janelia Farm Research Campus .
This system builds on the GAL4/UAS system, which is used to express GFP in specific cells. However, a globally expressed GAL80 gene is used to repress the action of GAL4, preventing the expression of GFP. Instead of using GFP to mark the wild-type chromosome as above, GAL80 serves this purpose, so that when it is removed by mitotic recombination , GAL4 is allowed to function, and GFP turns on. This results in the cells of interest being marked brightly against a dark background. [ 27 ]
https://en.wikipedia.org/wiki/Mosaic_(genetics)
Mosaic coevolution is a theory in which geographic location and community ecology shape differing coevolution between strongly interacting species in multiple populations. These populations may be separated by space and/or time. Depending on the ecological conditions, the interspecific interactions may be mutualistic or antagonistic. [ 1 ] In mutualisms, both partners benefit from the interaction, whereas one partner generally experiences decreased fitness in antagonistic interactions. Arms races consist of two species adapting ways to "one up" the other. Several factors affect these relationships, including hot spots, cold spots, and trait mixing. [ 2 ] Reciprocal selection occurs when a change in one partner puts pressure on the other partner to change in response. Hot spots are areas of strong reciprocal selection, while cold spots are areas with no reciprocal selection or where only one partner is present. [ 2 ] The three constituents of geographic structure that contribute to this particular type of coevolution are: natural selection in the form of a geographic mosaic, hot spots often surrounded by cold spots, and trait remixing by means of genetic drift and gene flow . [ 2 ] Mosaic coevolution, along with coevolution in general, most commonly occurs at the population level and is driven by both the biotic and the abiotic environment. These environmental factors can constrain coevolution and affect how far it can escalate. [ 3 ] The geographical mosaic theory was first described by Ehrlich and Raven in 1964 after studying butterflies that coevolve with plants. However, the idea of coevolution itself goes all the way back to Darwin. [ 3 ] A commonly used example of mutualism in mosaic coevolution is that of the plant and pollinator . Anderson and Johnson studied the relationship between the length of the proboscis of the long-tongued fly ( P. ganglbaueri) and the corolla tube length of Zaluzianskya microsiphon , a flowering plant endemic to South Africa. [ 4 ] They suspected, as Darwin did in 1862, that flowers would adapt to become longer in order to force the fly to insert more of its body into the flower to reach the nectar. This causes the fly's body to come in contact with the flower's pollen. The two characteristics were measured at several different geographic locations, and it was found that the length of the fly's proboscis exerted strong selective pressure on the length of the corolla of the flower. An increase in proboscis length was selected for when flowers were longer, because the flowers are the flies' primary food source. [ 3 ] Antagonistic interactions (e.g. host-parasite and predator-prey relationships) can often result in coevolutionary trait escalation (i.e. arms races). For example, prey and predator may both evolve faster running speed in order to maximize their fitness. The plant species Camellia japonica (the Japanese camellia) and its seed predator Curculio camelliae (the camellia weevil) are an example of a coevolutionary arms race. The length of the weevil's rostrum and the thickness of the fruit's pericarp are correlated, meaning that a change in one character prompts a change in the other. The weevil will use its rostrum to burrow into the center of the camellia fruit seeking a place to lay eggs, as the weevil larvae feed exclusively on the camellia seeds. This is a main cause of seed damage in the Japanese camellia and, in order to better protect its seeds, the plant will evolve to grow a thicker pericarp.
[ 5 ] In some areas, the pericarp of these fruits was found to be remarkably woody. [ 1 ] The pericarp of the camellia fruit was observed to be thicker in more southern locations than in the north. The areas of Hanyama and Yahazu, Japan are just under nine miles apart, but there was an 8 mm difference in pericarp thickness between the camellia populations sampled there. The length of the weevil's rostrum was found to be 5 mm longer in the area with thicker fruit. This shows that the survival of the Japanese camellia seeds in the south is dependent upon the thick pericarp as a form of protection. However, northern areas were found to have fruit with infested seeds regardless of the thickness of the pericarp. This suggests that the plants in the north were more susceptible to weevil attacks and that the two traits are not as strongly correlated as they were in southern areas. [ 5 ]
https://en.wikipedia.org/wiki/Mosaic_coevolution
Mosaic evolution (or modular evolution) is the concept, mainly from palaeontology , that evolutionary change takes place in some body parts or systems without simultaneous changes in other parts. [ 1 ] Another definition is the "evolution of characters at various rates both within and between species". [ 2 ] : 408 Its place in evolutionary theory comes under long-term trends or macroevolution . [ 2 ] In the neodarwinist theory of evolution , as postulated by Stephen Jay Gould , there is room for differing development, when a life form matures earlier or later, in shape and size. This is due to allomorphism . Organs develop at differing rhythms as a creature grows and matures. Thus a " heterochronic clock" has three variants: 1) time, as a straight line; 2) general size, as a curved line; 3) shape, as another curved line. [ 3 ] When a creature is advanced in size, it may develop at a slower rate. Alternatively, it may maintain its original size or, if delayed, it may result in a larger-sized creature. That alone is insufficient to understand the heterochronic mechanism. Size must be combined with shape, so a creature may retain paedomorphic features if retarded in shape, or present a recapitulatory appearance when advanced in shape. These names are not very indicative, as past theories of development were very confusing. [ 3 ] A creature in its ontogeny may combine heterochronic features in six vectors, although Gould considers that there is some binding with growth and sexual maturation. A creature may, for example, present some neotenic features and retarded development, resulting in new features derived from an original creature only by regulatory genes. Most novel human features (compared to closely related apes) were of this nature, not implying major change in structural genes, as was classically considered. [ 3 ] It is not claimed that this pattern is universal, but there is now a wide range of examples from many different taxa. Although mosaic evolution is usually seen in terms of animals such as Darwin's finches , it can also be seen in the evolutionary process of hominins. To help further explain the meaning of mosaic evolution in hominins, mosaicism can be broken down into three subgroups. Group 1 includes related species developing independently, each of which carries deep variability in its own morphological structure. Examples of this can be seen in comparisons of A. sediba , H. naledi , and H. floresiensis . Group 2 relies on the different environmental impacts on the changes of a species. An example of this is the variability of bipedalism forming independently within all related species of hominin. Lastly, Group 3 involves the presence of behaviors such as human language. Language is a mosaic composite of various elements working together for one specific attribute, and it is not a single trait an offspring can inherit directly. [ 14 ] In addition, it has been shown that an increase in social interactions corresponds to the evolution of human intelligence, or in other words, an increase in brain size . This is provided and shown by Robin Dunbar 's social brain hypothesis. [ 15 ] Moreover, this can be used as a level of transition in human evolution, which also includes dental shape. [ 16 ] Brain size has shown intra-specific mosaic variability within its own development, as these differences are a result of environmental limitations.
In other words, independent variability of brain structure is seen more when brain regions are unassociated with one another, ultimately giving rise to perceptible features. Comparing current brain size and capacity between humans and chimpanzees made it possible to predict the evolutionary change from their ancestors; this comparison indicated that "local spatial interactions" were the main source of the limitations. [ 17 ] Furthermore, alongside cranial capacity and the structure of the brain, dental shape provides another example of mosaicism. Using the fossil record, dental shape showed mosaic evolution within the canine teeth of early hominins. Reduction of canine size is seen as an authentication mark of human ancestral evolution. However, A. anamensis , discovered in Kenya, was found to have the largest mandibular canine root in Australopithecus evolution. This alters the authentication mark because the dimorphism between root and crown reduction has not been assessed. Although canine reduction probably occurred prior to the evolution of Australopithecus , "changes in canine shape, in both crowns and roots, occurred in a mosaic fashion throughout the A. anamensis–afarensis lineage". [ 18 ]
https://en.wikipedia.org/wiki/Mosaic_evolution
Mosaic gold or bronze powder refers to tin(IV) sulfide [ 1 ] as used as a pigment in bronzing and gilding wood and metal work. It is obtained as a yellow scaly crystalline powder. The alchemists referred to it as aurum musivum, or aurum mosaicum. [ 2 ] The term mosaic gold has also been used to refer to ormolu [ 3 ] and to cut shapes of gold leaf , some darkened for contrast, arranged as a mosaic . [ 4 ] The term bronze powder may also refer to powdered bronze alloy. A recipe for mosaic gold is already provided in the 3rd-century A.D. treatise Baopuzi , composed by the Chinese alchemist Ge Hong . [ 5 ] The earliest sources for its preparation in Europe, under the name porporina or purpurina , are the late 13th-century North Italian Liber colorum secundum Magistrum Bernardum and Cennino Cennini 's Libro dell'arte from the 1420s. [ 6 ] Instructions became more widespread and varied thereafter, [ 7 ] with the recipe collection Liber illuministarum of around 1500, from Tegernsee Abbey in Bavaria, alone offering six different methods for its preparation. [ 8 ] Alchemists prepared it by combining mercury , tin , sal ammoniac , and sublimated sulfur (fleur de soufre), grinding, mixing, then setting them for three hours in a sand heat. The dirty sublimate being taken off, aurum mosaicum was found at the bottom of the matrass . In the past it was used for medical purposes in most chronic and nervous ailments, and particularly convulsions of children; [ 9 ] however, it is no longer recommended for any medical uses. [ citation needed ]
https://en.wikipedia.org/wiki/Mosaic_gold
A mosaic virus is any virus that causes infected plant foliage to have a mottled appearance. Such viruses come from a variety of unrelated lineages and consequently there is no taxon that unites all mosaic viruses. Many virus species contain the word 'mosaic' in their English-language common names, as catalogued under the nomenclature and taxonomy of the ICTV 2022 release. However, not all viruses that may cause a mottled appearance belong to species that include the word "mosaic" in the name. [ citation needed ]
https://en.wikipedia.org/wiki/Mosaic_virus
In crystallography , mosaicity is a measure of the spread of crystal plane orientations. A mosaic crystal is an idealized model of an imperfect crystal, imagined to consist of numerous small perfect crystals ( crystallites ) that are to some extent randomly misoriented. Empirically, mosaicities can be determined by measuring rocking curves . Diffraction by mosaics is described by the Darwin–Hamilton equations . The mosaic crystal model goes back to a theoretical analysis of X-ray diffraction by C. G. Darwin (1922). Currently, most studies follow Darwin in assuming a Gaussian distribution of crystallite orientations centered on some reference orientation. The mosaicity is commonly equated with the standard deviation of this distribution. An important application of mosaic crystals is in monochromators for x-ray and neutron radiation . The mosaicity enhances the reflected flux, and allows for some phase-space transformation . Pyrolytic graphite (PG) can be produced in the form of mosaic crystals (HOPG: highly ordered PG) with controlled mosaicity of up to a few degrees. To describe diffraction by a thick mosaic crystal, it is usually assumed that the constituent crystallites are so thin that each of them reflects at most a small fraction of the incident beam. Primary extinction and other dynamical diffraction effects can then be neglected. Reflections by different crystallites add incoherently , and can therefore be treated by classical transport theory . When only beams within the scattering plane are considered, they obey the Darwin–Hamilton equations (Darwin 1922, Hamilton 1957), in which the unit vectors k̂ are the directions of the incident and diffracted beams, I± are the corresponding currents, μ is the Bragg reflectivity, and σ accounts for losses by absorption and by thermal and elastic diffuse scattering . A generic analytical solution was obtained remarkably late ( Sears 1997; for the case σ = 0, Bacon/Lowde 1948). An exact treatment must allow for three-dimensional trajectories of multiply reflected radiation. The Darwin–Hamilton equations are then replaced by a Boltzmann equation with a very special transport kernel. In most cases, the resulting corrections to the Darwin–Hamilton–Sears solutions are rather small (Wuttke 2014).
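As a hedged illustration of the empirical procedure mentioned above (determining mosaicity from a rocking curve, with the mosaicity equated to the standard deviation of an assumed Gaussian distribution), the sketch below fits a Gaussian to synthetic intensity-versus-rocking-angle data. The data, parameter values, and function names are invented for illustration and are not taken from the sources cited in this article.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(omega, amplitude, center, sigma, background):
    """Gaussian line shape commonly assumed for the mosaic distribution."""
    return amplitude * np.exp(-0.5 * ((omega - center) / sigma) ** 2) + background

# Synthetic rocking curve: intensity versus rocking angle omega (degrees).
# In a real measurement these values would come from the diffractometer.
rng = np.random.default_rng(0)
omega = np.linspace(-1.5, 1.5, 301)               # rocking angle (deg)
true_sigma = 0.4                                  # "true" mosaicity used to make the fake data (deg)
intensity = gaussian(omega, 1000.0, 0.0, true_sigma, 50.0)
intensity += rng.normal(0.0, np.sqrt(intensity))  # counting-statistics-like noise

# Fit a Gaussian; the fitted standard deviation is reported as the mosaicity.
popt, _ = curve_fit(gaussian, omega, intensity, p0=[800.0, 0.0, 0.3, 0.0])
mosaicity = abs(popt[2])
fwhm = 2.355 * mosaicity   # FWHM = 2*sqrt(2*ln 2)*sigma, if a width is preferred
print(f"fitted mosaicity (sigma): {mosaicity:.3f} deg, FWHM: {fwhm:.3f} deg")
```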
https://en.wikipedia.org/wiki/Mosaicity
The chemical agent used in the Moscow theatre hostage crisis of 26 October 2002 has never been definitively revealed by the Russian authorities, though many possible identities have been speculated. An undisclosed incapacitating agent was used by the Russian authorities in order to subdue the Chechen terrorists who had taken control of a crowded theater. At the time, the agent was surmised to be some sort of surgical anesthetic or chemical weapon . Afterwards, there were numerous speculations about the identity of the substance that was used to end the siege. Several chemicals were proposed, such as the tranquilizer diazepam (Valium), the anticholinergic BZ , the highly potent oripavine -derived Bentley-series opioid etorphine , another highly potent opioid such as fentanyl or an analogue thereof (for example 3-methylfentanyl ), and the anaesthetic halothane . Foreign embassies in Moscow issued official requests for more information on the gas to aid in treatment, but were publicly ignored. While still refusing to identify the gas, on October 28, 2002, the Russian government informed the U.S. Embassy of some of the gas's effects. Based on this information and examinations of victims, doctors suggested that the compound might be a morphine derivative. The Russian media reported the drug was Kolokol-1 , either mefentanyl or α-methylfentanyl dissolved in a halothane base. It was reported that efforts to treat victims were complicated because the Russian government refused to inform doctors what type of gas had been used. In the records of the official investigation of the act, the agent is referred to as a certain "gaseous substance"; in other cases it is referred to as an "unidentified chemical substance". [ 1 ] Two days after the incident, on October 30, 2002, Russia responded to increasing domestic and international pressure with a statement on the unknown gas by Health Minister Yuri Shevchenko . [ 2 ] He said that the gas was a fentanyl derivative, [ 3 ] an extremely powerful opioid . Boris Grebenyuk, the All-Russia Disaster Relief Service chief, said the services used trimethyl phentanylum ( 3-methylfentanyl , a fentanyl analog that is about 1000 times more potent than morphine and was manufactured and abused in the former Soviet Union); New Scientist pointed out that 3-methylfentanyl is not a gas. [ 4 ] [ disputed – discuss ] Clothing samples from British survivors of the attack showed the presence of the narcotics remifentanil and carfentanil . The same study detected norcarfentanil in another survivor's urine. [ 5 ] A German toxicology professor who examined several German hostages said that their blood and urine contained halothane , a once-common inhalation anaesthetic which is now seldom used in Western countries, and that it was likely the gas had additional components. [ 6 ] However, halothane has a strong odor (although often described as "pleasant" by comparison with other anesthetic gases ). Thus, by the time the whole theatre area could have been filled with halothane to a concentration compatible with loss of consciousness (0.5–3%), the Chechens inside would likely have realized they were being attacked. Additionally, recovery of consciousness is rapid after the flow of gas is interrupted, unlike with high-dose fentanyl administration. Therefore, although halothane might have been a component in the aerosol, it was probably not a major component, [ 6 ] or perhaps it was a metabolite of another drug.
Some of the later publications in medical journals assumed that Russian special forces used an aerosol of a fentanyl derivative, such as carfentanil, and an inhalational anesthetic, such as halothane. [ 7 ] Writing in the Moscow daily Komsomolskaya Pravda , Viktor Baranets, a former Russian Defense Ministry official, stated that the Ministry of the Interior knew that any normal riot control agent , such as pepper spray or tear gas , would allow the Chechens time to harm the hostages. They decided to use the strongest agent available. The paper identified the material as a KGB -developed "psycho-chemical gas" known as Kolokol-1 , and reported that "the gas had such an influence on [Chechen siege leader Movsar] Barayev that he couldn't get up from [his] desk". [ 8 ] Russian doctors who helped hostages in the first minutes after the siege used a common antidote to fentanyl, naloxone , by injection. [ 9 ] [ unreliable source? ] But the effects of the fentanyl derivative's application, which can exacerbate chronic diseases , [ citation needed ] grew acute for the hostages, who had stayed in a closed space without water and food for several days. [ 9 ] Although the exact nature of the active chemical has not been verified, the Russian-language newspaper Gazeta.ru claimed that the chemical used had been 3-methylfentanyl , attributing this information to "experts from the Moscow State University chemistry department." Prof. Thomas Zilker and Dr. Mark Wheelis , interviewed for the BBC 's Horizon documentary series, dispute that the gas could have been based on fentanyl. [ 10 ] Thomas Zilker: It seems to be different from fentanyl, carfentanil and sufentanil , but it has to be, it has to have the potency of carfentanil at least because otherwise it wouldn't work in this circumstance. So the Russians obviously have designed a new fentanyl which we cannot detect in the west. Mark Wheelis: The fact that the Russians did it and got away with a lethality of less than twenty percent suggests to me that very likely there may have been a novel agent with a higher safety margin than normal fentanyl. In 2012, Riches et al. [ 11 ] found evidence from liquid chromatography-tandem mass spectrometry analysis of extracts of clothing from two British survivors, and urine from a third survivor, that the aerosol was a mixture of carfentanil and remifentanil, the exact proportions of which they could not determine. Assuming that these were the only active constituents (which has not been verified by the Russian military), the primary acute toxic effect on the theatre victims would have been opioid-induced apnea ; in this case mechanical ventilation and/or treatment with naloxone , the specific antidote for poisoning with carfentanil in humans, would have been life-saving for many or all victims. If fentanyl or a fentanyl derivative was used, the hostage liberators had only minutes to inject naloxone into the hostages before death by asphyxiation occurred.
https://en.wikipedia.org/wiki/Moscow_hostage_crisis_chemical_agent
Moser's worm problem (also known as mother worm's blanket problem ) is an unsolved problem in geometry formulated by the Austrian-Canadian mathematician Leo Moser in 1966. The problem asks for the region of smallest area that can accommodate every plane curve of length 1. Here "accommodate" means that the curve may be rotated and translated to fit inside the region. In some variations of the problem, the region is restricted to be convex . For example, a circular disk of radius 1/2 can accommodate any plane curve of length 1 by placing the midpoint of the curve at the center of the disk. Another possible solution has the shape of a rhombus with vertex angles of 60° and 120° and with a long diagonal of unit length. [ 2 ] However, these are not optimal solutions; other shapes are known that solve the problem with smaller areas. It is not completely trivial that a minimum-area cover exists. An alternative possibility would be that there is some minimal area that can be approached but not actually attained. However, there does exist a smallest convex cover. Its existence follows from the Blaschke selection theorem . [ 3 ] It is also not trivial to determine whether a given shape forms a cover. Gerriets & Poole (1974) conjectured that a shape accommodates every unit-length curve if and only if it accommodates every unit-length polygonal chain with three segments, a more easily tested condition, but Panraksa, Wetzel & Wichiramala (2007) showed that no finite bound on the number of segments in a polychain would suffice for this test. The problem remains open, but over a sequence of papers researchers have tightened the gap between the known lower and upper bounds. In particular, Norwood & Poole (2003) constructed a (nonconvex) universal cover and showed that the minimum shape has area at most 0.260437; Gerriets & Poole (1974) and Norwood, Poole & Laidacker (1992) gave weaker upper bounds. In the convex case, Wang (2006) improved the upper bound to 0.270911861. Khandhawit, Pagonakis & Sriswasdi (2013) used a min-max strategy for the area of a convex set containing a segment, a triangle and a rectangle to show a lower bound of 0.232239 for a convex cover. In the 1970s, John Wetzel conjectured that a 30° circular sector of unit radius is a cover with area π/12 ≈ 0.2618. Two proofs of the conjecture were independently claimed by Movshovich & Wetzel (2017) and by Panraksa & Wichiramala (2021) . If confirmed, this will reduce the upper bound for the convex cover by about 3%.
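For concreteness, the areas of the three particular covers mentioned above can be computed directly. This is a routine check added here for illustration; it is not taken from the cited papers.

```latex
\begin{align*}
\text{disk of radius } \tfrac{1}{2}: \quad
  & A = \pi\left(\tfrac{1}{2}\right)^{2} = \tfrac{\pi}{4} \approx 0.7854,\\
\text{rhombus, angles } 60^\circ/120^\circ,\ \text{long diagonal } 1: \quad
  & A = \tfrac{1}{2}\, d_{1} d_{2} = \tfrac{1}{2}\cdot 1 \cdot \tfrac{1}{\sqrt{3}} = \tfrac{\sqrt{3}}{6} \approx 0.2887,\\
\text{Wetzel's } 30^\circ \text{ sector of unit radius}: \quad
  & A = \tfrac{30^\circ}{360^\circ}\,\pi\cdot 1^{2} = \tfrac{\pi}{12} \approx 0.2618.
\end{align*}
```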
https://en.wikipedia.org/wiki/Moser's_worm_problem
In number theory , the Moser–de Bruijn sequence is an integer sequence named after Leo Moser and Nicolaas Govert de Bruijn , consisting of the sums of distinct powers of 4. Equivalently, they are the numbers whose binary representations are nonzero only in even positions. The numbers in this sequence grow in proportion to the square numbers . They are the squares for a modified form of arithmetic without carrying . The difference of two Moser–de Bruijn numbers, multiplied by two, is never square. Every natural number can be formed in a unique way as the sum of a Moser–de Bruijn number and twice a Moser–de Bruijn number. This representation as a sum defines a one-to-one correspondence between integers and pairs of integers, listed in order of their positions on a Z-order curve . The Moser–de Bruijn sequence can be used to construct pairs of transcendental numbers that are multiplicative inverses of each other and both have simple decimal representations. A simple recurrence relation allows values of the Moser–de Bruijn sequence to be calculated from earlier values, and can be used to prove that the Moser–de Bruijn sequence is a 2-regular sequence . The numbers in the Moser–de Bruijn sequence are formed by adding distinct powers of 4. The sequence lists these numbers in sorted order; it begins 0, 1, 4, 5, 16, 17, 20, 21, 64, 65, 68, 69, ... [ 1 ] For instance, 69 belongs to this sequence because it equals 64 + 4 + 1, a sum of three distinct powers of 4. Another definition of the Moser–de Bruijn sequence is that it is the ordered sequence of numbers whose binary representation has nonzero digits only in the even positions. For instance, 69 belongs to the sequence, because its binary representation 1000101 (base 2) has nonzero digits in the positions for 2^6, 2^2, and 2^0, all of which have even exponents. The numbers in the sequence can also be described as the numbers whose base-4 representation uses only the digits 0 or 1. [ 1 ] For a number in this sequence, the base-4 representation can be found from the binary representation by skipping the binary digits in odd positions, which should all be zero. The hexadecimal representation of these numbers contains only the digits 0, 1, 4, 5. For instance, 69 = 1011 (base 4) = 45 (base 16). Equivalently, they are the numbers whose binary and negabinary representations are equal. [ 1 ] [ 2 ] Because there are no two consecutive nonzeros in their binary representations, the Moser–de Bruijn sequence forms a subsequence of the fibbinary numbers . It follows from either the binary or base-4 definitions of these numbers that they grow roughly in proportion to the square numbers . The number of elements in the Moser–de Bruijn sequence that are below any given threshold n is proportional to √n, [ 3 ] a fact which is also true of the square numbers. More precisely, the number oscillates between √n (for numbers of the form n = 4^k) and √(3n) (for n ∼ 4^k / 3). In fact the numbers in the Moser–de Bruijn sequence are the squares for a version of arithmetic without carrying on binary numbers, in which the addition and multiplication of single bits are respectively the exclusive or and logical conjunction operations. [ 4 ]
In connection with the Furstenberg–Sárközy theorem on sequences of numbers with no square difference, Imre Z. Ruzsa found a construction for large square-difference-free sets that, like the binary definition of the Moser–de Bruijn sequence, restricts the digits in alternating positions in the base-b numbers. [ 5 ] When applied to the base b = 2, Ruzsa's construction generates the Moser–de Bruijn sequence multiplied by two, a set that is again square-difference-free. However, this set is too sparse to provide nontrivial lower bounds for the Furstenberg–Sárközy theorem. The Moser–de Bruijn sequence obeys a property similar to that of a Sidon sequence : the sums x + 2y, where x and y both belong to the Moser–de Bruijn sequence, are all unique. No two of these sums have the same value. Moreover, every integer n can be represented as a sum x + 2y, where x and y both belong to the Moser–de Bruijn sequence. To find the sum that represents n, compute x = n & 0x55555555, the bitwise Boolean and of n with a binary value (expressed here in hexadecimal ) that has ones in all of its even positions, and set y = (n − x)/2. [ 1 ] [ 6 ] The Moser–de Bruijn sequence is the only sequence with this property, that all integers have a unique expression as x + 2y. It is for this reason that the sequence was originally studied by Moser (1962) . [ 7 ] Extending the property, De Bruijn (1964) found infinitely many other linear expressions like x + 2y that, when x and y both belong to the Moser–de Bruijn sequence, uniquely represent all integers. [ 8 ] [ 9 ] Decomposing a number n into n = x + 2y, and then applying to x and y an order-preserving map from the Moser–de Bruijn sequence to the integers (by replacing the powers of four in each number by the corresponding powers of two), gives a bijection from non-negative integers to ordered pairs of non-negative integers. The inverse of this bijection gives a linear ordering on the points in the plane with non-negative integer coordinates, which may be used to define the Z-order curve . [ 1 ] [ 10 ] In connection with this application, it is convenient to have a formula to generate each successive element of the Moser–de Bruijn sequence from its predecessor. This can be done as follows. If x is an element of the sequence, then the next member after x can be obtained by filling in the bits in odd positions of the binary representation of x by ones, adding one to the result, and then masking off the filled-in bits. Filling the bits and adding one can be combined into a single addition operation. That is, the next member is the number given by the formula [ 1 ] [ 6 ] [ 10 ] (x + 0xaaaaaaab) & 0x55555555. The two hexadecimal constants appearing in this formula can be interpreted as the 2-adic numbers 1/3 and −1/3, respectively. [ 1 ] Golomb (1966) investigated a subtraction game , analogous to subtract a square , based on this sequence. In Golomb's game, two players take turns removing coins from a pile of n coins.
In each move, a player may remove any number of coins that belongs to the Moser–de Bruijn sequence. Removing any other number of coins is not allowed. The winner is the player who removes the last coin. As Golomb observes, the "cold" positions of this game (the ones in which the player who is about to move is losing) are exactly the positions of the form 2y where y belongs to the Moser–de Bruijn sequence. A winning strategy for playing this game is to decompose the current number of coins, n, into x + 2y where x and y both belong to the Moser–de Bruijn sequence, and then (if x is nonzero) to remove x coins, leaving a cold position to the other player. If x is zero, this strategy is not possible, and there is no winning move. [ 3 ]
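The decomposition n = x + 2y and the resulting game strategy described above are easy to verify computationally. The sketch below is an illustrative implementation, not from the cited sources; the bit constants are the ones quoted in the text (32-bit masks, so they assume n fits in 32 bits), while the function names and sanity checks are mine.

```python
def is_moser_de_bruijn(m):
    """True if m is a sum of distinct powers of 4, i.e. its base-4 digits are all 0 or 1."""
    while m:
        if m % 4 > 1:
            return False
        m //= 4
    return True

def decompose(n):
    """Write n = x + 2*y with x and y both in the Moser-de Bruijn sequence."""
    x = n & 0x55555555          # keep the bits in even positions (32-bit mask from the text)
    y = (n - x) // 2            # the remaining odd-position bits, shifted down one place
    return x, y

def golomb_winning_move(n):
    """Coins to remove from a pile of n in Golomb's game, or None if the position is cold."""
    x, y = decompose(n)
    return x if x != 0 else None   # cold positions are exactly n = 2*y

# Sanity checks on small values.
for n in range(50):
    x, y = decompose(n)
    assert n == x + 2 * y and is_moser_de_bruijn(x) and is_moser_de_bruijn(y)
print(decompose(7))               # (5, 1): 7 = 5 + 2*1, with 5 and 1 both in the sequence
print(golomb_winning_move(10))    # 10 = 0 + 2*5 is a cold position, so None
```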
The Moser–de Bruijn sequence forms the basis of an example of an irrational number x with the unusual property that the decimal representations of x and 1/x can both be written simply and explicitly. Let E denote the Moser–de Bruijn sequence itself. Then for x = 3 ∑_{n∈E} 10^(−n) = 3.300330000000000330033…, a decimal number whose nonzero digits are in the positions given by the Moser–de Bruijn sequence, it follows that the nonzero digits of its reciprocal are located in positions 1, 3, 9, 11, ..., given by doubling the numbers in E and adding one to all of them: [ 11 ] [ 12 ] 1/x = 3 ∑_{n∈E} 10^(−2n−1) = 0.30300000303… . Alternatively, one can write (∑_{n∈E} 10^(−n)) (∑_{n∈E} 10^(−2n)) = 10/9. Similar examples also work in other bases. For instance, the two binary numbers whose nonzero bits are in the same positions as the nonzero digits of the two decimal numbers above are also irrational reciprocals. [ 13 ] These binary and decimal numbers, and the numbers defined in the same way for any other base by repeating a single nonzero digit in the positions given by the Moser–de Bruijn sequence, are transcendental numbers . Their transcendence can be proven from the fact that the long strings of zeros in their digits allow them to be approximated more accurately by rational numbers than would be allowed by Roth's theorem if they were algebraic numbers , having irrationality measure no less than 3. [ 12 ] The generating function F(x) = ∏_{i=0}^{∞} (1 + x^(4^i)) = 1 + x + x^4 + x^5 + x^16 + x^17 + ⋯, whose exponents in the expanded form are given by the Moser–de Bruijn sequence, obeys the functional equations [ 1 ] [ 2 ] F(x) F(x^2) = 1/(1 − x) and [ 14 ] F(x) = (1 + x) F(x^4). For example, this function can be used to describe the two decimal reciprocals given above: one is 3 F(1/10) and the other is (3/10) F(1/100). The fact that they are reciprocals is an instance of the first of the two functional equations. The partial products of the product form of the generating function can be used to generate the convergents of the continued fraction expansion of these numbers themselves, as well as multiples of them. [ 11 ] The Moser–de Bruijn sequence obeys a recurrence relation that allows the n-th value of the sequence, S(n) (starting at S(0) = 0), to be determined from the value at position ⌊n/2⌋: S(2n) = 4 S(n) and S(2n + 1) = 4 S(n) + 1. Iterating this recurrence allows any subsequence of the form S(2^i n + j) to be expressed as a linear function of the original sequence, meaning that the Moser–de Bruijn sequence is a 2-regular sequence . [ 15 ]
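The recurrence just stated can be cross-checked against the successor formula quoted earlier. The following sketch is illustrative only (the constant 0xAAAAAAAB is the 32-bit value from the text, so it applies to terms that fit in 32 bits); the function names are mine.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n: int) -> int:
    """n-th Moser-de Bruijn number via the recurrence S(2n) = 4*S(n), S(2n+1) = 4*S(n) + 1."""
    if n == 0:
        return 0
    return 4 * S(n // 2) + (n & 1)

def successor(x: int) -> int:
    """Next sequence member after x, using the bit trick from the text (32-bit constants)."""
    return (x + 0xAAAAAAAB) & 0x55555555

# Cross-check the two descriptions on the first few hundred terms.
terms = [S(n) for n in range(300)]
assert all(successor(terms[i]) == terms[i + 1] for i in range(len(terms) - 1))
print(terms[:12])   # [0, 1, 4, 5, 16, 17, 20, 21, 64, 65, 68, 69]
```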
https://en.wikipedia.org/wiki/Moser–de_Bruijn_sequence
Moshe Meiselman is an American-born Orthodox rabbi and rosh yeshiva (dean) of Yeshiva Toras Moshe in Jerusalem , which he established in 1982. He also founded and served as principal of Yeshiva University of Los Angeles (YULA) from 1977 to 1982. He is a descendant of the Lithuanian Jewish Soloveitchik rabbinic dynasty . [ 1 ] Rabbi Meiselman was born to Harry Meiselman, a dental surgeon , and Shulamit Soloveitchik, a teacher and Jewish school principal who attended New York University and Radcliffe College . On his father's side, he is a descendent of the rebbe (hereditary leader of a hasidic dynasty ) Baruch of Kossov. [ 2 ] On his mother's side, he is a descendant of the Soloveitchik rabbinic dynasty. His maternal grandfather was Rabbi Moshe Soloveichik and his maternal great-grandfather was Rabbi Chaim Soloveitchik , known as Reb Chaim Brisker. [ 2 ] His mother, Shulamit, authored the book The Soloveitchik Heritage: A Daughter's Memoir (1995). [ 3 ] Rabbi Meiselman was a nephew of Rabbi Dr. Joseph B. Soloveitchik , rosh yeshiva of R.I.E.T.S. , with whom, according to Rabbi Meiselman, he had study sessions on a near daily basis from the time he was 18 until he was 29 years old. [ 4 ] Rabbi Meiselman graduated from high school at the Boston Latin School [ 5 ] and then went on to attend Harvard College (which all three of Soloveitchik's children and his American grandchildren attended) and the Massachusetts Institute of Technology where he earned his doctorate in mathematics in 1967 with the thesis "The Operation Ring for Connective K-Theory ". [ 6 ] Rabbi Meiselman began his career teaching mathematics at City University of New York. After his marriage in 1971, he became a maggid shiur at Beis Medrash L'Torah in Skokie . Afterward, he taught at Yeshivas Brisk (Brisk Rabbinical College) in Chicago, headed for a time by his uncle, Rabbi Ahron Soloveichik . [ 2 ] In 1977 he moved to the West Coast and founded the Yeshiva University of Los Angeles (YULA), opening separate high school programs for boys and girls, a yeshivah gedolah , and a kolel . He also served as a posek (arbiter of Jewish law) for the local community. [ 2 ] In 1982, having built up enrollment to nearly 400 male and female students in YULA's various divisions, Rabbi Meiselman moved to Israel to open a yeshiva for American students, together with co-rosh yeshiva Rabbi Doniel Lehrfield (Rabbi Lehrfield and several other faculty members subsequently left to start another yeshiva, Bais Yisroel ). He named the new school Toras Moshe after his grandfather, Moshe. [ 2 ] [ 4 ] He selected Rabbis Michel Shurkin and Moshe Twersky , both close students of Rabbi Soloveitchik, to head the teaching staff. [ 2 ] In 2011, Meiselman reported about his yeshiva that "We have 96 boys in the beis medrash and 44 in the kollel, and almost all of our kollel yungerleit are home-grown". [ 7 ] Rabbi Meiselman is one of several grandchildren of Rabbi Moshe Soloveichik who have established yeshivas in Israel, perhaps the most famous being Rabbi Aharon Lichtenstein , son-in-law of Rabbi Joseph Soloveitchik, who established Yeshivat Har Etzion in the late 1960s. Yeshivat Reshit, [ 8 ] a popular yeshiva in Israel for American students in Beit Shemesh , was established by the Rabbis Marcus, also descendants of Rabbi Moshe Soloveitchik. Rabbi Meiselman is the author of several books and numerous magazine articles. His Jewish Woman in Jewish Law (1978) sparked much discussion among authors and feminists for his traditional Jewish response to feminism . 
Additionally, Rabbi Meiselman has authored Tiferes Tzvi, a commentary to the Rambam, as well as numerous articles on Talmudic study and thought in Hebrew. Rabbi Meiselman's 2013 book, Torah, Chazal and Science , promotes the theory that all unqualified scientific statements of the Talmudic sages are sourced from the word of God, who can not be wrong, and are therefore immutable: "All of Chazal’s (the Talmudic sages') definitive statements are to be taken as absolute fact [even] outside the realm of halakhah (Jewish law)". [ 9 ] The flip side of this thesis, and another major theme of the book, is that modern science is transitory and unreliable compared to the divine wisdom of the sages. His book has been criticized by Rabbi Dr. Natan Slifkin and others. [ 10 ] Following the opinion of some Haredi thinkers in the area of Holocaust theology , Meiselman has argued that the Holocaust was the result of Jewish cultural assimilation in Western Europe in the early twentieth century. He writes that "the turning away from the status of an am ha-nivhar , a chosen people, and the frightening rush toward assimilation were, according to the rules that govern Jewish destiny, the real causes for the Holocaust." [ 11 ] Rabbi Meiselman subscribes to Haredi views regarding the State of Israel and the Israel Defence Forces . He has stated that it is forbidden for a yeshiva student to join the Israeli army, and has criticized the Nachal Haredi , stating in an interview that Nachal Haredi has "not been successful in maintaining commitment to Torah." [ 12 ] In 2013, Rabbi Meiselman sat on the dais at a rally in NY against conscription of yeshiva students into the Israeli army. Both Satmar Rebbes were involved in the planning of, and also sat at the dais at, this rally. [ 13 ] In commenting on Modern Orthodox innovations with regard to women, Rabbi Meiselman has stated that "when it comes to the rabbis and the people who are at the forefront of pushing for these changes so that they can 'update' Orthodoxy to conform with today’s 'progressive' cultural norm ... [the] common denominator between nearly all of them is that they are largely ignorant of halacha and devoid of serious Torah scholarship. If your knowledge of Torah and halacha are limited, then you are not limited by halacha. One is never confined by things that one doesn't know and never learned!" [ 4 ] Some of Yeshiva Toras Moshe's faculty members dissuade students from enrolling at Yeshiva University when they leave Toras Moshe, while others are less opposed. [ 14 ] Meiselman is married to Rivkah Leah Eichenstein. [ 2 ]
https://en.wikipedia.org/wiki/Moshe_Meiselman
Mosher's acid , or α-methoxy-α-trifluoromethylphenylacetic acid ( MTPA ), is a carboxylic acid which was first used by Harry Stone Mosher as a chiral derivatizing agent . [ 1 ] [ 2 ] [ 3 ] [ 4 ] It is a chiral molecule, existing as R and S enantiomers. As a chiral derivatizing agent, it reacts with an alcohol or amine [ 5 ] of unknown stereochemistry to form an ester or amide. The absolute configuration of the ester or amide is then determined by proton and/or ¹⁹F NMR spectroscopy . Mosher's acid chloride , the acid chloride form, is sometimes used because it has better reactivity. [ 6 ]
https://en.wikipedia.org/wiki/Mosher's_acid
A moss bioreactor is a photobioreactor used for the cultivation and propagation of mosses . It is usually used in molecular farming for the production of recombinant protein using transgenic moss. In environmental science, moss bioreactors are used to multiply peat mosses , e.g. by the Mossclone consortium, to monitor air pollution . [ citation needed ] Moss is a very undemanding photoautotrophic organism that has been kept in vitro for research purposes since the beginning of the 20th century. [ 1 ] The first moss bioreactors for the model organism Physcomitrella patens were developed in the 1990s to comply with the safety standards regarding the handling of genetically modified organisms and to obtain sufficient biomass for experimental purposes. [ 2 ] The moss bioreactor is used to cultivate moss as a suspension culture in agitated, aerated liquid medium. The culture is kept under lighting with temperature and pH value held constant. The culture medium (often a minimal medium ) contains all nutrients and minerals needed for growth of the moss. [ 3 ] To ensure a maximum growth rate, the moss is kept at the protonema stage by continuous mechanical disruption, e.g. by using rotating blades. [ 4 ] Once the density of the culture has reached a certain threshold, the lack of nutrients and the increasing concentration of phytohormones in the medium trigger the differentiation of the protonema to the adult gametophyte . At this point the culture has to be diluted with fresh medium if it is intended for further use. According to the intended yield, this basic principle can be adapted to various types and sizes of bioreactors. The cultivation chamber can, for example, consist of a column, a tube, or exchangeable plastic bags. [ 5 ] Various biopharmaceuticals have already been produced using moss bioreactors. [ 6 ] Ideally, the recombinant protein can be purified directly from the culture medium. [ 7 ] One example of this production method is factor H : this molecule is part of the human complement system . Defects in the corresponding gene are associated with human diseases such as severe kidney and retinal disorders . Biologically active recombinant factor H was produced in a moss bioreactor for the first time in 2011. [ 8 ] Production of the enzyme alpha-galactosidase in moss bioreactors has been approved by the German Federal Institute for Drugs and Medical Devices . [ 9 ] [ 10 ] It will be tested as an enzyme replacement therapy in the treatment of Fabry's disease . Phase 1 of the clinical trial was completed in 2017. [ 11 ]
https://en.wikipedia.org/wiki/Moss_bioreactor
This article documents the most distant astronomical objects discovered and verified so far, and the time periods in which they were so classified. For comparisons with the light travel distance of the astronomical objects listed below, the age of the universe since the Big Bang is currently estimated as 13.787±0.020 Gyr. [ 1 ] Distances to remote objects, other than those in nearby galaxies, are nearly always inferred by measuring the cosmological redshift of their light. By their nature, very distant objects tend to be very faint, and these distance determinations are difficult and subject to errors. An important distinction is whether the distance is determined via spectroscopy or using a photometric redshift technique. The former is generally both more precise and also more reliable, in the sense that photometric redshifts are more prone to being wrong due to confusion with lower-redshift sources that may have unusual spectra. For that reason, a spectroscopic redshift is conventionally regarded as being necessary for an object's distance to be considered definitely known, whereas photometrically determined redshifts identify "candidate" very distant sources. Here, this distinction is indicated by a "p" subscript for photometric redshifts. The proper distance provides a measurement of how far away a galaxy is at a fixed moment in time. At the present time the proper distance equals the comoving distance, since the cosmological scale factor has value one: a(t0) = 1. The proper distance represents the distance obtained as if one were able to freeze the flow of time (set dt = 0 in the FLRW metric) and walk all the way to a galaxy while using a meter stick. [ 2 ] For practical reasons, the proper distance is calculated as the distance traveled by light (set ds = 0 in the FLRW metric) from the time of emission by a galaxy to the time an observer (on Earth) receives the light signal. It differs from the "light travel distance" since the proper distance takes into account the expansion of the universe, i.e. the space expands as the light travels through it, resulting in numerical values which locate the most distant galaxies beyond the Hubble sphere and therefore with recession velocities greater than the speed of light c . [ 3 ] (Table of record-holding objects, with distances given in Gly, omitted here.) § The tabulated distance is the light travel distance, which has no direct physical significance; see the discussion at distance measures and Observable Universe . † Numeric values obtained using Wright (2006) [ 5 ] with H0 = 70, Ω_CM = 0.30, Ω_DE = 0.70. Since the beginning of the James Webb Space Telescope 's (JWST) science operations in June 2022, numerous distant galaxies far beyond what could be seen by the Hubble Space Telescope (z = 11) have been discovered thanks to the JWST's capability of seeing far into the infrared . [ 48 ] [ 49 ] Previously, in 2012, there were about 50 possible objects at z = 8 or farther, and another 100 candidates at z = 7, based on photometric redshift estimates released by the Hubble eXtreme Deep Field (XDF) project from observations made between mid-2002 and December 2012. [ 50 ] Some objects included here have been observed spectroscopically, but had only one emission line tentatively detected, and are therefore still considered candidates by researchers. [ 51 ] [ 52 ]
Previous records include SDSS J1229+1122 [ 88 ] and MACS J1149 Lensed Star 1 . [ 89 ] Objects in this list were found to be the most distant object at the time of determination of their distance. This is frequently not the same as the date of their discovery. Distances to astronomical objects may be determined through parallax measurements, use of standard references such as cepheid variables or Type Ia supernovas , or redshift measurement. Spectroscopic redshift measurement is preferred, while photometric redshift measurement is also used to identify candidate high-redshift sources. The symbol z represents redshift. This list gives the most distant objects by year of discovery of the object, not by year of determination of the distance. Objects may have been discovered without a distance determination, and found subsequently to be the most distant known at that time. However, the object must have been named or described. An object like OJ 287 is excluded: although it was detected as early as 1891 on photographic plates, it was ignored until the advent of radiotelescopes.
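The distinction drawn above between light travel distance and comoving (present-day proper) distance can be made concrete with a short numerical sketch. The flat-ΛCDM parameters below are the ones quoted in the table notes (H0 = 70, Ω_M = 0.30, Ω_DE = 0.70, with H0 assumed to be in km/s/Mpc); this is an illustrative calculation, not the Wright (2006) calculator itself, and the example redshift is arbitrary.

```python
import numpy as np
from scipy.integrate import quad

# Flat Lambda-CDM parameters, as quoted in the table notes above (assumed units).
H0 = 70.0                      # km/s/Mpc
omega_m, omega_de = 0.30, 0.70
c = 299792.458                 # speed of light, km/s

hubble_time_gyr = 977.8 / H0   # 1/H0 in Gyr (977.8 ~= Mpc in km divided by Gyr in s)
hubble_dist_mpc = c / H0       # c/H0 in Mpc
mpc_per_gly = 306.6            # 1 Gly ~= 306.6 Mpc

def E(z):
    """Dimensionless Hubble parameter H(z)/H0 for a flat universe."""
    return np.sqrt(omega_m * (1.0 + z) ** 3 + omega_de)

def light_travel_distance_gly(z):
    """Lookback time in Gyr, numerically equal to the light travel distance in Gly."""
    val, _ = quad(lambda zp: 1.0 / ((1.0 + zp) * E(zp)), 0.0, z)
    return hubble_time_gyr * val

def comoving_distance_gly(z):
    """Comoving (= present-day proper) distance in Gly."""
    val, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return hubble_dist_mpc * val / mpc_per_gly

z = 11.0   # roughly the Hubble-era limit mentioned above
print(f"z = {z}: light travel distance ~ {light_travel_distance_gly(z):.2f} Gly, "
      f"comoving distance ~ {comoving_distance_gly(z):.1f} Gly")
```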
https://en.wikipedia.org/wiki/Most_distant_astronomical_object
The most probable number method, otherwise known as the method of Poisson zeroes, is a method of getting quantitative data on concentrations of discrete items from positive/negative (incidence) data. There are many discrete entities that are easily detected but difficult to count. Any sort of amplification or catalysis reaction obliterates easy quantification but allows presence to be detected very sensitively. Common examples include microorganism growth , enzyme action, and catalytic chemistry. The MPN method involves taking the original solution or sample, subdividing it by orders of magnitude (frequently 10× or 2×), and assessing presence/absence in multiple subdivisions. The degree of dilution at which absence begins to appear indicates that the items have been diluted so much that there are many subsamples in which none appear. A suite of replicates at any given dilution allows finer resolution: the number of positive and negative samples is used to estimate the original concentration within the appropriate order of magnitude. In microbiology, the cultures are incubated and assessed by eye, bypassing tedious colony counting or expensive and laborious microscopic counts. Presumptive, confirmative and completed [ clarification needed ] tests are a part of MPN. [ citation needed ] In molecular biology, a common application involves DNA templates diluted into polymerase chain reactions (PCR). Reactions only proceed when a template is present, allowing for a form of quantitative PCR to assess the original concentration of template molecules. Another application involves diluting enzyme stocks into a solution containing a chromogenic substrate, or diluting antigens into solutions for ELISA (enzyme-linked immunosorbent assay) or some other antibody cascade detection reaction, to measure the original concentration of the enzyme or antigen. The major weakness of MPN methods is the need for large numbers of replicates at the appropriate dilution to narrow the confidence intervals. However, it is a very important method for counts when the appropriate order of magnitude is unknown a priori and sampling is necessarily destructive.
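The "Poisson zeroes" idea behind the MPN estimate can be sketched as a maximum-likelihood calculation: if the items are randomly dispersed, a tube receiving a volume v of original sample is negative with probability exp(−λv), where λ is the unknown concentration. The sketch below is a minimal illustration under that assumption; the dilution scheme and counts are invented, and published MPN tables additionally provide bias corrections and confidence intervals that this sketch omits.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Example dilution series (invented numbers): volume of original sample per tube (mL),
# tubes inoculated at each dilution, and tubes that turned positive.
volumes   = np.array([0.1, 0.01, 0.001])
tubes     = np.array([5, 5, 5])
positives = np.array([5, 3, 1])

def neg_log_likelihood(lam):
    """Poisson-zeroes model: P(tube negative) = exp(-lam * v)."""
    p_neg = np.exp(-lam * volumes)
    p_pos = 1.0 - p_neg
    negatives = tubes - positives
    # Clip to avoid log(0) at extreme trial values of lam.
    return -np.sum(positives * np.log(np.clip(p_pos, 1e-300, None))
                   + negatives * np.log(np.clip(p_neg, 1e-300, None)))

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1e6), method="bounded")
print(f"MPN estimate: {res.x:.1f} items per mL of original sample")
```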
https://en.wikipedia.org/wiki/Most_probable_number
A most recent common ancestor ( MRCA ), also known as a last common ancestor ( LCA ), is the most recent individual from which all organisms of a set are inferred to have descended . The most recent common ancestor of a higher taxon is generally assumed to have been a species . The term is also used in reference to the ancestry of groups of genes ( haplotypes ) rather than organisms. The ancestry of a set of individuals can sometimes be determined by referring to an established pedigree , although this may refer only to patrilineal or matrilineal lines for sexually-reproducing organisms with two parents, four grandparents, etc. However, in general, it is impossible to identify the exact MRCA of a large set of individuals, but an estimate of the time at which the MRCA lived can often be given. Such time to most recent common ancestor ( TMRCA ) estimates can be given based on DNA test results and established mutation rates as practiced in genetic genealogy, or by reference to a non-genetic, mathematical model or computer simulation. In organisms using sexual reproduction , the matrilineal MRCA and patrilineal MRCA are the MRCAs of a given population considering only matrilineal and patrilineal descent, respectively. The MRCA of a population by definition cannot be older than either its matrilineal or its patrilineal MRCA. In the case of Homo sapiens , the matrilineal and patrilineal MRCA are also known as " Mitochondrial Eve " (mt-MRCA) and " Y-chromosomal Adam " (Y-MRCA) respectively. The age of the human MRCA is unknown. It is no greater than the age of either the Y-MRCA or the mt-MRCA, estimated at 200,000 years. Unlike in pedigrees of individual humans or domesticated lineages where historical parentage is known for some number of generations into the past, ancestors are not directly observable or recognizable in the inference of relationships among species or higher groups of taxa ( systematics or phylogenetics ). Ancestors are inferences based on patterns of relationship among taxa inferred in a phylogenetic analysis of extant organisms and/or fossils . [ 1 ] The last universal common ancestor (LUCA) is the most recent common ancestor of all current life on Earth, estimated to have lived some 3.5 to 3.8 billion years ago (in the Paleoarchean ). [ 2 ] [ 3 ] [ note 1 ] The project of a complete description of the phylogenetic relationships among all biological species is dubbed the " tree of life ". This involves inference of ages of divergence for all hypothesized clades ; for example, the MRCA of all Carnivora ( cats , dogs , etc.) is estimated to have diverged some 42 million years ago ( Miacidae ). [ 6 ] The concept of the last common ancestor from the perspective of human evolution is described for a popular audience in The Ancestor's Tale by Richard Dawkins . Dawkins lists "concestors" of the human lineage in order of increasing age, including hominin (human– chimpanzee ), hominine (human– gorilla ), hominid (human– orangutan ), hominoid (human– gibbon ), and so on in 40 stages in total, down to the last universal common ancestor (human– bacteria ). It is also possible to consider the ancestry of individual genes (or groups of genes, haplotypes) instead of an organism as a whole. Coalescent theory describes a stochastic model of how the ancestry of such genetic markers maps to the history of a population. Unlike organisms, a gene is passed down from a generation of organisms to the next generation either as perfect replicas of itself or as slightly mutated descendant genes . 
While organisms have ancestry graphs and progeny graphs via sexual reproduction , a gene has a single chain of ancestors and a tree of descendants. An organism produced by sexual cross-fertilization ( allogamy ) has at least two ancestors (its immediate parents), but a gene always has one ancestor per generation. Mitochondrial DNA (mtDNA) is nearly immune to sexual mixing, unlike the nuclear DNA whose chromosomes are shuffled and recombined in Mendelian inheritance . Mitochondrial DNA, therefore, can be used to trace matrilineal inheritance and to find the Mitochondrial Eve (also known as the African Eve ), the most recent common ancestor of all humans via the mitochondrial DNA pathway. Likewise, the Y chromosome is present as a single sex chromosome in the male individual and is passed on to male descendants without recombination. It can be used to trace patrilineal inheritance and to find the Y-chromosomal Adam , the most recent common ancestor of all humans via the Y-DNA pathway. Approximate dates for Mitochondrial Eve and Y-chromosomal Adam have been established by researchers using genealogical DNA tests . Mitochondrial Eve is estimated to have lived about 200,000 years ago. A paper published in March 2013 determined that, with 95% confidence and provided there are no systematic errors in the study's data, Y-chromosomal Adam lived between 237,000 and 581,000 years ago. [ 7 ] [ 8 ] The MRCA of all humans alive today would, therefore, need to have lived more recently than either. [ 9 ] [ note 2 ] It is more complicated to infer human ancestry via autosomal chromosomes . Although an autosomal chromosome contains genes that are passed down from parents to children via independent assortment from only one of the two parents, genetic recombination ( chromosomal crossover ) mixes genes from non-sister chromatids from both parents during meiosis , thus changing the genetic composition of the chromosome. Different types of MRCAs are estimated to have lived at different times in the past. These time to MRCA ( TMRCA ) estimates are also computed differently depending on the type of MRCA being considered. Patrilineal and matrilineal MRCAs (Mitochondrial Eve and Y-chromosomal Adam) are traced by single gene markers, thus their TMRCAs are computed based on DNA test results and established mutation rates as practiced in genetic genealogy. The time to the genealogical MRCA (most recent common ancestor by any line of descent) of all living humans cannot be traced genetically because the DNA of the great majority of ancestors is completely lost after a few hundred years. It is therefore computed based on non-genetic, mathematical models and computer simulations. Since Mitochondrial Eve and Y-chromosomal Adam are traced by single genes via a single ancestral parent line, the time to these genetic MRCAs will necessarily be greater than that for the genealogical MRCA. This is because single genes will coalesce more slowly than the tracing of conventional human genealogy via both parents. The latter considers only individual humans, without taking into account whether any gene from the computed MRCA actually survives in every single person in the current population. [ 11 ] Mitochondrial DNA can be used to trace the ancestry of a set of populations. In this case, populations are defined by the accumulation of mutations on the mtDNA, and special trees are created for the mutations and the order in which they occurred in each population.
The tree is formed through the testing of a large number of individuals all over the world for the presence or lack of a certain set of mutations. Once this is done it is possible to determine how many mutations separate one population from another. The number of mutations, together with the estimated mutation rate of the mtDNA in the regions tested, allows scientists to determine the approximate time to MRCA ( TMRCA ), which indicates the time passed since the populations last shared the same set of mutations or belonged to the same haplogroup . In the case of Y-chromosomal DNA, TMRCA is arrived at in a different way. Y-DNA haplogroups are defined by single-nucleotide polymorphisms in various regions of the Y-DNA. The time to MRCA within a haplogroup is defined by the accumulation of mutations in STR sequences of the Y-chromosome of that haplogroup only. Y-DNA network analysis of Y-STR haplotypes showing a non-star cluster indicates Y-STR variability due to multiple founding individuals. Analysis yielding a star cluster can be regarded as representing a population descended from a single ancestor. In this case the variability of the Y-STR sequence, also called the microsatellite variation, can be regarded as a measure of the time passed since the ancestor founded this particular population. The descendants of Genghis Khan or one of his ancestors represent a famous star cluster that can be dated back to the time of Genghis Khan. [ 12 ] TMRCA calculations are considered critical evidence when attempting to determine migration dates of various populations as they spread around the world. For example, if a mutation is deemed to have occurred 30,000 years ago, then this mutation should be found amongst all populations that diverged after this date. If archeological evidence indicates cultural spread and formation of regionally isolated populations, then this must be reflected in the isolation of subsequent genetic mutations in this region. If genetic divergence and regional divergence coincide, it can be concluded that the observed divergence is due to migration as evidenced by the archaeological record. However, if the date of genetic divergence occurs at a different time than the archaeological record, then scientists will have to look at alternate archaeological evidence to explain the genetic divergence. The issue is best illustrated in the debate surrounding demic diffusion versus cultural diffusion during the European Neolithic . [ 13 ] The age of the MRCA of all living humans is unknown. It is necessarily younger than the age of either the matrilinear or the patrilinear MRCA, both of which have an estimated age of between roughly 100,000 and 200,000 years ago. [ 14 ] A study by mathematicians Joseph T. Chang, Douglas Rohde and Steve Olson used a theoretical model to calculate that the MRCA may have lived remarkably recently, possibly as recently as 2,000 years ago. It concludes that the MRCA of all living humans probably lived in East Asia, which would have given them key access to extremely isolated populations in Australia and the Americas. Possible locations for the MRCA include places such as the Chukchi and Kamchatka Peninsulas that are close to Alaska, places such as Indonesia and Malaysia that are close to Australia, or a place such as Taiwan or Japan that is more intermediate to Australia and the Americas. European colonization of the Americas and Australia was found by Chang to be too recent to have had a substantial impact on the age of the MRCA.
In fact, if the Americas and Australia had never been discovered by Europeans, the MRCA would only be about 2.3% further back in the past than it is. [ 15 ] [ 16 ] [ 17 ] Note that the age of the MRCA of a population does not correspond to a population bottleneck , let alone a "first couple". It rather reflects the presence of a single individual with high reproductive success in the past, whose genetic contribution has become pervasive throughout the population over time. It is also incorrect to assume that the MRCA passed all, or indeed any, genetic information to every living person. Through sexual reproduction , an ancestor passes half of his or her genes to each descendant in the next generation; in the absence of pedigree collapse , after just 32 generations the contribution of a single ancestor would be on the order of 2^−32, a proportion corresponding to less than a single basepair within the human genome . [ 18 ] The MRCA is the most recent common ancestor shared by all individuals in the population under consideration. This MRCA may well have contemporaries who are also ancestral to some, but not all, of the extant population. The identical ancestors point is a point in the past more remote than the MRCA, at which time there are no longer organisms which are ancestral to some but not all of the modern population. Due to pedigree collapse, modern individuals may still exhibit clustering, owing to vastly different contributions from each ancestral population. [ 19 ]
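As a toy illustration of the single-locus TMRCA arithmetic described above (divergence divided by twice the mutation rate), the following sketch uses hypothetical numbers; real estimates must also model rate uncertainty, generation times and coalescent variance, none of which appears here.

```python
def tmrca_years(pairwise_differences, sites, rate_per_site_per_year):
    """
    Crude TMRCA estimate for two lineages from a neutral marker.

    Two lineages separated for t years accumulate roughly 2 * t * rate
    substitutions per site between them, so t ~ d / (2 * rate),
    where d is the observed per-site divergence.
    """
    d = pairwise_differences / sites
    return d / (2.0 * rate_per_site_per_year)

# Hypothetical numbers: 30 differences over a 16,500 bp mitochondrial genome,
# with an assumed substitution rate of 5e-9 per site per year.
print(f"{tmrca_years(30, 16_500, 5e-9):,.0f} years")  # ~180,000 years
```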
https://en.wikipedia.org/wiki/Most_recent_common_ancestor
In computing , bit numbering is the convention used to identify the bit positions in a binary number . In computing , the least significant bit ( LSb ) is the bit position in a binary integer representing the lowest-order place of the integer. Similarly, the most significant bit ( MSb ) represents the highest-order place of the binary integer. The LSb is sometimes referred to as the low-order bit . Due to the convention in positional notation of writing less significant digits further to the right, the LSb also might be referred to as the right-most bit . The MSb is similarly referred to as the high-order bit or left-most bit . In both cases, the LSb and MSb correlate directly to the least significant digit and most significant digit of a decimal integer. Bit indexing correlates to the positional notation of the value in base 2. For this reason, the bit index is not affected by how the value is stored on the device, such as the value's byte order . Rather, it is a property of the numeric value in binary itself. This is often utilized in programming via bit shifting : a value of 1 << n corresponds to the n th bit of a binary integer (with a value of 2^n). In digital steganography , sensitive messages may be concealed by manipulating and storing information in the least significant bits of an image or a sound file. The user may later recover this information by extracting the least significant bits of the manipulated pixels to recover the original message. This allows the storage or transfer of digital information to remain concealed. Manipulating the least significant bits of a color has a very subtle and generally unnoticeable effect on the color: in the usual illustration, green is represented by its RGB value, both in decimal and in binary, and only the last two bits of the binary representation are changed. As an example, for the decimal value 149 (binary 10010101), the unit value (decimal 1 or 0) is located in bit position 0 (n = 0). MSb stands for most significant bit , while LSb stands for least significant bit . In an 8-bit signed value using the two's complement method, the MSb has a negative weight, in this case −2^7 = −128; the other bits have positive weights, and the LSb has weight 1. For the bit pattern 10000010, the signed value is therefore −128 + 2 = −126. The expressions most significant bit first and least significant bit first are indications of the ordering of the sequence of bits in the bytes sent over a wire in a serial transmission protocol or in a stream (e.g. an audio stream). Most significant bit first means that the most significant bit will arrive first: hence e.g. the hexadecimal number 0x12 , 00010010 in binary representation, will arrive as the sequence 0 0 0 1 0 0 1 0 . Least significant bit first means that the least significant bit will arrive first: hence e.g. the same hexadecimal number 0x12 , again 00010010 in binary representation, will arrive as the (reversed) sequence 0 1 0 0 1 0 0 0 . When the bit numbering starts at zero for the least significant bit (LSb) the numbering scheme is called LSb 0 . [ 1 ] This bit numbering method has the advantage that for any unsigned number the value of the number can be calculated by using exponentiation with the bit number and a base of 2.
[ 2 ] The value of an unsigned binary integer is therefore ∑ i = 0 N − 1 b i 2 i {\displaystyle \sum _{i=0}^{N-1}b_{i}2^{i}} , where b i denotes the value of the bit with number i , and N denotes the number of bits in total. When the bit numbering starts at zero for the most significant bit (MSb) the numbering scheme is called MSb 0 . The value of an unsigned binary integer is then ∑ i = 0 N − 1 b i 2 N − 1 − i {\displaystyle \sum _{i=0}^{N-1}b_{i}2^{N-1-i}} . The LSb of a number can be calculated with time complexity of O ( n ) {\displaystyle O(n)} with the formula a & (~ a+1) , where & means binary AND and ~ means binary NOT . For MSb 1 numbering, the value of an unsigned binary integer is ∑ i = 1 N b i 2 N − i {\displaystyle \sum _{i=1}^{N}b_{i}2^{N-i}} . PL/I numbers BIT strings starting with 1 for the leftmost bit. The Fortran BTEST function uses LSb 0 numbering.
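A short, self-contained sketch of the conventions above — LSb 0 indexing, the 1 << n correspondence, the two's-complement example and the a & (~a + 1) trick; the helper names are illustrative only.

```python
def unsigned_value_lsb0(bits):
    """Value of an unsigned integer from bits indexed LSb 0: sum of b_i * 2**i."""
    return sum(b << i for i, b in enumerate(bits))

def bit(value, n):
    """Test bit number n (LSb 0 numbering); 1 << n has the value 2**n."""
    return (value >> n) & 1

def lowest_set_bit(a):
    """Isolate the least significant set bit with a & (~a + 1), i.e. a & -a."""
    return a & (~a + 1)

def signed_value_8bit(byte):
    """Interpret an 8-bit pattern as two's complement: the MSb has weight -2**7."""
    return byte - 256 if byte & 0x80 else byte

def bits_msb_first(byte):
    """Bit sequence as sent 'most significant bit first' on a serial line."""
    return [(byte >> i) & 1 for i in range(7, -1, -1)]

# Decimal 149 is 1001 0101 in binary; its unit value sits in bit position 0.
assert unsigned_value_lsb0([1, 0, 1, 0, 1, 0, 0, 1]) == 149
assert bit(149, 0) == 1 and bit(149, 7) == 1

# Two's-complement example from the text: 1000 0010 represents -128 + 2 = -126.
assert signed_value_8bit(0b10000010) == -126

# Bit ordering on the wire for 0x12 (0001 0010).
assert bits_msb_first(0x12) == [0, 0, 0, 1, 0, 0, 1, 0]
assert list(reversed(bits_msb_first(0x12))) == [0, 1, 0, 0, 1, 0, 0, 0]  # LSb first

# a & (~a + 1) keeps only the lowest set bit.
assert lowest_set_bit(0b10010100) == 0b100
print("all bit-numbering checks passed")
```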
https://en.wikipedia.org/wiki/Most_significant_bit
Mostafa A. El-Sayed ( Arabic : مصطفى السيد) is an Egyptian-American physical chemist, nanoscience researcher, member of the National Academy of Sciences and US National Medal of Science laureate. He is known for the spectroscopy rule named after him, the El-Sayed rule . [ 2 ] [ 3 ] [ 4 ] El-Sayed was born in Zifta, Egypt and spent his early life in Cairo. He earned his B.Sc. in chemistry from Ain Shams University Faculty of Science, Cairo in 1953. [ 5 ] El-Sayed earned his doctoral degree in chemistry from Florida State University working with Michael Kasha , the last student of the legendary G. N. Lewis . [ citation needed ] While attending graduate school he met and married Janice Jones, his wife of 48 years. He spent time as a post-doctoral researcher at Harvard University , Yale University and the California Institute of Technology before joining the faculty of the University of California at Los Angeles in 1961. In 1994, he retired from UCLA and accepted the position of Julius Brown Chair and Regents Professor of Chemistry and Biochemistry at the Georgia Institute of Technology . He led the Laser Dynamics Lab there until his full retirement in 2020. El-Sayed is a former editor-in-chief of the Journal of Physical Chemistry (1980–2004). [ 6 ] [ 7 ] El-Sayed's research interests include the use of steady-state and ultra fast laser spectroscopy to understand relaxation, transport and conversion of energy in molecules , in solids , in photosynthetic systems, semiconductor quantum dots and metal nanostructures . The El-Sayed group has also been involved in the development of new techniques such as magnetophotonic selection, picosecond Raman spectroscopy and phosphorescence microwave double resonance spectroscopy. A major focus of his lab is currently on the optical and chemical properties of noble metal nanoparticles and their applications in nano catalysis , nanophotonics and nanomedicine . His lab is known for the development of the gold nanorod technology. As of 2021, El-Sayed has produced over 1200 publications in refereed journals in the areas of spectroscopy , molecular dynamics and nanoscience , with over 130,000 citations. [ 8 ] For his work in the area of applying laser spectroscopic techniques to study of properties and behavior on the nanoscale , El-Sayed was elected to the National Academy of Sciences in 1980. In 1989 he received the Tolman Award , and in 2002, he won the Irving Langmuir Award in Chemical Physics. He has been the recipient of the 1990 King Faisal International Prize ("Arabian Nobel Prize") in Sciences, Georgia Tech 's highest award, "The Class of 1943 Distinguished Professor", an honorary doctorate of philosophy from the Hebrew University , and several other awards including some from the different American Chemical Society local sections. He was a Sherman Fairchild Distinguished Scholar at the California Institute of Technology and an Alexander von Humboldt Senior U.S. Scientist Awardee. He served as editor-in-chief of the Journal of Physical Chemistry from 1980 to 2004 and has also served as the U.S. editor of the International Reviews in Physical Chemistry . He is a Fellow of the American Academy of Arts and Sciences , a member of the American Physical Society , the American Association for the Advancement of Science and the Third World Academy of Science . 
Mostafa El-Sayed was awarded the 2007 US National Medal of Science "for his seminal and creative contributions to our understanding of the electronic and optical properties of nanomaterials and to their applications in nano catalysis and nanomedicine , for his humanitarian efforts of exchange among countries and for his role in developing the scientific leadership of tomorrow." [ 9 ] El-Sayed was also announced as the recipient of the 2009 Ahmed Zewail prize in molecular sciences. In 2011, he was listed #17 in Thomson-Reuters listing of the Top Chemists of the Past Decade. [ 10 ] Professor El-Sayed also received the 2016 Priestley Medal , the American Chemical Society 's highest honor, for his decades-long contributions to chemistry. [ 11 ] The El-Sayed rule states that the rate of intersystem crossing is relatively large if the radiationless transition involves a change of orbital type. This rule pertains to phosphorescence and similar phenomena. Electrons vibrate and resonate around molecules in different modes ( electronic states ), usually depending on the energy of the system of electrons. The rule states that constant-energy flipping between two electronic states happens more readily when the vibrations of the electrons are preserved during the flip: any change in the spin of an electron is compensated by a change in its orbital motion ( spin-orbit coupling ). Intersystem crossing (ISC) is a photophysical process involving an isoenergetic radiationless transition between two electronic states having different multiplicities. It often results in a vibrationally excited molecular entity in the lower electronic state, which then usually decays to its lowest molecular vibrational level. ISC is forbidden by rules of conservation of angular momentum . As a consequence, ISC generally occurs on very long time scales. However, the El-Sayed rule states that the rate of intersystem crossing, e.g. from the lowest singlet state to the triplet manifold, is relatively large if the radiationless transition involves a change of molecular orbital type. [ 13 ] [ 14 ] For example, a (π,π*) singlet could transition to a (n,π*) triplet state , but not to a (π,π*) triplet state, and vice versa. Formulated by El-Sayed in the 1960s, this rule is found in most photochemistry textbooks as well as the IUPAC Gold Book . [ 15 ] The rule is useful in understanding phosphorescence , vibrational relaxation, intersystem crossing , internal conversion and lifetimes of excited states in molecules.
https://en.wikipedia.org/wiki/Mostafa_El-Sayed
Mostafa Kamal Tolba ( Arabic : مصطفى كمال طلبة ) (8 December 1922 – 28 March 2016) was an Egyptian scientist who served for seventeen years as the executive director of the United Nations Environment Programme (UNEP). [ 1 ] In that capacity he led development of the Montreal Protocol , which saved the ozone layer and thus millions of lives from skin cancer and other impacts. [ 2 ] Tolba was born in the town of Zifta (located in Gharbia Governorate ). He graduated from Cairo University in 1943 and obtained a PhD from Imperial College London five years later. He established his own school in microbiology at Cairo University's Faculty of Science and also taught at the University of Baghdad during the 1950s. In addition to his academic career, Tolba worked in the Egyptian civil service. [ 3 ] After serving briefly as President of the Egyptian Olympic Committee (1971–1972), [ 4 ] Tolba led Egypt's delegation to the landmark 1972 Stockholm Conference , which established the United Nations Environment Programme. Tolba became UNEP's Deputy Executive Director immediately after the conference, and two years later was promoted to executive director. During his long tenure as director of UNEP (1975–1992), he played a central role in the fight against ozone depletion , which culminated in the Vienna Convention (1985) and the Montreal Protocol (1987). [ 3 ] [ 5 ] He enabled these negotiations by serving as a bridge between ministers and scientists, since they did not understand each other's language. [ 2 ] He described the Montreal Protocol as a "start and strengthen" treaty, where the parties started modestly, gained the knowledge they needed to phase out the dangerous chemicals, and gained the confidence they needed to do more. [ 6 ] He successfully steered negotiations for the Basel Convention on transboundary hazardous waste. [ 2 ] He was a significant influence in the creation and organization of the Intergovernmental Panel on Climate Change and the Global Environment Facility . [ 7 ] He led the work to develop the Convention on Biological Diversity . [ 2 ] In 1982, Tolba, as executive director of the United Nations Environment Programme, told UN delegates that if the nations of the world continued their present policies, they would face by the turn of the century "an environmental catastrophe which will witness devastation as complete, as irreversible, as any nuclear holocaust ." [ 8 ] Tolba died on 28 March 2016 in Geneva at the age of 93. [ 9 ] Tolba's publications include more than 95 papers on plant pathology , as well as over 600 statements and articles on the environment. [ 3 ]
https://en.wikipedia.org/wiki/Mostafa_Kamal_Tolba
In mathematical logic , the Mostowski collapse lemma , also known as the Shepherdson–Mostowski collapse , is a theorem of set theory introduced by Andrzej Mostowski ( 1949 , theorem 3) and John Shepherdson ( 1953 ). Suppose that R is a binary relation on a class X such that R is set-like (for every x in X , the class of all y with y R x is a set), R is well-founded (every nonempty subset of X has an R -minimal element), and R is extensional (distinct elements of X have distinct sets of R -predecessors). The Mostowski collapse lemma states that for every such R there exists a unique transitive class (possibly proper ) whose structure under the membership relation is isomorphic to ( X , R ), and the isomorphism is unique. The isomorphism maps each element x of X to the set of images of elements y of X such that y R x (Jech 2003:69). Every well-founded set-like relation can be embedded into a well-founded set-like extensional relation. This implies the following variant of the Mostowski collapse lemma: every well-founded set-like relation is isomorphic to set-membership on a (non-unique, and not necessarily transitive) class. A mapping F such that F ( x ) = { F ( y ) : y R x } for all x in X can be defined for any well-founded set-like relation R on X by well-founded recursion . It provides a homomorphism of R onto a (non-unique, in general) transitive class. The homomorphism F is an isomorphism if and only if R is extensional. The well-foundedness assumption of the Mostowski lemma can be alleviated or dropped in non-well-founded set theories . In Boffa's set theory, every set-like extensional relation is isomorphic to set-membership on a (non-unique) transitive class. In set theory with Aczel's anti-foundation axiom , every set-like relation is bisimilar to set-membership on a unique transitive class, hence every bisimulation-minimal set-like relation is isomorphic to a unique transitive class. Every set model of ZF is set-like and extensional. If the model is well-founded, then by the Mostowski collapse lemma it is isomorphic to a transitive model of ZF and such a transitive model is unique. Saying that the membership relation of some model of ZF is well-founded is stronger than saying that the axiom of regularity is true in the model. There exists a model M (assuming the consistency of ZF) whose domain has a subset A with no R -minimal element, where R is the interpretation of the membership relation in M , but this set A is not a "set in the model" ( A is not in the domain of the model, even though all of its members are). More precisely, for no such set A does there exist x in M such that A = R^{−1}[ x ]. So M satisfies the axiom of regularity (it is "internally" well-founded) but it is not well-founded and the collapse lemma does not apply to it.
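For a finite set-like, well-founded, extensional relation the collapsing map F ( x ) = { F ( y ) : y R x } can be computed directly. The sketch below does this with Python frozensets standing in for hereditarily finite sets; it only illustrates the definition on a toy relation and is not a formalization of the lemma.

```python
from functools import lru_cache

# A finite extensional, well-founded relation R on X = {0, 1, 2, 3}:
# predecessors[x] is the set of y with y R x.
predecessors = {
    0: set(),        # collapses to the empty set
    1: {0},          # collapses to { {} }
    2: {0, 1},       # collapses to { {}, {{}} }
    3: {1, 2},       # collapses to { {{}}, {{}, {{}}} }
}

@lru_cache(maxsize=None)
def collapse(x):
    """Mostowski collapse: F(x) = { F(y) : y R x }, by well-founded recursion."""
    return frozenset(collapse(y) for y in predecessors[x])

for x in predecessors:
    print(x, "->", collapse(x))

# Extensionality of R guarantees F is injective, so the image is a transitive
# set isomorphic to (X, R) under the membership relation.
assert len({collapse(x) for x in predecessors}) == len(predecessors)
```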
https://en.wikipedia.org/wiki/Mostowski_collapse_lemma
In mathematics, the Mostow–Palais theorem is an equivariant version of the Whitney embedding theorem . It states that if a manifold is acted on by a compact Lie group with finitely many orbit types, then it can be embedded into some finite-dimensional orthogonal representation. It was introduced by Mostow ( 1957 ) and Palais ( 1957 ).
https://en.wikipedia.org/wiki/Mostow–Palais_theorem
Motexafin gadolinium (proposed tradename Xcytrin ) is an inhibitor of thioredoxin reductase and ribonucleotide reductase . It has been proposed as a possible chemotherapeutic agent in the treatment of brain metastases . [ 1 ] On May 9, 2006, a New Drug Application was submitted to the United States Food and Drug Administration (FDA) by Pharmacyclics, Inc. [ 2 ] In December 2007, the FDA issued a not approvable letter for motexafin gadolinium. [ 3 ]
https://en.wikipedia.org/wiki/Motexafin_gadolinium
Moth traps are devices used for capturing moths for scientific research or domestic pest control. Entomologists use moth traps to study moth populations, behavior, distribution, and role in ecosystems, contributing to biodiversity conservation and ecological monitoring efforts. Homeowners, on the other hand, employ moth traps to protect their homes from moth infestations, particularly clothes moths and pantry moths, which can cause significant damage to textiles and contaminate stored food products. Entomologists primarily use light-based moth traps, which exploit the phototactic behavior of moths, attracting them to a light source. Moths navigate by using natural light sources such as the moon and stars, and artificial light sources can confuse and draw them in. The moths are then captured in a container, allowing researchers to identify and record the species present without causing harm. Various trapping methods and designs are employed, including mercury vapor light traps, actinic light traps, and LED light traps, to cater to different research objectives, environmental conditions, and target moth species. These traps often feature modifications to minimize bycatch and ensure minimal disturbance to non-target organisms , demonstrating a responsible and ethical approach to scientific research. All moth traps follow the same basic design - consisting of a mercury vapour or actinic light to attract the moths and a box in which the moths can accumulate and be examined later. The moths fly towards the light and spiral down towards the source of the light and are deflected into the box. Besides moths, several other insects will also come to light, such as scarab beetles, Ichneumonid wasps , stink bugs , stick insects , diving beetles , and water boatmen . Occasionally diurnal species such as dragonflies, yellowjacket wasps , and hover flies will also visit. The reason insects, and especially particular families of insects (e.g. moths), are attracted to light is uncertain. The most accepted theory is that moths migrate using the moon and stars as navigational aids, and that the placement of a closer-than-the-moon light causes subtended angles of light at the insect's eye to alter so rapidly that it has to fly in a spiral to reduce the angular change. This results in the insect flying into the artificial light. Yet the reason some diurnal insects visit is entirely unknown. [ citation needed ] Some moths, notably Sesiidae are monitored or collected using pheromone traps. Moth traps for household use are designed to target specific moth species that cause damage to clothing, carpets, and stored food products.
https://en.wikipedia.org/wiki/Moth_trap
In biology, mother's curse is an evolutionary effect whereby males inherit deleterious mitochondrial genome (mtDNA) mutations from their mother, while those mutations are beneficial, neutral or less deleterious to females. As mtDNA is usually maternally inherited, mtDNA mutations that are deleterious to males but beneficial, neutral or less deleterious to females are not selected against, which results in a sex-biased selective sieve. [ 1 ] Therefore, male-specific deleterious mtDNA mutations can be maintained and reach a high frequency in populations , decreasing males' fitness and population viability. In addition, the effect of mtDNA mutations on fitness is subject to a threshold effect, i.e. only when the number of mutations reaches the threshold do mtDNA mutations decrease individual fitness. [ 2 ] Males are more susceptible to mtDNA defects, not only because of the lack of selection on mtDNA in males but also due to sperm's higher energy requirements for motility. [ 3 ] There is evidence showing that mtDNA mutations are more likely to affect males. In humans, Leber's hereditary optic neuropathy (LHON) is caused by one or several point mutations on mtDNA, and LHON affects more males than females. [ 4 ] In mice, a deletion on mtDNA causes oligospermia and asthenozoospermia, resulting in infertility. [ 5 ] Taken together, mtDNA mutations pose a greater threat to males than to females. Mother's curse predicts that mtDNA mutations pose a greater threat to males and that male-specific detrimental mutations in mtDNA can be maintained and reach a high frequency. Several studies support these predictions. In humans, an mtDNA haplogroup that exhibits reduced sperm mobility reaches a frequency of 20%. [ 2 ] A 2017 study found the mother's curse preserving a mutation that causes Leber's hereditary optic neuropathy in a population of French Canadians for over 290 years. [ 6 ] In Drosophila melanogaster , mtDNA polymorphism mainly affects nuclear gene expression in males but not in females, and those genes are predominantly male-biased. [ 7 ] Moreover, Camus et al. [ 1 ] [ 8 ] constructed 13 D. melanogaster lines with an isogenic nuclear genome and different mtDNA haplotypes. They demonstrated that mtDNA polymorphism is responsible for male aging, while there is no significant effect on female longevity. Smith et al. [ 3 ] analyzed two different haplotypes of mtDNA in hares and found that males of those two haplogroups show variation in their reproductive success. In addition, the mitochondrial genome is associated with sperm viability and length in seed beetles ( Callosobruchus maculatus ). [ 9 ] If mtDNA mutations deleterious to male fitness could not be selected against, they would reach a high frequency despite the high fitness cost for males. Eventually, detrimental mutations would be fixed and lead to species extinction. However, we have not observed extinction in spite of the high mutation load of mtDNA, so there must be ways in which species can decrease the effects of male-specific deleterious mtDNA mutations. [ 2 ] Mitochondria play a pivotal role in eukaryotic respiration. Because of maternal inheritance, mtDNA undergoes no selection in males. Instead, mutations that are only deleterious to males can be maintained and reach a higher frequency through selection or genetic drift in females. As a consequence, the asymmetric effects of mtDNA mutations result in sexual conflict.
On the other hand, to alleviate the effect of mother's curse, interaction between mtDNA and nuclear genes promotes coevolution of mitochondrial and nuclear genomes.
https://en.wikipedia.org/wiki/Mother's_curse
Mother Nature (sometimes known as Mother Earth or the Earth Mother ) is a personification of nature that focuses on the life-giving and nurturing aspects of nature by embodying it, in the form of a mother or mother goddess . The Mycenaean Greek : Ma-ka (transliterated as ma-ga ), "Mother Gaia ", written in Linear B syllabic script (13th or 12th century BC), is the earliest known instance of the concept of earth as a mother. [ 1 ] Demeter would take the place of her grandmother, Gaia , and her mother, Rhea , as goddess of the earth in a time when humans and gods thought the activities of the heavens more sacred than those of earth. [ 2 ] In Greek mythology , Persephone , daughter of Demeter (goddess of the harvest), was abducted by Hades (god of the dead), and taken to the underworld as his queen. The myth goes on to describe Demeter as so distraught that no crops would grow and the "entire human race [would] have perished of cruel, biting hunger if Zeus had not been concerned" (Larousse 152). According to myth, Zeus forced Hades to return Persephone to her mother, but while in the underworld, Persephone had eaten pomegranate seeds, the food of the dead, and thus she must spend part of each year with Hades in the underworld. The myth continues that Demeter's grief for her daughter in the realm of the dead was reflected in the barren winter months, and her joy when Persephone returned was reflected in the bountiful summer months. The Roman Epicurean poet Lucretius opened his didactic poem De rerum natura by addressing Venus as a veritable mother of nature. [ 3 ] Lucretius used Venus as "a personified symbol for the generative aspect of nature". [ 4 ] This largely had to do with the nature of Lucretius' work, which presents a nontheistic understanding of the world that eschewed superstition. The pre- Socratic philosophers abstracted the entirety of phenomena of the world as singular: physis , and this was inherited by Aristotle . [ citation needed ] The word "nature" comes from the Latin word, " natura ", meaning birth or character [see nature (philosophy) ]. In English , its first recorded use (in the sense of the entirety of the phenomena of the world) was in 1266. "Natura" and the personification of Mother Nature were widely popular in the Middle Ages . As a concept, seated between the properly divine and the human, it can be traced to Ancient Greece , though Earth (or " Eorthe " in the Old English period) may have been personified as a goddess. The Norse also had a goddess called Jörð ( Jord , or Erth ). Medieval Christian thinkers did not see nature as inclusive of everything, but thought that it had been created by God ; earth lay below the unchanging heavens and moon . Nature lay somewhere in the center, with agents above her ( angels ), and below her ( demons and hell ). Therefore, Mother Nature became only a personification, not a goddess. Amalur (sometimes Ama Lur or Ama Lurra [ 5 ] ) was believed to be the goddess of the earth in the religion of the ancient Basque people . [ 6 ] She was described as the mother of Ekhi , the sun, and Ilazki , the moon. Her name meant "mother earth" or "mother land"; the 1968 Basque documentary Ama lur was a celebration of the Basque countryside. [ 7 ] Algonquian legend says that "beneath the clouds lives the Earth-Mother from whom is derived the Water of Life, who at her bosom feeds plants, animals and human" (Larousse 428). She is otherwise known as Nokomis , the Grandmother .
In Inca mythology , Mama Pacha or Pachamama was a fertility goddess who presided over planting and harvesting. Pachamama is usually translated as "Mother Earth" but a more literal translation would be "Mother Universe" (in Aymara and Quechua mama = mother / pacha = world, space-time or the universe). [ 8 ] It was believed that Pachamama and her husband, Inti , were the most benevolent deities and were worshiped in parts of the Andean mountain ranges (stretching from present-day Ecuador to Chile and Argentina ). In her book Coateteleco, pueblo indígena de pescadores ("Coatetelco, indigenous fishing town", Cuernavaca, Morelos: Vettoretti, 2015), Teódula Alemán Cleto states, En nuestra cultura prehispánica el respeto y la fe a nuestra madre naturaleza fueron primordiales para vivir en plena armonía como seres humanos. ("In our [Mexican] prehispanic culture, respect and faith in our Mother Nature [emphasis added] were paramount to living in full harmony as human beings.") [ 9 ] In the Mainland Southeast Asian countries of Cambodia , Laos and Thailand , earth ( terra firma ) is personified as Phra Mae Thorani , but it is believed that her role in Buddhist mythology differs considerably from that of Mother Nature. In the Malay Archipelago , that role has been filled by Dewi Sri , the Rice-mother in the East Indies .
https://en.wikipedia.org/wiki/Mother_Nature
The mother liquor (or spent liquor) is the solution remaining after a component has been removed by a process such as filtration or, more commonly, crystallization . It is encountered in chemical processes including sugar refining . [ 1 ] In crystallization, a solid (usually impure) is dissolved in a solvent at high temperature, taking advantage of the fact that most solids are more soluble at higher temperatures. As the solution cools, the solubility of the solute in the solvent gradually decreases. The resultant solution is described as supersaturated , meaning that there is more solute dissolved in the solution than would be predicted by its solubility at that temperature. Crystallization can then be induced from this supersaturated solution and the resultant pure crystals removed by such methods as filtration and centrifugal separation. The remaining solution, once the crystals have been filtered out, is known as the mother liquor, and will contain a portion of the original solute (as predicted by its solubility at that temperature) as well as any impurities that were not filtered out. Second and third crops of crystals can then be harvested from the mother liquor. [ 2 ] An alternative to second cropping is continuous recycle of a portion of the mother liquors from one batch into subsequent batches, in which an increased product yield is expected; this also leads to an accumulation of impurities. It can be shown that the impurity profile of the mother liquors, at moderate recycle levels (i.e. when x < 1), quickly reaches a steady state according to (1 − x^(n+1))/(1 − x ), where n is the number of times the process is operated and x is the fraction of mother liquors recycled. [ 3 ] The aforementioned approach is idealised and assumes that the build-up of impurities in the mother liquor does not exceed the solubility of the impurity or impurities. The approach has been confirmed experimentally. [ 4 ]
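The steady-state claim can be checked numerically. The sketch below evaluates the stated factor (1 − x^(n+1))/(1 − x) for a few recycle fractions, under the idealised assumption that each batch adds one unit of impurity to the liquor and a fixed fraction x of the liquor is carried into the next batch; the function name and the chosen fractions are illustrative.

```python
def impurity_factor(x, n):
    """Relative impurity level in the mother liquor after n recycles,
    normalised to a single batch: (1 - x**(n + 1)) / (1 - x) for x < 1."""
    if not 0 <= x < 1:
        raise ValueError("recycle fraction x must satisfy 0 <= x < 1")
    return (1 - x ** (n + 1)) / (1 - x)

for x in (0.25, 0.5, 0.75):
    levels = [impurity_factor(x, n) for n in range(8)]
    print(f"x = {x}:", " ".join(f"{v:.2f}" for v in levels),
          f"-> steady state ~ {1 / (1 - x):.2f}")
```

The printed sequences level off after a few batches at roughly 1/(1 − x) times the single-batch impurity level, which is the steady state referred to above.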
https://en.wikipedia.org/wiki/Mother_liquor
A mother ship , mothership or mother-ship is a large vehicle that leads, serves, or carries other smaller vehicles. A mother ship may be a maritime ship , aircraft , or spacecraft . Examples include bombers converted to carry experimental aircraft to altitudes where they can conduct their research (such as the B-52 carrying the X-15 ), or ships that carry small submarines to an area of ocean to be explored (such as the Atlantis II carrying the Alvin ). A mother ship may also be used to recover smaller craft, or go its own way after releasing them. A smaller vessel serving or caring for larger craft is usually called a tender . During World War II, the German Type XIV submarine or Milchkuh (Milk cow) was a type of large submarine used to resupply the U-boats . Mother ships can carry small submersibles and submarines to an area of ocean to be explored (such as the Atlantis II carrying the DSV Alvin ). Somali pirates use mother ships to extend their reach in the Indian Ocean. For example, the FV Win Far 161 was captured and used as a mother ship in the Maersk Alabama hijacking . In aviation , motherships have been used in the airborne aircraft carrier , air launch and captive carry roles. Some large long-range aircraft act as motherships to parasite aircraft . A mothership may also form the larger component of a composite aircraft . During the age of the great airships , the United States built two rigid airships , USS Akron (ZRS-4) and USS Macon (ZRS-5) , with onboard hangars able to house a number of Curtiss F9C Sparrowhawk biplane fighters. These airborne aircraft carriers operated successfully for several years. [ 1 ] These airships utilized an internal hangar bay using a "trapeze" to hold the aircraft. [ 2 ] In the air launch role, a large carrier aircraft or mother ship carries a smaller payload aircraft to a launch point before releasing it. During World War II the Japanese Mitsubishi G4M bomber was used to carry the rocket-powered Yokosuka MXY7 Ohka aircraft, used for kamikaze attacks, within range of a target ship. Germany also planned a jet-carrying bomber, called the Daimler-Benz Project C . In the US, NASA has used converted bombers as launch platforms for experimental aircraft . Notable among these was the use during the 1960s of a modified Boeing B-52 Stratofortress for the repeated launching of the North American X-15 . Experiments on air launching the Shuttle were carried out with the test frame Enterprise , but none of the Space Shuttle fleet was launched in this way once the Space Shuttle program was commenced. In a captive carry arrangement the payload craft, such as a rocket , missile , aeroplane or spaceplane , does not separate from the carrier aircraft. Captive carry is typically used to conduct initial testing on a new airframe or system, before it is ready for free flight [ 3 ] [ 4 ] [ 5 ] Captive carry is sometimes also used to transport an aircraft or spacecraft on a ferry flight . Notable examples include: Some large long-range aircraft have been modified as motherships in order to carry parasite aircraft which support the mothership by extending its role, for example for reconnaissance, or acting in a support role such as fighter defence. [ 7 ] [ 8 ] The first experiments with rigid airships to launch and recover fighters were carried out during World War I. The British experimented with the 23-class airships from that time. Then in the 1920s, as part of the "Airship Development Programme", they used the R33 for experiments. 
A de Havilland Humming Bird light aeroplane with a hook fitted was slung beneath it. [ 9 ] In October 1925 Squadron Leader Rollo Haig, was released from the R33, and then reattached. [ 10 ] Later that year, the attempt was repeated and the Humming Bird remained attached until the airship landed. In 1926, it carried two Gloster Grebe fighters releasing them at the Pulham and Cardington airship stations. [ 11 ] In the U.S., USS Los Angeles (ZR-3) , used for prototype testing for the Akron and Macon airborne aircraft carriers. During World War II the Soviet Tupolev-Vakhmistrov Zveno project developed converted Tupolev TB-1 and TB-3 aircraft to carry and launch up to five smaller craft, typically in roles such as fighter escort or fighter-bomber. During the early days of the jet age, fighter aircraft could not fly long distances and still match point defence fighters or interceptors in dogfighting. The solution was long-range bombers that would carry or tow their escort fighters. B-29 Superfortress and B-36 Peacemaker bombers were tested as carriers for the RF-84K Thunderflash ( FICON project ) and XF-85 Goblin fighters. [ 7 ] In November 2014, the U.S. Defense Advanced Research Projects Agency (DARPA) requested industry proposals for a system in which small unmanned aerial vehicles (UAVs) would be launched and recovered by their existing conventional large aircraft, including the B-52 Stratofortress and B-1 Lancer bombers and C-130 Hercules and C-17 Globemaster III transports. [ 12 ] In a composite aircraft, two or more component aircraft take off as a single unit and later separate. The British Short S.21 Maia experimental flying boat served as the mother ship component of the Short Mayo Composite two-plane maritime trans-Atlantic project design in the 1930s. [ 7 ] [ 13 ] The mother ship concept was used in Moon landings performed in the 1960s. Both the 1962 American Ranger and the 1966 Soviet Luna uncrewed landers were spherical capsules designed to be ejected at the last moment from mother ships that carried them to the Moon, and crashed onto its surface. In the crewed Apollo program , astronauts in the Lunar Module left the Command/Service Module mother ship in lunar orbit, descended to the surface, and returned to dock in a lunar orbit rendezvous with the mother ship once more for the return to Earth. [ 14 ] The Scaled Composites White Knight series of aircraft are designed to launch spacecraft which they carry underneath them. There have been numerous sightings of unidentified flying objects (UFOs) claimed to be mother ships, many in the U.S. during the summer of 1947. A woman in Palmdale, California , was quoted by contemporary press as describing a "mother saucer (with a) bunch of little saucers playing around it". [ 15 ] The term mothership was also popularized in UFO lore by contactee George Adamski , who claimed in the 1950s to sometimes see large cigar-shaped Venusian motherships, out of which flew smaller-sized flying saucer scout ships. Adamski claimed to have met and befriended the pilots of these scout ships, including a Venusian named Orthon. [ 16 ] The concept of a mother ship also occurs in science fiction , extending the idea to spaceships that serve as the equivalent of flagships among a fleet. In this context, mother ship is often spelled as one word: mothership . A mothership may be large enough that its body contains a station for the rest of the fleet. [ 17 ] Examples include the large craft in Close Encounters of the Third Kind and Battlestar Galactica . 
In many Asian languages , such as Chinese , Japanese , Korean and Indonesian , the word mothership ( Chinese : 母舰 , Japanese : 母艦 , Korean : 모함 , Indonesian : Kapal induk , literally "mother" + "(war)ship") typically refers to an aircraft carrier , which is translated as "aircraft/aviation mothership" ( Chinese : 航空母舰 , Japanese : 航空母艦 , Korean : 항공모함 , Malay : Kapal induk pesawat udara ).
https://en.wikipedia.org/wiki/Mother_ship
In computing , the motherboard form factor is the specification of a motherboard – the dimensions, power supply type, location of mounting holes, number of ports on the back panel, etc. Specifically, in the IBM PC compatible industry, standard form factors ensure that parts are interchangeable across competing vendors and generations of technology, while in enterprise computing, form factors ensure that server modules fit into existing rackmount systems. Traditionally, the most significant specification is for that of the motherboard, which generally dictates the overall size of the case . Small form factors have been developed and implemented. A PC motherboard is the main circuit board within a typical desktop computer , laptop or server . Its main functions are as follows: As new generations of components have been developed, the standards of motherboards have changed too. For example, the introduction of AGP and, more recently, PCI Express have influenced motherboard design. However, the standardized size and layout of motherboards have changed much more slowly and are controlled by their own standards. The list of components required on a motherboard changes far more slowly than the components themselves. For example, north bridge microchips have changed many times since their introduction with many manufacturers bringing out their own versions, but in terms of form factor standards, provisions for north bridges have remained fairly static for many years. Although it is a slower process, form factors do evolve regularly in response to changing demands. IBM's long-standing standard, AT (Advanced Technology), was superseded in 1995 by the current industry standard ATX (Advanced Technology Extended), [ 1 ] which still governs the size and design of the motherboard in most modern PCs. The latest update to the ATX standard was released in 2007. A divergent standard by chipset manufacturer VIA called EPIA (also known as ITX, and not to be confused with EPIC) is based upon smaller form factors and its own standards. Differences between form factors are most apparent in terms of their intended market sector, and involve variations in size, design compromises and typical features. Most modern computers have very similar requirements, so form factor differences tend to be based upon subsets and supersets of these. For example, a desktop computer may require more sockets for maximum flexibility and many optional connectors and other features on board, whereas a computer to be used in a multimedia system may need to be optimized for heat and size, with additional plug-in cards being less common. The smallest motherboards may sacrifice CPU flexibility in favor of a fixed manufacturer's choice. The E-ATX form factor is not standardized and may vary according to the motherboard manufacturer. [ 2 ] PC/104 is an embedded computer standard which defines both a form factor and computer bus. PC/104 is intended for embedded computing environments. Single-board computers built to this form factor are often sold by COTS vendors, which benefits users who want a customized rugged system, without months of design and paper work. The PC/104 form factor was standardized by the PC/104 Consortium in 1992. [ 5 ] An IEEE standard corresponding to PC/104 was drafted as IEEE P996.1, but never ratified.
[ 6 ] The 5.75 × 8.0 in Embedded Board eXpandable (EBX) specification, which was derived from Ampro's proprietary Little Board form-factor, resulted from a collaboration between Ampro and Motorola Computer Group . As compared with PC/104 modules, these larger (but still reasonably embeddable) SBCs tend to have everything of a full PC on them, including application oriented interfaces like audio, analog, or digital I/O in many cases. Also it's much easier to fit Pentium CPUs, whereas it's a tight squeeze (or expensive) to do so on a PC/104 SBC. Typically, EBX SBCs contain: the CPU; upgradeable RAM subassemblies (e.g., DIMM); Flash memory for solid state drive; multiple USB, serial, and parallel ports; onboard expansion via a PC/104 module stack; off-board expansion via ISA and/or PCI buses (from the PC/104 connectors); networking interface (typically Ethernet); and video (typically CRT, LCD, and TV). Mini PC is a PC small form factor very close in size to an external CD or DVD drive . Mini PCs have proven popular for use as HTPCs .
https://en.wikipedia.org/wiki/Motherboard_form_factor
mothur is an open source software package for bioinformatics data processing. [ 7 ] The package is frequently used in the analysis of DNA from uncultured microbes. mothur is capable of processing data generated from several DNA sequencing methods including 454 pyrosequencing, Illumina HiSeq and MiSeq, Sanger, PacBio, and IonTorrent. [ 8 ] The first release of mothur occurred in 2009. [ 9 ] The release of mothur was announced in a publication in the journal Applied and Environmental Microbiology . [ 1 ] As of October 26, 2022 the article releasing mothur had been cited by around 15,000 other research studies.
https://en.wikipedia.org/wiki/Mothur
Motility is the ability of an organism to move independently using metabolic energy . This biological concept encompasses movement at various levels, from whole organisms to cells and subcellular components. Motility is observed in animals, microorganisms, and even some plant structures, playing crucial roles in activities such as foraging, reproduction, and cellular functions. It is genetically determined but can be influenced by environmental factors. In multicellular organisms, motility is facilitated by systems like the nervous and musculoskeletal systems, while at the cellular level, it involves mechanisms such as amoeboid movement and flagellar propulsion . These cellular movements can be directed by external stimuli, a phenomenon known as taxis. Examples include chemotaxis (movement along chemical gradients) and phototaxis (movement in response to light). Motility also includes physiological processes like gastrointestinal movements and peristalsis. Understanding motility is important in biology, medicine, and ecology, as it impacts processes ranging from bacterial behavior to ecosystem dynamics. Motility, the ability of an organism to move independently, using metabolic energy, [ 2 ] [ 3 ] can be contrasted with sessility , the state of organisms that do not possess a means of self-locomotion and are normally immobile. Motility differs from mobility , the ability of an object to be moved. The term vagility means a lifeform that can be moved but only passively; sessile organisms including plants and fungi often have vagile parts such as fruits, seeds, or spores which may be dispersed by other agents such as wind, water, or other organisms. [ 4 ] Motility is genetically determined , [ 5 ] but may be affected by environmental factors such as toxins . The nervous system and musculoskeletal system provide the majority of mammalian motility. [ 6 ] [ 7 ] [ 8 ] In addition to animal locomotion , most animals are motile, though some are vagile, described as having passive locomotion . Many bacteria and other microorganisms , including even some viruses , [ 9 ] and multicellular organisms are motile; some mechanisms of fluid flow in multicellular organs and tissue are also considered instances of motility, as with gastrointestinal motility . Motile marine animals are commonly called free-swimming, [ 10 ] [ 11 ] [ 12 ] and motile non- parasitic organisms are called free-living. [ 13 ] Motility includes an organism's ability to move food through its digestive tract . There are two types of intestinal motility – peristalsis and segmentation . [ 14 ] This motility is brought about by the contraction of smooth muscles in the gastrointestinal tract which mix the luminal contents with various secretions (segmentation) and move contents through the digestive tract from the mouth to the anus (peristalsis). [ 15 ] At the cellular level, different modes of movement exist: Many cells are not motile, for example Klebsiella pneumoniae and Shigella , or under specific circumstances such as Yersinia pestis at 37 °C. [ citation needed ] Events perceived as movements can be directed:
https://en.wikipedia.org/wiki/Motility
In physics , motion is the change in an object's position with respect to a reference point over a given time . Motion is mathematically described in terms of displacement , distance , velocity , acceleration , speed , and frame of reference to an observer, measuring the change in position of the body relative to that frame with a change in time. The branch of physics describing the motion of objects without reference to their cause is called kinematics , while the branch studying forces and their effect on motion is called dynamics . If an object is not in motion relative to a given frame of reference, it is said to be at rest , motionless , immobile , stationary , or to have a constant or time-invariant position with reference to its surroundings. Modern physics holds that, as there is no absolute frame of reference, Isaac Newton 's concept of absolute motion cannot be determined. [ 1 ] Everything in the universe can be considered to be in motion. [ 2 ] : 20–21 Motion applies to various physical systems: objects, bodies, matter particles , matter fields, radiation , radiation fields, radiation particles, curvature , and space-time . One can also speak of the motion of images, shapes, and boundaries. In general, the term motion signifies a continuous change in the position or configuration of a physical system in space. For example, one can talk about the motion of a wave or the motion of a quantum particle, where the configuration consists of the probabilities of the wave or particle occupying specific positions. In physics, the motion of massive bodies is described through two related sets of laws of mechanics: classical mechanics for super-atomic objects (larger than an atom, such as cars , projectiles , planets , cells , and humans ) and quantum mechanics for atomic and sub-atomic objects (such as helium , protons , and electrons ). Historically, Newton and Euler formulated the three laws of classical mechanics . If the resultant force F acting on a body or an object is not equal to zero, the body will have an acceleration a that is in the same direction as the resultant force. Classical mechanics is used for describing the motion of macroscopic objects moving at speeds significantly slower than the speed of light, from projectiles to parts of machinery , as well as astronomical objects , such as spacecraft , planets , stars , and galaxies . It produces very accurate results within these domains and is one of the oldest and largest subjects in science , engineering , and technology . Classical mechanics is fundamentally based on Newton's laws of motion . These laws describe the relationship between the forces acting on a body and the motion of that body. They were first compiled by Sir Isaac Newton in his work Philosophiæ Naturalis Principia Mathematica , which was first published on July 5, 1687. Newton's three laws of motion were the first to accurately provide a mathematical model for understanding orbiting bodies in outer space . This explanation unified the motion of celestial bodies and the motion of objects on Earth. Modern kinematics developed with the study of electromagnetism and refers all velocities v to their ratio to the speed of light c . Velocity is then interpreted as rapidity , the hyperbolic angle φ for which tanh φ = v/c .
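A brief numerical illustration of this rapidity relation, using only the definition above and arbitrary example speeds: under collinear boosts, rapidities add, so two velocities combine by adding their hyperbolic angles, and the result never exceeds c.

```python
import math

c = 299_792_458.0                      # speed of light, m/s

def rapidity(v):
    return math.atanh(v / c)           # phi such that tanh(phi) = v/c

def combine(v1, v2):
    # Collinear relativistic velocity addition via rapidity addition.
    return c * math.tanh(rapidity(v1) + rapidity(v2))

print(combine(0.8 * c, 0.8 * c) / c)   # ~0.9756, not 1.6: still below c
```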
Acceleration , the change of velocity over time, then changes rapidity according to Lorentz transformations . This part of mechanics is special relativity . Efforts to incorporate gravity into relativistic mechanics were made by W. K. Clifford and Albert Einstein . The development used differential geometry to describe a curved universe with gravity; the study is called general relativity . Quantum mechanics is a set of principles describing physical reality at the atomic level of matter ( molecules and atoms ) and the subatomic particles ( electrons , protons , neutrons , and even smaller elementary particles such as quarks ). These descriptions include the simultaneous wave-like and particle-like behavior of both matter and radiation energy as described in the wave–particle duality . [ 6 ] In classical mechanics, accurate measurements and predictions of the state of objects can be calculated, such as location and velocity . In quantum mechanics, due to the Heisenberg uncertainty principle , the complete state of a subatomic particle, such as its location and velocity, cannot be simultaneously determined. [ 7 ] In addition to describing the motion of atomic level phenomena, quantum mechanics is useful in understanding some large-scale phenomena such as superfluidity , superconductivity , and biological systems , including the function of smell receptors and the structures of protein . [ 8 ] Humans, like all known things in the universe, are in constant motion; [ 2 ] : 8–9 however, aside from obvious movements of the various external body parts and locomotion , humans are in motion in a variety of ways that are more difficult to perceive . Many of these "imperceptible motions" are only perceivable with the help of special tools and careful observation. The larger scales of imperceptible motions are difficult for humans to perceive for two reasons: Newton's laws of motion (particularly the third), which prevents the feeling of motion on a mass to which the observer is connected, and the lack of an obvious frame of reference that would allow individuals to easily see that they are moving. [ 9 ] The smaller scales of these motions are too small to be detected conventionally with human senses . Spacetime (the fabric of the universe) is expanding , meaning everything in the universe is stretching, like a rubber band . This motion is the most obscure, not involving physical movement but a fundamental change in the universe's nature. The primary source of verification of this expansion was provided by Edwin Hubble who demonstrated that all galaxies and distant astronomical objects were moving away from Earth, known as Hubble's law , predicted by a universal expansion. [ 10 ] The Milky Way Galaxy is moving through space and many astronomers believe the velocity of this motion to be approximately 600 kilometres per second (1,340,000 mph) relative to the observed locations of other nearby galaxies. Another reference frame is provided by the Cosmic microwave background . This frame of reference indicates that the Milky Way is moving at around 582 kilometres per second (1,300,000 mph). [ 11 ] [ failed verification ] The Milky Way is rotating around its dense Galactic Center , thus the Sun is moving in a circle within the galaxy 's gravity . Away from the central bulge, or outer rim, the typical stellar velocity is between 210 and 240 kilometres per second (470,000 and 540,000 mph). [ 12 ] All planets and their moons move with the Sun. Thus, the Solar System is in motion. 
The Earth is rotating or spinning around its axis . This is evidenced by day and night , at the equator the earth has an eastward velocity of 0.4651 kilometres per second (1,040 mph). [ 13 ] The Earth is also orbiting around the Sun in an orbital revolution . A complete orbit around the Sun takes one year , or about 365 days; it averages a speed of about 30 kilometres per second (67,000 mph). [ 14 ] The Theory of Plate tectonics tells us that the continents are drifting on convection currents within the mantle , causing them to move across the surface of the planet at the slow speed of approximately 2.54 centimetres (1 in) per year. [ 15 ] [ 16 ] However, the velocities of plates range widely. The fastest-moving plates are the oceanic plates, with the Cocos Plate advancing at a rate of 75 millimetres (3.0 in) per year [ 17 ] and the Pacific Plate moving 52–69 millimetres (2.0–2.7 in) per year. At the other extreme, the slowest-moving plate is the Eurasian Plate , progressing at a typical rate of about 21 millimetres (0.83 in) per year. The human heart is regularly contracting to move blood throughout the body. Through larger veins and arteries in the body, blood has been found to travel at approximately 0.33 m/s. Though considerable variation exists, and peak flows in the venae cavae have been found between 0.1 and 0.45 metres per second (0.33 and 1.48 ft/s). [ 18 ] additionally, the smooth muscles of hollow internal organs are moving. The most familiar would be the occurrence of peristalsis , which is where digested food is forced throughout the digestive tract . Though different foods travel through the body at different rates, an average speed through the human small intestine is 3.48 kilometres per hour (2.16 mph). [ 19 ] The human lymphatic system is also constantly causing movements of excess fluids , lipids , and immune system related products around the body. The lymph fluid has been found to move through a lymph capillary of the skin at approximately 0.0000097 m/s. [ 20 ] The cells of the human body have many structures and organelles that move throughout them. Cytoplasmic streaming is a way in which cells move molecular substances throughout the cytoplasm , [ 21 ] various motor proteins work as molecular motors within a cell and move along the surface of various cellular substrates such as microtubules , and motor proteins are typically powered by the hydrolysis of adenosine triphosphate (ATP), and convert chemical energy into mechanical work. [ 22 ] Vesicles propelled by motor proteins have been found to have a velocity of approximately 0.00000152 m/s. [ 23 ] According to the laws of thermodynamics , all particles of matter are in constant random motion as long as the temperature is above absolute zero . Thus the molecules and atoms that make up the human body are vibrating, colliding, and moving. This motion can be detected as temperature; higher temperatures, which represent greater kinetic energy in the particles, feel warm to humans who sense the thermal energy transferring from the object being touched to their nerves. Similarly, when lower temperature objects are touched, the senses perceive the transfer of heat away from the body as a feeling of cold. [ 24 ] Within the standard atomic orbital model , electrons exist in a region around the nucleus of each atom. This region is called the electron cloud . According to Bohr's model of the atom, electrons have a high velocity , and the larger the nucleus they are orbiting the faster they would need to move. 
If electrons were to move about the electron cloud in strict paths the same way planets orbit the Sun, then electrons would be required to do so at speeds that would far exceed the speed of light. However, there is no reason that one must confine oneself to this strict conceptualization (that electrons move in paths the same way macroscopic objects do), rather one can conceptualize electrons to be 'particles' that capriciously exist within the bounds of the electron cloud. [ 25 ] Inside the atomic nucleus , the protons and neutrons are also probably moving around due to the electrical repulsion of the protons and the presence of angular momentum of both particles. [ 26 ] Light moves at a speed of 299,792,458 m/s, or 299,792.458 kilometres per second (186,282.397 mi/s), in a vacuum. The speed of light in vacuum (or c {\displaystyle c} ) is also the speed of all massless particles and associated fields in a vacuum, and it is the upper limit on the speed at which energy, matter, information or causation can travel. The speed of light in vacuum is thus the upper limit for speed for all physical systems. In addition, the speed of light is an invariant quantity: it has the same value, irrespective of the position or speed of the observer. This property makes the speed of light c a natural measurement unit for speed and a fundamental constant of nature. In 2019, the speed of light was redefined alongside all seven SI base units using what it calls "the explicit-constant formulation", where each "unit is defined indirectly by specifying explicitly an exact value for a well-recognized fundamental constant", as was done for the speed of light. A new, but completely equivalent, wording of the metre's definition was proposed: "The metre, symbol m, is the unit of length; its magnitude is set by fixing the numerical value of the speed of light in vacuum to be equal to exactly 299 792 458 when it is expressed in the SI unit m s −1 ." [ 27 ] This implicit change to the speed of light was one of the changes that was incorporated in the 2019 revision of the SI , also termed the New SI . [ 28 ] Some motion appears to an observer to exceed the speed of light. Bursts of energy moving out along the relativistic jets emitted from these objects can have a proper motion that appears greater than the speed of light. All of these sources are thought to contain a black hole , responsible for the ejection of mass at high velocities. Light echoes can also produce apparent superluminal motion. [ 29 ] This occurs owing to how motion is often calculated at long distances; oftentimes calculations fail to account for the fact that the speed of light is finite. When measuring the movement of distant objects across the sky, there is a large time delay between what has been observed and what has occurred, due to the large distance the light from the distant object has to travel to reach us. The error in the above naive calculation comes from the fact that when an object has a component of velocity directed towards the Earth, as the object moves closer to the Earth that time delay becomes smaller. This means that the apparent speed as calculated above is greater than the actual speed. Correspondingly, if the object is moving away from the Earth, the above calculation underestimates the actual speed. [ 30 ]
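The geometry behind apparent superluminal motion can be made explicit with the standard formula for the apparent transverse speed, v_app = v·sin θ / (1 − (v/c)·cos θ), where θ is the angle between the source's velocity and the line of sight. The jet speed and angle below are illustrative values only.

```python
import math

C = 299_792_458.0             # speed of light, m/s

def apparent_transverse_speed(v, theta):
    """Apparent speed across the sky of a source moving at speed v
    at angle theta (radians) to the line of sight."""
    beta = v / C
    return v * math.sin(theta) / (1.0 - beta * math.cos(theta))

v = 0.99 * C                  # illustrative relativistic jet speed
theta = math.radians(10.0)    # small angle toward the observer
print(apparent_transverse_speed(v, theta) / C)   # ~6.9: appears superluminal
```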
https://en.wikipedia.org/wiki/Motion
In geometry , a motion is an isometry of a metric space . For instance, a plane equipped with the Euclidean distance metric is a metric space in which a mapping associating congruent figures is a motion. [ 1 ] More generally, the term motion is a synonym for surjective isometry in metric geometry, [ 2 ] including elliptic geometry and hyperbolic geometry . In the latter case, hyperbolic motions provide an approach to the subject for beginners. Motions can be divided into direct and indirect motions. Direct, proper or rigid motions are motions like translations and rotations that preserve the orientation of a chiral shape . Indirect, or improper motions are motions like reflections , glide reflections and Improper rotations that invert the orientation of a chiral shape . Some geometers define motion in such a way that only direct motions are motions [ citation needed ] . In differential geometry , a diffeomorphism is called a motion if it induces an isometry between the tangent space at a manifold point and the tangent space at the image of that point. [ 3 ] [ 4 ] Given a geometry, the set of motions forms a group under composition of mappings. This group of motions is noted for its properties. For example, the Euclidean group is noted for the normal subgroup of translations . In the plane, a direct Euclidean motion is either a translation or a rotation , while in space every direct Euclidean motion may be expressed as a screw displacement according to Chasles' theorem . When the underlying space is a Riemannian manifold , the group of motions is a Lie group . Furthermore, the manifold has constant curvature if and only if, for every pair of points and every isometry, there is a motion taking one point to the other for which the motion induces the isometry. [ 5 ] The idea of a group of motions for special relativity has been advanced as Lorentzian motions. For example, fundamental ideas were laid out for a plane characterized by the quadratic form x 2 − y 2 {\displaystyle \ x^{2}-y^{2}\ } in American Mathematical Monthly . [ 6 ] The motions of Minkowski space were described by Sergei Novikov in 2006: [ 7 ] An early appreciation of the role of motion in geometry was given by Alhazen (965 to 1039). His work "Space and its Nature" [ 8 ] uses comparisons of the dimensions of a mobile body to quantify the vacuum of imaginary space. He was criticised by Omar Khayyam who pointed that Aristotle had condemned the use of motion in geometry. [ 9 ] In the 19th century Felix Klein became a proponent of group theory as a means to classify geometries according to their "groups of motions". He proposed using symmetry groups in his Erlangen program , a suggestion that was widely adopted. He noted that every Euclidean congruence is an affine mapping , and each of these is a projective transformation ; therefore the group of projectivities contains the group of affine maps, which in turn contains the group of Euclidean congruences. The term motion , shorter than transformation , puts more emphasis on the adjectives: projective, affine, Euclidean. The context was thus expanded, so much that "In topology , the allowed movements are continuous invertible deformations that might be called elastic motions." [ 10 ] The science of kinematics is dedicated to rendering physical motion into expression as mathematical transformation. Frequently the transformation can be written using vector algebra and linear mapping. 
A simple example is a turn written as a complex number multiplication: z ↦ ωz, where ω = cos θ + i sin θ and i² = −1. Rotation in space is achieved by the use of quaternions , and Lorentz transformations of spacetime by the use of biquaternions . Early in the 20th century, hypercomplex number systems were examined. Later their automorphism groups led to exceptional groups such as G2 . In the 1890s logicians were reducing the primitive notions of synthetic geometry to an absolute minimum. Giuseppe Peano and Mario Pieri used the expression motion for the congruence of point pairs. Alessandro Padoa celebrated the reduction of primitive notions to merely point and motion in his report to the 1900 International Congress of Philosophy . It was at this congress that Bertrand Russell was exposed to continental logic through Peano. In his book Principles of Mathematics (1903), Russell considered a motion to be a Euclidean isometry that preserves orientation . [ 11 ] In 1914 D. M. Y. Sommerville used the idea of a geometric motion to establish the idea of distance in hyperbolic geometry when he wrote Elements of Non-Euclidean Geometry . [ 12 ] László Rédei gives a set of axioms of motion; [ 13 ] axioms 2 to 4 imply that motions form a group , and axiom 5 means that the group of motions provides group actions on the space that are transitive, so that there is a motion that maps every line to every line.
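Returning to the complex-number turn above, a quick numerical check confirms that z ↦ ωz is a motion in the sense defined here: distances between points are preserved. The angle and sample points below are arbitrary.

```python
import cmath

theta = 0.7                       # arbitrary rotation angle (radians)
omega = cmath.exp(1j * theta)     # omega = cos(theta) + i*sin(theta)

def turn(z):
    return omega * z              # direct motion: rotation about the origin

p, q = 3 + 4j, -1 + 2j            # two sample points in the plane
print(abs(p - q))                 # distance before the motion
print(abs(turn(p) - turn(q)))     # identical distance afterwards
```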
https://en.wikipedia.org/wiki/Motion_(geometry)
In video compression technology , motion coding is a technique that can be viewed as an extension of the standard block-matching techniques of other MPEG standards to image sequences of arbitrary shape. Advanced motion compensation, such as overlapped motion compensation and the coding of motion vectors for 8×8 blocks, can be used.
https://en.wikipedia.org/wiki/Motion_coding
Motion compensation in computing is an algorithmic technique used to predict a frame in a video given the previous and/or future frames by accounting for motion of the camera and/or objects in the video. It is employed in the encoding of video data for video compression , for example in the generation of MPEG-2 files. Motion compensation describes a picture in terms of the transformation of a reference picture to the current picture. The reference picture may be previous in time or even from the future. When images can be accurately synthesized from previously transmitted/stored images, the compression efficiency can be improved. Motion compensation is one of the two key video compression techniques used in video coding standards , along with the discrete cosine transform (DCT). Most video coding standards, such as the H.26x and MPEG formats, typically use motion-compensated DCT hybrid coding, [ 1 ] [ 2 ] known as block motion compensation (BMC) or motion-compensated DCT (MC DCT). Motion compensation exploits the fact that, often, for many frames of a movie, the only difference between one frame and another is the result of either the camera moving or an object in the frame moving. In reference to a video file, this means much of the information that represents one frame will be the same as the information used in the next frame. Using motion compensation, a video stream will contain some full (reference) frames; the only information stored for the frames in between is then the information needed to transform the previous frame into the next frame. As a simple illustration, consider two successive frames captured from the movie Elephants Dream : the motion-compensated difference between the two frames contains significantly less detail than the frames themselves, and thus compresses much better. The information required to encode the compensated frame is therefore much smaller than for a plain difference frame. It is also possible to encode the information using a plain difference image, at the cost of lower compression efficiency but with reduced coding complexity; in fact, motion-compensated coding (together with motion estimation and motion compensation) accounts for more than 90% of encoding complexity. In MPEG , images are predicted from previous frames ( P frames ) or bidirectionally from previous and future frames ( B frames ). B frames are more complex because the image sequence must be transmitted and stored out of order so that the future frame is available to generate the B frames. [ 3 ] After predicting frames using motion compensation, the coder finds the residual, which is then compressed and transmitted. In global motion compensation , the motion model reflects camera motions such as panning, tilting, zooming, and rotation. It works best for still scenes without moving objects. Global motion compensation has several advantages, chiefly that it models the dominant motion of a scene with only a few parameters whose share of the bit-rate is negligible. MPEG-4 ASP supports global motion compensation with three reference points, although some implementations can only make use of one. A single reference point only allows for translational motion, which, for its relatively large performance cost, provides little advantage over block-based motion compensation. Moving objects within a frame are not sufficiently represented by global motion compensation. Thus, local motion estimation is also needed.
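The claim that a motion-compensated residual carries less information than a raw frame difference can be sketched directly: shift the reference frame by the estimated motion vector before subtracting. The example below fabricates a small frame and a purely translational motion, so the compensated residual is exactly zero; real content leaves a small residual instead.

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64)).astype(np.int16)

# Current frame: the reference shifted 3 px right and 1 px down (pure translation).
motion_vector = (1, 3)                       # (rows, cols)
current = np.roll(reference, motion_vector, axis=(0, 1))

raw_residual = current - reference                          # what plain differencing must encode
predicted = np.roll(reference, motion_vector, axis=(0, 1))  # motion-compensated prediction
mc_residual = current - predicted                           # what MC encoding must encode

print(np.abs(raw_residual).mean())   # large: the frames differ almost everywhere
print(np.abs(mc_residual).mean())    # 0.0: the prediction is exact in this toy case
```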
Block motion compensation (BMC), also known as motion-compensated discrete cosine transform (MC DCT), is the most widely used motion compensation technique. [ 2 ] In BMC, the frames are partitioned in blocks of pixels (e.g. macro-blocks of 16×16 pixels in MPEG ). Each block is predicted from a block of equal size in the reference frame. The blocks are not transformed in any way apart from being shifted to the position of the predicted block. This shift is represented by a motion vector . To exploit the redundancy between neighboring block vectors, (e.g. for a single moving object covered by multiple blocks) it is common to encode only the difference between the current and previous motion vector in the bit-stream. The result of this differentiating process is mathematically equivalent to a global motion compensation capable of panning. Further down the encoding pipeline, an entropy coder will take advantage of the resulting statistical distribution of the motion vectors around the zero vector to reduce the output size. It is possible to shift a block by a non-integer number of pixels, which is called sub-pixel precision . The in-between pixels are generated by interpolating neighboring pixels. Commonly, half-pixel or quarter pixel precision ( Qpel , used by H.264 and MPEG-4/ASP) is used. The computational expense of sub-pixel precision is much higher due to the extra processing required for interpolation and on the encoder side, a much greater number of potential source blocks to be evaluated. The main disadvantage of block motion compensation is that it introduces discontinuities at the block borders (blocking artifacts). These artifacts appear in the form of sharp horizontal and vertical edges which are easily spotted by the human eye and produce false edges and ringing effects (large coefficients in high frequency sub-bands) due to quantization of coefficients of the Fourier-related transform used for transform coding of the residual frames [ 4 ] Block motion compensation divides up the current frame into non-overlapping blocks, and the motion compensation vector tells where those blocks come from (a common misconception is that the previous frame is divided up into non-overlapping blocks, and the motion compensation vectors tell where those blocks move to ). The source blocks typically overlap in the source frame. Some video compression algorithms assemble the current frame out of pieces of several different previously transmitted frames. Frames can also be predicted from future frames. The future frames then need to be encoded before the predicted frames and thus, the encoding order does not necessarily match the real frame order. Such frames are usually predicted from two directions, i.e. from the I- or P-frames that immediately precede or follow the predicted frame. These bidirectionally predicted frames are called B-frames . A coding scheme could, for instance, be IBBPBBPBBPBB. Further, the use of triangular tiles has also been proposed for motion compensation. Under this scheme, the frame is tiled with triangles, and the next frame is generated by performing an affine transformation on these triangles. [ 5 ] Only the affine transformations are recorded/transmitted. This is capable of dealing with zooming, rotation, translation etc. Variable block-size motion compensation (VBSMC) is the use of BMC with the ability for the encoder to dynamically select the size of the blocks. 
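The search that produces these motion vectors can be sketched as an exhaustive sum-of-absolute-differences (SAD) minimisation over a small window; real encoders use much faster search strategies and sub-pixel refinement, so this is only a minimal illustration.

```python
import numpy as np

def best_motion_vector(ref, cur, top, left, block=16, search=8):
    """Exhaustive SAD search for the motion vector of one block of `cur`."""
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue                      # candidate block falls outside the frame
            candidate = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - candidate).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

Applied to every block of the current frame, this yields a motion-vector field whose vector differences are then entropy-coded, as described above.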
When coding video, the use of larger blocks can reduce the number of bits needed to represent the motion vectors, while the use of smaller blocks can result in a smaller amount of prediction residual information to encode. Other areas of work have examined the use of variable-shape feature metrics, beyond block boundaries, from which interframe vectors can be calculated. [ 6 ] Older designs such as H.261 and MPEG-1 video typically use a fixed block size, while newer ones such as H.263 , MPEG-4 Part 2 , H.264/MPEG-4 AVC , and VC-1 give the encoder the ability to dynamically choose what block size will be used to represent the motion. Overlapped block motion compensation (OBMC) is a good solution to these problems because it not only increases prediction accuracy but also avoids blocking artifacts. When using OBMC, blocks are typically twice as big in each dimension and overlap quadrant-wise with all 8 neighbouring blocks. Thus, each pixel belongs to 4 blocks. In such a scheme, there are 4 predictions for each pixel, which are combined in a weighted mean. For this purpose, blocks are associated with a window function that has the property that the sum of the 4 overlapped windows is equal to 1 everywhere. Studies of methods for reducing the complexity of OBMC have shown that the contribution to the window function is smallest for the diagonally-adjacent block. Reducing the weight for this contribution to zero and increasing the other weights by an equal amount leads to a substantial reduction in complexity without a large penalty in quality. In such a scheme, each pixel then belongs to 3 blocks rather than 4, and rather than using 8 neighboring blocks, only 4 are used for each block to be compensated. Such a scheme is found in the H.263 Annex F Advanced Prediction mode. In motion compensation, quarter- or half-samples are interpolated sub-samples produced by fractional motion vectors. Based on the vectors and full samples, the sub-samples can be calculated using bicubic or bilinear 2-D filtering; see subclause 8.4.2.2, "Fractional sample interpolation process", of the H.264 standard. Motion compensation is also utilized in stereoscopic video coding . In video, time is often considered as the third dimension, so still-image coding techniques can be expanded to an extra dimension. JPEG 2000 uses wavelets, and these can also be used to encode motion without gaps between blocks in an adaptive way. Fractional-pixel affine transformations lead to bleeding between adjacent pixels; if no higher internal resolution is used, the delta images mostly have to counteract this smearing of the image. The delta image can also be encoded as wavelets, so that the borders of the adaptive blocks match. 2D+Delta encoding techniques utilize H.264- and MPEG-2-compatible coding and can use motion compensation to compress between stereoscopic images. A precursor to the concept of motion compensation dates back to 1929, when R.D. Kell in Britain proposed the concept of transmitting only the portions of an analog video scene that changed from frame to frame. In 1959, the concept of inter-frame motion compensation was proposed by NHK researchers Y. Taki, M. Hatori and S. Tanaka, who proposed predictive inter-frame video coding in the temporal dimension . [ 7 ] Practical motion-compensated video compression emerged with the development of motion-compensated DCT (MC DCT) coding, [ 8 ] also called block motion compensation (BMC) or DCT motion compensation.
This is a hybrid coding algorithm, [ 7 ] which combines two key data compression techniques: discrete cosine transform (DCT) coding [ 8 ] in the spatial dimension , and predictive motion compensation in the temporal dimension . [ 7 ] DCT coding is a lossy block compression transform coding technique that was first proposed by Nasir Ahmed , who initially intended it for image compression , in 1972. [ 9 ] In 1974, Ali Habibi at the University of Southern California introduced hybrid coding, [ 10 ] [ 11 ] which combines predictive coding with transform coding. [ 7 ] [ 12 ] However, his algorithm was initially limited to intra-frame coding in the spatial dimension. In 1975, John A. Roese and Guner S. Robinson extended Habibi's hybrid coding algorithm to the temporal dimension, using transform coding in the spatial dimension and predictive coding in the temporal dimension, developing inter-frame motion-compensated hybrid coding. [ 7 ] [ 13 ] For the spatial transform coding, they experimented with the DCT and the fast Fourier transform (FFT), developing inter-frame hybrid coders for both, and found that the DCT is the most efficient due to its reduced complexity, capable of compressing image data down to 0.25- bit per pixel for a videotelephone scene with image quality comparable to an intra-frame coder requiring 2-bit per pixel. [ 14 ] [ 13 ] In 1977, Wen-Hsiung Chen developed a fast DCT algorithm with C.H. Smith and S.C. Fralick. [ 15 ] In 1979, Anil K. Jain and Jaswant R. Jain further developed motion-compensated DCT video compression, [ 16 ] [ 7 ] also called block motion compensation. [ 7 ] This led to Chen developing a practical video compression algorithm, called motion-compensated DCT or adaptive scene coding, in 1981. [ 7 ] Motion-compensated DCT later became the standard coding technique for video compression from the late 1980s onwards. [ 17 ] [ 2 ] The first digital video coding standard was H.120 , developed by the CCITT (now ITU-T) in 1984. [ 18 ] H.120 used motion-compensated DPCM coding, [ 7 ] which was inefficient for video coding, [ 17 ] and H.120 was thus impractical due to low performance. [ 18 ] The H.261 standard was developed in 1988 based on motion-compensated DCT compression, [ 17 ] [ 2 ] and it was the first practical video coding standard. [ 18 ] Since then, motion-compensated DCT compression has been adopted by all the major video coding standards (including the H.26x and MPEG formats) that followed. [ 17 ] [ 2 ]
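As an aside on the OBMC scheme described earlier, the requirement that overlapping windows sum to one can be checked with a short one-dimensional sketch using a simple triangular (bilinear) window; standards such as H.263 define their own exact weights, so the window below is only illustrative.

```python
import numpy as np

N = 8                                   # block size
i = np.arange(2 * N)
# Triangular window over a block of width 2N (twice the block size).
window = np.where(i < N, (i + 0.5) / N, (2 * N - i - 0.5) / N)

# Windows placed every N samples must sum to 1 in the interior,
# so the weighted block predictions blend without visible seams.
line = np.zeros(6 * N)
for start in range(0, 5 * N, N):
    line[start:start + 2 * N] += window
print(line[N:5 * N])                    # all ones away from the boundaries
```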
https://en.wikipedia.org/wiki/Motion_compensation
A motion detector is an electrical device that utilizes a sensor to detect nearby motion ( motion detection ). Such a device is often integrated as a component of a system that automatically performs a task or alerts a user of motion in an area. They form a vital component of security, automated lighting control , home control, energy efficiency , and other useful systems. It can be achieved by either mechanical or electronic methods. [ 1 ] When it is done by natural organisms, it is called motion perception . An active electronic motion detector contains an optical, microwave, or acoustic sensor, as well as a transmitter. However, a passive contains only a sensor and only senses a signature from the moving object via emission or reflection. Changes in the optical, microwave or acoustic field in the device's proximity are interpreted by the electronics based on one of several technologies. Most low-cost motion detectors can detect motion at distances of about 4.6 metres (15 ft). Specialized systems are more expensive but have either increased sensitivity or much longer ranges. Tomographic motion detection systems can cover much larger areas because the radio waves it senses are at frequencies which penetrate most walls and obstructions, and are detected in multiple locations. Motion detectors have found wide use in commercial applications. One common application is activating automatic door openers in businesses and public buildings. Motion sensors are also widely used in lieu of a true occupancy sensor in activating street lights or indoor lights in walkways, such as lobbies and staircases. In such smart lighting systems, energy is conserved by only powering the lights for the duration of a timer, after which the person has presumably left the area. A motion detector may be among the sensors of a burglar alarm that is used to alert the home owner or security service when it detects the motion of a possible intruder. Such a detector may also trigger a security camera to record the possible intrusion. Motion controllers are also used for video game consoles as game controllers . A camera can also allow the body's movements to be used for control, such as in the Kinect system. Motion can be detected by monitoring changes in: Several types of motion detection are in wide use: Passive infrared (PIR) sensors are sensitive to a person's skin temperature through emitted black-body radiation at mid-infrared wavelengths, in contrast to background objects at room temperature. No energy is emitted from the sensor, thus the name passive infrared . [ 3 ] This distinguishes it from the electric eye for instance (not usually considered a motion detector ), in which the crossing of a person or vehicle interrupts a visible or infrared beam. These devices can detect objects, people, or animals by picking up one's infrared radiation. [ 4 ] The most basic forms of mechanical motion detection utilize a switch or trigger. For example, the keys of a typewriter use a mechanical method of detecting motion, where each key is a switch that is either off or on, and each letter that appears is a result of the key's motion. These detect motion through the principle of Doppler radar , and are similar to a radar speed gun . A continuous wave of microwave radiation is emitted, and phase shifts in the reflected microwaves due to motion of an object toward (or away from) the receiver result in a heterodyne signal at a low audio frequency . 
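For such a Doppler sensor, the beat (heterodyne) frequency produced by a radially moving target is approximately f_d = 2·v·f0/c, which for walking speeds indeed falls in the low audio range. The carrier frequency below is a typical value for microwave modules but is only an illustrative assumption.

```python
C = 299_792_458.0          # speed of light, m/s

def doppler_beat_hz(speed_mps, carrier_hz):
    """Approximate heterodyne frequency for a target moving radially."""
    return 2.0 * speed_mps * carrier_hz / C

# Illustrative: ~1.4 m/s walking speed, 10.5 GHz microwave carrier.
print(doppler_beat_hz(1.4, 10.5e9))   # roughly 98 Hz, i.e. a low audio frequency
```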
An ultrasonic transducer emits an ultrasonic wave (sound at a frequency higher than a human ear can hear) and receives reflections from nearby objects. [ 5 ] Exactly as in Doppler radar , heterodyne detection of the received field indicates motion. The detected doppler shift is also at low audio frequencies (for walking speeds) since the ultrasonic wavelength of around a centimeter is similar to the wavelengths used in microwave motion detectors. One potential drawback of ultrasonic sensors is that the sensor can be sensitive to motion in areas where coverage is undesired, for instance, due to reflections of sound waves around corners. [ 6 ] Such extended coverage may be desirable for lighting control, where the goal is the detection of any occupancy in an area, but for opening an automatic door, for example, a sensor selective to traffic in the path toward the door is superior. These systems sense disturbances to radio waves as they pass from node to node of a mesh network. They have the ability to detect over large areas completely because they can sense through walls and other obstructions. RF tomographic motion detection systems may use dedicated hardware, other wireless-capable devices or a combination of the two. Other wireless capable devices can act as nodes on the mesh after receiving a software update. [ 7 ] With the proliferation of low-cost digital cameras able to shoot video, it is possible to use the output of such a camera to detect motion in its field of view using software . [ 8 ] [ 9 ] This solution is particularly attractive when the intent is to record video triggered by motion detection, as no hardware beyond the camera and computer is needed. Since the observed field may be normally illuminated, this may be considered another passive technology. However, it can also be used together with near-infrared illumination to detect motion in the dark , that is, with the illumination at a wavelength undetectable by a human eye. More complex algorithms are necessary to detect motion when the camera itself is panning , or when a specific object's motion must be detected in a field containing other, irrelevant movement—for example, a painting surrounded by visitors in an art gallery . With a panning camera, models based on optical flow are used to distinguish between apparent background motion caused by the camera's movement and that of independently moving objects. [ 10 ] Photodetectors and infrared lighting elements can support digital screens to detect hand motions and gestures with the aid of machine learning algorithms. [ 11 ] Many modern motion detectors use combinations of different technologies. While combining multiple sensing technologies into one detector can help reduce false triggering, it does so at the expense of reduced detection probabilities and increased vulnerability. [ citation needed ] For example, many dual-tech sensors combine both a PIR sensor and a microwave sensor into one unit. For motion to be detected, both sensors must trip together. [ citation needed ] This lowers the probability of a false alarm since heat and light changes may trip the (passive infrared) PIR but not the microwave, or moving tree branches may trigger the microwave but not the PIR. If an intruder is able to fool either the PIR or microwave, however, the sensor will not detect it. [ citation needed ] Often, PIR technology is paired with another model to maximize accuracy and reduce energy use. 
[ citation needed ] PIR draws less energy than emissive microwave detection, and so many sensors are calibrated so that when the PIR sensor is tripped, it activates a microwave sensor. [ citation needed ] If the latter also picks up an intruder, then the alarm is sounded.
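Returning to the software-based approach mentioned above, camera motion detection can be as simple as thresholding the difference between successive frames. The sketch below uses plain NumPy arrays and invented threshold values; no particular camera API is assumed.

```python
import numpy as np

def motion_detected(prev_frame, frame, pixel_thresh=25, area_thresh=0.01):
    """Flag motion when enough pixels change between two grayscale frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = (diff > pixel_thresh).mean()     # fraction of changed pixels
    return changed > area_thresh

# Illustrative frames: a bright square "moves" a few pixels to the right.
a = np.zeros((120, 160), dtype=np.uint8); a[40:60, 40:60] = 200
b = np.zeros_like(a);                     b[40:60, 48:68] = 200
print(motion_detected(a, b))   # True
```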
https://en.wikipedia.org/wiki/Motion_detector
Motion graphic design , also known as motion design , is a subset of graphic design which combines design with animation and/or filmmaking , video production , and filmic techniques . [ 1 ] Examples include kinetic typography and graphics used in film and television opening sequences, and station identification logos of some television channels . Both design principles and animation principles are important for good motion design. [ 2 ] Some motion designers start out as traditional graphic designers and later incorporate motion into their skillsets, while others have come from filmmaking , editing , or animation backgrounds, as these fields share a number of overlapping skills. [ 3 ] Technological advancements during the 20th and 21st centuries have greatly impacted the field; chief among these are improvements in modern computing technology, as computer programs for the film and video industries became more powerful and more widely available during this period. Modern motion graphic design typically involves any of several computerized tools and processes. Adobe After Effects is one of the leading computer programs used by modern motion graphic designers. [ 2 ] It allows users to create and modify graphics over time. [ 4 ] 3D software such as Cinema 4D and Blender are part of many modern motion designers' toolkits. [ 2 ] Adobe Animate , formerly known as Flash , is a tool for 2D motion graphic design. Prior to the rise of HTML5 , it was the primary tool for web animation . [ 5 ] It has also been used for creating video animations, such as the web series Homestar Runner . [ 6 ] It is still used by some motion designers, particularly for frame-by-frame, or " cel " animation. [ 4 ] [ 7 ] Adobe Premiere Pro is often used with After Effects when combining video footage with motion graphics. [ 2 ] [ 8 ] Prior to animation, motion designers use design tools such as Adobe Photoshop for rasterized graphics, and Adobe Illustrator for vector art . [ 2 ] Photoshop can also be used for cel animation. [ 9 ] Motion by Apple Inc. , now a part of Final Cut Studio , is another tool for motion graphics. [ 10 ] Motion graphic design is often used in the film industry . Openings to movies, television shows, and news programs often use photography , typography and motion graphics to create visually appealing imagery. Motion graphic design has also achieved widespread use in content marketing and advertising. In 2018, Cisco projected that 82% of all web traffic would be video by 2022. [ 11 ] Marketers and advertisers have focused much of their efforts on the production of high-quality branded video and motion graphic content. [ citation needed ] In addition to its myriad of uses in advertising, marketing, and branding, motion graphics are used in software, UI design, video game development, and other fields. Although motion design and animation share many commonalities, the difference between them lies in the fact that animation as a specific art form focuses more on cinematic effects and storytelling techniques to craft a narrative, whereas motion design is typically associated with setting abstract objects, text and other graphic design elements in motion. Bringing a graph, infographic or web design to life using movement is broadly speaking "animation", but more specifically, it's a type of animation that's called motion graphics. [ citation needed ] Motion graphics take a variety of forms. While some are entirely animated, others incorporate live-action video and/or photography. 
The latter may include animation overlay, such as data visualizations, icons, illustrations, and explanatory text used to complement and enhance audiences' understanding of the content. [ citation needed ] In content marketing contexts, there are three primary types of motion graphics which marketers choose to use depending on the goals they wish to achieve with the motion graphic. Explainer motion graphics seek to elucidate a product, process, or concept. Emotive motion graphics, meanwhile, aim to inspire a particular emotional response in audiences. [ citation needed ] And finally, promotional motion graphics are used to raise awareness about a service, product, or initiative. Because so many motion graphics are designed with particular goals in mind, it is often essential to partner with a designer or organization specializing in visual communication design [ citation needed ] to achieve a final product that conveys information in both an accurate and compelling way. [ 12 ] [ irrelevant citation ] UX , also known as user experience, works hand in hand with motion design. For example, when designing a phone app, motion design is used to improve user experience. [ 13 ] Motion design improves the user experience tremendously and effectively by adding animations to any screen. Motion design is not only used in phone apps; it is used in computers, tablets, smartphones, televisions, and lots more. UX designers use motion design to create their prototyping, and experience with it to determine whether it is easy to use for an average person, or if it needs enhancing. [ 14 ] There are a variety of career opportunities for motion designers, including animation , art direction , design , concept art , compositing , creative direction , editing , illustration , producing , and storyboarding . Some motion designers take on a range of these responsibilities, while others prefer to specialize. Motion designers can work on a range of projects, including advertisements , branding/identity , video games , UX / UI , AR / VR and film . [ 15 ] In the United States, the average motion designer income was $87,900 (USD) per year in 2019. [ 16 ] Skills in typography are critical to motion designers, as videos, cartoons and advertisements often include text. A good motion designer knows how to use type styles, sizes, and timing to use text to attract audiences. Knowledge of color theory is also very important for motion designers. They must have a good understanding of the color circle, complementary colors, and color saturations. The use of color is extremely helpful in communicating moods, effects and emotions. Motion designers must also have software experience. Some of the software includes Adobe Photoshop , Adobe Illustrator , Adobe After Effects , Adobe Premiere Pro and Adobe Substance . Other important motion-design skills are attention to detail; and good timing sense, for things such as matching video to audio. [ 17 ] Motion design began as early as the 1800s, when early animation devices such as flip-books were invented. There were no official founders of this art form, however, Saul Bass , Pablo Ferro , and John Whitney are some of the earliest well-known motion designers. John Whitney was one of the pioneers of computer-generated motion design. In 1960, he coined the term " motion graphics " with the foundation of his company, Motion Graphics Incorporated. He invented his own mechanical analog computer to design motion graphics for television commercials and movie title sequences . 
Whitney collaborated with Saul Bass to animate one of his most famous pieces, the title sequence for Alfred Hitchcock's 1958 film Vertigo , which featured swirling graphics increasing in size. [ 18 ] A degree in motion design can help an aspiring designer build a foundation for a career, by developing their skills in design, animation and conceptual thinking. [ 19 ] In the United States, college-level bachelor's degree programs can cost around $200,000. [ 1 ] [ 20 ] Since the 2010s, online learning options for motion design have become more prevalent, with resources like School of Motion , [ 21 ] Video Copilot , [ 22 ] Greyscalegorilla , [ 23 ] and an abundance of YouTube tutorials from channels like Ben Marriott , [ 2 ] EC Abrams , Eyedesyn and Mt. Mograph . [ 24 ] Online communities like Creative COW allow motion designers to get advice and technical assistance from more experienced designers.
https://en.wikipedia.org/wiki/Motion_graphic_design
Motion graphics (sometimes mograph ) are pieces of animation or digital footage that create the illusion of motion or rotation, and are usually combined with audio for use in multimedia projects. Motion graphics are usually displayed via electronic media technology, but may also be displayed via manually powered technology (e.g. thaumatrope , phenakistoscope , stroboscope , zoetrope , praxinoscope , flip book ). The term distinguishes static graphics from those with a transforming appearance over time, without over-specifying the form. [ 1 ] While any form of experimental or abstract animation can be called motion graphics, the term typically refers more explicitly to the commercial application of animation and effects to video, film, TV, and interactive applications. Since there is no universally accepted definition of motion graphics, the official beginning of the art form is disputed. There have been presentations that could be classified as motion graphics as early as the 19th century. Michael Betancourt wrote the first in-depth historical survey of the field, arguing for its foundations in visual music and the historical abstract films of the 1920s by Walther Ruttmann , Hans Richter , Viking Eggeling and Oskar Fischinger . [ 2 ] The history of motion graphics is closely related to the history of computer graphics , as new developments in computer-generated graphics led to wider use of motion design not based on optical film animation. The term motion graphics originated with digital video editing in computing, perhaps to keep pace with newer technology. Graphics for television were originally referred to as broadcast design. Walter Ruttmann was a German cinematographer and film director who worked mainly in experimental film. His films were experiments in new forms of film expression and featured shapes of different colors flowing back and forth and in and out of the lens. He started his film career in the early 1920s with the abstract films Lichtspiel: Opus I (1921), the first publicly screened abstract film, and Opus II (1923). The animations were painted with oil on glass plates, so the wet paint could be wiped away and modified easily. [ 3 ] John Whitney was one of the first users of the term "motion graphics" and founded a company called Motion Graphics Inc. in 1960. [ 4 ] One of his most famous works was the animated title sequence for Alfred Hitchcock's 1958 film Vertigo , created in collaboration with Saul Bass , which featured swirling graphics growing from small to large. Saul Bass was a major pioneer in the development of feature film title sequences. His work included title sequences for popular films such as The Man with the Golden Arm (1955), Vertigo (1958), Anatomy of a Murder (1959), North by Northwest (1959), Psycho (1960), and Advise & Consent (1962). His designs were simple, but effectively communicated the mood of the film. [ 5 ] Stan Brakhage was one of the most important figures in 20th-century experimental film . He explored a variety of formats, creating a large, diverse body of work. His influence can be seen in the credits of the film Seven (1995), designed by Kyle Cooper , whose scratched emulsion, rapid cutaways, and bursts of light echo his style. [ 3 ] Computer-generated animations are more controllable than other, more physically based processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes, because they allow the creation of images that would not be feasible using any other technology.
Before computers were widely available, motion graphics were costly and time-consuming, limiting their use to high-budget filmmaking and television production . Computers were used for this purpose as early as the late 1960s, when supercomputers became capable of rendering crude graphics. John Whitney and Charles Csuri can be considered early pioneers of computer-aided animation. [ 6 ] [ 7 ] From the late 1980s to the mid-1990s, expensive proprietary graphics systems such as those from British-based Quantel were quite commonplace in many television stations . Quantel workstations such as the Hal, Henry, Harry, Mirage, and Paintbox were the broadcast graphics standard of the time. Many other real-time graphics systems were used, such as the Ampex ADO, Abekas A51 and Grass Valley Group Kaleidoscope for live digital video effects . Early proprietary 3D computer systems were also developed specifically for broadcast design, such as the Bosch FGS-4000, which was used in the music video for Dire Straits ' Money for Nothing . The advent of more powerful desktop computers running Photoshop in the mid-90s drastically lowered the costs of producing digital graphics. With the reduced cost of producing motion graphics on a computer, the discipline has seen more widespread use. With the availability of desktop programs such as Adobe After Effects , Adobe Premiere Pro and Apple Motion , motion graphics have become increasingly accessible. Modern character generators (CG) from Vizrt and Ross Video incorporate motion graphics. Motion graphics continued to evolve as an art form with the incorporation of sweeping camera paths and 3D elements, supported by tools such as Maxon's Cinema 4D , plugins such as MoGraph, and Adobe After Effects . Despite their relative complexity, Autodesk 's Maya and 3D Studio Max are also widely used for the animation and design of motion graphics, with node-based particle system generators similar to Cinema 4D 's Thinking Particles plugin. There are also open-source packages that are gaining features and users in motion graphics workflows; Blender , for example, integrates several of the functions of its commercial counterparts. Many motion graphics animators learn several 3D graphics packages for use according to each program's strengths. Although many trends in motion graphics tend to be based on a specific software's capabilities, the software is only a tool the broadcast designer uses while bringing the vision to life. Borrowing heavily from techniques such as collage and pastiche , motion graphics have also begun to integrate many traditional animation techniques, including stop-motion animation , frame-by-frame animation, or a combination of both. Motion design applications include Adobe After Effects , Blackmagic Fusion , Nuke , Apple Motion , Max/MSP , various VJ programs, Moho , Adobe Animate , and Natron . 3D programs used in motion graphics include Adobe Substance, Maxon Cinema 4D and Blender . Motion graphics plug-ins include Video Copilot's products, [ 8 ] Red Giant Software and The Foundry Visionmongers . Elements of a motion graphics project can be animated by various means, depending on the capabilities of the software. These elements may be in the form of art, text, photos, and video clips, to name a few.
The most popular form of animation is keyframing , in which properties of an object are specified at certain points in time by setting a series of keyframes, so that the properties of the object can be automatically altered (or tweened ) in the frames between keyframes; a minimal version of this is sketched below. Another method involves a behavior system, such as that found in Apple Motion , which controls these changes by simulating natural forces without requiring the more rigid but precise keyframing method. Yet another method involves the use of formulas or scripts, such as the expressions function in Adobe After Effects or ActionScript within Adobe Flash . [ unreliable source? ] Computers are capable of calculating and randomizing changes in imagery to create the illusion of motion and transformation. Computer animations can use less information space ( computer memory ) through automatic tweening , a process of rendering only the key changes of an image at specified or calculated times; these key poses or frames are commonly referred to as keyframes. Adobe Flash uses computer animation tweening as well as frame-by-frame animation and video. A number of early, groundbreaking motion design studios helped establish the field. [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ]
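A minimal version of the keyframe tweening described above amounts to interpolating a property between specified times; the linear sketch below uses invented frame numbers and values, and real tools add easing curves, Bézier interpolation, and similar refinements.

```python
def tween(keyframes, t):
    """Linearly interpolate a property from (time, value) keyframes."""
    keyframes = sorted(keyframes)
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return v0 + u * (v1 - v0)
    return keyframes[-1][1]

# Opacity keyframed at 0% (frame 0), 100% (frame 24), held at 100% (frame 48).
opacity = [(0, 0.0), (24, 100.0), (48, 100.0)]
print(tween(opacity, 12))   # 50.0, the automatically tweened in-between value
```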
https://en.wikipedia.org/wiki/Motion_graphics
In mechanics , the derivative of the position vs. time graph of an object is equal to the velocity of the object. In the International System of Units , the position of the moving object is measured in meters relative to the origin , while the time is measured in seconds . Placing position on the y-axis and time on the x-axis, the slope of the curve is Δs/Δt, where s is the position of the object and t is the time. The slope of the curve therefore gives the change in position divided by the change in time, which is the definition of the average velocity for that interval of time on the graph. If this interval is made infinitesimally small, such that Δs becomes ds and Δt becomes dt, the result is the instantaneous velocity at time t, or the derivative of the position with respect to time, ds/dt. A similar fact holds for the velocity vs. time graph. The slope of a velocity vs. time graph is acceleration , this time placing velocity on the y-axis and time on the x-axis. Again the slope of a line is the change in y over the change in x, here Δv/Δt, where v is the velocity and t is the time. This slope therefore defines the average acceleration over the interval, and reducing the interval infinitesimally gives dv/dt, the instantaneous acceleration at time t, or the derivative of the velocity with respect to time (equivalently, the second derivative of the position with respect to time). In SI , this slope or derivative is expressed in units of meters per second per second (m/s², usually termed "meters per second-squared"). Since the velocity of the object is the derivative of the position graph, the area under the line in the velocity vs. time graph is the displacement of the object. (Velocity is on the y-axis and time on the x-axis; multiplying the velocity by the time, the time cancels out, and only displacement remains.) The same multiplication rule holds true for acceleration vs. time graphs. When acceleration (with unit m/s²) on the y-axis is multiplied by time (s) on the x-axis, the time dimension in the numerator and one of the two time dimensions in the denominator (s² = s·s) cancel out, and only velocity remains (m/s). The expressions given above apply only when the rate of change is constant, or when only the average ( mean ) rate of change is required. If the velocity or position changes non-linearly over time, differentiation provides the correct solution. Differentiation reduces the time-spans used above to be extremely small ( infinitesimal ) and gives a velocity or acceleration at each point on the graph rather than between a start and end point.
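Numerically, these slope relationships can be recovered with finite differences: given sampled positions, successive differencing yields velocity and acceleration estimates. The sketch below uses uniformly accelerated motion with an arbitrary acceleration of 3 m/s² so the results can be checked by hand.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)        # seconds
s = 0.5 * 3.0 * t**2                    # position for constant a = 3 m/s^2

v = np.gradient(s, t)                   # slope of the position-time graph
a = np.gradient(v, t)                   # slope of the velocity-time graph

print(v[500])   # ~15 m/s  (v = a*t at t = 5 s)
print(a[500])   # ~3 m/s^2 (constant acceleration)
```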
The derivative forms of the above equations are $v = \frac{ds}{dt}$ and $a = \frac{dv}{dt}$. Since acceleration differentiates the expression involving position, it can be rewritten as a second derivative with respect to time: $a = \frac{d^{2}s}{dt^{2}}$. Since, for the purposes of mechanics such as this, integration is the opposite of differentiation, it is also possible to express position as a function of velocity and velocity as a function of acceleration. The process of determining the area under the curve, as described above, can give the displacement and change in velocity over particular time intervals by using definite integrals : $s(t_{2}) - s(t_{1}) = \int_{t_{1}}^{t_{2}} v\,dt$ and $v(t_{2}) - v(t_{1}) = \int_{t_{1}}^{t_{2}} a\,dt$.
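When the position is available only as sampled data rather than as a formula, the same derivative and area relationships can be approximated numerically. The following sketch is an illustrative example, not part of the article; the free-fall sample data and step count are assumptions chosen so the answers are easy to check.

```python
import numpy as np

# Position sampled at uniform times (illustrative data): s(t) = 4.9 * t^2,
# i.e. free fall from rest, so the true velocity is 9.8*t (m/s) and the
# true acceleration is a constant 9.8 m/s^2.
t = np.linspace(0.0, 2.0, 201)            # time in seconds
s = 4.9 * t**2                             # position in meters

v = np.gradient(s, t)                      # numerical ds/dt -> velocity
a = np.gradient(v, t)                      # numerical dv/dt -> acceleration

# Area under the velocity vs. time curve (trapezoid rule) -> displacement.
displacement = np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t))

print(f"acceleration near t=1 s : {a[100]:.2f} m/s^2")    # ~9.8
print(f"displacement over 0-2 s : {displacement:.2f} m")   # ~19.6 = s[-1] - s[0]
```

The printed values match the analytic results, illustrating that the slope of the position graph recovers velocity and that the area under the velocity graph recovers displacement.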
https://en.wikipedia.org/wiki/Motion_graphs_and_derivatives
The motion ratio of a mechanism is the ratio of the displacement of the point of interest to that of another point. The most common example is in a vehicle's suspension , where it is used to describe the displacement and forces in the springs and shock absorbers . The force in the spring is (roughly) the vertical force at the contact patch divided by the motion ratio, and the spring rate is the wheel rate divided by the motion ratio squared. This quantity is also described as the Installation Ratio. Motion ratio is the more common term in the industry, but it is sometimes used to mean the inverse of the above definition. In a vehicle's suspension, the motion ratio describes the amount of shock travel for a given amount of wheel travel; mathematically, it is the ratio of shock travel to wheel travel. The force transmitted to the vehicle chassis decreases as the motion ratio increases, and a motion ratio close to one is desired for better ride and comfort. The desired wheel travel of the vehicle, which depends largely on the type of track the vehicle will run on, should be known before calculating the motion ratio. Selecting the appropriate ratio depends on multiple factors.
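As a numerical illustration of the relationships just stated, the following sketch computes the spring force and spring rate from a wheel force and wheel rate. All values are invented for illustration, and the motion ratio is taken as shock travel divided by wheel travel, per the definition above.

```python
# Illustrative example only; values are assumptions, not from any real vehicle.
# Motion ratio (MR) taken as shock travel / wheel travel, per the article.

wheel_travel = 60.0     # mm of wheel movement
shock_travel = 42.0     # mm of shock/spring movement for that wheel movement
motion_ratio = shock_travel / wheel_travel             # 0.70

wheel_force = 3000.0    # N, vertical force at the contact patch (illustrative)
wheel_rate  = 25.0      # N/mm, desired effective rate at the wheel (illustrative)

spring_force = wheel_force / motion_ratio               # force carried by the spring
spring_rate  = wheel_rate / motion_ratio**2             # stiffer spring needed when MR < 1

print(f"motion ratio : {motion_ratio:.2f}")
print(f"spring force : {spring_force:.0f} N")            # ~4286 N
print(f"spring rate  : {spring_rate:.1f} N/mm")           # ~51.0 N/mm
```

The example shows why a motion ratio below one requires a spring that is both more heavily loaded and stiffer than the effective rate seen at the wheel.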
https://en.wikipedia.org/wiki/Motion_ratio
A motion system , in engineering and test systems, is a component of a test and measurement system that provides motion to a load or loads in one or many directions. Generally, a motion system is made up of a set (or stack) of linear and rotational stages. A linear stage moves in a straight line , while a rotation stage moves in a partial or full circle. A stage can either be manually controlled with a knob, or automated with a motion controller. A motion system is generally computer controlled and can perform fast, reliable, repeatable, and accurate positioning of loads. Most systems support motion in the X and Y directions, which is referred to as an XY stack. Often either a Z axis (up/down motion) or an R axis (rotational motion) is placed on top of the XY stack. For automated stages, a scale can be attached to the internals of the stage and an encoder used to measure the position on the scale and report it to the controller, thereby determining the precise position of the stage. This allows a motion controller to reliably and repeatably move the linear stage to set positions.
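As a rough illustration of the closed-loop positioning idea described above, here is a hypothetical sketch in Python. The LinearStage class, its counts-per-millimeter resolution, and its methods are invented for illustration and do not correspond to any real vendor's motion controller API.

```python
# Hypothetical, simplified model of an automated linear stage with an encoder.
# This is an illustrative sketch of the closed-loop idea described above,
# not the interface of any real motion controller.

class LinearStage:
    def __init__(self, counts_per_mm=1000):
        self.counts_per_mm = counts_per_mm   # encoder scale resolution (assumed)
        self._encoder_counts = 0             # position as reported by the encoder

    def read_encoder_mm(self):
        return self._encoder_counts / self.counts_per_mm

    def jog(self, delta_mm):
        # Stand-in for commanding the motor; a real stage would move physically
        # and the encoder would report the actual resulting position.
        self._encoder_counts += round(delta_mm * self.counts_per_mm)

    def move_to(self, target_mm, tolerance_mm=0.001, max_iterations=10):
        """Move until the encoder reports the target position within tolerance."""
        for _ in range(max_iterations):
            error = target_mm - self.read_encoder_mm()
            if abs(error) <= tolerance_mm:
                break
            self.jog(error)                  # correct the remaining error
        return self.read_encoder_mm()

# An XY stack would simply be two such stages, one per axis:
x_axis, y_axis = LinearStage(), LinearStage()
x_axis.move_to(12.5)
y_axis.move_to(-3.2)
print(x_axis.read_encoder_mm(), y_axis.read_encoder_mm())
```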
https://en.wikipedia.org/wiki/Motion_system
In physics and chemistry , motional narrowing is a phenomenon where a certain resonant frequency has a smaller linewidth than might be expected, due to motion in an inhomogeneous system. [ 1 ] The discovery of motional narrowing has been attributed to Nicolaas Bloembergen during his thesis work in the 1940s. [ 2 ] A common example is NMR . [ 1 ] In this process, the nuclear spin of an atom starts rotating, with the frequency of rotation proportional to the external magnetic field that the atom experiences. However, in an inhomogeneous medium, the magnetic field often varies from point to point (depending, for example, on the magnetic susceptibility of nearby atoms), so the frequency of nuclear spin rotation is different in different places. Therefore, when detecting the resonant rotation frequency, there is a linewidth (i.e., a finite range of different frequencies) due to the variation in that resonant frequency from point to point. (This is called " inhomogeneous broadening ".) However, if the atoms are diffusing around the system, they will sometimes experience a higher magnetic field than average and at other times a lower magnetic field than average. Therefore (in accordance with the central limit theorem ), the time-averaged magnetic field experienced by an atom has less variation than the instantaneous magnetic field does. As a consequence, when detecting the resonant rotation frequency, the linewidth is smaller (narrower) than it would be if the atoms were stationary. This is the motional narrowing effect. In magnetically doped semiconductors, the local magnetic field is determined by the magnetization of the dopant ions, which are distributed statistically. The spin of charge carriers precesses in this field. The motional character enters the picture through the fact that the charge carriers diffuse through the semiconductor, so that the electron and hole spins experience a varying local magnetization and variations of spin precession. The motional-narrowing effect was studied in optical pump/probe experiments, where mobile singlet excitons were excited optically. The motional narrowing manifests in a peculiar temperature dependence of spin dephasing : the dephasing becomes slower at higher sample temperature, where the exciton velocity becomes larger and the excitons more quickly experience environments with different magnetization. [ 3 ] A similar phenomenon occurs in many other systems. Another example is vibrational modes in a liquid. Each molecule of the liquid has vibrational modes, and the vibrational frequency is influenced by the positions of nearby molecules. However, if the nearby molecules reorient and move around fast enough, the vibration will essentially occur at an averaged frequency, and therefore have a smaller linewidth. For example, simulations suggest that the OH stretch vibration linewidth in liquid water is 30% smaller than it would be without this motional narrowing effect. [ 4 ]
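A minimal numerical sketch of the central-limit-theorem argument above: if each spin samples many statistically independent local fields while it diffuses, the spread of the time-averaged field it experiences (and hence the linewidth) shrinks roughly as one over the square root of the number of sites visited. All parameters are illustrative assumptions, not values from any experiment.

```python
import numpy as np

# Illustrative simulation of motional narrowing as statistical averaging.
rng = np.random.default_rng(0)

n_atoms = 10_000          # number of simulated spins
n_steps = 200             # independent sites each mobile atom visits (assumed)
field_spread = 1.0        # std. dev. of the instantaneous local field (arb. units)

# Static atoms: each sits at one site -> full inhomogeneous spread.
instantaneous = rng.normal(0.0, field_spread, size=n_atoms)

# Mobile atoms: each effectively precesses at the time-averaged field
# over the sites it visited.
visited = rng.normal(0.0, field_spread, size=(n_atoms, n_steps))
time_averaged = visited.mean(axis=1)

print(f"static spread : {instantaneous.std():.3f}")
print(f"mobile spread : {time_averaged.std():.3f}")   # ~ field_spread / sqrt(n_steps)
```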
https://en.wikipedia.org/wiki/Motional_narrowing
Motivated reasoning is the mental process that includes mechanisms for accessing, constructing, and evaluating beliefs in response to new information or experiences. The motivation may be to arrive at accurate beliefs, or to arrive at desired conclusions. While people may be more likely to arrive at conclusions they want, such desires are generally constrained by the ability to construct a reasonable justification. [ 1 ] Motivated reasoning may involve personal choices, such as continuing to smoke after encountering evidence of the health effects of tobacco , leading to personal justifications for doing so. Other beliefs have social and political significance, being associated with deeply held values and identities . Political reasoning involves the goal of identity protection or maintaining status within an affinity group united by shared values. [ 2 ] Current research in motivated reasoning has been affected by technological change, both in the methods used by researchers and in the behavior being studied. Researchers employ the methodology of neuroscience to provide data on brain functioning, rather than relying solely upon self-reports or observations of behavior. Much of the information used by people in forming beliefs now comes from broadcast or social media, which may support a biased viewpoint, including conspiracy theories . [ 3 ] To attract an audience, news media favor content that stimulates negative emotions, favoring news stories about threats to the beliefs or social identity of viewers. [ 4 ] A number of psychological concepts are related to how individuals respond to threats to their current beliefs. Due to the need for cognitive consistency, contradictory beliefs cause the stress defined as cognitive dissonance . Motivated reasoning is a process for relieving this stress by either modifying beliefs to incorporate the new evidence , or constructing a rationale for maintaining current beliefs. Confirmation bias is one of the means to do the latter, by seeking to find additional evidence to support current beliefs, to discredit the new information, or both. [ 5 ] As described in 1990 by Ziva Kunda , motivated reasoning takes two forms: a more rational "cold" motivation favoring accuracy, or a "hot" motivation to reach a desired goal, usually to maintain current beliefs at the expense of rationality . This distinction has been controversial, with some finding that apparently self-serving conclusions may be seen as plausible, rather than biased, based upon prior beliefs. Cognitive psychology was also dominated in the 1970s by research into biases that explained departures from rationality in terms of heuristics rather than affect. However, exploration of the underlying mechanisms finds that motivation plays a role in the outcome by determining which cognitive processes are used in a particular situation. [ 1 ] Early research on how humans evaluated and integrated information supported a cognitive approach consistent with Bayesian probability , in which individuals weighted new information using rational calculations ("cold cognition"). [ 6 ] More recent theories endorse these cognitive processes as only partial explanations of motivated reasoning, but have also introduced motivational [ 1 ] or affective (emotional) processes ("hot cognition"). [ 7 ] Some research found a "tipping point" at which enough new information had accumulated that belief change occurred.
[ 8 ] Motivated reasoning has become associated more often with efforts to maintain beliefs in the face of substantial contrary evidence, with individuals or groups using different criteria for evaluating propositions they favor versus those they oppose. [ 9 ] Extreme refusal to accept the validity of beliefs that are well supported by evidence is called denialism, while the invention of alternative "facts" is the basis of conspiracy theories. [ 3 ] Motivated reasoning can be classified into two categories: 1) accuracy-oriented (non-directional), in which the motive is to arrive at an accurate conclusion, irrespective of the individual's prior beliefs, and 2) goal-oriented (directional), in which the motive is to arrive at a particular conclusion favored on the basis of other considerations. [ 1 ] Such considerations include identity protection and affirming shared values and norms. [ 10 ] [ 11 ] However, the question of whether citizens, in making political decisions, are more likely to be rational or rationalizing has not been determined. [ 12 ] Being motivated towards forming accurate beliefs is primary in situations that allow for the time and mental effort required for making correct decisions necessary for survival. However, many everyday decisions are made using mental shortcuts, or heuristics , that provide adequate results. [ 9 ] Kunda asserts that accuracy goals delay the process of coming to a premature conclusion, in that accuracy goals increase both the quantity and quality of processing, particularly in leading to more complex inferential cognitive processing procedures. [ 1 ] When researchers manipulated test subjects' motivation to be accurate by informing them that the target task was highly important or that they would be expected to defend their judgments, it was found that subjects utilized deeper processing and that there was less biasing of information. This was true when accuracy motives were present at the initial processing and encoding of information. [ 13 ] [ 14 ] In reviewing a line of research on accuracy goals and bias, Kunda concludes, "several different kinds of biases have been shown to weaken in the presence of accuracy goals". However, accuracy goals do not always eliminate biases and improve reasoning: some biases (e.g., those resulting from using the availability heuristic) might be resistant to accuracy manipulations. For accuracy to reduce bias, several conditions must be present. However, the last two of these conditions introduce the construct that accuracy goals include a conscious process of utilizing cognitive strategies in motivated reasoning. This construct is called into question by neuroscience research that concludes that motivated reasoning is qualitatively distinct from reasoning in which there is no strong emotional stake in the outcomes. [ 15 ] Accuracy-oriented individuals who are thought to use "objective" processing can vary in how they update information, depending on how much faith they place in a given piece of evidence and on their ability to detect misinformation, which can lead to beliefs that diverge from the scientific consensus. [ 11 ] Directional goals enhance the accessibility of knowledge structures (memories, beliefs, information) that are consistent with desired conclusions. According to Kunda, such goals can lead to biased memory search and belief construction mechanisms. [ 1 ] Several studies [ which? ] support the effect of directional goals in the selection and construction of beliefs about oneself, other people and the world.
Cognitive dissonance research provides extensive evidence that people may bias their self-characterizations when motivated to do so. Other biases such as confirmation bias, the prior attitude effect and disconfirmation bias could contribute to goal-oriented motivated reasoning. [ 11 ] For example, in one study, subjects altered their self-view by viewing themselves as more extroverted when induced to believe that extroversion was beneficial. Michael Thaler of Princeton University conducted a study [ vague ] that found that men are more likely than women to demonstrate performance-motivated reasoning due to a gender gap in beliefs about personal performance. [ 16 ] After a second study was conducted, the conclusion was drawn [ vague ] that both men and women are susceptible to motivated reasoning, but that certain motivated beliefs differ by gender. [ 17 ] The motivation to achieve directional goals could also influence which rules (procedural structures, such as inferential rules ) are accessed to guide the search for information. Studies also suggest that evaluation of scientific evidence may be biased by whether the conclusions are in line with the reader's beliefs. In spite of goal-oriented motivated reasoning, people are not at liberty to conclude whatever they want merely because they want to. [ 1 ] People tend to draw conclusions only if they can muster up supportive evidence. They search memory for those beliefs and rules that could support their desired conclusion, or they may create new beliefs to logically support their desired goals. A neuroimaging study by Drew Westen and colleagues does not support the use of cognitive processes in motivated reasoning, lending greater support to affective processing as a key mechanism in supporting bias. This study, designed to test the neural circuitry of individuals engaged in motivated reasoning, found that motivated reasoning "was not associated with neural activity in regions previously linked with cold reasoning tasks [Bayesian reasoning] nor conscious (explicit) emotion regulation". [ 15 ] This neuroscience data suggests that "motivated reasoning is qualitatively distinct from reasoning when people do not have a strong emotional stake in the conclusions reached." [ 15 ] However, if there is a strong emotion attached during a previous round of motivated reasoning and that emotion is again present when the individual's conclusion is reached, a strong emotional stake is then attached to the conclusion. Any new information regarding that conclusion will cause motivated reasoning to recur. This can create pathways within the neural network that further ingrain the individual's reasoned beliefs along neural pathways similar to those where logical reasoning occurs. This causes the strong emotion to recur, time and time again, when the individual is confronted with contradictory information. This is referred to by Lodge and Taber as affective contagion . [ 18 ] But instead of "infecting" other individuals, the emotion "infects" the individual's reasoning pathways and conclusions. Careful or "reflective" reasoning has been linked to both overcoming and reinforcing motivated reasoning, suggesting that reflection is not a panacea, but a tool that can be used for rational or irrational purposes depending on other factors. [ 19 ] For example, when people are presented with and forced to think analytically about something complex that they lack adequate knowledge of (i.e.
being presented with a new study on meteorology whilst having no degree in the subject), there is no directional shift in thinking, and their extant conclusions are more likely to be supported with motivated reasoning. Conversely, if they are presented with a more simplistic test of analytical thinking that confronts their beliefs (i.e. seeing implausible headlines as false), motivated reasoning is less likely to occur and a directional shift in thinking may result. [ 20 ] Peter Ditto and his students conducted a meta-analysis in 2018 of studies relating to political bias. Their aim was to assess which U.S. political orientation (left/liberal or right/conservative) was more biased and initiated more motivated reasoning. They found that both political orientations are susceptible to bias to the same extent. [ 21 ] The analysis was disputed by Jonathan Baron and John Jost , [ 22 ] to whom Ditto and colleagues responded. [ 23 ] Reviewing the debate, Stuart Vyse concluded that the answer to the question of whether U.S. liberals or conservatives are more biased is: "We don't know." [ 24 ] Many instances of motivated reasoning in personal decisions are related to health , misinformation being used to justify continuing unhealthy behaviors or avoid recommended treatments. [ 25 ] When an individual is trying to quit smoking, they might engage in motivated reasoning, focusing on information that makes smoking seem less harmful while discrediting evidence which emphasizes any dangers associated with the behavior. Individuals in situations like this are driven to initiate motivated reasoning to lessen the amount of cognitive dissonance they feel. [ 1 ] [ 26 ] In the context of the COVID-19 pandemic, people who refuse to wear masks or get vaccinated may engage in motivated reasoning to justify their beliefs and actions. They may reject scientific evidence that supports mask-wearing and vaccination and instead seek out information that supports their pre-existing beliefs, such as conspiracy theories or misinformation. This can lead to behaviors that are harmful to both themselves and others. [ 27 ] In a 2020 study, Van Bavel and colleagues explored the concept of motivated reasoning as a contributor to the spread of misinformation and resistance to public health measures during the COVID-19 pandemic. Their results indicated that people often engage in motivated reasoning when processing information about the pandemic, interpreting it to confirm their pre-existing beliefs and values. [ 28 ] The authors argue that addressing motivated reasoning is critical to promoting effective public health messaging and reducing the spread of misinformation. They suggested several strategies, such as reframing public health messages to align with individuals' values and beliefs. In addition, they suggested using trusted sources to convey information by creating social norms that support public health behaviors. [ 28 ] Politically motivated reasoning involves processing new information in accord with the impact on an individual's beliefs and the beliefs of people in an identity-defining group, not truth-related norms. Views about political issues become symbolic of group membership. People tend to trust experts whom they believe share their perspectives, distrusting experts that do not. Information is processed in the context of values, and values influence opinions in predictable ways. In other words, voters view experts as trustworthy or untrustworthy based upon their group’s values. 
Expertise is judged based upon character, not reliability. [ 29 ] : 244–245 Despite a scientific consensus on climate change , citizens are divided on the topic, particularly along political lines. [ 30 ] Liberals and progressives generally believe, based on extensive evidence, that human activity is the main driver of climate change. By contrast, conservatives are generally much less likely to hold this belief, and a subset believes that there is no human involvement, and that the reported evidence is faulty (or even fraudulent). [ 31 ] [ 11 ] [ 10 ] On April 22, 2011, The New York Times published a series of articles attempting to explain the Barack Obama citizenship conspiracy theories . One of these articles, by political scientist David Redlawsk, explained these "birther" conspiracies as an example of politically motivated reasoning. [ 32 ] [ 33 ] Despite ample evidence that President Barack Obama was born in the U.S. state of Hawaii, many people continue to believe that he was not born in the U.S., and therefore that he was an illegitimate president. Similarly, many people believe he is a Muslim (as was his father), despite ample lifetime evidence of his Christian beliefs and practice (as was true of his mother). [ 32 ] Subsequent research by others suggested that political partisan identity was more important for motivating "birther" beliefs than for some other conspiracy beliefs such as 9/11 conspiracy theories . [ 34 ] The theory of motivated reasoning plays a crucial role in shaping public beliefs about immigration, especially when it comes to political issues such as welfare access. [ 35 ] [ 36 ] Studies suggest that how information is presented to the public can significantly impact people’s attitudes towards immigrants. For example, when certain information about immigrants is framed in a negative light, it has been shown to strengthen prejudicial attitudes about welfare, especially among right-wing individuals. [ 35 ] This suggests that people’s existing political ideologies might shape how they choose to interpret information, more often than not reinforcing their pre-existing beliefs, a process that could be influenced by negatively biased and motivated reasoning. [ 35 ] In addition to ideological influences, cognitive factors can also affect how individuals interpret information related to the topic of immigration. Research shows that those with higher analytical abilities are less likely to engage in motivated reasoning when discussing immigration-related information. However, their reasoning could still be influenced by the topic at hand, such as gender or patriarchal quotas. [ 37 ] This underlines the idea that motivated reasoning mechanisms can vary depending on the subject matter and the individual’s cognitive traits. Some individuals are more influenced by personal views than others. [ 38 ] Social media is used for many different purposes and is a major channel for spreading opinions. It is one of the first places people go to get information, and much of that information consists of opinion and bias. The way this applies to motivated reasoning is in how such content spreads. "However, motivated reasoning suggests that informational uses of social media are conditioned by various social and cultural ways of thinking". [ 39 ] All kinds of ideas and opinions are shared, which makes it very easy for motivated reasoning and biases to come through when searching for answers or facts on the internet or any news source.
Social media platforms are frequently used to share individuals' personal opinions, which often creates echo chambers where users are exposed to content that aligns with their personal beliefs. Magdalena Wischnewski’s research examines how this dynamic functions, emphasizing the role that individuals’ identities play in shaping these interactions with information. Her study finds that individuals are more likely to accept and share content that supports their views, while ignoring or rejecting contradictory information. [ 40 ] Emotional reactions to identity-related content can also influence how users engage with fact-checking. [ 41 ] Users tend to perceive opposing viewpoints as “inauthentic” or “automated”, which further solidifies their beliefs. Wischnewski introduced the idea of identity-salience interventions, which emphasize shared identities to promote more balanced information evaluation. [ 40 ] These interventions aim to decrease the impact of motivated reasoning and encourage a more objective approach to information. Michael Thaler’s 2024 paper examines how people may evaluate and assess the reliability of media sources, specifically relating to broadcast news. [ 17 ] The finding was that participants significantly over-trust non-credible “Fake News” sources. It also asserts that the strong association with political beliefs could be a cause of intense levels of polarization. The results also confirm that motivated reasoning originates less from misguided optimism than from a desire to defend one’s previously held political beliefs. [ 16 ] Ultimately, it asserts that misinformation presented attractively and agreeably to one's beliefs is generally perceived as more valid, credible, or trustworthy. Another study expands on and explains this, exploring how long-held political opinions may influence morals and create strong beliefs held as unarguably factual. Barron, Becker, and Huck's paper elaborates on how identity shapes beliefs and, therefore, a desire for conformity. Minimal evidence supports that people do this for a self-serving purpose, but there is also a lack of evidence to show that they do it for others, creating new research questions. The study also concludes that motivated reasoning happens more often with political issues than with economic ones. [ 42 ] Research on motivated reasoning has tested accuracy goals (i.e., reaching correct conclusions) and directional goals (i.e., reaching preferred conclusions). Factors such as these affect perceptions, and results confirm that motivated reasoning affects decision-making and estimates. [ 43 ] These results have far-reaching consequences because, when confronted with a small amount of information contrary to an established belief, an individual is motivated to reason away the new information, contributing to a hostile media effect . [ 44 ] If this pattern continues over an extended period of time, the individual becomes more entrenched in their beliefs.
https://en.wikipedia.org/wiki/Motivated_reasoning
In social psychology , a motivated tactician is someone who shifts between quick-and-dirty, cognitively economical tactics and more thoughtful, thorough strategies when processing information, depending on the type and degree of motivation. [ 1 ] Such behavior is a type of motivated reasoning . The idea has been used to explain why people use stereotyping , biases and categorization in some situations, and more analytical thinking in others. [ citation needed ] After much research on categorization and other cognitive shortcuts, psychologists began to describe human beings as cognitive misers , a view holding that a need to conserve mental resources causes people to use shortcuts in thinking about stimuli, rather than motivations and urges influencing the way humans think about their world. Stereotypes and heuristics were used as evidence of the economic nature of human thinking. In recent years, the work of Fiske & Neuberg (1990) , Higgins & Molden (2003) , Molden & Higgins (2005) and others has led to the recognition of the importance of motivational thinking. This is due to contemporary research studying the importance of motivation in cognitive processes, instead of concentrating on cognition versus motivation. [ 2 ] Current research does not deny that people will be cognitively miserly in certain situations, but it takes into account that thorough analytic thought does occur in other situations. Using this perspective, researchers have begun to describe human beings as "motivated tacticians" who are tactical about how many cognitive resources will be used, depending on the individual's intent and level of motivation. Given the complex nature of the world and the occasional need for quick thinking, it would be detrimental for a person to be methodical about everything, while other situations require more focus and attention. Considering human beings as motivated tacticians has become popular because it takes both kinds of situations into account. This concept also takes into account, and continues to study, what motivates people to use more or fewer mental resources when processing information about the world. Research has found that intended outcome, relevancy to the individual, culture, and affect can all influence the way a person processes information. [ citation needed ] The most prominent explanation of motivational thinking is that the person's desired outcome motivates him to use more or fewer cognitive resources while processing a situation or thing. [ 2 ] Researchers have divided preferred outcomes into two broad categories: directional and non-directional outcomes. The preferred outcome provides the motivation for the level of processing involved. Individuals motivated by directional outcomes have the intention of accomplishing a specific goal. These goals can range from appearing smart, courageous or likeable to affirming positive thoughts and feelings about something or someone to whom they are close or find likable. If someone is motivated by non-directional outcomes, he or she may wish to make the most logical and clear decision. Whether a person is motivated by directional or non-directional outcomes depends on the situation and the person's goals. Confirmation bias is an example of thought-processing motivated by directional outcomes. The goal is to affirm previously held beliefs, so one will use less thorough thinking in order to reach that goal.
A person motivated to get the best education, who researches information on colleges and visits schools, is motivated by a non-directional outcome. Evidence for outcome-influenced motivation is illustrated by research on self-serving bias . According to Miller (1976) , "Independent of expectancies from prior success or failure, the more personally important a success is in any given situation, the stronger is the tendency to claim responsibility for this success but to deny responsibility for failure." Though outcome-based motivation is the most prominent approach to motivated thinking, there is evidence that a person can be motivated by their preferred strategy of processing information. [ 2 ] However, rather than being an alternative, this idea is actually a complement to the outcome-based approach. Proponents of this approach feel that a person prefers a specific method of information processing because it usually yields the results they wish to receive. This relates back to the intended outcome being the primary motivation. "Strategy of information processing" means whether a person makes a decision using bias, categories, or analytical thinking. Whether the method is best suited for the situation, or is more thorough, matters less to the person than its likelihood of yielding the intended result. People feel that their preferred strategy just "feels right". What makes the heuristic or method feel "right" is that the strategy accomplishes the desired goal (i.e. affirming positive beliefs of self-efficacy ). [ 2 ] There has been limited research on motivated tactical thinking outside of Western countries. One theory experts have mentioned is that a person's culture could play a large role in their motivations. [ 3 ] Nations like the United States are considered to be individualistic , while many Asian nations are considered to be collectivistic . An individualist places importance on the self and is motivated by individual reward and affirmation, while a collectivist sees the world as being more group- or culture-based. The difference in the two ways of thinking could affect motivation in information processing. For example, instead of being motivated by self-affirmation, a collectivist would be motivated by more group-affirming goals. [ 3 ] Another theory is that emotions can affect the way a person processes information. Forgas (2000) has stated that current mood can determine how information is processed, as well as the thoroughness of thought. He also mentioned that achieving a desired emotion can influence the level to which information is processed.
https://en.wikipedia.org/wiki/Motivated_tactician
The Moto 360 is an Android Wear -based smartwatch announced by Motorola Mobility in 2014. [ 2 ] It was announced on March 18, 2014 [ 3 ] and released on September 5, 2014 in the US, [ 4 ] along with new models of the Moto X and the Moto G . The Moto 360's form factor is based on the circular design of traditional watches, supporting a 40mm (1.5 in) viewing diameter and a circular capacitive touch display. The case is stainless steel and available in different finishes. Removable wrist bands are available in metal and natural leather. [ 5 ] The watch is water resistant and has only a single physical button. The watch has an all-day battery, and rather than needing to be plugged in, it charges wirelessly by being placed on an included cradle. [ 6 ] [ 7 ] Internally, [ 8 ] it has dual microphones for voice recognition and noise rejection and a vibration motor allowing tactile feedback. An ambient light sensor optimizes screen brightness and allows gesture controls such as blanking the screen by placing one's hand over it. Bluetooth 4.0 is included for connectivity and driving wireless headphones. In the June 2015 release notes, Motorola announced Wi-Fi support for the device, so that it could be used out of Bluetooth range. A heart-rate sensor and 9-axis accelerometer support health and activity monitoring. It has IP67 certification for dust resistance and fresh water resistance rated at 30 minutes at 1.5 meters (4.9 feet) depth. The Moto 360 runs Android Wear , Google's Android-based platform specifically designed for wearable devices. The 360 currently runs Android Marshmallow and pairs with any phone running Android 4.3 or higher and any iPhone running iOS 8 or higher. Its software displays notifications from paired phones. It uses paired phones to enable interactive features such as Google Now cards, search , navigation , playing music , and integration with fitness apps, Evernote , and others. [ 9 ] Ars Technica criticized the "terrible" battery life and performance, blaming it on the outdated SoC (system-on-chip) used in the Moto 360: "Motorola inexplicably chose an ancient 1GHz single-core Texas Instruments OMAP 3". [ 10 ] In a review for Engadget , Jon Fingas wrote, "The interface isn't that great at surfacing the information I need at the time I need it, for that matter. Spotify's Android Wear card always showed up on cue, but Sonos' controls appeared inconsistently even when there was music playing. And the watch frequently defaulted to showing apps that weren't really relevant to the situation at hand; no, I don't need to check out my fitness goals in the middle of the workday. Google may be right that watches are primarily about receiving passive streams of information, but that doesn't excuse doing a poor job when I want to be more active." He concluded, "Even with those quirks in mind, it's pretty clear the Moto 360 has turned a corner in half a year's time. It's no longer the underdeveloped novelty that it was on launch, and it's now my pick of the current Android Wear crop. True, it doesn't have the G Watch R 's true circular display, the ZenWatch 's custom software or the Sony Smartwatch 3's GPS, but I'd say of the three, it strikes the best balance between looks, functionality and price." [ 11 ]
https://en.wikipedia.org/wiki/Moto_360_(1st_generation)
The Moto 360 (2nd generation) , also known as the Moto 360 (2015) , is an Android Wear -based smartwatch . It was announced on September 14, 2015 at the IFA . It was discontinued by Motorola in February 2017. [ 1 ] The Moto 360 (2nd generation) has a circular design, similar to the Huawei Watch and LG Watch Urbane , with 42mm diameter options. The case is stainless steel and available in several different finishes. Removable wrist bands are available in metal and Horween leather, and are more readily removable than those of the previous generation. [ 2 ] [ 3 ] The device has an "all-day" battery which Motorola claims lasts longer than that of the previous generation Moto 360 . Like the previous watch, the 2nd generation Moto 360 charges wirelessly by being placed on an included cradle. It has dual microphones for voice recognition and noise rejection and a vibration motor allowing tactile feedback. An ambient light sensor optimizes screen brightness and allows gesture controls such as dimming the screen by placing one's hand over it. Bluetooth 4.0 LE is included for connectivity and wireless accessories. [ 2 ] [ 3 ] [ 4 ] [ 5 ] Like the previous generation, its ambient light sensor is located below the main display. A PPG and 9-axis accelerometer enable health and activity monitoring. It has IP67 certification for dust resistance and fresh water resistance rated at 30 minutes at 1 meter (3.3 feet) depth. [ 3 ] As of early 2017, the Moto 360 runs Android Wear 1.5, Google's Android-based platform specifically designed for wearable devices, on Android Marshmallow 6.0.1, and pairs with any phone running Android 4.3 or higher. It is also compatible with iPhones running iOS 9 or higher when paired with the Android Wear app for iOS . Its software displays notifications from paired phones. It uses paired phones to enable interactive features such as Google Now cards, search , navigation , playing music , and integration with apps such as Google Fit , Evernote , and others. [ 3 ] [ 6 ] The last supported version of the software is Android Wear 2.0. [ 7 ] The starting price was US$300. [ 8 ] Impressions of the Moto 360 were generally positive, especially in comparison to its predecessor; however, the limitations of Android Wear concerned some critics. In contrasting the industrial design with the software, Dan Seifert of The Verge noted "if you buy the Moto 360 smartwatch, you’re paying more for the watch than you are for the smart". [ 9 ] The Guardian gave the device four out of five stars, concluding that "it’s no more capable than almost any other Android Wear watch" despite having "fluid performance" and being more comfortable than the first generation. [ 10 ]
https://en.wikipedia.org/wiki/Moto_360_(2nd_generation)
Motoactv (styled MOTOACTV ) is a smartwatch sold by Motorola Mobility which contains a number of hardware features and software applications tailored to fitness training. The watch contains apps for monitoring athletic activity using a built-in accelerometer to measure strides and GPS to measure distance. It can also be synced with a Speed/Cadence Bike Sensor or foot pod via ANT+ technology. The watch can communicate with external devices over Bluetooth 4.0, such as a pulse sensor and Bluetooth stereo headphones for music. In addition, there is a DJ mode that will custom-tailor the music dynamically to the workout. It was announced on October 18, 2011, and released to the US market on November 6, 2011. [ 1 ] The watch was discontinued by Motorola in 2013. When connected to a phone, the watch will display caller ID, text messages and calendar alerts. Plugins are available for Facebook , Twitter and Weather. The device reports that it is running Android 2.3.4, which according to Google uses the same API as Android 2.3 with some minor patches and bug fixes. Many apps install and run on the small 220x176 display, though often not very usably. CMW reports that Angry Birds runs well. Google Earth performs well but with some UI issues. Google Maps not only runs well and utilizes the built-in GPS, it is very usable on the tiny screen. With the Street View add-on installed, the Motoactv's GPU impressively handles pseudo-3D Street View transitions at a fluid 60 fps. One particular limitation of the Motoactv is the lack of either an on-screen or hardware "Menu" key, which is necessary to access most of the functionality of most Android apps. This can be remedied by remapping one of the app-specific buttons at the top. For instance, the 'Music' key can be remapped to Menu simply by mapping key 387 to menu in the file /system/usr/keylayout/sholes-keypad.kl. In order to change files in /system, an "adb remount" must be issued.
https://en.wikipedia.org/wiki/Motoactv
Motor imagery is a mental process by which an individual rehearses or simulates a given action. It is widely used in sport training as mental practice of action and in neurological rehabilitation , and it has also been employed as a research paradigm in cognitive neuroscience and cognitive psychology to investigate the content and structure of covert (i.e., unconscious) processes that precede the execution of action. [ 1 ] [ 2 ] In some medical, musical, and athletic contexts, when paired with physical rehearsal, mental rehearsal can be as effective as pure physical rehearsal (practice) of an action. [ 3 ] Motor imagery can be defined as a dynamic state during which an individual mentally simulates a physical action. This type of phenomenal experience implies that the subject feels themselves performing the action. [ 4 ] It corresponds to the so-called internal imagery (or first-person perspective) of sport psychologists . [ 5 ] Mental practice refers to the use of visuo-motor imagery with the purpose of improving motor behavior. Visuo-motor imagery requires the use of one's imagination to simulate an action, without physical movement. It has come to the fore due to the relevance of imagery in enhancing sports and surgical performance. [ 3 ] Mental practice, when combined with physical practice, can be beneficial to beginners learning a sport, but even more helpful to professionals looking to enhance their skills. [ 6 ] Physical practice generates the physical feedback necessary to improve, while mental practice creates a cognitive process that physical practice cannot easily replicate. [ 7 ] When surgeons and other medical practitioners mentally rehearse procedures along with their physical practice, it produces the same results as physical rehearsal, but costs much less. Unlike its use in sports to improve a skill, however, mental practice is used in medicine as a form of stress reduction before operations. [ 7 ] Mental practice is a technique used in music as well. Professional musicians may use mental practice when they are away from their instrument or unable to physically practice due to an injury. Studies show that a combination of physical and mental practice can provide improvement in mastering a piece equal to that of physical practice alone. [ 8 ] [ 9 ] This is because mental practice causes neuron growth that mirrors growth caused by physical practice. And there is precedent: Vladimir Horowitz and Arthur Rubinstein , among others, supplemented their physical practice with mental rehearsal. [ 10 ] Mental practice has been used to rehabilitate motor deficits in a variety of neurological disorders. [ 11 ] Mental practice of action seems to improve balance in individuals with multiple sclerosis and in elderly women. [ 12 ] For instance, mental practice has been used with success, in combination with actual practice, to rehabilitate motor deficits in a patient with sub-acute stroke. [ 13 ] Several studies have also shown improvement in strength, function, and use of both upper and lower extremities in chronic stroke. Some studies evaluated the effect of MI in gait rehabilitation after stroke; however, there was very low-certainty evidence that motor imagery is more beneficial for improving gait (walking speed), motor function and functional mobility compared to other therapies, placebo or no intervention. [ 14 ] Additionally, there was insufficient scientific evidence to assess the influence of MI on dependence on personal assistance and walking endurance.
[ 14 ] Motor imagery has been studied using the classical methods of introspection and mental chronometry. These methods have revealed that motor images retain many of the properties, in terms of temporal regularities, programming rules and biomechanical constraints, that are observed in the corresponding real action when it is executed. For instance, in one experiment, participants were instructed to walk mentally through gates of a given apparent width positioned at different apparent distances. The gates were presented to the participants with a 3-D visual display (a virtual reality helmet) which involved no calibration with external cues and no possibility for the subject to refer to a known environment. Participants were asked to indicate the time they started walking and the time they passed through the gate. Mental walking time was found to increase with increasing gate distance and decreasing gate width. Thus, it took a participant longer to walk mentally through a narrow gate than through a wider gate placed at the same distance. [ 15 ] [ 16 ] This finding led the neurophysiologists Marc Jeannerod and Jean Decety to propose that there is a similarity in mental states between action simulation and execution. [ 17 ] [ 18 ] [ 19 ] The functional equivalence between action and imagination goes beyond motor movements. For instance, similar cortical networks mediate music performance and music imagery in pianists. [ 20 ] A large number of functional neuroimaging studies have demonstrated that motor imagery is associated with the specific activation of the neural circuits involved in the early stage of motor control (i.e., motor programming). This circuitry includes the supplementary motor area , the primary motor cortex , the inferior parietal cortex , the basal ganglia , and the cerebellum . [ 21 ] [ 22 ] Such physiological data give strong support for common neural mechanisms of imagery and motor preparation. [ 23 ] Measurements of cardiac and respiratory activity during motor imagery and during actual motor performance revealed a covariation of heart rate and pulmonary ventilation with the degree of imagined effort. [ 24 ] [ 25 ] [ 26 ] Motor imagery activates motor pathways. Muscular activity during motor imagery often increases with respect to rest. When this is the case, EMG activity is limited to those muscles that participate in the simulated action and tends to be proportional to the amount of imagined effort. [ 27 ] Motor imagery is now widely used as a technique to enhance motor learning and to improve neurological rehabilitation in patients after stroke . Its effectiveness has been demonstrated in musicians. [ 28 ] Motor imagery is close to the notion of simulation used in cognitive and social neuroscience to account for different processes. An individual who is engaging in simulation may replay his own past experience in order to extract from it pleasurable, motivational or strictly informational properties. Such a view was clearly described by the Swedish physiologist Hesslow.
[ 33 ] For this author, the simulation hypothesis states that thinking consists of simulated interaction with the environment, and rests on the following three core assumptions: (1) Simulation of actions: we can activate motor structures of the brain in a way that resembles activity during a normal action but does not cause any overt movement; (2) Simulation of perception: imagining perceiving something is essentially the same as actually perceiving it, only the perceptual activity is generated by the brain itself rather than by external stimuli; (3) Anticipation: there exist associative mechanisms that enable both behavioral and perceptual activity to elicit other perceptual activity in the sensory areas of the brain. Most importantly, a simulated action can elicit perceptual activity that resembles the activity that would have occurred if the action had actually been performed. Mental simulation may also be a representational tool to understand the self and others. Philosophy of mind and developmental psychology also draw on simulation to explain our capacity to mentalize, i.e., to understand mental states (intentions, desires, feelings, and beliefs) of others (aka theory of mind ). In this context, the basic idea of simulation is that the attributor attempts to mimic the mental activity of the target by using his own psychological resources. [ 34 ] In order to understand the mental state of another when observing the other acting, the individual imagines herself/himself performing the same action, a covert simulation that does not lead to an overt behavior. One critical aspect of the simulation theory of mind is the idea that in trying to impute mental states to others, an attributor has to set aside her own current mental states, and substitutes those of the target. [ 35 ]
https://en.wikipedia.org/wiki/Motor_imagery
Motor oil , engine oil , or engine lubricant is any one of various substances used for the lubrication of internal combustion engines . They typically consist of base oils enhanced with various additives, particularly antiwear additives , detergents, dispersants , and, for multi-grade oils, viscosity index improvers . [ citation needed ] [ 1 ] The main function of motor oil is to reduce friction and wear on moving parts and to keep the engine clean of sludge (one of the functions of dispersants ) and varnish (detergents). It also neutralizes acids that originate from fuel and from oxidation of the lubricant (detergents), improves the sealing of piston rings, and cools the engine by carrying heat away from moving parts. [ 2 ] In addition to the aforementioned basic constituents, almost all lubricating oils contain corrosion and oxidation inhibitors. Motor oil may be composed of only a lubricant base stock in the case of non-detergent oil, or a lubricant base stock plus additives to improve the oil's detergency, extreme pressure performance, and ability to inhibit corrosion of engine parts. Motor oils are blended using base oils composed of petroleum -based hydrocarbons , polyalphaolefins (PAO), or their mixtures in various proportions, sometimes with up to 20% by weight of esters for better dissolution of additives. [ 3 ] On 6 September 1866, American John Ellis founded the Continuous Oil Refining Company . While studying the possible healing powers of crude oil, Dr. Ellis was disappointed to find no real medicinal value, but was intrigued by its potential lubricating properties. He eventually abandoned the medical practice to devote his time to the development of an all-petroleum, high-viscosity lubricant for steam engines, which at the time were using inefficient combinations of petroleum and animal and vegetable fats. He made his breakthrough when he developed an oil that worked effectively at high temperatures. This meant fewer stuck valves and corroded cylinders. Motor oil is a lubricant used in internal combustion engines , which power cars , motorcycles , lawnmowers , engine-generators , and many other machines. In engines, there are parts which move against each other, and the friction between the parts wastes otherwise useful power by converting kinetic energy into heat . It also wears away those parts, which could lead to lower efficiency and degradation of the engine. Proper lubrication decreases fuel consumption, decreases wasted power, and increases engine longevity. Lubricating oil creates a separating film between surfaces of adjacent moving parts to minimize direct contact between them, decreasing frictional heat and reducing wear, thus protecting the engine. In use, motor oil transfers heat through conduction as it flows through the engine. [ 4 ] In an engine with a recirculating oil pump, this heat is transferred by means of airflow over the exterior surface of the oil pan , airflow through an oil cooler , and through oil gases evacuated by the positive crankcase ventilation (PCV) system. While modern recirculating pumps are typically provided in passenger cars and other engines of similar or larger size, total-loss oiling is a design option that remains popular in small and miniature engines. In petrol (gasoline) engines, the top piston ring can expose the motor oil to temperatures of 160 °C (320 °F). In diesel engines, the top ring can expose the oil to temperatures over 315 °C (600 °F). Motor oils with higher viscosity indices thin less at these higher temperatures.
[ 5 ] Coating metal parts with oil also keeps them from being exposed to oxygen , inhibiting oxidation at elevated operating temperatures preventing rust or corrosion . Corrosion inhibitors may also be added to the motor oil. Many motor oils also have detergents and dispersants added to help keep the engine clean and minimize oil sludge build-up. The oil is able to trap soot from combustion in itself, rather than leaving it deposited on the internal surfaces. It is a combination of this and some singeing that turns used oil black after some running. Rubbing of metal engine parts inevitably produces some microscopic metallic particles from the wearing of the surfaces. Such particles could circulate in the oil and grind against moving parts, causing wear . Because particles accumulate in the oil, it is typically circulated through an oil filter to remove harmful particles. An oil pump , a vane or gear pump powered by the engine, pumps the oil throughout the engine, including the oil filter. Oil filters can be a full flow or bypass type. In the crankcase of a vehicle engine, motor oil lubricates rotating or sliding surfaces between the crankshaft journal bearings (main bearings and big-end bearings) and rods connecting the pistons to the crankshaft. The oil collects in an oil pan , or sump , at the bottom of the crankcase. In some small engines such as lawn mower engines, dippers on the bottoms of connecting rods dip into the oil at the bottom and splash it around the crankcase as needed to lubricate parts inside. In modern vehicle engines, the oil pump takes oil from the oil pan and sends it through the oil filter into oil galleries, from which the oil lubricates the main bearings holding the crankshaft up at the main journals and camshaft bearings operating the valves. In typical modern vehicles, oil pressure-fed from the oil galleries to the main bearings enters holes in the main journals of the crankshaft. From these holes in the main journals, the oil moves through passageways inside the crankshaft to exit holes in the rod journals to lubricate the rod bearings and connecting rods. Some simpler designs relied on these rapidly moving parts to splash and lubricate the contacting surfaces between the piston rings and interior surfaces of the cylinders. However, in modern designs, there are also passageways through the rods which carry oil from the rod bearings to the rod-piston connections and lubricate the contacting surfaces between the piston rings and interior surfaces of the cylinders . This oil film also serves as a seal between the piston rings and cylinder walls to separate the combustion chamber in the cylinder head from the crankcase. The oil then drips back down into the oil pan. [ 6 ] [ 7 ] Motor oil may also serve as a cooling agent. In some engines oil is sprayed through a nozzle inside the crankcase onto the piston to provide cooling of specific parts that undergo high-temperature strain. On the other hand, the thermal capacity of the oil pool has to be filled, i.e. the oil has to reach its designed temperature range before it can protect the engine under high load. This typically takes longer than heating the main cooling agent – water or mixtures thereof – up to its operating temperature. In order to inform the driver about the oil temperature, some older and most high-performance or racing engines feature an oil thermometer . 
Continued operation of an internal combustion engine without adequate engine oil can cause damage to the engine, first by wear and tear, and in extreme cases by "engine seizure", where the lack of lubrication and cooling causes the engine to cease operation suddenly. Engine seizure can cause extensive damage to the engine mechanisms. [ 8 ] [ 9 ] The traditional two-stroke engine design is used in most small gasoline-powered consumer equipment, such as snow blowers , leaf blowers , chain saws, model airplanes, hedge trimmers, soil cultivators, and so on. It is also used in some trolling motors for personal watercraft. In this design, motor oil is blended into the fuel to lubricate the cylinder and reciprocating assembly, in ratios such as 25:1, 40:1, or 50:1 fuel-to-oil. This two-stroke oil is burned during combustion and contributes to emissions and inefficiency. [ 10 ] Non-smoking two-stroke oils, often composed of esters or polyglycols, are designed to reduce visible emissions and environmental impact. Environmental legislation, particularly in Europe, has driven the adoption of ester-based two-stroke oils, especially for leisure marine applications. Often, these motors are not exposed to as wide a range of service temperatures as in vehicles, so these oils may be single-viscosity oils. [ citation needed ] Newer two-stroke engines used in outboard motors and some personal watercraft use direct-injection systems which eliminate the need for oil to be blended into the fuel, offering cleaner and more efficient operation at the expense of separate oil maintenance procedures. Portable electricity generators and "walk behind" lawn mowers use four-stroke engines similar to those in automotive vehicles and use standard motor oils. [ 10 ] Most motor oils are made from a heavier, thicker petroleum hydrocarbon base stock derived from crude oil , with additives to improve certain properties. The bulk of a typical motor oil consists of hydrocarbons with between 18 and 34 carbon atoms per molecule . [ 11 ] One of the most important properties of motor oil in maintaining a lubricating film between moving parts is its viscosity . The viscosity of a liquid can be thought of as its "thickness", or a measure of its resistance to flow. The viscosity must be high enough to maintain a lubricating film, but low enough that the oil can flow around the engine parts under all conditions. The viscosity index is a measure of how much the oil's viscosity changes as temperature changes; a higher viscosity index indicates that the viscosity changes less with temperature than a lower viscosity index. Motor oil must be able to flow adequately at the lowest temperature it is expected to experience in order to minimize metal-to-metal contact between moving parts upon starting up the engine. The pour point was the first measure used to define this property of motor oil, defined by ASTM D97 as "...an index of the lowest temperature of its utility..." for a given application, [ 12 ] but the cold-cranking simulator (CCS, see ASTM D5293-08) and mini-rotary viscometer (MRV, see ASTM D3829-02(2007), ASTM D4684-08) are today the properties required in motor oil specifications and define the Society of Automotive Engineers (SAE) classifications. Oil is largely composed of hydrocarbons, which can burn if ignited. Still another important property of motor oil is its flash point , the lowest temperature at which the oil gives off vapors which can ignite. It is dangerous for the oil in a motor to ignite and burn, so a high flash point is desirable.
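As a quick worked example of the fuel-to-oil premix ratios mentioned earlier in this section, the amount of two-stroke oil needed for a tank of fuel follows directly from the ratio. The fuel volume below is an arbitrary assumption for illustration; the correct ratio is always the one specified by the engine manufacturer.

```python
# Simple two-stroke premix calculation (illustrative only).

def oil_for_premix(fuel_liters, ratio):
    """Liters of two-stroke oil for a given fuel volume and fuel-to-oil ratio."""
    return fuel_liters / ratio

fuel = 5.0   # liters of gasoline (assumed example volume)
for ratio in (25, 40, 50):
    print(f"{ratio}:1 -> {oil_for_premix(fuel, ratio) * 1000:.0f} mL of oil")
# 25:1 -> 200 mL, 40:1 -> 125 mL, 50:1 -> 100 mL
```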
At a petroleum refinery , fractional distillation separates a motor oil fraction from other crude oil fractions, removing the more volatile components, and therefore increasing the oil's flash point (reducing its tendency to burn). Another manipulated property of motor oil is its total base number (TBN), which is a measurement of the reserve alkalinity of an oil, meaning its ability to neutralize acids. The resulting quantity is determined as mg KOH/ (gram of lubricant). Analogously, total acid number (TAN) is the measure of a lubricant's acidity . Other tests include zinc , phosphorus , or sulfur content, and testing for excessive foaming . The Noack volatility test (ASTM D-5800) determines the physical evaporation loss of lubricants in high temperature service. A maximum of 14% evaporation loss is allowable to meet API SL and ILSAC GF-3 specifications. Some automotive OEM oil specifications require lower than 10%. Table of thermal and physical properties of typical (SAE 20W [ 13 ] ) unused engine oil: [ 14 ] [ 15 ] The oil and the oil filter need to be periodically replaced; the process is called an oil change . While there is an entire industry surrounding regular oil changes and maintenance, an oil change is a relatively simple car maintenance operation that many car owners can do themselves. It involves draining the oil from the engine into a drip pan, replacing the filter, and adding fresh oil. However, most localities require the used oil to be recycled after an oil change. In engines, there is some exposure of the oil to products of internal combustion, and microscopic coke particles from black soot accumulate in the oil during operation. Also, the rubbing of metal engine parts produces some microscopic metallic particles from the wearing of the surfaces. Such particles could circulate in the oil and grind against the part surfaces causing wear . The oil filter removes many of the particles and sludge, but eventually, the oil filter can become clogged, if used for extremely long periods. The motor oil and especially the additives also undergo thermal and mechanical degradation, which reduces the viscosity and reserve alkalinity of the oil. At reduced viscosity, the oil is not as capable of lubricating the engine, thus increasing wear and the chance of overheating. Reserve alkalinity is the ability of the oil to resist the formation of acids. Should the reserve alkalinity decline to zero, those acids form and corrode the engine. Some engine manufacturers specify which Society of Automotive Engineers (SAE) viscosity grade of oil should be used, but different viscosity motor oil may perform better based on the operating environment. Many manufacturers have varying requirements and have designations for motor oil they require to be used. This is driven by the EPA requirement that the same viscosity grade of oil used in the MPG test must be recommended to the customer. This exclusive recommendation led to the elimination of informative charts depicting climate temperature range along with several corresponding oil viscosity grades being suggested. In general, unless specified by the manufacturer, thicker oils are not necessarily better than thinner oils; heavy oils tend to stick longer to parts between two moving surfaces, and this degrades the oil faster than a lighter oil that flows better, allowing fresh oil in its place sooner. Cold weather has a thickening effect on conventional oil, and this is one reason thinner oils are manufacturer recommended in places with cold winters. 
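Returning to the Noack volatility figures above, here is a minimal sketch of checking a measured evaporation loss against those ceilings; the limits table simply restates the percentages quoted in the text and is not an official specification table.

```python
# Evaporation-loss ceilings as quoted above (illustrative, not an official table).
NOACK_LIMITS_PERCENT = {
    "API SL / ILSAC GF-3": 14.0,   # maximum allowable loss
    "stricter OEM specs": 10.0,    # some OEM specifications require "lower than 10%"
}

def meets_noack_limit(measured_loss_percent: float, limit_percent: float) -> bool:
    return measured_loss_percent <= limit_percent

measured = 12.5  # hypothetical ASTM D-5800 result, percent mass loss
for spec, limit in NOACK_LIMITS_PERCENT.items():
    verdict = "pass" if meets_noack_limit(measured, limit) else "fail"
    print(f"{spec}: {verdict} at {measured}% loss (limit {limit}%)")
```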
Motor oil changes are usually scheduled based on the time in service or the distance that the vehicle has traveled. These are rough indications of the real factors that control when an oil change is appropriate, which include how long the oil has been run at elevated temperatures, how many heating cycles the engine has been through, and how hard the engine has worked. The vehicle distance is intended to estimate the time at high temperature, while the time in service is supposed to correlate with the number of vehicle trips and capture the number of heating cycles. Oil does not degrade significantly just sitting in a cold engine. On the other hand, if a car is driven just for very short distances, the oil will not fully heat up, and it will accumulate contaminants such as water, due to lack of sufficient heat to boil off the water. Oil in this condition, just sitting in an engine, can cause problems. Also important is the quality of the oil used, especially with synthetics (synthetics are more stable than conventional oils). Some manufacturers address this (for example, BMW and VW with their respective long-life standards), while others do not. Time-based intervals account for the short-trip drivers who drive short distances, which build up more contaminants. Manufacturers advise to not exceed their time or distance-driven interval for a motor oil change. Many modern cars now list somewhat higher intervals for changing oil and filter, with the constraint of "severe" service requiring more frequent changes with less-than-ideal driving. This applies to short trips of under 15 kilometres (10 mi), where the oil does not get to full operating temperature long enough to boil off condensation, excess fuel, and other contamination that leads to "sludge", "varnish", "acids", or other deposits. Many manufacturers have engine computer calculations to estimate the oil's condition based on the factors which degrade it, such as RPM, temperature, and trip length; one system adds an optical sensor for determining the clarity of the oil in the engine. These systems are commonly known as oil life monitor s or OLMs. Some quick oil change shops recommend intervals of 5,000 kilometres (3,000 mi), or every three months; this is not necessary, according to many automobile manufacturers. This has led to a campaign by the California EPA against the " 3,000-mile myth ", promoting vehicle manufacturer's recommendations for oil change intervals over those of the oil change industry. The engine user can, in replacing the oil, adjust the viscosity for the ambient temperature change, thicker for summer heat and thinner for the winter cold. Lower-viscosity oils are common in newer vehicles. By the mid-1980s, recommended viscosities had moved down to 5W-30, primarily to improve fuel efficiency. A typical modern application would be Honda motor's use of 5W-20 (and in their newest vehicles, 0W-20) viscosity oil for 12,000 kilometres (7,500 mi). Engine designs are evolving to allow the use of even lower-viscosity oils without the risk of excessive metal-to-metal abrasion, principally in the cam and valve mechanism areas. In line with car manufacturers push towards these lower viscosities in search of better fuel economy, in April 2013 the Society of Automotive Engineers (SAE) introduced an SAE 16 viscosity rating, a break from its traditional "divisible by 10" numbering system for its high-temperature viscosity ratings that spanned from low-viscosity SAE 20 to high-viscosity SAE 60. 
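The oil life monitors mentioned above weigh factors such as RPM, oil temperature and trip length; manufacturers' algorithms are proprietary, so the following toy sketch only illustrates the general idea, with invented weighting factors.

```python
def remaining_oil_life_percent(trips, rated_interval_km=12_000):
    """Toy oil-life estimate: a trip consumes 'life' faster when it is short
    (the oil never fully warms up) or hard (high oil temperature or RPM).
    trips is an iterable of (distance_km, avg_oil_temp_c, avg_rpm) tuples;
    all weighting factors are invented for illustration."""
    used_km_equivalent = 0.0
    for distance_km, oil_temp_c, rpm in trips:
        penalty = 1.0
        if distance_km < 15:   # short trip: condensation and fuel dilution accumulate
            penalty += 0.5
        if oil_temp_c > 120:   # sustained high oil temperature
            penalty += 0.3
        if rpm > 4_000:        # hard use
            penalty += 0.2
        used_km_equivalent += distance_km * penalty
    return max(0.0, 100.0 * (1.0 - used_km_equivalent / rated_interval_km))

# 200 short 8 km commutes on which the oil never fully warms up:
print(f"{remaining_oil_life_percent([(8, 70, 2500)] * 200):.0f}% estimated life remaining")
```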
[ 16 ] The Society of Automotive Engineers (SAE) has established a numerical code system for grading motor oils according to their viscosity characteristics known as SAE J300 . This standard is commonly used throughout the world, and standards organizations that do so include API [ 17 ] and ACEA . [ 18 ] The grades include single grades, such as SAE 30, and also multi-grades such as SAE 15W-30. A multi-grade consists of a winter grade specifying the viscosity at cold temperatures and a non-winter grade specifying the viscosity at operating temperatures. An engine oil using a polymeric viscosity index improver (VII) must be classified as multi-grade. Breakdown of VIIs under shear is a concern in motorcycle applications, where the transmission may share lubricating oil with the motor. For this reason, motorcycle-specific oil is sometimes recommended. [ 19 ] The necessity of higher-priced motorcycle-specific oil has also been challenged by at least one consumer organization. [ 20 ] Engine lubricants are evaluated against the American Petroleum Institute (API), SJ, SL, SM, SN, SP, CH-4, CI-4, CI-4 PLUS, CJ-4, CK, and FA, as well as International Lubricant Standardization and Approval Committee (ILSAC) GF-3, GF-4, GF-5, GF-6A, GF-6B and Cummins, Mack and John Deere (and other Original Equipment Manufacturers (OEM)) requirements. These evaluations include chemical and physical properties using bench test methods as well as actual running engine tests to quantify engine sludge, oxidation, component wear, oil consumption, piston deposits and fuel economy. Originally S for spark ignition and C for compression, as used with diesel engines. Many oil producers still refer these categories in their marketing. [ 21 ] The API sets minimum performance standards for lubricants. Motor oil is used for the lubrication , cooling, and cleaning of internal combustion engines . Motor oil may be composed of only a lubricant base stock in the case of mostly obsolete non- detergent oil, or a lubricant base stock plus additives to improve the oil's detergency, extreme pressure performance, and ability to inhibit corrosion of engine parts. Groups: Lubricant base stocks are categorized into five groups by the API. Group I base stocks are composed of fractionally distilled petroleum which is further refined with solvent extraction processes to improve certain properties such as oxidation resistance and to remove wax. Poorly refined mineral oils that fail to meet the minimum VI of 80 required in group I fit into Group V. Group II base stocks are composed of fractionally distilled petroleum that has been hydrocracked to further refine and purify it. Group III base stocks have similar characteristics to Group II base stocks, except that Group III base stocks have higher viscosity indexes. Group III base stocks are produced by further hydrocracking of either Group II base stocks or hydroisomerized slack wax (a Group I and II dewaxing process by-product). Group IV base stock are polyalphaolefins (PAOs). Group V is a catch-all group for any base stock not described by Groups I to IV. Examples of group V base stocks include polyolesters (POE), polyalkylene glycols (PAG), and perfluoropolyalkylethers (PFPAEs) and poorly refined mineral oil. Groups I and II are commonly referred to as mineral oils , group III is typically referred to as synthetic (except in Germany and Japan, where they must not be called synthetic) and group IV is a synthetic oil. Group V base oils are so diverse that there is no catch-all description. 
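Returning to the SAE J300 labels described at the start of this passage, the small sketch below splits a grade string into its winter and high-temperature parts; the parsing rules are inferred from the naming convention as described, not taken from the standard's text.

```python
import re

def parse_sae_grade(label: str):
    """Split an SAE viscosity label such as 'SAE 15W-30' or 'SAE 30' into
    (winter_grade, hot_grade); either part may be None for a single grade."""
    m = re.fullmatch(r"(?:SAE\s*)?(?:(\d+)W)?-?(\d+)?", label.strip(), re.IGNORECASE)
    if not m or not (m.group(1) or m.group(2)):
        raise ValueError(f"not an SAE-style grade: {label!r}")
    winter = int(m.group(1)) if m.group(1) else None
    hot = int(m.group(2)) if m.group(2) else None
    return winter, hot

print(parse_sae_grade("SAE 15W-30"))   # (15, 30)  multi-grade
print(parse_sae_grade("SAE 30"))       # (None, 30) single grade
print(parse_sae_grade("0W-16"))        # (0, 16)   the newer low-viscosity rating
```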
The API service classes [ 22 ] have two general classifications: S for "service/spark ignition" (typical passenger cars and light trucks using gasoline engines ), and C for "commercial/compression ignition" (typical diesel equipment). Engine oil which has been tested and meets the API standards may display the API Service Symbol (also known as the "Donut") with the service categories on containers sold to oil users. [ 22 ] The latest API service category is API SP for gasoline automobile and light-truck engines. [ 23 ] The SP standard refers to a group of laboratory and engine tests, including the latest series for control of high-temperature deposits. Current API service categories include SP, SN, SM, SL and SJ for gasoline engines. All earlier service categories are obsolete. [ 21 ] Motorcycle oils commonly still use the SF/SG standard though. [ citation needed ] [ 24 ] All the current gasoline categories (including the obsolete SH) have placed limitations on the phosphorus content for certain SAE viscosity grades (the xW-20, xW-30) due to the chemical poisoning that phosphorus has on catalytic converters. Phosphorus is a key anti-wear component in motor oil and is usually found in motor oil in the form of zinc dithiophosphate (ZDDP). Each new API category has placed successively lower phosphorus and zinc limits, and thus has created a controversial issue of obsolescent oils needed for older engines, especially engines with sliding (flat/cleave) tappets. API and ILSAC, which represents most of the world's major automobile/engine manufacturers, state API SM/ILSAC GF-4 is fully backwards compatible, and it is noted that one of the engine tests required for API SM, the Sequence IVA, is a sliding tappet design to test specifically for cam wear protection. Not everyone is in agreement with backwards compatibility, and in addition, there are special situations, such as "performance" engines or fully race built engines, where the engine protection requirements are above and beyond API/ILSAC requirements. Because of this, there are specialty oils out in the market place with higher than API allowed phosphorus levels. Most engines built before 1985 have the flat/cleave bearing style systems of construction, which is sensitive to reducing zinc and phosphorus. For example, in API SG rated oils, this was at the 1200–1300 ppm level for zinc and phosphorus, where the current SM is under 600 ppm. This reduction in anti-wear chemicals in oil has caused premature failures of camshafts and other high pressure bearings in many older automobiles and has been blamed for premature failure of the oil pump drive/cam position sensor gear that is meshed with camshaft gear in some modern engines. The current diesel engine service categories are API CK-4, CJ-4, CI-4 PLUS, CI-4, CH-4, and FA-4. The previous service categories such as API CC or CD are obsolete. API solved problems with API CI-4 by creating a separate API CI-4 PLUS category that contains some additional requirements – this marking is located in the lower portion of the API Service Symbol "Donut". API CK-4 and FA-4 have been introduced for 2017 model American engines. [ 25 ] API CK-4 is backward compatible that means API CK-4 oils are assumed to provide superior performance to oils made to previous categories and could be used without problems in all previous model engines. API FA-4 oils are formulated for enhanced fuel economy (presented as reduced greenhouse gas emission ). 
To achieve that, they are SAE xW-30 oils blended to a high-temperature, high-shear (HTHS) viscosity of 2.9 cP to 3.2 cP. They are not suitable for all engines, so their use depends on the decision of each engine manufacturer. They cannot be used with diesel fuel containing more than 15 ppm sulfur. Cummins reacted to the introduction of API CK-4 and API FA-4 by issuing its CES 20086 list of API CK-4 registered oils [ 26 ] and CES 20087 list of API FA-4 registered oils. [ 27 ] While engine oils are formulated to meet a specific API service category, they in fact often conform closely enough to both a gasoline and a diesel category. Thus diesel-rated engine oils usually carry the relevant gasoline categories, e.g. an API CJ-4 oil could show either API SL or API SM on the container. The rule is that the first mentioned category is fully met and the second one is fully met except where its requirements clash with the requirements of the first one. [ citation needed ] The API oil classification structure has eliminated specific support for wet-clutch motorcycle applications in their descriptors, and API SJ and newer oils are regarded as specific to automobile and light truck use. Accordingly, motorcycle oils are subject to their own unique standards. See JASO below. As discussed above, motorcycle oils commonly still use the obsolescent SF/SG standard. The International Lubricant Standardization and Approval Committee (ILSAC) also has standards for motor oil. Introduced in 2004, GF-4 [ 28 ] applies to SAE 0W-20, 5W-20, 0W-30, 5W-30, and 10W-30 viscosity grade oils. In general, ILSAC works with API in creating the newest gasoline oil specification, with ILSAC adding an extra requirement of fuel economy testing to their specification. For GF-4, a Sequence VIB Fuel Economy Test (ASTM D6837) is required, which is not required for API service category SM. A key new test for GF-4, which is also required for API SM, is the Sequence IIIG, which involves running a GM 3.8 litres (230 cu in) V-6 at 125 hp (93 kW), 3,600 rpm, and 150 °C (302 °F) oil temperature for 100 hours. These are much more severe conditions than any API-specified oil was designed for: the vehicles that typically push their oil temperature consistently above 100 °C (212 °F) are mostly those with turbocharged engines, along with many engines of European or Japanese origin, particularly small-capacity, high-power-output designs. The IIIG test is about 50% more difficult [ 29 ] than the previous IIIF test, used in GF-3 and API SL oils. Engine oils bearing the API starburst symbol since 2005 are ILSAC GF-4 compliant. [ 28 ] To help consumers recognize that an oil meets the ILSAC requirements, API developed a "starburst" certification mark. A new set of specifications, GF-5, [ 30 ] took effect in October 2010. The industry had one year to convert their oils to GF-5, and in September 2011 ILSAC no longer offered licensing for GF-4. After nearly a decade of GF-5, ILSAC released final GF-6 specifications in 2019, with licensed sales to oil manufacturers and re-branders to begin in May 2020. There are two GF-6 standards: GF-6A, a progression that is fully backwards compatible with GF-5, and GF-6B, specifically for SAE 0W-16 viscosity oil. [ 31 ] The ACEA ( Association des Constructeurs Européens d'Automobiles ) performance/quality classifications A3/A5 tests used in Europe are arguably more stringent than the API and ILSAC standards [ citation needed ] . 
[ 32 ] [ 33 ] CEC (The Co-ordinating European Council) is the development body for fuel and lubricant testing in Europe and beyond, setting the standards via their European Industry groups; ACEA, ATIEL, ATC and CONCAWE. ACEA does not certify oils, nor license, nor register, compliance certificates. Oil manufacturers are themselves responsible for carrying out all oil testing and evaluation according to recognised engine lubricant industry standards and practices. [ 34 ] Popular categories include A3/B3 and A3/B4 which are defined as "Stable, stay-in-grade Engine Oil intended for use in Passenger Car & Light Duty Van Gasoline& Diesel Engines with extended drain intervals" A3/B5 is suitable only for engines designed to use low viscosities. Category C oils are designated for use with catalysts and particulate filters while Category E is for heavy duty diesel. [ 35 ] [ 36 ] The Japanese Automotive Standards Organization (JASO) has created their own set of performance and quality standards for petrol engines of Japanese origin. For four-stroke gasoline engines, the JASO T904 standard is used, and is particularly relevant to motorcycle engines. The JASO T904-MA and MA2 standards are designed to distinguish oils that are approved for wet clutch use, with MA2 lubricants delivering higher friction performance. The JASO T904-MB standard denotes oils not suitable for wet clutch use, and are therefore used in scooters equipped with continuously variable transmissions. The addition of friction modifiers to JASO MB oils can contribute to greater fuel economy in these applications. [ 37 ] For two-stroke gasoline engines, the JASO M345 (FA, FB, FC, FD) standard is used, [ 38 ] and this refers particularly to low ash, lubricity, detergency, low smoke and exhaust blocking. These standards, especially JASO-MA (for motorcycles) and JASO-FC, are designed to address oil-requirement issues not addressed by the API service categories. One element of the JASO-MA standard is a friction test designed to determine suitability for wet clutch usage. [ 39 ] [ 40 ] An oil that meets JASO-MA is considered appropriate for wet clutch operations. Oils marketed as motorcycle-specific will carry the JASO-MA label. A 1989 American Society for Testing and Materials (ASTM) report stated that its 12-year effort to come up with a new high-temperature, high-shear (HTHS) standard was not successful. Referring to SAE J300, the basis for current grading standards, the report stated: The rapid growth of non-Newtonian multigraded oils has rendered kinematic viscosity as a nearly useless parameter for characterising "real" viscosity in critical zones of an engine... There are those who are disappointed that the twelve-year effort has not resulted in a redefinition of the SAE J300 Engine Oil Viscosity Classification document so as to express high-temperature viscosity of the various grades ... In the view of this writer, this redefinition did not occur because the automotive lubricant market knows of no field failures unambiguously attributable to insufficient HTHS oil viscosity. [ 41 ] Some current engine or vehicle manufacturers require a specific oil formula, known as oil specs, be used to add extra levels of protection for special engine designs, materials and operating conditions. General Motors defined and licensed 'dexos' oil specifications from 2011. dexos 1 and dexos R are designed for petrol (gasoline) engines, dexos 2 and dexos D are designed for diesel engines, however dexos 2 is specified for European petrol vehicles too. 
dexos1 was introduced in 2011, superseded by dexos1Gen2 in 2015, and later dexos1Gen3. dexos 2 was discontinued in 2025, replaced by dexos D for diesels, and dexos R for petrol (gasoline) engines. [ 42 ] Starting in the late 1990s, BMW for example came out with a spec called LL-98 (Long Life 1998) which requires special additives in oils that were approved to meet that spec. BMW regularly develops new specs to meet the increasing demands of the EPA emission standards and MPG requirements as well as new engines. Failure to use the correct specification oil has been known to cause PCV (positive crankcase ventilation), VVT (variable valve timing) system, gasket and sealing system, and other internal combustion component premature clogging and other failures. Some of the additives in those specs are designed to aid in keeping systems lubricated and clean. Some examples of BMW's other specs are: LL-01, LL-01 fe, LL-12, LL-14+, LL-17 fe. [ 43 ] European vehicle manufacturers have led the way for oil specs but Asian and American manufacturers have since joined in creating a need for oil change, repair shops and dealerships to carry many different oils to avoid damages both mechanical and monetarily. In addition to the viscosity index improvers, motor oil manufacturers often include other additives such as detergents and dispersants to help keep the engine clean by minimizing sludge buildup, corrosion inhibitors , and alkaline additives to neutralize acidic oxidation products of the oil. Most commercial oils have a minimal amount of zinc dialkyldithiophosphate as an anti-wear additive to protect contacting metal surfaces with zinc and other compounds in case of metal to metal contact. The quantity of zinc dialkyldithiophosphate is limited to minimize adverse effect on catalytic converters . Another aspect for after-treatment devices is the deposition of oil ash, which increases the exhaust back pressure and reduces fuel economy over time. The so-called "chemical box" limits today the concentrations of sulfur, ash and phosphorus (SAP). There are other additives available commercially which can be added to the oil by the user for purported additional benefit. Some of these additives include: Some molybdenum disulfide containing oils may be unsuitable for motorcycles which share wet clutch lubrication with the engine. [ 39 ] Due to its chemical composition, worldwide dispersion and effects on the environment, used motor oil is considered a serious environmental problem. [ 46 ] [ 47 ] Most current motor-oil lubricants contain petroleum base stocks, which are toxic to the environment and difficult to dispose of after use. [ 48 ] Over 40% of the pollution in America's waterways is from used motor oil. [ 49 ] Used oil is considered the largest source of oil pollution in the U.S. harbors and waterways, at 1,460 ML (385 × 10 ^ 6 US gal) per year, mostly from improper disposal. [ 50 ] By far the greatest cause of motor-oil pollution in oceans comes from drains and urban street-runoff, much of it caused by improper disposal of engine oil. [ 51 ] One US gallon (3.8 L) of used oil can generate a 32,000 m 2 (8 acres) slick on surface water, threatening fish, waterfowl and other aquatic life. [ 50 ] According to the U.S. EPA, films of oil on the surface of water prevent the replenishment of dissolved oxygen, impair photosynthetic processes, and block sunlight. 
[ 52 ] Toxic effects of used oil on freshwater and marine organisms vary, but significant long-term effects have been found at concentrations of 310 ppm in several freshwater fish species and as low as 1 ppm in marine life forms. [ 52 ] Motor oil can have an incredibly detrimental effect on the environment, particularly to plants that depend on healthy soil to grow. There are three main ways that motor oil affects plants: Used motor-oil dumped on land reduces soil productivity. [ 52 ] Improperly disposed used oil ends up in landfills, sewers, backyards, or storm drains where soil, groundwater and drinking water may become contaminated. [ 53 ] Synthetic lubricants were first made in significant quantities as replacements for mineral lubricants (and fuels) by German scientists in the late 1930s and early 1940s, because of their insufficient quantities of crude needed to fight in World War II . A significant factor in their gain in popularity was the ability of synthetic-based lubricants to remain fluid in very low temperatures, such as those encountered on Germany's eastern front , which caused petroleum-based lubricants to solidify owing to their higher wax content. The use of synthetic lubricants widened through the 1950s and 1960s owing to a property at the other end of the temperature spectrum – the ability to lubricate aviation engines at high temperatures that caused mineral-based lubricants to break down. In the mid-1970s, synthetic motor oils were formulated and commercially applied for the first time in automotive applications. The same SAE system for designating motor oil viscosity also applies to synthetic oils . Synthetic oils are derived from either Group III, Group IV, or some Group V bases. Synthetics include classes of lubricants like synthetic esters (Group V) as well as "others" like GTL (methane gas-to-liquid) (Group III +) and polyalpha-olefins (Group IV). Higher purity and therefore better property control theoretically means synthetic oil has better mechanical properties at extremes of high and low temperatures. The molecules are made large and "soft" enough to retain good viscosity at higher temperatures, yet branched molecular structures interfere with solidification and therefore allow flow at lower temperatures. Thus, although the viscosity still decreases as temperature increases, these synthetic motor oils have a higher viscosity index over the traditional petroleum base. Their specially designed properties allow a wider temperature range at higher and lower temperatures and often include a lower pour point. With their improved viscosity index, synthetic oils need lower levels of viscosity index improvers, which are the oil components most vulnerable to thermal and mechanical degradation as the oil ages, and thus they do not degrade as quickly as traditional motor oils. However, they still fill up with particulate matter, although the matter better suspends within the oil, [ citation needed ] and the oil filter still fills and clogs up over time. So periodic oil and filter changes should still be done with synthetic oil, but some synthetic oil suppliers suggest that the intervals between oil changes can be longer, sometimes as long as 16,000–24,000 kilometres (9,900–14,900 mi) primarily due to reduced degradation by oxidation. Tests [ citation needed ] show that fully synthetic oil is superior in extreme service conditions to conventional oil, and may perform better for longer under standard conditions. 
In the vast majority of vehicle applications, however, mineral oil-based lubricants, fortified with additives and with the benefit of over a century of development, continue to be the predominant lubricant for internal combustion engines. [ 54 ] Bio-based oils existed prior to the development of petroleum-based oils in the 19th century. They have become the subject of renewed interest with the advent of bio-fuels and the push for green products. The development of canola-based motor oils began in 1996 in order to pursue environmentally friendly products. Purdue University has funded a project to develop and test such oils. Test results indicate satisfactory performance from the oils tested. [ 55 ] A review on the status of bio-based motor oils and base oils globally, as well as in the U.S., shows how bio-based lubricants show promise in augmenting the current petroleum-based supply of lubricating materials, as well as replacing it in many cases. [ 56 ] The USDA National Center for Agricultural Utilization Research developed an Estolide lubricant technology made from vegetable and animal oils. Estolides have shown great promise in a wide range of applications, including engine lubricants. [ 57 ] Working with the USDA, the California-based company Biosynthetic Technologies has developed a high-performance "drop-in" biosynthetic oil using Estolide technology for use in motor oils and industrial lubricants. This biosynthetic oil has the potential to greatly reduce the environmental challenges associated with petroleum. Independent testing not only shows biosynthetic oils to be among the highest-rated products for protecting engines and machinery; they are also bio-based, biodegradable, non-toxic and do not bioaccumulate in marine organisms. Also, motor oils and lubricants formulated with biosynthetic base oils can be recycled and re-refined with petroleum-based oils. [ 58 ] The U.S.-based company Green Earth Technologies manufactures a bio-based motor oil, called G-Oil, made from animal oils. [ 59 ] A new process to break down polyethylene , a common plastic product found in many consumer containers, converts it into a paraffin-like wax with the correct molecular properties for conversion into a lubricant, avoiding the expensive Fischer–Tropsch process . The plastic is melted and then pumped into a furnace. The heat of the furnace breaks down the molecular chains of polyethylene into wax. Finally, the wax is subjected to a catalytic process that alters the wax's molecular structure, leaving a clear oil. [ 60 ] Biodegradable motor oils based on esters or hydrocarbon-ester blends appeared in the 1990s, followed by formulations beginning in 2000 which respond to the bio-no-tox criteria of the European preparations directive (EC/1999/45). [ 61 ] This means that they are not only biodegradable according to OECD 301x test methods, but also that their aquatic toxicity thresholds (fish, algae, daphnia) are each above 100 mg/L. Another class of base oils suited for engine oil is the polyalkylene glycols. They offer zero-ash, bio-no-tox properties, and lean burn characteristics. [ 62 ] The oil in a motor oil product does break down and burn as it is used in an engine – it also gets contaminated with particles and chemicals that make it a less effective lubricant. Re-refining cleans the contaminants and used additives out of the dirty oil. 
From there, this clean "base stock" is blended with some virgin base stock and a new additives package to make a finished lubricant product that can be just as effective as lubricants made with all-virgin oil. [ 63 ] The United States Environmental Protection Agency (EPA) defines re-refined products as containing at least 25% re-refined base stock, [ 64 ] but other standards are significantly higher. The California State public contract code defines a re-refined motor oil as one that contains at least 70% re-refined base stock. [ 65 ] Motor oils were sold at retail in glass bottles , metal cans, and metal-cardboard cans, before the advent of the current polyethylene plastic bottle , which began to appear in the early 1980s. Reusable spouts were made separately from the cans; with a piercing point like that of a can opener, these spouts could be used to puncture the top of the can and to provide an easy way to pour the oil. Today, motor oil in the US is generally sold in bottles of one U.S. quart (950 mL) and, more rarely, in one-liter (33.8 U.S. fl oz) bottles, as well as in larger plastic containers ranging from approximately 4.4 to 5 liters (4.6 to 5.3 U.S. qt), because most small to mid-size engines require around 3.6 to 5.2 liters (3.8 to 5.5 U.S. qt) of engine oil. In the rest of the world, it is most commonly available in 1 L, 3 L, 4 L, and 5 L retail packages. Distribution to larger users (such as drive-through oil change shops) is often in bulk, by tanker truck or in one-barrel (160 L) drums ; in Europe, 208-litre (55 US gal) and 60-litre (16 US gal) drums are common. Human ingestion of motor oil is considered dangerous. Ingestion of small amounts of unused motor oil will generally result in loose stools or diarrhea . The motor oil could also be aspirated into the lungs, which can cause coughing , wheezing , or trouble breathing. If the skin comes in contact with motor oil, defatting can occur. [ 66 ] [ 67 ] If exposed to an open flame, motor oil could ignite. [ 68 ]
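As a small arithmetic footnote to the packaging sizes above, the sketch below works out how many US-quart bottles are needed for a sump in the 3.6–5.2 L range typical of small to mid-size engines; the function is illustrative only.

```python
import math

US_QUART_LITRES = 0.946  # one U.S. quart, roughly the 950 mL bottle noted above

def quart_bottles_needed(sump_capacity_litres: float) -> int:
    """Whole quart bottles required to fill a sump of the given capacity."""
    return math.ceil(sump_capacity_litres / US_QUART_LITRES)

for capacity in (3.6, 4.4, 5.2):  # the typical range quoted above
    print(f"{capacity} L sump -> {quart_bottles_needed(capacity)} quart bottles")
```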
https://en.wikipedia.org/wiki/Motor_oil
Motor proteins are a class of molecular motors that can move along the cytoskeleton of cells. They do this by converting chemical energy into mechanical work by the hydrolysis of ATP . [ 1 ] Motor proteins are the driving force behind most active transport of proteins and vesicles in the cytoplasm . Kinesins and cytoplasmic dyneins play essential roles in intracellular transport such as axonal transport and in the formation of the spindle apparatus and the separation of the chromosomes during mitosis and meiosis . Axonemal dynein , found in cilia and flagella , is crucial to cell motility , for example in spermatozoa , and fluid transport, for example in trachea. The muscle protein myosin "motors" the contraction of muscle fibers in animals. The importance of motor proteins in cells becomes evident when they fail to fulfill their function. For example, kinesin deficiencies have been identified as the cause for Charcot-Marie-Tooth disease and some kidney diseases . Dynein deficiencies can lead to chronic infections of the respiratory tract as cilia fail to function without dynein. Numerous myosin deficiencies are related to disease states and genetic syndromes. Because myosin II is essential for muscle contraction, defects in muscular myosin predictably cause myopathies. Myosin is necessary in the process of hearing because of its role in the growth of stereocilia so defects in myosin protein structure can lead to Usher syndrome and non-syndromic deafness . [ 2 ] Motor proteins utilizing the cytoskeleton for movement fall into two categories based on their substrate : microfilaments or microtubules . Actin -based motor proteins ( myosin ) move along microfilaments through interaction with actin , and microtubule motors ( dynein and kinesin ) move along microtubules through interaction with tubulin . There are two basic types of microtubule motors: plus-end motors and minus-end motors, depending on the direction in which they "walk" along the microtubule cables within the cell. Myosins are a superfamily of actin motor proteins that convert chemical energy in the form of ATP to mechanical energy, thus generating force and movement. The first identified myosin, myosin II, is responsible for generating muscle contraction . Myosin II is an elongated protein that is formed from two heavy chains with motor heads and two light chains. Each myosin head contains actin and ATP binding site. The myosin heads bind and hydrolyze ATP, which provides the energy to walk toward the plus end of an actin filament. Myosin II are also vital in the process of cell division . For example, non-muscle myosin II bipolar thick filaments provide the force of contraction needed to divide the cell into two daughter cells during cytokinesis. In addition to myosin II, many other myosin types are responsible for variety of movement of non-muscle cells. For example, myosin is involved in intracellular organization and the protrusion of actin-rich structures at the cell surface. Myosin V is involved in vesicle and organelle transport. [ 3 ] [ 4 ] Myosin XI is involved in cytoplasmic streaming , wherein movement along microfilament networks in the cell allows organelles and cytoplasm to stream in a particular direction. [ 5 ] Eighteen different classes of myosins are known. [ 6 ] Genomic representation of myosin motors: [ 7 ] Kinesins are a superfamily of related motor proteins that use a microtubule track in anterograde movement. 
They are vital to spindle formation in mitotic and meiotic chromosome separation during cell division and are also responsible for shuttling mitochondria , Golgi bodies , and vesicles within eukaryotic cells . Kinesins have two heavy chains and two light chains per active motor. The two globular head motor domains in heavy chains can convert the chemical energy of ATP hydrolysis into mechanical work to move along microtubules. [ 8 ] The direction in which cargo is transported can be towards the plus-end or the minus-end, depending on the type of kinesin. In general, kinesins with N-terminal motor domains move their cargo towards the plus ends of microtubules located at the cell periphery, while kinesins with C-terminal motor domains move cargo towards the minus ends of microtubules located at the nucleus. Fourteen distinct kinesin families are known, with some additional kinesin-like proteins that cannot be classified into these families. [ 9 ] Genomic representation of kinesin motors: [ 7 ] Dyneins are microtubule motors capable of a retrograde sliding movement. Dynein complexes are much larger and more complex than kinesin and myosin motors. Dyneins are composed of two or three heavy chains and a large and variable number of associated light chains. Dyneins drive intracellular transport toward the minus end of microtubules which lies in the microtubule organizing center near the nucleus. [ 10 ] The dynein family has two major branches. Axonemal dyneins facilitate the beating of cilia and flagella by rapid and efficient sliding movements of microtubules. Another branch is cytoplasmic dyneins which facilitate the transport of intracellular cargos. Compared to 15 types of axonemal dynein, only two cytoplasmic forms are known. [ 11 ] Genomic representation of dynein motors: [ 7 ] In contrast to animals , fungi and non-vascular plants , the cells of flowering plants lack dynein motors. However, they contain a larger number of different kinesins. Many of these plant-specific kinesin groups are specialized for functions during plant cell mitosis . [ 12 ] Plant cells differ from animal cells in that they have a cell wall . During mitosis, the new cell wall is built by the formation of a cell plate starting in the center of the cell. This process is facilitated by a phragmoplast , a microtubule array unique to plant cell mitosis. The building of cell plate and ultimately the new cell wall requires kinesin-like motor proteins. [ 13 ] Another motor protein essential for plant cell division is kinesin-like calmodulin-binding protein (KCBP), which is unique to plants and part kinesin and part myosin. [ 14 ] Besides the motor proteins above, there are many more types of proteins capable of generating forces and torque in the cell. Many of these molecular motors are ubiquitous in both prokaryotic and eukaryotic cells, although some, such as those involved with cytoskeletal elements or chromatin , are unique to eukaryotes. The motor protein prestin , [ 15 ] expressed in mammalian cochlear outer hair cells, produces mechanical amplification in the cochlea. It is a direct voltage-to-force converter, which operates at the microsecond rate and possesses piezoelectric properties.
https://en.wikipedia.org/wiki/Motor_protein
A motor soft starter is a device used with AC electrical motors to temporarily reduce the load and torque in the powertrain and electric current surge of the motor during start-up. This reduces the mechanical stress on the motor and shaft, as well as the electrodynamic stresses on the attached power cables and electrical distribution network , extending the lifespan of the system. [ 1 ] : 150 It can consist of mechanical or electrical devices, or a combination of both. Mechanical soft starters include clutches and several types of couplings using a fluid , magnetic forces, or steel shot to transmit torque, similar to other forms of torque limiter . Electrical soft starters can be any control system that reduces the torque by temporarily reducing the voltage or current input, or a device that temporarily alters how the motor is connected in the electric circuit . Whenever the armature of an electric motor is moving, both the motor action and generator action are occurring simultaneously; the electromagnetic force produced by generator action opposes the desired motor action and effectively creates a variable motor resistance which increases with motor speed. When a voltage is applied to the motor, this resistance dictates the current drawn by the motor. At rest, the resistance is relatively low, so the starting or inrush current can be high if the full line voltage is applied to the motor. Compared to DC motors, AC motors tend to have significantly higher stator resistance and correspondingly lower inrush current. [ 1 ] : 24 Nevertheless, across-the line starting of induction motors is accompanied by inrush currents up to 7-10 times higher than running current, and higher efficiency motors can experience inrush currents 10-15 times running current. In addition, starting torque can be up to 3 times higher than running torque. The starting torque transient can create a sudden mechanical stress on the machine, which leads to a reduced service life. Moreover, the high inrush current stresses the power supply, which may lead to voltage dips. As a result, lifespan of sensitive equipment may be reduced. [ 1 ] Another common side-effect, especially in residential installations, is voltage sag in the site's power supply created by the high inrush current, visible as flickering lights. A soft starter continuously controls the motor's voltage supply during the start-up phase. This way, the motor is adjusted to the machine's load behavior. Mechanical operating equipment is accelerated smoothly. This lengthens service life, improves operating behavior, and smooths work flows. Electrical soft starters can use solid state devices to control the current flow and therefore the voltage applied to the motor. They can be connected in series with the line voltage applied to the motor, or can be connected inside the delta (Δ) loop of a delta-connected motor , controlling the voltage applied to each winding. Solid state soft starters can control one or more phases of the voltage applied to the induction motor with the best results achieved by three-phase control. Soft starters controlled via two phases have the disadvantage that the uncontrolled phase will always shows some current unbalance with respect to the controlled phases. Typically, the voltage is controlled by reverse- parallel -connected silicon-controlled rectifiers ( thyristors ), but in some circumstances with three-phase control, the control elements can be a reverse-parallel-connected SCR and diode . 
[ 2 ] [ 3 ] Another way to limit motor starting current is a series reactor . If an air core is used for the series reactor then a very efficient and reliable soft starter can be designed which is suitable for all types of 3 phase induction motor [ synchronous / asynchronous ] ranging from 25 kW 415 V to 30 MW 11 kV. Using an air core series reactor soft starter is very common practice for applications like pump, compressor, fan etc. Usually high starting torque applications do not use this method. Soft starters can be set up to the requirements of the individual application. Compared to variable-frequency drives, soft starters require very few user adjustments. Some soft starters also include a "learning" process to automatically adapt the drive settings to the characteristics of a motor load, to reduce the power inrush requirement at the start. In pump applications, a soft starter can avoid pressure surges that could lead to water hammer . Conveyor belt systems can be smoothly started, avoiding jerk and stress on drive components. Fans or other systems with belt drives can be started slowly to avoid belt slipping as well as air pressure surges. Soft starters are seen in electrical R/C helicopters, and allow the rotor blades to spool-up in a smooth, controlled manner rather than a sudden surge. In all systems, a soft start limits the inrush current and so improves stability of the power supply and reduces transient voltage drops that may affect other loads. [ 4 ] [ 5 ] [ 6 ]
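The benefit of ramping the voltage can be seen with a simple first-order model: at standstill an induction motor's inrush current scales roughly with the applied voltage, and its starting torque roughly with the square of the voltage. The sketch below uses the 7–10 times running-current figure from the text (taking 8 times as an example) and otherwise invented numbers; it illustrates the principle rather than serving as a sizing tool.

```python
# First-order effect of soft-start voltage reduction on an induction motor:
# locked-rotor current scales ~ V, locked-rotor torque scales ~ V^2.
FULL_VOLTAGE_INRUSH_MULT = 8.0   # example: 8x running current at full voltage (text: 7-10x)
FULL_VOLTAGE_TORQUE_MULT = 3.0   # example: 3x running torque at full voltage (text: up to 3x)

def soft_start_profile(initial_pct=40.0, ramp_steps=6):
    """Yield (voltage %, approx. current multiple, approx. torque multiple) along a linear ramp."""
    for i in range(ramp_steps + 1):
        v = initial_pct + (100.0 - initial_pct) * i / ramp_steps
        yield v, FULL_VOLTAGE_INRUSH_MULT * v / 100.0, FULL_VOLTAGE_TORQUE_MULT * (v / 100.0) ** 2

for v, i_mult, t_mult in soft_start_profile():
    print(f"{v:5.1f}% V  ->  ~{i_mult:.1f}x current, ~{t_mult:.2f}x torque")
```

Starting at 40% voltage, for example, the model predicts roughly 3.2 times running current instead of 8 times, at the cost of much lower initial torque, which is why high-starting-torque loads are poor candidates for this approach.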
https://en.wikipedia.org/wiki/Motor_soft_starter
The motor system is the set of central and peripheral structures in the nervous system that support motor functions , i.e. movement. [ 1 ] [ 2 ] Peripheral structures may include skeletal muscles and neural connections with muscle tissues. [ 2 ] Central structures include the cerebral cortex , brainstem , spinal cord , the pyramidal system including the upper motor neurons , the extrapyramidal system , the cerebellum , and the lower motor neurons in the brainstem and the spinal cord. [ 3 ] The motor system is a biological system with close ties to the muscular system and the circulatory system . To achieve motor skill , the motor system must accommodate the working state of the muscles, whether hot or cold, stiff or loose, as well as physiological fatigue. The pyramidal motor system , also called the pyramidal tract or the corticospinal tract, starts in the motor center of the cerebral cortex . [ 4 ] There are upper and lower motor neurons in the corticospinal tract. The motor impulses originate in the giant pyramidal cells or Betz cells of the motor area, i.e. the precentral gyrus of the cerebral cortex. These are the upper motor neurons (UMN) of the corticospinal tract. The axons of these cells pass in the depth of the cerebral cortex to the corona radiata and then to the internal capsule , passing through the posterior branch of the internal capsule and continuing to descend in the midbrain and the medulla oblongata . In the lower part of the medulla oblongata, 90–95% of these fibers decussate (pass to the opposite side) and descend in the white matter of the lateral funiculus of the spinal cord on the opposite side. The remaining 5–10% pass to the same side. Fibers for the extremities ( limbs ) pass 100% to the opposite side. The fibers of the corticospinal tract terminate at different levels in the anterior horn of the grey matter of the spinal cord. Here the lower motor neurons (LMN) of the corticospinal tract are located. Peripheral motor nerves carry the motor impulses from the anterior horn to the voluntary muscles . The extrapyramidal motor system consists of motor-modulation systems, particularly the basal ganglia and cerebellum. For further information, see extrapyramidal system .
https://en.wikipedia.org/wiki/Motor_system
In electronics , motorboating is a type of low frequency parasitic oscillation (unwanted cyclic variation of the output voltage) that sometimes occurs in audio and radio equipment and often manifests itself as a sound similar to an idling motorboat engine, a "put-put-put", in audio output from speakers or earphones. [ 1 ] [ 2 ] [ 3 ] [ 4 ] It is a problem encountered particularly in radio transceivers and older vacuum tube audio systems , guitar amplifiers , PA systems and is caused by some type of unwanted feedback in the circuit. The amplifying devices in audio and radio equipment are vulnerable to a variety of feedback problems, which can cause distinctive noise in the output. The term motorboating is applied to oscillations whose frequency is below the range of hearing, from 1 to 10 hertz , [ 3 ] so the individual oscillations are heard as pulses. Sometimes the oscillations can even be seen visually as the woofer cones in speakers slowly moving in and out. [ 2 ] Besides sounding annoying, motorboating can cause clipping of the audio output waveform, and thus distortion in the output. Although low frequency parasitic oscillations in audio equipment may be due to a range of causes, there are a few types of equipment in which it is frequently seen: As with all electronic oscillation , motorboating occurs when some of the output energy from an amplifying device like a transistor or vacuum tube gets coupled back into the input circuit of the device (or possibly into an earlier stage of the amplifier circuit) with the proper phase for positive feedback . This indicates there is an unwanted feedback path through the circuit from output to input of an amplifying stage. The technical conditions for oscillation, given by the Barkhausen stability criterion , are that the total gain around the feedback loop (comprising the amplifying device and the feedback path) at the oscillation frequency must be one (0 dB), and that the phase shift must be a multiple of 360° (2π radians ). Since most amplifying devices, transistors and tubes, are inverting, with the output signal 180° opposite in phase from the input, the feedback path must contribute the other 180° of shift. Many types of parasitic oscillation are caused by small interelectrode capacitances ( parasitic capacitance ) or mutual inductance between adjacent wires or electronic components on the circuit board, which create an inadvertent feedback path. However these usually cause oscillations of high frequency , at the upper end of or above the passband of the equipment. This is because the phase shift of the small reactances in the feedback path, which increases with frequency, only become significant at high frequencies. Low frequency oscillations like motorboating indicate that some device or circuit with a large time constant is involved, such as the interstage coupling capacitors [ 6 ] or transformers, or the filter capacitors and supply transformer winding. [ 6 ] In vacuum tube circuits, a common cause is feedback through the plate power supply circuit. [ 2 ] [ 6 ] [ 4 ] The power supply provides DC current to each tube's plate circuit, so the power supply wiring (power busses) can be an inadvertent feedback path between stages. 
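The Barkhausen condition just described can be restated in a few lines of code: a stage oscillates at a frequency where the loop-gain magnitude reaches one (0 dB) and the total phase shift is a multiple of 360°. The tolerances below are arbitrary illustration values, not part of the criterion.

```python
import cmath

def can_oscillate(loop_gain: complex, gain_tol=0.05, phase_tol_deg=5.0) -> bool:
    """Rough Barkhausen check at one frequency: loop-gain magnitude of at least
    ~1 and total phase within a few degrees of a multiple of 360 (tolerances
    are arbitrary choices for this illustration)."""
    magnitude = abs(loop_gain)
    phase_deg = cmath.phase(loop_gain) * 180.0 / cmath.pi
    distance_from_360_multiple = abs((phase_deg + 180.0) % 360.0 - 180.0)
    return magnitude >= 1.0 - gain_tol and distance_from_360_multiple <= phase_tol_deg

# An inverting stage (180 deg) plus a feedback path contributing another ~180 deg
# at a few hertz gives a net ~360 deg shift; with gain just over unity it oscillates.
print(can_oscillate(complex(1.1, 0.0)))    # True  -> sustained low-frequency oscillation
print(can_oscillate(complex(-1.1, 0.0)))   # False -> only 180 deg of shift, no oscillation
```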
The output (plate) impedance of the diode vacuum tube rectifiers in the power supply is high, so the increasing impedance of the filter capacitors at low frequencies can mean that low frequency swings in the current drawn by output stages can cause voltage swings in the power supply voltage which feed back to earlier stages, [ 2 ] [ 6 ] [ 4 ] making the system a subaudio oscillator. This is caused by inadequate power supply filtering or decoupling. The electrolytic capacitors used in equipment of 1960s vintage contained liquid electrolyte, which dried out over decades, decreasing the capacitance and increasing the leakage current, and these are often the cause. One solution suggested is a "capacitor job", replacing all the old electrolytic capacitors. [ 4 ] [ 11 ] A more radical but comprehensive solution is to add modern IC voltage regulators , or replace the entire power supply with a modern regulated one. [ 4 ] In equipment that includes radio transmitters , motorboating can be caused by radio frequency interference (RFI), the strong radio signal from the transmitter getting into audio or receiver circuits. Receiver audio circuits with automatic gain control (AGC) have a long time constant feedback loop which adjusts the gain of the audio stage to compensate for differences in audio level from causes like different speaking voices. Squelch circuits used in two-way radios to cut out noise similarly have a feedback loop which turns off the audio when high frequency noise is detected. If the inaudible radio frequency (RF) transmitter signal is inadvertently coupled into the receiver's audio signal path, it can trigger the AGC or squelch circuit to reduce the gain. Then, after a delay time set by the circuit's time constant, the circuit increases the gain again until the amplitude of the radio signal triggers another gain reduction. This repetitive cycle is heard as motorboating. An example might be a 27 MHz Citizen's band radio in a car, connected to the car's 12 volt DC supply. If the decoupling capacitors which bypass radio noise from the power supply wires are missing or inadequate, or the long power leads pick up excessive RF from the antenna then it is possible for the RF transmitter signal to enter the radio's receiving circuits through the supply wires. This then causes the motorboating to occur.
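The role the filter and decoupling capacitors play above follows from the impedance of an ideal capacitor, |Z| = 1/(2πfC): at motorboating rates of a few hertz even a large electrolytic presents a substantial impedance, so low-frequency swings in supply current become supply-voltage swings. A quick numeric check, with an arbitrarily chosen example value:

```python
from math import pi

def capacitor_impedance_ohms(frequency_hz: float, capacitance_farads: float) -> float:
    """Magnitude of an ideal capacitor's impedance: |Z| = 1 / (2*pi*f*C)."""
    return 1.0 / (2.0 * pi * frequency_hz * capacitance_farads)

c = 47e-6  # a 47 uF supply filter electrolytic, an arbitrary example value
for f in (5, 100, 10_000):  # a motorboating rate, mains ripple, and an audio frequency
    print(f"{f:>6} Hz: |Z| ~ {capacitor_impedance_ohms(f, c):8.1f} ohms")
# If the electrolyte dries out and capacitance falls, these impedances rise further,
# strengthening the low-frequency feedback path through the supply.
```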
https://en.wikipedia.org/wiki/Motorboating_(electronics)
Motorcycle design can be described as activities that define the appearance, function and engineering of motorcycles . Professionally it is a branch of industrial design , similar to automotive design using identical techniques and methodology, but confined by a set of conventions about what is acceptable to the buying public. These conventions have been defined by the acceptance of the industry and media as a whole to the assumption that the public will only purchase machines that bear more than a passing resemblance to competition machines of whatever kind. In some large OEM motorcycle manufacturers, the term designer can also be applied to the project leader or chief engineer charged with laying down the principal architecture of the vehicle. In recent years, it has also become associated with custom or " chopper " builder culture. Professional motorcycle designers almost always hold degrees in industrial design, industrial design engineering or similar, and have training in styling, modeling, as well as knowledge in aspects of technology associated with single track vehicles. Although no degree as a specialisation exists per se, the majority of candidates graduate through colleges and universities with established transportation design courses, and are trained as automotive designers. Most OEM motorcycle manufacturers, such as Honda , Suzuki , Kawasaki , BMW , Ducati , Piaggio and others have in-house design studios dedicated to this purpose, while others such as Yamaha and KTM depend on specialised independent design studios . Due to the high importance of mechanical components or even exposed engines to motorcycle styling, almost always designers will have a greater sensitivity to and awareness of engineering than will typical car designers. In OEM situations, large teams of professional engineers and specialists will collaborate on each project development, allowing the designer to focus on the more intangible or subjective aspects of design, such as styling, human-machine interface psychology , and market and cultural relationships. In other matters such as pure mechanical ergonomics (such as seat height, handlebar placement, etc.), or basic layout (the location of major components, storage, etc.) there is usually considerable overlap between the designer and engineer. The designer will nominally approach each problem from a human interface, or "feel" or "irrational" point of view (example : "Does this material feel cold or warm, and is this feeling appropriate to this vehicle's target consumer?"), while the engineer will attack each problem with the "rational" or clinical approach of empirically weighing the cause and effect of each design decision against the project's technical and economic design targets (example : "Can this material be moulded into the designer's desired shape? Will that be too expensive to produce?") In OEM motorcycle design, the normal procedure of developing a new motorcycle involves the same steps as in other professional design disciplines : identifying a target consumer, researching them to identify benchmarks and project targets, then proposing concept directions in a written form known as a Design Brief or QFD . From this point, artwork is developed to visually communicate the designer's ideas. These are presented in 2Db drawing or illustrated form, from which a winning direction is down selected for further development. 
Once a satisfactory design is established on paper (the term paper is a generalization that can include traditional hand renderings, digital artwork or CAD drawings), then full scale modeling begins to realise the design in tangible 3D form. Often used as an interchangeable term with "design", styling is in fact just one component of the design process. Typically, styling is developed through sketches, renderings and illustrations then realised in 3D form using automotive styling clay , specialised industrial modeling foams such as Sibatool, Renshape or Epiwood, or in increasingly limited cases plaster or body filler. As the most subjective part of the design process, the various members of the development team must depend heavily on the judgment, skill and experience of the appointed designer to create an appropriate look. The most misunderstood element and the most dangerous to the success of a product, is the idea that team members should evaluate the design based on personal tastes or preferences. Industrial design is not an art form, but a focused creative expression using the scientific data and analysis in the Design Brief and QFD as ultimate guidelines. The target user, their needs and tastes should be reflected in the final design, not necessarily exclusively those of the design team. Of course, many complex variables such as the OEM brand identity, past successes and failures, and whimsical trends often skew or distort styling decisions. In instances where the factors are overwhelming, OEM's may err on the side of cautious conservative design. Because of the need to reduce development time and costs, the "styling" design model is usually developed in parallel with the engineering 3D design. While there is an increasing amount of digital design input in the modern OEM design process, nearly all major motorcycle manufacturers still rely on full scale clay models to render the master style model, then scan and import the styling surfaces into suitable 3D software packages (Alias, CATIA, ISEM Surf) for integration into the 3D engineering CAD platform (CATIA, ProEngineer, etc.). Once combined, the design team can virtually refine the motorcycle by optimising component assembly, checking for any undesirable interferences between parts, and predict and eliminate possible engineering problems. Typically, designers and engineers will have the greatest number of conflicts during this phase of development, as designers will fight to maintain the original styling and design of the clay model and artwork into the production vehicle, while the engineer will eliminate all problems in the most efficient manner possible. The success of the final product depends heavily on the level of cooperation between these often conflicting needs. In recent years, largely due to the popularity of television programs like Orange County Chopper and Biker Build-off , the building of one of a kind " chopper " or "cruiser" type motorcycles has become more mainstream, leading to a flourishing builder industry. As a whole, these vehicles are not designed in the professional sense, but rather crafted by hand by metal workers and artisans using traditional skills. The resulting vehicles tend to be very elaborate, expensive and difficult or impossible to reproduce in mass production, but are highly valued for the same reasons. 
Within custom motorcycle culture, certain names have become famous for their creations and have led to mainstream acceptance of previously unacceptable design solutions such as extreme ergonomics, totally rigid rear wheels without the benefit of suspension, minimal lighting and limited ground clearance for cornering. These design characteristics are purely emotional in nature, being led by styling and image rather than technical or performance considerations. Custom and "special" motorcycles are similar to the above but tend to be super sport type motorcycles, or at least high-performance based, using many special add-on parts, one-of-a-kind or limited-series frames, racing wheels and parts, or hand-made components to maximise performance. While modifying motorcycles is an activity as old as the motorcycle itself, the "special" or " streetfighter " culture began to flourish in the mid-1970s as a response to the myriad high-performance Japanese motorcycles then available, whose power far exceeded their handling. Individuals would choose premanufactured parts from catalogs or from other bikes and redesign their particular machine to suit their desires. In general this activity is limited to one-of-a-kind vehicles and, as with custom motorcycles, uses very little genuine engineering or design methodology, although some small-scale manufacturers exist who make limited runs of a given model. In some cases, these tiny specialists were successful enough to grow into full-scale OEM companies such as the Buell Motorcycle Company and Bimota of Italy .
https://en.wikipedia.org/wiki/Motorcycle_design
The MC68451 is a Motorola (now Freescale ) Memory Management Unit (MMU), which was primarily used in conjunction with the Motorola MC68010 microprocessor . The MC68451 supported a 16 MB address space and provided an MC68000 or an MC68010 with support for memory management and protection of memory against unauthorized access. The block size was variable, so it was usually used for segment-based memory management. It supported the mapping of up to 32 memory segments or pages of variable size from logical to physical addresses. To allow more segments or pages, the simultaneous use of multiple MC68451 MMUs was supported. [ 1 ] In combination with an MC68010 , the MC68451 permitted the realization of virtual memory . With the earlier MC68000 , this was not possible due to the way the MC68000 treated memory access errors, i.e. processor state could not always be properly restored after a page fault; two MC68000s would be required, with the main CPU pausing when it got a memory access error, and the other CPU servicing the page fault. [ 2 ] The limitation to 32 segment table entries per MMU made systems based on an MC68010 and an MC68451 slow, as they often had to modify the segment table due to its small size. Motorola made a single-board computer module that demonstrated the combination of 68010 and 68451 for applications requiring virtual memory. [ 3 ] H. Berthold AG used 12 MC68451 MMUs together with their UNOS variant vBertOS. Others (e.g. Sun Microsystems , Convergent Technologies ) used their own proprietary MMUs instead of the MC68451.
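The segment-based translation scheme described above can be illustrated with a short sketch. This is a simplified model for illustration only: the descriptor fields (base, limit, write-protect flag) are hypothetical and do not reproduce the MC68451's actual register layout or descriptor format; only the 32-descriptor limit is taken from the text above.

# Minimal model of segment-based logical-to-physical translation
# (illustrative only, not the MC68451's actual descriptor format).
from dataclasses import dataclass

@dataclass
class SegmentDescriptor:
    logical_base: int   # start of the segment in the logical address space
    physical_base: int  # start of the segment in physical memory
    size: int           # segment length in bytes (variable per segment)
    writable: bool      # simple protection flag

class SegmentedMMU:
    MAX_DESCRIPTORS = 32  # the MC68451 held 32 descriptors per chip

    def __init__(self, descriptors):
        if len(descriptors) > self.MAX_DESCRIPTORS:
            raise ValueError("descriptor table overflow; add another MMU")
        self.descriptors = descriptors

    def translate(self, logical_addr, write=False):
        for d in self.descriptors:
            offset = logical_addr - d.logical_base
            if 0 <= offset < d.size:
                if write and not d.writable:
                    raise PermissionError("write to protected segment")
                return d.physical_base + offset
        # No matching descriptor: the OS would take a fault and reload the
        # (small) segment table, which is what made 68010/68451 systems slow.
        raise LookupError("segment fault")

mmu = SegmentedMMU([SegmentDescriptor(0x10000, 0x80000, 0x4000, True)])
print(hex(mmu.translate(0x10010)))  # -> 0x80010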
https://en.wikipedia.org/wiki/Motorola_68451
The Motorola Minitor is a portable, analog, receive-only voice pager typically carried by personnel of civil defense organizations such as fire , rescue , and EMS services (both volunteer and career) to alert them to emergencies . The Minitor, slightly smaller than a pack of cigarettes, is carried on a person and usually left in selective call mode. When the unit is activated, the pager sounds a tone alert, followed by an announcement from a dispatcher alerting the user to a situation. After activation, the pager remains in monitor mode much like a scanner , and monitors transmissions on that channel until the unit is reset back into selective call mode either manually, or automatically after a set period of time, depending on programming. Before modern radio communications, it was difficult for emergency services such as volunteer fire departments to alert their members to an emergency, since the members were not based at the station. The earliest methods of sounding an alarm were typically to ring a bell either at the fire station or at the local church. As electricity became available, most fire departments used fire sirens or whistles to summon volunteers (many fire departments still use outdoor sirens and horns along with pagers to alert volunteers). Other methods included specialized phones placed inside the volunteer firefighter's home or business, or base radios or scanners. " Plectron " radio receivers were very popular, but were limited to 120 VAC or 12 VDC operation, restricting their use to a house or building or to vehicle mounting. There was a great need and desire for a portable radio small enough to be worn by a person and only activated when needed. Motorola answered this call in the 1970s and released the first Minitor pager. There are six versions of Minitor pagers. The first was the original Minitor, followed by the Minitor II (1992), Minitor III (1999), Minitor IV, and the Minitor V released in late 2005. The Minitor VI was released in early 2014. The Minitor III, IV, and V use the same basic design, while the original Minitor and Minitor II use their own rectangular proprietary case design. Similar voice pagers released by Motorola were the Keynote and Director pagers. They were essentially stripped-down versions of the Minitor and never gained widespread use, though the Keynote was much more common in Europe because it could decode 5/6-tone alert patterns in addition to the two-tone sequential format more popular in the United States. Although the Minitor is primarily used by emergency personnel, other organizations such as utilities and private contractors also use the pager. Unlike conventional alphanumeric pagers and cell phones, Minitors operate on an RF network that is generally restricted to a particular agency in a given geographical area. The Minitor is the most common voice pager used by emergency services in the United States. However, digital two-way pagers that can display alphanumeric characters, and can thereby overcome some of the limitations of voice-only pagers, are now starting to replace the Minitor in certain applications. Minitor pagers, depending on the model and application, can operate in the VHF Low Band , VHF High Band , and UHF frequency ranges. They are alerted using two-tone sequential Selective calling , generally following the Motorola Quick Call II standard. In other words, the pager will activate when a particular series of audible tones is sent over the frequency (commonly referred to as a "page") the pager is set to.
For example, if a Minitor is programmed on VHF frequency channel 155.295 MHz and set to alert on 879 Hz and 358.6 Hz, it will disregard any other tone sequences transmitted on that frequency, alerting only when the proper sequence has been received. The pager may be reset back into its selective call mode by pressing the reset button, or it can be programmed to reset back into selective call mode automatically after a predetermined amount of time, to conserve battery power. Older Minitor pagers (both the Minitor I and Minitor II series) have tone reeds or filters that are tuned to a specific audible tone frequency, and these must physically be replaced if the alert tones are changed. For two-tone sequential paging, there are two reeds: the first tone passes through the first reed, and the second tone passes through the second reed, thereby activating the pager. Beginning with the Minitor III series, these physical reeds or filters are no longer necessary, as the pagers feature all solid-state electronics, and various tone sequences can be programmed via computer software . Newer Minitor pagers can scan two channels by selecting that function via a rotary knob on the pager; in this mode, when using a Minitor III or IV, the user will hear all traffic, even without the correct tones being sent. If the activation tones are transmitted in this monitor mode, the pager alerts as normal. The Minitor V has the option to remain in either monitor mode or selective call mode when scanning two channels, while the Minitor III and IV can only remain in monitor mode when scanning two channels. The Minitor's operating range depends on the strength (" wattage ") of the paging transmitter. A repeater is often used to improve paging coverage, as it can be located for better range than the dispatch center where the page originates. Weather conditions, a low battery, and even atmospheric conditions can affect the Minitor's ability to receive transmissions. In fact, a remote transmitter hundreds or even thousands of miles away, belonging to a separate agency, can unknowingly activate a Minitor (and also block it) if atmospheric conditions let the signal propagate that far. This is commonly known as radio skip . The Minitor is a receive-only unit, not a transceiver , and thus cannot transmit. Note: most of the features below refer to the Minitor III and later; the original Minitor and Minitor II pagers may not have some of the listed features. The audible alarm on the Minitor lasts only for the duration of the second activation tone. If reception is poor, the pager may only sound a quick beep, and the user may not be alerted properly. This can be changed by editing the codeplug's "Alert Duration" setting from STD to Fixed, after which the user can set the alert duration longer than the second tone. The user must be cautious, however, as setting the alert tone duration too long may cover some of the voice information. Also, some units may have the volume knob set to control the sound output of the audible alert as well. The user may have the volume turned down to an undetectable level either by accident or by carelessness, thus missing the page. A factory option for "Fixed Alert" (the only option on the earlier Minitor I), however, lets the alert tone override the volume and sound at maximum volume regardless of the volume knob's position. It is possible to program the pager to always vibrate when an alert is received, giving the possibility of either a silent (vibrating) alert or audible and vibrating alerts.
(The Minitor I and II do not have vibrating capability as standard.) The vibrating motor in the newer (IV and V) Minitor pagers is quite strong, in order to be felt in varying conditions, such as when performing heavy work. It is not uncommon for the vibrating motor in a pager, placed in a charger overnight and left in vibrate mode, to "walk" the pager and charger off of a table or nightstand . Minitor pagers are powered by a battery which will eventually run down if not charged (a flashing red LED and an audible alarm warn of low battery power). As the Minitor is portable, its electronics are not as sensitive as those of set-top or base radios, and it is usually less able to pick up weak or distant signals.
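The two-tone sequential alerting described above can be sketched in software. The decoder below is purely illustrative: the block length, detection threshold, and tone timings are made-up example values and do not reproduce Motorola's Quick Call II timing or the Minitor's firmware; only the example tone pair (879 Hz then 358.6 Hz) comes from the text above.

# Illustrative two-tone sequential detector using the Goertzel algorithm.
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Return the signal power at target_hz using the Goertzel algorithm."""
    k = round(len(samples) * target_hz / sample_rate)
    w = 2.0 * math.pi * k / len(samples)
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def detect_two_tone(audio, sample_rate, tone_a, tone_b,
                    block_sec=0.05, threshold=1e3):
    """Return True if tone_a is heard and then tone_b, in that order."""
    block = int(block_sec * sample_rate)
    stage = 0  # 0 = waiting for tone A, 1 = waiting for tone B
    for i in range(0, len(audio) - block, block):
        chunk = audio[i:i + block]
        if stage == 0 and goertzel_power(chunk, sample_rate, tone_a) > threshold:
            stage = 1
        elif stage == 1 and goertzel_power(chunk, sample_rate, tone_b) > threshold:
            return True
    return False

# Example: synthesize 879 Hz followed by 358.6 Hz and check the decoder.
fs = 8000
tone = lambda hz, sec: [math.sin(2 * math.pi * hz * n / fs)
                        for n in range(int(sec * fs))]
audio = tone(879.0, 1.0) + tone(358.6, 3.0)
print(detect_two_tone(audio, fs, 879.0, 358.6))  # True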
https://en.wikipedia.org/wiki/Motorola_Minitor
The Motorola PageWriter 2000 was a two-way pager introduced in 1997. [ 1 ] Featuring the 68000 -based Motorola DragonBall processor, 1 MB of internal storage, a four-level grayscale screen, an IrDA transmitter/receiver, and a full QWERTY keyboard, the PageWriter combined both PDA and pager in one package. For wireless connectivity the PageWriter used SkyTel's ReFLEX paging network to send and receive messages to other pagers or to email addresses. The device shipped with a number of applications including messaging, contacts, calendar, and notepad, all written in its proprietary FLEXScript programming language. Additional applications could be purchased and downloaded to the PageWriter from a PC through its charging dock. That dock connected via an RS-232 serial port to the PC and an IR port to the PageWriter. The most notable of the add-on applications for the PageWriter was the Motorola-developed Email VClient. [ 2 ] Debuted to the public at Lotusphere '98 in January of that year, this application allowed users for the first time to access their corporate email account remotely from a portable wireless device. Users could read, respond to, and create emails remotely while appearing to be interacting from their desktop. This application created the market for remote control of desktop email from a wireless device, later popularized by the RIM BlackBerry .
https://en.wikipedia.org/wiki/Motorola_PageWriter_2000
Motorola Pageboy was a pager produced by Motorola . In the 1960s, when pagers were mainly used by medical professionals, the Pageboy was considered "cutting edge and compact", measuring 5.25 inches by 2.36 inches. [ 1 ] In 1967, low-frequency Pageboys operating in the 39-43 MHz band were priced at $180, while VHF units operating in the 151-159 MHz band cost $275 in the United States. [ 2 ]
https://en.wikipedia.org/wiki/Motorola_Pageboy
Motorola Pageboy II was a pager and the successor to the Motorola Pageboy . The Motorola pager was a small radio receiver that delivered a message either individually or to a wide group of those carrying the device. [ 1 ] The first successful consumer pager was Motorola's Pageboy I, which was introduced in 1974. This type (without display) could not store messages; however, it was small and portable and notified its wearer that a message had been sent. Motorola's Pageboy II was launched in 1975 for the United States and in 1976 for Europe, in various types: the Pb II 5-tone only (68–88 MHz / 146–174 MHz, US and Europe); the Pb II tone only for 5-tone (80.6–88 MHz / 146–174 MHz, US); the Pb II tone & voice radio for 2-tone signalling systems (68–88 MHz / 146–174 MHz, US); the Pb II A04FNC Series radio pager (450–512 MHz, Europe); the Pb II MAA04FNC1568AA (440–470 MHz, for five-tone sequential calling systems, Europe); and the Pb II radio pager H04BNC Series (406–420 MHz and 450–470 MHz, US and Europe). The variety and reliability made the system popular worldwide. The European system worked strictly in the 85–87 MHz and 150–170 MHz bands, and later also in the 450–512 MHz band, and was based on the ZVEI codes. ZVEI is the abbreviation of Zentralverband der Elektrotechnischen Industrie of West Germany (the Central Association of the Electrical Engineering Industry). This organization was responsible for a fixed industrial standard sequence of five selective call tones. The device was used to alert individuals or groups of persons within fire brigades or civil protection organizations. Though the device was even smaller than the Pageboy I, its speaker was pointed upward so that the alerting beeps followed by a voice message always came through. For its time, the device had outstanding receiver sensitivity, which was achieved by using IC circuitry. Even now, Motorola's Pageboy II is still in use. As many fire brigades switched over to digital equipment, their outdated Pageboy IIs found their way to industrial safety and medical organizations.
https://en.wikipedia.org/wiki/Motorola_Pageboy_II
Motorola Single Board Computers is Motorola 's production line of computer boards for embedded systems . [ 1 ] There are three different lines: mvme68k , mvmeppc and mvme88k . The first version of the board appeared in 1988. Motorola still makes these boards, and the most recent is the MVME3100. [ 2 ] NetBSD supports the MVME147, MVME162, MVME167, MVME172 and MVME177 boards from the mvme68k family, [ 3 ] as well as the MVME160x line of mvmeppc boards. [ 4 ] OpenBSD supported the MVME141, MVME147, MVME162, MVME165, MVME167, MVME172, MVME177, MVME180, MVME181, MVME187, MVME188, and MVME197 boards. Both the OpenBSD/mvme68k and OpenBSD/mvme88k ports were discontinued following the 5.5 release. [ 5 ] [ 6 ]
https://en.wikipedia.org/wiki/Motorola_Single_Board_Computers
Motorola X8 Mobile Computing System is a chipset from Motorola for Android-based smartphones, based on the Qualcomm Snapdragon S4 Pro System on a chip . [ 1 ] The CPU of the S4 Pro is an ARM-compatible dual-core Krait, and its GPU is a quad-core Adreno 320. In this chipset, Motorola added several low-power DSP chips to the S4 Pro to process audio and input from other sensors.
https://en.wikipedia.org/wiki/Motorola_X8_Mobile_Computing_System
Mott insulators are a class of materials that are expected to conduct electricity according to conventional band theories , but turn out to be insulators (particularly at low temperatures). These insulators fail to be correctly described by band theories of solids due to their strong electron –electron interactions, which are not considered in conventional band theory. A Mott transition is a transition from a metal to an insulator, driven by the strong interactions between electrons. [ 1 ] One of the simplest models that can capture the Mott transition is the Hubbard model . The band gap in a Mott insulator exists between bands of like character, such as 3d electron bands, whereas the band gap in charge-transfer insulators exists between anion and cation states. Although the band theory of solids had been very successful in describing various electrical properties of materials, in 1937 Jan Hendrik de Boer and Evert Johannes Willem Verwey pointed out that a variety of transition metal oxides predicted to be conductors by band theory are insulators. [ 2 ] With an odd number of electrons per unit cell, the valence band is only partially filled, so the Fermi level lies within the band. From band theory , this implies that such a material has to be a metal. This conclusion fails for several cases, e.g. CoO , one of the strongest insulators known. [ 1 ] Nevill Mott and Rudolf Peierls , also in 1937, predicted that this failure of band theory could be explained by including interactions between electrons. [ 3 ] In 1949, in particular, Mott proposed a model for NiO as an insulator, where conduction is based on the formula [ 4 ] (Ni^2+ O^2−)_2 → Ni^3+ O^2− + Ni^1+ O^2−. In this situation, the formation of an energy gap preventing conduction can be understood as the competition between the Coulomb potential U between 3 d electrons and the transfer integral t of 3 d electrons between neighboring atoms (the transfer integral is a part of the tight binding approximation). The total energy gap is then E_gap = U − 2zt, where z is the number of nearest-neighbor atoms. In general, Mott insulators occur when the repulsive Coulomb potential U is large enough to create an energy gap. One of the simplest theories of Mott insulators is the 1963 Hubbard model . The crossover from a metal to a Mott insulator as U is increased can be predicted within the so-called dynamical mean field theory . Mott reviewed the subject (with a good overview) in 1968. [ 5 ] The subject has been thoroughly reviewed in a comprehensive paper by Masatoshi Imada, Atsushi Fujimori, and Yoshinori Tokura . [ 6 ] A recent proposal of a "Griffiths-like phase close to the Mott transition" has been reported in the literature. [ 7 ] The Mott criterion describes the critical point of the metal–insulator transition . The criterion is {\displaystyle n^{-1/3}<Ca_{0}^{*},} where {\displaystyle n} is the electron density of the material and {\displaystyle a_{0}^{*}} is the effective Bohr radius. The constant {\displaystyle C} , according to various estimates, is 2.0, 2.78, 4.0, or 4.2. If the criterion is satisfied (i.e. if the density of electrons is sufficiently high), the material becomes conductive (a metal); otherwise it will be an insulator. [ 8 ] Mottism denotes the additional ingredient, aside from antiferromagnetic ordering, which is necessary to fully describe a Mott insulator. In other words, we might write: antiferromagnetic order + mottism = Mott insulator .
Thus, mottism accounts for all of the properties of Mott insulators that cannot be attributed simply to antiferromagnetism. There are a number of properties of Mott insulators, derived from both experimental and theoretical observations, which cannot be attributed to antiferromagnetic ordering and thus constitute mottism. A Mott transition is a metal–insulator transition in condensed matter . Due to electric field screening, the potential energy becomes much more sharply (exponentially) peaked around the equilibrium position of the atom, so that electrons become localized and can no longer conduct a current. It is named after the physicist Nevill Francis Mott . In a semiconductor at low temperatures, each 'site' ( atom or group of atoms) contains a certain number of electrons and is electrically neutral. For an electron to move away from a site, it requires a certain amount of energy, as the electron is normally pulled back toward the (now positively charged) site by Coulomb forces . If the temperature is high enough that {\displaystyle {\tfrac {1}{2}}k_{\mathrm {B} }T} of energy is available per site, the Boltzmann distribution predicts that a significant fraction of electrons will have enough energy to escape their site, leaving an electron hole behind and becoming conduction electrons that conduct current . The result is that at low temperatures a material is insulating, and at high temperatures the material conducts. While the conduction in an n- (p-) type doped semiconductor sets in at high temperatures because the conduction (valence) band is partially filled with electrons (holes), with the original band structure remaining unchanged, the situation is different in the case of the Mott transition, where the band structure itself changes. Mott argued that the transition must be sudden, occurring when the density of free electrons N and the Bohr radius {\displaystyle a_{0}} satisfy {\displaystyle N^{1/3}a_{0}\simeq 0.2} . Simply put, a Mott transition is a change in a material's behavior from insulating to metallic due to various factors. This transition is known to exist in various systems: mercury metal vapor-liquid, metal– NH 3 solutions, transition metal chalcogenides and transition metal oxides. [ 15 ] In the case of transition metal oxides, the material typically switches from being a good electrical insulator to a good electrical conductor. The insulator–metal transition can also be modified by changes in temperature, pressure or composition (doping). As observed by Nevill Francis Mott in his 1949 publication on Ni-oxide, the origin of this behavior is correlations between electrons and the close relationship this phenomenon has to magnetism. The physical origin of the Mott transition is the interplay between the Coulomb repulsion of electrons and their degree of localization (band width). Once the carrier density becomes too high (e.g. due to doping), the energy of the system can be lowered by localizing the formerly conducting electrons (band-width reduction), for instance under applied pressure, leading to the formation of a band gap and semiconducting or insulating behavior. In a semiconductor, the doping level also affects the Mott transition. It has been observed that higher dopant concentrations in a semiconductor create internal stresses that increase the free energy (acting as a change in pressure) of the system, [ 16 ] thus reducing the ionization energy.
The reduced barrier allows easier transfer by tunneling or by thermal emission from a donor to its adjacent donor. The effect is enhanced when pressure is applied, for the reason stated previously. When the transport of carriers overcomes a minimum activation energy , the semiconductor has undergone a Mott transition and become metallic. The Mott transition is usually first order, and involves discontinuous changes of physical properties. Theoretical studies of the Mott transition in the limit of large dimension find a first-order transition. However, in low dimensions and when the lattice geometry leads to frustration of magnetic ordering, it may be only weakly first order or even continuous (i.e. second order). Weakly first-order Mott transitions are seen in some quasi-two-dimensional organic materials. Continuous Mott transitions have been reported in semiconductor moiré materials. A theory of a continuous Mott transition is available if the Mott insulating phase is a quantum spin liquid with an emergent Fermi surface of neutral fermions. Mott insulators are of growing interest in advanced physics research, and are not yet fully understood. They have applications in thin-film magnetic heterostructures and in the strongly correlated phenomena of high-temperature superconductivity , for example. [ 17 ] [ 18 ] [ 19 ] [ 20 ] This kind of insulator can become a conductor by changing some parameter, which may be composition, pressure, strain, voltage, or magnetic field. The effect is known as a Mott transition and can be used to build smaller field-effect transistors , switches and memory devices than is possible with conventional materials. [ 21 ] [ 22 ] [ 23 ]
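As a rough numerical illustration of the Mott criterion stated earlier (n^{-1/3} < C a_0^*), the sketch below checks whether a given electron density exceeds the critical density for a chosen effective Bohr radius. The specific values (a 1 nm effective Bohr radius, C = 4.0, and the test densities) are arbitrary example inputs, not data from the article.

# Illustrative check of the Mott criterion n^(-1/3) < C * a0*, using
# arbitrary example values for the effective Bohr radius and C.
def is_metallic(n_per_m3, a0_star_m, C=4.0):
    """True if the electron density satisfies the Mott criterion."""
    mean_spacing = n_per_m3 ** (-1.0 / 3.0)   # average electron separation
    return mean_spacing < C * a0_star_m

a0_star = 1e-9                   # example effective Bohr radius: 1 nm
n_crit = (4.0 * a0_star) ** -3   # density at which the criterion is just met
print(f"critical density ~ {n_crit:.2e} electrons per m^3")
print(is_metallic(1e26, a0_star))  # above n_crit -> True (metallic)
print(is_metallic(1e24, a0_star))  # below n_crit -> False (insulating)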
https://en.wikipedia.org/wiki/Mott_insulator
In mathematics the Mott polynomials s n ( x ) are polynomials given by the exponential generating function exp(x(√(1−t²) − 1)/t) = Σ n s n ( x ) t^n / n !. They were introduced by Nevill Francis Mott , who applied them to a problem in the theory of electrons. [ 1 ] Because the factor in the exponent has a power series in terms of the Catalan numbers C k {\displaystyle C_{k}} , the coefficient of x k {\displaystyle x^{k}} in the polynomial can be expressed in terms of Catalan numbers. Differentiating the generating function yields a recurrence for the first derivative of the polynomials. The first few of them are listed in the OEIS (sequence A137378 in the OEIS ). The polynomials s n ( x ) form the associated Sheffer sequence for –2 t /(1–t 2 ). [ 2 ] An explicit expression for them can be given in terms of the generalized hypergeometric function 3 F 0 . [ 3 ]
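A short symbolic-computation sketch can recover the first few Mott polynomials from the generating function as stated above (assuming that form of the generating function); sympy is used here purely for illustration.

# Sketch: expand the exponential generating function of the Mott
# polynomials (as stated above) and read off s_0(x), ..., s_5(x).
import sympy as sp

x, t = sp.symbols('x t')
N = 6
egf = sp.exp(x * (sp.sqrt(1 - t**2) - 1) / t)
poly = sp.expand(sp.series(egf, t, 0, N).removeO())

for n in range(N):
    s_n = sp.expand(poly.coeff(t, n) * sp.factorial(n))
    print(f"s_{n}(x) =", s_n)
# Expected first values: s_0 = 1, s_1 = -x/2, s_2 = x**2/4, ...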
https://en.wikipedia.org/wiki/Mott_polynomials
In physics , Mott scattering is elastic electron scattering from nuclei. [ 1 ] It is a form of Coulomb scattering that requires treatment of spin-coupling . It is named after Nevill Francis Mott , who first developed the theory in 1929. Mott scattering is similar to Rutherford scattering , but electrons are used instead of alpha particles; as electrons do not interact via the strong interaction (only through the weak interaction and electromagnetism ), they can penetrate the atomic nucleus , giving valuable insight into the nuclear structure . Mott scattering derives from a 1929 paper by Nevill Mott which proposed a mechanism for experimentally verifying free electron spin quantization. Samuel Goudsmit and George Eugene Uhlenbeck had proposed electron spin and spin-orbit coupling to explain line splitting in atomic spectra in 1925, and by 1928 Paul Dirac had a relativistic quantum theory incorporating these ideas. As Mott details in the first part of his paper, direct observation of free electron spin was thought to be impossible due to issues with the uncertainty principle . Mott proposed double scattering of a high-energy beam of electrons from atomic nuclei. The first backscattering event would polarize the beam transverse to the scattering plane; the second scattering event above or below the plane would then have measurable intensity differences to the left or right, in amounts according to the degree of polarization. [ 2 ] : 1635 The predicted effect was finally observed experimentally in 1942. [ 3 ] [ 2 ] During the 1950s, Noah Sherman analyzed detailed relativistic electron scattering calculations of the intensity asymmetry in terms of a function later called the Sherman function . This concept became the basis for Mott electron polarimetry. [ 2 ] The first successful measurement of the electron g factor in 1954 [ 4 ] used this technique. [ 5 ] In Mott's original paper he proposed measuring the free electron spin with two scattering events, one that created polarization and one that measured the degree of polarization. The second half of this concept forms an electron polarimeter . The electron beam is directed at a gold foil. Gold has a high atomic number (Z), is non-reactive (does not form an oxide layer), and can be easily made into a thin film (reducing multiple scattering). Two detectors are placed at the same scattering angle to the left and right of the foil to count the number of scattered electrons. The measured asymmetry A , given by A = (N L − N R )/(N L + N R ) for left and right detector counts N L and N R , is proportional to the degree of spin polarization P according to A = SP , where S is the Sherman function . [ 6 ] : 81 Electron polarimeters can be used to study polarized electron-atom interactions, [ 7 ] the spin dependence of electrons scattered or emitted from magnetic surfaces, [ 6 ] measurements of parity violation in high-energy inelastic scattering from atoms, and tests of special relativity. [ 2 ] Qualitatively, Mott scattering can be analyzed with classical models. In the frame of the electron, the incoming nuclear charge represents a Coulomb scattering center together with a magnetic field circulating in a plane perpendicular to the motion of the incoming charge. The magnetic field interacts with the electron's magnetic dipole, pushing spin "up" electrons to the right and spin "down" electrons to the left. At backscattering angles the smaller spin-dependent forces can alter the cross section by a measurable amount.
[ 6 ] : 79 Mathematically the magnetic field, B , is related to the electric field of the nucleus, E , and the velocity of the scattering as: [ 2 ] : 1636 {\displaystyle \mathbf {B} ={\frac {1}{c}}\mathbf {v} \times \mathbf {E} } Writing E in terms of r , the separation of the scattering particles, {\displaystyle \mathbf {E} =(Ze/r^{3})\mathbf {r} ,} gives {\displaystyle \mathbf {B} ={\frac {Ze}{mcr^{3}}}\mathbf {L} } where {\displaystyle \mathbf {L} =m\mathbf {r} \times \mathbf {v} } is the orbital angular momentum of the electron about the nucleus. The electron's spin magnetic moment {\displaystyle \mathbf {\mu _{s}} } interacts with the magnetic field in proportion to its alignment with that field: {\displaystyle V_{\mathrm {SO} }=\mathbf {\mu _{s}} \cdot \mathbf {B} =\mathbf {\mu _{s}} \cdot {\frac {Ze}{2mcr^{3}}}\mathbf {L} } Finally, the electron's magnetic moment relates to its spin via {\displaystyle \mathbf {\mu _{s}} =-(ge/2mc)\mathbf {S} } : {\displaystyle V_{\mathrm {SO} }={\frac {Ze^{2}}{2m^{2}c^{2}r^{3}}}\mathbf {L} \cdot \mathbf {S} } This potential term works in addition to the Coulomb potential, altering the spin-averaged cross section I according to Sherman's spin asymmetry function S : {\displaystyle \sigma (\theta )=I(\theta )[1+S(\theta )\mathbf {P} \cdot \mathbf {n} ]} for polarization P and a unit vector n in the direction of the orbital angular momentum {\displaystyle \mathbf {L} } . The equation for the Mott cross section includes an inelastic scattering term to take into account the recoil of the target proton or nucleus. It also can be corrected for relativistic effects of high-energy electrons, and for their magnetic moment. [ 8 ] When an experimentally found diffraction pattern deviates from the mathematically derived Mott scattering, it gives clues as to the size and shape of an atomic nucleus. [ 9 ] [ 8 ] The reason is that the Mott cross section assumes only point-particle Coulombic and magnetic interactions between the incoming electrons and the target. When the target is a charged sphere rather than a point, additions to the Mott cross section equation ( form factor terms) can be used to probe the distribution of the charge inside the sphere. The Born approximation of the diffraction of a beam of electrons by atomic nuclei is an extension of Mott scattering. [ 10 ]
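As a small illustration of the polarimetry relation A = SP described above, the sketch below computes the measured asymmetry from left/right detector counts and infers the beam polarization. The count values and the Sherman function value used here are made-up example numbers, not measured data.

# Illustrative Mott-polarimeter arithmetic: asymmetry from detector
# counts, then polarization via A = S * P. All numbers are examples.
def asymmetry(n_left, n_right):
    """Left/right counting asymmetry A = (N_L - N_R) / (N_L + N_R)."""
    return (n_left - n_right) / (n_left + n_right)

def polarization(n_left, n_right, sherman):
    """Infer beam polarization P from A = S * P."""
    return asymmetry(n_left, n_right) / sherman

N_L, N_R = 10_400, 9_600      # example detector counts
S = -0.2                      # example effective Sherman function value
A = asymmetry(N_L, N_R)
print(f"A = {A:+.3f}, P = {polarization(N_L, N_R, S):+.3f}")
# A = +0.040, P = -0.200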
https://en.wikipedia.org/wiki/Mott_scattering
The Mott–Schottky equation relates the capacitance to the applied voltage across a semiconductor – electrolyte junction : [ 1 ] {\displaystyle {\frac {1}{C^{2}}}={\frac {2}{\epsilon \epsilon _{0}A^{2}eN_{d}}}(V-V_{fb}-{\frac {k_{B}T}{e}})} where {\displaystyle C} is the differential capacitance {\displaystyle {\frac {\partial {Q}}{\partial {V}}}} , {\displaystyle \epsilon } is the dielectric constant of the semiconductor, {\displaystyle \epsilon _{0}} is the permittivity of free space , {\displaystyle A} is the area such that the depletion region volume is {\displaystyle wA} , {\displaystyle e} is the elementary charge, {\displaystyle N_{d}} is the density of dopants, {\displaystyle V} is the applied potential, {\displaystyle V_{fb}} is the flat band potential , {\displaystyle k_{B}} is the Boltzmann constant , and T is the absolute temperature . This theory predicts that a Mott–Schottky plot will be linear. The doping density {\displaystyle N_{d}} can be derived from the slope of the plot (provided the area and dielectric constant are known). The flatband potential can be determined as well; absent the temperature term, the plot would cross the {\displaystyle V} -axis at the flatband potential. Under an applied potential {\displaystyle V} , the width of the depletion region is [ 2 ] {\displaystyle w=({\frac {2\epsilon \epsilon _{0}}{eN_{d}}}(V-V_{fb}))^{\frac {1}{2}}} Using the abrupt approximation , [ 2 ] all charge carriers except the ionized dopants have left the depletion region, so the charge density in the depletion region is {\displaystyle eN_{d}} , and the total charge of the depletion region, compensated by opposite charge nearby in the electrolyte, is {\displaystyle Q=eN_{d}Aw=eN_{d}A({\frac {2\epsilon \epsilon _{0}}{eN_{d}}}(V-V_{fb}))^{\frac {1}{2}}} Thus, the differential capacitance is {\displaystyle C={\frac {\partial {Q}}{\partial {V}}}=eN_{d}A{\frac {1}{2}}({\frac {2\epsilon \epsilon _{0}}{eN_{d}}})^{\frac {1}{2}}(V-V_{fb})^{-{\frac {1}{2}}}=A({\frac {eN_{d}\epsilon \epsilon _{0}}{2(V-V_{fb})}})^{\frac {1}{2}}} which is equivalent to the Mott–Schottky equation, save for the temperature term. In fact the temperature term arises from a more careful analysis, which takes statistical mechanics into account by abandoning the abrupt approximation and solving the Poisson–Boltzmann equation for the charge density in the depletion region. [ 2 ]
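The slope-based extraction of the doping density mentioned above can be sketched numerically. The electrode area, relative permittivity, and the example (V, 1/C²) data below are invented for illustration; a real analysis would fit measured impedance data.

# Sketch: extract doping density N_d from the slope of a Mott-Schottky
# plot (1/C^2 vs V). All input values are invented example numbers.
e    = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12      # vacuum permittivity, F/m
eps  = 10.0           # example relative permittivity of the semiconductor
A    = 1e-4           # example electrode area, m^2 (1 cm^2)

# Example data: applied potential (V) and corresponding 1/C^2 (F^-2)
V      = [0.2, 0.4, 0.6, 0.8, 1.0]
inv_C2 = [2.0e14, 4.0e14, 6.0e14, 8.0e14, 1.0e15]

# Least-squares slope of 1/C^2 against V
n = len(V)
mean_v, mean_y = sum(V) / n, sum(inv_C2) / n
slope = sum((v - mean_v) * (y - mean_y) for v, y in zip(V, inv_C2)) \
        / sum((v - mean_v) ** 2 for v in V)

# From 1/C^2 = 2/(eps*eps0*A^2*e*N_d) * (V - V_fb - kT/e):
N_d = 2.0 / (eps * eps0 * A ** 2 * e * slope)
print(f"slope = {slope:.3e} F^-2 V^-1, N_d = {N_d:.3e} m^-3")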
https://en.wikipedia.org/wiki/Mott–Schottky_equation
Motul S.A. is a global French company which manufactures, develops and distributes lubricants and other specialized products for engines ( motorcycles , cars and other vehicles) and for industry. Founded in 1853 in New York, the Swan & Finch company started its activity in the sector of high quality lubricants. As of 1920, it turned to the international markets by exporting some of its portfolio brands like Aerul, Textul, Motul. In 1932, Ernst Zaug negotiated the distribution in France of products of the Motul brand with Swan & Finch via his company Supra Penn . [ 1 ] In 1953, the Swan & Finch centenary was celebrated with the worldwide launch of Motul Century, which became the first multigrade oil on the European market. However, Swan & Finch suspended its activities in 1957. Supra Penn bought back all title deeds and patents pertaining to the Motul brand, which was renamed for the company's chief product, becoming Motul S.A. As a specialist in synthetic oils , Motul has become the partner of many manufacturers and sports teams for its technological developments in mechanical sports, car and motorcycle racing. Motul is present in many international competitions as official team supplier: MotoGP , Road racing , Trial , Enduro , Endurance , Superbike , Supercross , Rallycross , World Rally Championship , FIA GT , Le Mans 24 Hours , Spa 24 Hours , Le Mans Series , rally raid , Paris-Dakar , F3, etc. In 1977, Motul won its first Motorcycle World Champion title, in the Road Racing category, with Takazumi Katayama on a Yamaha 350.
https://en.wikipedia.org/wiki/Motul_(company)
Motus ( Latin for movement) is a network of radio receivers for tracking signals from transmitters attached to wild animals. Motus uses radio telemetry for real-time tracking. It was launched by Birds Canada in 2014 in the US and Canada. As of 2022 [update] , more than 1,500 receiver stations had been installed in 34 countries. [ 1 ] Most receivers are concentrated in the United States and Canada, where the network began. The network has spread rapidly because it provides important key data useful to researchers and conservationists, both nationally and internationally. The Motus transmitter's great advantage is its small size and weight. Transmitters weigh 0.2 to 2.6 grams (0.0071 to 0.0917 oz), [ 2 ] and can therefore be attached to all animals, even insects such as a bee or butterfly . [ 3 ] Once a researcher or organization receives state and federal permits, they only need to acquire the appropriate transmitters and attach them to their study objects. Current transmitters' range (depending on size) is up to 12 miles (20 kilometers). The long-used geolocators and GPS loggers are light and small but only store the desired data; they cannot wirelessly transmit the data. This means that researchers must recapture the transmitter-equipped animal to read the stored information, which can take a long time, and many times is unsuccessful. The transmitter is attached in a suitable way, depending on the animal to be tracked, either with a thread or an adhesive . After a certain time the glue and thread dissolve and the transmitter falls off, in the meantime having transmitted all the data to the receivers it passed. [ 2 ]
https://en.wikipedia.org/wiki/Motus_(wildlife_tracking_network)
The Motzkin–Taussky theorem is a result from operator and matrix theory about the representation of a sum of two bounded, linear operators (resp. matrices). The theorem was proven by Theodore Motzkin and Olga Taussky-Todd . [ 1 ] The theorem is used in perturbation theory , where e.g. operators of the form T = α A + β B are examined. Let {\displaystyle X} be a finite-dimensional complex vector space . Furthermore, let {\displaystyle A,B\in B(X)} be such that all linear combinations {\displaystyle T=\alpha A+\beta B} are diagonalizable for all {\displaystyle \alpha ,\beta \in \mathbb {C} } . Then all eigenvalues of {\displaystyle T} are of the form {\displaystyle \lambda =\alpha \lambda _{A}+\beta \lambda _{B}} (i.e. they are linear in {\displaystyle \alpha } and {\displaystyle \beta } ) and {\displaystyle \lambda _{A},\lambda _{B}} are independent of the choice of {\displaystyle \alpha ,\beta } . [ 2 ] Here {\displaystyle \lambda _{A}} stands for an eigenvalue of {\displaystyle A} .
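A quick numerical illustration of the statement (not a proof) is sketched below using a pair of simultaneously diagonalizable matrices, for which every linear combination is diagonalizable; the specific matrices and coefficients are arbitrary examples.

# Numerical illustration of eigenvalue linearity for a pair of
# simultaneously diagonalizable (here: diagonal) matrices.
import numpy as np

A = np.diag([1.0, 3.0])   # eigenvalues lambda_A in {1, 3}
B = np.diag([2.0, -1.0])  # eigenvalues lambda_B in {2, -1}

for alpha, beta in [(1.0, 0.5), (2.0, -3.0), (0.0, 4.0)]:
    T = alpha * A + beta * B
    eig_T = sorted(np.linalg.eigvals(T))
    predicted = sorted(alpha * np.diag(A) + beta * np.diag(B))
    # Eigenvalues of T are exactly alpha*lambda_A + beta*lambda_B,
    # pairing the eigenvalues of A and B consistently.
    assert np.allclose(eig_T, predicted)
    print(alpha, beta, eig_T)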
https://en.wikipedia.org/wiki/Motzkin–Taussky_theorem
Mound-building termites are a group of termite species that live in mounds which are made of a combination of soil, termite saliva and dung. These termites live in Africa , Australia and South America . The mounds sometimes have a diameter of 30 metres (98 ft). Most of the mounds are in well-drained areas. Termite mounds usually outlive the colonies themselves. If the inner tunnels of the nest are exposed it is usually dead. Sometimes other colonies, of the same or different species, occupy a mound after the original builders' deaths. [ 1 ] The structure of the mounds can be very complicated. Inside the mound is an extensive system of tunnels and conduits that serves as a ventilation system for the underground nest. In order to get good ventilation, the termites will construct several shafts leading down to the cellar located beneath the nest. The mound is built above the subterranean nest. The nest itself is a spheroidal structure consisting of numerous gallery chambers. They come in a wide variety of shapes and sizes. Some, like Odontotermes termites build open chimneys or vent holes into their mounds, while others build completely enclosed mounds like Macrotermes . The Amitermes (Magnetic termites) mounds are created tall, thin, wedge-shaped, usually oriented north-south. [ 1 ] The extensive system of tunnels and conduits have long been considered to help control climate inside the mound. The termite mound is able to regulate temperature, humidity and respiratory gas distribution. An early proposition suggested a thermosiphon mechanism. [ 2 ] The heat created due to the metabolism of termites imparts sufficient buoyancy to the nest air to push it up into the mound and eventually to the mound’s porous surface where heat and gases exchange with the atmosphere through the porous walls. The density of air near the surface rises due to heat exchange and is forced below the nest and eventually through the nest again. This model was proposed for mounds with capped chimneys and with no large vents constructed by the species Macrotermes natalensis . A similar model based on the Stack effect was proposed for mounds with open-chimneys. [ 3 ] The tall chimneys are exposed to higher wind velocities compared to openings at ground level due to surface boundary condition. Therefore, a Venturi flow draws fresh air into the mound through the openings at ground level which flows through the nest and finally out of the mound through the chimney. The flow is unidirectional in the stack effect model compared to the circulatory flow in the thermosiphon model. Odontotermes transvaalensis mound temperature is not regulated by ventilation within the mound. The tall chimneys rather induce flow due to the Venturi effect and are the primary facilitators of ventilation. [ 4 ] Research conducted on Macrotermes michaelseni mounds has shown that the primary role played by the mound is that of exchanging respiratory gases. The complex interaction between the mound and kinetic energy of turbulent winds are the driving forces for the colony’s gas exchange. [ 5 ] [ 6 ] But recent studies on the Macrotermes michaelseni mound with a better built custom sensor to measure airflow suggests that the air in the mound largely moves due to the convective flows induced by the diurnal oscillation of the external temperature. A secondary thermal gradient is generated due to partial exposure of east side of the mound to the sun before and west side of the mound after noon. 
Improved reliability of the sensor suggests that wind plays a secondary role relative to the dominant thermal mechanism in ventilation. Wind enhances the exchange of gases near the walls but does not induce significant average or transient flows within the mound. [ 7 ] Overall, a similar mechanism of ventilation and thermoregulation is observed in Macrotermes michaelseni and Odontotermes obesus mounds. [ 8 ] Workers , smallest in size, are the most numerous of the castes. They are all completely blind, wingless, and sexually immature. Their job is to feed and groom all of the dependent castes. They also dig tunnels, locate food and water, maintain colony atmospheric homeostasis, and build and repair the nest. The soldiers' job is to defend the colony from any unwanted animals. When the large soldiers attack they emit a drop of brown, corrosive salivary liquid which spreads between the open mandibles. When they bite, the liquid spreads over the opponent. The secretion is commonly stated to be toxic or undergoes coagulation with the air which renders it glue-like. Finally, there are the reproductives which include the king and the queen. The queen can sometimes grow up to six centimeters long while the lower classes are generally less than one centimeter. Vegetation on termite mounds usually differs highly from vegetation in the surrounding. [ 9 ] [ 10 ] In African savannas, Macrotermes mounds form 'islands' with high tree densities. This is usually attributed to the fact that due to the digging of termites and due to their decomposition of plant material, the mound soils are generally more fertile than other soil. On top of that, mound soils have been found to contain more water than their surroundings, a clear advantage for plant growth in savannas. [ 11 ] The high tree densities on termite mounds attract high densities of browsing herbivores, due to the high nutrient contents in foliage from trees growing on mounds, [ 12 ] [ 13 ] or perhaps due to the high quantities of food and shelter on mounds. [ 10 ] The caatinga ecoregion in northeast Brazil has about 200 million termite mounds spread over an area the size of Great Britain. [ 14 ] Some of the mounds are 3 m (10 ft) tall and 10 m (33 ft) wide, and they are spaced about 20 m (66 ft) apart. Underneath the mounds are networks of tunnels that required the excavation of 10 cubic kilometres (2.4 cu mi) of dirt. Scientists performed radioactive dating on 11 mounds. The youngest mound was 690 years old. The oldest was at least 3,820 years and possibly more than twice that. The mounds were built by Syntermes dirus termites, which are about half an inch long. Deforestation in the region helped to reveal the extent of the mounds to scientists. [ 15 ] One scientist stated that the mounds apparently represent "the world's most extensive bio-engineering effort by a single insect species". [ 16 ]
https://en.wikipedia.org/wiki/Mound-building_termites
A mound system is an engineered drain field for treating wastewater in places with limited access to multi-stage wastewater treatment systems. Mound systems are an alternative to the traditional rural septic system drain field. They are used in areas where septic systems are prone to failure from extremely permeable or impermeable soils, shallow soil cover over porous bedrock, and terrain that features a high water table. The mound system was designed in the 1930s by the North Dakota College of Agriculture [ 1 ] and was known as the Nodak Disposal System. In 1976, the University of Wisconsin studied the design of mound systems as part of the university's Waste Management Project. This project published the first ever design manual identifying the appropriate site conditions and design criteria for mounds. In 2000, a new manual was released. [ 1 ] Mound systems are used to help purify and transport water efficiently. Some soils are too permeable, allowing water to pass through them quickly, hindering purification effectiveness and allowing contamination to spread to nearby water sources or ecosystems. Areas of low soil permeability, such as areas with high water tables and limited soil cover over porous bedrock, can result in contaminated surface pooling. The mound system includes a septic tank , a dosing chamber, and a mound. Waste from homes is sent to the septic tank, where the solid portion sinks to the bottom of the tank. Effluent is sent to a second tank called a dosing chamber, from which it is distributed to the mound at a metered rate (in doses). Wastewater is partially treated as it moves through the mound sand. Final treatment and disposal occur in the soil beneath the mound. The mound system does not allow all the effluent to enter the mound at once, accordingly allowing it to clean the effluent more effectively and helping keep the system from failing. The absorption mound is built in layers. The layer depths are determined by the depth of the limiting layer of the soil, which may be a seasonal water table, bedrock , a fragipan , or glacial till . Standards created by Ohio State University state that 24 inches of soil should be above the limiting layer in the soil. A 24-inch layer of specifically sized sand is placed on top of the soil. The distribution pipes that are fed by the dosing chamber are placed on top of the sand in gravel. Then construction fabric and additional soil are placed on top of the gravel to help keep the pipes from freezing. The top layer of soil also allows the mound to be planted with grass or non-woody plants to control erosion. The primary cleaning and purification of waste liquids in a drain field is performed by a biofilm in the loose fill surrounding the perforated drain tile. If the soil permeability is too low, the liquid is not absorbed fast enough. If the soil permeability is too high, or the effluent is exposed to fractured bedrock, the wastewater reaches the water table before the biofilm has time to purify the water , contaminating the aquifer. In either situation, the mound system provides an ideal habitat for the biofilm and has the correct permeability to assure slow absorption of effluent into the mound before it exits as purified water into the surrounding environment. When installing a mound system, the soil in the area where the mound is to be placed must not be compacted or disturbed. Any trees in the mound area are cut away, and the roots and stumps are retained.
The surface of the area for the mound is then roughened with a chisel plow. This prepares the area for the sand. Work is done from upslope of the mound area so that the ground downslope of the mound does not get compacted. Tyler tables are used to help determine the mound size. Time dosing is another important aspect of the functioning of the mound system. Short frequent doses of effluent onto sand filters with orifices that are closely spaced helps to improve effluent quality. By contrast, demand dosing releases large amounts of effluent at once, which rapidly passes through the sand. This does not give the biota the proper amount of time to clean the effluent.
https://en.wikipedia.org/wiki/Mound_system
Moungi Bawendi ( Arabic : منجي الباوندي ; born 15 March 1961) [ 2 ] [ 3 ] is an American –Tunisian–French chemist . [ 4 ] [ 5 ] He is currently the Lester Wolfe Professor at the Massachusetts Institute of Technology . [ 6 ] [ 7 ] Bawendi is known for his advances in the chemical production of high-quality quantum dots . [ 8 ] For this work, he was awarded the Nobel Prize in Chemistry in 2023. Moungi Bawendi was born in Paris , France, the son of Tunisian mathematician Mohammed Salah Baouendi . After periods living in France and Tunisia , Bawendi and his family migrated to the United States when he was a child. [ 9 ] They lived in West Lafayette, Indiana , as Salah worked in the math department at Purdue University . [ 9 ] Bawendi graduated from West Lafayette Junior-Senior High School in 1978. [ 10 ] [ 11 ] Bawendi received both an A.B. in 1982 [ 12 ] and an A.M. in 1983 from Harvard University . [ 13 ] He earned a Ph.D. in chemistry in 1988 from the University of Chicago , [ 8 ] under the supervision of Karl Freed and Takeshi Oka . [ 14 ] With Freed, Bawendi worked on theoretical polymer physics , [ 15 ] and with Oka, Bawendi worked on experiments on hot-bands of H 3 + , which played a role in deciphering the emission spectrum of Jupiter observed in 1989. [ 16 ] During his graduate studies, Oka recommended Bawendi to a summer program in Bell Labs , where Louis E. Brus introduced Bawendi to the research on quantum dots. [ 15 ] Upon graduation, Bawendi went to work with Brus at Bell Labs as a postdoctoral researcher . [ 17 ] Bawendi joined Massachusetts Institute of Technology (MIT) in 1990 and became professor in 1996. [ 17 ] Bawendi was one of the most cited chemists of the decade from 2000 to 2010. [ 18 ] He is a leading figure in the research and development of quantum dots . [ 8 ] Quantum dots are tiny semiconducting crystals whose nanoscale size gives them unique optical and electronic properties. [ 19 ] A major challenge in quantum dot research was to find ways to create high quality quantum dots that are stable and uniform. Bawendi is recognized for his work in developing standardized methods for quantum dot synthesis. In 1993, Bawendi, and his PhD students David J. Norris and Christopher B. Murray , [ 20 ] reported on a hot-injection synthesis method for producing reproducible quantum dots with well-defined size and with high optical quality. This breakthrough in chemical production methods made it possible to “tune” quantum dots according to size, and achieve predictable properties as a result. It gave scientists much greater control over the material, and made it possible to achieve precise and reproducible results. [ 21 ] [ 22 ] The method opened the door to the development of large-scale technological applications of quantum dots in a wide range of areas. [ 21 ] [ 22 ] Quantum dots are now used in light-emitting diodes (LEDs), photovoltaics (solar cells), [ 23 ] photodetectors , photoconductors , lasers, [ 24 ] biomedical imaging , biosensing and other applications. [ 23 ] Bawendi was granted the Sloan Research Fellowship in 1994. [ 25 ] He won the 1997 Nobel Signature Award for Graduate Education in Chemistry of American Chemical Society (ACS). [ 26 ] In 2001, he received the Sackler Prize in Physical Chemistry of Advanced Materials. [ 27 ] In 2006, he was awarded the Ernest Orlando Lawrence Award . 
[ 28 ] He was elected a member of the American Association for the Advancement of Science in 2003, [ 29 ] of the American Academy of Arts and Sciences in 2004, [ 30 ] and of the National Academy of Sciences in 2007. [ 31 ] During the National Meeting on March 23, 2010, Bawendi received the ACS Award in Colloid and Surface Chemistry . [ 32 ] [ 26 ] He also received the 2011 SEMI Award for North America for quantum dot research. [ 33 ] Bawendi was selected as a Clarivate Citation Laureate in Chemistry in 2020, jointly with Christopher B. Murray and Hyeon Taeghwan , "for synthesis of nanocrystals with precise attributes for a wide range of applications in physical, biological, and medical systems". [ 34 ] In 2023, Bawendi was awarded the Nobel Prize in Chemistry jointly with Louis E. Brus and Alexey Ekimov "for the discovery and synthesis of quantum dots ". [ 4 ] He was also awarded the Medal of Honor of Tunis University . [ 35 ] Bawendi is married to journalist Rachel Zimmerman, widow of another MIT professor, Seth J. Teller . [ 38 ]
https://en.wikipedia.org/wiki/Moungi_Bawendi
In mathematics , the mountain climbing problem is a mathematical problem that considers a two-dimensional mountain range (represented as a continuous function ), and asks whether it is possible for two mountain climbers starting at sea level on the left and right sides of the mountain to meet at the summit, while maintaining equal altitudes at all times. It has been shown that when the mountain range has only a finite number of peaks and valleys, it is always possible to coordinate the climbers' movements, but this does not necessarily hold when it has an infinite number of peaks and valleys. This problem was named and posed in this form by James V. Whittaker ( 1966 ), but its history goes back to Tatsuo Homma ( 1952 ), who solved a version of it. The problem has been repeatedly rediscovered and solved independently in different contexts by a number of people (see references below). Since the 1990s, the problem was shown to be connected to the weak Fréchet distance of curves in the plane, [ 1 ] various planar motion planning problems in computational geometry , [ 2 ] the inscribed square problem , [ 3 ] semigroup of polynomials , [ 4 ] etc. The problem was popularized in the article by Goodman, Pach & Yap (1989) , which received the Mathematical Association of America 's Lester R. Ford Award in 1990. [ 5 ] The problem can be rephrased as asking whether, for a given pair of continuous functions f , g {\displaystyle f,g} (corresponding to rescaled versions of the left and right faces of the mountain) where: Is it possible to find another pair of functions x 1 , x 2 {\displaystyle x_{1},x_{2}} (representing the climbers' horizontal positions at time t {\displaystyle t} ) satisfying: Such that the compositions f ∘ x 1 {\displaystyle f\circ x_{1}} and g ∘ x 2 {\displaystyle g\circ x_{2}} (which represent the climbers' altitudes at time t {\displaystyle t} ) are identical? When f , g {\displaystyle f,g} have only a finite number of peaks and valleys ( local maxima and local minima ) it is always possible to coordinate the climbers' movements. [ 6 ] This can be shown by drawing out a sort of game tree : an undirected graph G {\displaystyle G} with one vertex labeled ( x 1 , x 2 ) {\displaystyle (x_{1},x_{2})} whenever f ( x 1 ) = g ( x 2 ) {\displaystyle f(x_{1})=g(x_{2})} and either f ( x 1 ) {\displaystyle f(x_{1})} or g ( x 2 ) {\displaystyle g(x_{2})} is a local maximum or minimum. Two vertices will be connected by an edge if and only if one node is immediately reachable from the other; the degree of a vertex will be greater than one only when the climbers have a non-trivial choice to make from that position. According to the handshaking lemma , every connected component of an undirected graph has an even number of odd-degree vertices. Since the only odd-degree vertices in all of G {\displaystyle G} are ( 0 , 0 ) {\displaystyle (0,0)} and ( 1 , 1 ) {\displaystyle (1,1)} , these two vertices must belong to the same connected component. That is, G {\displaystyle G} must contain a path from ( 0 , 0 ) {\displaystyle (0,0)} to ( 1 , 1 ) {\displaystyle (1,1)} . That path tells how to coordinate the climbers' movement to the summit. It has been observed that for a mountain with n peaks and valleys the length of this path (roughly corresponding to the number of times one or the other climber must "backtrack") can be as large as quadratic in n . [ 1 ] This technique breaks down when f , g {\displaystyle f,g} have an infinite number of local extrema. 
In that case, G would not be a finite graph, so the handshaking lemma would not apply: (0, 0) and (1, 1) might be connected, but only by a path with an infinite number of vertices, possibly taking the climbers "infinite time" to traverse. The following result is due to Huneke (1969): if, in addition to the boundary conditions above, neither f nor g is constant on any subinterval, then it is still possible to coordinate the climbers' movements. On the other hand, this result cannot be extended to all continuous functions: if f has constant height over an interval while g has infinitely many oscillations passing through the same height, then the first climber may be forced to go back and forth over that interval infinitely many times, making the path to the summit infinitely long. [ 6 ] James V. Whittaker (1966) gives a concrete example involving x·sin(1/x). [ 6 ]
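The finite case lends itself to a direct computation. The following is a minimal sketch (not taken from the cited references) of the graph-and-path idea described above, restricted to piecewise-linear mountains: every pair of monotone pieces of f and g that share a height range contributes a matched segment of position pairs, the endpoints of those segments are the graph's vertices, and a breadth-first search from (0, 0) to the summit pair recovers a coordinated climb. The function name, the representation of each mountain as a list of breakpoint heights, and the assumption that no piece is constant are illustrative choices.

from collections import deque

def coordinate_climb(f_heights, g_heights):
    """Sketch: coordinate two climbers on piecewise-linear mountains.

    f_heights, g_heights: heights at successive breakpoints (positions are
    measured in breakpoint units), starting at 0 (sea level) and ending at
    the common summit height. Assumes no piece is constant. Returns a list
    of (x1, x2) positions, or None if no coordinated climb is found.
    """
    def pos(heights, i, h):
        # Position inside piece [i, i + 1] at which the height equals h.
        a, b = heights[i], heights[i + 1]
        return i + (h - a) / (b - a)

    edges = {}
    def add_edge(u, v):
        edges.setdefault(u, set()).add(v)
        edges.setdefault(v, set()).add(u)

    # One matched segment per pair of pieces with overlapping height ranges.
    for i in range(len(f_heights) - 1):
        for j in range(len(g_heights) - 1):
            lo = max(min(f_heights[i], f_heights[i + 1]),
                     min(g_heights[j], g_heights[j + 1]))
            hi = min(max(f_heights[i], f_heights[i + 1]),
                     max(g_heights[j], g_heights[j + 1]))
            if lo <= hi:
                u = (pos(f_heights, i, lo), pos(g_heights, j, lo))
                v = (pos(f_heights, i, hi), pos(g_heights, j, hi))
                add_edge(u, v)

    start = (0.0, 0.0)
    goal = (float(len(f_heights) - 1), float(len(g_heights) - 1))
    parent, queue = {start: None}, deque([start])
    while queue:
        u = queue.popleft()
        if u == goal:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in edges.get(u, ()):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None

# Example: both mountains rise to height 1 with an intermediate dip.
print(coordinate_climb([0, 0.6, 0.2, 1], [0, 0.4, 0.1, 1]))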
https://en.wikipedia.org/wiki/Mountain_climbing_problem
Mountain research, traditionally also known as orology [ 1 ] (from Greek oros ὄρος for 'mountain' and logos λόγος), is a field of research that concentrates regionally on the part of the Earth's surface covered by mountain environments. Different approaches have been developed to define mountainous areas. While some use an altitudinal difference of 300 m inside an area to define that zone as mountainous, [ 2 ] others require differences of 1000 m or more, [ 3 ] depending on the area's latitude. Additionally, some include steepness in the definition of mountain regions, hence excluding high plateaus (e.g. the Andean Altiplano or the Tibetan Plateau), zones often regarded as mountainous. A more pragmatic but useful definition has been proposed by the Italian statistics office ISTAT, which classifies municipalities as mountainous on the basis of altitude criteria. The United Nations Environment Programme has produced a map [ 5 ] [ 6 ] of mountain areas worldwide using a combination of criteria. In a broader sense, mountain research is considered any research in mountain regions: for instance disciplinary studies on Himalayan plants, Andean rocks, Alpine cities, or Carpathian people. It is comparable to research that concentrates on the Arctic and Antarctic (polar research) or coasts (coastal research). In a narrower sense, mountain research focuses on mountain regions, their description, and the explanation of human–environment interaction in these areas (positive) as well as their sustainable development (normative). So defined, mountain research is situated at the nexus of the natural sciences, social sciences and humanities. Drawing on Alexander von Humboldt's work in the Andean realm, mountain geography and ecology are considered core areas of study; nevertheless, important contributions come from anthropology, geology, economics, history and spatial planning. One definition of mountain science was given by the Romanian professor Radu Rey in 1985 in the book "Mountain Civilization", as follows: Mountain science can be defined as representing the study of socio-economic, human, technical and technological phenomena regarding man-nature relationships in mountain systems (specific to mountainous regions), aiming at the conceptualization and promotion of ways (methods, techniques, variants) of their optimized development. It integrates knowledge (disciplines) from fields such as agriculture, animal husbandry, human ecology, geoecology and pedology, biology, demography and ethnography, human and animal psychology, architecture, construction and building materials, elements of forestry and geology, beekeeping, fish farming, economics, the organization and operation of the private mountain household, as well as other specific systems, mountain systematization, mountain design, specific ergonomics, small/large industries and crafts, tourism and agro-tourism, hygienic-sanitary education, material natural resources (minerals, plants, animals) and energy (unconventional), legislation and legal relations, specifically mountain, human resources - tradition and culture. [ 7 ] [ 8 ] In sum, narrowly defined mountain research applies an interdisciplinary and integrative regional approach. Slaymaker summarizes: The science of montology [...]
starts with recognition of the importance of verticality, a distinctive feature of mountain regions, which imposes vertical control of the production system; marginality, which results from low agricultural potential; centrality of mechanisms of power and violence; population growth and expansion; and religion and myth, expressed in mountains as sacred places. Montology emphasises restoration ecology to include re-vegetation, rehabilitation, reclamation and recovery of the lost landscape form and function [...]. Landscape ecological effects are arranged along altitudinal belts and form the basis for a more comprehensive understanding of critical habitats for conservation and development. This approach has an underlying assumption of climax communities each fitting into a narrow altitudinal band. [ 9 ] Mountain research or orology (not to be confused with orography) is sometimes denominated montology. This term stems from Carl Troll's mountain geoecology (geoecology being Troll's English translation of the German Landschaftsökologie) and appeared at a meeting in Cambridge, Massachusetts in 1977. [ 10 ] Since then, scholars such as Jack D. Ives, Bruno Messerli and Robert E. Rhoades have advocated the development of montology as interdisciplinary mountain research. The term montology was included in the Oxford English Dictionary in 2002. [ 11 ] It defines montology as: The study of mountains; specifically the interdisciplinary study of the physical, chemical, geological, and biological aspects of mountain regions; (also) the study of the lifestyles and economic concerns of people living in these regions. [ 12 ] On the one hand, the term montology has received criticism for mixing Latin (mōns, pl. montēs) and Greek (logos) roots. On the other hand, such mixing is also the well-accepted case in several established disciplines such as glaciology or sociology. Strictly, the English form derived from the English root "mount" would be "mountology", whereas in Latin and German the root is "mont"; accordingly, the form in French, German, and Romanian is "Montologie", and in Italian, Spanish and Portuguese "Montologia". The following list includes peer-reviewed journals that have a focus on mountain research and are open to both the natural and the social sciences:
https://en.wikipedia.org/wiki/Mountain_research
Mouse Genome Informatics ( MGI ) is a free, online database and bioinformatics resource hosted by The Jackson Laboratory , with funding from the National Human Genome Research Institute (NHGRI), the National Cancer Institute (NCI), and the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD). [ 1 ] MGI provides access to data on the genetics, genomics and biology of the laboratory mouse to facilitate the study of human health and disease. [ 2 ] [ 3 ] The database integrates multiple projects, with the two largest contributions coming from the Mouse Genome Database (MGD) and the Mouse Gene Expression Database (GXD). [ 4 ] As of 2018, MGI contains data curated from over 230,000 publications. [ 5 ] The MGI resource was first published online in 1994 [ 5 ] and is a collection of data, tools, and analyses created and tailored for use with the laboratory mouse , a widely used model organism . It is "the authoritative source of official names for mouse genes, alleles, and strains", which follow the guidelines established by the International Committee on Standardized Genetic Nomenclature for Mice. [ 6 ] The history and focus of the Jackson Laboratory's research and production facilities generate extensive knowledge that researchers can mine to advance their own work, and a dedicated worldwide community of mouse researchers enhances and contributes to this knowledge as well. The resource is used both by researchers who work with the mouse as a model organism and by researchers interested in genes that share homology with mouse genes. Various mouse research support resources, including animal collections and free colony management software, are also available at the MGI site. [ 7 ] The Mouse Genome Database collects and curates comprehensive phenotype and functional annotations for mouse genes and alleles . [ 8 ] This is an NHGRI -funded project which contributes to the Mouse Genome Informatics database. The Gene Expression Database is a community resource of mouse developmental expression information. [ 9 ] MGI evolved from a project funded by the National Center for Human Genome Research in 1989 to combine the databases of several Jackson Laboratory scientists and create a tool for visualizing data on the mouse genome. [ 10 ] The result of that project, led by Joseph H. Nadeau, Larry E. Mobraaten, and Janan T. Eppig, was called the "Encyclopedia of the Mouse Genome" and was distributed via floppy disk semi-annually to around 300 scientists around the world. [ 10 ] In 1992, that group joined with the team responsible for developing the "Genomic Database for Mouse", led by Muriel T. Davisson and Thomas H. Roderick, to start the "Mouse Genome Informatics" project. [ 10 ] That project resulted in the first online release of the "Mouse Genome Database" in 1994. [ 10 ]
https://en.wikipedia.org/wiki/Mouse_Genome_Informatics
The mouse ear swelling test is a toxicological test that aims to mimic human skin reactions to chemicals. [ 1 ] It avoids post-mortem examination of tested animals. [ 2 ]
https://en.wikipedia.org/wiki/Mouse_ear_swelling_test
In computer networking , a mouse flow is a flow that is short in total bytes, set up by TCP (or another protocol) and measured over a network link. A mouse is a flow with fewer than C packets, while an elephant flow is a flow with at least C packets. The constant C is left as a degree of freedom in the analysis and is chosen depending on the target application. [ 1 ]
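As a minimal illustration of the definition (my own sketch, not taken from the cited source), the following classifies observed flows by packet count against an application-chosen threshold C; the flow representation and the function name are assumptions made for the example.

from collections import Counter

def classify_flows(packets, c=100):
    """Classify flows as 'mouse' or 'elephant' by packet count.

    packets: iterable of flow identifiers, one entry per observed packet,
             e.g. (src_ip, dst_ip, src_port, dst_port, protocol) tuples.
    c: threshold number of packets (application-dependent).
    """
    counts = Counter(packets)
    return {flow: ("elephant" if n >= c else "mouse")
            for flow, n in counts.items()}

# Example: two flows observed on a link, with C = 3.
observed = ["A", "A", "A", "A", "B"]
print(classify_flows(observed, c=3))   # {'A': 'elephant', 'B': 'mouse'}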
https://en.wikipedia.org/wiki/Mouse_flow
A mouse unit ( MU ) is the amount of toxin required to kill a 20 g mouse in 15 minutes via intraperitoneal injection . [ 1 ] [ 2 ] Mouse units are measured by a mouse bioassay , and are commonly used in the shellfish industry when describing relative toxicities for assessing food safety levels for human consumption. [ 3 ]
https://en.wikipedia.org/wiki/Mouse_unit
Mouse warping is a facility provided by some window managers that automatically positions the pointer to the centre of the current application window [ clarification needed ] when the application is made current [ clarification needed ] .
https://en.wikipedia.org/wiki/Mouse_warping
Mouza Sulaiman Mohamed Al-Wardi ( Arabic : موزة سليمان محمد الوردي ) is a curator and historian from Oman who is Director of the Collections Department at the National Museum . She specialises in the history of silverworking in the Oman region. Al-Wardi is the Director of the Collections Department at the National Museum of Oman . [ 1 ] [ 2 ] She was appointed to the role in 2019. [ 3 ] She joined the museum in 2009 as Chief Curator during its construction and development. [ 1 ] [ 3 ] She is a specialist in historical metal-working, in particular in bronze and silver. [ 4 ] She is an expert on the history of silversmithing, particularly in Oman, which has a historic tradition of women working as silversmiths, particularly in southern Oman in Dhofar . [ 5 ] [ 6 ] [ 7 ] Her work in this area includes a collaborative research project with Aude Mongiatti ( British Museum ), Fahmida Suleman ( Royal Ontario Museum ) and Marcia Dorr. [ 8 ] As part of this work she has studied the coinage that circulated in Oman and, in particular, its re-use in coin pendants. [ 9 ] Al-Wardi has also worked on the Diba Hoard , an assemblage of stone vessels and bronze metalwork dating to 1200–300 BCE. [ 10 ] In 2013 she was a participant in the British Museum 's International Training Programme. [ 3 ] She previously studied for a BA degree in Heritage and Cultural Studies at Curtin University (formerly known as the Western Australian Institute of Technology) in Perth. [ 11 ] [ 3 ] [ 12 ] After graduation she worked at the Museum of Omani Heritage , developing training programmes and working on their heritage craft programme. [ 3 ] In 2018 she was part of a UNESCO -hosted regional event in Kuwait, which focussed on the development of national museums. [ 13 ]
https://en.wikipedia.org/wiki/Mouza_Sulaiman_Mohamed_Al-Wardi
The movable cellular automaton (MCA) method is a method in computational solid mechanics based on a discrete concept. It combines the advantages of the classical cellular automaton and discrete element methods. One important advantage [ 1 ] of the MCA method is that it permits direct simulation of material fracture, including damage generation, crack propagation, fragmentation, and mass mixing. These processes are difficult to simulate with continuum mechanics methods (for example, the finite element method or the finite difference method), so new concepts such as peridynamics are required. The discrete element method is very effective for simulating granular materials, but the mutual forces among movable cellular automata also make it possible to simulate the behavior of solids. As the cell size of the automaton approaches zero, MCA behavior approaches that of classical continuum mechanics methods. [ 2 ] The MCA method was developed in the group of S.G. Psakhie. [ 3 ] In the framework of the MCA approach, the object being modeled is considered as a set of interacting elements/automata. The dynamics of the set of automata are defined by their mutual forces and by rules for their relationships. This system exists and operates in time and space; its evolution in time and space is governed by the equations of motion. The mutual forces and the rules for inter-element relationships are defined by the automaton response function, which has to be specified for each automaton. Because the automata are mobile, the following new parameters of cellular automata have to be taken into consideration: R_i – the radius-vector of the automaton; V_i – the velocity of the automaton; ω_i – the rotation velocity of the automaton; θ_i – the rotation vector of the automaton; m_i – the mass of the automaton; J_i – the moment of inertia of the automaton. The new concept of the MCA method is based on introducing the state of a pair of automata (the relation of an interacting pair of automata) in addition to the conventional state of a separate automaton. Note that this definition allows a move from the static net concept to the concept of neighbours; as a result, the automata can change their neighbours by switching the states (relationships) of the pairs. The introduction of this new type of state leads to a new parameter used as a criterion for switching relationships: the automaton overlap parameter h_ij. The relationship of two cellular automata is thus characterised by the value of their overlap. The initial structure is formed by setting up certain relationships among each pair of neighbouring elements. In contrast to the classical cellular automaton method, in the MCA method not only a single automaton but also the relationship of a pair of automata can be switched. In accordance with the bistable automaton concept there are two types of pair state (relationship). The changing of the state of pair relationships is controlled by the relative movements of the automata, and the medium formed by such pairs can be considered a bistable medium.
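To illustrate the pair-state idea described above, here is a small, hedged sketch of overlap-controlled switching of pair relationships. It is not the MCA response function or the method's actual switching criterion: the state labels "linked" and "unlinked", the simple geometric overlap rule, the threshold, and the data layout are all assumptions made only to show how a pair state can be driven by the relative positions of two automata.

import math

def update_pair_states(automata, pairs, link_threshold=0.0):
    """Illustrative sketch of overlap-controlled switching of pair states.

    automata: dict id -> {"pos": (x, y), "radius": r}
    pairs: dict (i, j) -> current state string
    The geometric overlap h_ij = r_i + r_j - |R_i - R_j| and the two
    state labels are assumptions for illustration, not the actual MCA
    response function or fracture criterion.
    """
    new_states = {}
    for (i, j), state in pairs.items():
        xi, yi = automata[i]["pos"]
        xj, yj = automata[j]["pos"]
        dist = math.hypot(xj - xi, yj - yi)
        overlap = automata[i]["radius"] + automata[j]["radius"] - dist
        # A pair switches state when its overlap crosses the threshold.
        new_states[(i, j)] = "linked" if overlap >= link_threshold else "unlinked"
    return new_states

# Example: two touching automata stay linked; one that has moved away unlinks.
autos = {0: {"pos": (0.0, 0.0), "radius": 1.0},
         1: {"pos": (1.9, 0.0), "radius": 1.0},
         2: {"pos": (5.0, 0.0), "radius": 1.0}}
print(update_pair_states(autos, {(0, 1): "linked", (1, 2): "linked"}))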
The evolution of an MCA medium is described by equations of motion for translation. Here m^i is the mass of automaton i, p^ij is the central force acting between automata i and j, C(ij, ik) is a coefficient associated with transferring the h parameter from pair ij to pair ik, and ψ(α_ij,ik) is the angle between directions ij and ik. Because of the finite size of the movable automata, rotation effects also have to be taken into account, and analogous equations of motion for rotation can be written. Here Θ_ij is the angle of relative rotation (a switching parameter like h_ij for translation), q_ij is the distance from the center of automaton i to its contact point with automaton j (the moment arm), τ_ij is the pair tangential interaction, and S(ij, ik) is a coefficient associated with transferring the Θ parameter from one pair to another (similar to C(ij, ik) in the equation for translation). These equations are completely analogous to the equations of motion of the many-particle approach. The dimensionless deformation parameter for translation of the automata pair ij can be expressed in terms of the time step Δt and the relative velocity V_n^ij. Rotation of the pair of automata can be treated by analogy with these translation relationships. The parameter ε_ij is used as a measure of the deformation of automaton i under its interaction with automaton j, where q_ij is the distance from the center of automaton i to its contact point with automaton j and R_i = d_i/2 (d_i being the size of automaton i). As an example, a titanium specimen under cyclic loading (tension–compression) has been considered. Because of the mobility of each automaton, the MCA method makes it possible to take a number of effects into account directly. Using boundary conditions of different types (fixed, elastic, viscous-elastic, etc.) it is possible to imitate different properties of the surrounding medium containing the simulated system. It is possible to model different modes of mechanical loading (tension, compression, shear strain, etc.) by setting up additional conditions at the boundaries.
https://en.wikipedia.org/wiki/Movable_cellular_automaton
A movable scaffolding system (MSS) is a special-purpose self-launching form used in bridge construction, specifically for prestressed concrete bridges with segments or spans that are cast in place . The movable scaffolding system supports the formwork while the concrete cures; once the segment is complete, the scaffold and forms are moved to the end of the new segment and another segment is poured. While superficially similar, movable scaffolding systems should not be confused with launching gantry machines, which are also used in segmental bridge construction. Both feature long girders spanning multiple bridge spans which move with and temporarily support the work, but launching gantry machines are used to lift and support precast bridge segments and bridge girders, while movable scaffolding systems are used for cast-in-place construction. An MSS is generally used instead of a launching gantry to minimize the number of joints, since cast-in-place segments are typically longer than precast segments. [ 1 ] Once several bridge piers are complete, support brackets are attached to adjacent piers and the main parallel girders of the MSS are lifted into place to support the scaffold and concrete forms. Jacks are used to raise the girders and forms, and the concrete is poured for the segment (or span) after rebar is placed. After the concrete has cured and the tendons have been tensioned, the jacks are lowered and the MSS girders are launched to bridge the next span. This process is repeated until the bridge is complete. Both overhead (forms suspended from support girder(s) above bridge deck level) and underslung (forms supported by support girder(s) below bridge deck level) MSS are available. [ 2 ] [ 3 ] MSS construction was developed in the 1960s in Europe; [ 4 ] the first bridge built with an MSS was the Kettiger Hangbrücke [ de ] in Germany, completed in 1959. [ 5 ] The first bridge constructed with an MSS in California was the Long Beach International Gateway in Long Beach, which replaced the Gerald Desmond Bridge and was completed in 2020. [ 6 ]
https://en.wikipedia.org/wiki/Movable_scaffolding_system
The move-to-front (MTF) transform is an encoding of data (typically a stream of bytes ) designed to improve the performance of entropy encoding techniques of compression . When efficiently implemented, it is fast enough that its benefits usually justify including it as an extra step in a data compression algorithm. This algorithm was first published by Boris Ryabko under the name of "book stack" in 1980. [ 1 ] Subsequently, it was rediscovered by J. L. Bentley et al. in 1986, [ 2 ] as attested in the explanatory note. [ 3 ] The main idea is that each symbol in the data is replaced by its index in a stack of "recently used symbols". For example, long sequences of identical symbols are replaced by as many zeroes, whereas when a symbol that has not been used in a long time appears, it is replaced with a large number. At the end the data is thus transformed into a sequence of integers; if the data exhibits a lot of local correlations, then these integers tend to be small. Let us give a precise description. Assume for simplicity that the symbols in the data are bytes . Each byte value is encoded by its index in a list of bytes, which changes over the course of the algorithm. The list is initially in order by byte value (0, 1, 2, 3, ..., 255). Therefore, the first byte is always encoded by its own value. However, after encoding a byte, that value is moved to the front of the list before continuing to the next byte. An example will shed some light on how the transform works. Imagine that, instead of bytes, we are encoding values in a–z. We wish to transform the following sequence: By convention, the list is initially (abcdefghijklmnopqrstuvwxyz). The first letter in the sequence is b, which appears at index 1 (the list is indexed from 0 to 25). We put a 1 in the output stream: The b moves to the front of the list, producing (bacdefghijklmnopqrstuvwxyz). The next letter is a, which now appears at index 1. So we add a 1 to the output stream. We have: and we move the letter a back to the top of the list. Continuing this way, we find that the sequence is encoded by: It is easy to see that the transform is reversible. Simply maintain the same list and decode by replacing each index in the encoded stream with the letter at that index in the list. Note the difference from the encoding method: the index in the list is used directly instead of looking up each value for its index. That is, you start again with (abcdefghijklmnopqrstuvwxyz). You take the "1" of the encoded block and look it up in the list, which results in "b". Then move the "b" to the front, which results in (bacdef...). Then take the next "1", look it up in the list, which results in "a"; move the "a" to the front; and so on. Details of implementation are important for performance, particularly for decoding. For encoding, no clear advantage is gained by using a linked list , so using an array to store the list is acceptable, with worst-case performance O( nk ), where n is the length of the data to be encoded and k is the number of values (generally a constant for a given implementation). The typical performance is better because frequently used symbols are more likely to be at the front and will produce earlier hits. This is also the idea behind a move-to-front self-organizing list . However, for decoding, we can use specialized data structures to greatly improve performance. [ example needed ] This is a possible implementation of the move-to-front algorithm in Python .
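The original code listing is not reproduced here; the following is a minimal sketch of such an implementation. The function names mtf_encode and mtf_decode, and the use of "bananaaa" as an input, are illustrative choices rather than details of the original listing.

import string

def mtf_encode(sequence, symbols):
    """Encode: replace each symbol by its current index in the list,
    then move that symbol to the front of the list."""
    stack = list(symbols)
    output = []
    for sym in sequence:
        index = stack.index(sym)
        output.append(index)
        stack.insert(0, stack.pop(index))   # move the used symbol to the front
    return output

def mtf_decode(indices, symbols):
    """Decode: look up each index in the same initial list, emit the symbol
    found there, then move that symbol to the front."""
    stack = list(symbols)
    output = []
    for index in indices:
        sym = stack.pop(index)
        output.append(sym)
        stack.insert(0, sym)
    return "".join(output)

# Illustrative round trip over the alphabet a-z (indices 0..25).
alphabet = string.ascii_lowercase
encoded = mtf_encode("bananaaa", alphabet)
print(encoded)                          # [1, 1, 13, 1, 1, 1, 0, 0]
print(mtf_decode(encoded, alphabet))    # bananaaa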
In this example we can see the MTF code taking advantage of the three repetitive i's in the input word. The common dictionary here, however, is less than ideal, since it is initialized with the more commonly used ASCII printable characters placed after little-used control codes, against the MTF code's design intent of keeping what is commonly used at the front. If one rotates the dictionary to put the more-used characters in earlier places, a better encoding can be obtained: The MTF transform takes advantage of local correlation of frequencies to reduce the entropy of a message. [ clarification needed ] Indeed, recently used letters stay towards the front of the list; if use of letters exhibits local correlations, this will result in a large number of small numbers such as "0"s and "1"s in the output. However, not all data exhibits this type of local correlation, and for some messages the MTF transform may actually increase the entropy. An important use of the MTF transform is in Burrows–Wheeler transform based compression. The Burrows–Wheeler transform is very good at producing a sequence that exhibits local frequency correlation from text and certain other special classes of data. Compression benefits greatly from following up the Burrows–Wheeler transform with an MTF transform before the final entropy-encoding step. As an example, imagine we wish to compress Hamlet's soliloquy ( To be, or not to be... ). We can calculate the size of this message to be 7033 bits. Naively, we might try to apply the MTF transform directly. The result is a message with 7807 bits (higher than the original). The reason is that English text does not in general exhibit a high level of local frequency correlation. However, if we first apply the Burrows–Wheeler transform and then the MTF transform, we get a message with 6187 bits. Note that the Burrows–Wheeler transform does not decrease the entropy of the message; it only reorders the bytes in a way that makes the MTF transform more effective. One problem with the basic MTF transform is that it makes the same changes for any character, regardless of frequency, which can result in diminished compression, as characters that occur rarely may push frequent characters to higher values. Various alterations and alternatives have been developed for this reason. One common change is to make it so that characters above a certain point can only be moved to a certain threshold. Another is to use an algorithm that maintains a count of each character's local frequency and uses these values to choose the characters' order at any point. Many of these transforms still reserve zero for repeat characters, since these are often the most common in data after the Burrows–Wheeler transform.
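To make the pipeline concrete, here is a small sketch of my own (not taken from the article) that chains a naive Burrows–Wheeler transform with the mtf_encode function sketched earlier. The example text, the sentinel character, and the sorted-rotations construction are illustrative simplifications; a practical BWT implementation would use suffix arrays rather than materialising every rotation.

def bwt(text, sentinel="\x03"):
    """Naive Burrows-Wheeler transform: last column of the sorted rotations
    of the text with a unique end-of-string sentinel appended."""
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

text = "to be, or not to be, that is the question"
symbols = sorted(set(text + "\x03"))   # shared symbol list for both encodings

direct = mtf_encode(text + "\x03", symbols)
piped = mtf_encode(bwt(text), symbols)

# After the BWT, equal characters tend to cluster together, so the MTF output
# of the piped version typically contains more zeroes and small indices.
print(sum(v <= 1 for v in direct), "small values without BWT")
print(sum(v <= 1 for v in piped), "small values with BWT")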
https://en.wikipedia.org/wiki/Move-to-front_transform