Dataset columns:
id: int64 (580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (3 to 51.8k)
42,480,494
https://en.wikipedia.org/wiki/En-ring
In mathematics, an E_n-algebra in a symmetric monoidal infinity category C consists of the following data:

An object A(U) for any open subset U of R^n homeomorphic to an n-disk.
A multiplication map μ: A(U_1) ⊗ … ⊗ A(U_m) → A(V) for any disjoint open disks U_1, …, U_m contained in some open disk V,

subject to the requirements that the multiplication maps are compatible with composition, and that μ is an equivalence if m = 1. An equivalent definition is that A is an algebra in C over the little n-disks operad.

Examples

An E_n-algebra in vector spaces over a field is a unital associative algebra if n = 1, and a unital commutative associative algebra if n ≥ 2.
An E_n-algebra in categories is a monoidal category if n = 1, a braided monoidal category if n = 2, and a symmetric monoidal category if n ≥ 3.
If Λ is a commutative ring, then the singular chain complex of an n-fold loop space defines an E_n-algebra in the infinity category of chain complexes of Λ-modules.

See also
Categorical ring
Highly structured ring spectrum

References
http://www.math.harvard.edu/~lurie/282ynotes/LectureXXII-En.pdf
http://www.math.harvard.edu/~lurie/282ynotes/LectureXXIII-Koszul.pdf

Higher category theory
Homotopy theory
En-ring
Mathematics
279
51,165,385
https://en.wikipedia.org/wiki/TCP-seq
Translation complex profile sequencing (TCP-seq) is a molecular biology method for obtaining snapshots of the momentary distribution of protein synthesis complexes along messenger RNA (mRNA) chains.

Application

Expression of the genetic code in all life forms consists of two major processes: the synthesis of copies of the genetic code recorded in DNA in the form of mRNA (transcription), and protein synthesis itself (translation), whereby the code copies in mRNA are decoded into the amino acid sequences of the respective proteins. Both transcription and translation are highly regulated processes that control essentially everything that happens in living cells (and, consequently, in multicellular organisms). Control of translation is especially important in eukaryotic cells, where it forms part of post-transcriptional regulatory networks of gene expression. This additional functionality is reflected in the increased complexity of the translation process, making it difficult to investigate. Yet details of when and which mRNA is translated, and of the mechanisms responsible for this control, are key to understanding normal and pathological cell function. TCP-seq can be used to obtain this information.

Principles

With the advent of high-throughput DNA and RNA sequencing methods (such as Illumina sequencing), it became possible to efficiently analyse the nucleotide sequences of large numbers of relatively short DNA and RNA fragments. The sequences of these fragments can be superimposed to reconstruct the source. Alternatively, if the source sequence is already known, the fragments can be located within it ("mapped") and their individual numbers counted. Thus, if an initial stage exists in which the fragments are differentially present or selected ("enriched"), this approach can be used to quantitatively describe that stage over even a very large number or length of input sequences, most usually encompassing the entire DNA or RNA of the cell.

TCP-seq is based on these capabilities of high-throughput RNA sequencing and further uses the nucleic acid protection phenomenon. The protection manifests as resistance to depolymerisation or modification of stretches of nucleic acids (particularly RNA) that are tightly bound to, or engulfed by, other biomolecules, which thus leave their "footprints" on the nucleic acid strand. These "footprint" fragments therefore mark the locations on the nucleic acid chain where the interactions occur. By sequencing the fragments and mapping them back to the source sequence, it is possible to precisely identify the locations and counts of these intermolecular contacts.

In the case of TCP-seq, ribosomes and ribosomal subunits engaged in interaction with mRNA are first rapidly crosslinked to it with formaldehyde, to preserve the existing state of the interactions (a "snapshot" of the distribution) and to block any possible non-equilibrium processes. The crosslinking can be performed directly in, but is not restricted to, live cells. The RNA is then partially degraded (e.g. with ribonuclease) so that only the fragments protected by the ribosomes or ribosomal subunits remain. The protected fragments are then purified according to the sedimentation dynamics of the attached ribosomes or ribosomal subunits, de-crosslinked, sequenced, and mapped to the source transcriptome, giving the original locations of the translation complexes on the mRNA.
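The "map and count" idea described under Principles can be illustrated with a small sketch. This is not part of the TCP-seq protocol itself: a real analysis would use a dedicated read aligner (such as Bowtie or STAR) against a full transcriptome, and the sequences and function below are invented for illustration only.

```python
# Toy illustration of the "map and count" idea described above:
# locate short protected fragments ("footprints") in a known
# reference transcript by exact substring search, and accumulate
# per-position coverage. Sequences and names here are invented.

def footprint_coverage(transcript: str, footprints: list[str]) -> list[int]:
    """Count, for each transcript position, how many footprints cover it."""
    coverage = [0] * len(transcript)
    for frag in footprints:
        start = transcript.find(frag)          # naive exact "mapping"
        while start != -1:
            for pos in range(start, start + len(frag)):
                coverage[pos] += 1
            start = transcript.find(frag, start + 1)
    return coverage

transcript = "AUGGCUACGUUAGCCGAUAAGCUAGGCUAA"
footprints = ["GCUACGUUAGC", "GAUAAGCUA", "GCUACGUUAGC"]
cov = footprint_coverage(transcript, footprints)
print(cov)  # coverage peaks mark where "translation complexes" sat in this toy
```

In a real experiment the counting is done transcriptome-wide, and the peak positions are interpreted as the locations of ribosomes or ribosomal subunits at the moment of fixation.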
TCP-seq merges several elements typical of other transcriptome-wide analyses of its kind. In particular, polysome profiling and ribosome (translation) profiling are also used to identify mRNAs involved in polysome formation and the locations of elongating ribosomes over the coding regions of transcripts, respectively. These methods, however, do not use chemical stabilisation of translation complexes and purification of the covalently bound intermediates from live cells. TCP-seq can thus be considered more a functional equivalent of ChIP-seq and similar methods for investigating momentary interactions on DNA, redesigned to be applicable to translation.

Advantages and disadvantages

The advantages of the method include: a uniquely wide field of view (because translation complexes of any type, including scanning small ribosomal subunits, are captured for the first time); a potentially more natural representation of complex dynamics (because all, and not only selected, translation processes are arrested by formaldehyde fixation); and possibly more faithful and/or sensitive detection of the locations of translation complexes (as covalent fixation prevents detachment of the fragments from the ribosomes or their subunits).

The disadvantages include: higher overall complexity of the experimental procedure (due to the required initial isolation of translated mRNA and the preparative sedimentation used to separate ribosomes and ribosomal subunits); higher contamination of the useful sequencing read depth with undesired fragments of ribosomal RNA (a consequence of the wide size-selection window used for the protected RNA fragments); and a pre-requirement to optimise the formaldehyde fixation procedure for each new cell or sample type (as optimal fixation timings depend strongly on sample morphology, and both over- and under-fixation will compromise the results).

Development

The method is under continued development. It was applied to investigate translation dynamics in live yeast cells, and it extends, rather than simply combines, the capabilities of previous techniques. The only other transcriptome-wide method for mapping ribosome positions over mRNA with nucleotide precision is ribosome (translation) profiling. However, it captures the positions of elongating ribosomes only, and the most dynamic and functionally important intermediates of translation at the initiation stage are not detected. TCP-seq was designed to target these blind spots specifically. It can provide essentially the same level of detail for the elongation phase as ribosome (translation) profiling, but it also records initiation, termination and recycling intermediates of protein synthesis (and indeed any other possible translation complexes, as long as the ribosome or its subunits contact and protect the mRNA) that previously remained out of reach. TCP-seq therefore provides a single approach for complete insight into the translation process of a biological sample. This aspect of the method can be expected to be developed further, as the dynamics of ribosomal scanning on mRNA during translation initiation is largely unknown across most of life. The current dataset containing TCP-seq data for translation initiation is available for the yeast Saccharomyces cerevisiae, and it is likely to be extended to other organisms in the future.

Molecular biology techniques
Biochemistry methods
Molecular biology
TCP-seq
Chemistry,Biology
1,313
6,988,866
https://en.wikipedia.org/wiki/Directed%20percolation
In statistical physics, directed percolation (DP) refers to a class of models that mimic filtering of fluids through porous materials along a given direction, due to the effect of gravity. Varying the microscopic connectivity of the pores, these models display a phase transition from a macroscopically permeable (percolating) to an impermeable (non-percolating) state. Directed percolation is also used as a simple model for epidemic spreading, with a transition between survival and extinction of the disease depending on the infection rate. More generally, the term directed percolation stands for a universality class of continuous phase transitions which are characterized by the same type of collective behavior on large scales. Directed percolation is probably the simplest universality class of transitions out of thermal equilibrium.

Lattice models

One of the simplest realizations of DP is bond directed percolation. This model is a directed variant of ordinary (isotropic) percolation and can be introduced as follows. Consider a tilted square lattice with bonds connecting neighboring sites. The bonds are permeable (open) with probability p and impermeable (closed) otherwise. The sites and bonds may be interpreted as holes and randomly distributed channels of a porous medium. In isotropic percolation, a spreading agent (e.g. water) introduced at a particular site percolates along open bonds, generating a cluster of wet sites. By contrast, in directed percolation the spreading agent can pass open bonds only along a preferred direction in space, and the resulting cluster is directed in space.

As a dynamical process

Interpreting the preferred direction as a temporal degree of freedom, directed percolation can be regarded as a stochastic process that evolves in time. In a minimal, two-parameter model that includes bond and site DP as special cases, a one-dimensional chain of sites evolves in discrete time t, which can be viewed as a second dimension, and all sites are updated in parallel. Activating a certain site (called the initial seed) at time t = 0, the resulting cluster can be constructed row by row. The corresponding number of active sites N(t) varies as time evolves.
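The dynamical-process picture above lends itself to a short simulation. The following sketch of (1+1)-dimensional bond directed percolation is illustrative only: the lattice width, seed placement, and printed diagnostics are choices of this sketch, and the critical probability quoted in the comment (p_c ≈ 0.6447 for bond DP on the tilted square lattice) comes from standard numerical studies rather than from the text above.

```python
# Minimal sketch of (1+1)-dimensional bond directed percolation:
# a row of sites evolves in discrete time, and each active site tries
# to wet its two downstream neighbours through independently open bonds.
import random

def bond_dp(width, steps, p, seed=None):
    """Return N(t), the number of active sites at each time step."""
    rng = random.Random(seed)
    active = [False] * width
    active[width // 2] = True            # single initial seed at t = 0
    history = [1]
    for _ in range(steps):
        nxt = [False] * width
        for j, a in enumerate(active):
            if not a:
                continue
            # the two outgoing bonds of the tilted square lattice,
            # each open independently with probability p
            if rng.random() < p:
                nxt[j] = True
            if rng.random() < p and j + 1 < width:
                nxt[j + 1] = True
        active = nxt
        history.append(sum(active))
    return history

# Below the critical point (p_c ≈ 0.6447 for this model) clusters die out;
# above it they survive with finite probability.
for p in (0.5, 0.7):
    print(p, bond_dp(width=201, steps=100, p=p, seed=42)[-1])
```

Running the sketch at several values of p shows the survival/extinction transition that defines the DP universality class.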
Universal scaling behavior

The DP universality class is characterized by a certain set of critical exponents. These exponents depend on the spatial dimension d. Above the so-called upper critical dimension d_c = 4 they are given by their mean-field values, while in dimensions d < 4 they have been estimated numerically.

Other examples

In two dimensions, the percolation of water through a thin tissue (such as toilet paper) has the same mathematical underpinnings as the flow of electricity through two-dimensional random networks of resistors. In chemistry, chromatography can be understood with similar models. The propagation of a tear or rip in a sheet of paper, in a sheet of metal, or even the formation of a crack in ceramic bears broad mathematical resemblance to the flow of electricity through a random network of electrical fuses. Above a certain critical point, the electrical flow will cause a fuse to pop, possibly leading to a cascade of failures, resembling the propagation of a crack or tear. The study of percolation helps indicate how the flow of electricity will redistribute itself in the fuse network, thus modeling which fuses are most likely to pop next, how fast they will pop, and in what direction the crack may curve. Examples can be found not only in physical phenomena, but also in biology, neuroscience, ecology (e.g. evolution), and economics (e.g. diffusion of innovation). Percolation can be considered a branch of the study of dynamical systems or statistical mechanics. In particular, percolation networks exhibit a phase change around a critical threshold.

Experimental realizations

In spite of vast success in the theoretical and numerical study of DP, obtaining convincing experimental evidence has proved challenging. In 1999, an experiment on flowing sand on an inclined plane was identified as a physical realization of DP. In 2007, critical behavior of DP was finally found in the electrohydrodynamic convection of a liquid crystal, where a complete set of static and dynamic critical exponents and universal scaling functions of DP were measured in the transition to spatiotemporal intermittency between two turbulent states.

See also
Percolation threshold
Ziff–Gulari–Barshad model
Percolation critical exponents

Literature
L. Canet: "Processus de réaction-diffusion : une approche par le groupe de renormalisation non perturbatif" (Reaction-diffusion processes: a non-perturbative renormalization group approach), PhD thesis, available online.
Muhammad Sahimi: Applications of Percolation Theory. Taylor & Francis, 1994.
Geoffrey Grimmett: Percolation (2nd ed.). Springer Verlag, 1999.

Percolation theory
Critical phenomena
Directed percolation
Physics,Chemistry,Materials_science,Mathematics
1,024
6,432,722
https://en.wikipedia.org/wiki/Photon%20polarization
Photon polarization is the quantum mechanical description of the classical polarized sinusoidal plane electromagnetic wave. An individual photon can be described as having right or left circular polarization, or a superposition of the two. Equivalently, a photon can be described as having horizontal or vertical linear polarization, or a superposition of the two.

The description of photon polarization contains many of the physical concepts and much of the mathematical machinery of more involved quantum descriptions, such as the quantum mechanics of an electron in a potential well. Polarization is an example of a qubit degree of freedom, which forms a fundamental basis for an understanding of more complicated quantum phenomena. Much of the mathematical machinery of quantum mechanics, such as state vectors, probability amplitudes, unitary operators, and Hermitian operators, emerges naturally from the classical Maxwell's equations in the description. The quantum polarization state vector for the photon, for instance, is identical with the Jones vector, usually used to describe the polarization of a classical wave. Unitary operators emerge from the classical requirement of the conservation of energy of a classical wave propagating through lossless media that alter the polarization state of the wave. Hermitian operators then follow for infinitesimal transformations of a classical polarization state.

Many of the implications of the mathematical machinery are easily verified experimentally. In fact, many of the experiments can be performed with polaroid sunglass lenses. The connection with quantum mechanics is made through the identification of a minimum packet size, called a photon, for energy in the electromagnetic field. The identification is based on the theories of Planck and the interpretation of those theories by Einstein. The correspondence principle then allows the identification of momentum and angular momentum (called spin), as well as energy, with the photon.

Polarization of classical electromagnetic waves

Polarization states

Linear polarization

The wave is linearly polarized (or plane polarized) when the phase angles α_x and α_y are equal,

α_x = α_y = α.

This represents a wave with phase α, polarized at an angle θ with respect to the x axis. In this case the Jones vector can be written with a single phase:

|ψ⟩ = (cos θ, sin θ) e^{iα}.

The state vectors for linear polarization in x or y are special cases of this state vector. If unit vectors are defined such that

|x⟩ = (1, 0) and |y⟩ = (0, 1),

then the linearly polarized state can be written in the "x–y basis" as

|ψ⟩ = (cos θ |x⟩ + sin θ |y⟩) e^{iα}.

Circular polarization

If the phase angles α_x and α_y differ by exactly π/2 and the x amplitude equals the y amplitude, the wave is circularly polarized. The Jones vector then becomes

|ψ⟩ = (1/√2)(1, ±i) e^{iα_x},

where the plus sign indicates left circular polarization and the minus sign indicates right circular polarization. In the case of circular polarization, the electric field vector of constant magnitude rotates in the x–y plane. If unit vectors are defined such that

|R⟩ = (1/√2)(1, −i) and |L⟩ = (1/√2)(1, i),

then an arbitrary polarization state can be written in the "R–L basis" as

|ψ⟩ = ψ_R |R⟩ + ψ_L |L⟩,

where ψ_R = ⟨R|ψ⟩ and ψ_L = ⟨L|ψ⟩. We can see that

|ψ_R|² + |ψ_L|² = 1.
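The two bases just introduced can be checked numerically. The following sketch assumes the sign conventions stated above (plus sign in the second component for left circular polarization); the variable names are illustrative.

```python
# Numerical sketch of the Jones vectors described above: the x-y basis,
# the R-L (circular) basis, and the amplitudes of a linear state in each.
import numpy as np

x = np.array([1, 0], dtype=complex)
y = np.array([0, 1], dtype=complex)
R = np.array([1, -1j], dtype=complex) / np.sqrt(2)
L = np.array([1, +1j], dtype=complex) / np.sqrt(2)

theta = np.pi / 6                     # linear polarization at 30 degrees
psi = np.cos(theta) * x + np.sin(theta) * y

psi_R = np.vdot(R, psi)               # <R|psi>  (vdot conjugates first arg)
psi_L = np.vdot(L, psi)               # <L|psi>
print(abs(psi_R)**2 + abs(psi_L)**2)  # 1.0: normalization holds
print(abs(psi_R)**2, abs(psi_L)**2)   # equal: a linear state is an equal R/L mix
```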
Elliptical polarization

The general case in which the electric field rotates in the x–y plane and has variable magnitude is called elliptical polarization. The state vector is given by

|ψ⟩ = (ψ_x, ψ_y) = (cos θ e^{iα_x}, sin θ e^{iα_y}).

Geometric visualization of an arbitrary polarization state

To get an understanding of what a polarization state looks like, one can observe the orbit that is made if the polarization state is multiplied by a phase factor e^{iωt} and the real parts of its components are then interpreted as x and y coordinates respectively. That is:

(x(t), y(t)) = (Re(ψ_x e^{iωt}), Re(ψ_y e^{iωt})).

If only the traced-out shape M and the direction of the rotation of (x(t), y(t)) are considered when interpreting the polarization state (where x(t) and y(t) are defined as above), together with whether the state is overall more right circularly or left circularly polarized (i.e. whether |ψ_R| > |ψ_L| or vice versa), it can be seen that the physical interpretation is the same even if the state is multiplied by an arbitrary phase factor, since M and the direction of rotation remain the same. In other words, there is no physical difference between two polarization states |ψ⟩ and e^{iφ}|ψ⟩ that differ only by a phase factor. It can be seen that for a linearly polarized state, M is a line in the x–y plane, with length 2 and its middle at the origin, whose slope equals tan θ. For a circularly polarized state, M is a circle with radius 1/√2 centered at the origin.

Energy, momentum, and angular momentum of a classical electromagnetic wave

Energy density of classical electromagnetic waves

Energy in a plane wave

The energy per unit volume in classical electromagnetic fields is (in cgs units)

u = (1/8π)(E² + B²).

For a plane wave, this becomes

⟨u⟩ = E₀²/8π,

where E₀ is the amplitude of the electric field and the energy has been averaged over a wavelength of the wave.

Fraction of energy in each component

The fraction of energy in the x component of the plane wave is

f_x = |ψ_x|² = cos²θ,

with a similar expression for the y component resulting in f_y = sin²θ. The fraction in both components is

f_x + f_y = cos²θ + sin²θ = 1.

Momentum density of classical electromagnetic waves

The momentum density is given by the Poynting vector divided by c²:

P = S/c² = (1/4πc) E × B.

For a sinusoidal plane wave traveling in the z direction, the momentum is in the z direction and is related to the energy density:

⟨P_z⟩ = ⟨u⟩/c.

The momentum density has been averaged over a wavelength.

Angular momentum density of classical electromagnetic waves

Electromagnetic waves can have both orbital and spin angular momentum. The total angular momentum density is

L = r × P.

For a sinusoidal plane wave propagating along the z axis, the orbital angular momentum density vanishes. The spin angular momentum density is in the z direction and is given by

⟨s_z⟩ = (|ψ_R|² − |ψ_L|²) ⟨u⟩/ω,

where again the density is averaged over a wavelength.

Optical filters and crystals

Passage of a classical wave through a polaroid filter

A linear filter transmits one component of a plane wave and absorbs the perpendicular component. In that case, if the filter is polarized in the x direction, the fraction of energy passing through the filter is

f_x = |ψ_x|² = cos²θ.
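The stated energy fraction for a polaroid filter (Malus's law) is easy to verify numerically; this small check uses the Jones-vector conventions of the earlier sketch.

```python
# Check of the polaroid-filter energy fraction stated above (Malus's law):
# a filter along x transmits |psi_x|^2 = cos^2(theta) of the energy of a
# wave linearly polarized at angle theta.
import numpy as np

theta = np.deg2rad(30)
psi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)

f_x = abs(psi[0])**2                        # fraction passing an x-polaroid
print(np.isclose(f_x, np.cos(theta)**2))    # True
print(f_x + abs(psi[1])**2)                 # 1.0: the two fractions sum to 1
```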
Example of energy conservation: passage of a classical wave through a birefringent crystal

An ideal birefringent crystal transforms the polarization state of an electromagnetic wave without loss of wave energy. Birefringent crystals therefore provide an ideal test bed for examining the conservative transformation of polarization states. Even though this treatment is still purely classical, standard quantum tools such as unitary and Hermitian operators that evolve the state in time naturally emerge.

Initial and final states

A birefringent crystal is a material that has an optic axis with the property that light polarized parallel to the axis has a different index of refraction than light polarized perpendicular to the axis. Light polarized parallel to the axis is called "extraordinary rays" or "extraordinary photons", while light polarized perpendicular to the axis is called "ordinary rays" or "ordinary photons". If a linearly polarized wave impinges on the crystal, the extraordinary component of the wave will emerge from the crystal with a different phase than the ordinary component. In mathematical language, if the incident wave is linearly polarized at an angle θ with respect to the optic axis, the incident state vector can be written

|ψ⟩ = (cos θ, sin θ),

and the state vector for the emerging wave can be written

|ψ'⟩ = (cos θ e^{iφ_e}, sin θ e^{iφ_o}).

While the initial state was linearly polarized, the final state is elliptically polarized. The birefringent crystal alters the character of the polarization.

Dual of the final state

The initial polarization state is transformed into the final state with the operator U,

|ψ'⟩ = U|ψ⟩.

The dual of the final state is given by

⟨ψ'| = ⟨ψ|U†,

where U† is the adjoint of U, the complex conjugate transpose of the matrix.

Unitary operators and energy conservation

The fraction of energy that emerges from the crystal is

⟨ψ'|ψ'⟩ = ⟨ψ|U†U|ψ⟩ = 1.

In this ideal case, all the energy impinging on the crystal emerges from the crystal. An operator U with the property that

U†U = I,

where I is the identity operator, is called a unitary operator. The unitary property is necessary to ensure energy conservation in state transformations.

Hermitian operators and energy conservation

If the crystal is very thin, the final state will be only slightly different from the initial state. The unitary operator will be close to the identity operator. We can define the operator H by

U = I + iεH (with ε small)

and the adjoint by

U† = I − iεH†.

Energy conservation then requires

I = U†U = (I − iεH†)(I + iεH) = I + iε(H − H†) + O(ε²).

This requires that

H = H†.

Operators like this that are equal to their adjoints are called Hermitian or self-adjoint. The infinitesimal transition of the polarization state is

|ψ'⟩ = |ψ⟩ + iεH|ψ⟩.

Thus, energy conservation requires that infinitesimal transformations of a polarization state occur through the action of a Hermitian operator.
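A minimal numerical sketch of this section's claims, assuming an idealized retarder as the birefringent element: its Jones matrix U is unitary, and for a thin element the generator H extracted from U ≈ I + iεH is Hermitian. The phase convention and the small parameter ε are illustrative choices, not taken from the text.

```python
# Sketch of the birefringent-crystal discussion above: a retarder's Jones
# matrix U is unitary (energy conservation), and for a thin crystal
# U ≈ I + i*eps*H with H Hermitian.
import numpy as np

def retarder(phi):
    """Jones matrix of an ideal birefringent element with relative phase phi."""
    return np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])

U = retarder(np.pi / 2)                        # quarter-wave retardance
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary

# Thin crystal: extract the Hermitian generator H from U ≈ I + i*eps*H.
eps = 1e-6
H = (retarder(eps) - np.eye(2)) / (1j * eps)
print(np.allclose(H, H.conj().T, atol=1e-5))   # True: H is Hermitian

# Energy conservation: the norm of any state is preserved by U.
theta = np.pi / 3
psi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
psi_out = U @ psi
print(abs(np.vdot(psi_out, psi_out)))          # 1.0
```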
Photons: connection to quantum mechanics

Energy, momentum, and angular momentum of photons

Energy

The treatment to this point has been classical. It is a testament, however, to the generality of Maxwell's equations for electrodynamics that the treatment can be made quantum mechanical with only a reinterpretation of classical quantities. The reinterpretation is based on the theories of Max Planck and the interpretation by Albert Einstein of those theories and of other experiments. Einstein's conclusion from early experiments on the photoelectric effect is that electromagnetic radiation is composed of irreducible packets of energy, known as photons. The energy of each packet is related to the angular frequency of the wave by the relation

E = ħω,

where ħ is an experimentally determined quantity known as the reduced Planck constant. If there are N photons in a box of volume V, the energy in the electromagnetic field is Nħω, and the energy density is Nħω/V. The photon energy can be related to classical fields through the correspondence principle, which states that for a large number of photons the quantum and classical treatments must agree. Thus, for very large N, the quantum energy density must be the same as the classical energy density,

Nħω/V = ⟨u⟩.

The number of photons in the box is then

N = (V/ħω)⟨u⟩.

Momentum

The correspondence principle also determines the momentum and angular momentum of the photon. For momentum, the averaged classical momentum density must equal the momentum density of the N photons,

N p_z/V = ⟨u⟩/c = Nħω/(cV),

where k = ω/c is the wave number. This implies that the momentum of a photon is

p_z = ħω/c = ħk.

Angular momentum and spin

Similarly, for the spin angular momentum,

N s_z/V = (|ψ_R|² − |ψ_L|²) ⟨u⟩/ω,

where the energy density ⟨u⟩ = E₀²/8π is expressed in terms of the field strength E₀. This implies that the spin angular momentum of the photon is

s_z = (|ψ_R|² − |ψ_L|²) ħ.

The quantum interpretation of this expression is that the photon has a probability |ψ_R|² of having a spin angular momentum of +ħ and a probability |ψ_L|² of having a spin angular momentum of −ħ. We can therefore think of the spin angular momentum of the photon as being quantized, as well as the energy. The angular momentum of classical light has been verified. A photon that is linearly polarized (plane polarized) is in a superposition of equal amounts of the left-handed and right-handed states.

Spin operator

The spin of the photon is defined as the coefficient of ħ in the spin angular momentum calculation. A photon has spin 1 if it is in the |R⟩ state and −1 if it is in the |L⟩ state. The spin operator is defined as the outer product

S = |R⟩⟨R| − |L⟩⟨L|.

The eigenvectors of the spin operator are |R⟩ and |L⟩, with eigenvalues 1 and −1, respectively. The expected value of a spin measurement on a photon is then

⟨ψ|S|ψ⟩ = |ψ_R|² − |ψ_L|².

An operator S has been associated with an observable quantity, the spin angular momentum. The eigenvalues of the operator are the allowed observable values. This has been demonstrated for spin angular momentum, but it is in general true for any observable quantity.

Spin states

We can write the circularly polarized states as

|s⟩ = (1/√2)(1, −is),

where s = 1 for |R⟩ and s = −1 for |L⟩. An arbitrary state can be written

|ψ⟩ = Σ_{s=±1} a_s e^{i(α_s + sθ)} |s⟩,

where α_1 and α_{−1} are phase angles, θ is the angle by which the frame of reference is rotated, and the a_s are real amplitudes.

Spin and angular momentum operators in differential form

When the state is written in this spin notation, the spin operator can be written

S → −i ∂/∂θ.

The eigenvectors of the differential spin operator are the functions e^{i(α_s + sθ)}. To see this, note

−i ∂/∂θ e^{i(α_s + sθ)} = s e^{i(α_s + sθ)}.

The spin angular momentum operator is

S_z = −iħ ∂/∂θ.
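A short numerical check of the spin operator defined above as an outer product; basis conventions follow the earlier sketches.

```python
# Numeric check of the spin operator built above, S = |R><R| - |L><L|,
# with eigenvalues ±1 and zero expectation value on a linearly polarized
# state (equal right and left circular content).
import numpy as np

R = np.array([1, -1j], dtype=complex) / np.sqrt(2)
L = np.array([1, +1j], dtype=complex) / np.sqrt(2)
S = np.outer(R, R.conj()) - np.outer(L, L.conj())

print(np.linalg.eigvalsh(S))          # [-1., 1.]

theta = np.pi / 5                     # any linear polarization angle
psi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
expectation = np.vdot(psi, S @ psi).real
print(round(expectation, 12))         # 0.0: equal mix of spin +1 and -1
```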
Nature of probability in quantum mechanics

Probability for a single photon

There are two ways in which probability can be applied to the behavior of photons: probability can be used to calculate the probable number of photons in a particular state, or probability can be used to calculate the likelihood that a single photon is in a particular state. The former interpretation violates energy conservation. The latter interpretation is the viable, if nonintuitive, option. Dirac explains this in the context of the double-slit experiment:

"Some time before the discovery of quantum mechanics people realized that the connection between light waves and photons must be of a statistical character. What they did not clearly realize, however, was that the wave function gives information about the probability of one photon being in a particular place and not the probable number of photons in that place. The importance of the distinction can be made clear in the following way. Suppose we have a beam of light consisting of a large number of photons split up into two components of equal intensity. On the assumption that the beam is connected with the probable number of photons in it, we should have half the total number going into each component. If the two components are now made to interfere, we should require a photon in one component to be able to interfere with one in the other. Sometimes these two photons would have to annihilate one another and other times they would have to produce four photons. This would contradict the conservation of energy. The new theory, which connects the wave function with probabilities for one photon, gets over the difficulty by making each photon go partly into each of the two components. Each photon then interferes only with itself. Interference between two different photons never occurs." (Paul Dirac, The Principles of Quantum Mechanics, 1930, Chapter 1)

Probability amplitudes

The probability for a photon to be in a particular polarization state depends on the fields as calculated by the classical Maxwell's equations. The polarization state of the photon is proportional to the field. The probability itself is quadratic in the fields and consequently is also quadratic in the quantum state of polarization. In quantum mechanics, therefore, the state or probability amplitude contains the basic probability information. In general, the rules for combining probability amplitudes look very much like the classical rules for composition of probabilities. [The following rules are from Baym, Chapter 1.]

1. The probability amplitude for two successive possibilities is the product of amplitudes for the individual possibilities. For example, the amplitude for the x-polarized photon to be right circularly polarized, and for the right circularly polarized photon to pass through the y-polaroid, is the product of the individual amplitudes, ⟨y|R⟩⟨R|x⟩.

2. The amplitude for a process that can take place in one of several indistinguishable ways is the sum of amplitudes for each of the individual ways. For example, the total amplitude for the x-polarized photon to pass through the y-polaroid is the sum of the amplitudes for it to pass as a right circularly polarized photon and as a left circularly polarized photon,

⟨y|R⟩⟨R|x⟩ + ⟨y|L⟩⟨L|x⟩ = ⟨y|x⟩.

3. The total probability for the process to occur is the absolute value squared of the total amplitude calculated by 1 and 2.
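The two amplitude rules can be verified numerically for the x-polaroid/y-polaroid example above; conventions are as in the earlier sketches.

```python
# Numeric version of the amplitude-composition rules stated above:
# the amplitude for an x-polarized photon to pass a y-polaroid, summed
# over the two indistinguishable circular routes, equals <y|x> = 0.
import numpy as np

x = np.array([1, 0], dtype=complex)
y = np.array([0, 1], dtype=complex)
R = np.array([1, -1j], dtype=complex) / np.sqrt(2)
L = np.array([1, +1j], dtype=complex) / np.sqrt(2)

amp = lambda a, b: np.vdot(a, b)      # <a|b>

route_R = amp(y, R) * amp(R, x)       # rule 1: x -> R -> y
route_L = amp(y, L) * amp(L, x)       # rule 1: x -> L -> y
total = route_R + route_L             # rule 2: sum indistinguishable routes
print(total, amp(y, x))               # both 0: the two routes cancel
print(abs(total)**2)                  # rule 3: probability = |amplitude|^2
```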
Uncertainty principle

Mathematical preparation

For any legal operators A and B the following inequality, a consequence of the Cauchy–Schwarz inequality, is true:

‖Aψ‖ ‖Bψ‖ ≥ (1/2) |⟨ψ|(AB − BA)|ψ⟩|.

If BAψ and ABψ are defined, then by subtracting the means and re-inserting them in the above formula, we deduce

Δ_ψA Δ_ψB ≥ (1/2) |⟨[A, B]⟩_ψ|,

where ⟨X⟩_ψ is the operator mean of observable X in the system state ψ and Δ_ψX is the corresponding standard deviation. Here

[A, B] = AB − BA

is called the commutator of A and B. This is a purely mathematical result. No reference has been made to any physical quantity or principle. It simply states that the uncertainty of one operator times the uncertainty of another operator has a lower bound.

Application to angular momentum

The connection to physics can be made if we identify the operators with physical operators such as the angular momentum and the polarization angle. We then have

ΔS_z Δθ ≥ ħ/2,

which means that angular momentum and the polarization angle cannot be measured simultaneously with infinite accuracy. (The polarization angle can be measured by checking whether the photon can pass through a polarizing filter oriented at a particular angle, or a polarizing beam splitter. This results in a yes/no answer that, if the photon was plane-polarized at some other angle, depends on the difference between the two angles.)

States, probability amplitudes, unitary and Hermitian operators, and eigenvectors

Much of the mathematical apparatus of quantum mechanics appears in the classical description of a polarized sinusoidal electromagnetic wave. The Jones vector for a classical wave, for instance, is identical with the quantum polarization state vector for a photon. The right and left circular components of the Jones vector can be interpreted as probability amplitudes of spin states of the photon. Energy conservation requires that the states be transformed with a unitary operation. This implies that infinitesimal transformations are generated by a Hermitian operator. These conclusions are a natural consequence of the structure of Maxwell's equations for classical waves.

Quantum mechanics enters the picture when observed quantities are measured and found to be discrete rather than continuous. The allowed observable values are determined by the eigenvalues of the operators associated with the observable. In the case of angular momentum, for instance, the allowed observable values are the eigenvalues of the spin operator.

These concepts have emerged naturally from Maxwell's equations and the theories of Planck and Einstein. They have been found to hold for many other physical systems. In fact, the typical program is to assume the concepts of this section and then to infer the unknown dynamics of a physical system. This was done, for instance, with the dynamics of electrons: working back from the principles in this section, the quantum dynamics of particles were inferred, leading to Schrödinger's equation, a departure from Newtonian mechanics. The solution of this equation for atoms led to the explanation of the Balmer series for atomic spectra and consequently formed a basis for all of atomic physics and chemistry.

This is not the only occasion on which Maxwell's equations have forced a restructuring of Newtonian mechanics. Maxwell's equations are relativistically consistent. Special relativity resulted from attempts to make classical mechanics consistent with Maxwell's equations (see, for example, the moving magnet and conductor problem).

See also
Angular momentum of light
Spin angular momentum of light
Orbital angular momentum of light
Quantum decoherence
Stern–Gerlach experiment
Wave–particle duality
Double-slit experiment
Spin polarization

Quantum mechanics
Physical phenomena
Polarization (waves)
Photon polarization
Physics
3,652
36,180,950
https://en.wikipedia.org/wiki/Flat%20convergence
In mathematics, flat convergence is a notion for convergence of submanifolds of Euclidean space. It was first introduced by Hassler Whitney in 1957, and then extended to integral currents by Federer and Fleming in 1960. It forms a fundamental part of the field of geometric measure theory. The notion was applied to find solutions to Plateau's problem. In 2001 the notion of an integral current was extended to arbitrary metric spaces by Ambrosio and Kirchheim.

Integral currents

A k-dimensional current T is a linear functional on the space of smooth, compactly supported k-forms. For example, given a Lipschitz map from a manifold into Euclidean space, φ: N → R^n, one has an integral current T(ω) defined by integrating the pullback of the differential k-form ω over N. Currents have a notion of boundary ∂T (which is the usual boundary when N is a manifold with boundary) and a notion of mass, M(T) (which is the volume of the image of N). An integer rectifiable current is defined as a countable sum of currents formed in this way. An integral current is an integer rectifiable current whose boundary has finite mass. It is a deep theorem of Federer and Fleming that the boundary is then also an integral current.

Flat norm and flat distance

The flat norm |T| of a k-dimensional integral current T is the infimum of M(A) + M(B), where the infimum is taken over all integral currents A and B such that

T = A + ∂B.

The flat distance between two integral currents is then dF(T, S) = |T − S|.

Compactness theorem

Federer and Fleming proved that if one has a sequence of integral currents T_i whose supports lie in a compact set K, with a uniform upper bound on M(T_i) + M(∂T_i), then a subsequence converges in the flat sense to an integral current. This theorem was applied to study sequences of submanifolds of fixed boundary whose volume approached the infimum over all volumes of submanifolds with the given boundary. It produced a candidate weak solution to Plateau's problem.

Metric geometry
Riemannian geometry
Convergence (mathematics)
Flat convergence
Mathematics
444
62,485,831
https://en.wikipedia.org/wiki/Ted%20Janssen
Theo Willem Jan Marie Janssen (13 August 1936 – 29 September 2017), better known as Ted Janssen, was a Dutch physicist and full professor of theoretical physics at the Radboud University Nijmegen. Together with Pim de Wolff and Aloysio Janner, he was one of the founding fathers of the N-dimensional superspace approach in crystal structure analysis for the description of quasiperiodic crystals and modulated structures. For this work he received the Aminoff Prize of the Royal Swedish Academy of Sciences (together with de Wolff and Janner) in 1998 and the Ewald Prize of the International Union of Crystallography (with Janner) in 2014. These achievements reflected his unique talent for combining a deep knowledge of physics with a rigorous mathematical approach. Their theoretical description of the structure and symmetry of incommensurate crystals using higher-dimensional superspace groups also covered the quasicrystals that were discovered in 1982 by Dan Shechtman, who received the Nobel Prize in Chemistry in 2011; the Swedish Academy of Sciences explicitly mentioned their work on that occasion.

Early life and education

Ted Janssen was born on August 13, 1936, in Vught, near 's-Hertogenbosch in the Netherlands. Already as a young boy he was fascinated by the sciences: he built radios, set up a chemistry lab in the attic of his parental home, was an avid bird watcher, and built his own telescopes. He remembered high school as "not very inspiring" and passed all exams without much effort, but he viewed it as a time that truly formed him. Instead of spending time on homework he studied the history and philosophy of science and was very interested in astronomy and astrophysics. During his high school years he also developed a deep appreciation of literature and music; later he added the visual arts, ballet, and architecture to that list. The enjoyment of the arts was vital to Ted, who called them essential components of life. He started playing the piano, harpsichord and cello in his early twenties; too late to become an accomplished musician, but it brought him great joy.

In 1954 he started college in Utrecht, studying mathematics and physics with minors in chemistry and astronomy. He again showed his interest in a wide variety of topics by attending lectures on ethics, philosophy, music and sculpture. After his candidate degree he concentrated on theoretical physics, but he always included a deep understanding of mathematics in his work. After studying theoretical physics at Utrecht University, Ted graduated under Leon van Hove with his doctoral dissertation on the classical limit of quantum mechanical diagram expansions, and he was offered the opportunity to present it at an international conference in Utrecht on many-body problems. No fewer than six Nobel laureates (Yang, Lee, Prigogine, Anderson, Cooper and Schrieffer) were in the audience for Ted's first presentation, which also led to his first publication, 'On the classical limit of the diagram expansion in quantum statistics' by T.W.J.M. Janssen. All his later publications appeared as T. Janssen or Ted Janssen. After his doctoral exam in 1960 Ted worked for several years with professors Theo Ruijgrok, Tini Veltman, and John Tjon in Utrecht. Earlier he had developed a friendship with fellow student and co-worker Geert Fast. Geert's doctoral supervisor, Van Hove, moved from Utrecht to CERN in Geneva, and Geert asked Ted to keep an eye on his little sister, Loes Fast, who was studying veterinary medicine in Utrecht.
Ted quickly developed strong feelings for Loes, and in 1965 they married. In 1965 he also became the first PhD student of Aloysio Janner at the Catholic University Nijmegen and started the work that resulted in his PhD thesis, Crystallographic Groups in Space and Time, in 1968, thereby already providing the theoretical basis of what would become the superspace approach.

Career

After completing his PhD, Ted Janssen obtained a position in Nijmegen at the department of Theoretical Solid State Physics. He was immediately given teaching responsibilities, and in the years that followed he taught many classes, including electrodynamics, classical mechanics, quantum mechanics, complex functions, crystallographic groups, group theory for physicists, chaos theory, soft modes and solid state physics. Ted was always interested in international collaboration and taught crystallographic groups for six months in Leuven in 1969. In 1971 he accepted an invitation from professor Baltensperger to come to the ETH in Zürich for one year. Baltensperger organized weekly meetings between theoretical and experimental physicists, and from then on Ted made it a habit to bring theoretical and experimental physicists together on a regular basis. Back in Nijmegen, Ted was promoted to associate professor in 1972, and he continued working with Aloysio Janner and Li Ching Chen on space-time symmetry of electromagnetic fields and, independently, on PUA (projective unitary/anti-unitary) representations. In 1972 Aloysio and Ted also started their long collaboration with Pim de Wolff. Together with Aloysio Janner and Pim de Wolff he was one of the founders of the higher-dimensional superspace approach in crystal structure analysis for the description of quasiperiodic crystals and modulated structures. This collaboration and its results received international recognition in 1998 with the Aminoff Prize from the Swedish Academy of Sciences. The award ceremony was followed by a symposium whose speakers were Aloysio Janner, Ted Janssen, Gervais Chapuis, Mike Glazer, Börje Johansson, Sander van Smaalen, Václav Petříček and Reine Wallenberg. In 1973 and 1975 Ted and Aloysio organized conferences on Group Theoretical Methods in Physics in Nijmegen; these are small conferences that attract both mathematicians and physicists, and the series still exists. In 1993 Ted was appointed professor at Utrecht University, and in 1994 he took Aloysio's position in Nijmegen after Aloysio retired. Also in 1994, Ted organized the conference Dyproso 1994 (Dynamic Properties of Solids) in Lunteren. In 1987 Ted joined the board of the EMF (European Meeting on Ferroelectricity) and a few years later also that of the IMF (International Meeting on Ferroelectricity); he organized EMF-8 in Nijmegen in 1995. In 1997 he joined the board of Aperiodic (Modulated Structures, Polytypes and Quasicrystals) and organized Aperiodic-2000, again in Nijmegen. Ted was also a board member of the ICQ, the NVK (Nederlandse Vereniging voor Kristallografie, the Dutch crystallographic society), the LOTN (a collaboration of Dutch institutes for theoretical physics), and the Dutch organization for Fundamental Research in Solid State Physics. Ted attended many conferences and was often traveling. In the earlier years his wife Loes worked as a veterinarian and took care of the children, but once all the children had left the house, Loes joined Ted on many of his travels.
Ted spent time as a visiting lecturer or professor in Leuven (1969), Zürich (1971–1972), Dijon (1987), Paris, Orsay, Palaiseau (1992), Gif-sur-Yvette (1993), Grenoble (1986 and 1990), Marseille (2001), Nagoya (1992), Lausanne (2003), Beer Sheva (2003) and Sendai (2004–2005 and 2013). In 2014 Aloysio and Ted received a second award, the Ewald Prize, one of the most prestigious prizes in crystallography, from the International Union of Crystallography during the IUCr conference in Montreal.

Death

Ted Janssen died in Groesbeek, Netherlands, on September 29, 2017, after a short and devastating struggle with leukemia. He worked, however, until his last day, finishing his edits for the second edition of the book "Aperiodic structures: from modulated structures to quasicrystals", which was published in 2018.

1936 births
Theoretical physicists
2017 deaths
20th-century Dutch physicists
Crystallographers
Mathematical chemistry
Academic staff of Radboud University Nijmegen
Utrecht University alumni
Radboud University Nijmegen alumni
Ted Janssen
Physics,Chemistry,Materials_science,Mathematics
1,701
3,084,771
https://en.wikipedia.org/wiki/Electric%20bicycle%20laws
Many countries have enacted electric vehicle laws to regulate the use of electric bicycles, also termed e-bikes. Some jurisdictions have regulations governing safety requirements and standards of manufacture. The members of the European Union and other regions have wider-ranging legislation covering use and safety. Laws and terminology are diverse. Some countries have national regulations with additional regional regulations for each state, province, or municipality. Systems of classification and nomenclature may vary. Jurisdictions may address "power-assisted bicycles" (Canada) or "electric pedal-assisted cycles" (European Union and United Kingdom) or simply "electric bicycles". Some classify pedelecs as being distinct from other bicycles using electric power. Consequently, any particular e-bike may be subject to different classifications and regulations in different jurisdictions.

Australia

In Australia, the e-bike is defined by the Australian Vehicle Standards as a bicycle that has an auxiliary motor with a maximum power output not exceeding 250 W, without consideration of speed limits or pedal sensors. Each state is responsible for deciding how to treat such a vehicle, and currently all states agree that such a vehicle does not require licensing or registration. Some states have their own rules, such as no riding under electric power on bike paths and through built-up areas, so riders should consult the state laws regarding their use. No license and no registration are required for e-bike use.

Since 30 May 2012, Australia has an additional e-bike category using the European Union model of a pedelec as per the CE EN 15194 standard. This means the e-bike can have a motor of 250 W of continuous rated power which can only be activated by pedalling (if above 6 km/h) and must cut out over 25 km/h; if so, it is classed as a normal bicycle. The state of Victoria was the first to amend its local road rules accordingly; see below.

Road vehicles in Australia must comply with all applicable Australian Design Rules (ADRs) before they can be supplied to the market for use in transport (Motor Vehicle Standards Act 1989, Cwth). The ADRs contain the following definitions for bicycles and mopeds:

4.2. Two-Wheeled and Three-Wheeled Vehicles
4.2.1. PEDAL CYCLE (AA): A vehicle designed to be propelled through a mechanism solely by human power.
4.2.2. POWER-ASSISTED PEDAL CYCLE (AB): A pedal cycle to which is attached one or more auxiliary propulsion motors having a combined maximum power output not exceeding 200 watts.
4.2.3. MOPED – 2 Wheels (LA): A 2-wheeled motor vehicle, not being a power-assisted pedal cycle, with an engine cylinder capacity not exceeding 50 ml and a "Maximum Motor Cycle Speed" not exceeding 50 km/h; or a 2-wheeled motor vehicle with a power source other than a piston engine and a "Maximum Motor Cycle Speed" not exceeding 50 km/h.

(Vehicle Standard (Australian Design Rule – Definitions and Vehicle Categories) 2005, Compilation 3, 19 September 2007.) There are no ADRs applicable to AA or AB category vehicles. There are ADRs for lighting, braking, noise, controls and dimensions for LA category vehicles, mostly referencing the equivalent UN ECE Regulations. An approval is required to supply to the market any road vehicle to which ADRs apply, and an import approval is required to import any road vehicle into Australia.

New South Wales

In New South Wales, there are two types of power-assisted pedal cycle.
For the first type, the electric motor's maximum power output must not exceed 200 watts, and the pedal cycle cannot be propelled exclusively by the motor. For the second type, known as a "pedelec", the vehicle must comply with the European Standard for Power Assisted Pedal Cycles (EN 15194). Since October 2014, all petrol-powered cycles are explicitly banned. Since February 2023, pedelecs in NSW can have a 500-watt motor.

Victoria

A bicycle designed to be propelled by human power using pedals may have an electric or petrol-powered motor attached, provided the motor's maximum power output does not exceed 200 watts. As of 18 September 2012, the Victorian road rules have changed to enable a pedelec to be used as a bicycle in Victoria. The change allows more options of power-assisted pedal cycles under bicycle laws. A pedelec is defined as meeting EU standard EN 15194: it has a motor of no more than 250 W of continuous rated power which is only to be activated by pedalling when travelling at speeds between 6 km/h and 25 km/h.

Queensland

In Queensland, the situation is similar to Victoria. There are two types of legal motorised bicycle. For the first type, the electric motor must not be capable of generating more than 200 watts of power. For the second type, known as a "pedelec", the vehicle must comply with the European Standard for Power Assisted Pedal Cycles (EN 15194). The pedals on a motorised bicycle must be the primary source of power for the vehicle. If the motor is the primary source of power, then the device cannot be classed as a motorised bicycle. For example, a device where the rider can twist a throttle and complete a journey using motor power only, without using the pedals, would not be classed as a motorised bicycle. Motorised bicycles can be ridden on all roads and paths, except where bicycles are specifically excluded. Riders do not need a driver licence to ride a motorised bicycle.

Canada

Eight provinces of Canada allow electric power-assisted bicycles. In all eight provinces, e-bikes are limited to 500 W output and cannot travel faster than 32 km/h on motor power alone on level ground. In Alberta prior to July 1, 2009, the power limit was 750 W, but the limits presently match the federal legislation. Age restrictions vary in Canada. All provinces require an approved helmet. Regulations may or may not require an interlock to prevent the use of power when the rider is not pedaling. Some versions of e-bikes (e.g., those capable of operating without pedaling) require drivers' licenses in some provinces and have age restrictions. Vehicle licenses and liability insurance are not required. Generally, e-bikes are considered vehicles (like motorcycles and pedal cycles), so they are subject to the same rules of the road as regular bicycles. In some cases, regulatory requirements have been complicated by lobbying in respect of the Segway PT. Bicycles assisted by a gasoline motor or other fuel are regulated differently from e-bikes: they are classified as motorcycles, regardless of the power output of the motor and the maximum attainable speed. Note that in Canada, the term "assist bicycle" is the technical term for an e-bike and "power-assisted bicycle" is used in the Canadian federal legislation, but it is carefully defined to apply only to electric motor assist, and it specifically excludes internal combustion engines (though this is not the case in the United States).
Federal requirements

Since 2000, Canada's Motor Vehicle Safety Regulations (MVSR) have defined power-assisted bicycles (PABs) as a separate category, which requires no license to operate. PABs are currently defined as a two- or three-wheeled bicycle equipped with handlebars and operable pedals, an attached electric motor of 500 W or less, and a maximum speed capability of 32 km/h from the motor over level ground. Other requirements include a permanently affixed label from the manufacturer, in a conspicuous location, stating that the vehicle is a power-assisted bicycle under the statutory requirements in force at the time of manufacture. All power-assisted bicycles must utilize an electric motor for assisted propulsion. A power-assisted bicycle may be imported and exported freely within Canada without the restrictions placed on an automobile or a moped. Under federal law, power-assisted bicycles may be restricted from operation on some roads, lanes, paths, or thoroughfares by the local municipality. Bicycle-style PABs are permitted on the National Capital Commission's (NCC) Capital Pathway network, but scooter-style PABs are prohibited. All PABs (bicycle- and scooter-style) are permitted on dedicated NCC bike lanes. All PABs are prohibited on Gatineau Park's natural-surface trails.

Provincial requirements for use

Alberta

Alberta identifies e-bikes as "power bicycles" and is consistent with the federal definition of "power-assisted bicycle" in MVSR CRC, c 1038 s 2. Motor output must not exceed 500 W, and e-bikes cannot travel faster than 32 km/h. Fully operable pedals are required. No driver's license, vehicle insurance, or vehicle registration is required. Operators must be 12 years of age or older. All operators are required to wear a motorcycle helmet meeting the standards set in AR 122/2009 s 112(2). A passenger is permitted only if the e-bike is equipped with a seat designated for that passenger.

British Columbia

An e-bike is identified as a "motor-assisted cycle" (MAC) in British Columbia, which differs from electric mopeds and scooters, which are "limited-speed motorcycles". Motor-assisted cycles must: have an electric motor of no more than 500 W; have fully operable pedals; and not be capable of propelling the device at a speed greater than 32 km/h. The motor must disengage when (a) the operator stops pedaling, (b) an accelerator controller is released, or (c) a brake is applied. A driver's license, vehicle registration, and insurance are not required. The rider must be 16 years old or older, and a bike helmet must be worn. E-bikes in British Columbia must comply with all standards outlined in the Motor Assisted Cycle Regulation, BC Reg 151/2002.

Ontario

Ontario is one of the last provinces in Canada to move toward legalizing power-assisted bicycles (PABs) for use on roads, even though they have been federally defined and legal in Canada since early 2001. In November 2005, Bill 169 received royal assent, allowing the Ministry of Transportation of Ontario (MTO) to place any vehicle on the road. On October 4, 2006, the Minister of Transportation for Ontario, Donna Cansfield, announced a pilot project allowing PABs which meet the federal standards definition to operate on roads. PAB riders must follow the rules and regulations of a regular bicycle, wear an approved bicycle helmet, and be at least 16 years old. There are still a number of legal considerations for operating any bicycle in Ontario. On October 5, 2009, the Government of Ontario brought in laws regulating electric bikes in the province.
E-bikes, which can reach a speed of 32 kilometres per hour, are allowed to share the road with cars, pedestrians and other traffic throughout the province. The rules limit the maximum weight of an e-bike to 120 kilograms, require a maximum braking distance of nine metres, and prohibit any modifications to the bike's motor that would create speeds greater than 32 kilometres per hour. Also, riders must be at least 16 years of age, wear approved bicycle or motorcycle helmets, and follow the same traffic laws as bicyclists. Municipalities are also specifically permitted by the legislation to restrict where e-bikes may be used on their streets, bike lanes and trails, as well as to restrict certain types of e-bike (e.g. banning "scooter-style" e-bikes from bicycle trails). E-bikes are not permitted on 400-series highways, expressways or other areas where bicycles are not allowed. Riding an e-bike under the age of 16, or riding an e-bike without an approved helmet, are new offences in the legislation, carrying fines of between $60 and $500. E-bike riders are subject to the same penalties as other cyclists for all other traffic offences.

Manitoba

E-bikes are legal in Manitoba, so long as certain stipulations are met. The bike must not be designed to have more than three wheels touching the ground; the motor must stop providing motive power if the bike exceeds 32 km/h for any reason; the motor must be smaller than 500 W; the bike must have functioning pedals; if the motor is engaged by a throttle, it must immediately stop providing the vehicle with motive power when the driver activates a brake; and if it is engaged by the driver applying muscle power to the pedals, the motor must immediately stop providing the vehicle with motive power when the driver stops applying muscle power. The bike must also have either a mechanism to turn the electric motor on and off that can be operated by the driver and that, if the vehicle has a throttle, is separate from the throttle, or a mechanism that prevents the motor from engaging until the vehicle is traveling at 3 km/h or more. The user must also be at least 14 years of age to operate an e-bike. All other Manitoba laws regarding cycling also apply.

New Brunswick

To be allowed on the road, an e-bike needs wheel rims larger than 9 inches, a headlight for night use, and a seat at least 27 inches off the ground.

New Brunswick's policy on electric motor-driven cycles and electric bicycles: The Registrar will permit an electric motor-driven cycle to be registered if it meets the Canada Motor Vehicle Safety Standards (CMVSS) as a limited-speed motorcycle or scooter, as is done with gas-powered motor-driven cycles. If the vehicle was manufactured after 1988, it will bear a compliance label stating that it meets these standards. The operator will be subject to all the requirements placed on operators of motor-driven cycles. If the vehicle is able to be powered by human force and has a motor of 500 W or less, and the motor is not capable of assisting when the vehicle is traveling at a speed greater than 32 km/h, then it can be considered a bicycle, and all the requirements placed on bicyclists are applicable. It is important to note that if a vehicle has an electric motor greater than 500 watts and is capable of powering the vehicle when traveling at a speed greater than 32 km/h, and it does not have a CMVSS compliance label, it cannot be registered unless the owner can prove, by having the vehicle certified by an engineer, that it is safe for operation on NB highways.
Also, not all vehicles are suitable for operation on NB highways, and it could be that the vehicle in question is neither a motor-driven cycle nor a bicycle and cannot be operated on the highway at all. Power-assisted bicycle label: manufacturers of e-bikes must permanently affix a label, in a conspicuous location, stating in both official languages that the vehicle is a power-assisted bicycle as defined in the regulations under the federal Motor Vehicle Safety Act. Homemade e-bikes will not have this label.

NOTE 1: The previous version of the policy had a section requiring the vehicle to "look like a bike" or have a "bike style frame", but it never defined what those were. That has been dropped and is no longer part of the new policy.
NOTE 2: The top speed of the bike if propelled by human power is the posted speed limit, but the motor is only allowed to reach and hold 32 km/h. If the posted limit is under 32 km/h, then the posted limit applies.
NOTE 3: There is no maximum weight limit.
NOTE 4: E-bikes are allowed to use cargo trailers/kid trailers.
NOTE 5: There is no minimum age set.
NOTE 6: DUI: if you have a DUI conviction, the restrictions of the DUI override the e-bike policy's definition of an e-bike as a bicycle and put it into the motor vehicle category.

Newfoundland

Nova Scotia

In Nova Scotia, power-assisted bicycles are classified similarly to standard pedal bicycles. The Nova Scotia Motor Vehicle Act defines a power-assisted bicycle as a bicycle with an electric motor of 500 watts or less, with two wheels (one of which is at least 350 mm) or four wheels (two of which are at least 350 mm). PABs are permitted on the road in the province of Nova Scotia as long as the rider wears an approved bicycle helmet with the chinstrap engaged. They do not have to meet the conditions defined within the Canadian Motor Vehicle Safety Regulations for a motorcycle (they are not classed as "motor vehicles"), but they do have to comply with the federal regulations that define power-assisted bicycles.

Prince Edward Island

E-bikes are treated as mopeds and need to pass inspection as a moped.

Quebec

In Quebec, power-assisted bicycles are often classified similarly to standard pedal bicycles. They do not have to meet the conditions defined within the Canadian Motor Vehicle Safety Regulations (they are not classed as "motor vehicles"), but they do have to comply with the federal regulations that define power-assisted bicycles. The Quebec Highway Safety Code defines a power-assisted bicycle as a bicycle (with 2 or 3 wheels that touch the ground) with an electric motor with a maximum power of 500 W and a top speed of 32 km/h, bearing a specific compliance label permanently attached by the manufacturer. PABs are permitted on the road in the province of Quebec, but riders have to be 14 or over, and if they are under the age of 18 they must have a moped or scooter license.

Saskatchewan

Power-assisted bicycles are classified in two categories in Saskatchewan. An electric-assist bicycle is a 2- or 3-wheeled bicycle that uses pedals and motor at the same time only. A power cycle uses either pedals and motor, or motor only. Both must have motors of 500 W power or less and must not be able to exceed 32 km/h, i.e., the electric motor cuts out at this speed, or the cycle is unable to go this fast on a level surface. The power cycle has to meet the Canadian Motor Vehicle Safety Standards (CMVSS) for a power-assisted bicycle.
The power cycle requires at least a learner's driving licence (class 7), and holders of any of the other licence classes 1–5 may also operate one. The electric assist bicycle does not require a licence. Helmets are required for each. Both are treated as bicycles regarding rules of the road. Gas powered or assisted bicycles are classified as motorcycles regardless of engine size or whether they use pedals plus motor. Stickers identifying the bicycle's compliance with the federal classification may be required for power cycles by some cities or municipalities.
China
Mainland
In China, e-bikes currently come under the same classification as bicycles and hence do not require a driver's license to operate. Previously users were required to register their bikes so that they could be recovered if stolen, although this has recently been abolished. Due to a recent rise in electric-bicycle-related accidents, caused mostly by inexperienced riders who ride on the wrong side of the road, run red lights, do not use headlights at night etc., the Chinese government plans to change the legal status of electric bicycles so that vehicles above a specified unladen weight and top speed will require a motorcycle license to operate, while lighter vehicles slower than 30 km/h can be ridden unlicensed. In the southern Chinese cities of Guangzhou, Dongguan and Shenzhen, e-bikes, like all motorcycles, are banned from certain downtown districts. There are also bans in place in small areas of Shanghai, Hangzhou and Beijing. Bans of "scooter-style electric bikes" (SSEB) were however cancelled, and in Shenzhen e-bikes could again be seen on the streets as of 2010–11. Electric powered bicycles slower than 20 km/h without pedaling are legally recognized as non-mechanically operated vehicles in China. According to "Technology Watch", this should help promote their widespread use. Electric bicycles were banned in some areas of Beijing from August 2002 to January 2006 due to concerns over environmental, safety and city image issues. Beijing has re-allowed use of approved electric bicycles as of January 4, 2006. Some cities in China still ban electric bikes.
Hong Kong
Hong Kong has traffic laws independent of mainland China. Electric bikes are considered motorcycles in Hong Kong, and therefore need type approval from the Transport Department, just as automobiles do. All electric bikes available in Hong Kong fail to meet the type approval requirement, and the Transport Department has never granted type approval for an electric bike, making all electric bikes effectively illegal in Hong Kong. Even with type approval, the rider would need a motorcycle driving licence. As a side note, Hong Kong does not have a moped vehicle class (and therefore no moped driving license), and mopeds are considered motorcycles too. Electric bicycles are not allowed in any public area, meaning an area where there is full or partial public access. Any kind of pedal assist, electric bike, scooter, moped or vehicle which has any form of propulsion, whether in full or as assist, other than human power, must be approved as either a car, motorcycle, van, truck, bus or similar. This makes pedelecs and tilt-controlled two-wheel personal vehicles illegal in all practical ways, as they cannot be registered as motorcycles.
Europe
European Union definition
Regulation (EU) No 168/2013 of the European Parliament and of the Council, which replaced 2002/24/EC on 1 January 2016 but is substantially the same, exempts vehicles with the following definition from the requirement for type approval: "pedal cycles with pedal assistance which are equipped with an auxiliary electric motor having a maximum continuous rated power of less than or equal to 250 W, where the output of the motor is cut off when the cyclist stops pedalling and is otherwise progressively reduced and finally cut off before the vehicle speed reaches 25 km/h". This is the de facto definition of an electrically assisted pedal cycle in the EU. As with all EU directives, individual member countries of the EU are left to implement the requirements in national legislation. The EU specification does not require a helmet to be worn when riding this class of bicycle. European product safety standard EN 15194 was published in 2009. The aim of EN 15194 is "to provide a standard for the assessment of electrically powered cycles of a type which are excluded from type approval by Directive 2002/24/EC".
National requirements
Belgium
In Belgium, technical rules passed on 9 September 2016 and 17 November 2017 allow for three types of e-bikes:
250 W, 25 km/h limited "e-bikes", for all ages, without a helmet.
1000 W, 25 km/h limited "motorized bikes", over 16 years, with conformity certificate, without a helmet.
4000 W, 45 km/h limited "speed pedelecs", which are classed as mopeds for all requirements.
Denmark
In Denmark, Parliament has decided to approve the speed pedelec – a type of fast electric bike that can reach speeds of up to 45 km/h – for riding on cycle paths. The Danish Parliament has decided that as of 1 July 2018, those operating speed pedelecs need only have turned 15 and wear an approved helmet, plus hold a license if between 15 and 18. "The regulations for experimental schemes with Speed Pedelecs (45 km/h) and the rules for e-bikes (25 km/h) are as follows:
- A Speed Pedelec must be EU type-approved, and a physical type plate containing data from the approval must be present on the bicycle. It must not exceed 20 km/h without pedal assistance.
- An e-bike can achieve a maximum speed of 6 km/h without pedal assistance (referred to as the 'walk function') and a maximum of 25 km/h with pedal assistance. The walk function can be operated via a button or twist/gas handle.
- There is no specific requirement that an e-bike with a maximum 250 W motor must be EU type-approved. However, it must comply with the Machinery Directive and carry the CE mark.
- Regarding how the law applies to configuring different restrictions via a computer/display, there is no specific information available."
Finland
In Finland, bicycles meeting the European Union definition can be used without regulation. Bicycles with 250–1000 W electric motors, or which allow assistance without pedalling, are classified as L1e-A-class motorised bicycles under EU regulation, must be insured for use on public roads, and are limited to 25 km/h.
Latvia
In Latvia, the laws do not set any additional provisions specifically for electric bicycles other than defining a "bicycle" for the Road Traffic Law as a human-powered vehicle that may be equipped with an electric motor with power of no more than 250 W.
Norway
As a member of the European Economic Area (EEA), Norway implemented the European Union definition.
As in the EU, pedelec e-bikes are classified as ordinary bicycles according to the Vehicle Regulation (kjøretøyforskriften) § 4–1, 5g, are not registered in the Vehicle Registry, and require no driving license.
Sweden
Sweden uses the European Union definition, according to the Swedish Vehicle Regulation (Trafikverket).
Switzerland
Regulations updated in 2012 categorize electric-assisted pedal bikes as "light", usable without regulation, if their motor power does not exceed 500 W and their maximum speed is 25 km/h if pedalled, 20 km/h without pedal assistance. Switzerland (not an EU member) has more liberal standards for fast electric bicycles than most of Europe, with an easy process to obtain a license to use 45 km/h e-bikes.
Turkey
Laws are similar to those in the EU.
United Kingdom
Laws were amended in 2015 to match much of the EU regulation detail, including a 250 W power limit and 25 km/h speed limit. A minimum rider age of 14 years applies.
India
Indian law requires that all electric vehicles have ARAI approval. Vehicles with motors below 250 W and a top speed of less than 25 km/h do not require certification, and hence skip the full testing process, but they need an exemption report from ARAI. More powerful vehicles must go through the full testing process under CMVR rules. This can take time and cost money but assures a safe and reliable design for electric vehicles. These regulations are not promulgated by the Regional Transport Offices, and riders are not required to obtain a license to drive, carry insurance, or wear a helmet. In India, all electric cycles which do not require a licence and registration are made in accordance with the guidelines issued by ARAI.
Israel
In Israel, persons above 16 years old are allowed to use pedal-assisted bicycles with power of up to 250 W and a speed limit of 25 km/h. The bicycle must satisfy the European standard EN 15194 and be approved by the Standards Institution of Israel. A new law, effective January 10, 2019, states that riders under 18 who have no automobile license will need a special permit. Otherwise, no license or insurance is required. Other motorized bicycles are considered to be motorcycles and must be licensed and insured as such. The maximum weight of the e-bike itself cannot exceed 30 kg. The Israeli Ministry of Transportation passed legislation in 2009 and again in 2018; the 2018 law, effective from January 1, 2019, concerns the bicycle permit described above. The December 2009 legislation allows electric bicycles to be used legally on streets in the country under the following criteria:
The maximum power of the electric engine is not higher than 250 W.
The electric motor is activated by the rider's pedalling effort and it has to cut out completely when the rider stops pedalling.
The electric motor power decreases as the bicycle's speed advances and it must cut out completely whenever the bicycle reaches a speed of 25 km/h.
The electric bicycle has to comply with the European standard BS EN 15194.
Japan
In Japan, electric-assisted bicycles are treated as human-powered bicycles, while bicycles capable of propulsion by electric power alone face additional registration and regulatory requirements as mopeds. Requirements include electric power generation by a motor that cannot be easily modified, along with a power assist mechanism that operates safely and smoothly.
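Japan caps assistance as a maximum ratio of motor output to the rider's pedalling power, tapering with speed; the December 2008 schedule is given below. As an illustration only, the schedule can be expressed as a small piecewise function (a sketch, not statutory text; the function name and units are chosen for this example):

```python
def max_assist_ratio(speed_kmh: float) -> float:
    """Maximum motor-to-pedal power ratio under Japan's December 2008 schedule (sketch)."""
    if speed_kmh < 10:
        return 2.0                           # up to 2:1 assistance below 10 km/h
    if speed_kmh <= 24:
        return 2.0 - (speed_kmh - 10) / 7.0  # linear taper from 2 down to 0
    return 0.0                               # no assistance above 24 km/h
```

For example, at 17 km/h the permitted ratio is 2 - 7/7 = 1, i.e. the motor may contribute at most as much power as the rider.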
In December 2008, the assist ratio was updated as follows:
Under 10 km/h: 2
10–24 km/h: 2 - (running speed - 10) / 7
Over 24 km/h: 0
(See Moped#Individual countries/regions)
New Zealand
In New Zealand, the regulations read: "AB (Power-assisted pedal cycle) A pedal cycle to which is attached one or more auxiliary propulsion motors having a combined maximum power output not exceeding 300 watts." This is explained by NZTA as "A power-assisted cycle is a cycle that has a motor of up to 300 watts. The law treats these as ordinary cycles rather than motorcycles. This means that it is not necessary to register or license them." Note that the phrase "maximum power output" found in the regulation (but omitted in the explanation) may create confusion, because some e-bike motor manufacturers advertise and print on the motor their "maximum input power", since that number is larger (motors typically run at about 80% efficiency, so a motor drawing 350 W of input power delivers only about 280 W of output), thus giving the impression the buyer is getting a more powerful motor. This can cause misunderstandings with law enforcement officers who do not necessarily understand the difference and who, when stopping an e-bike rider in a traffic stop, may look at the number on the motor to determine whether the e-bike is legal. Vehicles with electric power and power of less than 300 W are classified as "not a motor vehicle". Such electric bicycles must comply with the same rules as bicycles. A helmet must be worn even on a scooter or bike under 300 W. If the power is over 300 W or a combustion engine is used, it is a "low-powered vehicle" and the moped rules apply. Specifically, a driver's license and registration are required.
Philippines
In the Philippines, the Land Transportation Office issued Memorandum Circular 721-2006 stating that registration is not needed for electric bicycles (i.e. electric motor assisted bicycles with working pedals) and even extended the exemption to "bicycle-like" vehicles.
Russian Federation
According to Russian law, bicycles can have electric motors with a nominal output power of 250 watts or less which automatically turn off at speeds above 25 km/h. No driver's license is required.
Comparison
(*) Allowed on bike paths when electric systems are turned off
(**) E-bikes are illegal in this region
Comparison of US rules and regulations
Identity: How exactly does legislation identify the electric bicycle?
Type: How does the law define vehicle type?
Max Speed: Maximum speed when powered solely by the motor.
Max Power: Maximum motor power, or engine size, permitted.
Helmet: Is use of a helmet mandatory?
Minimum Age: Operator's minimum age.
Driver's License: Is a license or endorsement required for the driver?
United States
Federal laws and regulations on sales
The U.S. Consumer Product Safety Act states that electric bicycles and tricycles meeting the definition of low-speed electric bicycles will be considered consumer products. The Consumer Product Safety Commission (CPSC) has regulatory authority to assure, through guidelines and standards, that the public will be protected from unreasonable risks of injury or death associated with the use of electric bicycles. In addition to federal and state electric bicycle regulations, people with certain mobility disabilities may be granted use of Class I and Class II electric bicycles per Title 28 Chapter 1 Part 36 at certain locations where electric bicycles are not normally permitted, so long as they can be used reasonably safely.
Defined
The federal Consumer Product Safety Act defines a "low-speed electric bicycle" as a two- or three-wheeled vehicle with fully operable pedals, a top speed when powered solely by the motor of under 20 mph, and an electric motor that produces less than 750 W. The Act authorizes the Consumer Product Safety Commission to protect people who ride low-speed electric vehicles by issuing necessary safety regulations. The rules for e-bikes on public roads, sidewalks, and pathways are under state jurisdiction and vary. In conformance with legislation adopted by the U.S. Congress defining this category of electric-power bicycle (15 U.S.C. 2085(b)), CPSC rules stipulate that low-speed electric bicycles (to include two- and three-wheel vehicles) are exempt from classification as motor vehicles providing they have fully operable pedals, an electric motor of less than 750 W, and a top motor-powered speed of less than 20 mph when operated by a rider weighing 170 pounds. An electric bike remaining within these specifications is subject to the CPSC consumer product regulations for a bicycle. Commercially manufactured e-bikes exceeding these power and speed limits are regulated by the federal DOT and NHTSA as motor vehicles and must meet additional safety requirements. The legislation enacting this amendment to the CPSC is also known as HR 727. The text of HR 727 includes the statement: "This section shall supersede any State law or requirement concerning low-speed electric bicycles to the extent that such State law or requirement is more stringent than the Federal law or requirements." (Note that this refers to consumer product regulations enacted under the Consumer Product Safety Act. Preemption of more stringent state consumer product regulations does not limit state authority to regulate the use of electric bicycles, or bicycles in general, under state vehicle codes.)
State requirements for use
While federal law governs consumer product regulations for "low-speed electric bicycles", as with motor vehicles and bicycles, regulation of how these products are used on public streets is subject to state vehicle codes. There is significant variation from state to state, as summarized below.
Alabama
Every bicycle with a motor attached is defined as a motor-driven cycle. The operation of a motor-driven cycle requires a class M driver license. Restricted class M driver licenses are available for those as young as 14 years of age.
Arizona
Under Arizona law, motorized electric bicycles and tricycles meeting the definition under the applicable statute are not subject to title, licensing, insurance, or registration requirements, and may be used upon any roadway authorized for use by conventional bicycles, including use in bike lanes integrated with motor vehicle roadways. Unless specifically prohibited, electric bicycles may be operated on multi-use trails designated for hiking, biking, equestrian, or other non-motorized use, and upon paths designated for the exclusive use of bicycles. No operator's license is required, but anyone operating a bicycle on Arizona roads must carry proof of identity. A "motorized electric bicycle or tricycle" is legally defined as a bicycle or tricycle that is equipped with a helper motor that may be self-propelled, which is operated at speeds of less than twenty miles per hour.
Electric bicycles operated at speeds of twenty miles per hour or more, but less than twenty-five miles per hour, may be registered for legal use on the roadways as mopeds, and above twenty-five miles per hour as a registered moped with an 'M' endorsement on the operator's driving license. However, mopeds in Arizona are prohibited from using bike lanes on motor vehicle roadways. The Arizona statute governing motorized electric bicycles does not prohibit local jurisdictions from adopting an ordinance that further regulates or prohibits the operation of motorized electric bicycles or tricycles.
Arkansas
Arkansas does not define e-bikes. The following definition describes a combustion engine; e-bikes, being electric, do not have a cylinder capacity, and thus this law is not technically applicable. The state defines a "motorized bicycle" as "a bicycle with an automatic transmission and a motor of less than 50cc." Riders require either a certificate to operate a motorized bicycle, a motorcycle license, a motor-driven cycle license, or a license of class A, B, C or D. Certificates cannot be issued to riders under 10 years of age.
California
Electric bicycles are defined by the California Vehicle Code. New legislation became effective January 2016. The current regulations define an "electric bicycle" as a bicycle equipped with fully operable pedals and an electric motor of less than 750 watts, separated into three classes: Beginning January 1, 2017, manufacturers and distributors of electric bicycles are required to apply a label that is permanently affixed, in a prominent location, to each electric bicycle, indicating its class. Should a user "tamper with or modify" an electric bicycle, changing the speed capability, they must replace the label indicating the classification. Driver's license, registration, insurance and license plate requirements do not apply. An electric bicycle is not a motor vehicle. Drinking and driving laws apply. Additional laws or ordinances may apply to the use of electric bicycles by each city or county.
Colorado
The e-bike definition in Colorado follows the federal HR 727 definition: 750 W maximum motor power and 20 mph maximum motor-powered speed, 2 or 3 wheels, and pedals that work. Legal low-powered e-bikes are allowed on roads and bike lanes, and prohibited from using their motors on bike and pedestrian paths, unless overridden by local ordinance. The city of Boulder was the first to have done so, banning e-bikes over 400 W from bike lanes. Bicycles and e-bikes are disallowed on certain high-speed highways and all Interstates unless signed as "Allowed" in certain rural Interstate stretches where the Interstate is the only means of travel.
Connecticut
Section 14-1 of Connecticut state law classifies electric bicycles as "motor-driven cycles" if they have a seat height of not less than 26 inches and a motor which produces brake horsepower of 2 or less. Motor-driven cycles may be operated on the roadway without registration, but the operator must have a driver's license. The cycle may not be operated on any sidewalk, limited access highway or turnpike. If the maximum speed of the cycle is less than the speed limit of the road, the cycle must operate in the right hand lane available for traffic or upon a usable shoulder on the right side of the road unless the operator is making a left turn.
District of Columbia
Electric-assist and other "motorized bicycles" do not need to be inspected, do not require a license, and do not require registration.
The vehicle must meet all of the following criteria: a post-mounted seat for each person it is designed to carry, two or three wheels which contact the ground, fully operative pedals, wheels at least 16 inches in diameter and a motor not capable of propelling the device at more than 20 mph on level ground. The driver does not need a license, but must be at least 16 years old. DC law prohibits motorized bicycles from traveling anywhere on the sidewalk or in the bike lanes. DC Regulation 18–1201.18 provides: "Except as otherwise permitted for a motor vehicle, no person shall operate a motorized bicycle on any sidewalk or any off-street bikepath or bicycle route within the District. This prohibition shall apply even though the motorized bicycle is being operated solely by human power." So, if cars are prohibited in a particular place, motor-assisted bikes are also prohibited.
Florida
Florida DMV Procedure RS-61 II. "(B.) Dirt bikes noted for off road use, motorized bicycles and Go-Peds are not registered." Electric Helper-Motor Bicycles: a person at least 16 years old may, without a driver license, ride a bicycle that is propelled by a combination of human power (pedals) and an electric helper-motor that cannot go faster than 20 mph on level ground. Motorized Bicycles and Motorized Scooters: under Title 23, Chapter 316 of the code, bicycles and motorized bicycles are defined as follows: Bicycle—Every vehicle propelled solely by human power, and every motorized bicycle propelled by a combination of human power and an electric helper motor capable of propelling the vehicle at a speed of not more than 20 miles per hour on level ground upon which any person may ride, having two tandem wheels, and including any device generally recognized as a bicycle though equipped with two front or two rear wheels. The term does not include such a vehicle with a seat height of no more than 25 inches from the ground when the seat is adjusted to its highest position, or a scooter or similar device. No person under the age of 16 may operate or ride upon a motorized bicycle. Motorized Scooter—Any vehicle not having a seat or saddle for the use of the rider, designed to travel on not more than three wheels, and not capable of propelling the vehicle at a speed greater than 30 miles per hour on level ground. In addition to the statutory language, there are several judicial rulings on the subject.
Georgia
Georgia Code 40-1-1 Part 15.3
Hawaii
A federal agency, the Consumer Product Safety Commission (CPSC), has exclusive jurisdiction over electric bicycles as to consumer product regulations, but this does not change state regulation of the use of electric bicycles on streets and highways. "Bicycle" means every vehicle "propelled solely by human power" upon which any person may ride, having two tandem wheels, and including any vehicle generally recognized as a bicycle though equipped with two front or two rear wheels, except a toy bicycle. In an update announced on September 20, 2019, the Hawaii Department of Transportation normalized the use of electric bicycles on city roads (with a registration fee of $30) under HB 812: any two- or three-wheel electric bike with a DC motor of up to 750 W qualifies as a bicycle, and the minimum age to ride an e-bike is 15. HB 812 was passed on both House and Senate floors in March 2019, and it was signed into effect by Governor David Ige in July 2019. "Moped" means a device upon which a person may ride which is DOT approved. Under the statute, mopeds must be registered.
To be registered under Hawaii law a moped must bear a certification label from the manufacturer stating that it complies with federal motor vehicle safety standards (FMVSS). A moped must also possess the following equipment approved by the D.O.T. under Chapter 91: approved braking, fuel, and exhaust system components; approved steering system and handlebars; wheel rims; fenders; a guard or protective covering for drive belts, chains and rotating components; seat or saddle; lamps and reflectors; equipment controls; speedometer; retracting support stand; horn; and identification markings.
Illinois
Two relevant laws for Illinois are 625 ILCS 5/11-1517 and 625 ILCS 5/1-140.10. 625 ILCS 5/11-1517: Each low-speed electric bicycle operating in Illinois should comply with requirements adopted by the United States Consumer Product Safety Commission under 16 CFR 1512. Class 3 low-speed electric bicycles need an accurate speedometer in miles per hour. After January 1, 2018, every manufacturer and distributor of low-speed electric bicycles needs a permanent and prominent label on the bicycle, in Arial font in at least 9-point type, detailing: (1) the classification number for the bicycle from 625 ILCS 5/1-140.10, (2) the bicycle's top assisted speed, and (3) the bicycle's motor wattage. No person shall knowingly tamper with or modify the speed capability or engagement of a low-speed electric bicycle without replacing the original label with the accurate class, assisted top speed and motor wattage. A Class 2 low-speed electric bicycle's electric motor should disengage or cease to function when the brakes are applied. For Class 1 and Class 3 low-speed electric bicycles, the electric motor should disengage or cease to function when the rider stops pedaling. Low-speed electric bicycles can go on any highway, street, or roadway authorized for use by bicycles, including, but not limited to, bicycle lanes. A municipality, county, or local authority with jurisdiction can prohibit the use of low-speed electric bicycles or a specific class of low-speed electric bicycles on a bicycle path. Otherwise, low-speed electric bicycles are allowed on a bicycle path. Low-speed electric bicycles cannot go on a sidewalk. Class 3 low-speed electric bicycle drivers need to be 16 years or older. If the Class 3 low-speed electric bicycle is designed to accommodate passengers, there are no age restrictions on passengers. Low-speed electric bicycles and their classes are defined by 625 ILCS 5/1-140.10. A "low-speed electric bicycle" is not a moped or a motor driven cycle. All low-speed electric bicycles must have fully operable pedals and an electric motor of less than 750 watts. Furthermore, they must qualify as class 1, 2 or 3. Class 1 low-speed electric bicycles have a motor that provides assistance only when the rider is pedaling and does not assist over 20 miles per hour. Class 2 low-speed electric bicycles have a motor that can be used exclusively to propel the bicycle and does not assist over 20 miles per hour. Class 3 low-speed electric bicycles have a motor that provides assistance only when the rider is pedaling and that ceases to provide assistance when the bicycle reaches a speed of 28 miles per hour.
Indiana
In Indiana, the law for e-bikes was changed and e-bikes are now regulated like bicycles. The same rules of the road apply to both e-bikes and conventional human-powered bicycles. During the 2019 update to the Indiana Code of Motor Vehicles, e-bikes were put in three classes.
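The three-class scheme used by Illinois, Indiana and several other states turns on two questions: whether the motor assists only while the rider pedals, and the speed at which assistance cuts off. A minimal sketch of that classification logic (illustrative only; the function and parameter names are invented, while the 20 mph and 28 mph cutoffs come from the class definitions above):

```python
def classify_ebike(pedal_assist_only: bool, cutoff_mph: float) -> str:
    """Classify an e-bike under the three-class scheme described above (sketch)."""
    if cutoff_mph <= 20:
        # Both 20 mph classes; the throttle distinguishes Class 1 from Class 2.
        return "Class 1" if pedal_assist_only else "Class 2"
    if pedal_assist_only and cutoff_mph <= 28:
        return "Class 3"  # pedal assistance only, ceasing at 28 mph
    return "not a low-speed electric bicycle"
```

Under this rule, a throttle bike that cuts off at 20 mph is Class 2, while a pedal-assist bike cutting off at 28 mph is Class 3.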
Iowa
In 2006 a bill was passed that changed the definition of a bicycle to include a bicycle that has an electric motor of less than 1 hp (750 watts). The new definition, found in Iowa Code section 321.1(40)c, states: "Bicycle" means either of the following: (1) A device having two wheels and having at least one saddle or seat for the use of a rider which is propelled by human power. (2) A device having two or three wheels with fully operable pedals and an electric motor of less than 750 watts (one horsepower), whose maximum speed on a paved level surface, when powered solely by such a motor while ridden, is less than 20 miles per hour.
Kentucky
An electric bicycle fits under the definition of "moped" under Kentucky law. No tag or insurance is required, but a driver's license is required. "Moped" means either a motorized bicycle whose frame design may include one (1) or more horizontal crossbars supporting a fuel tank so long as it also has pedals, or a motorized bicycle with a step-through type frame which may or may not have pedals, rated no more than two (2) brake horsepower, with a cylinder capacity not exceeding fifty (50) cubic centimeters, an automatic transmission not requiring clutching or shifting by the operator after the drive system is engaged, and capable of a maximum speed of not more than thirty (30) miles per hour. Helmets are required.
Louisiana
Louisiana Revised Statute R.S. 32:1(41) defines a motorized bicycle as a pedal bicycle which may be propelled by human power or helper motor, or by both, with a motor rated no more than one and one-half brake horsepower, a cylinder capacity not exceeding fifty cubic centimeters, an automatic transmission, and which produces a maximum design speed of no more than twenty-five miles per hour on a flat surface. Motorized bicycles falling within this definition must be registered and titled under Louisiana law. Additionally, a person fifteen years of age or older operating a motorized bicycle producing more than five horsepower upon Louisiana roadways or highways must possess a valid driver's license with a motorcycle endorsement and adhere to laws governing the operation of a motorcycle, including the wearing of approved eye protectors or a windshield and the wearing of a helmet. The statute also states that "Motorized bicycles such as pocket bikes and scooters that do not meet the requirements of this policy shall not be registered." As R.S. 32:1(41) refers to motorized bicycles using "an automatic transmission" with helper motors rated in horsepower and cylinder capacity, not by watts or volts, the statute arguably does not cover bicycles powered by an electric motor(s), whether self-propelled or pedal-assist designs.
Maryland
Maryland defines an "electric bicycle" as a vehicle that (1) is designed to be operated by human power with the assistance of an electric motor, (2) is equipped with fully operable pedals, (3) has two or three wheels, (4) has a motor with a rating of 500 watts or less, and (5) is capable of a maximum speed of 20 miles per hour on a level surface when powered by the motor. (Senate Bill 379, approved by the Governor 5/5/2014, Chapter 294.) This legislation excludes "electric bicycle" from the definitions of "moped", "motorized minibike", and "motor vehicle", and removes the titling and insurance requirements required for electric bicycles under prior Maryland law. Before September 20, 2014, Maryland law had classified an electric bicycle as a moped.
Mopeds are specifically excluded from the definition of "motor vehicle" per § 11-135 of the Maryland Transportation Code. Mopeds may not be operated on sidewalks, trails, roadways with posted speeds in excess of 50 mph, or limited-access highways. Standard requirements for bicycle lighting, acceptable bicycle parking locations, and prohibitions on wearing earplugs or headsets over both ears apply. Recent legislation has passed putting Maryland e-bike laws in line with the popular class 1, 2, 3 system previously implemented in states such as California. This legislation became effective October 2019. The most significant portion of this change is the increased maximum limit on power and speed: it was increased from a maximum of 500 W / 20 mph to 750 W / 28 mph (assuming the e-bike in question meets class 3 criteria).
Massachusetts
Massachusetts has had the following two definitions for electric bicycles since November 8, 2022: Class I electric bicycle: a bicycle equipped with a motor that provides assistance only when the rider is pedaling, and that ceases to provide assistance when the electric bicycle reaches 20 mph, with an electric motor of 750 watts or less. Class II electric bicycle: a bicycle equipped with a throttle-actuated motor that ceases to provide assistance when the electric bicycle reaches 20 mph, with an electric motor of 750 watts or less. Riders of these electric bicycles do not require a license and are afforded all the rights and privileges of bicycle riders except as noted: These electric bicycles are allowed most places traditional bicycles are allowed: roadways, bike lanes, bike paths, paved trails, and, if specifically allowed by local jurisdiction and signage, natural surface trails. Electric bicycles are prohibited from sidewalks and most natural surface trails. Local jurisdictions may prohibit the use of electric bicycles on bikeways and bike paths, but this first requires a public notice, public hearing, and posted signage prohibiting electric bicycles. Manufacturers and distributors of electric bicycles must apply a prominent fixed label specifying the classification number, top assisted speed, and motor wattage. Persons who modify the motor-powered speed capability or engagement of the electric bicycle must appropriately replace this label. Massachusetts does not explicitly use the federal definition of Class III, 28 mph pedal-assist e-bikes. Massachusetts General Laws define other classes of motorized two-wheeled vehicles that are not Class I or Class II electric bicycles: motorcycle, motorized bicycle, and motorized scooter. Although the definition of motorized scooter includes two-wheeled vehicles propelled by electric motors with or without human power, motorized scooter specifically excludes anything which falls under the definitions of Class I or Class II electric bicycles, motorized bicycle, and motorcycle. Motorized bicycle is a pedal bicycle which has a helper motor, or a non-pedal bicycle which has a motor, with a cylinder capacity not exceeding fifty cubic centimeters, an automatic transmission, which is capable of a maximum speed of no more than thirty miles per hour, and which does not fall under the specific definition of a Class I or Class II electric bicycle. Motorcycle includes any bicycle with a motor or driving wheel attached, with the exception of vehicles that fall under the specific definition of motorized bicycle.
Thus, a pedal bicycle with an electric motor, or a non-pedal bicycle with an electric motor, automatic transmission, maximum speed of 30 miles an hour, and not a Class I or Class II electric bicycle, would fall under the definition of motorized bicycle. A non-Class I or II electric bicycle that did not meet those restrictions would be either a motorized scooter or a motorcycle, depending on specific characteristics. A motorized bicycle cannot be operated by any person under sixteen years of age. Motorized bicycles also cannot be driven at a speed exceeding twenty-five miles per hour within the commonwealth, and they are explicitly prohibited from being driven on public highways, public walkways or other public land as designated by the parks department. A motorized bicycle cannot be operated by any person not possessing a valid driver's license or learner's permit. Every person operating a motorized bicycle has the right to use all public ways in the commonwealth except limited access or express state highways where signs specifically prohibiting bicycles have been posted, and is subject to the traffic laws and regulations of the commonwealth. Motorized bicycles may be operated on bicycle lanes adjacent to the various ways, but are excluded from off-street recreational bicycle paths. Every person operating a motorized bicycle or riding as a passenger on a motorized bicycle must wear protective headgear, and no person operating a motorized bicycle can permit any other person to ride as a passenger on such motorized bicycle unless such passenger is wearing such protective headgear.
Michigan
An electric bicycle (or e-bike) is a bicycle that has a small rechargeable electric motor that can give a boost to the pedaling rider or can take over pedaling completely. To qualify as an e-bike in Michigan, the bike must meet the following requirements: It must have a seat or saddle for the rider to sit on. There must be fully operational pedals. It must have an electric motor of no more than 750 watts (or 1 horsepower). Whether an e-bike may be ridden on a trail depends on several factors, including the e-bike's class, the type of trail and whether the authority that manages or oversees the trail allows the use. To learn more, read the full legislation or review the provided summary.
Minnesota
Electric-assisted bicycles, also referred to as "e-bikes", are a subset of bicycles that are equipped with a small attached motor. To be classified as an "electric-assisted bicycle" in Minnesota, the bicycle must have a saddle and operable pedals, two or three wheels, and an electric motor of up to 750 watts, as well as meet certain federal motor vehicle safety standards. The motor must disengage during braking and have a maximum speed of 20 miles per hour (whether assisted by human power or not). Minn. Stat. § 169.011, subd. 27. Legislative changes in 2012 significantly altered the classification and regulatory structure for e-bikes. The general effect was to establish electric-assisted bicycles as a subset of bicycles and regulate e-bikes in roughly the same manner as bicycles instead of other motorized devices with two (or three) wheels. Laws 2012, ch. 287, art. 3, §§ 15–17, 21, 23–26, 30, 32–33, and 41. The 2012 Legislature also modified and clarified regulation of e-bikes on bike paths and trails. Laws 2012, ch. 287, art. 4, §§ 1–4, 20. Following the 2012 change, electric-assisted bicycles are regulated similarly to other bicycles. Most of the same laws apply. Minn. Stat. §§ 169.011, subd. 27; 169.222.
The bicycle does not need to be registered, and a title is no longer necessary. Minn. Stat. §§ 168.012, subd. 2d; 168A.03, subd. 1, clause (11). A license plate is no longer required to be displayed on the rear. See Minn. Stat. § 169.79, subd. 3. It is not subject to motor vehicle sales tax (the general sales tax would instead be owed on e-bike purchases). A driver's license or permit is not required. Unlike for a non-powered bicycle, the minimum operator age is 15 years old. Minn. Stat. § 169.222, subd. 6a. The device does not need to be insured. See Minn. Stat. § 65B.43, subds. 2, 13. Electric-assisted bicycle operators must follow the same traffic laws as operators of motor vehicles (except those that by their nature would not be relevant). The bicycles may be operated two abreast. Operators must generally ride as close as is practical to the right-hand side of the road (exceptions include when overtaking another vehicle, preparing for a left turn, and to avoid unsafe conditions). The bicycle must be ridden within a single lane. Travel on the shoulder of a road must be in the same direction as the direction of adjacent traffic. Some prohibitions also apply, such as on: carrying cargo that prevents keeping at least one hand on the handlebars or prevents proper use of brakes, riding more than two abreast on a roadway or shoulder, and attaching the bicycle to another vehicle. Minn. Stat. § 169.222, subds. 3–5. The vehicles may be operated on a sidewalk except in a business district or when prohibited by a local unit of government, and must yield to pedestrians on the sidewalk. Minn. Stat. § 169.223, subd. 3. By default, electric-assisted bicycles are allowed on road shoulders as well as on bicycle trails, bicycle paths, and bicycle lanes. A local unit of government having jurisdiction over a road or bikeway (including the Department of Natural Resources in the case of state bike trails) is authorized to restrict e-bike use if: the use is not consistent with the safety or general welfare of others; or the restriction is necessary to meet the terms of any legal agreements concerning the land on which a bikeway has been established. Electric-assisted bicycles can be parked on a sidewalk unless restricted by local government (although they cannot impede normal movement of pedestrians) and can be parked on streets where parking of other motor vehicles is allowed. Minn. Stat. § 169.222, subd. 9. During nighttime operation, the bicycle must be equipped with a front headlamp, a rear-facing red reflector, and reflectors on the front and rear of pedals, and the bicycle or rider must have reflective surfaces on each side. Minn. Stat. § 169.222, subd. 6. An electric-assisted bicycle can be equipped with a front-facing headlamp that emits a flashing white light, a rear-facing lamp that has a flashing red light, or both. The bicycle can carry studded tires designed for traction (such as in snowy or icy conditions). Helmets are no longer required for e-bike use.
Mississippi
In Opinion No. 2007-00602 of the Attorney General, Jim Hood clarified that a "bicycle with a motor attached" does not satisfy the definition of "motor vehicle" under Section 63-3-103. He stated that it is up to the authority creating the bike lane to determine if a bicycle with a motor attached can be ridden in bike lanes. No specifications about the motor were made. In Opinion No.
2011-00095 of the Attorney General, Jim Hood stated that an operator's license, helmet, safety insurance, title, registration, and safety inspection are all not required of bicycles with a motor attached.
Missouri
The rights and privileges of electric bicycle riders can be found in 307.194 RSMo. Generally, electric bicycle riders have all the rights and responsibilities of riders of bicycles. Electric bicycles are not subject to all laws covering motor vehicles, meaning they do not require "vehicle registration, certificates of title, drivers' licenses, [or] financial responsibility." Electric bicycles are divided into 3 different classes under 301.010(15), RSMo. Class 1 includes electric bicycles with a motor that assists the rider when pedaling and ceases at 20 mph. Class 2 includes electric bicycles that use a motor to propel the bicycle instead of a rider pedaling and "is not capable of providing assistance" when the bicycle reaches 20 mph. Class 3 includes electric bicycles with a motor that provides assistance when the rider is pedaling and stops when the bicycle reaches 28 mph. Persons under the age of 16 are not permitted to operate class 3 electric bicycles. In 2022, the Missouri Department of Conservation expanded the areas where bicycles, including electric bicycles, are allowed to be used.
Montana
The law uses a three-part definition where the first two parts describe a human-powered bicycle and one with an independent power source respectively, while the third describes a "moped" with both a motor and pedal assist. (Montana Code 61-8-102). As of April 21, 2015, mopeds were reclassified to be treated as bicycles in Montana, not requiring a driver's license. The definition as written does not specify the power of the motor in watts, as is conventionally done for electric bicycles, but rather in brake horsepower. Thus, for an electric bicycle, motor kit, or electric bicycle motor that is rated by the manufacturer in watts rather than in brake horsepower, a unit conversion must be made, and since no conversion factor is given in the code of the law, the courts will have to settle on a factor that is not directly encoded in the law. The industry standard conversion from watts to horsepower for electric motors is 1 horsepower = 746 watts, so, for instance, a 750 W motor is just over 1 hp, and a two-brake-horsepower limit corresponds to roughly 1,492 W. Acceptance of that industry conversion factor as an interpretation of the law, however, is subject to the process of the courts, since it is not defined specifically in the law. In addition, the specific wording of the law may or may not prohibit the use of a "mid-drive" or "crank-drive" motor set-up, where the motor drives the rear wheel of the bicycle through the existing chain drive of a bicycle that has multiple gears, depending on several points of interpretation of the law, specifically the interpretation of the wording "does not require clutching or shifting by the operator after the drive system is engaged". A "mid-drive" or "crank-drive" motor set-up on an electric bicycle does indeed allow the operator to change gears in the power drive system between the motor and the rear wheel of the bicycle. Whether a mechanism that allows, but does not require, the operator to change gears satisfies this wording is a matter of legal interpretation by the courts.
Just as "shall issue" and "may issue" (as in laws governing the issuing of licenses) have two different meanings in the application of the law (in the first case, if you meet the requirements, the authority has to give you the license; in the second, it does not have to even if you meet the requirements), whether or not "does not require shifting" outlaws electric bicycles where shifting is possible but not necessarily required is a matter of interpretation. Thus the legality of electric bicycles equipped with a "mid-drive" or "crank-drive" motor set-up in the U.S. state of Montana is not clearly defined.
Nebraska
Nebraska defines a moped as "a bicycle with fully operative pedals for propulsion by human power, an automatic transmission, and a motor with a cylinder capacity not exceeding fifty cubic centimeters which produces no more than two brake horsepower and is capable of propelling the bicycle at a maximum design speed of no more than thirty miles per hour on level ground." However, under a bill passed February 20, 2015, electric bicycles are explicitly defined: Bicycle shall mean (1) every device propelled solely by human power, upon which any person may ride, and having two tandem wheels either of which is more than fourteen inches in diameter, or (2) a device with two or three wheels, fully operative pedals for propulsion by human power, and an electric motor with a capacity not exceeding seven hundred fifty watts which produces no more than one brake horsepower and is capable of propelling the bicycle at a maximum design speed of no more than twenty miles per hour on level ground.
Nevada
As of May 19, 2009, Nevada amended its state transportation laws to explicitly permit electric bicycles to use any "trail or pedestrian walkway" intended for use with bicycles and constructed with federal funding, and otherwise generally permits electric bicycles to be operated in cases where a regular bicycle could be. An electric bicycle is defined as a two- or three-wheeled vehicle with fully operable pedals and an electric motor producing up to 1 gross brake horsepower and up to 750 watts final output, with a maximum speed of up to 20 miles per hour on flat ground with a rider when powered only by that motor.
New Jersey
As of May 14, 2019, a new vehicle class ("low-speed electric bicycle") was added to NJRS Title 39, described as "a two or three-wheeled vehicle with fully operable pedals and an electric motor of less than 750 watts, whose maximum speed on a paved level surface, when powered solely by a motor, while operated by a person weighing 170 pounds, is less than 20 miles per hour." Additionally, the existing class of "motorized bicycles" has been expanded to include, in addition to gas-powered vehicles such as mopeds, electric bicycles that can achieve higher speeds; for these vehicles, a driver's license and registration are still required. Under previous regulations, all e-bikes were classified as motorized bicycles (mopeds) and required registration, but could not actually be registered since the law was written only for gas-powered vehicles. The new legislation, which applies to both "pedal-assist" and "throttle" bicycles, removes e-bikes from that legal gray area.
New Mexico
New Mexico has no specific laws concerning electric or motorized bicycles. MVD rules treat motorized bicycles the same as bicycles, requiring no registration or driver's license.
Prior to this clarification by the MVD, electric bicycles were often treated as mopeds, which require a standard driver's license, but no registration.
New York
New York State (NYS) included "motor-assisted bicycles" in its list of vehicles which cannot be registered. A federal agency, the Consumer Product Safety Commission (CPSC), has exclusive jurisdiction over electric bicycles as to consumer product regulations. Despite their illegal status in the state of New York until 2020, enforcement varied at the local level. New York City enforced the bike ban with fines and vehicle confiscation for throttle-activated electric bikes. However, Mayor Bill de Blasio changed the city's policy to legalize pedal-assist electric bikes that have a maximum speed limited to 20 mph. Contrarily, Tompkins County supports electric bike use, even providing grant money to fund electric bike share/rental projects. Several bills were sponsored to legalize electric bicycles for use on NYS roads, and several passed overwhelmingly at the committee level, but none of these initiatives was heard and then passed in the New York State Senate until 2015. Bill S3997, "An act to amend the vehicle and traffic law, in relation to the definition of electric assisted bicycle. Clarifying the vehicle and traffic law to define electric assisted bicycles; establish that electric assisted bicycles, as defined, are bicycles, not motor vehicles; and establish safety and operational criteria for their use." passed in the Senate in 2015. The related Assembly bill A233 was not brought to a vote in the Assembly even though it had passed with little issue in prior years. A legalization bill passed in 2019 was vetoed by the Governor. The New York Bicycle Coalition has supported efforts to define electric bicycles in New York State. New York City has repeatedly drawn media attention for its enforcement of a ban on electric bicycles in certain neighborhoods, with fines of up to $3,000. A law was passed in April 2020 defining and legalizing three classes of electric bicycle. In September 2023, additional regulations were introduced in New York City to restrict the sale of electric bicycles and other battery-powered mobility devices to only those that are UL certified.
Ohio
The Ohio Revised Code 4511.01 distinguishes motorized bicycles and mopeds from motorcycles or scooters by describing them as "...any vehicle having either two tandem wheels or one wheel in the front and two wheels in the rear, that is capable of being pedaled and is equipped with a helper motor of not more than fifty cubic centimeters piston displacement that produces no more than one brake horsepower and is capable of propelling the vehicle at a speed of no greater than twenty miles per hour on a level surface." One brake horsepower converts to 0.75 kW, or (rounded) 750 W. Thus, a bicycle with an electric helper motor operating under 750 W, and not propelling the bicycle over 20 mph, does not qualify to be registered under Ohio state law. Local jurisdictions may have other regulations.
Oklahoma
Oklahoma defines an Electric-Assisted Bicycle in 47 O.S.
1-104 as "Two or three wheels; and Fully operative pedals for human propulsion and equipped with an electric motor with a power output not to exceed one thousand (1,000) watts, incapable of propelling the device at a speed of more than thirty (30) miles per hour on level ground, and incapable of further increasing the speed of the device when human power alone is used to propel the device at a speed of thirty (30) miles per hour or more. An electric-assisted bicycle shall meet the requirements of the Federal Motor Vehicle Safety Standards as set forth in federal regulations and shall operate in such a manner that the electric motor disengages or ceases to function when the brakes are applied." Oklahoma places the following restrictions on the operation of electric-assisted bicycles in 47 O.S. 11-805.2; operators must: 1. Possess a Class A, B, C or D license, but shall be exempt from a motorcycle endorsement; 2. Not be subject to motor vehicle liability insurance requirements only as they pertain to the operation of electric-assisted bicycles; 3. Be authorized to operate an electric-assisted bicycle wherever bicycles are authorized to be operated; 4. Be prohibited from operating an electric-assisted bicycle wherever bicycles are prohibited from operating; and 5. Wear a properly fitted and fastened bicycle helmet which meets the standards of the American National Standards Institute or the Snell Memorial Foundation Standards for protective headgear for use in bicycling, provided such operator is eighteen (18) years of age or less.
Oregon
Oregon law (ORS 801.258) defines an electric assisted bicycle as an electric motor-driven vehicle equipped with operable pedals, a seat or saddle for the rider, and no more than three wheels in contact with the ground during travel. In addition, the vehicle must be equipped with an electric motor that is capable of applying a power output of no greater than 1,000 watts, and that is incapable of propelling the vehicle at a speed greater than 20 miles per hour on level ground. In general, electric bicycles are considered "bicycles", rather than motor vehicles, for purposes of the code. This implies that all bicycle regulations apply to electric bicycles, including operation in bike lanes. Exceptions to this include a restriction of operation on sidewalks and that a license or permit is required if the rider is younger than 17 years of age.
Pennsylvania
State law defines a motorized pedalcycle as a motor-driven cycle equipped with operable pedals, a motor rated at no more than 1.5 brake horsepower, a cylinder capacity not exceeding 50 cubic centimeters, an automatic transmission, and a maximum design speed of no more than 25 miles per hour. Subchapter J of Publication 45 spells out the vehicle requirements in full. As of 2008, a standard class C license, proof of insurance, and registration (annual fee: $9.00) are required for operation of any motorized pedalcycle in Pennsylvania. Additionally, there are strict equipment standards that must be met for operation, including: handlebars, brakes, tires/wheels, electrical systems/lighting, mirrors, speedometer, and horns/warning devices. The definition was clearly written with gasoline-powered pedalcycles in mind. The requirement of an automatic transmission is troublesome for those who just want to add an electric-assist motor to a bicycle, for almost all bicycles have transmissions consisting of chains and manually shifted sprockets. The registration form asks for a VIN, making it difficult to register some foreign-made e-bikes.
The fine for riding an unregistered electric bike is approximately $160.00 per event as of 2007. On February 4, 2014, SB997 was introduced by Senator Matt Smith, seeking to amend the PA Vehicle Code to include "Pedalcycle with Electric Assist". In a memo addressed to all senate members, Smith said the definition shall include "bicycles equipped with an electric motor not exceeding 750 watts, weighing not more than , are capable of a maximum speed of not more than , and have operable pedals." On October 22, 2014, PA House Bill 573 passed into law as Act 154, which changes the definition of "pedalcycle" (bicycle) in the PA state vehicle code. "Pedalcycle" is now defined as a vehicle propelled solely by human-powered pedals, or a "pedalcycle" (bicycle) with electric assist (a vehicle weighing not more than , with two or three wheels more than in diameter, manufactured or assembled with an electric motor rated at no more than 750 watts, equipped with operational pedals, and with a maximum speed of 20 mph). Pedal-assisted bicycles meeting this definition may be used without regulation in PA.
Tennessee
Electric bicycles are defined in Tennessee Code Annotated 55-8-301 – 307. This legislation passed in 2016 and defines an "electric bicycle" as a bicycle or tricycle equipped with fully operable pedals and an electric motor of less than 750 watts, separated into three classes: (1) A "class 1 electric bicycle," or "low-speed pedal-assisted electric bicycle," is a bicycle equipped with a motor that provides assistance only when the rider is pedaling, and that ceases to provide assistance when the bicycle reaches the speed of 20 miles per hour. (2) A "class 2 electric bicycle," or "low-speed throttle-assisted electric bicycle," is a bicycle equipped with a motor that may be used exclusively to propel the bicycle, and that is not capable of providing assistance when the bicycle reaches the speed of 20 miles per hour. (3) A "class 3 electric bicycle," or "speed pedal-assisted electric bicycle," is a bicycle equipped with a motor that provides assistance only when the rider is pedaling (no throttle), that ceases to provide assistance when the bicycle reaches the speed of 28 miles per hour, and that is equipped with a speedometer. Electric bicycles are governed by the same law as other bicycles, subject to any local restrictions. They may be operated on any part of a street or highway where bicycles are authorized to travel, including a bicycle lane or other portion of a roadway designated for exclusive use by bicyclists. Class 1 and 2 electric bicycles are allowed on greenways and multi-use paths unless the local government bans their use by ordinance. Class 3 bikes are banned unless the local city council passes an ordinance to allow their use. Beginning January 1, 2017, manufacturers and distributors of electric bicycles were required to apply a label that is permanently affixed, in a prominent location, to each electric bicycle, indicating its class. Driver's license, registration, insurance and license plate requirements do not apply. An electric bicycle is not a motor vehicle. Drinking and driving laws apply. Additional laws or ordinances may apply to the use of electric bicycles by each city or county.
Texas
"Bicycles" and "Electric Bicycles" are legally defined in the Texas Transportation Code Title 7, Chapter 664, entitled "Operation of Bicycles, Mopeds, and Play Vehicles", in Subchapter G.
Texas
"Bicycles" and "Electric Bicycles" are legally defined in the Texas Transportation Code Title 7, Chapter 664, entitled "Operation of Bicycles, Mopeds, and Play Vehicles", in Subchapter G. Under Chapter 541.201 (24), "Electric bicycle" means a bicycle that (A) is designed to be propelled by an electric motor, exclusively or in combination with the application of human power, (B) cannot attain a speed of more than 20 miles per hour without the application of human power, and (C) does not exceed a weight of . The department or a local authority may not prohibit the use of an electric bicycle on a highway that is used primarily by motor vehicles. The department or a local authority may prohibit the use of an electric bicycle on a highway used primarily by pedestrians. Medical exemptions are also a standard right in the State of Texas for motorcyclists and even bicyclists. Under Texas's motorcycle helmet law (bicycle helmet laws come from city ordinances), only those 21 years old or younger are required to wear a helmet. However, a medical exemption, written by a certified licensed medical physician or licensed chiropractor, which exempts one from wearing a helmet, can be used by bicyclists where helmets are required.
Utah
According to Utah Code 41-6a-102 (17), an electric assisted bicycle is equipped with an electric motor with a power output of not more than 750 watts and is not capable of further assistance at a speed of more than , or at while pedaling and using a speedometer. New laws specifically exclude electric pedal-assisted bicycles from the definition of "motorized vehicles", and such bicycles are permitted on all state land (but not necessarily on Indian reservations, nor in restrictive municipalities, such as under Park City Code 10-1-4.5, where electric bicycles are generally not allowed on bike paths[2]) if the motor is not more than 750 watts and the assistance shuts off at (Utah Traffic Code 53-3-202-17-a)[1]. E-bikes sold in Utah are required to have a sticker that details the performance capacity. Children under 14 can operate an electric bicycle if accompanied by a parent/guardian, but children under 8 may not (Utah Code 41-6a-1115.5). No license, registration, or insurance is required by the State, but some municipalities may require these measures (Salt Lake City and Provo require registration).
[1] Utah Traffic Code, Utah Code Section 41-6a-102
[2] Park City, Utah Municipal Code
Vermont
"Motor-driven cycle" means any vehicle equipped with two or three wheels, a power source providing up to a maximum of two brake horsepower and having a maximum piston or rotor displacement of 50 cubic centimeters if a combustion engine is used, which will propel the vehicle, unassisted, at a speed not to exceed on a level road surface, and which does not require clutching or shifting by the operator. The designation is a replacement for "scooter" and "moped"; Vermont does not seem to have laws specifically for e-bikes. Operators of motor-driven cycles are required to have a valid driver's license but not a motorcycle endorsement.
Virginia
Virginia laws that cover electric bicycles include Va. Code § 46.2-100; § 46.2-903; § 46.2-904; § 46.2-908.1; § 46.2-906.1. E-bikes are allowed on sidewalks and bike paths, but are subject to local city or county restrictions. E-bikes are not subject to the registration, licensing or insurance requirements that apply to motor vehicles.
Washington
A law that came into effect on June 7, 2018, defines electric-assisted bicycles as bicycles with two or three wheels, a saddle, fully operative pedals for human propulsion, and an electric motor of no more than 750 watts.
The law divides electric-assisted bicycles into three classes:
Class 1 — "an electric assisted bicycle in which the motor provides assistance only when the rider is pedaling and ceases to provide assistance when the bicycle reaches the speed of twenty miles per hour";
Class 2 — "an electric assisted bicycle in which the motor may be used exclusively to propel the bicycle and is not capable of providing assistance when the bicycle reaches the speed of twenty miles per hour";
Class 3 — "an electric assisted bicycle in which the motor provides assistance only when the rider is pedaling and ceases to provide assistance when the bicycle reaches the speed of twenty-eight miles per hour and is equipped with a speedometer."
No driver's license is required and there is no age restriction for operation of Class 1 and 2 e-bikes, but one must be at least 16 years old to use a Class 3 bike. All classes of electric-assisted bicycles may be operated on a fully controlled limited access highway. Class 1 and 2 electric bicycles can be used on sidewalks, but Class 3 bicycles "may not be used on a sidewalk unless there is no alternative to travel over a sidewalk as part of a bicycle or pedestrian path." Generally a person may not operate an electric-assisted bicycle on a trail that is designated as non-motorized and that has a natural surface, unless otherwise authorized. Since July 1, 2018, manufacturers or distributors offering new electric-assisted bicycles in Washington state must affix a permanent label in a prominent place on the bike containing the classification number, top assisted speed, and motor wattage of the bike.
See also
Outline of cycling
Personal transporter (International regulation section)
References
External links
Regulations of E-Bikes in North America, National Institute for Transportation and Communities, August 2014.
Bicycle law
Electric bicycles
Bicycle
Vehicle law
Electric bicycle laws
Engineering
16,669
54,214,587
https://en.wikipedia.org/wiki/HESS%20J1857%2B026
HESS J1857+026 is a pulsar wind nebula located approximately from Earth in the constellation of Aquila. HESS J1857+026 releases γ-rays in the range of 0.8−45 TeV. It is most likely powered by PSR J1856+0245, a pulsar located nearby.
References
Pulsar wind nebulae
Aquila (constellation)
HESS J1857+026
Astronomy
86
5,589,857
https://en.wikipedia.org/wiki/Trust%20%28company%29
Trust International B.V. is a Dutch company producing value-oriented digital lifestyle accessories, including PC peripherals and accessories for video gaming. Based in Dordrecht, it was originally founded in 1983 as Aashima Technology B.V. before gaining its current name in 2003.
Products
The company's product lines are divided into Home & Office, Gaming, Smart Home and Business to Business (B2B). Products that the company has covered for many years include mice, keyboards, webcams and headsets. Trust's products are sold in specialist stores, large retailers, electronics chains and online stores in over 50 countries. In the past, Trust also produced peripherals such as scanners and modems.
Sports sponsorship
Dutch F1 driver Jos Verstappen used his strong Dutch links to gain sponsorship for the Minardi F1 Team in 2003, when Trust became one of the team's sponsors. That sponsorship moved to Jordan Grand Prix in 2004 when Verstappen was on the verge of a race seat with the team. Trust had a sponsorship agreement with Spyker F1 as the team started to bring in Dutch sponsorship. Trust was the head sponsor of the Arden International team, which competed in the GP2 and GP2 Asia series, and previously in Formula 3000. Because of the sponsorship, the team was dubbed Trust Team Arden. Trust also sponsored Minardi Team USA in the 2007 Champ Car World Series for much of the season, but ended the sponsorship when the team stopped competing at the end of the year due to the unification of Champ Car and IndyCar. Trust sponsored Red Bull Racing in 2009; both Sebastian Vettel and Mark Webber had the Trust name visible on the chin bars of their helmets.
See also
List of Dutch companies
References
External links
Computer companies of the Netherlands
Computer hardware companies
Electronics companies of the Netherlands
Computer peripheral companies
Videotelephony
Electronics companies established in 1983
Dutch brands
Trust (company)
Technology
384
60,035,244
https://en.wikipedia.org/wiki/Wildlife%20of%20Sweden
Located in the Scandinavian Peninsula, Sweden is a mountainous country dominated by lakes and forests. Its habitats include mountain heath, montane forests, tundra, taiga, beech forests, rivers, lakes, bogs, brackish and marine coasts, and cultivated land. The climate of Sweden is mild for a country at this latitude, largely owing to the significant maritime influence.
Geography
Sweden is an elongated country east of Norway and west of the Baltic Sea and the Gulf of Bothnia. It extends from a latitude of 55°N (similar to Newcastle or Moscow) to more than 70°N, which is north of the Arctic Circle. To the southwest lie the Skagerrak and the Kattegat seas. To the northeast is the land border with Finland, marked by the Torne River. The coastline along the Baltic Sea is indented with many small islands and two larger ones, Gotland and Öland. Lakes are numerous, ranging in size from small ponds to Vänern, the third largest lake in Europe.
Most of northern and central Sweden, roughly north of the large river Dalälven, constitutes the Norrland terrain, which consists of large, barren areas of hilly and mountainous land gradually rising from the Gulf of Bothnia to the Scandinavian Mountains (or Scandes) in the west. These mountains, which form the border with Norway in the north, are mostly around 1000 meters in height, but Kebnekaise reaches 2097 meters, making it the tallest mountain in Sweden and northern Scandinavia. The geology of the Scandes is quite diverse, which is often reflected in differences in the flora.
South of Dalälven is a low-lying area surrounding the large lakes Mälaren and Hjälmaren. The soils in this area are clayey and fertile, having originated from marine deposits during the latest glaciation. Due to the rich soils, this area became one of the main agricultural regions in Sweden. To the south, there are some minor hilly and barren areas, such as Tiveden. East and west of Lake Vättern are intensively cultivated plains on sedimentary rock. To the south of this region, the land rises again to the South Swedish highlands, a terrain of mostly barren hills reaching 377 meters. The southernmost province of Scania differs from the rest of Sweden in consisting almost entirely of mostly flat, arable land, and also in its complex geology, which includes Mesozoic rocks and abrasion coasts. The rest of Sweden mostly consists of gneiss and granite, sometimes forming archipelagos (Sw. "skärgård") of fairly small, bare, rounded rocks in the northern part of the west coast and around Stockholm. The Baltic islands Öland and Gotland consist almost entirely of Ordovician and Silurian limestone, respectively.
Climate
Despite its northerly latitude, most parts of Sweden have a temperate climate with few temperature extremes. Climatically, the country can be divided into three regions: the northernmost part has a subarctic climate, the central part a humid continental climate, and the southernmost part an oceanic climate. The country is much warmer and drier than other places at a similar latitude, mainly because of the combination of the Gulf Stream and the general westerly direction of the wind. The northern half of the country gets less rainfall than Norway because of the rain shadow effect caused by the Scandinavian Mountains.
Biodiversity
There are an estimated 55,000 species of animals and plants in terrestrial habitats in Sweden; this relatively low number is attributed to the cold climate. These include 73 species of mammal, about 240 breeding bird species (and another 60 or so non-breeding species which can be seen rarely or annually), 6 species of reptile, 12 species of amphibian, 56 species of freshwater fish, around 2000 species of vascular plants, close to 1000 species of bryophyte, and over 2000 lichens. Sweden had a 2019 Forest Landscape Integrity Index mean score of 5.35/10, ranking it 103rd globally out of 172 countries.
Flora and vegetation
Beech (Fagus sylvatica) is the dominant tree species in the region of Skåne and along a narrow strip of the west coast. This is called the nemoral zone. Forest herbs in this zone typically vegetate and flower in spring, as the crown of beech is very dense and little light reaches the ground once the leaves appear. Examples are Anemone spp. and Corydalis spp. Oak (Quercus robur and Quercus petraea) forests occur on poor soils. Forests of alder (Alnus glutinosa), ash (Fraxinus excelsior), and elm (Ulmus glabra) grow in nutrient-rich, often wet soil, but most of these areas have long since been drained and converted to arable fields.
Most of Sweden below the mountains is covered by conifer forests and forms part of the circumboreal zone. South of the river Dalälven, there are scattered deciduous trees like oak (Quercus robur), and this zone is referred to as boreo-nemoral. North of Dalälven, in the proper boreal (taiga) zone, deciduous trees are rarer, but birches (Betula pubescens and Betula pendula) and aspen (Populus tremula) may be abundant in early successional stages, such as after a fire or in recently clear-cut areas. There are a total of four native conifers in Sweden, and of these only Norway spruce (Picea abies) and Scots pine (Pinus sylvestris) form forests, in pure or mixed stands. Spruce grows in wetter soils and pine in drier soils, but in bogs there are often numerous stunted pines. The undergrowth in a spruce forest commonly consists of almost pure stands of bilberry (Vaccinium myrtillus). In wetter areas, ferns (e.g., Athyrium filix-femina and Dryopteris spp.) are abundant, and in richer soils, herbs (e.g., Paris quadrifolia, Actaea spicata) and broad-leaved grasses (e.g., Milium effusum) are more common. In pine forests, lingonberries (Vaccinium vitis-idaea), heather (Calluna vulgaris) and/or Cladonia lichens are most common. Fires occur at irregular intervals and usually kill all spruce and most pines. Fireweed (Epilobium angustifolium), raspberry (Rubus idaeus), and Geranium bohemicum are among the first plants to germinate in the ashes.
In the mountains, the conifers are replaced by birch (Betula pubescens ssp. tortuosa), which forms the tree line in most areas. The undergrowth in these forests is quite variable. Under wet and nutrient-rich conditions, luxuriant vegetation may develop, consisting of tall herbs such as Aconitum septentrionale, Angelica archangelica, and Cicerbita alpina. Above the birch forest, starting at 300–1000 meters depending on latitude, there are usually willow thickets, and above these can be found alpine heath or meadows, the former dominated by dwarf shrubs of the family Ericaceae, the latter by sedges, rushes and various herbs such as Saxifraga spp., Dryas octopetala and Draba spp. Ranunculus glacialis reaches the highest altitude of all plants in Sweden, often growing near the ever-shrinking glaciers.
Wetlands cover large areas in Sweden. In the south, raised bogs are a common variety, of which a striking example is Store Mosse. These bogs largely consist of living and dead Sphagnum spp., with scattered dwarf shrubs and sedges such as Eriophorum vaginatum. In the wet southwest, Narthecium ossifragum and Erica tetralix occur in the bogs, while in the north and the east, the dwarf birch Betula nana and Ledum palustre, an evergreen shrub, are common. Rich fens, with many sedges and orchids, are rather rare, except on Gotland and Öland, two large limestone islands in the Baltic, where Cladium-dominated fens are common. In the north of Sweden, there are many large mire complexes with both fen-like and bog-like parts. The largest is found in Sjaunja, a nature reserve in Lapland.
Sweden has as many as 90,000 lakes larger than one hectare. Most of these are either nutrient-poor with clear water and few plants (e.g. Lobelia dortmanna and Isoëtes spp.), like Lake Vättern, or small ponds with brown water surrounded by floating mats of bog vegetation (e.g. sedges and Menyanthes trifoliata). Nutrient-rich lakes are found mostly in the south and typically have dense reed stands, other emergent plants (e.g. Iris pseudacorus and Sparganium erectum), free-floating plants such as Hydrocharis morsus-ranae and Stratiotes aloides, and submerged vegetation with spp. of Potamogeton, Ranunculus, and others. The best-known lakes in this category are undoubtedly Tåkern and Hornborgasjön.
The coast of Sweden is long, and conditions are quite different at its endpoints. Near the Norwegian border, conditions are typical of the North Atlantic, turning to subarctic near the Finnish border, where salinity is down to 0.1–0.2%. A common seashore species there is the endemic, tussock-forming grass Deschampsia bottnica, which survives the destructive force of sea ice up to 2 meters thick. Common submerged vascular plants in this area, the Gulf of Bothnia, include Myriophyllum sibiricum, Callitriche hermaphroditica and Stuckenia pectinata. On the west coast, one may instead find Zostera marina in similar localities. Diversity, abundance and size of red (Rhodophyta) and brown (Phaeophyta) algae decrease drastically with salinity, while Charophyceae (of the green algae, the Chlorophyta) thrive in the brackish waters of the Baltic.
Fauna
According to the IUCN Red List, terrestrial mammals occurring in Sweden include the European hedgehog, the European mole (only in the south), six species of shrews, and eighteen species of bats. The mountain hare, the Eurasian beaver, the red squirrel, as well as about fourteen species of smaller rodents occur in Sweden as well. Of the ungulates, wild boar, red deer, moose, and roe deer are found in the country, as well as semi-domesticated reindeer. Terrestrial carnivores include the brown bear, the Eurasian wolf, and the red fox; in the mountains, the Arctic fox, as well as the Eurasian lynx, the European badger, the Eurasian otter, the stoat, the least weasel, the European polecat, and the European pine marten; and, in the north, the wolverine. The coast is inhabited by three species of seal: the harbour seal in the south and west, the ringed seal in the Gulf of Bothnia, and the grey seal throughout. The porpoise is the only whale that breeds in Swedish waters. The European rabbit, the European hare, and the fallow deer were deliberately introduced, while the raccoon dog, mink, muskrat, brown rat, and house mouse were unintended introductions.
All these introductions, perhaps except the fallow deer, have been "successful," resulting in viable populations. Sweden's Red List of critically endangered mammals includes Bechstein's bat, the common pipistrelle and the Arctic fox, while endangered mammals include the barbastelle, the serotine bat, the pond bat, the lesser noctule, and the wolf. Listed as vulnerable are the Eurasian otter, the wolverine, the harbour seal, the harbour porpoise, and Natterer's bat.
According to Avibase: Bird Checklists of the World, 535 species of bird have been recorded in Sweden, but fewer than half of these breed regularly. Many of them are migratory, making their way between Arctic breeding grounds and overwintering quarters in Europe and Africa. Birds that breed and overwinter in Sweden include tits, corvids, Galliformes, owls and several birds of prey. Canada geese (Branta canadensis) and pheasants (Phasianus colchicus) have been deliberately introduced.
The only endemic fish in Sweden is the critically endangered freshwater Coregonus trybomi, still surviving in only a single lake. Amphibians found in Sweden include eleven species of frogs and toads and two species of newt, while reptiles include four species of snake and three species of lizard. They are all protected under the law. Sweden has an estimated 108 species of butterflies, 60 species of dragonflies, and 40 species of wood-boring beetles.
Conservation
Some of the significant challenges Swedish wildlife faces include:
Lack of protection for the few remaining old-growth forests, particularly in the north, which severely impacts lichens, mosses, and insects.
Use of alien species such as the lodgepole pine (Pinus contorta) in forestry, potentially outcompeting the native Scots pine and Norway spruce.
Invasive species, such as Carassius gibelio, Colpomenia peregrina, and Dasya baillouviana.
Introduction of forest trees of foreign provenance of native species, potentially causing genetic pollution.
Exploitation of hydroelectric power, causing drastic changes in water-level dynamics and possibly leading to the loss of various vegetation types and species, particularly vascular plants.
Draining of wet forests (home to many forest species in several categories) in connection with timber extraction.
Draining of mires for peat extraction.
Large-scale exploitation of mineral resources, such as limestone on Gotland and ultrabasic rock in the mountains, threatening rare and endangered organisms and landscapes.
Removal of deciduous trees, which have a key role in maintaining biodiversity in boreal forests.
Overgrowth of (wet or dry) meadows and pastures.
Removal of dead wood, along with its fungi and insects.
Additionally, climate change is likely to affect the country's biodiversity, with the treeline moving further north and to higher altitudes, and forests replacing tundra. The melting of ice will increase runoff, affecting wetlands. With a rise in sea level, the Baltic Sea will receive a greater inflow of saline water.
References
Sweden
Biota of Sweden
Wildlife of Sweden
Biology
3,040
23,157,224
https://en.wikipedia.org/wiki/Spacefiller
In Conway's Game of Life and related cellular automata, a spacefiller is a pattern that spreads out indefinitely, eventually filling the entire space with a still life pattern. It typically consists of three components: stretchers that resemble spaceships at the four corners of the pattern, a growing boundary region along the edges of the pattern, and the still life in the interior of the pattern. It resembles a breeder in that both types of patterns have a quadratic growth rate in their numbers of live cells, and in both having a three-component architecture. However, in a breeder the moving part (corresponding to the stretcher) leaves behind a fixed sequence of glider guns, which fill space with moving objects (gliders or spaceships) rather than still life patterns. With a spacefiller, unlike a breeder, every point in the space eventually becomes part of the space-filling still life pattern. A sketch of the underlying Game of Life update rule follows.
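This is a generic illustration of the automaton's rule, not an implementation of any particular spacefiller; the function name and the glider check are my own choices. The sketch advances a set of live-cell coordinates by one generation and verifies the well-known fact that a glider shifts one cell diagonally every four generations.

```python
# A minimal Game of Life step on a set of live cell coordinates.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) tuples."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# Example: a glider, advanced four generations, reappears shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
assert g == {(x + 1, y + 1) for (x, y) in glider}
```

A spacefiller evolved with the same step function grows quadratically in its number of live cells, in line with the growth rate described above, whereas the glider here merely translates.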
References
Cellular automaton patterns
Spacefiller
Technology
197
33,855,225
https://en.wikipedia.org/wiki/Biological%20specimen
A biological specimen (also called a biospecimen) is a biological laboratory specimen held by a biorepository for research. Such a specimen would be taken by sampling so as to be representative of any other specimen taken from the source of the specimen. When biological specimens are stored, ideally they remain equivalent to freshly collected specimens for the purposes of research. Human biological specimens are stored in a type of biorepository called a biobank, and the science of preserving biological specimens is most active in the field of biobanking.
Quality control
Setting broad standards for the quality of biological specimens was initially an underdeveloped aspect of biobank growth. There is currently discussion about what standards should be in place and who should manage those standards. Since many organizations set their own standards, and since biobanks are necessarily used by multiple organizations and typically are driven towards expansion, the harmonization of standard operating procedures for lab practices is a high priority. The procedures have to be evidence-based and will change with time as new research and technology become available.
Policy makers
Some progress in the creation of policy-making organizations includes the National Cancer Institute's 2005 creation of the Office of Biorepositories and Biospecimen Research (OBBR) and the annual Biospecimen Research Network Symposia. The International Society for Biological and Environmental Repositories, the International Agency for Research on Cancer, the Organisation for Economic Co-operation and Development, and the Australasian Biospecimen Network have also proposed policies and standards. In 2008 AFNOR, a French standardization organization, published the first biobank-specific quality standard. Aspects of ISO 9000 have been applied to biobanks.
Quality goals
Quality criteria for specimens depend on the study being considered, and there is no universal standard specimen type. DNA integrity is an important factor for studies which involve whole genome amplification. RNA integrity is critical for some studies and can be assessed by gel electrophoresis. Biobanks, which store specimens, cannot take full responsibility for specimen integrity, because before they take custody of samples someone must collect and process them, and effects such as RNA degradation are more likely to result from delayed sample processing than from inadequate storage.
Samples stored
Biorepositories store various types of specimens. Different specimens are useful for different purposes.
Storage techniques
Many specimens in biobanks are cryopreserved. Other specimens are stored in other ways.
Techniques associated with biobanks
Some of the laboratory techniques associated with biological specimen storage include phenol-chloroform extraction, PCR, and RFLP.
See also
Zoological specimen
References
External links
Biospecimen research database, a curated collection of articles about biospecimens
Office of Biorepositories and Biospecimen Research
Specimen Central biorepository list, a worldwide listing of active biobanks and biorepositories
Biospecimen Research Network Symposia, a conference on biobank specimens
Mayo Clinic on biobanking
Short public TV episode on museum collections
Biospecimen Collection Services
Biobanks
Biological specimen
Biology
637
1,817,986
https://en.wikipedia.org/wiki/Advertising%20campaign
An advertising campaign or marketing campaign is a series of advertisement messages that share a single idea and theme which make up an integrated marketing communication (IMC). An IMC is a platform in which a group of people can group their ideas, beliefs, and concepts into one large media base. Advertising campaigns utilize diverse media channels over a particular time frame and target identified audiences.
The campaign theme is the central message that will be received in the promotional activities and is the prime focus of the advertising campaign, as it sets the motif for the series of individual advertisements and other marketing communications that will be used. The campaign themes are usually produced with the objective of being used for a significant period, but many of them are short-lived due to factors such as ineffectiveness, market conditions, competition and the marketing mix.
Advertising campaigns are built to accomplish a particular objective or a set of objectives. Such objectives usually include establishing a brand, raising brand awareness, and increasing the rate of conversions/sales. The rate of success or failure in accomplishing these goals is reckoned via effectiveness measures. There are five key points that an advertising campaign must consider to ensure an effective campaign: integrated marketing communications, media channels, positioning, the communications process diagram and touch points.
Integrated marketing communication
Integrated marketing communication (IMC) is a conceptual approach used by the majority of organizations to develop a strategic plan for how they are going to broadcast their marketing and advertising campaigns. Recently there has been a shift in the way marketers and advertisers interact with their consumers; many now see it as a conversation between advertising/marketing teams and consumers. IMC has emerged as a key strategy for organizations to manage customer experiences in the digital age, since organizations can communicate with people in more ways than those typically thought of as media. The more traditional advertising practices such as newspapers, billboards, and magazines are still used but fail to have the same effect now as they did in previous years. Current research shows that no other form of commercial communication shares the same essential elements as the mobile forms, making them unique in their advertising impact.
The importance of the IMC is to make the marketing process seamless for both the brand and the consumer. IMC attempts to meld all aspects of marketing into one cohesive piece. This includes sales promotion, advertising, public relations, direct marketing, and social media. The entire point of IMC is to have all of these aspects of marketing work together as a unified force. This can be done through methods, channels, and activities, all while using a media platform. The end goal of IMC is to get the brand's message across to consumers in the most convenient way possible.
The advantage of using IMC is that it can communicate the same message through several channels to create brand awareness. IMC is the most cost-effective solution when compared to mass media advertising for interacting with target consumers on a personal level. IMC also benefits small businesses, as they are able to immerse their consumers with communication of various kinds in a way that pushes them through the research and buying stages, creating a relationship and dialogue with their new customers.
Popular and obvious examples of IMC put into action include direct marketing to a consumer whom the organization already knows is interested in the brand, by gathering personal information about them from when they previously shopped there and then sending mail, emails, texts and other direct communication. In-store sales promotions are tactics such as '30% off' sales or offering loyalty cards to consumers to build a relationship. Television and radio advertisements are also a form of advertising strategy derived from IMC. All of the components of IMC play an important role, and a company may or may not choose to implement any of the integration strategies.
Media channels
Media channels, also known as marketing communications channels, are used to create a connection with the target consumer and influence behavior. Traditional methods of communication with the consumer include newspapers, magazines, radio, television, billboards, telephone, post, and door-to-door sales; these are just a few of the historically traditional methods.
Along with traditional media channels come new and upcoming media channels. Social media has begun to play a very large role in the way media and marketing intermingle to reach a consumer base. Social media has the power to reach a wider audience. Depending on the age group and demographic, social media can influence a company's overall image. Using social media as a marketing tool has become a widely popular branding method. A brand has the chance to create an entire social media presence based around its own specific targeted community.
With advancements in digital communications channels, marketing communications allow for the possibility of two-way communication where an immediate consumer response can be elicited. Digital communications tools include websites, blogs, social media, email, mobile, and search engines, to name a few examples. It is important for an advertising campaign to carefully select channels based on where its target consumers spend time, to ensure that marketing and advertising efforts are maximized. Marketing professionals should also consider the cost of reaching the target audience and the timing (i.e. advertising during the holiday season tends to be more expensive).
Modern-day implications for the advantages and disadvantages of traditional media channels
In the rapidly changing marketing and advertising environment, exposure to certain consumer groups and target audiences through traditional media channels has blurred. These traditional media channels are defined as print, broadcast, out-of-home and direct mail. The introduction of various new modern-day media channels has altered their traditional advantages and disadvantages. It is imperative to the effectiveness of the integrated marketing communication (IMC) strategy that exposure to certain demographics, consumer groups and target audiences is anticipated, to provide clarity, consistency, and maximum communications impact.
Print media
Print media is mainly defined as newspapers and magazines. With the transition, roughly between 2006 and 2016, to digital information on phones, computers and tablets, the main demographic that is still exposed to traditional print media is older. It is also estimated that there will be a reduction of print material in the coming years, as print media moves online. Advertisers need to consider this; in some cases, they could use it to their advantage.
The advantages of newspaper advertising are that it is low cost, timely, the reader controls exposure, and it provides moderate coverage of the older generations in western society. Disadvantages are the aging demographic, short life, clutter and the fact that it attracts less attention. Magazines are similar in some cases, but as they are a niche product they increase segmentation potential; they also have high information content and longevity. Disadvantages are that they are visual only, lack flexibility, and have a long lead time for advertisement placement.
Broadcast media
Traditional broadcast media's primary platforms are television and radio. These are still relatively prominent in modern-day society, but with the emergence of online content such as YouTube and Instagram, it would be difficult to anticipate where the market is headed in the next decade. Television's advantages are that it has mass coverage, high reach, a quality reputation, low cost per exposure, and impact on the human senses. Disadvantages are that it has low selectivity, a short message life and high production costs. Alternatively, radio offers flexibility, high frequency and low advertising and production costs. Disadvantages of radio are that it is audio only, has low attention-getting ability, and carries only short messages.
Out-of-home (OOH) media
This is a broad marketing concept that is no longer confined to large, static billboards on the side of motorways. More current and innovative approaches to OOH media range from street furniture to aerial blimps and the advance of digital OOH. As the world changes, there will always be new ways in which a campaign can revitalize this media channel. Its potential advantages are accessibility and reach, geographic flexibility and relatively low cost. Disadvantages of OOH media are that it has a short life, is difficult to measure and control, and can convey a poor brand image.
Direct mail
Direct mail consists of messages sent directly to consumers through the mail delivery service. It is one of the more "dated" media channels. In the modern day it has few advantages, except that it can be highly selective and has high information content. Disadvantages are that it promotes a poor brand image ("junk mail") and has a high cost-to-contact ratio.
Target market
When an organisation begins to construct its advertising campaign, it needs to research each and every aspect of its target market and target consumers. The target consumers (or "potential customers") are the people who are most likely to buy from an organisation. They can be categorized by several key characteristics: mainly gender, age, occupation, marital status, geographical location, behavior, and level of income and education. This process is called segmenting customers on the basis of demographics.
Defining the target market helps businesses and individuals design a marketing campaign. This in turn helps businesses and individuals avoid waste and get their advertisements to likely customers. While attempting to find the correct target market, it is important to focus on specific groups of individuals that will benefit. By marketing to specific groups of individuals that specifically relate to the product, businesses and individuals will more quickly and efficiently find those who will purchase the product. Businesses and individuals that monitor their existing data (customer and sales data) will find it easier to define their target market, and surveying existing customers will assist in finding more customers.
Avoiding inefficiencies when finding a target market is equally important. Wasting time and money advertising to a large group of potential customers is inefficient if only a handful become customers. A focused plan that reaches a tiny audience can work out well if that audience is already interested in the product. Over time, target markets can change. People interested today might not be interested tomorrow, and those not interested at the present time might become interested over time. Analysing sales data and customer information helps businesses and individuals understand when their target market is increasing or decreasing.
There are many advantages associated with finding a target market. One advantage is the "ability to offer the right product" (Suttle, R. 2016) through knowing the age and needs of the customer willing to purchase the item. Another advantage of target marketing is that it assists businesses in understanding what price the customer will pay for the product or service. Businesses are also more efficient and effective at advertising their product, because they "reach the right consumers with messages that are more applicable" (Suttle, R. 2016).
However, there are several disadvantages that can be associated with target marketing. Firstly, finding a target market is expensive. Often businesses conduct primary research to find out who their target market is, which usually involves hiring a research agency, which can cost "tens of thousands of dollars" (Suttle, R. 2016). Finding one's target market is also time-consuming, as it often "requires a considerable amount of time to identify a target audience" (Suttle, R. 2016). Focusing on finding a target market can also lead one to overlook other customers that may be interested in a product. Businesses or individuals may find that their "average customer" profile does not include those that fall just outside the average customer "demographics" (Suttle, R. 2016), which will limit the sale of their products. The last disadvantage to note is the ethical ramifications associated with target marketing. An example of this would be a "beer company that may target less educated, poorer people with larger-sized bottles" (Suttle, R. 2016).
Positioning
In advertising, various brands compete to be the most important brand to the consumer. Every day, consumers view advertising and rank particular brands compared to their competitors. Individuals rank these specific brands in an order of what is most important to them. For example, a person may compare brands of cars based on how sporty they look, affordability, practicality and classiness. How one person perceives a brand is different from how another does, but it is largely left to the advertising campaign to shape and create the perception that it wants a consumer to envision.
Positioning is an important marketing concept that businesses implement to market their products or services. The positioning concept focuses on creating an image that will best attract the intended audience. Businesses that implement the positioning concept focus on promotion, price, placement and product. When the positioning concept is effective and productive, it elevates the marketing efforts made by a business and assists the buyer in purchasing the product.
The positioning process is imperative in marketing because a specific level of consumer-based recognition is involved. A company must create a trademark brand for itself in order to be recognizable by a broad range of consumers.
For example, a fast food restaurant positions itself as fast, cheap, and delicious, playing upon its strengths and most visible characteristics. On the other hand, a luxury car brand will position itself as a stylish and expensive platform because it wants to target a specific market very different from the fast food brand.
For the positioning concept to be effective, one must focus on the concepts of promotion, price, place and product. There are three basic objectives of promotion: presenting product information to targeted business customers and consumers, increasing demand among the target market, and differentiating a product and creating a brand identity. Tools that can be used to achieve these objectives are advertising, public relations, personal selling, direct marketing, and sales promotion.
The price of an object is crucial in the concept of positioning. Adjusting or decreasing the product price has a profound impact on the sales of the product, and it should complement the other parts of the positioning concept. The price needs to ensure survival, increase profit, generate sales, gain market share, and establish an appropriate image.
Promoting a product is essential in the positioning concept. It is the process marketers use to communicate their products' attributes to the intended target market. In order for products to be successful, businesses must focus on customer needs, competitive pressures, available communication channels and carefully crafted key messages.
Product positioning presents several advantages in the advertising campaign, and to the businesses and individuals that implement it. Positioning connects the superior aspects of a product and matches "them with consumers more effectively than competitors" (Jaideep, S. 2016). Positioning can also help businesses or individuals realise the consumer's expectations of the products they are willing to purchase from them. Positioning a product reinforces the company's name, product and brand. It also makes the brand popular and strengthens customer loyalty. Product benefits to customers are better advertised through positioning the product, which results in more interest and attention from consumers. This also attracts different types of consumers, as products possess different benefits that attract different groups of consumers; for example, a shoe that is advertised for playing sports, going for walks, hiking and casual wear will attract different groups of consumers. Another advantage of positioning is the competitive strength it gives to businesses and individuals and their products, introducing new products successfully to the market and communicating new and varied features that are added to a product later on.
Communication process diagram
The communication process diagram refers to the order of operations through which an advertising campaign structures the flow of communication between a given organisation and the consumer. The diagram usually flows left to right (unless shown in a circular array), starting with the source. An advertising campaign uses the communication process diagram to ensure all the appropriate steps of communication are taken in order. The source is the person or organisation that has a message they want to share with potential consumers. An example of this is Vodafone wanting to tell their consumers and new consumers about a new monthly plan.
The diagram then moves on to encoding, which consists of the organisation putting messages, thoughts and ideas into a symbolic form that can be interpreted by the target consumer, using symbols or words.
The third stage in the diagram is the channel message. This occurs when the information or meaning the source wants to convey is put into a form that can easily be transmitted to the targeted audience. This also includes the method by which communication gets from the source to the receiver. Examples of this are Vodafone advertising on TV, at bus stops and on university campuses, as students may be the intended consumers for the new plan.
Decoding is the process by which the viewer interprets the message that the source sent. Obviously it is up to the source to ensure that the message is encoded well enough so that it is received as intended.
The receiver is also known as the viewer or potential consumer. This is the person who interprets the source message through channeling, whether they are the intended target audience or not. Every day we interpret different advertisements even if we are not the target audience for them.
In between these steps there are external factors acting as distractions; these factors are called noise. Noise distorts the way the message gets to the intended target audience. These distractions come from all other forms of advertising and communication from every other person or organisation. Examples of noise include state of mind, unfamiliar language, unclear message, values, attitudes, perceptions, culture and knowledge of similar products or services, to name a few.
Finally there is the response or feedback. This is the receiver's reaction to the communicated message and the way they understood it. Feedback relates to the way sales react as well as the interest or questions that arise in relation to the message put out. A toy model of this whole flow is sketched below.
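Purely as an illustration, the flow just described can be mocked up in a few lines of Python. The stage names follow the diagram; everything else (the word-level "noise" model, the example message) is an invented stand-in, not part of any marketing framework.

```python
# Toy model of source -> encode -> channel (+ noise) -> decode -> feedback.
import random

def encode(idea: str) -> list:
    """Source puts its idea into symbolic form (here: words)."""
    return idea.split()

def channel(symbols: list, noise_level: float) -> list:
    """Transmission; noise makes parts of the message drop out."""
    return [s for s in symbols if random.random() > noise_level]

def decode(symbols: list) -> str:
    """Receiver reassembles whatever survived transmission."""
    return " ".join(symbols)

def feedback(received: str, intended: str) -> str:
    """Receiver's response tells the source how well the message landed."""
    return "understood" if received == intended else "message distorted"

random.seed(1)
idea = "new monthly plan for students"
received = decode(channel(encode(idea), noise_level=0.3))
print(received, "->", feedback(received, idea))
```

The point of the sketch is only that higher noise levels make the decoded message diverge further from the source's intent, which is what the diagram's "noise" stage captures.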
Touch points
When considering touch points in an advertising campaign, a brand looks for multisensory touch points. These touch points help the brand to develop a point of contact between itself and the consumer. Modern-day advancements in various forms of technology have made it easier for consumers to engage with brands in numerous ways. The most successful touch points are those that create value in the relationship between consumer and brand. Common examples of touch points include social media links, QR codes, people handing out flyers about a particular brand, billboards, websites and various other methods that connect the brand and consumer. The most effective touch points, as found in Effie Award-winning campaigns, are: interactive (91%), followed by TV (63%), print (52%) and consumer involvement (51%). Multisensory touch points are subconscious yet help us to recognise brands through characteristics identified by the human senses. These characteristics could be shapes, colours, textures, sounds, smells or tastes associated with a given brand. It is important for an advertising campaign to consider sensory cues, as marketplaces continue to become increasingly competitive and crowded. Any one of the given sensory characteristics may remind a person of the brand they best associate it with. A prime example of this is Red Bull, which uses the colour, shape and size of its cans to best relate its product to success and winning: a taller can looks like the first-place podium when placed next to competitors, and the design resembles the finish flag in racing, representing winning. The opportunity for an advertising campaign to succeed is significantly increased with the use of multisensory touch points as a point of difference between brands.
Guerrilla marketing
Guerrilla marketing is an advertising strategy which increases brand exposure through the use of unconventional campaigns which initiate social discussion and "buzz". This can often be achieved with lower budgets than conventional advertising methods, allowing small and medium-sized businesses the chance to compete against larger competitors. Through unconventional methods, inventiveness and creativity, guerrilla marketing leaves the receiver with a long-lasting impression of the brand, as most guerrilla marketing campaigns target receivers at a personal level, taking them by surprise, and may incorporate an element of shock. Guerrilla marketing is typically executed in public places, including streets, parks and shopping centres, to ensure a maximum audience, resulting in further discussion on social media.
Guerrilla marketing is the term used for several types of marketing categories including street marketing, ambient marketing, presence marketing, alternative marketing, experimental marketing, grassroots marketing, flyposting, guerrilla projection advertising, undercover marketing and astroturfing.
Jay Conrad Levinson coined the term guerrilla marketing with his 1984 book of the same name. Through enhancements in technology and the common use of the internet and mobile phones, marketing communication has become more affordable, and guerrilla marketing is on the rise, allowing the spread of newsworthy guerrilla campaigns.
When establishing a guerrilla marketing strategy, there are seven elements to a clear and logical approach. Firstly, write a statement that identifies the purpose of the strategy. Secondly, define how the purpose will be achieved, concentrating on the key advantages. Next, Levinson (1989) suggests writing a descriptive summary of the target market or consumers. The fourth element is to establish a statement that itemizes the marketing tools and methods planned for use in the strategy (for example, radio advertising during 6.30am – 9am on weekday mornings, or window displays that are regularly updated). The fifth step is to create a statement which positions the brand/product/company in the market. Defining the brand's characteristics and giving it an identity is the sixth element. Lastly, clearly identify a budget which will be put solely towards marketing going forward.
For a successful overall guerrilla marketing campaign, combine the above steps with seven winning actions. These seven principles are: commitment – stick to the marketing plan without changing it; investment – appreciate that marketing is an investment; consistency – ensure the marketing message and strategy remain consistent across all forms of media; confidence – show confidence in the commitment to the guerrilla marketing strategy; patience – time and dedication to the strategy; assortment – incorporate different methods of advertising and marketing for optimum results; and subsequent – build customer loyalty and retention through follow-up marketing post-sale.
Levinson suggests guerrilla marketing tactics were initiated to enable small businesses with limited financial resources to gain an upper hand on the corporate giants who had unlimited budgets and resources at their disposal.
Large companies cottoned on to the success of guerrilla marketing and have run hundreds of effective, attention-grabbing campaigns using the strategies originally designed for smaller businesses with minimal marketing budgets. Non-traditional, unconventional and shocking campaigns are highly successful in obtaining media coverage and therefore brand awareness, albeit with good or bad media attention. However, like most marketing strategies, a bad campaign can backfire and damage profits and sales. Undercover marketing and astroturfing are two types of guerrilla marketing that are deemed risky and can be detrimental to the company.
"Advertising can be dated back to 4000 BC where Egyptians used papyrus to make sales messages and wall posters. Traditional advertising and marketing slowly developed over the centuries but never bloomed until early 1900s" ("What Is Guerrilla Marketing?", 2010). Guerrilla marketing is relatively simple: use tactics to advertise on a very small budget. The aim is to make a campaign that is "shocking, funny, unique, outrageous, clever and creative that people can't stop talking about it" (UK Essays, 2016). Guerrilla marketing is different when compared to traditional marketing tactics (Staff, 2016). "Guerrilla marketing means going after conventional goals of profits, sales and growth but doing it by using unconventional means, such as expanding offerings during gloomy economic days to inspire customers to increase the size of each purchase" (Staff, 2016). Guerrilla marketing also suggests that rather than investing money, it is better to "invest time, energy, imagination and knowledge" (Staff, 2016) instead. Guerrilla marketing puts profit, not sales, as its main focal point; this is done to spur geometric growth by enlarging the size of each transaction. This is all done through one of the most powerful marketing weapons around, the telephone. Research shows that it always increases profits and sales.
The term "guerrilla first appeared during the war of independence in Spain and Portugal at the beginning of the 19th century, it can be translated as battle" (UK Essays, 2016). Even though guerrilla marketing was aimed at small businesses, this did not stop bigger businesses from adopting the same ideology. "Larger business has been using unconventional marketing to complement their advertising campaigns, even then some marketers argue that when bigger business utilize guerrilla marketing tactics, it isn't true guerrilla" ("What Is Guerrilla Marketing?", 2010). The reason is that larger companies have bigger budgets and usually have well-established brands. In some cases, it is far riskier for a larger business to use guerrilla marketing tactics, since a problem can arise when their stunts become a flop; smaller businesses do not run as much risk, as most people will just write a failed stunt off as just another failed stunt.
Many methods in guerrilla marketing consist of "graffiti (or reverse graffiti, where a dirty wall is selectively cleaned), interactive displays, intercept encounters in public spaces, flash mobs, or various PR stunts are often used." Small businesses use social media as a form of marketing, "collecting billions of people around the world through a series of status updates, tweets, and other rich media" ("Guerrilla Marketing Strategies for Small Businesses", 2013). Social media is a powerful tool in the world of business.
Guerrilla marketing strategies and tactics are a great and cost-effective way to "generate awareness for business, products and services. To maximize full potential of marketing efforts, it's to blend them with a powerful and robust online marketing strategy with a marketing automation software" ("Guerrilla Marketing Strategies for Small Businesses", 2013), which can boost small businesses.
Guerrilla tactics consist of instruments that have effects on marketing efforts. Some instruments exist mainly to maximize the surprise effect, and some mainly to cut advertising costs. "Guerrilla marketing is a way of increasing the number of individuals exposed to the advertising with the cost of campaign. The instrument of diffusion helps to reach a wide audience, which causes none or little cost because consumers (viral marketing) or the media (guerrilla PR) pass on the advertising message" ("Guerrilla Marketing: The Nature of the Concept and Propositions for Further Research", 2016). Guerrilla campaigns usually implement a free-ride approach; this means that, to maximize the low-cost effect, they cut their costs and increase the number of recipients simultaneously. For example, they will try to benefit from placing advertisements at big events, e.g. sporting events.
Guerrilla marketing has been regarded as targeting existing customers rather than new ones, aiming to increase their engagement with a product and/or brand. "When selecting audiences for a guerrilla message, a group that is already engaged with the product at some level is the best target; they will be quicker to recognize and respond to creative tactics, and more likely to share the experience with their friends. As social media has become a major feature of the market landscape, guerrilla marketing has shown to be particularly effective online. Consumers who regularly use social media are more likely to share their interactions with guerrilla marketing, and creative advertising can quickly go viral."
See also
References
Advertising techniques
Communication design
Promotion and marketing communications
Marketing techniques
Advertising campaign
Engineering
5,403
19,947,051
https://en.wikipedia.org/wiki/Streetcorner
A streetcorner or street corner is the location which lies adjacent to an intersection of two roads. Such locations are important in terms of local planning and commerce, usually being the locations of street signs and lamp posts, as well as being prime spots to locate a business due to visibility and accessibility from traffic going along either of the adjacent streets. One source suggests that this is especially so for a facility combining two purposes, like an automotive showroom that provides repair services as well: "For all these types of buildings, property on a street corner is most desirable as separate entrances are most easily provided for." Due to this visibility, street corners are the choice location for activities ranging from panhandling to prostitution to protests to petition signature drives, hence the term "street-corner politics". This makes street corners a good location from which to observe human activity, for purposes of learning what environmental structures best fit that activity. Sidewalks at street corners tend to be rounded, rather than coming to a point, for ease of traffic making turns at the intersection.
References
Urban planning
Streets and roads
Streetcorner
Engineering
218
58,917,570
https://en.wikipedia.org/wiki/Peter%20Rona%20%28physician%29
Peter Rona (born Peter Rosenfeld; 13 May 1871, Budapest – February or March 1945) was a Hungarian-German Jewish physician and physiologist. References Biochemists 1871 births 1945 deaths Academic staff of the Humboldt University of Berlin 20th-century Hungarian physicians Physicians from Budapest Jews executed by Nazi Germany Hungarian civilians killed in World War II
Peter Rona (physician)
Chemistry,Biology
74
2,363,287
https://en.wikipedia.org/wiki/Visual%20learning
Visual learning is one of the learning styles in Neil Fleming's VARK model, in which information is presented to a learner in a visual format. Visual learners can utilize graphs, charts, maps, diagrams, and other forms of visual stimulation to effectively interpret information. The Fleming VARK model also includes kinesthetic learning and auditory learning. There is no evidence that providing visual materials to students identified as having a visual style improves learning. Techniques A review study concluded that using graphic organizers improves student performance in the following areas: Retention Students remember information better and can better recall it when it is represented and learned both visually and verbally. Reading comprehension The use of graphic organizers helps improve reading comprehension of students. Student achievement Students with and without learning disabilities improve performance across content areas and grade levels. Thinking and learning skills; critical thinking When students develop and use a graphic organizer their higher order thinking and critical thinking skills are enhanced. Areas of the brain affected Various areas of the brain work together in many ways to produce the images that we see with our eyes and that are encoded by our brains. The basis of this work takes place in the visual cortex of the brain. The visual cortex is located in the occipital lobe of the brain and harbors many other structures that aid in visual recognition, categorization, and learning. One of the first things the brain must do when acquiring new visual information is to recognize it. Brain areas involved in recognition are the inferior temporal cortex, the superior parietal cortex, and the cerebellum. During recognition tasks, activation increases in the left inferior temporal cortex, and decreases in the right superior parietal cortex. Recognition is aided by neural plasticity, or the brain's ability to reshape itself based on new information. Next the brain must categorize the material, using three main areas to sort new visual information: the orbitofrontal cortex and two dorsolateral prefrontal regions, which begin the process of sorting new information into groups and further assimilating that information into things that you might already know. After recognizing and categorizing new material entered into the visual field, the brain is ready to begin the encoding process – the process that leads to learning. Multiple brain areas are involved in this process, such as the frontal lobe, the right extrastriate cortex, the neocortex, and the neostriatum. One area in particular, the limbic-diencephalic region, is essential for transforming perceptions into memories. With the coming together of the tasks of recognition, categorization, and learning, schemas help make the process of encoding new information and relating it to things you already know much easier. One can remember visual images much better when applying them to an already-known schema. Schemas provide enhancement of visual memory and learning. Infancy Where it starts Between the fetal stage and 18 months, a baby experiences rapid growth of a substance called gray matter. Gray matter is the darker tissue of the brain and spinal cord, consisting mainly of nerve cell bodies and branching dendrites. It is responsible for processing sensory information in brain areas such as the primary visual cortex.
The primary visual cortex is located within the occipital lobe at the back of the infant's brain and is responsible for processing visual information such as static or moving objects and pattern recognition. The four pathways Within the primary visual cortex, there are four pathways: the superior colliculus pathway (SC pathway), the middle temporal area pathway (MT pathway), the frontal eye fields pathway (FEF pathway), and the inhibitory pathway. Each pathway is crucial to the development of visual attention in the first few months of life. The SC pathway is responsible for the generation of eye movements toward simple stimuli. It receives information from the retina and the visual cortex and can direct behavior toward an object. The MT pathway is involved in the smooth tracking of objects and travels between the SC pathway and the primary visual cortex. In conjunction with the SC pathway and the MT pathway, the FEF pathway allows the infant to control eye movements as well as visual attention. It also plays a part in sensory processing in the infant. Lastly, the inhibitory pathway regulates the activity in the superior colliculus and is later responsible for obligatory attention in the infant. The maturation and functionality of these pathways depends on how well the infant can make distinctions as well as focus on stimuli. Supporting studies A study by Haith, Hazan, & Goodman in 1988 showed that babies as young as 3.5 months are able to create short-term expectations of situations they confront. Expectations in this study refer to the cognitive and perceptual ways in which an infant can forecast a future event. This was tested by showing the infant either a predictable pattern of slides or an irregular pattern of slides and tracking the infant's eye movements. A later study by Johnson, Posner, & Rothbart in 1991 showed that by 4 months, infants can develop expectations. This was tested through anticipatory looks and disengagement with stimuli. For example, anticipatory looks portray the infant as being able to predict the next part of a pattern, which can then be applied to the real-world scenario of breast-feeding: infants are able to predict a mother's movements and expect feeding, so they can latch onto the nipple for feeding. Expectations, anticipatory looks, and disengagement all show that infants can learn visually, even if it is only short term. David Roberts (2016) tested multimedia learning propositions; he found that using certain images dislocates pedagogically harmful excesses of text, reducing cognitive overloading and exploiting under-used visual processing capacities. In early childhood From ages 3–8, visual learning improves and begins to take many different forms. At the toddler age of 3–5, children's bodily actions structure the visual learning environment. At this age, toddlers are using their newly developed sensory-motor skills quite often and fusing them with their improved vision to understand the world around them. This is seen when toddlers use their arms to bring objects of interest close to their sensors, such as their eyes and faces, to explore the object further. The act of bringing objects close to their face affects their immediate view by placing their mental and visual attention on that object and blocking the view of other objects that are around them. There is an emphasis placed on objects that are directly in front of them, and thus proximal vision is the primary perspective of visual learning.
This is different from how adults utilize visual learning. The difference between toddler vision and adult vision is attributable to their body sizes and body movements, such that toddlers' visual experiences are created by their own body movement. An adult's view is broad due to their larger body size, with most objects in view because of the distance between them and the objects. Adults tend to scan a room and see everything, rather than focusing on one object only. The way a child integrates visual learning with motor experiences enhances their perceptual and cognitive development. For elementary school children aged 4–11, intellect is positively related to their level of auditory-visual integrative proficiency. The most significant period for the development of auditory-visual integration occurs between ages 5–7. During this time, the child has mastered visual-kinesthetic integration, and the child's visual learning can be applied to formal learning focused on books and reading, rather than physical objects, thus impacting their intellect. As reading scores increase, children are able to learn more, and their visual learning develops to not only focus on physical objects in close proximity to them, but also to interpret words and as such acquire knowledge by reading. In middle childhood Here we categorize middle childhood as ages 9 to 14. By this stage in a child's normal development, vision is sharp and learning processes are well underway. Most studies that have focused their efforts on visual learning have found that visual learning styles, as opposed to traditional learning styles, greatly improve the totality of a student's learning experience. Firstly, visual learning engages students, and student engagement is one of the most important factors that motivate students to learn. Visuals increase student interest with the use of graphics, animation, and video. Consequently, it has been found that students pay greater attention to lecture material when visuals are used. With increased attention to lesson material, many positive outcomes have been seen with the use of visual tactics in the classrooms of students in middle childhood. Students organize and process information more thoroughly when they learn visually, which helps them to understand the information better, and they are more likely to remember information that is learned with a visual aid. Researchers who studied teachers using visual tactics with students in middle childhood found that the students had more positive attitudes about the material they were learning. Students also showed higher test performance, higher standardized achievement scores, thinking on levels that require higher-order thinking, and more engagement. One study also found that learning about emotional events, such as the Holocaust, with visual aids increases these children's empathy. In adolescence Brain maturation into young adulthood Gray matter is responsible for generating nerve impulses that process brain information, and white matter is responsible for transmitting that brain information between lobes and out through the spinal cord. Nerve impulses are transmitted by myelin, a fatty material that grows around a cell. White matter has a myelin sheath (a collection of myelin) while gray matter does not; the sheath allows neural impulses to move swiftly and efficiently along the fiber. The myelin sheath isn't fully formed until around ages 24–26.
This means that adolescents and young adults typically learn differently, and subsequently often utilize visual aids in order to help them better comprehend difficult subjects. Learning preferences can vary across a wide spectrum. Specifically, within the realm of visual learning, they can vary between people who prefer being given learning instructions with text as opposed to those who prefer graphics. College students were tested on general factors like learning preference and spatial ability (proficiency in creating, holding, and manipulating spatial representations). The study determined that college-age individuals can report efficient learning styles and learning preferences for themselves individually. These personal assessments proved accurate, meaning that self-ratings of factors such as spatial ability and learning preference can be effective measures of how well one learns visually. Gender differences Studies have indicated that adolescents learn best through ten different styles: reading, manipulative activity, teacher explanation, auditory stimulation, visual demonstration, visual stimulation (electronic), visual stimulation (just pictures), games, social interaction, and personal experience. According to one such study, young adult males demonstrate a preference for learning through activities they are able to manipulate, while young adult females show a greater preference for learning through teachers' visual notes, graphs, and reading. This suggests that young women are more visually stimulated, while young men are more interested in information they can have direct physical control over or that is explained by spoken word. Lack of evidence Although learning styles have "enormous popularity", and both children and adults express personal preferences, there is no evidence that identifying a student's learning style produces better outcomes. There is significant evidence against the widely touted "meshing hypothesis" (that a student will learn best if taught in a method deemed appropriate for that student's learning style). Well-designed studies "flatly contradict the popular meshing hypothesis". Rather than targeting instruction to the "right" learning style, students appear to benefit most from mixed modality presentations, for instance using both auditory and visual techniques for all students. See also Learning styles Auditory learning Kinesthetic learning References External links Articles and resources about the visual learning style for students and instructors More tips for visual learners Learning methods Infographics Information technology management Neuro-linguistic programming concepts and methods
Visual learning
Technology
2,430
5,509,325
https://en.wikipedia.org/wiki/Agouti-signaling%20protein
Agouti-signaling protein is a protein that in humans is encoded by the ASIP gene. It is responsible for the distribution of melanin pigment in mammals. Agouti interacts with the melanocortin 1 receptor to determine whether the melanocyte (pigment cell) produces phaeomelanin (a red to yellow pigment) or eumelanin (a brown to black pigment). This interaction is responsible for making distinct light and dark bands in the hairs of animals such as the agouti, which the gene is named after. In other species such as horses, agouti signalling is responsible for determining which parts of the body will be red or black. Mice with wildtype agouti will be grey-brown, with each hair being partly yellow and partly black. Loss-of-function mutations in mice and other species cause black fur coloration, while mutations causing expression throughout the whole body in mice cause yellow fur and obesity. The agouti-signaling protein (ASIP) competes with alpha-melanocyte-stimulating hormone (α-MSH) for binding to melanocortin 1 receptor (MC1R) proteins, acting as a competitive antagonist. Activation by α-MSH causes production of the darker eumelanin, while activation by ASIP causes production of the redder phaeomelanin. This means that wherever and for as long as agouti is being expressed, the part of the hair that is growing will come out yellow rather than black. Function In mice, the agouti gene encodes a paracrine signalling molecule that causes hair follicle melanocytes to synthesize the yellow pigment pheomelanin instead of the black or brown pigment eumelanin. Pleiotropic effects of constitutive expression of the mouse gene include adult-onset obesity, increased tumor susceptibility, and premature infertility. This gene is highly similar to the mouse gene and encodes a secreted protein that may (1) affect the quality of hair pigmentation, (2) act as an inverse agonist of alpha-melanocyte-stimulating hormone, (3) play a role in neuroendocrine aspects of melanocortin action, and (4) have a functional role in regulating lipid metabolism in adipocytes. In mice, the wild-type agouti allele (A) presents a grey phenotype; however, many allele variants have been identified through genetic analyses, which result in a wide range of phenotypes distinct from the typical grey coat. The most widely studied allele variants are the lethal yellow mutation (Ay) and the viable yellow mutation (Avy), which are caused by ectopic expression of agouti. These mutations are also associated with yellow obese syndrome, which is characterized by early onset obesity, hyperinsulinemia and tumorigenesis. The murine agouti gene locus is found on chromosome 2 and encodes a 131 amino acid protein. This protein signals the distribution of melanin pigments in epithelial melanocytes located at the base of hair follicles, with ventral hair being more sensitive to its expression than dorsal hair. Agouti is not secreted directly by the melanocyte; it works as a paracrine factor on dermal papilla cells to inhibit the release of melanocortin. Melanocortin acts on follicular melanocytes to increase production of eumelanin, the melanin pigment responsible for brown and black hair. When agouti is expressed, production of pheomelanin dominates, the melanin pigment that produces yellow or red colored hair. Structure Agouti signalling peptide adopts an inhibitor cystine knot motif. Along with the homologous agouti-related peptide, these are the only known mammalian proteins to adopt this fold. The peptide consists of 131 amino acids.
Mutations The lethal yellow mutation (Ay) was the first embryonic lethal mutation to be characterized in mice, as homozygous lethal yellow mice (Ay/Ay) die early in development, due to an error in trophectoderm differentiation. Lethal yellow homozygotes are rare today, while lethal yellow and viable yellow heterozygotes (Ay/a and Avy/a) remain more common. In wild-type mice agouti is only expressed in the skin during hair growth, but these dominant yellow mutations cause it to be expressed in other tissues as well. This ectopic expression of the agouti gene is associated with the yellow obese syndrome, characterized by early onset obesity, hyperinsulinemia and tumorigenesis. The lethal yellow (Ay) mutation is due to an upstream deletion at the start site of agouti transcription. This deletion causes the genomic sequence of Raly, a ubiquitously expressed gene in mammals, to be lost, except for its promoter and first non-coding exon. The coding exons of agouti are thereby placed under the control of the Raly promoter, initiating ubiquitous expression of agouti, increasing production of pheomelanin over eumelanin and resulting in the development of a yellow phenotype. The viable yellow (Avy) mutation is due to a change in the mRNA length of agouti, as the expressed gene becomes longer than the normal gene length of agouti. This is caused by the insertion of a single intracisternal A particle (IAP) retrotransposon upstream of the start site of agouti transcription. A cryptic promoter in the proximal end of the element then causes agouti to be constitutively activated, and individuals present with phenotypes consistent with the lethal yellow mutation. Although the mechanism for the activation of this promoter is unknown, the strength of coat color has been correlated with the degree of gene methylation, which is determined by maternal diet and environmental exposure. As agouti itself inhibits melanocortin receptors responsible for eumelanin production, the yellow phenotype is exacerbated in both lethal yellow and viable yellow mutations as agouti gene expression is increased. Viable yellow (Avy/a) and lethal yellow (Ay/a) heterozygotes have shortened life spans and increased risks for developing early onset obesity, type II diabetes mellitus and various tumors. The increased risk of developing obesity is due to the dysregulation of appetite, as agouti mimics the action of the agouti-related protein (AGRP), which is responsible for the stimulation of appetite via hypothalamic NPY/AGRP orexigenic neurons. Agouti also promotes obesity by antagonizing melanocyte-stimulating hormone (MSH) at the melanocortin 4 receptor (MC4R), as MC4R is responsible for regulating food intake by inhibiting appetite signals. The increase in appetite is coupled to alterations in nutrient metabolism due to the paracrine actions of agouti on adipose tissue, which increase levels of hepatic lipogenesis, decrease levels of lipolysis and increase adipocyte hypertrophy. This increases body mass and leads to difficulties with weight loss as metabolic pathways become dysregulated. Hyperinsulinemia is caused by mutations to agouti, as the agouti protein functions in a calcium-dependent manner to increase insulin secretion in pancreatic beta cells, increasing risks of insulin resistance. Increased tumor formation is due to the increased mitotic rates induced by agouti, which are localized to epithelial and mesenchymal tissues. Methylation and diet intervention Correct functioning of agouti requires DNA methylation.
Methylation occurs in six guanine-cytosine (GC) rich sequences in the 5’ long terminal repeat of the IAP element in the viable yellow mutation. Methylation of a gene prevents its expression by turning off its promoter. In utero, the mother's diet can cause methylation or demethylation. When this area is unmethylated, ectopic expression of agouti occurs, and yellow phenotypes are shown because phaeomelanin is expressed instead of eumelanin. When the region is methylated, agouti is expressed normally, and grey and brown phenotypes (eumelanin) occur. The epigenetic state of the IAP element is determined by the level of methylation, as individuals show a wide range of phenotypes based on their degree of DNA methylation. Increased methylation is correlated with increased expression of the normal agouti gene. Low levels of methylation can induce gene imprinting, which results in offspring displaying phenotypes consistent with those of their parents, as ectopic expression of agouti is inherited through non-genomic mechanisms. DNA methylation is determined in utero by maternal nutrition and environmental exposure. Methyl groups are synthesized de novo but are also obtained through the diet from folic acid, methionine, betaine, and choline, as these nutrients feed into a common metabolic pathway for methyl synthesis. Adequate zinc and vitamin B12 are required for methyl synthesis, as they act as cofactors for transferring methyl groups. When inadequate methyl is available during early embryonic development, DNA methylation cannot occur, which increases ectopic expression of agouti and results in the presentation of the lethal yellow and viable yellow phenotypes, which persist into adulthood. This leads to the development of the yellow obese syndrome, which impairs normal development and increases susceptibility to chronic disease. Ensuring maternal diets are high in methyl equivalents is a key preventive measure for reducing ectopic expression of agouti in offspring. Diet intervention through methyl supplementation reduces imprinting at the agouti locus, as increased methyl consumption causes the IAP element to become completely methylated and ectopic expression of agouti to be reduced. This lowers the proportion of offspring that present with the yellow phenotype and increases the number of offspring that resemble agouti wild-type mice with grey coats. Two genetically identical mice can thus look very different phenotypically because of their mothers' diets while the mice were in utero. If a mouse carries the agouti gene and its mother ate a typical diet, the gene can be expressed and the offspring will have a yellow coat. If the same mother had eaten a methyl-rich diet supplemented with zinc, vitamin B12, and folic acid, then the offspring's agouti gene would likely become methylated; it would not be expressed, and the coat color would be brown instead. The yellow coat color is also associated with health problems in mice, including obesity and diabetes. Human homologue Agouti signaling protein (ASP) is the human homologue of murine agouti. It is encoded by the human agouti gene on chromosome 20 and is a protein consisting of 132 amino acids. It is expressed much more broadly than murine agouti and is found in adipose tissue, pancreas, testes, and ovaries, whereas murine agouti is solely expressed in melanocytes. ASP has 85% similarity to the murine form of agouti.
As ectopic expression of murine agouti leads to the development of the yellow obese syndrome, this is expected to be consistent in humans. The yellow obese syndrome increases the development of many chronic diseases, including obesity, type II diabetes mellitus and tumorigenesis. ASP has pharmacological activation similar to murine agouti, as melanocortin receptors are inhibited through competitive antagonism. Inhibition of melanocortin receptors by ASP can also occur through non-competitive mechanisms, broadening its range of effects. The function of ASP differs from that of murine agouti: ASP affects the quality of hair pigmentation, whereas murine agouti controls the distribution of the pigments that determine coat color. ASP has neuroendocrine functions consistent with murine agouti, as it acts via AgRP neurons in the hypothalamus and antagonizes MSH at MC4Rs, which reduces satiety signals. AgRP acts as an appetite stimulator and increases appetite while decreasing metabolism. Because of these mechanisms, AgRP may be linked to increased body mass and obesity in both humans and mice. Over-expression of AgRP has been linked to obesity in males, while certain polymorphisms of AgRP have been linked to eating disorders such as anorexia nervosa. The mechanism underlying hyperinsulinemia in humans is consistent with murine agouti, as insulin secretion is heightened through calcium-sensitive signaling in pancreatic beta cells. The mechanism for ASP-induced tumorigenesis remains unknown in humans. See also Agouti coloration genetics Agouti-related peptide Genomic imprinting Methylation Epigenetics References Further reading External links Peptides Peptide hormones Mammal genes Melanocortin receptor antagonists
Agouti-signaling protein
Chemistry
2,662
52,202,598
https://en.wikipedia.org/wiki/LS%20IV-14%20116
LS IV-14 116 is a hot subdwarf located approximately 2,000 light years away on the border between the constellations Capricornus and Aquarius. It has a surface temperature of approximately 34,000 ± 500 kelvins. Along with the stars HE 2359-2844 and HE 1256-2738, LS IV-14 116 forms a new group of stars called heavy-metal subdwarfs. These are thought to be stars contracting to the extended horizontal branch after a helium flash and ejection of their atmospheres at the tip of the red giant branch. Amir Ahmad and C. Simon Jeffery discovered in 2004 that the star is a variable star, and published their discovery in 2005. They detected two pulsation periods, 1950 and 2900 seconds. The star's atmosphere contains 10,000 times more zirconium (per unit mass) than the Sun's; it also has between 1,000 and 10,000 times as much strontium, germanium and yttrium as the Sun. The heavy metals are believed to be concentrated in cloud layers in the atmosphere, where the ions of each metal have a particular opacity that allows radiative levitation to balance gravitational settling. References External links Most zirconium-rich star discovered Distant Star Enveloped By Ingredients for Fake Diamonds Aquarius (constellation) B-type subdwarfs J20573887-1425437 Very rapidly pulsating hot stars Chemically peculiar stars
LS IV-14 116
Astronomy
308
16,259,862
https://en.wikipedia.org/wiki/Polynomial%20code
In coding theory, a polynomial code is a type of linear code whose set of valid code words consists of those polynomials (usually of some fixed length) that are divisible by a given fixed polynomial (of shorter length, called the generator polynomial).

Definition

Fix a finite field $GF(q)$, whose elements we call symbols. For the purposes of constructing polynomial codes, we identify a string of $n$ symbols $a_{n-1} \ldots a_1 a_0$ with the polynomial

$a(x) = a_{n-1} x^{n-1} + \cdots + a_1 x + a_0.$

Fix integers $m \le n$ and let $g(x)$ be some fixed polynomial of degree $m$, called the generator polynomial. The polynomial code generated by $g(x)$ is the code whose code words are precisely the polynomials of degree less than $n$ that are divisible (without remainder) by $g(x)$.

Example

Consider the polynomial code over $GF(2) = \{0, 1\}$ with $n = 5$, $m = 2$, and generator polynomial $g(x) = x^2 + x + 1$. This code consists of the following code words:

$0 \cdot g(x),\quad 1 \cdot g(x),\quad x \cdot g(x),\quad (x + 1) \cdot g(x),\quad x^2 \cdot g(x),\quad (x^2 + 1) \cdot g(x),\quad (x^2 + x) \cdot g(x),\quad (x^2 + x + 1) \cdot g(x).$

Or written explicitly:

$0,\quad x^2 + x + 1,\quad x^3 + x^2 + x,\quad x^3 + 2x^2 + 2x + 1,\quad x^4 + x^3 + x^2,\quad x^4 + x^3 + 2x^2 + x + 1,\quad x^4 + 2x^3 + 2x^2 + x,\quad x^4 + 2x^3 + 3x^2 + 2x + 1.$

Since the polynomial code is defined over the binary Galois field $GF(2)$, polynomial elements are represented as a modulo-2 sum and the final polynomials are:

$0,\quad x^2 + x + 1,\quad x^3 + x^2 + x,\quad x^3 + 1,\quad x^4 + x^3 + x^2,\quad x^4 + x^3 + x + 1,\quad x^4 + x,\quad x^4 + x^2 + 1.$

Equivalently, expressed as strings of binary digits, the codewords are:

00000, 00111, 01110, 01001, 11100, 11011, 10010, 10101.

This, as every polynomial code, is indeed a linear code, i.e., linear combinations of code words are again code words. In a case like this where the field is $GF(2)$, linear combinations are found by taking the XOR of the codewords expressed in binary form (e.g. 00111 XOR 10010 = 10101).

Encoding

In a polynomial code over $GF(q)$ with code length $n$ and generator polynomial $g(x)$ of degree $m$, there will be exactly $q^{n-m}$ code words. Indeed, by definition, $p(x)$ is a code word if and only if it is of the form $p(x) = q(x) \cdot g(x)$, where $q(x)$ (the quotient) is of degree less than $n - m$. Since there are $q^{n-m}$ such quotients available, there are the same number of possible code words. Plain (unencoded) data words should therefore be of length $n - m$.

Some authors, such as (Lidl & Pilz, 1999), only discuss the mapping $q(x) \mapsto q(x) \cdot g(x)$ as the assignment from data words to code words. However, this has the disadvantage that the data word does not appear as part of the code word.

Instead, the following method is often used to create a systematic code: given a data word $d(x)$ of length $n - m$, first multiply $d(x)$ by $x^m$, which has the effect of shifting $d(x)$ by $m$ places to the left. In general, $x^m d(x)$ will not be divisible by $g(x)$, i.e., it will not be a valid code word. However, there is a unique code word that can be obtained by adjusting the rightmost $m$ symbols of $x^m d(x)$. To calculate it, compute the remainder of dividing $x^m d(x)$ by $g(x)$:

$x^m d(x) = q(x) \cdot g(x) + r(x),$

where $r(x)$ is of degree less than $m$. The code word corresponding to the data word $d(x)$ is then defined to be

$c(x) := x^m d(x) - r(x).$

Note the following properties: $c(x) = q(x) \cdot g(x)$, which is divisible by $g(x)$; in particular, $c(x)$ is a valid code word. Since $r(x)$ is of degree less than $m$, the leftmost $n - m$ symbols of $c(x)$ agree with the corresponding symbols of $x^m d(x)$. In other words, the first $n - m$ symbols of the code word are the same as the original data word. The remaining $m$ symbols are called checksum digits or check bits.

Example

For the above code with $n = 5$, $m = 2$, and generator polynomial $g(x) = x^2 + x + 1$, we obtain the following assignment from data words to codewords:

000 -> 00000
001 -> 00111
010 -> 01001
011 -> 01110
100 -> 10010
101 -> 10101
110 -> 11011
111 -> 11100

Decoding

An erroneous message can be detected in a straightforward way through polynomial division by the generator polynomial, resulting in a non-zero remainder. Assuming that the code word is free of errors, a systematic code can be decoded simply by stripping away the checksum digits. If there are errors, then error correction should be performed before decoding. Efficient decoding algorithms exist for specific polynomial codes, such as BCH codes.
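The systematic encoding recipe above is mechanical enough to sketch in a few lines of code. The following Python fragment is an illustration written for this running example, not taken from any reference, and its function names are ad hoc: it stores a GF(2) polynomial as an integer bitmask with bit k holding the coefficient of x^k, implements the division-with-remainder step, and reproduces the data-word-to-codeword table for n = 5, m = 2 and g(x) = x^2 + x + 1.

def gf2_mod(dividend, divisor):
    # Remainder of polynomial division over GF(2). Subtraction over GF(2)
    # is XOR, so we repeatedly cancel the dividend's leading term by
    # XOR-ing in a shifted copy of the divisor.
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def encode(data, g, m):
    # Systematic encoding: c(x) = x^m * d(x) - (x^m * d(x) mod g(x)).
    shifted = data << m
    return shifted ^ gf2_mod(shifted, g)

n, m, g = 5, 2, 0b111                  # g(x) = x^2 + x + 1
codewords = [encode(d, g, m) for d in range(2 ** (n - m))]
for d, c in enumerate(codewords):
    assert gf2_mod(c, g) == 0          # every codeword is divisible by g(x)
    print(f"{d:03b} -> {c:05b}")
print(min(bin(c).count("1") for c in codewords if c))  # minimum weight: 2

Running the sketch prints exactly the assignment table shown above, and the last line recovers the minimum Hamming distance of 2 discussed in the next section (for a linear code this equals the minimum weight of a nonzero codeword).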
Properties of polynomial codes

As for all digital codes, the error detection and correction abilities of polynomial codes are determined by the minimum Hamming distance of the code. Since polynomial codes are linear codes, the minimum Hamming distance is equal to the minimum weight of any non-zero codeword. In the example above, the minimum Hamming distance is 2, since 01001 is a codeword, and there is no nonzero codeword with only one bit set. More specific properties of a polynomial code often depend on particular algebraic properties of its generator polynomial. Here are some examples of such properties: A polynomial code is cyclic if and only if the generator polynomial divides $x^n - 1$. If the generator polynomial is primitive, then the resulting code has Hamming distance at least 3, provided that $n \le 2^m - 1$. In BCH codes, the generator polynomial is chosen to have specific roots in an extension field, in a way that achieves high Hamming distance. The algebraic nature of polynomial codes, with cleverly chosen generator polynomials, can also often be exploited to find efficient error correction algorithms. This is the case for BCH codes. Specific families of polynomial codes Cyclic codes – every cyclic code is also a polynomial code; a popular example is the CRC code. BCH codes – a family of cyclic codes with high Hamming distance and efficient algebraic error correction algorithms. Reed–Solomon codes – an important subset of BCH codes with particularly efficient structure. References W.J. Gilbert and W.K. Nicholson: Modern Algebra with Applications, 2nd edition, Wiley, 2004. R. Lidl and G. Pilz: Applied Abstract Algebra, 2nd edition, Wiley, 1999. Coding theory
Polynomial code
Mathematics
1,068
19,923,045
https://en.wikipedia.org/wiki/Panelh%C3%A1z
Panelház (often shortened to panel) is a Hungarian term for a type of concrete block of flats (panel buildings), built in the People's Republic of Hungary and other Eastern Bloc countries. They are also known as Plattenbau in German, Panelák in Czech and Slovak, and Blok in Polish. It was the main urban housing type of the Socialist era and still dominates Hungarian cityscapes. According to the 2011 census, there were 829,177 panel apartments in Hungary (18.9% of the dwellings), home to 1,741,577 people (17.5% of the total population). Panelház are not the only type of block of flats in Hungary; as of 2014, 31.6% of Hungarians lived in flats (according to data from Eurostat). History After World War II a serious housing crisis developed in Hungary due to rapid population growth and urbanization. The exodus of the population from rural areas after the collectivization of the late 1940s and early 1950s was a particularly important source of migration. Budapest and other cities became overcrowded, and the Communist government eventually responded. After several study visits and conventions, in the early 1960s Hungary bought the large-panel system (LPS) from the Soviet Union and Denmark. The Danish technology was known as the Larsen-Nielsen system and was a common housing method in Western Europe, Turkey, and Hong Kong. By the late 1960s, Hungarian engineers had developed the country's own large-panel system (mostly based on the Soviet LPS), adapted to the Hungarian situation. The large-panel system permitted rapid construction that was not constrained by Hungary's relatively cold winters. After the 1968 Ronan Point explosion (a partial collapse of a Larsen-Nielsen-type tower block in London), Hungarian engineers modified the original system, making the structure more compact and the joints stronger. The Larsen-Nielsen system was retired in Hungary in 1970. The first, experimental panel residential building was built in Dunaújváros (a new industrial city) in 1961, followed by other blocks in Pécs and Debrecen in 1963. The first precast concrete panel work was finished in 1962 in Dunaújváros, while the first large-panel system (LPS) housing factory (these works produced nearly all parts of these buildings, including the built-in kitchen units and built-in wardrobes) was built in 1965 in Óbuda, Budapest. The first LPS building was also built in Óbuda in 1965. The structure of Hungarian cities in the immediate post-war period consisted of a historic core surrounded by mostly single-story buildings and workers' houses, predominantly on unpaved streets. The nationwide public housing program of the 1960s changed this. The Communist government demolished the single-story buildings, replacing them with panel blocks. It also created new neighbourhoods on former farmland around the cities. Panel apartments provided their inhabitants with a real improvement in living conditions. Two- and three-bedroom sunny apartments with district heating, piped hot water, and flush toilets replaced what had been predominantly one-bedroom dwellings without modern conveniences. According to the 1960 census, one-bedroom flats made up 60% of the dwellings in Budapest; this had decreased to 25% by 1990. During this period, the share of dwellings with three or more bedrooms rose from 9% to 35%. The last panel building was finished in 1993. The Hungarian government and local municipalities started renovation programs during the 2000s.
These programs insulated the panel buildings, replaced the old doors and windows with multi-layer thermo glass, renovated the heating systems, and gave the buildings more attractive exterior colours. These buildings still dominate the Hungarian cityscape. The share of panel dwellings is 31% in Budapest, 39% in Debrecen, 52% in Miskolc, 38% in Szeged, 42% in Pécs, 41% in Győr, 50% in Székesfehérvár and 60% in Dunaújváros. Former housing factories Former panel works Statistics According to the 2011 census, there were 829,177 panel flats in Hungary (777,263 inhabited, 51,914 tenantless, 18.9% of the dwellings overall), of which 548,464 flats (66.1%) were in large-panel system (LPS) buildings and 280,713 (33.9%) in precast concrete (PC) buildings (the LPS is originally unplastered, while the PC is plastered and painted). 7,423 (0.9%) flats were built before 1960, 115,471 (13.9%) in the 1960s, 396,158 (47.8%) in the 1970s, 262,004 (31.6%) in the 1980s, while 48,121 (5.8%) flats were built after 1990. These flats were home to 1,741,577 people (17.5% of the total population). There were 58,698 (7.1% of the total) one-bedroom, 421,274 (50.8%) two-bedroom and 271,422 (32.7%) three-bedroom flats, while 77,783 panel flats (9.4%) had four or more bedrooms in 2011. Average floor space was 54 m² for an LPS flat and 69 m² for a PC flat in 2011, lower than the national average (78 m²). The average floor space of a state-built flat (mostly panel flats) was 48 m² in the 1960s, 53 m² in the 1970s and 55 m² in the 1980s, significantly smaller than that of a privately built one (panel blocks were also built by non-governmental organizations, mostly housing cooperatives). Despite economic hardship, flats got even bigger in the late 1980s (before the fall of Communism); the largest panel flats, with 124 m², were built in the Káposztásmegyer housing estate of Budapest. The society of panel housing estates was heterogeneous until the privatization of the early 1990s (after the fall of Communism), when both the poor and the rich fled these buildings, leaving them predominantly middle-class in character. The residents of panel buildings predominantly have an above-average level of education: according to the 2011 census, 19.1% of the residents over 25 had a Bachelor's degree or higher, while the national average was 17.3%. Largest panel housing estates Equivalents in other countries Khrushchyovka (former Soviet Union) Panelák and Sídlisko (Czech Republic and Slovakia) Plattenbau (Germany) Systematization (Romania) Ugsarmal bair (Mongolian People's Republic) In popular culture Béla Tarr's film Panelkapcsolat tells a doomed love story set in a housing project in Hungary; it received a Special Mention at the 1982 Locarno Film Festival. See also Housing estate Public housing Affordable housing Subsidized housing Urban planning in communist countries References Architecture in Hungary Hungary–Soviet Union relations Denmark–Hungary relations Urban planning in Hungary Hungarian People's Republic Prefabricated buildings Concrete buildings and structures Public housing
Panelház
Engineering
1,472
19,381,888
https://en.wikipedia.org/wiki/Coprophilia
Coprophilia (from Greek κόπρος, kópros 'excrement' and φιλία, philía 'liking, fondness'), also called scatophilia or scat (Greek: σκατά, skatá 'feces'), is the paraphilia involving sexual arousal and pleasure from feces. Research In the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association, it is classified under 302.89—Paraphilia NOS (Not Otherwise Specified) and has no diagnostic criteria other than a general statement about paraphilias that says "the diagnosis is made if the behavior, sexual urges, or fantasies cause clinically significant distress or impairment in social, occupational, or other important areas of functioning". Furthermore, the DSM-IV-TR notes, "Fantasies, behaviors, or objects are paraphilic only when they lead to clinically significant distress or impairment (e.g. are obligatory, result in sexual dysfunction, require participation of nonconsenting individuals, lead to legal complications, interfere with social relationships)". Although there may be no connection between coprophilia and sadomasochism (SM), the limited data on the former comes from studies of the latter. A 1999 study of 164 males in Finland from two SM clubs found that 18.2% had engaged in coprophilia; 3% as a sadist only, 6.1% as a masochist only, and 9.1% as both. In the study pool 18% of heterosexuals and 17% of homosexuals had tried coprophilia, showing no statistically significant difference between heterosexuals and homosexuals. In a separate article analyzing 12 men who engaged in bestiality, an additional analysis of an 11-man subgroup revealed that six had engaged in coprophilic behavior, compared with only one in the matched control group consisting of 12 SM-oriented males who did not engage in bestiality. Society and culture A table in Larry Townsend's The Leatherman's Handbook II (the 1983 second edition; the 1972 first edition did not include this list) which is generally considered authoritative states that a brown handkerchief is a symbol for coprophilia in the handkerchief code, which is employed usually among gay male casual-sex seekers or BDSM practitioners in the United States, Canada, Australia and Europe. Wearing the handkerchief on the left indicates the top, dominant, or active partner; right the bottom, submissive, or passive partner. However, negotiation with a prospective partner remains important because, as Townsend noted, people may wear hankies of any color "only because the idea of the hankie turns them on" or "may not even know what it means". Originally the Mineshaft had a room for coprophilia, but it was soon abandoned as too extreme. American musician Chuck Berry recorded videos of himself urinating on and engaging in coprophilia with women. In one video, a woman defecates on him after he says "Now it's time for my breakfast." He was also sued for videotaping dozens of women in the restroom of a restaurant he owned, which has been identified as being motivated by his coprophilia fetish. The Cleveland steamer is a colloquial term for a form of coprophilia, where someone defecates on their partner's chest. The term received news attention through its use in a U.S. Congress staff hoax email and being addressed by the United States Federal Communications Commission. Hot Karl (also Hot Sasser) is sexual slang referring to one of several purported acts involving feces. 
It variously means an act of defecating on one's sexual partner, defecating on someone who is asleep, or defecating on someone's face while covered in plastic wrap. According to psychologist Anil Aggrawal, it is a synonym for a Cleveland steamer and is part of a coprophilia vocabulary that also includes the Dirty Sanchez. The term was adopted as a name by rapper Hot Karl. Dirty Sanchez is a purported sex act which consists of feces purposely being smeared onto a partner's upper lip. The New Partridge Dictionary of Slang and Unconventional English says, "This appears to have been contrived with the intention to provoke shock rather than actually as a practice, although, no doubt, some have or will experiment." Columnist Gustavo Arellano of ¡Ask a Mexican! contends the term evokes the stereotypical mustache of a Mexican. The term for the sex act entered British gay cant Polari in the 1960s. See also 2 Girls 1 Cup Anilingus Ass to mouth — removing the penis from the passive partner's anus followed by its immediate insertion into either their mouth, or another person's. Coprophagia — the consumption of feces Scatology Urolagnia (also known as urophilia) — a paraphilia involving sexual pleasure from urine References Further reading External links Feces Paraphilias Sexual acts
Coprophilia
Biology
1,048
809,979
https://en.wikipedia.org/wiki/Blink%20comparator
A blink comparator is a viewing apparatus formerly used by astronomers to find differences between two photographs of the night sky. It permits rapid switching from viewing one photograph to viewing the other, "blinking" back and forth between the two images taken of the same area of the sky at different times. This allows the user to more easily spot objects in the night sky that have changed position or brightness. It was also sometimes known as a blink microscope. It was invented in 1904 by physicist Carl Pulfrich at Carl Zeiss AG, then constituted as Carl-Zeiss-Stiftung. In photographs taken a few days apart, rapidly moving objects such as asteroids and comets would stand out, because they would appear to be jumping back and forth between two positions, while all the distant stars remained stationary. Photographs taken at longer intervals could be used to detect stars with large proper motion, or variable stars, or to distinguish binary stars from optical doubles. The most notable object in our solar system to be found using this technique is Pluto, discovered by Clyde Tombaugh in 1930. The Projection Blink Comparator (PROBLICOM), invented by amateur astronomer Ben Mayer, is a low-cost version of the professional tool. It consists of two slide projectors with a rotating occluding disk that alternately blocks the images from the projectors. This tool allowed amateur astronomers to contribute to some phases of serious research. Modern replacements In modern times, charge-coupled devices (CCDs) have largely replaced photographic plates, as astronomical images are stored digitally on computers. The blinking technique can easily be performed on a computer screen rather than with a physical blink comparator apparatus as before. The blinking technique is less used today, because image differencing algorithms detect moving objects more effectively than human eyes can. To measure the precise position of a known object whose direction and rate of motion are known, a "track and stack" software technique is used. Multiple images are superimposed such that the moving object is fixed in place; the moving object then stands out as a dot among the star trails. This is particularly effective in cases where the moving object is very faint and superimposing multiple images of it permits it to be seen better. See also Hinman collator Visual comparison References Astronomical imaging Optical devices Observational astronomy Products introduced in 1904 1904 in science Carl Zeiss AG
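The "track and stack" technique described above is easy to sketch. Below is a minimal Python/NumPy illustration, written for this description rather than taken from any astronomy package: the function names are ad hoc, the shift uses whole pixels via np.roll (with wraparound at the edges), and the target is assumed to drift at a constant, known per-frame rate; real pipelines resample sub-pixel shifts and handle edges properly.

import numpy as np

def difference_image(frame_a, frame_b):
    # Digital analogue of blinking: subtracting two aligned frames cancels
    # the stationary stars, so anything that moved or varied stands out.
    return frame_b.astype(float) - frame_a.astype(float)

def track_and_stack(frames, drift):
    # Co-add equally spaced frames after shifting frame k by -k * (dy, dx),
    # so an object moving at that constant rate stays fixed in place while
    # the background stars trail.
    stacked = np.zeros(frames[0].shape, dtype=float)
    for k, frame in enumerate(frames):
        stacked += np.roll(frame, (-k * drift[0], -k * drift[1]), axis=(0, 1))
    return stacked / len(frames)

# Synthetic demo: a faint mover on a noisy sky, drifting 2 px right per frame.
rng = np.random.default_rng(0)
frames = []
for k in range(8):
    img = rng.normal(100.0, 5.0, size=(64, 64))  # sky background plus noise
    img[32, 10 + 2 * k] += 12.0                  # the faint moving object
    frames.append(img)
print(track_and_stack(frames, drift=(0, 2))[32, 10])  # signal reinforced here

In the stacked image the mover's signal adds coherently while the noise averages down, which is why the method helps precisely for objects too faint to see clearly in any single frame.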
Blink comparator
Materials_science,Astronomy,Engineering
481
33,535,504
https://en.wikipedia.org/wiki/Pilot%20direction%20indicator
A pilot direction indicator or pilot's directional indicator (PDI) is an aircraft instrument used by bombardiers to indicate heading changes to the pilot in order to direct them to the proper location to drop bombs. The PDI is used in aircraft where the pilot and bombardier are physically separated and cannot easily see each other. PDIs typically consist of a dial that is installed in the pilot's instrument set on the main console, with an arrow pointer that can be moved to indicate how far and in what direction to correct the heading. The bombardier typically has a switch to move the pointer to the right or left, and a repeater dial so he can see the setting. The Norden bombsight was originally designed with the idea of automatically directing a PDI and thereby simplifying the bombardier's task. See also Index of aviation articles References Aircraft instruments Measuring instruments
Pilot direction indicator
Technology,Engineering
178
28,345,524
https://en.wikipedia.org/wiki/Realized%20variance
Realized variance or realised variance (RV, see spelling differences) is the sum of squared returns. For instance the RV can be the sum of squared daily returns for a particular month, which would yield a measure of price variation over this month. More commonly, the realized variance is computed as the sum of squared intraday returns for a particular day. The realized variance is useful because it provides a relatively accurate measure of volatility, which is useful for many purposes, including volatility forecasting and forecast evaluation. Related quantities Unlike the (population) variance, the realized variance is a random quantity. The realized volatility is the square root of the realized variance, or the square root of the RV multiplied by a suitable constant to bring the measure of volatility to an annualized scale. For instance, if the RV is computed as the sum of squared daily returns for some month, then an annualized realized volatility is given by $\sqrt{12 \cdot RV}$. Properties under ideal conditions Under ideal circumstances the RV consistently estimates the quadratic variation of the price process that the returns are computed from (Ole E. Barndorff-Nielsen and Neil Shephard, Journal of the Royal Statistical Society, Series B, 63, 2002, 253–280). For instance, suppose that the price process is given by the stochastic integral

$p_t = \int_0^t \sigma_s \, \mathrm{d}W_s,$

where $W_s$ is a standard Brownian motion, and $\sigma_s$ is some (possibly random) process for which the integrated variance,

$IV = \int_0^T \sigma_s^2 \, \mathrm{d}s,$

is well defined. The realized variance based on $n$ intraday returns is given by

$RV = \sum_{i=1}^{n} r_i^2,$

where the intraday returns may be defined by

$r_i = p_{iT/n} - p_{(i-1)T/n}.$

Then it has been shown that, as $n \to \infty$, the realized variance converges to $IV$ in probability. Moreover, the RV also converges in distribution, in the sense that

$\frac{RV - IV}{\sqrt{\tfrac{2}{3} \sum_{i=1}^{n} r_i^4}}$

is approximately distributed as a standard normal random variable when $n$ is large. Properties when prices are measured with noise When prices are measured with noise the RV may not estimate the desired quantity. This problem motivated the development of a wide range of robust realized measures of volatility, such as the realized kernel estimator. See also Volatility (finance) Notes Mathematical finance
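To make the definitions concrete, here is a small Python/NumPy sketch (illustrative only: the function names are ad hoc, and the simulation assumes the ideal constant-volatility, noise-free case) that computes the RV, an annualized realized volatility, and the studentized statistic given above.

import numpy as np

def realized_variance(log_prices):
    # RV = sum of squared returns along a path of log prices.
    r = np.diff(np.asarray(log_prices, dtype=float))
    return np.sum(r ** 2)

def annualized_realized_volatility(rv, periods_per_year):
    # E.g. periods_per_year = 12 if rv was built from one month of daily returns.
    return np.sqrt(periods_per_year * rv)

def rv_z_statistic(log_prices, iv):
    # (RV - IV) / sqrt((2/3) * sum r_i^4), approximately standard normal
    # for large n under the ideal, noise-free conditions described above.
    r = np.diff(np.asarray(log_prices, dtype=float))
    return (np.sum(r ** 2) - iv) / np.sqrt((2.0 / 3.0) * np.sum(r ** 4))

# Simulated check: constant volatility sigma over [0, T] with T = 1, so that
# IV = sigma^2, sampled as n intraday increments of variance sigma^2 / n.
rng = np.random.default_rng(1)
n, sigma = 4680, 0.2
increments = sigma * np.sqrt(1.0 / n) * rng.standard_normal(n)
p = np.concatenate(([0.0], np.cumsum(increments)))
print(realized_variance(p))           # close to IV = 0.04
print(rv_z_statistic(p, sigma ** 2))  # roughly a standard normal draw

With noisy prices (the case discussed in the final section above) this naive estimator no longer targets IV as the sampling frequency grows, which is what motivates robust alternatives such as the realized kernel.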
Realized variance
Mathematics
413
67,572,697
https://en.wikipedia.org/wiki/Anke%20Weidenkaff
Anke Weidenkaff (born December 27, 1966, in Hanover, Germany) is a German-Swiss chemist and materials scientist. Since 2018, she has been head of the Materials & Resources Group at the Faculty of Materials Science at Technical University Darmstadt and director of the Fraunhofer Research Institution for Materials Recycling and Resource Strategies (IWKS) in Hanau (Hesse) and Alzenau (Bavaria). Life Weidenkaff was born in Hanover, Germany, and studied chemistry at the University of Hamburg. She received her PhD in 2000 from ETH Zurich in the Department of Chemistry. In 2006, she received the venia legendi for Solid State Chemistry and Materials Science from the University of Augsburg and became a section head at the Swiss Federal Laboratories for Materials Science and Technology (Empa) and an associated professor at the University of Bern. From 2013 to 2018, she was director of the Institute of Materials Science at the University of Stuttgart, where she chaired the Department of Chemical Materials Synthesis. Since October 1, 2018, Weidenkaff has been director of the Fraunhofer Research Institution for Materials Recycling and Resource Strategies. Weidenkaff is also a professor at Technical University Darmstadt in the field of materials science and resource management. From 2016 to 2019, she was president of the European Thermoelectric Society (ETS), of which she had been a board member since 2007. She is an elected member of the European Materials Research Society's (E-MRS) Executive Committee and was chair of the 2019 E-MRS Spring Meeting. Since 2020, Anke Weidenkaff has been a member of the German Advisory Council on Global Change (WBGU). Anke Weidenkaff was elected as a member of the German National Academy of Sciences Leopoldina and the German Academy of Science and Engineering in 2023. Research Weidenkaff's main areas of research and expertise are materials science and resource strategies, including the development, synthesis chemistry, and characterization of substitute materials for energy conversion and storage. Building on scientific knowledge of solid-state chemistry, her current work focuses on materials science and specifically the development of regenerative, sustainable materials and next-generation process technologies for fast and efficiently closed materials cycles. Anke Weidenkaff and her team are currently working on technologies for the production of (green) hydrogen including photoelectrochemical water splitting, the production of carbon nanotubes using microwave plasma synthesis for carbon storage, and sustainable perovskite materials. She is also involved in the development of thermoelectrics, electroceramics and ceramic membranes. Together with the Energy Materials Department of Fraunhofer IWKS, she conducts research on sustainable materials and recycling technologies for batteries and fuel cells. Another focus of her work is "Green ICT", the development of sustainable materials and processes for information and communication technology.
International recognition and activities 2008: Visiting professor, Case Western Reserve University (CWRU), and visiting scientist, NASA Glenn Research Center, Cleveland, USA 2011: Kavli Foundation Lectureship Award 2012–2013: Editor-in-Chief and member of the Editorial Board of “Energy Quarterly”; member of the Advisory Board of the MRS Book Series on Energy and Sustainability 2015–2017: Member of the Board of Directors, Materials Research Society (MRS) 2016–2019: President of the European Thermoelectric Society (ETS) Since 2020: Member of the German Advisory Council on Global Change (WBGU) 2022: Karl W. Böer Renewable Energy Mid-Career Award Since 2023: Member of the German National Academy of Sciences Leopoldina Since 2023: Member of acatech, the German National Academy of Science and Engineering References External links Women chemists Materials scientists and engineers Fraunhofer Society Academic staff of Technische Universität Darmstadt Members of the German National Academy of Sciences Leopoldina 1966 births Living people University of Hamburg alumni ETH Zurich alumni
Anke Weidenkaff
Materials_science,Engineering
800
2,507,954
https://en.wikipedia.org/wiki/Cumulina
Cumulina (October 3, 1997 – May 5, 2000), a mouse, was the first animal cloned from adult cells that survived to adulthood. She was cloned using the Honolulu technique developed by "Team Yana", the Ryuzo Yanagimachi research group at the former campus of the John A. Burns School of Medicine at the University of Hawai'i at Mānoa. Cumulina was a brown Mus musculus, or common house mouse. She was named after the cumulus cells surrounding the developing oocyte in the ovarian follicle in mice; in the Honolulu cloning technique, nuclei from these cells were put into egg cells devoid of their original nuclei. All other mice produced by the Yanagimachi lab are known only by a number. Cumulina produced two healthy litters and was retired after the second. Cumulina's preserved remains were displayed at the Institute for Biogenesis Research, a part of the John A. Burns School of Medicine laboratory, in Honolulu, Hawaii until 2022, when they were donated to the Smithsonian Institution's National Museum of American History. Some of her descendants have been displayed at the Bishop Museum and the Museum of Science and Industry in Chicago, Illinois. See also List of cloned animals References External links Second birthday picture Obituary from 2000 Museum of Science and Industry exhibit containing some of her descendants. 1997 animal births 2000 animal deaths Individual mice Cloned animals Individual animals in the United States
Cumulina
Biology
305
1,211,612
https://en.wikipedia.org/wiki/Idleness
Idleness is a lack of motion or energy. In describing a person, idle suggests having no labor: "idly passing the day". In physics, an idle machine exerts no transfer of energy. When a vehicle is not in motion, an idling engine does no useful thermodynamic work. In computing, an idle processor or network circuit is not being used by any program, application, or message. Cultural norms Typically, when one describes a machine as idle, it is an objective statement regarding its current state. However, when used to describe a person, idle typically carries a negative connotation, with the assumption that the person is wasting their time by doing nothing of value. Such a view is reflected in the proverb "an idle mind is the devil's workshop". Also, the popular phrase "killing time" refers to idleness and can be defined as spending time doing nothing in particular in order that time seems to pass more quickly. These interpretations of idleness are not universal – they are more typically associated with Western cultures. Idleness was considered a disorderly offence in England, punishable as a summary offence. Involuntary, enforced idleness is used as a punishment for lazy or slacking workers on zero-hour contracts. Paid time off, which was introduced in the 20th century as a trade unionist reform, is now absent from an increasing number of job arrangements, both as a money-saving mechanism and so that only work pays; this reinforces the stigma against idleness and leaves idleness to be punished naturally, in the form of destitution and starvation. Analysis and interpretation Philosopher Bertrand Russell published In Praise of Idleness and Other Essays in 1935, exploring the virtues of being idle in modern society. Founded in 1993 by Tom Hodgkinson, the magazine The Idler is devoted to promoting the ethos of "idle living". Hodgkinson published How to Be Idle in 2005 (subsequently subtitled A Loafer's Manifesto in 2007), also aiming to improve the public perception of idling. The Importance of Being Idle: A Little Book of Lazy Inspiration is a humorous self-help book published in August 2000. The book inspired the title of the 2005 chart-topping single by English rock band Oasis. Mark Slouka published the essay "Quitting the Paint Factory: The Virtues of Idleness", hinting at a post-scarcity economy and linking conscious busyness with anti-democratic and fascist tendencies. Idleness: A Philosophical Essay is a 2018 publication contending that the idle state is one of true freedom. See also Inert Laziness Leisure Loitering Refusal of work Soldiering Slow movement (culture) Work (disambiguation) References Further reading Human behavior
Idleness
Biology
559
13,510,193
https://en.wikipedia.org/wiki/Densely%20defined%20operator
In mathematics – specifically, in operator theory – a densely defined operator or partially defined operator is a type of partially defined function. In a topological sense, it is a linear operator that is defined "almost everywhere". Densely defined operators often arise in functional analysis as operations that one would like to apply to a larger class of objects than those for which they a priori "make sense". A closed operator that is used in practice is often densely defined. Definition A densely defined linear operator $T$ from one topological vector space, $X$, to another one, $Y$, is a linear operator that is defined on a dense linear subspace $\operatorname{dom}(T)$ of $X$ and takes values in $Y$, written $T : \operatorname{dom}(T) \subseteq X \to Y$. Sometimes this is abbreviated as $T : X \to Y$ when the context makes it clear that $X$ might not be the set-theoretic domain of $T$. Examples Consider the space $C^0([0,1]; \mathbb{R})$ of all real-valued, continuous functions defined on the unit interval; let $C^1([0,1]; \mathbb{R})$ denote the subspace consisting of all continuously differentiable functions. Equip $C^0([0,1]; \mathbb{R})$ with the supremum norm $\|\cdot\|_\infty$; this makes $C^0([0,1]; \mathbb{R})$ into a real Banach space. The differentiation operator $D$ given by $(Du)(x) = u'(x)$ is a densely defined operator from $C^0([0,1]; \mathbb{R})$ to itself, defined on the dense subspace $C^1([0,1]; \mathbb{R})$. The operator $D$ is an example of an unbounded linear operator, since $u_n(x) = e^{-nx}$ has $\|Du_n\|_\infty / \|u_n\|_\infty = n$. This unboundedness causes problems if one wishes to somehow continuously extend the differentiation operator to the whole of $C^0([0,1]; \mathbb{R})$. The Paley–Wiener integral, on the other hand, is an example of a continuous extension of a densely defined operator. In any abstract Wiener space $i : H \to E$ with adjoint $j = i^* : E^* \to H$, there is a natural continuous linear operator (in fact it is the inclusion, and it is an isometry) from $j(E^*)$ to $L^2(E, \gamma; \mathbb{R})$, under which $j(f) \in j(E^*) \subseteq H$ goes to the equivalence class $[f]$ of $f$ in $L^2(E, \gamma; \mathbb{R})$. It can be shown that $j(E^*)$ is dense in $H$. Since the above inclusion is continuous, there is a unique continuous linear extension $I : H \to L^2(E, \gamma; \mathbb{R})$ of the inclusion $j(E^*) \to L^2(E, \gamma; \mathbb{R})$ to the whole of $H$. This extension is the Paley–Wiener map. See also References Functional analysis Hilbert spaces Linear operators Operator theory
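To make the unboundedness claim concrete, here is a short worked derivation in LaTeX; it is a standard textbook computation using the witness sequence $u_n(x) = e^{-nx}$ named above (any sequence with bounded sup norm and unbounded derivative would serve equally well):

```latex
% Unboundedness of the differentiation operator D on (C^0([0,1]), \|\cdot\|_\infty).
% Each u_n(x) = e^{-nx} is continuously differentiable, so u_n lies in dom(D).
\[
  \|u_n\|_\infty = \sup_{x \in [0,1]} e^{-nx} = 1,
  \qquad
  (Du_n)(x) = -n\,e^{-nx},
  \qquad
  \|Du_n\|_\infty = n,
\]
\[
  \text{hence}\quad
  \frac{\|Du_n\|_\infty}{\|u_n\|_\infty} = n \longrightarrow \infty
  \quad\text{as } n \to \infty,
\]
% so no constant C can satisfy \|Du\|_\infty <= C \|u\|_\infty on C^1([0,1]):
% D is unbounded, and therefore admits no continuous extension to all of C^0([0,1]).
```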
Densely defined operator
Physics,Mathematics
372
55,828,680
https://en.wikipedia.org/wiki/Black%20Hole%20Initiative
The Black Hole Initiative (BHI) is an interdisciplinary center at Harvard University that includes the fields of astronomy, physics, and philosophy, and is claimed to be the first center in the world to focus on the study of black holes. Principal participants include Sheperd S. Doeleman, Peter Galison, Avi Loeb, Andrew Strominger and Shing-Tung Yau. The BHI inauguration was held on 18 April 2016 and attended by Stephen Hawking; related workshop events were held on 19 April 2016. Robbert Dijkgraaf created the mural for the BHI inauguration. The BHI is funded by the John Templeton Foundation and the Gordon and Betty Moore Foundation. Harvard University allocated office space for the BHI on the second floor of 20 Garden Street in Cambridge, Massachusetts. The BHI is an independent center within the Faculty of Arts & Sciences at Harvard University. See also Cosmology Galactic Center Galaxy General relativity List of black holes Outline of black holes Timeline of black hole physics References External links Official website Official YouTube channel Inauguration workshop events (19 April 2016) Astrophysics research institutes Cosmological simulation Physical cosmology Black holes Research institutes established in 2016 Harvard University research institutes 2016 establishments in Massachusetts
Black Hole Initiative
Physics,Astronomy
250
62,293,966
https://en.wikipedia.org/wiki/Prorenin
Prorenin is a protein that constitutes a precursor for renin, the hormone that activates the renin–angiotensin system, which serves to raise blood pressure. Prorenin is converted into renin by the juxtaglomerular cells, which are specialised smooth muscle cells present mainly in the afferent, but also the efferent, arterioles of the glomerular capillary bed. Prorenin is a relatively large molecule, weighing approximately 46 kDa. History Prorenin was discovered by Eugenie Lumbers in 1971. Synthesis In addition to juxtaglomerular cells, prorenin is also synthesised by other organs, such as the adrenal glands, the ovaries, the testis and the pituitary gland, which is why it is found in the plasma of anephric individuals. Concentration Blood concentration levels of prorenin are between 5 and 10 times higher than those of renin. There is evidence to suggest that, in diabetes mellitus, prorenin levels are even higher. One study using relatively newer technology found that blood concentration levels may be several orders of magnitude higher than previously believed, placing them at micrograms rather than nanograms per millilitre. Pregnancy Prorenin occurs in very high concentrations in amniotic fluid and amnion. It is secreted in large amounts from the placenta and womb, and from the ovaries. Conversion to renin Proprotein convertase 1 converts prorenin into renin, but proprotein convertase 2 does not. There is no evidence that prorenin can be converted into renin in the circulation. Therefore, the granular (JG) cells seem to be the only source of active renin. References External links RCSB PDB PDBe Proteins
Prorenin
Chemistry
382
29,970,787
https://en.wikipedia.org/wiki/Kauri%20Museum
The Kauri Museum is in the west coast village of Matakohe, Northland, New Zealand. The museum, to the south of the Waipoua Forest, contains many exhibits that tell the story of the pioneering days when early European settlers in the area extracted kauri timber and kauri gum. The museum has over 4000 sq metres of undercover exhibits, including the largest collection of kauri gum in the world, and the largest collection of kauri furniture. It has a model of a 1900s kauri house with furniture and models in the dress of the early years, and an extensive collection of photographs and pioneering memorabilia. On the wall, there are full-scale circumference outlines of the huge trees, including one of 8 metres, larger even than Tāne Mahuta. The museum includes a working mock-up of a steam sawmill. It tells its story from the colonial viewpoint, and presents its representation of the kauri gum industry as part of the process of creating the New Zealand identity. It has little to say about negative aspects, such as the impact on the Māori people. The Kauri Museum has however helped raise awareness of the need to conserve the remaining forest through a display of photographs by the conservationist Stephen King, presented in partnership with the Waipoua Forest Trust. References External links The Kauri Museum (1 min 3 secs) Natural history museums in New Zealand Kaipara District Museums in the Northland Region History of the Northland Region Local museums in New Zealand Forestry in New Zealand Forestry museums Kauri gum
Kauri Museum
Physics
312
17,330,825
https://en.wikipedia.org/wiki/Perveance
Perveance is a notion used in the description of charged particle beams. The value of perveance indicates how significant the space charge effect is on the beam's motion. The term is used primarily for electron beams, in which motion is often dominated by the space charge. Origin of the word The word was probably created from Latin pervenio, "to attain". Definition For an electron gun, the gun perveance is determined as the coefficient of proportionality $P$ between the space-charge-limited current, $I_{SCL}$, and the gun anode voltage, $U_a$, raised to the three-halves power in the Child–Langmuir law $I_{SCL} = P\,U_a^{3/2}$. The same notion is used for non-relativistic beams propagating through a vacuum chamber. In this case, the beam is assumed to have been accelerated in a stationary electric field so that $U_a$ is the potential difference between the emitter and the vacuum chamber, and the ratio $I/U_a^{3/2}$ is referred to as the beam perveance. In equations describing motion of relativistic beams, the contribution of the space charge appears as a dimensionless parameter called the generalized perveance, defined as $K = \dfrac{2I}{I_0}\,\dfrac{1 - f_e}{\beta^3\gamma^3}$, where $I_0 = 4\pi\varepsilon_0 m c^3/q$ (approximately 17 kA for electrons) is the Budker (or Alfvén) current; $\beta$ and $\gamma$ are the relativistic factors, and $f_e$ is the neutralization factor. Examples The 6S4A is an example of a high-perveance triode. The triode section of a 6AU8A becomes a high-perveance diode when its control grid is employed as the anode. Each section of a 6AL5 is a high-perveance diode, as opposed to a 1J3, which requires over 100 V to reach only 2 mA. Perveance does not relate directly to current handling. Another high-perveance diode, the diode section of a 33GY7, shows similar perveance to a 6AL5, but handles 15 times greater current, at almost 13 times maximum peak inverse voltage. References Accelerator physics Experimental particle physics
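As a numerical illustration of the two relations above, here is a minimal Python sketch; the function names and example numbers are invented for illustration, and the 17 kA Alfvén current is the approximate value for electrons quoted in the text:

```python
import math

def gun_perveance(i_scl: float, u_anode: float) -> float:
    """Gun perveance P from the Child-Langmuir law I = P * U**(3/2).

    i_scl: space-charge-limited current in amperes
    u_anode: anode voltage in volts
    Returns P in A/V^(3/2); 1e-6 A/V^(3/2) is one "microperv".
    """
    return i_scl / u_anode**1.5

def generalized_perveance(i: float, gamma: float, f_e: float = 0.0) -> float:
    """Dimensionless generalized perveance K = (2*I/I0) * (1 - f_e) / (beta^3 * gamma^3)."""
    I0 = 17.0e3                               # A; Budker (Alfven) current for electrons
    beta = math.sqrt(1.0 - 1.0 / gamma**2)    # relativistic velocity factor
    return (2.0 * i / I0) * (1.0 - f_e) / (beta**3 * gamma**3)

# A hypothetical gun delivering 1 A at 10 kV has a perveance of 1 microperv:
print(gun_perveance(1.0, 10e3))               # 1e-06
# The same 1 A beam at gamma = 3 (about 1 MeV electrons) has a small K,
# showing how relativistic energies suppress space-charge effects:
print(generalized_perveance(1.0, 3.0))        # ~5.2e-06
```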
Perveance
Physics
400
13,289,657
https://en.wikipedia.org/wiki/XHTML%2BMathML%2BSVG
XHTML+MathML+SVG is a W3C standard that describes an integration of MathML and Scalable Vector Graphics semantics with XHTML and Cascading Style Sheets. It is categorized as "obsolete" on the W3C's HTML Current Status page. References External links W3C Working Draft World Wide Web Consortium standards
XHTML+MathML+SVG
Technology
73
22,866,377
https://en.wikipedia.org/wiki/Biohappiness
Biohappiness, or bio-happiness, is the elevation of well-being in humans and other animals through biological methods, including germline engineering through screening embryos with genes associated with a high level of happiness, or the use of drugs intended to raise baseline levels of happiness. The object is to facilitate the achievement of a state of "better than well". Proponents of biohappiness include the transhumanist philosopher David Pearce, whose goal is to end the suffering of all sentient beings, and the Canadian ethicist Mark Alan Walker. Walker coined the term "bio-happiness" to describe the idea of directly manipulating the biological roots of happiness in order to increase it. He sought to defend it on the grounds that happiness ought to be of interest to a wide range of moral theorists, and that hyperthymia, a state of high baseline happiness, is associated with better outcomes in health and human achievement. Potential risks A significant danger of biohappiness is the ethical problem of altering the natural human emotional state through technological methods. Molding brain chemistry or genetic structures to achieve happiness raises concerns about the authenticity of human experience. It is argued that tampering with the state of the human mind to create perpetual happiness would disrupt the natural range of emotions that a human experiences. Sadness, grief and anger are all crucial for emotional growth, empathy and understanding. Additionally, the long-term effects of biohappiness are not yet understood, so unforeseen issues could arise later. Loss of individuality, loss of emotional depth, and the risk of dependence on an external source of happiness are further concerns. Current research and technologies Antidepressants are a short-term form of biohappiness. Depending on the specific drug, they can keep certain chemicals (i.e. serotonin or dopamine) active in the brain for longer, stop chemicals from breaking down, or increase the rate of chemical release. The acceptance of antidepressant use makes way for the normalization of technology use in mental health. In one study, postmenopausal women with depression were given a questionnaire to determine their mood and to rate how depressed they were feeling. The women then took a newly engineered neuroactive steroid geared towards the dampening of GABA receptors and repeated the questionnaire after the drug had taken effect; their average self-reports showed a significant mood increase, although no specific figures were given as to by how much. The lack of hard numerical data is a concern for some and may call into question the effectiveness of dampening GABA receptors in hopes of alleviating depression. This new drug does have the added benefit of very little collateral damage, whereas other antidepressants can cause other, undesired body functions to be weakened or strengthened. If this dampening of GABA receptors were to be applied via CRISPR, the goal of biohappiness might be reachable. Preimplantation genetic diagnosis and embryo profiling are both current technologies that could be used for biohappiness in the future. See also Eradication of suffering Perfectionism (philosophy) References External links The Biohappiness Revolution (video) Beyond Therapy: Biotechnology and the Pursuit of Happiness (The President's Council on Bioethics, Washington, D.C., October 2003). Bioethics Biotechnology Hedonism Neuropharmacology Transhumanism Utilitarianism
Biohappiness
Chemistry,Technology,Engineering,Biology
705
399,115
https://en.wikipedia.org/wiki/49%20%28number%29
49 (forty-nine) is the natural number following 48 and preceding 50. In mathematics Forty-nine is the square of the prime number seven and hence the fourth non-unitary square prime of the form p^2. It appears in the Padovan sequence, preceded by the terms 21, 28, 37 (it is the sum of the first two of these). Along with 77, the number that immediately derives from it in the home prime iteration, 49 is one of only two numbers under 100 whose home prime is not known. The smallest triple of three squares in arithmetic progression is (1, 25, 49), and the second smallest is (49, 169, 289). 49 is the smallest discriminant of a totally real cubic field. 49 and 94 are the only numbers below 100 all of whose digit permutations are composite while not being multiples of 3, repdigits, or numbers containing only the digits 0, 2, 4, 5, 6 and 8, even excluding the trivial one-digit terms: 49 = 7^2 and 94 = 2 × 47. The number of prime knots with 9 crossings is 49. Decimal representation The sum of the digits of the square of 49 (2401) is the square root of 49. 49 is the first square whose digits are themselves squares: in this case, 4 and 9. Reciprocal The fraction 1/49 is a repeating decimal with a period of 42: 1/49 = 0.020408163265306122448979591836734693877551... (42 digits repeat). There are 42 positive integers less than 49 and coprime to 49 (42 is the period). Multiplying 020408163265306122448979591836734693877551 by each of these integers results in a cyclic permutation of the original number: 020408163265306122448979591836734693877551 × 2 = 040816326530612244897959183673469387755102 020408163265306122448979591836734693877551 × 3 = 061224489795918367346938775510204081632653 020408163265306122448979591836734693877551 × 4 = 081632653061224489795918367346938775510204 ... The repeating block can be obtained from 02 by adding successive doublings (04, 08, 16, 32, 64, 128, 256, 512, 1024, 2048, ...), each shifted two places further to the right; their sum is 020408163265306122448979591836734693877551...0204081632..., because $\frac{1}{49}$ satisfies $\frac{1}{49} = \frac{2}{98} = \frac{2/100}{1 - 2/100} = \sum_{n=1}^{\infty} \frac{2^n}{100^n}$. In chemistry During the Manhattan Project, plutonium was also often referred to simply as "49". The number 4 was for the last digit in 94 (the atomic number of plutonium) and 9 for the last digit in Pu-239, the weapon-grade fissile isotope used in nuclear bombs. In religion In Judaism: the number of days of the Counting of the Omer and the number of years in a Jubilee (biblical) cycle. In Buddhism, 49 days is one of the lengths of the intermediate state (bardo). In other fields Forty-nine is: 49er, one who participated in the 1849 California Gold Rush. This meaning has endured and things continue to be referred to as "49er," such as a member of the San Francisco 49ers team of the National Football League. A 49 is a party after a powwow or any gathering of American Indians, held by the participants. It is also a type of song that is sung on such occasions. A 49 is typically held in an isolated place and features drumming and singing. References Integers
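The cyclic-permutation property of the 42-digit repeating block is easy to verify computationally; a minimal Python sketch (my own illustration, not part of the article):

```python
# Verify that 1/49 has a 42-digit repeating block whose multiples (by the
# integers coprime to 49) are all cyclic permutations of the block itself.
from math import gcd

block = str(10**42 // 49).zfill(42)   # first 42 decimal digits of 1/49
assert block == "020408163265306122448979591836734693877551"

coprimes = [k for k in range(1, 49) if gcd(k, 49) == 1]
assert len(coprimes) == 42            # the period equals phi(49) = 42

doubled = block + block               # every rotation of block is a substring of block+block
for k in coprimes:
    multiple = str(int(block) * k).zfill(42)
    assert multiple in doubled        # each multiple is a cyclic permutation
print("all", len(coprimes), "multiples are cyclic permutations")
```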
49 (number)
Mathematics
842
5,075,391
https://en.wikipedia.org/wiki/Volterra%20space
In mathematics, in the field of topology, a topological space is said to be a Volterra space if any finite intersection of dense Gδ subsets is dense. Every Baire space is Volterra, but the converse is not true. In fact, any metrizable Volterra space is Baire. The name refers to a paper of Vito Volterra in which he uses the fact that (in modern notation) the intersection of two dense Gδ sets in the real numbers is again dense. References Cao, Jiling and Gauld, D., "Volterra spaces revisited", J. Aust. Math. Soc. 79 (2005), 61–76. Cao, Jiling and Junnila, Heikki, "When is a Volterra space Baire?", Topology Appl. 154 (2007), 527–532. Gauld, D. and Piotrowski, Z., "On Volterra spaces", Far East J. Math. Sci. 1 (1993), 209–214. Gruenhage, G. and Lutzer, D., "Baire and Volterra spaces", Proc. Amer. Math. Soc. 128 (2000), 3115–3124. Volterra, V., "Alcune osservasioni sulle funzioni punteggiate discontinue", Giornale di Matematiche 19 (1881), 76–86. Properties of topological spaces
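Stated symbolically, as a direct LaTeX transcription of the prose definition above (with the Baire comparison noted in comments):

```latex
% X is a Volterra space when every finite family of dense G_delta subsets
% has dense intersection:
\[
  X \text{ is Volterra}
  \iff
  \text{for all dense } G_\delta \text{ sets } D_1, \ldots, D_n \subseteq X:
  \quad
  \overline{D_1 \cap \cdots \cap D_n} = X .
\]
% In a Baire space, even countable families of dense open sets (and hence of
% dense G_delta sets) have dense intersection, so every Baire space is Volterra;
% the converse holds for metrizable spaces but fails in general.
```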
Volterra space
Mathematics
320
3,430,211
https://en.wikipedia.org/wiki/Calculus%20of%20voting
Calculus of voting refers to any mathematical model which predicts voting behaviour by an electorate, including such features as participation rate. A calculus of voting represents a hypothesized decision-making process. These models are used in political science in an attempt to capture the relative importance of various factors influencing an elector to vote (or not vote) in a particular way. Example One such model was proposed by Anthony Downs (1957) and was adapted by William H. Riker and Peter Ordeshook in "A Theory of the Calculus of Voting" (Riker and Ordeshook 1968): V = pB − C + D where V = the proxy for the probability that the voter will turn out p = probability of the vote "mattering" B = "utility" benefit of voting: the differential benefit of one candidate winning over the other C = costs of voting (time/effort spent) D = citizen duty, goodwill feeling, psychological and civic benefit of voting (this term is not included in Downs's original model) It is a political science model based on rational choice, used to explain why citizens do or do not vote. The alternative formulation is V = pB + D > C where, for voting to occur, the probability (p) of the vote mattering, "times" the (B)enefit of one candidate winning over another, combined with the feeling of civic (D)uty, must be greater than the (C)ost of voting. References Downs, Anthony. 1957. An Economic Theory of Democracy. New York: Harper & Row. Riker, William and Peter Ordeshook. 1968. "A Theory of the Calculus of Voting." American Political Science Review 62(1): 25–42. Voting theory Mathematical modeling
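As a worked illustration of the Riker–Ordeshook formula above, a minimal Python sketch; the numbers are invented purely for illustration:

```python
def turnout_propensity(p: float, B: float, C: float, D: float) -> float:
    """Riker-Ordeshook calculus of voting: V = p*B - C + D.

    p: probability the vote "matters" (is decisive)
    B: differential benefit of the preferred candidate winning
    C: cost of voting (time, effort)
    D: civic-duty / psychological benefit term
    """
    return p * B - C + D

# Hypothetical elector: decisiveness p is tiny, so p*B contributes almost
# nothing, and the decision hinges on whether duty D outweighs cost C.
p, B, C, D = 1e-7, 5000.0, 0.5, 0.8
V = turnout_propensity(p, B, C, D)
print(f"V = {V:.4f} -> {'votes' if V > 0 else 'abstains'}")   # V = 0.3005 -> votes
```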
Calculus of voting
Mathematics
352
77,954,525
https://en.wikipedia.org/wiki/3-Chlorostyrylcaffeine
3-Chlorostyrylcaffeine (CSC), or 8-(3-chlorostyryl)caffeine (8-CSC), is a potent and selective adenosine A2A receptor antagonist which is used in scientific research. It has 520-fold selectivity for the adenosine A2A receptor over the adenosine A1 receptor (Ki = 54 nM and 28,000 nM for the rat receptors, respectively). Its affinities for the adenosine A2B and A3 receptors are similarly low (Ki = 8,200 nM and >10,000 nM, respectively). CSC has been found to reverse the catalepsy induced by the dopamine D1 receptor antagonist SCH-23390 and the dopamine D2 receptor antagonists raclopride and sulpiride in animals. The drug was one of the first selective adenosine A2A receptor antagonists to be developed. However, in addition to its adenosine receptor antagonism, CSC was subsequently found to be a potent monoamine oxidase B (MAO-B) inhibitor (Ki = 80.6 nM for baboon MAO-B). CSC was first described in the scientific literature by 1993. See also DMPX Istradefylline MSX-3 References 3-Chlorophenyl compounds Adenosine receptor antagonists Antiparkinsonian agents Experimental drugs Monoamine oxidase inhibitors Xanthines
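The quoted fold-selectivity follows directly from the ratio of the two Ki values given above; a one-off Python check (values copied from the text):

```python
# Receptor selectivity expressed as a ratio of binding affinities (Ki values).
# A larger Ki at the off-target receptor means weaker binding there, so a
# larger ratio means greater selectivity for the primary target.
ki_a2a_nM = 54.0        # rat adenosine A2A receptor
ki_a1_nM = 28_000.0     # rat adenosine A1 receptor

fold_selectivity = ki_a1_nM / ki_a2a_nM
print(f"A2A selectivity over A1: {fold_selectivity:.0f}-fold")  # ~519, quoted as 520-fold
```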
3-Chlorostyrylcaffeine
Chemistry
315
26,488,448
https://en.wikipedia.org/wiki/Biosorption
Biosorption is a physicochemical process that occurs naturally in certain biomass which allows it to passively concentrate and bind contaminants onto its cellular structure. Biosorption can be defined as the ability of biological materials to accumulate heavy metals from wastewater through metabolically mediated or physico-chemical pathways of uptake. Though using biomass in environmental cleanup has been in practice for a while, scientists and engineers hope this phenomenon will provide an economical alternative for removing toxic heavy metals from industrial wastewater and aid in environmental remediation. Environmental uses Pollution interacts naturally with biological systems. It is currently uncontrolled, seeping into any biological entity within the range of exposure. The most problematic contaminants include heavy metals, pesticides and other organic compounds which can be toxic to wildlife and humans in small concentrations. There are existing methods for remediation, but they are expensive or ineffective. However, an extensive body of research has found that a wide variety of commonly discarded waste, including eggshells, bones, peat, fungi, seaweed, crab shells, yeast, bagasse and carrot peels, can efficiently remove toxic heavy metal ions from contaminated water. Ions from metals like mercury can react in the environment to form harmful compounds like methylmercury, a compound known to be toxic in humans. In addition, adsorbing biomass, or biosorbents, can also remove other harmful metals such as arsenic, lead, cadmium, cobalt, chromium and uranium. The idea of using biomass as a tool in environmental cleanup has been around since the early 1900s, when Arden and Lockett discovered that certain types of living bacteria cultures were capable of recovering nitrogen and phosphorus from raw sewage when mixed in an aeration tank. This discovery became known as the activated sludge process, which is structured around the concept of bioaccumulation and is still widely used in wastewater treatment plants today. It was not until the late 1970s that scientists noticed the sequestering characteristic of dead biomass, which resulted in a shift in research from bioaccumulation to biosorption. Differences from bioaccumulation Though bioaccumulation and biosorption are used synonymously, they are very different in how they sequester contaminants: Biosorption is a metabolically passive process, meaning it does not require energy, and the amount of contaminants a sorbent can remove depends on kinetic equilibrium and the composition of the sorbent's cellular surface. Contaminants are adsorbed onto the cellular structure. Bioaccumulation is an active metabolic process driven by energy from a living organism and requires respiration. Both bioaccumulation and biosorption occur naturally in all living organisms; however, in a controlled experiment conducted on living and dead strains of Bacillus sphaericus, it was found that the biosorption of chromium ions was 13–20% higher in dead cells than in living cells. In terms of environmental remediation, biosorption is preferable to bioaccumulation because it occurs at a faster rate and can produce higher concentrations. Since metals are bound onto the cellular surface, biosorption is a reversible process, whereas bioaccumulation is only partially reversible. Factors affecting performance Since biosorption is determined by equilibrium, it is largely influenced by pH, the concentration of biomass and the interaction between different metallic ions.
For example, in a study on the removal of pentachlorophenol (PCP) using different strains of fungal biomass, as the pH changed from low to high (acidic to basic), the amount removed by the majority of the strains decreased, though one strain was unaffected by the change. In another study on the removal of copper, zinc and nickel ions using a composite sorbent, as the pH increased from low to high, the sorbent favored the removal of copper ions over the zinc and nickel ions. Because of this variability among sorbents, pH sensitivity might be a drawback of biosorption; however, more research is necessary. Common uses Even though the term biosorption may be relatively new, it has been put to use in many applications for a long time. One very widely known use of biosorption is seen in activated carbon filters. They can filter air and water by allowing contaminants to bind to their extremely porous, high-surface-area structure. The structure of the activated carbon is generated as the result of charcoal being treated with oxygen. Another type of carbon, sequestered carbon, can be used as a filtration medium. It is made by carbon sequestration, which uses the opposite technique to that used for creating activated carbon: it is made by heating biomass in the absence of oxygen. The two filters allow for biosorption of different types of contaminants due to their chemical compositions, one with infused oxygen and the other without. In industry Many industrial effluents contain toxic metals that must be removed. Removal can be accomplished with biosorption techniques. It is an alternative to using man-made ion-exchange resins, which cost ten times more than biosorbents. The cost is much lower because the biosorbents used are often farm waste or are very easy to regenerate, as is the case with seaweed and other unharvested biomass. Industrial biosorption is often done using sorption columns, as seen in Figure 1. Effluent containing heavy metal ions is fed into a column from the top. The biosorbents adsorb the contaminants and let the ion-free effluent exit the column at the bottom. The process can be reversed to collect a highly concentrated solution of metal contaminants. The biosorbents can then be re-used or discarded and replaced. References Ecology
Biosorption
Biology
1,200
34,641,290
https://en.wikipedia.org/wiki/Emotional%20geography
Emotional geography is a subtopic within human geography, more specifically cultural geography, which applies psychological theories of emotion. It is an interdisciplinary field relating emotions to geographic places and their contextual environments. These subjective feelings can be studied in individual and social contexts. Emotional geography specifically focuses on how human emotions relate to, or affect, the environment around them. There is a difference between emotional and affectual geography, and each has its respective geographical sub-field. The former refers to theories of expressed feelings and the social constructs of expressed feelings, which can be generalised and understood globally. The latter refers to theories of underlying, inexpressible feelings that are independent, embodied, and hard to understand. Emotional geography approaches geographical concepts and research from an expressed and generalisable perspective. Historically, emotions have had adaptive significance as a universal, non-verbal form of communication. This dates back to Darwin's theory of emotion, which explains the evolutionary development of expressed emotion. Such emotional communication aids individual and societal relationships. For example, when studying social phenomena, individuals' emotions can connect and create a social emotion which can define the event taking place. Emotional geography thus applies emotional theory to places, emphasising both its individual and its social presence. History Emotions in geography were previously ignored and classified as unimportant, leading to misconceptions and methodological issues. The appearance of emotions in geography is part of the cultural turn. Previously, emotions were not accounted for due to historical reasons, which include: the analytic mindset refusing to express emotion (from the Enlightenment), sexist connotations of emotions, cultural taboos around emotion, and the idea of the objective researcher who does not account for emotions in their research. As individuals express a constant circulation of emotion, researchers too carry these subjective emotional fluxes, which extend beyond the individual and influence the research, both intentionally and unintentionally. This emotional awareness changed geographical research methodology, as accounting for the integration of the researcher has induced interconnectivity. This can be especially important when trying to understand the feelings of the 'other', as situational and personal awareness is required from the researcher to achieve a rational perspective. Including emotion in research has induced research reflexivity and provoked a paradigm shift, aiding the reputation of geography as a social science. Individuals The complex lives of individuals lead them to constantly have an emotional perspective. Feeling emotions is humanly omnipresent and is another type of knowledge. Emotions are internal but influenced by varying external conditions. Emotional geography studies how these emotions vary and flow within individuals, between individuals, and between individuals and their environments. This leads to people identifying with certain places, such as through a sense of place and topophilia, which in turn influences the perception of a place based on an individual's emotion. However, due to the subjective nature of emotions, everyone's perception of a location is different.
Society Emotional geography has implications for societal emotions, which lead to social and cultural geographical concepts related to emotions. Contemporarily, emotions are integrated into society, in contrast to their historical restriction to private life, thus allowing relationships between people and their locations. Consequently, personal emotions express themselves in the social realm, influenced by the space and the framing of the place. This is present when people share and experience a collective emotion or even recreate it. These collective emotions, such as heightened emotions during social events, can also lead to dominant norms, allowing the possibility of systemic change. Collective emotions have been studied through social inequality, including racism, sexism and the societal discrimination of other marginalised groups, which could lead to institutional change. However, there is a diversity of cross-cultural emotional expression and interpretation which should be accounted for in policy change. Limitations The limitations of emotional geography are the following: ignorance of affect, leading to misunderstood feelings, as only expressed emotions are accounted for; generalisation of expressed emotion, including reducing emotions to the six basic ones; lack of distinction between thought and emotion, consequently leaving out their relationship; and the subjective nature of emotions, whereby the researcher may incorrectly alter the research. This suggests a potential inadequacy and incapability for real-world application. To overcome these limitations, emotional geographers could reflect on the basis of their field and avoid presuming emotions while simultaneously accounting for thoughts, affects, and so on. Examples Real-world applications of this field are numerous and include studies demonstrating: emotional geographies of healthcare; childcare, through the emotional geography of mothers; emotional engagements with an LGBTQ monument; the understanding of a place through the emotional geography of oppressed people, such as women of colour; emotional geographies of a classroom and the relationships between students, parents and teachers; situational emotional geographies, e.g. elderly incarcerated people, which highlights the Shoelace model of Emotional Geography; the potential increase of emotional connections to cities by increasing public spaces; the possibility of environmental implications to promote an emotional connection and sense of place to nature; and the prospect of global application, as emotions are amplified during social marginalisation, economic crises, and health and natural catastrophes. There is a wide range of literature addressing emotional geography which extends beyond this list, and findings may be applied socio-culturally, morally, professionally, physically, and politically. Communities The leading community for emotional geography is an organisation known as EMME (Eliciting, Mapping and Managing Emotions). It has its home in the Festival of Emotions, which can be found at www.emotional-geography.com. It consists of 84 Geographers of Emotions, citizens of the world with no borders or agenda, who come together to share their knowledge and experience with others through courses, journeys, games and community events. Furthermore, there is an organisation, Emotion, Space and Society, which specialises in the relationship between emotion and geography and aims to increase awareness by hosting conferences and publishing journals.
See also Cultural geography Environment (biophysical) Natural environment Physical environments Social environment References Further reading Cultural geography Human geography
Emotional geography
Environmental_science
1,238
10,540,671
https://en.wikipedia.org/wiki/CD135
Cluster of differentiation antigen 135 (CD135), also known as fms-like tyrosine kinase 3 (FLT-3, with fms standing for "feline McDonough sarcoma"), receptor-type tyrosine-protein kinase FLT3, or fetal liver kinase-2 (Flk2), is a protein that in humans is encoded by the FLT3 gene. FLT3 is a cytokine receptor which belongs to receptor tyrosine kinase class III. CD135 is the receptor for the cytokine Flt3 ligand (FLT3L). It is expressed on the surface of many hematopoietic progenitor cells. Signalling of FLT3 is important for the normal development of haematopoietic stem cells and progenitor cells. The FLT3 gene is one of the most frequently mutated genes in acute myeloid leukemia (AML). High levels of wild-type FLT3 have been reported for blast cells of some AML patients without FLT3 mutations. These high levels may be associated with worse prognosis. Structure FLT3 is composed of five extracellular immunoglobulin-like domains, an extracellular domain, a transmembrane domain, a juxtamembrane domain and a tyrosine-kinase domain consisting of 2 lobes that are connected by a tyrosine-kinase insert. Cytoplasmic FLT3 undergoes glycosylation, which promotes localization of the receptor to the membrane. Function CD135 is a class III receptor tyrosine kinase. When this receptor binds to FLT3L, a ternary complex is formed in which two FLT3 molecules are bridged by one (homodimeric) FLT3L. The formation of such a complex brings the two intracellular domains into close proximity to each other, eliciting initial trans-phosphorylation of each kinase domain. This initial phosphorylation event further activates the intrinsic tyrosine kinase activity, which in turn phosphorylates and activates signal transduction molecules that propagate the signal in the cell. Signaling through CD135 plays a role in cell survival, proliferation, and differentiation. CD135 is important for lymphocyte (B cell and T cell) development. Two cytokines that downmodulate FLT3 activity (and block FLT3-induced hematopoietic activity) are: TNF-alpha (tumor necrosis factor-alpha) TGF-beta (transforming growth factor-beta) TGF-beta especially decreases FLT3 protein levels and reverses the FLT3L-induced decrease in the time that hematopoietic progenitors spend in the G1 phase of the cell cycle. Clinical significance Cell surface marker Cluster of differentiation (CD) molecules are markers on the cell surface, as recognized by specific sets of antibodies, used to identify the cell type, stage of differentiation and activity of a cell. In mice, CD135 is expressed on several hematopoietic (blood) cells, including long- and short-term reconstituting hematopoietic stem cells (HSC) and other progenitors like multipotent progenitors (MPPs) and common lymphoid progenitors (CLP). Role in cancer CD135 is a proto-oncogene, meaning that mutations of this protein can lead to cancer. Mutations of the FLT3 receptor can lead to the development of leukemia, a cancer of bone marrow hematopoietic progenitors. Internal tandem duplications of FLT3 (FLT3-ITD) are the most common mutations associated with acute myelogenous leukemia (AML) and are a prognostic indicator associated with adverse disease outcome. FLT3 inhibitors Gilteritinib, a dual FLT3-AXL tyrosine kinase inhibitor, has completed a phase 3 trial in patients with relapsed/refractory acute myeloid leukemia bearing FLT3 ITD or TKD mutations. In 2017, gilteritinib gained FDA orphan drug status for AML.
In November 2018, the FDA approved gilteritinib (Xospata) for treatment of adult patients with relapsed or refractory acute myeloid leukemia (AML) with a FLT3 mutation as detected by an FDA-approved test. In July 2023, quizartinib (Vanflyta) was also approved for the treatment of newly diagnosed FLT3-ITD-positive AML, as detected by an FDA-approved test. Specifically, it is to be used with standard cytarabine and anthracycline induction and cytarabine consolidation, and as maintenance monotherapy following consolidation chemotherapy. Midostaurin was approved by the FDA in April 2017 for the treatment of adult patients with newly diagnosed AML who are positive for oncogenic FLT3, in combination with chemotherapy. The drug is approved for use with a companion diagnostic, the LeukoStrat CDx FLT3 Mutation Assay, which is used to detect the FLT3 mutation in patients with AML. Sorafenib has been reported to show significant activity against FLT3-ITD-positive acute myelogenous leukemia. Sunitinib also inhibits FLT3. Lestaurtinib is in clinical trials. A paper published in Nature in April 2012 studied patients who developed resistance to FLT3 inhibitors, finding specific DNA sites contributing to that resistance and highlighting opportunities for future development of inhibitors that could take into account the resistance-conferring mutations for a more potent treatment. See also Cluster of differentiation cytokine receptor receptor tyrosine kinase tyrosine kinase oncogene hematopoiesis Lymphopoiesis#Labeling lymphopoiesis References Further reading External links Tyrosine kinase receptors EC 2.7.10
CD135
Chemistry
1,252
2,379,569
https://en.wikipedia.org/wiki/Submersible%20bridge
A submersible bridge is a type of movable bridge that lowers the bridge deck below the water level to permit waterborne traffic to use the waterway. This differs from a lift bridge or table bridge, which operate by raising the roadway. Two submersible bridges exist across the Corinth Canal in Greece, one at each end, in Isthmia and Corinth. They lower the centre span to 8 metres below water level when they give way to ships crossing the channel. The submersible bridge's primary advantage over the similar lift bridge is that there is no structure above the shipping channel and thus no height limitation on ship traffic. This is particularly important for sailing vessels. Additionally, the lack of an above-deck structure is considered aesthetically pleasing, a similarity shared with the Chicago-style bascule bridge and the table bridge. However, the presence of the submerged bridge structure limits the draft of vessels in the waterway. The term submersible bridge is also sometimes applied to a non-movable bridge that is designed to withstand submersion and high currents when the water level rises. Such a bridge is more properly called a low water bridge. See also Low water crossing, a non-moving bridge that is sometimes submerged Moveable bridges for a list of other moveable bridge types Table bridge, a similar bridge that moves upward Underwater bridge, a non-moving military bridge that is always submerged References External links Popular Science, November 1943, "Ducking Bridge" Lowers Span To Allow Ships To Pass built in Iraq in 1943 (bottom-right hand side of page) Video of the operation of a submersible bridge at the entrance of the Corinth Canal Bridges Moveable bridges Bridges by structural type
Submersible bridge
Engineering
348
17,106
https://en.wikipedia.org/wiki/KA9Q
KA9Q, also called KA9Q NOS or simply NOS, was a popular early implementation of TCP/IP and associated protocols for amateur packet radio systems and smaller personal computers connected via serial lines. It was named after the amateur radio callsign of Phil Karn, who first wrote the software for a CP/M system and then ported it to DOS on the IBM PC. As the KA9Q package included source code, many radio amateurs modified it, so many different versions were available at the same time. KA9Q was later maintained by Anthony Frost (callsign G8UDV) and Adam Goodfellow. It was ported to the Acorn Archimedes by Jonathan Naylor (G4KLX). Until 1995 it was the standard access software provided by British dial-up internet service provider Demon Internet. Most modern operating systems provide a built-in implementation of TCP/IP protocol; Linux especially includes all the necessary kernel functions and support utilities for TCP/IP over amateur radio systems, as well as basic AX.25 and NET/ROM functionality. Therefore, NOS is regarded as obsolete by its original developer. It still may have its uses for embedded systems that are too small for Linux. KA9Q is also a name for the IP-over-IP Tunneling protocol. References External links Phil Karn's web page on KA9Q NOS Amateur radio software Internet protocols Packet radio
KA9Q
Technology
290
11,553,100
https://en.wikipedia.org/wiki/Melampsora%20medusae
Melampsora medusae is a fungal pathogen that causes a disease of woody plants. The infected trees' leaves turn yellowish-orange. The disease affects mostly conifers, e.g. the Douglas-fir, western larch, tamarack, ponderosa, and lodgepole pine trees, but also some broadleaves, e.g. trembling aspen and poplars. Coniferous hosts are affected in late spring through early August, and trembling aspens and poplars from early summer to late fall. It is one of only two foliage rusts that occur naturally in British Columbia. Life cycle On conifers, symptoms are usually confined to a single year, with the affected needles shed in fall. Melampsora medusae survives the winter as teliospores on the dead leaves of the host, returning in the spring as wind-spread basidiospores that infect new conifers. After about two weeks, aeciospores are produced on the coniferous needles. Those spores serve as inoculum for an infection of live trembling aspen and other poplar trees in another two weeks. Urediniospores are produced on the poplar leaves, where the infection spreads. Winter then comes, and the cycle begins again. References Bibliography Brown, J.S. (1984) Recent invasions of Australia and New Zealand by pathogenic fungi and counter measures. Bulletin OEPP/EPPO Bulletin 14, 417-428. CMI (1991) Distribution Maps of Plant Diseases No. 547 (edition 2). CAB International, Wallingford, UK. Hepting, G.H. (1971) Diseases of forest and shade trees of the United States. Agricultural handbook No. 386, pp. 209, 212, 299, 382, 387. Forest Service, US Department of Agriculture, USA. Kraayenoord, C.W.S. van; Laudon, G.F.; Spies, A.G. (1974) Poplar rusts invade New Zealand. Plant Disease Reporter 58, 423-427. McBride, R.P. (1965) A microbiological control of Melampsora medusae. Canadian Journal of Botany 47, 711-715. McMillan, R. (1972) Poplar leaf rust hazard. New Zealand Journal of Agriculture 125, 47. Nagarajan, S.; Singh, D.V. (1990) Long-distance dispersion of rust pathogens. Annual Review of Plant Pathology 28, 139-153. OEPP/EPPO (1982) Data sheets on quarantine organisms No. 33, Melampsora medusae. Bulletin OEPP/EPPO Bulletin 12 (1). OEPP/EPPO (1990) Specific quarantine requirements. EPPO Technical Documents No. 1008. Pinon, J. (1986) Situation de Melampsora medusae en Europe. Bulletin OEPP/EPPO Bulletin 16, 547-551. Prakash, C.S.; Heather, W.A. (1985) Adaption of Melampsora medusae to increasing temperature and light intensities on a clone of Populus deltoides. Canadian Journal of Botany 64, 834-841. Prakash, C.S.; Heather, W.A. (1989) Inheritance of partial resistance to two races of leaf rust Melampsora medusae in eastern cottonwood, Populus deltoides. Silvae Genetica 38, 90-94. Prakash, C.S.; Thielges, B.A. (1987) Pathogenic variation in Melampsora medusae leaf rust of poplars. Euphytica 36, 563-570. Prakash, C.S.; Thielges, B.A. (1989) Interaction of geographic isolates of Melampsora medusae and Populus: effect of temperature. Canadian Journal of Botany 67, 486-490. Schipper, A.L., Jr.; Dawson, D.H. (1974) Poplar leaf rust - a problem in maximum wood production. Plant Disease Reporter 58, 721-723. Shain, L. (1988) Evidence for formae speciales in poplar leaf rust fungus Melampsora medusae. Mycologia 80, 729-732. Sharma, J.K.; Heather, W.A. (1977) Infection of Populus alba var. hickeliana by Melampsora medusae Thüm. European Journal of Forest Pathology 7, 119-124. Siwecky, R. (1974) The mechanism of poplar leaf resistance to fungal infection.
Polish Academy of Sciences, Annual Report, 1973, 32 pp. Spiers, A.G.; Hopcroft, D.H. (1985) Ultrastructural studies of pathogenesis and uredinial development of Melampsora larici-populina and M. medusae on poplar and M. coleosporioides and M. epitea on willow. New Zealand Journal of Botany 23, 117-133. Spiers, A.G.; Hopcroft, D.H. (1988) Penetration and infection of poplar leaves by urediniospores of Melampsora larici-populina and Melampsora medusae. New Zealand Journal of Botany 26, 101-111. Trench, T.N.; Baxter, A.P.; Churchill, H. (1987) Report of Melampsora medusae on Populus deltoides in Southern Africa. Plant Disease 71, 761. Walker, J. (1975) Melampsora medusae. CMI Descriptions of Pathogenic Fungi and Bacteria No. 480. CAB International, Wallingford, UK. Walker, J.; Hartigan, D. (1972) Poplar rust in Australia. Australian Plant Pathology Society Newsletter 1, 3. Ziller, W.G. (1955) Studies of western tree rusts. II. Melampsora occidentalis and M. albertensis, two needle rusts of Douglas-fir. Canadian Journal of Botany 33, 177-188. Ziller, W.G. (1965) Studies of western tree rusts. VI. The aecial host ranges of Melampsora albertensis, M. medusae and M. occidentalis. Canadian Journal of Botany 43, 217-230. Ziller, W.G. (1974) The tree rusts of western Canada. Forest Service, British Columbia, Canada, Publications No. 1329, pp. 144–147. Forest Service, British Columbia, Canada. External links Index Fungorum USDA ARS Fungal Database Pucciniales Fungal tree pathogens and diseases Fungi described in 1878 Taxa named by Felix von Thümen Fungus species
Melampsora medusae
Biology
1,436
1,799,268
https://en.wikipedia.org/wiki/Architecture%20description%20language
Architecture description languages (ADLs) are used in several disciplines: system engineering, software engineering, and enterprise modelling and engineering. The system engineering community uses an architecture description language as a language and/or a conceptual model to describe and represent system architectures. The software engineering community uses an architecture description language as a computer language to create a description of a software architecture. In the case of a so-called technical architecture, the architecture must be communicated to software developers; a functional architecture is communicated to various stakeholders and users. Some ADLs that have been developed are: Acme (developed by CMU), AADL (standardized by the SAE), C2 (developed by UCI), SBC-ADL (developed by National Sun Yat-Sen University), Darwin (developed by Imperial College London), and Wright (developed by CMU). Overview The ISO/IEC/IEEE 42010 document, Systems and software engineering—Architecture description, defines an architecture description language as "any form of expression for use in architecture descriptions" and specifies minimum requirements on ADLs. The enterprise modelling and engineering community has also developed architecture description languages catered for at the enterprise level. Examples include ArchiMate (now a standard of The Open Group), DEMO, and ABACUS (developed by the University of Technology, Sydney). These languages do not necessarily refer to software components, etc. Most of them, however, refer to an application architecture as the architecture that is communicated to the software engineers. Most of the writing below refers primarily to the perspective of the software engineering community. A standard notation (ADL) for representing architectures helps promote mutual communication, the embodiment of early design decisions, and the creation of a transferable abstraction of a system. Architectures in the past were largely represented by box-and-line drawings annotated with such things as the nature of the components, their properties, the semantics of connections, and overall system behavior. ADLs result from a linguistic approach to the formal representation of architectures, and as such they address its shortcomings. Also important, sophisticated ADLs allow for early analysis and feasibility testing of architectural design decisions. History ADLs have been classified into three broad categories: box-and-line informal drawings, formal architecture description languages, and UML (Unified Modeling Language)-based notations. Box-and-line drawings have long been the predominant means for describing software architectures. While providing useful documentation, their level of informality limited the usefulness of the architecture description. A more rigorous way of describing software architectures was required. Quoting Allen and Garlan (1997), "while these [box-and-line] descriptions may provide useful documentation, the current level of informality limits their usefulness. Since it is generally imprecise what is meant by such architectural descriptions, it may be impossible to analyze an architecture for consistency or determine non-trivial properties of it. Moreover, there is no way to check that a system implementation is faithful to its architectural design."
A similar conclusion is drawn in Perry and Wolf (1992), which reports that: "Aside from providing clear and precise documentation, the primary purpose of specifications is to provide automated analysis of the documents and to expose various kinds of problems that would otherwise go undetected." Since then, a thread of research on formal languages for software architecture description has been carried out. Dozens of formal ADLs have been proposed, each characterized by different conceptual architectural elements, different syntax or semantics, a focus on a specific operational domain, or suitability for only specific analysis techniques. For example, domain-specific ADLs have been presented to deal with embedded and real-time systems (such as AADL, EAST-ADL, and EADL), control-loop applications (DiaSpec), product line architectures (Koala), and dynamic systems (Π-ADL). Analysis-specific ADLs have been proposed to deal with availability, reliability, security, resource consumption, data quality and real-time performance analysis (AADL), behavioral analysis (Fractal), and trustworthiness analysis (TADL). However, these efforts have not seen the desired adoption by industrial practice. Some reasons for this lack of industry adoption have been analyzed by Woods and Hilliard, Pandey, Clements, and others: formal ADLs have rarely been integrated into the software life-cycle, they are seldom supported by mature tools, they are scarcely documented, and they focus on very specific needs, leaving no space for extensions that enable the addition of new features. As a way to overcome some of those limitations, UML has been indicated as a possible successor of existing ADLs. Many proposals have been presented to use or extend the UML to more properly model software architectures. A 2013 study found that practitioners were generally satisfied with the design capabilities of the ADLs they used, but had several major concerns: ADLs lacked analysis features and the ability to define extra-functional properties; those used in practice mostly originated from industrial development rather than academic research; and they needed more formality and better usability. Characteristics There is a large variety in ADLs developed by either academic or industrial groups. Many languages were not intended to be an ADL, but they turn out to be suitable for representing and analyzing an architecture. In principle ADLs differ from requirements languages, because ADLs are rooted in the solution space, whereas requirements describe problem spaces. They differ from programming languages, because ADLs do not bind architectural abstractions to specific point solutions. Modeling languages represent behaviors, whereas ADLs focus on representation of components. However, there are domain-specific modeling languages (DSMLs) that focus on representation of components.
Minimal requirements The language must: Be suitable for communicating an architecture to all interested parties Support the tasks of architecture creation, refinement and validation Provide a basis for further implementation, so it must be able to add information to the ADL specification to enable the final system specification to be derived from the ADL Provide the ability to represent most of the common architectural styles Support analytical capabilities or provide quick generation of prototype implementations ADLs have in common: Graphical syntax, often with a textual form, and a formally defined syntax and semantics Features for modeling distributed systems Little support for capturing design information, except through general-purpose annotation mechanisms Ability to represent hierarchical levels of detail, including the creation of substructures by instantiating templates ADLs differ in their ability to: Handle real-time constructs, such as deadlines and task priorities, at the architectural level Support the specification of different architectural styles (few handle object-oriented class inheritance or dynamic architectures) Support the analysis of the architecture Handle different instantiations of the same architecture, in relation to product line architectures Positive elements of ADL ADLs are a formal way of representing architecture ADLs are intended to be both human and machine readable ADLs support describing a system at a higher level than previously possible ADLs permit analysis and assessment of architectures, for completeness, consistency, ambiguity, and performance ADLs can support automatic generation of software systems Negative elements of ADL There is no universal agreement on what ADLs should represent, particularly as regards the behavior of the architecture Representations currently in use are relatively difficult to parse and are not supported by commercial tools Most ADLs tend to be very vertically optimized toward a particular kind of analysis Common concepts of architecture The ADL community generally agrees that software architecture is a set of components and the connections among them. But there are different kinds of architectures, such as: Object connection architecture Configuration consists of the interfaces and connections of an object-oriented system Interfaces specify the features that must be provided by modules conforming to an interface Connections represented by interfaces together with the call graph Conformance usually enforced by the programming language Decomposition: associating interfaces with unique modules Interface conformance: static checking of syntactic rules Communication integrity: visibility between modules Interface connection architecture Expands the role of interfaces and connections Interfaces specify both "required" and "provided" features Connections are defined between "required" features and "provided" features Consists of interfaces, connections and constraints Constraints restrict behavior of interfaces and connections in an architecture Constraints in an architecture map to requirements for a system Most ADLs implement an interface connection architecture; a minimal sketch of this idea follows the next section. Architecture vs. design Architecture, in the context of software systems, is roughly divided into categories, primarily software architecture, network architecture, and systems architecture. Within each of these categories, there is a tangible but fuzzy distinction between architecture and design.
To draw this distinction as universally and clearly as possible, it is best to consider design as a noun rather than as a verb, so that the comparison is between two nouns. Design is the abstraction and specification of patterns and organs of functionality that have been or will be implemented. Architecture is both a degree higher in abstraction and coarser in granularity. Consequently, architecture is also more topological (i.e. overall structure and relationship between components) in nature than design (i.e. specific details and implementation), in that it specifies where major components meet and how they relate to one another. Architecture focuses on the partitioning of major regions of functionality into high-level components, where they will physically or virtually reside, what off-the-shelf components may be employed effectively, in general what interfaces each component will expose, what protocols will be employed between them, and what practices and high-level patterns may best meet extensibility, maintainability, reliability, durability, scalability, and other non-functional objectives. Design is a detailing of these choices and a more concrete clarification of how functional requirements will be met through the delegation of pieces of that functionality to more granular components and how these smaller components will be organized within the larger ones. Oftentimes, a portion of architecture is done during the conceptualization of an application, system, or network and may appear in the non-functional sections of requirement documentation. Canonically, design is not specified in requirements, but is rather driven by them. The process of defining an architecture may involve heuristics, acquired by the architect or architectural team through experience within the domain. As with design, architecture often evolves through a series of iterations, and just as the wisdom of a high-level design is often tested when low-level design and implementation occurs, the wisdom of an architecture is tested during the specification of a high-level design. In both cases, if the wisdom of the specification is called into question during detailing, another iteration of either architecture or design, as the case may be, may become necessary. In summary, the primary differences between architecture and design are ones of granularity and abstraction, and (consequently) chronology. (Architecture generally precedes design, although overlap and circular iteration is a common reality.)
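Returning to the interface connection architecture described earlier: here is a minimal Python sketch of that idea (the types and names are my own invention, not drawn from any particular ADL), in which interfaces declare both provided and required features and a constraint check enforces conformance of connections:

```python
from dataclasses import dataclass, field

@dataclass
class Interface:
    """An interface in an interface connection architecture: it names
    both the features a component provides and those it requires."""
    name: str
    provides: set[str] = field(default_factory=set)
    requires: set[str] = field(default_factory=set)

@dataclass
class Connection:
    """Binds one component's required feature to another's provided feature."""
    source: Interface   # the side that requires the feature
    target: Interface   # the side that provides the feature
    feature: str

def check(connections: list[Connection]) -> list[str]:
    """A constraint on the architecture: every connected feature must
    actually be required by the source and provided by the target."""
    errors = []
    for c in connections:
        if c.feature not in c.source.requires:
            errors.append(f"{c.source.name} does not require '{c.feature}'")
        if c.feature not in c.target.provides:
            errors.append(f"{c.target.name} does not provide '{c.feature}'")
    return errors

ui = Interface("UI", requires={"query"})
db = Interface("Database", provides={"query"})
print(check([Connection(ui, db, "query")]))   # [] -> the configuration is consistent
```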
Examples ArchiMate Architecture Analysis & Design Language C4 model (software) Darwin (ADL) EAST-ADL Wright (ADL) Approaches to system architecture Academic approach focus on analytic evaluation of architectural models individual models rigorous modeling notations powerful analysis techniques depth over breadth special-purpose solutions Industrial approach focus on wide range of development issues families of models practicality over rigor architecture as the big picture in development breadth over depth general-purpose solutions See also AADL Darwin Scripting language Hardware description language References External links Architecture Description Languages // Mälardalen University ABACUS ACME ADML Aesop AO-ADL ArchiMate An example of an ADL for enterprise architecture ByADL (Build Your ADL) - University of L'Aquila C2 SADL DAOP-ADL DEMO Another example of an enterprise architecture ADL DiaSpec an approach and tool to generate a distributed framework from a software architecture DUALLy Rapide SSEP Unicon Wright Computer languages Systems architecture Software architecture Programming language classification Modeling languages
Architecture description language
Technology,Engineering
2,398
31,324,140
https://en.wikipedia.org/wiki/JAUS%20Tool%20Set
The JAUS Tool Set (JTS) is a software engineering tool for the design of software services used in a distributed computing environment. JTS provides a graphical user interface (GUI) and supporting tools for the rapid design, documentation, and implementation of service interfaces that adhere to the Society of Automotive Engineers' standard AS5684A, the JAUS Service Interface Design Language (JSIDL). JTS is designed to support the modeling, analysis, implementation, and testing of the protocol for an entire distributed system. Overview The JAUS Tool Set (JTS) is a set of open source software specification and development tools accompanied by an open source software framework to develop Joint Architecture for Unmanned Systems (JAUS) designs and compliant interface implementations for simulations and control of robotic components per SAE-AS4 standards. JTS consists of the following components: GUI based Service Editor: The Service Editor (referred to as the GUI in this document) provides a user-friendly interface with which a system designer can specify and analyze formal specifications of Components and Services defined using the JAUS Service Interface Definition Language (JSIDL). Validator: A syntactic and semantic validator, integrated into the GUI, provides on-the-fly validation of specifications entered (or imported) by the user with respect to JSIDL syntax and semantics. Specification Repository: A repository (or database) that is integrated into the GUI that allows for the storage of and encourages the reuse of existing formal specifications. C++ Code Generator: The Code Generator automatically generates C++ code that has a 1:1 mapping to the formal specifications. The generated code includes all aspects of the service, including the implementations of marshallers and unmarshallers for messages, and implementations of finite-state machines for protocol behavior that are effectively decoupled from application behavior. Document Generator: The Document Generator automatically generates documentation for sets of Service Definitions. Documents may be generated in several formats. Software Framework: The software framework implements the transport layer specification AS5669A, and provides the interfaces necessary to integrate the auto-generated C++ code with the transport layer implementation. Present transport options include UDP and TCP in wired or wireless networks, as well as serial connections. The transport layer itself is modular, and allows end-users to add additional support as needed. Wireshark Plugin: The Wireshark plugin implements a plugin to the popular network protocol analyzer called Wireshark. This plugin allows for the live capture and offline analysis of JAUS message-based communication (traffic across the wire) at runtime. A built-in repository facilitates easy reuse of service interfaces and implementations. The JAUS Tool Set can be downloaded from www.jaustoolset.org. User documentation and a community forum are also available at the site. Release history Following a successful Beta test, Version 1.0 of the JAUS Tool Set was released in July 2010. The initial offering focused on core areas of User Interface, HTML document generation, C++ code generation, and the software framework. The Version 1.1 update was released in October 2010. In addition to bug fixes and UI improvements, this version offered several important upgrades, including enhancements to the Validator, Wireshark plug-in, and generated code.
The JTS 2.0 release is scheduled for the second quarter of 2011 and further refines the Tool Set functionality: Protocol Validation: Currently, JTS provides validation for message creation, to ensure users cannot create invalid message specifications. That capability does not currently exist for protocol definitions, but is being added. This will help ensure that users create all necessary elements of a service definition, and reduce user error. C# and Java Code Generation: Currently, JTS generates cross-platform C++ code. However, other languages including Java and C# are seeing a dramatic increase in their use in distributed systems, particularly in the development of graphical clients to embedded services. MS Word Document Generation: HTML and JSIDL output is supported, but native Office-Open-XML (OOXML) based MS Word generation has advantages in terms of output presentation, and ease of use for integration with other documents. Therefore, we plan to integrate MS Word service document generation. In addition, the development team has several additional goals that are not yet scheduled for a particular release window: Protocol Verification: This involves converting the JSIDL definition of a service into a PROMELA model, for validation by the SPIN model checking tool. Using PROMELA to model client and server interfaces will allow developers to formally validate JAUS services. End User Experience: We plan to conduct formal User Interface testing. This involves defining a set of tasks and use cases, asking users with various levels of JAUS experience to accomplish those tasks, and measuring performance and collecting feedback, to look for areas where the overall user experience can be improved. Improved Service Re-Use: JSIDL allows for inheritance of protocol descriptions, much like object-oriented programming languages allow child classes to re-use and extend behaviors defined by the parent class. At present, the generated code 'flattens' these state machines into a series of nested states, which gives the correct interface behavior, but only if each single leaf (child) service is generated within its own component. This limits service re-use and can lead to a copy-and-paste of the same implementation across multiple components. The team is evaluating other inheritance solutions that would allow for multiple leaf (child) services to share access to a common parent, but at present the approach is sufficient to address the requirements of the JAUS Core Service Set. Domains and application The JAUS Tool Set is based on the JAUS Service Interface Definition Language (JSIDL), which was originally developed for application within the unmanned systems, or robotics, communities. As such, JTS has quickly gained acceptance as a tool for generation of services and interfaces compliant with the SAE AS-4 "JAUS" publications. Although usage statistics are not available, the Tool Set has been downloaded by representatives of the US Army, Navy, Marines, and numerous defense contractors. It was also used in a commercial product called the JAUS Expansion Module sold by DeVivo AST, Inc. Since the JSIDL schema is independent of the data being exchanged, however, the Tool Set can be used for the design and implementation of a Service Oriented Architecture for any distributed systems environment that uses binary encoded message exchange.
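As a rough illustration of the binary encoded message exchange such services rely on, the sketch below hand-writes a marshaller and unmarshaller for a hypothetical fixed-layout message using Python's struct module. The field names and layout are invented for illustration and do not correspond to any actual JAUS or JSIDL message definition; in practice, JTS generates equivalent C++ code automatically from the formal specifications.

```python
import struct

# Hypothetical fixed-layout message: little-endian 16-bit message ID,
# 32-bit sequence number, and a 64-bit float payload. This layout is
# invented for illustration and is NOT an actual JAUS/JSIDL message.
MESSAGE_FORMAT = "<HId"

def marshal(message_id, sequence, value):
    """Pack the fields into a byte string for transmission."""
    return struct.pack(MESSAGE_FORMAT, message_id, sequence, value)

def unmarshal(data):
    """Unpack a received byte string back into its fields."""
    return struct.unpack(MESSAGE_FORMAT, data)

wire_bytes = marshal(0x4202, 7, 3.14)   # 14 bytes on the wire
print(unmarshal(wire_bytes))            # (16898, 7, 3.14)
```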
JSIDL is built on a two-layered architecture that separates the application layer and the transport layer, effectively decoupling the data being exchanged from the details of how that data moves from component to component. Furthermore, since the schema itself is broadly generic, it is possible to define messages for any number of domains, including but not limited to industrial control systems, remote monitoring and diagnostics, and web-based applications. Licensing JTS is released under the open source BSD license. The JSIDL Standard is available from the SAE. The Jr Middleware on which the Software Framework (Transport Layer) is based is open source under the LGPL. Other packages distributed with JTS may have different licenses. Sponsors Development of the JAUS Tool Set was sponsored by several United States Department of Defense organizations: Office of Under Secretary of Defense for Acquisition, Technology & Logistics / Unmanned Warfare. Navy Program Executive Officer Littoral and Mine Navy Program Executive Officer Unmanned Aviation and Strike Weapons Office of Naval Research Air Force Research Lab References External links jaustoolset.org: Homepage for the JAUS Tool Set sae.org: Publishers of the SAE AS-4 JAUS family of standards, including JSIDL (AS-5684) jrmiddleware.org: Homepage for the JR Middleware, the LGPL source code used by the JTS Software Framework Vehicle design Programming tools
JAUS Tool Set
Engineering
1,596
36,115,409
https://en.wikipedia.org/wiki/Weyl%E2%80%93von%20Neumann%20theorem
In mathematics, the Weyl–von Neumann theorem is a result in operator theory due to Hermann Weyl and John von Neumann. It states that, after the addition of a compact operator or Hilbert–Schmidt operator of arbitrarily small norm, a bounded self-adjoint operator or unitary operator on a Hilbert space is conjugate by a unitary operator to a diagonal operator. The results are subsumed in later generalizations for bounded normal operators due to David Berg (1971, compact perturbation) and Dan-Virgil Voiculescu (1979, Hilbert–Schmidt perturbation). The theorem and its generalizations were one of the starting points of operator K-homology, developed first by Lawrence G. Brown, Ronald Douglas and Peter Fillmore and, in greater generality, by Gennadi Kasparov. In 1958 Kuroda showed that the Weyl–von Neumann theorem is also true if the Hilbert–Schmidt class is replaced by any Schatten class Sp with p ≠ 1. For S1, the trace-class operators, the situation is quite different. The Kato–Rosenblum theorem, proved in 1957 using scattering theory, states that if two bounded self-adjoint operators differ by a trace-class operator, then their absolutely continuous parts are unitarily equivalent. In particular, if a self-adjoint operator has absolutely continuous spectrum, no perturbation of it by a trace-class operator can be unitarily equivalent to a diagonal operator. References Operator theory Theorems in functional analysis K-theory
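For reference, a standard formal statement of the bounded self-adjoint case (paraphrased here; the precise formulation varies by source) can be written as:

```latex
% Weyl--von Neumann theorem, bounded self-adjoint case.
% Here \|K\|_2 denotes the Hilbert--Schmidt norm.
For every bounded self-adjoint operator $A$ on a separable Hilbert space $H$
and every $\varepsilon > 0$, there exists a self-adjoint Hilbert--Schmidt
operator $K$ with
\[
  \|K\|_2 < \varepsilon
\]
such that $A + K$ is diagonal, i.e.\ $H$ admits an orthonormal basis
consisting of eigenvectors of $A + K$.
```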
Weyl–von Neumann theorem
Mathematics
324
28,222,069
https://en.wikipedia.org/wiki/Garden%20square
A garden square is a type of communal garden in an urban area wholly or substantially surrounded by buildings; commonly, the term continues to be applied to public and private parks formed when such a garden becomes accessible to the public at large. The archetypal garden square is surrounded by tall terraced houses and other types of townhouse. Because it is designed for the amenity of surrounding residents, it is subtly distinguished from a town square designed to be a public gathering place: due to its inherent private history, it may have a pattern of dedicated footpaths and tends to have considerably more plants than hard surfaces or large monuments. Propagation At their conception in the early 17th century, such gardens were private communal amenities for the residents of the overlooking houses, akin to a garden courtyard within a palace or community. Such community courtyards date back to at least Ur in 2000 BC, where two-storey houses were built of fired brick around an open square. Kitchen, working, and public spaces were located on the ground floor, with private rooms located upstairs. In the 20th century, many garden squares that were previously accessible only to defined residents became accessible to the public. Those in central urban locations, such as Leicester Square in London's West End, have become indistinguishable from town squares. Others, while publicly accessible, are largely used by local residents and retain the character of garden squares or small communal parks. Many private squares, even in busy locations, remain private, such as Portman Square in Marylebone in London, despite its proximity to London's busiest shopping districts. Occurrence Europe United Kingdom London is famous for them; they are described as one of the glories of the capital. Many were built or rebuilt during the late eighteenth and early nineteenth centuries, at the height of Georgian architecture, and are surrounded by townhouses. In 1931, the UK Parliament passed the London Squares Preservation Act. The act provided enhanced legal protection to garden squares and other public spaces, ensuring they were preserved against inappropriate development and remained accessible for community enjoyment. Large projects, such as the Bedford Estate, included garden squares in their development. The Notting Hill and Bloomsbury neighbourhoods both have many garden squares, with the former mostly still restricted to residents, and the latter open to all. Other UK cities prominent in the Georgian era, such as Edinburgh, Bath, Bristol and Leeds, have several garden squares. Householders with access to a private garden square are commonly required to pay a maintenance levy. Normally the charge is set annually by a garden committee. Sometimes private garden squares are opened to the public, such as during Open Garden Squares Weekend. France In Paris Privately owned squares which survived the decades after the French Revolution and Haussmann's 19th-century renovation of Paris include the Place des Vosges and Square des Épinettes in Paris. The Place des Vosges was a fashionable and expensive square to live in during the 17th and 18th centuries, and one of the central reasons that Le Marais district became so fashionable for French nobility. It was inaugurated in 1612 with a grand carrousel to celebrate the engagement of Louis XIII to Anne of Austria and is a prototype of the residential squares of European cities that were to come.
What was new about the Place Royale, as it was known in 1612, was that the house fronts were all built to the same design, probably by Baptiste du Cerceau. Among town squares, one that is similarly green but has been publicly accessible from the outset is the Square René Viviani. Gardens substantially cover a few of the famous Places in the capital; the majority, however, are paved and replete with hard materials, such as the Place de la Concorde. Inspired by ecological interests and a 21st-century focus on pollution mitigation, an increasing number of the Places in Paris today have a focal tree, surrounding raised flower beds, and/or rows of trees, such as the Place de la République. The enclosed garden terraces (French: jardins terrasses) and courtyards (French: cours) of some French former palaces have resulted in redevelopments into spaces equivalent to garden squares. The same former single-owner scenario applies to at least one garden square in London (Coleridge Square). Outside of Paris Grandiose instances of garden-use town squares are a part of many French cities; others opt for solid-material town squares. Belgium The Square de Meeûs and Square Orban are notable examples in Brussels. Ireland Dublin has several Georgian examples, including Merrion Square, Fitzwilliam Square, Mountjoy Square, St Stephen's Green and Parnell Square. The Americas United States Perhaps the most famous garden square in the United States is Gramercy Park in southern Midtown Manhattan. Famously, it has remained private and gated throughout its existence; possession of a key to the park is a jealously guarded privilege that only certain local residents enjoy. The tradition of fee simple land ownership in American cities has made collective amenities such as garden squares comparatively rare. Very few sub-dividers and developers included them in plats during the 19th century, with notable exceptions below. Rittenhouse Square in Center City, Philadelphia encases a public garden, one of the five original open-space parks planned by William Penn and his surveyor Thomas Holme during the late 17th century. It was first named Southwest Square. Nearby Fitler Square is a similar garden square named for late-19th-century Philadelphia mayor Edwin Henry Fitler shortly after his death in 1896. The square is cared for through a public-private partnership between the Department of Parks and Recreation and the Fitler Square Improvement Association. Boston has dozens of squares, some of which are mainly residential in use. The Kingstowne development in Fairfax County, Virginia, near Washington, DC, contains several townhouse complexes built around garden squares. Africa In Africa, garden squares are rare. Many squares and parks in Africa were constructed during colonial rule, along with European-styled architecture. South Africa A well-known square like this is Greenmarket Square, in the center of Cape Town, South Africa, which previously hosted more townhouses at its edges but has been mostly paved over. Asia Garden squares generally do not occur throughout Asia. Parks usually fill the need for urban green spaces, while historic and modern gardens exist as attractions, not central communal spaces.
Australia and New Zealand Trafalgar Square, Nelson Victory Square, Nelson See also Communal garden Private park Courtyard Urban open space Architecture of the United Kingdom Parks and open spaces in London List of garden squares in London Squares in London Terraced houses in the United Kingdom Townhouse (Great Britain) References Town squares Urban planning Types of garden Town and country planning in the United Kingdom
Garden square
Engineering
1,342
33,902,732
https://en.wikipedia.org/wiki/Greenwood%20statistic
The Greenwood statistic is a spacing statistic and can be used to evaluate clustering of events in time or locations in space. Definition In general, for a given sequence of events in time or space, the statistic is given by $G_n = \sum_{i=1}^{n+1} D_i^2$, where $D_i$ represents the interval between events or points in space and is a number between 0 and 1 such that the sum of all $D_i$ equals 1. Where intervals are given by numbers that do not represent a fraction of the time period or distance, the Greenwood statistic is modified and is given by $G_n = \sum_{i=1}^{n+1} \left( \frac{d_i}{D} \right)^2$, where $D = \sum_{i=1}^{n+1} d_i$ and $d_i$ represents the length of the $i$th interval, which is either the time between events or the distance between points in space. A reformulation of the statistic yields $G_n = \frac{c_v^2 + 1}{n+1}$, where $c_v$ is the coefficient of variation of the n + 1 interval lengths. Properties The Greenwood statistic is a comparative measure that has a range of values between 0 and 1. For example, applying the Greenwood statistic to the arrival of 11 buses in a given time period of, say, 1 hour, where in the first example all eleven buses arrived at a given point 6 minutes apart, would give a result of roughly 0.10. However, in the second example, if the buses became bunched up or clustered so that 6 buses arrived 10 minutes apart and then 5 buses arrived 2 minutes apart in the last 10 minutes, the result is roughly 0.17. The result for a random distribution of 11 bus arrival times in an hour will fall somewhere between 0.10 and 0.17. This can be used to tell how well a bus system is running, and in a similar way the Greenwood statistic has also been used to determine how and where genes are placed in the chromosomes of living organisms. This research showed that there is a definite order to where genes are placed, particularly with regard to what function the genes perform, and this is important in the science of genetics. References Spatial analysis Statistical deviation and dispersion
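To make the bus example above concrete, here is a minimal sketch that computes the statistic from raw interval lengths using the normalized form given in the definition. Note that the exact value for the clustered case depends on how the spacings are counted and normalized, so the figure printed here is illustrative rather than a reproduction of the article's 0.17:

```python
def greenwood(intervals):
    """Greenwood statistic: sum of squared interval fractions.

    `intervals` are raw interval lengths (e.g. minutes between events);
    they are normalized so that the fractions sum to 1.
    """
    total = sum(intervals)
    return sum((d / total) ** 2 for d in intervals)

# 11 buses arriving 6 minutes apart -> 10 equal gaps: minimum clustering.
evenly_spaced = [6] * 10
print(round(greenwood(evenly_spaced), 3))  # 0.1

# Clustered arrivals: 5 gaps of 10 minutes, then 5 gaps of 2 minutes.
clustered = [10] * 5 + [2] * 5
print(round(greenwood(clustered), 3))  # ~0.144, larger than 0.1
```

As expected, bunching the arrivals increases the statistic relative to the evenly spaced baseline.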
Greenwood statistic
Physics
386
21,424,701
https://en.wikipedia.org/wiki/Defaunation
Defaunation is the global, local, or functional extinction of animal populations or species from ecological communities. The growth of the human population, combined with advances in harvesting technologies, has led to more intense and efficient exploitation of the environment. This has resulted in the depletion of large vertebrates from ecological communities, creating what has been termed "empty forest". Defaunation differs from extinction; it includes both the disappearance of species and declines in abundance. Defaunation effects were first implied at the Symposium of Plant-Animal Interactions at the University of Campinas, Brazil in 1988 in the context of Neotropical forests. Since then, the term has gained broader usage in conservation biology as a global phenomenon. It is estimated that more than 50 percent of all wildlife has been lost in the last 40 years. In 2016, it was estimated that by 2020, 68% of the world's wildlife would be lost. In South America, there is believed to be a 70 percent loss. A 2021 study found that only around 3% of the planet's terrestrial surface is ecologically and faunally intact, with healthy populations of native animal species and little to no human footprint. In November 2017, over 15,000 scientists around the world issued a second warning to humanity, which, among other things, urged for the development and implementation of policies to halt "defaunation, the poaching crisis, and the exploitation and trade of threatened species." Drivers Overexploitation The intensive hunting and harvesting of animals threaten endangered vertebrate species across the world. Game vertebrates are considered valuable products of tropical forests and savannas. In Brazilian Amazonia, 23 million vertebrates are killed every year; large-bodied primates, tapirs, white-lipped peccaries, giant armadillos, and tortoises are some of the animals most sensitive to harvest. Overhunting can reduce the local population of such species by more than half, as well as reducing population density. Populations located nearer to villages are significantly more at risk of depletion. Abundance of local game species declines as density of local settlements, such as villages, increases. Hunting and poaching may lead to local population declines or extinction in some species. Most affected species undergo pressure from multiple sources but the scientific community is still unsure of the complexity of these interactions and their feedback loops. One case study in Panama found an inverse relationship between poaching intensity and abundance for 9 of 11 mammal species studied. In addition, preferred game species experienced greater declines and had higher spatial variation in abundance. Habitat destruction and fragmentation Human population growth results in changes in land use, which can cause natural habitats to become fragmented, altered, or destroyed. Large mammals are often more vulnerable to extinction than smaller animals because they require larger home ranges and thus are more prone to suffer the effects of deforestation. Large species such as elephants, rhinoceroses, large primates, tapirs and peccaries are the first animals to disappear in fragmented rainforests. A case study from Amazonian Ecuador analyzed two oil-road management approaches and their effects on the surrounding wildlife communities. The free-access road had forests that were cleared and fragmented and the other had enforced access control. 
Fewer species were found along the first road, with density estimates almost 80% lower than at the second site, which had minimal disturbance. This finding suggests that disturbances affected the local animals' willingness and ability to travel between patches. Fragmentation lowers populations while increasing extinction risk when the remaining habitat size is small. When there is more unfragmented land, there is more habitat for more diverse species. A larger land patch also means it can accommodate more species with larger home ranges. However, when patch size decreases, there is an increase in the number of isolated fragments that can remain unoccupied by local fauna. If this persists, species may become extinct in the area. A study on deforestation in the Amazon looked at two patterns of habitat fragmentation: "fish-bone" in smaller properties and another unnamed large-property pattern. The large-property pattern contained fewer fragments than the smaller fish-bone pattern. The results suggested that higher levels of fragmentation within the fish-bone pattern led to the loss of species and decreased diversity of large vertebrates. Human impacts, such as the fragmentation of forests, may cause large areas to lose the ability to maintain biodiversity and ecosystem function due to loss of key ecological processes. This can consequently cause changes within environments and skew evolutionary processes. In North America, wild bird populations have declined by 29%, or around three billion birds, since 1970, largely as the result of anthropogenic causes such as habitat loss for human use, the primary driver of the decline, along with widespread use of neonicotinoid insecticides and the proliferation of domesticated cats allowed to roam outdoors. Invasive species Human influences, such as colonization and agriculture, have caused species to become distributed outside of their native ranges. Fragmentation also has cascading effects on native species, beyond reducing habitat and resource availability; it leaves areas vulnerable to non-native invasions. Invasive species can out-compete or directly prey upon native species, as well as alter the habitat so that native species can no longer survive. Among extinct animal species for which the cause of extinction is known, over 50% were affected by invasive species. For 20% of extinct animal species, invasive species are the only cited cause of extinction. Invasive species are the second-most important cause of extinction for mammals.
International trade in wild animals, as well as extensive logging, mining and agriculture operations, drives the decline and extinction of numerous species. Ecological impacts Genetic loss Inbreeding and genetic diversity loss often occur in endangered species populations because they have small and/or declining populations. Loss of genetic diversity lowers the ability of a population to deal with change in its environment and can make individuals within the community homogeneous. If this occurs, these animals are more susceptible to disease and other occurrences that may target a specific genome. Without genetic diversity, one disease could eradicate an entire species. Inbreeding lowers reproduction and survival rates. It is suggested that these genetic factors contribute to the extinction risk in threatened/endangered species. Seed dispersal Effects on plants and forest structure The consequences of defaunation can be expected to affect the plant community. There are three non-mutually exclusive conclusions as to the consequences on tropical forest plant communities: If seed dispersal agents are targeted by hunters, the effectiveness and amount of dispersal for those plant species will be reduced The species composition of the seedling and sapling layers will be altered by hunting, and Selective hunting of medium/large-sized animals instead of small-sized animals will lead to different seed predation patterns, with an emphasis on smaller seeds One recent study analyzed seedling density and composition from two areas, Los Tuxtlas and Montes Azules. Los Tuxtlas, which is affected more by human activity, showed higher seedling density and a smaller average number of different species than the other area. Results suggest that an absence of vertebrate dispersers can change the structure and diversity of forests. As a result, a plant community that relies on animals for dispersal could potentially have altered biodiversity, species dominance, survival, demography, and spatial and genetic structure. Poaching is likely to alter plant composition because the interactions between game and plant species vary in strength. Some game species interact strongly, weakly, or not at all with particular plant species. A change in plant species composition is likely to result because the net effect of the removal of game species varies among the plant species they interact with. Effects on small-bodied seed dispersers and predators As large-bodied vertebrates are increasingly lost from seed-dispersal networks, small-bodied seed dispersers (e.g. bats, birds, dung beetles) and seed predators (e.g. rodents) are affected. Defaunation leads to reduced species diversity. This is due to relaxed competition; small-bodied species normally compete with large-bodied vertebrates for food and other resources. As an area becomes defaunated, dominant small-bodied species take over, crowding out other similar species and leading to an overall reduced species diversity. The loss of species diversity is reflective of a larger loss of biodiversity, which has consequences for the maintenance of ecosystem services. The quality of the physical habitat may also suffer. Bird and bat species (many of which are small-bodied seed dispersers) rely on mineral licks as a source of sodium, which is not available elsewhere in their diets. In defaunated areas in the Western Amazon, mineral licks are more thickly covered by vegetation and have lower water availability. Bats were significantly less likely to visit these degraded mineral licks.
The degradation of such licks will thus negatively affect the health and reproduction of bat populations. Defaunation has negative consequences for seed dispersal networks as well. In the western Amazon, birds and bats have separate diets and thus form separate guilds within the network. It is hypothesized that large-bodied vertebrates, being generalists, connect separate guilds, creating a stable, resilient network. Defaunation results in a highly modular network in which specialized frugivores instead act as the connector hubs. Food webs According to a 2022 study published in Science, terrestrial mammal food web links have declined by 53% over the past 130,000 years as a result of human population expansion and accompanying defaunation. Ecosystem services Changes in predation dynamics, seed predation, seed dispersal, carrion removal, dung removal, vegetation trampling, and other ecosystem processes as a result of defaunation can affect ecosystem supporting and regulatory services, such as nutrient cycling and decomposition, crop pollination, pest control, and water quality. Conservation Efforts against defaunation include wildlife overpasses and riparian corridors, both of which are forms of wildlife crossing. Wildlife overpasses are used specifically to protect many animal species from roads. Many countries use them, and they have been found to be very effective in protecting species and allowing forests to remain connected. These overpasses look like bridges of forest that cross over roads, like a footbridge for humans, allowing animals to migrate safely from one side of the forest to the other after a road has cut off the original connectivity. A study by Pell and Jones on bird use of these corridors in Australia concluded that many birds did, in fact, use the corridors to travel from one side of the forest to the other; although they did not spend much time in the corridors specifically, they commonly used them. Riparian corridors are very similar to overpasses, except that they lie on flat land rather than on bridges; they likewise work as connective "bridges" between fragmented pieces of forest. One study connected the corridors with bird habitat and their use for seed dispersal. The conclusions of this study showed that some species of birds are highly dependent on these corridors as connections between forest patches, as flying across open land is not ideal for many species. Overall, both of these studies agree that some sort of connectivity needs to be established between fragments in order to keep the forest ecosystem in the best health possible, and that such corridors have in fact been very effective. Marine Defaunation in the ocean has occurred later and less intensely than on land. A relatively small number of marine species have been driven to extinction. However, many species have undergone local, ecological, and commercial extinction. Most large marine animal species still exist, such that the size distribution of global species assemblages has changed little since the Pleistocene, but individuals of each species are smaller on average, and overfishing has caused reductions in genetic diversity. Most extinctions and population declines to date have been driven by human overexploitation. Overfishing has reduced populations of oceanic sharks and rays by 71% since 1970, with more than three quarters of species facing extinction. Consequences Marine defaunation has a wide array of effects on ecosystem structure and function.
The loss of animals can have both top-down (cascading) and bottom-up effects, as well as consequences for biogeochemical cycling and ecosystem stability. Two of the most important ecosystem services threatened by marine defaunation are the provision of food and coastal storm protection. See also Anthropocene Anthropocentrism Bushmeat Holocene extinction Human impact on the environment Human overpopulation Insect population decline References Further reading External links Mongobay.com : Defaunation, like deforestation, threatens global biodiversity: Interview with Rodolfo Dirzo (archived 13 July 2009) Ecology Biodiversity Environmental conservation
Defaunation
Biology
2,764
40,292,886
https://en.wikipedia.org/wiki/Bruceanol
Bruceanols are quassinoids isolated from Brucea antidysenterica. Bruceanols Bruceanol A Bruceanol B Bruceanol C Bruceanol D Bruceanol E Bruceanol F Bruceanol G Bruceanol H References Quassinoids
Bruceanol
Chemistry
60
9,737,431
https://en.wikipedia.org/wiki/Hexamilion%20wall
The Hexamilion wall ("six-mile wall") was a defensive wall constructed across the Isthmus of Corinth, guarding the only land route onto the Peloponnese peninsula from mainland Greece. It was constructed between AD 408 and 450, under the reign of Theodosius II. History Early fortifications The Hexamilion stands at the most recent end of a long series of attempts to fortify the isthmus stretching back to perhaps the Mycenaean period. Many of the Peloponnesian cities wanted to pull back and fortify the isthmus instead of making a stand at Thermopylae when Xerxes invaded in 480 BC (Herodotus' Histories 7.206). The issue arose again before the Battle of Salamis (Herodotus 8.40, 49, 56). Although the concept of a "Fortress Peloponnese" had been repeatedly suggested, fortification of the isthmus was of no utility without control of the sea, as Herodotus notes (7.138). The Hexamilion and its history The wall was constructed between AD 408 and 450, in the reign of Theodosius II, during the time of the great Barbarian invasions into the Roman Empire. Its purpose was to protect the Peloponnese from invasion from the north. The attack of Alaric on Greece in 396 or the sack of Rome in 410 by the Visigoths may have motivated its construction. The wall ran from the Gulf of Corinth to the Saronic Gulf, covering a distance of between 7,028 and 7,760 meters. The fortress contained two gates (north and south), of which the northern gate functioned as the formal entrance to the Peloponnese. In the reign of Justinian, the wall was fortified with additional towers, reaching a total number of 153, with forts at either end and the construction of Justinian's Fortress at Isthmia. The building of the Fortress at Isthmia was left mostly to autonomous work crews that, while following the same general instructions and using the same materials, operated in markedly different ways. As for the wall itself, local Corinthians – irrespective of politics or religion – would have contributed to the physical construction of the Hexamilion and the maintenance of any associated garrisons. Military use appears to have fallen off after the 7th century, and by the 11th century domestic structures were being built into the wall. Characteristics of the wall The strategic fortress of Isthmia, taking advantage of favorable terrain, was located on the southern side of the Hexamilion wall, north-east of the Poseidon Sanctuary. The wall was constructed with a rubble and mortar core faced with squared stones. The blocks on the northern facade were larger and more carefully fitted at their edges, while the southern face was composed of smaller stones set in mortar. It is not certain how long it took to complete, but the importance given to the task is apparent from the scale of the construction; the Hexamilion is the largest archaeological structure in Greece. Due to the great mass of the 7.5 km long wall (which was 7 m high and 3 m thick), many structures in the region were cannibalized for stone for the effort. Some structures were incorporated into the wall directly (as was the temple of Poseidon at Isthmia), whereas some were burned into lime (as was the sanctuary of Hera at Perachora, as well as much of the ancient statuary of Corinth). Materials from the Sanctuary of Poseidon were evenly distributed and converted into the main entrance of the wall in an emplecton building technique in the first century. Spolia (voussoirs, column drums, and inscribed blocks) were incorporated into both the structure and the roadway.
The fortress was intimately tied into the defensive network, a fact readily demonstrated by similarities in construction techniques used. The fortress consisted of nineteen rectangular towers protruding from the walls of its 2.7-hectare total area, and more than likely housed the military garrison that defended the Hexamilion as a whole. The main passageway through the wall was through the Isthmia fortress, where the north-east gate acted as the main entrance into the Peloponnese. It is likely that the fortifications were damaged severely by earthquakes, which contributed to the rapid deterioration of the wall between renovations during Justinian and Manuel II's reigns. Most damaging was perhaps the earthquake of 551, which Procopius mentions as being particularly destructive to Greece as a whole. Garrison The garrison of the fortress of Isthmia in the 5th century likely consisted of four to eight tagmata. Historians believe the quality and state of the troops were similar to that of Procopius' descriptions of the state of the soldiers that manned the fortification at Thermopylae prior to Justinian’s reign; namely, local farmers who proved to be incapable of checking the advance of various invaders and so were replaced by comitatenses. As part of his repairs to the wall, Justinian established a professional military garrison within the Fortress of Isthmia, which replaced the local farmers who previously manned it. To bolster supplies, the soldiers produced some of their own food through farming south of the Hexamilion, although major aid came also from local farmers, merchants, artisans, and workmen, including from other nearby towns, such as Corinth. A system of rural villas supplied a considerable share of goods and services also; such villae rusticae being an important part of the economic exchange system of the Empire, and a basic productive unit of Late Roman and Early Byzantine times. The variety of skilled labor contributed by the Hexamilion garrison allowed for the creation of local granaries, allowing for intensified economic exploitation of the region. Despite this growth in developmental pace, the demands on the countryside and local economy fluctuated seasonally, with a notable intensification of economic activity during the warmer seasons. Likewise, the garrison's presence strained both the environment and local economy during the off-season, when their skills were not in use. This created a cyclical local economy based on the presence of troops, where demand and production were in constant flux. Effects on the locals During its initial construction, the Hexamilion significantly restricted the number of passages into the Peloponnese. The road from Athens was made to pass directly through the eastern fortress towards Corinth to the west and Epidaurus to the east. This transformed the fortress of Isthmia and its attendant wall section into the main overland connection to southern Greece. The wall's guarded gateways allowed for taxation of incoming and outgoing trade, which helped boost the local economy of the region. The Hexamilion wall likely had both short and long-term negative effects on the local population as well. The acquisition of land and clearing of buildings along the route of the wall led to conflict with individual property holders. In addition to its defensive role, the wall likely functioned as a means to entrench state control over local affairs. 
The scale of the repairs to the Hexamilion wall during Justinian's reign suggests that the fortification project would have provided employment to local laborers, influenced the distribution of wealth within the local economy, and likely attracted many skilled laborers to the region. Opposition Multiple archaeological finds support the idea that locals may have opposed the wall during its construction and re-fortification, and even after its completion. One such piece of evidence was the discovery of graffiti scratched onto the rear face just west of Tower 15. This was undoubtedly made by individuals associated with the wall's initial construction or repair, as the etching occurred before the mortar had time to harden. The image depicts two galleys and a different kind of vessel, seen as a boarding device, suggesting naval combat and the Hexamilion's lack of defense from seaborne threats. As Frey notes, the Hexamilion could not defend against attack from the sea, as it was designed to counter only overland threats, and did not even project into the sea on either side. This being said, we may never determine the true intentions of those who carved these images in the mortar. They may have been a simple expression of playfulness, devoid of broader meaning. A second example supporting the idea of local opposition to the wall's construction is provided by graves found during the excavations between 1954 and 1976. These were located inexplicably at the base of a staircase leading to an upper fighting platform. They appear to have been placed roughly a decade after the wall's initial completion. The construction of one of the graves resulted in the removal of the bottom tread of the staircase, undermining the functionality of one of the most strategically important points of defense along the Hexamilion wall. The graves were created over a span of many decades and contained women and children, suggesting that soon after its initial construction, the Fortress' maintenance passed over to local residents. During the later sixth and early seventh century, both the Northeast and South Gates of the Fortress were sealed with thick walls, effectively blocking the busy roadways to Athens, Corinth, and Epidaurus. This comes as a surprise to researchers given the gates' importance in connecting prominent cities. The construction style suggests they were built with haste and somewhat carelessly. The Northeast Gate was integrated with sluice gates for drainage, indicating a non-temporary residency. Exactly why the gates were sealed remains unknown. Explanations include the idea that locals may have blocked the gates themselves, as these events coincided with the timing of the Hexamilion gate repairs during the reign of Justinian. It may be the case that the local population of Isthmia resisted this alteration of their land (which would have turned it into a major thoroughfare) and acted independently to retain the status quo. Archaeological findings seem to reinforce the idea of a cyclic pattern of imperial concerns followed by local indifference and opposition to the Hexamilion wall and its upkeep. Quite apart from petty graffiti protests and grave marker placements intended to “demilitarize” the wall, open revolts have also been associated with the construction and maintenance of the Hexamilion.
The re-fortification of the Isthmus in an effort to counter the invasion of the Ottoman Turks in 1415 CE during Emperor Manuel II's reign led to an open revolt among the local population, which was put down by force. Manuel II saw the opposition as open resistance to the reinstatement of imperial control, whereas Chrysoloras documents a growing local frustration with the continuous funding and building of the wall. Destruction of the Hexamilion From its initial construction to its re-fortification and repairs during the reigns of Justinian and Manuel II, the Hexamilion passed through many phases of use. However, the downfall of the wall can be attributed mainly to the invasions of the Ottoman Turks. In 1415, Byzantine emperor Manuel II personally supervised repairs over a period of forty days, but the rigorous demands of this effort caused unrest among local elites. The wall was breached by the Ottomans in 1423, and again in 1431 under the command of Turahan Bey. Constantine Palaiologos, who was Despot of the Morea before his accession to the throne of the Byzantine empire, and his brother Thomas restored the wall again in 1444, but the Ottomans breached it in 1446 and again in October 1452. The final fall of the Trans-Isthmian wall occurred during a battle between Constantine and the Turks starting on November 27, 1446. Murad II, commander of a Turkish army said to have consisted of 50,000 to 60,000 men, supposedly lined the entirety of the wall with heavy artillery of long cannons (new weapons at the time), siege engines and scaling ladders. According to Chalkokondyles' vivid account of the assault, after five days of fighting Murad signaled the final attack, and on December 10, 1446, the Hexamilion was no more than a heap of ruins. After the fall of Constantinople in 1453 and the Ottoman conquest of the Peloponnese in 1460, the wall was abandoned. During its history, the wall never succeeded in fulfilling the function for which it was constructed, although it may have functioned as a deterrent. Elements of the wall are preserved south of the Corinth Canal and at the Sanctuary of Poseidon at Isthmia. Images of the Hexamilion Notes References Secondary sources on the Hexamilion Barker, J. W. (New Brunswick, NJ 1969). Manuel II Paleologus (1391–1425): A Study in Late Byzantine Statesmanship. Clement, P. A. (Thessaloniki 1977) “The Date of the Hexamilion” in Essays in Memory of Basil Laourdas. Fowden, G. (JRA 8 (1995), p. 549-567). “Late Roman Achaea: Identity and Defense.” Gregory, T. E. (Princeton, NJ 1993). The Hexamilion and the Fortress. (Isthmia vol. 5). Hohlfelder, R. (GRBS 18 (1977), p. 173-179). "Trans-Isthmian Walls in the Age of Justinian." Jenkins, R. J. H. and H. Megaw. (BSA 32 (1931/1932) p. 68-89). “Researches at Isthmia.” Johnson, S. (London 1983). Late Roman Fortifications. Lawrence, A. W. (BSA 78 (1983), p. 171-233). “A Skeletal History of Byzantine Fortification.” Leake, W. M. (London 1830). Travels in the Morea. Monceaux, P. (Gazette archéologique (1884), p. 273-285, 354-363). “Fouilles et recherches archéologiques au sanctuaire des Jeux Isthmiques.” Monceaux, P. (Gazette archéologique (1885), p. 205-214). “Fouilles et recherches archéologiques au sanctuaire des Jeux Isthmiques.” Pringle, D. (Oxford 1981). The Defense of Byzantine Africa from Justinian to the Arab Conquest. (British Archaeological Reports, International Series 99). Stroud, R. (Hesperia 40 (1971), p. 127-145). “An Ancient Fort on Mount Oneion.” Winter, F. E. (London 1971). Greek Fortifications. Wiseman, J. R.
(Hesperia 32 (1963), p. 248-275). “A Trans-Isthmian Fortification Wall.” Secondary sources on transisthmian fortifications Bodnar, E. W. (AJA 64 (1960), p. 165-172). “The Isthmian Fortifications in Oracular Prophecy.” Broneer, O. (Hesperia 35 (1966), p. 346-362). “The Cyclopean Wall on the Isthmus of Corinth and Its Bearing on Late Bronze Age Chronology.” Broneer, O. (Hesperia 37 (1968), p. 25-35). “The Cyclopean Wall on the Isthmus of Corinth, Addendum.” Caraher, W. R. and T. E. Gregory. (Hesperia 75.3 (2006), p. 327-356). “Fortifications of Mount Oneion, Corinthia.” Chrysoula, P. K. (AAA 4 (1971), p. 85-89). “The Isthmian Wall.” Dodwell, E. (London 1819). A Classical and Topographical Tour through Greece II Fimmen, E. (RE IX (1916), cols. 2256–2265). “Isthmos.” Hope-Simpson, R. (London 1965). Gazetteer and Atlas of Mycenaean Sites. Jansen, A. G. (Lewiston, NY 2002). A Study of the Remains of Mycenaean Roads and Stations of Bronze-Age Greece. Lawrence, A. W. (Oxford 1979). Greek Aims in Fortification. Vermeule, E. T. (Chicago 1972). Greece in the Bronze Age. Wheler, G. (London 1682). A Journey into Greece. Wiseman, J. R. (Göteborg 1978). The Land of the Ancient Corinthians. (Studies in Mediterranean Archaeology 50). Wiseman, J. R. (diss. University of Chicago 1966). Corinthian Trans-Isthmian Walls and the Defense of the Peloponnesos. Primary sources Zosimus, Historia nova 1.29 (253-260 CE), 5.6 (396 CE). Procopius, De aedificiis 4.2.27-28 (548-560 CE). IG IV.204 (548-560 CE). G. Sphrantzes, Chronicon minus (p. 4, Grecu) (1415 CE), (p. 16, Grecu) (1423 CE), (p. 50, Grecu) (1431 CE), (p. 52, Grecu) (1435 CE), (p. 66, Grecu) (1444 CE), (p. 128, ed. Grecu) (1462). Laonikos Chalkokondyles (p. 183-184, ed. Bonn) (1415 CE), (p. 319-320, ed. Bonn) (1443 CE), (p. 70, Grecu) (1446), (p. 345-346, ed. Bonn) (1446 CE), (p. 443, ed. Bonn) (1458). Short Chronicle 35 (p. 286, Schreiner, I) (1415 CE), 33 (p. 252, Schreiner, I) (1446 CE). Manuel II, The Letters of Manuel Palaeologus (p. 68, Dennis) (1415–1416 CE). Mazaris, Descent into Hades (p. 80-82, Buffalo) (1415 CE). Cyriacus of Ancona, Cyriacus of Ancona and Athens (p. 168, Bodnar) (1436 CE). Pythian Oracle (p. 166-167, Bodnar) (1431–1446 CE). Pseudo-Phrantzes, Chronicum maius (p. 235, ed. Bonn) (1452 CE). Plutarch, Lives Agis and Cleomenes 20.1-21.4 (223 BCE), Aratus 43.1-44.4 (223 BCE). Polybius 2.52.1-53.6 (223 BCE). Diodorus Siculus 15.68.1-5 (369/368 BCE), 19.53.1-53.4 (316 BCE), 19.63.1-64.4 (315 BCE). Xenophon, Hellenica 6.5.49-52 (370 BCE), 7.1.15-22 (369 BCE). Herodotus 7.138-139 (480 BCE), 8.71-72 (480 BCE), 9.7-8 (480 BCE). See also Diolkos External links Excavations on the Hexamilion, by the Ohio State University Medieval Corinthia Buildings and structures completed in the 5th century Byzantine fortifications in Greece Walls Ancient defensive walls in Greece Byzantine–Ottoman wars Buildings and structures in Corinthia 5th-century fortifications Fortification lines
Hexamilion wall
Engineering
4,137
24,958,693
https://en.wikipedia.org/wiki/Simon%20Phipps%20%28programmer%29
Simon Phipps is a computer scientist and web and open source advocate. Phipps was instrumental in IBM's involvement in the Java programming language, founding IBM's Java Technology Center. He left IBM for Sun Microsystems in 2000, taking leadership of Sun's open source programme from Danese Cooper. Under Phipps, most of Sun's core software was released under open source licenses, including Solaris and Java. Phipps was not hired into Oracle as part of the acquisition of Sun Microsystems and his final day was March 8, 2010 when the two entities combined. Following Sun, he spent a year as Chief Strategy Officer of identity startup ForgeRock before becoming an independent consultant. In 2015 he briefly joined Wipro Technologies as director of their open source advisory practice. Phipps was President of the Open Source Initiative until 2015 when he stepped down in preparation for the end of his Board term in 2016, and was re-elected in 2017 and re-appointed President by the Board in September 2017. He was a board member of the Open Rights Group and The Document Foundation and on the advisory board of Open Source for America. He has served on a number of advisory boards for other projects, including as CEO of the MariaDB Foundation, and at the GNOME Foundation, OpenSolaris, OpenJDK, and OpenSPARC and most recently the AlmaLinux OS Foundation. He has appeared as a guest and occasional co-host on episodes of the FLOSS Weekly podcast. References External links Simon Phipps personal blog Meshed Insights Business Information (consultancy site) People in information technology Free software programmers Living people Java platform LibreOffice Sun Microsystems people Members of the Open Source Initiative board of directors Open source people Year of birth missing (living people)
Simon Phipps (programmer)
Technology
354
30,677
https://en.wikipedia.org/wiki/Tool
A tool is an object that can extend an individual's ability to modify features of the surrounding environment or help them accomplish a particular task. Although many animals use simple tools, only human beings, whose use of stone tools dates back hundreds of millennia, have been observed using tools to make other tools. Early human tools, made of such materials as stone, bone, and wood, were used for the preparation of food, hunting, the manufacture of weapons, and the working of materials to produce clothing and useful artifacts and crafts such as pottery, along with the construction of housing, businesses, infrastructure, and transportation. The development of metalworking made additional types of tools possible. Harnessing energy sources, such as animal power, wind, or steam, allowed increasingly complex tools to produce an even larger range of items, with the Industrial Revolution marking an inflection point in the use of tools. The introduction of widespread automation in the 19th and 20th centuries allowed tools to operate with minimal human supervision, further increasing the productivity of human labor. By extension, concepts that support systematic or investigative thought are often referred to as "tools" or "toolkits". Definition While a common-sense understanding of the meaning of tool is widespread, several formal definitions have been proposed. In 1981, Benjamin Beck published a widely used definition of tool use, which has since been modified; other, briefer definitions have also been proposed. History Anthropologists believe that the use of tools was an important step in the evolution of mankind. Because tools are used extensively by both humans (Homo sapiens) and wild chimpanzees, it is widely assumed that the first routine use of tools took place prior to the divergence between the two ape species. These early tools, however, were likely made of perishable materials such as sticks, or consisted of unmodified stones that cannot be distinguished from other stones as tools. Stone artifacts date back to about 2.5 million years ago. However, a 2010 study suggests the hominin species Australopithecus afarensis ate meat by carving animal carcasses with stone implements. This finding pushes back the earliest known use of stone tools among hominins to about 3.4 million years ago. Finds of actual tools date back at least 2.6 million years in Ethiopia. One of the earliest distinguishable stone tool forms is the hand axe. Up until recently, weapons found in digs were the only tools of "early man" that were studied and given importance. Now, more tools are recognized as culturally and historically relevant. As well as hunting, other activities required tools, such as preparing food, "...nutting, leatherworking, grain harvesting and woodworking..." Included in this group are "flake stone tools". Tools are the most important items that ancient humans used to climb to the top of the food chain; by inventing tools, they were able to accomplish tasks that human bodies could not, such as using a spear or bow to kill prey, since their teeth were not sharp enough to pierce many animals' skins. "Man the hunter" as the catalyst for hominin change has been questioned. Based on marks on the bones at archaeological sites, it is now more evident that pre-humans were scavenging off of other predators' carcasses rather than killing their own food.
Timeline of ancient tool development Many tools were made in prehistory or in the early centuries of recorded history, but archaeological evidence can provide dates of development and use:
Olduvai stone technology (Oldowan), 2.5 million years ago (scrapers, used to butcher dead animals)
Huts, 2 million years ago
Acheulean stone technology, 1.6 million years ago (hand axe)
Fire creation and manipulation, used since the Paleolithic, possibly by Homo erectus as early as 1.5 million years ago
Boats, 900,000 years ago
Cooking, 500,000 years ago
Javelins, 400,000 years ago
Glue, 200,000 years ago
Clothing, possibly 170,000 years ago
Stone tools used by Homo floresiensis, possibly 100,000 years ago
Harpoons, 90,000 years ago
Bow and arrows, 70,000–60,000 years ago
Sewing needles, 60,000–50,000 BC
Flutes, 43,000 years ago
Fishing nets, 43,000 years ago
Ropes, 40,000 years ago
Ceramics
Fishing hooks
Domestication of animals
Sling (weapon)
Microliths
Brick, used for construction in the Middle East
Agriculture and the plough
Wheel
Gnomon
Writing systems
Copper
Bronze
Salt
Chariot
Iron
Sundial
Glass
Catapult
Cast iron
Horseshoe
Stirrup, first few centuries AD
Several of the six classic simple machines (wheel and axle, lever, pulley, inclined plane, wedge, and screw) were invented in Mesopotamia. The wheel and axle mechanism first appeared with the potter's wheel, invented in what is now Iraq during the 5th millennium BC. This led to the invention of the wheeled vehicle in Mesopotamia during the early 4th millennium BC. The lever was used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia and then in ancient Egyptian technology. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC. The screw, the last of the simple machines to be invented, first appeared in Mesopotamia during the Neo-Assyrian period (911–609 BC). The Assyrian King Sennacherib (704–681 BC) claims to have invented automatic sluices and to have been the first to use water screw pumps, of up to 30 tons weight, which were cast using two-part clay molds rather than by the 'lost wax' process. The Jerwan Aqueduct is made with stone arches and lined with waterproof concrete. The earliest evidence of water wheels and watermills dates back to the ancient Near East in the 4th century BC, specifically in the Persian Empire before 350 BC, in the regions of Mesopotamia (Iraq) and Persia (Iran). This pioneering use of water power constituted perhaps the first use of mechanical energy. Mechanical devices experienced a major expansion in their use in Ancient Greece and Ancient Rome with the systematic employment of new energy sources, especially waterwheels. Their use expanded through the Dark Ages with the addition of windmills. Machine tools Machine tools occasioned a surge in producing new tools in the Industrial Revolution. Pre-industrial machinery was built by various craftsmen: millwrights built water and windmills, carpenters made wooden framing, and smiths and turners made metal parts. Wooden components had the disadvantage of changing dimensions with temperature and humidity, and the various joints tended to rack (work loose) over time. As the Industrial Revolution progressed, machines with metal parts and frames became more common. Other important uses of metal parts were in firearms and threaded fasteners, such as machine screws, bolts, and nuts. There was also the need for precision in making parts.
Precision would allow better working machinery, interchangeability of parts, and standardization of threaded fasteners. The demand for metal parts led to the development of several machine tools. These have their origins in the tools developed in the 18th century by makers of clocks, watches, and scientific instruments to enable them to batch-produce small mechanisms. Before the advent of machine tools, metal was worked manually using the basic hand tools of hammers, files, scrapers, saws, and chisels. Consequently, the use of metal machine parts was kept to a minimum. Hand methods of production were very laborious and costly, and precision was difficult to achieve. With their inherent precision, machine tools enabled the economical production of interchangeable parts. Examples of machine tools include:
Broaching machine
Drill press
Gear shaper
Hobbing machine
Hone
Lathe
Screw machines
Milling machine
Shear (sheet metal)
Shaper
Bandsaw
Planer
Stewart platform mills
Grinding machines
Advocates of nanotechnology expect a similar surge as tools become microscopic in size. Types One can classify tools according to their basic functions:
Cutting and edge tools, such as the knife, sickle, scythe, hatchet, and axe, are wedge-shaped implements that produce a shearing force along a narrow face. Ideally, the edge of the tool needs to be harder than the material being cut, or the blade will become dulled with repeated use. Even resilient tools require periodic sharpening, which is the process of removing deformation wear from the edge. Other examples of cutting tools include gouges and drill bits.
Moving tools move items large and small. Many are levers, which give the user a mechanical advantage. Examples of force-concentrating tools include the hammer, which moves a nail, and the maul, which moves a stake. These operate by applying physical compression to a surface. In the case of the screwdriver, the force is rotational and called torque. By contrast, an anvil concentrates force on an object being hammered by preventing it from moving away when struck. Writing implements deliver a fluid to a surface via compression to activate the ink cartridge. Grabbing and twisting nuts and bolts with pliers, a glove, a wrench, etc. likewise moves items by applying torque (rotational force).
Tools that enact chemical changes, including temperature and ignition, such as lighters and blowtorches.
Guiding, measuring and perception tools, including the ruler, glasses, square, sensors, straightedge, theodolite, microscope, monitor, clock, phone, and printer.
Shaping tools, such as molds, jigs, and trowels.
Fastening tools, such as welders, soldering irons, rivet guns, nail guns, and glue guns.
Information and data manipulation tools, such as computers, IDEs, and spreadsheets.
Some tools may be combinations of other tools. An alarm clock, for example, is a combination of a measuring tool (the clock) and a perception tool (the alarm), which enables it to fall outside all of the categories mentioned above. There is some debate on whether to consider protective gear items as tools, because they do not directly help perform work, just protect the worker like ordinary clothing. They do, however, meet the general definition of tools, and in many cases are necessary for the completion of the work. Personal protective equipment includes such items as gloves, safety glasses, ear defenders, and biohazard suits.
Function Tool substitution Often, by design or coincidence, a tool may share key functional attributes with one or more other tools. In this case, some tools can substitute for other tools, either as a makeshift solution or as a matter of practical efficiency. "One tool does it all" is a motto of some importance for workers who cannot practically carry every specialized tool to the location of every work task, such as a carpenter who does not necessarily work in a shop all day and needs to do jobs in a customer's house. Tool substitution may be divided broadly into two classes: substitution "by design", or "multi-purpose" use, and substitution as makeshift. Substitution "by design" covers tools that are designed specifically to accomplish multiple tasks using only that one tool. Substitution is "makeshift" when human ingenuity comes into play and a tool is used for an unintended purpose, such as using a long screwdriver to separate a car's control arm from a ball joint instead of using a pickle fork. In many cases, the designed secondary functions of tools are not widely known. For example, many wood-cutting hand saws integrate a square by incorporating a specially shaped handle that allows 90° and 45° angles to be marked by aligning the appropriate part of the handle with an edge and scribing along the back edge of the saw. Makeshift substitution is illustrated by the saying "All tools can be used as hammers": nearly all tools can be made to function as a hammer, even though few tools are intentionally designed for it and even fewer work as well as the original. Tools are often used to substitute for mechanical apparatuses, especially in older mechanical devices; in many cases a cheap tool can occupy the place of a missing mechanical part. A window roller in a car could be replaced with pliers, and a transmission shifter or ignition switch could be replaced with a screwdriver. Again, these are tools being used for unintended purposes, substitution as makeshift. A tool such as a rotary tool, by contrast, would be considered substitution "by design", or "multi-purpose": this class of tool allows the use of one tool that has at least two different capabilities. "Multi-purpose" tools are basically multiple tools in one device or tool. Such tools are often power tools that come with many different attachments, as a rotary tool does, so one could say that a power drill is a "multi-purpose" tool. Multi-use tools A multi-tool is a hand tool that incorporates several tools into a single, portable device; the Swiss Army knife represents one of the earliest examples. Other tools have a primary purpose but also incorporate other functionality; for example, lineman's pliers incorporate a gripper and cutter and are often used as a hammer, and some hand saws incorporate a square in the right angle between the blade's dull edge and the saw's handle. These would also fall into the category of "multi-purpose" tools, since they are also multiple tools in one (multi-use and multi-purpose can be used interchangeably; compare hand axe). These types of tools were made to appeal to the many different craftsmen who traveled to do their work. To these workers, such tools were revolutionary: one tool or one device that could do several different things.
With this new class of tools, traveling craftsmen no longer had to carry so many individual tools to job sites, since their space was limited to their vehicle or the beast of burden they were driving. Multi-use tools solve the problem of having to deal with many different tools. Use by other animals Tool use by animals is a phenomenon in which an animal uses any kind of tool in order to achieve a goal such as acquiring food and water, grooming, defense, communication, recreation, or construction. Originally thought to be a skill possessed only by humans, some tool use requires a sophisticated level of cognition. There is considerable discussion about the definition of what constitutes a tool and therefore which behaviours can be considered true examples of tool use. Observation has confirmed that a number of species can use tools, including monkeys, apes, elephants, several birds, and sea otters. The unique relationship of humans with tools is now considered to be that we are the only species that uses tools to make other tools. Primates are well known for using tools for hunting or gathering food and water, cover for rain, and self-defense. Chimpanzees have often been the object of study in regard to their usage of tools, most famously by Jane Goodall; these animals are closely related to humans. Wild tool use in other primates, especially among apes and monkeys, is considered relatively common, though its full extent remains poorly documented, as many primates in the wild are only observed distantly or briefly in their natural environments, living without human influence. Some novel tool use by primates may arise in a localized or isolated manner within certain unique primate cultures, being transmitted and practiced among socially connected primates through cultural learning. Many famous researchers, such as Charles Darwin in his book The Descent of Man, mentioned tool use in monkeys (such as baboons). Among other mammals, both wild and captive elephants are known to create tools using their trunks and feet, mainly for swatting flies, scratching, plugging up waterholes that they have dug (to close them up again so the water does not evaporate), and reaching food that is out of reach. Many other social mammals have been observed engaging in tool use. A group of dolphins in Shark Bay uses sea sponges to protect their beaks while foraging. Sea otters will use rocks or other hard objects to dislodge food (such as abalone) and break open shellfish. Many or most mammals of the order Carnivora have been observed using tools, often to trap prey or break open the shells of prey, as well as for scratching. Corvids (such as crows, ravens and rooks) are well known for their large brains (among birds) and tool use. New Caledonian crows are among the only animals that create their own tools. They mainly manufacture probes out of twigs and wood (and sometimes metal wire) to catch or impale larvae. Tool use in some birds may be best exemplified in nest intricacy. Tailorbirds manufacture 'pouches' to make their nests in. Some birds, such as weaver birds, build complex nests utilizing a diverse array of objects and materials, many of which are specifically chosen by certain birds for their unique qualities. Woodpecker finches insert twigs into trees in order to catch or impale larvae. Parrots may use tools to wedge nuts so that they can crack open the outer shell without launching away the inner contents.
Some birds take advantage of human activity, such as carrion crows in Japan, which drop nuts in front of cars to crack them open. Several species of fish use tools to hunt and crack open shellfish, extract food that is out of reach, or clear an area for nesting. Among cephalopods (and perhaps uniquely among invertebrates, or at least to an extent unobserved elsewhere), octopuses are known to use tools relatively frequently, for example gathering coconut shells to create a shelter or using rocks to create barriers. Non-material usage By extension, concepts which support systematic or investigative thought are often referred to as "tools"; for example, Vanessa Dye refers to "tools of reflection" and "tools to help sharpen your professional practice" for trainee teachers, illustrating the connection between physical and conceptual tools with a quotation from the French scientist Claude Bernard. Similarly, a decision-making process "developed to help women and their partners make confident and informed decisions when planning where to give birth" is described as a "Birth Choice tool", and the idea of a "toolkit" is used by the International Labour Organization to describe a set of processes applicable to improving global labour relations. A telephone is a communication tool that interfaces between two people engaged in conversation at one level. It also interfaces between each user and the communication network at another level. It is in the domain of media and communications technology that a counter-intuitive aspect of our relationships with our tools first began to gain popular recognition. John M. Culkin famously said, "We shape our tools and thereafter our tools shape us". One set of scholars expanded on this to say: "Humans create inspiring and empowering technologies but also are influenced, augmented, manipulated, and even imprisoned by technology". See also Antique tool Equipment Human factors and ergonomics List of timber framing tools Scientific instrument Tool and die maker Tool library ToolBank USA References External links Industrial equipment
Tool
Engineering
3,913
19,675,057
https://en.wikipedia.org/wiki/Upper%20tropospheric%20cyclonic%20vortex
An upper tropospheric cyclonic vortex is a vortex, or a circulation with a definable center, that usually moves slowly from east-northeast to west-southwest and is prevalent across the Northern Hemisphere's warm season. Its circulations generally do not extend down to low altitudes, as it is an example of a cold-core low. A weak inverted wave in the easterlies is generally found beneath it, and it may also be associated with broad areas of high-level clouds. Downward development results in an increase of cumulus cloudiness and the appearance of a circulation at ground level. In rare cases, a warm-core cyclone can develop within its associated convective activity, resulting in a tropical cyclone and in a weakening and southwestward movement of the nearby upper tropospheric cyclonic vortex. When these vortices move over land during the warm season, an increase in monsoon rains occurs. History of research Using charts of mean 200-hectopascal circulation for July through August to locate the circumpolar troughs and ridges, trough lines are found to extend over the eastern and central North Pacific and over the North Atlantic. Case studies of upper tropospheric cyclones in the Atlantic and Pacific have been performed using airplane reports (winds, temperatures and heights), radiosonde data, geostationary satellite cloud imagery, and cloud-tracked winds throughout the troposphere. It was determined that they were the origin of upper tropospheric cold-core lows, or cut-off lows. Characteristics The tropical upper tropospheric cyclone has a cold core, meaning it is stronger aloft than at the Earth's surface, or stronger in areas of the troposphere with lower pressures. This is explained by the thermal wind relationship. It also means that a pool of cold air aloft is associated with the feature. If an upper tropospheric cold-core low and a lower tropospheric easterly wave trough are in phase, with the easterly wave near or to the east of the upper level cyclone, thunderstorm development (also known as moist convection) is enhanced. If they are out of phase, with the tropical wave west of the upper level circulation, convection is suppressed, because convergence aloft leads to downward motion over the tropical wave or surface trough in the easterlies. Upper level cyclones also interact with troughs in the subtropical westerlies, such as cold fronts and stationary fronts. When subtropical disturbances in the Northern Hemisphere actively move southward, or dig, the area between the upper tropospheric anticyclone to the west and the cold-core low to the east generally has strong northeasterly winds in addition to rapid development of active thunderstorm activity. Cloud bands associated with upper tropospheric cyclonic vortices are aligned with the vertical wind shear. Animated satellite cloud imagery is an effective tool for their early detection and tracking. The low-level convergence caused by the cut-off low can trigger squall lines and rough seas, and the low-level spiral cloud bands caused by the upper level circulation are parallel to the low-level wind direction. This has also been witnessed with upper level lows which occur at higher latitudes, for example in areas where small-scale snow bands develop within the cold sector of extratropical cyclones.
Climatology In the Northern Hemisphere, the tropical upper tropospheric trough (TUTT) normally occurs between May and November, with peak activity between July and September. James Sadler suggested a revised model for the TUTT during the early part of the typhoon season in the western Pacific. Both Sadler and Lance Bosart have shown that the tropical upper tropospheric trough's cyclonic cells are caused by mid-latitude disturbances riding around the western side of the tropical upper tropospheric trough when the subtropical ridge to its south is quite weak. In the North Atlantic, the TUTT is characterized by a semi-permanent circulation pattern that forms between August and November. Toby Carlson evaluated data over the eastern Caribbean Sea for October 1965 and pinpointed the presence of an upper tropospheric cold-core cyclone. These cold-core cyclones generally form close to the Azores and move south and westward towards a latitude of 20°N. These circulations extend over an area of about 20° of latitude and 40° of longitude. The lowest level of closed circulation underneath the upper level cold-core cyclone is often between the 700 and 500-hectopascal levels. Their life cycles span 5 to 14 days. The upper tropospheric cyclonic centers in the North Atlantic differ from those in the North Pacific. Most of them are detectable in the low tropospheric temperature field as cold troughs in the easterlies, and they tend to tilt vertically toward the northeast. Cumulonimbus clouds and rainfall occur in the southeast quadrant, approximately 5° of latitude from the upper cyclone center, though large variations of cloud cover can exist between different systems. The summer tropical upper tropospheric trough is a dominant feature over the trade wind regions of the North Atlantic Ocean, Gulf of Mexico, and Caribbean Sea, and the lower tropospheric responses to it in the North Atlantic differ from those in the North Pacific. Interaction with tropical cyclones The summer TUTT in the Southern Hemisphere lies over the trade wind region of the east central Pacific and can cause tropical cyclogenesis offshore of Central America. University of Hawaii professor James C. Sadler documented tropical cyclones over the eastern North Pacific that were revealed by weather satellite observations, and suggested that the upper-tropospheric circulation is a factor in the development, as well as the life history, of these tropical cyclones. According to Ralph Huschke and Gary Atkinson, the moist southwest wind that results from the southeast trades of the eastern South Pacific deflecting towards the Pacific coasts of Central America between June and November is known as the "temporale". Temporales are most frequent in July and August, when they can reach gale force and cause rough seas and swell. The area of heavy rain is generally located in the northeast quadrant, approximately 5° of latitude from the eye. In the western Pacific, tropical upper tropospheric lows are the main cause of the few tropical cyclones which develop north of the 20th parallel north and east of the 160th meridian east during La Niña events. Trailing upper cyclones and upper troughs can provide additional outflow channels and aid in the intensification process of tropical cyclones.
Developing tropical disturbances can help create or deepen upper troughs or upper lows in their wake, due to the outflow jet stream emanating from the developing system. In the western North Pacific, there are strong reciprocal relationships between the areas of formative tropical cyclones and those of the lower tropospheric monsoon troughs and the tropical upper tropospheric trough. Tropical cyclone movement can also be influenced by nearby TUTT cells, which can lead to non-climatological tropical cyclone tracks. Interaction with monsoon regimes As upper level lows retrograde over land masses, they can enhance afternoon thunderstorm activity. This magnifies regional monsoon regimes, such as that over western North America near the United States–Mexico border, a relationship that can be used to forecast surges in monsoon precipitation. Across the north Indian Ocean, the formation of this type of vortex leads to the onset of monsoon rains during the wet season. References Satellite interpretation Storm Types of cyclone Vortices
Upper tropospheric cyclonic vortex
Chemistry,Mathematics
1,608
12,060,085
https://en.wikipedia.org/wiki/Digital%20signal%20controller
A digital signal controller (DSC) is a hybrid of microcontrollers and digital signal processors (DSPs). Like microcontrollers, DSCs have fast interrupt responses, offer control-oriented peripherals like PWMs and watchdog timers, and are usually programmed using the C programming language, although they can also be programmed using the device's native assembly language. On the DSP side, they incorporate features found on most DSPs such as single-cycle multiply–accumulate (MAC) units, barrel shifters, and large accumulators. Not all vendors have adopted the term DSC. The term was first introduced by Microchip Technology in 2002 with the launch of their 6000 series DSCs and was subsequently adopted by most, but not all, DSC vendors. For example, Infineon and Renesas refer to their DSCs as microcontrollers. DSCs are used in a wide range of applications, but the majority go into motor control, power conversion, and sensor processing applications. Currently, DSCs are being marketed as green technologies for their potential to reduce power consumption in electric motors and power supplies. In order of market share, the top three DSC vendors are Texas Instruments, Freescale, and Microchip Technology, according to market research firm Forward Concepts (2007). These three companies dominate the DSC market, with other vendors such as Infineon and Renesas taking a smaller share. DSC software DSCs, like microcontrollers and DSPs, require software support. There is a growing number of software packages that offer the features required by both DSP applications and microcontroller applications. With this broader set of requirements, software solutions are rarer. They require development tools, DSP libraries, optimization for DSP processing, fast interrupt handling, multi-threading, and a tiny footprint. References Microcontrollers Digital signal processing Digital signal processors Integrated circuits
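As a sketch of why the single-cycle MAC unit matters in practice, the following C fragment shows the inner loop of a FIR filter, the archetypal DSP workload on a DSC. The filter length, Q15 fixed-point scaling, and sample values are illustrative assumptions, not any vendor's code; on real DSC hardware the loop body typically maps onto the MAC instruction, often combined with zero-overhead loop support.

```c
#include <stdint.h>
#include <stdio.h>

#define TAPS 4  /* illustrative filter length; real filters are usually longer */

/* One output sample of a FIR filter in Q15 fixed point. The loop body is a
 * multiply-accumulate: a DSC's MAC unit performs it in a single cycle,
 * whereas a plain MCU may need several instructions per tap. */
static int16_t fir_sample(const int16_t coeff[TAPS], const int16_t hist[TAPS])
{
    int32_t acc = 0;                         /* wide accumulator, as in DSC hardware */
    for (int i = 0; i < TAPS; i++)
        acc += (int32_t)coeff[i] * hist[i];  /* the MAC operation */
    return (int16_t)(acc >> 15);             /* rescale back to Q15 */
}

int main(void)
{
    /* Simple moving-average coefficients (0.25 each in Q15) and a sample history. */
    const int16_t coeff[TAPS] = {8192, 8192, 8192, 8192};
    const int16_t hist[TAPS]  = {1000, 2000, 3000, 4000};
    printf("filtered sample: %d\n", fir_sample(coeff, hist));  /* prints 2500 */
    return 0;
}
```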
Digital signal controller
Technology,Engineering
433
68,607,146
https://en.wikipedia.org/wiki/Miriani%20Griselda%20Pastoriza
Miriani Griselda Pastoriza (born 1939) is an Argentine-born Brazilian astronomer, a tenured professor in the Department of Astronomy of the Institute of Physics at the Federal University of Rio Grande do Sul, and a member of the Brazilian Academy of Sciences. Biography Miriani Griselda Pastoriza was born in 1939 in Villa San Martín Loreto, Santiago del Estero Province, Argentina. One of her main scientific contributions was the discovery and characterization, together with the Argentine astronomer José Luis Sérsic, of the so-called Sersic-Pastoriza galaxies (also known as galaxies with peculiar nuclei). In 1970, she determined that the spectrum of the galaxy NGC 1566 is variable, a surprising discovery that changed the discipline. Continuing this line of research, Pastoriza, in collaboration with international researchers, carried out work on light variability in other galaxies, which allowed mapping of the structure and size of the central regions of galaxies where supermassive black holes are hosted. Pastoriza was the scientific advisor of many Brazilian astronomers who are now leading international scientists, including Thaisa Storchi Bergmann and Eduardo Luiz Damiani Bica. Pastoriza is also an active advocate for women's equality in science. She collaborates with the Latin American Association of Women Astronomers and participates in a Brazilian program called "Girls in Science". Pastoriza is the representative of Brazil on the International Scientific Committee of the Gemini telescopes and on the SOAR Telescope International Board of Directors, and she belongs to the Board of Directors of the National Observatory of Rio de Janeiro. She was also appointed a member of the Board of Directors of the National Astrophysics Laboratory of Sao Paulo. Since 2014, she has been an Emeritus Professor at the Federal University of Rio Grande do Sul. Pastoriza is a naturalized Brazilian. Awards and honours In 1995 she was included in a list of the 170 most productive researchers in Brazil in all areas of science, published by Folha de Sao Paulo, one of the newspapers with the largest circulation in Brazil. She has reached the highest category for a researcher in Brazil, classified as 1A within the CNPq. The Brazilian Astronomical Society named an award after her to recognize outstanding contributions in astronomical research. In 2007, she was named a member of the Brazilian Academy of Sciences. In 2008, she was awarded the Medal of Commendation of the National Order of Scientific Merit of Brazil, one of the highest recognitions to which a scientist in that country can aspire, for her relevant contributions to science and technology. On October 24, 2018, the National University of Córdoba awarded her the title of Doctor Honoris Causa for her contributions to the field of astronomy. Legacy The "Miriani Pastoriza Award" is named in her honor by the board of directors of the Brazilian Astronomical Society.
References 1939 births Living people People from Santiago del Estero Province Argentine astronomers Brazilian astronomers Argentine academics Academic staff of the Federal University of Rio Grande do Sul Argentine emigrants to Brazil Women astronomers
Miriani Griselda Pastoriza
Astronomy
666
1,885,672
https://en.wikipedia.org/wiki/Keystroke-level%20model
In human–computer interaction, the keystroke-level model (KLM) predicts how long it will take an expert user to accomplish a routine task without errors using an interactive computer system. It was proposed by Stuart K. Card, Thomas P. Moran and Allen Newell in 1980 in the Communications of the ACM and published in their book The Psychology of Human-Computer Interaction in 1983, which is considered a classic in the HCI field. The foundations were laid in 1974, when Card and Moran joined the Palo Alto Research Center (PARC) and created a group named the Applied Information-Processing Psychology Project (AIP), with Newell as a consultant, aiming to create an applied psychology of human-computer interaction. The keystroke-level model remains relevant today, as shown by recent research on mobile phones and touchscreens (see Adaptions). Structure of the keystroke-level model The keystroke-level model consists of six operators: the first four are physical motor operators, followed by one mental operator and one system response operator:
K (keystroke or button press): the most frequent operator. It refers to keys and not characters (so e.g. pressing SHIFT is a separate K operation). The time for this operator depends on the motor skills of the user and is determined by one-minute typing tests, where the total test time is divided by the total number of non-error keystrokes.
P (pointing to a target on a display with a mouse): this time differs depending on the distance to the target and its size, but is held constant in the model. A mouse click is not included and counts as a separate K operation.
H (homing the hand(s) on the keyboard or other device): this includes movement between any two devices as well as the fine positioning of the hand.
D (drawing (manually) nD straight-line segments with a total length of lD cm): where nD is the number of line segments drawn and lD is their total length. This operator is very specialized, because it is restricted to the mouse and the drawing system has to constrain the cursor to a .56 cm grid.
M (mentally preparing for executing physical actions): denotes the time a user needs for thinking or decision making. The number of Ms in a method depends on the knowledge and skill of the user. Heuristics are given to help decide where an M should be placed in a method; for example, when pointing with the mouse, a button press is usually fully anticipated and no M is needed between the two operators.
R (response time of the system): the response time depends on the system, the command, and the context of the command. It is only used when the user actually has to wait for the system. For instance, when the user mentally prepares (M) for executing their next physical action, only the non-overlapping part of the response time is counted as R, because the user spends the rest of the response time on the M operation (e.g. an R of 2 seconds minus an M of 1.35 seconds leaves an R of 0.65 seconds). To make things clearer, Kieras suggests the name waiting time (W) instead of response time (R) to avoid confusion. Sauro suggests taking a sample of the system response time.
Comparison with GOMS The KLM is based on the keystroke level, which belongs to the family of GOMS models.
Like the GOMS models, the KLM predicts only the error-free behaviour of experts; in contrast to GOMS, however, the KLM needs the method to be specified in order to predict the time, because it does not predict the method itself. It therefore has no goals and no method selection rules, which in turn makes it easier to use. Of the GOMS family, the KLM most resembles the model K1, because both operate at the keystroke level and possess a generic M operator. The difference is that the M operator of the KLM is more aggregated and thus larger (1.35 seconds vs. 0.62 seconds), which makes its mental operator more similar to the CHOOSE operations of the model K2. All in all, the KLM represents the practical use of the GOMS keystroke level. Advantages The KLM was designed to be a quick and easy-to-use system design tool, meaning that no deep knowledge of psychology is required for its use. Also, task times can be predicted (given the limitations) without having to build a prototype or recruit and test users, which saves time and money. See the example below for a practical use of the KLM as a system design tool. Limitations The keystroke-level model has several restrictions:
It measures only one aspect of performance: time, meaning execution time and not the time to acquire or learn a task.
It considers only expert users. Generally, users differ regarding their knowledge and experience of different systems and tasks, motor skills, and technical ability.
It considers only routine unit tasks.
The method has to be specified step by step, although this makes it more accessible for an average person without advanced technical skills.
The execution of the method has to be error-free.
The mental operator aggregates different mental operations and therefore cannot model a deeper representation of the user's mental operations. If this is crucial, a GOMS model has to be used (e.g. model K2).
Also, one should keep in mind when assessing a computer system that other aspects of performance (errors, learning, functionality, recall, concentration, fatigue, and acceptability), other types of users (novice, casual), and non-routine tasks have to be considered as well. Furthermore, tasks which take more than a few minutes take several hours to model, and a common source of errors is forgetting operations. This implies that the KLM is best suited for short tasks with few operators. In addition, the KLM cannot make a perfect prediction and has a root-mean-square error of 21%. Example The following example, slightly modified from Kieras to be more compact, shows the practical use of the KLM by comparing two different designs for deleting a file, for an average skilled typist. Note that M is 1.35 seconds, as stated in the KLM, instead of the 1.2 seconds used by Kieras; the difference between the two designs would remain the same either way. The comparison shows that Design B is 1 second faster than Design A, although it contains more operations. Adaptions The six operators of the KLM can be reduced, but this decreases the accuracy of the model. If such reduced accuracy is acceptable (e.g. for "back-of-the-envelope" calculations), the simplification can be sufficient. While the existing KLM applies to desktop applications, the model might not cover the full range of mobile tasks; as Dunlop and Cross claimed, the KLM is no longer precise for mobile devices. Various efforts have been made to extend the KLM for use with mobile phones and touch devices.
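The arithmetic behind KLM estimates like the file-deletion comparison above is simple enough to script. The following sketch sums operator counts against unit times; only the M time of 1.35 s appears explicitly above, the other unit times are the commonly cited KLM defaults, and the operator counts for the two designs are invented for illustration rather than taken from Kieras's actual encodings.

```c
#include <stdio.h>

/* Commonly cited KLM operator times in seconds; K uses a figure often
 * quoted for an average typist. All are assumptions for illustration. */
#define T_K 0.28   /* keystroke */
#define T_P 1.10   /* pointing with a mouse */
#define T_H 0.40   /* homing hands between keyboard and mouse */
#define T_M 1.35   /* mental preparation */

/* Execution time = operator counts times their unit times, plus any
 * non-overlapped system response time r. */
static double klm_time(int k, int p, int h, int m, double r)
{
    return k * T_K + p * T_P + h * T_H + m * T_M + r;
}

int main(void)
{
    /* Two hypothetical designs for the same task. */
    printf("Design A: %.2f s\n", klm_time(4, 2, 2, 2, 0.0));  /* 6.82 s */
    printf("Design B: %.2f s\n", klm_time(9, 1, 1, 1, 0.0));  /* 5.37 s */
    return 0;
}
```

As in the worked example above, the design with more operations can still come out faster if it replaces expensive M and P operators with cheap keystrokes.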
Among the significant contributions to these mobile extensions is the work of Holleis, who retained the existing operators while revisiting their timing specifications. Furthermore, he introduced new operators: Distraction (X), Gesture (G), and Initial Act (I). While Li and Holleis both agree that the KLM can be applied to predict task times on mobile devices, Li suggests further modifications to the model by introducing a new concept called operator blocks, defined as "the sequence of operators that can be used with high repeatability by analyst of the extended KLM". He also discards old operators and defines 5 new mental operators and 9 new physical operators, 4 of which focus on pen-based operations. Rice and Lartigue suggest numerous operators for touch devices, update existing operators, and name the resulting model the TLM (Touch Level Model). They retain the operators Keystroke (K/B), Homing (H), Mental (M), and Response Time (R(t)) and suggest new touch-specific operators, partly based on Holleis' suggested operators:
Distraction. A multiplicative operator that adds time to other operators.
Pinch. A 2+ finger gesture commonly used to zoom out.
Zoom. A 2+ finger gesture commonly used to zoom in.
Initial Act. The action or actions necessary to prepare the system for use (e.g. unlocking the device, tapping an icon, entering a password).
Tap. Tapping some area of the screen to effect a change or initiate an action.
Swipe. A 1+ finger gesture in which a finger or fingers are placed on the screen and subsequently moved in a single direction for a specified amount of time.
Tilt. The tilting, or full rotation, of the entire device by d degrees (or radians).
Rotate. A 2+ finger gesture in which fingers are placed on the screen and then rotated d degrees (or radians) about a central axis.
Drag. A 1+ finger gesture in which fingers are placed on the screen and then moved, usually in a straight line, to another location.
See also Human-Computer Interaction Usability Usability Testing Human information processor model GOMS CMN-GOMS CPM-GOMS References External links Simple KLM calculator (free, web-based) Simple KLM calculator (free, downloadable Windows app) The KLM Form Analyzer (KLM-FA), a program which automatically evaluates web form filling tasks (free, downloadable Windows app). The CogTool project at Carnegie Mellon University has developed an open-source tool to support KLM-GOMS analysis. See also their publications about CogTool. GOMS by Lorin Hochstein Human–computer interaction
Keystroke-level model
Engineering
2,041
657,187
https://en.wikipedia.org/wiki/Mobile%20computing
Mobile computing is human–computer interaction in which a computer is expected to be transported during normal usage and to allow for transmission of data, which can include voice and video transmissions. Mobile computing involves mobile communication, mobile hardware, and mobile software. Communication issues include ad hoc networks and infrastructure networks as well as communication properties, protocols, data formats, and concrete technologies. Hardware includes mobile devices or device components. Mobile software deals with the characteristics and requirements of mobile applications. Main principles
Portability: Devices/nodes connected within the mobile computing system should facilitate mobility. These devices may have limited device capabilities and limited power supply, but should have sufficient processing capability and physical portability to operate in a movable environment.
Connectivity: This defines the quality of service (QoS) of the network connectivity. In a mobile computing system, network availability is expected to be maintained at a high level with a minimal amount of lag/downtime, without being affected by the mobility of the connected nodes.
Interactivity: The nodes belonging to a mobile computing system are connected with one another to communicate and collaborate through active transactions of data.
Individuality: A portable device or a mobile node connected to a mobile network often denotes an individual; a mobile computing system should be able to adapt the technology to cater to individual needs and also to obtain contextual information from each node.
Devices Some of the most common forms of mobile computing devices are given below:
Portable computers: compact, lightweight units including a full character set keyboard and primarily intended as hosts for software that may be parameterized, such as laptops/desktops, smartphones/tablets, etc.
Smart cards: can run multiple applications but are typically used for payment, travel, and secure area access.
Mobile phones: telephony devices which can call from a distance through cellular networking technology.
Wearable computers: mostly limited to functional keys and primarily intended for the incorporation of software agents, such as bracelets, keyless implants, etc.
These classes are expected to endure and to complement each other, none replacing another completely. Other types of mobile computers have been introduced since the 1990s, including the:
Portable computer
Personal digital assistant, enterprise digital assistant
Ultra-Mobile PC
Laptop
Tablet computer
Wearable computer
E-reader
Carputer
Handheld PC
Limitations
Expandability, replaceability and modularity: In contrast to the common traditional motherboard-based PC, the SoC architecture in which mobile devices are embedded makes these features impossible.
Lack of a BIOS: As most smart devices lack a proper BIOS, their bootloading capabilities are limited, as they can only boot into the single operating system with which they came, in contrast to the PC BIOS model.
Range and bandwidth: Mobile Internet access is generally slower than direct cable connections, using technologies such as GPRS and EDGE, and more recently HSDPA, HSUPA, 3G and 4G networks, as well as the proposed 5G network. These networks are usually available within a range of commercial cell phone towers. High-speed wireless LANs are inexpensive but have a very limited range.
Security standards: When working mobile, one is dependent on public networks, requiring careful use of VPN.
Security is a major concern in mobile computing standards and fleet deployments: a VPN can be attacked through the large number of networks interconnected along the line.
Power consumption: When a power outlet or portable generator is not available, mobile computers must rely entirely on battery power. Combined with the compact size of many mobile devices, this often means unusually expensive batteries must be used to obtain the necessary battery life.
Transmission interferences: Weather, terrain, and the range from the nearest signal point can all interfere with signal reception. Reception in tunnels, some buildings, and rural areas is often poor.
Potential health hazards: People who use mobile devices while driving are often distracted from driving and are thus assumed more likely to be involved in traffic accidents. (While this may seem obvious, there is considerable discussion about whether banning mobile device use while driving reduces accidents.) Cell phones may interfere with sensitive medical devices. Questions concerning mobile phone radiation and health have been raised.
Human interface with device: Screens and keyboards tend to be small, which may make them hard to use. Alternate input methods such as speech or handwriting recognition require training.
In-vehicle computing and fleet computing Many commercial and government field forces deploy rugged portable computers with their fleets of vehicles. This requires the units to be anchored to the vehicle for driver safety, device security, and ergonomics. Rugged computers are rated for the severe vibration associated with large service vehicles and off-road driving and for the harsh environmental conditions of constant professional use, such as in emergency medical services, fire, and public safety. Other elements affecting function in the vehicle:
Operating temperature: A vehicle cabin can often experience wide temperature swings. Computers typically must be able to withstand these temperatures while operating. Typical fan-based cooling has stated ambient temperature limits, and temperatures below freezing require localized heaters to bring components up to operating temperature (based on independent studies by the SRI Group and by Panasonic R&D).
Vibration can decrease the life expectancy of computer components, notably rotational storage such as HDDs.
Visibility of standard screens becomes an issue in bright sunlight.
Touchscreens let users interact with the units in the field without removing gloves.
High-temperature battery settings: Lithium-ion batteries are sensitive to high-temperature conditions during charging. A computer designed for the mobile environment should be designed with a high-temperature charging function that limits the charge to 85% or less of capacity.
External antenna connections route around the typical metal cabins of vehicles, which would otherwise block wireless reception, and take advantage of much more capable external communication and navigation equipment.
Security issues involved in mobile Mobile security has become increasingly important in mobile computing. It is of particular concern as it relates to the security of personal information now stored on the smartphone. Mobile applications might copy user data from these devices to a remote server without the user's permission or consent.
The user profiles automatically created in the cloud for smartphone users raise privacy concerns on all major platforms, including, but not limited to, location tracking and personal data collection, regardless of user settings on the device. More and more users and businesses use smartphones as a means of planning and organizing their work and private life. Within companies, these technologies are causing profound changes in the organization of information systems, and therefore they have become the source of new risks. Indeed, smartphones collect and compile an increasing amount of sensitive information, access to which must be controlled to protect the privacy of the user and the intellectual property of the company. Smartphones are preferred targets of attacks. These attacks exploit weaknesses related to smartphones that can come from wireless telecommunication technologies like Wi-Fi networks and GSM. There are also attacks that exploit software vulnerabilities in both the web browser and the operating system. Finally, there are forms of malicious software that rely on the limited technical knowledge of average users. Different security counter-measures are being developed and applied to smartphones, from security in different layers of software to the dissemination of information to end users. There are good practices to be observed at all levels, from design to use, through the development of operating systems, software layers, and downloadable apps. Portable computing devices Several categories of portable computing devices can run on batteries but are not usually classified as laptops: portable computers, PDAs, ultra mobile PCs (UMPCs), tablets, and smartphones.
A portable computer is a general-purpose computer that can be easily moved from place to place, but cannot be used while in transit, usually because it requires some "setting-up" and an AC power source. The most famous example is the Osborne 1. Portable computers are also called "transportable" or "luggable" PCs.
A personal digital assistant (PDA) is a small, usually pocket-sized, computer with limited functionality. It is intended to supplement and synchronize with a desktop computer, giving access to contacts, address book, notes, e-mail, and other features.
An ultra mobile PC is a full-featured, PDA-sized computer running a general-purpose operating system.
Phones and tablets: a slate tablet is shaped like a paper notebook. Smartphones are essentially the same devices as tablets; the difference is that smartphones are much smaller and pocketable. Instead of a physical keyboard, these devices have a touchscreen with a virtual keyboard, but can also link to a physical keyboard via wireless Bluetooth or USB. These devices include features other computer systems would not be able to incorporate because of their portability, such as built-in cameras, although some laptops possess camera integration, and desktops and laptops can connect to a webcam by way of USB.
A carputer is installed in an automobile. It operates as a wireless computer, sound system, GPS, and DVD player. It also contains word processing software and is Bluetooth compatible.
A Pentop (discontinued) is a computing device the size and shape of a pen. It functions as a writing utensil, MP3 player, language translator, digital storage device, and calculator.
An application-specific computer is one that is tailored to a particular application.
For example, Ferranti introduced a handheld application-specific mobile computer (the MRT-100) in the form of a clipboard for conducting opinion polls. Boundaries that separate these categories are blurry at times. For example, the OQO UMPC is also a PDA-sized tablet PC; the Apple eMate had the clamshell form factor of a laptop but ran PDA software. The HP Omnibook line of laptops included some devices small enough to be called ultra mobile PCs. The hardware of the Nokia 770 internet tablet is essentially the same as that of a PDA such as the Zaurus 6000; the only reason it is not called a PDA is that it does not have PIM software. On the other hand, both the 770 and the Zaurus can run some desktop Linux software, usually with modifications. Mobile data communication Wireless data connections used in mobile computing take three general forms. Cellular data service uses technologies such as GSM, CDMA or GPRS, 3G networks such as W-CDMA, EDGE or CDMA2000, and more recently 4G and 5G networks. These networks are usually available within range of commercial cell towers. Wi-Fi connections offer higher performance, may be either on a private business network or accessed through public hotspots, and have a typical range of 100 feet indoors and up to 1000 feet outdoors. Satellite Internet access covers areas where cellular and Wi-Fi are not available and may be set up anywhere the user has a line of sight to the satellite's location, which for satellites in geostationary orbit means having an unobstructed view of the southern sky. Some enterprise deployments combine networks from multiple cellular networks or use a mix of cellular, Wi-Fi, and satellite. When using a mix of networks, a mobile virtual private network (mobile VPN) not only handles the security concerns, but also performs the multiple network logins automatically and keeps the application connections alive to prevent crashes or data loss during network transitions or coverage loss. See also Lists of mobile computers Mobile cloud computing Mobile Computing and Communications Review Mobile development Mobile device management Mobile identity management Mobile interaction Mobile software Ubiquitous computing References Footnotes Bibliography G. H. Forman and J. Zahorjan, Computer, 1994, doi.ieeecomputersociety.org. David P. Helmbold, "A dynamic disk spin-down technique for mobile computing", citeseer.ist.psu.edu, 1996. M. H. Repacholi, "Health risks from the use of mobile phones", Toxicology Letters, 2001, Elsevier. J. A. Landay and T. R. Kaufmann, "User interface issues in mobile computing", Workstation Operating Systems, 1993. J. Roth, Mobile Computing - Grundlagen, Technik, Konzepte, 2005, dpunkt.verlag, Germany. Srikanth Pullela, "Security Issues in Mobile Computing", http://crystal.uta.edu/~kumar/cse6392/termpapers/Srikanth_paper.pdf James B. Zimmerman, "Mobile Computing: Characteristics, Business Benefits, and Mobile Framework", April 2, 1999, https://web.archive.org/web/20111126105426/http://ac-support.europe.umuc.edu/~meinkej/inss690/zimmerman/INSS%20690%20CC%20-%20Mobile%20Computing.htm Vasilis Koudounas and Omar Iqbal, "Mobile Computing: Past, Present, and Future", https://web.archive.org/web/20181110210750/http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/vk5/report.html Further reading Automatic identification and data capture Mobile phones
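As a sketch of the reconnection behaviour a mobile VPN automates during network transitions or coverage loss, the following C program retries a failed link with capped exponential backoff. try_connect is a stub standing in for a platform networking call, not a real API; a real implementation would additionally re-authenticate and resume application sessions once the link returns, as described above.

```c
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>   /* POSIX sleep(); assumed available */

/* Stub standing in for a platform connect call; a real mobile VPN would
 * probe whichever network (cellular, Wi-Fi, satellite) is preferred. */
static int attempts = 0;
static bool try_connect(void)
{
    return ++attempts >= 3;   /* simulate success on the third attempt */
}

int main(void)
{
    unsigned delay = 1;   /* seconds */
    while (!try_connect()) {
        printf("no link, retrying in %u s\n", delay);
        sleep(delay);
        delay = delay < 32 ? delay * 2 : 32;   /* exponential backoff, capped */
    }
    printf("link restored after %d attempt(s)\n", attempts);
    return 0;
}
```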
Mobile computing
Technology
2,693
7,407,236
https://en.wikipedia.org/wiki/TIM/TOM%20complex
The TIM/TOM complex is a protein complex in cellular biochemistry which translocates proteins produced from nuclear DNA through the mitochondrial membrane for use in oxidative phosphorylation. In enzymology, the complex is described as a mitochondrial protein-transporting ATPase, or more systematically ATP phosphohydrolase (mitochondrial protein-importing), as the TIM part requires ATP hydrolysis to work. Only 13 proteins necessary for a mitochondrion are actually coded in mitochondrial DNA. The vast majority of proteins destined for the mitochondria are encoded in the nucleus and synthesised in the cytoplasm. These are tagged by an N-terminal and/or a C-terminal signal sequence. Following transport through the cytosol from the nucleus, the signal sequence is recognized by a receptor protein in the translocase of the outer membrane (TOM) complex. The signal sequence and adjacent portions of the polypeptide chain are inserted into the TOM complex, then begin interaction with a translocase of the inner membrane (TIM) complex; the two complexes are hypothesized to be transiently linked at sites of close contact between the two membranes. The signal sequence is then translocated into the matrix in a process that requires an electrochemical hydrogen ion gradient across the inner membrane. Mitochondrial Hsp70 binds to regions of the polypeptide chain and maintains it in an unfolded state as it moves into the matrix. The ATPase domain is essential for the interaction between Hsp70 and the subunit Tim44: without it, the carboxy-terminal segment is unable to bind to Tim44. As mtHsp70 transmits the nucleotide state of the ATPase domain through alpha-helices A and B, Tim44 interacts with the peptide binding domain to coordinate protein binding. TIC/TOC Complex vs. TIM/TOM Complex This protein complex is functionally analogous to the TIC/TOC complex located on the inner and outer membranes of the chloroplast, in the sense that both transport proteins across organellar membranes. Although they both hydrolyze triphosphates, they are evolutionarily unrelated. References External links TCDB 3.A.8 - description of the entire complex Overview of the various import ways into mitochondria (group of N. Pfanner) Transport proteins Mitochondria Transmembrane proteins EC 3.6.3 EC 7.4.2 Enzymes of unknown structure
TIM/TOM complex
Chemistry
524
6,959,617
https://en.wikipedia.org/wiki/Nominal%20level
Nominal level is the operating level at which an electronic signal processing device is designed to operate. The electronic circuits that make up such equipment are limited in the maximum signal they can handle and in the low-level internally generated electronic noise they add to the signal. The difference between the internal noise and the maximum level is the device's dynamic range. The nominal level is the level at which these devices were designed to operate, for best dynamic range and adequate headroom. When a signal is chained with improper gain staging through many devices, clipping may occur or the system may operate with reduced dynamic range. In audio, a related measurement, signal-to-noise ratio, is usually defined as the difference between the nominal level and the noise floor, leaving the headroom as the difference between nominal and maximum output. The measured level is a time average, meaning that the peaks of audio signals regularly exceed the measured average level. The headroom measurement defines how far the peak levels can stray from the nominal measured level before clipping. The difference between the peaks and the average for a given signal is the crest factor. Standards VU meters are designed to represent the perceived loudness of a passage of music, or other audio content, measuring in volume units. Devices are designed so that the best signal quality is obtained when the meter rarely goes above nominal. The markings are often in dB instead of "VU", and the reference level should be defined in the device's manual. In most professional recording and sound reinforcement equipment, the nominal level is +4 dBu. In semi-professional and domestic equipment, the nominal level is usually −10 dBV. This difference is due to the cost required to create larger power supplies and output higher levels. In broadcasting equipment, the maximum level is termed the Maximum Permitted Level, which is defined by European Broadcasting Union standards. These devices use peak programme meters instead of VU meters, which gives the reading a different meaning. "Mic level" is sometimes defined as −60 dBV, though levels from microphones vary widely. In video systems, nominal levels are 1 VP-P for synched systems, such as baseband composite video, and 0.7 VP-P for systems without sync. Note that these levels are measured peak-to-peak, while audio levels are time averages. See also Alignment level Transmission level point References External links Nominal Level — Sweetwater glossary Level Headed — Nominal Level (explained) plus an SV-3700 modification Signal processing Sound
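As a sketch of the decibel arithmetic implied by these definitions, the following program computes headroom, signal-to-noise ratio, dynamic range, and the crest factor of a sine wave. The level figures are illustrative assumptions, not the specifications of any particular device; only the relationships between them are the point.

```c
#include <math.h>
#include <stdio.h>

/* Convert a voltage ratio to decibels. */
static double to_db(double ratio) { return 20.0 * log10(ratio); }

int main(void)
{
    /* Illustrative figures: +4 dBu nominal level, +24 dBu clipping point,
     * and a -86 dBu noise floor. */
    double nominal = 4.0, max_out = 24.0, noise = -86.0;

    printf("headroom:        %.1f dB\n", max_out - nominal);  /* 20.0 dB  */
    printf("signal-to-noise: %.1f dB\n", nominal - noise);    /* 90.0 dB  */
    printf("dynamic range:   %.1f dB\n", max_out - noise);    /* 110.0 dB */

    /* Crest factor of a pure sine: peak over RMS = sqrt(2), about 3 dB;
     * real programme material has much higher crest factors. */
    printf("sine crest factor: %.1f dB\n", to_db(sqrt(2.0)));
    return 0;
}
```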
Nominal level
Technology,Engineering
495
45,260,029
https://en.wikipedia.org/wiki/Penicillium%20coalescens
Penicillium coalescens is a fungus species of the genus Penicillium which was isolated from soil. See also List of Penicillium species References coalescens Fungi described in 1984 Fungus species
Penicillium coalescens
Biology
46
2,309,327
https://en.wikipedia.org/wiki/Calabi%20flow
In the mathematical fields of differential geometry and geometric analysis, the Calabi flow is a geometric flow which deforms a Kähler metric on a complex manifold. Precisely, given a Kähler manifold M, the Calabi flow is given by \frac{\partial g_{i\bar{j}}}{\partial t} = \frac{\partial^2 R}{\partial z^i \, \partial \bar{z}^j}, where g = g(t) is a mapping from an open interval into the collection of all Kähler metrics on M, R is the scalar curvature of the individual Kähler metrics, and the indices i, \bar{j} correspond to arbitrary holomorphic coordinates z^i. This is a fourth-order geometric flow, as the right-hand side of the equation involves fourth derivatives of g. The Calabi flow was introduced by Eugenio Calabi in 1982 as a suggestion for the construction of extremal Kähler metrics, which were also introduced in the same paper. It is the gradient flow of the Calabi functional; extremal Kähler metrics are the critical points of the Calabi functional. A convergence theorem for the Calabi flow was found by Piotr Chruściel in the case that M has complex dimension equal to one. Xiuxiong Chen and others have made a number of further studies of the flow, although as of 2020 the flow is still not well understood. References Eugenio Calabi. Extremal Kähler metrics. Ann. of Math. Stud. 102 (1982), pp. 259–290. Seminar on Differential Geometry. Princeton University Press (PUP), Princeton, N.J. E. Calabi and X.X. Chen. The space of Kähler metrics. II. J. Differential Geom. 61 (2002), no. 2, 173–193. X.X. Chen and W.Y. He. On the Calabi flow. Amer. J. Math. 130 (2008), no. 2, 539–570. Piotr T. Chruściel. Semi-global existence and convergence of solutions of the Robinson-Trautman (2-dimensional Calabi) equation. Comm. Math. Phys. 137 (1991), no. 2, 289–313. Geometric flow Partial differential equations String theory
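For reference, the Calabi functional mentioned above has a standard closed form in the literature; the formula is supplied here for context and is not part of the original article text. In a fixed Kähler class,

\mathrm{Ca}(\omega) = \int_M R_\omega^2 \, d\mu_\omega,

where \omega ranges over the Kähler forms in the class, R_\omega is the scalar curvature of the corresponding metric, and d\mu_\omega is its volume measure. The Calabi flow decreases \mathrm{Ca} along its trajectories, and its critical points are precisely the extremal Kähler metrics.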
Calabi flow
Astronomy
430
78,581,512
https://en.wikipedia.org/wiki/Starship%20Propellant%20Transfer%20Demonstration
The Starship Propellant Transfer Demo is expected to occur in 2025. A similar test occurred during Starship's third test flight, though the transfer during that test was between two tanks on the same vehicle. The ability to refuel a Starship in low orbit is critical for the Artemis program, as Starship HLS requires approximately ten tanker launches to reach the lunar surface. Mission profile The mission profile for the Starship Propellant Transfer Demo will begin with the first launch. This launch will deliver the upper stage into orbit around the Earth, while the first stage returns to the launch site for a catch. The second launch will repeat this profile three to four weeks later and dock with the first Starship. Once docked, the vehicles will use a pressure differential between them to force propellant from the second vehicle into the first. After this is complete, the two ships will undock and reenter. Payload The second launch in the propellant transfer will fly an unknown amount of propellant as its payload. In order to prevent the propellant from boiling off during the vehicle's time in orbit, significant insulation and vacuum jacketing will be added to the propellant lines inside the vehicle. This change has already been observed on Block 2 vehicles. References Spaceflight
Starship Propellant Transfer Demonstration
Astronomy
257
461,410
https://en.wikipedia.org/wiki/Arbovirus
Arbovirus is an informal name for any virus that is transmitted by arthropod vectors. The term arbovirus is a portmanteau word (arthropod-borne virus). Tibovirus (tick-borne virus) is sometimes used to more specifically describe viruses transmitted by ticks, a superorder within the arthropods. Arboviruses can affect both animals (including humans) and plants. In humans, symptoms of arbovirus infection generally occur 3–15 days after exposure to the virus and last three or four days. The most common clinical features of infection are fever, headache, and malaise, but encephalitis and viral hemorrhagic fever may also occur. Signs and symptoms The incubation period – the time between when infection occurs and when symptoms appear – usually ranges from 2 to 15 days for arboviruses. The majority of infections, however, are asymptomatic. Among cases in which symptoms do appear, symptoms tend to be non-specific, resembling a flu-like illness, and are not indicative of a specific causative agent. These symptoms include fever, headache, malaise, rash and fatigue. Rarely, vomiting and hemorrhagic fever may occur. The central nervous system can also be affected by infection, as encephalitis and meningitis are sometimes observed. Prognosis is good for most people, but is poor in those who develop severe symptoms, with up to a 20% mortality rate in this population depending on the virus. The very young, elderly, pregnant women, and people with immune deficiencies are more likely to develop severe symptoms. Cause Transmission Arboviruses maintain themselves in nature by going through a cycle between a host, an organism that carries the virus, and a vector, an organism that carries and transmits the virus to other organisms. For arboviruses, vectors are commonly mosquitoes, ticks, sandflies and other arthropods that consume the blood of vertebrates for nutritional or developmental purposes. Vertebrates whose blood is consumed act as the hosts; each vector generally has an affinity for the blood of specific species, making those species its hosts. Transmission between the vector and the host occurs when the vector feeds on the blood of the vertebrate, wherein the virus that has established an infection in the salivary glands of the vector comes into contact with the host's blood. While the virus is inside the host, it undergoes a process called amplification, where the virus replicates at sufficient levels to induce viremia, a condition in which there are large numbers of virions present in the blood. The abundance of virions in the host's blood allows the host to transmit the virus to other organisms if its blood is consumed by them. When uninfected vectors become infected from feeding, they are then capable of transmitting the virus to uninfected hosts, resuming amplification of virus populations. If viremia is not achieved in a vertebrate, the species can be called a "dead-end host", as the virus cannot be transmitted back to the vector. An example of this vector-host relationship can be observed in the transmission of the West Nile virus. Female mosquitoes of the genus Culex prefer to consume the blood of passerine birds, making them the hosts of the virus. When these birds are infected, the virus amplifies, potentially infecting multiple mosquitoes that feed on their blood. These infected mosquitoes may go on to further transmit the virus to more birds. If the mosquito is unable to find its preferred food source, it will choose another. 
Human blood is sometimes consumed, but since the West Nile virus does not replicate well in mammals, humans are considered a dead-end host. In humans Person-to-person transmission of arboviruses is not common, but can occur. Blood transfusions, organ transplantation, and the use of blood products can transmit arboviruses if the virus is present in the donor's blood or organs. Because of this, blood and organs are often screened for viruses before being administered. Rarely, vertical transmission, or mother-to-child transmission, has been observed in infected pregnant and breastfeeding women. Exposure to used needles may also transmit arboviruses if they have been used by an infected person or animal. This puts intravenous drug users and healthcare workers at risk for infection in regions where the arbovirus may be spreading in human populations. Virology Arboviruses are a polyphyletic group, belonging to various viral genera and therefore exhibiting different virologic characteristics. Diagnosis Preliminary diagnosis of arbovirus infection is usually based on clinical presentations of symptoms, places and dates of travel, activities, and epidemiological history of the location where infection occurred. Definitive diagnosis is typically made in a laboratory by employing some combination of blood tests, particularly immunologic, serologic and/or virologic techniques such as ELISA, complement fixation, polymerase chain reaction, neutralization test, and hemagglutination-inhibition test. Classification In the past, arboviruses were organized into one of four groups: A, B, C, and D. Group A denoted members of the genus Alphavirus, Group B denoted members of the genus Flavivirus, and Group C remains as the Group C serogroup of the genus Orthobunyavirus. Group D was renamed in the mid-1950s to the Guama group and is currently the Guama serogroup in the genus Orthobunyavirus. Currently, viruses are jointly classified according to Baltimore classification and a virus-specific system based on standard biological classification. With the exception of the African swine fever virus, which belongs to the Asfarviridae family of viruses, all major clinically important arboviruses belong to one of the following four groups: Order Bunyavirales (Baltimore class V) Genus Banyangvirus Huaiyangshan banyangvirus Genus Orthobunyavirus Bunyamwera virus California encephalitis virus Jamestown Canyon virus La Crosse encephalitis virus Genus Orthonairovirus Crimean–Congo hemorrhagic fever virus Genus Phlebovirus Heartland virus Rift Valley fever virus Toscana virus Family Flaviviridae (Baltimore class IV) Genus Flavivirus Mosquito-borne viruses Dengue virus group Dengue virus Japanese encephalitis virus group Japanese encephalitis virus Murray Valley encephalitis virus St. 
Louis encephalitis virus West Nile virus Spondweni virus group Spondweni virus Zika virus Yellow fever virus group Yellow fever virus Tick-borne viruses Mammalian tick-borne virus group Kyasanur forest disease virus Tick-borne encephalitis virus Family Reoviridae (Baltimore class III) Subfamily Sedoreovirinae Genus Orbivirus African horse sickness virus Bluetongue disease virus Epizootic hemorrhagic disease virus Equine encephalosis virus Genus Seadornavirus Banna virus Subfamily Spinareovirinae Genus Coltivirus Colorado tick fever virus Family Togaviridae (Baltimore class IV) Genus Alphavirus Chikungunya virus Eastern equine encephalitis virus Ross River virus Venezuelan equine encephalitis virus Western equine encephalitis virus Prevention Vector control measures, especially mosquito control, are essential to reducing the transmission of disease by arboviruses. Habitat control involves draining swamps and removal of other pools of stagnant water (such as old tires, large outdoor potted plants, empty cans, etc.) that often serve as breeding grounds for mosquitoes. Insecticides can be applied in rural and urban areas, inside houses and other buildings, or in outdoor environments. They are often quite effective for controlling arthropod populations, though use of some of these chemicals is controversial, and some organophosphates and organochlorides (such as DDT) have been banned in many countries. Infertile male mosquitoes have been introduced in some areas in order to reduce the breeding rate of relevant mosquito species. Larvicides are also used worldwide in mosquito abatement programs. Temefos is a common mosquito larvicide. People can also reduce the risk of getting bitten by arthropods by employing personal protective measures such as sleeping under mosquito nets, wearing protective clothing, applying insect repellents such as permethrin and DEET to clothing and exposed skin, and (where possible) avoiding areas known to harbor high arthropod populations. Arboviral encephalitis can be prevented in two major ways: personal protective measures and public health measures to reduce the population of infected mosquitoes. Personal measures include reducing time outdoors particularly in early evening hours, wearing long pants and long-sleeved shirts and applying mosquito repellent to exposed skin areas. Public health measures often require spraying of insecticides to kill juvenile (larvae) and adult mosquitoes. Vaccination Vaccines are available for the following arboviral diseases: Japanese encephalitis Yellow fever Tick-borne encephalitis Rift Valley fever (only veterinary use) Vaccines are in development for the following arboviral diseases: Zika virus Dengue fever Eastern equine encephalitis West Nile Chikungunya Rift Valley fever Treatment Because the arboviral encephalitides are viral diseases, antibiotics are not an effective form of treatment and no effective antiviral drugs have yet been discovered. Treatment is supportive, attempting to deal with problems such as swelling of the brain, loss of the automatic breathing activity of the brain and other treatable complications like bacterial pneumonia. The WHO cautions against the use of aspirin and ibuprofen as they can increase the risk of bleeding. Epidemiology Most arboviruses are located in tropical areas; however, as a group they have a global distribution. The warm climate conditions found in tropical areas allow for year-round transmission by the arthropod vectors. 
Other important factors determining geographic distribution of arthropod vectors include rainfall, humidity, and vegetation. Mapping methods such as GIS and GPS have allowed for spatial and temporal analyses of arboviruses. Tagging cases or breeding sites geographically has allowed for deeper examination of vector transmission. Maps, fact sheets, and reports on the epidemiology of specific arboviruses and on arboviral epidemics are published by various public health resources. History Arboviruses were not known to exist until the rise of modern medicine, with the germ theory and an understanding that viruses were distinct from other microorganisms. The connection between arthropods and disease was not postulated until 1881, when Cuban doctor and scientist Carlos Finlay proposed that yellow fever may be transmitted by mosquitoes instead of human contact, a hypothesis that was verified by Major Walter Reed in 1901. The primary vector, Aedes aegypti, had spread globally from the 15th to the 19th centuries as a result of globalization and the slave trade. This geographic spreading caused dengue fever epidemics throughout the 18th and 19th centuries, and later, in 1906, transmission by the Aedes mosquitoes was confirmed, making yellow fever and dengue fever the first two diseases known to be caused by viruses. Thomas Milton Rivers published the first clear description of a virus as distinct from a bacterium in 1927. The discovery of the West Nile virus came in 1937, and it has since been found in Culex populations, causing epidemics throughout Africa, the Middle East, and Europe. The virus was introduced into the Western Hemisphere in 1999, sparking a series of epidemics. During the latter half of the 20th century, dengue fever reemerged as a global disease, with the virus spreading geographically due to urbanization, population growth, increased international travel, and global warming, and it continues to cause at least 50 million infections per year, making dengue fever the most common and clinically important arboviral disease. Yellow fever, alongside malaria, was a major obstacle in the construction of the Panama Canal. French supervision of the project in the 1880s was unsuccessful because of these diseases, forcing the abandonment of the project in 1889. During the American effort to construct the canal in the early 1900s, William C. Gorgas, the Chief Sanitary Officer of Havana, was tasked with overseeing the health of the workers. He had past success in eradicating the disease in Florida and Havana by reducing mosquito populations through draining nearby pools of water, cutting grass, applying oil to the edges of ponds and swamps to kill larvae, and capturing adult mosquitoes that remained indoors during the daytime. Joseph Augustin LePrince, the Chief Sanitary Inspector of the Canal Zone, invented the first commercial larvicide, a mixture of carbolic acid, resin, and caustic soda, to be used throughout the Canal Zone. The combined implementation of these sanitation measures led to a dramatic decline in the number of workers dying and the eventual eradication of yellow fever in the Canal Zone, as well as the containment of malaria during the 10-year construction period. Because of the success of these methods at preventing disease, they were adopted and improved upon in other regions of the world. See also List of diseases spread by invertebrates List of insect-borne diseases Mosquito-borne disease Robovirus Tibovirus Tick-borne disease References External links
Arbovirus
Biology
2,799
75,356,654
https://en.wikipedia.org/wiki/Navafenterol
Navafenterol is an investigational drug that was evaluated for chronic obstructive pulmonary disease. It is a dual-acting beta2 agonist and muscarinic antagonist. Further development has been discontinued for strategic reasons. References Beta2-adrenergic agonists Muscarinic antagonists Benzotriazoles Quinolinols Thiophenes Esters Alcohols Tertiary alcohols Secondary amines Tertiary amines Cyclohexylamines
Navafenterol
Chemistry
98
37,388,708
https://en.wikipedia.org/wiki/Cadmium%20chromate
Cadmium chromate is the inorganic compound with the formula CdCrO4. It is relevant to chromate conversion coating, which is used to passivate common metal alloys such as aluminium, zinc, cadmium, copper, silver, magnesium, and tin. In conversion coating, chromate reacts with these metals to prevent corrosion, retain electrical conductivity, and provide a finish for the appearance of the final alloy products. This process is commonly used on hardware and tool items. Coated surfaces take on the distinctive yellow color of the chromate species. References Cadmium compounds Chromates
Cadmium chromate
Chemistry
118
26,062,564
https://en.wikipedia.org/wiki/Cancer%20genome%20sequencing
Cancer genome sequencing is the whole genome sequencing of a single, homogeneous or heterogeneous group of cancer cells. It is a biochemical laboratory method for the characterization and identification of the DNA or RNA sequences of cancer cell(s). Unlike whole genome (WG) sequencing, which is typically performed on blood cells (as in J. Craig Venter's and James D. Watson's WG sequencing projects), saliva, epithelial cells or bone, cancer genome sequencing involves direct sequencing of primary tumor tissue, adjacent or distal normal tissue, the tumor microenvironment such as fibroblast/stromal cells, or metastatic tumor sites. Similar to whole genome sequencing, the information generated from this technique includes: identification of nucleotide bases (DNA or RNA), copy number and sequence variants, mutation status, and structural changes such as chromosomal translocations and fusion genes. Cancer genome sequencing is not limited to WG sequencing and can also include exome, transcriptome, and micronome sequencing, and end-sequence profiling. These methods can be used to quantify gene expression, miRNA expression, and identify alternative splicing events in addition to sequence data. The first report of cancer genome sequencing appeared in 2006. In this study 13,023 genes were sequenced in 11 breast and 11 colorectal tumors. A subsequent follow-up was published in 2007 where the same group added just over 5,000 more genes and almost 8,000 transcript species to complete the exomes of 11 breast and colorectal tumors. The first whole cancer genome to be sequenced was from cytogenetically normal acute myeloid leukaemia, by Ley et al. in November 2008. The first breast cancer tumor was sequenced by Shah et al. in October 2009, the first lung and skin tumors by Pleasance et al. in January 2010, and the first prostate tumors by Berger et al. in February 2011. History Historically, cancer genome sequencing efforts have been divided between transcriptome-based sequencing projects and DNA-centered efforts. The Cancer Genome Anatomy Project (CGAP) was first funded in 1997 with the goal of documenting the sequences of RNA transcripts in tumor cells. As technology improved, the CGAP expanded its goals to include the determination of gene expression profiles of cancerous, precancerous and normal tissues. The CGAP published the largest publicly available collection of cancer expressed sequence tags in 2003. The Sanger Institute's Cancer Genome Project, first funded in 2005, focuses on DNA sequencing. It has published a census of genes causally implicated in cancer, and a number of whole-genome resequencing screens for genes implicated in cancer. The International Cancer Genome Consortium (ICGC) was founded in 2007 with the goal of integrating available genomic, transcriptomic and epigenetic data from many different research groups. As of December 2011, the ICGC includes 45 committed projects and has data from 2,961 cancer genomes available. Societal Impact The Complexity and Biology of Cancer The process of tumorigenesis that transforms a normal cell into a cancerous cell involves a series of complex genetic and epigenetic changes. Identification and characterization of all these changes can be accomplished through various cancer genome sequencing strategies. The power of cancer genome sequencing lies in the heterogeneity of cancers and patients. Most cancers have a variety of subtypes, and on top of these 'cancer variants' are the differences between a cancer subtype in one individual and the same subtype in another individual. 
Cancer genome sequencing allows clinicians and oncologists to identify the specific and unique changes a patient has undergone to develop their cancer. Based on these changes, a personalized therapeutic strategy can be undertaken. Clinical Relevance A major contributor to cancer death and failed cancer treatment is clonal evolution at the cytogenetic level, for example as seen in acute myeloid leukaemia (AML). In a Nature study published in 2011, Ding et al. identified cellular fractions characterized by common mutational changes to illustrate the heterogeneity of a particular tumor pre- and post-treatment vs. normal blood in one individual. These cellular fractions could only have been identified through cancer genome sequencing, showing the information that sequencing can yield, and the complexity and heterogeneity of a tumor within one individual. Comprehensive Cancer Genomic Projects The two main projects focused on complete cancer characterization in individuals, heavily involving sequencing, are the Cancer Genome Project, based at the Wellcome Trust Sanger Institute, and the Cancer Genome Atlas, funded by the National Cancer Institute (NCI) and the National Human Genome Research Institute (NHGRI). Combined with these efforts, the International Cancer Genome Consortium (a larger organization) is a voluntary scientific organization that provides a forum for collaboration among the world's leading cancer and genomic researchers. Cancer Genome Project (CGP) The Cancer Genome Project's goal is to identify sequence variants and mutations critical in the development of human cancers. The project involves the systematic screening of coding genes and flanking splice junctions of all genes in the human genome for acquired mutations in human cancers. To investigate these events, the discovery sample set will include DNA from primary tumor, normal tissue (from the same individuals) and cancer cell lines. All results from this project are amalgamated and stored within the COSMIC cancer database. COSMIC also includes mutational data published in scientific literature. The Cancer Genome Atlas (TCGA) The TCGA is a multi-institutional effort to understand the molecular basis of cancer through genome analysis technologies, including large-scale genome sequencing techniques. Hundreds of samples are being collected, sequenced and analyzed. Currently the cancer tissues being collected include: central nervous system, breast, gastrointestinal, gynecologic, head and neck, hematologic, thoracic, and urologic. The components of the TCGA research network include: Biospecimen Core Resources, Genome Characterization Centers, Genome Sequencing Centers, Proteome Characterization Centers, a Data Coordinating Center, and Genome Data Analysis Centers. Each cancer type will undergo comprehensive genomic characterization and analysis. The data and information generated are freely available through the project's TCGA data portal. International Cancer Genome Consortium (ICGC) The ICGC's goal is "To obtain a comprehensive description of genomic, transcriptomic and epigenomic changes in 50 different tumor types and/or subtypes which are of clinical and societal importance across the globe". Technologies and platforms Cancer genome sequencing utilizes the same technology involved in whole genome sequencing. 
Sequencing technology has come a long way since its origins in 1977 with two independent groups: Frederick Sanger's enzymatic dideoxy DNA sequencing technique and Allan Maxam and Walter Gilbert's chemical degradation technique. Over 20 years after these landmark papers, 'second generation' high-throughput next-generation sequencing (HT-NGS) emerged, followed by 'third generation' HT-NGS technology in 2010. Three major second generation platforms include Roche/454 pyrosequencing, ABI/SOLiD sequencing by ligation, and Illumina's bridge amplification sequencing technology. Three major third generation platforms include Pacific Biosciences Single Molecule Real Time (SMRT) sequencing, Oxford Nanopore sequencing, and Ion semiconductor sequencing. Data Analysis As with any genome sequencing project, the reads must be assembled to form a representation of the chromosomes being sequenced. With cancer genomes, this is usually done by aligning the reads to the human reference genome. Since even non-cancerous cells accumulate somatic mutations, it is necessary to compare the sequence of the tumor to a matched normal tissue in order to discover which mutations are unique to the cancer. In some cancers, such as leukemia, it is not practical to match the cancer sample to a normal tissue, so a different non-cancerous tissue must be used. It has been estimated that discovery of all somatic mutations in a tumor would require 30-fold sequencing coverage of the tumor genome and a matched normal tissue. By comparison, the original draft of the human genome had approximately 65-fold coverage. To facilitate further improvement in somatic mutation detection in cancer, the Sequencing Quality Control Phase 2 Consortium has established a pair of tumor-normal cell lines as community reference samples and data sets for the benchmarking of cancer mutation detection. A major goal of cancer genome sequencing is to identify driver mutations: genetic changes which increase the mutation rate in the cell, leading to more rapid tumor evolution and metastasis. It is difficult to determine driver mutations from DNA sequence alone, but drivers tend to be the most commonly shared mutations amongst tumors, cluster around known oncogenes, and tend to be non-silent. Passenger mutations, which are not important in the progression of the disease, are randomly distributed throughout the genome. It has been estimated that the average tumor carries approximately 80 somatic mutations, fewer than 15 of which are expected to be drivers. A personal-genomics analysis requires further functional characterization of the detected mutant genes, and the development of a basic model of the origin and progression of the tumor. This analysis can be used to make pharmacological treatment recommendations. As of February 2012, this has only been done for patients in clinical trials designed to assess the personal genomics approach to cancer treatment. Limitations A large-scale screen for somatic mutations in breast and colorectal tumors showed that many low-frequency mutations each make a small contribution to cell survival. If cell survival is determined by many mutations of small effect, it is unlikely that genome sequencing will uncover a single "Achilles heel" target for anti-cancer drugs. However, somatic mutations tend to cluster in a limited number of signalling pathways, which are potential treatment targets. 
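To illustrate the tumor–normal comparison described above, the following is a minimal Python sketch of naive somatic single-nucleotide variant detection from per-position base counts. It is illustrative only: real pipelines run statistical callers on aligned reads, and the function name and thresholds here are invented for the example.

# Naive illustration of somatic SNV detection by comparing base counts
# at one genomic position in tumor vs. matched normal tissue.
def call_somatic_snv(ref_base, tumor_counts, normal_counts,
                     min_tumor_vaf=0.10, max_normal_vaf=0.02, min_depth=30):
    """Return the somatic alternate base at this position, or None."""
    tumor_depth = sum(tumor_counts.values())
    normal_depth = sum(normal_counts.values())
    if tumor_depth < min_depth or normal_depth < min_depth:
        return None  # roughly 30-fold coverage is needed for discovery
    for alt in "ACGT":
        if alt == ref_base:
            continue
        tumor_vaf = tumor_counts.get(alt, 0) / tumor_depth
        normal_vaf = normal_counts.get(alt, 0) / normal_depth
        # Somatic: present in the tumor, absent from the matched normal.
        if tumor_vaf >= min_tumor_vaf and normal_vaf <= max_normal_vaf:
            return alt
    return None

# Example: a C>T change seen in 30% of tumor reads but not in normal reads.
print(call_somatic_snv("C",
                       {"C": 70, "T": 30},
                       {"C": 98, "T": 1}))  # -> "T"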
Cancers are heterogeneous populations of cells. When sequence data is derived from a whole tumor, information about the differences in sequence and expression pattern between cells is lost. This difficulty can be ameliorated by single-cell analysis. Clinically significant properties of tumors, including drug resistance, are sometimes caused by large-scale rearrangements of the genome, rather than single mutations. In this case, information about single nucleotide variants will be of limited utility. Cancer genome sequencing can be used to provide clinically relevant information in patients with rare or novel tumor types. Translating sequence information into a clinical treatment plan is highly complicated, requires experts from many different fields, and is not guaranteed to lead to an effective treatment plan. Incidentalome The incidentalome is the set of detected genomic variants not related to the cancer under study. (The term is a play on the name incidentaloma, which designates tumors and growths detected on whole-body imaging by coincidence). The detection of such variants may result in additional measures such as further testing or lifestyle management. See also 454 Life Sciences Pyrosequencing ABI Solid Sequencing Cancer Genome Project Cancer Genome Atlas Caris Life Sciences Center for Personalized Cancer Therapy DNA nanoball sequencing International Cancer Genome Consortium Ion semiconductor sequencing Nanopore sequencing Next-generation sequencing Oncogenomics Polony sequencing Precision medicine Pyrosequencing Single molecule real time sequencing SNV calling from NGS data References External links The Cancer Genome Project CGAP The Cancer Genome Atlas Cancer Genome Project International Cancer Genome Consortium Francis S. Collins and Anna D. Barker. "Mapping the Cancer Genome". Scientific American, February 2007 Cancer genomics DNA sequencing
Cancer genome sequencing
Chemistry,Biology
2,368
7,485,092
https://en.wikipedia.org/wiki/Montipora
Montipora is a genus of scleractinian corals in the phylum Cnidaria. Members of the genus Montipora may exhibit many different growth morphologies. With eighty-five known species, Montipora is the second most species-rich coral genus after Acropora. Description Growth morphologies for the genus Montipora include submassive, laminar, foliaceous, encrusting, and branching. It is not uncommon for a single Montipora colony to display more than one growth morphology. Healthy Montipora corals can be a variety of colors, including orange, brown, pink, green, blue, purple, yellow, grey, or tan. Although they are typically uniform in color, some species, such as Montipora spumosa or Montipora verrucosa, may display a mottled appearance. Montipora corals have the smallest corallites of any coral family. Columellae are not present. The coenosteum and corallite walls are porous, which can result in elaborate structures. The coenosteum of each Montipora species is different, making it useful for identification. Polyps are typically only extended at night. Montipora corals are commonly mistaken for members of the genus Porites based on their visual similarities; however, Porites can be distinguished from Montipora by examining the structure of the corallites. Distribution Montipora corals are common on reefs and lagoons of the Red Sea, the western Indian Ocean and the southern Pacific Ocean, but are entirely absent from the Atlantic Ocean. Ecology Montipora corals are hermaphroditic broadcast spawners. Spawning typically happens in spring. The eggs of Montipora corals already contain zooxanthellae, so none are obtained from the environment. This process is known as direct or vertical transmission. Montipora corals are preyed upon by corallivorous fish, such as butterflyfish. Montipora corals are known to host endo- and ectoparasites such as Allopodion mirum and Xarifia extensa. A currently undescribed species of nudibranch in the genus Phestilla has also been reported in the scientific and aquarium hobbyist literature to feed on the genus. Montipora corals are susceptible to the same stresses as other scleractinian corals, such as anthropogenic pollution, sediment, algal growth, and other competitive organisms. Evolutionary history A 2007 study found that the genus Montipora formed a strongly supported clade with Anacropora, making Anacropora the genus with the closest genetic relationship to Montipora. It is thought that Anacropora evolved from Montipora relatively recently. 
Gallery Species Montipora aequituberculata Bernard, 1897 Montipora altasepta Nemenzo, 1967 Montipora angulata Lamarck, 1816 Montipora aspergillus Veron, DeVantier & Turak, 2000 Montipora australiensis Bernard, 1897 Montipora biformis Nemenzo, 1988 Montipora cactus Bernard, 1897 Montipora calcarea Bernard, 1897 Montipora calculata Dana, 1846 Montipora capitata Dana, 1846 Montipora capricornis Veron, 1985 Montipora cebuensis Nemenzo, 1976 Montipora circumvallata Ehrenberg, 1834 Montipora cocosensis Vaughan, 1918 Montipora confusa Nemenzo, 1967 Montipora conspicua Nemenzo, 1979 Montipora contorta Nemenzo & Montecillo, 1981 Montipora corbettensis Veron & Wallace, 1984 Montipora crassituberculata Bernard, 1897 Montipora cryptus Veron, 2000 Montipora danae Milne Edwards & Haime, 1851 Montipora delicatula Veron, 2000 Montipora digitata Dana, 1846 Montipora dilatata Studer, 1901 Montipora echinata Veron, DeVantier & Turak, 2000 Montipora edwardsi Bernard, 1897 Montipora efflorescens Bernard, 1897 Montipora effusa Dana, 1846 Montipora ehrenbergi Verrill, 1872 Montipora explanata Brüggemann, 1879 Montipora flabellata Studer, 1901 Montipora florida Nemenzo, 1967 Montipora floweri Wells, 1954 Montipora foliosa Pallas, 1766 Montipora foveolata Dana, 1846 Montipora friabilis Bernard, 1897 Montipora gaimardi Bernard, 1897 Montipora gracilis Klunzinger, 1879 Montipora grisea Bernard, 1897 Montipora hemispherica Veron, 2000 Montipora hirsuta Nemenzo, 1967 Montipora hispida Dana, 1846 Montipora hodgsoni Veron, 2000 Montipora hoffmeisteri Wells, 1954 Montipora incrassata Dana, 1846 Montipora informis Bernard, 1897 Montipora kellyi Veron, 2000 Montipora lobulata Bernard, 1897 Montipora mactanensis Nemenzo, 1979 Montipora malampaya Nemenzo, 1967 Montipora maldivensis Pillai & Scheer, 1976 Montipora manauliensis Pillai, 1967 Montipora meandrina Ehrenberg, 1834 Montipora millepora Crossland, 1952 Montipora mollis Bernard, 1897 Montipora monasteriata Forskål, 1775 Montipora niugini Veron, 2000 Montipora nodosa Dana, 1846 Montipora orientalis Nemenzo, 1967 Montipora pachytuberculata Veron, DeVantier & Turak Montipora palawanensis Veron, 2000 Montipora patula Verrill, 1870 Montipora peltiformis Bernard, 1897 Montipora porites Veron, 2000 Montipora samarensis Nemenzo, 1967 Montipora saudii Veron, DeVantier & Turak Montipora setosa Nemenzo, 1976 Montipora sinuosa Pillai & Scheer, 1976 Montipora spongiosa Ehrenberg, 1834 Montipora spongodes Bernard, 1897 Montipora spumosa Lamarck, 1816 Montipora stellata Bernard, 1897 Montipora stilosa Montipora suvadivae Pillai & Scheer, 1976 Montipora taiwanensis Veron, 2000 Montipora tortuosa Dana, 1846 Montipora tuberculosa Lamarck, 1816 Montipora turgescens Bernard, 1897 Montipora turtlensis Veron & Wallace, 1984 Montipora undata Bernard, 1897 Montipora venosa Ehrenberg, 1834 Montipora verrilli Vaughan, 1907 Montipora verrucosa Lamarck, 1816 Montipora verruculosa Veron, 2000 Montipora vietnamensis Veron, 2000 References Acroporidae Coral reefs Scleractinia genera
Montipora
Biology
1,480
43,972,057
https://en.wikipedia.org/wiki/Semantic%20heterogeneity
Semantic heterogeneity is when database schema or datasets for the same domain are developed by independent parties, resulting in differences in meaning and interpretation of data values. Beyond structured data, the problem of semantic heterogeneity is compounded due to the flexibility of semi-structured data and various tagging methods applied to documents or unstructured data. Semantic heterogeneity is one of the more important sources of differences in heterogeneous datasets. Yet, for multiple data sources to interoperate with one another, it is essential to reconcile these semantic differences. Decomposing the various sources of semantic heterogeneities provides a basis for understanding how to map and transform data to overcome these differences. Classification One of the first known classification schemes applied to data semantics is from William Kent more than two decades ago. Kent's approach dealt more with structural mapping issues than differences in meaning, which he pointed to data dictionaries as potentially solving. One of the most comprehensive classifications is from Pluempitiwiriyawej and Hammer, "Classification Scheme for Semantic and Schematic Heterogeneities in XML Data Sources". They classify heterogeneities into three broad classes: Structural conflicts arise when the schema of the sources representing related or overlapping data exhibit discrepancies. Structural conflicts can be detected when comparing the underlying schema. The class of structural conflicts includes generalization conflicts, aggregation conflicts, internal path discrepancy, missing items, element ordering, constraint and type mismatch, and naming conflicts between the element types and attribute names. Domain conflicts arise when the semantics of the data sources that will be integrated exhibit discrepancies. Domain conflicts can be detected by looking at the information contained in the schema and using knowledge about the underlying data domains. The class of domain conflicts includes schematic discrepancy, scale or unit, precision, and data representation conflicts. Data conflicts refer to discrepancies among similar or related data values across multiple sources. Data conflicts can only be detected by comparing the underlying sources. The class of data conflicts includes ID-value, missing data, incorrect spelling, and naming conflicts between the element contents and the attribute values. Moreover, mismatches or conflicts can occur between set elements (a "population" mismatch) or attributes (a "description" mismatch). Michael Bergman expanded upon this scheme by adding a fourth major explicit category of language, and also added examples of each kind of semantic heterogeneity, resulting in about 40 distinct potential sources of semantic heterogeneity across data sources. A different approach toward classifying semantics and integration approaches is taken by Sheth et al. Under their concept, they split semantics into three forms: implicit, formal and powerful. Implicit semantics are what is either largely present or can easily be extracted; formal languages, though relatively scarce, occur in the form of ontologies or other description logics; and powerful (soft) semantics are fuzzy and not limited to rigid set-based assignments. Sheth et al.'s main point is that first-order logic (FOL) or description logic is inadequate alone to properly capture the needed semantics. 
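To make the flavor of these conflicts concrete, here is a small Python sketch (all record fields and values are invented for illustration) that reconciles a naming conflict and a scale/unit conflict between two independently developed sources for the same domain:

# Two sources describing the same domain: one records height in inches
# under "height_in", the other in centimetres under "stature_cm" --
# a naming conflict plus a scale/unit (domain) conflict.
source_a = {"name": "Ada Lovelace", "height_in": 65}
source_b = {"full_name": "Ada Lovelace", "stature_cm": 165.1}

def to_canonical(record):
    """Map either source's record onto one agreed target schema (cm)."""
    out = {}
    out["name"] = record.get("name") or record.get("full_name")
    if "height_in" in record:
        out["height_cm"] = round(record["height_in"] * 2.54, 1)
    elif "stature_cm" in record:
        out["height_cm"] = record["stature_cm"]
    return out

print(to_canonical(source_a))  # {'name': 'Ada Lovelace', 'height_cm': 165.1}
print(to_canonical(source_b))  # {'name': 'Ada Lovelace', 'height_cm': 165.1}

Once both records are expressed in the target schema, the remaining data conflicts (such as value disagreements) can be detected by direct comparison, as the classification above suggests.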
Relevant applications Besides data interoperability, relevant areas in information technology that depend on reconciling semantic heterogeneities include data mapping, semantic integration, and enterprise information integration, among many others. From the conceptual level to actual data, there are differences in perspective, vocabularies, measures and conventions once any two data sources are brought together. Explicit attention to these semantic heterogeneities is one means of getting information to integrate or interoperate. A mere twenty years ago, information technology systems expressed and stored data in a multitude of formats and systems. The Internet and Web protocols have done much to overcome these sources of differences. While there are a large number of categories of semantic heterogeneity, these categories are also patterned and can be anticipated and corrected. These patterned sources inform what kind of work must be done to overcome semantic differences where they still reside. See also Data integration Data mapping Enterprise information integration Heterogeneous database system Interoperability Ontology-based data integration Schema matching Semantic integration Semantic matching Semantics References Further reading Classification of semantic heterogeneity Data management Interoperability Knowledge management Semantics
Semantic heterogeneity
Technology,Engineering
900
62,529
https://en.wikipedia.org/wiki/Sea%20level
Mean sea level (MSL, often shortened to sea level) is an average surface level of one or more among Earth's coastal bodies of water from which heights such as elevation may be measured. The global MSL is a type of vertical datum (a standardised geodetic datum) that is used, for example, as a chart datum in cartography and marine navigation, or, in aviation, as the standard sea level at which atmospheric pressure is measured to calibrate altitude and, consequently, aircraft flight levels. A common and relatively straightforward mean sea-level standard is instead a long-term average of tide gauge readings at a particular reference location. The term above sea level generally refers to the height above mean sea level (AMSL). The term APSL means above present sea level, comparing sea levels in the past with the level today. Earth's radius at sea level is 6,378.137 km (3,963.191 mi) at the equator. It is 6,356.752 km (3,949.903 mi) at the poles and 6,371.001 km (3,958.756 mi) on average. This flattened spheroid, combined with local gravity anomalies, defines the geoid of the Earth, which approximates the local mean sea level for locations in the open ocean. The geoid includes a significant depression in the Indian Ocean, whose surface dips as much as about 106 m below the global mean sea level (excluding minor effects such as tides and currents). Measurement Precise determination of a "mean sea level" is difficult because of the many factors that affect sea level. Instantaneous sea level varies substantially on several scales of time and space. This is because the sea is in constant motion, affected by the tides, wind, atmospheric pressure, local gravitational differences, temperature, salinity, and so forth. The mean sea level at a particular location may be calculated over an extended time period and used as a datum. For example, hourly measurements may be averaged over a full Metonic 19-year lunar cycle to determine the mean sea level at an official tide gauge. Still-water level or still-water sea level (SWL) is the level of the sea with motions such as wind waves averaged out. MSL then implies the SWL further averaged over a period of time such that changes due to, e.g., the tides, also have zero mean. Global MSL refers to a spatial average over the entire ocean area, typically using large sets of tide gauges and/or satellite measurements. One often measures the values of MSL with respect to the land; hence a change in relative MSL (or relative sea level) can result from a real change in sea level, from a change in the height of the land on which the tide gauge operates, or from both. In the UK, the ordnance datum (the 0 metres height on UK maps) is the mean sea level measured at Newlyn in Cornwall between 1915 and 1921. Before 1921, the vertical datum was MSL at the Victoria Dock, Liverpool. Since the time of the Russian Empire, in Russia and many of its former constituent parts, now independent states, sea level has been measured from the zero level of the Kronstadt Sea-Gauge. In Hong Kong, "mPD" is a surveying term meaning "metres above Principal Datum" and refers to height above the Principal Datum, which lies above chart datum and below the average sea level. In France, the Marégraphe in Marseilles has measured the sea level continuously since 1883 and offers the longest collated data series about the sea level. It is used as the official sea level for part of continental Europe and much of Africa. 
Spain uses the tide-gauge reference at Alicante to measure heights below or above sea level, while the European Vertical Reference System is calibrated to the Amsterdam Peil elevation, which dates back to the 1690s. Satellite altimeters have been making precise measurements of sea level since the launch of TOPEX/Poseidon in 1992. A joint mission of NASA and CNES, TOPEX/Poseidon was followed by Jason-1 in 2001 and the Ocean Surface Topography Mission on the Jason-2 satellite in 2008. Height above mean sea level Height above mean sea level (AMSL) is the elevation (on the ground) or altitude (in the air) of an object, relative to a reference datum for mean sea level (MSL). It is also used in aviation, where some heights are recorded and reported with respect to mean sea level (contrast with flight level), in the atmospheric sciences, and in land surveying. An alternative is to base height measurements on a reference ellipsoid approximating the entire Earth, which is what systems such as GPS do. In aviation, the reference ellipsoid known as WGS84 is increasingly used to define heights; however, differences of up to about 100 m exist between this ellipsoid height and local mean sea level. Another alternative is to use a geoid-based vertical datum such as NAVD88 and the global EGM96 (part of WGS84). Details vary in different countries. When referring to geographic features such as mountains, on a topographic map variations in elevation are shown by contour lines. A mountain's highest point or summit is typically illustrated with the AMSL height in metres, feet or both. In unusual cases where a land location is below sea level, such as Death Valley, California, the elevation AMSL is negative. Difficulties in use It is often necessary to compare the local height of the mean sea surface with a "level" reference surface, or geodetic datum, called the geoid. In the absence of external forces, the local mean sea level would coincide with this geoid surface, being an equipotential surface of the Earth's gravitational field which, in itself, does not conform to a simple sphere or ellipsoid and exhibits gravity anomalies such as those measured by NASA's GRACE satellites. In reality, the geoid surface is not directly observed, even as a long-term average, due to ocean currents, air pressure variations, temperature and salinity variations, etc. The location-dependent but time-persistent separation between local mean sea level and the geoid is referred to as (mean) ocean surface topography. It varies globally in a typical range of about ±1 m. Dry land Several terms are used to describe the changing relationships between sea level and dry land. "Relative" means change relative to a fixed point in the sediment pile. "Eustatic" refers to global changes in sea level relative to a fixed point, such as the centre of the earth, for example as a result of melting ice-caps. "Steric" refers to global changes in sea level due to thermal expansion and salinity variations. "Isostatic" refers to changes in the level of the land relative to a fixed point in the earth, possibly due to thermal buoyancy or tectonic effects, disregarding changes in the volume of water in the oceans. The melting of glaciers at the end of ice ages results in isostatic post-glacial rebound, when land rises after the weight of ice is removed. Conversely, older volcanic islands experience relative sea level rise, due to isostatic subsidence from the weight of cooling volcanoes. 
The subsidence of land due to the withdrawal of groundwater is another isostatic cause of relative sea level rise. On planets that lack a liquid ocean, planetologists can calculate a "mean altitude" by averaging the heights of all points on the surface. This altitude, sometimes referred to as a "sea level" or zero-level elevation, serves equivalently as a reference for the height of planetary features. Change Local and eustatic Local mean sea level (LMSL) is defined as the height of the sea with respect to a land benchmark, averaged over a period of time long enough that fluctuations caused by waves and tides are smoothed out, typically a year or more. One must adjust perceived changes in LMSL to account for vertical movements of the land, which can occur at rates similar to sea level changes (millimetres per year). Some land movements occur because of isostatic adjustment to the melting of ice sheets at the end of the last ice age. The weight of the ice sheet depresses the underlying land, and when the ice melts away the land slowly rebounds. Changes in ground-based ice volume also affect local and regional sea levels by the readjustment of the geoid and true polar wander. Atmospheric pressure, ocean currents and local ocean temperature changes can affect LMSL as well. Eustatic sea level change (global as opposed to local change) is due to change in either the volume of water in the world's oceans or the volume of the oceanic basins. Two major mechanisms are currently causing eustatic sea level rise. First, shrinking land ice, such as mountain glaciers and polar ice sheets, is releasing water into the oceans. Second, as ocean temperatures rise, the warmer water expands. Short-term and periodic changes Many factors can produce short-term changes in sea level, typically within a few metres, in timeframes ranging from minutes to months. Recent changes Aviation Pilots can estimate height above sea level with an altimeter set to a defined barometric pressure. Generally, the pressure used to set the altimeter is the barometric pressure that would exist at MSL in the region being flown over. This pressure is referred to as either QNH or "altimeter" and is transmitted to the pilot by radio from air traffic control (ATC) or an automatic terminal information service (ATIS). Since the terrain elevation is also referenced to MSL, the pilot can estimate height above ground by subtracting the terrain altitude from the altimeter reading. Aviation charts are divided into boxes, and the maximum terrain altitude from MSL in each box is clearly indicated. Once above the transition altitude, the altimeter is set to the international standard atmosphere (ISA) pressure at MSL, which is 1013.25 hPa or 29.92 inHg. See also References External links Sea Level Rise: Understanding the past – Improving projections for the future Permanent Service for Mean Sea Level Global sea level change: Determination and interpretation Environment Protection Agency Sea level rise reports Properties of isostasy and eustasy Measuring Sea Level from Space Rising Tide Video: Scripps Institution of Oceanography Sea Levels Online: National Ocean Service (CO-OPS) Système d'Observation du Niveau des Eaux Littorales (SONEL) Sea level rise – How much and how fast will sea level rise over the coming centuries? Geodesy Physical oceanography Oceanographical terminology Vertical datums
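As a rough illustration of the altimeter arithmetic described in the Aviation section above, the Python sketch below converts static pressure to indicated altitude using the ISA barometric formula. It is a simplified model (standard lapse rate, no temperature compensation), not an avionics implementation, and the sample pressures are chosen arbitrarily.

# Pressure altitude from static pressure via the ISA barometric formula
# (valid in the troposphere; simplified, standard conditions assumed).
P0 = 1013.25   # ISA sea-level pressure, hPa
T0 = 288.15    # ISA sea-level temperature, K
L  = 0.0065    # standard temperature lapse rate, K/m
g  = 9.80665   # gravitational acceleration, m/s^2
R  = 287.053   # specific gas constant for dry air, J/(kg*K)

def indicated_altitude_m(p_hpa, setting_hpa=P0):
    """Altitude in metres shown by an altimeter set to setting_hpa."""
    return (T0 / L) * (1.0 - (p_hpa / setting_hpa) ** (R * L / g))

# With the standard setting 1013.25 hPa, a static pressure of 850 hPa
# reads roughly 1457 m (the ISA pressure altitude):
print(round(indicated_altitude_m(850.0)))
# Setting the subscale to a local QNH shifts the reading accordingly:
print(round(indicated_altitude_m(850.0, setting_hpa=1020.0)))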
Sea level
Physics,Mathematics
2,204
60,203,108
https://en.wikipedia.org/wiki/Maneuvering%20Characteristics%20Augmentation%20System
The Maneuvering Characteristics Augmentation System (MCAS) is a flight stabilizing feature developed by Boeing that became notorious for its role in two fatal accidents of the 737 MAX in 2018 and 2019, which killed all 346 passengers and crew aboard the two flights. Because the CFM International LEAP engine used on the 737 MAX was larger and mounted further forward and higher off the ground than on previous generations of the 737, Boeing discovered that the aircraft had a tendency to push the nose up when operating in a specific portion of the flight envelope (flaps up, high angle of attack, manual flight). MCAS was intended to mimic the flight behavior of the previous Boeing 737 Next Generation. The company indicated that this change eliminated the need for pilots to have simulator training on the new aircraft. After the fatal crash of Lion Air Flight 610 in 2018, Boeing and the Federal Aviation Administration (FAA) referred pilots to a revised trim runaway checklist that must be performed in case of a malfunction. Boeing then received many requests for more information, and in a further message revealed the existence of MCAS and the fact that it could intervene without pilot input. According to Boeing, MCAS was implemented to compensate for an excessive angle of attack by adjusting the horizontal stabilizer before the aircraft would potentially stall. Boeing denied that MCAS was an anti-stall system, and stressed that it was intended to improve the handling of the aircraft while operating in a specific portion of the flight envelope. Following the crash of Ethiopian Airlines Flight 302 in 2019, Ethiopian authorities stated that the procedure did not enable the crew to prevent the accident; however, further investigation revealed that the pilots did not follow the procedure properly. The Civil Aviation Administration of China then ordered the grounding of all 737 MAX planes in China, which led to more groundings across the globe. Boeing admitted MCAS played a role in both accidents, when it acted on false data from a single angle of attack (AoA) sensor. In 2020, the FAA, Transport Canada, and the European Union Aviation Safety Agency (EASA) evaluated flight test results with MCAS disabled, and suggested that the MAX might not have needed MCAS to conform to certification standards. Later that year, an FAA Airworthiness Directive approved design changes for each MAX aircraft, which would prevent MCAS activation unless both AoA sensors register similar readings, eliminate MCAS's ability to repeatedly activate, and allow pilots to override the system if necessary. The FAA began requiring all MAX pilots to undergo MCAS-related training in flight simulators by 2021. Background In the 1960s, a basic pitch control system known as the stick shaker was installed in the Boeing 707 to avoid stalling. Later, a similar system to avoid stalling, in this case specifically called the Maneuvering Characteristics Augmentation System (MCAS), was implemented on the Boeing KC-46 Pegasus military aerial refueling tanker. The KC-46, which is based on the Boeing 767, requires MCAS because the weight and balance shift when the tanker redistributes and offloads fuel. On that aircraft, the MCAS is overridden and disengaged when a pilot makes a stick input. Another MCAS implementation was developed for the Boeing 737 MAX, because its larger, repositioned engines changed the aircraft's flight characteristics compared to the preceding 737 generations. 
When a single angle of attack (AoA) sensor indicated that the angle was too high, MCAS would trim the horizontal stabilizer in the nose-down direction. Boeing did this to meet the company's objective of minimizing training requirements for pilots already qualified on the 737NG, which Boeing felt would make the new variant more appealing to aircraft customers that would prefer not to bear the costs of differences training. However, according to interviews with agency directors describing assessments undertaken after the MCAS-induced crashes had occurred, both the FAA and EASA felt that the aircraft would have had acceptable stability without MCAS. Role of MCAS in accidents On Lion Air Flight 610 and Ethiopian Airlines Flight 302, investigators determined that MCAS was triggered by falsely high AoA inputs, as if the plane had pitched up excessively. On both flights, shortly after takeoff, MCAS repeatedly actuated the horizontal stabilizer trim motor to push down the airplane nose. Satellite data for the flights showed that the planes struggled to gain altitude. Pilots reported difficulty controlling the airplane and asked to return to the airport. The implementation of MCAS has been found to disrupt autopilot operations. On March 11, 2019, after China had grounded the aircraft, Boeing published some details of new system requirements for the MCAS software and for the cockpit displays, which it began implementing in the wake of the prior accident five months earlier: If the two AoA sensors disagree with the flaps retracted, MCAS will not activate and an indicator will alert the pilots. If MCAS is activated in non-normal conditions, it will only "provide one input for each elevated AoA event." Flight crew will be able to counteract MCAS by pulling back on the column. On March 27, Daniel Elwell, the acting administrator of the FAA, testified before the Senate Committee on Commerce, Science, and Transportation, saying that on January 21, "Boeing submitted a proposed MCAS software enhancement to the FAA for certification. ... the FAA has tested this enhancement to the 737 MAX flight control system in both the simulator and the aircraft. The testing, which was conducted by FAA flight test engineers and flight test pilots, included aerodynamic stall situations and recovery procedures." After a series of delays, the updated MCAS software was released to the FAA in May 2019. On May 16, Boeing announced that the completed software update was awaiting approval from the FAA. The flight software underwent 360 hours of testing on 207 flights. Boeing also updated existing crew procedures. On April 4, 2019, Boeing publicly acknowledged that MCAS played a role in both accidents. Purpose of MCAS and the stabilizer trim system The FAA and Boeing both disputed media reports describing MCAS as an anti-stall system, which Boeing asserted it is distinctly not, characterizing it instead as a system designed to provide handling qualities for the pilot that meet pilot preferences. The aircraft had to perform well in a low-speed stall test. The Joint Authorities Technical Review (JATR) "considers that the MCAS and elevator feel shift (EFS) functions could be considered as stall identification systems or stall protection systems, depending on the natural (unaugmented) stall characteristics of the aircraft". The JATR said, "MCAS used the stabilizer to change the column force feel, not trim the aircraft. 
This is a case of using the control surface in a new way that the regulations never accounted for and should have required an issue paper for further analysis by the FAA. If the FAA technical staff had been fully aware of the details of the MCAS function, the JATR team believes the agency likely would have required an issue paper for using the stabilizer in a way that it had not previously been used; this [might have] identified the potential for the stabilizer to overpower the elevator." Description Background The Maneuvering Characteristics Augmentation System (MCAS) is a flight control law built into the Boeing 737 MAX's flight control computer, designed to help the aircraft emulate the handling characteristics of the earlier Boeing 737 Next Generation. According to the Joint Authorities Technical Review (JATR), an international team of civil aviation authorities commissioned by the FAA, MCAS may be a stall identification or protection system, depending on the natural (unaugmented) stall characteristics of the aircraft. Boeing considered MCAS part of the flight control system and elected not to describe it in the flight manual or in training materials, based on the fundamental design philosophy of retaining commonality with the 737NG. Minimizing the functional differences between the Boeing 737 MAX and Next Generation variants allowed both to share the same type rating. Thus, airlines can save money by employing and training one pool of pilots to fly both variants of the Boeing 737 interchangeably. When activated, MCAS directly engages the horizontal stabilizer, which is distinct from an anti-stall device such as a stick pusher, which physically moves the pilot's control column forward and engages the airplane's elevators when the airplane is approaching a stall. Boeing's former CEO Dennis Muilenburg said MCAS "has been reported or described as an anti-stall system, which it is not. It's a system that's designed to provide handling qualities for the pilot that meet pilot preferences." The 737 MAX's larger CFM LEAP-1B engines are fitted farther forward and higher up than in previous models. The aerodynamic effect of their nacelles contributes to the aircraft's tendency to pitch up at high angles of attack (AoA). MCAS is intended to compensate in such cases, modeling the pitching behavior of previous models and meeting a certification requirement, in order to enhance handling characteristics and thus minimize the need for significant pilot retraining. The software code for the MCAS function and the computer for executing the software are built to Boeing's specifications by Collins Aerospace, formerly Rockwell Collins. As an automated corrective measure, MCAS was given full authority to bring the aircraft nose down, and could not be overridden by pilot resistance against the control wheel as on previous versions of the 737. Following the Lion Air accident, Boeing issued an Operations Manual Bulletin (OMB) on November 6, 2018, outlining the many indications and effects resulting from erroneous AoA data and providing instructions to turn off the motorized trim system for the remainder of the flight and trim manually instead. Until Boeing supplemented the manuals and training, pilots were unaware of the existence of MCAS, owing to its omission from the crew manual and from training. Boeing first publicly named and revealed the existence of MCAS on the 737 MAX in a message to airline operators and other aviation interests on November 10, 2018, twelve days after the Lion Air crash. 
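As an aid to the description above, the following is a purely illustrative Python sketch of the publicly described pre-grounding activation logic: a single AoA source, activation only with flaps up in manual flight, and repeated nose-down trim commands while the sensed AoA stays high. All names, thresholds, and structure here are hypothetical; this is not Boeing's code, and the real control law is far more complex.

```python
# Purely illustrative sketch of the publicly described pre-grounding MCAS
# behavior. All names and thresholds are hypothetical, not Boeing's values.
from dataclasses import dataclass

@dataclass
class FlightState:
    aoa_deg: float        # reading from the single active AoA sensor
    flaps_up: bool        # MCAS was described as active only with flaps retracted
    autopilot_on: bool    # and only in manual flight

AOA_THRESHOLD_DEG = 10.0   # hypothetical activation threshold
NOSE_DOWN_INCREMENT = 0.5  # hypothetical stabilizer increment per command, degrees

def mcas_command(state: FlightState) -> float:
    """Return a nose-down stabilizer trim increment (degrees), or 0.0.

    Note the single point of failure: one faulty sensor reading is enough to
    trigger a nose-down command, and the original system could re-issue the
    command each cycle while the (still faulty) AoA reading remained high.
    """
    if state.autopilot_on or not state.flaps_up:
        return 0.0
    if state.aoa_deg > AOA_THRESHOLD_DEG:
        return NOSE_DOWN_INCREMENT
    return 0.0

# A stuck-high sensor keeps commanding nose-down trim on every cycle:
faulty = FlightState(aoa_deg=22.0, flaps_up=True, autopilot_on=False)
total_trim = sum(mcas_command(faulty) for _ in range(8))
print(f"accumulated nose-down trim: {total_trim:.1f} deg")  # 4.0 deg
```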
Safety engineering and human factors As with any other equipment on board an aircraft, the FAA approves a functional "development assurance level" corresponding to the consequences of a failure, using the SAE International standards ARP4754 and ARP4761. MCAS was designated a "hazardous failure" system. This classification corresponds to failures causing "a large reduction in safety margins" or "serious or fatal injury to a relatively small number of the occupants", but nothing "catastrophic". MCAS was designed with the assumption, approved by the FAA, that pilots would react to an unexpected activation within three seconds. Technology readiness The MCAS design parameters originally envisioned automated corrective actions only in cases of high AoA and g-forces beyond normal flight conditions. Test pilots routinely push aircraft to such extremes, as the FAA requires airplanes to perform as expected. Before MCAS, test pilot Ray Craig determined that the plane did not fly smoothly, in part due to the larger engines. Craig would have preferred an aerodynamic solution, but Boeing decided to implement a control law in software. According to a news report in The Wall Street Journal, engineers who had worked on the KC-46A Pegasus tanker, which includes an MCAS function, suggested MCAS to the design team. With MCAS implemented, new test pilot Ed Wilson said the "MAX wasn't handling well when nearing stalls at low speeds" and recommended that MCAS apply across a broader range of flight conditions. This required MCAS to function under normal g-forces and, at stalling speeds, to deflect the horizontal stabilizer more rapidly and to a greater extent. But now it read a single AoA sensor, creating a single point of failure that allowed false data to trigger MCAS to pitch the nose downward and force the aircraft into a dive. "Inadvertently, the door was now opened to serious system misbehavior during the busy and stressful moments right after takeoff", said Jenkins of The Wall Street Journal. The FAA did not conduct a safety analysis on the changes. It had already approved the previous version of MCAS, and the agency's rules did not require it to take a second look because the changes did not affect how the plane operated in extreme situations. The Joint Authorities Technical Review found the technology unprecedented: "If the FAA technical staff had been fully aware of the details of the MCAS function, the JATR team believes the agency likely would have required an issue paper for using the stabilizer in a way that it had not previously been used. MCAS used the stabilizer to change the column force feel, not trim the aircraft. This is a case of using the control surface in a new way that the regulations never accounted for and should have required an issue paper for further analysis by the FAA. If an issue paper had been required, the JATR team believes it likely would have identified the potential for the stabilizer to overpower the elevator." In November 2019, Jim Marko, a manager of aircraft integration and safety assessment at Transport Canada's National Aircraft Certification Branch, questioned the readiness of MCAS. Because new problems kept emerging, he suggested that his peers at the FAA, ANAC, and EASA consider the safety benefits of removing MCAS from the MAX. Scrutiny MCAS came under scrutiny following the fatal crashes of Lion Air Flight 610 and Ethiopian Airlines Flight 302 soon after takeoff. 
The Boeing 737 MAX global fleet was grounded by all airlines and operators, and a number of functional issues were raised. MCAS deflects the horizontal stabilizer four times farther than was stated in the initial safety analysis document. Due to the amount of trim the system applies to the horizontal stabilizer, aerodynamic forces resist pilot effort to raise the nose. As long as the faulty AoA readings persist, a human pilot "can quickly become exhausted trying to pull the column back". In addition, the stabilizer trim cutout switches now serve a shared purpose, turning off automated systems such as MCAS as well as the trim buttons on the yoke, whereas in previous 737 models each could be switched off independently. In simulator sessions, pilots were stunned by the substantial effort needed to manually crank the trim wheel out of its nose-down setting when the trim assist was deactivated. Boeing CEO Dennis Muilenburg stated that there was "no surprise, or gap, or unknown here or something that somehow slipped through a certification process." On April 29, 2019, he stated that the design of the aircraft was not flawed and reiterated that it was designed per Boeing's standards. In a May 29 interview with CBS, Boeing admitted that it had botched the software implementation and lamented the poor communications. On September 26, the National Transportation Safety Board criticized Boeing's inadequate testing of the 737 MAX, and pointed out that Boeing made erroneous assumptions about pilots' responses to the alerts in the 737 MAX triggered by MCAS activation due to a faulty signal from an angle-of-attack sensor. The Joint Authorities Technical Review (JATR), a team commissioned by the FAA for the 737 MAX investigation, concluded that the FAA had failed to properly review MCAS. Boeing had failed to provide adequate and updated technical information regarding MCAS to the FAA during the 737 MAX certification process, and had not carried out thorough verification by stress-testing of the MCAS system. On October 18, Boeing turned over a discussion from 2016 between two employees that revealed prior issues with the MCAS system. Boeing's own internal design guidelines for the 737 MAX's development stated that the system should "not have any objectionable interaction with the piloting of the airplane" and "not interfere with dive recovery". The operation of MCAS violated both. National Transportation Safety Board On September 26, 2019, the National Transportation Safety Board (NTSB) released the results of its review of potential lapses in the design and approval of the 737 MAX. The NTSB report concludes that the assumptions "that Boeing used in its functional hazard assessment of uncommanded MCAS function for the 737 MAX did not adequately consider and account for the impact that multiple flight deck alerts and indications could have on pilots' responses to the hazard". The report notes: "When Boeing induced a stabilizer trim input that simulated the stabilizer moving consistent with the MCAS function, the specific failure modes that could lead to unintended MCAS activation (such as an erroneous high AOA input to the MCAS) were not simulated as part of these functional hazard assessment validation tests. As a result, additional flight deck effects (such as IAS DISAGREE and ALT DISAGREE alerts and stick shaker activation) resulting from the same underlying failure (for example, erroneous AOA) were not simulated and were not in the stabilizer trim safety assessment report reviewed by the NTSB." 
The NTSB questioned the long-held industry and FAA practice of assuming the nearly instantaneous responses of highly trained test pilots, as opposed to pilots of all levels of experience, to verify human factors in aircraft safety. The NTSB expressed concern that the process used to evaluate the original design needs improvement, because that process is still in use to certify current and future aircraft and system designs. The FAA could, for example, randomly sample pools from the worldwide pilot community to obtain a more representative assessment of cockpit situations. Supporting systems The updates proposed by Boeing focus mostly on MCAS software. In particular, there have been no public statements regarding reverting the functionality of the stabilizer trim cutout switches to the pre-MAX configuration. A veteran software engineer and experienced pilot suggested that software changes may not be enough to counter the 737 MAX's engine placement. The Seattle Times noted that while the new software fix Boeing proposed "will likely prevent this situation recurring, if the preliminary investigation confirms that the Ethiopian pilots did cut off the automatic flight-control system, this is still a nightmarish outcome for Boeing and the FAA. It would suggest the emergency procedure laid out by Boeing and passed along by the FAA after the Lion Air crash is wholly inadequate and failed the Ethiopian flight crew." Boeing and the FAA decided that the AoA display and an AoA disagree light, which signals if the sensors give different readings, were not critical features for safe operation. Boeing charged extra for the addition of the AoA indicator to the primary display. In November 2017, Boeing engineers discovered that the standard AoA disagree light could not function independently of the optional AoA indicator software, a problem affecting the 80% of the global fleet that had not ordered the option. The software remedy was scheduled to coincide with the rollout of the elongated 737 MAX 10 in 2020, only to be accelerated by the Lion Air accident. Furthermore, the problem was not disclosed to the FAA until 13 months after the fact. Although it is unclear whether the indicator could have changed the outcome for the ill-fated flights, American Airlines said the disagree indicator provided assurance for continued operation of the airplane. "As it turned out, that wasn't true." Runaway stabilizer and manual trim In February 2016, the EASA certified the MAX with the expectation that pilot procedures and training would clearly explain unusual situations in which the seldom-used manual trim wheel would be required to trim the plane, i.e. adjust the angle of the nose; however, the original flight manual did not mention those situations. The EASA certification document referred to simulations in which the electric thumb switches were ineffective at properly trimming the MAX under certain conditions. The EASA document said that after flight testing, because the thumb switches could not always control trim on their own, the FAA was concerned about whether the 737 MAX system complied with regulations. The American Airlines flight manual contains a similar notice regarding the thumb switches but does not specify conditions where the manual wheel may be needed. Boeing's CEO Muilenburg, when asked about the non-disclosure of MCAS, cited the "runaway stabilizer trim" procedure as part of the training manual. He added that Boeing's bulletin pointed to that existing flight procedure. 
Boeing views the "runaway stabilizer trim" checklist as a memory item for pilots. Mike Sinnett, vice president and general manager for the Boeing New Mid-Market Airplane (NMA) since July 2019, repeatedly described the procedure as a "memory item". However, some airlines view it as an item for the quick reference card. The FAA issued a recommendation about memory items in an Advisory Circular, Standard Operating Procedures and Pilot Monitoring Duties for Flight Deck Crewmembers: "Memory items should be avoided whenever possible. If the procedure must include memory items, they should be clearly identified, emphasized in training, less than three items, and should not contain conditional decision steps." In November 2018, Boeing told airlines that MCAS could not be overcome by pulling back on the control column to stop a runaway trim, as on previous-generation 737s. Nevertheless, confusion continued: the safety committee of a major U.S. airline misled its pilots by telling them that MCAS could be overcome by "applying opposite control-column input to activate the column cutout switches". Former pilot and CBS aviation and safety expert Chesley Sullenberger testified, "The logic was that if MCAS activated, it had to be because it was needed, and pulling back on the control wheel shouldn't stop it." In October, Sullenberger wrote, "These emergencies did not present as a classic runaway stabilizer problem, but initially as ambiguous unreliable airspeed and altitude situations, masking MCAS." In a legal complaint against Boeing, the Southwest Airlines Pilot Association states: An MCAS failure is not like a runaway stabilizer. A runaway stabilizer has continuous un-commanded movement of the tail, whereas MCAS is not continuous and pilots (theoretically) can counter the nose-down movement, after which MCAS would move the aircraft tail down again. Moreover, unlike runaway stabilizer, MCAS disables the control column response that 737 pilots have grown accustomed to and relied upon in earlier generations of 737 aircraft. Stabilizer cutoff switches re-wiring In May 2019, The Seattle Times reported that the two stabilizer cutoff switches, located on the center console, operate differently on the MAX than on the earlier 737 NG. On previous aircraft, one cutoff switch deactivates the thumb buttons on the control yoke that pilots use to move the horizontal stabilizer; the other cutoff switch disables automatic control of the horizontal stabilizer by the autopilot or MCAS. On the MAX, both switches are wired in series and perform the same function: they cut off all electric power to the stabilizer, both from the yoke buttons and from the automatic systems. Thus, on previous aircraft it is possible to disable automatic control of the stabilizer yet still employ electric power assist by operating the yoke switches. On the MAX, with all power to the stabilizer cut, pilots have no choice but to use the mechanical trim wheel in the center console. Manual trim stiffness As pilots pull on the 737 controls to raise the nose of the aircraft, aerodynamic forces on the elevator create an opposing force, effectively paralyzing the jackscrew mechanism that moves the stabilizer. It becomes very difficult for pilots to hand-crank the trim wheel. The problem was encountered on earlier 737 versions, and a "roller coaster" emergency technique for handling the flight condition was documented in 1982 for the 737-200, but did not appear in training documentation for later versions (including the MAX). 
This problem was originally found in the early 1980s with the 737-200 model. When the elevator operated to raise or lower the nose, it set up a strong force on the trim jackscrew that opposed any corrective force from the control systems. When attempting to correct an unwanted deflection using the manual trim wheel, exerting enough hand force to overcome the force exerted by the elevator became increasingly difficult as speed and deflection increased, and the jackscrew effectively jammed in place. For the 737-200, a workaround called the "roller coaster" technique was developed. Counter-intuitively, to correct an excessive deflection causing a dive, the pilot first pushes the nose down further, before easing back to gently raise the nose again. During this easing-back period, the elevator deflection reduces or even reverses, its force on the jackscrew does likewise, and the manual trim eases up. The workaround was included in the pilot's emergency procedures and in the training schedule. While the 737 MAX has a similar jackscrew mechanism, the "roller coaster" technique has been dropped from the pilot information. During the events leading to the two MAX crashes, the stiffness of the manual trim wheel repeatedly prevented manual trim adjustment from correcting the MCAS-induced nose-down pitching. The issue has been brought to the notice of the DoJ criminal inquiry into the 737 MAX crashes. In simulator tests of the Ethiopian Airlines Flight 302 flight scenario, the trim wheel was "impossible" to move when one of the pilots would instinctively pull up following an automatic nose-down trim input. It takes 15 turns of the wheel to manually trim the aircraft one degree, and up to 40 turns (roughly 2.7 degrees) to bring the trim back to neutral from the nose-down trim input caused by MCAS. Note that in the Ethiopian flight, the autothrottle was not disengaged and the aircraft entered overspeed conditions at low altitude, which resulted in excessive aerodynamic forces on the control surfaces. Horizontal stabilizer actuator The horizontal stabilizer is fitted with a conventional elevator for flight control. However, it is itself all-moving about a single pivot and can be trimmed to adjust its angle. The trim is actuated via a jackscrew mechanism. Slippage concern Engineers Sylvain Alarie and Gilles Primeau, experts on horizontal stabilizers consulted by Radio-Canada, observed anomalies in the data recorded during the Lion Air and Ethiopian Airlines crashes: a progressive shift of the horizontal stabilizer by 0.2° before each crash. In reference to the Ethiopian Airlines flight, Alarie noted that without receiving a command from MCAS or the pilots, the jackscrew slipped, and then slipped again as the aircraft accelerated and dove. Primeau noted that this deflection was an order of magnitude larger than what would ordinarily be permitted, and they concluded that these deflections were disallowed by FAA regulation 395A. These experts are concerned that the loads on the jackscrew have potentially increased since the creation of the 737, modern versions of which are considerably larger than the original design. In April 2019, they also raised concerns about the trim motors possibly overheating. MCAS circumvention for ferry flights During the groundings, special flights to reposition MAX aircraft to storage locations, as per 14 CFR § 21.197, flew at lower altitude and with flaps extended to prevent MCAS activation in the first place, rather than relying on the recovery procedure after the fact. 
Such flights required a certain pilot qualification as well as permission from the corresponding regulators, and carried no other cabin crew or passengers. Angle of attack As per Boeing's technical description: "the Angle of Attack (AoA) is an aerodynamic parameter that is key to understanding the limits of airplane performance. Recent accidents and incidents have resulted in new flight crew training programs, which in turn have raised interest in AoA in commercial aviation. Awareness of AOA is vitally important as the airplane nears stall." Chesley Sullenberger said AoA indicators might have helped in these two crashes. "It is ironic that most modern aircraft measure (angle of attack) and that information is often used in many aircraft systems, but it is not displayed to pilots. Instead, pilots must infer (angle of attack) from other parameters, deducing it indirectly." AoA sensors Though there are two sensors on the MAX, only one of them at a time is used to trigger MCAS activation. Any fault in this sensor, perhaps due to physical damage, creates a single point of failure, and the flight control system lacks any basis for rejecting its input as faulty information. Reports of a single point of failure were not always acknowledged by Boeing. Addressing American Airlines pilots, Boeing vice-president Mike Sinnett contradicted reports that MCAS had a single point of failure, on the grounds that the pilots themselves are the backup. Reporter Useem said in The Atlantic that this was "showing both a misunderstanding of the term and a sharp break from Boeing's long-standing practice of having multiple backups for every flight system". Problems with the AoA sensor had been reported in over 200 incident reports submitted to the FAA; however, Boeing did not flight-test a scenario in which it malfunctioned. The sensors themselves are under scrutiny. Sensors on the Lion Air aircraft were supplied by United Technologies' Rosemount Aerospace. In September 2019, the EASA said it preferred triple-redundant AoA sensors rather than the dual redundancy in Boeing's proposed upgrade to the MAX. Installation of a third sensor could be expensive and take a long time, and the change, if mandated, could be extended to thousands of older-model 737s in service around the world. A former professor at Embry-Riddle Aeronautical University, Andrew Kornecki, an expert in redundancy systems, said operating with one or two sensors "would be fine if all the pilots were sufficiently trained in how to assess and handle the plane in the event of a problem". But he would much prefer building the plane with three sensors, as Airbus does. AoA Disagree alert In November 2017, after several months of MAX deliveries, Boeing discovered that the AoA Disagree message, which indicates a potential sensor mismatch on the primary flight display, was unintentionally disabled. Clint Balog, a professor at Embry-Riddle Aeronautical University, said after the Lion Air crash: "In retrospect, clearly it would have been wise to include the warning as standard equipment and fully inform and train operators on MCAS". According to Bjorn Fehrm, Aeronautical and Economic Analyst at Leeham News and Analysis, "A major contributor to the ultimate loss of JT610 is the missing AoA DISAGREE display on the pilots' displays." The alert software depended on the presence of the visual indicator software, a paid option that was not selected by most airlines. 
For example, Air Canada, American Airlines and Westjet had purchased the disagree alert; Air Canada and American Airlines had also purchased the AoA value indicator, and Lion Air had neither. Boeing had determined that the defect was not critical to aircraft safety or operation, and an internal safety review board (SRB) corroborated Boeing's prior assessment and its initial plan to update the aircraft in 2020. Boeing did not disclose the defect to the FAA until November 2018, in the wake of the Lion Air crash. Consequently, Southwest informed pilots that its entire fleet of MAX 8 aircraft would receive the optional upgrades. In March 2019, after the second accident, that of Ethiopian Airlines Flight 302, a Boeing representative told Inc. magazine, "Customers have been informed that AoA Disagree alert will become a standard feature on the 737 MAX. It can be retrofitted on previously delivered airplanes." On May 5, 2019, The Wall Street Journal reported that Boeing had known of existing problems with the flight control system a year before the Lion Air accident. Boeing maintained that "Neither the angle of attack indicator nor the AoA Disagree alert are necessary for the safe operation of the airplane." Boeing recognized that the defective software was not implemented to its specifications as a "standard, standalone feature." Boeing stated, "...MAX production aircraft will have an activated and operable AoA Disagree alert and an optional angle of attack indicator. All customers with previously delivered MAX airplanes will have the ability to activate the AoA Disagree alert." Boeing CEO Muilenburg said the company's communication about the alert "was not consistent. And that's unacceptable." Visual AoA indicator Boeing published an article about AoA systems in Aero magazine, "Operational use of Angle of Attack on modern commercial jet planes". Boeing also announced a change in policy in the Frequently Asked Questions (FAQ) about the MAX corrective work: "With the software update, customers are not charged for the AoA Disagree feature or their selection of the AoA indicator option." In 1996, the NTSB issued Safety Recommendation A-96-094 to the Federal Aviation Administration (FAA): Require that all transport-category aircraft present pilots with angle-of-attack info in a visual format, and that all air carriers train their pilots to use the info to obtain maximum possible airplane climb performance. Regarding another accident, in 1997, the NTSB stated that "a display of angle of attack on the flight deck would have maintained the flightcrew's awareness of the stall condition and it would have provided direct indication of the pitch attitudes required for recovery throughout the attempted stall recovery sequence." The NTSB also believed that the accident may have been prevented if a direct indication of AoA had been presented to the flightcrew (NTSB, 1997). Flight computer architecture In early April 2019, Boeing reported a problem with software affecting flaps and other flight-control hardware, unrelated to MCAS; the problem was classified as critical to flight safety, and the FAA ordered Boeing to fix it. In October 2019, the EASA suggested conducting more testing on proposed revisions to the flight-control computers, due to its concerns about portions of the proposed fixes to MCAS. 
The necessary changes to improve redundancy between the two flight control computers have proved more complex and time-consuming than the fixes for the original MCAS issue, delaying any re-introduction to service beyond the date originally envisaged. In January 2020, new software issues were discovered, affecting monitoring of the flight computer start-up process and the verification of readiness for flight. In April 2020, Boeing identified new risks whereby the trim system might unintentionally command nose-down trim during flight or prematurely disconnect the autopilot. Microprocessor stress testing The MAX systems are integrated in the "e-cab" test flight deck, a simulator built for developing the MAX. In June 2019, "in a special Boeing simulator that is designed for engineering reviews," FAA pilots performed a stress-testing scenario, an abnormal condition identified through failure mode and effects analysis (FMEA) after the MCAS update was implemented, to evaluate the effect of a fault in a microprocessor: as expected from the scenario, the horizontal stabilizer pointed the nose downward. Although the test pilot ultimately recovered control, the system was slow to respond to the proper runaway stabilizer checklist steps. Boeing initially classified this as a "major" hazard, and the FAA upgraded it to a much more severe "catastrophic" rating. Boeing stated that the issue could be fixed in software, though the software change would not be ready for evaluation until at least September 2019. EASA director Patrick Ky said that retrofitting additional hardware was an option to be considered. The test scenario simulated an event toggling five bits in the flight control computer. The bits represent status flags such as whether MCAS is active, or whether the tail trim motor is energized. Engineers were able to simulate single event upsets and artificially induce MCAS activation by manipulating these signals. Such a fault occurs when memory bits change from 0 to 1 or vice versa, which can be caused by cosmic rays striking the microprocessor. The failure scenario was known before the MAX entered service in 2017: it had been assessed in a safety analysis when the plane was certified. Boeing had concluded that pilots could perform a procedure to shut off the motor driving the stabilizer to overcome the nose-down movement. The scenario also affects 737NG aircraft, though it presents less risk than on the MAX; on the NG, moving the yoke counters any uncommanded stabilizer input, but this function is bypassed on the MAX to avoid negating the purpose of MCAS. Boeing also said that it agreed with the additional requirements that the FAA required it to fulfill, adding that it was working toward resolving the safety risk and would not offer the MAX for certification until all requirements had been satisfied. Early news reports were inaccurate in attributing the problem to an 80286 microprocessor overwhelmed with data, though as of April 2020 the concern remained that the MCAS software was overloading the 737 MAX's computers. Computer redundancy Historically, the two flight control computers of the Boeing 737 never cross-checked each other's operations; i.e., each was a single non-redundant channel. This lack of robustness had existed since the early implementations and persisted for decades. The updated flight control system will use both flight control computers and compare their outputs, along the lines of the sketch below. 
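To make the cross-monitoring idea concrete, here is a minimal, hypothetical sketch of two independent channels vetting each other's stabilizer commands, together with the kind of bit-flip fault injection used in the stress-test scenario described above. The names and structure are illustrative only, and the 5.5-degree AoA disagreement threshold is taken from the 2020 airworthiness directive described later in this article; none of this is Boeing's implementation.

```python
# Illustrative sketch of dual-channel cross-monitoring; hypothetical names
# and simplified logic, not Boeing's implementation.

AOA_DISAGREE_DEG = 5.5  # disagreement threshold cited in the 2020 FAA directive

def channel_command(aoa_deg: float) -> float:
    """One FCC channel's proposed stabilizer trim command (deg, nose-down positive)."""
    return 0.5 if aoa_deg > 10.0 else 0.0

def flip_bit(word: int, bit: int) -> int:
    """Simulate a single event upset by toggling one status bit."""
    return word ^ (1 << bit)

def cross_checked_command(aoa_a: float, aoa_b: float) -> float:
    """Issue a trim command only if the two sensors and channels agree.

    If the AoA sensors disagree beyond the threshold, or the two channels
    compute different commands, inhibit the automatic trim entirely.
    """
    if abs(aoa_a - aoa_b) > AOA_DISAGREE_DEG:
        return 0.0                      # sensor disagreement: inhibit and alert
    cmd_a = channel_command(aoa_a)
    cmd_b = channel_command(aoa_b)
    return cmd_a if cmd_a == cmd_b else 0.0  # channel disagreement: inhibit

# One stuck-high sensor no longer produces a nose-down command:
print(cross_checked_command(aoa_a=22.0, aoa_b=4.0))  # 0.0

# A five-flag status word; toggling bits models the single-event-upset test:
status = 0b00000
status = flip_bit(status, 3)
print(bin(status))  # 0b1000
```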
This switch to a fail-safe two-channel redundant system, with each computer using an independent set of sensors, is a radical change from the architecture used on 737s since its introduction on the older 737-300 model in the 1980s. Up to and including the MAX in its pre-grounding version, the system alternated between computers after each flight. The two-computer architecture allowed switching in flight if the operating computer failed, thus increasing availability. In the revised architecture, Boeing required the two computers to monitor each other so that each one can vet the other. Trim system malfunction indicator In January 2020, during flight testing, Boeing discovered a problem with an indicator light; the defect was traced to the "redesign of the two flight computers that control the 737 MAX to make them more resilient to failure". The indicator, which signals a problem with the trim system, can remain on longer than intended by design. Updates for return to service In November 2020, an Airworthiness Directive required corrective actions to the airplane's flight control laws (embodied in the Speed Trim System software): The new flight control laws require inputs from both AoA sensors in order to activate MCAS. They also compare the inputs from the two sensors and, if those inputs differ significantly (by more than 5.5 degrees for a specified period of time), will disable the Speed Trim System (STS), which includes MCAS, for the remainder of the flight and provide a corresponding indication of that deactivation on the flight deck. The new flight control laws permit only one activation of MCAS per sensed high-AoA event, and limit the magnitude of any MCAS command to move the horizontal stabilizer such that the resulting position of the stabilizer preserves the flight crew's ability to control the airplane's pitch by using only the control column. This means the pilot will have sufficient control authority without the need to make electric or manual stabilizer trim inputs. The new flight control laws also include Flight Control Computer (FCC) integrity monitoring of each FCC's performance and cross-FCC monitoring, which detects and stops erroneous FCC-generated stabilizer trim commands (including MCAS). References Further reading Design of a pitch stability augmentation system. External links Boeing Engineering failures Flight control systems Software bugs
Maneuvering Characteristics Augmentation System
Technology,Engineering
7,969
68,349,528
https://en.wikipedia.org/wiki/Zeta%20Octantis
Zeta Octantis, Latinized from ζ Octantis, is a solitary, yellowish-white hued star located in the southern circumpolar constellation Octans. It has an apparent magnitude of 5.42, making it faintly visible to the naked eye under ideal conditions. The star is located relatively close, at a distance of only 156 light-years based on Gaia DR3 parallax measurements, and is drifting closer, as indicated by its negative radial velocity. At its current distance, Zeta Octantis' brightness is diminished by 0.25 magnitudes due to interstellar dust. This is an evolved A-type star with a stellar classification of A8/9 IV. David S. Evans and colleagues, however, give it a classification of F0 III, which suggests it is already an evolved giant star. It has double the Sun's mass and 2.25 times the Sun's radius. It radiates around 13 times the luminosity of the Sun from its photosphere. Zeta Octantis is estimated to be 1.25 billion years old based on stellar evolution models by Trevor J. David and Lynne A. Hillenbrand. It has a low metallicity, with only 44% of the Sun's abundance of heavy elements. Despite its advanced age, the object spins rapidly, with a high projected rotational velocity, resulting in an oblate shape with an equatorial bulge 11% larger than the polar radius. References A-type subgiants Octantis, Zeta 079837 3678 043908 PD-85 00183 Octans Octantis, 9
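As a purely illustrative consistency check (an editorial back-of-envelope estimate, not a catalogued value), the Stefan–Boltzmann relation ties the quoted radius and luminosity to an effective temperature near 7,300 K, which is consistent with an A8/9 to F0 classification:

```latex
\frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T_\mathrm{eff}}{T_\odot}\right)^{4}
\;\Rightarrow\;
T_\mathrm{eff} = T_\odot\left(\frac{13}{2.25^{2}}\right)^{1/4}
\approx 5772\,\mathrm{K} \times 1.27 \approx 7.3\times10^{3}\,\mathrm{K}
```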
Zeta Octantis
Astronomy
335
14,567,369
https://en.wikipedia.org/wiki/Born-digital
The term born-digital refers to materials that originate in a digital form. This is in contrast to digital reformatting, through which analog materials become digital, as in the case of files created by scanning physical paper records. It is most often used in relation to digital libraries and the issues those organizations face, such as digital preservation and intellectual property. However, as technologies have advanced and spread, the concept of being born-digital has also been discussed in relation to personal, consumer-based sectors, with the rise of e-books and evolving digital music. Other terms that might be encountered as synonymous include "natively digital", "digital-first", and "digital-exclusive". Discrepancies in definition There exists some inconsistency in defining born-digital materials. Some believe such materials must exist in digital form exclusively; in other words, if they can be transferred into a physical, analog form, they are not truly born-digital. However, others maintain that while these materials will often not have a subsequent physical counterpart, having one does not bar them from being classified as born-digital. For instance, Mahesh and Mittal identify two types of born-digital content, "exclusive digital" and "digital for print", allowing for a broader base of classification than the former definition provides. Furthermore, it has been pointed out that certain works may incorporate components that are both born-digital and digitized, further blurring the lines between what should and should not be considered born-digital. For example, a digital video may incorporate historical film footage that has been digitized. It is important to be aware of these discrepancies when thinking about born-digital materials and the effects they have. However, some universals do exist across these definitions. All make clear that born-digital media must originate digitally. They also agree that this media must be usable in a digital form (whether exclusively or otherwise), while it does not have to exist or be used as analog material. Etymology The term "born digital" is of uncertain origin. While it may have occurred to multiple people at various times, it was coined independently by web developer Randel (Rafi) Metz in 1993, who acquired the domain name "borndigital.com" then and maintained it as a personal website for 18 years, until 2011. The domain is now owned by a web developer in New Zealand. Examples of born-digital content Grey literature and communications Much of the grey literature that exists today is produced and distributed almost entirely online, due in part to the accessibility and speed of internet communications. As products of the vast amount of information created by organizations and individuals on computers, data sets and electronic records must be understood in the context of the activities that produce them. Common content includes: Email Documents created in word processors and/or observed in viewers. Examples include Microsoft Word, Google Docs, WordPerfect, Apple Pages, LibreOffice Writer, and Adobe Reader. Spreadsheets used to organize and tabulate data are almost entirely digital. Common applications include Microsoft Excel, Google Sheets, LibreOffice Calc, and Lotus 1-2-3 (discontinued). Presentations used to present data and ideas are created with software such as Microsoft PowerPoint, Google Slides, LibreOffice Impress, and Prezi. 
Electronic medical records Social media websites such as Facebook, Twitter, and Reddit originated in the networked world, and are therefore born-digital by default. Digital photography Digital photography has allowed larger groups of people to participate in the process, art form, and pastime of photography. With the advent of digital cameras in the late 1980s, followed by the invention and dissemination of mobile phones capable of photography, sales of digital cameras eventually surpassed those of analog cameras. The early to mid 2000s saw the rise of photo storage websites, such as Flickr and Photobucket, and social media websites dedicated primarily to sharing digital photographs, including Instagram, Pinterest, Imgur, and Tumblr. Digital image formats include Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), Portable Network Graphics (PNG), Graphics Interchange Format (GIF), and raw image formats. Digital art Digital art is an umbrella term for art created with a computer. Types include visual media, digital animation, computer-aided design, 3D models and interactive art. Webcomics, comics published primarily on the internet, are an example of exclusively born-digital art. Webcomics follow the tradition of user-generated content and may later be printed by the creator, but as they were originally disseminated through the internet, they are considered to be born-digital media. Many webcomics are published on existing social media websites, while others use webcomic-specific platforms or their own domains. Electronic books E-books are books that can be read on the digital screens of computers, smartphones, or dedicated devices. The e-book sector of the book industry has flourished in recent years, with increasing numbers of e-books and e-book readers being developed and sold. E-publishing is particularly favorable to independent authors, because the digital marketplace creates a more direct connection between authors, their works, and the audience. Some publishing houses, including major ones such as Harlequin, have formed imprints for digital-only books in response to this trend. Publishers also offer digital-exclusive publications for use on e-book readers such as the Kindle. One example was the simultaneous launch of Amazon's Kindle 2 with the Stephen King novelette Ur. In recent years, however, sales of e-books from traditional publishers have decreased, due in part to increasing prices. Video recordings Videos that are born-digital vary in type and usage. Vlogs, a blend of "video" and "blog", are streamed and consumed on video-sharing websites such as YouTube. Similarly, a web series is a television-like show that is shown exclusively and/or initially on the internet. This does not include the streaming of pre-existing traditional television shows. Examples include Dr. Horrible's Sing-Along Blog, The Lizzie Bennet Diaries, The Guild, and The Twilight Zone (2019). Sound recordings Digital sound recordings have played a role since the 1970s, with the acceptance of pulse-code modulation (PCM) in the recording process. Since then, numerous means of storing and delivering digital audio have been developed, including web streams, compact discs and mp3 audio files. Increasingly, digital audio is available only via download, lacking any kind of tangible counterpart. One example of this trend is the 2008 recording of Hector Berlioz's Symphonie fantastique by the Los Angeles Philharmonic under Gustavo Dudamel. 
Available through download only, it has presented problems for libraries which may want to carry this work but cannot due to licensing limitations. Another example is Radiohead's 2007 release In Rainbows, released initially as a digital download. The music industry has changed dramatically with the increase in digital music, specifically digital downloads. The digital format and consumers' growing comfort with it have led to rising sales of single tracks. This growth is clearly still underway, with all of the ten best-selling singles since 2000 having been released since 2007. This does not necessarily signal the demise of CDs, as they are still more popular than digital albums, but it does show that born-digital content is having a significant influence on sales and the industry. Other media WebExhibits are websites that act as virtual museums for any variety of content. These often use both primary and secondary historical sources, maps, timelines, infographics, and other data visualizations to showcase the historical past. One example is Clio Visualizing History's Click! The Ongoing Feminist Revolution, a web exhibit about the American women's movement from the 1940s to the present. Clio Visualizing History was founded by Lola Van Wagenen in 1996 to meet the growing need for innovative history projects on multimedia platforms. Journalism As existing print publications migrated to born-digital releases, digital-native news websites such as HuffPo and Buzzfeed News have grown substantially. This trend toward web-exclusive content has seen the rise of "news applications," or news articles built with interactive features that cannot be replicated in print. "News apps" are often heavily data-driven, using interactive graphics custom-built for the story by a team of software specialists in addition to the core group of writers and editors. Examples include Baltimore Homicides from The Baltimore Sun, Do No Harm from the Las Vegas Sun, and Snow Fall from The New York Times, which took a team of more than fifteen journalists, web developers, and designers to build. Key issues Preservation Digital preservation involves the conservation and maintenance of digital content. As with other digital objects, preservation must be a continuous and regular undertaking, as these materials do not show the same signs of degradation that print and other physical materials do; invisible processes can instead cause irreparable damage. For born-digital content, deterioration can occur in the form of bit rot, a process in which digital files degrade over time, and link rot, a process in which URLs point to pages on the internet that are no longer available. Incompatibility is also a concern, with regard to the eventual obsolescence of both the hardware and the software capable of making sense of the documents. Many questions arise regarding what should be archived and preserved and who should undertake the job. Vast amounts of born-digital content are created constantly, and institutions are forced to decide what and how much should be saved. Because linking plays such a large role in the digital setting, whether a responsibility exists to maintain access to links (and therefore context) is debated, especially when considering the scope of such a task. Additionally, since publishing is not as clearly delineated in the digital realm and preliminary versions of work are increasingly made available, knowing when to archive presents further complications. 
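As an illustration of routine digital preservation practice, the sketch below shows one common countermeasure to silent bit rot: recording cryptographic checksums ("fixity" values) for each file and re-verifying them on a schedule. This is a minimal example under assumed file layouts, not a reference implementation of any particular repository's workflow.

```python
# Minimal fixity-checking sketch: detect silent corruption (bit rot) by
# comparing current SHA-256 digests against previously recorded values.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_fixity(root: Path, manifest: Path) -> None:
    """Write a manifest mapping each file under root to its checksum."""
    entries = {str(p): sha256_of(p) for p in root.rglob("*") if p.is_file()}
    manifest.write_text(json.dumps(entries, indent=2))

def verify_fixity(manifest: Path) -> list[str]:
    """Return the files whose current checksum no longer matches the manifest."""
    entries = json.loads(manifest.read_text())
    return [name for name, expected in entries.items()
            if not Path(name).is_file() or sha256_of(Path(name)) != expected]

# Typical use: record once at ingest, then re-verify periodically.
# record_fixity(Path("archive"), Path("manifest.json"))
# print(verify_fixity(Path("manifest.json")))
```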
Relevance and accessibility For digital libraries and repositories that are used as reference materials, such as PBS LearningMedia, which provides educational resources for teachers, staying relevant is of utmost importance. The information must be factually accurate and include context, while staying current with the website's main goals. As in the case of preservation, bit rot, link rot, and incompatibility negatively affect how users access born-digital records, while basic functionality, e.g. video quality and legibility of any text, is also a concern. Additionally, consideration should be given to how digital content can be made inclusive of people with disabilities, particularly in conjunction with assistive technologies such as screen readers, screen magnifiers, and speech-to-text software. Access is also affected by licensing laws: the lack of ownership of their digital collections leaves libraries with nothing when their licenses expire, despite the costs already paid. Licensing Laws created to protect intellectual property were written for analog works; as such, provisions such as the first-sale doctrine of US copyright law, which enables libraries to lend materials to patrons, have not been applied to the digital realm. Therefore, copyrighted digital content that is licensed rather than owned, as is common with many digital materials, is often of limited use, since it cannot be transmitted to patrons at various computers or lent through an interlibrary loan agreement. However, with regard to the preservation functions of libraries and archives and the subsequent need to make copies of born-digital materials, the laws of many countries have been changing, allowing for agreements to be made between these institutions and the rights holders of born-digital content. Consumers have also had to deal with intellectual property as it concerns their ownership of and ability to control the born-digital material that they buy. Piracy proves to be a bigger problem with digital objects, including those that are born-digital, because such materials can be copied and spread in perfect condition with a speed and reach inconceivable for traditional print and physical materials. Again, the first-sale doctrine, which, from a consumer standpoint, allows purchasers of materials to sell or give away items (such as books and CDs), is not yet applied effectively to digital objects. Three reasons for this have been identified by Victor Calaba: "...first, license agreements imposed by software manufacturers typically prohibit exercise of the first sale doctrine; second, traditional copyright law may not support application of the first sale doctrine to digital works; finally, the [DMCA] functionally prevents users from making copies of digitized works and prohibits the necessary bypassing of access control mechanisms to facilitate a transfer." Increasingly, institutions are more interested in subscribing to digital versions of journals, a trend observed as some scholarly journals have unbundled their print and electronic editions and allowed separate subscriptions; these trends have created questions about the economic sustainability of print publication. Major publishers such as the American Chemical Society have made significant changes to their print editions in order to cut costs, and many others predict an exclusively digital future. 
The increasing subscription prices and predatory practices of scholarly journals, however, provided impetus for the Open Access Movement, which advocates for free, unrestricted access to scholarly papers. See also e-Flux Digital artifactual value Digital curation Legal deposit National edeposit, Australia's system for depositing, storing and managing all born-digital documents published in Australia Virtual artifact References Library science terminology Academic publishing Publishing terminology Digital media Records management Online publishing
Born-digital
Technology
2,759
12,767,009
https://en.wikipedia.org/wiki/Plate%20count%20agar
Plate count agar (PCA), also called standard methods agar (SMA), is a microbiological growth medium commonly used to assess or to monitor "total" or viable bacterial growth of a sample. PCA is not a selective medium. The total number of living aerobic bacteria in a sample can be determined using plate count agar, which serves as a substrate for bacteria to grow on. The medium contains casein, which provides nitrogen, carbon, amino acids, vitamins and minerals to aid the growth of the organism. Yeast extract is the source of vitamins, particularly of the B group. Glucose is the fermentable carbohydrate, and agar is the solidifying agent. This is a non-selective medium, and bacteria are counted as colony-forming units per gram (CFU/g) in solid samples and per millilitre (CFU/mL) in liquid samples; a worked example of the calculation follows the procedure below. Pour plate technique The pour plate technique is the typical technique used to prepare plate count agar plates. Here, the inoculum is added to the molten agar before the plate is poured. The molten agar is cooled to about 45 degrees Celsius and poured aseptically into a petri dish containing a specific diluted sample. The plates are then rotated to ensure the sample mixes uniformly with the agar. Incubation of the plates is the next step and is carried out for about 3 days at 20 to 30 degrees Celsius. Benefits: easy to perform; a larger sample volume than the surface spread method, allowing detection of lower microbiological concentrations; the agar surface does not have to be pre-dried; the number of microbes per mL in a specimen can be determined; previously prepared plates are not needed; and it allows determination of bacterial contamination of foods. Obtaining isolated colonies from plate count agars Once a plate has been successfully prepared, cells will grow into colonies that can be sufficiently isolated to determine the original cell type. The colony-forming unit (CFU) is an appropriate description of a colony's origin. In plate counts, colonies are counted, but the count is usually recorded in CFU. Because colonies growing on plates may begin as either a single cell or a cluster of cells, the CFU allows for a correct description of the cell density. The streak plate method helps identify an unknown microbe by producing individual colonies on an agar plate, which allows the CFU method to be used: Beginning the streak pattern. Label the base of the plate. Then, visualize the plate in four quadrants: top left (I), top right (II), bottom right (III), bottom left (IV). Streak the mixed culture back and forth in the first quadrant (top left) of the agar plate. Do not cut the agar; simply scrape the top. Flame the loop to remove culture residue. Wait for it to cool before streaking the next quadrant. Streaking again. Proceed to the second quadrant with streaking. Streaks on the medium will overlap. Flame the loop to remove culture residue. Wait for it to cool before streaking the next quadrant. Streaking yet again. Rotate the plate 180 degrees to get a proper streaking angle in the third quadrant. Be sure to cool the loop before streaking in quadrant four. Streaking in the center. Streak one last time, beginning in quadrant four and moving into the center of the plate. Flame the loop. Incubate the plate for the assigned time at the appropriate temperature. References 1. "Plate Count Agar (PCA) - Culture Media". Microbe Notes. 2019-05-13. Retrieved 2021-12-06. 2. Aryal, Sagar (2021-07-08). "Streak Plate Method- Principle, Methods, Significance, Limitations". Microbe Notes. Retrieved 2021-12-07. Microbiological media
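To make the plate-count arithmetic referred to above concrete, here is a small, hypothetical Python helper implementing the standard calculation CFU/mL = colonies / (dilution factor x volume plated). The function name and the example numbers are illustrative, not taken from the cited sources.

```python
# Illustrative plate-count arithmetic: CFU/mL = colonies / (dilution * volume).
def cfu_per_ml(colonies: int, dilution: float, volume_ml: float) -> float:
    """Estimate the viable count from one countable plate.

    colonies  -- colonies counted on the plate (ideally a countable plate)
    dilution  -- dilution plated, e.g. 1e-4 for a 10^-4 dilution
    volume_ml -- volume of diluted sample plated, in mL
    """
    return colonies / (dilution * volume_ml)

# 150 colonies from 1 mL of a 10^-4 dilution -> 1.5e6 CFU/mL
print(f"{cfu_per_ml(150, 1e-4, 1.0):.2e} CFU/mL")
```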
Plate count agar
Biology
784
60,035,737
https://en.wikipedia.org/wiki/Over-the-top%20media%20services%20in%20India
There are currently about 57 providers of over-the-top media services (OTT) in India, which distribute streaming media or video on demand over the Internet. History and growth The first Indian OTT platform was BIGFlix, launched by Reliance Entertainment in 2008. In 2010, Digivive launched India's first OTT mobile app, called nexGTv, which provides access to both live TV and on-demand content. nexGTv was the first app to live-stream Indian Premier League (IPL) matches on smartphones, doing so during 2013 and 2014. Livestreaming of the IPL from 2015, when the rights were acquired, played an important role in the growth of another OTT platform in India, Hotstar (now Disney+ Hotstar). OTT gained significant momentum in India when both DittoTV (Zee) and Sony Liv were launched in the Indian market around 2013. Ditto TV was an aggregator platform carrying shows from all major media channels, including Star, Sony, Viacom, Zee, etc. Hotstar Hotstar (now Disney+ Hotstar), owned by Star India, is the most subscribed-to OTT platform in India as of July 2020, with around 300 million active users and over 350 million downloads. According to Hotstar's India Watch Report 2018, 96% of watch time on Hotstar comes from videos longer than 20 minutes, while one-third of Hotstar subscribers watch television shows. In 2019, Hotstar began investing in original content such as "Hotstar Specials". 80% of the viewership on Hotstar comes from drama, movies and sports programs. Hotstar has the exclusive streaming rights to the IPL in India. Netflix American streaming service Netflix entered India in January 2016. In April 2017, it was registered as a limited liability partnership (LLP) and started commissioning content. It earned a net profit of ₹2,020,000 (₹2.02 million) for fiscal year 2017. In fiscal year 2018, Netflix earned revenues of ₹580 million. According to Morgan Stanley Research, Netflix had the highest average watch time, of more than 120 minutes, but a viewer count of around 20 million in July 2018. As of 2018, Netflix has six million subscribers, of which 5–6% are paid members. India was not affected by Netflix's July 2018 increase in subscription rates for the US and Latin America. Netflix has stated its intent to invest ₹600 crore in the production of Indian original programming. In late 2018, Netflix bought office space in Bandra–Kurla Complex (BKC) in Mumbai as its head office. As of December 2018, Netflix has more than 40 employees in India. Other OTT providers Sun NXT is an Indian video on demand service run by Sun TV Network. It was launched in June 2017, streaming in Tamil and six other languages. The platform has more than 4,000 Tamil movies and 200 Tamil shows, as well as regional movies and shows. Sun NXT also streams a large library of its own Sun TV shows and movies. Amazon Prime Video was launched in 2016. The platform has 2,300 titles available, including 2,000 movies and about 400 shows. It has announced that it will invest ₹20 billion in creating original content in India. Besides English, Prime Video is available in six Indian languages as of December 2018. Amazon India launched Amazon Prime Music in February 2018. Eros Now, an OTT platform launched by Eros International, has the most content among the OTT providers in India, including over 12,000 films, 100,000 music tracks and albums, and 100 TV shows. Eros Now was named the Best OTT Platform of the Year 2019 at the British Asian Media Awards. 
It has 211.5 million registered users and 36.2 million paying subscribers as of September 2020. In February 2020, the OTT platform Aha was launched, broadcasting exclusively Telugu content. In 2021, Planet Marathi became the first OTT platform dedicated to Marathi content in India, including web series, films, music, theatre, and fiction and non-fiction reality shows. It is available for both Android and iOS mobile devices, along with Android TV and Amazon Fire TV devices. Bollywood actress Madhuri Dixit helped launch the platform. With rising interest in Korean dramas, Rakuten Viki saw its biggest jump in web traffic from India in 2020 due to the COVID-19 lockdown, which led to ad localization on the platform. The OTT market in fiscal year 2020 was estimated to be worth $1.7 billion. SonyLIV and ZEE5 In December 2021, Sony and Zee announced a merger, including plans to combine their OTT platforms; the merger was later called off. OTT services launched as Amazon Prime Video channels The list is in alphabetical order, not by rank or popularity. Content regulation In the absence of any rules and regulations governing OTT content, many OTT providers were accused of showing nudity, vulgarity and obscenity and of hurting Hindu religious sentiments in their shows. Series at the centre of controversy include Four More Shots Please!, Tandav, Paatal Lok, Sacred Games, Mirzapur, the Lust Stories franchise, Rana Naidu, Thank You for Coming, and Annapoorani (2023). According to media reports, between 2018 and 2024 some OTT platforms emerged that showed pornography in the form of web series. Both the Supreme Court and the Delhi High Court have said that OTT regulation is necessary. OTT regulation On 25 February 2021, the Indian government introduced self-regulation rules for OTT platforms to curb obscene content and abusive language. On 19 March 2023, I&B minister Anurag Thakur said that self-regulation does not mean that OTT platforms may show obscenity and nudity. On 15 April 2023, I&B Secretary Apurva Chandra said that the government's soft-touch regulation of the OTT industry had led to the creation of undesirable and vulgar content. On 26 April 2023, MIB India said that strict action would be taken against any OTT platform showing nudity and obscenity. On 16 May 2023, a parliamentary panel told Netflix and Amazon Prime Video not to show obscene content. On 20 June 2023, the government told Netflix, Disney+ Hotstar and all other streaming services that their content should be independently reviewed for obscenity and violence before being shown online. On 18 July 2023, Anurag Thakur said in a meeting with OTT stakeholders that demeaning Indian culture would not be tolerated. On 22 August 2023, the Indian government gave an assurance that it would bring in rules to regulate vulgar and obscene content on social media and OTT platforms. On 10 November 2023, MIB India introduced the draft 'Broadcasting Services (Regulation) Bill', which includes a programme code and a Content Evaluation Committee (CEC) for every OTT platform; public consultation on the draft was open until 15 January 2024. The draft bill mandates that OTT streaming platforms may only broadcast web series or other content that has been duly certified by a Content Evaluation Committee (CEC). Legal action OTT is currently regulated under the IT Rules 2021, which state that no content prohibited by any law for the time being in force may be published or transmitted.
MIB has repeatedly taken action against OTT platforms that violate Section 67A of the IT Act, which prohibits publishing and transmitting obscene material. Pornography in India is restricted and illegal in all forms, including print media, electronic media, and digital media (OTT). Several websites and OTT platforms have been banned by the Cyber Crime Branch for streaming pornographic and obscene content. The owners of several OTT platforms on which obscene content was streamed were arrested, and all their bank accounts were frozen. On 27 June 2023, the DPCGC took punitive action against Ullu for streaming obscene content, asking it to take down its explicit shows or remove all adult scenes within 15 days. Judicial opinion The Supreme Court of India has said that OTT regulation is a necessity, as some platforms even show nudity and pornography. In March 2023, the Delhi High Court said that framing rules and regulations to govern content on social media and OTT platforms needed urgent attention. Criticism IAMAI has again pledged self-regulation for OTT platforms. A study paper by The Dialogue and IAMAI found that content creators and producers in India already face many challenges, including a multiplicity of legislation and of forums for filing complaints, and that these challenges lead to compliance uncertainty, self-censorship, and an unwarranted economic burden. The News Broadcasters & Digital Association (NBDA), a private association of current affairs and news television broadcasters, has expressed strong reservations against the draft Broadcasting Services (Regulation) Bill, 2023, warning in a submission to the information and broadcasting ministry that it would have a "chilling effect" on freedom of speech and expression. List of OTT platforms in India The list is in alphabetical order, not by rank or popularity. See also Streaming television Streaming media List of streaming media services Multichannel television in the United States Golden Age of Television (2000s–present) References Broadcasting in India OTT in India Net neutrality Streaming television Set-top box Subscription video streaming services Digital media
Over-the-top media services in India
Technology,Engineering
1,903
4,142,564
https://en.wikipedia.org/wiki/HTML%20email
HTML email is the use of a subset of HTML to provide formatting and semantic markup capabilities in email that are not available with plain text: Text can be linked without displaying a URL, or breaking long URLs into multiple pieces. Text is wrapped to fit the width of the viewing window, rather than uniformly breaking each line at 78 characters (a limit defined in RFC 5322 that was necessary on older text terminals). It allows in-line inclusion of images and tables, as well as diagrams or mathematical formulae as images, which are otherwise difficult to convey (typically using ASCII art). Adoption Most graphical email clients support HTML email, and many default to it. Many of these clients include both a GUI editor for composing HTML emails and a rendering engine for displaying received HTML emails. Since its conception, a number of people have vocally opposed all HTML email (and even MIME itself), for a variety of reasons. For instance, the ASCII Ribbon Campaign advocated that all email should be sent in ASCII text format. Proponents placed ASCII art in their signature blocks, meant to look like an awareness ribbon, along with a message or link to an advocacy site. The campaign was unsuccessful and was abandoned in 2013. While still considered inappropriate in many newsgroup postings and mailing lists, HTML adoption for personal and business mail has only increased over time. Some of those who strongly opposed it when it first came out now see it as mostly harmless. According to surveys by online marketing companies, adoption of HTML-capable email clients is now nearly universal, with less than 3% reporting that they use text-only clients. The majority of users prefer to receive HTML emails over plain text. Compatibility Email software that complies with RFC 2822 is only required to support plain text, not HTML formatting. Sending HTML-formatted emails can therefore lead to problems if the recipient's email client does not support it. In the worst case, the recipient will see the HTML code instead of the intended message. Among those email clients that do support HTML, some do not render it consistently with W3C specifications, and many HTML emails are not compliant either, which may cause rendering or delivery problems. In particular, the <head> tag, which is used to house CSS style rules for an entire HTML document, is not well supported and is sometimes stripped entirely, making in-line style declarations the de facto standard, even though in-line style declarations are inefficient and fail to take good advantage of HTML's ability to separate style from content. Although workarounds have been developed, this has caused no shortage of frustration among newsletter developers, spawning the grassroots Email Standards Project, which grades email clients on their rendering of an Acid test, inspired by those of the Web Standards Project, and lobbies developers to improve their products. To persuade Google to improve rendering in Gmail, for instance, they published a video montage of grimacing web developers, resulting in attention from an employee. Style Some senders may rely excessively on large, colorful, or distracting fonts, making messages more difficult to read. For those especially bothered by this formatting, some user agents make it possible for the reader to partially override the formatting (for instance, Mozilla Thunderbird allows specifying a minimum font size); however, these capabilities are not globally available.
Further, the difference in visual appearance between the sender's text and the reader's can help to differentiate the author of each section, improving readability. Multi-part formats Many email servers are configured to automatically generate a plain text version of a message and send it along with the HTML version, to ensure that it can be read even by text-only email clients, using the Content-Type: multipart/alternative, as specified in RFC 1521. The message itself is of type multipart/alternative, and contains two parts, the first of type text/plain, which is read by text-only clients, and the second of type text/html, which is read by HTML-capable clients. The plain text version may be missing important formatting information, however. (For example, a mathematical equation may lose a superscript and take on an entirely new meaning.) Many mailing lists deliberately block HTML email, either stripping out the HTML part to leave just the plain text part or rejecting the entire message. The order of the parts is significant. RFC 1341 states that: In general, user agents that compose multipart/alternative entities should place the body parts in increasing order of preference, that is, with the preferred format last. For multipart emails with HTML and plain-text versions, that means listing the plain-text version first and the HTML version after it; otherwise the client may default to showing the plain-text version even though an HTML version is available (a minimal construction is sketched below). Message size HTML email is larger than plain text. Even if no special formatting is used, there will be overhead from the tags used in a minimal HTML document, and if formatting is heavily used the overhead may be much higher. Multi-part messages, with duplicate copies of the same content in different formats, increase the size even further. The plain text section of a multi-part message can be retrieved by itself, though, using IMAP's FETCH command. Although the difference in download time between plain text and mixed-message mail (which can be a factor of ten or more) was of concern in the 1990s (when most users were accessing email servers through slow modems), on a modern connection the difference is negligible for most people, especially when compared to images, music files, or other common attachments. Security vulnerabilities HTML allows a link to be hidden, but shown as any arbitrary text, such as a user-friendly target name. This can be used in phishing attacks, in which users are fooled into accessing a counterfeit web site and revealing personal details (like bank account numbers) to a scammer. If an email contains inline content from an external server, such as a picture, retrieving it requires a request to that external server, which can reveal where and when the picture was displayed along with other information about the recipient. Web bugs are specially created images (usually unique for each individual email) intended to track that email and let the creator know that the email has been opened. Among other things, that reveals that an email address is real, and can be targeted in the future.
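The ordering rule just quoted can be made concrete with Python's standard email library (a minimal sketch; the addresses and bodies are placeholders). EmailMessage.set_content() creates the text/plain part, and add_alternative() converts the message to multipart/alternative with the HTML part placed last, matching the RFC 1341 preference order; the HTML uses an inline style attribute because, as noted under Compatibility, <head> CSS is often stripped by clients.

from email.message import EmailMessage

# Build a multipart/alternative message with the plain-text part first and
# the preferred (HTML) part last, per the ordering rule quoted above.
msg = EmailMessage()
msg["Subject"] = "Monthly newsletter"      # placeholder headers
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg.set_content("Plain-text version for text-only clients.")
msg.add_alternative(
    '<html><body><p style="font-size:14px">'  # inline CSS, not <head> CSS
    "HTML version for capable clients.</p></body></html>",
    subtype="html",
)

print(msg.get_content_type())                        # multipart/alternative
print([part.get_content_type() for part in msg.iter_parts()])
# ['text/plain', 'text/html'] -- the preferred HTML alternative is listed last

Clients that honor the RFC render the HTML part and fall back to the plain text otherwise.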
Some phishing attacks rely on particular features of HTML: Brand impersonation with procedurally generated graphics (such graphics can look like a trademarked image but evade security scanning because there is no file) Text containing invisible Unicode characters or with a zero-height font to confuse security scanning Victim-specific URIs, where a malicious link encodes special information that allows a counterfeit site to be personalized (appearing as the victim's account) so as to be more convincing. Displaying HTML content frequently involves the client program calling on special routines to parse and render the HTML-coded text; deliberately mis-coded content can then exploit mistakes in those routines to create security violations. Requests for special fonts, etc., can also impact system resources. During periods of increased network threats, the US Department of Defense has converted users' incoming HTML email to text email. The multipart type is intended to show the same content in different ways, but this is sometimes abused; some email spam takes advantage of the format to trick spam filters into believing that the message is legitimate. They do this by including innocuous content in the text part of the message and putting the spam in the HTML part (the part that is displayed to the user). Most email spam is sent in HTML for these reasons, so spam filters sometimes give higher spam scores to HTML messages. In 2018, a vulnerability (EFAIL) in the HTML processing of many common email clients was disclosed, in which the decrypted text of PGP or S/MIME encrypted email parts can be caused to be sent as an attribute to an external image address if the external image is requested. This vulnerability was present in Thunderbird, macOS Mail, Outlook, and later Gmail and Apple Mail. See also Enriched text – an HTML-like system for email using MIME Email production References External links https://www.caniemail.com/ Email Internet terminology HTML
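The link-hiding trick described above suggests a simple defensive heuristic, sketched here in Python (an illustration only, not the actual logic of any mail client or spam filter): flag anchors whose visible text looks like a URL but does not match the href they really point to.

from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Flag <a> tags whose visible text looks like a URL but differs from the href."""
    def __init__(self):
        super().__init__()
        self.href = None       # href of the <a> currently being parsed
        self.text = []         # visible text collected inside it
        self.suspicious = []   # (visible_text, real_target) pairs
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.text = []
    def handle_data(self, data):
        if self.href is not None:
            self.text.append(data)
    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            visible = "".join(self.text).strip()
            # A URL-looking label that is absent from the real target is a red flag.
            if visible.startswith(("http://", "https://", "www.")) and visible not in self.href:
                self.suspicious.append((visible, self.href))
            self.href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example/x">https://mybank.example/login</a>')
print(auditor.suspicious)   # [('https://mybank.example/login', 'http://evil.example/x')]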
HTML email
Technology
1,725
73,597,668
https://en.wikipedia.org/wiki/State%20of%20Happiness
State of Happiness (Lykkeland) is a Norwegian period drama television series about the discovery of oil in the North Sea and the subsequent growth of the petroleum industry in Stavanger, beginning in 1969. It is directed by Petter Næss and Pål Jackman, written by Mette Marit Bølstad, and coproduced by NRK and Maipo Film. Its first season premiered on NRK1 in 2018 and is set during the years 1969 to 1972. Its second season, covering the years 1977 to 1980, premiered in 2022. A third season, covering the years 1987 to 1990, was ordered in October 2022, with filming commencing in March 2023 and a Norwegian premiere on 29 October 2024. The series has drawn comparisons to the American period drama Mad Men. Cast and characters Anne Regine Ellingsæter as Anna Hellevik, a secretary and Christian's fiancée Amund Harboe as Christian Nyman (season 1), an oil rig diver and Anna's fiancé Paal Herman Ims as Christian Nyman (seasons 2 & 3) Bart Edwards as Jonathan Kay, a lawyer from Phillips Petroleum Company (seasons 1 & 2) Malene Wadel as Toril Torstensen, a young mother and worker at Fredrik Nyman's company Pia Tjelta as Ingrid Nyman, a socialite, wife of Fredrik, and mother of Christian Per Kjerstad as Fredrik Nyman, owner of a fish canning company in Stavanger, husband of Ingrid, and father of Christian Mads Sjøgård Pettersen as Martin Lekanger, an oil rig diver Adam Fergus as Ed Young, a businessman from Phillips Petroleum Company Ole Christoffer Ertvaag as Rein Hellevik, Anna's brother Laila Goody as Randi Torstensen, Toril's mother, and a devout Christian Vegar Hoel as Arne Rettedal, a Stavanger politician Roar Kjølv Jenssen as Leif Larsen, the mayor of Stavanger Peter Førde as Bjørklund, a deep sea diver (season 3) Awards and accolades The first season of the series received five Gullruten awards from among eight nominations in 2019, including best drama series and best actor (Anne Regine Ellingsæter). Its second season received another nine nominations, winning five Gullruten awards in 2022. Internationally, the series won awards for best screenplay and best music at the inaugural Canneseries in 2018. It received nominations in three categories at the 2019 Monte-Carlo Television Festival, but did not win. International release Internationally, broadcasters in more than 60 countries have bought rights to the series, according to NRK. The series is known by the title "State of Happiness" in English. In the UK, the series was acquired by BBC Four. It was also acquired for Topic's streaming platform in the United States. Reception Carol Midgley of The Times said that the series "has a hypnotic charm and easily stands on its own merit", giving it four out of five stars. References External links Television shows set in Norway Television series set in the 1970s Norwegian drama television series 2018 Norwegian television series debuts Works about petroleum
State of Happiness
Chemistry
659
12,079,977
https://en.wikipedia.org/wiki/Flood%20insurance%20rate%20map
A flood insurance rate map (FIRM) is an official map of a community within the United States that displays the floodplains, more explicitly, the special hazard areas and risk premium zones, as delineated by the Federal Emergency Management Agency (FEMA). The term is used mainly in the United States, but similar maps exist in many other countries, such as Australia. Uses FIRMs display areas that fall within the 100-year flood boundary. Areas that fall within the boundary are called special flood hazard areas (SFHAs), and they are further divided into insurance risk zones. The term 100-year flood indicates that the area has a one-percent chance of flooding in any given year, not that a flood will occur every 100 years; a worked example follows below. Such maps are used in town planning, in the insurance industry, and by individuals who want to avoid moving into a home at risk of flooding or to know how to protect their property. FIRMs are used to set rates of insurance against risk of flood and to determine whether buildings are insurable at all against flood. A FIRM is similar to a topographic map, but is designed to show floodplains. Towns and municipalities use FIRMs to plan zoning areas. Most places will not allow construction in a floodway. Creation process In the United States the FIRM for each town is occasionally updated. At that time a preliminary FIRM will be published and made available for public viewing and comment. FEMA sells the official FIRMs, called community kits, as well as an updating access service to the maps. There are also some companies that sell software to locate land parcels or real estate on digitized FIRMs. These FIRMs are used in identifying whether a land parcel or building is in a flood zone and, if so, which of the different flood zones is in effect. In 2004, FEMA began a project to update and digitize the flood plain maps at a yearly cost of $200 million. The new maps usually take around 18 months to go from a preliminary release to the final product. During that time period FEMA works with local communities to determine the final maps. Louisiana and FEMA In early 2014, two congressmen from Louisiana, Bill Cassidy and Steve Scalise, asked FEMA to consider the width of drainage canals, water flow levels, drainage improvements, pumping stations and computer models when deciding the final flood insurance rate maps. See also National Flood Insurance Program Floodplain Special Flood Hazard Area References External links FIRMettes from FEMA Hydrology and urban planning Flood control in the United States Flood insurance Federal Emergency Management Agency Geologic maps
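A worked example of the one-percent-per-year definition above, sketched in Python (treating years as independent is a simplifying assumption made for illustration):

# Chance of at least one "100-year" (1%-annual-chance) flood over n years,
# assuming independent years.
annual_chance = 0.01
for years in (1, 10, 30, 100):
    p_at_least_one = 1 - (1 - annual_chance) ** years
    print(f"{years:>3} years: {p_at_least_one:.1%}")
# 1 year: 1.0%, 10 years: 9.6%, 30 years: 26.0%, 100 years: 63.4%
# Over a 30-year mortgage the chance is about one in four; even over a full
# century a flood is likely but not certain.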
Flood insurance rate map
Environmental_science
507
2,584,445
https://en.wikipedia.org/wiki/Ferrovanadium
Ferrovanadium (FeV) is an alloy formed by combining iron and vanadium, with a vanadium content range of 35–85%. The production of this alloy results in a grayish-silver crystalline solid that can be crushed into a powder called "ferrovanadium dust". Ferrovanadium is a universal hardener, strengthener and anti-corrosive additive for steels such as high-strength low-alloy steel and tool steels, as well as other ferrous-based products. It has significant advantages over both iron and vanadium individually. Ferrovanadium is used as an additive to improve the qualities of ferrous alloys. One such use is to improve corrosion resistance to alkaline reagents as well as sulfuric and hydrochloric acids. It is also used to improve the tensile strength-to-weight ratio of the material. One application of such steels is in the chemical processing industry, for high-pressure, high-throughput fluid handling systems dealing with industrial-scale sulfuric acid production. It is also commonly used for hand tools, e.g. spanners (wrenches), screwdrivers, ratchets, etc. Composition Vanadium content in ferrovanadium ranges from 35% to 85%. FeV80 (80% vanadium) is the most common ferrovanadium composition. In addition to iron and vanadium, small amounts of silicon, aluminum, carbon, sulfur, phosphorus, arsenic, copper, and manganese are found in ferrovanadium. Impurities can make up to 11% by weight of the alloy. Concentrations of these impurities determine the grade of ferrovanadium. Synthesis Eighty-five percent of all vanadium extracted from the Earth is used to create alloys such as ferrovanadium. There are two common ways in which ferrovanadium is produced: silicon reduction and aluminum reduction. Reduction by silicon Vanadium pentoxide (V2O5), ferrosilicon (FeSi75), lime (CaO) and slag (recycled vanadium-containing waste) are combined in an electric arc furnace heated to 1850 °C. Silicon in the ferrosilicon reduces the vanadium in V2O5 to vanadium metal. The vanadium then interacts with the iron to form ferrovanadium. Excess lime and V2O5 are added to use up the silicon and refine the metal. This process produces vanadium concentrations between thirty-five and sixty percent. 2 V2O5 + 5 (Fe(y/5)Si)alloy + 10 CaO → 4 (Fe(y/4)V)alloy + 5 Ca2SiO4 Reduction by aluminum Iron, V2O5, aluminum, and lime are combined in an electric arc furnace. As with silicon, the aluminum reduces the vanadium in V2O5 to vanadium metal. The vanadium dissolves into the iron and forms the ferrovanadium alloy. The resulting ferrovanadium has a vanadium concentration between seventy and eighty-five percent. 3 V2O5 + 10 Al → 6 V + 5 Al2O3 Vx + Fe1−x → (Fe1−xVx)alloy (A worked mass balance for the aluminothermic reaction is sketched below.) Toxicology Ferrovanadium dust is a mild irritant that affects the eyes when touched by contaminated skin and the respiratory tract when inhaled. The dust caused chronic bronchitis and pneumonitis in animals exposed to high concentrations (1000–2000 mg/m3) at intervals over two months. However, no such long-term effects have been observed in humans. Occupational exposure The American Conference of Governmental Industrial Hygienists (ACGIH) states that an employee who is working eight hours a day, five days a week, can be exposed to ferrovanadium dust in the workplace at concentrations of up to 1.0 mg/m3 without adverse effects. Short-term exposures should be kept below 3.0 mg/m3. It is suggested that those working with high concentrations of ferrovanadium dust wear a respirator to prevent inhalation and irritation of the respiratory tract.
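The aluminothermic equation above implies a rough mass balance, sketched here with rounded molar masses (an illustrative calculation; industrial charge compositions and yields will differ):

# Mass balance for: 3 V2O5 + 10 Al -> 6 V + 5 Al2O3
M_V, M_O, M_Al = 50.94, 16.00, 26.98          # g/mol, rounded
m_V2O5 = 3 * (2 * M_V + 5 * M_O)              # 545.6 g of V2O5 consumed
m_Al = 10 * M_Al                              # 269.8 g of Al consumed
m_V = 6 * M_V                                 # 305.6 g of V produced
print(f"Al needed per kg V2O5: {m_Al / m_V2O5:.3f} kg")    # ~0.494 kg
print(f"V recovered per kg V2O5: {m_V / m_V2O5:.3f} kg")   # ~0.560 kg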
Steel The most common use of ferrovanadium is in the production of steel. In 2017, 94% of vanadium consumption in the USA went to producing iron and steel alloys. Ferrovanadium and other vanadium alloys are used in carbon steel, alloy steel, high-strength steel, and HSLA (high-strength low-alloy) steel. These steels are then used to make automotive parts, pipes, tools, and more. The addition of ferrovanadium toughens the steel, making it more resistant to temperature and torsion. This increase in strength is a result of the formation of vanadium carbides, which have a rigid crystal structure, as well as a finer grain size, which decreases the ductility of the steel. In addition to adding to the composition of the steel, ferrovanadium can also be used as a coating on the steel. When coated with nitrided ferrovanadium, the abrasion resistance of steel increases by 30–50%. Market Between 2013 and 2017, the United States imported 13,510 tons of ferrovanadium, a majority of which came from Czechia, Austria, Canada, and South Korea. The price of ferrovanadium has fluctuated dramatically since 1996, hitting an all-time high in 2008 at $76,041.61/ton of FeV80. In more recent years, it has once again seen an increase in price as environmental standards shut down some of the vanadium producers in China. These shutdowns, as well as the closure of a South African vanadium mine, created a vanadium shortage, forcing ferrovanadium factories to reduce their production of ferrovanadium, decreasing its supply and driving up the price. See also Ferroalloy Steel alloy Vanadium Vanadium(V) oxide References Vanadium compounds Ferrous alloys
Ferrovanadium
Chemistry
1,235
51,223,451
https://en.wikipedia.org/wiki/NGC%207582
NGC 7582 is a spiral galaxy of the Hubble type SB(s)ab in the constellation Grus. It has an angular size of 5.0' × 2.1' and an apparent magnitude of 11.37. It is about 70 million light years away from Earth and has a diameter of about 100,000 light years. The galaxy is classified as a Seyfert 2 galaxy, a type of active galaxy. This galaxy is in the upper middle west part of the Virgo Supercluster. The supermassive black hole at the core has a mass of . Gallery References External links Barred spiral galaxies Grus (constellation) 7582 71029
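The distance, angular size, and diameter quoted above are mutually consistent, as a quick small-angle estimate shows (a sketch using only the figures given in the text):

import math

# Small-angle estimate: physical size = angular size (in radians) x distance.
angular_size_arcmin = 5.0                  # major axis, from the text
distance_ly = 70e6                         # ~70 million light-years
theta_rad = math.radians(angular_size_arcmin / 60)
diameter_ly = theta_rad * distance_ly
print(f"~{diameter_ly:,.0f} light-years")  # ~102,000, matching the stated ~100,000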
NGC 7582
Astronomy
140
5,170,782
https://en.wikipedia.org/wiki/Cation%E2%80%93%CF%80%20interaction
Cation–π interaction is a noncovalent molecular interaction between the face of an electron-rich π system (e.g. benzene, ethylene, acetylene) and an adjacent cation (e.g. Li+, Na+). This interaction is an example of noncovalent bonding between a monopole (cation) and a quadrupole (π system). Bonding energies are significant, with solution-phase values falling within the same order of magnitude as hydrogen bonds and salt bridges. Similar to these other non-covalent bonds, cation–π interactions play an important role in nature, particularly in protein structure, molecular recognition and enzyme catalysis. The effect has also been observed and put to use in synthetic systems. Origin of the effect Benzene, the model π system, has no permanent dipole moment, as the contributions of the weakly polar carbon–hydrogen bonds cancel due to molecular symmetry. However, the electron-rich π system above and below the benzene ring hosts a partial negative charge. A counterbalancing positive charge is associated with the plane of the benzene atoms, resulting in an electric quadrupole (a pair of dipoles, aligned like a parallelogram so there is no net molecular dipole moment). The negatively charged region of the quadrupole can then interact favorably with positively charged species; a particularly strong effect is observed with cations of high charge density. Nature of the cation–π interaction The most studied cation–π interactions involve binding between an aromatic π system and an alkali metal or nitrogenous cation. The optimal interaction geometry places the cation in van der Waals contact with the aromatic ring, centered on top of the π face along the 6-fold axis. Studies have shown that electrostatics dominate interactions in simple systems, and relative binding energies correlate well with electrostatic potential energy. The Electrostatic Model developed by Dougherty and coworkers describes trends in binding energy based on differences in electrostatic attraction. It was found that interaction energies of cation–π pairs correlate well with electrostatic potential above the π face of arenes: for eleven Na+-aromatic adducts, the variation in binding energy between the different adducts could be completely rationalized by electrostatic differences. Practically, this allows trends to be predicted qualitatively based on visual representations of electrostatic potential maps for a series of arenes. Electrostatic attraction is not the only component of cation–π bonding. For example, 1,3,5-trifluorobenzene interacts with cations despite having a negligible quadrupole moment. While non-electrostatic forces are present, these components remain similar over a wide variety of arenes, making the electrostatic model a useful tool in predicting relative binding energies. The other "effects" contributing to binding are not well understood. Polarization, donor-acceptor and charge-transfer interactions have been implicated; however, energetic trends do not track well with the ability of arenes and cations to take advantage of these effects. For example, if induced dipole was a controlling effect, aliphatic compounds such as cyclohexane should be good cation–π partners (but are not). The cation–π interaction is noncovalent and is therefore fundamentally different than bonding between transition metals and π systems. Transition metals have the ability to share electron density with π-systems through d-orbitals, creating bonds that are highly covalent in character and cannot be modeled as a cation–π interaction. 
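The monopole–quadrupole picture above lends itself to a toy numerical illustration (a sketch only: the partial charge and spacing below are assumed values, not fitted to benzene, and real cation–π contact distances are too short for a point-charge treatment to be quantitative). Modeling the π faces as −q above and below the ring plane with +2q in the plane gives an axial cation an attractive energy that falls off roughly as 1/r³ at long range, the ideal ion–quadrupole limit; the article later notes that true cation–π interactions decay more slowly (about 1/r^n with n < 2), one sign that the point-charge picture is incomplete.

# Toy point-charge model of a benzene-like quadrupole (illustrative only):
# -q above and below the ring plane, +2q in the plane, cation +e on the axis.
k = 8.988e9              # Coulomb constant, N*m^2/C^2
e = 1.602e-19            # elementary charge, C
q = 0.10 * e             # assumed partial charge (not fitted to benzene)
d = 1.0e-10              # assumed charge separation, 1 angstrom

def interaction_energy(r):
    # Sum of pairwise Coulomb terms between the cation and the three charges.
    return k * e * (2 * q / r - q / (r - d) - q / (r + d))

for r in (3e-10, 6e-10, 12e-10):
    meV = interaction_energy(r) / e * 1000
    print(f"r = {r * 1e10:4.1f} A: U = {meV:8.2f} meV")
# Output is negative (attractive); going from 6 A to 12 A cuts |U| by ~8x,
# i.e. the 1/r^3 falloff of an ideal ion-quadrupole interaction.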
Factors influencing the cation–π bond strength Several criteria influence the strength of the bonding: the nature of the cation, solvation effects, the nature of the π system, and the geometry of the interaction. Nature of the cation From electrostatics (Coulomb's law), smaller and more positively charged cations lead to larger electrostatic attraction. Since cation–π interactions are predicted by electrostatics, it follows that cations with larger charge density interact more strongly with π systems. The following table shows a series of Gibbs free energies of binding between benzene and several cations in the gas phase. For a singly charged species, the gas-phase interaction energy correlates with the ionic radius (non-spherical ionic radii are approximate). This trend supports the idea that coulombic forces play a central role in interaction strength, since for other types of bonding one would expect the larger and more polarizable ions to have greater binding energies. Solvation effects The nature of the solvent also determines the absolute and relative strength of the bonding. Most data on the cation–π interaction are acquired in the gas phase, as the attraction is most pronounced in that case. Any intermediating solvent molecule will attenuate the effect, because the energy gained by the cation–π interaction is partially offset by the loss of solvation energy. For a given cation–π adduct, the interaction energy decreases with increasing solvent polarity. This can be seen in the following calculated interaction energies of methylammonium and benzene in a variety of solvents. Additionally, the trade-off between solvation and the cation–π effect results in a rearrangement of the order of interaction strength for a series of cations. While in the gas phase the most densely charged cations have the strongest cation–π interaction, these ions also have a high desolvation penalty. This is demonstrated by the relative cation–π bond strengths in water for the alkali metals: K+ > Rb+ ≫ Na+, Li+ Nature of the π system Quadrupole moment Comparing the quadrupole moments of different arenes is a useful qualitative tool to predict trends in cation–π binding, since the quadrupole moment roughly correlates with interaction strength. Arenes with larger quadrupole moments are generally better at binding cations. However, a quadrupole-ion model system cannot be used to quantitatively model cation–π interactions. Such models assume point charges, and are therefore not valid given the short cation–π bond distance. In order to use electrostatics to predict energies, the full electrostatic potential surface must be considered, rather than just the quadrupole moment as a point charge. Substituents on the aromatic ring The electronic properties of the substituents also influence the strength of the attraction. Electron-withdrawing groups (for example, cyano −CN) weaken the interaction, while electron-donating substituents (for example, amino −NH2) strengthen the cation–π binding. This relationship has been illustrated quantitatively for several substituents. The electronic trends in cation–π binding energy are not quite analogous to trends in aryl reactivity. Indeed, the effect of resonance participation by a substituent does not contribute substantively to cation–π binding, despite being very important in many chemical reactions with arenes. This was shown by the observation that cation–π interaction strength for a variety of substituted arenes correlates with the Hammett parameter.
This parameter is meant to capture the inductive effects of functional groups on an aryl ring. The origin of substituent effects in cation–π interactions has often been attributed to polarization from electron donation or withdrawal into or out of the π system. This explanation makes intuitive sense, but subsequent studies have indicated that it is flawed. Recent computational work by Wheeler and Houk strongly indicates that the effect is primarily due to direct through-space interaction between the cation and the substituent dipole. In this study, calculations that modeled unsubstituted benzene plus interaction with a molecule of "H-X" situated where a substituent would be (corrected for extra hydrogen atoms) accounted for almost all of the cation–π binding trend. For very strong π donors or acceptors, this model was not quite able to account for the whole interaction; in these cases polarization may be a more significant factor. Binding with heteroaromatic systems Heterocycles are often activated towards cation–π binding when the lone pair on the heteroatom is incorporated into the aromatic system (e.g. indole, pyrrole). Conversely, when the lone pair does not contribute to aromaticity (e.g. pyridine), the electronegativity of the heteroatom wins out and weakens the cation–π binding ability. Since several classically "electron-rich" heterocycles are poor donors when it comes to cation–π binding, one cannot predict cation–π trends based on heterocycle reactivity trends. Fortunately, the aforementioned subtleties are manifested in the electrostatic potential surfaces of relevant heterocycles. A cation–heterocycle interaction is not always a cation–π interaction; in some cases it is more favorable for the ion to be bound directly to a lone pair. For example, this is thought to be the case in pyridine–Na+ complexes. Geometry Cation–π interactions have an approximate distance dependence of 1/r^n with n < 2. The interaction is less sensitive to distance than a simple ion-quadrupole interaction, which has a 1/r^3 dependence. A study by Sherrill and coworkers probed the geometry of the interaction further, confirming that cation–π interactions are strongest when the cation is situated perpendicular to the plane of atoms (θ = 0 degrees). Variations from this geometry still exhibit a significant interaction, which weakens as the angle θ approaches 90 degrees. For off-axis interactions, the preferred ϕ places the cation between two H atoms. Equilibrium bond distances also increase with off-axis angle. Energies where the cation is coplanar with the carbon ring are saddle points on the potential energy surface, which is consistent with the idea that interaction between a cation and the positive region of the quadrupole is not ideal. Relative interaction strength Theoretical calculations suggest the cation–π interaction is comparable to (and potentially stronger than) ammonium–carboxylate salt bridges in aqueous media. Computed values show that as solvent polarity increases, the strength of the cation–π complex decreases less dramatically. This trend can be rationalized by desolvation effects: salt bridge formation carries a high desolvation penalty for both charged species, whereas the cation–π complex would only pay a significant penalty for the cation. In nature Nature's building blocks contain aromatic moieties in high abundance.
Recently, it has become clear that many structural features that were once thought to be purely hydrophobic in nature are in fact engaging in cation–π interactions. The amino acid side chains of phenylalanine, tryptophan, tyrosine, and histidine are capable of binding to cationic species such as charged amino acid side chains, metal ions, small-molecule neurotransmitters and pharmaceutical agents. In fact, macromolecular binding sites that were hypothesized to include anionic groups (based on their affinity for cations) have in multiple cases been found to consist of aromatic residues instead. Cation–π interactions can tune the pKa of nitrogenous side chains, increasing the abundance of the protonated form; this has implications for protein structure and function. While less studied in this context, the DNA bases are also able to participate in cation–π interactions. Role in protein structure Early evidence that cation–π interactions played a role in protein structure was the observation that, in crystallographic data, aromatic side chains appear in close contact with nitrogen-containing side chains (which can exist as protonated, cationic species) with disproportionate frequency. A study published in 1986 by Burley and Petsko looked at a diverse set of proteins and found that ~50% of the aromatic residues Phe, Tyr, and Trp were within 6 Å of amino groups. Furthermore, approximately 25% of the nitrogen-containing side chains Lys, Asn, Gln, and His were within van der Waals contact with aromatics, and 50% of Arg residues were in contact with multiple aromatic residues (2 on average). Studies on larger data sets found similar trends, including some dramatic arrays of alternating stacks of cationic and aromatic side chains. In some cases the N-H hydrogens were aligned toward aromatic residues, and in others the cationic moiety was stacked above the π system. A particularly strong trend was found for close contacts between Arg and Trp. The guanidinium moiety of Arg in particular has a high propensity to be stacked on top of aromatic residues while also hydrogen-bonding with nearby oxygen atoms. Molecular recognition and signaling An example of cation–π interactions in molecular recognition is seen in the nicotinic acetylcholine receptor (nAChR), which binds its endogenous ligand, acetylcholine (a positively charged molecule), via a cation–π interaction to the quaternary ammonium. The nAChR neuroreceptor is a well-studied ligand-gated ion channel that opens upon acetylcholine binding. Acetylcholine receptors are therapeutic targets for a large host of neurological disorders, including Parkinson's disease, Alzheimer's disease, schizophrenia, depression and autism. Studies by Dougherty and coworkers confirmed that cation–π interactions are important for binding and activating nAChR by making specific structural variations to a key tryptophan residue and correlating activity results with cation–π binding ability. The nAChR is especially important in binding nicotine in the brain, and plays a key role in nicotine addiction. Nicotine has a similar pharmacophore to acetylcholine, especially when protonated. Strong evidence supports cation–π interactions being central to the ability of nicotine to selectively activate brain receptors without affecting muscle activity. A further example is seen in the plant UV-B sensing protein UVR8. Several tryptophan residues interact via cation–π interactions with arginine residues, which in turn form salt bridges with acidic residues on a second copy of the protein.
It has been proposed that absorption of a photon by the tryptophan residues disrupts this interaction and leads to dissociation of the protein dimer. Cation–π binding is also thought to be important in cell-surface recognition. Enzyme catalysis Cation–π interactions can catalyze chemical reactions by stabilizing the buildup of positive charge in transition states. This kind of effect is observed in enzymatic systems. For example, acetylcholinesterase contains important aromatic groups that bind quaternary ammonium in its active site. Polycyclization enzymes also rely on cation–π interactions. Since proton-triggered polycyclizations of squalene proceed through a (potentially concerted) cationic cascade, cation–π interactions are ideal for stabilizing this dispersed positive charge. The crystal structure of squalene-hopene cyclase shows that the active site is lined with aromatic residues. In synthetic systems Solid state structures Cation–π interactions have been observed in the crystals of synthetic molecules as well. For example, Aoki and coworkers compared the solid-state structures of the indole-3-acetic acid choline ester and an uncharged analogue. In the charged species, an intramolecular cation–π interaction with the indole is observed, as well as an interaction with the indole moiety of the neighboring molecule in the lattice. In the crystal of the isosteric neutral compound the same folding is not observed, and there are no interactions between the tert-butyl group and neighboring indoles. Supramolecular receptors Some of the first studies on the cation–π interaction involved looking at the interactions of charged, nitrogenous molecules in cyclophane host–guest chemistry. It was found that even when anionic solubilizing groups were appended to aromatic host capsules, cationic guests preferred to associate with the π-system in many cases. One such host type was also able to catalyze N-alkylation reactions to form cationic products. More recently, cation–π centered substrate binding and catalysis has been implicated in supramolecular metal-ligand cluster catalyst systems developed by Raymond and Bergman. Use of π-π, CH-π, and π-cation interactions in supramolecular assembly π-systems are important building blocks in supramolecular assembly because of their versatile noncovalent interactions with various functional groups. Particularly, π-π, CH-π, and π-cation interactions are widely used in supramolecular assembly and recognition. The π-π interaction concerns the direct interactions between two π-systems, and the cation–π interaction arises from the electrostatic interaction of a cation with the face of the π-system. Unlike these two interactions, the CH-π interaction arises mainly from charge transfer between the C-H orbital and the π-system. A notable example of applying π-π interactions in supramolecular assembly is the synthesis of catenanes. The major challenge in the synthesis of catenanes is to interlock molecules in a controlled fashion. Stoddart and co-workers developed a series of systems utilizing the strong π-π interactions between electron-rich benzene derivatives and electron-poor pyridinium rings. [2]Catenane was synthesized by reacting bis(pyridinium) (A), bisparaphenylene-34-crown-10 (B), and 1,4-bis(bromomethyl)benzene (C). The π-π interaction between A and B directed the formation of an interlocked template intermediate that was further cyclized by a substitution reaction with compound C to generate the [2]catenane product.
Organic synthesis and catalysis Cation–π interactions have likely been important, though unnoticed, in a multitude of organic reactions historically. Recently, however, attention has been drawn to potential applications in catalyst design. In particular, noncovalent organocatalysts have been found to sometimes exhibit reactivity and selectivity trends that correlate with cation–π binding properties. A polycyclization developed by Jacobsen and coworkers shows a particularly strong cation–π effect. Anion–π interaction In many respects, the anion–π interaction is the opposite of the cation–π interaction, although the underlying principles are identical. Significantly fewer examples are known to date. In order to attract a negative charge, the charge distribution of the π system has to be reversed. This is achieved by placing several strong electron-withdrawing substituents along the π system (e.g. hexafluorobenzene). The anion–π effect is advantageously exploited in chemical sensors for specific anions. See also Stacking (chemistry) Salt bridge (protein) References Chemical bonding
Cation–π interaction
Physics,Chemistry,Materials_science
3,980
42,146,937
https://en.wikipedia.org/wiki/Lang%27s%20theorem
In algebraic geometry, Lang's theorem, introduced by Serge Lang, states: if G is a connected smooth algebraic group over a finite field F_q, then, writing σ for the Frobenius, the morphism of varieties G → G, x ↦ x^{-1}σ(x), is surjective. Note that the kernel of this map (i.e., the preimage of the identity under G(F̄_q) → G(F̄_q), x ↦ x^{-1}σ(x)) is precisely G(F_q). The theorem implies that the Galois cohomology set H^1(F_q, G) vanishes, and, consequently, any G-bundle on Spec F_q is isomorphic to the trivial one. Also, the theorem plays a basic role in the theory of finite groups of Lie type. It is not necessary that G is affine. Thus, the theorem also applies to abelian varieties (e.g., elliptic curves). In fact, this application was Lang's initial motivation. If G is affine, the Frobenius σ may be replaced by any surjective map with finitely many fixed points (see below for the precise statement). The proof (given below) actually goes through for any σ that induces a nilpotent operator on the Lie algebra of G. The Lang–Steinberg theorem gave a useful improvement to the theorem. Suppose that F is an endomorphism of an algebraic group G. The Lang map is the map from G to G taking g to g^{-1}F(g). The Lang–Steinberg theorem states that if F is surjective and has a finite number of fixed points, and G is a connected affine algebraic group over an algebraically closed field, then the Lang map is surjective. Proof of Lang's theorem Define the morphism f_a: G → G, f_a(x) = x^{-1}aσ(x). Then, by identifying the tangent space at a with the tangent space at the identity element, we have: (df_a)_e = d(h ∘ (x ↦ (x^{-1}, a, σ(x))))_e = dh_{(e, a, e)}(−1, 0, dσ_e) = −1 + dσ_e, where h(x, y, z) = xyz. It follows that (df_a)_e is bijective, since the differential of the Frobenius vanishes. Since f_a(bx) = f_{f_a(b)}(x), we also see that (df_a)_b is bijective for any b. Let X be the closure of the image of f_1. The smooth points of X form an open dense subset; thus, there is some b in G such that f_1(b) is a smooth point of X. Since the tangent space to X at f_1(b) and the tangent space to G at b have the same dimension, it follows that X and G have the same dimension, since G is smooth. Since G is connected, the image of f_1 then contains an open dense subset U of G. Now, given an arbitrary element a in G, by the same reasoning, the image of f_a contains an open dense subset V of G. The intersection U ∩ V is then nonempty, but then this implies a is in the image of f_1: indeed, if x^{-1}σ(x) = y^{-1}aσ(y), then a = (xy^{-1})^{-1}σ(xy^{-1}). Notes References Algebraic groups Theorems in algebraic geometry
Lang's theorem
Mathematics
518
4,613,509
https://en.wikipedia.org/wiki/Root%20cellar
A root cellar (American and Canadian English), fruit cellar (Mid-Western American English) or earth cellar (British English) is a structure, usually underground or partially underground, used for storage of vegetables, fruits, nuts, or other foods. Its name reflects the traditional focus on root crops stored in an underground cellar, which is still often true; but the scope is wider, as a wide variety of foods can be stored for weeks to months, depending on the crop and conditions, and the structure may not always be underground. Root cellaring has been vitally important in various eras and places for winter food supply. Although present-day food distribution systems and refrigeration have rendered root cellars unnecessary for many people, they remain important for those who value self-sufficiency, whether by economic necessity or by choice and for personal satisfaction. Thus, they are popular among diverse audiences, including gardeners, organic farmers, DIY fans, homesteaders, anyone seeking some emergency preparedness (most extensively, preppers), subsistence farmers, and enthusiasts of local food, slow food, heirloom plants, and traditional culture. Function Root cellars are for keeping food supplies at controlled temperatures and steady humidity. Many crops keep longest just above freezing and at high humidity (90–95%), but the optimal temperature and humidity ranges vary by crop, and various crops keep well at temperatures further above near-freezing but below typical room temperature (around 20 °C or 68 °F). A few crops keep better in low humidity. Root cellars keep food from freezing during the winter and keep food cool during the summer to prevent the spoiling and rotting of the roots, for example potatoes, onions, garlic, carrots, and parsnips. These are placed in the root cellar in the autumn after harvesting. A secondary use for the root cellar is as a place to store wine, beer, or other homemade alcoholic beverages. Vegetables stored in the root cellar consist mostly of root vegetables (thus the name): potatoes, turnips, and carrots. Other food supplies placed in the root cellar during winter include beets, onions, jarred preserves and jams, salt meat, salt turbot, salt herring, winter squash, and cabbage. Summer squash (also known as courgettes or zucchini) may last as long as three months at room temperature; American pumpkins and pattypan squash can endure six months in storage, while kabocha, turban, butternut, and spaghetti squash can be stored for as long as eight months. A potato cellar is sometimes called a potato barn or potato house. Separate cellars are occasionally used for storing fruits, such as apples. Apples can give off enough ethylene gas to hasten the overripening or spoilage of other crops stored nearby, although this effect is variable, and many farms successfully store vegetables without segregating their apples. Water, bread, butter, milk, and cream are sometimes stored in the root cellar. Items such as salad greens, fresh meat, and jam pies are kept in the root cellar early in the day to keep cool until they are needed for supper. The ability of some vegetables and fruit to keep for months in favorable cellar conditions stems in part from the fact that they are not entirely inanimate even after picking. Although they may no longer qualify as living, the plant cells continue to respire in some impaired but nonzero way, resisting bacterial decomposition for a time.
The effect can be compared to the way that cut flowers in a vase of water last much longer than cut flowers lying on a table: the flowers in the vase are not entirely dead yet and continue to respire. The analogy is not exact, but the high humidity that supports many cellared crops is involved in this residual respiration. In some cases, plants are transplanted from the field to the soil floor of a cellar in autumn, and they then continue living in the cellar for months. The fact that they cannot thrive or grow larger in the low-light, low-temperature conditions is not a problem; the only objective is to keep them alive instead of dead, thus warding off decomposition. This is a form of season extension in which the growing season is not extended but the harvest season is substantially extended. Closets, crawlspaces, garages, sheds, and attics have all been used successfully for storage of at least some kinds of crops. Even the space under a bed can store some crops (such as pumpkins) for several weeks. Especially before rural electrification, farms with springhouses have often used them for root cellar duty (as well as milkhouse duty). Construction Common construction methods are: Digging down into the ground and erecting a shed or house over the cellar (access is via a trap door in the shed). Digging into the side of a hill (easier to excavate and facilitates water drainage). Building a structure at ground level and piling rocks, earth, and/or sod around and over it. This may be easier to build on rocky terrain where excavation is difficult. Most root cellars were built using stone, wood, mortar (cement), and sod. Newer ones may be made of concrete with sod on top. Regional variations Newfoundland and Labrador Historian Sean Cadigan writes, "Newfoundland and Labrador's climate and soil have not been conducive to agriculture, but outport isolation and poor incomes in the fishery have made supplementary farming crucial." People grew root vegetables: potatoes, carrots, turnip, cabbage and beets, while others grew a wider variety of vegetables in their gardens. Growing enough vegetables to last the winter was imperative to the survival of Newfoundlanders, and without refrigerators, root cellars were one of the few methods of preserving crops. Architect Robert Mellin documented root cellars during his research in Tilting, Fogo Island. Many Newfoundland and Labrador cellars use a two-door, airlock-type system as a method of temperature regulation, which allowed people ample time to enter the first door, shutting it behind them before entering the main portion of the root cellar. Folklorist Crystal Braye has also studied Newfoundland root cellars. The town of Elliston has so many of the structures that its motto is the "Root Cellar Capital of the World". Potato Hole A potato hole is a large, deep opening dug into an earthen floor, covered by boards, and used mainly to store sweet potatoes during the winter. The "potato hole" or root cellar was also used by slaves to hide food and personal possessions from their slave owners, leading some slave owners to raise slave cabins off the ground to prevent enslaved people from creating their own hidden personal space. The storing of valuables in pits was common among many cultures, but for some enslaved Africans, such as those from the Igbo people of southeastern Nigeria, storing valuables under the floors of their houses was often practiced.
See also References https://www.education.nh.gov/sites/g/files/ehbemt326/files/inline-documents/sonh/Rosenwald3.pdf Bibliography Agriculture Rooms Food preservation Semi-subterranean structures
Root cellar
Engineering
1,472
5,654,862
https://en.wikipedia.org/wiki/Fire%20lookout%20tower
A fire lookout tower, fire tower, or lookout tower is a tower that provides housing and protection for a person known as a "fire lookout", whose duty it is to search for wildfires in the wilderness. It is a small building, usually on the summit of a mountain or other high vantage point, to maximize the viewing distance and range, known as the viewshed. From this vantage point the fire lookout can see smoke that may develop, determine its location by using a device known as an Osborne Fire Finder, and call for wildfire suppression crews. Lookouts also report weather changes and plot the locations of lightning strikes during storms. The location of a strike is monitored for a period of days afterwards, in case of ignition. A typical fire lookout tower consists of a small room, known as a cab, atop a large steel or wooden tower. Historically, the tops of tall trees have also been used to mount permanent platforms. Sometimes natural rock may be used to create a lower platform. In cases where the terrain makes a tower unnecessary, the structure is known as a ground cab. Ground cabs are still called towers, even if they don't sit on a tower. Towers gained popularity in the early 1900s, and fires were reported using telephones, carrier pigeons and heliographs. Although many fire lookout towers have fallen into disrepair from neglect, abandonment and declining budgets, some fire service personnel have made efforts to preserve older fire towers, arguing that a person watching the forest for wildfire can be an effective and cheap fire control measure. History United States The history of fire lookout towers predates the United States Forest Service, founded in 1905. Many townships, private lumber companies, and state forestry organizations operated fire lookout towers of their own accord. The Great Fire of 1910, also known as the Big Blowup, burned through the states of Washington, Idaho, and Montana. The smoke from this fire drifted across the entire country to Washington, D.C., both physically and politically, and it challenged the five-year-old Forest Service to address new policies regarding fire suppression; the fire did much to create the modern system of fire rules, organizations, and policies. One of the rules resulting from the 1910 fire stated that "all fires must be extinguished by 10 a.m. the following morning." To prevent and suppress fires, the U.S. Forest Service made another rule that townships, corporations and states would bear the cost of contracting fire suppression services, because at the time there was not the large Forest Service fire department that exists today. As a result of the above rules, early fire detection and suppression became a priority. Towers began to be built across the country. While earlier lookouts used tall trees and high peaks with tents for shelters, by 1911 permanent cabins and cupolas were being constructed on mountaintops. Beginning in 1910, the New Hampshire Timberlands Owners Association, a fire protection group, was formed, and soon after, similar organizations were set up in Maine and Vermont. A leader of these efforts, W.R. Brown, an officer of the Brown Company, which owned over 400,000 acres of timberland, set up a series of effective forest-fire lookout towers, possibly the first in the nation, and by 1917 helped establish a forest-fire insurance company. In 1933, during the Great Depression, President Franklin Delano Roosevelt formed the Civilian Conservation Corps (CCC), consisting of young men and veterans of World War I.
It was during this time that the CCC set about building fire lookout towers and access roads to those towers. The U.S. Forest Service took great advantage of the CCC workforce and initiated a massive program of construction projects, including fire lookout towers. In California alone, some 250 lookout towers and cabs were built by CCC workers between 1933 and 1942. The heyday of fire lookout towers was from 1930 through 1950. During World War II, the Aircraft Warning Service was established, operating from mid-1941 to mid-1944, and fire lookouts were assigned additional duty as enemy aircraft spotters, especially on the West Coast of the United States. From the 1960s through the 1990s the towers took a back seat to new technology: aircraft and improvements in radios. Satellite fire detection and modern cell phones promised to replace the remaining fire lookout towers, but in several environments the technology fell short: fires detected from space are already too large for accurate assessment and control, and cell phones in wilderness areas still suffer from lack of signal. Today, some fire lookout towers remain in service, because human eyes able to detect smoke and call in a fire report allow fire management officials to decide early how a fire is to be managed. The more modern policy is to "manage fire", not simply to suppress it, and fire lookout towers reduce the time between fire detection and fire management assessment. Idaho had the most known lookout sites (966); 196 of them still exist, with roughly 60 staffed each summer. Kansas is the only U.S. state that has never had a lookout. A number of fire lookout tower stations, including many in New York State near the Adirondack Forest Preserve and Catskill Park, have been listed on the National Register of Historic Places. Japan During the Edo period, towns in Japan housed fire lookout towers. The fire lookout tower was usually built near a fire station and was equipped with a ladder, a lookout platform, and an alarm bell. From these towers watchmen could observe the entire town, and in the event of a fire they would ring the alarm bell, calling up firemen and warning town residents. In some towns the bells were also used to mark the time. While the fire lookout towers remained fully equipped into the Shōwa period, they were later replaced by telephone and radio broadcasting systems in many cities. Canada As in the United States, fire towers were built across Canada to protect the trees valuable to the forestry industry. Most towers were built from the early 1920s to the 1950s and were a mix of wood and steel structures. A total of 325 towers dotted the landscape of Ontario in the 1960s; today approximately 156 towers span the province, but only a handful remained in use after the 1970s. Towers are still in use in British Columbia, Alberta, Saskatchewan, Manitoba, Ontario and a few of the Maritime provinces. Nova Scotia decommissioned the last of its 32 fire towers in 2015 and had them torn down by a contractor. Germany Germany's first fire lookout tower was built between 1890 and 1900 to the plans of Forstmeister Walter Seitz, in the "Muskauer Forst" near Weißwasser. Warnings were transmitted by light signal. To communicate a fire's location, Seitz divided the forest area into numbered compartments, so-called "Jagen", whose number was transmitted to the town. He received a patent for this system in 1902. Seitz traveled to the 1904 Louisiana Purchase Exposition to present his idea in the USA.
Russia As wood had been a key building material in Russia for centuries, urban fires were a constant threat to the towns and cities. To address that issue, a program was launched in the early 19th century to construct fire stations equipped with lookout towers, called kalancha, overlooking the mostly low-rise quarters. Watchmen standing vigil there could alert other stations as well as their own using simple signals. Surviving towers are often local landmarks. Today Australia Fire towers are still in use in Australia, particularly in the mountainous regions of the south-eastern states. Victoria's Forest Fire Management operates 72 towers across the state during the fire season, with towers constructed as recently as 2016. Jimna Fire Tower in southeastern Queensland is the tallest fire tower in the country, at 47 meters above the ground, and is included on the state heritage register. United States Today hundreds of towers are still in service with paid staff and/or volunteer citizens. In some areas, the fire lookout operator often receives hundreds of forest visitors during a weekend and provides a needed "pre-fire suppression" message, supported by handouts from the "Smokey Bear" or "Woodsy Owl" education campaigns. This educational information is often distributed to young hikers who make their way up to the fire lookout tower. In this respect, the towers are remote way stations and interpretive centers. The fire lookout tower also acts as a sentinel in the forest, attracting lost or injured hikers who make their way to the tower knowing they can get help. In some locations around the country, fire lookout towers can be rented by public visitors who obtain a permit. These locations provide a unique experience for the camper, and in some rental locations the check-out time is enforced when the fire lookout operator returns for duty and takes over the cab for the day shift. Fire lookout towers are an important part of American history, and several organizations have been founded to save, rebuild, restore, and operate them. Germany Starting in 2002, traditional fire watch was replaced by "FireWatch", optical sensors located on old lookout towers or mobile phone masts. Based on a system developed by the DLR for analyzing gases and particles in space, a terrestrial version for forest fire smoke detection was developed by DLR and IQ Wireless. Currently, about 200 of these sensors are installed around Germany, while similar systems have been deployed in other European countries, Mexico, Kazakhstan and the USA. Canada Several Canadian provinces have fire lookout towers. Dorset, Ontario's Scenic Tower was built on the site of a former fire lookout tower (1922–1962). Types Wooden towers Many fire lookout towers are simply cabs that have been fitted atop tall railroad water tank towers. One of the last wooden fire lookout towers in Southern California was the South Mount Hawkins Fire Lookout, in the Angeles National Forest. A civilian effort is underway to rebuild the tower after its loss in the Curve Fire of September 2002. The cabs of wooden towers vary in size. Example — South Mount Hawkins before the fire Example — Boucher Hill Lookout, Palomar Mountain State Park, San Diego CA Steel towers Steel towers can vary in size and height. They are very sturdy, but tend to sway in the wind more than wooden towers.
The cabs of steel towers likewise vary in size. Example — Los Pinos Lookout, Cleveland National Forest, San Diego CA Example — Red Mountain Lookout, San Bernardino National Forest, Riverside CA Example — High Point Lookout, Cleveland National Forest, Palomar Mountain, San Diego CA Example — Mount Lofty Fire Tower, South Australia Aermotors The Aermotor Company, originally of Chicago, Illinois, was the first and leading manufacturer of steel fire towers from the 1910s to the mid-1920s. These towers have very small cabs, as they are based on Aermotor windmill towers. They are often found in the U.S. Midwest and South, but a few are in the mountainous West. In the northeast, all of the towers in the Adirondack Mountains and most in the Catskills were Aermotor towers erected between 1916 and 1921. The typical Aermotor tower had a small cab with a fire-locating device mounted in the center; access was by way of a trap door in the floor. Lakota Peak Lookout Summit Ridge Lookout The Fire Towers of New York Example — Adirondack Towers Ground cabs Ground cabs are still known as "towers" even though there may be no tower under the cab. These structures can be one, two or three stories tall, with foundations made of natural stone or concrete. They vary greatly in size, but many are simple wooden or steel tower cabs that were constructed using the same plans, sans the tower. Example — Tahquitz Peak Lookout Example — Winchester Mountain Lookout Example — Mt. Tamalpais Lookout in California Lookout trees The simplest kind consists of a ladder running up the tree to a suitable height. Such trees could have platforms on the ground next to them for maps and a fire finder. A more elaborate version, such as the Gloucester tree in Australia, added a permanent platform to the tree by building a wooden or, later, metal structure at the top of the tree, with metal spikes hammered into the trunk to form a spiral ladder. These 'platform trees' were often equipped with telephones, fire finder tables, seats and guy-wires. Other types There are many other types of lookout. In the early days, the fire lookout operator simply climbed a denuded tree and sat on a platform chair at its top. An old fishing boat was once dragged to the top of a high hill and used as a fire lookout tower. Very little is known about horse-mounted fire lookouts, but they, too, rode the ridges patrolling the forest for smoke. Records Tallest lookout tower in the world: Warren Bicentennial Tree Lookout, Western Australia — . Tallest all-steel lookout tower in the world: Beard Tower, SE of Manjimup, Western Australia — . Tallest lookout tower in the U.S.: Woodworth Tower, Alexandria, Louisiana — . Highest lookout site in the world: Fairview Peak Lookout, Colorado — . Lowest lookout sites in the world: Pine Island L.O., Florida & Evans Pines L.O., Florida — .
Countries continuing to use fire lookout towers Australia Belgium Brazil Canada (Alberta, B.C., Manitoba, Nova Scotia, Ontario, Saskatchewan) France Germany Greece Indonesia Israel Italy Latvia Mexico New Zealand Norway Poland Portugal South Africa Spain Turkey United States Uruguay See also List of fire lookout towers Lookout tree Watchtower Drill tower, used in firefighting practice Hose tower, used in some fire stations to dry firehoses Fire control tower, used to control gun fire from coastal batteries List of New Jersey Forest Fire Service fire towers Firewatch, a game centered around a fire lookout tower in Shoshone National Forest References The Lookout Network newsletter External links Fire Lookouts US Forest Service History Pages, Forest History Society Forest Fire Lookout Association Ontario’s Fire Tower Lookouts Fire Lookout Towers in Australia Eyes of the Forest: Idaho's Fire Lookouts Documentary produced by Idaho Public Television "A Day in the Life of a Fire Lookout" in Marin County, California Wildfire suppression Towers
Fire lookout tower
Engineering
2,874
14,484,306
https://en.wikipedia.org/wiki/Proof%20mining
In proof theory, a branch of mathematical logic, proof mining (or proof unwinding) is a research program that studies or analyzes formalized proofs, especially in analysis, to obtain explicit bounds, ranges or rates of convergence from proofs that, when expressed in natural language, appear to be nonconstructive. This research has led to improved results in analysis obtained from the analysis of classical proofs. References Further reading Ulrich Kohlenbach and Paulo Oliva, "Proof Mining: A systematic way of analysing proofs in mathematics", Proc. Steklov Inst. Math, 242:136–164, 2003 Paulo Oliva, "Proof Mining in Subsystems of Analysis", BRICS PhD thesis citeseer Proof theory
Proof mining
Mathematics
156
42,689,830
https://en.wikipedia.org/wiki/Kamaz%20Typhoon
KamAZ Typhoon () is a family of Russian multi-functional, modular, armored mine-resistant ambush protected vehicles manufactured by the Russian truck builder KAMAZ. The Typhoon family is part of Russia's Typhoon program. As of 2021, the Russian Armed Forces fleet included about 330 Typhoon-K vehicles. History The development of the "Typhoon" vehicle family began in 2010, when the Russian defence minister approved the program "Development of Russian Federation Armed Forces military vehicles for the period until 2020" and the Typhoon MRAP program was started. In 2012 the first contract for the purchase of Typhoons was signed between the Russian Ministry of Defence and KAMAZ. Twelve Typhoons took part in the Russian Victory Day military parade in 2014. State tests were completed in 2019. The KamAZ Typhoon has been used by the Russians in the Russo-Ukrainian War, with at least 25 KamAZ-63968 Typhoons and 8 K-53949 Typhoon-Ks destroyed, damaged, abandoned or captured as of 11 September 2023. An import-substituted Typhoon-K vehicle was offered for export in April 2023. Description Russia claims NATO STANAG 4569 level 3b protection: a combined set of ceramic and steel armor, which protects against 14.5×114 mm armor-piercing bullets. The vehicle carries 128.5–129.0 mm thick bulletproof glass with a transparency of 70%, developed by Magistral Ltd and tested at the Research Institute of Steel; the glass withstands two KPVT shots spaced 280–300 mm apart, with a bullet velocity of 911 m/s at the instant of contact with the glass. This bulletproofing exceeds the highest classes of the applicable GOST standards (GOST R 51136 and GOST R 50963), whose highest level covers fire from B-32 armor-piercing bullets, 7.62×54 mm, from the SVD. In production, Magistral Ltd focused on the western standard STANAG 4569 level IV: guaranteed protection against B-32 armor-piercing ammunition, 14.5×114 mm, fired from a distance of 200 m at a bullet velocity of 891–931 m/s. The armor withstands hits from 30 mm ammunition. The vehicle rides on 16.00R20 bulletproof tires with run-flat inserts that divert the blast wave, with automatic inflation and pressure control up to 4.5 atmospheres. Loopholes are provided for firing small arms from inside, along with a remotely controlled machine gun. Parts commonality with other vehicles of the family is 86%. Seats are equipped with personal weapon holders, seat belts and head restraints. They are attached to the roof of the module to reduce the shock of mine or bomb blasts. Inside the module, an FVUA-100A filtration unit and air conditioning are installed. On the roof there are escape hatches in case the vehicle rolls over. Passengers exit through the ramp at the stern of the vehicle or through a side door. Surveillance and communication The vehicle carries the GALS-D1M combat information and control system (CICS), which monitors and controls the operation of the engine and calculates the vehicle's roll, tilt, road speed, location, and other parameters. The independent hydropneumatic suspension allows the driver to change the ride height on the move by remote control, within a range of 400 mm. The KamAZ-63968 is equipped with five cameras giving the troop compartment and cockpit a view of the outside. The cabin is equipped with folding displays showing the state of the vehicle and the external camera view. Technical specifications and performance Crew: depends on configuration, up to 16. Axles: 6×6, with two front axles; the single rear axle reflects the vehicle's weight distribution (the cab is very heavy).
Length: Width: Height: cab: body/fuselage: Wheelbase: n/a Ground clearance: adjustable Turning radius: less than Lifting angle: 23–30° Wheel rotation angle: 39° Tires: 16.00R20 run-flat tires with inserts diverting the blast wave, with automatic inflation and pressure adjustment (1 to 4.5 atm) depending on the road surface. Curb weight: 21 tons; gross weight: 24 tons. Maximum speed: Cruising range: Fuel consumption per 100 km: less than Variants Source: 4x4 Family KamAZ-5388 - 4x4 Armoured chassis cab KamAZ-5388 - 4x4 Armoured personnel carrier KamAZ-53888 - 4x4 Armoured cargo vehicle 6x6 Family KamAZ-6396 - 6x6 Armoured chassis cab KamAZ-6396 - 6x6 Armoured cargo vehicle KamAZ-63968 - 6x6 Armoured personnel carrier 8x8 Family KamAZ-6398 - 8x8 Armoured cargo vehicle KamAZ-63988 - 8x8 Armoured personnel carrier Derivatives KamAZ-63969 Solid-body, 6x6 wheeled amphibious armoured personnel carrier (APC) with a remote-controlled weapon station. Gallery Operators Ukraine: Seven captured Typhoon and Typhoon-K vehicles, with an additional four captured K-53949 Linza armored field ambulances. Uzbekistan: Armed Forces of the Republic of Uzbekistan See also Ural Typhoon - Ural Trucks Typhoon variant ZIL Karatel Notes Wheeled armoured fighting vehicles Mine-resistant ambush protected vehicles Wheeled armoured personnel carriers Military engineering vehicles Armoured fighting vehicles of Russia Kamaz Military vehicles introduced in the 2010s Armoured personnel carriers of the post–Cold War period
Kamaz Typhoon
Engineering
1,121
64,814
https://en.wikipedia.org/wiki/Andr%C3%A9-Louis%20Danjon
André-Louis Danjon (; 6 April 1890 – 21 April 1967) was a French astronomer who served as director of the Observatory of Strasbourg from 1930 to 1945 and of the Paris Observatory from 1945 to 1963. He developed several astronomical instruments to examine the regularity of the rotation of the Earth, and among his discoveries was an acceleration of the Earth's rotation during periods of intense solar activity, occurring in 11-year cycles and correlated with an increase in earthquakes. The Danjon scale is used for measuring the intensity of lunar eclipses. He noted an increase in the number of dark lunar eclipses with solar activity, which is termed the Danjon effect. Life and work Danjon was born in Caen to drapers Louis Dominique Danjon and Marie Justine Binet. He studied at the Lycée Malherbe and then went to the École Normale Supérieure, during which time he worked at the observatory of the Société astronomique de France. He graduated in 1914 and was conscripted into the army during World War I. He served under Ernest Esclangon and lost an eye in combat in Champagne. He received war honours in 1915, and in 1919 he was appointed aide-astronome at Strasbourg University. He took up duties as an observer at the Strasbourg meridian observatory and began to work on the improvement of the observatory. He was involved in establishing a new observatory, the Observatoire de Haute-Provence, which became operational in 1923. Danjon devised a method to measure "earthshine" on the dark side of the Moon using a telescope in which a prism split the Moon's image into two identical side-by-side images. By adjusting a diaphragm to dim one of the images until the sunlit portion had the same apparent brightness as the earthlit portion on the unadjusted image, he could quantify the diaphragm adjustment, and thus had a real measurement for the brightness of earthshine. He recorded measurements using his method (now known as the Danjon scale, on which zero equates to a barely visible Moon) from 1925 until the 1950s. He extended similar methods to study the albedo of Venus and Mercury, which became the subject of his doctoral dissertation Recherches de photométrie astronomique (1928) at Paris University. In 1930 he succeeded Ernest Esclangon as director of the Strasbourg Observatory. He was also appointed a professor at Strasbourg University. In 1939, the outbreak of war forced the faculty to move to Clermont-Ferrand near Vichy. He was arrested in November 1943, escaped being sent to Auschwitz, and was released in January 1944. After World War II, Esclangon retired from his position at the Paris Observatory and Danjon replaced him; he also taught at the Sorbonne. In the 1960s he persuaded the government to support the establishment of the European Southern Observatory, with sites at La Silla and Paranal. He also supported the establishment of radio astronomy at Nançay in 1956. Among his notable contributions to astronomy was the design of the impersonal (prismatic) astrolabe, based on an earlier prismatic astrolabe developed by François Auguste Claude and now known as the Danjon astrolabe, which led to an improvement in the accuracy of fundamental optical astrometry. An account of this instrument, and of the results of some early years of its operation, is given in Danjon's 1958 George Darwin Lecture to the Royal Astronomical Society. The "Danjon limit", a proposed measure of the minimum angular separation between the Sun and the Moon at which a lunar crescent is visible, is named after him. However, this limit may not exist.
The Danjon effect is the name given to his observation that there is an increase in the number of "dark" total lunar eclipses near the maxima of the 11-year sunspot cycle. He developed an astrolabe to identify irregularities in the Earth's rotational periodicity and concluded that the Earth's rotation accelerated during intense solar activity. He suggested that the darkness of these eclipses might be due to an increase in atmospheric aerosols from increased volcanic activity. Danjon was President of the Société astronomique de France (SAF), the French astronomical society, during two periods: 1947–49 and 1962–64. He was awarded the Prix Jules Janssen of the Société astronomique de France in 1950, and the Gold Medal of the Royal Astronomical Society in 1958. In 1946 he was made Officier of the Légion d'honneur, and in 1954 he was made Commandeur. Danjon died in 1967 in Suresnes, Hauts-de-Seine. He was married to Madeleine Renoult (m. 1919, died 1965) and they had four children. References 20th-century French astronomers Members of the French Academy of Sciences Scientists from Caen Recipients of the Gold Medal of the Royal Astronomical Society Academic staff of the University of Strasbourg 1890 births 1967 deaths Presidents of the International Astronomical Union
André-Louis Danjon
Astronomy
1,011
69,867,676
https://en.wikipedia.org/wiki/Lewandowski-Kurowicka-Joe%20distribution
In probability theory and Bayesian statistics, the Lewandowski-Kurowicka-Joe distribution, often referred to as the LKJ distribution, is a probability distribution over positive definite symmetric matrices with unit diagonals. Introduction The LKJ distribution was first introduced in 2009 in a more general context by Daniel Lewandowski, Dorota Kurowicka, and Harry Joe. It is an example of the vine copula approach to constructing constrained high-dimensional probability distributions. The distribution has a single positive shape parameter η, and the probability density function for a d × d correlation matrix R is p(R; η) = C(d, η) · det(R)^(η − 1), with normalizing constant C(d, η), a complicated expression including a product over Beta functions. For η = 1, the distribution is uniform over the space of all correlation matrices; i.e. the space of positive definite matrices with unit diagonal. Usage The LKJ distribution is commonly used as a prior for correlation matrices in Bayesian hierarchical modeling. Bayesian hierarchical modeling often tries to make inferences about the covariance structure of the data, which can be decomposed into a scale vector and a correlation matrix. Instead of a prior on the covariance matrix itself, such as the inverse-Wishart distribution, the LKJ distribution can serve as a prior on the correlation matrix, combined with some suitable prior distribution on the scale vector. It has been implemented in several probabilistic programming languages, including Stan and PyMC. References External links Described as part of the Stan manual distribution-explorer Random matrices Bayesian statistics Continuous distributions Multivariate continuous distributions
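As a minimal sketch of the usage described above, the following Python snippet places an LKJ prior on the Cholesky factor of a covariance matrix in PyMC. It is an illustration, not canonical usage: the exact API (for example, the LKJCholeskyCov signature and its return values) varies between PyMC versions, and the data here are simulated.

```python
import numpy as np
import pymc as pm

# Simulated data: 200 draws from a correlated 3-dimensional Gaussian.
rng = np.random.default_rng(1)
true_cov = np.array([[1.0, 0.5, 0.2],
                     [0.5, 2.0, 0.8],
                     [0.2, 0.8, 1.5]])
data = rng.multivariate_normal(np.zeros(3), true_cov, size=200)

with pm.Model():
    # eta = 2 mildly favors correlation matrices near the identity;
    # sd_dist is the prior on the scale (standard deviation) vector.
    chol, corr, sigmas = pm.LKJCholeskyCov(
        "chol", n=3, eta=2.0,
        sd_dist=pm.Exponential.dist(1.0),
        compute_corr=True,
    )
    pm.MvNormal("obs", mu=np.zeros(3), chol=chol, observed=data)
    idata = pm.sample(draws=1000, tune=1000)  # posterior over the correlation matrix
```

Setting eta=1.0 instead would make the prior uniform over correlation matrices, matching the η = 1 case described above.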
Lewandowski-Kurowicka-Joe distribution
Physics,Mathematics
303
69,860,650
https://en.wikipedia.org/wiki/Atlas%20of%20AI
Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence is a book by Australian academic Kate Crawford. It is based on Crawford's research into the development and labor behind artificial intelligence, as well as AI's impact on the world. Overview The book is mainly concerned with the ethics of artificial intelligence. Chapters 1 and 2 criticise Big Tech in general for exploitation of Earth's resources, such as in the Thacker Pass Lithium Mine, and human labor, such as in Amazon warehouses and the Amazon Mechanical Turk. Crawford also compares "TrueTime" in Google's Spanner with historical efforts to control time associated with colonialism. In Chapters 3 and 4, attention is drawn to the practice of building datasets without consent, and of training on incorrect or biased data, with particular focus on ImageNet and on a failed Amazon project to classify job applicants. Chapter 5 criticises affective computing for employing training sets which, although natural, were labelled by people who had been grounded in controversial emotional expression research by Paul Ekman, in particular his Facial Action Coding System (FACS), which had been based on posed images; it is implied that Affectiva's approach would not sufficiently attenuate the problems of FACS, and attention is drawn to potential inaccurate use of this technology in job interviews without addressing claims that human bias is worse. In Chapter 6, Crawford gives an overview of the secret services' surveillance software as revealed in the leaks of Edward Snowden, with a brief comparison to Cambridge Analytica and the military use of metadata, and recounts Google employees' objections to their unwitting involvement in Project Maven (giving their image recognition a military use) before this was moved to Palantir. Chapter 7 criticises the common perception of AlphaGo as an otherworldly intelligence instead of a natural product of massive brute-force calculation at environmental cost, and Chapter 8 discusses tech billionaires' fantasies of developing private spaceflight to escape resource depletion on Earth. Reception The book received positive reviews from critics, who singled out its exploration of issues like exploitation of labour and the environment, algorithmic bias, and false claims about AI's ability to recognize human emotion. The book was considered a seminal work by Anais Resseguier of Ethics and AI. It was included on the year end booklists of Financial Times, and New Scientist, and the 2021 Choice Outstanding Academic Titles booklist. Data scientist and MIT Technology Review editor Karen Hao praised the book's description of the ethical concerns regarding the labor and history behind artificial intelligence. Sue Halpern of The New York Review commented that she felt the book shined a light on "dehumanizing extractive practices", a sentiment which was echoed by Michael Spezio of Science. Virginia Dignum of Nature positively compared the book's exploration of artificial intelligence to The Alignment Problem by Brian Christian. References 2021 non-fiction books Systems theory books Software development books English non-fiction books English-language non-fiction books Books about the politics of science Sustainability books Non-fiction books about Artificial intelligence Yale University Press books Books in philosophy of technology
Atlas of AI
Technology
636
25,505,540
https://en.wikipedia.org/wiki/Woodhead%20Dam
Woodhead Dam is a dam on Table Mountain, Western Cape, South Africa. It was built in 1897 and supplies water to Cape Town. The dam, which was the first large masonry dam in South Africa, was designated as an International Historic Civil Engineering Landmark by the American Society of Civil Engineers in 2008. History In 1870, the growth of Cape Town led to shortages of drinking water. It was decided to build a reservoir on Table Mountain to provide water to the city. Scottish hydraulic engineer Thomas Stewart was engaged to design and build the reservoir. The Woodhead Tunnel was built between 1888 and 1891. It was used to divert the Disa Stream, a tributary of the Hout Bay River, westward to provide water for the reservoir. An aerial cableway was constructed to transport men and materials to the construction site. The dam was constructed between 1894 and 1897. This dam was followed by four others in the area. The Hely-Hutchinson Dam and reservoir were built by 1904 just upstream of the Woodhead reservoir. The Alexandra Dam and Victoria Dam were built on the original Disa Stream by 1903. The last of the five dams was the De Villiers Dam in 1907. This was built downstream of the Alexandra and Victoria Dams. Today, these five dams supply around 0.4% of the water for Cape Town. Design The Woodhead Tunnel is long. The Woodhead Dam is a masonry gravity dam that is long and high. It has a free overspill spillway with a capacity of 20 m3/s (706 ft3/s). The reservoir has a capacity of and a surface area of . See also List of reservoirs and dams in South Africa References Dams in South Africa Historic Civil Engineering Landmarks Buildings and structures in Cape Town Dams completed in 1897 Masonry dams 19th-century architecture in South Africa
Woodhead Dam
Engineering
363
6,033,433
https://en.wikipedia.org/wiki/Cutan%20%28polymer%29
Cutan is one of two waxy biopolymers which occur in the cuticle of some plants. The other and better-known polymer is cutin. Cutan is believed to be a hydrocarbon polymer, whereas cutin is a polyester, but the structure and synthesis of cutan are not yet fully understood. Cutan is not present in as many plants as once thought; for instance it is absent in Ginkgo. Cutan was first detected as a non-saponifiable component, resistant to de-esterification by alkaline hydrolysis, that increases in amount in cuticles of some species such as Clivia miniata as they reach maturity, apparently replacing the cutin secreted in the early stages of cuticle development. Evidence that cutan is a hydrocarbon polymer comes from the fact that its flash pyrolysis products are a characteristic homologous series of paired alkanes and alkenes, and through 13C-NMR analysis of present-day and fossil plants. Cutan's preservation potential is much greater than that of cutin. Despite this, the low proportion of cutan found in fossilized cuticle shows that it is probably not the cause for the widespread preservation of cuticle in the fossil record. References Further reading Organic polymers Plant anatomy Plant physiology Fossil fuels
Cutan (polymer)
Chemistry,Biology
273
5,412,385
https://en.wikipedia.org/wiki/Tom%20Liston
Tom Liston is the founder and owner of the Johnsburg, Illinois-based network security consulting firm Bad Wolf Security. He is the author of the first network tarpit, the open source LaBrea. He was a finalist for eWeek and PC Magazine's "Innovations In Infrastructure" (i3) award in 2002 for LaBrea. He is one of the handlers at the SANS Institute's Internet Storm Center, where he deals with developing security issues and authors a series of articles under the title "Follow the Bouncing Malware." Liston is also, with Ed Skoudis, co-author of the second edition of the network security book Counter Hack Reloaded: A Step-by-Step Guide to Computer Attacks and Effective Defenses. Works Books References Living people Year of birth missing (living people)
Tom Liston
Technology
169
1,509,289
https://en.wikipedia.org/wiki/Magnetostatics
Magnetostatics is the study of magnetic fields in systems where the currents are steady (not changing with time). It is the magnetic analogue of electrostatics, where the charges are stationary. The magnetization need not be static; the equations of magnetostatics can be used to predict fast magnetic switching events that occur on time scales of nanoseconds or less. Magnetostatics is even a good approximation when the currents are not static – as long as the currents do not alternate rapidly. Magnetostatics is widely used in applications of micromagnetics such as models of magnetic storage devices as in computer memory. Applications Magnetostatics as a special case of Maxwell's equations Starting from Maxwell's equations and assuming that charges are either fixed or move as a steady current J, the equations separate into two equations for the electric field (see electrostatics) and two for the magnetic field. The fields are independent of time and each other. The magnetostatic equations, in both differential and integral forms, are: Gauss's law for magnetism, ∇ · B = 0 (differential form) and ∮S B · dS = 0 (integral form); and Ampère's law, ∇ × H = J (differential form) and ∮C H · dl = I_enc (integral form). Here ∇ with the dot denotes divergence, B is the magnetic flux density, and the first integral is over a surface with oriented surface element dS; ∇ with the cross denotes curl, J is the current density, H is the magnetic field intensity, and the second integral is a line integral around a closed loop with line element dl. The current going through the loop is I_enc. The quality of this approximation may be guessed by comparing the above equations with the full version of Maxwell's equations and considering the importance of the terms that have been removed. Of particular significance is the comparison of the J term against the displacement current term ∂D/∂t. If the J term is substantially larger, then the smaller term may be ignored without significant loss of accuracy. Re-introducing Faraday's law A common technique is to solve a series of magnetostatic problems at incremental time steps and then use these solutions to approximate the term ∂B/∂t. Plugging this result into Faraday's law finds a value for E (which had previously been ignored). This method is not a true solution of Maxwell's equations but can provide a good approximation for slowly changing fields. Solving for the magnetic field Current sources If all currents in a system are known (i.e., if a complete description of the current density J is available) then the magnetic field can be determined, at a position r, from the currents by the Biot–Savart equation: B(r) = (μ0/4π) ∫ J(r′) × (r − r′) / |r − r′|³ d³r′. This technique works well for problems where the medium is a vacuum or air or some similar material with a relative permeability of 1. This includes air-core inductors and air-core transformers. One advantage of this technique is that, if a coil has a complex geometry, it can be divided into sections and the integral evaluated for each section. Since this equation is primarily used to solve linear problems, the contributions can be added. For a very difficult geometry, numerical integration may be used. For problems where the dominant magnetic material is a highly permeable magnetic core with relatively small air gaps, a magnetic circuit approach is useful. When the air gaps are large in comparison to the magnetic circuit length, fringing becomes significant and usually requires a finite element calculation. The finite element calculation uses a modified form of the magnetostatic equations above in order to calculate the magnetic potential. The value of B can be found from the magnetic potential. The magnetic field can be derived from the vector potential A.
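To make the Biot–Savart integration above concrete, here is a minimal numerical sketch in Python. It is an illustration, not a production solver: the loop is discretized into straight current elements, and the result is checked against the standard analytic formula for the on-axis field of a circular loop. Function and variable names are invented for the example.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def loop_axis_field(I, a, z, n=2000):
    """Numerically integrate the Biot-Savart law for a circular loop of
    radius a (m) carrying current I (A); returns B (T) at height z (m)
    on the loop's axis."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dphi = 2.0 * np.pi / n
    # Current-element positions and segment vectors dl (counterclockwise).
    src = np.stack([a * np.cos(phi), a * np.sin(phi), np.zeros_like(phi)], axis=1)
    dl = np.stack([-a * np.sin(phi), a * np.cos(phi), np.zeros_like(phi)], axis=1) * dphi
    r = np.array([0.0, 0.0, z]) - src               # vectors from source points to field point
    r3 = np.linalg.norm(r, axis=1, keepdims=True) ** 3
    dB = (MU0 / (4.0 * np.pi)) * I * np.cross(dl, r) / r3
    return dB.sum(axis=0)

I, a, z = 1.0, 0.1, 0.05
numeric = loop_axis_field(I, a, z)
analytic = MU0 * I * a**2 / (2.0 * (a**2 + z**2) ** 1.5)
print(numeric)   # x and y components ~0; z component matches the analytic value
print(analytic)
```

This is the "divide the coil into sections and add the contributions" strategy described above, applied at a single field point.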
Since the divergence of the magnetic flux density is always zero, B can be written as B = ∇ × A, and the relation of the vector potential to current is: A(r) = (μ0/4π) ∫ J(r′) / |r − r′| d³r′. Magnetization Strongly magnetic materials (i.e., ferromagnetic, ferrimagnetic or paramagnetic) have a magnetization that is primarily due to electron spin. In such materials the magnetization must be explicitly included using the relation B = μ0(H + M). Except in the case of conductors, electric currents can be ignored. Then Ampère's law is simply ∇ × H = 0. This has the general solution H = −∇Φ_M, where Φ_M is a scalar potential. Substituting this in Gauss's law gives ∇²Φ_M = ∇ · M. Thus, the negative divergence of the magnetization, −∇ · M, has a role analogous to the electric charge in electrostatics and is often referred to as an effective charge density ρ_M. The vector potential method can also be employed with an effective current density J_M = ∇ × M. See also Darwin Lagrangian References External links Electric and magnetic fields in matter Potentials
Magnetostatics
Physics,Chemistry,Materials_science,Engineering
873
4,196,082
https://en.wikipedia.org/wiki/Arc%20converter
The arc converter, sometimes called the arc transmitter, or Poulsen arc after Danish engineer Valdemar Poulsen who invented it in 1903, was a variety of spark transmitter used in early wireless telegraphy. The arc converter used an electric arc to convert direct current electricity into radio frequency alternating current. It was used as a radio transmitter from 1903 until the 1920s, when it was replaced by vacuum tube transmitters. One of the first transmitters that could generate continuous sinusoidal waves, it was one of the first technologies used to transmit sound (amplitude modulation) by radio. It is on the list of IEEE Milestones as a historic achievement in electrical engineering. History Elihu Thomson discovered that a carbon arc shunted with a series tuned circuit would "sing". This "singing arc" was probably limited to audio frequencies. The Bureau of Standards credits William Duddell with the shunt resonant circuit around 1900. The English engineer William Duddell discovered how to make a resonant circuit using a carbon arc lamp. Duddell's "musical arc" operated at audio frequencies, and Duddell himself concluded that it was impossible to make the arc oscillate at radio frequencies. Valdemar Poulsen succeeded in raising the efficiency and frequency to the desired level. Poulsen's arc could generate frequencies of up to 200 kilohertz and was patented in 1903. After a few years of development the arc technology was transferred to Germany and Great Britain in 1906 by Poulsen, his collaborator Peder Oluf Pedersen and their financial backers. In 1909 the American patents, as well as a few arc converters, were bought by Cyril Frank Elwell. The subsequent development in Europe and the United States was rather different, since in Europe there were severe difficulties for many years implementing the Poulsen technology, whereas in the United States an extended commercial radiotelegraph system was soon established with the Federal Telegraph Company. Later the US Navy also adopted the Poulsen system. Only the arc converter with passive frequency conversion was suitable for portable and maritime use. This made it the most important mobile radio system for about a decade until it was superseded by vacuum tube systems. In 1922, the Bureau of Standards stated, "the arc is the most widely used transmitting apparatus for high-power, long-distance work. It is estimated that the arc is now responsible for 80 per cent of all the energy actually radiated into space for radio purposes during a given time, leaving amateur stations out of consideration." Description This new, more refined method for generating continuous-wave radio signals was initially developed by Danish inventor Valdemar Poulsen. The spark-gap transmitters in use at that time produced damped waves, which wasted a large portion of their radiated power transmitting strong harmonics on multiple frequencies that filled the RF spectrum with interference. Poulsen's arc converter produced undamped or continuous waves (CW) on a single frequency. There are three types of arc oscillator: Duddell arc (and other early types) In the first type of arc oscillator, the AC current in the condenser is much smaller than the DC supply current, and the arc is never extinguished during an output cycle. The Duddell arc is an example of the first type, but the first type is not practical for RF transmitters.
Poulsen arc In the second type of arc oscillator, the condenser AC discharge current is large enough to extinguish the arc but not large enough to restart the arc in the opposite direction. This second type is the Poulsen arc. Quenched spark gap In the third type of arc oscillator, the arc extinguishes but may reignite when the condenser current reverses. The third case is a quenched spark gap and produces damped oscillations. Continuous or ‘undamped’ waves (CW) were an important feature, since the use of damped waves from spark-gap transmitters resulted in lower transmitter efficiency and communications effectiveness, while polluting the RF spectrum with interference. The Poulsen arc converter had a tuned circuit connected across the arc. The arc converter consisted of a chamber in which the arc burned in hydrogen gas between a carbon cathode and a water-cooled copper anode. Above and below this chamber there were two series field coils surrounding and energizing the two poles of the magnetic circuit. These poles projected into the chamber, one on each side of the arc to provide a magnetic field. It was most successful when operated in the frequency range of a few kilohertz to a few tens of kilohertz. The antenna tuning had to be selective enough to suppress the arc converter's harmonics. Keying Since the arc took some time to strike and operate in a stable fashion, normal on-off keying could not be used. Instead, a form of frequency-shift keying was employed. In this compensation-wave method, the arc operated continuously, and the key altered the frequency of the arc by one to five percent. The signal at the unwanted frequency was called the compensation-wave. In arc transmitters up to 70 kW, the key typically shorted out a few turns in the antenna coil. For larger arcs, the arc output would be transformer coupled to the antenna inductor, and the key would short out a few bottom turns of the grounded secondary. Therefore, the "mark" (key closed) was sent at one frequency, and the "space" (key open) at another frequency. If these frequencies were far enough apart, and the receiving station's receiver had adequate selectivity, the receiving station would hear standard CW when tuned to the "mark" frequency. The compensation wave method used a lot of spectrum bandwidth. It not only transmitted on the two intended frequencies, but also the harmonics of those frequencies. Arc converters are rich in harmonics. Sometime around 1921, the Preliminary International Communications Conference prohibited the compensation wave method because it caused too much interference. The need for the emission of signals at two different frequencies was eliminated by the development of uniwave methods. In one uniwave method, called the ignition method, keying would start and stop the arc. The arc chamber would have a striker rod that shorted out the two electrodes through a resistor and extinguished the arc. The key would energize an electromagnet that would move the striker and reignite the arc. For this method to work, the arc chamber had to be hot. The method was feasible for arc converters up to about 5 kW. The second uniwave method is the absorption method, and it involves two tuned circuits and a single-pole, double-throw, make-before-break key. When the key is down, the arc is connected to the tuned antenna coil and antenna. When the key is up, the arc is connected to a tuned dummy antenna called the back shunt. 
The back shunt was a second tuned circuit consisting of an inductor, a capacitor, and a load resistor in series. This second circuit is tuned to roughly the same frequency as the transmitted frequency; it keeps the arc running, and it absorbs the transmitter power. The absorption method is apparently due to W. A. Eaton. The design of the switching circuit for the absorption method is significant. Because it switches a high-voltage arc, the switch's contacts must have some form of arc suppression. Eaton had the telegraph key drive electromagnets that operated a relay. That relay used four sets of switch contacts in series for each of the two paths (one to the antenna and one to the back shunt). Each relay contact was bridged by a resistor. Consequently, the switch was never completely open, but there was a lot of attenuation. See also History of radio Transmitter Mercury arc valve Tikker References Revised to April 24, 1921. http://www.forgottenbooks.org. Elihu Thomson made singing arc before Duddell, p. 125. Further reading History of radio in 1925. Page 25: "Professor Elihu Thomson, of America, applied for a patent on an arc method of producing high-frequency currents. His invention incorporated a magnetic blowout and other essential features of the arc of to-day, but the electrodes were of metal and not enclosed in a gas chamber." Cites to US Patent 500630. Pages 30–31 (1900): "William Du Bois Duddell, of London, applied for a patent on a static method of generating alternating currents from a direct-current supply, which method followed very closely upon the lines of that of Elihu Thomson of 1892. Duddell suggested electrodes of carbon, but he proposed no magnetic blow-out. He stated that his invention could be used for producing oscillations of high frequency and constant amplitude which could "be used with advantage in wireless telegraphy," especially where it was "required to tune the transmitter to syntony." Duddell's invention (Br. Pat. 21,629/00) became the basis for the Poulsen Arc, and also of an interesting transmitter evolved by Von Lepel." Page 31 (1903): "Valdemar Poulsen, of Copenhagen, successfully applied for a patent upon a generator, as disclosed by Duddell in 1900, plus magnetic blow-out proposed by Thomson in 1892, and a hydrogenous vapour in which to immerse the arc. (Br. Pate 15,599/03; U.S. Pat 789,449.)" Also Ch. IV, pp 75–77, "The Poulsen Arc". Refinements by C. F. Elwell. Cyril Frank Elwell - Pioneer of American and European Wireless Communications, Talking Pictures and founder of C.F. Elwell Limited, 1922-1925 by Ian L. Sanders. Published by Castle Ridge Press, 2013. (Details the development of the arc generator in the United States and Europe by Elwell.) External links http://oz6gh.byethost33.com/poulsenarc.htm, Modulation of the Poulsen arc, from the book Radio Telephony, 1918 by Alfred N. Goldsmith. https://web.archive.org/web/20120210081832/http://www.stenomuseet.dk/person/hb.ukref.htm, English summary of the Danish Ph.D. dissertation, The Arc Transmitter - a Comparative Study of the Invention, Development and Innovation of the Poulsen System in Denmark, England and the United States, by Hans Buhl, 1995 http://pe2bz.philpem.me.uk/Comm/-%20ELF-VLF/-%20Info/-%20History/PoulsenArcOscillator/poulsen1.htm https://www.gukit.ru/sites/default/files/ogpage_files/2017/09/Dugovoy_peredatchik.pdf - From the electric arc of Petrov to the radio broadcast of speech.
History of radio technology Radio electronics Electric arcs Telecommunications-related introductions in 1902 Electric power conversion History of electronic engineering
Arc converter
Physics,Engineering
2,343
15,184,596
https://en.wikipedia.org/wiki/IEC%2061970
The IEC 61970 series of standards by the International Electrotechnical Commission (IEC) deals with the application program interfaces for energy management systems (EMS). The series provides a set of guidelines and standards to facilitate: The integration of applications developed by different suppliers in the control center environment The exchange of information to systems external to the control center environment, including transmission, distribution and generation systems external to the control center that need to exchange real-time data with the control center The provision of suitable interfaces for data exchange across legacy and new systems Set of standards The complete set of standards includes the following parts: Part 1: Guidelines and general requirements Part 2: Glossary Part 3XX: Common Information Model (CIM) Part 4XX: Component Interface Specification (CIS) Part 5XX: CIS Technology Mappings See also CIM Profile IEC 61850 IEC 61968 MultiSpeak External links CIM Users Group 61970
IEC 61970
Technology
189
48,719,972
https://en.wikipedia.org/wiki/Diversification%20rates
Diversification rates are the rates at which new species form (the speciation rate, λ) and living species go extinct (the extinction rate, μ). Diversification rates can be estimated from fossils, from data on the species diversity of clades and their ages, or from phylogenetic trees. Diversification rates are typically reported on a per-lineage basis (e.g. speciation rate per lineage per unit of time), and refer to the diversification dynamics expected under a birth–death process. A broad range of studies have demonstrated that diversification rates can vary tremendously both through time and across the tree of life. Current research efforts are focused on predicting diversification rates based on aspects of species or their environment. Diversification rates are also subject to various survivorship biases such as the "push of the past". Methods for estimating diversification rates Fossil time series Diversification rates can be estimated from time-series data on fossil occurrences. With perfect data, this would be an easy task; one could just count the number of speciation and extinction events in a given time interval, and then use these data to calculate per-lineage rates of speciation and extinction per unit time. However, the incomplete nature of the fossil record means that our calculations need to include the possibility that some fossil lineages were not sampled, and that we do not have precise estimates for the times of speciation and extinction of the taxa that are sampled. More sophisticated methods account for the probability of sampling any lineage, which might also depend on some properties of the lineage itself (e.g. whether it has any hard body parts that tend to fossilize) as well as the environment in which it lives. Many estimates of diversification rates for fossil lineages are for higher-level taxonomic groups like genera or families. Such rates are informative about general patterns and trends of diversification through time and across clades but can be difficult to compare directly to rates of speciation and extinction of individual species. Clade age and diversity Diversification rates can be estimated from data on the ages and diversities of monophyletic clades in the tree of life. For example, if a clade is 100 million years old and includes 1000 species, we can estimate the net diversification rate of that clade by using a formula derived from a birth–death model of diversification: assuming extinction is negligible, the net rate is r = ln(N)/t, where N is the number of living species and t is the clade's age; here r = ln(1000)/100 ≈ 0.069 speciation events per lineage per million years. Equations are also available for estimating speciation and extinction rates separately when one has ages and diversities for multiple clades. Phylogenetic trees Diversification rates can be estimated using the information available in phylogenetic trees. To calculate diversification rates, such phylogenetic trees have to include branch lengths. Various methods are available to estimate speciation and extinction rates from phylogenetic trees using both maximum likelihood and Bayesian statistical approaches. One can also use phylogenetic trees to test for changing rates of speciation and/or extinction, both through time and across clades, and to associate rates of evolution with potential explanatory factors. Diversification rates through time and across clades References Phylogenetics Evolution
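The clade age and diversity estimator above is simple to implement. The following short Python sketch reproduces the worked example under the stated zero-extinction assumption; the function name is illustrative, and the crown-age variant subtracts ln(2) because a crown group starts with two lineages (following the method-of-moments approach of Magallón and Sanderson).

```python
import math

def net_diversification_rate(n_species, age, crown=False):
    """Method-of-moments estimate of the net diversification rate
    (speciation minus extinction) from clade age and species richness,
    assuming extinction is negligible."""
    if crown:
        # A crown group begins with two lineages.
        return (math.log(n_species) - math.log(2.0)) / age
    # A stem group begins with a single lineage.
    return math.log(n_species) / age

# The worked example from the text: 1000 species, 100 million years.
print(net_diversification_rate(1000, 100.0))              # ~0.069 per lineage per Myr (stem age)
print(net_diversification_rate(1000, 100.0, crown=True))  # ~0.062 per lineage per Myr (crown age)
```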
Diversification rates
Biology
607
1,411,074
https://en.wikipedia.org/wiki/Strontium%20chloride
Strontium chloride (SrCl2) is a salt of strontium and chloride. It is a 'typical' salt, forming neutral aqueous solutions. As with all compounds of strontium, this salt emits a bright red colour in a flame, and it is commonly used in fireworks to that effect. Its properties are intermediate between those of barium chloride, which is more toxic, and calcium chloride. Preparation Strontium chloride can be prepared by treating aqueous strontium hydroxide or strontium carbonate with hydrochloric acid: Sr(OH)2 + 2 HCl → SrCl2 + 2 H2O SrCO3 + 2 HCl → SrCl2 + H2O + CO2 Crystallization from cold aqueous solution gives the hexahydrate, SrCl2·6H2O. Dehydration of this salt occurs in stages, commencing above . Full dehydration occurs at . Structure In the solid state, SrCl2 adopts a fluorite structure. In the vapour phase the SrCl2 molecule is non-linear, with a Cl-Sr-Cl angle of approximately 130°. This is an exception to VSEPR theory, which would predict a linear structure. Ab initio calculations have been cited to propose that contributions from d orbitals in the shell below the valence shell are responsible. Another proposal is that polarisation of the electron core of the strontium atom causes a distortion of the core electron density that interacts with the Sr-Cl bonds. Uses Strontium chloride is a precursor to other compounds of strontium, such as yellow strontium chromate, strontium carbonate, and strontium sulfate. Exposure of aqueous solutions of strontium chloride to the sodium salt of the desired anion often leads to formation of the solid precipitate: SrCl2 + Na2CrO4 → SrCrO4 + 2 NaCl SrCl2 + Na2CO3 → SrCO3 + 2 NaCl SrCl2 + Na2SO4 → SrSO4 + 2 NaCl Strontium chloride is often used as a red colouring agent in pyrotechnics. It imparts a much more intense red colour to the flames than most alternatives. It is employed in small quantities in glass-making and metallurgy. The radioactive isotope strontium-89, used for the treatment of bone cancer, is usually administered in the form of strontium chloride. Seawater aquaria require small amounts of strontium chloride, which is consumed during the growth of certain plankton. Dental care SrCl2 is useful in reducing tooth sensitivity by forming a barrier over the microscopic tubules in the dentin that contain nerve endings exposed by gum recession. Known in the U.S. as Elecol and Sensodyne, these products are called "strontium chloride toothpastes", although most now use potassium nitrate (KNO3) instead, which works as an analgesic rather than a barrier. Biological research Brief strontium chloride exposure induces parthenogenetic activation of oocytes, which is used in developmental biological research. Ammonia storage A commercial company is using a strontium chloride-based artificial solid called AdAmmine as a means to store ammonia at low pressure, mainly for use in NOx emission reduction on diesel vehicles. They claim that their patented material can also be made from some other salts, but they have chosen strontium chloride for mass production. Earlier company research also considered using the stored ammonia as a means to store synthetic ammonia fuel under the trademark HydrAmmine and the press name "hydrogen tablet"; however, this aspect has not been commercialized. Their processes and materials are patented. Their early experiments used magnesium chloride. Soil testing Strontium chloride is used with citric acid in soil testing as a universal extractant of plant nutrients.
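As a numerical illustration of the 1:1 stoichiometry in the precipitation reactions listed under Uses, here is a short Python sketch. The molar masses are standard atomic-weight values; the function name and sample mass are invented for the example, which assumes excess sodium sulfate and complete precipitation.

```python
# Standard atomic weights (g/mol).
M_SR, M_CL, M_S, M_O = 87.62, 35.45, 32.06, 16.00

M_SRCL2 = M_SR + 2 * M_CL        # 158.52 g/mol
M_SRSO4 = M_SR + M_S + 4 * M_O   # 183.68 g/mol

def srso4_from_srcl2(grams_srcl2):
    """Theoretical SrSO4 precipitate mass from a given SrCl2 mass,
    assuming excess Na2SO4 and complete 1:1 precipitation."""
    return grams_srcl2 / M_SRCL2 * M_SRSO4

print(srso4_from_srcl2(10.0))  # ~11.6 g of SrSO4 from 10 g of SrCl2
```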
References External links Chlorides Strontium compounds Alkaline earth metal halides Fluorite crystal structure
Strontium chloride
Chemistry
801
14,457,671
https://en.wikipedia.org/wiki/Sedoheptulose-bisphosphatase
Sedoheptulose-bisphosphatase (also sedoheptulose-1,7-bisphosphatase or SBPase, EC number 3.1.3.37; systematic name sedoheptulose-1,7-bisphosphate 1-phosphohydrolase) is an enzyme that catalyzes the removal of a phosphate group from sedoheptulose 1,7-bisphosphate to produce sedoheptulose 7-phosphate. SBPase is an example of a phosphatase, or, more generally, a hydrolase. This enzyme participates in the Calvin cycle. Structure SBPase is a homodimeric protein, meaning that it is made up of two identical subunits. The size of this protein varies between species, but is about 92,000 Da (two 46,000 Da subunits) in cucumber plant leaves. The key functional domain controlling SBPase function involves a disulfide bond between two cysteine residues. These two cysteine residues, Cys52 and Cys57, appear to be located in a flexible loop between the two subunits of the homodimer, near the active site of the enzyme. Reduction of this regulatory disulfide bond by thioredoxin induces a conformational change in the active site, activating the enzyme. Additionally, SBPase requires the presence of magnesium (Mg2+) to be functionally active. SBPase is bound to the stroma-facing side of the thylakoid membrane in the chloroplast of a plant. Some studies have suggested that SBPase may be part of a large (900 kDa) multi-enzyme complex along with a number of other photosynthetic enzymes. Regulation SBPase is involved in the regeneration of 5-carbon sugars during the Calvin cycle. Although SBPase has not historically been emphasized as an important control point in the Calvin cycle, it plays a large part in controlling the flux of carbon through the cycle. Additionally, SBPase activity has been found to correlate strongly with the amount of photosynthetic carbon fixation. Like many Calvin cycle enzymes, SBPase is activated in the presence of light through a ferredoxin/thioredoxin system. In the light reactions of photosynthesis, light energy powers the transport of electrons to eventually reduce ferredoxin. The enzyme ferredoxin-thioredoxin reductase uses reduced ferredoxin to reduce thioredoxin from the disulfide form to the dithiol. Finally, the reduced thioredoxin is used to reduce a cysteine-cysteine disulfide bond in SBPase to a dithiol, which converts SBPase into its active form. SBPase has additional levels of regulation beyond the ferredoxin/thioredoxin system. Mg2+ concentration has a significant impact on the activity of SBPase and the rate of the reactions it catalyzes. SBPase is inhibited by acidic conditions (low pH); this is a large contributor to the overall inhibition of carbon fixation when the pH is low inside the stroma of the chloroplast. Finally, SBPase is subject to negative feedback regulation by sedoheptulose-7-phosphate and inorganic phosphate, the products of the reaction it catalyzes. Evolutionary origin SBPase and FBPase (fructose-1,6-bisphosphatase, EC 3.1.3.11) are both phosphatases that catalyze similar reactions in the Calvin cycle. The genes for SBPase and FBPase are related. Both genes are found in the nucleus in plants, and have bacterial ancestry. SBPase is found across many species. In addition to being universally present in photosynthetic organisms, SBPase is found in a number of evolutionarily related, non-photosynthetic microorganisms. SBPase likely originated in red algae. Horticultural relevance More so than other enzymes in the Calvin cycle, SBPase levels have a significant impact on plant growth, photosynthetic ability, and response to environmental stresses.
Horticultural relevance More so than other enzymes in the Calvin cycle, SBPase levels have a significant impact on plant growth, photosynthetic ability, and response to environmental stresses. Small decreases in SBPase activity result in decreased photosynthetic carbon fixation and reduced plant biomass. Specifically, decreased SBPase levels result in stunted plant organ growth and development compared to wild-type plants, and starch levels decrease linearly with decreases in SBPase activity, suggesting that SBPase activity is a limiting factor for carbon assimilation. This sensitivity of plants to decreased SBPase activity is significant, as SBPase itself is susceptible to oxidative damage and inactivation from environmental stresses. SBPase contains several catalytically relevant cysteine residues that are vulnerable to irreversible oxidative carbonylation by reactive oxygen species (ROS), particularly by hydroxyl radicals created during the production of hydrogen peroxide. Carbonylation results in SBPase inactivation and subsequent growth retardation due to inhibition of carbon assimilation. Oxidative carbonylation of SBPase can be induced by environmental pressures such as chilling, which causes an imbalance in metabolic processes leading to increased production of reactive oxygen species, particularly hydrogen peroxide. Notably, chilling inhibits SBPase and a related enzyme, fructose bisphosphatase, but does not affect other reductively activated Calvin cycle enzymes. The sensitivity of plants to experimentally reduced or inhibited SBPase levels provides an opportunity for crop engineering. There are significant indications that transgenic plants which overexpress SBPase may be useful in improving food production efficiency, producing crops that are more resilient to environmental stresses as well as crops with earlier maturation and higher yield. Overexpression of SBPase in transgenic tomato plants provided resistance to chilling stress, with the transgenic plants maintaining higher SBPase activity, increased carbon dioxide fixation, reduced electrolyte leakage and increased carbohydrate accumulation relative to wild-type plants under the same chilling stress. It is also likely that such transgenic plants would be more resilient to osmotic stress caused by drought or salinity, as the activation of SBPase has been shown to be inhibited in chloroplasts exposed to hypertonic conditions, though this has not been directly tested. Overexpression of SBPase in transgenic tobacco plants resulted in enhanced photosynthetic efficiency and growth. Specifically, transgenic plants exhibited greater biomass and improved carbon dioxide fixation, as well as an increase in RuBisCO activity. The plants grew significantly faster and larger than wild-type plants, with increased sucrose and starch levels. References Further reading Photosynthesis EC 3.1.3
Sedoheptulose-bisphosphatase
Chemistry,Biology
1,433
18,756,845
https://en.wikipedia.org/wiki/History%20of%20attachment%20theory
Attachment theory, originating in the work of John Bowlby, is a psychological, evolutionary and ethological theory that provides a descriptive and explanatory framework for understanding interpersonal relationships between human beings. In order to formulate a comprehensive theory of the nature of early attachments, Bowlby explored a range of fields including evolution by natural selection, object relations theory (psychoanalysis), control systems theory, evolutionary biology, ethology and cognitive psychology. There were some preliminary papers from 1958 onwards, but the full theory was published in the trilogy Attachment and Loss (1969–82). Although in the early days Bowlby was criticised by academic psychologists and ostracised by the psychoanalytic community, attachment theory has become the dominant approach to understanding early social development and has given rise to a great surge of empirical research into the formation of children's close relationships. Brief description of theory In infants, behavior associated with attachment is primarily a process of proximity seeking to an identified attachment figure in situations of perceived distress or alarm, for the purpose of survival. Infants become attached to adults who are sensitive and responsive in social interactions with the infant, and who remain consistent caregivers for some months during the period from about six months to two years of age. During the later part of this period, children begin to use attachment figures (familiar people) as a secure base to explore from and return to. Parental responses lead to the development of patterns of attachment, which in turn lead to 'internal working models' which will guide the individual's feelings, thoughts, and expectations in later relationships. Separation anxiety or grief following serious loss are normal and natural responses in an attached infant. The human infant is considered by attachment theorists to have a need for a secure relationship with adult caregivers, without which normal social and emotional development will not occur. However, different relationship experiences can lead to different developmental outcomes. Mary Ainsworth developed a theory of a number of attachment patterns or "styles" in infants in which distinct characteristics were identified; these were secure attachment, avoidant attachment, anxious attachment and, later, disorganized attachment. In addition to care-seeking by children, peer relationships of all ages, romantic and sexual attraction, and responses to the care needs of infants or sick or elderly adults may be construed as including some components of attachment behavior. Earlier theories A theory of attachment is a framework of ideas that attempts to explain attachment, the almost universal human tendency to prefer certain familiar companions over other people, especially when ill, injured, or distressed. Historically, certain social preferences, like those of parents for their children, were explained by reference to instinct or to the moral worth of the individual. The concept of infants' emotional attachment to caregivers has been known anecdotally for hundreds of years. Most early observers focused on the anxiety displayed by infants and toddlers when threatened with separation from a familiar caregiver. Psychological theories about attachment were suggested from the late nineteenth century onward.
Freudian theory attempted a systematic consideration of infant attachment and attributed the infant's attempts to stay near the familiar person to motivation learned through feeding experiences and gratification of libidinal drives. In the 1930s, the British developmentalist Ian Suttie put forward the suggestion that the child's need for affection was a primary one, not based on hunger or other physical gratifications. A third theory prevalent at the time of Bowlby's development of attachment theory was "dependency". This approach posited that infants were dependent on adult caregivers but that dependency was, or should be, outgrown as the individual matured. Such an approach perceived attachment behaviour in older children as regressive, whereas within attachment theory older children and adults remain attached, and indeed a secure attachment is associated with independent exploratory behaviour rather than dependence. William Blatz, a Canadian psychologist and teacher of Bowlby's colleague Mary Ainsworth, was among the first to stress the need for security as a normal part of personality at all ages, as well as the normality of using others as a secure base and the importance of social relationships for other aspects of development. Current attachment theory focuses on social experiences in early childhood as the source of attachment in childhood and in later life. Attachment theory was developed by Bowlby as a consequence of his dissatisfaction with existing theories of early relationships. Early developments Bowlby was influenced by the beginnings of the object relations school of psychoanalysis, and in particular Melanie Klein, although he profoundly disagreed with the psychoanalytic belief then prevalent that saw infants' responses as relating to their internal fantasy life rather than to real life events. As Bowlby began to formulate his concept of attachment, he was influenced by case studies by Levy, Powdermaker, Lowrey, Bender and Goldfarb. An example is David Levy's case study, which attributed an adopted child's lack of social emotion to her early emotional deprivation. Bowlby himself was interested in the role played in delinquency by poor early relationships, and explored this in a study of young thieves. Bowlby's contemporary René Spitz proposed that "psychotoxic" results were brought about by inappropriate experiences of early care. A strong influence was the work of James and Joyce Robertson, who filmed the effects of separation on children in hospital. They and Bowlby collaborated in making the 1952 documentary film A Two-Year-Old Goes to Hospital, illustrating the impact of loss and suffering experienced by young children separated from their primary caretakers. This film was instrumental in a campaign to alter hospital restrictions on visiting by parents. In his 1951 monograph for the World Health Organization, Maternal Care and Mental Health, Bowlby put forward the hypothesis that "the infant and young child should experience a warm, intimate, and continuous relationship with his mother (or permanent mother substitute) in which both find satisfaction and enjoyment", and that failure to provide such a relationship might have significant and irreversible mental health consequences. This proposition was both influential in terms of its effect on the institutional care of children, and highly controversial. There was limited empirical data at the time and no comprehensive theory to account for such a conclusion.
Attachment theory Following the publication of Maternal Care and Mental Health, Bowlby sought new understanding from such fields as evolutionary biology, ethology, developmental psychology, cognitive science and control systems theory, and drew upon them to formulate the innovative proposition that the mechanisms underlying an infant's tie to its caregiver emerged as a result of evolutionary pressure. He realised that he had to develop a new theory of motivation and behaviour control, built on up-to-date science rather than the outdated psychic energy model espoused by Freud. Bowlby expressed himself as having made good, in his later work Attachment and Loss (published between 1969 and 1980), the "deficiencies of the data and the lack of theory to link alleged cause and effect" of Maternal Care and Mental Health. Bowlby first formally presented the emerging theory in three highly controversial lectures delivered to the British Psychoanalytical Society in London in 1957. The formal origin of attachment theory can be traced to the publication of two 1958 papers: one being Bowlby's The Nature of the Child's Tie to his Mother, in which the precursory concepts of "attachment" were introduced, and the other Harry Harlow's The Nature of Love, based on the results of experiments which showed, broadly, that infant rhesus monkeys spent more time with soft mother-like dummies that offered no food than they did with dummies that provided a food source but were less pleasant to the touch. Bowlby followed this up with two more papers, Separation Anxiety (1960a) and Grief and Mourning in Infancy and Early Childhood (1960b). At about the same time, Bowlby's former colleague Mary Ainsworth was completing extensive observational studies on the nature of infant attachments in Uganda with Bowlby's ethological theories in mind. Mary Ainsworth's innovative methodology and comprehensive observational studies informed much of the theory, expanded its concepts and enabled some of its tenets to be empirically tested. Attachment theory was finally presented in 1969 in Attachment, the first volume of the Attachment and Loss trilogy. The second and third volumes, Separation: Anxiety and Anger and Loss: Sadness and Depression, followed in 1972 and 1980 respectively. Attachment was revised in 1982 to incorporate more recent research. Ethology Bowlby's attention was first drawn to ethology when he read Lorenz's 1952 publication in draft form, although Lorenz had published much earlier work. Soon after this he encountered the work of Tinbergen, and began to collaborate with Robert Hinde. In 1953 he stated "the time is ripe for a unification of psychoanalytic concepts with those of ethology, and to pursue the rich vein of research which this union suggests". Konrad Lorenz had examined the phenomenon of "imprinting" and felt that it might have some parallels to human attachment. Imprinting, a behavior characteristic of some birds and a very few mammals, involves rapid learning of recognition by a young bird or animal exposed to a conspecific or an object or organism that behaves suitably. The learning is possible only within a limited age period, known as a critical period. This rapid learning and development of familiarity with an animate or inanimate object is accompanied by a tendency to stay close to the object and to follow when it moves; the young creature is said to have been imprinted on the object when this occurs.
As the imprinted bird or animal reaches reproductive maturity, its courtship behavior is directed toward objects that resemble the imprinting object. Bowlby's attachment concepts later included the ideas that attachment involves learning from experience during a limited age period, and that the learning that occurs during that time influences adult behavior. However, he did not apply the imprinting concept in its entirety to human attachment, nor assume that human development was as simple as that of birds. He did, however, consider that attachment behavior was best explained as instinctive in nature, an approach that does not rule out the effect of experience, but that stresses the readiness the young child brings to social interactions. Some of Lorenz's work had been done years before Bowlby formulated his ideas, and indeed some ideas characteristic of ethology were already discussed among psychoanalysts some time before the presentation of attachment theory. Psychoanalysis Bowlby's view of attachment was also influenced by psychoanalytical concepts and the earlier work of psychoanalysts. In particular he was influenced by observations of young children separated from familiar caregivers, as provided during World War II by Anna Freud and her colleague Dorothy Burlingham. Observations of separated children's grief by René Spitz were another important factor in the development of attachment theory. However, Bowlby rejected psychoanalytical explanations for early infant bonds. He rejected both Freudian "drive theory", which he called the Cupboard Love theory of relationships, and early object-relations theory, as both in his view failed to see attachment as a psychological bond in its own right rather than an instinct derived from feeding or sexuality. Thinking in terms of primary attachment and neo-Darwinism, Bowlby identified what he saw as fundamental flaws in psychoanalysis: the overemphasis of internal dangers at the expense of external threat, and the picture of the development of personality via linear "phases", with "regression" to fixed points accounting for psychological illness. Instead he posited that several lines of development were possible, the outcome of which depended on the interaction between the organism and the environment. In attachment this would mean that although a developing child has a propensity to form attachments, the nature of those attachments depends on the environment to which the child is exposed. Internal working model The important concept of the internal working model of social relationships was adopted by Bowlby from the work of the philosopher Kenneth Craik, who had noted the adaptiveness of the ability of thought to predict events, and stressed the survival value of and natural selection for this ability. According to Craik, prediction occurs when a "small-scale model" consisting of brain events is used to represent not only the external environment, but the individual's own possible actions. This model allows a person to mentally try out alternatives and to use knowledge of the past in responding to the present and future. At about the same time that Bowlby was applying Craik's ideas to the study of attachment, other psychologists were using these concepts in discussion of adult perception and cognition. Cybernetics The theory of control systems (cybernetics), developing during the 1930s and '40s, influenced Bowlby's thinking.
The young child's need for proximity to the attachment figure was seen as balancing homeostatically with the need for exploration. The actual distance maintained would be greater or less as the balance of needs changed; for example, the approach of a stranger, or an injury, would cause the child to seek proximity when a moment before he had been exploring at a distance. Behavioural development and attachment Behaviour analysts have constructed models of attachment. Such models are based on the importance of contingent relationships, and have received support from research and meta-analytic reviews. Developments Although research on attachment behaviors continued after Bowlby's death in 1990, there was a period of time when attachment theory was considered to have run its course. Some authors argued that attachment should not be seen as a trait (a lasting characteristic of the individual), but instead should be regarded as an organizing principle with varying behaviors resulting from contextual factors. Related later research looked at cross-cultural differences in attachment, and concluded that there should be re-evaluation of the assumption that attachment is expressed identically in all humans. In a study conducted in Sapporo, Behrens et al. (2007) found attachment distributions consistent with global norms using the six-year Main & Cassidy scoring system for attachment classification. Interest in attachment theory continued, and the theory was later extended to adult romantic relationships by Cindy Hazan and Phillip Shaver. Peter Fonagy and Mary Target have attempted to bring attachment theory and psychoanalysis into a closer relationship by way of such aspects of cognitive science as mentalization, the ability to estimate what the beliefs or intentions of another person may be. A "natural experiment" has permitted extensive study of attachment issues, as researchers have followed the thousands of Romanian orphans who were adopted into Western families after the end of Nicolae Ceauşescu's regime. The English and Romanian Adoptees Study Team, led by Michael Rutter, has followed some of the children into their teens, attempting to unravel the effects of poor attachment, adoption and new relationships, and the physical and medical problems associated with their early lives. Studies of the Romanian adoptees, whose initial conditions were shocking, have in fact yielded reason for optimism. Many of the children have developed quite well, and the researchers have noted that separation from familiar people is only one of many factors that help to determine the quality of development. Neuroscientific studies are examining the physiological underpinnings of observable attachment style, such as vagal tone, which influences capacities for intimacy; stress response, which influences threat reactivity (Lupien, McEwan, Gunnar & Heim, 2009); and neuroendocrine factors such as oxytocin. These types of studies underscore the fact that attachment is an embodied capacity, not only a cognitive one. Effects of changing times and approaches Some authors have noted the connection of attachment theory with Western family and child care patterns characteristic of Bowlby's time. The implication of this connection is that attachment-related experiences (and perhaps attachment itself) may alter as young children's experiences of care change historically.
For example, changes in attitudes toward female sexuality have greatly increased the numbers of children living with their never-married mothers and being cared for outside the home while the mothers work. This social change, in addition to increasing abortion rates, has also made it more difficult for childless people to adopt infants in their own countries, and has increased the number of older-child adoptions and adoptions from third-world sources. Adoptions and births to same-sex couples have increased in number and even gained some legal protection, compared to their status in Bowlby's time. One focus of attachment research has been on the difficulties of children whose attachment history was poor, including those with extensive non-parental child care experiences. Concern about the effects of child care was intense during the so-called "day care wars" of the late 20th century, during which the deleterious effects of day care were stressed. As a beneficial result of this controversy, training of child care professionals has come to stress attachment issues and the need for relationship-building through techniques such as assignment of a child to a specific care provider. Although only high-quality child care settings are likely to follow through on these considerations, a larger number of infants in child care receive attachment-friendly care than was the case in the past, and the emotional development of children in nonparental care may be different today than it was in the 1980s or in Bowlby's time. Finally, any critique of attachment theory needs to consider how the theory has connected with changes in other psychological theories. Research on attachment issues has begun to include concepts related to behaviour genetics and to the study of temperament (constitutional factors in personality), but it is unusual for popular presentations of attachment theory to include these. Importantly, some researchers and theorists have begun to connect attachment with the study of mentalization or Theory of Mind, the capacity that allows human beings to guess with some accuracy what thoughts, emotions, and intentions lie behind behaviours as subtle as facial expression or eye movement. The connection of theory of mind with the internal working model of social relationships may open a new area of study and lead to alterations in attachment theory. Reception 1950s to the 1970s The maternal deprivation hypothesis, attachment theory's precursor, was enormously controversial. Ten years after the publication of the hypothesis, Ainsworth listed nine concerns that she felt were the chief points of controversy. Ainsworth separated the three dimensions of maternal deprivation into lack of maternal care, distortion of maternal care and discontinuity of maternal care. She analysed the dozens of studies undertaken in the field and concluded that the basic assertions of the maternal deprivation hypothesis were sound, although the controversy continued. As the formulation of attachment theory progressed, critics commented on empirical support for the theory and on possible alternative explanations for results of empirical research. Wootton questioned the suggestion that early attachment history (as it would now be called) had a lifelong impact.
Attachment theory reached the DDR (East Germany) in 1957 through an essay by James Robertson in the Zeitschrift für ärztliche Fortbildung (a journal for continuing medical education), and Eva Schmidt-Kolmer published extracts from Bowlby's WHO monograph Maternal Care and Mental Health. At the end of the 1950s this prompted extensive comparative developmental-psychological studies in the DDR, comparing infants and small children raised in families with those in day nurseries, weekly residential nurseries and institutions. With regard to morbidity, physical and emotional development, and adaptation disturbances on change of environment, the findings spoke in favour of the family-raised children. After the construction of the Berlin Wall in 1961, no further publications on attachment theory or comparative studies with family-raised children appeared in the DDR; the earlier research results were not pursued and, like attachment theory itself, fell into oblivion there in the subsequent years. In the 1970s, problems with the emphasis on attachment as a trait (a stable characteristic of an individual) rather than as a type of behaviour with important organising functions and outcomes led some authors to consider that "attachment (as implying anything but infant-adult interaction) [may be said to have] outlived its usefulness as a developmental construct..." and that attachment behaviours were best understood in terms of their functions in the child's life. Children may achieve a given function, such as a sense of security, in many different ways, and the various but functionally comparable behaviours should be categorized as related to each other. This way of thinking saw the secure base concept (the organisation of exploration of an unfamiliar situation around returns to a familiar person) as "central to the logic and coherence of attachment theory and to its status as an organizational construct." Similarly, Thompson pointed out that "other features of early parent-child relationships that develop concurrently with attachment security, including negotiating conflict and establishing cooperation, also must be considered in understanding the legacy of early attachments." Specific disciplines Psychoanalysis From an early point in the development of attachment theory, there was criticism of the theory's lack of congruence with the various branches of psychoanalysis. Like other members of the British object-relations group, Bowlby rejected Melanie Klein's views that considered the infant to have certain mental capacities at birth and to continue to develop emotionally on the basis of fantasy rather than of real experiences. But Bowlby also withdrew from the object-relations approach (exemplified by Anna Freud), as he abandoned the "drive theory" assumptions in favor of a set of automatic, instinctual behaviour systems that included attachment. Bowlby's decisions left him open to criticism from well-established thinkers working on problems similar to those he addressed. Bowlby was effectively ostracized from the psychoanalytic community. More recently some psychoanalysts have sought to reconcile the two theories in the form of attachment-based psychotherapy, a therapeutic approach. Ethology Ethologists expressed concern about the adequacy of some of the research on which attachment theory was based, particularly the generalisation to humans from animal studies.
Schur, discussing Bowlby's use of ethological concepts (pre-1960), commented that the concepts as used in attachment theory had not kept up with changes in ethology itself. Ethologists and others writing in the 1960s and 1970s questioned the types of behaviour used as indications of attachment, and offered alternative approaches. For example, crying on separation from a familiar person was suggested as an index of attachment. Observational studies of young children in natural settings also provided behaviours that might be considered to indicate attachment; for example, staying within a predictable distance of the mother without effort on her part, and picking up small objects and bringing them to the mother but usually not to other adults. Although ethological work tended to be in agreement with Bowlby, work like that just described led to the conclusion that "[w]e appear to disagree with Bowlby and Ainsworth on some of the details of the child's interactions with its mother and other people". Some ethologists pressed for further observational data, arguing that psychologists "are still writing as if there is a real entity which is 'attachment', existing over and above the observable measures." Robert Hinde expressed concern with the use of the word "attachment" to imply that it was an intervening variable or a hypothesised internal mechanism rather than a data term. He suggested that confusion about the meaning of attachment theory terms "could lead to the 'instinct fallacy' of postulating a mechanism isomorphous with the behaviours, and then using that as an explanation for the behaviour". However, Hinde considered "attachment behaviour system" to be an appropriate term of theory language which did not offer the same problems, "because it refers to postulated control systems that determine the relations between different kinds of behaviour." Cognitive development Bowlby's reliance on Piaget's theory of cognitive development gave rise to questions about object permanence (the ability to remember an object that is temporarily absent) and its connection to early attachment behaviours, and about the fact that the infant's ability to discriminate strangers and react to the mother's absence seems to occur some months earlier than Piaget suggested would be cognitively possible. More recently, it has been noted that the understanding of mental representation has advanced so much since Bowlby's day that present views can be far more specific than those of Bowlby's time. Behaviourism In 1969, Gewirtz discussed how mother and child could provide each other with positive reinforcement experiences through their mutual attention and thereby learn to stay close together; this explanation would make it unnecessary to posit innate human characteristics fostering attachment. Learning theory saw attachment as a remnant of dependency, and the quality of attachment as merely a response to the caregiver's cues. Behaviourists saw behaviours such as crying as a random activity that meant nothing until reinforced by a caregiver's response; frequent responses would therefore result in more crying. To attachment theorists, crying is an inborn attachment behaviour to which the caregiver must respond if the infant is to develop emotional security. Conscientious responses produce security, which enhances autonomy and results in less crying. Ainsworth's research in Baltimore supported the attachment theorists' view.
In the last decade, behaviour analysts have constructed models of attachment based on the importance of contingent relationships. These behaviour analytic models have received some support from research and meta-analytic reviews. Methodology There has been critical discussion of conclusions drawn from clinical and observational work, and whether or not they actually support tenets of attachment theory. For example, Skuse based criticism of a basic tenet of attachment theory on the work of Anna Freud with children from Theresienstadt, who apparently developed relatively normally in spite of serious deprivation during their early years. This discussion concluded, from Freud's case and from some other studies of extreme deprivation, that there is an excellent prognosis for children with this background, unless there are biological or genetic risk factors. The psychoanalyst Margaret Mahler interpreted ambivalent or aggressive behaviour of toddlers toward their mothers as a normal part of development, not as evidence of poor attachment history. Some of Bowlby's interpretations of the data reported by James Robertson were eventually rejected by the researcher, who reported data from 13 young children who were cared for in ideal circumstances during separation from their mothers. Robertson noted, "...Bowlby acknowledges that he draws mainly upon James Robertson's institutional data. But in developing his grief and mourning theory, Bowlby, without adducing non-institutional data, has generalized Robertson's concept of protest, despair and denial beyond the context from which it was derived. He asserts that these are the usual responses of young children to separation from the mother regardless of circumstance..."; however, of the 13 separated children who received good care, none showed protest and despair, but "coped with separation from the mother when cared for in conditions from which the adverse factors which complicate institutional studies were absent". In the second volume of the trilogy, Separation, published two years later, Bowlby acknowledged that Robertson's foster study had caused him to modify his views on the traumatic consequences of separation, in which insufficient weight had been given to the influence of skilled care from a familiar substitute. Some authors have questioned the idea of attachment patterns, thought to be measured by techniques like the Strange Situation Protocol. Such techniques yield a taxonomy of categories considered to represent qualitative differences in attachment relationships (for example, secure attachment versus avoidant). However, a categorical model is not necessarily the best representation of individual difference in attachment. An examination of data from 1,139 15-month-olds showed that variation was continuous rather than falling into natural groupings. This criticism introduces important questions for attachment typologies and the mechanisms behind apparent types, but in fact has relatively little relevance for attachment theory itself, which "neither requires nor predicts discrete patterns of attachment." As was noted above, ethologists have suggested other behavioural measures that may be of greater importance than Strange Situation behaviour.
1980s on Following the argument made in the 1970s that attachment should not be seen as a trait (a lasting characteristic of the individual), but instead should be regarded as an organising principle with varying behaviours resulting from contextual factors, later research looked at cross-cultural differences in attachment, and concluded that there should be re-evaluation of the assumption that attachment is expressed identically in all humans. Various studies appeared to show cultural differences, but a 2007 study conducted in Sapporo in Japan found attachment distributions consistent with global norms using the six-year Main & Cassidy scoring system for attachment classification. Recent critics such as J. R. Harris, Steven Pinker and Jerome Kagan are generally concerned with the concept of infant determinism (nature versus nurture) and stress the possible effects of later experience on personality. Building on the earlier work on temperament of Stella Chess, Kagan rejected almost every assumption on which attachment theory's etiology was based, arguing that heredity was far more important than the transient effects of early environment; for example, a child with an inherently difficult temperament would not elicit sensitive behavioural responses from its caregiver. The debate spawned considerable research and analysis of data from the growing number of longitudinal studies. Subsequent research has not borne out Kagan's argument and broadly demonstrates that it is the caregivers' behaviours that form the child's attachment style, although how this style is expressed may differ with temperament. Harris and Pinker have put forward the notion that the influence of parents has been much exaggerated and that socialisation takes place primarily in peer groups, although H. Rudolph Schaffer concludes that parents and peers fulfill different functions and have distinctive roles in children's development. Concern about attachment theory has been raised with regard to the fact that infants often have multiple relationships, within the family as well as in child care settings, and that the dyadic model characteristic of attachment theory cannot address the complexity of real-life social experiences. See also Attachment theory John Bowlby Behavior analysis of child development Notes References (page numbers refer to Pelican edition 1971) External links Robert Karen. 'Becoming Attached'. The Atlantic Monthly, February 1990. Review of Robert Karen. Becoming Attached: First Relationships and How They Shape Our Capacity to Love. Rene Spitz's film "Psychogenic Disease in Infancy" (1957) Ethology Evolutionary biology Human development Interpersonal relationships Philosophy of love Psychoanalytic theory
History of attachment theory
Biology
6,178