Dataset columns: id (int64, 580 to 79M); url (string, length 31 to 175); text (string, length 9 to 245k); source (string, length 1 to 109); categories (string, 160 classes); token_count (int64, 3 to 51.8k).
7,363,390
https://en.wikipedia.org/wiki/Sexolog%C3%ADa%20y%20Sociedad
Sexología y Sociedad is a medical journal published in Cuba. The journal was first published in 1994, and is currently published by the Cuban National Center for Sex Education. The journal is published in both English and Spanish. The editor is Mariela Castro. Medicine in Cuba Multilingual journals Academic journals established in 1994 Sexology journals Triannual journals
Sexología y Sociedad
Biology
79
32,177,562
https://en.wikipedia.org/wiki/Ulam%27s%20game
Ulam's game, or the Rényi–Ulam game, is a mathematical game similar to the popular game of twenty questions. In Ulam's game, a player attempts to guess an unnamed object or number by asking yes–no questions of another, but one of the answers given may be a lie. Alfréd Rényi introduced the game in a 1961 paper, based on Hungary's Bar Kokhba game, but the paper was overlooked for many years. Stanisław Ulam rediscovered the game, posing the version in which there are a million objects and the answer to one question may be wrong, and considered the minimum number of questions required and the strategy that should be adopted. Pelc gave a survey of similar games and their relation to information theory. See also Knights and Knaves References Mathematical games Information theory Guessing games
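The minimum number of questions for such a game is governed by Berlekamp's volume bound: a strategy distinguishing n objects in q yes–no questions with at most e lies can exist only if n·Σ_{i≤e} C(q, i) ≤ 2^q. A minimal Python sketch (an illustration, not from the article) that finds the smallest q satisfying the bound for Ulam's million-object, one-lie version:

```python
from math import comb

def min_questions(n, lies=1):
    """Smallest q satisfying the Berlekamp volume bound
    n * sum_{i<=lies} C(q, i) <= 2**q (a necessary condition)."""
    q = 0
    while n * sum(comb(q, i) for i in range(lies + 1)) > 2 ** q:
        q += 1
    return q

# With no lies this reduces to binary search: ceil(log2(n)) questions.
print(min_questions(10**6, 0))  # → 20
# With one possible lie the bound gives 25 for a million objects.
print(min_questions(10**6, 1))  # → 25
```

The bound is only necessary, not sufficient in general, but for this instance 25 questions are also achievable.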
Ulam's game
Mathematics,Technology,Engineering
166
2,470,414
https://en.wikipedia.org/wiki/Gershgorin%20circle%20theorem
In mathematics, the Gershgorin circle theorem may be used to bound the spectrum of a square matrix. It was first published by the Soviet mathematician Semyon Aronovich Gershgorin in 1931. Gershgorin's name has been transliterated in several different ways, including Geršgorin, Gerschgorin, Gershgorin, Hershhorn, and Hirschhorn. Statement and proof Let A be a complex n × n matrix with entries a_ij. For i ∈ {1, …, n}, let R_i = Σ_{j≠i} |a_ij| be the sum of the absolute values of the non-diagonal entries in the i-th row. Let D(a_ii, R_i) be a closed disc centered at a_ii with radius R_i. Such a disc is called a Gershgorin disc. Theorem. Every eigenvalue of A lies within at least one of the Gershgorin discs D(a_ii, R_i). Proof. Let λ be an eigenvalue of A with corresponding eigenvector x = (x_j). Find i such that the element of x with the largest absolute value is x_i. Since Ax = λx, in particular we take the i-th component of that equation to get Σ_j a_ij x_j = λ x_i. Taking a_ii x_i to the other side: Σ_{j≠i} a_ij x_j = (λ − a_ii) x_i. Therefore, dividing by x_i (which is nonzero, since x ≠ 0 and |x_i| is maximal), applying the triangle inequality, and recalling that |x_j|/|x_i| ≤ 1 based on how we picked i: |λ − a_ii| ≤ Σ_{j≠i} |a_ij| · |x_j|/|x_i| ≤ R_i. Corollary. The eigenvalues of A must also lie within the Gershgorin discs C_j corresponding to the columns of A. Proof. Apply the Theorem to A^T while recognizing that the eigenvalues of the transpose are the same as those of the original matrix. Example. For a diagonal matrix, the Gershgorin discs coincide with the spectrum. Conversely, if the Gershgorin discs coincide with the spectrum, the matrix is diagonal. Discussion One way to interpret this theorem is that if the off-diagonal entries of a square matrix over the complex numbers have small norms, the eigenvalues of the matrix cannot be "far from" the diagonal entries of the matrix. Therefore, by reducing the norms of off-diagonal entries one can attempt to approximate the eigenvalues of the matrix. Of course, diagonal entries may change in the process of minimizing off-diagonal entries.
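The row discs are cheap to compute. A minimal NumPy sketch (illustrative; the sample matrix is an assumption, not from the article) that builds the discs and checks that every eigenvalue lies in at least one of them:

```python
import numpy as np

def gershgorin_discs(A):
    """Return (centers, radii) of the row Gershgorin discs of A."""
    A = np.asarray(A, dtype=complex)
    centers = np.diag(A)
    # radius = row sum of absolute values minus the diagonal term
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return centers, radii

A = np.array([[10.0, 1.0, 0.0],
              [0.2, 8.0, 0.2],
              [1.0, 1.0, 2.0]])
centers, radii = gershgorin_discs(A)
for lam in np.linalg.eigvals(A):
    # the theorem guarantees each eigenvalue is inside some disc
    assert np.any(np.abs(lam - centers) <= radii + 1e-12)
```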
The theorem does not claim that there is one disc for each eigenvalue; if anything, the discs rather correspond to the axes in C^n, and each expresses a bound on precisely those eigenvalues whose eigenspaces are closest to one particular axis. In the matrix — which by construction has eigenvalues , , and with eigenvectors , , and — it is easy to see that the disc for row 2 covers and while the disc for row 3 covers and . This is however just a happy coincidence; working through the steps of the proof one finds that in each eigenvector the first element is the largest (every eigenspace is closer to the first axis than to any other axis), so the theorem only promises that the disc for row 1 (whose radius can be twice the sum of the other two radii) covers all three eigenvalues. Strengthening of the theorem If one of the discs is disjoint from the others then it contains exactly one eigenvalue. If however it meets another disc it is possible that it contains no eigenvalue (for example, or ). In the general case the theorem can be strengthened as follows: Theorem: If the union of k discs is disjoint from the union of the other n − k discs then the former union contains exactly k and the latter n − k eigenvalues of A, when the eigenvalues are counted with their algebraic multiplicities. Proof: Let D be the diagonal matrix with entries equal to the diagonal entries of A and let A(t) = (1 − t)D + tA for t ∈ [0, 1]. We will use the fact that the eigenvalues are continuous in t, and show that if any eigenvalue moves from one of the unions to the other, then it must be outside all the discs for some t, which is a contradiction. The statement is true for t = 0. The diagonal entries of A(t) are equal to those of A, thus the centers of the Gershgorin circles are the same, but their radii are t times those of A. Therefore, the union of the corresponding k discs of A(t) is disjoint from the union of the remaining n − k discs for all t ∈ [0, 1]. The discs are closed, so the distance of the two unions for A is some positive number d.
The distance of the two unions for A(t) is a decreasing function of t, so it is always at least d. Since the eigenvalues of A(t) are a continuous function of t, for any eigenvalue λ(t) of A(t) in the union of the k discs, its distance d(t) from the union of the other n − k discs is also continuous. Obviously d(0) ≥ d; if λ(1) lay in the union of the n − k discs we would have d(1) = 0, so there would exist some 0 < t₀ < 1 with 0 < d(t₀) < d. But a point at distance strictly between 0 and d from the second union belongs to neither union, so λ(t₀) would lie outside all the Gershgorin discs, which is impossible. Therefore λ(1) lies in the union of the k discs, and the theorem is proven. Remarks: It is necessary to count the eigenvalues with respect to their algebraic multiplicities. Here is a counter-example to the formulation without multiplicities: consider a 5 × 5 matrix whose first 3 discs are disjoint from the last 2, but which has only 2 linearly independent eigenvectors, e1 and e4, and therefore only 2 distinct eigenvalues. Without multiplicities, the disjoint unions cannot each contain the stated number of eigenvalues; counted with algebraic multiplicities, the statement holds. The continuity of λ(t) should be understood in the topological sense. It is sufficient to show that the roots of a polynomial, taken as an unordered point in C^n, are a continuous function of its coefficients. The inverse map, sending roots to coefficients, is described by Vieta's formulas (note that for characteristic polynomials the coefficients are, up to sign, elementary symmetric functions of the eigenvalues), and it can be proved to be an open map. This proves that the roots as a whole are a continuous function of the coefficients. Since a composition of continuous functions is again continuous, λ(t), as a composition of the root map with t ↦ A(t), is also continuous. An individual eigenvalue could merge with other eigenvalues or appear from the splitting of a previous eigenvalue, which may cast doubt on the notion of continuity; however, when viewed in the space of unordered eigenvalue sets, the trajectory is still a continuous curve, although not necessarily smooth everywhere. Added remark: whether the proof given above is fully rigorous depends on which notion of eigenvalue continuity is used.
There are two types of continuity concerning eigenvalues: (1) each individual eigenvalue is a usual continuous function (such a representation does exist on a real interval but may not exist on a complex domain), (2) eigenvalues are continuous as a whole in the topological sense (a mapping from the matrix space, with the metric induced by a norm, to unordered tuples, i.e., the quotient space of C^n under permutation equivalence with the induced metric). Whichever continuity is used in a proof of the Gerschgorin disk theorem, it should be justified that the sum of algebraic multiplicities of eigenvalues remains unchanged on each connected region. A proof using the argument principle of complex analysis requires no eigenvalue continuity of any kind. For a brief discussion and clarification, see the references. Application The Gershgorin circle theorem is useful in solving matrix equations of the form Ax = b for x, where b is a vector and A is a matrix with a large condition number. In this kind of problem, the error in the final result is usually of the same order of magnitude as the error in the initial data multiplied by the condition number of A. For instance, if b is known to six decimal places and the condition number of A is 1000 then we can only be confident that x is accurate to three decimal places. For very high condition numbers, even very small errors due to rounding can be magnified to such an extent that the result is meaningless. It is therefore desirable to reduce the condition number of A. This can be done by preconditioning: a matrix P such that P ≈ A−1 is constructed, and then the equation PAx = Pb is solved for x. Using the exact inverse of A would be ideal, but finding the inverse of a matrix is generally avoided because of the computational expense. Now, since PA ≈ I, where I is the identity matrix, the eigenvalues of PA should all be close to 1.
By the Gershgorin circle theorem, every eigenvalue of PA lies within a known area, so we can form a rough estimate of how good our choice of P was. Example Use the Gershgorin circle theorem to estimate the eigenvalues of: Starting with row one, we take the element on the diagonal, a_ii, as the center for the disc. We then take the remaining elements in the row and apply the formula R_i = Σ_{j≠i} |a_ij| to obtain the following four discs: Note that we can improve the accuracy of the last two discs by applying the formula to the corresponding columns of the matrix. The eigenvalues are approximately −10.870, 1.906, 10.046, and 7.918. Note that this is a (column) diagonally dominant matrix: in each column, the absolute value of the diagonal entry exceeds the sum of the absolute values of the other entries. This means that most of the matrix's weight lies on the diagonal, which explains why the eigenvalues are so close to the centers of the circles and the estimates are very good. For a random matrix, we would expect the eigenvalues to be substantially further from the centers of the circles. See also For matrices with non-negative entries, see Perron–Frobenius theorem. Doubly stochastic matrix Hurwitz-stable matrix Joel Lee Brenner Metzler matrix Muirhead's inequality Bendixson's inequality Schur–Horn theorem References (Errata). 1st ed., Prentice Hall, 1962. External links Eric W. Weisstein. "Gershgorin Circle Theorem." From MathWorld—A Wolfram Web Resource. Semyon Aranovich Gershgorin biography at MacTutor Theorems in algebra Linear algebra Matrix theory Articles containing proofs
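The effect of preconditioning on the discs can be checked numerically. A small NumPy sketch (the matrix and the crude Jacobi preconditioner are illustrative assumptions, not from the article): with P taken as the inverse of the diagonal of A, every disc of PA is centered at exactly 1, so Gershgorin bounds how far any eigenvalue of PA can stray from 1.

```python
import numpy as np

A = np.array([[10.0, -1.0, 0.0, 1.0],
              [0.2, 8.0, 0.2, 0.2],
              [1.0, 1.0, 2.0, 1.0],
              [-1.0, -1.0, -1.0, -11.0]])

# Crude preconditioner: invert only the diagonal (Jacobi preconditioning).
P = np.diag(1.0 / np.diag(A))
PA = P @ A

centers = np.diag(PA)                          # all exactly 1 here
radii = np.sum(np.abs(PA), axis=1) - np.abs(centers)

# Every Gershgorin disc of PA is centered at 1, so by the theorem all
# eigenvalues of PA lie within max(radii) of 1.
assert np.allclose(centers, 1.0)
assert np.all(np.abs(np.linalg.eigvals(PA) - 1.0) <= radii.max() + 1e-12)
```

A better P (e.g., an incomplete LU factorization) would shrink the radii further, clustering the spectrum of PA more tightly around 1.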
Gershgorin circle theorem
Mathematics
2,081
78,612,304
https://en.wikipedia.org/wiki/JGB%20%28company%29
JGB S.A. is a Colombian company that manufactures pharmaceutical products, multivitamin supplements, oral hygiene products and home care products, founded in 1875. It is one of the oldest companies in Colombia. History In 1875, the doctor Enrique Garcés Velasco founded the Garcés drugstore, which he ran with his wife Joaquina Borrero de Garcés. In 1899, Dr. Garcés died and his son Jorge Garcés Borrero, together with his mother, took over the management of the pharmacy. In 1925 the company formally changed its name to Laboratorios JGB (the initials of Jorge Garcés Borrero), specialising in the production of pharmaceutical products, especially one of its most marketed products, the granulated kola supplement "Tarrito Rojo". The main plant is in the city of Cali, with two more in Cartagena and Cajicá. Until 2014 it had 1,000 employees. The company exports its products to several distributors located in the Andean region and the United States, and maintains operations in Ecuador and Venezuela. See also Health care in Colombia List of companies of Colombia Tecnoquímicas (direct competitor) References External links JGB official website Companies established in 1925 Cosmetics companies Chemical companies
JGB (company)
Chemistry
265
34,997,307
https://en.wikipedia.org/wiki/Crespi%20effect
The Crespi effect is a behavioural contrast phenomenon observed in classical conditioning in which a conditioned response changes disproportionately to a suddenly changed reinforcement. It was first observed in rats by American psychologist Leo P. Crespi in 1942. He found that in a repeatedly carried out task such as finding food in a maze, the running speed of the rat is proportional to the size of the reward it obtained on the previous trial. The more food reward that was given to it last time upon completion of the task, the faster it will run when attempting to complete the same task. The effect also works in reverse: when rats were shifted from a larger to a smaller reward, they ran more slowly than the control rats that had always received the small reward. It is important to note that the size of the reward has little or no influence on the speed of learning, but that it does have an influence on the performance of tasks already learned. Scholars have been only partially able to replicate Crespi's studies, and the effect remains controversial. See also Contrast effect References Behavioural sciences
Crespi effect
Biology
220
64,345,824
https://en.wikipedia.org/wiki/Elastix%20%28image%20registration%29
Elastix is an image registration toolbox built upon the Insight Segmentation and Registration Toolkit (ITK). It is entirely open-source and provides a wide range of algorithms employed in image registration problems. Its components are designed to be modular, enabling fast and reliable creation of registration pipelines tailored to case-specific applications. It was first developed by Stefan Klein and Marius Staring under the supervision of Josien P.W. Pluim at the Image Sciences Institute (ISI). Its first version was command-line based, allowing the user to employ scripts to automatically process large datasets and deploy multiple registration pipelines with few lines of code. Nowadays, to further widen its audience, a version called SimpleElastix is also available, developed by Kasper Marstal, which allows the integration of elastix with high-level languages such as Python, Java, and R. Image registration fundamentals Image registration is a well-known technique in digital image processing that searches for the geometric transformation that, applied to a moving image, brings it into a one-to-one map with a target image. Generally, images acquired from different sensors (multimodal), time instants (multitemporal), or points of view (multiview) must be correctly aligned before further processing and feature extraction. Even though there is a plethora of different approaches to image registration, most are composed of the same macro building blocks, namely the transformation, the interpolator, the metric, and the optimizer. Registering two or more images can be framed as an optimization problem that requires multiple iterations to converge to the best solution. Starting from an initial transformation computed from the image moments, the optimization process searches for the best transformation parameters based on the value of the selected similarity metric.
The figure on the right shows the high-level representation of the registration of two images, where the reference remains constant during the entire process, while the moving one is transformed according to the transformation parameters. In other words, the registration ends when the similarity metric, a mathematical function with a certain number of parameters to be optimized, reaches an optimal value that is highly dependent on the specific application. Main building blocks Following the structure of the image registration workflow, the elastix toolbox proposes a modular solution that implements, for each of the building blocks, several algorithms widely employed in medical image registration, and helps users build their specific pipeline by selecting the most suitable algorithm for each block. Each block is easily configurable, either by selecting pre-defined initialization values or by trying multiple sets of parameters and choosing the best-performing one. The registration is performed on images, and the elastix toolbox supports all the data formats supported by ITK, ranging from JPEG and PNG to medical standard formats such as DICOM and NIfTI. It also stores the physical pixel spacing, the origin, and the relative position with respect to an external world reference system, when provided in the metadata, to facilitate the registration process, especially in medical applications. Transformation The transformation is an essential building block, since it defines the allowable transformations. In image registration, the main distinction is between parallel-to-parallel and parallel-to-non-parallel (deformable) line mapping transformations. In the elastix toolbox, users can select one transformation or combine several, either through addition or via composition.
Below are the different transformation models in order of increasing flexibility, with the corresponding elastix class names in brackets:
Translation (TranslationTransform): allows only translations.
Rigid (EulerTransform): extends the translation with rotations; the object is treated as a rigid body.
Similarity (SimilarityTransform): extends the rigid transformation with isotropic scaling.
Affine (AffineTransform): extends the rigid transformation, allowing both scaling and shear.
B-splines (BSplineTransform): a deformable transformation, usually preceded by a rigid or affine one.
Thin-plate splines (SplineKernelTransform): a deformable transformation of the kernel-based class, composed of an affine and a non-rigid part.
Metric The similarity metric is the mathematical function whose parameters should be optimized to reach the desired registration; it is computed many times during the process. Below are the available metrics, computed from the reference and transformed images, with the corresponding elastix class names in brackets:
Mean squared difference (AdvancedMeanSquares): for mono-modal applications.
Normalized correlation coefficient (AdvancedNormalizedCorrelation): for images whose intensities have a linear relationship.
Mutual information (AdvancedMattesMutualInformation): for both mono- and multi-modal applications, optimized for better performance than the normalized version.
Normalized mutual information (NormalizedMutualInformation): for both mono- and multi-modal applications.
Kappa statistic (AdvancedKappaStatistic): only for binary images.
Sampler For the computation of the similarity metric it is not always necessary to consider all the voxels; using only a fraction of them can be useful, e.g. to reduce the execution time for large input images.
Below are the available criteria for selecting a fraction of the voxels for the similarity metric computation, with the corresponding elastix class names in brackets:
Full (Full): uses all the voxels.
Grid (Grid): uses a regular, user-defined grid to downsample the image.
Random (Random): randomly selects a user-defined percentage of the voxels, each voxel having equal probability of being selected.
Random coordinate (RandomCoordinate): like the random criterion, but off-grid positions can also be selected, to simplify the optimization process.
Interpolator After the transformation is applied, the positions used for the similarity metric computation may fall between voxels, so intensity interpolation must be performed to ensure the correctness of the computed values. Below are the implemented interpolators, with the corresponding elastix class names in brackets:
Nearest neighbor (NearestNeighborInterpolator): uses few resources, but gives low-quality results.
Linear (LinearInterpolator): sufficient for general applications.
N-th order B-spline (BSplineInterpolator): increasing the order N increases both quality and computation time; N = 0 and N = 1 correspond to the nearest-neighbor and linear cases respectively.
Optimizer The optimizer defines the strategy, commonly iterative, employed to search for the best transformation parameters to reach the correct registration. Some of the implemented optimization strategies:
Gradient descent.
Robbins-Monro: similar to gradient descent, but employing an approximation of the cost function derivatives.
A wider range of optimizers is also available, such as quasi-Newton or evolutionary strategies. Other features The elastix software also offers other features that can speed up the registration procedure and provide more advanced algorithms to end-users.
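The interplay of transform, metric, and optimizer can be illustrated with a deliberately tiny example. This NumPy sketch (an illustration of the general registration workflow, not elastix's actual API) registers a 1-D signal to a shifted copy of itself by brute-force search over integer translations, using mean squared difference as the metric:

```python
import numpy as np

def mean_squared_difference(fixed, moving):
    """The simplest registration metric: average squared intensity error."""
    return np.mean((fixed - moving) ** 2)

def register_translation(fixed, moving, max_shift=10):
    """Exhaustive 'optimizer' over integer translations of `moving`."""
    best_shift, best_cost = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        candidate = np.roll(moving, shift)   # the 'transform'
        cost = mean_squared_difference(fixed, candidate)
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift

signal = np.sin(np.linspace(0, 4 * np.pi, 100))
moving = np.roll(signal, -7)                 # ground-truth shift of 7
print(register_translation(signal, moving))  # → 7
```

Real registration replaces the exhaustive search with gradient-based optimizers, adds interpolation for sub-voxel shifts, and uses samplers to evaluate the metric on a subset of positions.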
Some examples are the introduction of blurring and Gaussian pyramids to reduce data complexity, and multi-image and multi-metric frameworks to deal with more complex applications. Applications Elastix has applications mainly in the medical field, where image registration is fundamental to obtaining comprehensive information about the analysed anatomical region. It is widely employed in image-guided surgery, tumour monitoring, and treatment assessment. For example, in radiotherapy planning, image registration makes it possible to deliver the treatment correctly and to evaluate the obtained results. Thanks to the wide range of implemented algorithms, elastix allows physicians and researchers to test different registration pipelines, from the simplest to more complex ones, and to save the best one as a configuration file. This file, together with the fact that the software is completely open-source, makes the work easy to reproduce, which helps support the open science paradigm and allows fast reuse on different patients' data. In image-guided surgery, registration time and accuracy are critical, considering that, during the registration, the patient is on the operating table, and the images to be registered have lower resolution compared to the target ones. In this field, the possibility of pairing elastix with frameworks such as OpenCL opens research into the use of GPUs and other hardware accelerators. References External links GitHub repository Computer vision Medical imaging Free science software Software using the Apache license
Elastix (image registration)
Engineering
1,784
76,551,613
https://en.wikipedia.org/wiki/Cytonuclear%20discordance
Cytonuclear discordance describes the discrepancy in phylogenetic relationships inferred from mitochondrial DNA (mtDNA) versus nuclear genes (or nuclear DNA, nDNA). In other words, mitochondrial and nuclear gene sequences may lead to different, if not contradictory, phylogenetic trees showing the relationships among species. In theory, nuclear DNA and mtDNA sequences should lead to similar phylogenetic tree topologies among species, but this is often not the case. Other terms for the concept are nuclear–mitochondrial discordance and mito-nuclear discordance. Examples An example is the Australian rock-wallabies (Petrogale), in which several species form a monophyletic group with nDNA genes, but not with mtDNA. This cytonuclear discordance involves at least four operational taxonomic units (OTUs) across four species. Another example is the relationship among grasshoppers (Orthoptera). Phylogenies based on complete mitogenomes recovered some species as para- or polyphyletic; by contrast, a phylogeny based on nuclear genes derived from transcriptomic data retrieved all species as monophyletic clusters. Many other taxonomic groups display cytonuclear discordance, e.g. Burmese pythons or vipers of the genus Cerastes. References Mitochondrial genetics Phylogenetics
Cytonuclear discordance
Biology
276
25,507,980
https://en.wikipedia.org/wiki/Radial%20trajectory
In astrodynamics and celestial mechanics a radial trajectory is a Kepler orbit with zero angular momentum. Two objects in a radial trajectory move directly towards or away from each other in a straight line. Classification There are three types of radial trajectories (orbits). Radial elliptic trajectory: an orbit corresponding to the part of a degenerate ellipse from the moment the bodies touch each other and move away from each other until they touch each other again. The relative speed of the two objects is less than the escape velocity. This is an elliptic orbit with semi-minor axis = 0 and eccentricity = 1. Although the eccentricity is 1, this is not a parabolic orbit. If the coefficient of restitution of the two bodies is 1 (perfectly elastic) this orbit is periodic. If the coefficient of restitution is less than 1 (inelastic) this orbit is non-periodic. Radial parabolic trajectory: a non-periodic orbit where the relative speed of the two objects is always equal to the escape velocity. There are two cases: the bodies move away from each other or towards each other. Radial hyperbolic trajectory: a non-periodic orbit where the relative speed of the two objects always exceeds the escape velocity. There are two cases: the bodies move away from each other or towards each other. This is a hyperbolic orbit with semi-minor axis = 0 and eccentricity = 1. Although the eccentricity is 1 this is not a parabolic orbit. Unlike standard orbits, which are classified by their orbital eccentricity, radial orbits are classified by their specific orbital energy, the constant sum of the total kinetic and potential energy divided by the reduced mass: ε = v²/2 − μ/x, where x is the distance between the centers of the masses, v is the relative velocity, and μ = G(m₁ + m₂) is the standard gravitational parameter. Another constant is given by w = 1/x − v²/(2μ). For elliptic trajectories, w is positive; it is the inverse of the apoapsis distance (maximum distance). For parabolic trajectories, w is zero.
For hyperbolic trajectories, w is negative: w = −v∞²/(2μ), where v∞ is the velocity at infinite distance. Time as a function of distance Given the separation and velocity at any time, and the total mass, it is possible to determine the position at any other time. The first step is to determine the constant w = 1/x − v²/(2μ), where x and v are the separation and relative velocity at any time; the sign of w determines the orbit type. Parabolic trajectory t = √(2x³/(9μ)), where t is the time from or until the time at which the two masses, if they were point masses, would coincide, and x is the separation. This equation applies only to radial parabolic trajectories; for general parabolic trajectories see Barker's equation. Elliptic trajectory t = (arcsin(√(wx)) − √(wx(1 − wx))) / √(2μw³), where t and x are as above. This is the radial Kepler equation. Hyperbolic trajectory t = (√(|w|x(1 + |w|x)) − arsinh(√(|w|x))) / √(2μ|w|³), where t and x are as above. Universal form (any trajectory) The radial Kepler equation can be made "universal" (applicable to all trajectories) by expanding it in a power series in w. The radial Kepler problem (distance as function of time) The problem of finding the separation of two bodies at a given time, given their separation and velocity at another time, is known as the Kepler problem. This section solves the Kepler problem for radial orbits. The first step is again to determine the constant w from the separation x₀ and velocity v₀ at the initial time; its sign determines the orbit type. Parabolic trajectory x = (9μt²/2)^(1/3), where t is the time from or until the time at which the two masses, if they were point masses, would coincide. Universal form (any trajectory) Two intermediate quantities are used: w, and the separation p = (9μt²/2)^(1/3) the bodies would have at time t if they were on a parabolic trajectory, where t is the time, x₀ is the initial position, v₀ is the initial velocity, and μ = G(m₁ + m₂).
The inverse radial Kepler equation is the solution to the radial Kepler problem; it expresses the separation as a power series in time. Such power series can be easily differentiated term by term; repeated differentiation gives the formulas for the velocity, acceleration, jerk, snap, etc. Orbit inside a radial shaft The orbit inside a radial shaft in a uniform spherical body would be simple harmonic motion, because gravity inside such a body is proportional to the distance from the center. If the small body enters and/or exits the large body at its surface, the orbit changes from or to one of those discussed above. For example, if the shaft extends from surface to surface, a closed orbit is possible consisting of parts of two cycles of simple harmonic motion and parts of two different (but symmetric) radial elliptic orbits. See also Kepler's equation Kepler problem List of orbits References Cowell, Peter (1993), Solving Kepler's Equation Over Three Centuries, William Bell. External links Kepler's Equation at Mathworld Orbits Astrodynamics Johannes Kepler
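The parabolic time-of-flight relation t = √(2x³/(9μ)) can be checked by direct integration: on a radial parabolic trajectory the specific energy is zero, so v = √(2μ/x), and t = ∫ dx/v. A short sketch in pure Python (the numerical values are illustrative assumptions):

```python
from math import sqrt

def parabolic_time(x, mu):
    """Closed-form time from coincidence to separation x (radial parabolic)."""
    return sqrt(2.0 * x**3 / (9.0 * mu))

def parabolic_time_numeric(x, mu, steps=100000):
    """Integrate dt = dx / v with v = sqrt(2*mu/x) by the midpoint rule."""
    dx = x / steps
    t = 0.0
    for i in range(steps):
        xm = (i + 0.5) * dx          # midpoint of each sub-interval
        t += dx / sqrt(2.0 * mu / xm)
    return t

mu = 3.986e14   # illustrative value (roughly Earth's GM, m^3/s^2)
x = 1.0e7       # separation in metres
t_exact = parabolic_time(x, mu)
t_num = parabolic_time_numeric(x, mu)
print(abs(t_exact - t_num) < 1e-3 * t_exact)   # → True
```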
Radial trajectory
Engineering
991
332,410
https://en.wikipedia.org/wiki/Preamplifier
A preamplifier, also known as a preamp, is an electronic amplifier that converts a weak electrical signal into an output signal strong enough to be noise-tolerant and strong enough for further processing, or for sending to a power amplifier and a loudspeaker. Without this, the final signal would be noisy or distorted. Preamplifiers are typically used to amplify signals from analog sensors such as microphones and pickups, and for this reason the preamplifier is often placed close to the sensor to reduce the effects of noise and interference. Description An ideal preamp will be linear (have a constant gain through its operating range), have high input impedance (requiring only a minimal amount of current to sense the input signal) and a low output impedance (when current is drawn from the output there is minimal change in the output voltage). It is used to boost the signal strength to drive the cable to the main instrument without significantly degrading the signal-to-noise ratio (SNR). The noise performance of a preamplifier is critical: according to Friis's formula, when the gain of the preamplifier is high, the SNR of the final signal is determined by the SNR of the input signal and the noise figure of the preamplifier. Three basic types of preamplifiers are available: current-sensitive, parasitic-capacitance, and charge-sensitive preamplifiers. Audio systems In an audio system, preamplifiers are typically used to amplify signals from analog sensors to line level. The second amplifier is typically a power amplifier (power amp). The preamplifier provides voltage gain (e.g., from 10 mV to 1 V) but no significant current gain; the power amplifier provides the higher current necessary to drive loudspeakers. For these systems, some common sensors are microphones, instrument pickups, and phonographs. Preamplifiers are often integrated into the audio inputs on mixing consoles, DJ mixers, and sound cards. They can also be stand-alone devices.
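Friis's formula makes the preamplifier's role quantitative: for cascaded stages, the total noise factor is F = F₁ + (F₂ − 1)/G₁ + (F₃ − 1)/(G₁G₂) + …, so a high-gain, low-noise first stage dominates the noise performance of the whole chain. A short Python sketch (the stage values are illustrative assumptions):

```python
def friis_noise_factor(stages):
    """Total noise factor of cascaded stages.
    `stages` is a list of (noise_factor, gain) tuples as linear ratios."""
    total, gain_product = 0.0, 1.0
    for noise_factor, gain in stages:
        total += (noise_factor - 1.0) / gain_product
        gain_product *= gain
    return total + 1.0

# A low-noise preamp (F = 1.26, about 1 dB noise figure, gain 100)
# in front of a noisier power stage (F = 10, gain 10):
with_preamp = friis_noise_factor([(1.26, 100.0), (10.0, 10.0)])
without = friis_noise_factor([(10.0, 10.0)])
print(with_preamp)   # 1.35: the chain is nearly as quiet as the preamp alone
print(without)       # 10.0
```

This is why the first, high-gain stage is placed as close to the sensor as possible: later stages contribute almost nothing to the total noise once G₁ is large.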
Examples The integrated preamplifier in a foil electret microphone. The first stages of an instrument amplifier, which then feed the power amplifier; with instrument amplifiers, the preamp is often designed to produce overdrive or distortion effects. A stand-alone unit for use in live music and recording studio applications. Part of a stand-alone channel strip or a channel strip built into an audio mixing desk. A masthead amplifier used with a television receiver antenna or a satellite receiver dish. The circuit inside a hard drive connected to the magnetic heads, or the circuit inside a CD/DVD drive connected to the photodiodes. A switched-capacitor circuit used to null the effects of mismatch offset in most CMOS comparator-based flash analog-to-digital converters. Because of their distinctive coloration, some preamplifiers can be emulated in software for use in mixing. See also Low-noise amplifier (LNA) Instrumentation amplifier Buffer amplifier Logarithmic resistor ladder References External links Electronic amplifiers Audio engineering
Preamplifier
Technology,Engineering
660
21,759,542
https://en.wikipedia.org/wiki/Michael%20Graff
Michael Graff is a co-author and one of several architects of BIND 9. In April 1994, he co-authored The Magic Words are Squeamish Ossifrage. Graff graduated from Iowa State University with a degree in Computer Engineering. He is currently working at Internet Systems Consortium, a non-profit corporation. References External links Iowa State University Internet Systems Consortium Living people Computer engineers Iowa State University alumni American software engineers Year of birth missing (living people)
Michael Graff
Technology
97
23,567,214
https://en.wikipedia.org/wiki/Sarcosphaera
Sarcosphaera is a fungal genus within the Pezizaceae family. It was formerly considered a monotypic genus, containing the single species Sarcosphaera coronaria, commonly known as the pink crown, the violet crown-cup, or the violet star cup. However, recent research revealed there are many species in the complex: two in Europe and North Africa (S. coronaria and S. crassa), and others in North America (e.g., S. columbiana, S. pacifica, S. montana, S. gigantea) and Asia. S. coronaria is a whitish or grayish cup fungus, distinguished by the manner in which the cup splits into lobes from the top downward. The fruit body, typically found partially buried in soil, is initially like a fleshy hollow ball, and may be mistaken for a puffball. Unlike the latter, it splits open from the top downwards to form a cup with five to ten pointed rays, reaching up to in diameter. It is lavender-brown on the inside surface. It is commonly found in the mountains in coniferous woods under humus on the forest floor, and often appears after the snow melts in late spring and early summer. The fungus is widespread, and has been collected in Europe, Israel and the Asian part of Turkey, North Africa, and North America. In Europe, it is considered a threatened species in 14 countries. Once thought to be a good edible, it is not recommended for consumption after several reports of poisonings causing stomach aches, and in one instance, death. The fruit bodies are known to bioaccumulate the toxic metalloid arsenic from the soil. Taxonomy The genus was first described by Bernhard Auerswald in 1869 to accommodate the species then known as Peziza macrocalyx. Sarcosphaera coronaria was originally named Peziza coronaria by the Dutch scientist Nikolaus Joseph von Jacquin in 1778, and underwent several name changes before being assigned its current name in 1908 by Joseph Schröter.
The Greek genus name means "flesh ball"; the Latin specific epithet, coronaria, refers to the crown-like form of the open fruit body. The species is commonly known by various names, including the "crown fungus", the "pink crown", the "violet crown-cup", or the "violet star cup". Several taxa have been named as belonging to the genus Sarcosphaera over the years, but most lack modern descriptions and have not been reported since their original collections. For example, Sarcosphaera funerata was renamed by Fred Jay Seaver in 1930 based on the basionym Peziza funerata, originally described by Cooke in 1878. Sarcosphaera gigantea was a species collected from Michigan, originally described as Pustularia gigantea by Heinrich Rehm in 1905, and correctly considered distinct from S. coronaria on the basis of its smaller spore size. Sarcosphaera ulbrichiana was described by Wilhelm Kirschstein in 1943. Other taxa have been reduced to synonymy with S. coronaria, or transferred to other genera. Sarcosphaera eximia (originally Peziza eximia Durieu & Lév. 1848, and later transferred to Sarcosphaera by René Maire), Sarcosphaera crassa (considered by Zdeněk Pouzar in a 1972 publication to be the correct name for S. coronaria) and Sarcosphaera dargelasii (originally Peziza dargelasii Gachet 1829, transferred to Sarcosphaera by Nannfeldt) used to be considered synonyms of S. coronaria. Sarcosphaera ammophila (originally Peziza ammophila Durieu & Mont.) and Sarcosphaera amplissima (originally Peziza amplissima Fr. 1849) have since been transferred back to Peziza. The 10th edition of the Dictionary of the Fungi (2008) considers Sarcosphaera to be monotypic, and Index Fungorum has only Sarcosphaera coronaria confirmed as valid. In 1947, Helen Gilkey described the genus Caulocarpa based on a single collection made in Wallowa County, Oregon. The type species, C.
montana, was thought to be a truffle (formerly classified in the now-defunct Tuberales order) because of its chambered fruit body and subterranean growth habit. It was later noted by mycologist James Trappe to strongly resemble Sarcosphaera. Thirty years later, Trappe revisited the original collection site in eastern Oregon and found fresh specimens that closely matched Gilkey's original description. Some specimens, however, had opened up similarly to Sarcosphaera, suggesting that the original specimens had "simply not emerged and often not opened due to habitat factors." Microscopic examination of the preserved type material revealed the species to be Sarcosphaera coronaria (then called S. crassa), and Caulocarpa is now considered a generic synonym of Sarcosphaera. Sarcosphaera is classified in the family Pezizaceae of the order Pezizales. Phylogenetic analysis of ribosomal DNA sequences suggests that Sarcosphaera forms a clade with the genera Boudiera and Iodophanus, and that the three taxa are a sister group to Ascobolus and Saccobolus (both in the family Ascobolaceae). Species in the families Pezizaceae and Ascobolaceae are distinct from other Pezizalean taxa in the positive iodine reaction of the ascus wall. In a more recent (2005) phylogenetic analysis combining data derived from three genes (the large subunit ribosomal RNA (LSU), RNA polymerase II (RPB2), and beta-tubulin), Sarcosphaera was shown to be closely related to the truffle genus Hydnotryopsis, corroborating earlier results that used only the LSU rDNA sequences. Description Sarcosphaera is partly hypogeous (fruiting underground) and emerges from the ground as a whitish to cream-colored hollow ball. Young specimens are covered entirely by an easily removed thin protective membrane. As it matures, it splits open to expose the inner spore-bearing layer (hymenium).
The cup is up to in diameter, roughly spherical initially but breaking up into a series of five to ten raylike projections, which give the fruit body the shape of a crown. The outer surface of the cup is white, while the inner surface is lilac-gray, although in age the color may fade to a brownish-lavender color. The flesh is white, thick, and fragile. Some specimens may have a short, stubby stalk. S. coronaria has no distinctive taste or odor, although one source says that as it gets older the odor becomes "reminiscent of rhubarb". The spores are hyaline (translucent), smooth, and ellipsoid with the ends truncate. They have dimensions of 11.5–20 by 5–9 μm, and usually contain two large oil drops. The paraphyses (sterile, filamentous cells interspersed among the asci, or spore-producing cells) are 5–8 μm wide at the tip, branched, septate (with partitions that divide the cells into compartments), and constricted at the septa. The asci are cylindrical, and measure 300–360 by 10–13 μm; the tips of the asci stain blue with Melzer's reagent. The finely cylindrical paraphyses have slightly swollen tips and are forked at the base. Chemistry The chemical composition of fruit bodies collected from Turkey has been analyzed, and the dried fruit bodies determined to contain the following nutritional components: protein, 19.46%; fat, 3.65%; ash, 32.51%; carbohydrates, 44.38% (including 6.71% as non-digestible cellulose). Fresh fruit bodies have a moisture content of 84.4%. The mushrooms are a good source of the element vanadium, shown in a 2007 study to be present at a concentration of 0.142 mg/kg (dry weight). Similar species Immature, unopened fruit bodies can be mistaken for truffles, but are distinguished by their hollow interior. Mature specimens somewhat resemble the "earthstar scleroderma" (Scleroderma polyrhizum), but this yellowish-brown species does not have the purple coloration of Sarcosphaera coronaria. 
Peziza ammophila (formerly classified in the genus Sarcosphaera) has an exterior surface that is colored brown to dark brown, and when young it is cup-shaped. Neournula puchettii also has a pinkish-colored hymenium, but it is smaller and always cup-shaped. Geopora sumneriana is another cup fungus that superficially resembles S. coronaria in its form and subterranean growth habit; however, the surface of its hymenium is cream-colored with ochraceous tinges, and its outer surface is covered with brown hairs. Geopora sepulta may also be included as a potential lookalike to S. coronaria, as it is macroscopically indistinguishable from G. sumneriana. Geopora arenicola and Peziza violacea are also similar. Distribution and habitat The fungus is distributed in 23 European countries, North Africa, and North America, from British Columbia eastward to Michigan and New York, south to Veracruz, Mexico. It has also been collected from Israel and the Asian part of Turkey. The fruit bodies are found singly, scattered, or clustered together in broad-leaf woods favoring beech, less frequently with conifers. A preference for calcareous soils has been noted, but they will also grow on acidic bedrock. Because their initial development is subterranean, young fruit bodies are easy to overlook, as they are usually covered with dirt or forest duff. They are more common in mountainous locations, and occur most frequently in the spring, often near melting snow. Ecology Historically, Sarcosphaera coronaria has been assumed to be saprobic, acquiring nutrients from breaking down decaying organic matter. The fungus, however, is only found with trees known to form mycorrhiza, and it is often locally abundant where it occurs, year after year in the same location, indicative of a mycorrhizal lifestyle.
The results of a 2006 study of Pezizalean fungi further suggest that the species is an ectomycorrhizal symbiont, and more generally, that the Pezizales include more ectomycorrhizal fungi than previously thought. In Europe, the fungus is red-listed in 14 countries, and is considered a threatened species by the European Council for Conservation of Fungi. It is short-listed for inclusion in the Bern Convention by the European Council for Conservation of Fungi. Threats to the species include loss and degradation of habitats due to clearcutting and soil disturbance. Toxicity A number of poisonings attributed to this species have been reported from Europe, including one fatal poisoning in the Jura area in 1920, following which a warning was issued not to eat it raw or in salads. The fruit bodies can bioaccumulate the toxic metalloid arsenic from the soil in the form of the compound methylarsonic acid. Although less toxic than arsenic trioxide, it is still relatively dangerous. Concentrations over 1000 mg/kg (dry weight) are often reached. As reported in one 2004 publication, a mature specimen collected near the town of Český Šternberk in the Czech Republic was found to have an arsenic content of 7090 mg/kg dry weight, the highest concentration ever reported in a mushroom. Typically, the arsenic content of mycorrhizal mushrooms collected from unpolluted areas is lower than 1 mg/kg. In a 2007 Turkish study of 23 wild edible mushroom species (collected from areas not known to be polluted), S. coronaria had the highest concentration of arsenic at 8.8 mg/kg dry weight, while the arsenic concentration of the other tested mushrooms ranged from 0.003 mg/kg (in Sarcodon leucopus) to 0.54 mg/kg (in Lactarius salmonicolor). Uses Although older literature describes it as a good edible species, modern literature does not recommend it for consumption. It gives some individuals gastrointestinal discomfort, reputedly similar to poisoning symptoms caused by morels.
Although the fruit bodies are edible after cooking, they are rarely collected by mushroom pickers, and have no commercial value. Notes References Cited books External links Pezizaceae Monotypic Ascomycota genera Poisonous fungi Taxa described in 1869
Sarcosphaera
Environmental_science
2,668
49,064,665
https://en.wikipedia.org/wiki/Monolithic%20Power%20Systems
Monolithic Power Systems, Inc. is an American, publicly traded company headquartered in Kirkland, Washington. It operates in more than 15 locations worldwide. Monolithic Power Systems (MPS) provides power circuits for systems found in cloud computing, telecom infrastructures, automotive, industrial applications and consumer applications. History Monolithic Power Systems, Inc. was founded in 1997 by Michael Hsing, who is the current CEO. Prior to the founding of the corporation, Hsing worked as a Senior Silicon Technology Developer at several analog integrated circuit companies. The company then diversified into DC/DC products. In November 2004, Hsing took the company public with an IPO. Since then, the company has grown to incorporate 13 product lines with more than 4,000 products. In February 2021, the company was added to the S&P 500. In 2023, the company made progress on its long-term ESG and diversity goals, including the addition of a second female director to its Board of Directors. The company also published its annual ESG report and publicly committed to reduce Scope 1 and 2 greenhouse gas emissions by 40% by 2030 and to power its global operations with 75% renewable electricity by 2026. About Monolithic Power Systems is headquartered in Kirkland, Washington. The company designs, develops, and markets power circuits for the communications, storage and computing, consumer electronics, industrial, and automotive markets, in addition to supporting the electrification of transportation. Monolithic Power Systems markets its products through third-party distributors and value-added resellers. It directly markets to original equipment manufacturers, original design manufacturers, and electronic manufacturing service providers in China, Taiwan, Europe, Korea, Southeast Asia, Japan, and the United States. Products Monolithic Power Systems provides digital, analog, and mixed-signal integrated circuits.
It offers energy-efficient DC to DC converter ICs that are used to convert and control voltages of various electronic systems, such as portable electronic devices, wireless LAN access points, computers, set top boxes, displays, automobiles, and medical equipment. The company also provides lighting control ICs for backlighting, which are used in systems that provide the light source for LCD panels in notebook computers, LCD monitors, car navigation systems, and LCD televisions. In addition, Monolithic Power Systems supports the electrification of transportation and manufactures class D Audio Amplifier products. Locations Monolithic Power Systems operates at 18 locations primarily in the US, Europe, and east Asia. References External links Companies based in Kirkland, Washington Companies listed on the Nasdaq American companies established in 1997 Technology companies established in 1997 2004 initial public offerings Computer companies of the United States Computer hardware companies
Monolithic Power Systems
Technology
537
51,779,060
https://en.wikipedia.org/wiki/Ethyltestosterone
Ethyltestosterone, or 17α-ethyltestosterone, also known as 17α-ethylandrost-4-en-17β-ol-3-one or 17α-pregn-4-en-17-ol-3-one, is a synthetic, orally active anabolic–androgenic steroid (AAS) of the 17α-alkylated group related to methyltestosterone which was never marketed. Like methyltestosterone, ethyltestosterone is the parent compound of many AAS. Derivatives of ethyltestosterone include norethandrolone (ethylnandrolone, ethylestrenolone), ethylestrenol (ethylnandrol), norboletone, ethyldienolone, tetrahydrogestrinone, bolenol (ethylnorandrostenol), and propetandrol. Ethyltestosterone is described as a very weak AAS and is considerably weaker as an AAS than is methyltestosterone. It is reported to have 1/10 of the anabolic potency and 1/20 of the androgenic potency of testosterone propionate in rodents. Ethyltestosterone was also inactive in boys with dwarfism at 20 to 40 mg/day orally. The low potency of ethyltestosterone is in notable contrast to norethandrolone (17α-ethyl-19-nortestosterone), the C19 nor analogue. Analogues of ethyltestosterone with longer C17α chains such as propyltestosterone (topterone) have further reduced androgenic activity or even antiandrogenic activity. In contrast to ethyltestosterone, its 19-demethyl variant, norethandrolone, is a potent AAS comparable in anabolic activity to testosterone propionate. See also List of androgens/anabolic steroids References Abandoned drugs 1-Ethylcyclopentanols Anabolic–androgenic steroids Androstanes Enones
Ethyltestosterone
Chemistry
456
33,287,686
https://en.wikipedia.org/wiki/HAT-P-17
HAT-P-17 is a K-type main-sequence star about away. It has a mass of about . It is the host of two planets, HAT-P-17b and HAT-P-17c, both discovered in 2010. A search for a binary companion star using adaptive optics at the MMT Observatory was negative. A candidate companion was detected by a spectroscopic search of high-resolution K-band infrared spectra taken at the Keck Observatory. Planetary system In 2010 a multi-planet system consisting of a transiting hot Saturn in an eccentric orbit and a Jupiter-like planet in an outer orbit was detected. The transiting planet HAT-P-17b was detected by the HATNet Project using telescopes located in Hawaii, Arizona and at the Wise Observatory in Israel. It was confirmed with radial velocity measurements taken at the Keck telescope, which also led to the discovery of the second planet on a much wider orbit. In 2013 radial velocity measurements of the Rossiter-McLaughlin effect showed that the sky-projected angle between the stellar spin axis and the orbit of planet b was approximately 19°. A measurement in 2022 resulted in a slightly larger misalignment of 26.3°. References Cygnus (constellation) K-type main-sequence stars Planetary systems with two confirmed planets Planetary transit variables
HAT-P-17
Astronomy
266
7,152,070
https://en.wikipedia.org/wiki/Dagger%20category
In category theory, a branch of mathematics, a dagger category (also called involutive category or category with involution) is a category equipped with a certain structure called dagger or involution. The name dagger category was coined by Peter Selinger. Formal definition A dagger category is a category equipped with an involutive contravariant endofunctor † which is the identity on objects. In detail, this means that: for all morphisms f: A → B, there exists its adjoint f†: B → A; for all morphisms f, (f†)† = f; for all objects A, id_A† = id_A; for all f: A → B and g: B → C, (g ∘ f)† = f† ∘ g†: C → A. Note that in the previous definition, the term "adjoint" is used in a way analogous to (and inspired by) the linear-algebraic sense, not in the category-theoretic sense. Some sources define a category with involution to be a dagger category with the additional property that its set of morphisms is partially ordered and that the order of morphisms is compatible with the composition of morphisms, that is f ≤ g implies h ∘ f ∘ k ≤ h ∘ g ∘ k for morphisms h, k, whenever their sources and targets are compatible. Examples The category Rel of sets and relations possesses a dagger structure: for a given relation R: X → Y in Rel, the relation R†: Y → X is the relational converse of R. In this example, a self-adjoint morphism is a symmetric relation. The category Cob of cobordisms is a dagger compact category, in particular it possesses a dagger structure. The category Hilb of Hilbert spaces also possesses a dagger structure: given a bounded linear map f: A → B, the map f†: B → A is just its adjoint in the usual sense. Any monoid with involution is a dagger category with only one object. In fact, every endomorphism hom-set in a dagger category is not simply a monoid, but a monoid with involution, because of the dagger. A discrete category is trivially a dagger category. A groupoid (and as a trivial corollary, a group) also has a dagger structure with the adjoint of a morphism being its inverse. In this case, all morphisms are unitary (definition below).
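The Rel example can be checked concretely on finite relations represented as sets of pairs. A minimal sketch (the helper names `dagger` and `compose` are ours, not standard API): the converse operation is involutive and reverses composition, exactly as the dagger axioms require.

```python
def dagger(r):
    """Relational converse: the dagger of r ⊆ X × Y is r† ⊆ Y × X."""
    return {(y, x) for (x, y) in r}

def compose(s, r):
    """Relational composition s ∘ r: apply r first, then s."""
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

r = {(1, 'a'), (2, 'a'), (2, 'b')}   # a relation X → Y
s = {('a', 'X'), ('b', 'Y')}          # a relation Y → Z

# Dagger axioms on this example:
assert dagger(dagger(r)) == r                                   # involutive
assert dagger(compose(s, r)) == compose(dagger(r), dagger(s))   # contravariant

# A self-adjoint morphism is a symmetric relation:
sym = {(1, 2), (2, 1), (3, 3)}
assert dagger(sym) == sym
```

Identities are also preserved: the dagger of the identity relation {(x, x) for x in X} is itself, matching id† = id.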
Remarkable morphisms In a dagger category, a morphism f is called unitary if f† = f⁻¹, and self-adjoint if f = f†. The latter is only possible for an endomorphism f: A → A. The terms unitary and self-adjoint in the previous definition are taken from the category of Hilbert spaces, where the morphisms satisfying those properties are then unitary and self-adjoint in the usual sense. See also *-algebra Dagger symmetric monoidal category Dagger compact category References
Dagger category
Mathematics
535
73,908,203
https://en.wikipedia.org/wiki/Tympanella%20galanthina
Tympanella galanthina, or cottonbud pouch, is a secotioid fungus in the monotypic genus Tympanella, a member of the Bolbitiaceae family. It is endemic to New Zealand. Etymology Tympanella is derived from the Latin tympanum, meaning a little drum or something stretched, and -ella, meaning small. The specific epithet galanthina is possibly derived from the Greek, Γαλα = milk, ανθοσ = flower, and the adjectival ending -ιον; possibly referring to similarities to the snowdrop, Galanthus. Taxonomy Mordecai Cubitt Cooke and George Edward Massee first described the species as Agaricus galanthinus in 1890. The species was described as Naucoria galanthina in 1891 by Pier Andrea Saccardo. The genus Tympanella was re-described by mycologist Egon Horak in 1971. The holotype specimen was collected by Mordecai Cubitt Cooke and George Edward Massee in New Zealand in 1890. Description The pileus of Tympanella galanthina is dry, and 7–30 mm in diameter. It is secotioid and globose when young. Later in development the pileus may become convex or campanulate; however, the margin remains strongly incurved. The colour varies and can be white, cream, clay, or buff. The pileus is densely squamulose, with small hairs or scales. There are fibrillose remnants of the veil near the margin, white and conspicuous particularly on young fruit bodies. The lamellae are a distinct rust to cinnamon brown, not lacunose or anastomosing, adnate or adnexed, sometimes subdecurrent and floccose with a coloured edge. The cylindrical stipe is also dry and is typically 10–40 mm tall and 1.5–5 mm wide. It is light brown, concolorous with the pileus, and has white fibrils down from the veil. This species does not have a permanent cortina or ring, and a columella is absent or not attenuated near the apex. The stipe is singularly fistulose, with a watery brown context without smell or taste. The spores are smooth, elliptical, and thick-walled, 10.4–13.5 × 6.3–8 μ.
The spores have a germ pore and are reddish-brown, but are not amyloid or dextrinoid. The basidia are 27–36 × 8–11 μ with 4 spores. The cheilocystidia are 15–45 × 12–25 μ and are thin-walled, hyaline, clavate or lageniform with a sterile zone at the edge. Cylindrical or fusoid hyphae form a cuticle, a trichoderm (5–20 μ diam.) which is thin walled, gelatinised, without pigment, and has suberect tips and clamp connections. Habitat and distribution Tympanella galanthina is found in New Zealand forests among leaf litter or fallen tree fern fronds, and occasionally on rotting wood. It prefers wet forest conditions and is distributed over the North and South Islands. It has been found in association with Beilschmiedia tawa, Sphaeropteris medullaris, Leptospermum scoparium, Nothofagus fusca, N. menziesii, N. solandri, and with Podocarpus. References Bolbitiaceae Fungi of New Zealand Fungus species
Tympanella galanthina
Biology
760
32,492,245
https://en.wikipedia.org/wiki/Trinitroanisole
Trinitroanisole is a chemical compound that exists as pale yellow crystals with a melting point of 68 °C. It is highly toxic. It is an explosive with a detonation velocity of 7200 meters per second. The compound's primary hazard is a blast of an instantaneous explosion, not flying projectiles or fragments. Synthesis Trinitroanisole was first prepared in 1849 by the French chemist Auguste Cahours by reacting p-anisic acid (French: acide anisique) with a mixture of sulfuric acid and fuming nitric acid. Trinitroanisole can be prepared by the reaction of 2,4-dinitrochlorobenzene with methanol in the presence of sodium hydroxide followed by the nitration of the resulting product. Alternatively, it can be prepared directly by the reaction of picryl chloride with methanol in the presence of sodium hydroxide. Use Historically, trinitroanisole was used as a military explosive (e.g., Japanese or German Trinol), having the advantage of being made from readily obtainable raw materials such as phenol. However, due to its toxicity and tendency to form picric acid and dangerous picrate salts, its use has largely been abandoned. See also 2,4-Dinitroanisole Notes Explosive chemicals Nitrobenzene derivatives
Trinitroanisole
Chemistry
288
5,411,259
https://en.wikipedia.org/wiki/Antarafacial%20and%20suprafacial
In organic chemistry, antarafacial (Woodward-Hoffmann symbol a) and suprafacial (s) are two topological concepts describing the relationship between two simultaneous chemical bond-making and/or bond-breaking processes in or around a reaction center. The reaction center can be a p- or spⁿ-orbital (Woodward-Hoffmann symbol ω), a conjugated system (π) or even a sigma bond (σ). The relationship is antarafacial when opposite faces of the π system or isolated orbital are involved in the process (think anti). For a σ bond, it corresponds to involvement of one "interior" lobe and one "exterior" lobe of the bond. The relationship is suprafacial when the same face of the π system or isolated orbital is involved in the process (think syn). For a σ bond, it corresponds to involvement of two "interior" lobes or two "exterior" lobes of the bond. The components of all pericyclic reactions, including sigmatropic reactions, cycloadditions, and electrocyclizations, can be classified as either suprafacial or antarafacial, and this determines the stereochemistry. In particular, antarafacial topology corresponds to inversion of configuration for the carbon atom of a [1,n]-sigmatropic rearrangement, and conrotation for electrocyclic ring closure, while suprafacial corresponds to retention and disrotation. An example is the [1,3]-hydride shift, in which the interacting frontier orbitals are those of the allyl free radical and the hydrogen 1s orbital. The suprafacial shift is symmetry-forbidden because orbitals with opposite algebraic signs overlap. The symmetry-allowed antarafacial shift would require a strained transition state and is also unlikely. In contrast, a symmetry-allowed and suprafacial [1,5]-hydride shift is a common event. References Stereochemistry
Antarafacial and suprafacial
Physics,Chemistry
416
1,831,675
https://en.wikipedia.org/wiki/PEPA
Performance Evaluation Process Algebra (PEPA) is a stochastic process algebra designed for modelling computer and communication systems, introduced by Jane Hillston in the 1990s. The language extends classical process algebras such as Milner's CCS and Hoare's CSP by introducing probabilistic branching and timing of transitions. Activity durations are drawn from exponential distributions, and PEPA models are finite-state, and so give rise to a stochastic process, specifically a continuous-time Markov chain (CTMC). Thus the language can be used to study quantitative properties of models of computer and communication systems such as throughput, utilisation and response time, as well as qualitative properties such as freedom from deadlock. The language is formally defined using a structured operational semantics in the style invented by Gordon Plotkin. As with most process algebras, PEPA is a parsimonious language. It has only four combinators: prefix, choice, co-operation and hiding. Prefix is the basic building block of a sequential component: the process (a, r).P performs activity a at rate r before evolving to behave as component P. Choice sets up a competition between two possible alternatives: in the process (a, r).P + (b, s).Q either a wins the race (and the process subsequently behaves as P) or b wins the race (and the process subsequently behaves as Q). The co-operation operator requires the two "co-operands" to join for those activities which are specified in the co-operation set: in the process P <a, b> Q the processes P and Q must co-operate on activities a and b, but any other activities may be performed independently. The reversed compound agent theorem gives a set of sufficient conditions for a co-operation to have a product form stationary distribution. Finally, the process P/{a} hides the activity a from view (and prevents other processes from joining with it).
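The race condition behind the choice combinator can be illustrated by Monte Carlo sampling of exponential delays. This is a toy sketch, not part of any PEPA tool, and the function name is ours: in a choice (a, r).P + (b, s).Q, each activity samples an exponentially distributed delay and the smaller sample wins, so a wins with probability r/(r + s).

```python
import random

def race(rates, trials=100_000, seed=1):
    """Simulate a PEPA-style choice among activities with the given rates.

    Each activity draws an Exp(rate) delay; the smallest delay wins.
    Returns the estimated winning probability of each activity.
    """
    rng = random.Random(seed)
    wins = [0] * len(rates)
    for _ in range(trials):
        delays = [rng.expovariate(r) for r in rates]
        wins[delays.index(min(delays))] += 1
    return [w / trials for w in wins]

# For (a, 2.0).P + (b, 1.0).Q, activity a should win about
# 2.0 / (2.0 + 1.0) = 2/3 of the time.
probs = race([2.0, 1.0])
```

The same sampling view explains the co-operation timing rule: when two components must perform a shared activity jointly, the joint rate is determined by a race over the participating components' capacities.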
Syntax Given a set of action names, the set of PEPA processes is defined by the following BNF grammar: P ::= (a, λ).P | P + Q | P <L> Q | P/L | A. The parts of the syntax are, in the order given above: action: the process (a, λ).P can perform an action a at rate λ and continue as the process P. choice: the process P + Q may behave as either the process P or the process Q. cooperation: in P <L> Q, processes P and Q exist simultaneously and behave independently for actions whose names do not appear in L. For actions whose names appear in L, the action must be carried out jointly and a race condition determines the time this takes. hiding: the process P/L behaves as usual for action names not in L, and performs a silent action for action names that appear in L. process identifier: write A = P to use the identifier A to refer to the process P. Tools PEPA Plug-in for Eclipse ipc: the imperial PEPA compiler GPAnalyser for fluid analysis of massively parallel systems References External links PEPA: Performance Evaluation Process Algebra Process calculi Theoretical computer science
PEPA
Mathematics
615
2,568,551
https://en.wikipedia.org/wiki/Brannock%20Device
The Brannock Device is a measuring instrument invented by Charles F. Brannock for measuring a person's shoe size. Brannock spent two years developing a simple means of measuring the length, width, and arch length of the human foot. He eventually improved on the wooden RITZ Stick, the industry standard of the day, patenting his first prototype in 1925 and an improved version in 1927. The device has both left and right heel cups and is rotated through 180 degrees to measure the second foot. Brannock later formed the Brannock Device Company to manufacture and sell the product, and headed the company until 1992, when he died at age 89. The Smithsonian Institution has the nearly complete records of the development of the Brannock Device and its subsequent marketing. The Brannock Device Company was headquartered in Syracuse, New York, until shortly after Charles Brannock's death. Salvatore Leonardi purchased the company from the Brannock estate in 1993, and moved manufacturing to a small factory in Liverpool, New York. On May 31, 2018, the Syracuse minor league baseball team had a one-night promotion and rebranded as the Syracuse Devices in honor of the Brannock Device. Sizing system The modern Brannock device takes three measurements of each foot: foot length, the length from heel to the tip of the longest toe (in increments of barleycorns); arch length, the length from heel to the inside of the ball of the foot, or medial metatarsophalangeal joint; and width, the width of the foot perpendicular to the length. Foot and arch lengths correspond to numeric Brannock sizes, and foot widths correspond to letter Brannock widths AAAA (narrowest) to EEEE (widest). Women's Brannock sizes are offset from men's by one. (Sizing table omitted: heel-to-toe and heel-to-ball lengths against numeric sizes for widths AAAA to EEEE.) References Bibliography External links The Brannock Device Co., Inc. Charles Brannock: MIT inventor of the Week (August 2001) Brannock Company history and archives Brannock Device, an early Design Drawing from the Smithsonian (1920s) Smithsonian Institution Libraries Dimensional instruments Shoemaking Anthropometry Manufacturing companies based in Syracuse, New York American inventions
Brannock Device
Physics,Mathematics
1,308
26,730,553
https://en.wikipedia.org/wiki/RH-34
RH-34 is a compound which acts as a potent and selective partial agonist for the 5-HT2A serotonin receptor subtype. It was derived by structural modification of the selective 5-HT2A antagonist ketanserin, with the 4-(p-fluorobenzoyl)piperidine moiety replaced by the N-(2-methoxybenzyl) pharmacophore found in such potent 5-HT2A agonists as NBOMe-2C-B and NBOMe-2C-I. This alteration was found to retain 5-HT2A affinity and selectivity, but reversed activity from an antagonist to a moderate efficacy partial agonist. Legal status RH-34 is a controlled substance in Hungary and Brazil. See also 5-MeO-NBpBrT IHCH-7113 Ketanserin Efavirenz References Amines Quinazolines Serotonin receptor agonists Pyrimidinediones 2-Methoxyphenyl compounds Abandoned drugs
RH-34
Chemistry
229
2,435,860
https://en.wikipedia.org/wiki/Orbit%40home
orbit@home was a BOINC-based volunteer computing project of the Planetary Science Institute. It uses the "Orbit Reconstruction, Simulation and Analysis" framework to optimize the search strategies that are used to find near-Earth objects. On March 4, 2008, orbit@home completed the installation of its new server and officially opened to new members. On April 11, orbit@home launched a Windows version of their client. On February 16, 2013, the project was halted due to lack of grant funding. However, on July 23, 2013, the Orbit@home project was selected for funding by NASA's Near Earth Object Observation program. It was announced that orbit@home is to resume operations sometime in 2014 or 2015. As of July 13, 2018, orbit@home is offline according to its website, and the upgrade announcement has been removed. See also List of volunteer computing projects References External links Science in society Free science software Volunteer computing projects Near-Earth object tracking
Orbit@home
Astronomy
199
39,398,449
https://en.wikipedia.org/wiki/Chamonixia%20caespitosa
Chamonixia caespitosa is a species of secotioid fungus in the family Boletaceae. It was described as new to science in 1899 by French mycologist Léon Louis Rolland. References External links Boletaceae Fungi described in 1899 Fungi of Europe Secotioid fungi Fungus species
Chamonixia caespitosa
Biology
65
1,325,873
https://en.wikipedia.org/wiki/Deflection%20routing
Deflection routing is a routing strategy for networks based on packet switching which can reduce the need of buffering packets. Every packet has preferred outputs along which it wants to leave the router, and when possible, a packet is sent along one of these outputs. However, two or more packets may want to leave along the same output (which is referred to as a contention among packets), and then only one of the packets may be sent along the link, while the others are sent along available outputs, even though the other links are not preferred by the packets (because, for instance, those links do not yield shortest paths). Depending on the rate of incoming packets and the capacity of the outgoing links, deflection routing can work without any packet buffering. Of course, it is always possible to simply drop packets in a network with a best-effort delivery strategy. See also Cut-through switching Dynamic Alternative Routing Hot-potato routing References Routing
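The per-timeslot contention-resolution scheme described above can be sketched in a few lines of code. This is a minimal illustration under simplifying assumptions (one packet per output port per slot, and at least as many ports as packets), not any particular router's implementation; the packet and port names are hypothetical:

```python
def deflection_route(packets, outputs):
    """Assign each packet to exactly one output port without buffering.

    packets: list of (packet_id, preferred_outputs), where preferred_outputs
             is a list of port names in order of preference.
    outputs: list of all output port names; each port carries at most one
             packet per timeslot.
    Assumes len(packets) <= len(outputs), so every packet can leave.
    Returns a dict mapping packet_id -> assigned port.
    """
    free = set(outputs)
    assignment = {}
    deflected = []
    for pid, preferred in packets:
        # Try the packet's preferred outputs first.
        for port in preferred:
            if port in free:
                assignment[pid] = port
                free.remove(port)
                break
        else:
            # Contention loser: no preferred port was free.
            deflected.append(pid)
    # Deflect the losers onto whatever ports remain free,
    # even though those ports are not on their preferred lists.
    for pid in deflected:
        assignment[pid] = free.pop()
    return assignment
```

With two packets contending for the same preferred port, the first wins it and the second is deflected to the remaining free port.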
Deflection routing
Technology
194
34,504,846
https://en.wikipedia.org/wiki/Kanamycin%20nucleotidyltransferase
In molecular biology, kanamycin nucleotidyltransferase (KNTase) is an enzyme which is involved in conferring resistance to aminoglycoside antibiotics. It catalyses the transfer of a nucleoside monophosphate group from a nucleotide to kanamycin. This enzyme is dimeric with each subunit being composed of two domains. The C-terminal domain contains five alpha helices, four of which are organised into an up-and-down alpha helical bundle. Residues found in this domain may contribute to this enzyme's active site. References Protein domains
Kanamycin nucleotidyltransferase
Biology
128
8,151,064
https://en.wikipedia.org/wiki/Rational%20planning%20model
The rational planning model is a model of the planning process involving a number of rational actions or steps. Taylor (1998) outlines five steps, as follows: Definition of the problems and/or goals; Identification of alternative plans/policies; Evaluation of alternative plans/policies; Implementation of plans/policies; Monitoring of effects of plans/policies. The rational planning model is used in planning and designing neighborhoods, cities, and regions. It has been central in the development of modern urban planning and transportation planning. The model has many limitations, particularly the lack of guidance on involving stakeholders and the community affected by planning, and other models of planning, such as collaborative planning, are now also widely used. The very similar rational decision-making model, as it is called in organizational behavior, is a multi-step process for making logically sound decisions that aims to follow an orderly path from problem identification through solution. Method Rational decision-making or planning follows a series of steps, detailed below: Verify, define, and detail the problem Verifying, defining and detailing the problem (problem definition, goal definition, information gathering). This step includes recognizing the problem, defining an initial solution, and starting primary analysis. Examples of this are creative devising, creative ideas, inspirations, breakthroughs, and brainstorms. The first step, often overlooked by top-level management, is defining the exact problem. Though problem identification may seem obvious, many times it is not. When defining the problem situation, framing is an essential part of the process.
With correct framing, the situation is identified and previous experience with the same kind of situation can be utilized. The rational decision-making model is a group-based decision-making process. If the problem is not identified properly, each member of the group might have a different definition of the problem. Generate all possible solutions This step encompasses two to three final solutions to the problem and preliminary implementation to the site. In planning, examples of this are Planned Units of Development and downtown revitalizations. This activity is best done in groups, as different people may contribute different ideas or alternative solutions to the problem. Without alternative solutions, there is a chance of arriving at a non-optimal or irrational decision. Exploring the alternatives requires gathering information, and technology may help with gathering it. Generate objective assessment criteria Evaluative criteria are measurements to determine the success or failure of alternatives. This step contains secondary and final analysis along with secondary solutions to the problem. Examples of this are site suitability and site sensitivity analysis. After thoroughly defining the problem, exploring all possible alternatives, and gathering information, the next step is to evaluate that information and the possible options to anticipate the consequences of each alternative. At this point, criteria for measuring the success or failure of the decision need to be considered. The rational model of planning rests largely on objective assessment. Choose the best solution generated This step comprises a final solution and secondary implementation to the site. At this point the process has developed into different strategies of how to apply the solutions to the site.
Based on the criteria of assessment and the analysis done in previous steps, choose the best solution generated. These four steps form the core of the Rational Decision Making Model. Implement the preferred alternative This step includes final implementation to the site and preliminary monitoring of the outcome and results of the site. This step is the building/renovations part of the process. Monitor and evaluate outcomes and results Feedback Modify future decisions and actions taken based on the above evaluation of outcomes. Discourse of rational planning model used in policy making The rational model of decision-making is a process for making sound decisions in policy making in the public sector. Rationality is defined as “a style of behavior that is appropriate to the achievement of given goals, within the limits imposed by given conditions and constraints”. It is important to note the model makes a series of assumptions in order for it to work, such as: The model must be applied in a system that is stable, The government is a rational and unitary actor and that its actions are perceived as rational choices, The policy problem is unambiguous, There are no limitations of time or cost. Indeed, some of the assumptions identified above are also pin pointed out in a study written by the historian H.A. Drake, as he states: In its purest form, the Rational Actor approach presumes that such a figure [as Constantine] has complete freedom of action to achieve goals that he or she has articulated through a careful process of rational analysis involving full and objective study of all pertinent information and alternatives. At the same time, it presumes that this central actor is so fully in control of the apparatus of government that a decision once made is as good as implemented. There are no staffs on which to rely, no constituencies to placate, no generals or governors to cajole. 
By attributing all decision making to one central figure who is always fully in control and who acts only after carefully weighing all options, the Rational Actor method allows scholars to filter out extraneous details and focus attention on central issues. Furthermore, as we have seen, in the context of policy rational models are intended to achieve maximum social gain. For this purpose, Simon identifies an outline of a step by step mode of analysis to achieve rational decisions. Ian Thomas describes Simon's steps as follows: Intelligence gathering— data and potential problems and opportunities are identified, collected and analyzed. Identifying problems Assessing the consequences of all options Relating consequences to values— with all decisions and policies there will be a set of values which will be more relevant (for example, economic feasibility and environmental protection) and which can be expressed as a set of criteria, against which performance (or consequences) of each option can be judged. Choosing the preferred option— given the full understanding of all the problems and opportunities, all the consequences and the criteria for judging options. In similar lines, Wiktorowicz and Deber describe through their study on ‘Regulating biotechnology: a rational-political model of policy development’ the rational approach to policy development. The main steps involved in making a rational decision for these authors are the following: The comprehensive organization and analysis of the information The potential consequences of each option The probability that each potential outcome would materialize The value (or utility) placed on each potential outcome. The approach of Wiktorowicz and Deber is similar to Simon and they assert that the rational model tends to deal with “the facts” (data, probabilities) in steps 1 to 3, leaving the issue of assessing values to the final step. 
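Steps 2 to 4 of the Wiktorowicz and Deber outline, combining each potential outcome's probability with the value placed on it, amount to an expected-utility calculation. A generic sketch (the numbers in the usage note are illustrative, not drawn from their study):

```python
def expected_utility(outcomes):
    """Expected utility of a single policy option.

    outcomes: list of (probability, value) pairs, one per potential
    outcome of the option. Probabilities must sum to 1.
    Returns the probability-weighted sum of the values.
    """
    total_p = sum(p for p, _ in outcomes)
    if abs(total_p - 1.0) > 1e-9:
        raise ValueError("outcome probabilities must sum to 1")
    return sum(p * v for p, v in outcomes)
```

For example, an option with a 50% chance of a payoff valued at 10 and a 50% chance of nothing has an expected utility of 5; the option with the highest expected utility is the rational choice under this model.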
According to Wiktorowicz and Deber, values are introduced in the final step of the rational model, where the utility of each policy option is assessed. Many authors have attempted to interpret the above-mentioned steps, among others Patton and Sawicki, who summarize the model as follows (the accompanying figure is missing): Defining the problem by analyzing the data and the information gathered. Identifying the decision criteria that will be important in solving the problem. The decision maker must determine the relevant factors to take into account when making the decision. A brief list of possible alternatives must be generated; these could potentially resolve the problem. A critical analysis and evaluation of each criterion is carried out. For example, strength and weakness tables for each alternative are drawn up and used as a comparative basis. The decision maker then weights the previously identified criteria in order to give the alternative policies a correct priority in the decision. The decision maker evaluates each alternative against the criteria and selects the preferred alternative. The policy is implemented. The model of rational decision-making has also proven very useful in several decision-making processes in industries outside the public sphere. Nonetheless, many criticisms of the model arise from the claim that it is impractical and rests on unrealistic assumptions. For instance, it is a difficult model to apply in the public sector because social problems can be very complex, ill-defined, and interdependent. The problem lies in the thinking procedure implied by the model, which is linear and can face difficulties with extraordinary problems or social problems that have no clear sequence of events. This latter argument can best be illustrated by the words of Thomas R.
Dye, the president of the Lincoln Center for Public Service, who wrote in his book `Understanding Public Policy´ the following passage: There is no better illustration of the dilemmas of rational policy making in America than in the field of health…the first obstacle to rationalism is defining the problem. Is our goal to have good health — that is, whether we live at all (infant mortality), how well we live (days lost to sickness), and how long we live (life spans and adult mortality)? Or is our goal to have good medical care — frequent visits to the doctor, well-equipped and accessible hospitals, and equal access to medical care by rich and poor alike? The problems faced when using the rational model arise in practice because social and environmental values can be difficult to quantify and forge consensus around. Furthermore, the assumptions stated by Simon are never fully valid in a real-world context. However, as Thomas states the rational model provides a good perspective since in modern society rationality plays a central role and everything that is rational tends to be prized. Thus, it does not seem strange that “we ought to be trying for rational decision-making”. Decision criteria for policy analysis — Step 2 As illustrated in Figure 1, rational policy analysis can be broken into 6 distinct stages of analysis. Step 2 highlights the need to understand which factors should be considered as part of the decision making process. At this part of the process, all the economic, social, and environmental factors that are important to the policy decision need to be identified and then expressed as policy decision criteria. For example, the decision criteria used in the analysis of environmental policy is often a mix of — Ecological impacts — such as biodiversity, water quality, air quality, habitat quality, species population, etc. Economic efficiency — commonly expressed as benefits and costs.
Distributional equity — how policy impacts are distributed amongst different demographics. Factors that can affect the distribution of impacts include location, ethnicity, income, and occupation. Social/Cultural acceptability — the extent to which the policy action may be opposed by current social norms or cultural values. Operational practicality — the capacity required to actually operationalize the policy. Legality — the potential for the policy to be implemented under current legislation versus the need to pass new legislation that accommodates the policy. Uncertainty — the degree to which the level of policy impacts can be known. Some criteria, such as economic benefit, will be more easily measurable or definable, while others such as environmental quality will be harder to measure or express quantitatively. Ultimately though, the set of decision criteria needs to embody all of the policy goals, and overemphasising the more easily definable or measurable criteria will have the undesirable effect of biasing the analysis towards a subset of the policy goals. The process of identifying a suitably comprehensive decision criteria set is also vulnerable to being skewed by pressures arising at the political interface. For example, decision makers may tend to give "more weight to policy impacts that are concentrated, tangible, certain, and immediate than to impacts that are diffuse, intangible, uncertain, and delayed."^8. For example, with a cap-and-trade system for carbon emissions the net financial cost in the first five years of policy implementation is a far easier impact to conceptualise than the more diffuse and uncertain impact of a country's improved position to influence global negotiations on climate change action. Decision methods for policy analysis — Step 5 Displaying the impacts of policy alternatives can be done using a policy analysis matrix (PAM) such as that shown in Table 1.
As shown, a PAM provides a summary of the policy impacts for the various alternatives, and examination of the matrix can reveal the tradeoffs associated with the different alternatives. Table 1. Policy analysis matrix (PAM) for SO2 emissions control. Once policy alternatives have been evaluated, the next step is to decide which policy alternative should be implemented. This is shown as step 5 in Figure 1. At one extreme, comparing the policy alternatives can be relatively simple if all the policy goals can be measured using a single metric and given equal weighting. In this case, the decision method is an exercise in benefit-cost analysis (BCA). At the other extreme, numerous goals will require the policy impacts to be expressed using a variety of metrics that are not readily comparable. In such cases, the policy analyst may draw on the concept of utility to aggregate the various goals into a single score. With the utility concept, each impact is given a weighting such that 1 unit of each weighted impact is considered equally valuable (or desirable) with regard to the collective well-being. Weimer and Vining also suggest that the "go, no go" rule can be a useful method for deciding amongst policy alternatives^8. Under this decision-making regime, some or all policy impacts can be assigned thresholds which are used to eliminate at least some of the policy alternatives. In their example, one criterion "is to minimize SO2 emissions" and so a threshold might be a reduction in SO2 emissions "of at least 8.0 million tons per year". As such, any policy alternative that does not meet this threshold can be removed from consideration. If only a single policy alternative satisfies all the impact thresholds then it is the one that is considered a "go" for each impact. Otherwise it might be that all but a few policy alternatives are eliminated and those that remain need to be more closely examined in terms of their trade-offs so that a decision can be made.
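The threshold-then-aggregate procedure just described can be sketched in code. This is a toy illustration with hypothetical alternatives, criteria, thresholds, and weights, not the actual PAM data from Table 1:

```python
def screen_and_score(alternatives, thresholds, weights):
    """'Go, no go' screening followed by weighted-utility scoring.

    alternatives: dict of name -> {criterion: impact score}
    thresholds:   dict of criterion -> minimum acceptable score
                  (the 'go, no go' rule)
    weights:      dict of criterion -> weight; must cover every
                  criterion that appears in the alternatives
    Returns (survivors, best): the alternatives passing all thresholds,
    and the survivor with the highest weighted score (None if no survivor).
    """
    # Eliminate any alternative that fails a threshold.
    survivors = {
        name: impacts
        for name, impacts in alternatives.items()
        if all(impacts[c] >= t for c, t in thresholds.items())
    }
    # Aggregate each survivor's impacts into a single utility score.
    scored = {
        name: sum(weights[c] * v for c, v in impacts.items())
        for name, impacts in survivors.items()
    }
    best = max(scored, key=scored.get) if scored else None
    return survivors, best
```

With hypothetical alternatives A, B, and C and an emissions threshold of 8, B is screened out by the "go, no go" rule; of the survivors, the weighting then picks the one with the higher aggregate score.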
Case study of rational policy analysis To demonstrate the rational analysis process as described above, consider the policy paper “Stimulating the use of biofuels in the European Union: Implications for climate change policy” by Lisa Ryan, in which the substitution of fossil fuels with biofuels was proposed in the European Union (EU) between 2005–2010 as part of a strategy to mitigate greenhouse gas emissions from road transport, increase security of energy supply, and support development of rural communities. Considering the steps of the Patton and Sawicki model as in Figure 1 above, this paper only follows components 1 to 5 of the rationalist policy analysis model: Defining The Problem – the report identifies that transportation fuels pose two important challenges for the European Union (EU). First, under the provisions of the Kyoto Protocol to the Climate Change Convention, the EU has agreed to an absolute cap on greenhouse gas emissions, while at the same time increased consumption of transportation fuels has resulted in a trend of increasing greenhouse gas emissions from this source. Second, dependence upon oil imports from the politically volatile Middle East generates concern over price fluctuations and possible interruptions in supply. Alternative fuel sources need to be substituted for fossil fuels to mitigate GHG emissions in the EU. Determine the Evaluation Criteria – this policy sets environmental impacts/benefits (reduction of GHGs as a measure for reducing climate change effects) and economic efficiency (the costs of converting to biofuels as an alternative to fossil fuels, and the costs of producing biofuels from their different potential sources) as its decision criteria. However, the paper does not directly address the social impacts this policy may have, nor does it compare the operational challenges involved between the different categories of biofuels considered.
Identifying Alternative Policies – The European Commission foresees that three alternative transport fuels: hydrogen, natural gas, and biofuels, will replace transport fossil fuels, each by 5% by 2020. Evaluating Alternative Policies – Biofuels are an alternative motor vehicle fuel produced from biological material and are promoted as a transitional step until more advanced technologies have matured. By modelling the efficiency of the biofuel options, the authors compute the economic and environmental costs of each option as per the evaluation criteria mentioned above. Select The Preferred Policy – The authors suggest that the overall best biofuel comes from sugarcane in Brazil, after comparing the economic and environmental costs. The current cost of subsidising the price difference between European biofuels and fossil fuels per tonne of CO2 emissions saved is calculated to be €229–2000. If the production of European biofuels for transport is to be encouraged, exemption from excise duties is the instrument that incurs the least transaction costs, as no separate administrative or collection system needs to be established. A number of entrepreneurs are profitably producing biofuels at the lower margin of the costs specified here, once an excise duty rebate is given. It is likely that growth in the volume of the business will engender both economies of scale and innovation that will reduce costs substantially. Requirements and limitations The rational decision model rests on a number of assumptions and requirements; without them, the model fails, so they all have to be considered. The model assumes that adequate information, in terms of quality, quantity, and accuracy, is available or obtainable. This applies to the situation as well as to the alternative technical situations. It further assumes substantive knowledge of the cause-and-effect relationships relevant to the evaluation of the alternatives.
In other words, it assumes a thorough knowledge of all the alternatives and of the consequences of the alternatives chosen. It further assumes that the alternatives can be ranked and the best of them chosen. The following are the limitations of the rational decision-making model: it requires a great deal of time; it requires a great deal of information; it assumes rational, measurable criteria are available and agreed upon; it assumes accurate, stable and complete knowledge of all the alternatives, preferences, goals and consequences; and it assumes a rational, reasonable, non-political world. Current status While the rational planning model was innovative at its conception, the concepts are controversial and questionable processes today. The rational planning model has fallen out of mass use as of the last decade. Rather than conceptualising human agents as rational planners, Lucy Suchman argues, agents can better be understood as engaging in situated action. Going further, Guy Benveniste argued that the rational model could not be implemented without taking the political context into account. See also Rationality and power Sources See working paper #2 at http://ewp.uoregon.edu/publications/working References Decision analysis Urban planning Transportation planning
Rational planning model
Engineering
3,812
1,458,875
https://en.wikipedia.org/wiki/Rigidity%20%28mathematics%29
In mathematics, a rigid collection C of mathematical objects (for instance sets or functions) is one in which every c ∈ C is uniquely determined by less information about c than one would expect. The above statement does not define a mathematical property; instead, it describes in what sense the adjective "rigid" is typically used in mathematics, by mathematicians. Examples Some examples include: Harmonic functions on the unit disk are rigid in the sense that they are uniquely determined by their boundary values. Holomorphic functions are determined by the set of all derivatives at a single point. A smooth function from the real line to the complex plane is not, in general, determined by all its derivatives at a single point, but it is if we require additionally that it be possible to extend the function to one on a neighbourhood of the real line in the complex plane. The Schwarz lemma is an example of such a rigidity theorem. By the fundamental theorem of algebra, polynomials in C are rigid in the sense that any polynomial is completely determined by its values on any infinite set, say N, or the unit disk. By the previous example, a polynomial is also determined within the set of holomorphic functions by the finite set of its non-zero derivatives at any single point. Linear maps L(X, Y) between vector spaces X, Y are rigid in the sense that any L ∈ L(X, Y) is completely determined by its values on any set of basis vectors of X. Mostow's rigidity theorem, which states that the geometric structure of negatively curved manifolds is determined by their topological structure. A well-ordered set is rigid in the sense that the only (order-preserving) automorphism on it is the identity function. Consequently, an isomorphism between two given well-ordered sets will be unique. Cauchy's theorem on geometry of convex polytopes states that a convex polytope is uniquely determined by the geometry of its faces and combinatorial adjacency rules. 
Alexandrov's uniqueness theorem states that a convex polyhedron in three dimensions is uniquely determined by the metric space of geodesics on its surface. Rigidity results in K-theory show isomorphisms between various algebraic K-theory groups. Rigid groups in the inverse Galois problem. Combinatorial use In combinatorics, the term rigid is also used to define the notion of a rigid surjection, which is a surjection f: {1, …, n} → {1, …, m} for which the following equivalent conditions hold: For every j < k in {1, …, m}, min f⁻¹(j) < min f⁻¹(k); Considering f as an n-tuple (f(1), …, f(n)), the first occurrences of the elements 1, …, m in f are in increasing order; f maps initial segments of {1, …, n} to initial segments of {1, …, m}. This relates to the above definition of rigid, in that each rigid surjection uniquely defines, and is uniquely defined by, a partition of {1, …, n} into m pieces. Given a rigid surjection f, the partition is defined by X_j = f⁻¹(j). Conversely, given a partition {X_1, …, X_m} of {1, …, n}, order the pieces by letting X_j precede X_k when min X_j < min X_k. If (X_1, …, X_m) is now the so-ordered partition, the function f defined by f(i) = j whenever i ∈ X_j is a rigid surjection. See also Uniqueness theorem Structural rigidity, a mathematical theory describing the degrees of freedom of ensembles of rigid physical objects connected together by flexible hinges. Level structure (algebraic geometry) References Mathematical terminology
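The correspondence between rigid surjections and partitions described in the combinatorial section can be checked mechanically. A small sketch with hypothetical helper names, encoding a surjection onto {1, …, m} as a tuple of its values:

```python
def is_rigid(f, m):
    """Check that the tuple f is a rigid surjection onto {1, ..., m}:
    it is onto, and the first occurrence of each value 1, ..., m
    appears in increasing order of the values."""
    if set(f) != set(range(1, m + 1)):
        return False  # not a surjection onto {1, ..., m}
    first_occurrences = [f.index(j) for j in range(1, m + 1)]
    return first_occurrences == sorted(first_occurrences)

def from_partition(parts, n):
    """Build the rigid surjection on {1, ..., n} determined by a
    partition: pieces are ordered by their minimal elements, and each
    element maps to the (1-based) index of its piece."""
    ordered = sorted(parts, key=min)
    f = [0] * n
    for j, piece in enumerate(ordered, start=1):
        for i in piece:
            f[i - 1] = j
    return tuple(f)
```

For instance, the partition {{1, 3, 5}, {2, 4}} of {1, …, 5} yields the rigid surjection (1, 2, 1, 2, 1), while the tuple (2, 1) is a surjection that is not rigid, since the value 2 occurs before the value 1.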
Rigidity (mathematics)
Mathematics
653
1,277,228
https://en.wikipedia.org/wiki/NASA%20Astrobiology%20Institute
The NASA Astrobiology Institute (NAI) was established in 1998 by the National Aeronautics and Space Administration (NASA) "to develop the field of astrobiology and provide a scientific framework for flight missions." In December 2019 the institute's activities were suspended. The NAI is a virtual, distributed organization that integrates astrobiology research and training programs in concert with the national and international science communities. History Although NASA had explored the idea of forming an astrobiology institute in the past, when the Viking biological experiments returned negative results for life on Mars, the public lost interest and federal funds for exobiology dried up. In 1996, the announcement of possible traces of ancient life in the Allan Hills 84001 meteorite from Mars led to new interest in the subject. At the same time, NASA developed the Origins Program, broadening its reach from exobiology to astrobiology, the study of the origin, evolution, distribution, and future of life in the universe. In 1998, $9 million was set aside to fund the NASA Astrobiology Institute (NAI), an interdisciplinary research effort using the expertise of different scientific research institutions and universities from across the country, centrally linked to Ames Research Center in Mountain View, California. Gerald Soffen former Project Scientist with the Viking program, helped coordinate the new institute. In May, NASA selected eleven science teams, each with a Principal Investigator (PI). NAI was established in July with Scott Hubbard as interim Director. Nobel laureate Baruch S. Blumberg was appointed the first Director of the institute, and served from May 15, 1999 – October 14, 2002. 
Program The NASA Astrobiology Program includes the NAI as one of four components, alongside the Exobiology and Evolutionary Biology Program; the Astrobiology Science and Technology Instrument Development (ASTID) Program; and the Astrobiology Science and Technology for Exploring Planets (ASTEP) Program. Program budgets for fiscal year 2008 were as follows: NAI, $16 million; Grants for the Exobiology and Evolutionary Biology Program, $11 million; ASTID, $9 million; ASTEP, $5 million. Teams As of 2018, the NAI has 10 teams including about 600 researchers distributed across ~100 institutions. It also has 13 international partner organizations. Some past and present teams are: International partners NAI has a partnership program with other international astrobiology organizations to provide collaborative opportunities for its researchers within the global science community. Associate Partners Spain Astrobiology Center (CAB) at the Instituto Nacional de Técnica Aeroespacial, Madrid, Spain Australian Centre for Astrobiology (ACA) at the University of New South Wales Affiliate Partners Astrobiology Society of Britain (ASB) Canadian Astrobiology Network (CAN) at Centre for Planetary Science and Exploration (CPSX), at the University of Western Ontario European Exo/Astrobiology Network Association (EANA) Helmholtz Alliance: Planetary Evolution and Life Instituto de Astrobiología Colombia (IAC) Japan AstroBiology Consortium (JABC), a partnership of the Earth-Life Science Institute and the National Institutes of Natural Sciences Nordic Network of Astrobiology Russian Astrobiology Center (RAC) Sociedad Mexicana de Astrobiología (SOMA) (SFE) UK Centre for Astrobiology at The University of Edinburgh University of São Paulo (USP) Research Selected, significant topics of interdisciplinary research by NAI as of 2008: Comets in space and in the laboratory Discovery of the "rare biosphere" Early habitability of Earth Early wet Mars Exoplanet discovery and analysis Life without the Sun
Metal isotope tracers of environment and biology Methane on Mars Microbial mat ecology Modeling exoplanet biospheres Origins of life Snowball Earth Sub-seafloor life The rise of oxygen and Earth's "middle age" References Further reading Research institutes in California NASA programs Astrobiology
NASA Astrobiology Institute
Astronomy,Biology
785
2,902,578
https://en.wikipedia.org/wiki/13%20Andromedae
13 Andromedae, abbreviated 13 And, is a single, blue-white hued variable star in the northern constellation of Andromeda. 13 Andromedae is the Flamsteed designation, while it bears the variable star designation V388 Andromedae. With a typical apparent visual magnitude of around 5.75, it is dimly visible to the naked eye under good seeing conditions. The distance to this star can be directly estimated from its annual parallax shift of , yielding a range of 300 light years. At that distance, its brightness is diminished by an extinction of 0.13 magnitude due to interstellar dust. The star is moving closer to the Earth with a heliocentric radial velocity of −8 km/s. The variability of 13 Andromedae was first detected in Hipparcos satellite data, and it received its variable star designation in 1999. This is a magnetic chemically peculiar star that has been assigned stellar classifications of B9 III or B9 Mn. It is a variable star of the Alpha2 Canum Venaticorum type, ranging in magnitude from 5.73 down to 5.77 with a period of 1.47946 days. The star has a high rate of spin, showing a projected rotational velocity of 75 km/s. 13 Andromedae is around 345 million years old and shines with 43 times the Sun's luminosity. References External links Image 13 Andromedae B-type giants Ap stars Alpha2 Canum Venaticorum variables Andromeda (constellation) Durchmusterung objects Andromedae, 13 220885 115755 8913 Andromedae, V388
13 Andromedae
Astronomy
349
32,327,652
https://en.wikipedia.org/wiki/Isospin%20multiplet
In particle physics, isospin multiplets are families of hadrons with approximately equal masses. All particles within a multiplet have the same spin, parity, and baryon number, but differ in electric charge. Isospin formally behaves as an angular momentum operator and thus satisfies the appropriate canonical commutation relations. For a given isospin quantum number I, 2I + 1 states are allowed, labeled by the third component I3, as if they were the eigenstates of the third component of an angular momentum operator Î. The set of these states is called an isospin multiplet and is used to accommodate the particles. An example of an isospin multiplet is the nucleon multiplet consisting of the proton and the neutron. In this case I = 1/2, and by convention the proton corresponds to I3 = +1/2, while the neutron corresponds to I3 = -1/2. Another example is given by the delta baryons, for which I = 3/2. The existence of multiplets with approximately equal masses is due to the fact that the masses of the up and down quarks are approximately equal (compared to a typical hadron mass), and the strong interaction is quark-flavour blind. This makes isospin symmetry a good approximation. References Hadrons
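The 2I + 1 counting rule above is easy to illustrate; here is a short sketch (the helper name is ours) that enumerates the allowed I3 values for a given isospin:

```python
from fractions import Fraction

def multiplet_states(I):
    """Allowed third components for isospin I: I3 = -I, -I+1, ..., +I (2I + 1 states)."""
    I = Fraction(I)
    return [-I + k for k in range(int(2 * I) + 1)]

# Nucleon doublet: I = 1/2 gives 2 states (neutron I3 = -1/2, proton I3 = +1/2)
nucleon = multiplet_states("1/2")
assert len(nucleon) == 2

# Delta baryons: I = 3/2 gives a quadruplet of 4 states
delta = multiplet_states("3/2")
assert len(delta) == 4 and delta[0] == Fraction(-3, 2)
```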
Isospin multiplet
Physics
264
2,248,681
https://en.wikipedia.org/wiki/Range%20state
Range state is a term generally used in zoogeography and conservation biology to refer to any nation that exercises jurisdiction over any part of a range which a particular species, taxon or biotope inhabits, or crosses or overflies at any time on its normal migration route. The term is often expanded to also include, particularly in international waters, any nation with vessels flying its flag that engage in exploitation (e.g. hunting, fishing, capturing) of that species. Countries in which a species occurs only as a vagrant or ‘accidental’ visitor outside of its normal range or migration route are not usually considered range states. Because governmental conservation policy is often formulated on a national scale, and because in most countries both governmental and private conservation organisations are also organised at the national level, the range state concept is often used by international conservation organizations in formulating their conservation and campaigning policy. An example of one such organization is the Convention on the Conservation of Migratory Species of Wild Animals (CMS, or the “Bonn Convention”). It is a multilateral treaty focusing on the conservation of critically endangered and threatened migratory species, their habitats and their migration routes. Because such habitats and/or migration routes may span national boundaries, conservation efforts are less likely to succeed without the cooperation, participation, and coordination of each of the range states. External links Bonn Convention (CMS) — Text of Convention Agreement Bonn Convention (CMS): List of Range States for Critically Endangered Migratory Species References Conservation biology Biogeography Biology terminology Endangered species
Range state
Biology
309
31,202,359
https://en.wikipedia.org/wiki/LncRNAdb
In bioinformatics, lncRNAdb is a biological database of long non-coding RNAs (lncRNAs). The database focuses on those RNAs which have been experimentally characterised with a biological function. The database currently holds over 290 lncRNAs from around 60 species. Example lncRNAs in the database are HOTAIR and Xist. References External links RNA Non-coding RNA Biological databases
LncRNAdb
Biology
81
78,082,546
https://en.wikipedia.org/wiki/HD%2015337
HD 15337 (TOI-402) is a star with two orbiting exoplanets in the southern constellation of Fornax. It has an apparent magnitude of 9.09, making it too faint to be observed by the naked eye from Earth, but readily visible using a small telescope. It is located distant based on stellar parallax, and is currently heading towards the Solar System with a radial velocity of −3.9 km/s. The star is about 15% smaller than the Sun in both mass and radius and radiates slightly less than half the Sun's luminosity from its photosphere. It has a spectral type of K1V and an effective temperature of , giving the star an orange hue. It is billion years old, making it much older than the Solar System. The star has a solar-like metallicity and displays similar amounts of stellar activity to the Sun, though when the star was only 150 million years old, it may have emitted between 3.7 and 127 times the high-energy luminosity of the Sun in the present day. Planetary system In May 2019, a pair of exoplanets were discovered to revolve around HD 15337 through transit observations by the TESS space telescope, namely HD 15337 b and c. The two planets are far closer to their host star than Mercury is to the Sun (0.3871 AU), which heats them up to equilibrium temperatures of and , respectively, both of which are hot enough to melt lead ( ). The inner planet, HD 15337 b, has a radius of 1.770 and a mass of 6.519 . This places its density at , meaning it is denser than Earth () and very likely to be a rocky super-Earth. The outer planet, c, is only slightly more massive than b at 6.792 , but possesses a radius over 40% larger, which makes it much less dense at , suggesting a mini-Neptune-like composition with a thick (>0.01 ) gaseous envelope probably consisting of hydrogen and helium. 
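The density comparison in the paragraph above follows directly from the quoted masses and radii; here is a minimal sketch of the calculation (Earth's mean bulk density of 5.51 g/cm³ is supplied by us, not by the text):

```python
EARTH_DENSITY = 5.51  # g/cm^3, Earth's mean bulk density

def bulk_density(mass_earth, radius_earth):
    """Bulk density in g/cm^3 for mass and radius given in Earth units.
    Since rho = M / (4/3 * pi * R^3), the ratio to Earth's density is
    (M / M_earth) / (R / R_earth)^3."""
    return EARTH_DENSITY * mass_earth / radius_earth ** 3

# HD 15337 b: 6.519 Earth masses and 1.770 Earth radii (figures from the text)
rho_b = bulk_density(6.519, 1.770)
assert rho_b > EARTH_DENSITY  # denser than Earth, consistent with a rocky super-Earth
```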
This striking difference in the structure of the two planets in spite of their similar masses implies that the two planets are on opposite sides of the small planet radius gap, making the HD 15337 system a prime target for research in planetary formation and evolution. In 2024, the planetary parameters of both planets were precisely gauged through photometric observations by CHEOPS and radial velocity measurements by HARPS. As a result, the uncertainties of HD 15337 b's mass and radius were reduced to less than 2% and 7%, respectively, placing the planet among the most accurately characterized terrestrial exoplanets at the time. Additionally, the radius of HD 15337 c was constrained to within a 3% margin of error. See also Kepler-93b: another precisely characterized hot super-Earth. Notes References K-type main-sequence stars Planetary systems with two confirmed planets Fornax CD-28 00784 015337 011433
HD 15337
Astronomy
618
948,685
https://en.wikipedia.org/wiki/TrES-1b
TrES-1b is an extrasolar planet approximately 523 light-years away in the constellation of Lyra (the Lyre). The planet's mass and radius indicate that it is a Jovian planet with a similar bulk composition to Jupiter. Unlike Jupiter, but similar to many other planets detected around other stars, TrES-1 is located very close to its star, and belongs to the class of planets known as hot Jupiters. The planet was discovered orbiting around GSC 02652-01324 (an orange dwarf star). Detection and discovery TrES-1b was discovered by the Trans-Atlantic Exoplanet Survey, which detected the transit of the planet across its parent star using a telescope. The discovery was confirmed by the Keck Observatory using the radial velocity method, which allowed its mass to be determined. Transit On March 22, 2005, astronomers using NASA's Spitzer Space Telescope took advantage of this fact to directly capture the infrared light of two previously detected planets orbiting outside our solar system. Their findings revealed the temperatures and orbits of the planets. Upcoming Spitzer observations using a variety of infrared wavelengths may provide more information about the planets' winds and atmospheric compositions. It enabled determination of TrES-1's temperature, which is in excess of 1000 K (1340 °F). The planet's Bond albedo was found to be 0.31 ± 0.14. In the infrared panel, the colors reflect what our eyes might see if we could retune them to the invisible, infrared portion of the light spectrum. The hot star is less bright in infrared light than in visible light and appears fainter. The warm planet peaks in infrared light, so it is shown brighter. Their hues represent relative differences in temperature. Because the star is hotter than the planet, and because hotter objects give off more blue light than red, the star is depicted in blue, and the planet, red. The overall look of the planet is inspired by theoretical models of hot, gas giant planets. 
These "hot Jupiters" are similar to Jupiter in composition and mass, but are expected to look quite different at such high temperatures. Radial velocity The transit light-curve signature was detected in the course of the TrES multi-site transiting planet survey, and the planetary nature of the companion was confirmed via multicolor photometry and precise radial velocity measurements. The planet has an orbital period similar to that of HD 209458 b, but about twice as long as those of the Optical Gravitational Lensing Experiment (OGLE) transiting planets. Its mass is similar to that of HD 209458 b, but its radius is significantly smaller and fits the theoretical models without the need for an additional source of heat deep in the atmosphere, as has been invoked by some investigators for HD 209458 b. Rotation The spin-orbit angle was measured using the Rossiter–McLaughlin effect to be +30° in 2007, and the measurement had not been updated as of 2012. Physical characteristics Hubble observations might find water in TrES-1, give a much more precise measurement of the planet's size, and even allow a search for moons. A satellite is unlikely, however, given the planet's likely history and current orbital configuration, the research team concluded. There are only 11 exomoon candidates around 8 exoplanets, but some researchers have considered that such satellites would be logical places for life to exist around giant gaseous worlds that otherwise could not be expected to support biology. Models indicate that TrES-1 has undergone significant tidal heating in the past due to its eccentric orbit, but this does not appear to have inflated the planet's radius. See also Trans-Atlantic Exoplanet Survey 51 Pegasi b HD 209458 b Hot Jupiter TrES-2b References External links AAVSO Variable Star Of The Season. Fall 2004: The Transiting Exoplanets HD 209458 and TrES-1 Hot Jupiters Lyra Transiting exoplanets Exoplanets discovered in 2004 Giant planets
TrES-1b
Astronomy
811
12,429,429
https://en.wikipedia.org/wiki/HD%2053143
HD 53143 is a star in the Carina constellation, located about from the Earth. With an apparent visual magnitude of 6.80, this star is a challenge to view with the naked eye even under ideal viewing conditions. Using the technique of gyrochronology, which measures the age of a low-mass star based on its rotation, HD 53143 is about old. Depending on the source, the stellar classification for this star is G9 V or K1V, placing it near the borderline between G-type and K-type main sequence stars. In either case, it is generating energy through the thermonuclear fusion of hydrogen at its core. This star is smaller than the Sun, with about 85% of the Sun's radius. It is emitting only 70% of the Sun's luminosity. The effective temperature of the star's outer envelope is cooler than the Sun at 5,224 K, giving it a golden-orange hue. Debris disk Based upon an excess of infrared emission, a circumstellar debris disk has been found in this system. This disk is inclined at an angle of about 40–50° to the line of sight from the Earth and it has an estimated mass of more than . (For comparison, the mass of the Moon is 7.3477 × 1022 kg.) This is one of the oldest known debris disk systems and hence may be replenished through the collision of larger bodies. The observed inner edge of the disk is at a distance of 55 Astronomical Units (AU) from the host star, while it stretches out to twice that distance, or 110 AU. This debris disk may extend outside this range, as the measurements are limited by the sensitivity of the instruments. The dust appears evenly distributed with no indication of clumping. The eccentricity of the ring is also one of the highest known, at 0.21. References Carina (constellation) G-type main-sequence stars K-type main-sequence stars 053143 Circumstellar disks Durchmusterung objects 0260 003690
HD 53143
Astronomy
436
18,425,279
https://en.wikipedia.org/wiki/Help%20Conquer%20Cancer
Help Conquer Cancer is a volunteer computing project that runs on the BOINC platform. It is a joint project of the Ontario Cancer Institute and the Hauptman-Woodward Medical Research Institute. It is also the first project under World Community Grid to run with a GPU counterpart. Project Purpose The goal is to enhance the efficiency of protein X-ray crystallography, which will enable researchers to determine the structure of many cancer-related proteins faster. This will lead to improving the understanding of the function of these proteins, and accelerate the development of new pharmaceutical drugs. See also BOINC List of volunteer computing projects World Community Grid References External links Help Conquer Cancer Berkeley Open Infrastructure for Network Computing projects Science in society Free science software Cancer organizations based in Canada Volunteer computing projects
Help Conquer Cancer
Technology
155
45,219,931
https://en.wikipedia.org/wiki/CCID%20%28protocol%29
CCID (chip card interface device) protocol is a USB protocol that allows a smartcard to be connected to a computer via a card reader using a standard USB interface, without the need for each manufacturer of smartcards to provide its own reader or protocol. This allows the smartcard to be used as a security token for authentication and data encryption, such as that used in BitLocker. Chip card interface devices come in a variety of forms. The smallest CCID form is a standard USB dongle and may contain a SIM card or Secure Digital card inside the USB dongle. Another popular interface is a USB smart card reader keyboard, which, in addition to being a standard USB keyboard, has a built-in slot for accepting a smartcard. However, not all CCID compliant devices accept removable smartcards; for example, select Yubikey hardware authentication devices support CCID, where they play the role of both the card reader and the smartcard itself. Hardware implementation According to the CCID specification by the USB standards work group, a CCID exchanges information with a host computer over USB by using a CCID message that consists of a 10-byte header followed by message-specific data. The standard defines fourteen commands that the host computer can use to send data and status and control information in messages. Every command requires at least one response message from the CCID. Software driver Microsoft Windows has included native CCID driver support beginning with Windows 2000. Apple has included some form of native CCID support since Mac OS X, with support evolving alongside Common Access Card and Personal Identity Verification specifications set by the US Federal Government. Apple has included native CCID support on iOS since 16.0 and iPadOS since 16.1. On Linux and other Unixes, CCID and CT-API devices are usually accessed with user-space drivers, for which no special kernel adaptation is required. 
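The 10-byte header mentioned above can be sketched in a few lines. The field layout (bMessageType, a little-endian dwLength giving the data length, bSlot, bSeq, then three message-specific bytes) and the PC_to_RDR_XfrBlock message type value 0x6F reflect our reading of the USB CCID specification; treat this as an illustration, not a reference implementation:

```python
import struct

def ccid_message(msg_type, slot, seq, data=b"", params=b"\x00\x00\x00"):
    """Build a CCID message: a 10-byte header followed by message-specific data.
    Header: bMessageType (1 byte), dwLength (4 bytes, little-endian length of
    the data field), bSlot (1), bSeq (1), and 3 message-specific bytes."""
    assert len(params) == 3
    header = struct.pack("<BIBB3s", msg_type, len(data), slot, seq, params)
    return header + data

# PC_to_RDR_XfrBlock (0x6F, per the spec as we understand it): send an APDU
# to the card in slot 0; the APDU below is just an illustrative SELECT command.
apdu = bytes.fromhex("00A4040000")
msg = ccid_message(0x6F, slot=0, seq=1, data=apdu)
assert len(msg) == 10 + len(apdu)
```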
List of CCID providers Advanced Card Systems ActivIdentity Baltech Bit4id Blutronics srl Elyctis FEITIAN Technologies Gemalto Giesecke & Devrient HID Global JaCarta rf IDEAS SafeNet SecuTech Solutions SpringCard Verisign Yubico Reiner Kartenlesegeräte DUALi References Computer access control Microsoft Windows security technology Smart cards
CCID (protocol)
Engineering
464
1,540,333
https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius%20theorem
In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique eigenvalue of largest magnitude and that eigenvalue is real. The corresponding eigenvector can be chosen to have strictly positive components; the theorem also asserts a similar statement for certain classes of nonnegative matrices. This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem, Hawkins–Simon condition); to demography (Leslie population age distribution model); to social networks (DeGroot learning process); to Internet search engines (PageRank); and even to ranking of American football teams. The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors was Edmund Landau. Statement Let positive and non-negative respectively describe matrices with exclusively positive real numbers as elements and matrices with exclusively non-negative real numbers as elements. The eigenvalues of a real square matrix A are complex numbers that make up the spectrum of the matrix. The exponential growth rate of the matrix powers Ak as k → ∞ is controlled by the eigenvalue of A with the largest absolute value (modulus). The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors when A is a non-negative real square matrix. Early results were due to Perron (1907) and concerned positive matrices. Later, Frobenius (1912) found their extension to certain classes of non-negative matrices. Positive matrices Let be an positive matrix: for . Then the following statements hold. 
There is a positive real number r, called the Perron root or the Perron–Frobenius eigenvalue (also called the leading eigenvalue, principal eigenvalue or dominant eigenvalue), such that r is an eigenvalue of A and any other eigenvalue λ (possibly complex) is strictly smaller than r in absolute value, |λ| < r. Thus, the spectral radius is equal to r. If the matrix coefficients are algebraic, this implies that the eigenvalue is a Perron number. The Perron–Frobenius eigenvalue is simple: r is a simple root of the characteristic polynomial of A. Consequently, the eigenspace associated to r is one-dimensional. (The same is true for the left eigenspace, i.e., the eigenspace for AT, the transpose of A.) There exists an eigenvector v = (v1,...,vn)T of A with eigenvalue r such that all components of v are positive: A v = r v, vi > 0 for 1 ≤ i ≤ n. (Respectively, there exists a positive left eigenvector w : wT A = wT r, wi > 0.) It is known in the literature under many variations as the Perron vector, Perron eigenvector, Perron–Frobenius eigenvector, leading eigenvector, principal eigenvector or dominant eigenvector. There are no other positive (moreover non-negative) eigenvectors except positive multiples of v (respectively, left eigenvectors except positive multiples of w), i.e., all other eigenvectors must have at least one negative or non-real component. limk→∞ Ak/rk = vwT, where the left and right eigenvectors for A are normalized so that wTv = 1. Moreover, the matrix vwT is the projection onto the eigenspace corresponding to r. This projection is called the Perron projection. Collatz–Wielandt formula: for all non-negative non-zero vectors x, let f(x) be the minimum value of [Ax]i / xi taken over all those i such that xi ≠ 0. Then f is a real valued function whose maximum over all non-negative non-zero vectors x is the Perron–Frobenius eigenvalue. 
A "Min-max" Collatz–Wielandt formula takes a form similar to the one above: for all strictly positive vectors x, let g(x) be the maximum value of [Ax]i / xi taken over i. Then g is a real valued function whose minimum over all strictly positive vectors x is the Perron–Frobenius eigenvalue. Birkhoff–Varga formula: Let x and y be strictly positive vectors. Then, Donsker–Varadhan–Friedland formula: Let p be a probability vector and x a strictly positive vector. Then, (Friedland, S., 1981. Convex spectral functions. Linear and Multilinear Algebra, 9(4), pp. 299–316.) Fiedler formula: The Perron–Frobenius eigenvalue satisfies the inequalities. All of these properties extend beyond strictly positive matrices to primitive matrices (see below). Facts 1–7 can be found in Meyer, chapter 8, claims 8.2.11–15, page 667, and exercises 8.2.5, 7, 9, pages 668–669. The left and right eigenvectors w and v are sometimes normalized so that the sum of their components is equal to 1; in this case, they are sometimes called stochastic eigenvectors. Often they are normalized so that the right eigenvector v sums to one, while . Non-negative matrices There is an extension to matrices with non-negative entries. Since any non-negative matrix can be obtained as a limit of positive matrices, one obtains the existence of an eigenvector with non-negative components; the corresponding eigenvalue will be non-negative and greater than or equal, in absolute value, to all other eigenvalues. However, for the example , the maximum eigenvalue r = 1 has the same absolute value as the other eigenvalue −1; while for , the maximum eigenvalue is r = 0, which is not a simple root of the characteristic polynomial, and the corresponding eigenvector (1, 0) is not strictly positive. However, Frobenius found a special subclass of non-negative matrices — irreducible matrices — for which a non-trivial generalization is possible. 
For such a matrix, although the eigenvalues attaining the maximal absolute value might not be unique, their structure is under control: they have the form , where is a real strictly positive eigenvalue, and ranges over the complex h th roots of 1 for some positive integer h called the period of the matrix. The eigenvector corresponding to has strictly positive components (in contrast with the general case of non-negative matrices, where components are only non-negative). Also all such eigenvalues are simple roots of the characteristic polynomial. Further properties are described below. Classification of matrices Let A be an n × n square matrix over field F. The matrix A is irreducible if any of the following equivalent properties holds. Definition 1: A does not have non-trivial invariant coordinate subspaces. Here a non-trivial coordinate subspace means a linear subspace spanned by any proper subset of standard basis vectors of Fn. More explicitly, for any linear subspace spanned by standard basis vectors ei1 , ..., eik, 0 < k < n, its image under the action of A is not contained in the same subspace. Definition 2: A cannot be conjugated into block upper triangular form by a permutation matrix P: where E and G are non-trivial (i.e. of size greater than zero) square matrices. Definition 3: One can associate with a matrix A a certain directed graph GA. It has n vertices labeled 1,...,n, and there is an edge from vertex i to vertex j precisely when aij ≠ 0. Then the matrix A is irreducible if and only if its associated graph GA is strongly connected. If F is the field of real or complex numbers, then we also have the following condition. Definition 4: The group representation of on or on given by has no non-trivial invariant coordinate subspaces. (By comparison, this would be an irreducible representation if there were no non-trivial invariant subspaces at all, not only considering coordinate subspaces.) A matrix is reducible if it is not irreducible. 
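Definition 3 translates directly into code: a sketch (pure Python, helper names ours) that tests irreducibility by checking strong connectivity of the associated digraph, here via reachability from one vertex both along and against the edges:

```python
from collections import deque

def is_irreducible(A):
    """A (n x n, nested lists) is irreducible iff the digraph with an edge
    i -> j whenever A[i][j] != 0 is strongly connected (Definition 3)."""
    n = len(A)
    def reaches_all(edge):
        # BFS from vertex 0 over the edge relation; True if all vertices seen
        seen, queue = {0}, deque([0])
        while queue:
            i = queue.popleft()
            for j in range(n):
                if edge(i, j) and j not in seen:
                    seen.add(j)
                    queue.append(j)
        return len(seen) == n
    # Strongly connected iff every vertex is reachable from vertex 0
    # both along the edges and against them.
    return reaches_all(lambda i, j: A[i][j] != 0) and \
           reaches_all(lambda i, j: A[j][i] != 0)

assert is_irreducible([[0, 1], [1, 0]])      # irreducible (and of period 2)
assert not is_irreducible([[1, 1], [0, 1]])  # block upper triangular: reducible
```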
A real matrix A is primitive if it is non-negative and its mth power is positive for some natural number m (i.e. all entries of Am are positive). Let A be real and non-negative. Fix an index i and define the period of index i to be the greatest common divisor of all natural numbers m such that (Am)ii > 0. When A is irreducible, the period of every index is the same and is called the period of A. In fact, when A is irreducible, the period can be defined as the greatest common divisor of the lengths of the closed directed paths in GA (see Kitchens page 16). The period is also called the index of imprimitivity (Meyer page 674) or the order of cyclicity. If the period is 1, A is aperiodic. It can be proved that primitive matrices are the same as irreducible aperiodic non-negative matrices. All statements of the Perron–Frobenius theorem for positive matrices remain true for primitive matrices. The same statements also hold for a non-negative irreducible matrix, except that it may possess several eigenvalues whose absolute value is equal to its spectral radius, so the statements need to be correspondingly modified. In fact the number of such eigenvalues is equal to the period. Results for non-negative matrices were first obtained by Frobenius in 1912. Perron–Frobenius theorem for irreducible non-negative matrices Let be an irreducible non-negative matrix with period and spectral radius . Then the following statements hold. The number is a positive real number and it is an eigenvalue of the matrix . It is called the Perron–Frobenius eigenvalue. The Perron–Frobenius eigenvalue is simple. Both right and left eigenspaces associated with are one-dimensional. has both a right and a left eigenvector, respectively and , with eigenvalue , whose components are all positive. Moreover, the only eigenvectors whose components are all positive are those associated with the eigenvalue . The matrix has exactly (where is the period) complex eigenvalues with absolute value . 
Each of them is a simple root of the characteristic polynomial and is the product of with an th root of unity. Let . Then the matrix is similar to ; consequently the spectrum of is invariant under multiplication by (i.e. under rotations of the complex plane by the angle ). If then there exists a permutation matrix such that where denotes a zero matrix and the blocks along the main diagonal are square matrices. Collatz–Wielandt formula: for all non-negative non-zero vectors let be the minimum value of taken over all those such that . Then is a real valued function whose maximum is the Perron–Frobenius eigenvalue. The Perron–Frobenius eigenvalue satisfies the inequalities. The example shows that the (square) zero-matrices along the diagonal may be of different sizes, the blocks Aj need not be square, and h need not divide n. Further properties Let A be an irreducible non-negative matrix; then: (I+A)n−1 is a positive matrix (Meyer claim 8.3.5 p. 672). For a non-negative A, this is also a sufficient condition. Wielandt's theorem. If |B| < A, then ρ(B) ≤ ρ(A). If equality holds (i.e. if μ = ρ(A)eiφ is an eigenvalue of B), then B = eiφ DAD−1 for some diagonal unitary matrix D (i.e. the diagonal elements of D equal eiΘl, the non-diagonal elements are zero). If some power Aq is reducible, then it is completely reducible, i.e. for some permutation matrix P, it is true that: , where the Ai are irreducible matrices having the same maximal eigenvalue. The number of these matrices d is the greatest common divisor of q and h, where h is the period of A. If c(x) = xn + ck1 xn-k1 + ck2 xn-k2 + ... + cks xn-ks is the characteristic polynomial of A in which only the non-zero terms are listed, then the period of A equals the greatest common divisor of k1, k2, ..., ks. Cesàro averages: where the left and right eigenvectors for A are normalized so that wTv = 1. Moreover, the matrix v wT is the spectral projection corresponding to r, the Perron projection. 
Let r be the Perron–Frobenius eigenvalue; then the adjoint matrix for (r−A) is positive. If A has at least one non-zero diagonal element, then A is primitive. If 0 ≤ A < B, then rA ≤ rB. Moreover, if B is irreducible, then the inequality is strict: rA < rB. A matrix A is primitive provided it is non-negative and Am is positive for some m, and hence Ak is positive for all k ≥ m. To check primitivity, one needs a bound on how large the minimal such m can be, depending on the size of A: If A is a non-negative primitive matrix of size n, then An2 − 2n + 2 is positive. Moreover, this is the best possible result, since for the matrix M below, the power Mk is not positive for every k < n2 − 2n + 2, since (Mn2 − 2n+1)1,1 = 0. Applications Numerous books have been written on the subject of non-negative matrices, and Perron–Frobenius theory is invariably a central feature. The examples given below only scratch the surface of its vast application domain. Non-negative matrices The Perron–Frobenius theorem does not apply directly to non-negative matrices. Nevertheless, any reducible square matrix A may be written in upper-triangular block form (known as the normal form of a reducible matrix) PAP−1 = where P is a permutation matrix and each Bi is a square matrix that is either irreducible or zero. Now if A is non-negative then so too is each block of PAP−1; moreover, the spectrum of A is just the union of the spectra of the Bi. The invertibility of A can also be studied. The inverse of PAP−1 (if it exists) must have diagonal blocks of the form Bi−1, so if any Bi isn't invertible then neither is PAP−1 or A. Conversely, let D be the block-diagonal matrix corresponding to PAP−1, in other words PAP−1 with the asterisks zeroised. If each Bi is invertible then so is D, and D−1(PAP−1) is equal to the identity plus a nilpotent matrix. But such a matrix is always invertible (if Nk = 0 the inverse of 1 − N is 1 + N + N2 + ... + Nk−1) so PAP−1 and A are both invertible. 
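The geometric-series inverse in the last parenthetical remark is easy to verify numerically; a plain-Python sketch (helper names ours):

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inverse_one_minus_nilpotent(N, k):
    """If N^k = 0, then (I - N)^-1 = I + N + N^2 + ... + N^(k-1):
    multiplying the series by (I - N) telescopes to I - N^k = I."""
    n = len(N)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    total, power = I, I
    for _ in range(1, k):
        power = matmul(power, N)
        total = [[total[i][j] + power[i][j] for j in range(n)] for i in range(n)]
    return total

# A strictly upper triangular N is nilpotent: here N^2 = 0
N = [[0.0, 2.0], [0.0, 0.0]]
inv = inverse_one_minus_nilpotent(N, 2)          # I + N = [[1, 2], [0, 1]]
I_minus_N = [[1.0, -2.0], [0.0, 1.0]]
assert matmul(I_minus_N, inv) == [[1.0, 0.0], [0.0, 1.0]]
```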
Therefore, many of the spectral properties of A may be deduced by applying the theorem to the irreducible Bi. For example, the Perron root is the maximum of the ρ(Bi). While there will still be eigenvectors with non-negative components, it is quite possible that none of these will be positive. Stochastic matrices A row (column) stochastic matrix is a square matrix each of whose rows (columns) consists of non-negative real numbers whose sum is unity. The theorem cannot be applied directly to such matrices because they need not be irreducible. If A is row-stochastic then the column vector with each entry 1 is an eigenvector corresponding to the eigenvalue 1, which is also ρ(A) by the remark above. It might not be the only eigenvalue on the unit circle, and the associated eigenspace can be multi-dimensional. If A is row-stochastic and irreducible then the Perron projection is also row-stochastic and all its rows are equal. Algebraic graph theory The theorem has particular use in algebraic graph theory. The "underlying graph" of a nonnegative n-square matrix is the graph with vertices numbered 1, ..., n and arc ij if and only if Aij ≠ 0. If the underlying graph of such a matrix is strongly connected, then the matrix is irreducible, and thus the theorem applies. In particular, the adjacency matrix of a strongly connected graph is irreducible. Finite Markov chains The theorem has a natural interpretation in the theory of finite Markov chains (where it is the matrix-theoretic equivalent of the convergence of an irreducible finite Markov chain to its stationary distribution, formulated in terms of the transition matrix of the chain; see, for example, the article on the subshift of finite type). Compact operators More generally, it can be extended to the case of non-negative compact operators, which, in many ways, resemble finite-dimensional matrices. 
These are commonly studied in physics, under the name of transfer operators, or sometimes Ruelle–Perron–Frobenius operators (after David Ruelle). In this case, the leading eigenvalue corresponds to the thermodynamic equilibrium of a dynamical system, and the lesser eigenvalues to the decay modes of a system that is not in equilibrium. Thus, the theory offers a way of discovering the arrow of time in what would otherwise appear to be reversible, deterministic dynamical processes, when examined from the point of view of point-set topology. Proof methods A common thread in many proofs is the Brouwer fixed point theorem. Another popular method is that of Wielandt (1950), who used the Collatz–Wielandt formula described above to extend and clarify Frobenius's work. Another proof is based on the spectral theory, from which part of the arguments are borrowed. Perron root is strictly maximal eigenvalue for positive (and primitive) matrices If A is a positive (or more generally primitive) matrix, then there exists a real positive eigenvalue r (the Perron–Frobenius eigenvalue or Perron root), which is strictly greater in absolute value than all other eigenvalues; hence r is the spectral radius of A. This statement does not hold for general non-negative irreducible matrices, which have h eigenvalues with the same absolute value as r, where h is the period of A. Proof for positive matrices Let A be a positive matrix and assume that its spectral radius ρ(A) = 1 (otherwise consider A/ρ(A)). Hence, there exists an eigenvalue λ on the unit circle, and all the other eigenvalues are less than or equal to 1 in absolute value. Suppose that another eigenvalue λ ≠ 1 also falls on the unit circle. Then there exists a positive integer m such that A^m is a positive matrix and the real part of λ^m is negative. Let ε be half the smallest diagonal entry of A^m and set T = A^m − εI, which is yet another positive matrix. Moreover, if Ax = λx then A^m x = λ^m x, thus λ^m − ε is an eigenvalue of T.
Because of the choice of m, this point lies outside the unit disk, and consequently ρ(T) > 1. On the other hand, all the entries in T are positive and less than or equal to those in A^m, so by Gelfand's formula ρ(T) ≤ ρ(A^m) ≤ ρ(A)^m = 1. This contradiction means that λ = 1 and there can be no other eigenvalues on the unit circle. The same arguments can be applied to the case of primitive matrices; we just need the following simple lemma, which clarifies the properties of primitive matrices. Lemma Given a non-negative A, assume there exists m such that A^m is positive; then A^(m+1), A^(m+2), A^(m+3), ... are all positive. Indeed, A^(m+1) = A·A^m, so it can have a zero element only if some row of A is entirely zero; but in that case the same row of A^m would be zero, contradicting the positivity of A^m. With this lemma, the same argument as above proves the main claim for primitive matrices. Power method and the positive eigenpair For a positive (or more generally irreducible non-negative) matrix A the dominant eigenvector is real and strictly positive (for a merely non-negative A it is, respectively, non-negative). This can be established using the power method, which states that for a sufficiently generic (in the sense below) matrix A the sequence of vectors b_(k+1) = Ab_k / |Ab_k| converges to the eigenvector with the maximum eigenvalue. (The initial vector b_0 can be chosen arbitrarily except for some measure zero set.) Starting with a non-negative vector b_0 produces a sequence of non-negative vectors b_k. Hence the limiting vector is also non-negative. By the power method this limiting vector is the dominant eigenvector for A, proving the assertion. The corresponding eigenvalue is non-negative. The proof requires two additional arguments. First, the power method converges for matrices which do not have several eigenvalues of the same absolute value as the maximal one; the previous section's argument guarantees this. Second, strict positivity of all of the components of the eigenvector must be ensured in the case of irreducible matrices.
This follows from the following fact, which is of independent interest: Lemma: given a positive (or more generally irreducible non-negative) matrix A and any non-negative eigenvector v for A, v is necessarily strictly positive and the corresponding eigenvalue is also strictly positive. Proof. One of the definitions of irreducibility for non-negative matrices is that for all indexes i, j there exists m such that (A^m)_ij is strictly positive. Let v be a non-negative eigenvector with at least one strictly positive component, say the i-th. The corresponding eigenvalue is strictly positive: indeed, given n such that (A^n)_ii > 0, one has r^n v_i = (A^n v)_i ≥ (A^n)_ii v_i > 0. Hence r is strictly positive. The eigenvector is strictly positive: given j, take m such that (A^m)_ji > 0; then r^m v_j = (A^m v)_j ≥ (A^m)_ji v_i > 0, hence v_j is strictly positive, i.e., the eigenvector is strictly positive. Multiplicity one This section proves that the Perron–Frobenius eigenvalue is a simple root of the characteristic polynomial of the matrix. Hence the eigenspace associated to the Perron–Frobenius eigenvalue r is one-dimensional. The arguments here are close to those in Meyer. Given a strictly positive eigenvector v corresponding to r, suppose w is another eigenvector with the same eigenvalue. (The vectors v and w can be chosen to be real, because A and r are both real, so the null space of A − r has a basis consisting of real vectors.) Assume at least one of the components of w is positive (otherwise multiply w by −1). Take the maximal possible α such that u = v − αw is non-negative; then one of the components of u is zero (otherwise α would not be maximal). The vector u is an eigenvector. If it were non-zero, then being non-negative it would, by the lemma of the previous section, be strictly positive; but at least one of its components is zero. This contradiction implies u = 0, so w is a multiple of v and the eigenspace is one-dimensional.
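The power-iteration construction b_(k+1) = Ab_k / |Ab_k| described above is easy to try numerically. A minimal sketch (the example matrix, tolerance and iteration cap are illustrative choices, not from the source), starting from a non-negative vector and recovering a strictly positive dominant eigenvector together with the Perron root:

```python
import numpy as np

def power_method(A, tol=1e-12, max_iter=10000):
    """Power iteration b_{k+1} = A b_k / |A b_k|, started from a
    non-negative vector, so every iterate stays non-negative."""
    b = np.ones(A.shape[0])          # non-negative start vector
    for _ in range(max_iter):
        nb = A @ b
        nb /= np.linalg.norm(nb)
        if np.linalg.norm(nb - b) < tol:
            break
        b = nb
    return b

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])           # an arbitrary positive example matrix
v = power_method(A)
r = v @ A @ v                        # Rayleigh quotient of the unit limit vector
print(np.all(v > 0))                 # True: the limit is strictly positive
print(np.isclose(r, max(np.linalg.eigvals(A).real)))   # True: r is the Perron root
```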
Case: There are no Jordan blocks corresponding to the Perron–Frobenius eigenvalue r, nor to any other eigenvalue of the same absolute value. If there is a Jordan block, then the infinity norm ‖(A/r)^k‖∞ tends to infinity for k → ∞, but that contradicts the existence of the positive eigenvector. Assume r = 1 (otherwise consider A/r). Let v be a strictly positive Perron–Frobenius eigenvector, so Av = v; then A^k v = v for all k, and since A^k is non-negative, (A^k)_ij v_j ≤ (A^k v)_i = v_i, hence (A^k)_ij ≤ v_i / v_j ≤ (max_i v_i)/(min_j v_j). So ‖A^k‖∞ is bounded for all k. This gives another proof that there are no eigenvalues which have greater absolute value than the Perron–Frobenius one. It also contradicts the existence of a Jordan block for any eigenvalue which has absolute value equal to 1 (in particular for the Perron–Frobenius one), because existence of a Jordan block implies that ‖A^k‖∞ is unbounded. For a two-by-two Jordan block J = (λ 1; 0 λ) one has J^k = (λ^k kλ^(k−1); 0 λ^k), hence ‖J^k‖∞ = |λ|^k + k|λ|^(k−1) = k + 1 (for |λ| = 1), so it tends to infinity as k does. Since J^k = C^−1 A^k C, it follows that ‖A^k‖ ≥ ‖J^k‖ / (‖C^−1‖ ‖C‖), so ‖A^k‖ also tends to infinity. The resulting contradiction implies that there are no Jordan blocks for the corresponding eigenvalues. Combining the two claims above reveals that the Perron–Frobenius eigenvalue r is a simple root of the characteristic polynomial. In the case of non-primitive matrices, there exist other eigenvalues which have the same absolute value as r. The same claim is true for them, but requires more work. No other non-negative eigenvectors Given a positive (or more generally irreducible non-negative) matrix A, the Perron–Frobenius eigenvector is the only (up to multiplication by a constant) non-negative eigenvector for A. Other eigenvectors must contain negative or complex components: eigenvectors for different eigenvalues are orthogonal in some sense, and two positive eigenvectors cannot be orthogonal, so a second non-negative eigenvector would have to correspond to the same eigenvalue; but the eigenspace for the Perron–Frobenius eigenvalue is one-dimensional.
Assume there exists an eigenpair (λ, y) for A such that the vector y is non-negative and non-zero, and let (r, x) be the pair where x is the left Perron–Frobenius eigenvector for A (i.e. an eigenvector for A^T). Then r x^T y = (x^T A) y = x^T (A y) = λ x^T y; also x^T y > 0, so one has r = λ. Since the eigenspace for the Perron–Frobenius eigenvalue r is one-dimensional, the non-negative eigenvector y is a multiple of the Perron–Frobenius one. Collatz–Wielandt formula Given a positive (or more generally irreducible non-negative) matrix A, one defines the function f on the set of all non-negative non-zero vectors x such that f(x) is the minimum value of [Ax]_i / x_i taken over all those i such that x_i ≠ 0. Then f is a real-valued function whose maximum is the Perron–Frobenius eigenvalue r. For the proof we denote the maximum of f by the value R; the proof requires showing that R = r. Inserting the Perron–Frobenius eigenvector v into f, we obtain f(v) = r and conclude r ≤ R. For the opposite inequality, we consider an arbitrary non-negative vector x and let ξ = f(x). The definition of f gives 0 ≤ ξx ≤ Ax (componentwise). Now we use the positive left eigenvector w for A for the Perron–Frobenius eigenvalue r (so w^T A = r w^T); then ξ w^T x = w^T ξx ≤ w^T (Ax) = (w^T A)x = r w^T x. Hence f(x) = ξ ≤ r, which implies R ≤ r. Perron projection as a limit: A^k/r^k Let A be a positive (or more generally, primitive) matrix, and let r be its Perron–Frobenius eigenvalue. There exists a limit of A^k/r^k for k → ∞; denote it by P. P is a projection operator: P² = P, which commutes with A: AP = PA. The image of P is one-dimensional and spanned by the Perron–Frobenius eigenvector v (respectively for P^T — by the Perron–Frobenius eigenvector w for A^T). P = vw^T, where v, w are normalized such that w^T v = 1. Hence P is a positive operator. Hence P is a spectral projection for the Perron–Frobenius eigenvalue r, and is called the Perron projection. The above assertion is not true for general non-negative irreducible matrices.
Actually the claims above (except claim 5) are valid for any matrix M such that there exists an eigenvalue r which is strictly greater than the other eigenvalues in absolute value and is a simple root of the characteristic polynomial. (These requirements hold for primitive matrices as above.) If M is diagonalizable, it is conjugate to a diagonal matrix with eigenvalues r_1, ..., r_n on the diagonal (denote r_1 = r). The matrix M^k/r^k will be conjugate to the diagonal matrix diag(1, (r_2/r)^k, ..., (r_n/r)^k), which tends to diag(1, 0, 0, ..., 0) for k → ∞, so the limit exists. The same method works for general M (without assuming that M is diagonalizable). The projection and commutativity properties are elementary corollaries of the definition: M·M^k/r^k = M^k/r^k·M; P² = lim M^(2k)/r^(2k) = P. The third fact is also elementary: M(Pu) = M lim M^k/r^k u = lim rM^(k+1)/r^(k+1) u, so taking the limit yields M(Pu) = r(Pu); hence the image of P lies in the r-eigenspace for M, which is one-dimensional by the assumptions. Denote by v the r-eigenvector for M (and by w the one for M^T). The columns of P are multiples of v, because the image of P is spanned by it; respectively, the rows of P are multiples of w^T. So P takes the form a·vw^T for some a, and hence its trace equals a·(w^T v). The trace of a projector equals the dimension of its image, which was shown above to be at most one-dimensional; and since from the definition P acts as the identity on the r-eigenvector for M, the image is exactly one-dimensional. So choosing v, w with (w^T v) = 1 implies P = vw^T. Inequalities for Perron–Frobenius eigenvalue For any non-negative matrix A its Perron–Frobenius eigenvalue r satisfies the inequality r ≤ max_i Σ_j a_ij. This is not specific to non-negative matrices: for any matrix A with an eigenvalue λ it is true that |λ| ≤ max_i Σ_j |a_ij|. This is an immediate corollary of the Gershgorin circle theorem. However another proof is more direct: any induced matrix norm satisfies the inequality ‖A‖ ≥ |λ| for any eigenvalue λ, because, if x is a corresponding eigenvector, ‖A‖ ≥ ‖Ax‖/‖x‖ = ‖λx‖/‖x‖ = |λ|.
The infinity norm of a matrix is the maximum of its row sums: ‖A‖∞ = max_(1≤i≤n) Σ_(1≤j≤n) |a_ij|. Hence the desired inequality is exactly ρ(A) ≤ ‖A‖∞ applied to the non-negative matrix A. Another inequality is min_i Σ_j a_ij ≤ r. This fact is specific to non-negative matrices; for general matrices there is nothing similar. Given that A is positive (not just non-negative), there exists a positive eigenvector w such that Aw = rw and the smallest component of w (say w_i) is 1. Then r = (Aw)_i ≥ the sum of the numbers in row i of A. Thus the minimum row sum gives a lower bound for r, and this observation can be extended to all non-negative matrices by continuity. Another way to argue it is via the Collatz–Wielandt formula: one takes the vector x = (1, 1, ..., 1) and immediately obtains the inequality. Further proofs Perron projection The proof now proceeds using spectral decomposition. The trick here is to split the Perron root from the other eigenvalues. The spectral projection associated with the Perron root is called the Perron projection and it enjoys the following property: The Perron projection of an irreducible non-negative square matrix is a positive matrix. Perron's findings and also (1)–(5) of the theorem are corollaries of this result. The key point is that a positive projection always has rank one. This means that if A is an irreducible non-negative square matrix then the algebraic and geometric multiplicities of its Perron root are both one. Also if P is its Perron projection then AP = PA = ρ(A)P, so every column of P is a positive right eigenvector of A and every row is a positive left eigenvector. Moreover, if Ax = λx then PAx = λPx = ρ(A)Px, which means Px = 0 if λ ≠ ρ(A). Thus the only positive eigenvectors are those associated with ρ(A). If A is a primitive matrix with ρ(A) = 1 then it can be decomposed as P ⊕ (1 − P)A so that A^n = P + (1 − P)A^n. As n increases the second of these terms decays to zero, leaving P as the limit of A^n as n → ∞.
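The Collatz–Wielandt function, the row-sum bounds, and the limit A^k/r^k = vw^T discussed above can all be verified on a small positive matrix. A sketch under illustrative assumptions (the matrix, with Perron root 4 and right eigenvector (2, 3), and the power 60 are arbitrary choices, not from the source):

```python
import numpy as np

def cw(A, x):
    """Collatz-Wielandt function: min over i with x_i != 0 of (Ax)_i / x_i."""
    x = np.asarray(x, dtype=float)
    mask = x != 0
    return ((A @ x)[mask] / x[mask]).min()

A = np.array([[1.0, 2.0],
              [3.0, 2.0]])   # Perron root r = 4, right eigenvector v = (2, 3)
r = 4.0

# Row-sum bounds: min row sum <= r <= max row sum (here 3 <= 4 <= 5).
assert A.sum(axis=1).min() <= r <= A.sum(axis=1).max()
# cw(x) <= r for non-negative x; the all-ones vector reproduces the
# minimum-row-sum lower bound, and equality holds at the Perron eigenvector.
assert cw(A, [1, 1]) == A.sum(axis=1).min()
print(cw(A, [2, 3]))                       # 4.0

# Perron projection as a limit: (A/r)^k -> v w^T with w^T v = 1.
P = np.linalg.matrix_power(A / r, 60)
v = np.array([2.0, 3.0])                   # right eigenvector: Av = 4v
w = np.array([1.0, 1.0])                   # left eigenvector: w^T A = 4 w^T
print(np.allclose(P, np.outer(v, w) / (w @ v)))   # True
print(np.allclose(P @ P, P))                      # True: P is a projection
```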
The power method is a convenient way to compute the Perron projection of a primitive matrix. If v and w are the positive row and column vectors that it generates then the Perron projection is just wv/vw. The spectral projections aren't neatly blocked as in the Jordan form. Here they are overlaid and each generally has complex entries extending to all four corners of the square matrix. Nevertheless, they retain their mutual orthogonality, which is what facilitates the decomposition. Peripheral projection The analysis when A is irreducible and non-negative is broadly similar. The Perron projection is still positive, but there may now be other eigenvalues of modulus ρ(A) that negate use of the power method and prevent the powers of (1 − P)A decaying as in the primitive case whenever ρ(A) = 1. So we consider the peripheral projection, which is the spectral projection of A corresponding to all the eigenvalues that have modulus ρ(A). It may then be shown that the peripheral projection of an irreducible non-negative square matrix is a non-negative matrix with a positive diagonal. Cyclicity Suppose in addition that ρ(A) = 1 and A has h eigenvalues on the unit circle. If P is the peripheral projection then the matrix R = AP = PA is non-negative and irreducible, R^h = P, and the cyclic group P, R, R², ..., R^(h−1) represents the harmonics of A. The spectral projection of A at the eigenvalue λ on the unit circle is given by the formula (1/h) Σ_(k=1..h) λ^(−k) R^k. All of these projections (including the Perron projection) have the same positive diagonal; moreover, choosing any one of them and then taking the modulus of every entry invariably yields the Perron projection. Some donkey work is still needed in order to establish the cyclic properties (6)–(8) but it's essentially just a matter of turning the handle. The spectral decomposition of A is given by A = R ⊕ (1 − P)A, so the difference between A^n and R^n is A^n − R^n = (1 − P)A^n, representing the transients of A^n, which eventually decay to zero.
P may be computed as the limit of A^(nh) as n → ∞. Counterexamples The matrices L = (1 0 0; 1 0 0; 1 1 1), P = (1 0 0; 1 0 0; −1 1 1), T = (0 1 1; 1 0 1; 1 1 0) and M = (0 1 0 0 0; 1 0 0 0 0; 0 0 0 1 0; 0 0 0 0 1; 0 0 1 0 0) provide simple examples of what can go wrong if the necessary conditions are not met. It is easily seen that the Perron and peripheral projections of L are both equal to P; thus when the original matrix is reducible the projections may lose non-negativity, and there is no chance of expressing them as limits of its powers. The matrix T is an example of a primitive matrix with zero diagonal. If the diagonal of an irreducible non-negative square matrix is non-zero then the matrix must be primitive, but this example demonstrates that the converse is false. M is an example of a matrix with several missing spectral teeth. If ω = e^(iπ/3), then ω^6 = 1 and the eigenvalues of M are {1, ω², ω³ = −1, ω⁴}, with a dimension-2 eigenspace for +1, so ω and ω⁵ are both absent. More precisely, since M is block-diagonal cyclic, the eigenvalues are {1, −1} for the first block and {1, ω², ω⁴} for the lower one. Terminology A problem that causes confusion is a lack of standardisation in the definitions. For example, some authors use the terms strictly positive and positive to mean > 0 and ≥ 0 respectively. In this article positive means > 0 and non-negative means ≥ 0. Another vexed area concerns decomposability and reducibility: irreducible is an overloaded term. For avoidance of doubt a non-zero non-negative square matrix A such that 1 + A is primitive is sometimes said to be connected. Then irreducible non-negative square matrices and connected matrices are synonymous. The non-negative eigenvector is often normalized so that the sum of its components is equal to unity; in this case, the eigenvector is the vector of a probability distribution and is sometimes called a stochastic eigenvector. Perron–Frobenius eigenvalue and dominant eigenvalue are alternative names for the Perron root. Spectral projections are also known as spectral projectors and spectral idempotents.
The period is sometimes referred to as the index of imprimitivity or the order of cyclicity. See also Metzler matrix (Quasipositive matrix) Notes References (The 1959 edition had a different title: "Applications of the theory of matrices". The numeration of chapters is also different in the two editions.) Further reading Abraham Berman, Robert J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, 1994, SIAM. Chris Godsil and Gordon Royle, Algebraic Graph Theory, Springer, 2001. A. Graham, Nonnegative Matrices and Applicable Topics in Linear Algebra, John Wiley & Sons, New York, 1987. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, 1990. Bas Lemmens and Roger Nussbaum, Nonlinear Perron-Frobenius Theory, Cambridge Tracts in Mathematics 189, Cambridge Univ. Press, 2012. S. P. Meyn and R. L. Tweedie, Markov Chains and Stochastic Stability, London: Springer-Verlag, 1993. (2nd edition, Cambridge University Press, 2009.) Seneta, E., Non-negative Matrices and Markov Chains, 2nd rev. ed., 1981, XVI, 288 p., Softcover, Springer Series in Statistics. (Originally published by Allen & Unwin Ltd., London, 1973.) (The claim that A^j has order n/h at the end of the statement of the theorem is incorrect.) Matrix theory Theorems in linear algebra Markov processes
Perron–Frobenius theorem
https://en.wikipedia.org/wiki/Flat%20Display%20Mounting%20Interface
The Flat Display Mounting Interface (FDMI), also known as the VESA Mounting Interface Standard (MIS) or colloquially as the VESA mount, is a family of standards defined by the Video Electronics Standards Association for mounting flat panel monitors, televisions, and other displays to stands or wall mounts. It is implemented on most modern flat-panel monitors and televisions. As well as being used for mounting monitors, the standards can be used to attach a small PC to the monitor mount. The first standard in this family was introduced in 1997 and was originally called the Flat Panel Monitor Physical Mounting Interface (FPMPMI); it corresponds to part D of the current standard. Variants Most sizes of VESA mount have four screw holes arranged in a square on the mount, with matching tapped holes on the device. The horizontal and vertical distances between the screw centres are labelled 'A' and 'B' respectively. The original layout was a square of 100 mm. A smaller square pattern was defined later for smaller displays, and further variants were added for even smaller screens. The FDMI was extended in 2006 with additional screw patterns that are more appropriate for larger TV screens. Thus the standard now specifies seven sizes, each with more than one variant. These are referenced as parts B to F of the standard or with official abbreviations, usually prefixed by the word "VESA". Unofficially, the variants are sometimes referenced as just "VESA" followed by the pattern size in mm, which is slightly ambiguous for the names "VESA 50" (four possibilities), "VESA 75" (two possibilities) and "VESA 200" (three possibilities). However, if "VESA 100" is accepted as meaning the original variant ("VESA MIS-D, 100"), then all but "VESA MIS-E" and "VESA MIS-F, 200" have at least one unique dimension that can be used in this way, as can be seen from the tables below.
Notes If a screen is heavier or larger than specified in table 1, it should use a larger variant from the table; for instance, a 30-in LCD TV weighing more than the table 1 limit would need to use a part F mount. The weight limits were chosen as round numbers in kg or lb for the different sizes. The screw lengths for parts C, D and E become whole numbers when adding a 2.6 mm thick bracket (which is how the standard describes them). The screw lengths for part F are minimum / maximum / hole maximum, as in: M6 screws must go at least 9 mm in but at most 10 mm in, and the hole may not be deeper than 12 mm. Details of variants B to E Notes for centre mounts The mounting pattern must be centred between the left and right of the screen case. For parts D and E, it must also be centred between top and bottom (since 2006). Notes for the edge mounts: x is "T", "B", "R" or "L" for "Top", "Bottom", "Right" or "Left" as seen from the back in landscape mode. More than one x or "C" can be specified for a screen with more than one set of mounting holes. For left or right edge mounting, swap width and height, so the brackets will look the same with respect to all 4 edges. The rounded rectangles and distances (see below) do not apply towards the edge, except that the surface on the screen need not touch the bracket beyond them. The distance from the edge of the screen case to the centre of the first holes is ±0.5 mm. Common notes for variants B to E Screws are standard M4 machine screws as long as the bracket is thick plus the length given in table 1. Each hole must be within ±0.25 mm of its nominal position. Each hole in the bracket is 5 mm in diameter to allow for this tolerance in both screen and bracket. If the screen manufacturer included different screws, they must be used, not the ones that came with the bracket. If the screen manufacturer provides their own mount, it may be attached in any way as long as it can be removed. The clearance area must be a completely flat surface at most "max. clearance" above or below the general back surface of the screen. The clearance area is a rounded rectangle whose sides extend at least 8.5 mm beyond the hole centres and with a corner radius of at most 7 mm. The bracket is within a rounded rectangle whose sides extend at most 7.5 mm beyond the hole centres and with a corner radius of at least 6 mm. The part of the bracket lying against the clearance area must be at least 12.5 mm wide; it may be straight or diagonal. The two extra holes in type E brackets are in the middle of the long sides of the hole pattern, see the drawing. The screen manufacturer should warrant that filling the standard holes with the specified screws will be able to hold the screen in any direction. The bracket manufacturer should warrant that the bracket can carry the maximum weight from table 1 when using all the specified screws. More details can be found by purchasing a copy of the standard itself, including rules to ensure cables don't prevent using the mounts. Details of variant F yn is Y if the screen can be turned 90 degrees (to alternate between portrait and landscape orientations), N if it cannot. 6/8 is 6 for M6 screws, 8 for M8 screws. The 2006 edition of the VESA standard is very clear that the type F pattern is always square, and that the odd sizes 300×300, 500×500, 700×700 and 900×900 are allowed too.
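As a rough illustration of the ±0.25 mm positional tolerance for variants B to E, one can check measured hole centres against the nominal square pattern. This is a sketch, not anything from the standard: the function name, the measurement format and the nearest-hole matching are illustrative assumptions; the 100 mm pitch is the original MIS-D value mentioned above.

```python
import math

def holes_within_tolerance(measured, pitch=100.0, tol=0.25):
    """Check that each measured hole centre (x, y in mm) lies within the
    +/-0.25 mm positional tolerance of some nominal hole of a square
    pattern with the given pitch (here the 100 mm layout, with the
    origin placed at one nominal hole centre)."""
    nominal = [(0.0, 0.0), (pitch, 0.0), (0.0, pitch), (pitch, pitch)]
    return all(
        min(math.hypot(mx - nx, my - ny) for nx, ny in nominal) <= tol
        for mx, my in measured
    )

# A pattern drilled 0.1 mm off on one hole passes ...
print(holes_within_tolerance([(0, 0), (100.1, 0), (0, 100), (100, 100)]))   # True
# ... while a 0.4 mm error exceeds the tolerance.
print(holes_within_tolerance([(0, 0), (100.4, 0), (0, 100), (100, 100)]))   # False
```

The 5 mm bracket holes mentioned above are what absorb this tolerance in practice, since an M4 screw leaves roughly that much radial slack.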
The bracket is two strips that fit either the left and right or the top and bottom row of holes, and the holes along the two other edges of the square are not used at the same time. The screen manufacturer may omit some of the holes except the outer corners if the remaining holes are enough to carry the weight of the screen and its official add-ons (such as their own sound bar). The mounting area must be the part of the screen furthest towards the back (so no part of the screen may extend further back). The mounting area touching the bracket must be at least 35 mm wide, and the two halves of the bracket itself may be up to 100 mm wide each. Around each mounting hole in the screen, there must be a gripping area for the bracket at least 10 mm (M6)/12 mm (M8). The 35 mm wide areas touching the bracket may deviate at most ±1 mm from complete flatness. The screen manufacturer should warrant that filling the provided holes in either the horizontal or vertical edge (at user's choice) with the specified screws will be able to hold the screen in any direction. The bracket manufacturer should warrant that the bracket can carry the advertised weight when using a few of the included screws (but at least the 4 corners) as would be needed if those were standard industrial grade screws. The end user or contractor should warrant that the wall, ceiling, floor or wherever else the bracket is hung can handle the load safely and within the local building code. More details can be found by purchasing a copy of the standard itself, including rules to ensure cables don't prevent using the mounts. Common deviations from variant F In practice, many screens that almost comply with part F of the standard deviate in various minor ways, and most brands of compliant brackets are designed to handle these deviations with little or no trouble for the end user: Non-square patterns, such as 600 × 200 mm and 600 x 400 mm and 800 × 400 mm. 
These were apparently permitted by the version 1 (2002) standard. The labeling for these may have been "MIS-F, width, height", e.g. "MIS-F, 600, 200" for 600 × 200 mm. A somewhat unusual pattern of 280 × 150 mm. Various protrusions on the screen extending a few millimetres further back than the mounting surfaces. Compliance Manufacturers of FDMI-compliant devices can license the use of a hexagonal "VESA mounting compliant" logo. Many compliant or almost compliant devices do not display the logo, as is reflected by the absence of most key vendors from VESA's own public list of licensed manufacturers. Of the members of the standard committee (Ergotron, Peerless Industries, HP, Samsung, Sanus, ViewSonic and Vogel), only Ergotron is on the list. As mentioned above under variant F, there are many almost compliant screens on the market, and some of those use the "VESA" name loosely to refer to their similar mounting patterns. References External links VESA FLAT DISPLAY MOUNTING INTERFACE STANDARD (for Flat Panel Monitors/Displays/Flat TVs), Version 1, Rev. 1, January 16, 2006 Mechanical standards Display technology Audiovisual introductions in 1997
Flat Display Mounting Interface
https://en.wikipedia.org/wiki/Hydrography%20of%20Norte%20de%20Santander
The department of Norte de Santander in northeastern Colombia, and its capital, Cúcuta, contain several rivers. The rivers are mostly part of the Maracaibo Lake basin, with the southwestern section located in the Magdalena River basin. Important fluvial elements are the Zulia, Catatumbo and Pamplonita Rivers. The entity in charge of the hydrology of Norte de Santander is Corponor. Topography The department of Norte de Santander is for the most part situated in the Eastern Ranges of the Colombian Andes. The northeastern majority of the department is part of the Maracaibo Lake drainage basin, while the southwestern tip of Norte de Santander forms part of the Magdalena River basin. The southeasternmost part of the department is located in the Orinoco River basin. The department, reaching to an altitude of in the Tamá Páramo, has a total surface area of . Hydrography Catatumbo River The Catatumbo River is a fast-flowing river, originating as the confluence of the Peralonso, Sardinata and Zulia Rivers in the central valley of Norte de Santander. The upper part of the river is sourced from the highlands near the Macho Rucio Peak ("gray mule peak"), located in the south of Ocaña province. Its mouth is at Lake Maracaibo in Venezuela, through a delta called La Empalizada ("the fence"). Early sections of the Catatumbo River are known as Chorro Oroque, Rio de la Cruz, and Algodonal. Only in its Venezuelan section is it navigable.
Left tributaries of the Catatumbo River include: Main affluents Río Frío, Río de Oro, Erbura, Tiradera and San Miguelito Minor affluents Sajada, El Molino, San Lucas, Los Indios, Zurita, Carbón, Naranjito, Sánchez, Joaquín Santos, Teja, San Carlos, Guaduas, Águila, Lejía, Honda, Capitán Largo, Manuel Díaz, Oropora, Huevo, La Vieja, Guayabal, Guamos and Roja Right tributaries include: Main affluents San Miguel, Tarra, Orú, Sardinata and Zulia Minor affluents La Urugmita, La Labranza, Seca, Cargamenta and San Calixto or Maravilla Peralonso River The Peralonso River originates in a small lake in the Guerrero highlands, at altitude. It crosses the municipalities of Salazar, Gramalote, Santiago, San Cayetano and Cúcuta, ending in the Zulia River, near the village of San Cayetano. It forms part of the upper course of the Catatumbo River. Sardinata River The Sardinata River originates in La Vuelta, in the Guerrero highlands, near the village of Caro at about above sea level. It has a length of almost . Near the river, many forestry and mining activities are present. Its affluents are, on the left, the Santa, and, on the right, the Riecito San Miguel, La Sapa, José, La Esmeralda, La Resaca and Pedro José. Its Colombian segment ends in Tres Bocas and continues in Venezuela, terminating in the Catatumbo River. Zulia River The Zulia River is formed by several rivers originating in lakes in the highlands of Cachiri at about 4,220 meters above sea level, located between 12°41'2" east longitude and 8'9" north latitude in the Santander Department, in the eastern range of the Andes mountains. The river flows through the province of Cúcuta, passing into the neighbouring nation of Venezuela, and ends in the waters of Lake Maracaibo. In Colombian territory, this river is navigable for about , starting from the old port of Los Canchos. The river flows for through Venezuela, the last being deep and calm, suitable for vessels of large proportions.
In the past, this river provided a basic means of transportation, responsible for much of the prosperity of the neighbouring valleys as a centre of commerce whose products fed many of the towns nearby. The Zulia river's tributaries include the Cucutilla, Arboleda, Salazar and Peralonso Rivers, which flow in from the left, and the Pamplonita River, with its own tributaries the Táchira and La Grita Rivers, flowing in from the right. Areas surrounding the Zulia River are fertile, with many forests decorating the landscape. However, the climate of this area could be seen as unhealthy, due to the density of trees and the many swamps. Salazar River The Salazar River originates near the city of Zulia and terminates near the namesake town of Salazar de las Palmas. It is an important river in the traditions of the local inhabitants, used for swimming, fishing and cooking the traditional sancocho soup on its beaches. Some areas of the river near Salazar, where many minor streams fall into the river as waterfalls, are often visited by tourists. La Grita River La Grita River originates in the Venezuelan Andes near the town of La Grita at about above sea level. It is a natural boundary between Colombia and Venezuela for about , until its mouth in the Zulia River. Its affluents are the Guaramito River, La China, Riecito, Río Lobatera, and Caño de La Miel. Pamplonita River The Pamplonita River was of crucial importance in the economy of the country in the 18th and 19th centuries as the main channel for the transportation of cacao. It originates in the Altogrande Mountains at above sea level, near the city of Pamplona. It flows downhill through the Cariongo Valley, and near Chinácota lies its confluence with the minor affluent Honda River. The Pamplonita River flows through the Cúcuta valley, where it has a slow flow, ending in the Zulia River, which flows towards Maracaibo Lake.
Most of this river is above above sea level. The total length of the river is about and its watershed covers . The confluence of the Pamplonita and Zulia Rivers is located near the urban area of Cúcuta, the capital of Norte de Santander, in particular the Rinconada neighborhood. This part is at risk of flooding, which sometimes reaches the streets of the city. The river has periodically flooded the local hospital and the Colón Park, named after Cristóbal Colón. The river also produces significant erosion in the surrounding lands, in part because of the local dry climate and shortage of vegetation. This is seen most noticeably in the areas near Cúcuta: La Garita and Los Vados. The Pamplonita River crosses the municipalities of: Cúcuta, Pamplona, Los Patios, Chinácota, Bochalema and Pamplonita, and the villages of El Diamante, La Donjuana, La Garita, San Faustino and Agua Clara. The river receives sewage water from Pamplona, Los Patios and Cúcuta, and residues from slaughterhouses, pesticides and fertilizers. Law 1541 restricts the use of water from the rivers to concessions granted by the local government, but there are many illegal non-regulated diversions of water. 
Affluents of Pamplonita River are: Right Main affluents: Táchira River, Rio Viejo, Las Brujas, Caño Cachicana and Caño Guardo Minor affluents: Monteadentro, Los Negros, Los Cerezos, Zipachá, Tanauca, Ulagá, El Gabro, El Ganso, Santa Helena, Cucalina, La Teja, De Piedra, La Palmita, Matagira, La Chorrera, Iscalá, Honda, Cascarena, Villa Felisa, Ciénaga, Juana Paula, Don Pedra, Faustinera, Europea, Rodea, Aguasucia Left Minor affluents: Navarro, San Antonio, La Palma, Hojancha, La Laguna, Batagá, Galindo, Santa Lucía, Las Colonias, El Laurel, Chiracoca, Montuosa, El Masato, Quebraditas, Aguanegra, Zorzana, El Ojito, Jaguala, Viajaguala, Tío José, El Magro, Aguadas, La Rinconada, Periquera, Voladora, La Sarrera, La Cuguera, Guaimaraca, Aguaclarera, La Trigrera, Negro, El Oso, and Chipo Táchira River The Táchira River originates near Tamá, in the mountains of Las Banderas, at an altitude of above sea level. The river flows towards the north, as a natural boundary between Colombia and Venezuela. It crosses the municipalities of Herrán, Ragonvalia, Villa del Rosario and Cúcuta. The Táchira River flows into the Pamplonita River near the village of El Escobal. Its affluents are El Salado, La Margarita, El Naranjal, Palogordo, El Palito, Agua Sucia and La Horma. See also List of rivers of Colombia References Norte de Santander Geography of Norte de Santander Department Hydrography
Hydrography of Norte de Santander
Environmental_science
1,955
12,923,152
https://en.wikipedia.org/wiki/Insulin%20lispro
Insulin lispro, sold under the brand name Humalog among others, is a modified type of medical insulin used to treat type 1 and type 2 diabetes. It is delivered subcutaneously either by injection or from an insulin pump. Onset of effects typically occurs within 30 minutes and lasts about 5 hours. Often a longer-acting insulin like insulin NPH is also needed. Common side effects include low blood sugar. Other serious side effects may include low blood potassium. Use in pregnancy and breastfeeding is generally safe. It works the same as human insulin by increasing the amount of glucose that tissues take in and decreasing the amount of glucose made by the liver. Insulin lispro was first approved for use in the United States in 1996. It is a manufactured analogue of human insulin where two amino acids have swapped positions. In 2022, it was the 70th most commonly prescribed medication in the United States, with more than 9 million prescriptions. Medical uses Insulin lispro is used to treat people with type 1 diabetes or type 2 diabetes. People doing well on short-acting insulin should not routinely be changed to insulin lispro, but may benefit from some advantages like flexibility and responsiveness. Side effects Common side effects include skin irritation at the site of injection, hypoglycemia, hypokalemia, and lipodystrophy. Other serious side effects include anaphylaxis and hypersensitivity reactions. Mechanism of action Through recombinant DNA technology, the final lysine and proline residues on the C-terminal end of the B-chain are reversed. This modification does not alter receptor binding, but blocks the formation of insulin dimers and hexamers. This allows larger amounts of active monomeric insulin to be immediately available for postprandial injections. Chemistry It is a manufactured form of human insulin where the amino acids lysine and proline have been switched at the end of the B chain of the insulin molecule. 
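The Pro(B28)/Lys(B29) swap described above can be illustrated with a short Python sketch. Only the standard one-letter sequence of the human insulin B chain is assumed; the helper function is ours, for illustration, and not part of any library:

```python
def lispro_b_chain(b_chain: str) -> str:
    """Swap residues B28 and B29 (proline and lysine in human insulin)."""
    assert len(b_chain) == 30, "the human insulin B chain has 30 residues"
    residues = list(b_chain)
    # B28 and B29 are indices 27 and 28 when counting from zero
    residues[27], residues[28] = residues[28], residues[27]
    return "".join(residues)

# Human insulin B chain, one-letter amino acid codes; ends ...Thr-Pro-Lys-Thr
HUMAN_B = "FVNQHLCGSHLVEALYLVCGERGFFYTPKT"
print(lispro_b_chain(HUMAN_B))  # ends ...YTKPT instead of ...YTPKT
```

Because the swap leaves the rest of the chain untouched, receptor binding is preserved while the dimer/hexamer contact region is disrupted, as the mechanism section describes.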
This switch of amino acids mimics Insulin-like growth factor 1, which also has lysine (K) and proline (P) in that order at positions 28 and 29. History Insulin lispro (brand name Humalog) was granted marketing authorization in the European Union in April 1996, and it was approved for use in the United States in June 1996. Insulin lispro (brand name Liprolog) was granted marketing authorization in the European Union in May 1997, and again in August 2001. Combination drugs combining insulin lispro and other forms of insulin were approved for use in the United States in December 1999. Insulin lispro Sanofi was granted marketing authorization as a biosimilar in the European Union in July 2017. Insulin lispro injection (brand name Admelog) was approved for use in the United States in December 2017. In January 2020, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency recommended granting of a marketing authorization for insulin lispro (brand name Lyumjev) for the treatment of diabetes in adults. Insulin lispro (Lyumjev) was approved for use in the European Union in March 2020, and in the United States in June 2020. Society and culture Economics In the United States, the price of a vial of Humalog increased between 2001 and 2015 to $234, or from $10.06 to $29.36 per 100 units. In April 2019, Eli Lilly and Company announced they would produce a version selling for $137.35 per vial. The chief executive said that this was a contribution "to fix the problem of high out-of-pocket costs for Americans living with chronic conditions", but Patients for Affordable Drugs Now said it was just a public relations move, as "other countries pay $20 for a vial of insulin." In March 2023, Lilly announced a program capping their insulin prices at $35 per month. 
References Drugs developed by Eli Lilly and Company Human proteins Insulin receptor agonists Peptide hormones Peptide therapeutics Recombinant proteins Wikipedia medicine articles ready to translate
Insulin lispro
Biology
853
70,988,103
https://en.wikipedia.org/wiki/Pronova%20BioPharma
Pronova BioPharma is a Norwegian company and a bulk manufacturer of omega-3 products, with a manufacturing plant in Kalundborg, Denmark. It was acquired by BASF in 2013. Pronova BioPharma ASA had its roots in Norway's codfish liver oil industry. The company was founded in 1991 as a spinout from the JC Martens company, which in turn was founded in 1838 in Bergen, Norway. Pronova developed the concentrated omega-3-acid ethyl esters formulation that is the active pharmaceutical ingredient of Lovaza. Lovaza Pronova won approvals to market the drug, called Omacor in Europe (and initially in the US), in several European countries in 2001 after conducting a three-and-a-half-year trial in 11,000 subjects. The company partnered with other companies, such as Pierre Fabre in France. In 2004, Pronova licensed the US and Puerto Rican rights to Reliant Pharmaceuticals, whose business model was in-licensing of cardiovascular drugs. In that same year, Reliant and Pronova won FDA approval for the drug, and it was launched in the US and Europe in 2005. Global sales in 2005 were $144M, and by 2008, they were $778M. By 17 November 2010, Pronova BioPharma was among the constituent companies of the OSEAX index. In 2009, generic companies Teva Pharmaceuticals and Par Pharmaceutical made clear their intentions to file Abbreviated New Drug Applications (“ANDAs”) to bring generics to market, and in April 2009, Pronova sued them for infringement of the key US patents covering Lovaza: US 5,656,667 (due to expire in April 2017) and US 5,502,077 (due to expire in March 2013). Subsequently, in May 2012, a district court ruled in Pronova's favor, saying that the patents were valid. The generic companies appealed, and in September 2013, the Federal Circuit reversed: more than one year before Pronova's predecessor company applied for the patent, it had sent samples of the fish oil used in Lovaza to a researcher for testing, and this constituted "public use" that invalidated the patent in question. 
Generic versions of Lovaza were introduced in the United States in April 2014. Pronova has continued to manufacture the ingredients in Lovaza. In 2012, BASF announced it would acquire Pronova for $844 million; the deal closed in 2013. Research Pronova BioPharma is a commercial partner in MabCent-SFI. Brand names Lovaza (US)/Omacor (Europe), sold by Woodward Pharma Services, LLC in the US; created and manufactured by Pronova. It was approved in the United States in 2004. References External links https://www.epax.com/why-epax/about-us/ Chemical companies Chemical companies established in 1991
Pronova BioPharma
Chemistry
608
5,724,300
https://en.wikipedia.org/wiki/Backup%20site
A backup site (also work area recovery site or just recovery site) is a location where an organization can relocate following a disaster, such as fire, flood, terrorist threat, or other disruptive event. This is an integral part of the disaster recovery plan and wider business continuity planning of an organization. A backup, or alternate, site can be another data center location which is either operated by the organization, or contracted via a company that specializes in disaster recovery services. In some cases, one organization will have an agreement with a second organization to operate a joint backup site. In addition, an organization may have a reciprocal agreement with another organization to set up a site at each of their data centers. Sites are generally classified based on how prepared they are and the speed with which they can be brought into operation: "cold" (facility is prepared), "warm" (equipment is in place), and "hot" (operational data is loaded), with the cost to implement and maintain increasing with "temperature". Classification Cold site A cold site is operational space with basic facilities such as raised floors, air conditioning, and power and communication lines. Following an incident, equipment is brought in and set up to resume operations. It does not include backed-up copies of data and information from the original location of the organization, nor does it include hardware already set up. The lack of provisioned hardware contributes to the minimal start-up costs of the cold site, but requires additional time following the disaster to have the operation running at a capacity similar to that prior to the disaster. In some cases, a cold site may have equipment available, but it is not operational. Warm site A warm site is a compromise between hot and cold. These sites will have hardware and connectivity already established, though on a smaller scale. 
Warm sites might have backups on hand, but they may be incomplete and several days to a week old. The recovery of pre-disaster operations will be delayed while more up-to-date backup tapes are delivered to the warm site, or network connectivity is established to recover data from a remote backup site. Hot site A hot site is a near duplicate of the original site of the organization, including full computer systems as well as complete backups of user data. Real-time synchronization between the two sites may be used to completely mirror the data environment of the original site using wide-area network links and specialized software. Following a disruption to the original site, the hot site exists so that the organization can relocate, with minimal losses to normal operations in the shortest recovery time. Ideally, a hot site will be up and running within a matter of hours. Personnel may need to be moved to the hot site, but it is possible that the hot site may be operational from a data-processing perspective before staff has relocated. The capacity of the hot site may or may not match the capacity of the original site depending on the organization's requirements. This type of backup site is the most expensive to operate. Hot sites are popular with organizations that operate real-time processes such as financial institutions, government agencies, and eCommerce providers. The most important feature offered by a hot site is that the production environment(s) is running concurrently with the main datacenter. This synchronization allows for minimal impact and downtime to business operations. In the event of a significant outage, the hot site can take the place of the affected site immediately. However, this level of redundancy does not come cheap, and businesses will need to perform a cost-benefit analysis (CBA) of hot site utilization. 
A backup site that is not kept running and synchronized in this "proactive" manner may not be considered a hot site, depending on the organization's maturity with respect to ISO 22301, the international standard for business continuity management. Alternate sites Generally, an Alternate Site refers to a site where people, and the equipment that they need to work, are relocated for a period of time until the normal production environment, whether reconstituted or replaced, is available. Choosing The type of backup site to be used is decided by an organization based on a cost-benefit analysis. Hot sites are traditionally more expensive than cold sites, since much of the equipment the company needs must be purchased in advance, and staff are needed to maintain it, making the operational costs higher. However, if the same organization loses a substantial amount of revenue for each day it is inactive, then it may be worth the cost. Another advantage of a hot site is that it can be used for operations prior to a disaster happening. This load-balanced production processing method can be cost effective, and will provide the users with the security of minimal downtime during an event that affects one of the data centers. The advantage of a cold site is simple: cost. It requires fewer resources to operate a cold site because no equipment has been brought in prior to the disaster. Some organizations may store older versions of the hardware in the center. This may be appropriate in a server farm environment, where old hardware could be used in many cases. The downside of a cold site is the cost that must be incurred to make it effective: purchasing equipment on very short notice may be more expensive, and the disaster itself may make the equipment difficult to obtain. 
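The cost-versus-benefit comparison described above can be sketched as simple expected-cost arithmetic. The model and every figure below are illustrative assumptions, not data from any particular organization:

```python
def expected_annual_cost(site_cost_per_year, downtime_days_per_incident,
                         incidents_per_year, revenue_loss_per_day):
    """Annual site cost plus expected revenue lost to downtime (toy model)."""
    expected_downtime_days = downtime_days_per_incident * incidents_per_year
    return site_cost_per_year + expected_downtime_days * revenue_loss_per_day

# Hypothetical organization: one disaster expected every two years,
# losing $200,000 in revenue per day of downtime. The hot site recovers
# in hours; the cold site takes two weeks to provision and bring online.
hot = expected_annual_cost(500_000, downtime_days_per_incident=0.2,
                           incidents_per_year=0.5, revenue_loss_per_day=200_000)
cold = expected_annual_cost(50_000, downtime_days_per_incident=14,
                            incidents_per_year=0.5, revenue_loss_per_day=200_000)
print(hot, cold)  # the hot site wins once downtime losses dominate
```

With these (assumed) numbers the cheap cold site is far more expensive overall, which is the point of the cost-benefit analysis: the right choice depends entirely on the organization's downtime losses.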
Commercial sites When contracting services from a commercial provider of backup site capability, organizations should take note of contractual usage provisions and invocation procedures. Providers may sign up more than one organization for a given site or facility, often depending on various service levels. This is a reasonable proposition because it is unlikely that all organizations subscribed to the service will need it at the same time. It also allows the provider to offer the service at an affordable cost. However, in a large-scale incident that affects a wide area, these facilities are likely to become over-subscribed, with multiple customers claiming the same backup site. To gain priority in service over other customers, an organization can request a Priority Service from the provider, which often includes a higher monthly fee. This commercial site can also be used as a company's secondary production site, with a full-scale mirroring environment for its primary data center. Again, a higher fee will be required; but the cost could be justified by the security and resiliency of the site, which would give that organization the ability to provide its users with uninterrupted access to their data and applications. See also Off-site Data Protection Backup References General references Records Management Services (2004, July 15). Vital Records: How Do You Protect And Store Vital Records? Haag, Cummings, McCubbrey, Pinsonneult, and Donovan. (2004). Information Management Systems, For The Information Age. McGraw-Hill Ryerson. IT Service Continuity (2007, ITIL v3). IT Service Continuity. Retrieved from: http://itlibrary.org/index.php?page=IT_Service_Continuity_Management on 03SEP14 The Three Stages of Disaster Recovery Sites by Bryce Carroll, November 20, 2013 Backup
Backup site
Engineering
1,451
15,220,588
https://en.wikipedia.org/wiki/Hearts%20Content%20Scenic%20Area
Hearts Content National Scenic Area is a tract of old-growth forest in Warren County, northwestern Pennsylvania. It represents one of the few remaining old-growth forests in the northeastern United States that contain white pine. The area is protected as a National Scenic Area within the Allegheny National Forest. History While many of the region's forests were being clear-cut, the Wheeler and Dusenbury Lumber Company held the tract of old-growth forest at Heart's Content from 1897 until 1922, when they deeded it to the United States Forest Service. In 1934, the Chief of the Forest Service recognized the old-growth stand and of surrounding land as a National Scenic Area. The forest became a National Natural Landmark in 1973. Scientific study H.J. Lutz's 1930 study of Hearts Content was one of the earliest quantitative analyses of plant communities in an old-growth forest, and it remains influential in the field of ecology. Lutz concluded that the even-aged white pine stand established following a major disturbance in the 17th century, such as a fire (possibly set by Native Americans during the Beaver Wars); since then, the species has not reproduced under the closed canopy. By relocating and resampling Lutz's original plots, Whitney documented 50 years of changes in the structure and composition of the stand. During this time, dense deer populations have reduced the regeneration of many tree and herb species. Vegetation Hearts Content represents E. Lucy Braun's hemlock-white pine-northern hardwood forest type. The old-growth forest is from to in extent, but the scenic area is most famous for its of tall white pine and Eastern hemlock. Many of these trees have diameters of over and heights of over , and most of the white pine are between 300 and 400 years old. American beech is also plentiful in the forest, but is affected by Beech bark scale. Hay-scented fern covers much of the understory due to overbrowsing by deer. 
Recreation Visitors can walk an easily accessible loop trail through the old-growth forest. A picnic area, campground and several other trailheads are nearby. A cross-country ski trail passes through the area on old railroad grades. There are also numerous camps owned by private individuals in the area. See also List of National Natural Landmarks in Pennsylvania List of old growth forests References External links Hearts Content Recreation Area (United States Forest Service) Old-growth forests National Natural Landmarks in Pennsylvania National scenic areas Protected areas established in 1934 Protected areas of Warren County, Pennsylvania Allegheny National Forest
Hearts Content Scenic Area
Biology
513
5,202,532
https://en.wikipedia.org/wiki/International%20Conference%20on%20Very%20Large%20Data%20Bases
International Conference on Very Large Data Bases or VLDB conference is an annual conference held by the non-profit Very Large Data Base Endowment Inc. While named after very large databases, the conference covers the research and development results in the broader field of database management. The mission of VLDB Endowment is to "promote and exchange scholarly work in databases and related fields throughout the world." The VLDB conference began in 1975 and is now closely associated with SIGMOD and SIGKDD. Venues See also XLDB References External links VLDB Endowment Inc. Computer science conferences
International Conference on Very Large Data Bases
Technology
119
724,723
https://en.wikipedia.org/wiki/Diltiazem
Diltiazem, sold under the brand name Cardizem among others, is a nondihydropyridine calcium channel blocker medication used to treat high blood pressure, angina, and certain heart arrhythmias. It may also be used in hyperthyroidism if beta blockers cannot be used. It is taken by mouth or given by injection into a vein. When given by injection, effects typically begin within a few minutes and last a few hours. Common side effects include swelling, dizziness, headaches, and low blood pressure. Other severe side effects include an overly slow heartbeat, heart failure, liver problems, and allergic reactions. Use is not recommended during pregnancy. It is unclear if use when breastfeeding is safe. Diltiazem works by relaxing the smooth muscle in the walls of arteries, resulting in them opening and allowing blood to flow more easily. Additionally, it acts on the heart to prolong the period until it can beat again. It does this by blocking the entry of calcium into the cells of the heart and blood vessels. It is a class IV antiarrhythmic. Diltiazem was approved for medical use in the United States in 1982. It is available as a generic medication. In 2022, it was the 100th most commonly prescribed medication in the United States, with more than 6 million prescriptions. An extended release formulation is also available. Medical uses Diltiazem is indicated for: Stable angina (exercise-induced) – diltiazem increases coronary blood flow and decreases myocardial oxygen consumption, secondary to decreased peripheral resistance, heart rate, and contractility. Variant angina – it is effective owing to its direct effects on coronary dilation. Unstable angina (preinfarction, crescendo) – diltiazem may be particularly effective if the underlying mechanism is vasospasm. Myocardial bridge For supraventricular tachycardias (PSVT), diltiazem appears to be as effective as verapamil in treating re-entrant supraventricular tachycardia. 
Atrial fibrillation or atrial flutter is another indication. The initial bolus should be 0.25 mg/kg, intravenous (IV). Because of its vasodilatory effects, diltiazem is useful for treating hypertension. Calcium channel blockers are well tolerated, and especially effective in treating low-renin hypertension. It is also used as topical application for anal fissures because it promotes healing due to its vasodilatory property. Contraindications and precautions In congestive heart failure, patients with reduced ventricular function may not be able to counteract the negative inotropic and chronotropic effects of diltiazem, the result being an even higher compromise of function. With SA node or AV conduction disturbances, the use of diltiazem should be avoided in patients with SA or AV nodal abnormalities, because of its negative chronotropic and dromotropic effects. Low blood pressure patients, with systolic blood pressures below 90 mm Hg, should not be treated with diltiazem. Diltiazem may paradoxically increase ventricular rate in patients with Wolff-Parkinson-White syndrome because of accessory conduction pathways. Diltiazem is relatively contraindicated in the presence of sick sinus syndrome, atrioventricular node conduction disturbances, bradycardia, impaired left ventricle function, peripheral artery occlusive disease, and chronic obstructive pulmonary disease. Side effects A reflex sympathetic response, caused by the peripheral dilation of vessels and the resulting drop in blood pressure, works to counteract the negative inotropic, chronotropic and dromotropic effects of diltiazem. Undesirable effects include hypotension, bradycardia, dizziness, flushing, fatigue, headaches and edema. Rare side effects are congestive heart failure, myocardial infarction, and hepatotoxicity. Drug interactions Because of its inhibition of hepatic cytochromes CYP3A4, CYP2C9 and CYP2D6, there are a number of drug interactions. Some of the more important interactions are listed below. 
Beta-blockers Intravenous diltiazem should be used with caution with beta-blockers because, while the combination is most potent at reducing heart rate, there are rare instances of dysrhythmia and AV node block. Quinidine Quinidine should not be used concurrently with calcium channel blockers because of reduced clearance of both drugs and potential pharmacodynamic effects at the SA and AV nodes. Fentanyl Concurrent use of fentanyl with diltiazem, or any other CYP3A4 inhibitors, requires caution, as these medications decrease the breakdown of fentanyl and thus increase its effects. Mechanism of action Diltiazem, also known as (2S,3S)-3-acetoxy-5-[2-(dimethylamino)ethyl]-2,3-dihydro-2-(4-methoxyphenyl)-1,5-benzothiazepin-4(5H)-one hydrochloride, has vasodilating activity attributed to the (2S,3S)-isomer. Diltiazem is a potent vasodilator, increasing blood flow and variably decreasing the heart rate via strong depression of A-V node conduction. It binds to the alpha-1 subunit of L-type calcium channels in a fashion somewhat similar to verapamil, another nondihydropyridine (non-DHP) calcium channel blocker. Chemically, it is based upon a 1,4-thiazepine ring, making it a benzothiazepine-type calcium channel blocker. It is a potent vasodilator of coronary vessels and a mild vasodilator of peripheral vessels, which reduces peripheral resistance and afterload, though it is not as potent as the dihydropyridine (DHP) calcium channel blockers. This results in minimal reflexive sympathetic changes. Diltiazem has negative inotropic, chronotropic, and dromotropic effects. This means diltiazem causes a decrease in heart muscle contractility (how strong the beat is), a lowering of heart rate (due to slowing of the sinoatrial node), and a slowing of conduction through the atrioventricular node (increasing the time needed for each beat). Each of these effects results in reduced oxygen consumption by the heart, reducing the symptoms of angina, typically unstable angina. 
These effects also reduce blood pressure by causing less blood to be pumped out. Research Diltiazem is prescribed off-label by doctors in the US for prophylaxis of cluster headaches. Some research on diltiazem and other calcium channel antagonists in the treatment and prophylaxis of migraine is ongoing. Recent research has shown diltiazem may reduce cocaine cravings in drug-addicted rats. This is believed to be due to the effects of calcium channel blockers on dopaminergic and glutamatergic signaling in the brain. Diltiazem also enhances the analgesic effect of morphine in animal tests, without increasing respiratory depression, and reduces the development of tolerance. Diltiazem is also being used in the treatment of anal fissures. It can be taken orally or applied topically, the latter with increased effectiveness. When applied topically, it is made into a cream form using either petrolatum or Phlojel. Phlojel absorbs the diltiazem into the problem area better than the petrolatum base. It has good short-term success rates. References Calcium channel blockers CYP2D6 inhibitors CYP3A4 inhibitors Benzothiazepines 4-Methoxyphenyl compounds Drugs developed by AbbVie Drugs developed by Merck Lactams Acetate esters Chemical substances for emergency medicine Wikipedia medicine articles ready to translate
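The weight-based intravenous bolus mentioned under medical uses (0.25 mg/kg) is simple arithmetic; a minimal sketch for illustration only, not dosing guidance (the function name is ours):

```python
def initial_bolus_mg(weight_kg: float, dose_mg_per_kg: float = 0.25) -> float:
    """Weight-based initial IV bolus, per the 0.25 mg/kg figure above."""
    return weight_kg * dose_mg_per_kg

print(initial_bolus_mg(70))  # 17.5 (mg) for a 70 kg patient
```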
Diltiazem
Chemistry
1,716
4,334,961
https://en.wikipedia.org/wiki/Recovery%20%28metallurgy%29
In metallurgy, recovery is a process by which a metal or alloy's deformed grains can reduce their stored energy by the removal or rearrangement of defects in their crystal structure. These defects, primarily dislocations, are introduced by plastic deformation of the material and act to increase the yield strength of a material. Since recovery reduces the dislocation density, the process is normally accompanied by a reduction in a material's strength and a simultaneous increase in the ductility. As a result, recovery may be considered beneficial or detrimental depending on the circumstances. Recovery is related to the similar processes of recrystallization and grain growth, each of them being stages of annealing. Recovery competes with recrystallization, as both are driven by the stored energy, but is also thought to be a necessary prerequisite for the nucleation of recrystallized grains. It is so called because there is a recovery of the electrical conductivity due to a reduction in dislocations. This creates defect-free channels, giving electrons an increased mean free path. Definition The physical processes that fall under the designations of recovery, recrystallization and grain growth are often difficult to distinguish in a precise manner. Doherty et al. (1998) stated: "The authors have agreed that ... recovery can be defined as all annealing processes occurring in deformed materials that occur without the migration of a high-angle grain boundary" Thus the process can be differentiated from recrystallization and grain growth as both feature extensive movement of high-angle grain boundaries. If recovery occurs during deformation (a situation that is common in high-temperature processing) then it is referred to as 'dynamic' while recovery that occurs after processing is termed 'static'. 
The principal difference is that during dynamic recovery, stored energy continues to be introduced even as it is decreased by the recovery process, resulting in a form of dynamic equilibrium. Process Deformed structure A heavily deformed metal contains a huge number of dislocations predominantly caught up in 'tangles' or 'forests'. Dislocation motion is relatively difficult in a metal with a low stacking fault energy and so the dislocation distribution after deformation is largely random. In contrast, metals with moderate to high stacking fault energy, e.g. aluminum, tend to form a cellular structure where the cell walls consist of rough tangles of dislocations. The interiors of the cells have a correspondingly reduced dislocation density. Annihilation Each dislocation is associated with a strain field which contributes some small but finite amount to the material's stored energy. When the temperature is increased, typically above about one-third of the absolute melting point, dislocations become mobile and are able to glide, cross-slip and climb. If two dislocations of opposite sign meet then they effectively cancel out and their contribution to the stored energy is removed. When annihilation is complete, only the excess dislocations of one sign will remain. Rearrangement After annihilation any remaining dislocations can align themselves into ordered arrays where their individual contribution to the stored energy is reduced by the overlapping of their strain fields. The simplest case is that of an array of edge dislocations of identical Burgers vector. This idealized case can be produced by bending a single crystal that will deform on a single slip system (the original experiment performed by Cahn in 1949). The edge dislocations will rearrange themselves into tilt boundaries, a simple example of a low-angle grain boundary. 
Grain boundary theory predicts that an increase in boundary misorientation will increase the energy of the boundary but decrease the energy per dislocation. Thus, there is a driving force to produce fewer, more highly misoriented boundaries. The situation in highly deformed, polycrystalline materials is naturally more complex. Many dislocations of different Burgers vectors can interact to form complex 2-D networks. Development of substructure As mentioned above, the deformed structure is often a 3-D cellular structure with walls consisting of dislocation tangles. As recovery proceeds these cell walls will undergo a transition towards a genuine subgrain structure. This occurs through a gradual elimination of extraneous dislocations and the rearrangement of the remaining dislocations into low-angle grain boundaries. Sub-grain formation is followed by subgrain coarsening where the average size increases while the number of subgrains decreases. This reduces the total area of grain boundary and hence the stored energy in the material. Subgrain coarsening shares many features with grain growth. If the sub-structure can be approximated as an array of spherical subgrains of radius R and boundary energy γs; the stored energy is uniform; and the force on the boundary is evenly distributed, the driving pressure P is given by: P = 3γs/2R. Since γs is dependent on the boundary misorientation of the surrounding subgrains, the driving pressure generally does not remain constant throughout coarsening. References Materials science Metallurgy
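The trade-off noted above, boundary energy rising with misorientation while the energy per dislocation falls, is the content of the Read–Shockley relation for low-angle boundaries. A short Python sketch; the parameter values gamma_m and theta_m are illustrative assumptions, not figures from the article:

```python
import math

def read_shockley(theta, theta_m=0.26, gamma_m=0.6):
    """Low-angle grain boundary energy (J/m^2) vs misorientation theta (rad).
    Valid for theta <= theta_m (about 15 degrees)."""
    x = theta / theta_m
    return gamma_m * x * (1.0 - math.log(x))

# Boundary energy increases with misorientation, but the energy per
# dislocation (proportional to gamma/theta) decreases: the driving
# force for fewer, more highly misoriented boundaries.
for deg in (1, 5, 10):
    theta = math.radians(deg)
    print(deg, round(read_shockley(theta), 3),
          round(read_shockley(theta) / theta, 2))
```

Running the loop shows both trends at once: the boundary energy column rises with misorientation while the per-dislocation column falls, which is exactly the driving force for subgrain coarsening described in the text.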
Recovery (metallurgy)
Physics,Chemistry,Materials_science,Engineering
1,046
42,855,872
https://en.wikipedia.org/wiki/Primitive%20decorating
Primitive decorating is a style of decorating in the primitive folk art manner characteristic of a historic or early Americana time period, typically using elements with muted colors and a rough, simple look. Decorating in the primitive style can incorporate either true antiques or contemporary folk art. Contemporary primitive folk art is designed to have an old or antique look but is created using new materials. Examples Examples of antiquing techniques used by primitive folk artists include tea or coffee staining and sanding down paint to create a worn, aged look. The style is sometimes referred to as country style. Primitive decorating often features a number of recurring themes and characters including primitive angels, barnstars, primitive crows, primitive dolls & rag dolls, saltbox houses, sheep, willow trees, primitive wooden signs, and pottery. Primitive design focuses on furniture made between the mid-18th century and the early 19th century by farmers. A number of magazines specialize in primitive decorating. Gallery See also Country Living Interior design Willow Tree (figurines) References Interior design Architectural design American folk art
Primitive decorating
Engineering
217
73,100,899
https://en.wikipedia.org/wiki/Ministry%20of%20Energy%20and%20Mines%20%28Dominican%20Republic%29
The Ministry of Energy and Mining (Spanish: Ministerio de Energía y Minas) of the Dominican Republic is a government institution in charge of the responsible development of the country's energy and mining sectors. Its main concern is to maintain a reliable energy infrastructure and ensure adequate exploitation of the country's minerals. This office appeared in 2013 with its current name as a separate institution from the Ministry of Industry and Trade. Its headquarters are located in Santo Domingo. Its Minister is Antonio Almonte, since August 16, 2020. History By 1920, electricity in the Dominican Republic was provided by foreign-investment companies. In 1928, the government created the Santo Domingo Electric Company (Compañía Eléctrica de Santo Domingo), taking the first steps toward a national electric system. During 1954 and 1955, the Dominican government focused on acquiring companies dedicated to the generation and distribution of electricity, and created the Dominican Electric Corporation (Corporación Dominicana de Electricidad or CDE) in 1955 by Decree no. 555. In 1966, with the creation of the Secretary of State of Industry and Trade (Secretaría de Estado de Industria y Comercio), all matters related to energy and mining were put under this office. On June 4, 1971, Congress signed Law 146-71 about the mining sector. On July 3, 2013, by Law 100–13, the Dominican government formally created the Ministry of Energy and Mining (Ministerio de Energía y Minas) as the regulator of the energy policies and the nation's mining. Internal structure Like the other Ministries of the Dominican Republic, the Ministry of Energy and Mining is subdivided into vice-ministries. 
These are: Vice-ministry of Government Energy Saving Vice-ministry of Energy Vice-ministry of Nuclear Energy Vice-ministry of Hydrocarbons Vice-ministry of Mining Vice-ministry of Energy Safety and Infrastructure Affiliated agencies The Ministry of Energy and Mining has several institutions affiliated with it. These are: National Commission of Energy (Comisión Nacional de Energía or CNE) Superintendency of Electricity (Superintendencia de Electricidad) General Office of Mining (Dirección General de Minería) National Geologic Service (Servicio Geológico Nacional) The Dominican Corporation of State Electric Companies (Corporación Dominicana de Empresas Eléctricas Estatales or CDEEE) was dissolved and integrated into this Ministry in 2020. References External links Official website Government of the Dominican Republic Economy of the Dominican Republic
Ministry of Energy and Mines (Dominican Republic)
Engineering
506
20,513,725
https://en.wikipedia.org/wiki/Atriplex%20pacifica
Atriplex pacifica is a species of saltbush known by the common names Davidson's saltbush, South Coast saltbush, and Pacific orach. It is native to the coastline of Southern California, including the Channel Islands, and Baja California, where it grows in saline habitat on the immediate coastline, such as beach bluffs. It is an uncommon plant, chiefly because much of its native habitat has been drastically altered. This is a mat-forming annual herb producing scaly, reddish green, prostrate stems 10 to 30 centimeters long. The leaves are less than 2 centimeters long, usually oval, with gray-green scaly undersides. Male flowers are borne in terminal spike inflorescences that emerge from the distal end of the branches, while female flower clusters appear proximally on the branches. External links Calflora Database: Atriplex pacifica (Pacific saltbrush, South coast saltbush) Jepson Manual eFlora (TJM2) treatment of Atriplex pacifica USDA Plants Profile for Atriplex pacifica (Davidson's saltbush) Flora of North America UC Photos gallery: Atriplex pacifica pacifica Halophytes Flora of California Flora of Baja California Natural history of the California chaparral and woodlands Natural history of the Channel Islands of California Plants described in 1904 Flora without expected TNC conservation status
Atriplex pacifica
Chemistry
285
67,374,419
https://en.wikipedia.org/wiki/Ritu%20Agarwal
Ritu Agarwal is an Indian-American management scientist specializing in management information systems. She is the Wm Polk Carey Distinguished Professor of Information Systems at Johns Hopkins University. Previously, she was the Senior Associate Dean for Faculty and Research and the Robert H. Smith Dean’s Chair of Information Systems at the Robert H. Smith School of Business. Agarwal was the Editor-in-Chief of Information Systems Research and the founder and director of the Center for Health Information and Decision Systems at the Smith School. Early life and education Agarwal earned her Bachelor of Arts degree in mathematics from the St. Stephen's College, Delhi before moving to the United States for her graduate degrees at Syracuse University. Career Upon completing her PhD, Agarwal joined the faculty at the University of Dayton as an associate professor of management information systems. In this role, she received a grant from Dayton's Intelligence Systems Applications Center to assist in the development of an artificial intelligence system to help small business owners craft strategic marketing plans. Agarwal eventually joined the faculty at the University of Maryland, College Park's Robert H. Smith School of Business in 1999. In 2010, Agarwal started the annual Conference on Health IT and Analytics (CHITA). She was also named the editor-in-chief of the journal Information Systems Research beginning January 1, 2011. In her first year as editor-in-chief, Agarwal became a Fellow of the Association for Information Systems, and also received the University of Maryland Distinguished Scholar-Teacher Award. By 2013, she was recognized as "one of the most widely-cited scholars in the field" and was elected a 2013 Distinguished Fellow for her outstanding intellectual contributions to the information systems field. Following the departure of Alexander Triantis in 2019, Agarwal was appointed interim dean of the University of Maryland. 
While serving in this role, she was a 2019 recipient of the Association for Information Systems Lyons Electronic Office Lifetime Achievements Award for her work in the field of information systems. During the COVID-19 pandemic, Agarwal and colleague Margrét Bjarnadóttir conducted a study titled Precision Therapy for Neonatal Opioid Withdrawal Syndrome in order to "solve big health care challenges through joint research that draws on the institutions’ world-leading expertise in medicine and artificial intelligence." Their study looked to improve clinical decision making in the treatment of neonatal opioid withdrawal syndrome. She was subsequently appointed to serve on the Division of Acquired Immunodeficiency Syndrome (DAIDS) Subcommittee of the National Institute of Allergy and Infectious Diseases and recognised as being in the top 2% of the most-cited scholars and scientists worldwide. Recognition Agarwal is a Fellow of the Institute for Operations Research and the Management Sciences, elected in the 2021 class of fellows. References External links Living people Management scientists University of Maryland, College Park faculty American academic journal editors St. Stephen's College, Delhi alumni Florida State University faculty Information systems researchers Year of birth missing (living people) Syracuse University College of Engineering and Computer Science alumni Martin J. Whitman School of Management alumni University of Dayton faculty Indian Institute of Management Calcutta alumni Fellows of the Institute for Operations Research and the Management Sciences Indian academic journal editors
Ritu Agarwal
Technology
652
39,458,532
https://en.wikipedia.org/wiki/Yoda%20conditions
In programming jargon, Yoda conditions (also called Yoda notation) is a programming style where the two parts of an expression are reversed from the typical order in a conditional statement. A Yoda condition places the constant portion of the expression on the left side of the conditional statement. Yoda conditions are part of the coding standards for Symfony and WordPress. Origin The name for this programming style is derived from the Star Wars character Yoda, who speaks English with a non-standard syntax (e.g., "When 900 years old you reach, look as good you will not."). Thomas M. Tuerke claims to have coined the term Yoda notation and first published it online in 2006. According to him, the term Yoda condition was later popularized by Félix Cloutier in 2010. Example Usually a conditional statement would be written as: if ($value == 42) { /* ... */ } // Reads like: "If the value equals 42..." Yoda conditions describe the same expression, but reversed: if (42 == $value) { /* ... */ } // Reads like: "If 42 equals the value..." Advantage Readability of logically-chained comparisons Some languages, such as Python, support "chained" comparison operators ("comparators") in their syntax. Thus, the following lines are logically equivalent: # Using chained comparators: if 3.14 < y <= 42: ... # Logically equivalent to: if (3.14 < y) and (y <= 42): ... Notice that the second form naturally uses Yoda syntax in the left-hand comparison (3.14 < y). Consider the same line without Yoda syntax: if (y > 3.14) and (y <= 42): ... When handwriting math, many authors prefer the "chained" notation. When programming in a language that does not literally support the chained notation, the author may prefer the Yoda syntax, as it at least visually evokes the familiar chained notation. Detecting programmer mistakes For symmetric comparisons, such as equality, swapping the left and right operands does not change the behavior of the program. 
In programming languages that use a single equals sign (=) for assignment expressions, one might mistakenly write an assignment expression where an equality comparison was intended. if (myNumber = 42) { /* ... */ } // This assigns 42 to myNumber instead of evaluating the desired condition Using Yoda conditions: if (42 = myNumber) { /* ... */ } // An error this is and compile it will not Since literal expressions such as 42 are not assignable (they are not "lvalues"), assignment-equality confusion in Yoda conditions often manifests as a compile-time semantic error. A similar pitfall arises with boxed Boolean values: Boolean myBoolean = null; if (myBoolean == true) { /* ... */ } // This throws a NullPointerException at run time in Java, but compiles without error. // This happens because Java will try to call myBoolean.booleanValue() on a null Object. Changing the target of dynamic dispatch In most object-oriented programming languages, the receiver of a method call is written to the left of the call's other arguments. At the same time, in non-Yoda comparisons, the variable that is the subject of comparison is written on the left-hand side. Comparison method calls are thus ordinarily dynamically dispatched on the object being compared, which is not always desirable. String myString = null; if (myString.equals("foobar")) { /* ... */ } // This causes a NullPointerException in Java With Yoda conditions, the call can be dispatched on a constant object instead. String myString = null; if ("foobar".equals(myString)) { /* ... */ } // This resolves to false without throwing a NullPointerException Criticism Yoda conditions are criticized for compromising readability by increasing the cognitive load of reading the code. 
Some programming languages (such as Swift, Kotlin and versions of Python below 3.8) do not allow variable assignments within conditionals (for example by requiring that assignments do not return a value, or by defining as part of their grammar the invariant that conditions cannot contain assignment statements), in which case this error is impossible to encounter: it would be detected as a syntax error by the parser before the program could ever run. Many compilers produce a warning for code such as if (myNumber = 42) (e.g., the GCC -Wall option warns "suggest parentheses around assignment used as truth value"), which alerts the programmer to the likely mistake. In dynamic languages like JavaScript, linters such as ESLint can warn on assignment inside a conditional. Python 3.8 introduced assignment expressions, but uses the walrus operator := instead of a regular equal sign (=) to avoid bugs that arise from confusing == with =. Another disadvantage appears in C++ when comparing non-basic types, as == is an operator and there may not be a suitable overloaded operator function defined. For example, comparing Microsoft's CComBSTR against a string literal, written as if (L"Hello" == cbstrMessage), does not map to an overloaded operator function. References External links Yoda conditions very harmful seems to be: criticism of this technique, arguing that it can do more harm than good Computer jargon Computer programming Conditional constructs WordPress
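The Python behavior described above can be sketched directly (a minimal illustration; the variable names are invented):

```python
my_number = 7

# The classic typo is impossible in Python: `if my_number = 42:` is a
# SyntaxError, rejected by the parser before the program ever runs.
# An intentional assignment-in-condition requires the walrus operator,
# e.g. `if (my_number := 42):`, which cannot be confused with `==`.
if 42 == my_number:   # Yoda order; `my_number == 42` behaves identically
    verdict = "equal"
else:
    verdict = "not equal"

print(verdict)  # not equal
```

The walrus operator thus preserves the one safety benefit claimed for Yoda conditions while keeping comparisons in their natural reading order.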
Yoda conditions
Technology,Engineering
1,206
9,487,872
https://en.wikipedia.org/wiki/Beatty%20sequence
In mathematics, a Beatty sequence (or homogeneous Beatty sequence) is the sequence of integers found by taking the floor of the positive multiples of a positive irrational number. Beatty sequences are named after Samuel Beatty, who wrote about them in 1926. Rayleigh's theorem, named after Lord Rayleigh, states that the complement of a Beatty sequence, consisting of the positive integers that are not in the sequence, is itself a Beatty sequence generated by a different irrational number. Beatty sequences can also be used to generate Sturmian words. Definition Any irrational number r that is greater than one generates the Beatty sequence B_r = (⌊r⌋, ⌊2r⌋, ⌊3r⌋, ...). The two irrational numbers r and s = r/(r − 1) naturally satisfy the equation 1/r + 1/s = 1. The two Beatty sequences B_r and B_s that they generate form a pair of complementary Beatty sequences. Here, "complementary" means that every positive integer belongs to exactly one of these two sequences. Examples When r is the golden ratio φ = (1 + √5)/2 ≈ 1.618, the complementary Beatty sequence is generated by s = φ + 1 = φ² ≈ 2.618. In this case, the sequence (⌊nφ⌋), known as the lower Wythoff sequence, is 1, 3, 4, 6, 8, 9, 11, 12, 14, 16, 17, 19, ... and the complementary sequence (⌊nφ²⌋), the upper Wythoff sequence, is 2, 5, 7, 10, 13, 15, 18, 20, 23, 26, 28, 31, ... These sequences define the optimal strategy for Wythoff's game, and are used in the definition of the Wythoff array. As another example, for the square root of 2, r = √2 ≈ 1.414 and s = 2 + √2 ≈ 3.414. In this case, the sequences are 1, 2, 4, 5, 7, 8, 9, 11, 12, ... and 3, 6, 10, 13, 17, 20, 23, 27, ... Any number in the first sequence is absent in the second, and vice versa. History Beatty sequences got their name from the problem posed in The American Mathematical Monthly by Samuel Beatty in 1926. It is probably one of the most often cited problems ever posed in the Monthly. However, even earlier, in 1894 such sequences were briefly mentioned by Lord Rayleigh in the second edition of his book The Theory of Sound. 
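The golden-ratio example above is easy to check numerically (a small sketch; `beatty` is a hypothetical helper name):

```python
import math

def beatty(r, n):
    """First n terms of the Beatty sequence of r: floor(r), floor(2r), ..."""
    return [math.floor(k * r) for k in range(1, n + 1)]

phi = (1 + math.sqrt(5)) / 2   # golden ratio, r = phi
lower = beatty(phi, 20)        # lower Wythoff sequence
upper = beatty(phi + 1, 12)    # s = phi/(phi - 1) = phi + 1 = phi**2

# Complementarity: together the two sequences list 1..32 exactly once.
assert sorted(lower + upper) == list(range(1, 33))
```

Floating-point arithmetic is adequate here because the multiples of φ never land close enough to an integer for the floor to be computed incorrectly at this scale.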
Rayleigh theorem Rayleigh's theorem (also known as Beatty's theorem) states that given an irrational number r > 1 there exists s = r/(r − 1) so that the Beatty sequences B_r and B_s partition the set of positive integers: each positive integer belongs to exactly one of the two sequences. First proof Given r > 1, let s = r/(r − 1). We must show that every positive integer lies in one and only one of the two sequences B_r and B_s. We shall do so by considering the ordinal positions occupied by all the fractions j/r and k/s when they are jointly listed in nondecreasing order for positive integers j and k. To see that no two of the numbers can occupy the same position (as a single number), suppose to the contrary that j/r = k/s for some j and k. Then r/s = j/k, a rational number, but also, r/s = r(1 − 1/r) = r − 1, not a rational number. Therefore, no two of the numbers occupy the same position. For any j/r, there are j positive integers i such that i/r ≤ j/r and ⌊js/r⌋ positive integers k such that k/s ≤ j/r, so that the position of j/r in the list is j + ⌊js/r⌋. The equation 1/r + 1/s = 1 implies j + ⌊js/r⌋ = j + ⌊j(s − 1)⌋ = ⌊js⌋. Likewise, the position of k/s in the list is ⌊kr⌋. Conclusion: every positive integer (that is, every position in the list) is of the form ⌊jr⌋ or of the form ⌊ks⌋, but not both. The converse statement is also true: if p and q are two real numbers such that every positive integer occurs precisely once in the above list, then p and q are irrational and the sum of their reciprocals is 1. Second proof Collisions: Suppose that, contrary to the theorem, there are integers j > 0 and k and m such that ⌊jr⌋ = ⌊ks⌋ = m. This is equivalent to the inequalities m ≤ jr < m + 1 and m ≤ ks < m + 1. For non-zero j, the irrationality of r and s is incompatible with equality, so m < jr < m + 1, which leads to m/r < j < (m + 1)/r and m/s < k < (m + 1)/s. Adding these together and using the hypothesis 1/r + 1/s = 1, we get m < j + k < m + 1, which is impossible (one cannot have an integer between two adjacent integers). Thus the supposition must be false. Anti-collisions: Suppose that, contrary to the theorem, there are integers j > 0 and k and m such that jr < m, m + 1 ≤ (j + 1)r, ks < m, and m + 1 ≤ (k + 1)s. Since j + 1 is non-zero and r and s are irrational, we can exclude equality, so jr < m < m + 1 < (j + 1)r and ks < m < m + 1 < (k + 1)s. Then we get j < m/r, (m + 1)/r < j + 1, k < m/s, and (m + 1)/s < k + 1. Adding corresponding inequalities, we get j + k < m and m + 1 < j + k + 2, that is m − 1 < j + k < m, which is also impossible. Thus the supposition is false. 
Properties A number m belongs to the Beatty sequence B_r if and only if 1 − 1/r < {m/r}, where {x} denotes the fractional part of x, i.e., {x} = x − ⌊x⌋. Furthermore, in that case m = ⌊(⌊m/r⌋ + 1)r⌋. Relation with Sturmian sequences The first difference of the Beatty sequence associated with the irrational number r is a characteristic Sturmian word over the alphabet {⌊r⌋, ⌊r⌋ + 1}. Generalizations If slightly modified, Rayleigh's theorem can be generalized to positive real numbers (not necessarily irrational) and negative integers as well: if positive real numbers r and s satisfy 1/r + 1/s = 1, the sequences (⌊mr⌋) and (⌈ms⌉ − 1), taken over all integers m, form a partition of the integers. For example, the white and black keys of a piano keyboard are distributed as such sequences for r = 12/7 and s = 12/5. The Lambek–Moser theorem generalizes the Rayleigh theorem and shows that more general pairs of sequences defined from an integer function and its inverse have the same property of partitioning the integers. Uspensky's theorem states that, if r_1, ..., r_n are positive real numbers such that the sequences (⌊k r_i⌋) taken together contain all positive integers exactly once, then n ≤ 2. That is, there is no equivalent of Rayleigh's theorem for three or more Beatty sequences. References Further reading Includes many references. External links Alexander Bogomolny, Beatty Sequences, Cut-the-knot Integer sequences Theorems in number theory Diophantine approximation Combinatorics on words Articles containing proofs
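The piano-keyboard example lends itself to a quick check. Assuming the ⌊mr⌋ / ⌈ms⌉ − 1 form of the rational-case partition described above, with r = 12/7 (seven white keys per octave) and s = 12/5 (five black keys), one octave of 12 semitones is covered exactly once:

```python
from math import ceil, floor
from fractions import Fraction

r = Fraction(12, 7)   # 7 white keys per 12 semitones
s = Fraction(12, 5)   # 5 black keys per 12 semitones
assert 1 / r + 1 / s == 1

white = [floor(m * r) for m in range(1, 8)]     # 7 terms
black = [ceil(m * s) - 1 for m in range(1, 6)]  # 5 terms

print(white)  # [1, 3, 5, 6, 8, 10, 12]
print(black)  # [2, 4, 7, 9, 11]
assert sorted(white + black) == list(range(1, 13))
```

Using `Fraction` keeps the rational arithmetic exact, which matters here because floor/ceiling values sit right at rational boundaries.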
Beatty sequence
Mathematics
1,033
456,234
https://en.wikipedia.org/wiki/Colligative%20properties
In chemistry, colligative properties are those properties of solutions that depend on the ratio of the number of solute particles to the number of solvent particles in a solution, and not on the nature of the chemical species present (McQuarrie, Donald, et al. "Colligative Properties of Solutions", General Chemistry. Mill Valley: Library of Congress, 2011). The number ratio can be related to the various units for concentration of a solution, such as molarity, molality, normality, etc. The assumption that solution properties are independent of the nature of solute particles is exact only for ideal solutions, which are solutions that exhibit thermodynamic properties analogous to those of an ideal gas, and is approximate for dilute real solutions. In other words, colligative properties are a set of solution properties that can be reasonably approximated by the assumption that the solution is ideal. Only properties which result from the dissolution of a nonvolatile solute in a volatile liquid solvent are considered. They are essentially solvent properties which are changed by the presence of the solute. The solute particles displace some solvent molecules in the liquid phase and thereby reduce the concentration of solvent and increase its entropy, so that the colligative properties are independent of the nature of the solute. The word colligative is derived from the Latin colligatus meaning bound together. This indicates that all colligative properties have a common feature, namely that they are related only to the number of solute molecules relative to the number of solvent molecules and not to the nature of the solute. Colligative properties include: relative lowering of vapor pressure (Raoult's law); elevation of boiling point; depression of freezing point; and osmotic pressure. For a given solute-solvent mass ratio, all colligative properties are inversely proportional to solute molar mass. 
Measurement of colligative properties for a dilute solution of a non-ionized solute such as urea or glucose in water or another solvent can lead to determinations of relative molar masses, both for small molecules and for polymers which cannot be studied by other means. Alternatively, measurements for ionized solutes can lead to an estimation of the percentage of dissociation taking place. Colligative properties are studied mostly for dilute solutions, whose behavior may be approximated as that of an ideal solution. In fact, all of the properties listed above are colligative only in the dilute limit: at higher concentrations, the freezing point depression, boiling point elevation, vapor pressure elevation or depression, and osmotic pressure are all dependent on the chemical nature of the solvent and the solute. Relative lowering of vapor pressure A vapor is a substance in a gaseous state at a temperature lower than its critical point. Vapor pressure is the pressure exerted by a vapor in thermodynamic equilibrium with its solid or liquid state. The vapor pressure of a solvent is lowered when a non-volatile solute is dissolved in it to form a solution. For an ideal solution, the equilibrium vapor pressure is given by Raoult's law as p = Σ p*_i x_i, where p*_i is the vapor pressure of the pure component (i = A, B, ...) and x_i is the mole fraction of the component in the solution. For a solution with a solvent (A) and one non-volatile solute (B), p = p*_A x_A. The vapor pressure lowering relative to pure solvent is Δp = p*_A − p = p*_A(1 − x_A) = p*_A x_B, which is proportional to the mole fraction of solute. If the solute dissociates in solution, then the number of moles of solute is increased by the van 't Hoff factor i, which represents the true number of solute particles for each formula unit. 
For example, the strong electrolyte MgCl2 dissociates into one Mg2+ ion and two Cl− ions, so that if ionization is complete, i = 3 and x_B = i·n_B/(n_A + i·n_B), where the mole fraction is calculated with moles of solute equal to i times the initial moles, and moles of solvent the same as the initial moles of solvent before dissociation. The measured colligative properties show that i is somewhat less than 3 due to ion association. Boiling point and freezing point Addition of solute to form a solution stabilizes the solvent in the liquid phase, and lowers the solvent's chemical potential so that solvent molecules have less tendency to move to the gas or solid phases. As a result, liquid solutions slightly above the solvent boiling point at a given pressure become stable, which means that the boiling point increases. Similarly, liquid solutions slightly below the solvent freezing point become stable, meaning that the freezing point decreases. Both the boiling point elevation and the freezing point depression are proportional to the lowering of vapor pressure in a dilute solution. These properties are colligative in systems where the solute is essentially confined to the liquid phase. Boiling point elevation (like vapor pressure lowering) is colligative for non-volatile solutes, where the solute presence in the gas phase is negligible. Freezing point depression is colligative for most solutes since very few solutes dissolve appreciably in solid solvents. Boiling point elevation (ebullioscopy) The boiling point of a liquid at a given external pressure is the temperature (Tb) at which the vapor pressure of the liquid equals the external pressure. The normal boiling point is the boiling point at a pressure equal to 1 atm. The boiling point of a pure solvent is increased by the addition of a non-volatile solute, and the elevation can be measured by ebullioscopy. 
It is found that ΔTb = i·Kb·m. Here i is the van 't Hoff factor as above, Kb is the ebullioscopic constant of the solvent (equal to 0.512 °C kg/mol for water), and m is the molality of the solution. The boiling point is the temperature at which there is equilibrium between liquid and gas phases. At the boiling point, the number of gas molecules condensing to liquid equals the number of liquid molecules evaporating to gas. Adding a solute dilutes the concentration of the liquid molecules and reduces the rate of evaporation. To compensate for this and re-attain equilibrium, the boiling point occurs at a higher temperature. If the solution is assumed to be an ideal solution, Kb can be evaluated from the thermodynamic condition for liquid-vapor equilibrium. At the boiling point, the chemical potential μA of the solvent in the solution phase equals the chemical potential in the pure vapor phase above the solution. The asterisks indicate pure phases. This leads to the result Kb = RMTb²/ΔHvap, where R is the molar gas constant, M is the solvent molar mass and ΔHvap is the solvent molar enthalpy of vaporization. Freezing point depression (cryoscopy) The freezing point (Tf) of a pure solvent is lowered by the addition of a solute which is insoluble in the solid solvent, and the measurement of this difference is called cryoscopy. It is found that ΔTf = i·Kf·m (which can also be written as Tf,pure − Tf,solution = i·Kf·m). Here Kf is the cryoscopic constant (equal to 1.86 °C kg/mol for the freezing point of water), i is the van 't Hoff factor, and m the molality (in mol/kg). This predicts the melting of ice by road salt. In the liquid solution, the solvent is diluted by the addition of a solute, so that fewer molecules are available to freeze. Re-establishment of equilibrium is achieved at a lower temperature at which the rate of freezing becomes equal to the rate of liquefying. 
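Using the constants quoted above for water (Kb = 0.512 °C kg/mol, Kf = 1.86 °C kg/mol), the ebullioscopic and cryoscopic formulas can be sketched as follows; the 1 mol/kg NaCl example and the complete-dissociation assumption (i = 2) are illustrative:

```python
def boiling_point_elevation(i, kb, molality):
    """Delta Tb = i * Kb * m (ebullioscopy)."""
    return i * kb * molality

def freezing_point_depression(i, kf, molality):
    """Delta Tf = i * Kf * m (cryoscopy)."""
    return i * kf * molality

# 1.00 mol/kg aqueous NaCl, assuming complete dissociation (i = 2):
dTb = boiling_point_elevation(2, 0.512, 1.00)   # 1.024 degC above 100 degC
dTf = freezing_point_depression(2, 1.86, 1.00)  # freezes about 3.72 degC lower
```

In practice the measured i for NaCl is slightly below 2 because of ion association, so real elevations and depressions are a little smaller than these ideal values.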
At the lower freezing point, the vapor pressure of the liquid is equal to the vapor pressure of the corresponding solid, and the chemical potentials of the two phases are equal as well. The equality of chemical potentials permits the evaluation of the cryoscopic constant as Kf = RMTf²/ΔfusH, where ΔfusH is the solvent molar enthalpy of fusion. Osmotic pressure The osmotic pressure of a solution is the difference in pressure between the solution and the pure liquid solvent when the two are in equilibrium across a semipermeable membrane, which allows the passage of solvent molecules but not of solute particles. If the two phases are at the same initial pressure, there is a net transfer of solvent across the membrane into the solution known as osmosis. The process stops and equilibrium is attained when the pressure difference equals the osmotic pressure. Two laws governing the osmotic pressure of a dilute solution were discovered by the German botanist W. F. P. Pfeffer and the Dutch chemist J. H. van’t Hoff: The osmotic pressure of a dilute solution at constant temperature is directly proportional to its concentration. The osmotic pressure of a solution is directly proportional to its absolute temperature. These are analogous to Boyle's law and Charles's law for gases. Similarly, the combined ideal gas law, PV = nRT, has as an analogue for ideal solutions ΠV = inRT, where Π is the osmotic pressure; V is the volume; n is the number of moles of solute; R is the molar gas constant 8.314 J K−1 mol−1; T is absolute temperature; and i is the Van 't Hoff factor. The osmotic pressure is then proportional to the molar concentration c = n/V, since Π = icRT. The osmotic pressure is proportional to the concentration of solute particles ci and is therefore a colligative property. As with the other colligative properties, this equation is a consequence of the equality of solvent chemical potentials of the two phases in equilibrium. In this case the phases are the pure solvent at pressure P and the solution at total pressure (P + Π). 
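The van 't Hoff analogue of the ideal gas law described above, ΠV = inRT, can be checked with a small sketch (the 0.10 mol/L sucrose example is illustrative):

```python
R = 8.314  # molar gas constant, J K^-1 mol^-1

def osmotic_pressure(i, conc_mol_per_m3, temp_kelvin):
    """Pi = i * c * R * T with c = n/V in mol m^-3; returns pascals."""
    return i * conc_mol_per_m3 * R * temp_kelvin

# 0.10 mol/L sucrose (a non-electrolyte, so i = 1) at 298 K.
# 0.10 mol/L = 100 mol/m^3:
pi = osmotic_pressure(1, 100.0, 298.0)  # about 2.48e5 Pa, roughly 2.4 atm
```

Even this dilute solution exerts an osmotic pressure of a couple of atmospheres, which is why osmometry is a sensitive route to molar mass determination.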
History The word colligative (Latin: co, ligare) was introduced in 1891 by Wilhelm Ostwald. Ostwald classified solute properties in three categories (H. W. Smith, Circulation 21, 808 (1960), "Theory of Solutions: A Knowledge of the Laws of Solutions ..."): colligative properties, which depend only on solute concentration and temperature and are independent of the nature of the solute particles; additive properties such as mass, which are the sums of properties of the constituent particles and therefore depend also on the composition (or molecular formula) of the solute; and constitutional properties, which depend further on the molecular structure of the given solute. References Solutions Physical chemistry Amount of substance
Colligative properties
Physics,Chemistry,Mathematics
2,144
43,401,361
https://en.wikipedia.org/wiki/Alexander%20Hay%20Japp
Alexander Hay Japp (26 December 1836 – 29 September 1905) was a Scottish author, journalist and publisher. Life Born at Dun, Angus, on 26 December 1836, he was youngest son of Alexander Japp, a carpenter, by his wife Agnes Hay. After his father's early death, the mother and her family moved to Montrose, where he was educated at Milne's school. At seventeen Japp became a book-keeper with Messrs. Christie and Sons, tailors, at Edinburgh. Three years later he moved to London, and for two years was employed in the East India department of Smith, Elder and Co. Returning to Scotland in poor health, he worked for Messrs. Grieve and Oliver, Edinburgh hatters, and in his leisure in 1860–1 attended classes at the university in metaphysics, logic, and moral philosophy. He became a double prizeman in rhetoric, and received from Professor William Edmondstoune Aytoun a special certificate of distinction; but he did not graduate. At Edinburgh Japp associated with young artists, including John Pettie and his friends. Turning to journalism, he edited the Inverness Courier and the Montrose Review. Having settled in London in 1864, he joined The Daily Telegraph for a short time. While writing for other papers, he acted as general literary adviser to the publishing firm of Alexander Strahan (later William Isbister &Co.), and assisted in editing their periodicals: Good Words, Sunday Magazine (from 1869 to 1879), and The Contemporary Review from 1866 to 1872, while Henry Alford was editor. He also assisted Robert Carruthers in the third edition of Chambers's Cyclopædia of English Literature. In October 1880, Japp started as a publisher, under the style Marshall Japp and Co., at 17 Holborn Viaduct; but bad health and insufficient capital led him to make the venture over to T. Fisher Unwin in 1882. From that year to 1888 he was literary adviser to the firm of Hurst and Blackett. From 1884 till 1900 he lived at Elmstead, near Colchester, where he cultivated his taste for natural history. 
After three years in London he finally settled at Coulsdon, Surrey, in September 1903. There, busy to the last, he died on 29 September 1905, and was buried in Abney Park cemetery. He was made LL.D. of Glasgow University in 1879, and in 1880 was elected Fellow of the Royal Society of Edinburgh. Relationship with Stevenson Japp's interest in Henry Thoreau brought him the acquaintance of Robert Louis Stevenson. The two men met at Braemar in August 1881, and Japp's conversation attracted Stevenson and his father. Stevenson read to Japp the early chapters of Treasure Island, then called The Sea Cook, and Japp negotiated its publication in Young Folks. Subsequently Stevenson and Japp corresponded on intimate terms; and Japp's last work, Robert Louis Stevenson: a Record, an Estimate, and a Memorial (1905), was the result of their contacts. Works Japp was a versatile and prolific writer, writing under pseudonyms such as "H. A. Page", "A. F. Scot", "E. Conder Gray", and "A. N. Mount Rose" as well as in his own name. In his own name he issued in 1865 Three Great Teachers of our own Time: Carlyle, Tennyson, and Ruskin, which Ruskin found perceptive. He issued a selection of Thomas de Quincey's Posthumous Works (vol. i. 1891; vol. ii. 1893) and De Quincey Memorials: being Letters and other Records here first published (1891). As "H. A. Page" he published: The Memoir of Nathaniel Hawthorne (1872; with several uncollected contributions to American periodicals); an analytical Study of Thoreau (1878); and his major work, De Quincey: his Life and Writings, with Unpublished Correspondence (supplied by De Quincey's daughters) (2 vols. 1877; 2nd edit. 1879, revised edit. in one vol. 1890). Japp tried many genres. Under a double pseudonym he issued in 1878 Lights on the Way (by "the late J. H. Alexander, B.A.", with explanatory note by "H. A. Page"), which was semi-autobiographical fiction. 
There followed: German Life and Literature (1880; studies of Lessing, Goethe, Moses Mendelssohn, Herder, Novalis, and other writers); Hours in my Garden, and Other Nature-Sketches (1893); three volumes of verse: The Circle of the Year: a Sonnet Sequence with Proem and Envoi (privately printed, 1893); Dramatic Pictures, English Rispetti, Sonnets and other Verses (1894); Adam and Lilith: a Poem in Four Parts (1899; by "A. F. Scot"); Animal Anecdotes arranged on a New Principle (by "H. A. Page") (1887); it attempted to show that the faculties of certain animals differ in degree rather than in kind from those of men; Offering and Sacrifice: an Essay in Comparative Customs and Religious Development by "A. F. Scot" (1899); Some Heresies in Ethnology and Anthropology dealt with under his own name (1899); More Loose Links in the Darwinian Armour (1900); Our Common Cuckoo and Other Cuckoos and Parasitical Birds (1899), a criticism of the Darwinian view of parasitism; and Darwin Considered Mainly as Ethical Thinker (1901), a criticism of the hypothesis of natural selection. Family Japp married twice: in 1863 Elizabeth Paul Falconer (died 1888), daughter of John Falconer of Laurencekirk in Kincardineshire; and secondly Eliza Love, of Scottish descent. There were seven children of the first marriage. References Attribution External links 1836 births 1905 deaths Non-Darwinian evolution Scottish journalists Scottish magazine editors People from Angus, Scotland
Alexander Hay Japp
Biology
1,244
24,236,771
https://en.wikipedia.org/wiki/Gertrud%20Szabolcsi
Gertrud Szabolcsi (26 January 1923, in Oradea, Romania – 28 March 1993, in Budapest, Hungary) was a biochemist. Her research centered on the structure and function of enzymes. She was a member of the Hungarian Academy of Sciences. She was First Lady of Hungary as the second wife of Brunó Ferenc Straub, the last chairman of the Hungarian Presidential Council from 1988 until 1989. She and her husband received the 41st president of the United States, George H. W. Bush and his wife, Barbara Bush who visited Hungary on 12 July 1989. Her daughter from her first marriage is linguist Anna Szabolcsi. Selected works P Friedrich, L Polgár, G Szabolcsi, 1964. Effect of Photo-oxidation on Glyceraldehyde-3-phosphate Dehydrogenase. nature.com T Devenyi, P Elodi, T Keleti, G Szabolcsi, 1969. Strukturelle Grundlagen der Biologischen Funktion der Proteine. Akadémiai Kiadó. B Szajani, M Sajgo, E Biszku, P Friedrich, G Szabolcsi, 1970. Identification of a cysteinyl residue involved in the activity of rabbit muscle aldolase. European Journal of Biochemistry. E Biszku, M Sajgo, M Solti, G Szabolcsi, 1973. On the mechanism of formation of a partially active aldolase by tryptic digestion. European Journal of Biochemistry. T Devenyi, K. Bocsa, F Kovats, S Pongor, G Szabolcsi, M Such, 1983. Method of modifying the conformation of food and feed proteins. US Patent. G Szabolcsi, 1991. Enzimes Analizis. Akadémiai Kiadó. References External links Hungarian biochemists Women biochemists Hungarian women chemists 1923 births 1993 deaths Hungarian women scientists 20th-century women scientists First ladies of Hungary Romanian emigrants to Hungary
Gertrud Szabolcsi
Chemistry
432
3,071,326
https://en.wikipedia.org/wiki/Giant%20cell
A giant cell (also known as a multinucleated giant cell, or multinucleate giant cell) is a mass formed by the union of several distinct cells (usually histiocytes), often forming a granuloma. Although there is typically a focus on the pathological aspects of multinucleate giant cells (MGCs), they also play many important physiological roles. Osteoclasts are a type of MGC that are critical for the maintenance, repair, and remodeling of bone and are present normally in a healthy human body. Osteoclasts are frequently classified and discussed separately from other MGCs which are more closely linked with disease. Non-osteoclast MGCs can arise in response to an infection, such as tuberculosis, herpes, or HIV, or as part of a foreign body reaction. These MGCs are cells of monocyte or macrophage lineage fused together. Similar to their monocyte precursors, they can phagocytose foreign materials. However, their large size and extensive membrane ruffling make them better equipped to clear up larger particles. They utilize activated CR3s to ingest complement-opsonized targets. Non-osteoclast MGCs are also responsible for the clearance of cell debris, which is necessary for tissue remodeling after injuries. Types include foreign-body giant cells, Langhans giant cells, Touton giant cells, Giant-cell arteritis History Osteoclasts were discovered in 1873. However, it was not until the development of the organ culture in the 1970s that their origin and function could be deduced. Although there was a consensus early on about the physiological function of osteoclasts, theories on their origins were heavily debated. Many believed osteoclasts and osteoblasts came from the same progenitor cell. Because of this, osteoclasts were thought to be derived from cells in connective tissue. Studies that observed that bone resorption could be restored by bone marrow and spleen transplants helped prove osteoclasts' hematopoietic origin. 
Other multinucleated giant cell formations can arise from numerous types of bacteria, diseases, and cell formations. Giant cells are also known to develop when infections are present. They were first observed as early as the middle of the last century, but it is not fully understood why these reactions occur. In the process of giant cell formation, monocytes or macrophages fuse together, which could cause multiple problems for the immune system. Osteoclast Osteoclasts are the most prominent examples of MGCs and are responsible for the resorption of bones in the body. Like other MGCs, they are formed from the fusion of monocyte/macrophage precursors. However, unlike other MGCs, the fusion pathway they originate from is well elucidated. They also do not ingest foreign materials and instead absorb bone matrix and minerals. Osteoclasts are typically associated more with healthy physiological functions than they are with pathological states. They function alongside osteoblasts to remodel and maintain the integrity of bones in the body. They also contribute to the creation of the niche necessary for hematopoiesis and negatively regulate T cells. However, while the primary functions of osteoclasts are integral to maintaining a healthy physiological state, they have also been linked to osteoporosis and the formation of bone tumors. Giant cell arteritis Giant cell arteritis, also known as temporal arteritis or cranial arteritis, is the most common MGC-linked disease. This type of arteritis causes the arteries in the head, neck, and arm area to swell to abnormal sizes. Although the cause of this disease is not currently known, it appears to be related to polymyalgia rheumatica. Giant cell arteritis is most prevalent in older individuals, with the rate of disease being seen to increase from age 50. Women are 2–3 times more likely to develop the disease than men. 
Northern Europeans have been observed to have a higher incidence of giant cell arteritis compared to southern European, Hispanic, and Asian populations. It has been suggested that this difference may lie in the criteria used to diagnose giant cell arteritis rather than actual disease incidence, in addition to genetic and geographic factors. Symptoms Symptoms may include a mild fever, loss of appetite, fatigue, vision loss, and severe headaches. These symptoms are often misinterpreted, leading to a delay in treatment. If left untreated, this disease can result in permanent blindness. Diagnosis The current gold standard for diagnosis is a temporal artery biopsy. The skin on the patient's face is anesthetized, and an incision is made in the face around the area of the temples to obtain a sample of the temporal artery. The incision is then sutured. A histopathologist examines the sample under a microscope and issues a pathology report (pending extra tests that may be requested by the pathologist). The management regime consists primarily of systemic corticosteroids (e.g. prednisolone), commencing at a high dose. Langhans giant cell Langhans giant cells are named for the pathologist who discovered them, Theodor Langhans. Like many of the other kinds of giant cell formations, they arise when epithelioid macrophages fuse together to form a multinucleated giant cell. The nuclei form a circle or semicircle, similar to the shape of a horseshoe, away from the center of the cell. Langhans giant cells were typically associated with tuberculosis but have been found to occur in many types of granulomatous disease. They are closely linked with tuberculosis, syphilis, sarcoidosis, and deep fungal infections, and occur frequently in delayed-type hypersensitivity reactions. Symptoms Symptoms may include fever, weight loss, fatigue and loss of appetite. Diagnosis Langhans giant cells can arise in response to bacteria that spread from person to person through the air.
Tuberculosis is closely associated with HIV, since people with HIV have a weakened ability to fight off infections. Several tests may therefore be performed to rule out related diseases and reach the correct diagnosis. Touton giant cell Also known as xanthelasmatic giant cells, Touton giant cells consist of fused epithelioid macrophages and have multiple nuclei. They are characterized by the ring-shaped arrangement of their nuclei and the presence of foamy cytoplasm surrounding the nucleus. Touton giant cells have been observed in lipid-laden lesions such as fat necrosis. Demographics Touton giant cell formation is most common in men and women aged 37–78. Symptoms Touton giant cells typically cause similar symptoms to other forms of giant cell, such as fever, weight loss, fatigue and loss of appetite. Foreign-body giant cell Foreign-body giant cells form when a subject is exposed to a foreign substance. Exogenous substances can include talc or sutures. As with other types of giant cell, the fusion of epithelioid macrophages causes these giant cells to form and grow. In this form of giant cell, the nuclei are arranged in an overlapping manner. This type of giant cell is often found in tissue surrounding medical devices, prostheses, and biomaterials. Reed-Sternberg cell Reed-Sternberg cells are generally thought to originate from B-lymphocytes. They are hard to study due to their rarity, and there are other theories about the origins of these cells. Some less popular theories speculate that they may arise from the fusion between reticulum cells, lymphocytes, and virus-infected cells. Similar to other MGCs, Reed-Sternberg cells are large and are either multinucleated or have a bilobed nucleus. Their nuclei are irregularly shaped, contain clear chromatin, and possess an eosinophilic nucleolus.
Role in tumour formation Some researchers have conjectured that giant cells may be instrumental in the formation of tumours, and that their origin may be in the stress-induced genomic reorganization proposed by Nobel Laureate Barbara McClintock. It had previously been suggested that such genomic stress could be aggravated by some genotoxic agents used in cancer therapy. Poly-aneuploid cancer cells (PACCs) may serve as efficient sources of heritable variation that allows cancer cells to evolve rapidly. Endogenous causative agents Endogenous substances such as keratin, fat, and cholesterol crystals (cholesteatoma) can induce giant cell formation. Multinucleated giant cells in COVID-19 patients Coronavirus disease 2019 (COVID-19) is caused by a novel coronavirus called SARS-CoV-2. Multinucleated giant cells have been detected in biopsy specimens from patients with COVID-19. These giant cells were first found in the pulmonary pathology of early-phase COVID-19 pneumonia in two patients with lung cancer who underwent biopsy. Specifically, they were located in inflammatory fibrin clusters, sometimes together with mononuclear inflammatory cells. Another pathological study also detected this type of giant cell in COVID-19 and described it as a "multinucleated syncytial cell". The morphological analysis showed that multinucleated syncytial cells and atypical enlarged pneumocytes demonstrating cytomorphological changes consistent with viral infection were found in the intra-alveolar spaces. The viral antigen was detected in the cytoplasm of multinucleated syncytial cells, indicating the presence of the SARS-CoV-2 virus. However, a later post-mortem study has described these cells as 'giant cell-like' rather than true giant cells derived from histiocytes. Instead, they are derived from type II pneumocyte clusters with cytopathic changes, which was confirmed by cytokeratin staining.
The infection and pathogenesis of the SARS-CoV-2 virus in human patients remain largely unknown. Multinucleate giant cells have also been described in MERS-CoV infection, a closely related coronavirus. Further study to characterize the role of multinucleated giant cells in human immune defense against COVID-19 may lead to more effective therapies. See also Idiopathic giant cell myocarditis Large cell Reed–Sternberg cell Subependymal giant cell astrocytoma Syncytium References External links Macrophage fusion: the making of osteoclasts and giant cells Cell biology
Giant cell
Biology
2,237
39,725,748
https://en.wikipedia.org/wiki/Perspectival%20realism
In Caspar Hare's theory of perspectival realism, there is a defining intrinsic property that the things that are in perceptual awareness have. Consider seeing object A but not object B. Of course, we can say that the visual experience of A is present to you, and no visual experience of B is present to you. But, it can be argued, this misses the fact that the visual experience of A is simply present, not relative to anything. This is what Hare's perspectival realism attempts to capture, resulting in a weak version of metaphysical solipsism. As Hare points out, the same type of argument is often used in the philosophy of time to support theories such as presentism. Of course, we can say that A is happening on [insert today's date]. But, it can be argued, this misses the fact that A is simply happening (right now), not relative to anything. Hare's theory of perspectival realism is closely related to his theory of egocentric presentism. Several other philosophers have written reviews of Hare's work on this topic. See also Metaphysical subjectivism Centered worlds Benj Hellie's vertiginous question J.J. Valberg's personal horizon References External links Hare, Caspar. Self-Bias, Time-Bias, and the Metaphysics of Self and Time. Preprint of article in The Journal of Philosophy (2007). Hare, Caspar. On Myself, and Other, Less Important Subjects. Early draft of book published by Princeton University Press (2009). Hare, Caspar. Realism About Tense and Perspective. Preprint of article in Philosophy Compass (2010). Epistemological theories Metaphysics of mind Philosophical realism Philosophy of time Theory of mind
Perspectival realism
Physics
371
250,815
https://en.wikipedia.org/wiki/Selectron%20tube
The Selectron was an early form of digital computer memory developed by Jan A. Rajchman and his group at the Radio Corporation of America (RCA) under the direction of Vladimir K. Zworykin. It was a vacuum tube that stored digital data as electrostatic charges using technology similar to the Williams tube storage device. The team was never able to produce a commercially viable form of Selectron before magnetic-core memory became almost universal. Development Development of Selectron started in 1946 at the behest of John von Neumann of the Institute for Advanced Study, who was in the midst of designing the IAS machine and was looking for a new form of high-speed memory. RCA's original design concept had a capacity of 4096 bits, with a planned production of 200 by the end of 1946. They found the device to be much more difficult to build than expected, and they were still not available by the middle of 1948. As development dragged on, the IAS machine was forced to switch to Williams tubes for storage, and the primary customer for Selectron disappeared. RCA lost interest in the design and assigned its engineers to improve televisions. A contract from the US Air Force led to a re-examination of the device in a 256-bit form. Rand Corporation took advantage of this project to switch their own IAS machine, the JOHNNIAC, to this new version of the Selectron, using 80 of them to provide 512 40-bit words of main memory. They signed a development contract with RCA to produce enough tubes for their machine at a projected cost of $500 per tube. Around this time IBM expressed an interest in the Selectron as well, but this did not lead to additional production. As a result, RCA assigned their engineers to color television development, and put the Selectron in the hands of "the mothers-in-law of two deserving employees (the Chairman of the Board and the President)."
Both the Selectron and the Williams tube were superseded in the market by the compact and cost-effective magnetic-core memory in the early 1950s. The JOHNNIAC developers had decided to switch to core even before the first Selectron-based version had been completed. Principle of operation Electrostatic storage The Williams tube was an example of a general class of cathode-ray tube (CRT) devices known as storage tubes. The primary function of a conventional CRT is to display an image by lighting phosphor using a beam of electrons fired at it from an electron gun at the back of the tube. The target point of the beam is steered around the front of the tube through the use of deflection magnets or electrostatic plates. Storage tubes were based on CRTs, sometimes unmodified. They relied on two normally undesirable properties of the phosphor used in the tubes. One was that when electrons from the CRT's electron gun struck the phosphor to light it, some of the electrons "stuck" to the tube and caused a localized static electric charge to build up. This charge opposed any future electrons flowing into that area from the gun, and caused differences in brightness. The second was that the phosphor, like many materials, also released new electrons when struck by an electron beam, a process known as secondary emission. Secondary emission had the useful feature that the rate of electron release was significantly non-linear. When a voltage was applied that crossed a certain threshold, the rate of emission increased dramatically. This caused the lit spot to rapidly decay, which also caused any stuck electrons to be released as well. Visual systems used this process to erase the display, causing any stored pattern to rapidly fade. For computer uses it was the rapid release of the stuck charge that allowed it to be used for storage.
In the Williams tube, the electron gun at the back of an otherwise typical CRT is used to deposit a series of small patterns representing a 1 or 0 on the phosphor in a grid representing memory addresses. To read the display, the beam scanned the tube again, this time set to a voltage very close to that of the secondary emission threshold. The patterns were selected to bias the tube very slightly positive or negative. When the stored static electricity was added to the voltage of the beam, the total voltage either crossed the secondary emission threshold or didn't. If it crossed the threshold, a burst of electrons was released as the dot decayed. This burst was read capacitively on a metal plate placed just in front of the display side of the tube. There were four general classes of storage tubes; the "surface redistribution type" represented by the Williams tube, the "barrier grid" system, which was unsuccessfully commercialized by RCA as the Radechon tube, the "sticking potential" type which was not used commercially, and the "holding beam" concept, of which the Selectron is a specific example. Holding beam concept In the most basic implementation, the holding beam tube uses three electron guns; one for writing, one for reading, and a third "holding gun" that maintains the pattern. The general operation is very similar to the Williams tube in concept. The main difference was the holding gun, which fired continually and unfocussed so it covered the entire storage area on the phosphor. This caused the phosphor to be continually charged to a selected voltage, somewhat below that of the secondary emission threshold. Writing was accomplished by firing the writing gun at low voltage in a fashion similar to the Williams tube, adding a further voltage to the phosphor. Thus the storage pattern was the slight difference between two voltages stored on the tube, typically only a few tens of volts different. 
In comparison, the Williams tube used much higher voltages, producing a pattern that could only be stored for a short period before it decayed below readability. Reading was accomplished by scanning the reading gun across the storage area. This gun was set to a voltage that would cross the secondary emission threshold for the entire display. If the scanned area held the holding gun potential a certain number of electrons would be released, if it held the writing gun potential the number would be higher. The electrons were read on a grid of fine wires placed behind the display, making the system entirely self-contained. In contrast, the Williams tube's read plate was in front of the tube, and required continual mechanical adjustment to work properly. The grid also had the advantage of breaking the display into individual spots without requiring the tight focus of the Williams system. General operation was the same as the Williams system, but the holding concept had two major advantages. One was that it operated at much lower voltage differences and was thus able to safely store data for a longer period of time. The other was that the same deflection magnet drivers could be sent to several electron guns to produce a single larger device with no increase in complexity of the electronics. Design The Selectron further modified the basic holding gun concept through the use of individual metal eyelets that were used to store additional charge in a more predictable and long-lasting fashion. Unlike a CRT where the electron gun is a single point source consisting of a filament and single charged accelerator, in the Selectron the "gun" is a plate and the accelerator is a grid of wires (thus borrowing some design notes from the barrier-grid tube). Switching circuits allow voltages to be applied to the wires to turn them on or off. When the gun fires through the eyelets, it is slightly defocussed. Some of the electrons strike the eyelet and deposit a charge on it. 
The original 4096-bit Selectron was a vacuum tube configured as 1024 by 4 bits. It had an indirectly heated cathode running up the middle, surrounded by two separate sets of wires — one radial, one axial — forming a cylindrical grid array, and finally a dielectric storage material coating on the inside of four segments of an enclosing metal cylinder, called the signal plates. The bits were stored as discrete regions of charge on the smooth surfaces of the signal plates. The two sets of orthogonal grid wires were normally "biased" slightly positive, so that the electrons from the cathode were accelerated through the grid to reach the dielectric. The continuous flow of electrons allowed the stored charge to be continuously regenerated by the secondary emission of electrons. To select a bit to be read from or written to, all but two adjacent wires on each of the two grids were biased negative, allowing current to flow to the dielectric at one location only. In this respect, the Selectron works in the opposite sense of the Williams tube. In the Williams tube, the beam is continually scanning in a read/write cycle which is also used to regenerate data. In contrast, the Selectron is almost always regenerating the entire tube, only breaking this periodically to do actual reads and writes. This not only made operation faster due to the lack of required pauses but also meant the data was much more reliable as it was constantly refreshed. Writing was accomplished by selecting a bit, as above, and then sending a pulse of potential, either positive or negative, to the signal plate. With a bit selected, electrons would be pulled onto (with a positive potential) or pushed from (negative potential) the dielectric. When the bias on the grid was dropped, the electrons were trapped on the dielectric as a spot of static electricity. To read from the device, a bit location was selected and a pulse sent from the cathode.
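The coincident-selection scheme described above (all but two adjacent wires on each grid biased negative, so current reaches the dielectric at one location only) can be sketched as a toy model. The wire counts, gap numbering, and function names here are hypothetical, chosen only to illustrate how two one-dimensional selections intersect at a single storage position; this is not a circuit-level description of the actual tube.

```python
# Toy model of Selectron coincident selection (illustrative only; wire
# counts are hypothetical). A storage position sits in the gap between
# two neighbouring wires; electrons pass a gap only when BOTH of its
# bounding wires are biased positive.

N_GAPS = 32  # hypothetical: 32 gaps per grid -> 32 x 32 = 1024 positions

def open_gaps(selected_gap: int, n_gaps: int = N_GAPS) -> list[int]:
    """Bias wires selected_gap and selected_gap + 1 positive, all others
    negative, and return the gaps that still pass current."""
    positive_wires = {selected_gap, selected_gap + 1}
    return [g for g in range(n_gaps)
            if g in positive_wires and g + 1 in positive_wires]

def selected_positions(radial_gap: int, axial_gap: int) -> list[tuple[int, int]]:
    """Current reaches the dielectric only where an open radial gap
    crosses an open axial gap."""
    return [(r, a) for r in open_gaps(radial_gap) for a in open_gaps(axial_gap)]

# Exactly one of the 1024 positions is selected:
assert selected_positions(3, 17) == [(3, 17)]
```

The point of the model is that each grid performs a one-out-of-N selection with only N + 1 drive wires, and the two grids together address N × N positions, which is why the same deflection-free wiring could address 1024 bits.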
If the dielectric for that bit contained a charge, the electrons would be pushed off the dielectric and read as a brief pulse of current in the signal plate. No such pulse meant that the dielectric must not have held a charge. The smaller capacity 256-bit (128 by 2 bits) "production" device was in a similar vacuum-tube envelope. It was built with two storage arrays of discrete "eyelets" on a rectangular plate, separated by a row of eight cathodes. The pin count was reduced from 44 for the 4096-bit device down to 31 pins and two coaxial signal output connectors. This version included visible green phosphors in each eyelet so that the bit status could also be read by eye. Patents Cylindrical 4096-bit Selectron Planar 256-bit Selectron References Citations Bibliography Republished in IEEE Annals of the History of Computing, Volume 20 Number 4 (October 1988), pp. 11–28 External links The Selectron Early Devices display: Memories — has a picture of a 256-bit Selectron about halfway down the page More pictures History of the RCA Selectron Computer memory RCA brands Vacuum tubes
Selectron tube
Physics
2,200
2,668,623
https://en.wikipedia.org/wiki/Sigma%20Serpentis
Sigma Serpentis, Latinized from σ Serpentis, is a star in the equatorial constellation Serpens. It is faintly visible to the naked eye with an apparent visual magnitude of +4.82. Based upon an annual parallax shift of 36.67 mas as seen from Earth, it is located 89 light years from the Sun. The star is moving closer to the Sun with a radial velocity of −49 km/s. Barry (1970) assigned this star a stellar classification of F3 V, indicating an ordinary F-type main-sequence star. It is about one billion years old and is spinning with a projected rotational velocity of 77.7 km/s. The star has an estimated 1.58 times the mass of the Sun and is radiating 7.7 times the Sun's luminosity from its photosphere at an effective temperature of 6,952 K. References F-type main-sequence stars Serpentis, Sigma Serpens Durchmusterung objects Serpentis, 50 147449 080179 6093
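The quoted distance follows directly from the parallax: the distance in parsecs is the reciprocal of the parallax in arcseconds. A quick check of the figures above (function name is my own):

```python
# Distance from annual parallax: d [pc] = 1 / p [arcsec].
LY_PER_PC = 3.26156  # light years per parsec

def parallax_to_light_years(parallax_mas: float) -> float:
    parsecs = 1000.0 / parallax_mas  # milliarcseconds -> arcseconds -> parsecs
    return parsecs * LY_PER_PC

# 36.67 mas -> about 27.3 pc -> about 89 light years, as quoted.
assert round(parallax_to_light_years(36.67)) == 89
```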
Sigma Serpentis
Astronomy
218
839,968
https://en.wikipedia.org/wiki/Thai%20six-hour%20clock
The six-hour clock is a traditional timekeeping system used in the Thai and formerly the Lao language and the Khmer language, alongside the official 24-hour clock. Like other common systems, it counts twenty-four hours in a day, but it divides the day into four quarters, counting six hours in each. The hours in each quarter (with the exception of the sixth hour in each quarter) are told with period-designating words or phrases, which are: ... mong chao (, ) for the first half of daytime (07:00 to 12:59) Bai ... mong (, ) for the second half of daytime (13:00 to 18:59) ... thum (, ) for the first half of nighttime (19:00 to 00:59) Ti ... (, ) for the second half of nighttime (01:00 to 06:59) These terms are thought to have originated from the sounds of traditional timekeeping devices. The gong was used to announce the hours in daytime, and the drum at night. Hence the terms mong, an onomatopoeia of the sound of the gong, and thum, that of the sound of the drum. Ti is a verb meaning to hit or strike, and is presumed to have originated from the act of striking the timekeeping device itself. Chao and bai translate as morning and afternoon respectively, and help to differentiate the two daytime quarters. The sixth hours of each quarter are told by a different set of terms. The sixth hour at dawn is called yam rung (, ), and the sixth hour at dusk is called yam kham (, ), both references to the act of striking the gong or drum in succession to announce the turning of day (yam), where rung and kham, meaning dawn and dusk, denote the time of these occurrences. The midday and midnight hours are respectively known as thiang (, , or thiang wan, , ) and thiang khuen (, ), both of which literally translate as midday and midnight. Midnight is also called song yam (, ; note that yam is a different word), a reference to the end of the second three-hour period of the night watch (song translates as the number two). 
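The four quarter rules and the special sixth-hour names above can be captured in a short conversion sketch. This is a simplification: it handles whole hours only, emits Arabic numerals in place of the spoken Thai numbers, and ignores the colloquial variants; the romanizations follow the article.

```python
def thai_six_hour(hour: int) -> str:
    """Map an hour on the 24-hour clock (0-23) to its six-hour-clock
    reading, with Arabic numerals standing in for the spoken numbers."""
    if hour == 0:
        return "thiang khuen"           # midnight
    if hour == 6:
        return "yam rung"               # dawn
    if hour == 12:
        return "thiang"                 # midday
    if hour == 18:
        return "yam kham"               # dusk
    if 1 <= hour <= 5:
        return f"ti {hour}"             # second half of nighttime
    if 7 <= hour <= 11:
        return f"{hour - 6} mong chao"  # first half of daytime
    if 13 <= hour <= 17:
        return f"bai {hour - 12} mong"  # second half of daytime
    if 19 <= hour <= 23:
        return f"{hour - 18} thum"      # first half of nighttime
    raise ValueError("hour must be in 0-23")

assert thai_six_hour(21) == "3 thum"      # 21:00, third hour of its quarter
assert thai_six_hour(15) == "bai 3 mong"  # 15:00
```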
In addition, hok (6) thum and ti hok may also be used to refer to the hours of midnight and dawn, following general usage for the other hours, although more rarely; and the fourth to sixth hours of the second daytime half may also be told as ...mong yen (, ), yen meaning evening. The system has been used in some form since the days of the Ayutthaya Kingdom, but was codified similarly to its present form only in 1901 by King Chulalongkorn in Royal Gazette 17:206. Nowadays, it is used only in colloquial speech. However, a corrupted form of the six-hour clock is more frequently encountered, where usually the first half of daytime (including the sixth hour of the preceding quarter) is counted as in the twelve-hour clock, i.e. hok (6) mong chao, chet (7) mong, etc., up to sip et (11) mong. The six-hour clock system was abolished in Laos and Cambodia during the French protectorate, and the French 24-hour clock system (for example, 3h00) has been used since. Clock format A comparison of the systems is as follows: * The word chao (เช้า) is optional here since the numbers 7 to 11 are not used elsewhere ** Conversationally, si mong yen (สี่โมงเย็น) and ha mong yen (ห้าโมงเย็น) are also spoken if considered as evening See also 12-hour clock 24-hour clock Date and time notation in Thailand The Italian six-hour clock, another six-hour system. Thai calendars, including the Thai solar calendar Thai numerals Time in Thailand References Culture of Thailand Time measurement systems Date and time representation
Thai six-hour clock
Physics
833
309,428
https://en.wikipedia.org/wiki/RR%20Lyrae%20variable
RR Lyrae variables are periodic variable stars, commonly found in globular clusters. They are used as standard candles to measure (extra) galactic distances, assisting with the cosmic distance ladder. This class is named after the prototype and brightest example, RR Lyrae. They are pulsating horizontal branch stars of spectral class A or F, with a mass of around half the Sun's. They are thought to have shed mass during the red-giant branch phase, and were once stars at around 0.8 solar masses. In contemporary astronomy, a period-luminosity relation makes them good standard candles for relatively nearby targets, especially within the Milky Way and Local Group. They are also frequent subjects in the studies of globular clusters and the chemistry (and quantum mechanics) of older stars. Discovery and recognition In surveys of globular clusters, these "cluster-type" variables were being rapidly identified in the mid-1890s, especially by E. C. Pickering. Probably the first star definitely of RR Lyrae type found outside a cluster was U Leporis, discovered by J. Kapteyn in 1890. The prototype star RR Lyrae was discovered prior to 1899 by Williamina Fleming, and reported by Pickering in 1900 as "indistinguishable from cluster-type variables". From 1915 to the 1930s, the RR Lyraes became increasingly accepted as a class of star distinct from the classical Cepheids, due to their shorter periods, differing locations within the galaxy, and chemical differences. RR Lyrae variables are metal-poor, Population II stars. RR Lyraes have proven difficult to observe in external galaxies because of their intrinsic faintness. (In fact, Walter Baade's failure to find them in the Andromeda Galaxy led him to suspect that the galaxy was much farther away than predicted, to reconsider the calibration of Cepheid variables, and to propose the concept of stellar populations.) 
Using the Canada-France-Hawaii Telescope in the 1980s, Pritchet & van den Bergh found RR Lyraes in Andromeda's galactic halo and, more recently, in its globular clusters. Classification The RR Lyrae stars are conventionally divided into three main types, following classification by S.I. Bailey based on the shape of the stars' brightness curves: RRab variables are the most common, making up 91% of all observed RR Lyrae, and display the steep rises in brightness typical of RR Lyrae RRc are less common, making up 9% of observed RR Lyrae, and have shorter periods and more sinusoidal variation RRd are rare, making up between <1% and 30% of RR Lyrae in a system, and are double-mode pulsators, unlike RRab and RRc Distribution RR Lyrae stars were formerly called "cluster variables" because of their strong (but not exclusive) association with globular clusters; conversely, over 80% of all variables known in globular clusters are RR Lyraes. RR Lyrae stars are found at all galactic latitudes, as opposed to classical Cepheids, which are strongly associated with the galactic plane. Because of their old age, RR Lyraes are commonly used to trace certain populations in the Milky Way, including the halo and thick disk. Several times as many RR Lyraes are known as all Cepheids combined; in the 1980s, about 1900 were known in globular clusters. Some estimates have about 85,000 in the Milky Way. Though binary star systems are common for typical stars, RR Lyraes are very rarely observed in binaries. Properties RR Lyrae stars pulse in a manner similar to Cepheid variables, but the nature and histories of these stars are thought to be rather different. Like all variables on the Cepheid instability strip, pulsations are caused by the κ-mechanism, when the opacity of ionised helium varies with its temperature. RR Lyraes are old, relatively low mass, Population II stars, in common with W Virginis and BL Herculis variables, the type II Cepheids.
Classical Cepheid variables are higher mass Population I stars. RR Lyrae variables are much more common than Cepheids, but also much less luminous. The average absolute magnitude of an RR Lyrae star is about +0.75, only 40 or 50 times brighter than the Sun. Their period is shorter, typically less than one day, sometimes ranging down to seven hours. Some RRab stars, including RR Lyrae itself, exhibit the Blazhko effect, in which there is a conspicuous phase and amplitude modulation. Period-luminosity relationships Unlike Cepheid variables, RR Lyrae variables do not follow a strict period-luminosity relationship at visual wavelengths, although they do in the infrared K band. They are normally analysed using a period-colour relationship, for example using a Wesenheit function. In this way, they can be used as standard candles for distance measurements, although there are difficulties with the effects of metallicity, faintness, and blending. The effect of blending can impact RR Lyrae variables sampled near the cores of globular clusters, which are so dense that in low-resolution observations multiple (unresolved) stars may appear as a single target. Thus the brightness measured for that seemingly single star (e.g., an RR Lyrae variable) is erroneously high, since the unresolved stars all contribute to the measured brightness. Consequently, the computed distance is wrong, and certain researchers have argued that the blending effect can introduce a systematic uncertainty into the cosmic distance ladder, and may bias the estimated age of the Universe and the Hubble constant. Recent developments The Hubble Space Telescope has identified several RR Lyrae candidates in globular clusters of the Andromeda Galaxy and has measured the distance to the prototype star RR Lyrae. The Kepler space telescope provided accurate photometric coverage of a single field at regular intervals over an extended period.
37 known RR Lyrae variables lie within the Kepler field, including RR Lyrae itself, and new phenomena such as period-doubling have been detected. The Gaia mission mapped 140,784 RR Lyrae stars, of which 50,220 were not previously known to be variable, and for which 54,272 interstellar absorption estimates are available. The Next Generation Virgo Cluster Survey (NGVS) was used by Feng et al. (2024) to identify faint (~21 mag) candidate stars at galactocentric distances of ~20–300 kpc. The study employed empirical pulsation fitting techniques, initially developed in the Sloan Digital Sky Survey (SDSS), to analyze these candidates. Follow-up photometric data from the Dark Energy Survey (DES), Pan-STARRS 1 (PS1), and Subaru HSC strategic survey were used to validate and refine the derived pulsation parameters. In addition, mock RR Lyrae simulations addressed biases caused by measurement uncertainties and fitting complexities. Keck II's ESI spectrograph was also used to analyze spectra of distant Milky Way halo RR Lyrae candidates to identify background quasar contaminants in previously mentioned surveys. References External links APOD M3: Inconstant Star Cluster four-frame animation of RR Lyrae variables in globular cluster M3 Animation of RR Lyrae-Variables in globular cluster M15 Animation with the variable stars RR Lyrae in the center area of the globular cluster M15 RR Lyrae stars AAVSO Variable Star of the Season - RR Lyrae OGLE Atlas of Variable Star Light Curves - RR Lyrae stars Standard candles Variable stars
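Because RR Lyrae stars cluster around a mean absolute magnitude of about +0.75, a measured apparent magnitude yields a distance directly through the distance modulus. The following is a minimal sketch of that standard-candle calculation; the apparent magnitude is a made-up example, and extinction, metallicity, and blending corrections are ignored.

```python
M_V = 0.75  # mean absolute magnitude of RR Lyrae stars (quoted above)

def distance_pc(m, M=M_V):
    """Distance in parsecs from the distance modulus: m - M = 5*log10(d) - 5."""
    return 10 ** ((m - M + 5) / 5)

# A hypothetical RR Lyrae observed at apparent magnitude 15.75:
print(f"{distance_pc(15.75):.0f} pc")  # 10000 pc, i.e. 10 kpc
```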
RR Lyrae variable
Physics
1,620
74,209,862
https://en.wikipedia.org/wiki/Medical%20open%20network%20for%20AI
Medical open network for AI (MONAI) is an open-source, community-supported framework for Deep learning (DL) in healthcare imaging. MONAI provides a collection of domain-optimized implementations of various DL algorithms and utilities specifically designed for medical imaging tasks. MONAI is used in research and industry, aiding the development of various medical imaging applications, including image segmentation, image classification, image registration, and image generation. MONAI was first introduced in 2019 by a collaborative effort of engineers from Nvidia, the National Institutes of Health, and the King's College London academic community. The framework was developed to address the specific challenges and requirements of DL applied to medical imaging. Built on top of PyTorch, a popular DL library, MONAI offers a high-level interface for performing everyday medical imaging tasks, including image preprocessing, augmentation, DL model training, evaluation, and inference for diverse medical imaging applications. MONAI simplifies the development of DL models for medical image analysis by providing a range of pre-built components and modules. MONAI is part of a larger suite of Artificial Intelligence (AI)-powered software called NVIDIA Clara. Besides MONAI, Clara also comprises NVIDIA Parabricks for genome analysis. Medical image analysis foundations Medical imaging is a range of imaging techniques and technologies that enables clinicians to visualize the internal structures of the human body. It aids in diagnosing, treating, and monitoring various medical conditions, thus allowing healthcare professionals to obtain detailed and non-invasive images of organs, tissues, and physiological processes. Medical imaging has evolved, driven by technological advancements and scientific understanding. 
Today, it encompasses modalities such as X-ray, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), ultrasound, nuclear medicine, and digital pathology, each offering capabilities and insights into human anatomy and pathology. The images produced by these medical imaging modalities are interpreted by radiologists, specialists trained in analyzing and diagnosing medical conditions based on the visual information captured in the images. In recent years, the field has witnessed advancements in computer-aided diagnosis, integrating Artificial intelligence and Deep learning techniques to automate medical image analysis and assist radiologists in detecting abnormalities and improving diagnostic accuracy. Features MONAI provides a robust suite of libraries, tools, and Software Development Kits (SDKs) that encompass the entire process of building medical imaging applications. It offers a comprehensive range of resources to support every stage of developing Artificial intelligence (AI) solutions in the field of medical imaging, from initial annotation (MONAI Label), through model development and evaluation (MONAI Core), and final application deployment (MONAI deploy application SDK). Medical data labeling MONAI Label is a versatile tool that enhances the image labeling and learning process by incorporating AI assistance. It simplifies the task of annotating new datasets by leveraging AI algorithms and user interactions. Through this collaboration, MONAI Label trains an AI model for a specific task and continually improves its performance as it receives additional annotated images. The tool offers a range of features and integrations that streamline the annotation workflow and ensure seamless integration with existing medical imaging platforms. AI-assisted annotation: MONAI Label assists researchers and practitioners in medical imaging by utilizing AI algorithms to suggest annotations based on user interactions.
This AI assistance significantly reduces the time and effort required for labeling new datasets, allowing users to focus on more complex tasks. The suggestions provided by MONAI Label enhance efficiency and accuracy in the annotation process. Continuous learning: as users provide additional annotated images, MONAI Label utilizes this data to improve its performance over time. The tool updates its AI model with the newly acquired annotations, enhancing its ability to label images and adapt to specific tasks. Integration with medical imaging platforms: MONAI Label integrates with medical imaging platforms such as 3D Slicer, Open Health Imaging Foundation viewer for radiology, QuPath, and digital slide archive for pathology. These integrations enable communication between MONAI Label and existing medical imaging tools, facilitating collaborative workflows and ensuring compatibility with established platforms. Custom viewer integration: developers have the flexibility to integrate MONAI Label into their custom image viewers using the provided server and client APIs. These APIs are abstracted and thoroughly documented, facilitating smooth integration with bespoke applications. Deep learning model development and evaluation Within MONAI Core, researchers can find a collection of tools and functionalities for dataset processing, loading, Deep learning (DL) model implementation, and evaluation. These utilities allow researchers to evaluate the performance of their models. MONAI Core offers customizable training pipelines, enabling users to construct and train models that support various learning approaches such as supervised, semi-supervised, and self-supervised learning. Additionally, users have the flexibility to implement different computing strategies to optimize the training process. Image I/O, processing, and augmentation: domain-specific APIs are available to transform data into arrays and different dictionary formats. 
Additionally, patch sampling strategies enable the generation of class-balanced samples from high-dimensional images. This ensures that the sampling process maintains balance and fairness across different classes present in the data. Furthermore, invertible transforms provided by MONAI Core allow for the reversal of model outputs to a previous preprocessing step. This is achieved by leveraging tracked metadata and applied operations, enabling researchers to interpret and analyze model results in the context of the original data. Datasets and data loading: multi-threaded cache-based datasets support high-frequency data loading, public dataset availability accelerates model deployment and performance reproducibility, and custom APIs support compressed, image- and patch-based, and multimodal data sources. Differentiable components, networks, losses, and optimizers: MONAI Core provides network layers and blocks that can seamlessly handle spatial 1D, 2D, and 3D inputs. Users have the flexibility to effortlessly integrate these layers, blocks, and networks into their personalized pipelines. The library also includes commonly used loss functions, such as Dice loss, Tversky loss, and Dice focal loss, which have been (re-)implemented from the literature. In addition, MONAI Core offers numerical optimization techniques like Novograd and utilities like a learning rate finder to facilitate the optimization process. Evaluation: MONAI Core provides a comprehensive set of evaluation metrics for assessing the performance of medical image models. These metrics include mean Dice, Receiver operating characteristic curves, Confusion matrices, Hausdorff distance, surface distance, and occlusion sensitivity. The metric summary report generates statistical information such as mean, median, maximum, minimum, percentile, and standard deviation for the computed evaluation metrics.
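Among the metrics listed above, mean Dice is the workhorse for segmentation evaluation. The plain-Python sketch below shows only the underlying formula, Dice = 2|A ∩ B| / (|A| + |B|) for a predicted and a reference binary mask; it is not the MONAI `DiceMetric` API itself, which operates on batched PyTorch tensors.

```python
def dice(pred, label):
    """Dice coefficient for two flat binary masks (lists of 0/1)."""
    inter = sum(p * l for p, l in zip(pred, label))  # |A ∩ B|
    denom = sum(pred) + sum(label)                   # |A| + |B|
    return 2.0 * inter / denom if denom else 1.0     # two empty masks agree

pred  = [0, 1, 1, 1, 0, 0]  # hypothetical predicted segmentation
label = [0, 1, 1, 0, 0, 1]  # hypothetical ground-truth segmentation
print(dice(pred, label))    # 2*2 / (3+3) = 0.666...
```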
GPU acceleration, performance profiling, and optimization: MONAI leverages a range of tools including DLProf, Nsight, NVTX, and NVML to detect performance bottlenecks. The distributed data-parallel APIs seamlessly integrate with the native PyTorch distributed module, PyTorch-ignite distributed module, Horovod, XLA, and the SLURM platform. DL model collection: by offering the MONAI Model Zoo, MONAI establishes itself as a platform that enables researchers and data scientists to access and share cutting-edge models developed by the community. Leveraging the MONAI Bundle format, users can seamlessly and efficiently utilize any model within the MONAI frameworks (Core, Label, or Deploy). AI-inference application development kit The MONAI deploy application SDK offers a systematic series of steps empowering users to develop and fine-tune their AI models and workflows for deployment in clinical settings. These steps act as checkpoints, guaranteeing that the AI inference infrastructure adheres to the essential standards and requirements for seamless clinical integration. Key components of the MONAI Deploy Application SDK include: Pythonic framework for app development: the SDK presents a Python-based framework designed specifically for creating healthcare-focused applications. With its adaptable foundation, this framework enables the streamlined development of AI-driven applications tailored to the healthcare domain. MONAI application package packaging mechanism: the SDK incorporates a tool for packaging applications into MONAI Application Packages (MAP). These MAP instances establish a standardized format for bundling and deploying applications, ensuring portability and facilitating seamless distribution. Local MAP execution via app runner: the SDK provides an app runner feature that enables the local execution of MAP instances. This functionality empowers developers to run and test their applications within a controlled environment, allowing prototyping and debugging. 
Sample applications: the SDK includes a selection of sample applications that serve as both practical examples and starting points for developers. These sample applications showcase different use cases and exemplify best practices for effectively utilizing the MONAI Deploy framework. API documentation: the SDK is complemented by comprehensive documentation that outlines the available APIs and provides guidance to developers on effectively leveraging the provided tools and functionalities. Applications MONAI has found applications in various research studies and industry implementations across different anatomical regions. For instance, it has been utilized in academic research involving automatic cranio-facial implant design, brain tumor analysis from Magnetic Resonance images, identification of features in focal liver lesions from MRI scans, radiotherapy planning for prostate cancer, preparation of datasets for fluorescence microscopy imaging, and classification of pulmonary nodules in lung cancer. In healthcare settings, hospitals have leveraged MONAI to enhance mammography reading by employing Deep learning models for breast density analysis. This approach reduces the waiting time for patients, allowing them to receive mammography results within 15 minutes. Consequently, clinicians save time, and patients experience shorter wait times. This advancement enables patients to engage in immediate discussions with their clinicians during the same appointment, facilitating prompt decision-making and discussion of next steps before leaving the facility. Moreover, hospitals can employ MONAI to identify indications of a COVID-19 patient's deteriorating condition or determine if they can be safely discharged, optimizing patient care and post-COVID-19 decision-making. In the corporate realm, companies choose MONAI to develop product applications addressing various clinical challenges.
These include ultrasound-based scoliosis assessment, Artificial intelligence-based pathology image labeling, in-field pneumothorax detection using ultrasound, characterization of brain morphology, detection of micro-fractures in teeth, and non-invasive estimation of intracranial pressure. See also Artificial intelligence in healthcare Medical imaging Deep learning Image segmentation Image registration Image generation References Further reading External links Medical software Free health care software
Medical open network for AI
Biology
2,178
67,079,155
https://en.wikipedia.org/wiki/Prescription%20drug%20addiction
Prescription drug addiction is the chronic, repeated use of a prescription drug in ways other than prescribed, including using someone else's prescription. A prescription drug is a pharmaceutical drug that may not be dispensed without a legal medical prescription. Drugs in this category are supervised due to their potential for misuse and substance use disorder. The classes of medications most commonly abused are opioids, central nervous system (CNS) depressants and CNS stimulants. In particular, prescription opioids are the most commonly abused, typically in the form of prescription analgesics. Prescription drug addiction has been recognized as a significant public health and law enforcement problem worldwide in the past decade due to its medical and social consequences. Notably, the United States declared a public health emergency regarding increased drug overdoses in 2017. Since then, multiple public health organizations have emphasized the importance of prevention, early diagnosis and treatment of prescription drug addiction to address this public health issue. Causes and risk factors There are multiple risk factors that can increase the chance of developing drug addiction, including patient factors, the nature of the drug and over-prescription. Patient factors Studies have indicated that adolescents and young adults are particularly vulnerable to prescription drug abuse. People with acute or chronic pain, anxiety disorders and ADHD are at increased risk for addiction comorbidity. A history of illicit drug use and substance use disorder has consistently been identified as a risk factor for prescription drug abuse. Misuse of opioid analgesics is frequently associated with mental health disorders, including depression, posttraumatic stress disorder, and anxiety disorders.
Some risk factors for addiction to opioids and to benzodiazepine sedatives or tranquilizers are white race, female sex, panic symptoms, other psychiatric symptoms, alcohol and cigarette dependence and a history of illicit drug use. Addiction to pharmaceutical stimulants has occurred predominantly among adolescents and young adults. Drug characteristics Patients who have been prescribed medications to treat a health condition or disorder are shown to be more vulnerable to prescription drug abuse and addiction, especially when the prescribed medicine falls into the same drug classes as common illicit drugs. For example, methylphenidate and amphetamines are in the same stimulant category as cocaine and methamphetamine, while hydrocodone and oxycodone are in the same opioid category as heroin. Key pharmacological factors associated with drug addiction include: high frequency of drug use high doses administered rapid rate of onset of action high drug potency co-ingestion of psychoactive substances with similar (e.g. sedatives and alcohol) or different pharmacological profiles (e.g. stimulants and nicotine), which can result in additional reinforcement of addiction. Over-prescription and doctor shopping Health practitioners can prescribe drugs in a number of ways that inadvertently contribute to prescription drug abuse. They may inappropriately prescribe drugs due to influence by ill-informed, careless or deceptive patients, or by succumbing to patient pressure. The American Medical Association describes four mechanisms by which a physician becomes involved in overprescribing in its four-"Ds" model: Dated: the physician is outdated regarding knowledge of pharmacology and the differential diagnosis and management of diseases. Duped: the physician may be vulnerable to a manipulative patient. Dishonest: a dishonest physician may be motivated to write prescriptions for controlled substances under financial incentives.
Disabled: a physician with a medical or psychiatric disability such that they have "loose" standards in prescribing controlled substances. The above over-prescription practices can lead to the aggravation of prescription drug addiction. A person may also gain access to prescription drugs via doctor shopping. "Doctor shopping" describes a practice in which a person searches for multiple sources of drugs by visiting different health practitioners and presenting a different list of complaints to each practitioner; the patient will then obtain multiple prescriptions and fill them at different pharmacies. Commonly abused drug categories Opioid analgesics Opioid painkillers exert CNS depressant effects by binding to opioid receptors. Their psychoactive properties can cause euphoria. Changes in pain management, including more liberal opioid prescription for chronic pain conditions, prescription of higher doses and the development of more potent opioid drugs, have played an important role in the current epidemic of prescription opioid addiction. Examples of opioid drugs include morphine, codeine, oxycodone, hydrocodone, fentanyl, tramadol and methadone. Stimulants Stimulants are drugs that increase alertness and attention. This class of drugs has been frequently prescribed for patients with attention deficit hyperactivity disorder (ADHD) in many countries. In addition to taking higher doses of medication than prescribed, stimulant users may also combine prescribed stimulants with illicit drugs or alcohol in order to induce euphoria. Examples of prescribed stimulants include amphetamine, dextroamphetamine, methamphetamine and methylphenidate. Anxiolytic sedative-hypnotics Sedatives have potent, dose-dependent CNS depressant effects. These drugs exert a calming effect and may also induce sleepiness. Sedative-hypnotic medications are commonly prescribed for anti-anxiety or sleeping aid purposes.
A major class of sedative-hypnotics causing addiction is the benzodiazepines, which include alprazolam, diazepam, clonazepam and lorazepam. Consequences Prescription drug addiction is usually associated with both medical and social consequences. Medical consequences Different drug classes have different side effects. Long-term medical conditions induced by opioids include infection, hyperalgesia, opioid-induced bowel syndrome, opioid-related leukoencephalopathy and opioid amnestic syndrome. Misuse of prescribed opioid medications is associated with increased morbidity and mortality. Stimulant overdose syndromes may include tremor, confusion, hallucinations, anxiety and seizures. Inappropriate use of prescribed benzodiazepines may induce nystagmus, stupor or coma, altered mental status (most commonly depression) and respiratory depression. Social consequences Addiction to prescription drugs also has social impacts. Because of the CNS effects caused by misuse of medications, people are more likely to have poor judgement and thus engage in risky behaviors. Polydrug addiction with illegal or recreational drugs is also common. Adolescents with opioid addiction have been found to show higher rates of past-year criminal behaviors. The risk of motor vehicle accidents may increase if consciousness is greatly reduced. Addiction may also deteriorate academic or work performance and worsen relationships. Diagnosis Signs and symptoms The signs and symptoms of opioid addiction include decreased body temperature and blood pressure, constipation, decreased sex drive, euphoria and others. Conversely, people with addiction to stimulants often have increased blood pressure, heart rate and body temperature, and decreased sleep and appetite. Stimulants may cause anxiety and paranoia as well. Addiction to benzodiazepines is diagnosed based on the withdrawal syndrome that occurs after termination of regular use.
Benzodiazepine withdrawal symptoms are similar to those of anxiety, including insomnia, excitability, restlessness, panic attacks and so on. Screening and testing Screening tools with high validity are available to assess patients' risk for opioid misuse, including the rapid opioid dependence screen (RODS), the Severity of Dependence Scale (SDS) and OWLS. There is a standardized list of diagnostic criteria provided by the Diagnostic and Statistical Manual of Mental Disorders for patients with positive screening results. Additionally, urine drug testing can be an accurate method to measure specific biomarkers after metabolism. Treatment Pharmacotherapies When a chronic prescription drug user suddenly ceases the use of an addictive drug, the person may experience unpleasant withdrawal symptoms depending on the drug type. A chronic opioid user may experience withdrawal symptoms such as nausea and diarrhea. Detoxification is a procedure which treats people in withdrawal with low doses of a synthetic opiate drug, which helps reduce the severity of their withdrawal symptoms. This type of pharmacotherapy with an opioid agonist or antagonist is adopted widely, together with adjunct psychotherapy to prevent relapse. Examples of medications include methadone, naltrexone and clonidine. Currently, no FDA-approved medications are available for stimulant addiction. However, some agents including bupropion, naltrexone and mirtazapine have demonstrated positive effects in treating addiction to amphetamine-type stimulants. Acetylcholinesterase inhibitors have also been suggested as a potential treatment. Notably, benzodiazepine addiction often occurs as a result of polydrug abuse, most commonly with opioids. Medically supervised detoxification remains the first-line treatment for benzodiazepine addiction. The use of other medications to aid withdrawal has not been well developed.
Behavioral therapies Cognitive behavioral therapy and the Matrix model are treatment options for stimulant addiction that have been shown to be effective in preventing relapse, although patients addicted to opioids may not respond as well to behavioral therapy. Prevention Patients, healthcare providers, the government, pharmaceutical companies and a variety of stakeholders can contribute to the prevention of prescription drug misuse and its subsequent addiction. Regulations regarding drug prescription In addition to existing controlled substance scheduling systems, mandatory prescriber registration, education and training, many governments have launched various initiatives and regulations to minimize misuse of prescription drugs. For example, many healthcare providers are legally required to participate in local prescription-drug monitoring programs (PDMPs) to record patient drug use. Nationwide PDMPs are effective in reducing abuse and diversion of prescription medications, and promote safer prescribing practices for patients. PDMPs are also effective against doctor shopping and incidents of over-prescription. Furthermore, different regions have established specialized agencies to oversee drug addiction and its related regulations. The European Monitoring Centre for Drugs and Drug Addiction (EMCDDA) and the French public interest group OFDT were established in 1993 to provide information concerning drug addiction and its consequences. Similarly, the US government founded the National Institute on Drug Abuse (NIDA), directed toward reducing drug misuse and overdose, in 1974. In 2016, the Centers for Disease Control and Prevention (CDC) published its CDC Guideline for Prescribing Opioids for Chronic Pain. Screening for addiction Addiction disorders affect 20 to 50 percent of hospitalized patients; therefore, physicians must integrate basic screening questions into all histories and physical examinations.
Some major evidence-based assessment tools include the Addictions Neuroclinical Assessment, the National Institute on Drug Use Screening Tool, the CRAFFT 2.0 questionnaire, and the Drug Abuse Screening Test (DAST-10). There are many programs to assist individuals with addiction in achieving abstinence. In countries like Brazil, the US and India, patients with addiction may be referred to 12-step programs such as Alcoholics Anonymous, Narcotics Anonymous, and Pills Anonymous. Optimize alternative treatments Safer, non-controlled and non-addictive medications serve as an alternative to controlled substances. For example, abuse-deterrent formulations (ADF) are drug formulations that lower a drug's addictiveness and/or prevent misuse by snorting or injection. ADFs have been shown to decrease the illicit value of drugs and effectively reduce substance addiction. Non-pharmacologic treatments with self-management strategies are highly recommended, such as behavioral treatments, relaxation techniques, physical therapy and psychotherapy. Ensuring drug compliance Pharmacists improve drug compliance by counselling patients on medication instructions, along with educating patients about potential side effects related to medications. Nevertheless, healthcare practitioners are responsible for recognizing problematic patterns in prescription drug use. They may also use prescription-drug monitoring programs (PDMPs) to track drug prescription and dispensing patterns in patients. On the patient side, some organizations have suggested ways to use prescription drugs properly.
For example, the NIDA guideline recommends that patients: follow the directions as explained on the label or by the pharmacist; be aware of potential interactions with other drugs as well as alcohol; never stop or change a dosing regimen without first discussing it with the doctor; never use another person's prescription and never give their prescription medications to others; and store prescription stimulants, sedatives, and opioids safely. Additionally, the U.S. Food and Drug Administration (FDA) provides a guideline for proper disposal of unused or expired medications. Epidemiology Non-medical use of prescription opioids has been documented in many countries, most notably in West and North Africa, the Near and Middle East, and North America. United States In 2005, the US National Survey on Drug Use and Health (NSDUH) demonstrated that 6.4 million people aged 12 or older had used prescription drugs for non-medical reasons during the past month, including pain relievers, tranquillizers and stimulants. From 2006 to 2016, the total weight of stimulants prescribed in the US nearly doubled; however, the trend of prescription stimulant misuse has been gradually declining since 2017. In 2017, it was estimated that approximately 76 million adults in the US had been prescribed opioid drugs in the previous year, with 12 percent of them reporting prescription opioid misuse between 2016 and 2017. An estimated more than 1 million Americans misused prescription stimulants, 2 million misused prescription analgesics, 1.5 million misused tranquillizers, and 271,000 misused sedatives for the first time within the past year. United Kingdom In the United Kingdom, deaths from Tramadol (a synthetic opioid painkiller) overdose had risen to 240 per annum as of 2014.
Europe In Europe, methadone is the most widely prescribed opioid substitution medication, accounting for about 63 percent of substitution clients, followed by 35 percent of clients treated with buprenorphine-based medications. An average of 6 percent of students from the EU and Norway reported lifetime use of sedatives or tranquillizers without a doctor's prescription. In 2019, there was an increasing trend of prescription opioid addiction across Europe. Both amphetamines and methamphetamines are stimulant drugs commonly used in Europe, though amphetamines were more frequently prescribed. Methamphetamine use has traditionally been limited to the Czech Republic and Slovakia, although there were signs of increase in other European countries. Asia In comparison to the West, the Asia-Pacific region has a scarcity of data on prescription drug abuse. Still, the United Nations Office on Drugs and Crime has stated that prescription drug abuse is a growing epidemic among recreational drug users in South Asia. Although relevant studies in China were limited, they revealed a similar prevalence of prescription drug misuse among adolescents and young adults, at 5.9 percent and 25.9 percent, respectively. Most Asian studies, including those from Japan, Thailand, and Singapore, revealed the existence of prescription drug misuse in Asia, but the prevalence rates were lower than those reported in Western developed countries. In 2019, there was an increasing trend of prescription opioid addiction in India. South Africa In comparison to the US, the prevalence of illicit drug use (including prescription drugs) in South Africa is relatively low. Prescription drug and over-the-counter (OTC) drug abuse together constitute 2.6 percent of all primary illicit substances admitted to South African drug treatment facilities.
However, lifetime illicit drug use for prescription or OTC medicines was highest among adolescents, with a prevalence rate of 16 percent, followed by inhalants, club drugs and others. See also Prescription drug Drug addiction Drug overdose Controlled Substances Act Harm reduction Polysubstance abuse Responsible drug use Drug policy Pharmacy (shop) Regulation of therapeutic goods External links National Institute on Drug Abuse The European Monitoring Centre for Drugs and Drug Addiction References Addiction Addiction medicine Psychoactive drugs
https://en.wikipedia.org/wiki/Cost%20distance%20analysis
In spatial analysis and geographic information systems, cost distance analysis or cost path analysis is a method for determining one or more optimal routes of travel through unconstrained (two-dimensional) space. The optimal solution is that which minimizes the total cost of the route, based on a field of cost density (cost per linear unit) that varies over space due to local factors. It is thus based on the fundamental geographic principle of friction of distance. It is an optimization problem with multiple deterministic algorithm solutions, implemented in most GIS software. The various problems, algorithms, and tools of cost distance analysis operate over an unconstrained two-dimensional space, meaning that a path could be of any shape. Similar cost optimization problems can also arise in a constrained space, especially a one-dimensional linear network such as a road or telecommunications network. Although they are similar in principle, the problems in network space require very different (usually simpler) algorithms to solve, largely adopted from graph theory. The collection of GIS tools for solving these problems is called network analysis. History Humans seem to have an innate desire to travel with minimal effort and time. Historic, even ancient, roads show patterns similar to what modern computational algorithms would generate, traveling straight across flat spaces, but curving around mountains, canyons, and thick vegetation. However, it was not until the 20th century that geographers developed theories to explain this route optimization, and algorithms to reproduce it.
In 1957, during the Quantitative revolution in Geography, with its propensity to adopt principles or mathematical formalisms from the "hard" sciences (known as social physics), William Warntz used refraction as an analogy for how minimizing travel cost will make transportation routes change direction at the boundary between two landscapes with very different friction of distance (e.g., emerging from a forest into a prairie). His principle of "parsimonious movement," changing direction to minimize cost, was widely accepted, but the refraction analogy and mathematics (Snell's law) were not, largely because they do not scale well to normally complex geographic situations. Warntz and others then adopted another analogy that proved much more successful in the common situation where travel cost varies continuously over space, by comparing it to terrain. They compared the cost rate (i.e., cost per unit distance, the inverse of velocity if the cost is time) to the slope of a terrain surface (i.e., elevation change per unit distance), both being mathematical derivatives of an accumulated function or field: total elevation above a vertical datum (sea level) in the case of terrain. Integrating the cost rate field from a given starting point would create an analogous surface of total accumulated cost of travel from that point. In the same way that a stream follows the path of least resistance downhill, the streamline on the cost accumulation surface from any point "down" to the source will be the minimum-cost path. Additional lines of research in the 1960s further developed the nature of the cost rate field as a manifestation of the concept of friction of distance, studying how it was affected by various geographic features. At the time, this solution was only theoretical, lacking the data and computing power for the continuous solution.
Raster GIS provided the first feasible platform for implementing the theoretical solution by converting the continuous integration into a discrete summation procedure. Dana Tomlin implemented cost distance analysis in his Map Analysis Package by 1986, and Ronald Eastman added it to IDRISI by 1989, with a more efficient "pushbroom" cost accumulation algorithm. Douglas (1994) further refined the accumulation algorithm, which is basically what is implemented in most current GIS software. Cost raster The primary data set used in cost distance analysis is the cost raster, sometimes called the cost-of-passage surface, the friction image, the cost-rate field, or cost surface. In most implementations, this is a raster grid, in which the value of each cell represents the cost (i.e., expended resources, such as time, money, or energy) of a route crossing the cell in a horizontal or vertical direction. It is thus a discretization of a field of cost rate (cost per linear unit), a spatially intensive property. This cost is a manifestation of the principle of friction of distance. A number of different types of cost may be relevant in a given routing problem: Travel cost, the resource expenditure required to move across the cell, usually time or energy/fuel. Construction cost, the resources (usually monetary) required to build the infrastructure that makes travel possible, such as roads, pipes, and cables. While some construction costs are constant (e.g., paving material), others are spatially variant, such as property acquisition and excavation. Environmental impacts, the negative effects on the natural or human environment caused by the infrastructure or the travel along it. For example, building an expressway through a residential neighborhood or a wetland would incur a high political cost (in the form of environmental impact assessments, protests, lawsuits, etc.). 
Some of these costs are easily quantifiable and measurable, such as transit time, fuel consumption, and construction costs, thus naturally lending themselves to computational solutions. That said, there may be significant uncertainty in predicting the cost prior to implementing the route. Other costs are much more difficult to measure due to their qualitative or subjective nature, such as political protest or ecological impact; these typically require operationalization through the creation of a scale. In many situations, multiple types of cost may be simultaneously relevant, and the total cost is a combination of them. Because different costs are expressed in different units (or, in the case of scales, no units at all), they usually cannot be directly summed, but must be combined by creating an index. A common type of index is created by scaling each factor to a consistent range (say, [0,1]), then combining them using weighted linear combination. An important part of the creation of an index model like this is calibration: adjusting the parameters of the formula(s) to make the modeled relative cost match real-world costs, using methods such as the analytic hierarchy process. The index model formula is typically implemented in a raster GIS using map algebra tools from raster grids representing each cost factor, resulting in a single cost raster grid. Directional cost One limitation of the traditional method is that the cost field is isotropic or omni-directional: the cost at a given location does not depend on the direction of traversal. This is appropriate in many situations, but not others. For example, if one is flying in a windy location, an airplane flying in the direction of the wind incurs a much lower cost than an airplane flying against it. Some research has been done on extending cost distance analysis algorithms to incorporate directional cost, but it is not yet widely implemented in GIS software. IDRISI has some support for anisotropy.
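As a concrete sketch of the weighted-linear-combination index described above (the factor names, cell values, and weights below are hypothetical, chosen only for illustration):

```python
import numpy as np

# Three hypothetical cost factors, each already rescaled to [0, 1].
slope   = np.array([[0.2, 0.8], [0.1, 0.5]])  # terrain steepness
land    = np.array([[0.9, 0.3], [0.4, 0.4]])  # property acquisition cost
wetland = np.array([[0.0, 1.0], [0.0, 0.2]])  # environmental impact scale

# Calibrated weights (illustrative); they sum to 1 so the index stays in [0, 1].
cost_raster = 0.5 * slope + 0.3 * land + 0.2 * wetland
```

In a GIS this is the map-algebra step; here plain array arithmetic plays the same role.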
Least-cost-path algorithm The most common cost distance task is to determine the single path through the space between a given source location and a destination location that has the least total accumulated cost. The typical solution algorithm is a discrete raster implementation of the cost integration strategy of Warntz and Lindgren, a deterministic optimization solved with a Dijkstra-style shortest-path procedure. Inputs: the cost field raster, a source location, and a destination location (most implementations can solve for multiple sources and destinations simultaneously). Accumulation: starting at the source location, compute the lowest total cost needed to reach every other cell in the grid. Although there are several algorithms, such as those published by Eastman and Douglas, they generally follow a similar strategy. This process also creates, as an important byproduct, a second raster grid usually called the backlink grid (Esri) or movement direction grid (GRASS), in which each cell has a direction code (0-7) representing which of its eight neighbors had the lowest cost: Find a cell that is adjacent to at least one cell that already has an accumulated cost assigned (initially, this is only the source cell). Determine which neighbor has the lowest accumulated cost. Encode the direction from the target cell to the lowest-cost neighbor in the backlink grid. Add the cost of the target cell (or an average of the costs of the target and neighbor cells) to the neighbor's accumulated cost, to obtain the accumulated cost of the target cell. If the neighbor is diagonal, the local cost is multiplied by √2. The algorithm must also take into account that indirect routes may have lower cost, often using a hash table to keep track of temporary cost values along the expanding fringe of computation that can be reconsidered. Repeat the procedure until all cells are assigned.
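The accumulation step can be sketched with a Dijkstra-style priority queue standing in for the hash table of provisional values on the expanding fringe (a minimal illustration, not the exact Eastman or Douglas procedure):

```python
import heapq
import math

def cost_accumulation(cost, source):
    """Compute the least accumulated cost from `source` to every cell.

    cost   : 2-D list of per-cell cost rates
    source : (row, col) of the source cell
    Returns (accum, backlink): accum[r][c] is the least total cost to
    reach (r, c); backlink[r][c] is the (dr, dc) step pointing back
    toward the source (None at the source itself).
    """
    rows, cols = len(cost), len(cost[0])
    accum = [[math.inf] * cols for _ in range(rows)]
    backlink = [[None] * cols for _ in range(rows)]
    sr, sc = source
    accum[sr][sc] = 0.0
    fringe = [(0.0, sr, sc)]  # provisional values on the expanding fringe

    # Eight neighbors; diagonal steps cover sqrt(2) times the distance.
    steps = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

    while fringe:
        d, r, c = heapq.heappop(fringe)
        if d > accum[r][c]:
            continue  # stale entry; this cell was since improved
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            dist = math.sqrt(2) if dr and dc else 1.0
            # Average the two cells' cost rates over the step distance.
            nd = d + dist * (cost[r][c] + cost[nr][nc]) / 2.0
            if nd < accum[nr][nc]:
                accum[nr][nc] = nd
                backlink[nr][nc] = (-dr, -dc)  # points back to (r, c)
                heapq.heappush(fringe, (nd, nr, nc))
    return accum, backlink
```

On a small grid with a high-cost center cell, the accumulated cost to the opposite corner routes around the center rather than through it, illustrating how indirect routes can win.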
Drain: In keeping with the terrain analogy, trace the optimal route from the given destination back to the source, like a stream draining away from a location. At its most basic, this is accomplished by starting at the destination cell, moving in the direction indicated in the backlink grid, then repeating for the next cell, and so on until the source is reached. Recent software adds some improvements, such as looking across three or more cells to recognize straight lines at angles other than the eight neighbor directions. For example, the r.walk function in GRASS can recognize the "knight's move" (one cell straight, then one cell diagonal) and draw a straight line bypassing the middle cell. Corridor analysis A slightly different version of the least-cost path problem, which could be considered a fuzzy version of it, is to look for corridors more than one cell in width, thus providing some flexibility in applying the results. Corridors are commonly used in transportation planning and in wildlife management. The solution to this problem is to compute, for every cell in the study space, the total accumulated cost of the optimal path between a given source and destination that passes through that cell. Thus, every cell in the optimal path derived above would have the same minimum value. Cells near this path would be reached by paths deviating only slightly from the optimal path, so they would have relatively low cost values, collectively forming a corridor with fuzzy edges as more distant cells have increasing cost values. The corridor field is derived by generating two cost accumulation grids: one using the source as described above, and a second repeating the algorithm with the destination as the source. These two grids are then added using map algebra.
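Given the backlink grid from the accumulation step, the basic drain step is a simple backtrace (a minimal sketch of the cell-by-cell version, without the straight-line refinements mentioned above):

```python
def drain(backlink, destination, source):
    """Trace the least-cost path from `destination` back to `source`
    by following the backlink (movement-direction) grid.  Both are
    (row, col) tuples; backlink[r][c] holds the (dr, dc) step toward
    the source, as produced by a cost-accumulation routine."""
    path = [destination]
    r, c = destination
    while (r, c) != source:
        dr, dc = backlink[r][c]
        r, c = r + dr, c + dc
        path.append((r, c))
    return path[::-1]  # reverse so the path runs source -> destination
```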
This works because, for each cell, the optimal source-destination path passing through that cell is the optimal path from that cell to the source, added to the optimal path from that cell to the destination. This can be accomplished using the cost accumulation tool above, along with a map algebra tool, although ArcGIS provides a Corridor tool that automates the process. Cost-based allocation Another use of the cost accumulation algorithm is to partition space among multiple sources, with each cell assigned to the source it can reach with the lowest cost, creating a series of regions in which each source is the "nearest". In the terrain analogy, these would correspond to watersheds (one could thus call these "cost-sheds," but this term is not in common usage). They are directly related to a Voronoi diagram, which is essentially an allocation over a space with constant cost. They are also conceptually (if not computationally) similar to location-allocation tools for network analysis. A cost-based allocation can be created using two methods. The first is to use a modified version of the cost accumulation algorithm, which substitutes an allocation grid for the backlink grid, in which each cell is assigned the same source identifier as its lowest-cost neighbor, causing the domain of each source to gradually grow until they meet each other. This is the approach taken in ArcGIS Pro. The second solution is to first run the basic accumulation algorithm, then use the backlink grid to determine the source into which each cell "flows." GRASS GIS uses this approach; in fact, the same tool is used as for computing watersheds from terrain. Implementations Cost distance tools are available in most raster GIS software: GRASS GIS (often bundled into QGIS), with separate accumulation (r.cost) and drain (r.walk) functions ArcGIS Desktop and ArcGIS Pro, with separate accumulation (Cost Distance) and drain (Cost Path) geoprocessing tools, as well as Corridor generation.
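The corridor construction described above reduces to adding the two accumulation grids and thresholding; a sketch with hand-made accumulation values (any accumulation routine could supply the two grids, and the 10% tolerance is an arbitrary illustrative choice):

```python
import numpy as np

# Accumulated-cost grids from the source and from the destination
# (illustrative values on a uniform-cost 3x3 grid).
accum_from_source = np.array([[0.0, 1.0, 2.0],
                              [1.0, 1.7, 2.7],
                              [2.0, 2.7, 3.4]])
accum_from_dest = accum_from_source[::-1, ::-1]  # symmetric example

# Every cell gets the cost of the best source-destination path through it.
corridor = accum_from_source + accum_from_dest

least = float(corridor.min())           # cost of the optimal path itself
in_corridor = corridor <= 1.10 * least  # cells within 10% of optimal
```

Cells on the optimal path all carry the minimum value; the boolean grid widens that path into a fuzzy-edged corridor.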
Recently, starting with ArcGIS Pro version 2.5, a new set of cost distance tools was introduced, using more advanced algorithms with more flexible options. TerrSet (formerly Idrisi) has several tools, implementing a variety of algorithms to solve different kinds of cost distance problems, including anisotropic (directional) cost. Applications Cost distance analysis has found applications in a wide range of geography-related disciplines, including archaeology and landscape ecology. See also Distance decay Tobler's first law of geography Tobler's second law of geography Tobler's hiking function Travelling salesman problem Canadian traveller problem Traveling purchaser problem Vehicle routing problem References External links Distance toolset documentation for Esri ArcGIS Pro Cost Surface tools in GRASS GIS Geographic information systems
https://en.wikipedia.org/wiki/MXD4
Max-interacting transcriptional repressor MAD4 is a protein that in humans is encoded by the MXD4 gene. Function This gene is a member of the MAD gene family. The MAD genes encode basic helix-loop-helix-leucine zipper proteins that heterodimerize with MAX protein, forming a transcriptional repression complex. The MAD proteins compete for MAX binding with MYC, which heterodimerizes with MAX forming a transcriptional activation complex. Studies in rodents suggest that the MAD genes are tumor suppressors and contribute to the regulation of cell growth in differentiating tissues. References Further reading External links Transcription factors
https://en.wikipedia.org/wiki/NGC%20438
NGC 438 is an intermediate spiral galaxy of type (R')SAB(s)b: located in the constellation Sculptor. It was discovered on September 1, 1834, by John Herschel. It was described by Dreyer as "pretty faint, small, round, gradually a little brighter middle." One supernova has been observed in NGC 438: SN 2024vjc (type Ib, mag. 18.97). See also List of NGC objects (1–1000) References External links 0438 18340901 Sculptor (constellation) Intermediate spiral galaxies 004406 Discoveries by John Herschel -06-03-029 01112-3810 296-007
https://en.wikipedia.org/wiki/Biological%20recording
Biological recording is the scientific study of the distribution of living organisms; biological records describe the presence, abundance, associations and changes, both in time and space, of wildlife. There has been a long tradition of biological recording in the United Kingdom dating back to John Ray (1627–1705), Robert Plot (1640–1696) and their contemporaries. Methods The basis of a biological record is the 'four Ws': What: the identification of the organism recorded Where: The locality where the organism was seen When: the date (and time) when the organism was recorded Who: the person or persons making the observation In addition, a variety of further information is often necessary to increase the value of any biological record, including: How: the method of recording the observation, e.g. pitfall trap or moth trap Biological recording in the UK In the UK biological recording is a popular hobby and much is organised by national recording schemes for many taxonomic groups, of which almost 90 are registered with the national Biological Records Centre. At a national level biological records are managed by the Biological Records Centre, originally set up at Monks Wood Experimental Station but now based at Wallingford in Oxfordshire, which has operated since 1964 to manage records of the country's biodiversity. Following the CCBR report in 1995, the National Biodiversity Network was established as an ideal. This is overseen by the NBN Trust, which is responsible for the NBN Gateway, which in May 2016 passed 127 million records. At a local level there are a number of field natural history clubs promoting biological recording, including Essex Field Club and Sandwell Valley Naturalists' Club.
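The 'four Ws' (plus the optional 'how') can be sketched as a minimal record structure; the field names and example values below are purely illustrative, not a published data standard:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BiologicalRecord:
    """A minimal biological record built on the 'four Ws'."""
    what: str                  # identification of the organism
    where: str                 # locality, e.g. a grid reference
    when: date                 # date (and optionally time) of observation
    who: str                   # person or persons making the observation
    how: Optional[str] = None  # optional method, e.g. "moth trap"

rec = BiologicalRecord("Pieris brassicae", "TQ 2989", date(2023, 7, 14), "J. Smith")
```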
On a professional level, most of the UK is covered by a network of Local Environmental Records Centres. It was estimated in 1995 that over 60,000 individuals were actively and directly involved in biological recording of which the vast majority were voluntarily engaged out of personal interest. References Additional References Biological Records Centre, Wallingford The Association of Local Environmental Records Centres National Forum for Biological Recording Biodiversity
https://en.wikipedia.org/wiki/Chi%20Octantis
Chi Octantis, Latinized from χ Octantis, is a solitary star located in the southern circumpolar constellation Octans. It is faintly visible to the naked eye as an orange-hued star with an apparent magnitude of 5.28. The object is located relatively close at a distance of 261 light years based on Gaia EDR3 parallax measurements, but it is receding with a heliocentric radial velocity . At its current distance, Chi Octantis' brightness is diminished by 0.24 magnitudes due to interstellar dust. It has an absolute magnitude of +0.81. Chi Octantis is an evolved red giant with a stellar classification of K3 III. It has 125% the mass of the Sun and an enlarged radius of due to its evolved state. It radiates 73.6 times the luminosity of the Sun from its photosphere at an effective temperature of 4,266 K. Chi Octantis is metal-enriched, with an iron abundance 126% that of the Sun ([Fe/H] = +0.10), and spins slowly with a projected rotational velocity of less than 1 km/s. References Octans K-type giants Octantis, Chi Octantis, 30 CD-87 00092 164461 092824 6721
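The quoted figures are linked by the distance modulus M = m − 5 log10(d / 10 pc); a minimal sketch (using the rounded 261 ly distance, which lands slightly below the quoted +0.81 because the published value rests on the unrounded parallax):

```python
import math

LY_PER_PARSEC = 3.26156  # light years per parsec

def absolute_magnitude(apparent_mag, distance_ly):
    """Distance modulus: M = m - 5 * log10(d_pc / 10)."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5 * math.log10(d_pc / 10)

M = absolute_magnitude(5.28, 261)  # roughly 0.76 for Chi Octantis
```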
https://en.wikipedia.org/wiki/NGC%201712
NGC 1712, also known as GC 942, JH 2685, and Dunlop 112 is an open cluster in the constellation of Dorado. It is relatively small, and is located inside the Large Magellanic Cloud. NGC 1712 was originally discovered in 1826 by James Dunlop, although Herschel rediscovered it in 1834. Nine variable stars have been discovered in it so far, with three suspected to be binary systems. References 1712 Open clusters Dorado Large Magellanic Cloud Astronomical objects discovered in 1826 Discoveries by James Dunlop
https://en.wikipedia.org/wiki/Dannie%20Heineman%20Prize%20%28G%C3%B6ttingen%29
The Dannie Heineman Prize of the Göttingen Academy of Sciences and Humanities has been awarded biennially since 1961 for excellent recently published work in a new research field of current interest. It is awarded to younger researchers in the natural sciences or mathematics. The prize is named after Dannie Heineman, a Belgian-US philanthropist, engineer and businessman with German roots. Prizewinners
1961 James Franck, biochemistry
1963 Edmund Hlawka, mathematics
1965 Georg Wittig, chemistry
1967 Martin Schwarzschild, astrophysics
1967 Gobind Khorana, biochemistry
1969 Brian Pippard, physics
1971 Neil Bartlett, chemistry
1973 Igor Shafarevich, mathematics
1975 Philip Warren Anderson, physics
1977 Albert Eschenmoser, chemistry
1979 Phillip Griffiths, mathematics
1981 Jacques Friedel, physics
1983 Gerd Faltings, mathematics
1986 Rudolf Thauer jr, biology
1987 Alex Müller and Georg Bednorz, physics
1989 Dieter Oesterhelt, biochemistry
1991 Jean-Pierre Demailly, mathematics
1993 Richard N. Zare, chemistry
1995 Donald M. Eigler, physics
1997 Regine Kahmann, biology
1999 Wolfgang Ketterle, physics
2001 Christopher C. Cummins, chemistry
2003 Michael Neuberger, biology
2005 Richard Taylor, mathematics
2007 Bertrand I. Halperin, physics
2009 Gerald F. Joyce, biology
2012 Krzysztof Matyjaszewski, chemistry
2013 Emmanuel Jean Candès, mathematics
2015 Andrea Cavalleri, physics
2018 , chemistry
2019 Oscar Randal-Williams, mathematics
2021 Viola Priesemann, physics
2024 Mayuko Yamashita, mathematical physics
See also List of general science and technology awards References External links Dannie Heineman Preis Awards established in 1961 Science and technology awards
https://en.wikipedia.org/wiki/Rauwolscine
Rauwolscine, also known as isoyohimbine, α-yohimbine, and corynanthidine, is an alkaloid found in various species within the genera Rauvolfia and Corynanthe (including Pausinystalia). It is a stereoisomer of yohimbine. Rauwolscine is a central nervous system stimulant, a local anesthetic and a mild aphrodisiac. Rauwolscine acts predominantly as an α2-adrenergic receptor antagonist. It has also been shown to function as a 5-HT1A receptor partial agonist and as a 5-HT2A and 5-HT2B receptor antagonist. See also Ajmalicine Corynanthine Spegatrine Yohimbine References Indoloquinolizines Tryptamine alkaloids Quinolizidine alkaloids Alkaloids found in Rauvolfia Alpha-2 blockers 5-HT1A agonists 5-HT2A antagonists 5-HT2B antagonists Methyl esters Heterocyclic compounds with 5 rings
https://en.wikipedia.org/wiki/Nifuroxazide
Nifuroxazide (INN) is an oral nitrofuran antibiotic, first patented in 1966 and used to treat colitis and diarrhea in humans and non-humans. It is sold under the brand names Ambatrol, Antinal, Bacifurane, Diafuryl (Turkey), Benol (Pakistan), Pérabacticel (France), Antinal, Diax (Egypt), Nifrozid, Ercefuryl (Romania, Czech Republic, Russia), Erfuzide (Thailand), Endiex (Slovakia), Enterofuryl (Bosnia and Herzegovina, Montenegro, Russia), Pentofuryl (Germany), Nifuroksazyd Hasco, Nifuroksazyd Polpharma (Poland), Topron, Enterovid (Latin America), Eskapar (Mexico), Enterocolin, Terracolin (Bolivia), Apazid (Morocco), Nifroxid (Tunisia), Nifural (Indonesia) and Septidiaryl. It is sold in capsule form and also as a suspension. History Maurice Claude Ernest Carron patented the drug in the United States in 1966. Subsequent patents issued to Germano Cagliero of Marxer S.p.A. describe the use of nifuroxazide as an antibiotic used to treat livestock. Effectiveness in humans In 1997, in an Ivory Coast promotional leaflet, GlaxoSmithKline claimed that nifuroxazide (under the brand name "Ambatrol") is an anti-dehydration treatment, "neutralise[s] microbacterials" in diarrhoea, and has "a spectrum which covers most enteropathogenic microbacterials, Shigella, Escherichia coli, Salmonella, Staphylococci, Klebsiella, Yersinia". The international non-profit organization Healthy Skepticism, at the time using their former name, Medical Lobby for Appropriate Marketing (MaLAM), disagreed, stating "We have not found any scientific evidence to support these claims." STAT3 inhibition In addition to its antibiotic activity, nifuroxazide has been found to be a potent inhibitor of STAT3, and consequently has been proposed as a cancer treatment. ALDH1 cancer stem cells High aldehyde dehydrogenase (ALDH) 1 enzymatic activity is a marker for cancer stem cell/tumour initiating cell populations in many cancers.
Nifuroxazide was found to be bio-activated by ALDH1 enzymes, and shown to selectively kill ALDH1-High melanoma cells in experimental human cell systems and mouse models. ALDH1 is enriched in melanoma patient samples following BRAF and MEK inhibitor treatments, and it has been proposed that nifuroxazide may be useful as a cancer treatment in this context. References Antibiotics Hydrazides 4-Hydroxyphenyl compounds Nitrofurans
https://en.wikipedia.org/wiki/Calcium%20disilicide
Calcium disilicide (CaSi2) is an inorganic compound, a silicide of calcium. It is a whitish or dark grey to black solid with a melting point of 1033 °C. It is insoluble in water, but may decompose when subjected to moisture, evolving hydrogen and producing calcium hydroxide. It decomposes in hot water, and is flammable and may ignite spontaneously in air. Industrial calcium silicide usually contains iron and aluminium as the primary contaminants, and low amounts of carbon and sulfur. Properties At ambient conditions calcium disilicide exists in two polymorphs, hR9 and hR18; in the hR18 structure the hR9 unit cell is stacked twice along the c axis. Upon heating to 1000 °C at a pressure of ca. 40 kbar, calcium disilicide converts to a (semi-stable) tetragonal phase. The tetragonal phase is a superconductor with a transition temperature of 1.37 K to 1.58 K. Although there is no observable superconducting transition for the trigonal/rhombohedral phase (i.e. the hR9 and hR18 unit cells) at ambient pressure, under high pressure (>12 GPa/120 kbar) this phase has been observed to exhibit a superconducting transition. When the trigonal phase is placed under pressures exceeding 16 GPa, there is a phase transition to an AlB2-like phase. Uses Alloys Calcium silicide is used for the manufacture of special metal alloys, e.g. for removing phosphorus and as a deoxidizer. Pyrotechnics In pyrotechnics, it is used as fuel to make special mixtures, e.g. for production of smokes, in flash compositions, and in percussion caps. The specification for pyrotechnic calcium silicide is MIL-C-324C. In some mixtures it may be substituted with ferrosilicon. Silicon-based fuels are used in some time delay mixtures, e.g. for controlling explosive bolts, hand grenades, and infrared decoys. Smoke compositions often contain hexachloroethane; during burning they produce silicon tetrachloride, which, like titanium tetrachloride used in smoke-screens, reacts with air moisture and produces dense white fog.
Gum arabic is used in some mixtures to inhibit calcium silicide decomposition. Heating food Self-heating cans of military food rations developed during WWII used a thermite-like mixture of 1:1 iron(II,III) oxide and calcium silicide. Such a mixture, when ignited, generates a moderate amount of heat and no gaseous products. References Alkaline earth silicides Calcium compounds Deoxidizers Pyrotechnic fuels
https://en.wikipedia.org/wiki/Markiezaatskade
The Markiezaatskade is a compartmentalisation dam in the Netherlands, situated between South Beveland and , near Bergen op Zoom. The dam was constructed as part of the Delta Works, and has a length of . The dam was conceived as an auxiliary dam to permit the construction of the Oesterdam, and encloses the Markiezaat area of Bergen op Zoom. Without this structure, the location of the Oesterdam would have been situated more to the west, which would have resulted in negative ecological impacts for shellfish and significantly higher costs for the dam. In combination with the Oesterdam, Philipsdam and Volkerakdam, it divides the waters of Zeeland and South Holland. Construction of the Markiezaatskade commenced in 1980. In March 1982, during construction, a part of the dam was destroyed by a storm. Construction of the dam was completed by March 30, 1983. Design and construction The dam is L-shaped, with a northern part between Molenplaat and Noordland, a western part between Molenplaat and the Kreekrak locks, and a discharge structure in its western embankment. Constructed to facilitate the construction of the Oesterdam, it divides (along with the Oesterdam, Bathse Spuis Lock, Bathse Spuis Canal, and Philipsdam) the Zeeland and South Holland waters into compartments for the management of freshwater and navigation. Along with the Oesterdam, it forms the limit of the freshwater . The construction of the dam created the Markiezaatsmeer wetland. The purpose of the Markiezaatskade was to permit easier construction of the Oesterdam and improve the ecological quality of the area. Since the construction of the Oesterdam was completed, it does not play a role in storm surge protection. An area of land behind the current dam location was permanently flooded during the Saint Felix flood of 1530. The dam is located in the estuary of the Eastern Scheldt, where the North Sea meets the Scheldt, with a tidal range of up to five metres.
The Markiezaatskade construction cut off this water from tidal influence. Most of the dam was built with sand, except for the 800-metre-long closing gap in the western leg, which was to be filled with a temporarily porous closing dike. This dike was intended to gradually silt up over the years naturally, with the aim of slowly transitioning the from a saline waterbody to a freshwater lake. The original probabilistic design had utilised relatively low dam sections, mainly driven by financial considerations, with a construction schedule planning for the closure works to be undertaken in summer. However, due to delays, the closure shifted to winter. Construction of the dam faced delays due to the extremely soft subsoil. In the spring of 1982, during the placement of armourstone, a small unfinished section of the dam stood only 2.25 meters above the Amsterdam Ordnance Datum (NAP). On 11 March 1982, a minor but intense storm coinciding with a spring tide caused a sudden rise in the Oosterschelde basin's water level. The swift escalation in water levels was partially attributed to the peak wind speeds coinciding with the period of high water. Consequently, and due to the closure dike already extending over half its length at 4 metres above NAP, the water level in the Markiezaat remained markedly low, around 0.6 metres above NAP at the time of the high tide. This posed a threat to the stability of the dam head. A 40-tonne crawler crane was stationed on the dam, and could not be removed due to the unforeseen and sudden rise in water levels, thereby exacerbating the stability issues on the dam. This unexpected surge, which reached 3.67 meters above NAP, caused significant overflow and ultimately led to a breach in the incomplete section, resulting in substantial damage. In a few days, a gap of approximately 150 metres wide formed in the dike, and scour pits with depths of up to 26 metres below NAP on the east side, and 22 metres below NAP on the west side, were formed. 
The eastern scour pit quickly expanded in the direction of a nearby high-voltage electricity mast. After the breach, emergency measures were taken to ensure the stability of the mast and to reduce hindrance to navigation on the Scheldt-Rhine waterway. To repair the breach, various options were examined. Eventually, it was decided to proceed with a new closure and to strengthen and raise the existing northern and southern dam sections.

Completion and environmental functions

After the completion of the Oesterdam in 1986, the Markiezaatskade took on its final environmental function as a water barrier between Bergen op Zoom and the Scheldt-Rhine connection. This separation was necessary to isolate the nature reserves in and around the Markiezaatsmeer from the Scheldt-Rhine Canal, which could not be protected from pollution because of the presence of locks and shipping. The Markiezaatsmeer subsequently became the largest wetland area in the Netherlands after the Wadden Sea and the IJsselmeer. Most species of breeding birds in the Netherlands can be found there, with a large colony of spoonbills arriving in the spring. Orchids grow along the walking paths, and the lake also serves as a freshwater buffer for the surrounding area.
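The levels quoted for the 11 March 1982 breach make the failure easy to picture. The following is a minimal sketch (our own back-of-envelope arithmetic, not from the source) of the overtopping depth and the head difference across the unfinished section:

```python
# Back-of-envelope check of the water levels at the 1982 Markiezaatskade
# breach, all relative to NAP (Amsterdam Ordnance Datum). The three input
# figures come from the article text; the subtractions are illustrative.

unfinished_crest = 2.25   # m above NAP, the low unfinished dam section
surge_peak       = 3.67   # m above NAP, peak storm surge in the Oosterschelde
inner_level      = 0.60   # m above NAP, water level inside the Markiezaat

overtopping_depth = surge_peak - unfinished_crest  # water flowing over the crest
head_difference   = surge_peak - inner_level       # head driving flow through a breach

print(f"overtopping depth: {overtopping_depth:.2f} m")  # 1.42 m
print(f"head across dam:   {head_difference:.2f} m")    # 3.07 m
```

A 1.4-metre overtopping depth against a sand dam, with a three-metre head to drive scour once a breach opened, is consistent with the rapid formation of the deep scour pits described above.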
https://en.wikipedia.org/wiki/Kepler%20College
Kepler College (formerly Kepler College of Astrological Arts and Sciences) is an online certificate program for the study of astrology. Based in Seattle, Washington, U.S., it is named after the mathematician and astronomer Johannes Kepler (1571–1630). Kepler College was founded in 2000 as an unaccredited institution of higher learning that was authorized to grant degrees from 2000 until 2010 by the Higher Education Coordinating Board of Washington State. Its programs were based in the liberal arts and it offered degrees in astrological studies with a focus on the history of astrology. Since 2010, students have been awarded certificates of completion of a course of study instead of degrees.

History

In March 2000, Kepler College received provisional authorization from Washington State's Higher Education Coordinating (HEC) Board to grant degrees while the school pursued regional or national accreditation, a requirement for maintaining degree-granting status. The HEC Board's decision was criticized by many academics due to Kepler's focus on astrology. An administrator at the University of Washington called the HEC Board's approval "ludicrous" and compared the study of astrology to "quack medicine". John Silber, chancellor of Boston University, wrote in a Boston Herald editorial that the school's promoters "honored Kepler not for his strength but for his weakness, as if a society advocating drunkenness named a school for Ernest Hemingway". Silber also said, "The fact is that astrology, whether judged by its theory or its practice, is bunkum. In a free society there is no reason to prevent those who wish to learn nonsense from finding teachers who want to make money peddling nonsense. But it is inexcusable for the government to certify teachers of nonsense as competent or to authorize—that is, endorse—the granting of degrees in nonsense."
Kepler College promoted itself as the only institution in the Western Hemisphere to offer bachelor's and master's degrees in astrological studies, and 31 students enrolled for the first term in July 2000. The majority of coursework was offered online, allowing students from across the U.S. to enroll, with the requirement that they be present in person for one week of each 11-week term. Kepler College did not obtain the required accreditation by 2010, and as a result the HEC Board revoked Kepler's right to grant degrees. After losing this authority, Kepler became an online certificate program.
https://en.wikipedia.org/wiki/Bullerwell%20Lecture
The Bullerwell Lecture is an annual award from the British Geophysical Association (BGA) bestowed on an individual for significant contributions to the field of geophysics. Scientists of any nationality working in an academic institution in the United Kingdom qualify for the award. The award is named in honour of William Bullerwell.

Laureates

Notable recipients include:
2018: Tom Mitchell, University College London
2014: Catherine Rychert
2013: Derek Keir
2003: John-Michael Kendall
2000: James Jackson
1993: Kathryn Whaler
1992: Bob White
1982: Dan McKenzie

See also

IAMG Distinguished Lectureship
Georges Matheron Lectureship
List of geophysics awards
https://en.wikipedia.org/wiki/Conformal%20supergravity
In theoretical physics, conformal supergravity is the study of the supersymmetrized version of conformal gravity with Weyl transformations. Equivalently, it is the extension of ordinary supergravity to include Weyl transformations. Often, nonconformal gravity is described by conformal gravity with a conformal compensator. For a review of conformal supergravity, see E. S. Fradkin and A. A. Tseytlin, "Conformal Supergravity", Phys. Rep. 119 (1985) 233.
https://en.wikipedia.org/wiki/Great%20Calcite%20Belt
The Great Calcite Belt (GCB) refers to a region of the ocean with high concentrations of calcite, a mineral form of calcium carbonate. The belt extends over a large area of the Southern Ocean surrounding Antarctica. The calcite in the Great Calcite Belt is formed by tiny marine organisms called coccolithophores, which build their shells out of calcium carbonate. When these organisms die, their shells sink to the bottom of the ocean, and over time they accumulate to form a thick layer of calcite sediment. The Great Calcite Belt occurs in areas of the Southern Ocean where the calcite compensation depth (CCD) is relatively shallow, meaning that calcite minerals from the shells of marine organisms dissolve at a shallower depth in the water column. This results in a higher concentration of calcium carbonate sediments on the ocean floor, which can be observed in the form of white chalky sediments. The Great Calcite Belt plays a significant role in regulating the global carbon cycle. Calcite stores carbon that has been removed from the atmosphere in the ocean, which helps to reduce the amount of carbon dioxide in the atmosphere and mitigate the effects of climate change. Recent studies suggest the belt sequesters between 15 and 30 million tonnes of carbon per year. Scientists have further interest in the calcite sediments in the belt, which contain valuable information about past climate, ocean currents, ocean chemistry, and marine ecosystems. For example, variations in the CCD depth over time can indicate changes in the amount of carbon dioxide in the atmosphere and the ocean's ability to absorb it. The belt is also home to a diverse range of contemporary marine life, including deep-sea corals and fish that are adapted to the unique conditions found in this part of the ocean.
The Great Calcite Belt is a region of elevated summertime upper ocean calcite concentration derived from coccolithophores, despite the region being known for its diatom predominance. The overlap of two major phytoplankton groups, coccolithophores and diatoms, in the dynamic frontal systems characteristic of this region provides an ideal setting to study environmental influences on the distribution of different species within these taxonomic groups.

Overview

The Great Calcite Belt can be defined as an elevated particulate inorganic carbon (PIC) feature occurring alongside seasonally elevated chlorophyll a in austral spring and summer in the Southern Ocean. It plays an important role in climate fluctuations, accounting for over 60% of the Southern Ocean area (30–60° S). The region between 30° and 50° S has the highest uptake of anthropogenic carbon dioxide (CO2) alongside the North Atlantic and North Pacific oceans. Knowledge of the impact of interacting environmental influences on phytoplankton distribution in the Southern Ocean is limited. For example, more understanding is needed of how light and iron availability or temperature and pH interact to control phytoplankton biogeography. Hence, if model parameterizations are to improve to provide accurate predictions of biogeochemical change, a multivariate understanding of the full suite of environmental drivers is required. The Southern Ocean has often been considered as a microplankton-dominated (20–200 μm) system with phytoplankton blooms dominated by large diatoms and Phaeocystis sp. However, since the identification of the Great Calcite Belt (GCB) as a consistent feature and the recognition of picoplankton (< 2 μm) and nanoplankton (2–20 μm) importance in high-nutrient, low-chlorophyll (HNLC) waters, the dynamics of small (bio)mineralizing plankton and their export need to be acknowledged. The two dominant biomineralizing phytoplankton groups in the GCB are coccolithophores and diatoms.
Coccolithophores are generally found north of the polar front, though Emiliania huxleyi has been observed as far south as 58° S in the Scotia Sea, at 61° S across Drake Passage, and at 65° S south of Australia. Diatoms are present throughout the GCB, with the polar front marking a strong divide between different size fractions. North of the polar front, small diatom species, such as Pseudo-nitzschia spp. and Thalassiosira spp., tend to dominate numerically, whereas large diatoms with higher silicic acid requirements (e.g., Fragilariopsis kerguelensis) are generally more abundant south of the polar front. High abundances of nanoplankton (coccolithophores, small diatoms, chrysophytes) have also been observed on the Patagonian Shelf and in the Scotia Sea. Currently, few studies incorporate small biomineralizing phytoplankton to species level. Rather, the focus has often been on the larger and noncalcifying species in the Southern Ocean due to sample preservation issues (i.e., acidified Lugol's solution dissolves calcite, and light microscopy restricts accurate identification to cells > 10 μm). In the context of climate change and future ecosystem function, the distribution of biomineralizing phytoplankton is important to define when considering phytoplankton interactions with carbonate chemistry and ocean biogeochemistry. The Great Calcite Belt spans the major Southern Ocean circumpolar fronts: the Subantarctic front, the polar front, the Southern Antarctic Circumpolar Current front, and occasionally the southern boundary of the Antarctic Circumpolar Current. The subtropical front (at approximately 10 °C) acts as the northern boundary of the GCB and is associated with a sharp increase in PIC southwards. These fronts divide distinct environmental and biogeochemical zones, making the GCB an ideal study area to examine controls on phytoplankton communities in the open ocean.
A high PIC concentration observed in the GCB (1 μmol PIC L⁻¹) compared to the global average (0.2 μmol PIC L⁻¹), and significant quantities of detached E. huxleyi coccoliths (in concentrations > 20,000 coccoliths mL⁻¹), both characterize the GCB. The GCB is clearly observed in satellite imagery spanning from the Patagonian Shelf across the Atlantic, Indian, and Pacific oceans, completing the Antarctic circumnavigation via the Drake Passage.

Coccolithophores versus diatoms

The biogeography of Southern Ocean phytoplankton controls the local biogeochemistry and the export of macronutrients to lower latitudes and depth. Of particular relevance is the competitive interaction between coccolithophores and diatoms, with the former being prevalent along the Great Calcite Belt (40–60° S), while diatoms tend to dominate the regions south of 60° S, as illustrated in the diagram on the right. The ocean is changing at an unprecedented rate as a consequence of increasing anthropogenic CO2 emissions and related climate change. Changes in density stratification and nutrient supply, as well as ocean acidification, lead to changes in phytoplankton community composition and consequently ecosystem structure and function. Some of these changes are already observable today and may have cascading effects on global biogeochemical cycles and oceanic carbon uptake. Changes in Southern Ocean (SO) biogeography are especially critical due to the importance of the Southern Ocean in fuelling primary production at lower latitudes through the lateral export of nutrients and in taking up anthropogenic CO2. For the carbon cycle, the ratio of calcifying to noncalcifying phytoplankton is crucial due to the counteracting effects of calcification and photosynthesis on seawater pCO2, which ultimately controls CO2 exchange with the atmosphere, and the differing ballasting effects of calcite and opal shells on organic carbon export.
Calcifying coccolithophores and silicifying diatoms are globally ubiquitous phytoplankton functional groups. Diatoms are a major contributor to global phytoplankton biomass and annual net primary production (NPP). In comparison, coccolithophores contribute less to biomass and to global NPP. However, coccolithophores are the major phytoplanktonic calcifier, thereby significantly impacting the global carbon cycle. Diatoms dominate the phytoplankton community in the Southern Ocean, but coccolithophores have received increasing attention in recent years. Satellite imagery of particulate inorganic carbon (PIC, a proxy for coccolithophore abundance) revealed the "Great Calcite Belt", an annually reoccurring circumpolar band of elevated PIC concentrations between 40 and 60° S. In situ observations confirmed coccolithophore abundances of up to 2.4×10³ cells mL⁻¹ in the Atlantic sector (blooms on the Patagonian Shelf), up to 3.8×10² cells mL⁻¹ in the Indian sector, and up to 5.4×10² cells mL⁻¹ in the Pacific sector of the Southern Ocean, with Emiliania huxleyi being the dominant species. However, the contribution of coccolithophores to total Southern Ocean phytoplankton biomass and NPP has not yet been assessed. Locally, elevated coccolithophore abundance in the GCB has been found to turn surface waters into a source of CO2 for the atmosphere, emphasising the necessity of understanding the controls on their abundance in the Southern Ocean in the context of the carbon cycle and climate change. While coccolithophores have been observed to have moved polewards in recent decades, their response to the combined effects of future warming and ocean acidification is still subject to debate. As their response will also crucially depend on future phytoplankton community composition and predator–prey interactions, it is essential to assess the controls on their abundance in today's climate.
Top-down and bottom-up approaches

Coccolithophore biomass is controlled by a combination of bottom-up (physical–biogeochemical environment) and top-down factors (predator–prey interactions), but the relative importance of the two has not yet been assessed for coccolithophores in the Southern Ocean. Bottom-up factors directly impact phytoplankton growth, and diatoms and coccolithophores are traditionally discriminated based on their differing requirements for nutrients, turbulence, and light. Based on this, Margalef's mandala predicts a seasonal succession from diatoms to coccolithophores as light levels increase and nutrient levels decline. In situ studies assessing Southern Ocean coccolithophore biogeography have found coccolithophores under various environmental conditions, thus suggesting a wide ecological niche, but all of the mentioned studies have almost exclusively focused on bottom-up controls. However, phytoplankton growth rates do not necessarily covary with biomass accumulation rates. Using satellite data from the North Atlantic, Behrenfeld in 2014 stressed the importance of simultaneously considering bottom-up and top-down factors when assessing seasonal phytoplankton biomass dynamics and the succession of different phytoplankton types, owing to the spatially and temporally varying relative importance of the physical–biogeochemical and the biological environment. In the Southern Ocean, previous studies have shown zooplankton grazing to control total phytoplankton biomass, phytoplankton community composition, and ecosystem structure, suggesting that top-down control might also be an important driver for the relative abundance of coccolithophores and diatoms. But the role of zooplankton grazing is not well represented in current Earth system models, and the impact of different grazing formulations on phytoplankton biogeography and diversity is subject to ongoing research.
The diagram on the left shows the spatial distribution of different types of marine sediments in the Southern Ocean. The greenish area south of the Polar Front shows the extent of the subpolar opal belt, where sediments contain a significant portion of siliceous plankton frustules. Sediments near Antarctica mainly consist of glacial debris of all grain sizes, eroded and delivered by the Antarctic ice.

See also

Great Atlantic Sargassum Belt
Milky seas effect
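The PIC concentrations quoted above are molar quantities of carbon. A minimal sketch (our own unit conversion, using only the atomic mass of carbon; the concentrations are the article's figures) shows what they mean in mass terms:

```python
# Convert particulate inorganic carbon (PIC) concentrations from
# micromoles of carbon per litre to micrograms of carbon per litre.
# PIC is reported in moles of carbon, so the conversion only needs
# the atomic mass of carbon.

M_C = 12.011  # g/mol, standard atomic weight of carbon

def pic_to_ug_c_per_litre(pic_umol_per_l: float) -> float:
    """Convert a PIC concentration (umol C / L) to ug C / L."""
    return pic_umol_per_l * M_C

gcb_pic    = 1.0   # umol PIC / L, typical Great Calcite Belt value (from text)
global_pic = 0.2   # umol PIC / L, global average (from text)

print(pic_to_ug_c_per_litre(gcb_pic))     # ~12.0 ug C per litre
print(pic_to_ug_c_per_litre(global_pic))  # ~2.4 ug C per litre
print(gcb_pic / global_pic)               # 5.0: a five-fold enrichment
```

The five-fold enrichment over the global average is what makes the belt stand out so clearly in satellite PIC imagery.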
https://en.wikipedia.org/wiki/NGC%20293
NGC 293 is a barred spiral galaxy in the constellation Cetus. It was discovered on September 27, 1864 by Albert Marth.
https://en.wikipedia.org/wiki/Second%20Boer%20War%20concentration%20camps
During the Second Anglo-Boer War (1899–1902), the British operated concentration camps in the South African Republic, Orange Free State, Natal, and the Cape Colony. In November 1900, Herbert Kitchener took command of the British forces and implemented some controversial tactics that contributed to a British victory. As the Boers fought a guerrilla war, they lived off the land and used their farms as a source of food, making the farms a key element in their many successes at the beginning of the war. When Kitchener realized that a traditional warfare style would not work against the Boers, he began initiating plans that would later cause much controversy among the British public.

Scorched-earth policy

According to historian Thomas Pakenham, in March 1901 Lord Kitchener initiated plans to flush out guerrillas in a series of systematic drives, organized like a sporting shoot, with success defined by a weekly 'bag' of killed, captured, and wounded, and to sweep the country bare of everything that could give sustenance. Large epidemics of diseases, including measles, killed thousands, affecting children the most. It was the clearance of civilians and uprooting of a nation that came to dominate the last phases of the war. As Boer farms were destroyed by the British under their "scorched earth" policy, which included the systematic destruction of crops, the slaughtering or removal of livestock, and the burning down of homesteads and farms to prevent the Boers from resupplying themselves from a home base, many tens of thousands of men, women, and children were forcibly moved into camps. It was not the first use of concentration camps, as the Spanish had used them in Cuba during and after the Ten Years' War. However, the Boer War concentration camp system was the first time a whole nation had been systematically targeted, and the first in which entire regions had been depopulated.
Eventually, authorities built a total of 45 tented camps for Boer internees and 64 additional camps for Black Africans. The vast majority of Boers who remained in the local camps were women and children. Between 18,000 and 26,000 Boers perished in these concentration camps due to disease. The camps were poorly administered from the outset, and they became increasingly overcrowded when Lord Kitchener's troops implemented the internment strategy on a vast scale. Conditions were terrible for the health of the internees, mainly due to neglect, poor hygiene and bad sanitation. The supply of all items was unreliable, partly because of the constant disruption of communication lines by the Boers. The food rations were meager, and there was a two-tier allocation policy, whereby families of men still fighting were routinely given smaller rations than others. The inadequate shelter, poor diet, bad hygiene, and overcrowding led to malnutrition and endemic contagious diseases such as measles, typhoid, and dysentery, to which the children were particularly vulnerable. Due to a shortage of modern medical facilities and to medical mistreatment, many internees died.

UK public opinion and political opposition

Although the 1900 UK general election, also known as the "Khaki election", had resulted in a victory for the Conservative government on the back of recent British victories against the Boers, public support quickly waned as it became apparent that the war would not be easy. Further unease developed following reports filtering back to Britain concerning the treatment of Boer civilians by the British. Public and political opposition to government policies in South Africa regarding Boer civilians was first expressed in Parliament in February 1901 in the form of an attack on the government by the Liberal Party MP David Lloyd George.
Emily Hobhouse, a delegate of the South African Women and Children's Distress Fund, visited some of the camps in the Orange Free State from January 1901. In May 1901 she returned to England on board the ship Saxon. Alfred Milner, High Commissioner in South Africa, also boarded the Saxon for a holiday in England but, unfortunately for both the camp internees and the British government, he had no time for Miss Hobhouse, regarding her as a Boer sympathizer and "trouble maker". On her return, Emily Hobhouse did much to publicize the distress of the camp inmates. She managed to speak to the Liberal Party leader, Henry Campbell-Bannerman, who professed to be suitably outraged but was disinclined to press the matter, as his party was split between the imperialist and pro-Boer factions. St John Brodrick, the Conservative secretary of state for war, first defended the government's policy by arguing that the camps were purely "voluntary" and that the interned Boers were "contented and comfortable", but was somewhat undermined as he had no firm statistics to back up his argument. When his "voluntary" argument proved untenable, he argued that all measures being taken were "military necessities" and stated that everything possible was being done to ensure satisfactory conditions in the camps. Hobhouse published a report in June 1901 that contradicted Brodrick's claim, and Lloyd George then openly accused the government of "a policy of extermination" directed against the Boer population. The same month, Liberal opposition party leader Campbell-Bannerman took up the assault and answered the rhetorical question "When is a war not a war?" with his rhetorical answer, "When it is carried on by methods of barbarism in South Africa", referring to those same camps and the policies that created them. The Hobhouse Report caused an uproar both domestically and internationally.
The Fawcett Commission

Although the government had comfortably won the parliamentary debate by a margin of 252 to 149, it was stung by the criticism. Concerned by the escalating public outcry, it called on Kitchener for a detailed report. In response, complete statistical returns from the camps were sent out in July 1901. By August 1901, it was clear to government and opposition alike that Miss Hobhouse's worst fears were being confirmed: 93,940 Boers and 24,457 black Africans were reported to be in "camps of refuge", and the crisis was becoming a catastrophe as the death rates appeared very high, especially among the children. The government responded to the growing clamor by appointing a commission. The Fawcett Commission, as it became known, was, uniquely for its time, an all-woman affair headed by Millicent Fawcett, who, despite being the leader of the women's suffrage movement, was a Liberal Unionist and thus a government supporter, considered a safe pair of hands. Between August and December 1901, the Fawcett Commission conducted its own tour of the camps in South Africa. While it is probable that the British government expected the Commission to produce a report that could be used to fend off criticism, in the end it confirmed everything that Emily Hobhouse had said. Indeed, if anything the Commission's recommendations went even further. The Commission insisted that rations should be increased and that additional nurses be sent out immediately, and included a long list of other practical measures designed to improve conditions in the camps. Millicent Fawcett was quite blunt in expressing her opinion that much of the catastrophe was owed to a simple failure to observe elementary rules of hygiene. In November 1901, the Colonial Secretary Joseph Chamberlain ordered Alfred Milner to ensure that "all possible steps are being taken to reduce the rate of mortality".
The civil authority took over the running of the camps from Kitchener and the British command, and by February 1902, the annual death rate in the concentration camps for white inmates dropped to 6.9 percent and eventually to 2 percent. However, by then the damage had been done. A report after the war concluded that 27,927 Boers (of whom 24,074 [50 percent of the Boer child population] were children under 16) had died in the camps. In all, about one in four (25 percent) of the Boer inmates, mostly children, died. "Improvements [however] were much slower in coming to the black camps". It is thought that about 12 percent of black African inmates died (about 14,154) but the precise number of deaths of black Africans in concentration camps is unknown as little attempt was made to keep any records of the 107,000 black Africans who were interned. Sir Arthur Conan Doyle had served as a volunteer doctor in the Langman Field Hospital at Bloemfontein between March and June 1900. In his widely distributed and translated pamphlet 'The War in South Africa: Its Cause and Conduct' he justified both the reasoning behind the war and handling of the conflict itself. He also pointed out that over 14,000 British soldiers had died of disease during the conflict (as opposed to 8,000 killed in combat) and at the height of epidemics he was seeing 50–60 British soldiers dying each day in a single ill-equipped and overwhelmed military hospital.

Kitchener's policy and the post-war debate

It has been argued that "this was not a deliberately genocidal policy; rather it was the result of [a] disastrous lack of foresight and rank incompetence on [the] part of the [British] military". Scottish historian Niall Ferguson has also argued that "Kitchener no more desired the deaths of women and children in the camps than of the wounded Dervishes after Omdurman, or of his own soldiers in the typhoid-stricken hospitals of Bloemfontein."
However, to Lord Kitchener and the British High Command "the life or death of the 154,000 Boer and African civilians in the camps rated as an abysmally low priority" against military objectives. As the Fawcett Commission was delivering its recommendations, Kitchener wrote to St John Brodrick defending his policy of sweeps, and emphasizing that no new Boer families were being brought in unless they were in danger of facing starvation. This was disingenuous, as the countryside had by then been devastated under the "scorched earth" policy (the Fawcett Commission in its December 1901 recommendations commented that "to turn 100,000 people now being held in the concentration camps out on the field to take care of themselves would be cruelty"), and now that the new counter-insurgency tactics were in full swing, it made little sense to leave the Boer families by themselves in desperate conditions in the countryside. According to one historian, "at [the Vereeniging negotiations in May 1902] Boer leader Louis Botha asserted that he had tried to send [Boer] families to the British, but they had refused to receive them", quoting a Boer commandant referring to Boer women and children made refugees by Britain's scorched-earth policy as saying, "Our families are in a pitiable condition and the enemy uses those families to force us to surrender ... and there is little doubt that that was indeed the intention of Kitchener when he had issued instructions that no more families were to be brought into the concentration camps." Thomas Pakenham writes of Kitchener's policy U-turn.

List of concentration camps

Afrikaner concentration camps

The number of Afrikaners incarcerated in the concentration camps is estimated at around 40,000 by May 1902, the majority of them women and children. The total deaths in the camps are officially calculated at 27,927.
Black African concentration camps

By May 1902, when the Treaty of Vereeniging was signed, the total number of Black South Africans in concentration camps was recorded at 115,700. The total Black deaths in the camps are officially calculated at a minimum of 14,154; 81% of the fatalities were children.
https://en.wikipedia.org/wiki/Delphinus
Delphinus is a small constellation in the Northern Celestial Hemisphere, close to the celestial equator. Its name is the Latin version of the Greek word for dolphin (δελφίς). It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and remains one of the 88 modern constellations recognized by the International Astronomical Union. It is one of the smaller constellations, ranked 69th in size. Delphinus' five brightest stars form a distinctive asterism symbolizing a dolphin, with four stars representing the body and one the tail. It is bordered (clockwise from north) by Vulpecula, Sagitta, Aquila, Aquarius, Equuleus and Pegasus. Delphinus is a faint constellation with only two stars brighter than an apparent magnitude of 4: Beta Delphini (Rotanev) at magnitude 3.6 and Alpha Delphini (Sualocin) at magnitude 3.8.

Mythology

Delphinus is associated with two stories from Greek mythology. According to myth, the Greek sea god Poseidon wanted to marry Amphitrite, a beautiful nereid. However, wanting to protect her virginity, she fled to the Atlas mountains. Her suitor then sent out several searchers, among them a certain Delphinus. Delphinus accidentally stumbled upon her and was able to persuade Amphitrite to accept Poseidon's wooing. Out of gratitude, the god placed the image of a dolphin among the stars. The second story tells of the Greek poet Arion of Lesbos (7th century BC), who was saved by a dolphin. He was a court musician at the palace of Periander, ruler of Corinth. Arion had amassed a fortune during his travels to Sicily and Italy. On his way home from Tarentum, his wealth caused the crew of his ship to conspire against him. Threatened with death, Arion asked to be granted a last wish, which the crew granted: he wanted to sing a dirge. This he did, and while doing so, flung himself into the sea. There, he was rescued by a dolphin which had been charmed by Arion's music. The dolphin carried Arion to the coast of Greece and left.
In non-Western astronomy In Chinese astronomy, the stars of Delphinus are located within the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ). In Polynesia, two cultures recognized Delphinus as a constellation. In Pukapuka, it was called Te Toloa, and in the Tuamotus, it was called Te Uru-o-tiki. In Hindu astrology, Delphinus corresponds to the Nakshatra, or lunar mansion, of Dhanishta. Characteristics Delphinus is bordered by Vulpecula to the north, Sagitta to the northwest, Aquila to the west and southwest, Aquarius to the southeast, and Equuleus and Pegasus to the east. Covering 188.5 square degrees, corresponding to 0.457% of the sky, it ranks 69th of the 88 constellations in size. The three-letter abbreviation for the constellation, as adopted by the IAU in 1922, is "Del". The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of 14 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between and . The whole constellation is visible to observers north of latitude 69°S. Features Stars Delphinus has two stars brighter than fourth (apparent) magnitude; its brightest star is of magnitude 3.6. The main asterism in Delphinus is Job's Coffin, nearly a 45°-apex lozenge or diamond formed by the four brightest stars: Alpha, Beta, Gamma, and Delta Delphini. Delphinus lies in a rich Milky Way star field. Alpha and Beta Delphini bear the 19th-century names Sualocin and Rotanev, which read backwards as Nicolaus Venator, the Latinized name of a Palermo Observatory director, Niccolò Cacciatore (d. 1841). Alpha Delphini is a blue-white hued main sequence star of magnitude 3.8, 241 light-years from Earth. It is a spectroscopic binary. It is officially named Sualocin. The star has an absolute magnitude of -0.4. Beta Delphini is officially called Rotanev. It was found to be a binary star in 1873.
The gap between its close binary components is visible in large amateur telescopes. To the unaided eye, it appears to be a white star of magnitude 3.6. It has a period of 27 years and is 97 light-years from Earth. Gamma Delphini is a celebrated binary star among amateur astronomers. The primary is orange-gold of magnitude 4.3; the secondary is a light yellow star of magnitude 5.1. The pair form a true binary with an estimated orbital period of over 3,000 years. 125 light-years away, the two components are visible in a small amateur telescope. The secondary, also described as green, is 10 arcseconds from the primary. Struve 2725, called the "Ghost Double", is a pair that appears similar but dimmer. Its components of magnitudes 7.6 and 8.4 are separated by 6 arcseconds and are 15 arcminutes from Gamma Delphini itself. An unconfirmed exoplanet with a minimum mass of 0.7 Jupiter masses may orbit one of the stars. Delta Delphini is an A-type star of magnitude 4.43. It is a spectroscopic binary, and both stars are Delta Scuti variables. Epsilon Delphini, Deneb Dulfim (lit. "tail [of the] Dolphin"), or Aldulfin, is a star of stellar class B6 III. Its magnitude is variable at around 4.03. Zeta Delphini, an A3Va main-sequence star of magnitude 4.6, was in 2014 discovered to have a brown dwarf orbiting it. Zeta Delphini B has a mass of 50±15 Jupiter masses. Rho Aquilae, at magnitude 4.94, is about 150 light-years away. Due to its proper motion, it has been within the bounds of the constellation since 1992. It is an A-type main sequence star with a lower metallicity than the Sun. HR Delphini was a nova that brightened to magnitude 3.5 in December 1967. It took an unusually long time for the nova to reach peak brightness, which indicates that it barely satisfied the conditions for a thermonuclear runaway. Another nova, V339 Delphini, was detected in 2013; it peaked at magnitude 4.3 and was the first nova observed to produce lithium.
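The apparent magnitudes, distances, and absolute magnitudes quoted for these stars are linked by the standard distance-modulus relation. A minimal sketch, using Alpha Delphini's figures from this article (the function name is mine, and the light-year-to-parsec conversion is the standard value, not taken from the article):

```python
import math

LY_PER_PARSEC = 3.2616  # standard conversion factor


def absolute_magnitude(apparent_mag, distance_ly):
    """Distance modulus: M = m - 5 * log10(d_parsecs / 10)."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5 * math.log10(d_pc / 10)


# Alpha Delphini: m = 3.8 at 241 light-years gives M of roughly -0.5,
# consistent within rounding with the -0.4 quoted above.
print(round(absolute_magnitude(3.8, 241), 1))
```

The small discrepancy against the quoted -0.4 reflects rounding in the input values.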
Musica, also known by its Flamsteed designation 18 Delphini, is one of the five stars with known planets located in Delphinus. It has a spectral type of G6 III. Its planet, Arion, is very dense and massive, with a mass at least 10.3 times that of Jupiter. Arion was part of the first NameExoWorlds contest, in which the public got the opportunity to suggest names for exoplanets and their host stars. Exoplanets In 2024, the planet TOI-6883 b was discovered in the constellation Delphinus. It has a 16.249-day orbital period around its host star, a radius 1.08 times Jupiter's, and a mass 4.34 times Jupiter's. It was discovered from a single transit in TESS data and was confirmed by a network of citizen scientists. In 2024, the planet TOI-6883 c was discovered in the constellation Delphinus. It has an orbital period of 7.8458 days, a radius of 0.7 times Jupiter's, and a third of Jupiter's mass. The Neptune-sized planet was discovered from an anomaly in data from the same system. Deep-sky objects Delphinus's rich Milky Way star field means it contains many modest deep-sky objects. NGC 6891 is a planetary nebula of magnitude 10.5; another is NGC 6905, the Blue Flash Nebula. The Blue Flash Nebula shows broad emission lines. The central star of NGC 6905 has a spectral type of WO2, meaning it is rich in oxygen. NGC 6934 is a globular cluster of magnitude 9.75. It is about 52,000 light-years away from the Solar System. It is in Shapley-Sawyer Concentration Class VIII and is thought to share a common origin with another globular cluster in Boötes. It has an intermediate metallicity for a globular cluster, but as of 2018 it had been poorly studied. At a distance of about 137,000 light-years, the globular cluster NGC 7006 is at the outer reaches of the galaxy. It is also fairly dim at magnitude 11.5 and is in Class I. See also Delphinus (Chinese astronomy) Notes Citations References Princeton University Press, Princeton.
University of Wisconsin, "Delphinus" External links The Deep Photographic Guide to the Constellations: Delphinus The clickable Delphinus Star Tales – Delphinus Warburg Institute Iconographic Database (medieval and early modern images of Delphinus) Constellations Northern constellations Constellations listed by Ptolemy Legendary mammals Articles containing video clips
https://en.wikipedia.org/wiki/Andreu%20Mas-Colell
Andreu Mas-Colell (; born 29 June 1944) is an economist, an expert in microeconomics and a prominent mathematical economist. He is the founder of the Barcelona Graduate School of Economics and a professor in the department of economics at Pompeu Fabra University in Barcelona, Catalonia, Spain. He has also served several times in the cabinet of the Catalan government. Summarizing his and others' research in general equilibrium theory, his monograph gave a thorough exposition of research using differential topology. His textbook Microeconomic Theory, co-authored with Michael Whinston and Jerry Green, is the most used graduate microeconomics textbook in the world. In June 2021, Spain’s Court of Auditors found that he was among those responsible for government expenditure on the unconstitutional 2017 Catalan independence referendum, and announced its intention to fine him millions of euros; one member of the court dissented, and an outcry from economists followed. Biography A native of Barcelona, Mas-Colell completed his undergraduate studies at the University of Barcelona, earning a degree in economics in 1966. He moved to the University of Minnesota for his graduate studies, and completed his Ph.D. in 1972 under the supervision of Marcel Richter. He took a faculty position in mathematics and economics at the University of California, Berkeley, becoming a full professor in 1979. In 1981, he moved to Harvard University, and in 1988 he became the Louis Berkman Professor of Economics at Harvard. In 1995 he moved to Pompeu Fabra to lead the Department of Economics and Business. He was editor-in-chief of the Journal of Mathematical Economics from 1985 to 1989, and of Econometrica from 1988 to 1998. He was president of the Econometric Society in 1993 and of the European Economic Association in 2006. In public service, Mas-Colell was the Commissioner for Universities and Research of the Generalitat of Catalonia in 1999–2000. 
While Minister of Universities, Research and the Information Society for the Generalitat from 2000 to 2003, Mas-Colell established a research institution, the Catalan Institution for Research and Advanced Studies (ICREA), to attract top-notch scientists in all fields of knowledge, from philosophers to astrophysicists, to perform their research in 50 different host institutions in Catalonia. Mas-Colell served as Secretary General of the European Research Council from 2009 to 2010. In the Catalan government from 2010 to 2016, presided over by Artur Mas, he was appointed Councillor for Economy and Knowledge, responsible for the government's budget, economic policy, and research policy. Spain’s Court of Auditors in June 2021 held Mas-Colell and several former colleagues in the Catalan government responsible for the mismanagement of €4.8m in support of Catalan independence. Paul Romer, a Nobel prizewinner in economics, said that the procedure appeared to be ‘not justice but politics by other means.’ One member of the tribunal voted against the decision. Research Mas-Colell's research has ranged broadly over mathematical economics. In particular, he has been associated with a revival of the use of differential calculus (in the form of "global analysis") at the highest levels of mathematical economics. Following John von Neumann's breakthroughs in economics, and particularly after his introduction of functional analysis and topology into economic theory, advanced mathematical economics reduced its emphasis on differential calculus. In general equilibrium theory, mathematical economists used general topology, convex geometry, and optimization theory more than differential calculus. In the 1960s and 1970s, however, Gérard Debreu and Stephen Smale led a revival of the use of differential calculus in mathematical economics.
In particular, they were able to prove the existence of a general equilibrium, where earlier writers had failed, through the use of their novel mathematics: Baire category from general topology and Sard's lemma from differential topology and differential geometry. Their publications initiated a period of research "characterized by the use of elementary differential topology": "almost every area in economic theory where the differential approach has been pursued, including general equilibrium" was covered by Mas-Colell's monograph on differentiable analysis and economics. Mas-Colell's book "offers a synthetic and thorough account of a major recent development in general equilibrium analysis, namely, the largely successful reconstruction of the theory using modern ideas of differential topology", according to its back cover. Mas-Colell has also contributed to the theory of general equilibrium in topological vector lattices. The sets of prices and quantities can be described as partially ordered vector spaces, often as vector lattices. Economies with uncertain or dynamic decisions typically require that the vector spaces be infinite-dimensional, in which case the order properties of vector lattices allow stronger conclusions to be made. Recently researchers have studied nonlinear pricing: A "main motivation came from the fact that Mas-Colell's fundamental theory of welfare economics with no interiority assumptions crucially requires lattice properties of commodity spaces, even in finite-dimensional settings." Books Mas-Colell is the author or co-author of: The Theory of General Economic Equilibrium: a Differentiable Approach (Econometric Society Monographs in Pure Theory 9, Cambridge University Press, 1990, ). 
This book was evaluated for Mathematical Reviews by Dave Furth, who wrote that Mas-Colell's book is one of the first and still one of the most complete and most rigorous of the few textbooks on the applications of differential topology and global analysis to the theory of general economic equilibrium. .... People working in the field ought to own and to have used it already for some time. So those still wanting to consult for the first time and perhaps wanting to buy the book must be mathematicians interested in economic applications of the above-mentioned mathematical subjects. They will not regret consulting and/or buying it; the book is excellent. Microeconomic Theory (with Michael Dennis Whinston, Jerry R. Green, Oxford University Press, 1995, ). writes that this was "the most commonly used textbook in microeconomics", Jönköping excepted (the investigation covers all economics Ph.D. programs in Sweden for the academic year 2003-04). Awards and honors Mas-Colell is a foreign associate of the United States National Academy of Sciences, a foreign honorary member of the American Economic Association, a fellow of the American Academy of Arts and Sciences, a member of the Academia Europaea, a member of the Institute of Catalan Studies, a fellow of the Real Academia de Ciencias Morales y Políticas, and a fellow of the Econometric Society. He has received honorary doctorates from the University of Alicante in Spain, the University of Toulouse and HEC Paris in France, the National University of the South in Argentina, and the University of Chicago in the United States. He is a recipient of the Creu de Sant Jordi, the highest civil honor of Catalonia, and of the King Juan Carlos Prize in Economics. Also he has received the 2009 BBVA Foundation Frontiers of Knowledge Award in Economy, Finance and Management (co-winner with Hugo F. Sonnenschein). 
In honor of his 65th birthday and his service on the European Research Council, two conferences were held in his honor at Pompeu Fabra in 2009, the Journal of Mathematical Economics published a special issue in his honor, and he was awarded the Medal of Honor of the university. See also Clara Ponsatí i Obiols References External links Web site at Pompeu Fabra University 1944 births Academic staff of the Barcelona Graduate School of Economics Economists from Catalonia Economy ministers of Catalonia Fellows of the American Academy of Arts and Sciences Fellows of the Econometric Society Game theorists General equilibrium theorists Harvard University faculty Living people Mathematical economists Members of Academia Europaea Members of the European Academy of Sciences and Arts Members of the Institute for Catalan Studies Foreign associates of the National Academy of Sciences Academic staff of Pompeu Fabra University Presidents of the Econometric Society University of California, Berkeley College of Letters and Science faculty University of Minnesota alumni Economics journal editors Fellows of the European Economic Association
https://en.wikipedia.org/wiki/Automatic%20scorer
An automatic scorer is a computerized scoring system that keeps track of scoring in ten-pin bowling. It was introduced en masse in bowling alleys in the 1970s and combined with mechanical pinsetters to detect knocked-down pins. By eliminating the need for manual score-keeping, these systems have brought into the game new bowlers who otherwise would not participate because they would have to count the score themselves, as many do not understand the mathematical formula involved in bowling scoring. At first, people were skeptical about whether a computer could keep an accurate score. In the twenty-first century, automatic scorers are used in most bowling centers around the world. The three manufacturers of these specialty computers have been Brunswick Bowling, AMF Bowling (later QubicaAMF Worldwide), and RCA. History Automatic equipment is considered a cornerstone of the modern bowling center. The traditional bowling center of the early 20th century advanced in automation when the pinsetter person ("pin boy"), who set the knocked-down pins back up by hand, was replaced by a machine that automatically returned the pins to their proper play positions. This machine came out in the 1950s. A detection system was developed from the pinsetter mechanism in the 1960s that could tell which pins had been knocked down, and that information could be transferred to a digital computer. Automatic electronic scoring was first conceived by Robert Reynolds, who was described by a newspaper story at the time as "a West Coast electronics calculator expert." He worked with the technical staff of Brunswick Bowling to develop it. The goal was realized in the late 1960s when a specialized computer was designed for the purpose of automatic scorekeeping for bowling. The field test for the automatic scorer took place at Village Lanes bowling center, Chicago, in 1967. The scoring machine received approval for official use by the American Bowling Congress in August of that year.
They were first used in official national league play on October 10, 1967. In November, Brunswick announced that they were accepting orders for the new digital computer, which cost around $3,000 per bowling lane. Bowling centers that installed these new automatic scoring devices in the 1970s charged ten cents extra per line of scoring for the convenience. Description Each Automatic Scorer computer unit kept score for four lanes. It had two bowler identification panels serving two lanes each. The bowler pushed the panel into his named position when his turn came up so the computer knew who was bowling and could score accordingly. After the bowler rolled the bowling ball down the lane and knocked down pins, the pinsetter detected which pins were down and relayed this information back to the computer for scoring. The result was then printed on a scoresheet and projected overhead onto a large screen for all to see. The Automatic Scorer digital computer was mathematically accurate; however, the detection system at the pinsetter mechanism sometimes reported the wrong number of pins knocked down. The computer could be corrected manually for any errors in the system; similarly, human errors, such as neglecting to move the bowler identification mechanism, could be corrected by manual action. The scorer could take into account bowlers' handicaps and could adjust for late-arriving bowlers. The automatic scorer is directly connected to the foul detection unit. As a result, foul line violations are automatically scored. Brunswick had put ten years of research and development into the Automatic Scorer, and by 1972 there were over 500 of these computers installed in bowling centers around the world. AMF Bowling, a competitor to Brunswick, entered the automatic scorer computer field during the 1970s, and its systems were installed in its brand of bowling centers. By 1974, RCA was also making these computers for automatic scoring.
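The scoring formula these machines automated can be sketched as follows. This is the standard ten-pin rule set (strikes score 10 plus the next two rolls, spares 10 plus the next roll), not Brunswick's actual implementation, which was proprietary hardware; the function name and list layout are illustrative only:

```python
def score_game(rolls):
    """Score a complete ten-pin game from the flat list of pins
    knocked down on each roll (bonus rolls included at the end)."""
    total, i = 0, 0
    for _ in range(10):                      # ten frames
        if rolls[i] == 10:                   # strike: 10 + next two rolls
            total += 10 + rolls[i + 1] + rolls[i + 2]
            i += 1
        elif rolls[i] + rolls[i + 1] == 10:  # spare: 10 + next roll
            total += 10 + rolls[i + 2]
            i += 2
        else:                                # open frame: pins only
            total += rolls[i] + rolls[i + 1]
            i += 2
    return total


print(score_game([10] * 12))       # twelve strikes: a perfect game, 300
print(score_game([9, 1] * 10 + [9]))  # all spares with 9-counts: 190
```

The carry-forward bonuses for strikes and spares are exactly what made manual score-keeping error-prone for casual bowlers.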
Reception and further developments The purposes of the computerized scoring were to avoid errors by human scorers and to prevent cheating. It had the side benefits of speeding up the progress of the game and introducing new bowlers to the game. Score-keeping for bowling is based on a formula that many newcomers to bowling were not familiar with and thought difficult to learn. These casual bowlers unfamiliar with the formula thought the scores given by the computers were confusing. Some bowlers were not comfortable with automatic scorers when they were introduced in the 1970s, so they kept score using the traditional method on paper score sheets. The introduction of this device increased the popularity of the sport. Automatic scorers came to be considered a normal part of modern bowling installations worldwide, with owners and managers saying that bowlers expect such equipment to be present in bowling establishments and that business increased following their introduction. Brunswick introduced a color-television-style automatic scorer in 1983. Bowling center owners could use these automatic scorers for advertising, management, videos, and live television. By the 2010s, these types of electronic visual displays could show bowler avatars and social media connections to publicize the bowlers' scores. Some are capable of serving as extended entertainment systems with games for children and adults. Some scoring systems support variations on traditional bowling, such as different kinds of bingo games where certain pins have to be knocked down at certain times, or practice regimens where certain spares have to be converted. By this point, QubicaAMF Worldwide, an outgrowth of AMF, was one of the leading providers of bowling scoring equipment. Footnotes Ten-pin bowling Sports equipment Automation 20th-century inventions American inventions
https://en.wikipedia.org/wiki/Ethernet%20Powerlink
Ethernet Powerlink is a real-time protocol for standard Ethernet. It is an open protocol managed by the Ethernet POWERLINK Standardization Group (EPSG). It was introduced by the Austrian automation company B&R in 2001. This protocol has nothing to do with power distribution via Ethernet cabling or power over Ethernet (PoE), power line communication, or Bang & Olufsen's PowerLink cable. Overview Ethernet Powerlink expands Ethernet with a mixed polling and timeslicing mechanism. This provides: Guaranteed transfer of time-critical data in very short isochronous cycles with configurable response time Time-synchronisation of all nodes in the network with sub-microsecond precision Transmission of less time-critical data in a reserved asynchronous channel Modern implementations reach cycle times of under 200 μs and a time precision (jitter) of less than 1 μs. Standardization Powerlink was standardized by the Ethernet Powerlink Standardization Group (EPSG), founded in June 2003 as an independent association. Working groups focus on tasks like safety, technology, marketing, certification and end users. The EPSG cooperates with standardization bodies and associations, like the CAN in Automation (CiA) Group and the IEC. Physical layer The original physical layer specified was 100BASE-TX Fast Ethernet. Since the end of 2006, Ethernet Powerlink with Gigabit Ethernet has supported a transmission rate ten times higher (1,000 Mbit/s). The use of repeating hubs instead of switches within the real-time domain is recommended to minimise delay and jitter. Ethernet Powerlink uses IAONA's Industrial Ethernet Planning and Installation Guide for clean cabling of industrial networks, and both industrial Ethernet connectors, 8P8C (commonly known as RJ45) and M12, are accepted. Data link layer The standard Ethernet data link layer is extended by an additional bus scheduling mechanism, which ensures that only one node accesses the network at a time.
The schedule is divided into an isochronous phase and an asynchronous phase. During the isochronous phase, time-critical data is transferred, while the asynchronous phase provides bandwidth for the transmission of non-time-critical data. The Managing Node (MN) grants access to the physical medium via dedicated poll request messages. As a result, only one single node (CN) has access to the network at a time, which avoids the collisions present on older hub-based Ethernet before switches. The CSMA/CD mechanism of non-switched Ethernet, which caused non-deterministic Ethernet behaviour, is avoided by the Ethernet Powerlink scheduling mechanism. Basic cycle After system start-up is finished, the real-time domain operates under real-time conditions. The scheduling of the basic cycle is controlled by the Managing Node (MN). The overall cycle time depends on the amount of isochronous data, the amount of asynchronous data, and the number of nodes to be polled during each cycle. The basic cycle consists of the following phases: Start Phase: The Managing Node sends out a synchronization message to all nodes. The frame is called SoC (Start of Cycle). Isochronous Phase: The Managing Node calls each node to transfer time-critical data for process or motion control by sending the PReq (Poll Request) frame. The addressed node answers with the PRes (Poll Response) frame. Since all other nodes are listening to all data during this phase, the communication system provides a producer-consumer relationship. The time frame which includes PReq-n and PRes-n is called the time slot for the addressed node. Asynchronous Phase: The Managing Node grants the right to one particular node to send ad-hoc data by sending out the SoA (Start of Asynchronous) frame. The addressed node answers with ASnd. Standard IP-based protocols and addressing can be used during this phase. The quality of the real-time behavior depends on the precision of the overall basic cycle time.
The length of individual phases can vary as long as the total of all phases remains within the basic cycle time boundaries. Adherence to the basic cycle time is monitored by the Managing Node. The duration of the isochronous and the asynchronous phase can be configured. Picture 1: Frames above the time line are sent by the MN, below the time line by different CNs. Picture 2: Time slots for nodes and the asynchronous time slot Multiplex for Bandwidth Optimization In addition to transferring isochronous data during each basic cycle, some nodes are also able to share transfer slots for better bandwidth utilization. For that reason, the isochronous phase can distinguish between transfer slots dedicated to particular nodes, which have to send their data in every basic cycle, and slots shared by nodes to transfer their data one after the other in different cycles. Therefore, less important yet still time-critical data can be transferred in cycles longer than the basic cycle. Assigning the slots during each cycle is at the discretion of the Managing Node. Picture 3: Time slots in EPL multiplexed mode. Poll response chaining Poll response chaining is a mode used mainly for robotics applications and large superstructures. Its key benefits are a lower number of frames and better data distribution. OpenSAFETY Today, machines, plants and safety systems are stuck in a rigid scheme made up of hardware-based safety functions. The consequences of this are cost-intensive cabling and limited diagnostic options. The solution is the integration of safety-relevant application data into the standard serial control protocol. OpenSAFETY allows both publish/subscribe and client/server communication. Safety-relevant data is transmitted via an embedded data frame inside standard communication messages. Measures to avoid any undetected failures due to systematic or stochastic errors are an integral part of a functional safety protocol. OpenSAFETY is in conformance with IEC 61508. The protocol fulfills the requirements of SIL 3.
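The frame sequence of a basic cycle, including one shared slot that multiplexed nodes take turns using, can be sketched as follows. This is an illustration only: the frame names follow the description above, but the scheduling function and node layout are hypothetical and not taken from the EPSG specification:

```python
def basic_cycle(continuous_nodes, multiplexed_groups, cycle_number):
    """Return the frame sequence the Managing Node serializes onto the
    medium in one basic cycle: SoC, then one PReq/PRes pair per polled
    node, then SoA opening the asynchronous phase."""
    frames = ["SoC"]                     # start-of-cycle broadcast to all nodes
    for node in continuous_nodes:        # dedicated slots, polled every cycle
        frames += [f"PReq->{node}", f"PRes<-{node}"]
    if multiplexed_groups:               # shared slot: groups rotate per cycle
        group = multiplexed_groups[cycle_number % len(multiplexed_groups)]
        for node in group:
            frames += [f"PReq->{node}", f"PRes<-{node}"]
    frames.append("SoA")                 # grant asynchronous access
    return frames


# Nodes 1 and 2 are polled every cycle; nodes 3 and 4 share one
# multiplexed slot, so each is effectively polled every second cycle.
for cycle in range(2):
    print(basic_cycle([1, 2], [[3], [4]], cycle))
```

Because the multiplexed nodes are served only every n-th cycle, their effective data rate is 1/n of the basic cycle rate, which is the bandwidth trade-off described above.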
Error detection techniques have no impact on existing transport layers. Notes References External links ethernet-powerlink.org Ethernet POWERLINK Standardization Group website sourceforge.net/projects/openpowerlink Open Source Stack Ethernet Powerlink and OpenSafety Forums on LinkedIn Ethernet Powerlink Group OpenSafety Group Industrial Ethernet Industrial computing
https://en.wikipedia.org/wiki/Scarecrow%20%28DC%20Comics%29
The Scarecrow is a supervillain appearing in American comic books published by DC Comics. Created by writer Bill Finger and artist Bob Kane, the character first appeared in World's Finest Comics #3 (September 1941), and has become one of the superhero Batman's most enduring enemies belonging to the collective of adversaries that make up his rogues gallery. In the DC Universe, the Scarecrow is the alias of Jonathan Crane, a professor of psychology turned criminal mastermind. Abused and bullied in his youth, he becomes obsessed with fear and develops a hallucinogenic drug—dubbed "fear toxin"—to terrorize Gotham City and exploit the phobias of its protector, Batman. As the self-proclaimed "Master of Fear", the Scarecrow's crimes do not stem from a common desire for wealth or power, but from a sadistic pleasure in subjecting others to his experiments on the manipulation of fear. An outfit symbolic of his namesake with a stitched burlap mask serves as the Scarecrow's visual motif. The character has been adapted in various media incarnations, having been portrayed in film by Cillian Murphy in The Dark Knight Trilogy, and in television by Charlie Tahan and David W. Thompson in the Fox series Gotham, and Vincent Kartheiser in the HBO Max streaming series Titans. Henry Polic II, Jeffrey Combs, Dino Andrade, John Noble, and Robert Englund, among others, have provided the Scarecrow's voice in animation and video games. Publication history Batman creators Bill Finger and Bob Kane introduced the Scarecrow as a new villain in World's Finest Comics #3 (September 1941) during the Golden Age of Comic Books, in which he made only two appearances. Ichabod Crane, the protagonist of Washington Irving's The Legend of Sleepy Hollow, was used as an inspiration for the character's lanky appearance as well as his alter ego, Jonathan Crane. 
Scarecrow was revived during the Silver Age of Comic Books by writer Gardner Fox and artist Sheldon Moldoff in Batman #189 (February 1967), which featured the debut of the character's signature fear-inducing hallucinogen or "fear toxin". The character remained relatively unchanged throughout the Bronze Age of Comic Books. Following the 1986 multi-title event Crisis on Infinite Earths reboot, the character's origin story is expanded on in Batman Annual #19 and the miniseries Batman/Scarecrow: Year One, with this narrative also revealing that Crane has a fear of bats. In 2011, as a result of The New 52 reboot, Scarecrow's origin (as well as that of various other DC characters) is once again altered, incorporating several elements that differ from the original. Fictional character biography Backstory Born in Georgia, Jonathan Crane is abused by his great-grandmother, and is bullied at school for his resemblance to Ichabod Crane from Washington Irving's "The Legend of Sleepy Hollow", sparking his lifelong obsession with fear and using it as a weapon against others. In his senior year, Crane is humiliated by school bully Bo Griggs and rejected by cheerleader Sherry Squires. He takes revenge during the senior prom by donning his trademark scarecrow costume and brandishing a gun in the school parking lot; in the ensuing chaos, Griggs gets into a car accident, paralyzing himself and killing Squires. Crane's obsession with fear leads him to become a psychologist, taking a position at Arkham Asylum and performing fear-inducing experiments on his patients. He is also a professor of psychology at Gotham University, specializing in the study of phobias. He loses his job after he fires a gun inside a packed classroom, accidentally wounding a student; he takes revenge by killing the professors responsible for his termination and becomes a career criminal. As a college professor, Crane mentors a young Thomas Elliot. The character also has a cameo in Sandman (vol. 2) #5. 
In stories by Jeph Loeb and Tim Sale, the Scarecrow is depicted as one of the more deranged criminals in Batman's rogues gallery, with a habit of speaking in nursery rhymes. These stories further revise his history, explaining that he was raised by his abusive, fanatically religious great-grandfather, whom he murdered as a teenager. Criminal career Scarecrow plays a prominent role in Doug Moench's "Terror" storyline, set in Batman's early years, where Professor Hugo Strange breaks him out of Arkham and gives him "therapy" to train him to defeat Batman. Strange's therapy proves effective enough to turn the Scarecrow against his "benefactor"; Scarecrow impales Strange on a weather vane and throws him into the cellar of his own mansion. The Scarecrow then uses Strange's mansion to lure Batman to Crime Alley, and decapitates one of his former classmates in the alley in front of Batman. With the help of Catwoman, whom Scarecrow had attempted to blackmail into helping him by capturing her and photographing her unmasked, Batman catches Scarecrow, but loses sight of Strange; it is unclear whether Strange actually survived the fall onto the weather vane, or whether Scarecrow and Batman are hallucinating from exposure to Scarecrow's fear toxin. Scarecrow appears in Batman: The Long Halloween, first seen escaping from Arkham on Mother's Day with help from Carmine Falcone, who also helps the Mad Hatter escape. The Scarecrow gases Batman with fear toxin as he escapes, causing Batman to flee to his parents' grave as Bruce Wayne, where he is arrested by Commissioner Jim Gordon due to Wayne's suspected ties to Falcone. Scarecrow robs a bank with the Mad Hatter on Independence Day for Falcone, but is stopped by Batman and Catwoman. He later appears in Falcone's office on Halloween with Batman's future rogues gallery, but is defeated by Batman. Scarecrow returns in Batman: Dark Victory as part of Two-Face's gang, and is first seen putting fear gas in children's dolls on Christmas Eve.
He is eventually defeated by Batman. He later appears as one of the villains present at Calendar Man's trial. It is revealed he and Calendar Man had been manipulating Falcone's son Alberto; Scarecrow had determined that Alberto feared his father, and poisoned his cigarettes with the fear toxin to bring out the fear; Calendar Man, meanwhile, had been talking to Alberto, with the fear toxin making Alberto hear his father's voice. Together, they manipulate Alberto into making an unsuccessful assassination attempt on his sister, Sofia Gigante. After Two-Face's hideout is attacked, Batman captures Scarecrow, who tells him where Two-Face is heading. In Catwoman: When in Rome, Scarecrow supplies the Riddler with fear gas to manipulate Catwoman, and later aids Riddler when he fights Catwoman in Rome. Scarecrow accidentally attacks Cheetah with his scythe before Catwoman knocks him out. The Scarecrow appears in such story arcs as Knightfall and Shadow of the Bat, first teaming with the Joker to ransom off the mayor of Gotham City. Batman foils their plan and forces them to retreat. Scarecrow betrays Joker by spraying him with fear gas, but it has no effect; Joker then beats Scarecrow senseless with a chair. Scarecrow later tries to take over Gotham with an army of hypnotized college students, commanding them to spread his fear toxin all over the city. His lieutenant is the son of the first man he killed. He is confronted by both Batman-Azrael and Anarky and tries to escape by forcing his lieutenant to jump off a building. Batman-Azrael knocks him out, and Anarky manages to save the boy. Despite his criminal history, he is still recognized as a skilled psychologist. When Aquaman needs insight into a serial killer operating in his new city of Sub Diego (San Diego having been sunk and the inhabitants turned into water-breathers by a secret organization), he consults Scarecrow about the pattern of the killer's crimes.
Scarecrow determined that the killer chose his victims by the initials of their first and last names to spell out the message 'I can't take it any more', allowing Aquaman to determine both the true identity and final target of the real killer. In DC vs. Marvel, the Scarecrow temporarily allies with the Marvel Universe Scarecrow to capture Lois Lane before they are both defeated by Ben Reilly. In the 2004 story arc As the Crow Flies, Scarecrow is hired by the Penguin under false pretenses. Dr. Linda Friitawa then secretly mutates Scarecrow into a murderous creature known as the "Scarebeast", whom the Penguin uses to kill off his disloyal minions. The character's later appearances all show him as an unmutated Crane again, except for an appearance during the War Games story arc. Scarecrow appears in the third issue of War Games saving Black Mask from Batman and acting as the crime lord's ally, until Black Mask uses him to disable a security measure in the Clock Tower by literally throwing Scarecrow at it. Scarecrow wakes up, transforms into Scarebeast, and wreaks havoc outside the building trying to find and kill Black Mask. The police are unable to take it down, and allow Catwoman, Robin, Tarantula II, and Onyx to fight Scarebeast, as Commissioner Michael Akins had told all officers to capture or kill any vigilantes, costumed criminals or "masks" they find. Even they cannot defeat the Scarebeast, though he appears to have been defeated after the Clock Tower explodes. The Scarecrow reappears alongside other Batman villains in Gotham Underground; first among the villains meeting at the Iceberg Lounge to be captured by the Suicide Squad. Scarecrow escapes by gassing Bronze Tiger with fear toxin. He later warns the Ventriloquist II, Firefly, Killer Moth and Lock-Up, who are planning to attack the Penguin, that the Penguin is allied with the Suicide Squad. The villains wave off his warnings and mock him. He later leads the same four into a trap orchestrated by Tobias Whale.
Killer Moth, Firefly and Lock-Up all survive, but are injured and unconscious to varying degrees, the Scarface puppet is "killed", and Peyton Riley, the new Ventriloquist, is unharmed, though after the attack she is taken away by Whale's men. Whale then betrays Scarecrow simply for touching his shoulder (it is revealed Whale has a pathological hatred of "masks" because his grandfather was one of the first citizens of Gotham killed by a masked criminal). The story arc ends with Whale beating Scarecrow up and leaving him bound and gagged, as a sign to all "masks" that they are not welcome in Whale's new vision of Gotham. Scarecrow appears in Batman: Hush, working for the Riddler and Hush. He composes profiles on the various villains of Gotham so Riddler and Hush can manipulate them to their own ends. He later gases Huntress with his fear gas, making her attack Catwoman. He attacks Batman in a graveyard, only to learn his fear gas is ineffective (due to Hush's bug), but before he can reveal this he is knocked out by Jason Todd. Scarecrow also appears in Batman: Heart of Hush, kidnapping a child to distract Batman so Hush can attack Catwoman. When Batman goes to rescue the child, Scarecrow activates a Venom implant, causing the boy to attack Batman. He is defeated when Batman ties the boy's teddy bear to Scarecrow, causing the child to attack Scarecrow. After capturing Scarecrow, Batman forces him to reveal Hush's location. In the Battle for the Cowl storyline, Scarecrow is recruited by a new Black Mask to be a part of a group of villains who are aiming to take over Gotham in the wake of Batman's apparent death. He later assists the crime lord in manufacturing a recreational drug called "Thrill," which draws the attention of Oracle and Batgirl. He is later defeated by Batgirl and once again arrested. Blackest Night Scarecrow briefly appears in the fourth issue of the Blackest Night storyline.
His immunity to fear (brought about by frequent exposure to his own fear toxin) renders him practically invisible to the invading Black Lanterns. The drug has taken a further toll on his sanity, exacerbated by Batman's disappearance in the Batman R.I.P. storyline; he develops a literal addiction to fear, exposing himself deliberately to the revenant army, but knowing that only Batman could scare him again. Using a duplicate of Sinestro's power ring, he is temporarily deputized into the Sinestro Corps to combat the Black Lanterns. Overjoyed at finally being able to feel fear again, Scarecrow gleefully and without question follows Sinestro's commands. His celebration is cut short when Lex Luthor, overwhelmed by the orange light of Avarice, steals his ring. Brightest Day In Brightest Day, Scarecrow begins kidnapping and murdering college interns working for LexCorp as a way of getting back at Lex Luthor for stealing his ring. When Robin and Supergirl attempt to stop him, Scarecrow unleashes a new fear toxin that is powerful enough to affect a Kryptonian. The toxin forces Supergirl to see visions of a Black Lantern Reactron, but she is able to snap out of the illusion and help Robin defeat Scarecrow. He is eventually freed from Arkham when Deathstroke and the Titans break into the asylum to capture one of the inmates. The New 52 In 2011, The New 52 rebooted the DC universe. Scarecrow is a central villain in the Batman family of books and first appeared in the New 52 in Batman: The Dark Knight #4 (February 2012), written by David Finch and Paul Jenkins. His origin story is also altered; in this continuity, his father Gerald Crane used him as a test subject in his fear-based experiments. During one of these experiments, Crane's father locked him inside a little dark room, but suffered a fatal heart attack before he could let Jonathan out. Jonathan was trapped in the test chamber for days until being freed by some employees of the university.
As a result of this event, he was irreparably traumatized and developed an obsession with fear. He became a psychologist, specializing in phobias. Eventually, Crane began using patients as test subjects for his fear toxin. His turn to criminality is also markedly different in this version; the New 52 Scarecrow is fired from his professorship for covering an arachnophobic student with spiders, and becomes a criminal after stabbing a patient to death. The Scarecrow kidnaps Poison Ivy, and works with Bane to create and distribute to various Arkham inmates a new form of Venom infused with the Scarecrow's fear toxin. With the help of Superman and the Flash, Batman defeats the villains. The Scarecrow surfaces again in Batman: The Dark Knight #10, penned by Gregg Hurwitz, for a six-issue arc. The Scarecrow kidnaps Commissioner James Gordon and several children, and eventually releases his fear toxin into the atmosphere. Scarecrow is also used as a pawn by the Joker in the "Death of the Family" arc; he is referred to as Batman's physician. Scarecrow appears in Swamp Thing (vol. 5) #19 (June 2013), clipping flowers for his toxins at the Metropolis Botanical Garden. Swamp Thing attempts to save Scarecrow from cutting a poisonous flower, not realizing who the villain is. Scarecrow attempts to use his fear toxin on Swamp Thing. The toxin causes Swamp Thing to lose control of his powers until Superman intervenes. He is later approached by the Outsider of the Secret Society of Super Villains to join up with the group. Scarecrow accepts the offer. As part of "Villains Month", Detective Comics (vol. 2) #23.3 (Sept. 2013) was titled The Scarecrow #1. Scarecrow goes to see Killer Croc, Mr. Freeze, Poison Ivy, and Riddler and informs them that a war at Blackgate Penitentiary is coming, and learns where each of their allegiances lies. Through his conversations with each, Scarecrow learns that Bane may be the cause of the Blackgate uprising and will be their leader in the impending war.
It was also stated that Talons from the Court of Owls were stored at Blackgate on ice. Later, looking over the divided city, Scarecrow claims that once the war is over and the last obstacle has fallen, Gotham City would be his. Scarecrow approaches Professor Pyg at Gotham Memorial Hospital to see if he will give his supplies and Dollotrons to Scarecrow's followers. Scarecrow goes to Penguin next, who has already planned for the impending war, by blowing up the bridges giving access to Gotham City. Scarecrow and Man-Bat attempt to steal the frozen Talons from Blackgate while Penguin is having a meeting with Bane. Killer Croc rescues Scarecrow and Man-Bat from Blackgate and brings Scarecrow to Wayne Tower, where he gives Killer Croc control of Wayne Tower, as it no longer suits him. Scarecrow begins waking the Talons in his possession, having doused them with his fear gas and using Mad Hatter's mind-control technology in their helmets to control them. At Arkham Asylum, Scarecrow senses that he has lost the Talons after Bane freed them from Mad Hatter's mind-control technology. Scarecrow then turns to his next plan, giving the other inmates a small dose of Bane's Venom to temporarily transform them. Upon Bane declaring that Gotham City is finally his, he has Scarecrow hanged between two buildings. In Batman and Robin Eternal, flashbacks reveal that Scarecrow was the first villain faced by Dick Grayson as Robin in the New 52 universe when his and Batman's investigations into Scarecrow's crimes lead Batman to Mother, a woman who believes that tragedy and trauma serve as 'positive' influences to help people become stronger. To this end, Mother has Scarecrow develop a new style of fear toxin that makes the brain suffer the same experience as witnessing a massive trauma, but Scarecrow turns against Mother as the victims of this plan would become incapable of feeling anything. 
Recognizing that Mother will kill him once he has outlived his usefulness, Scarecrow attempts to turn himself over to Batman, but Batman uses this opportunity to have him deliver a fake psychological profile of him to Mother, claiming that Batman is a scarred child terrified of losing the people he cares for to make Mother think she understands him. In the present day, as Mother unleashes a new hypnotic signal to take control of the world's children, the Bat-Family abduct Scarecrow to brew up a new batch of his trauma toxin after determining that it nullifies the controlling influence of Mother's signal until they can shut down her main base. DC Rebirth In DC Rebirth, Scarecrow works with the Haunter to release a low dose of fear toxin around Gotham on Christmas and sets up a small stand for her to pick up the toxin. Both he and Haunter are paralyzed by the toxin's effects, allowing Batman to apprehend them. The Scarecrow later emerges using a Sinestro Corps power ring to induce fear and rage against Batman in random citizens throughout Gotham, to the point where he provokes Alfred Pennyworth into threatening to shoot Simon Baz as part of his final assault. In Doomsday Clock, Scarecrow is among the villains who meet with the Riddler to discuss the Superman Theory. Wanting to take on villains outside his rogues gallery, Shazam flies to Gotham City where he hears about a hostage situation caused by Scarecrow. Shazam starts to fight him, but begins to be affected by the fear gas. Batman shows up and regains control of the situation by defeating Scarecrow and administering the antidote. As Scarecrow is arrested, Batman states to Shazam that Scarecrow is too dangerous for him to fight. Infinite Frontier During Infinite Frontier, a re-designed Crane is the main foe of the crossover Fear State. Characterization Skills and equipment A master strategist and manipulator, Crane is one of the most cunning criminal masterminds.
Crane is a walking textbook on anxiety disorders and psychoactive drugs; he is able to recite the name and description of nearly every known phobia. He is even known to have a frightening ability to tamper with anyone's mind with just words, once managing to drive two men to suicide, and uses this insight to find people's mental pressure points and exploit them. Despite his scrawny build, Crane is a skilled martial artist who uses his long arms and legs in his personal combat style known as "violent dancing", developed during his training in the White Crane style of Kung Fu, alongside which Scarecrow sometimes wields a sickle or scythe. Scarecrow also has proficiency in both biochemistry and toxicology, both important to the invention of his fear toxin, which he atomized with mixed chemicals, including powerful synthetic adrenocortical secretions and other potent hallucinogens that can be inhaled or injected into the bloodstream to amplify the victim's darkest fear into a terrifying hallucination. Its potency has increased to an extreme level over the years; in some stories in which it appears, fear toxin is depicted as capable of prompting almost instantaneous, terror-induced heart attacks, leaving the victim in a permanent psychosis of chronic fear. Other versions of the toxin are powerful enough that even Superman can be affected; in one story, he mixes the toxin with kryptonite to simultaneously weaken and terrify the Man of Steel. To deliver his toxin, he often uses a hand-held sprayer in the shape of a human skull and special straws which can be snapped in half to release it. In one story, Scarecrow concocts a chemical containing wildfowl pheromones from his childhood that causes nearby birds to attack his opponents. Powers and abilities In the story arc As the Crow Flies, after being secretly mutated by Dr. Linda Friitawa, Scarecrow gains the ability to turn into a large, monstrous creature called the Scarebeast.
As Scarebeast, he has greatly enhanced strength and endurance, and emits a powerful fear toxin from his body. However, he has to be under physical strain or duress to transform. During the Blackest Night mini-series, Scarecrow is temporarily deputized into the Sinestro Corps by a duplicate of Sinestro's power ring. He proves to be very capable in manipulating the light of fear to create constructs until his ring is stolen by Lex Luthor. Personality Crane, in almost all of his incarnations, is cruel, sadistic, deranged, and manipulative above all else. Crane is obsessed with fear, and takes sadistic pleasure in frightening his victims, often literally to death, with his fear toxin. Crane also suffers from brain damage from prolonged exposure to his own toxin that renders him nearly incapable of being afraid of anything except Batman. This is problematic for him, as he is addicted to fear and compulsively seeks out confrontations with Batman to feed his addiction. He is also known to have a warped sense of humor, though not to the level of Black Mask or the Joker, as he has been known to frequently make taunts and quips related to his using his fear toxin or his love of terrifying others. During Alan Grant's "The God of Fear" storyline, Scarecrow develops a god complex; he creates an enormous hologram of himself that he projects against the sky, so he will be recognized and worshipped by the citizens of Gotham as a literal god of fear. Other characters named Scarecrow Madame Crow Abigail O'Shay is a Gotham University student who writes her doctoral thesis on vigilantes like the Bat-Family, whom she calls the "cape and cowl crowd". She is fascinated by the kind of trauma a person would have to go through to fight criminals while in costume. She learns about such trauma first hand when Jonathan Crane uses her as the test subject in experiments with his fear toxin, intending to test its readiness for use on Batman.
She spends more than a year in Arkham Asylum recuperating from Scarecrow's experiments. Blaming Batman for her trauma, O'Shay adopts the identity of Madame Crow, intending to make sure no one would ever again feel the kind of fear she did, and becomes a member of the Victim Syndicate. In a reversal of Scarecrow's fear toxin, Madame Crow has a set of gauntlets that fire needles filled with "anti-fear" toxin, which removes fear in the hope of keeping people from fighting to avoid their own trauma. Alternative versions As one of Batman's most recognizable and popular opponents, the Scarecrow appears in numerous comics that are not considered part of the regular DC continuity, including: The Scarecrow appears in Batman/Daredevil: King of New York, in which he attempts to use the Kingpin's criminal empire to disperse his fear gas over New York City. He is defeated when Daredevil, the "Man Without Fear", proves immune to the gas. The Scarecrow is featured in part two of the four-part JSA: The Liberty Files. This version of Scarecrow is portrayed as a German agent who kills a contact working for the Bat (Batman), the Clock (Hourman), and the Owl (Doctor Mid-Nite). In a struggle with Scarecrow, the fiancée of agent Terry Sloane is killed. This causes Sloane to return to the field as Mister Terrific and kill Scarecrow. A stand-in for Jonathan Crane named Jenna Clarke / Scarecrone appears in the Elseworlds original graphic novel Batman: Dark Knight Dynasty as a henchwoman/consort under the employ of Vandal Savage. Scarecrone also acts as a stand-in for Two-Face. She has the power to invade a person's psyche and make their deepest fears appear as illusions simply by touching them. "Scarecrone" is actually her alternate personality. Vandal Savage requires Clarke to switch to her Scarecrone persona through a special formula that he has made Clarke dependent on. The two personalities are antagonistic towards each other.
It is revealed that when the formula brings out Scarecrone, the right side of her face becomes heavily scarred. This scarring is healed once the formula wears off and the Jenna Clarke personality becomes dominant again. The Scarecrow is one of the main characters in Alex Ross' maxi-series Justice as part of the Legion of Doom. He is first seen out of costume in a hospital, injecting a girl in a wheelchair with a serum allowing her to walk. Scarecrow is later seen in costume during Lex Luthor's speech alongside Clayface inside the home of Black Canary and Green Arrow. Scarecrow gases Canary while Clayface attacks Green Arrow, but the attack fails when Black Canary finds her husband attacked by Clayface. Green Arrow defeats Clayface by electrocuting him with a lamp, and the duo flee soon after Canary unleashes her Canary Cry. Scarecrow is later seen with Clayface and Parasite, having captured Commissioner James Gordon, Batgirl, and Supergirl. When the Justice League storms the Hall of Doom, Scarecrow does not appear to face any particular target and duels the League as a whole. He is one of the few villains to escape the League's initial attack. The Justice League follows Scarecrow to his city, whereupon he sends his city's population to attack the League, knowing that they would not hurt civilians. However, John Stewart's ring frees the city from Scarecrow's control, subsequently freeing Scarecrow from Brainiac's control. Scarecrow does not seem bothered by this realization, admitting he would have done it anyway. He causes a diversion by releasing his fear gas into his entire city, driving his citizens into a homicidal frenzy, and manages to escape capture, but he is ambushed and nearly killed by the Joker in retaliation for not having been invited to the Legion of Doom. Scarecrow's city is again saved by the Justice League. 
The Scarecrow appears in the third and final chapter of Batman & Dracula: Red Rain, in which he has adorned his Scarecrow costume with laces of the severed fingers of the bullies who tormented him in school. He is about to kill a former football player when vampire Batman appears, noting that Scarecrow is worse than him; as a vampire, he is driven to kill by forces beyond his control, while Scarecrow chooses to be a murderer. Batman then grabs Scarecrow's vial of fear gas, crushing it along with the supervillain's hand, and cuts Scarecrow's head off with his own sickle, declaring that Scarecrow has no idea what fear really is. In the New 52 Batman Beyond books that take place after Futures End, the future Batman/Terry McGinnis fights a new, female version of the Scarecrow named Adalyn Stern. As a child, Adalyn was traumatized when she witnessed Batman brutally beat up her father (who was a notorious gang leader). She was placed in institutional care until she was assigned to one of Jonathan Crane's disciples, who attempted to treat her with technology derived from Crane's work, which only amplified her fear of Batman. She grows up and becomes a co-anchor to Jack Ryder on the New 52. She uses A.I. cubes placed in everyone's homes to brainwash the population into believing that the new Batman is a demon that needs to be put down. She is eventually defeated by the combined efforts of the original and new Batman as well as Jack Ryder, and is institutionalized in Arkham Asylum afterward, viewing herself as nothing but the Scarecrow. In the alternate timeline of Flashpoint, Scarecrow is one of the many villains subsequently killed by Thomas Wayne, who is that universe's Batman. In the graphic novel Batman: Earth One, Dr. Jonathan Crane is mentioned as the head of the Crane Institute for the Criminally Insane, and one of its escapees is one Ray Salinger, also known as the "Birthday Boy", used by Mayor Cobblepot to his advantage.
In the Batman/Teenage Mutant Ninja Turtles crossover, the Scarecrow appears mutated into a raven as one of the various other Arkham inmates mutated by Shredder and the Foot Clan to attack Batman and Robin. Batman is captured, but Robin manages to escape. The Teenage Mutant Ninja Turtles and Splinter then arrive, where Splinter defeats the mutated villains, while Batman uses his new Intimidator Armor to defeat Shredder and the Turtles defeat Ra's al Ghul. Later, Jim Gordon tells Batman that the police scientists have managed to turn all of the inmates at Arkham back to normal, and that they are currently in A.R.G.U.S. custody. Scarecrow makes a minor appearance in the 2017 series Batman: White Knight. Crane, along with several other Batman villains, is tricked by Jack Napier (who in this reality is the Joker, temporarily cured of his insanity after Batman force-fed him an overdose of pills) into drinking drinks that had been laced with particles from Clayface's body. This was done so that Napier, who was using Mad Hatter's technology to control Clayface, could control them by way of Clayface's ability to control parts of his body that had been separated from him. Scarecrow and the other villains are then used to attack a library which Napier himself was instrumental in building in one of Gotham City's poorer districts. Later on in the story, the control hat is stolen by Neo-Joker (the second Harley Quinn, who felt that Jack Napier was a pathetic abnormality while Joker was the true, beautiful personality), in an effort to get Napier to release the Joker persona. Scarecrow also appears in the sequel storyline Batman: Curse of the White Knight, being among the villains murdered by Azrael. The Scarecrow makes a cameo appearance in Arkham Asylum: A Serious House on Serious Earth. Dr. Jonathan Crane/Scarecrow is one of the main antagonists in the Batman '89 series Echoes.
In other media See also List of Batman family enemies References External links Scarecrow at DC CONTINUITY PROJECT Scarecrow at DC Database Scarecrow at Comic Vine Action film villains Batman characters Characters created by Bill Finger Characters created by Bob Kane Comics characters introduced in 1941 DC Comics film characters Fictional victims of child abuse DC Comics male supervillains DC Comics scientists DC Comics television characters Fictional bibliophiles Fictional biochemists Fictional inventors in comics Fictional mad scientists Fictional mass murderers Fictional monsters Fictional psychologists Fictional scarecrows Fictional terrorists Fictional toxicologists Film supervillains Golden Age supervillains Male film villains Video game bosses Villains in animated television series
Scarecrow (DC Comics)
Chemistry
6,778
801,868
https://en.wikipedia.org/wiki/Phlogopite
Phlogopite is a yellow, greenish, or reddish-brown member of the mica family of phyllosilicates. It is also known as magnesium mica. Phlogopite is the magnesium endmember of the biotite solid solution series, with the chemical formula KMg3AlSi3O10(F,OH)2. Iron substitutes for magnesium in variable amounts, leading to the more common biotite with higher iron content. For physical and optical identification, it has most of the characteristic properties of biotite. Paragenesis Phlogopite is an important and relatively common end-member composition of biotite. Phlogopite mica is found primarily in igneous rocks, although it is also common in contact metamorphic aureoles of intrusive igneous rocks with magnesian country rocks and in marble formed from impure dolomite (dolomite with some siliciclastic sediment). The occurrence of phlogopite mica within igneous rocks is difficult to constrain precisely: as expected, the primary control is rock composition, but phlogopite is also controlled by conditions of crystallisation such as temperature, pressure, and vapor content of the igneous rock. Several igneous associations are noted: high-alumina basalts, ultrapotassic igneous rocks, and ultramafic rocks. Basaltic association The basaltic occurrence of phlogopite is in association with picrite basalts and high-alumina basalts. Phlogopite is stable in basaltic compositions at high pressures and is often present as partially resorbed phenocrysts or an accessory phase in basalts generated at depth. Ultrapotassic association Phlogopite mica is a commonly known phenocryst and groundmass phase within ultrapotassic igneous rocks such as lamprophyre, kimberlite, lamproite, and other deeply sourced ultramafic or high-magnesian melts. In this association phlogopite can form well-preserved megacrystic plates up to 10 cm, and is present as the primary groundmass mineral, or in association with pargasite amphibole, olivine, and pyroxene.
Phlogopite in this association is a primary igneous mineral present because of the depth of melting and high vapor pressures. Ultramafic rocks Phlogopite is often found in association with ultramafic intrusions as a secondary alteration phase within metasomatic margins of large layered intrusions. In some cases the phlogopite is considered to be produced by autogenic alteration during cooling. In other instances, metasomatism has resulted in phlogopite formation within large volumes, as in the ultramafic massif at Finero, Italy, within the Ivrea zone. Trace phlogopite, again considered the result of metasomatism, is common within coarse-grained peridotite xenoliths carried up by kimberlite, and so phlogopite appears to be a common trace mineral in the uppermost part of the Earth's mantle. Phlogopite is encountered as a primary igneous phenocryst within lamproites and lamprophyres, the result of highly fluid-rich melt compositions within the deep mantle. Uses As the general thermal, electrical and mechanical properties of phlogopite are those of the mica family, the main uses of phlogopite are similar to those of muscovite. Miscellaneous The largest documented single crystal of phlogopite was found in Lacey mine, Ontario, Canada; it weighed about 330 tonnes. Similar-sized crystals were also found in Karelia, Russia. References Further reading Deer, W.A., Howie, R.A., and Zussman, J. (1963). Rock-forming minerals, v. 3, "Sheet silicates", p. 42–54. Phyllosilicates Potassium minerals Magnesium minerals Iron(II) minerals Manganese minerals Monoclinic minerals Minerals in space group 12 Mica group Luminescent minerals
Phlogopite
Chemistry
855
33,748,122
https://en.wikipedia.org/wiki/Star%20trail
A star trail is a type of photograph that uses long exposure times to capture diurnal circles, the apparent motion of stars in the night sky due to Earth's rotation. A star-trail photograph shows individual stars as streaks across the image, with longer exposures yielding longer arcs. The term is used for similar photos captured elsewhere, such as on board the International Space Station and on Mars. Typical shutter speeds for a star trail range from 15 minutes to several hours, requiring a "Bulb" setting on the camera to open the shutter for a period longer than usual. However, a more practiced technique is to blend a number of frames together to create the final star trail image. Star trails have been used by professional astronomers to measure the quality of observing locations for major telescopes. Capture Star trail photographs are captured by placing a camera on a tripod, pointing the lens toward the night sky, and allowing the shutter to stay open for a long period of time. Star trails are considered relatively easy for amateur astrophotographers to create. Photographers generally make these images by using a DSLR or mirrorless camera with its lens focus set to infinity. A cable release or intervalometer allows the photographer to hold the shutter open for the desired amount of time. Typical exposure times range from 15 minutes to many hours long, depending on the desired length of the star trail arcs for the image. Even though star trail pictures are created under low-light conditions, long exposure times allow fast films, such as ISO 200 and ISO 400. Wide apertures, such as f/5.6 and f/4, are recommended for star trails. Because exposure times for star trail photographs can be several hours long, camera batteries can be easily depleted. Mechanical cameras that do not require a battery to open and close the shutter have an advantage over more modern film and digital cameras that rely on battery power.
On these cameras, the Bulb, or B, exposure setting keeps the shutter open. Another problem that digital cameras encounter is an increase in electronic noise with increasing exposure time. However, this can be avoided through the use of shorter exposure times that are then stacked in post-production software. This avoids possible heat build-up or digital noise caused by a single long exposure. American astronaut Don Pettit recorded star trails with a digital camera from the International Space Station in Earth orbit between April and June, 2012. Pettit described his technique as follows: "My star trail images are made by taking a time exposure of about 10 to 15 minutes. However, with modern digital cameras, 30 seconds is about the longest exposure possible, due to electronic detector noise effectively snowing out the image. To achieve the longer exposures I do what many amateur astronomers do. I take multiple 30-second exposures, then 'stack' them using imaging software, thus producing the longer exposure." Star trail images have also been taken on Mars. The Spirit rover produced them while looking for meteors. Since the camera was limited to 60-second exposures, the trails appear as dashed lines. Earth's rotation Star trail photographs are possible because of the rotation of Earth about its axis. The apparent motion of the stars is recorded as mostly curved streaks on the film or detector. For observers in the Northern Hemisphere, aiming the camera northward creates an image with concentric circular arcs centered on the north celestial pole (very near Polaris). For those in the Southern Hemisphere, this same effect is achieved by aiming the camera southward. In this case, the arc streaks are centered on the south celestial pole (near Sigma Octantis). Aiming the camera eastward or westward shows straight streaks on the celestial equator, which is tilted at an angle with respect to the horizon.
The angular measure of this tilt depends on the photographer's latitude (φ), and is equal to 90° − φ. Astronomical site testing Star trail photographs can be used by astronomers to determine the quality of a location for telescope observations. Star trail observations of Polaris have been used to measure the quality of seeing in the atmosphere and the vibrations in telescope mounting systems. The first recorded suggestion of this technique is from Edward Skinner King's 1931 book A Manual of Celestial Photography. Gallery References External links 4 Steps To Creating Star Trails Photos Using Stacking Software Star trail photography StarStaX free multi-platform star trail software Photographic techniques Astrophotography Photography by genre Space art Astronomical imaging
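The geometry above can be made concrete: the sky turns 360° per sidereal day (about 23.93 hours), so a trail grows roughly 15° of arc per hour of exposure, and the celestial equator meets the horizon at an angle of 90° minus the observer's latitude. A small illustrative sketch (the function names are chosen for this example):

```python
SIDEREAL_DAY_HOURS = 23.9345  # Earth's rotation period relative to the stars

def trail_arc_degrees(exposure_hours):
    """Angular length of a star trail after the given exposure time."""
    return 360.0 * exposure_hours / SIDEREAL_DAY_HOURS

def equator_tilt_degrees(latitude_deg):
    """Tilt of the celestial equator relative to the horizon."""
    return 90.0 - latitude_deg

print(trail_arc_degrees(1.0))      # ~15.04 degrees of arc per hour
print(equator_tilt_degrees(40.0))  # 50.0 degrees at latitude 40 N
```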
Star trail
Astronomy
883
23,203,326
https://en.wikipedia.org/wiki/Steve%20Evets
Steve Evets (born Steven Murphy; 26 July 1959) is an English actor and musician, who found fame for his leading role in the 2009 film Looking for Eric. Personal life Born in Salford, Lancashire, Evets joined the Merchant Navy after leaving school, but was kicked out after three years, after jumping ship twice in Japan and spending his eighteenth birthday in a Bombay brothel. In 1987 Evets was injured in a pub brawl and spent time on a life support machine. He was stabbed through the liver, lung and diaphragm, was glassed in the face and had his throat cut. Evets briefly worked delivering pipes alongside his acting career, and as an electrician. As there was already a Steve Murphy on the books of Equity, he decided on the palindromic stage name Steve Evets, "The first thing that popped into my head was 'Steve' backwards ... so I put that on the form." Career Evets's early acting work included a street theatre company formed with two friends. He moved into theatre work, and had small roles in several television series such as See No Evil: The Moors Murders, Casualty, Life on Mars, The Cops, Shameless, and Emmerdale. In between acting roles, he worked under the name Adolph Chip-pan, performing political comedy poetry in Manchester. He also worked as a musician, and was introduced to Mark E. Smith of the Fall in the mid-1990s, leading to Evets performing his poetry at some Fall gigs. When Smith found that Evets could play bass guitar, he was drafted into the band in Turkey after previous bassist Jim Watts had been sacked. Evets played in The Fall between 2000 and 2002, before leaving to front his own band, Dr Freak's Padded Cell, which he described as "electronic dance music with sort of very political overtones", even getting Smith to provide guest vocals on one track; Evets made a video for the track and posted it on YouTube, much to the dislike of Smith, ending their friendship. 
His first major film role came in 2008, playing a terminally ill alcoholic who uses a wheelchair opposite Robert Carlyle, in Summer. He followed this with the lead role in Ken Loach's 2009 film, Looking for Eric. From 2010 to 2014, he appeared as the homeless congregation member Colin in the acclaimed TV series Rev. He played Morty in Vertigo Films' 2012 low-budget horror film The Facility (originally titled Guinea Pigs), directed by Ian Clark. He starred in a music video for Salford-based band Emperor Zero's 2011 song "Man with Red Eyes". Evets appeared in the first three episodes of the first series of BBC Three zombie drama In the Flesh, but did not return for series 2 due to his character's death. In 2015, Evets appeared as Jim Smith in the BBC TV series Death in Paradise episode 4.5, and he also appeared as Bertrand in the BBC TV series The Musketeers episode 2.5, "The Return". In February 2016, he appeared in the BBC One drama series Moving On. From 2019, Evets has appeared in the Sky TV series Brassic as a foul-mouthed farmer called Jim. There are currently six seasons of Brassic, with the seventh filmed in 2024. Filmography References External links 1959 births 20th-century English male actors 21st-century English male actors Living people British Merchant Navy personnel Male actors from Salford English male film actors English male television actors English people of Irish descent The Fall (band) members Palindromes Stabbing survivors
Steve Evets
Physics
729
52,391,799
https://en.wikipedia.org/wiki/Recurse%20Center
The Recurse Center (formerly known as Hacker School; also called RC) is an independent educational institution that combines a retreat for computer programmers with a recruiting agency. The retreat is an intentional community, a self-directed academic environment in which programmers of all levels can improve their skills, free of charge. There is no curriculum, and no particular programming languages or paradigms are institutionally favored; instead, participants work on open-source projects of their own choice, alone or collaboratively, as they see best. The Center has been an active advocate for women in programming. After switching to online programming in 2020, the Recurse Center reopened its physical space in 2023. History The Center was initially founded in the summer of 2010 as Hackruiter, an engineering recruiting company, using seed money from Y Combinator. The idea quickly arose of trying to transform recruiting for start-ups by running a retreat as part of the process, with the goal of helping clients become better programmers. It officially opened its doors as “Hacker School” in New York in July 2011, obliquely anticipating the coding bootcamp movement that arose in the mid-2010s. Hacker School came to wide public attention in mid-2012, when it partnered with the e-commerce company Etsy to offer “Hacker Grants” in support of female developers. A number of companies soon joined Etsy in funding these grants, and in 2014 the grant program expanded to offer support to other groups not well represented in American technology industries. In 2015 Hacker School was renamed the Recurse Center. Business model The programming retreat is free of charge for admitted applicants to attend. The organization itself is for-profit and supports itself through recruitment, by placing some participants in programming jobs. It has had recruiting partnerships with Airtable, Notion, Hudson River Trading, Jane Street, OpenAI, and more. 
In 2014 the retreat reached the "tipping point" of self-sufficiency purely from recruiting income. Internal costs to the company have been reported at "nearly $12,000" for each participant. The Center does not publish statistics on its admission rate, although there is no published rule against reapplication. Educational philosophy and name There is no curriculum; each participant imposes their own structure for self-directed learning on their stay at the Recurse Center, with guidance as requested. Despite its original name “Hacker School”, the Recurse Center is not a school — its model of self-directed learning was inspired by the unschooling philosophy of John Holt (1923–1985). Nor does it have any connection to the popular notion of a hacker as someone who breaks into computer systems — rather, “hacker” here was intended to suggest a programmer who is technically resourceful but also supportive of other programmers. In 2015 the organization changed its name to the Recurse Center to avoid confusion over these matters. Since its founding, the faculty have experimented continually with day-to-day experience in the retreat. Experiments have included: “facilitators” for day-to-day shepherding of participants and improvement of the organization itself, a “residents” program for shorter-term specialist guidance, Code Words, a journal about programming, a “maintainers” program to promote contribution to open-source software projects, a research lab, half-length batches, and a mentoring program for new coders. Social environment and influence The Center did not initially publish a code of conduct, but eventually formalized its expectations of participant behavior in June 2017. 
Prior to that, it listed social rules intended to shepherd community behavior and “to remove as many distractions as possible so everyone can focus on programming.” These social rules are one of the retreat's most influential features and have been adopted by a number of other programming communities. There is a large community of alumni who have remained active past the end of their “batch”, interacting with each other and with new participants in person or via virtual tools. Specializations of participants The level of participants' skill and experience is diverse, in common with retreats in other creative fields and unlike many engineering organizations. Participants range from long-experienced software developers on sabbatical, to people who have been coding for only a few months, to retirees, to college students on vacation. Some participants hold doctoral degrees; others have left school before completing secondary or even primary education. Many participants are engineers, but others have strong non-engineering backgrounds, in the humanities, journalism, pure mathematics, and the performing arts, among many others. Notable alumni Timnit Gebru (Summer 2012), AI ethics researcher Michael Nielsen (Summer 2012), quantum physicist Greg Brockman (Summer 2 2015), co-founder of OpenAI Raph Levien (Fall 1 2017), creator of Advogato Paul Biggar (Fall 1 2016), co-founder of CircleCI , creator of Zig References External links Computer science education Hacker culture Intentional communities in New York (state) Y Combinator companies 2011 establishments in New York City
Recurse Center
Technology
1,013
38,991,948
https://en.wikipedia.org/wiki/Single-cell%20analysis
In cell biology, single-cell analysis and subcellular analysis refer to the study of genomics, transcriptomics, proteomics, metabolomics, and cell–cell interactions at the level of an individual cell, as opposed to more conventional methods which study bulk populations of many cells. The concept of single-cell analysis originated in the 1970s. Before the discovery of heterogeneity, single-cell analysis mainly referred to the analysis or manipulation of an individual cell within a bulk population of cells under the influence of a particular condition using optical or electron microscopy. Due to the heterogeneity seen in both eukaryotic and prokaryotic cell populations, analyzing the biochemical processes and features of a single cell makes it possible to discover mechanisms which are too subtle or infrequent to be detectable when studying a bulk population of cells; in conventional multi-cell analysis, this variability is usually masked by the average behavior of the larger population. Technologies such as fluorescence-activated cell sorting (FACS) allow the precise isolation of selected single cells from complex samples, while high-throughput single-cell partitioning technologies enable the simultaneous molecular analysis of hundreds or thousands of individual unsorted cells; this is particularly useful for the analysis of variations in gene expression between genotypically identical cells, allowing the definition of otherwise undetectable cell subtypes. The development of new technologies is increasing scientists' ability to analyze the genome and transcriptome of single cells, and to quantify their proteome and metabolome. Mass spectrometry techniques have become important analytical tools for proteomic and metabolomic analysis of single cells. Recent advances have enabled the quantification of thousands of proteins across hundreds of single cells, making possible new types of analysis. 
In situ sequencing and fluorescence in situ hybridization (FISH) do not require that cells be isolated and are increasingly being used for analysis of tissues. Single-cell isolation Many single-cell analysis techniques require the isolation of individual cells. Methods currently used for single-cell isolation include: dielectrophoretic digital sorting, enzymatic digestion, FACS, hydrodynamic traps, laser capture microdissection, manual picking, microfluidics, inkjet printing (IJP), micromanipulation, serial dilution, and Raman tweezers. Manual single-cell picking is a method where cells in suspension are viewed under a microscope and individually picked using a micropipette. The Raman tweezers technique combines Raman spectroscopy with optical tweezers, using a laser beam to trap and manipulate cells. The dielectrophoretic digital sorting method utilizes a semiconductor-controlled array of electrodes in a microfluidic chip to trap single cells in dielectrophoretic (DEP) cages. Cell identification is ensured by the combination of fluorescent markers with image observation. Precision delivery is ensured by the semiconductor-controlled motion of DEP cages in the flow cell. Inkjet printing combines microfluidics with MEMS on a CMOS chip to provide individual control over a large number of print nozzles, using the same technology as home inkjet printing. IJP allows for the adjustment of shear force during sample ejection, greatly improving cell survivability. This approach, when combined with optical inspection and AI-driven image recognition, not only guarantees single-cell dispensing into the well plate or other medium but can also assess sample quality, rejecting defective cells, debris, and fragments. The development of hydrodynamic-based microfluidic biochips has been increasing over the years. 
In this technique, the cells or particles are trapped in a particular region for single-cell analysis, usually without the application of any external force fields such as optical, electrical, magnetic, or acoustic fields. Because such techniques allow cells to be studied close to their natural state, their continued development is essential, and researchers have highlighted the large design space still to be explored for biochip devices that meet market and research demands. Hydrodynamic microfluidics facilitates the development of passive lab-on-chip applications. Hydrodynamic traps allow for the isolation of an individual cell in a "trap" at a single given time by passive microfluidic transport. The number of isolated cells can be manipulated based on the number of traps in the system. The laser capture microdissection technique utilizes a laser to dissect and separate individual cells, or sections, from tissue samples of interest. The methods involve the observation of a cell under a microscope, so that a section for analysis can be identified and labeled so that the laser can cut the cell. Then, the cell can be extracted for analysis. Microfluidics allows for the isolation of individual cells for further analyses. The following principles outline the various microfluidic processes for single-cell separation: droplet-in-oil-based isolation, pneumatic membrane valving, and hydrodynamic cell traps. Droplet-in-oil-based microfluidics uses oil-filled channels to hold separated aqueous droplets. This allows the single cell to be contained and isolated from inside the oil-based channels. Pneumatic membrane valves manipulate air pressure to isolate individual cells by membrane deflection. The manipulation of the pressure source allows the opening or closing of channels in a microfluidic network. Typically, the system requires an operator and is limited in throughput. 
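For droplet-in-oil isolation, a standard back-of-the-envelope model (a general statistical fact, not something stated in this article) treats cell loading as a Poisson process: if droplets capture on average λ cells each, the fraction containing exactly k cells is e^(−λ)·λ^k/k!. Dilute loading therefore trades throughput for single-cell purity:

```python
import math

def droplet_fraction(k, lam):
    """Poisson probability that a droplet holds exactly k cells,
    given an average of lam cells per droplet."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = 0.1  # dilute loading
empty = droplet_fraction(0, lam)     # ~0.905: most droplets are empty
singles = droplet_fraction(1, lam)   # ~0.090: usable single-cell droplets
doublets = droplet_fraction(2, lam)  # ~0.005: rare multiplets
```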
Genomics Techniques Single-cell genomics is heavily dependent on increasing the copies of DNA found in the cell so that there is enough statistical power for accurate sequencing. This has led to the development of strategies for whole genome amplification (WGA). Currently, WGA strategies can be grouped into three categories: controlled priming and PCR amplification (Adapter-Linker PCR WGA); random priming and PCR amplification (DOP-PCR, MALBAC); and random priming and isothermal amplification (MDA). The Adapter-Linker PCR WGA is reported in many comparative studies to be the best-performing technique for diploid single-cell mutation analysis, thanks to its very low allelic dropout effect, and for copy number variation profiling due to its low noise, both with aCGH and with NGS low-pass sequencing. This method is only applicable to human cells, both fixed and unfixed. One widely adopted WGA technique is called degenerate oligonucleotide–primed polymerase chain reaction (DOP-PCR). This method uses the well-established DNA amplification method PCR to try to amplify the entire genome using a large set of primers. Although simple, this method has been shown to have very low genome coverage. An improvement on DOP-PCR is multiple displacement amplification (MDA), which uses random primers and a high-fidelity enzyme, usually Φ29 DNA polymerase, to accomplish the amplification of larger fragments and greater genome coverage than DOP-PCR. Despite these improvements MDA still has a sequence-dependent bias (certain parts of the genome are amplified more than others because of their sequence, causing some parts to be overrepresented in the resulting genomic dataset). The method shown to largely avoid the biases seen in DOP-PCR and MDA is multiple annealing and looping–based amplification cycles (MALBAC). Bias in this system is reduced by only copying off the original DNA strand instead of making copies of copies. 
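The sequence-dependent bias noted above compounds exponentially in PCR-based amplification: if two loci duplicate with slightly different per-cycle efficiencies, their representation ratio after n cycles is ((1 + e1)/(1 + e2))^n. A toy illustration (the efficiency values are invented for the example, not measured ones):

```python
def copies_after_pcr(start_copies, efficiency, cycles):
    """Copies after `cycles` rounds of PCR; `efficiency` is the fraction
    of molecules duplicated each cycle (1.0 would be perfect doubling)."""
    return start_copies * (1.0 + efficiency) ** cycles

# Two loci from the same single cell, amplified with different efficiencies.
locus_a = copies_after_pcr(1, 0.95, 30)
locus_b = copies_after_pcr(1, 0.80, 30)
skew = locus_a / locus_b  # ~11x overrepresentation of locus A
```

Even a modest per-cycle difference thus produces an order-of-magnitude skew after 30 cycles, which is why uniform amplification is the central challenge for single-cell WGA.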
The main drawback to using MALBAC is that it has reduced accuracy compared to DOP-PCR and MDA due to the enzyme used to copy the DNA. Once amplified using any of the above techniques, the DNA can be sequenced using Sanger sequencing or next-generation sequencing (NGS). Purpose There are two major applications to studying the genome at the single-cell level. One application is to track the changes that occur in bacterial populations, where phenotypic differences are often seen. These differences are easily missed by bulk sequencing of a population, but can be observed in single-cell sequencing. The second major application is to study the genetic evolution of cancer. Since cancer cells are constantly mutating it is of great interest to researchers to see how cancers evolve at the level of individual cells. These patterns of somatic mutations and copy number aberration can be observed using single-cell sequencing. Transcriptomics Techniques Single-cell transcriptomics uses sequencing techniques similar to single-cell genomics or direct detection using fluorescence in situ hybridization. The first step in quantifying the transcriptome is to convert RNA to cDNA using reverse transcriptase so that the contents of the cell can be sequenced using NGS methods as was done in genomics. Once converted, there is not enough cDNA to be sequenced so the same DNA amplification techniques discussed in single-cell genomics are applied to the cDNA to make sequencing possible. Alternatively, fluorescent compounds attached to RNA hybridization probes are used to identify specific sequences and sequential application of different RNA probes will build up a comprehensive transcriptome. Purpose The purpose of single-cell transcriptomics is to determine what genes are being expressed in each individual cell. 
The transcriptome is often used to quantify gene expression instead of the proteome because of the difficulty currently associated with amplifying protein levels sufficiently to make them convenient to study. There are three major reasons gene expression has been studied using this technique: to study gene dynamics, RNA splicing, and for cell typing. Gene dynamics are usually studied to determine what changes in gene expression affect different cell characteristics. For example, this type of transcriptomic analysis has often been used to study embryonic development. RNA splicing studies are focused on understanding the regulation of different transcript isoforms. Single-cell transcriptomics has also been used for cell typing, where the genes expressed in a cell are used to identify and classify different types of cells. The main goal in cell typing is to find a way to determine the identity of cells that do not express known genetic markers. RNA expression can serve as a proxy for protein abundance. However, protein abundance is governed by the complex interplay between RNA expression and post-transcriptional processes. While more challenging technically, translation can be monitored by ribosome profiling in single cells. Proteomics Techniques There are three major approaches to single-cell proteomics: antibody-based methods, fluorescent protein-based methods, and mass spectroscopy-based methods. Antibody–based methods The antibody based methods use designed antibodies to bind to proteins of interest, allowing the relative abundance of multiple individual targets to be identified by one of several different techniques. Imaging: Antibodies can be bound to fluorescent molecules such as quantum dots or tagged with organic fluorophores for detection by fluorescence microscopy. Since different colored quantum dots or unique fluorophores are attached to each antibody it is possible to identify multiple different proteins in a single cell. 
Quantum dots can be washed off of the antibodies without damaging the sample, making it possible to do multiple rounds of protein quantification using this method on the same sample. For the methods based on organic fluorophores, the fluorescent tags are attached by a reversible linkage such as a DNA-hybrid (that can be melted/dissociated under low-salt conditions) or chemically inactivated, allowing multiple cycles of analysis, with 3-5 targets quantified per cycle. These approaches have been used for quantifying protein abundance in patient biopsy samples (e.g. cancer) to map variable protein expression in tissues and/or tumors, and to measure changes in protein expression and cell signaling in response to cancer treatment. Mass Cytometry: rare metal isotopes, not normally found in cells or tissues, can be attached to the individual antibodies and detected by mass spectrometry for simultaneous and sensitive identification of proteins. These techniques can be highly multiplexed for simultaneous quantification of many targets (panels of up to 38 markers) in single cells. Antibody-DNA quantification: another antibody-based method converts protein levels to DNA levels. The conversion to DNA makes it possible to amplify protein levels and use NGS to quantify proteins. In one such approach, two antibodies are selected for each protein needed to be quantified. The two antibodies are then modified to have single stranded DNA connected to them that are complementary. When the two antibodies bind to a protein the complementary strands will anneal and produce a double stranded segment of DNA that can then be amplified using PCR. Each pair of antibodies designed for one protein is tagged with a different DNA sequence. The DNA amplified from PCR can then be sequenced, and the protein levels quantified. 
Mass spectrometry–based methods In mass spectroscopy-based proteomics there are three major steps needed for peptide identification: sample preparation, separation of peptides, and identification of peptides. Several groups have focused on oocytes or very early cleavage-stage cells, since these cells are unusually large and provide enough material for analysis. Another approach, single-cell proteomics by mass spectrometry (SCoPE-MS), has quantified thousands of proteins in mammalian cells with typical cell sizes (diameter of 10-15 μm) by combining carrier cells and single-cell barcoding. The second generation, SCoPE2, increased the throughput by automated and miniaturized sample preparation; it also improved quantitative reliability and proteome coverage by data-driven optimization of LC-MS/MS and peptide identification. The sensitivity and consistency of these methods have been further improved by prioritization and massively parallel sample preparation in nanoliter-size droplets. Another direction for single-cell protein analysis is based on a scalable framework of multiplexed data-independent acquisition (plexDIA), which saves time through parallel analysis of both peptide ions and protein samples, thereby realizing multiplicative gains in throughput. The separation of differently sized proteins can be accomplished by using capillary electrophoresis (CE) or liquid chromatography (LC) (using liquid chromatography with mass spectroscopy is also known as LC-MS). This step gives order to the peptides before quantification using tandem mass spectroscopy (MS/MS). The major difference between quantification methods is that some use labels on the peptides, such as tandem mass tags (TMT) or dimethyl labels, to identify which cell a certain protein came from (proteins coming from each cell have a different label), while others do not use labels but rather quantify cells individually. 
The mass spectroscopy data is then analyzed by running data through databases that count the peptides identified to quantify protein levels. These methods are very similar to those used to quantify the proteome of bulk cells, with modifications to accommodate the very small sample volume. Ionization techniques used in mass spectrometry-based single-cell analysis A huge variety of ionization techniques can be used to analyze single cells. The choice of ionization method is crucial for analyte detection: it can determine which types of compounds are ionizable and in which state they appear, e.g., the charge and possible fragmentation of the ions. A few examples of ionization are mentioned in the paragraphs below. Nano-DESI One of the possible ways to measure the content of single cells is nano-DESI (nanospray desorption electrospray ionization). Unlike desorption electrospray ionization, which is a desorption technique, nano-DESI is a liquid extraction technique that enables the sampling of small surfaces and is therefore suitable for single-cell analysis. In nano-DESI, two fused silica capillaries are set up in a V-shaped form, closing an angle of approximately 85 degrees. The two capillaries touch, so a liquid bridge can be formed between them, enabling the sampling of surfaces as small as a single cell. The primary capillary delivers the solvent to the sample surface where the extraction happens, and the secondary capillary directs the solvent with extracted molecules to the MS inlet. Nano-DESI mass spectrometry (MS) enables sensitive molecular profiling and quantification of endogenous species, down to a few hundred femtomoles in single cells, in a higher-throughput manner. Lanekoff et al. identified 14 amino acids, 6 metabolites, and several lipid molecules from single cheek cells using nano-DESI MS. 
LAESI In Laser ablation electrospray ionization (LAESI), a laser is used to ablate the surface of the sample and the emitted molecules are ionized in the gas phase by charged droplets from electrospray. Similar to DESI the ionization happens in ambient conditions. Anderton et al. used this ionization technique coupled to a Fourier transform mass spectrometer to analyze 200 single cells of Allium cepa (red onion) with high spatial resolution. SIMS Secondary-ion mass spectrometry (SIMS) is a technique similar to DESI, but while DESI is an ambient ionization technique, SIMS happens in vacuum. The solid sample surface is bombarded by a highly focused beam of primary ions. As they hit the surface, molecules are emitted from the surface and ionized. The choice of primary ions determines the size of the beam and also the extent of ionization and fragmentation. Pareek et al. performed metabolomics to trace how purines are synthesized within purinosomes and used isotope labeling and SIMS imaging to directly observe hotspots of metabolic activity within frozen HeLa cells. MALDI In matrix-assisted laser desorption and ionization (MALDI), the sample is incorporated in a chemical matrix that is capable of absorbing energy from a laser. Similar to SIMS, ionization happens in vacuum. Laser irradiation ablates the matrix material from the surface and results in charged gas phase matrix particles, with the analyte molecules ionized from this charged chemical matrix. Liu et al. used MALDI-MS to detect eight phospholipids from single A549 cells. MALDI MS imaging can be used for spatial metabolomics and single-cell analysis. Purpose The purpose of studying the proteome is to better understand the activity of proteins at the single-cell level. Since proteins are responsible for determining how the cell acts, understanding the proteome of single cells gives the best understanding of how a cell operates, and how gene expression changes in a cell due to different environmental stimuli. 
Although transcriptomics has the same purpose as proteomics, it is not as accurate at determining gene expression in cells, as it does not take into account post-transcriptional regulation (not all messenger RNA transcripts are actually translated into proteins). Transcriptomics is still important, of course, as studying the difference between RNA levels and protein levels can give insight regarding which genes are post-transcriptionally regulated. Metabolomics Techniques There are four major methods used to quantify the metabolome of single cells: fluorescence-based detection, fluorescence biosensors, FRET biosensors, and mass spectroscopy. The first three methods listed use fluorescence microscopy to detect molecules in a cell. Usually these assays use small fluorescent tags attached to molecules of interest; however, this has been shown to be too invasive for single-cell metabolomics, as it alters the activity of the metabolites. The current solution to this problem is to use fluorescent proteins which act as metabolite detectors, fluorescing whenever they bind to a metabolite of interest. Mass spectroscopy is becoming the most frequently used method for single-cell metabolomics. Its advantages are that there is no need to develop fluorescent proteins for all molecules of interest, and it is capable of detecting metabolites in the femtomole range. Similar to the methods discussed in proteomics, there has also been success in combining mass spectroscopy with separation techniques such as capillary electrophoresis to quantify metabolites. This method is also capable of detecting metabolites present in femtomole concentrations. Another method utilizing capillary microsampling combined with mass spectrometry with ion mobility separation has been demonstrated to enhance the molecular coverage and ion separation for single-cell metabolomics. 
Researchers are trying to develop a technique that can fulfil what current techniques are lacking: high throughput, higher sensitivity for metabolites that have a lower abundance or low ionization efficiencies, good replicability, and quantification of metabolites. Purpose The purpose of single-cell metabolomics is to gain a better understanding at the molecular level of major biological topics such as cancer, stem cells, aging, and the development of drug resistance. In general the focus of metabolomics is mostly on understanding how cells deal with environmental stresses at the molecular level, and on giving a more dynamic understanding of cellular functions. Reconstructing developmental trajectories Single-cell transcriptomic assays have allowed reconstruction of developmental trajectories. Branching of these trajectories describes cell differentiation. Various methods have been developed for reconstructing branching developmental trajectories from single-cell transcriptomic data. They use various advanced mathematical concepts, from optimal transportation to principal graphs. Some software libraries for reconstruction and visualization of lineage differentiation trajectories are freely available online. Cell–cell interaction Cell–cell interactions are characterized by stable and transient interactions. See also Cell sorting Micro-arrays for mass spectrometry Single-cell variability CITE-Seq References Further reading Analytical chemistry Biochemistry Cell biology Laboratory techniques Scientific techniques
Single-cell analysis
Chemistry,Biology
4,486
68,259,691
https://en.wikipedia.org/wiki/Log%20cradle%20container
A log cradle container is a specialized, open-top and open-ended intermodal container designed for carrying (or cradling) logs. This configuration allows it to be loaded from the open top by means of a loader, and the logs can protrude from the ends. Like a regular 20-foot container it has eight twistlocks. From where the container is loaded with logs it can be put onto a truck and then on a flat wagon after a short road trip. At the end of the train journey it may be put on a truck to continue to a further destination. The use of the container saves unloading the truck, loading the flat wagon, unloading the flat wagon and loading a truck. In other words, it can be handled like any other intermodal container. This container was designed jointly by KiwiRail and Royal Wolf. References Containers
Log cradle container
Physics
178
2,466,640
https://en.wikipedia.org/wiki/Mittag-Leffler%27s%20theorem
In complex analysis, Mittag-Leffler's theorem concerns the existence of meromorphic functions with prescribed poles. Conversely, it can be used to express any meromorphic function as a sum of partial fractions. It is sister to the Weierstrass factorization theorem, which asserts existence of holomorphic functions with prescribed zeros. The theorem is named after the Swedish mathematician Gösta Mittag-Leffler, who published versions of the theorem in 1876 and 1884. Theorem Let $U$ be an open set in $\mathbb{C}$ and $E \subset U$ be a subset whose limit points, if any, occur on the boundary of $U$. For each $a$ in $E$, let $p_a(z)$ be a polynomial in $1/(z-a)$ without constant coefficient, i.e. of the form $p_a(z) = \sum_{n=1}^{N_a} \frac{c_{a,n}}{(z-a)^n}.$ Then there exists a meromorphic function $f$ on $U$ whose poles are precisely the elements of $E$ and such that for each such pole $a \in E$, the function $f(z) - p_a(z)$ has only a removable singularity at $a$; in particular, the principal part of $f$ at $a$ is $p_a(z)$. Furthermore, any other meromorphic function $g$ on $U$ with these properties can be obtained as $g = f + h$, where $h$ is an arbitrary holomorphic function on $U$. Proof sketch One possible proof outline is as follows. If $E$ is finite, it suffices to take $f(z) = \sum_{a \in E} p_a(z)$. If $E$ is not finite, consider the finite sum $S_F(z) = \sum_{a \in F} p_a(z)$ where $F$ is a finite subset of $E$. While the $S_F(z)$ may not converge as $F$ approaches $E$, one may subtract well-chosen rational functions with poles outside of $U$ (provided by Runge's theorem) without changing the principal parts of the $S_F(z)$ and in such a way that convergence is guaranteed. Example Suppose that we desire a meromorphic function with simple poles of residue 1 at all positive integers. With notation as above, letting $p_k(z) = \frac{1}{z-k}$ and $E = \mathbb{Z}^+$, Mittag-Leffler's theorem asserts the existence of a meromorphic function $f$ with principal part $p_k(z)$ at $z = k$ for each positive integer $k$. More constructively we can let $f(z) = \sum_{k=1}^\infty \frac{z}{k(z-k)}.$ This series converges normally on any compact subset of $\mathbb{C} \setminus \mathbb{Z}^+$ (as can be shown using the M-test) to a meromorphic function with the desired properties.
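The standard construction $f(z) = \sum_{k \ge 1} \frac{z}{k(z-k)}$, which has a simple pole of residue 1 at each positive integer, can be checked numerically; the sketch below (with an arbitrary truncation length) estimates the residue at one of the poles.

```python
# Numerical sanity check of f(z) = sum_{k>=1} z/(k(z-k)), which should
# have a simple pole of residue 1 at each positive integer.
def f(z, terms=5000):
    return sum(z / (k * (z - k)) for k in range(1, terms + 1))

# Residue at z = 3: lim_{z->3} (z - 3) f(z), estimated with a small offset.
eps = 1e-6
residue = eps * f(3 + eps)        # ≈ 1
```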
Pole expansions of meromorphic functions Here are some examples of pole expansions of meromorphic functions: $\pi \cot(\pi z) = \frac{1}{z} + \sum_{k=1}^\infty \frac{2z}{z^2 - k^2}$ and $\frac{\pi}{\sin(\pi z)} = \frac{1}{z} + \sum_{k=1}^\infty \frac{(-1)^k\, 2z}{z^2 - k^2}.$ See also Riemann–Roch theorem Liouville's theorem Mittag-Leffler condition of an inverse limit Mittag-Leffler summation Mittag-Leffler function References External links Theorems in complex analysis
Mittag-Leffler's theorem
Mathematics
463
42,353,094
https://en.wikipedia.org/wiki/Netcode
Netcode is a blanket term most commonly used by gamers relating to networking in online games, often referring to synchronization issues between clients and servers. Players often blame "bad netcode" when they experience lag or when their inputs are dropped. Common causes of such issues include high latency between server and client, packet loss, network congestion, and external factors independent of network quality such as frame rendering time or inconsistent frame rates. Netcode may be designed to uphold a synchronous and seamless experience between users despite these networking challenges. Netcode types Unlike a local game where the inputs of all players are executed instantly in the same simulation or instance of the game, in an online game there are several parallel simulations (one for each player) where the inputs from their respective players are received instantly, while the inputs for the same frame from other players arrive with a certain delay (greater or lesser depending on the physical distance between the players, the quality and speed of the players' network connections, etc.). During an online match, games must receive and process players' input within a certain time for each frame (equal to 16.67 ms per frame at 60 FPS), and if a remote player's input for a particular frame (for example, frame number 10) arrives when another one is already running (for example, frame number 20, 166.67 ms later), desynchronization between player simulations is produced. There are two main solutions to resolving this conflict and making the game run smoothly: Delay-based The classic solution to this problem is the use of a delay-based netcode. When the inputs of a remote player arrive late, the game delays the inputs of the local player accordingly to synchronize the two inputs and run them simultaneously. This added delay can be disruptive for players (especially when latency is high), but overall the change is not very noticeable.
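A delay-based scheme can be sketched as a simple input queue: every local input is scheduled a few frames ahead, buying time for the matching remote input to arrive. The three-frame window and field names below are arbitrary illustrative choices, not any particular game's implementation.

```python
from collections import deque

DELAY_FRAMES = 3  # arbitrary buffer window for this sketch

class DelayBuffer:
    """Queue local inputs so they execute DELAY_FRAMES later, in lockstep
    with remote inputs that need that long to arrive over the network."""
    def __init__(self):
        self.queue = deque()

    def push(self, frame, button):
        self.queue.append((frame + DELAY_FRAMES, button))

    def pop_due(self, current_frame):
        due = []
        while self.queue and self.queue[0][0] <= current_frame:
            due.append(self.queue.popleft()[1])
        return due

buf = DelayBuffer()
buf.push(0, "punch")
early = buf.pop_due(2)   # [] — still buffered
ready = buf.pop_due(3)   # ["punch"] — executes three frames later
```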
However, these delays can be inconsistent due to sudden fluctuations in current latency. Should the latency between players exceed an established buffer window for the remote player, the game must wait, causing the screens to "freeze". This occurs because a delay-based netcode does not allow the simulation to continue until it receives the inputs from all the players in the frame in question. This variable delay causes an inconsistent and unresponsive experience compared to offline play (or to a LAN game), and can negatively affect player performance in timing-sensitive and fast-paced genres such as fighting games. Rollback An alternative system to the previous netcode is rollback netcode. This system immediately runs the inputs of the local player (so that they are not delayed as with delay-based netcode), as if it were an offline game, and predicts the inputs of the remote player or players instead of waiting for them (assuming they will make the same input as the one in the previous tick). Once these remote inputs arrive (suppose, e.g., 45 ms later), the game can act in two ways: if the prediction is correct, the game continues as-is, in a totally continuous way; if the prediction was incorrect, the game state is reverted and gameplay continues from the corrected state, seen as a "jump" to the other player or players (equivalent to 45 ms, following the example). Some games utilize a hybrid solution in order to disguise these "jumps" (which can become problematic as latency between players grows, as there is less and less time to react to other players' actions) with a fixed input delay and then rollback being used. Rollback is quite effective at concealing lag spikes or other issues related to inconsistencies in the users' connections, as predictions are often correct and players do not even notice. 
Nevertheless, this system can be troublesome whenever a client's game slows down (usually due to overheating), since it can lead to the machines exchanging ticks at unequal rates. This generates visual glitches that interrupt the gameplay of those players that receive inputs at a slower pace, while the player whose game is slowed down will have an advantage over the rest by receiving inputs from others at a normal rate (this is known as one-sided rollback). To address this uneven input flow (and consequently, an uneven frame flow as well), there are standard solutions such as waiting for the late inputs to arrive at all machines (similar to the delay-based netcode model) or more ingenious solutions such as the one currently used in Skullgirls, which consists of the systematic omission of one frame every seven so that when the game encounters the problem in question it can recover the skipped frames in order to gradually synchronize the instances of the game on the various machines. Rollback netcode requires the game engine to be able to turn back its state, which requires modifications to many existing engines, and therefore the implementation of this system can be problematic and expensive in AAA-type games (which usually have a solid engine and a high-traffic network), as commented on by Dragon Ball FighterZ producer Tomoko Hiroki, among others. Although this system is often associated with a peer-to-peer architecture and fighting games, there are forms of rollback networking that are also commonly used in client-server architectures (for instance, aggressive schedulers found in database management systems include rollback functionality) and in other video game genres. There is a popular MIT-licensed library named GGPO designed to help implement rollback networking in games (mainly fighting games).
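The predict-and-correct cycle behind rollback can be sketched with a toy deterministic simulation. All names and the "repeat the last input" prediction rule below are illustrative only; this is not GGPO's actual API.

```python
def step(state, local_input, remote_input):
    # Toy deterministic simulation: each player's position advances by
    # their input for that tick.
    return (state[0] + local_input, state[1] + remote_input)

class RollbackSession:
    def __init__(self):
        self.state = (0, 0)
        self.history = []    # (state_before_tick, local_input) per tick
        self.predicted = 0   # assume the remote player repeats this input

    def tick(self, local_input):
        # Run immediately using the predicted remote input (no waiting).
        self.history.append((self.state, local_input))
        self.state = step(self.state, local_input, self.predicted)

    def receive_remote(self, tick_index, remote_input):
        if remote_input == self.predicted:
            return  # prediction was right; nothing to correct
        # Misprediction: rewind to the saved state and re-simulate,
        # using the corrected input as the new prediction from here on.
        state, _ = self.history[tick_index]
        for i in range(tick_index, len(self.history)):
            state = step(state, self.history[i][1], remote_input)
        self.state = state
        self.predicted = remote_input

session = RollbackSession()
for _ in range(3):
    session.tick(1)           # local player holds "forward"
session.receive_remote(0, 2)  # remote input for tick 0 finally arrives
```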
Potential causes of netcode issues Latency Latency is unavoidable in online games, and the quality of the player's experience is strictly tied to this (the more latency there is between players, the greater the feeling that the game is not responsive to their inputs). The latency of the players' network (which is largely out of a game's control) is not the only factor in question, but also the latency inherent in the way the game simulations are run. There are several lag compensation methods used to disguise or cope with latency (especially with high latency values). Tick rate A single update of a game simulation is known as a tick. The rate at which the simulation is run on a server is often referred to as the server's tickrate; this is essentially the server equivalent of a client's frame rate, absent any rendering system. Tickrate is limited by the length of time it takes to run the simulation, and is often intentionally limited further to reduce instability introduced by a fluctuating tickrate, and to reduce CPU and data transmission costs. A lower tickrate increases latency in the synchronization of the game simulation between the server and clients. Tickrate for games like first-person shooters often ranges from 128 ticks per second (as in Valorant) or 64 ticks per second (in games like Counter-Strike: Global Offensive and Overwatch), down to 30 ticks per second (as in Fortnite and Battlefield V's console edition) or 20 ticks per second (the controversial cases of Call of Duty: Modern Warfare, Call of Duty: Warzone and Apex Legends). A lower tickrate also naturally reduces the precision of the simulation, which itself might cause problems if taken too far, or if the client and server simulations are running at significantly different rates.
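The decoupling of tickrate from render frame rate described above is commonly implemented with a fixed-timestep accumulator loop; the sketch below uses hypothetical constants to show the simulation advancing at a fixed 64 Hz regardless of how fast frames are rendered.

```python
TICK_RATE = 64            # ticks per second, as in several shooters
DT = 1.0 / TICK_RATE      # fixed simulation timestep (15.625 ms)

def advance(accumulator, frame_time, tick_count):
    """Accumulate rendered-frame time and run as many fixed ticks as fit."""
    accumulator += frame_time
    while accumulator >= DT:
        accumulator -= DT
        tick_count += 1
    return accumulator, tick_count

acc, ticks = 0.0, 0
for _ in range(128):                  # one second of 128 FPS rendering
    acc, ticks = advance(acc, 1.0 / 128, ticks)
# ticks == 64: the simulation ran at its fixed rate despite faster rendering
```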
Because of limitations in the amount of available bandwidth and the CPU time that's taken by network communication, some games prioritize certain vital communications while limiting the frequency and priority of less important information. As with tickrate, this effectively increases synchronization latency. Game engines may limit the number of times that updates (of a simulation) are sent to a particular client and/or particular objects in the game's world in addition to reducing the precision of some values sent over the network to help with bandwidth use. This lack of precision may in some instances be noticeable. Software bugs Various simulation synchronization errors between machines can also fall under the "netcode issues" blanket. These may include bugs which cause the simulation to proceed differently on one machine than on another, or which cause some things to not be communicated when the user perceives that they ought to be. Traditionally, real-time strategy games (such as Age of Empires) have used lockstep protocol peer-to-peer networking models where it is assumed the simulation will run exactly the same on all clients; if, however, one client falls out of step for any reason, the desynchronization may compound and be unrecoverable. Transport layer protocol and communication code: TCP and UDP A game's choice of transport layer protocol (and its management and coding) can also affect perceived networking issues. If a game uses a Transmission Control Protocol (TCP), there will be increased latency between players. This protocol is based on the connection between two machines, in which they can exchange data and read it. These types of connections are very reliable, stable, ordered and easy to implement. 
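Desynchronization in lockstep games, as described above, is commonly detected by having each client periodically hash its simulation state and compare checksums with its peers. The state layout and field names in this sketch are hypothetical.

```python
import hashlib

def state_checksum(state):
    # Hash a canonical serialization of the simulation state; clients
    # exchange these every N ticks and divergence is flagged on mismatch.
    blob = repr(sorted(state.items())).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

client_a = {"unit_7_hp": 100, "gold": 350}
client_b = {"unit_7_hp": 100, "gold": 350}
client_c = {"unit_7_hp": 99,  "gold": 350}   # drifted simulation

in_sync = state_checksum(client_a) == state_checksum(client_b)   # True
desync  = state_checksum(client_a) != state_checksum(client_c)   # True
```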
These connections, however, are not quite suited to the network speeds that fast-action games require, as this type of protocol automatically groups data into packets (which will not be sent until a certain volume of information is reached, unless this algorithm, known as Nagle's algorithm, is disabled), which are sent through the connection established between the machines rather than directly (sacrificing speed for reliability). This type of protocol also tends to respond very slowly whenever a packet is lost, or when packets arrive in an incorrect order or duplicated, which can be very detrimental to a real-time online game (this protocol was not designed for this type of software). If the game instead uses a User Datagram Protocol (UDP), the connection between machines will be very fast, because instead of establishing a connection between them the data will be sent and received directly. This protocol is much simpler than the previous one, but it lacks its reliability and stability and requires the implementation of custom code to handle functions indispensable for the communication between machines that are otherwise handled by TCP (such as data division through packets, automatic packet loss detection, etc.); this increases the engine's complexity and might itself lead to issues. See also Online game Lag (online gaming) GGPO References Multiplayer video games Servers (computing) Video game development Video game platforms
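The UDP approach can be seen in a minimal loopback sketch (the payload format here is hypothetical): the datagram is sent with no connection setup, and handling loss, duplication, or reordering is left entirely to the game code.

```python
import socket

# UDP: no handshake, no delivery or ordering guarantees — datagrams are
# handed straight to the network, which is why fast-paced games prefer it.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"frame=42;buttons=A", addr)   # fire-and-forget

data, _ = receiver.recvfrom(2048)            # the raw datagram payload
sender.close()
receiver.close()
```

Games that do use TCP typically disable Nagle's algorithm with `sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)` to avoid the packet-batching delay described above.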
Netcode
Technology
2,155
34,405,333
https://en.wikipedia.org/wiki/Biosecurity%20in%20Australia
National biosecurity in Australia is governed and administered by two federal government departments, the Department of Health and the Department of Agriculture, Fisheries and Forestry. The Biosecurity Act 2015 (C'wealth) and related legislation are administered by the two departments and manage biosecurity risks at the national border. The Act aims to manage biosecurity risks to human health, agriculture, native flora and fauna, and the environment. It also covers Australia's international rights and obligations, and lists specific diseases which are contagious and capable of causing severe harm to human health. Each state and territory has additional legislation and protocols to cover biosecurity in their jurisdiction (post-border), including the detection of pests and diseases that have breached the national border. The Intergovernmental Agreement on Biosecurity (IGAB) created a framework for governments to coordinate and identify priority areas of reform and action to build a stronger and more effective national biosecurity system, and established the National Biosecurity Committee (NBC) in 2012. Background The term "biosecurity" was initially used in a narrower sense, to describe preventative and quarantine procedures put in place to minimise the risk of damage to crops, livestock and the environment by invasive pests or diseases that might enter any location. However, the term has evolved to include the oversight and control of biological threats to people and industries as well, including those from pandemic diseases and bioterrorism, whatever or wherever the origin of the organism causing the damage. Biosecurity is now understood as a process involving a defined set of measures and management strategies, designed not only to stop undesirable organisms from getting into the country, but also to quickly discover and eradicate them, or, if eradication proves impossible, to reduce their impact as much as possible.
Australia is to some degree protected from exotic pests and diseases by its geographic isolation, but with its island form comes a huge length of border (the coastline), with the sixth longest coastline in the world, at . History of governance Legislation In 2015, the Biosecurity Act 2015 (Commonwealth) replaced the Quarantine Act 1908, which was wholly repealed on 16 June 2016 by the Biosecurity (Consequential Amendments and Transitional Provisions) Act 2015. The new Act was a major reform of the Quarantine Act, in particular in its strengthening and modernising of the existing framework of regulations governing biosecurity in Australia. New requirements included how the then Department of Agriculture and Water Resources would manage biosecurity risks associated with goods, people and vessels entering Australia. The Biosecurity Bill 2014 passed through parliament on 14 May 2015 with bipartisan support, as possibly "one of the most substantial and significant pieces of legislation to pass through Parliament during the term of the [Abbott] Government". The Act did not radically change operational functions, but described them more clearly, with the aim of being easier to use and of reducing the complexity of administration. The main change related to compliance and enforcement powers. As recommended by the Beale Review (One Biosecurity: A Working Partnership, Roger Beale et al., 2008) and the earlier Nairn Report, the Act effected a risk-based approach, but included several measures to manage unacceptable levels of biosecurity risk. Each State and Territory has either a single Biosecurity Act or a suite of biosecurity-related statutes to manage biosecurity within Australia.
Administration From August 2007 until September 2009, Biosecurity Australia, an agency of the Department of Agriculture, Fisheries and Forestry, provided science-based quarantine assessments and policy advice to protect plant and animal health in Australia, in order to protect the Australian agricultural economy and to enhance Australia's access to international animal- and plant-related markets. Import risk assessments (IRAs) by Biosecurity Australia included a variety of flora and fauna. In September 2009, a division of DAFF known as the Biosecurity Services Group took over its functions. DAFF became the Department of Agriculture in September 2013, followed by the Department of Agriculture and Water Resources in September 2015, and then the Department of Agriculture (2019–20), each of which was responsible for biosecurity. Current federal governance National biosecurity in Australia is governed and administered by two federal departments, the Department of Health and the Department of Agriculture, Water and the Environment. They administer and enforce the various pieces of legislation in the Biosecurity Act 2015 and related ordinances, determinations and instruments. Human health The Department of Health defines biosecurity as "all the measures taken to minimise the risk of infectious diseases caused by viruses, bacteria or other micro-organisms entering, emerging, establishing or spreading in Australia, potentially harming the Australian population, our food security and economy". These risks may enter Australia when people arrive from other places (whether on holiday or for any other reason), having developed infections through food, water, insect bites, or contact with animals or other people. Often the infection is not obvious, and the infected person is not aware of it themselves until they become unwell some time later.
Some of these diseases may be serious, and biosecurity measures are necessary to ensure that the infection does not spread throughout the population. The Act lists specific diseases (Listed Human Diseases, or LHDs) which are contagious and can cause significant harm to human health; these LHDs include: human influenza with pandemic potential plague severe acute respiratory syndrome (SARS) Middle East respiratory syndrome (MERS) smallpox viral haemorrhagic fevers (VHFs) yellow fever human coronavirus with pandemic potential Biosecurity Officers from the Department of Agriculture, Water and the Environment must be informed by any aircraft captain or ship's master should any of their passengers show signs of an infectious disease. Human biosecurity in Australia covers protective measures enforced at the border, travel information and warnings, the import and export of human remains, national public health emergency response planning at the borders, and Australia's international obligations, in particular the International Health Regulations (IHR). A Joint External Evaluation (JEE) following the 2013–2016 Western African Ebola virus epidemic showed that Australia has a very high level of response capacity. Australia's National Action Plan for Health Security 2019-2023 was developed to help implement the recommendations from the JEE. Management of ill travellers is one aspect of human biosecurity management; prevention of potential disease vectors (such as exotic mosquitoes) is another. COVID-19 pandemic One of the biggest threats to human health in the history of Australia arose with the COVID-19 pandemic in Australia in March 2020. The Federal Government under Scott Morrison invoked the Biosecurity Act 2015 to announce a state of emergency, and brought in various measures to restrict the movement of people in and out of Australia.
On 30 April 2021, following a dramatic rise in cases in a second wave of the COVID-19 pandemic in India, the Federal Government announced a ban on Australian citizens and permanent residents in India from entering Australia via any route, between 3 May and 15 May. Those caught returning from India to Australia via any route would be subject to punishment under the Biosecurity Act, with penalties for breaches including up to five years' jail, a fine of , or both. On 7 May 2021 Morrison announced that the flight ban would end on 15 May and that repatriation flights to the Northern Territory would start on this date. Agriculture Animals Animal biosecurity involves protecting livestock, wildlife, humans and the environment from new diseases or pests. Australia has remained free of many serious animal diseases, such as foot-and-mouth disease and avian influenza (bird flu), but the occurrence of one of these diseases would result in significant damage to the economy, as trade in the affected products would have to cease. Australia has already experienced animal disease events such as the 2007 Australian equine influenza outbreak, and bird flu being found on poultry farms in New South Wales, leading to widespread culling. New diseases in livestock, often first arising in wild species, may also affect human health, in which case they are known as zoonotic diseases. These include bird flu, SARS and Hendra virus, the effects of which can be deadly. In November 2016, white spot virus was detected on a prawn farm on the Logan River in south-east Queensland for the first time in the country. By March 2021 it was also being detected in Deception Bay and was widespread in Moreton Bay, in the Brisbane area. The federal government was reviewing its import requirements, and farmers and fishers were lobbying for the inclusion of a requirement that imported prawns should be cooked.
Plants Plant industries, in particular the wheat industry and also horticulture, wine, cotton and sugar industries, can be negatively impacted by pests and diseases, as they lead to poorer quality food, less of it, higher costs to produce it, and reduced trade. Australia has remained free of many of the most harmful pest species, such as citrus greening and varroa mite (with Australia the only continent free of this pest affecting honeybee productivity). Food safety The Department of Agriculture, Water and the Environment is also responsible for food safety in Australia. It works with industry and other government agencies, in particular the Department of Health, and Food Standards Australia New Zealand (FSANZ), to develop policy and food standards, and the regulatory system involves the governments of Australia, New Zealand and the Australian states and territories. The department administers relevant legislation at the Australian border, and imported food must meet Australia's biosecurity requirements under the Biosecurity Act 2015, as well as food safety requirements of the Imported Food Control Act 1992. Agricultural and environmental biosecurity coordination Intergovernmental Agreement on Biosecurity (IGAB) The Intergovernmental Agreement on Biosecurity (IGAB) was created in January 2012. It was an agreement between the federal, state and territory governments, with the exception of Tasmania, intended to "improve the national biosecurity system by identifying the roles and responsibilities of governments and outline the priority areas for collaboration to minimise the impact of pests and disease on Australia's economy, environment and the community". It was focussed on controlling animal and plant pests rather than human biosecurity, as it was considered that this aspect was already covered by existing agreements, and set out to improve collaboration and understanding of shared responsibilities among all parties, including industry stakeholders. 
The 2012 IGAB created a framework for governments to coordinate and identify priority areas of reform and action to build a stronger and more effective national biosecurity system. The agreement comprised two parts: the first part established the goal, objectives and principles of the system, as well as the purpose and scope of the agreement; the second part, the schedules, outlined the priority work areas for governments and their key decision-making committee, the NBC (National Biosecurity Committee). The work based on the IGAB led to the development of significant and sound national policy principles and frameworks, including the National Environmental Biosecurity Response Agreement (NEBRA). 2017 review An independent review of Australia's biosecurity system and the underpinning IGAB, undertaken in 2017 and resulting in the Priorities for Australia's biosecurity system report, noted that the "application of shared responsibility for biosecurity is difficult and challenging,... primarily because the roles and responsibilities of participants across the national biosecurity system are not clearly understood, accepted, or consistently recognised across the system by all involved". The review examined many aspects of the existing system. Excluded from the review were: biosecurity arrangements specific to human health; biosecurity Import Risk Analyses (BIRAs); comprehensive reviews of emergency response deeds; response plans, such as the Australian Veterinary Emergency Plan (AUSVETPLAN); matters to do with specific biosecurity legislation; and matters to do with Australia's international obligations relating to biosecurity. It explicitly stated that its recommendations were not intended to change or impact on human health arrangements in the health department or between the departments of agriculture and health.
The report, under a section titled "Market Access is key", said that Australia's world-class biosecurity system is a trade and economic asset, but that there was scope for improvement. The report named a number of challenges and topics needing future focus, such as environmental biosecurity (which includes both natural ecosystems and social amenities), which has been viewed as subordinate to agricultural biosecurity in the national biosecurity system and has thus received less funding. Among its recommendations was the appointment of a new position of Chief Community and Environmental Biosecurity Officer (CCEBO) within the environment department, to perform a national policy leadership role similar to the Chief Veterinary Officer and Chief Plant Protection Officer in the national biosecurity system. The report stated that Australia has a mixture of biosecurity strategies and policies that have been tailor-made for each jurisdiction, taxon and/or agency, and that an agreed national approach for prioritising exotic pest and disease risks is desirable to guide governments' investments. In the area of research, it concluded that the system "no longer [had] the required structure, focus or capacity to address existing and emerging national biosecurity challenges", with "many players but no captain". It recommended several steps for improved governance, including that the NBC should improve its transparency and accountability, such as by making more information publicly available. In all, it published 42 recommendations to improve Australia's biosecurity system. Managing biosecurity risk has become more challenging due to increasing risks, the changing nature of risks, and increases in associated management costs. Factors such as globalisation, international and interstate migration, climate change, tourism, and the increasing movement of goods are all contributing to increases in biosecurity risks.
While the IGAB and NBC had been pivotal in fostering improved government collaboration, there was room for the NBC to improve its transparency and accountability, making more information publicly available. The IGAB had provided a strong mandate for advancing national biosecurity capacity and capability, which critically impacts whole-of-economy and whole-of-government arrangements, affecting trade and market access, tourism, agricultural productivity, human health, environmental quality, biodiversity and social amenity. The report considered future challenges, funding measures, governance and performance measurement, listed 42 recommendations, outlined an implementation pathway for its recommendations, and described the potential features of a future system. In June 2018, the role of Chief Environmental Biosecurity Officer (CEBO) was created to oversee environmental biosecurity, with Ian Thompson appointed to the role. IGAB2 A second agreement was effected in January 2019, known as IGAB2, with all state and territory governments as signatories, following the review. National Biosecurity Committee (NBC) The National Biosecurity Committee (NBC) was established under the IGAB in 2012. The NBC is "responsible for managing a national, strategic approach to biosecurity threats relating to plant and animal pests and diseases, marine pests and aquatics, and the impact of these on agricultural production, the environment, community well-being and social amenity", with one of its core objectives being to promote cooperation, coordination and consistency among the various government agencies involved. The NBC is chaired by the Secretary of the Department of Agriculture, Water and the Environment (Andrew Metcalfe) and comprises up to two senior officials from the federal, state and territory primary industry and/or environment agencies and jurisdictions.
It provides advice on national biosecurity matters, and provides updates on progress towards implementing the recommendations of the 2017 Review to the Agriculture Senior Officials Committee. State-based agencies and legislation Summary of state-based legislation State and Territory Governments have authority for biosecurity within their jurisdiction and administer specific biosecurity legislation to manage pests and diseases, including the movement of goods, plants and animals between States that pose a biosecurity risk. The NSW, WA, Queensland and Tasmanian Governments have developed and passed consolidated Biosecurity Acts. The Australian Capital Territory Government has developed a framework for a new Act, which will closely align with the New South Wales legislation. The Government of South Australia is in the process of developing a new Act. ACT The Environment, Planning and Sustainable Development Directorate of the Australian Capital Territory is responsible for biosecurity. Two Acts provide the mechanisms "to protect the health and welfare of people and animals and to protect markets relating to animals and plants and associated products": the Animal Diseases Act 2005 and the Plant Diseases Act 2002, while the Pest Plants and Animals Act 2005 protects land and aquatic resources from threats posed by animal and plant pests in the ACT. Between 2017 and 2019, consultation took place on proposals for a new ACT Biosecurity Act, to manage biosecurity as a shared responsibility consistent with approaches taken by the other states and the Commonwealth. New South Wales The NSW Department of Primary Industries is the primary agency responsible for biosecurity in the state, executing its functions under the Biosecurity Act 2015 (NSW), which came into effect on 1 June 2017.
In addition, the Public Health Act 2010 was amended in September 2017 to expand the scope of public health orders relating to a few very serious notifiable (Category 4 and 5) conditions, such as MERS or Ebola, to enable people to be detained as a public health risk, where they do not cooperate with voluntary quarantine. The changes were made to bring the Public Health Act into line with the federal Biosecurity Act 2015. During the COVID-19 pandemic in Australia, a serious breach of biosecurity occurred when the cruise ship Ruby Princess was allowed to dock and its 2,777 passengers to disembark, despite some passengers having been diagnosed with COVID-19, with serious consequences. Australian Border Force is responsible for passport control and customs, while the federal Department of Agriculture is responsible for biosecurity; however, it is up to each state's health department to prevent illness in the community. Responsibility for the breakdown in communications will be determined by a later enquiry. Queensland Biosecurity Queensland, which is part of the Department of Agriculture and Fisheries, is responsible for biosecurity in the state. The state's Biosecurity Act 2014 and the Queensland Biosecurity Strategy 2018-2023 govern and guide the department's responsibilities with regard to biosecurity in Queensland. Queensland has had frequent biosecurity incursions affecting a wide range of crops and livestock including Fall armyworm, Myrtle rust, Panama TR4 Disease and Red Imported Fire Ants. Queensland Health liaises with the Department of Agriculture and Fisheries in biosecurity matters which relate to public health (whether by human or animal transmission, for example diphtheria), issues health alerts to the public and provides advice regarding travel and other restrictions on residents' activities relating to biosecurity risk. 
Northern Territory The Department of Primary Industry and Resources and the Department of Environment and Natural Resources are responsible for biosecurity in the Northern Territory of Australia. Banana freckle disease, Cucumber green mottle mosaic virus, browsing ant (Lepisiota frauenfeldi) and Asian honey bee have been recent threats to agriculture and the environment. The Northern Territory Biosecurity Strategy 2016-2026 was developed in order to address increasing biosecurity risks. South Australia Primary Industries and Regions SA (PIRSA) manages the risks related to animal and plant pests and diseases, food-borne illnesses, and misuse of rural chemicals in South Australia. , PIRSA is managing a review of current biosecurity legislation in South Australia, which has been covered by multiple pieces of legislation, with the aim of creating a new single and cohesive Biosecurity Act for the state based on the current policy developed by PIRSA. The discussion paper was published in 2019. SA Health, "the brand name for the health portfolio of services and agencies responsible to...the Minister for Health and Wellbeing", says that Biosecurity SA, under PIRSA, is responsible for managing the "risks and potential harm to the South Australian community, environment, and economy from pests and diseases". It cites a partnership known as "One Health", supported by the Zoonoses Working Group, which supports collaboration and coordination among stakeholders with regard to human, animal and environmental health. Tasmania The island state of Tasmania has extremely stringent biosecurity requirements. The Department of Primary Industries, Parks, Water and Environment (DPIPWE) is the parent department of the Biosecurity Tasmania agency. Tasmania's Biosecurity Act 2019 (assented to 26 August 2019) replaced seven separate Acts, whose regulations are still being applied until full implementation of the Act, expected around 2023. 
One of the key products of the Act was the creation of the Biosecurity Advisory Committee. The State has benefited from its geographic isolation but has seen a number of incursions more recently including Blueberry rust (with one incursion successfully eradicated), Myrtle Rust, European Red Fox (eradicated), Indian myna (eradicated) and Queensland fruit fly (eradicated). Victoria Agriculture Victoria, an agency of the Department of Jobs, Precincts and Regions (DJPR), is responsible for managing biosecurity in Victoria. The Executive Director, Biosecurity Services, is in charge of biosecurity. The Victorian Chief Plant Health Officer Unit (CPHO), which exercises powers provided by the Plant Biosecurity Act 2010 and Plant Biosecurity Regulations 2016, is the technical lead on plant health management in Victoria. Victoria's Chief Health Officer is also Chief Human Biosecurity Officer for Victoria. Western Australia The Biosecurity Council of Western Australia was established on 27 February 2008 as a specialist advisory group to the Minister for Agriculture and Food and the Director-General of the Department of Primary Industries and Regional Development, under the Biosecurity and Agriculture Management Act 2007 (BAM Act). The Biosecurity and Agriculture Management Regulations 2013 support the Act. The BAM Act replaced 16 older Acts and 27 sets of regulations with one Act and nine sets of regulations. Within the Department of Health, the State Health Coordinator and State Human Epidemic Controller form part of the Hazard Management Structure created by the State Emergency Management Committee (SEMC), which was established by the Emergency Management Act 2005 (EM Act). The State Hazard Plan was created in 2019. 
CSIRO The Commonwealth Scientific and Industrial Research Organisation (CSIRO), the government agency responsible for scientific research, collaborates with the relevant government departments, as well as industry, universities and other international agencies, to help protect Australian people, livestock, plants and the environment. In 2014, CSIRO produced an 87-page document titled Australia's Biosecurity Future: Preparing for Future Biological Challenges. Past and present threats 2020: Coronavirus On 18 March 2020, a human biosecurity emergency was declared in Australia owing to the risks to human health posed by the COVID-19 pandemic, after the National Security Committee met the previous day. The Biosecurity Act 2015 specifies that the Governor-General may declare such an emergency exists if the Health Minister (at the time Greg Hunt) is satisfied that "a listed human disease is posing a severe and immediate threat, or is causing harm, to human health on a nationally significant scale". This gives the Minister sweeping powers, including imposing restrictions or preventing the movement of people and goods between specified places, and evacuations. The Biosecurity (Human Biosecurity Emergency) (Human Coronavirus with Pandemic Potential) Declaration 2020 was declared by the Governor-General, David Hurley, under Section 475 of the Act. The Act allows such a declaration to remain in force for only three months, but it may be extended for a further three if the Governor-General is satisfied that it is required. The Biosecurity (Human Biosecurity Emergency) (Human Coronavirus with Pandemic Potential) (Emergency Requirements) Determination 2020, made by the Health Minister on the same day, forbids international cruise ships from entering Australian ports before 15 April 2020. 
On 25 March 2020, the Health Minister made a second determination, the Biosecurity (Human Biosecurity Emergency) (Human Coronavirus with Pandemic Potential) (Overseas Travel Ban Emergency Requirements) Determination 2020, which "forbids Australian citizens and permanent residents from leaving Australian territory by air or sea as a passenger". On 25 April 2020, the Biosecurity (Human Biosecurity Emergency) (Human Coronavirus with Pandemic Potential) (Emergency Requirements—Public Health Contact Information) Determination 2020, made under subsection 477(1) of the Act, was signed into law by the Health Minister. The purpose of the new legislation is "to make contact tracing faster and more effective by encouraging public acceptance and uptake of COVIDSafe", COVIDSafe being the new mobile app created for the purpose. The function of the app is to record contact between any two people who both have the app on their phones when they come within of each other. The encrypted data would remain on the phone for 21 days if no contact with a person with confirmed COVID-19 is logged. , the (federal) Department of Health has a page devoted to the pandemic, which is updated daily. The state and territory governments used existing legislation relating to public health emergencies in order to bring in various measures in March. In South Australia, a public health emergency was declared on 15 March 2020, under Section 87 of the Public Health Act 2011 (SA). SA Health is responsible for the provision, maintenance and coordination of health services under the Emergency Management Act 2004 and the State Emergency Management Plan (SEMP). A dedicated web page to provide information for the community and health professionals was created, with linked pages to key information updated daily. 
On 27 March 2020, using the Emergency Management Act 2004, the State Coordinator, Commissioner of South Australia Police Grant Stevens, made a direction prohibiting gatherings of more than 10 people and imposing a limit of one person per 4 square metres. In Victoria, a state of emergency was declared on 16 March under the Public Health and Wellbeing Act 2008 (Vic), allowing health officials to "detain people, search premises without a warrant, and force people or areas into lockdown if it is considered necessary to protect public health". The state of emergency was for four weeks to 13 April, and on 12 April was extended by four weeks to 11 May. On 18 March 2020, New South Wales used Section 7 of its Public Health Act 2010 to require the immediate cancellation of major events with more than 500 people outdoors, and more than 100 people indoors. NSW Health has a page dedicated to COVID-19. See also Australian Plague Locust Commission Centre of Excellence for Biosecurity Risk Analysis (CEBRA), at the University of Melbourne Food safety in Australia Food Standards Australia New Zealand List of Australian Commonwealth Government entities Footnotes References External links – Beta version of new government website (2020) – "part of the Farm Biosecurity Program, a joint initiative of Animal Health Australia (AHA) and Plant Health Australia (PHA), managed on behalf of members". Biosecurity Australia (2007–2009) Biosecurity Australia (archived) Department of Agriculture, Fisheries and Forestry (archived) Environment of Australia Agriculture in Australia + Health in Australia
Biosecurity in Australia
Environmental_science
5,755
48,010,172
https://en.wikipedia.org/wiki/VSL%20International
VSL International (for Vorspann System Losinger) is a specialist construction company founded in 1954. VSL contributes to engineering, building, repairing, upgrading and preserving transport infrastructure (bridges, tunnels, retained earth walls for roads), buildings and energy production facilities. Based in Switzerland, VSL is owned by French construction company Bouygues. VSL specialises in post-tensioned concrete, stay-cable systems and heavy lifting, while its subsidiary Intrafor focuses on ground engineering and foundations. The company has also developed its own proprietary systems, mostly related to post-tensioning and stay cables, and has 370 patents. History In 1943, the Swiss construction company Losinger started to build post-tensioned bridges and began the development of its own post-tensioning system. The patent of this wire-based system was registered in 1954 and Vorspann System Losinger was created to help develop this activity. The patent was first applied in 1956 for the construction of the Pont des Cygnes bridge in Yverdon, Switzerland. In 1966, VSL launched its strand post-tensioning system. In 1978, VSL's stay cable system was installed for the first time on the Liebrüti bridge in Kaiseraugst, Switzerland. The company developed internationally from the 1970s, and in 1991 VSL joined the Bouygues group following the purchase of Losinger. In 2001, VSL diversified its activity to ground engineering. Organization VSL is headquartered in Bern in Switzerland, where the company was founded. In 2015, Jean-Yves Mondon was appointed chief executive officer. VSL operates in more than 30 countries, mostly in Asia, Oceania, the Middle East, Europe and South America. 
Key figures Workforce: 4,000 employees Patents: 370 3 manufacturing plants (in China, Spain and Thailand) 1 technical centre with offices in Switzerland, Singapore, Hong Kong and Spain Activities VSL's activities are organized in 4 business lines to engineer, build, repair, upgrade and preserve transport infrastructure (bridges, tunnels, retained earth walls for roads), buildings and energy production facilities: Systems and technologies: post-tensioning systems, stay cables, damping systems for buildings and civil works, bearings and joints. Construction: bridges, buildings, containment structures, offshore structures, heavy lifting. Ground engineering: foundations, ground improvement, ground investigation, mechanically stabilised earth walls, ground anchors. Repair, strengthening and preservation, including structural diagnostic, upgrade and retrofitting (de-icing, fire protection…) and monitoring. Projects Among the most important projects VSL has carried out or taken a part in are: Ganter Bridge, in Valais, Switzerland (1980) Tsing Ma Bridge, Hong Kong (1994) Petronas Towers' sky bridge, in Kuala Lumpur, Malaysia (1995) Burj Al Arab hotel (1997) Stadium Australia, in Sydney, Australia (1998) Dubai Metro, UAE (2006) Venetian Macao resort hotel, China (2007) Second Gateway Bridge / Gateway Bridge duplication, in Brisbane, Australia (2010) Marina Bay Sands, Singapore (2010) Hodariyat Bridge, in Abu Dhabi, United Arab Emirates (2012) Baluarte Bridge, Mexico (2012), the highest cable-stayed bridge in the world Newmarket Viaduct replacement, in Auckland, New Zealand (2012) Queensferry Crossing, United Kingdom (2017) Bandra–Worli Sea Link, Mumbai, India (2009) Cable-stayed bridge on the Mumbai Metro, India (2012) HCMC Metro Line 1, Vietnam (2017) Other known projects include: The Dubai Mall, the world's largest shopping mall Incheon Bridge, South Korea Kai Tak Cruise Terminal, Hong Kong Nhật Tân Bridge, Vietnam Rạch Miễu Bridge, Vietnam Stonecutters Bridge, Hong Kong 
Tarban Creek Bridge, Australia Wadi Leban Bridge, Saudi Arabia VSL also provided heavy lifting (with hydraulic jacks) for fifteen segments of the CMS detector of the Large Hadron Collider. See also :Category:Cable-stayed bridges References External links VSL Swiss companies established in 1954 Bouygues Bridge companies Companies based in Bern Construction and civil engineering companies established in 1954 Concrete pioneers Construction and civil engineering companies of Switzerland Structural steel
VSL International
Engineering
869
48,270,439
https://en.wikipedia.org/wiki/Jeffrey%20Rose
Jeffrey Rose, CMH, is an American clinical hypnotist, lecturer, sleep specialist, addiction recovery coach, cable TV show host, NY State legislative coordinator of Start School Later and writer. Rose is the founder and director of the Advanced Hypnosis Center, which he established in New York City in 1999, and of the Advanced Hypnosis Center in Rockland County, N.Y., established in 2005. Rose is a colleague of Elena Beloff. At his suggestion, Beloff was cast as the hypnotist at the center of Philippe Parreno's multimedia art extravaganza "H{N0YPN(Y}OSIS". Education Rose received his bachelor's degree from New York University. He is certified by the International Medical Dental Hypnosis Association (IMDHA), National Guild of Hypnotists, and the International Association of Counselors and Therapists. He has received continued education at Integrative Healthcare Symposium. Television appearances Rose, who treats sleep problems in adolescents and adults, is the NY State legislative coordinator of Start School Later. Rose has appeared and been featured in stories on hypnosis in various media. Rose interviewed Dr. James Samuel Gordon, a world-renowned expert in mind-body medicine. Dr. Mark Hyman and David Perlmutter were guests on his Holistic Healing cable show. On April 21, 2016, Rose appeared as an expert on New York City's PIX11 News to discuss how political candidates use hypnosis techniques to sway voters. Radio interviews Rose was the guest on Dr. Ronald Hoffman's Intelligent Medicine Podcast on April 9, 2015 on the "Hypnosis as a tool for overcoming bad habits" episode. Published articles Since 2003, Rose has published many articles for health related publications in his areas of expertise. 
“Nutrition 101: Carbohydrates” from RECOVER Magazine by Jeffrey Rose, Clinical Hypnotist and Nutritionist “Dealing with Stress” from RECOVER Magazine May 2005 by Jeffrey Rose, Clinical Hypnotist “Sugar and Health” From RECOVER Magazine March 2005 by Jeffrey Rose, New York Hypnotist & Nutritionist “Smoking and Addiction” From RECOVER Magazine January 2005 by Jeffrey Rose, Clinical Hypnotist “Exercise and Recovery” From RECOVER Magazine May 2004 by Jeffrey Rose, Clinical Hypnotist and Nutritionist “Easing Recovery with Omega-3 Fatty Acid Supplementation” from RECOVER Magazine March 2004 by Jeffrey Rose, Clinical Hypnotist and Nutritionist “Sleep Hygiene for Health” RECOVER Magazine December 2004 by Jeffrey Rose, Clinical Hypnotist “Caffeine, Health and Recovery” from RECOVER Magazine July 2004 by Jeffrey Rose, Clinical Hypnotist and Nutritionist “Hypnotherapy: Tap Your Subconscious” from RECOVER Magazine November 2003 by Jeffrey Rose, CMH “HYPNOTHERAPY FOR YOUR PATIENTS” from PCI JOURNAL Volume 11 November 4, 2003 Rose has appeared or been featured on WPIX New York's “Dr. Steve Show,” CBS’ “The Early Show”, CNN American Morning news segment “Kick the Butt,” the “Tyra Banks Show” and Arise TV's Entertainment 360. Magazine articles Jeffrey Rose was featured in Martha Stewart Living's article on fear of flying, “Mind Over Matter", Men's Vogue, and New York's Promenade Magazine. Rose was also featured in The Observer's article "Countdown to Bliss". References External links Advanced Hypnosis Center of Rockland County American hypnotists American nutritionists Sleep researchers American motivational speakers Year of birth missing (living people) Living people New York University alumni
Jeffrey Rose
Biology
750
27,643,777
https://en.wikipedia.org/wiki/List%20of%20measuring%20instruments
A measuring instrument is a device to measure a physical quantity. In the physical sciences, quality assurance, and engineering, measurement is the activity of obtaining and comparing physical quantities of real-world objects and events. Established standard objects and events are used as units, and the process of measurement gives a number relating the item under study and the referenced unit of measurement. Measuring instruments, and formal test methods which define the instrument's use, are the means by which these relations of numbers are obtained. All measuring instruments are subject to varying degrees of instrument error and measurement uncertainty. These instruments may range from simple objects such as rulers and stopwatches to electron microscopes and particle accelerators. Virtual instrumentation is widely used in the development of modern measuring instruments. Time In the past, a common time measuring instrument was the sundial. Today, the usual measuring instruments for time are clocks and watches. For highly accurate measurement of time an atomic clock is used. Stopwatches are also used to measure time in some sports. Energy Energy is measured by an energy meter. Examples of energy meters include: Electricity meter An electricity meter measures energy directly in kilowatt-hours. Gas meter A gas meter measures energy indirectly by recording the volume of gas used. This figure can then be converted to a measure of energy by multiplying it by the calorific value of the gas. Power (flux of energy) A physical system that exchanges energy may be described by the amount of energy exchanged per time-interval, also called power or flux of energy. (see any measurement device for power below) For the ranges of power-values see: Orders of magnitude (power). Action Action describes energy summed up over the time a process lasts (time integral over energy). Its dimension is the same as that of an angular momentum. 
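The gas-meter conversion described above (metered volume multiplied by the calorific value of the gas) is simple arithmetic; here is a minimal sketch, in which the function names and the 38 MJ/m³ calorific value are illustrative assumptions rather than a standard:

```python
# Convert a gas-meter volume reading to energy:
# energy = volume consumed x calorific value of the gas.

def gas_energy_mj(volume_m3: float, calorific_value_mj_per_m3: float) -> float:
    """Return energy in megajoules from a metered gas volume."""
    return volume_m3 * calorific_value_mj_per_m3

def mj_to_kwh(energy_mj: float) -> float:
    """Convert megajoules to kilowatt-hours (1 kWh = 3.6 MJ)."""
    return energy_mj / 3.6

# Example: 100 m^3 of natural gas at an assumed 38 MJ/m^3
energy = gas_energy_mj(100.0, 38.0)  # 3800.0 MJ
print(f"{energy} MJ = {mj_to_kwh(energy):.1f} kWh")
```

Gas bills are often stated in kilowatt-hours, hence the second helper.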
A phototube provides a voltage measurement which permits the calculation of the quantized action (Planck constant) of light. (See also Photoelectric effect.) Geometry Dimensions (size) Length (distance) Length, distance, or range meter For the ranges of length-values see: Orders of magnitude (length) Area Planimeter For the ranges of area-values see: Orders of magnitude (area) Volume Buoyant weight (solids) Eudiometer, pneumatic trough (gases) Flow measurement devices (liquids) Graduated cylinder (liquids) Measuring cup (grained solids, liquids) Overflow trough (solids) Pipette (liquids) If the mass density of a solid is known, weighing allows the volume to be calculated. For the ranges of volume-values see: Orders of magnitude (volume) Angle Circumferentor Cross staff Goniometer Graphometer Inclinometer Mural instrument Protractor Quadrant Reflecting instruments Octant Reflecting circles Sextant Theodolite and total station Orientation in three-dimensional space See also the section about navigation below. Level Level (instrument) Laser line level Spirit level Direction Gyroscope Coordinates Coordinate-measuring machine Mechanics This includes basic quantities found in classical- and continuum mechanics; but strives to exclude temperature-related questions or quantities. Mass- or volume flow measurement Gas meter Mass flow meter Metering pump Water meter Speed or velocity (flux of length) Airspeed indicator LIDAR speed gun Radar speed gun, a Doppler radar device, using the Doppler effect for indirect measurement of velocity. Speedometer Tachometer (speed of rotation) Tachymeter Variometer (rate of climb or descent) Velocimetry (measurement of fluid velocity) For the ranges of speed-values see: Orders of magnitude (speed) Acceleration Accelerometer Mass Balance Check weigher measures precise weight of items in a conveyor line, rejecting underweight or overweight objects. 
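The phototube measurement mentioned at the start of this section can be turned into an estimate of the Planck constant via the photoelectric relation eV_stop = hf − φ: the slope of stopping voltage versus light frequency equals h/e. A sketch with synthetic data; the work function value and the data points are illustrative, not real measurements:

```python
# Estimate Planck's constant from phototube stopping voltages.
# Photoelectric effect: e * V_stop = h * f - phi, so the slope of
# V_stop versus frequency f is h / e.

E_CHARGE = 1.602176634e-19  # elementary charge, C
H_TRUE = 6.62607015e-34     # used only to generate synthetic data, J*s
PHI = 3.2e-19               # assumed work function of the cathode, J

freqs = [5.5e14, 6.0e14, 6.5e14, 7.0e14, 7.5e14]         # light frequencies, Hz
v_stop = [(H_TRUE * f - PHI) / E_CHARGE for f in freqs]  # stopping voltages, V

# Least-squares slope of V_stop against f (mean-centred for stability)
n = len(freqs)
mean_f = sum(freqs) / n
mean_v = sum(v_stop) / n
slope = (sum((f - mean_f) * (v - mean_v) for f, v in zip(freqs, v_stop))
         / sum((f - mean_f) ** 2 for f in freqs))

h_est = E_CHARGE * slope
print(f"estimated h = {h_est:.3e} J*s")
```

With noiseless synthetic data the fit recovers h to floating-point precision; with real phototube readings the same slope fit averages out measurement noise.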
Inertial balance Katharometer Mass spectrometers measure the mass-to-charge ratio, not the mass, of ionised particles. Weighing scale For the ranges of mass-values see: Orders of magnitude (mass) Linear momentum Ballistic pendulum Force (flux of linear momentum) Force gauge Spring scale Strain gauge Torsion balance Tribometer Pressure (flux density of linear momentum) Anemometer (measures wind speed) Barometer used to measure the atmospheric pressure. Manometer (see Pressure measurement and Pressure sensor) Pitot tube (measures airspeed) Tire-pressure gauge in industry and mobility For the ranges of pressure-values see: Orders of magnitude (pressure) Angular velocity or rotations per time unit Stroboscope Tachometer For the value-ranges of angular velocity see: Orders of magnitude (angular velocity) For the ranges of frequency see: Orders of magnitude (frequency) Torque Dynamometer Prony brake Torque wrench Energy carried by mechanical quantities, mechanical work Ballistic pendulum, indirectly by calculation and/or gauging Electricity, electronics, and electrical engineering Considerations related to electric charge dominate electricity and electronics. Electrical charges interact via a field. If the charge does not move, that field is called an electric field. If the charge moves, thus realizing an electric current, especially in an electrically neutral conductor, the field is called magnetic. Electricity can be given a quality — a potential. And electricity has a substance-like property, the electric charge. Energy (or power) in elementary electrodynamics is calculated by multiplying the potential by the amount of charge (or current) found at that potential: potential times charge (or current). (See Classical electromagnetism and Covariant formulation of classical electromagnetism) Electric charge Electrometer is often used to reconfirm the phenomenon of contact electricity leading to triboelectric sequences. 
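The rule stated above, "potential times charge (or current)", amounts to E = qV for energy and P = IV for power. A minimal sketch; the function names are illustrative:

```python
# Elementary electrodynamics: energy and power from potential.
# E = q * V (charge at a potential), P = I * V (current at a potential).

def electric_energy_j(charge_c: float, potential_v: float) -> float:
    """Energy in joules carried by a charge moved through a potential."""
    return charge_c * potential_v

def electric_power_w(current_a: float, potential_v: float) -> float:
    """Power in watts delivered by a current at a potential."""
    return current_a * potential_v

# A 2 C charge moved through 5 V carries 10 J;
# a 2 A current at 5 V delivers 10 W.
print(electric_energy_j(2.0, 5.0), electric_power_w(2.0, 5.0))
```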
Torsion balance used by Coulomb to establish a relation between charges and force, see above. For the ranges of charge values see: Orders of magnitude (charge) Electric current (current of charge) Ammeter Clamp meter d'Arsonval galvanometer Galvanometer Voltage (electric potential difference) Oscilloscope allows quantifying time-dependent voltages Voltmeter Electric resistance, electrical conductance, and electrical conductivity Ohmmeter Time-domain reflectometer characterizes and locates faults in metallic cables by runtime measurements of electric signals. Wheatstone bridge Electric capacitance Capacitance meter Electric inductance Inductance meter Energy carried by electricity or electric energy Electricity meter Power carried by electricity (current of energy) Wattmeter Electric field (negative gradient of electric potential, voltage per length) Field mill Magnetic field See also the relevant section in the article about the magnetic field. Compass Hall effect sensor Magnetometer Proton magnetometer SQUID For the ranges of magnetic field see: Orders of magnitude (magnetic field) Combination instruments Multimeter, combines the functions of ammeter, voltmeter, and ohmmeter as a minimum. LCR meter, combines the functions of ohmmeter, capacitance meter, and inductance meter. Also called component bridge due to the bridge circuit method of measurement. Thermodynamics Temperature-related considerations dominate thermodynamics. There are two distinct thermal properties: A thermal potential — the temperature. For example: A glowing coal has a different thermal quality than a non-glowing one. And a substance-like property, — the entropy; for example: One glowing coal won't heat a pot of water, but a hundred will. Energy in thermodynamics is calculated by multiplying the thermal potential by the amount of entropy found at that potential: temperature times entropy. Entropy can be created by friction but not annihilated. 
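For the Wheatstone bridge listed above, the unknown resistance follows at balance (zero galvanometer current) from the ratio of the three known arms, Rx = R2·R3/R1. A sketch; the arm labelling used here is one common convention, not the only one:

```python
# Wheatstone bridge at balance: Rx / R3 = R2 / R1, so Rx = R3 * R2 / R1.
# R1, R2 are the ratio arms; R3 is the adjustable known resistance.

def wheatstone_unknown(r1: float, r2: float, r3: float) -> float:
    """Unknown resistance of a balanced bridge, in the same unit as the arms."""
    return r3 * r2 / r1

# Example: ratio arms R1 = 100 ohm, R2 = 200 ohm;
# the galvanometer nulls at R3 = 150 ohm.
print(wheatstone_unknown(100.0, 200.0, 150.0))  # -> 300.0 ohm
```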
Amount of substance (or mole number) A physical quantity introduced in chemistry; usually determined indirectly. If mass and substance type of the sample are known, then atomic- or molecular masses (taken from a periodic table, masses measured by mass spectrometry) give direct access to the value of the amount of substance. (See also Molar mass.) If specific molar values are given, then the amount of substance of a given sample may be determined by measuring volume, mass, or concentration. See also the subsection below about the measurement of the boiling point. Gas collecting tube gases Temperature Electromagnetic spectroscopy Galileo thermometer Gas thermometer principle: relation between temperature and volume or pressure of a gas (gas laws). Constant pressure gas thermometer Constant volume gas thermometer Liquid crystal thermometer Liquid thermometer principle: relation between temperature and volume of a liquid (coefficient of thermal expansion). Alcohol thermometer Mercury-in-glass thermometer Pyranometer principle: solar radiation flux density relates to surface temperature (Stefan–Boltzmann law) Pyrometers principle: temperature dependence of spectral intensity of light (Planck's law), i.e. the color of the light relates to the temperature of its source, range: from about −50 °C to +4000 °C, note: measurement of thermal radiation (instead of thermal conduction, or thermal convection) means: no physical contact becomes necessary in temperature measurement (pyrometry). Also note: thermal space resolution (images) found in thermography. Resistance thermometer principle: relation between temperature and electrical resistance of metals (platinum) (electrical resistance), range: 10 to 1,000 kelvins, application in physics and industry Solid thermometer principle: relation between temperature and length of a solid (coefficient of thermal expansion). 
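Two of the indirect determinations described above can be written out directly: amount of substance from mass and molar mass (n = m/M), and the constant-volume gas thermometer, where temperature scales with pressure for an ideal gas. Function names and example figures are illustrative:

```python
# Amount of substance from mass and molar mass: n = m / M.
def amount_of_substance_mol(mass_g: float, molar_mass_g_per_mol: float) -> float:
    return mass_g / molar_mass_g_per_mol

# Constant-volume gas thermometer: at fixed volume an ideal gas obeys
# T2 = T1 * p2 / p1, so a pressure ratio against a reference point
# (e.g. the triple point of water, 273.16 K) yields the temperature.
def gas_thermometer_temp_k(t_ref_k: float, p_ref: float, p_meas: float) -> float:
    return t_ref_k * p_meas / p_ref

print(amount_of_substance_mol(18.02, 18.02))           # -> 1.0 (about 1 mol of water)
print(gas_thermometer_temp_k(273.16, 1000.0, 2000.0))  # doubled pressure -> doubled T
```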
Bimetallic strip Thermistors principle: relation between temperature and electrical resistance of ceramics or polymers, range: from about 0.01 to 2,000 kelvins (−273.14 to 1,700 °C) Thermocouples principle: relation between temperature and voltage of metal junctions (Seebeck effect), range: from about −200 °C to +1350 °C Thermometer Thermopile is a set of connected thermocouples Triple point cell used for calibrating thermometers. Imaging technology Thermographic camera uses a microbolometer for detection of heat radiation. See also Temperature measurement and :Category:Thermometers. More technically oriented are the thermal analysis methods in materials science. For the ranges of temperature-values see: Orders of magnitude (temperature) Energy carried by entropy or thermal energy This includes thermal mass or temperature coefficient of energy, reaction energy, heat flow, ... Calorimeters are called passive if gauged to measure emerging energy carried by entropy, for example from chemical reactions. Calorimeters are called active or heated if they heat the sample, or reformulated: if they are gauged to fill the sample with a defined amount of entropy. Actinometer heating power of radiation. Constant-temperature calorimeter, phase change calorimeter for example an ice calorimeter or any other calorimeter observing a phase change or using a gauged phase change for heat measurement. Constant-volume calorimeter, also called bomb calorimeter Constant-pressure calorimeter, enthalpy-meter, or coffee cup calorimeter Differential Scanning Calorimeter Reaction calorimeter See also Calorimeter or Calorimetry Entropy Entropy is accessible indirectly by measurement of energy and temperature. Entropy transfer Phase change calorimeter's energy value divided by absolute temperature gives the entropy exchanged. Phase changes produce no entropy and therefore offer themselves as an entropy measurement concept. 
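Pyrometers and pyranometers above both lean on the link between radiated flux and temperature; for an ideal black body the Stefan–Boltzmann law gives j = σT⁴, so temperature can be recovered as T = (j/σ)^(1/4). A sketch under the ideal black-body assumption (real pyrometry must also account for emissivity):

```python
# Stefan-Boltzmann law: total radiated flux density j = sigma * T^4,
# inverted for pyrometry as T = (j / sigma) ** 0.25.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(t_kelvin: float) -> float:
    """Radiated flux density (W/m^2) of a black body at temperature T."""
    return SIGMA * t_kelvin ** 4

def blackbody_temperature_k(flux_w_per_m2: float) -> float:
    """Temperature (K) inferred from a measured black-body flux density."""
    return (flux_w_per_m2 / SIGMA) ** 0.25

# Round trip at 1000 K: flux out, temperature back
print(blackbody_temperature_k(blackbody_flux(1000.0)))  # ~ 1000.0
```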
Thus entropy values occur indirectly by processing energy measurements at defined temperatures, without producing entropy. Constant-temperature calorimeter, phase change calorimeter Heat flux sensor uses thermopiles (which are connected thermocouples) to determine current density or flux of entropy. Entropy content The given sample is cooled down to (almost) absolute zero (for example by submerging the sample in liquid helium). At absolute zero temperature any sample is assumed to contain no entropy (see Third law of thermodynamics for further information). Then the following two active calorimeter types can be used to fill the sample with entropy until the desired temperature has been reached: (see also Thermodynamic databases for pure substances) Constant-pressure calorimeter, enthalpy-meter, active Constant-temperature calorimeter, phase change calorimeter, active Entropy production Processes transferring energy from a non-thermal carrier to heat as a carrier do produce entropy (Example: mechanical/electrical friction, established by Count Rumford). Either the produced entropy or heat is measured (calorimetry), or the transferred energy of the non-thermal carrier may be measured. calorimeter (any device for measuring the work which will or would eventually be converted to heat and the ambient temperature) Entropy lowering its temperature—without losing energy—produces entropy (Example: Heat conduction in an isolated rod; "thermal friction"). calorimeter Temperature coefficient of energy or "heat capacity" Concerning a given sample, a proportionality factor relating temperature change and energy carried by heat. If the sample is a gas, then this coefficient depends significantly on being measured at constant volume or at constant pressure. (The terminology preference in the heading indicates that the classical use of heat bars it from having substance-like properties.) 
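The phase-change calorimeter rule above, energy exchanged divided by the constant absolute temperature, gives the entropy exchanged. A sketch; the function name is illustrative, and the latent-heat figure used is the familiar ~334 kJ/kg for melting ice:

```python
# Entropy exchanged at a phase change: delta_S = Q / T,
# valid because the temperature stays constant during the transition.

def entropy_exchanged_j_per_k(energy_j: float, temperature_k: float) -> float:
    return energy_j / temperature_k

# Example: melting 1 kg of ice at 273.15 K absorbs about 334 kJ
ds = entropy_exchanged_j_per_k(334_000.0, 273.15)
print(f"{ds:.1f} J/K")  # roughly 1.2 kJ/K of entropy transferred into the sample
```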
Constant-volume calorimeter, bomb calorimeter
Constant-pressure calorimeter, enthalpy-meter

Specific temperature coefficient of energy or "specific heat capacity"
The temperature coefficient of energy divided by a substance-like quantity (amount of substance, mass, volume) describing the sample. Usually calculated from measurements by division, or measurable directly using a unit amount of that sample.
For the ranges of specific heat capacities see: Orders of magnitude (specific heat capacity)

Coefficient of thermal expansion
Dilatometer
Strain gauge

Melting temperature (of a solid)
Differential scanning calorimeter, gives melting point and enthalpy of fusion.
Kofler bench
Thiele tube

Boiling temperature (of a liquid)
Ebullioscope, a device for measuring the boiling point of a liquid. This device is also part of a method that uses the effect of boiling point elevation for calculating the molecular mass of a solute.

See also Thermal analysis, Heat.

More on continuum mechanics
This includes mostly instruments which measure macroscopic properties of matter: in the fields of solid-state physics; in condensed matter physics, which considers solids, liquids, and in-betweens exhibiting for example viscoelastic behavior; and furthermore in fluid mechanics, where liquids, gases, plasmas, and in-betweens like supercritical fluids are studied.

Density
This refers to the particle density of fluids and compact(ed) solids like crystals, in contrast to the bulk density of grainy or porous solids.
Aerometer, liquids
Dasymeter, gases
Gas collecting tube, gases
Hydrometer, liquids
Pycnometer, liquids
Resonant frequency and damping analyser (RFDA), solids
For the ranges of density values see: Orders of magnitude (density)

Hardness of a solid
Durometer

Shape and surface of a solid
Holographic interferometer, laser-produced speckle pattern analysed.
Resonant frequency and damping analyser (RFDA)
Tribometer

Deformation of condensed matter
Strain gauge, all below

Elasticity of a solid (elastic moduli)
Resonant frequency and damping analyser (RFDA), using the impulse excitation technique: a small mechanical impulse causes the sample to vibrate. The vibration depends on elastic properties, density, geometry, and inner structures (lattice or fissures).

Plasticity of a solid
Cam plastometer
Plastometer

Tensile strength, ductility, or malleability of a solid
Universal testing machine

Granularity of a solid or of a suspension
Grindometer

Viscosity of a fluid
Rheometer
Viscometer

Optical activity
Polarimeter

Surface tension of liquids
Tensiometer

Imaging technology
Tomograph, device and method for non-destructive analysis of multiple measurements done on a geometric object, for producing 2- or 3-dimensional images representing the inner structure of that geometric object.
Wind tunnel

This section and the following sections include instruments from the wide field of :Category:Materials science, materials science.

More on electric properties of condensed matter, gas

Permittivity, relative static permittivity (dielectric constant), or electric susceptibility
Capacitor. Such measurements also allow access to the values of molecular dipole moments.

Magnetic susceptibility or magnetization
Gouy balance
For other methods see the section in the article about magnetic susceptibility. See also :Category:Electric and magnetic fields in matter

Substance potential or chemical potential or molar Gibbs energy
Phase conversions like changes of aggregate state, chemical reactions or nuclear reactions transmuting substances from reactants into products, or diffusion through membranes have an overall energy balance.
Especially at constant pressure and constant temperature, molar energy balances define the notion of a substance potential or chemical potential or molar Gibbs energy, which gives the energetic information about whether the process is possible or not in a closed system.
Energy balances that include entropy consist of two parts: a balance that accounts for the changed entropy content of the substances, and another one that accounts for the energy freed or taken up by the reaction itself, the Gibbs energy change. The sum of reaction energy and the energy associated with the change of entropy content is also called enthalpy. Often the whole enthalpy is carried by entropy and is thus measurable calorimetrically.
For standard conditions in chemical reactions, either molar entropy content and molar Gibbs energy with respect to some chosen zero point are tabulated, or molar entropy content and molar enthalpy with respect to some chosen zero are tabulated. (See Standard enthalpy change of formation and Standard molar entropy.)
The substance potential of a redox reaction is usually determined electrochemically, current-free, using reversible cells.
Redox electrode
Other values may be determined indirectly by calorimetry, or by analyzing phase diagrams.

Sub-microstructural properties of condensed matter, gas
Infrared spectroscopy
Neutron detector
Radio frequency spectrometers for nuclear magnetic resonance and electron paramagnetic resonance
Raman spectroscopy

Crystal structure
An X-ray tube, a sample scattering the X-rays, and a photographic plate to detect them: this arrangement forms the scattering instrument used by X-ray crystallography for investigating crystal structures of samples. Amorphous solids lack a distinct diffraction pattern and are thereby identifiable.

Imaging technology, microscope
Electron microscope
Scanning electron microscope
Transmission electron microscope
Optical microscope, uses the reflectiveness or refractiveness of light to produce an image.
Scanning acoustic microscope
Scanning probe microscope
Atomic force microscope (AFM)
Scanning tunneling microscope (STM)
Focus variation
X-ray microscope
(See also Spectroscopy and List of materials analysis methods.)

Rays ("waves" and "particles")

Sound, compression waves in matter
Microphones in general; sometimes their sensitivity is increased by the reflection and concentration principle realized in acoustic mirrors.
Laser microphone
Seismometer

Sound pressure
Microphone or hydrophone, properly gauged
Shock tube
Sound level meter

Light and radiation without a rest mass, non-ionizing
Antenna (radio)
Bolometer, measures the energy of incident electromagnetic radiation.
Camera
EMF meter
Interferometer, used in the wide field of interferometry
Microwave power meter
Optical power meter
Photographic plate
Photomultiplier
Phototube
Radio telescope
Spectrometer
T-ray detectors
(For the lux meter, see the section about human senses and human body.)
See also :Category:Optical devices

Photon polarization
Polarizer

Pressure (current density of linear momentum)
Nichols radiometer

Radiant flux
The measure of the total power of light emitted.
Integrating sphere, for measuring the total radiant flux of a light source

Radiation with a rest mass, particle radiation

Cathode rays
Crookes tube
Cathode-ray tube, a phosphor-coated anode

Atom polarization and electron polarization
Stern–Gerlach experiment

Ionizing radiation
Ionizing radiation includes rays of "particles" as well as rays of "waves". Especially X-rays and gamma rays transfer enough energy in non-thermal, (single-)collision processes to separate electron(s) from an atom.

Particle and ray flux
Bubble chamber
Cloud chamber
Dosimeter, a technical device that realizes different working principles.
Geiger counter
Ionisation chamber
Microchannel plate detector
Photographic plate
Photostimulable phosphor plate
Proportional counter
Scintillation counter, Lucas cell
Semiconductor detector

Identification and content
This could include chemical substances, rays of any kind, elementary particles, and quasiparticles. Many measurement devices outside this section may be used in, or at least become part of, an identification process. For identification and content concerning chemical substances, see also Analytical chemistry, List of chemical analysis methods, and List of materials analysis methods.

Substance content in mixtures, substance identification
Carbon dioxide sensor
Chromatographic device, gas chromatograph: separates mixtures of substances; different velocities of the substance types accomplish the separation.
Colorimeter, measures absorbance, and thus concentration
Gas detector
Gas detector in combination with mass spectrometer; the mass spectrometer identifies the chemical composition of a sample on the basis of the mass-to-charge ratio of charged particles.
Nephelometer or turbidimeter
Oxygen sensor (= lambda sensor)
Refractometer, indirectly, by determining the refractive index of a substance.
Smoke detector
Ultracentrifuge, separates mixtures of substances; in the force field of a centrifuge, substances of different densities separate.

pH: concentration of protons in a solution
pH meter
Saturated calomel electrode

Humidity
Hygrometer, measures the density of water in air
Lysimeter, measures the balance of water in soil

Human senses and human body

Sight

Brightness: photometry
Photometry is the measurement of light in terms of its perceived brightness to the human eye. Photometric quantities derive from analogous radiometric quantities by weighting the contribution of each wavelength by a luminosity function that models the eye's spectral sensitivity. For the ranges of possible values, see the orders of magnitude in: illuminance, luminance, and luminous flux.
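The wavelength weighting just described can be sketched numerically. The peak luminous efficacy of 683 lm/W at 555 nm is the standard constant; the Gaussian stand-in for the photopic luminosity function V(λ) and the flat example spectrum are assumptions for illustration only, not the tabulated CIE data:

```python
import math

# Luminous flux from a spectral power distribution (SPD):
#   Phi_v = 683 lm/W * sum( SPD(lambda) * V(lambda) * d_lambda )
# where V(lambda) models the eye's spectral sensitivity (photopic vision).

def v_lambda(nm):
    """Crude Gaussian stand-in for the photopic luminosity function,
    peaking at 555 nm (illustrative only, not the CIE table)."""
    return math.exp(-0.5 * ((nm - 555.0) / 42.0) ** 2)

def luminous_flux(spd, start_nm=380, stop_nm=780, step_nm=5):
    """Weight each wavelength's radiant power by V(lambda), scale by 683 lm/W."""
    total = 0.0
    for nm in range(start_nm, stop_nm + 1, step_nm):
        total += spd(nm) * v_lambda(nm) * step_nm
    return 683.0 * total

# Hypothetical flat spectrum: 1 mW of radiant power per nm across the visible band.
flat_spd = lambda nm: 1e-3  # W/nm
flux = luminous_flux(flat_spd)
print(f"{flux:.1f} lm")
```

The same radiant power placed near 555 nm would yield far more lumens than power near the edges of the visible band, which is exactly what distinguishes photometric from radiometric quantities.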
Photometers of various kinds:
Lux meter, for measuring illuminance, i.e. incident luminous flux per unit area
Luminance meter, for measuring luminance, i.e. luminous flux per unit area and unit solid angle
Light meter, an instrument used to set photographic exposures. It can be either a lux meter (incident-light meter) or a luminance meter (reflected-light meter), and is calibrated in photographic units.
Integrating sphere, for collecting the total luminous flux of a light source, which can then be measured by a photometer
Densitometer, for measuring the degree to which a photographic material reflects or transmits light

Color: colorimetry
Tristimulus colorimeter, for quantifying colors and calibrating an imaging workflow

Radar brightness: radiometry
Synthetic aperture radar (SAR) instruments measure radar brightness, the radar cross section (RCS), which is a function of the reflectivity and moisture of imaged objects at wavelengths too long to be perceived by the human eye. Black pixels mean no reflectivity (e.g. water surfaces); white pixels mean high reflectivity (e.g. urban areas). Colored pixels can be obtained by combining three gray-scaled images, which usually encode the polarization of the electromagnetic waves. The combination R-G-B = HH-HV-VV combines radar images of waves sent and received horizontally (HH), sent horizontally and received vertically (HV), and sent and received vertically (VV). The calibration of such instruments is done by imaging objects (calibration targets) whose radar brightness is known.

Hearing

Loudness in phon
Headphone, loudspeaker, sound pressure gauge, for measuring an equal-loudness contour of a human ear.
Sound level meter, calibrated to an equal-loudness contour of the human auditory system behind the human ear.

Smell
Olfactometer, see also Olfaction.
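The R-G-B = HH-HV-VV polarization composite described under radar brightness above amounts to stacking three gray-scale channels into one color image. A minimal sketch, assuming the three SAR images are already co-registered, calibrated arrays of equal shape (the tiny 2×2 backscatter values are made up):

```python
import numpy as np

def sar_false_color(hh, hv, vv):
    """Stack three co-registered gray-scale SAR polarization images into one
    false-color RGB image with R = HH, G = HV, B = VV, each channel
    independently rescaled to the range 0..1 for display."""
    def normalize(band):
        band = band.astype(float)
        span = band.max() - band.min()
        # A perfectly uniform band (zero span) maps to all-zero, i.e. black.
        return (band - band.min()) / span if span else np.zeros_like(band)
    return np.dstack([normalize(hh), normalize(hv), normalize(vv)])

# Hypothetical 2x2 backscatter images (arbitrary units):
hh = np.array([[0.0, 1.0], [2.0, 3.0]])
hv = np.array([[3.0, 2.0], [1.0, 0.0]])
vv = np.array([[1.0, 1.0], [1.0, 1.0]])
rgb = sar_false_color(hh, hv, vv)
print(rgb.shape)  # one RGB triple per pixel
```

Per-channel rescaling is a display choice; quantitative work would instead keep the calibrated RCS values obtained from the calibration targets mentioned above.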
Temperature (sense and body)

Body temperature or core temperature
Medical thermometer, see also infrared thermometer

Circulatory system (mainly heart and blood vessels for distributing substances fast)
Blood-related parameters are listed in a blood test.
Electrocardiograph, records the electrical activity of the heart
Glucose meter, for measuring the blood sugar level
Sphygmomanometer, a blood pressure meter used to determine blood pressure in medicine.
See also :Category:Blood tests

Respiratory system (lung and airways controlling the breathing process)
Spirometer
Concentration or partial pressure of carbon dioxide in the respiratory gases: Capnograph

Nervous system (nerves transmitting and processing information electrically)
Electroencephalograph, records the electrical activity of the brain

Musculoskeletal system (muscles and bones for movement)
Power, work of muscles: Ergometer

Metabolic system
Body fat meter

Medical imaging
Computed tomography
Magnetic resonance imaging
Medical ultrasonography
Radiology
Tomograph, device and method for non-destructive analysis of multiple measurements done on a geometric object, for producing 2- or 3-dimensional images representing the inner structure of that geometric object.
See also: :Category:Physiological instruments and :Category:Medical testing equipment.

Meteorology
See also :Category:Meteorological instrumentation and equipment.

Navigation and surveying
See also :Category:Navigational equipment and :Category:Navigation. See also Surveying instruments.

Astronomy
Radio antenna
Telescope
See also Astronomical instruments and :Category:Astronomical observatories.

Military
Some instruments, such as telescopes and sea navigation instruments, have had military applications for many centuries. However, the role of instruments in military affairs rose exponentially with the development of technology via applied science, which began in the mid-19th century and has continued through the present day.
Military instruments as a class draw on most of the categories of instrument described throughout this article, such as navigation, astronomy, optics, imaging, and the kinetics of moving objects. Common abstract themes that unite military instruments are seeing into the distance, seeing in the dark, knowing an object's geographic location, and knowing and controlling a moving object's path and destination. Special features of these instruments may include ease of use, speed, reliability, and accuracy.

Uncategorized, specialized, or generalized application
Actograph, measures and records animal activity within an experimental chamber.
Densitometer, measures light transmission through processed photographic film or transparent material, or light reflection from a reflective material.
Force platform, measures ground reaction force.
Gauge (engineering), a highly precise measurement instrument, also usable to calibrate other instruments of the same kind. Often found in conjunction with defining or applying technical standards.
Gradiometer, any device that measures spatial variations of a physical quantity, for example as done in gravity gradiometry.
Parking meter, measures the time a vehicle is parked at a particular spot, usually for a fee.
Postage meter, measures postage used from a prepaid account.
S meter, measures the signal strength processed by a communications receiver.
Sensor, hypernym for devices that measure with little interaction, typically used in technical applications.
Spectroscope, an important tool used by physicists.
SWR meter, checks the quality of the match between the antenna and the transmission line.
Universal measuring machine, measures geometric locations for inspecting tolerances.
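The idea behind the gradiometer in the list above, measuring the spatial variation of a physical quantity, amounts to taking finite differences between nearby readings. A minimal sketch, with a made-up 1-D series of gravity readings sampled at fixed spacing:

```python
# A gradiometer measures spatial variation of a physical quantity.
# Sketch: finite-difference gradient of a 1-D field sampled every dx meters.

def gradient(samples, dx):
    """Central differences in the interior, one-sided differences at the ends."""
    n = len(samples)
    grad = []
    for i in range(n):
        if i == 0:
            grad.append((samples[1] - samples[0]) / dx)
        elif i == n - 1:
            grad.append((samples[-1] - samples[-2]) / dx)
        else:
            grad.append((samples[i + 1] - samples[i - 1]) / (2 * dx))
    return grad

# Hypothetical gravity readings (arbitrary units) along a 0.75 m baseline:
field = [9.80, 9.81, 9.84, 9.89]
g = gradient(field, dx=0.25)
print(g)
```

A real instrument measures the difference between two sensors directly rather than subtracting two absolute readings, which suppresses common-mode noise, but the arithmetic is the same.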
Alphabetical listing
See also :Category:Instrument-making corporations

See also
Data logger
Measuring devices
History of measurement
History of weights and measures
Instrumentation
List of measuring devices
List of physical quantities
List of sensors
Metrology
Pocket comparator
Sensor or detector
Timeline of temperature and pressure measurement technology

Notes
The alternate spelling "-metre" is never used when referring to a measuring device.

References

External links

Metrology Measuring instruments
List of measuring instruments
Technology,Engineering
5,617
5,492,199
https://en.wikipedia.org/wiki/List%20of%20solid%20waste%20treatment%20technologies
This article contains a list of different forms of solid waste treatment technologies and facilities employed in waste management infrastructure.

Waste handling facilities
Civic amenity site (CA site)
Transfer station

Established waste treatment technologies
Incineration
Landfill
Recycling
Specific to organic waste:
Anaerobic digestion
Composting
Windrow composting

Alternative waste treatment technologies
In the UK some of these are sometimes termed advanced waste treatment technologies.
Biodrying
Gasification
Plasma gasification: gasification assisted by plasma torches
Hydrothermal carbonization
Hydrothermal liquefaction
Mechanical biological treatment (sorting into selected fractions)
Refuse-derived fuel
Mechanical heat treatment
Molten salt oxidation
Pyrolysis
UASB (applied to solid wastes)
Waste autoclave
Specific to organic waste:
Bioconversion of biomass to mixed alcohol fuels
In-vessel composting
Landfarming
Sewage treatment
Tunnel composting

See also
Bioethanol
Biodiesel
List of waste management companies
List of wastewater treatment technologies
Pollution control
Waste-to-energy
Burn pit

References

Anaerobic digestion Thermal treatment Waste treatment technology Solid waste treatment technologies
List of solid waste treatment technologies
Chemistry,Engineering
213
66,394,140
https://en.wikipedia.org/wiki/1070%20aluminium%20alloy
1070 is a pure aluminium alloy. It is a wrought alloy with high corrosion resistance and excellent brazing ability. 1070 aluminium alloy contains iron, silicon, zinc, vanadium, copper, titanium, magnesium, and manganese as minor alloying elements.

Chemical Composition

Applications
Aluminium 1070 alloy is used in the following areas:
General industrial components
Building and construction
Transport
Electrical material
PS plates
Strips for ornaments
Communication cables
Refrigerator and freezer cabinets

References
Aluminium alloy table

Aluminium alloys
1070 aluminium alloy
Chemistry
98