id int64 580 79M | url stringlengths 31 175 | text stringlengths 9 245k | source stringlengths 1 109 | categories stringclasses 160 values | token_count int64 3 51.8k |
|---|---|---|---|---|---|
39,197,973 | https://en.wikipedia.org/wiki/Perfectoid%20space | In mathematics, perfectoid spaces are adic spaces of a special kind, which occur in the study of problems of "mixed characteristic", such as local fields of characteristic zero which have residue fields of prime characteristic p.
A perfectoid field is a complete topological field K whose topology is induced by a nondiscrete valuation of rank 1, such that the Frobenius endomorphism Φ is surjective on K°/p where K° denotes the ring of power-bounded elements.
Perfectoid spaces may be used to (and were invented in order to) compare mixed characteristic situations with purely finite characteristic ones. Technical tools for making this precise are the tilting equivalence and the almost purity theorem. The notions were introduced in 2012 by Peter Scholze.
Tilting equivalence
For any perfectoid field K there is a tilt K♭, which is a perfectoid field of finite characteristic p. As a set, it may be defined as the inverse limit

$$K^\flat = \varprojlim_{x \mapsto x^p} K.$$
Explicitly, an element of K♭ is an infinite sequence (x0, x1, x2, ...) of elements of K such that x_{i+1}^p = x_i. The multiplication in K♭ is defined termwise, while the addition is more complicated. If K has finite characteristic, then K ≅ K♭. If K is the p-adic completion of $\mathbf{Q}_p(p^{1/p^\infty})$, then K♭ is the t-adic completion of $\mathbf{F}_p((t))(t^{1/p^\infty})$.
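For reference, the ring operations on the tilt can be written out explicitly; the following display is a reconstruction of the standard formulas (a sketch, with the limit taken in the topology of K):

```latex
% Ring operations on K^\flat = \varprojlim_{x \mapsto x^p} K (standard formulas):
(x \cdot y)_i = x_i \, y_i, \qquad
(x + y)_i = \lim_{n \to \infty} \bigl( x_{i+n} + y_{i+n} \bigr)^{p^n}.
% In the example above, the element t of K^\flat corresponding to the variable
% t of \mathbf{F}_p((t)) is the compatible sequence (p, p^{1/p}, p^{1/p^2}, \dots).
```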
There are notions of perfectoid algebras and perfectoid spaces over a perfectoid field K, roughly analogous to commutative algebras and schemes over a field. The tilting operation extends to these objects. If X is a perfectoid space over a perfectoid field K, then one may form a perfectoid space X♭ over K♭. The tilting equivalence is a theorem that the tilting functor (-)♭ induces an equivalence of categories between perfectoid spaces over K and perfectoid spaces over K♭. Note that while a perfectoid field of finite characteristic may have several non-isomorphic "untilts", the categories of perfectoid spaces over them would all be equivalent.
Almost purity theorem
This equivalence of categories respects some additional properties of morphisms. Many properties of morphisms of schemes have analogues for morphisms of adic spaces. The almost purity theorem for perfectoid spaces is concerned with finite étale morphisms. It is a generalization of Faltings's almost purity theorem in p-adic Hodge theory. The name alludes to almost mathematics, which is used in the proof, and to a distantly related classical theorem on purity of the branch locus.
The statement has two parts. Let K be a perfectoid field.
If X → Y is a finite étale morphism of adic spaces over K and Y is perfectoid, then X also is perfectoid;
A morphism X → Y of perfectoid spaces over K is finite étale if and only if the tilt X♭ → Y♭ is finite étale over K♭.
Since finite étale covers of a field are exactly the finite separable field extensions, the almost purity theorem implies that for any perfectoid field K the absolute Galois groups of K and K♭ are isomorphic.
See also
Perfect field
References
External links
Foundations of Perfectoid Spaces by Matthew Morrow
Lean perfectoid spaces. The definition of perfectoid spaces formalized in the Lean theorem prover
Algebraic number theory | Perfectoid space | Mathematics | 683 |
77,919,966 | https://en.wikipedia.org/wiki/Next-Generation%20Overhead%20Persistent%20Infrared | Next-Generation Overhead Persistent Infrared (Next-Gen OPIR) is being developed in the United States as the replacement for the current missile warning constellation, the Space-Based Infrared System (SBIRS).
The Next-Gen OPIR satellites are engineered to detect and track ballistic missile launches, delivering early warnings of potential attacks. Equipped with advanced infrared sensors, these satellites identify the heat signatures of incoming missiles and securely transmit this vital data to ground stations.
References
External links
Reconnaissance satellites of the United States
Missile defense
Infrared technology
Early warning systems
Military space program of the United States
Military satellites
Equipment of the United States Space Force
Early warning satellites
Military equipment introduced in the 2020s | Next-Generation Overhead Persistent Infrared | Technology | 132 |
3,362,298 | https://en.wikipedia.org/wiki/Needle%20remover | A needle remover is a device used to physically remove a needle from a syringe. In developing countries, there is still a need for improvements in needle safety in hospital settings, as most needle removal is done manually, with a severe risk of needles puncturing the skin and causing infection. These countries cannot afford needles with individual safety devices attached, so needle-removers must be used to remove the needle from the syringe. This lowers possible pathogen spread by preventing the reuse of syringes, reducing incidents of accidental needle-sticks, and facilitating syringe disposal.
Background
In regions surveyed by the World Health Organization (WHO) in the early 2000s, the reported number of needle-stick injuries in developing world countries ranged from 0.93 to 4.68 injuries per person per year, about five times higher than in industrialized nations. Needle-stick injuries are further complicated by disease transmission, such as Hepatitis B, Hepatitis C and HIV. In Ghana, a study of 803 schoolchildren revealed that 61.2% had at least one marker of hepatitis B virus. As a result, health care workers, patients, and the community in developing nations are at an increased risk of contracting blood-borne pathogens via the reuse and improper disposal of needles, and accidental needle-sticks.
In the U.S., the Needlestick Safety and Prevention Act, signed in 2000, and the 2001 Bloodborne Pathogens Standard both mandated the use of safety devices and needle-removers with any sharps or needles. As a result, there was a large increase in research, development, and marketing of needle safety devices and needle-removers. In most hospital and medical settings in the U.S., needle safety regulations are maintained through individual needle safety devices and needle disposal boxes.
Existing solutions
One of the most common causes of needle-stick injuries, which the Needlestick Act and Bloodborne Pathogens Standard were attempting to decrease, was two-handed recapping. As a result, a one-handed capping mechanism was added to insulin and tuberculin syringes. The cap is attached to the syringe via a hinge, which allows the cap to be snapped onto the needle using one hand. The disadvantage of the hinge system is that the cap can get caught on jewelry and clothing, can get bumped during use, and its fixed position can be a hindrance during low-angle injection. Becton Dickinson (BD) has therefore recently come out with a variation on this safety mechanism: instead of a hinge, the device slides over the needle and fully covers its tip, so accidental needle-sticks do not occur.
However, the rest of the world does not have similar needle and syringe regulations. For instance, the WHO is only able to regulate vaccinations in developing countries by ensuring that all vaccination syringes sent to these countries have autodisable features, since the major concern is the reuse of contaminated needles and syringes. These autodisable features allow each syringe to be used only once, so it cannot be reused. These mechanisms could be teeth that interlock to prevent the plunger from being pulled back for another use, or a bag prefilled with the vaccine to stop reuse. For example, the SoloShot has a metal clip that locks the plunger down after one use. The BD Uniject is a prefilled vaccine syringe that uses a plastic bulb instead of a plunger and has a disc valve to prevent reuse. Still, over 90% of syringes worldwide do not have autodisable features. Individual protection devices are expensive, and regular needles are much more prevalent. Consequently, many developing world countries use needle-removers to reduce the risk of disease transmission via these exposed needles.
Benefits of needle-removers
Needle-removers minimize the occurrence of accidental needle-sticks because they allow immediate removal and containment of the needles, especially if the device is near the area of use. Reuse of syringes is prevented because the needle-remover physically separates the needle from the syringe, making the syringe useless. They also improve waste disposal by decreasing both the amount of infectious waste and the amount of safety boxes needed for the waste, since safety boxes can pack syringes 20-60% more compactly without the needles. Additionally, these devices are cost-efficient since one device can handle several hundred needles. Many developing world countries do not have the resources to afford auto-disable syringes, so with needle-removers, the hospitals can continue to use cheap syringes, while only paying a one-time fee to buy a needle-remover that has a life-span of about 200-500 needles.
Social and ethical implications
A significant ethical issue for the project is whether or not the needle-remover will cause more harm than its potential benefits. Engineers are obliged to use their skills and knowledge to improve the safety, health, and welfare of the public. The main concern is for the operator of the device; no engineer should create a device that could injure the operator. Another concern is that children may gain access to the device and accidentally hurt themselves. If a device design could potentially cause either of these problems, the team would be ethically obligated to reexamine that design, and it would either have to be improved or abandoned. When the device functions effectively and safely, it will serve to protect the welfare of the community. In developing countries, the risk of disease transmission is elevated due to the high percentage of needle-stick injuries, which is a result of inadequate needle collection devices. Increased pathogen transmission also occurs from the reuse of contaminated needles when supplies are low. The device will prevent reuse of needles and facilitate needle collection and disposal, and thus will improve the health and safety of hospital workers and the community.
The social and economic effects of the device also need to be recognized. In developing countries, the lack of proper needle collection devices leads to an increase in the number of occupational needle-sticks by health care workers via contaminated needles. Occupational needle-sticks account for 40%-65% of Hepatitis B and C infections in health care workers. As a result, more health care workers have to undergo post-exposure testing and treatment, both of which cost money for the hospitals and the countries. There is also the manpower cost associated with losing trained health care workers to infections acquired on the job. With fewer than 10 doctors for every 100,000 individuals in sub-Saharan nations, any loss of hospital staff puts a strain on hospital resources. In addition, developing countries have made significant investments in training their health care workers, which is lost when occupational needle-sticks cause health care workers to leave the medical field.
The economic considerations are not just limited to costs associated with health care workers. Due to the high cost of needle-disposal containers and the fact that the containers usually have to be shipped overseas, unsafe and dangerous substitutes are used instead. This practice can potentially lead to needle-sticks by health care workers and individuals in the community, as well as needle reuse by members of the community, which can increase the potential spread of diseases.
Possible designs
The easiest needle-removers to operate are electrically powered, and either melt the needle or cut the needles at multiple sections. One patented design involves a syringe falling down into a chamber where powered movable blades advance the syringe onto fixed blades on the opposite side, at which point the syringe is cut with a shearing motion at multiple points. There are other patents that use electricity between electrodes or between rotating gears to short-circuit the needle and melt it off the syringe. A more complex design involves a hammer mill and grinder to break up and grind up the plastic and metal parts of the syringes, after which, the pieces are heated and cooled. The end result is metal particles encapsulated in a piece of plastic.
However, electricity in developing countries is not a dependable source, so hand-powered needle-cutters would be preferred. Some designs use the squeezing force from a hand to force one or two blades to shear across each other and hence cut the needle between the blades. There are other designs in which a twisting motion brings a shearing blade in contact with the needle and thus cuts it. Another design has a stationary outer surface that the syringe body rests against and a cylindrical inner cutting body with a bore for the needle to pass through. A lever rotates the inner body, which shears the needle from the syringe and dumps the needle into a container. A crank system can be used to power a similar design, which also uses a cylindrical inner body. However instead of cutting the needle, the device pulls the needle completely out of the syringe, which deforms the needle, and dumps it into a container. A more complicated design actually pulls the needle and collar from the barrel of the syringe without a rotational motion: the downward motion of putting the syringe into the device powers two arms to pull the needle off the syringe. The interesting aspect of this device is that it appears to be one-handed. Another one-handed device uses a downward motion to cause rotating gears to unscrew the needle and collar from the syringe. This design is very complex to implement, so an improvement of this design involves pegs that grip and rotate the needle collar instead of gears. The downward force is transferred into moving the pegs in helical slots, which causes the collar to rotate and the needle to be removed from the syringe.
In 2006, a cheap and simple solution was designed by Yellowone and given the name Antivirus: old cola or beer cans are used to dispose of needles, with a specially developed lid to safely seal them. The lid snaps onto the top of the can, permanently sealing it without using any glue or tools. The 'collar' of the cap protects the user during the needle separation process. The insertion hole is designed to separate needle and syringe at the point of use. No finger can pass through the opening. Each can securely contains 150-200 used needles (Yellowone).
Commercial models
There are several electrically powered needle-removers on the market now. The Disintegrator Needle Destruction Device, offered by American Scientific Resources (ASFX), uses plasma arc technology to destroy the needle, kill pathogens and blunt the syringe. Designed to be used with only one hand, this device completely eliminates the sharp. One model from Techno Fab uses a regular electrical short-circuit to melt the needle, while another needle-remover, seen at CarePathways.com, uses a plasma arc to melt the needle. A unique needle-remover design is the Needle Remover Device, designed by the Program for Appropriate Technology in Health (PATH). It uses two handles that are squeezed together to slide two circular blades across each other, which cuts the hub from the syringe. It is also reusable, and its target cost is about $15. Another needle-remover currently on the market is Advanced Care Products' Clip&Stor, which uses a hand-powered clipper action to remove the needle. The cost of the Clip&Stor is about seven dollars. There is also the BD Hub Cutter, which uses a squeezing hand motion to cut the syringe. The edges of the squeezable parts have blades that do the actual cutting. However, unlike a regular needle-remover, the BD Hub Cutter cuts the syringe at the hub so the needle is completely separated from the syringe. As a result, the risk of a contaminated puncture is completely eliminated because no needle shards remain on the syringe. The Hub Cutter is not reusable though, and disposal of the whole unit must occur. The cost of the Hub Cutter is about four dollars.
Limitations
Most of these current needle-removers require the use of two hands; one to hold the needle in place and the other to activate the mechanism. This form of operation can cause problems because if hospital personnel are busy, especially in a developing world country, they may not have the time or hands needed to operate the device. As a result, the needle will remain exposed on the syringe, posing a risk to both health care workers and patients.
Furthermore, many of these existing needle-removers do not make use of cheap and readily available materials, like used motor oil jugs, for containers, which raises the price of the device and requires that the hospital continuously buy more containers from the company. A typical 3-gallon Bemis sharps container with a rotating lid costs about $8, not including shipping costs. If these containers must be shipped overseas, the price of the device can far exceed the available resources of many hospitals in developing countries, which causes them not to buy needle-removers.
See also
Occupational Safety and Health Administration
Hypodermic needle
Injection (medicine)
International Council of Nurses
Biomedical technology
Biomedical engineering
Needle-exchange programme
References
External links
Becton Dickinson Corporate Website
International Health Care Worker Safety Center
Biomedical engineering
Medical hygiene | Needle remover | Engineering,Biology | 2,693 |
8,936,967 | https://en.wikipedia.org/wiki/Hydraulic%20clearance | Hydraulic clearance concerns flow in narrow clearances, which is of vital importance in hydraulic system component design. The flow in a narrow circular clearance of a spool valve can be calculated according to the formula below if the clearance height is negligible compared to the width of the clearance, as is the case for most of the clearances in hydraulic pumps, hydraulic motors, and spool valves. The flow is considered to be laminar. The formula below is valid for a spool valve when the spool is steady.
Concentric spool/valve housing position, i.e. the height/radial clearance c is the same all around:
Units as per SI conventions:
Flow: Qi = (ΔP · π · d · c³) ÷ (12 · ν · ρ · L), where:
Qi = volumetric flow rate (m³/s)
ΔP = P1 − P2 = pressure drop over the clearance (N/m², i.e. Pa)
d = valve spool diameter (m)
c = clearance height (radial clearance) (m)
ν = kinematic viscosity of the oil (m²/s)
ρ = density of the oil (kg/m³)
L = clearance length (m)
As can be seen from the formula, the clearance height c has much more influence on the leakage than the length.
The formula presupposes purely laminar flow conditions.
It is also valid for gases.
For contact between the spool and the wall (a fully eccentric spool), the value that is generally used for practical calculations is:
Flow Qe = 2.5 · Qi
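As a quick numerical illustration, the formula can be evaluated directly. The sketch below assumes SI inputs throughout; the helper function `leakage_flow` and all example parameter values (spool diameter, clearance, oil properties) are illustrative assumptions, not values from the text.

```python
import math

def leakage_flow(dP, d, c, nu, rho, L, eccentric=False):
    """Laminar leakage flow (m^3/s) through a narrow annular clearance.
    All arguments in SI units: Pa, m, m, m^2/s, kg/m^3, m."""
    q = dP * math.pi * d * c**3 / (12.0 * nu * rho * L)
    return 2.5 * q if eccentric else q  # fully eccentric spool: Qe = 2.5 * Qi

# Illustrative values (assumed, not from the text): 10 mm spool diameter,
# 5 um radial clearance, 10 mm clearance length, 20 MPa pressure drop,
# mineral oil with nu = 32e-6 m^2/s and rho = 870 kg/m^3.
Qi = leakage_flow(dP=20e6, d=0.010, c=5e-6, nu=32e-6, rho=870.0, L=0.010)
Qe = leakage_flow(dP=20e6, d=0.010, c=5e-6, nu=32e-6, rho=870.0, L=0.010,
                  eccentric=True)
print(f"concentric leakage Qi: {Qi*1e9:.1f} mm^3/s")
print(f"eccentric leakage  Qe: {Qe*1e9:.1f} mm^3/s")
```

Note how the cubic dependence on c makes the clearance height dominate the result, as the text observes.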
Hydraulic Clearance in Hydraulic Components
Pistons:
The clearance between the piston and cylinder wall is crucial for preventing leakage and maintaining hydraulic efficiency. A tight clearance minimizes fluid loss, while a clearance that is too small can lead to increased friction and wear. The piston's design and the material used influence the optimal clearance.
Hydraulic spool valves:
These valves rely on precise clearances to control the flow of hydraulic fluid. The clearance between the spool and valve body affects the valve's responsiveness, leakage rate, and overall performance. Different types of spool valves, such as two-way, three-way, and four-way valves, have varying clearance requirements.
Hydraulic seals:
Seals are essential for preventing fluid leakage in hydraulic systems. The clearance between the seal and the mating surface is critical for ensuring effective sealing. Different seal materials and designs have different clearance tolerances. Proper clearance is necessary to avoid excessive friction, wear, and seal failure.
Hydraulic cylinders:
The clearances within a hydraulic cylinder, such as between the piston and cylinder wall, and between the rod and gland, affect the cylinder's efficiency, service life, and leakage rate. Accurate clearances are necessary for smooth operation and to prevent damage to the cylinder components.
Understanding and controlling hydraulic clearance is essential for optimizing the performance, efficiency, and longevity of hydraulic systems.
References
Hydraulics | Hydraulic clearance | Physics,Chemistry | 598 |
36,699,980 | https://en.wikipedia.org/wiki/Sobolev%20spaces%20for%20planar%20domains | In mathematics, Sobolev spaces for planar domains are one of the principal techniques used in the theory of partial differential equations for solving the Dirichlet and Neumann boundary value problems for the Laplacian in a bounded domain in the plane with smooth boundary. The methods use the theory of bounded operators on Hilbert space. They can be used to deduce regularity properties of solutions and to solve the corresponding eigenvalue problems.
Sobolev spaces with boundary conditions
Let Ω be a bounded domain in the plane with smooth boundary. Since Ω is contained in a large square in R², it can be regarded as a domain in the torus T² by identifying opposite sides of the square. The theory of Sobolev spaces on T² can be found in standard references, an account which is followed in several later textbooks.
For k a non-negative integer, the (restricted) Sobolev space H^k_0(Ω) is defined as the closure of C^∞_c(Ω), the smooth functions of compact support in Ω, in the standard Sobolev space H^k(T²).
Vanishing properties on boundary: For the elements of are referred to as " functions on which vanish with their first derivatives on ." In fact if agrees with a function in , then is in . Let be such that in the Sobolev norm, and set . Thus in . Hence for and ,
By Green's theorem this implies
where
with the unit normal to the boundary. Since such form a dense subspace of , it follows that on .
Support properties: Let be the complement of and define restricted Sobolev spaces analogously for . Both sets of spaces have a natural pairing with . The Sobolev space for is the annihilator in the Sobolev space for of and that for is the annihilator of . In fact this is proved by locally applying a small translation to move the domain inside itself and then smoothing by a smooth convolution operator.
Suppose in annihilates . By compactness, there are finitely many open sets covering such that the closure of is disjoint from and each is an open disc about a boundary point such that in small translations in the direction of the normal vector carry into . Add an open with closure in to produce a cover of and let be a partition of unity subordinate to this cover. If translation by is denoted by , then the functions
tend to as decreases to and still lie in the annihilator, indeed they are in the annihilator for a larger domain than , the complement of which lies in . Convolving by smooth functions of small support produces smooth approximations in the annihilator of a slightly smaller domain still with complement in . These are necessarily smooth functions of compact support in .
Further vanishing properties on the boundary: The characterization in terms of annihilators shows that lies in if (and only if) it and its derivatives of order less than vanish on . In fact can be extended to by setting it to be on . This extension defines an element in using the formula for the norm
Moreover satisfies for g in .
Duality: For , define to be the orthogonal complement of in . Let be the orthogonal projection onto , so that is the orthogonal projection onto . When , this just gives . If and , then
This implies that under the pairing between and , and are each other's duals.
Approximation by smooth functions: The image of is dense in for . This is obvious for since the sum + is dense in . Density for follows because the image of is dense in and annihilates .
Canonical isometries: The operator gives an isometry of into and of onto . In fact the first statement follows because it is true on . That is an isometry on follows using the density of in : for we have:
Since the adjoint map between the duals can be identified with this map, it follows that is a unitary map.
Application to Dirichlet problem
Invertibility of
The operator defines an isomorphism between and . In fact it is a Fredholm operator of index . The kernel of in consists of constant functions and none of these except zero vanish on the boundary of . Hence the kernel of is and is invertible.
In particular the equation has a unique solution in for in .
Eigenvalue problem
Let be the operator on defined by
where is the inclusion of in and of in , both compact operators by Rellich's theorem. The operator is compact and self-adjoint with for all . By the spectral theorem, there is a complete orthonormal set of eigenfunctions in with
Since , lies in . Setting , the are eigenfunctions of the Laplacian:
Sobolev spaces without boundary condition
To determine the regularity properties of the eigenfunctions and solutions of
enlargements of the Sobolev spaces have to be considered. Let be the space of smooth functions on which together with their derivatives extend continuously to . By Borel's lemma, these are precisely the restrictions of smooth functions on . The Sobolev space is defined to be the Hilbert space completion of this space for the norm
This norm agrees with the Sobolev norm on so that can be regarded as a closed subspace of . Unlike , is not naturally a subspace of , but the map restricting smooth functions from to is continuous for the Sobolev norm so extends by continuity to a map .
Invariance under diffeomorphism: Any diffeomorphism between the closures of two smooth domains induces an isomorphism between the Sobolev space. This is a simple consequence of the chain rule for derivatives.
Extension theorem: The restriction of to the orthogonal complement of its kernel defines an isomorphism onto . The extension map is defined to be the inverse of this map: it is an isomorphism (not necessarily norm preserving) of onto the orthogonal complement of such that . On , it agrees with the natural inclusion map. Bounded extension maps of this kind from to were first constructed by Hestenes and Lions. For smooth curves the Seeley extension theorem provides an extension which is continuous in all the Sobolev norms. A version of the extension which applies in the case where the boundary is just a Lipschitz curve was constructed by Calderón using singular integral operators and was later generalized by Stein.
It is sufficient to construct an extension for a neighbourhood of a closed annulus, since a collar around the boundary is diffeomorphic to an annulus with a closed interval in . Taking a smooth bump function with , equal to 1 near the boundary and 0 outside the collar, will provide an extension on . On the annulus, the problem reduces to finding an extension for in . Using a partition of unity the task of extending reduces to a neighbourhood of the end points of . Assuming 0 is the left end point, an extension is given locally by
Matching the first derivatives of order k or less at 0, gives
This matrix equation is solvable because the determinant is non-zero by Vandermonde's formula. It is straightforward to check that the formula for , when appropriately modified with bump functions, leads to an extension which is continuous in the above Sobolev norm.
Restriction theorem: The restriction map is surjective with . This is an immediate consequence of the extension theorem and the support properties for Sobolev spaces with boundary condition.
Duality: H^k(Ω) is naturally the dual of H^{−k}_0(Ω). Again this is an immediate consequence of the restriction theorem. Thus the Sobolev spaces form a chain:
The differentiation operators carry each Sobolev space into the larger one with index 1 less.
Sobolev embedding theorem: is contained in . This is an immediate consequence of the extension theorem and the Sobolev embedding theorem for .
Characterization: H^k(Ω) consists of f in L²(Ω) such that all the derivatives ∂^α f lie in L²(Ω) for |α| ≤ k. Here the derivatives are taken within the chain of Sobolev spaces above. Since the smooth functions are weakly dense in H^k(Ω), this condition is equivalent to the existence of functions f_α such that
To prove the characterization, note that if f is in H^k(Ω), then ∂^α f lies in H^{k−|α|}(Ω) and hence in L²(Ω). Conversely the result is well known for the Sobolev spaces H^k(T²): the assumption implies that the is in and the corresponding condition on the Fourier coefficients of shows that lies in . Similarly the result can be proved directly for an annulus . In fact by the argument on the restriction of to any smaller annulus [−δ',δ'] × T lies in : equivalently the restriction of the function lies in for . On the other hand in as , so that must lie in . The case for a general domain reduces to these two cases since can be written as with ψ a bump function supported in such that is supported in a collar of the boundary.
Regularity theorem: If in has both derivatives and in then lies in . This is an immediate consequence of the characterization of above. In fact if this is true even when satisfied at the level of distributions: if there are functions g, h in such that (g,φ) = (f, φ_x) and (h,φ) = (f, φ_y) for φ in , then is in .
Rotations on an annulus: For an annulus , the extension map to is by construction equivariant with respect to rotations in the second variable,
On it is known that if is in , then the difference quotient in ; if the difference quotients are bounded in H^k then ∂_y f lies in . Both assertions are consequences of the formula:
These results on imply analogous results on the annulus using the extension.
Regularity for Dirichlet problem
Regularity for dual Dirichlet problem
If with in and in with , then lies in .
Take a decomposition with supported in and supported in a collar of the boundary. Standard Sobolev theory for can be applied to : elliptic regularity implies that it lies in and hence . lies in of a collar, diffeomorphic to an annulus, so it suffices to prove the result with a collar and replaced by
The proof proceeds by induction on , proving simultaneously the inequality
for some constant depending only on . It is straightforward to establish this inequality for , where by density can be taken to be smooth of compact support in :
The collar is diffeomorphic to an annulus. The rotational flow on the annulus induces a flow on the collar with corresponding vector field . Thus corresponds to the vector field . The radial vector field on the annulus is a commuting vector field which on the collar gives a vector field proportional to the normal vector field. The vector fields and commute.
The difference quotients can be formed for the flow . The commutators are second order differential operators from to . Their operators norms are uniformly bounded for near ; for the computation can be carried out on the annulus where the commutator just replaces the coefficients of by their difference quotients composed with . On the other hand, lies in , so the inequalities for apply equally well for :
The uniform boundedness of the difference quotients implies that lies in with
It follows that lies in where is the vector field
Moreover, satisfies a similar inequality to .
Let be the orthogonal vector field
It can also be written as for some smooth nowhere vanishing function on a neighbourhood of the collar.
It suffices to show that lies in . For then
so that and lie in and must lie in .
To check the result on , it is enough to show that and lie in . Note that
are vector fields. But then
with all terms on the right hand side in . Moreover, the inequalities for show that
Hence
Smoothness of eigenfunctions
It follows by induction from the regularity theorem for the dual Dirichlet problem that the eigenfunctions of in lie in . Moreover, any solution of with in and in must have in . In both cases by the vanishing properties, the eigenfunctions and vanish on the boundary of .
Solving the Dirichlet problem
The dual Dirichlet problem can be used to solve the Dirichlet problem:
By Borel's lemma is the restriction of a function in . Let be the smooth solution of with on . Then solves the Dirichlet problem. By the maximum principle, the solution is unique.
Application to smooth Riemann mapping theorem
The solution to the Dirichlet problem can be used to prove a strong form of the Riemann mapping theorem for simply connected domains with smooth boundary. The method also applies to a region diffeomorphic to an annulus. For multiply connected regions with smooth boundary, some authors have given a method for mapping the region onto a disc with circular holes. Their method involves solving the Dirichlet problem with a non-linear boundary condition. They construct a function such that:
is harmonic in the interior of ;
On we have: , where is the curvature of the boundary curve, is the derivative in the direction normal to and is constant on each boundary component.
This gives a proof of the Riemann mapping theorem for a simply connected domain with smooth boundary. Translating if necessary, it can be assumed that . The solution of the Dirichlet problem shows that there is a unique smooth function on which is harmonic in and equals on . Define the Green's function by . It vanishes on and is harmonic on away from . The harmonic conjugate of is the unique real function on such that is holomorphic. As such it must satisfy the Cauchy–Riemann equations:
The solution is given by
where the integral is taken over any path in . It is easily verified that and exist and are given by the corresponding derivatives of . Thus is a smooth function on , vanishing at . By the Cauchy-Riemann is smooth on , holomorphic on and . The function is only defined up to multiples of , but the function
is holomorphic on and smooth on . By construction, and for . Since has winding number , so too does . On the other hand, only for where there is a simple zero. So by the argument principle assumes every value in the unit disc, , exactly once and does not vanish inside . To check that the derivative on the boundary curve is non-zero amounts to computing the derivative of , i.e. the derivative of should not vanish on the boundary curve. By the Cauchy–Riemann equations these tangential derivatives are up to a sign the directional derivatives in the direction of the normal to the boundary. But vanishes on the boundary and is strictly negative in since . The Hopf lemma implies that the directional derivative of in the direction of the outward normal is strictly positive. So on the boundary curve, has nowhere vanishing derivative. Since the boundary curve has winding number one, defines a diffeomorphism of the boundary curve onto the unit circle. Accordingly, is a smooth diffeomorphism, which restricts to a holomorphic map and a smooth diffeomorphism between the boundaries.
Similar arguments can be applied to prove the Riemann mapping theorem for a doubly connected domain bounded by simple smooth curves (the inner curve) and (the outer curve). By translating we can assume 1 lies on the outer boundary. Let be the smooth solution of the Dirichlet problem with on the outer curve and on the inner curve. By the maximum principle for in and so by the Hopf lemma the normal derivatives of are negative on the outer curve and positive on the inner curve. The integral of over the boundary is zero by Stokes' theorem so the contributions from the boundary curves cancel. On the other hand, on each boundary curve the contribution is the integral of the normal derivative along the boundary. So there is a constant such that satisfies
on each boundary curve. The harmonic conjugate of can again be defined by
and is well-defined up to multiples of . The function
is smooth on and holomorphic in . On the outer curve and on the inner curve . The tangential derivatives on the outer curves are nowhere vanishing by the Cauchy-Riemann equations, since the normal derivatives are nowhere vanishing. The normalization of the integrals implies that restricts to a diffeomorphism between the boundary curves and the two concentric circles. Since the images of outer and inner curve have winding number and about any point in the annulus, an application of the argument principle implies that assumes every value within the annulus exactly once; since that includes multiplicities, the complex derivative of is nowhere vanishing in . This is a smooth diffeomorphism of onto the closed annulus , restricting to a holomorphic map in the interior and a smooth diffeomorphism on both boundary curves.
Trace map
The restriction map extends to a continuous map for . In fact
so the Cauchy–Schwarz inequality yields
where, by the integral test,
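The displayed estimates here did not survive extraction; the following is a reconstruction of the standard Fourier-series argument (a sketch, assuming the trace is taken on the circle x₂ = 0 of T² and k ≥ 1):

```latex
% Trace of f(x_1,x_2) = \sum \hat f(m,n) e^{i(m x_1 + n x_2)} on x_2 = 0:
(\tau f)(x_1) = \sum_m \Bigl( \sum_n \hat f(m,n) \Bigr) e^{i m x_1},
% Cauchy--Schwarz in n:
\Bigl| \sum_n \hat f(m,n) \Bigr|^2
  \le \Bigl( \sum_n (1+m^2+n^2)^{-k} \Bigr)
      \Bigl( \sum_n (1+m^2+n^2)^{k} \, |\hat f(m,n)|^2 \Bigr),
% and by the integral test \sum_n (1+m^2+n^2)^{-k} \le C_k (1+m^2)^{1/2-k},
% so summing over m gives
\|\tau f\|_{H^{k-1/2}(\mathbf{T})} \le C \, \|f\|_{H^k(\mathbf{T}^2)}.
```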
The map is onto since a continuous extension map can be constructed from to . In fact set
where
Thus . If g is smooth, then by construction Eg restricts to g on 1 × T. Moreover, E is a bounded linear map since
It follows that there is a trace map τ of H^k(Ω) onto H^{k − 1/2}(∂Ω). Indeed, take a tubular neighbourhood of the boundary and a smooth function ψ supported in the collar and equal to 1 near the boundary. Multiplication by ψ carries functions into H^k of the collar, which can be identified with H^k of an annulus for which there is a trace map. The invariance under diffeomorphisms (or coordinate change) of the half-integer Sobolev spaces on the circle follows from the fact that an equivalent norm on H^{k + 1/2}(T) is given by the Sobolev–Slobodeckij expression

$$\|f\|_{k+\frac12}^2 = \|f\|_{H^k(\mathbf{T})}^2 + \int_{\mathbf{T}}\!\int_{\mathbf{T}} \frac{|f^{(k)}(x)-f^{(k)}(y)|^2}{|e^{ix}-e^{iy}|^2}\,dx\,dy.$$
It is also a consequence of the properties of τ and E (the "trace theorem"). In fact any diffeomorphism f of T induces a diffeomorphism F of T² by acting only on the second factor. Invariance of H^k(T²) under the induced map F* therefore implies invariance of H^{k − 1/2}(T) under f*, since f* = τ ∘ F* ∘ E.
Further consequences of the trace theorem are the two exact sequences
and
where the last map takes f in H²(Ω) to f|_{∂Ω} and ∂_n f|_{∂Ω}. There are generalizations of these sequences to H^k(Ω) involving higher powers of the normal derivative in the trace map:
The trace map to takes f to
Abstract formulation of boundary value problems
The Sobolev space approach to the Neumann problem cannot be phrased quite as directly as that for the Dirichlet problem. The main reason is that for a function in , the normal derivative cannot be a priori defined at the level of Sobolev spaces. Instead an alternative formulation of boundary value problems for the Laplacian on a bounded region in the plane is used. It employs Dirichlet forms, sesquilinear forms on , or an intermediate closed subspace. Integration over the boundary is not involved in defining the Dirichlet form. Instead, if the Dirichlet form satisfies a certain positivity condition, termed coerciveness, solutions can be shown to exist in a weak sense, so-called "weak solutions". A general regularity theorem then implies that the solutions of the boundary value problem must lie in , so that they are strong solutions and satisfy boundary conditions involving the restriction of a function and its normal derivative to the boundary. The Dirichlet problem can equally well be phrased in these terms, but because the trace map is already defined on , Dirichlet forms do not need to be mentioned explicitly and the operator formulation is more direct. A unified discussion is briefly summarised below. It is explained how the Dirichlet problem, as discussed above, fits into this framework. Then a detailed treatment of the Neumann problem from this point of view is given.
The Hilbert space formulation of boundary value problems for the Laplacian on a bounded region in the plane proceeds from the following data:
A closed subspace .
A Dirichlet form for given by a bounded Hermitian bilinear form defined for such that for .
is coercive, i.e. there is a positive constant and a non-negative constant such that .
A weak solution of the boundary value problem given initial data in is a function u satisfying
for all g.
For both the Dirichlet and Neumann problem
For the Dirichlet problem . In this case
By the trace theorem the solution satisfies in .
For the Neumann problem is taken to be .
Application to Neumann problem
The classical Neumann problem on consists in solving the boundary value problem
Green's theorem implies that for
Thus if in and satisfies the Neumann boundary conditions, , and so is constant in .
Hence the Neumann problem has a unique solution up to adding constants.
Consider the Hermitian form on defined by
Since is in duality with , there is a unique element in such that
The map is an isometry of onto , so in particular is bounded.
In fact
So
On the other hand, any in defines a bounded conjugate-linear form on sending to . By the Riesz–Fischer theorem, there exists such that
Hence and so is surjective. Define a bounded linear operator on by
where is the map , a compact operator, and is the map , its adjoint, so also compact.
The operator has the following properties:
is a contraction since it is a composition of contractions
is compact, since and are compact by Rellich's theorem
is self-adjoint, since if , they can be written with so
has positive spectrum and kernel , for
and implies and hence .
There is a complete orthonormal basis of consisting of eigenfunctions of . Thus
with and decreasing to .
The eigenfunctions all lie in since the image of lies in .
The are eigenfunctions of with
Thus are non-negative and increase to .
The eigenvalue occurs with multiplicity one and corresponds to the constant function. For if satisfies , then
so is constant.
Regularity for Neumann problem
Weak solutions are strong solutions
The first main regularity result shows that a weak solution expressed in terms of the operator and the Dirichlet form is a strong solution in the classical sense, expressed in terms of the Laplacian and the Neumann boundary conditions. Thus if with , then , satisfies and . Moreover, for some constant independent of ,
Note that
since
Take a decomposition with supported in and supported in a collar of the boundary.
The operator is characterized by
Then
so that
The function and are treated separately, being essentially subject to usual elliptic regularity considerations for interior points while requires special treatment near the boundary using difference quotients. Once the strong properties are established in terms of and the Neumann boundary conditions, the "bootstrap" regularity results can be proved exactly as for the Dirichlet problem.
Interior estimates
The function lies in where is a region with closure in . If and
By continuity the same holds with replaced by and hence . So
Hence regarding as an element of , . Hence . Since for , we have . Moreover,
so that
Boundary estimates
The function is supported in a collar contained in a tubular neighbourhood of the boundary. The difference quotients can be formed for the flow and lie in , so the first inequality is applicable:
The commutators are uniformly bounded as operators from to . This is equivalent to checking the inequality
for , smooth functions on a collar. This can be checked directly on an annulus, using invariance of Sobolev spaces under diffeomorphisms and the fact that for the annulus the commutator of with a differential operator is obtained by applying the difference operator to the coefficients after having applied to the function:
Hence the difference quotients are uniformly bounded, and therefore with
Hence and satisfies a similar inequality to :
Let be the orthogonal vector field. As for the Dirichlet problem, to show that , it suffices to show that .
To check this, it is enough to show that . As before
are vector fields. On the other hand, for , so that and define the same distribution on . Hence
Since the terms on the right hand side are pairings with functions in , the regularity criterion shows that . Hence since both terms lie in and have the same inner products with 's.
Moreover, the inequalities for show that
Hence
It follows that . Moreover,
Neumann boundary conditions
Since , Green's theorem is applicable by continuity. Thus for ,
Hence the Neumann boundary conditions are satisfied:
where the left hand side is regarded as an element of and hence .
Regularity of strong solutions
The main result here states that if and , then and
for some constant independent of .
Like the corresponding result for the Dirichlet problem, this is proved by induction on . For , is also a weak solution of the Neumann problem so satisfies the estimate above for . The Neumann boundary condition can be written
Since commutes with the vector field corresponding to the period flow , the inductive method of proof used for the Dirichlet problem works equally well in this case: for the difference quotients preserve the boundary condition when expressed in terms of .
Smoothness of eigenfunctions
It follows by induction from the regularity theorem for the Neumann problem that the eigenfunctions of in lie in . Moreover, any solution of with in and in must have in . In both cases by the vanishing properties, the normal derivatives of the eigenfunctions and vanish on .
Solving the associated Neumann problem
The method above can be used to solve the associated Neumann boundary value problem:
By Borel's lemma is the restriction of a function . Let be a smooth function such that near the boundary. Let be the solution of with . Then solves the boundary value problem.
Notes
References
Partial differential equations
Harmonic analysis
Operator theory
Functional analysis | Sobolev spaces for planar domains | Mathematics | 5,304 |
68,669,415 | https://en.wikipedia.org/wiki/Telosa | Telosa is a proposed utopian planned US city conceived by American billionaire Marc Lore and announced in September 2021. The project has a target population of 5 million people by 2050, with the first phase of construction expected to house 50,000. A location has not been chosen; the project's planners intend the city to be built on cheap land in Appalachia or the desert of the American West.
The name Telosa is derived from the Ancient Greek word telos, in this case meaning "purpose".
Planning
Telosa was conceived by former Walmart U.S. eCommerce president and billionaire Marc Lore. In a statement announcing his resignation from Walmart, Lore expressed his desire to construct a "city of the future" based on a "reformed version of capitalism". Lore refers to his design philosophy for the city as "equitism", described as "a new model for society, where wealth is created in a fair way... It's not burdening the wealthy; it's not increasing taxes. It is simply giving back to the citizens and the people the wealth that they helped create".
Lore hired the architectural firm Bjarke Ingels Group, owned by Danish architect Bjarke Ingels, to handle the proposed city's master planning.
Features
Telosa is planned to be a 15-minute city, with workplaces, schools, and basic goods and services being within a 15-minute commute from residents' homes. Vehicles that are powered by fossil fuels will not be permitted within the city, with an emphasis instead being placed upon walkability and the use of scooters, bicycles, and autonomous electric vehicles.
A massive skyscraper, dubbed "Equitism Tower", is conceived to serve as a "beacon for the city". The skyscraper's projected features include space for water storage, aeroponic farms, and a photovoltaic roof.
The proposed land ownership in the city is based on Georgist principles, as advocated by political economist Henry George in his 1879 book Progress and Poverty. Under the proposed rules, anyone would be licensed to build, keep or sell a home, building or any other structure, and residents would share ownership of the land under a community endowment.
Possible locations
The project's planners intend the city to be built on cheap desert land in a location not yet decided, with Utah, Idaho, Nevada, Arizona, Texas, and Appalachia proposed as potential locations.
Reception
Writing in Timeout.com in September 2021, Ed Cunningham stated that "the blueprint designs are, depending on your taste, either dazzlingly utopian or unsettlingly dystopian. There’s plenty of innovative architecture on display, alongside futuristic visions of public transport and spaces filled with greenery and nature." It has been criticized as being an unrealistic vanity project which would be less sustainable than building upon existing urban areas.
See also
Neom
References
External links
Architecture related to utopias
Georgist communities
Proposed populated places in the United States
Utopian communities in the United States | Telosa | Engineering | 621 |
14,131,481 | https://en.wikipedia.org/wiki/BAG1 | BAG family molecular chaperone regulator 1 is a protein that in humans is encoded by the BAG1 gene.
Function
The oncogene BCL2 is a membrane protein that blocks a step in a pathway leading to apoptosis or programmed cell death. The protein encoded by this gene binds to BCL2 and is referred to as BCL2-associated athanogene. It enhances the anti-apoptotic effects of BCL2 and represents a link between growth factor receptors and anti-apoptotic mechanisms. At least three protein isoforms are encoded by this mRNA through the use of alternative translation initiation sites, including a non-AUG site.
Clinical significance
The BAG1 gene has been implicated in age-related neurodegenerative diseases such as Alzheimer's disease. It has been demonstrated that BAG1 and BAG3 regulate the proteasomal and lysosomal protein elimination pathways, respectively.
Interactions
BAG1 has been shown to interact with:
Androgen receptor,
C-Raf,
Calcitriol receptor,
Glucocorticoid receptor,
HSPA8,
HBEGF,
PPP1R15A,
NR1B1, and
SIAH1.
References
External links
Further reading
Ageing
Oncogenes
Aging-related genes
Aging-related proteins
Co-chaperones | BAG1 | Biology | 267 |
644,779 | https://en.wikipedia.org/wiki/F-theory | In theoretical physics, F-theory is a branch of string theory developed by Iranian-American physicist Cumrun Vafa. The new vacua described by F-theory were discovered by Vafa and allowed string theorists to construct new realistic vacua, in the form of F-theory compactified on elliptically fibered Calabi–Yau four-folds. The letter "F" supposedly stands for "Father", in relation to "Mother" theory.
Compactifications
F-theory is formally a 12-dimensional theory, but the only way to obtain an acceptable background is to compactify this theory on a two-torus. By doing so, one obtains type IIB superstring theory in 10 dimensions. The SL(2,Z) S-duality symmetry of the resulting type IIB string theory is manifest because it arises as the group of large diffeomorphisms of the two-dimensional torus.
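Concretely (a standard identification, stated here as a sketch): the complex-structure modulus τ of the torus fiber is identified with the type IIB axio-dilaton, so large diffeomorphisms of the fiber act on it exactly as S-duality transformations:

```latex
\tau = C_0 + i\, e^{-\phi}, \qquad
\tau \mapsto \frac{a\tau + b}{c\tau + d}, \qquad
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL(2,\mathbb{Z}), \quad ad - bc = 1.
```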
More generally, one can compactify F-theory on an elliptically fibered manifold (elliptic fibration), i.e. a fiber bundle whose fiber is a two-dimensional torus (also called an elliptic curve). For example, a subclass of the K3 manifolds is elliptically fibered, and F-theory on a K3 manifold is dual to heterotic string theory on a two-torus. Also, the moduli spaces of those theories should be isomorphic.
The large number of semirealistic solutions to string theory referred to as the string theory landscape, with 10^272,000 elements or so, is dominated by F-theory compactifications on Calabi–Yau four-folds. There are about 10^15 of those solutions consistent with the Standard Model of particle physics.
Phenomenology
New models of Grand Unified Theory have recently been developed using F-theory.
Extra time dimension
F-theory has the metric signature (10,2), which means that it includes a second time dimension.
See also
Dilaton
Axion
M-theory
References
String theory | F-theory | Astronomy | 413 |
67,237,637 | https://en.wikipedia.org/wiki/Dragvanti | Dragvanti (stylized DragVanti) is a web portal dedicated to drag performers based in India.
History
DragVanti was launched on June 20, 2020 by Patruni Sastry. The platform also connects emerging drag artists to the entertainment industry. Originally, DragVanti was only a website; from 2019 to 2021 it was also a monthly publication that was circulated online at no cost. The drag directory was launched in June 2020.
Patruni Sastry, who founded the platform, said when asked about the intent of creating it: "When I started performing drag in 2019, there was no content about Indian drag available; the only content coming in was that from the West. However, drag is present in classical Indian culture, with a mention of it occurring in the Nātya Śāstra, a record of Indian performance art estimated to be around 2,000 years old. Yet today, we don't acknowledge what drag artists are doing within India."
Events
In June 2020, DragVanti co-hosted the Pride Online fest in collaboration with Social Samosa, featuring curated drag panel discussions and performances.
In August 2020, DragVanti hosted a TED circle for drag performers.
In March 2021, DragVanti hosted open online mic evenings via its social media handles.
In June 2021, as part of Pride Month celebrations, DragVanti organized India's first drag conference, with more than 6 drag queens, to initiate academic discussion in the field of drag.
In August 2021, DragVanti hosted India's first BI/PAN festival to create awareness of the bisexuality and pansexuality spectrums.
DragVanti also hosts an annual celebration of queer Halloween.
References
External links
Website
Indian websites
LGBTQ-related websites
LGBTQ-related Internet forums | Dragvanti | Technology | 357 |
14,777,789 | https://en.wikipedia.org/wiki/Coxeter%E2%80%93Todd%20lattice | In mathematics, the Coxeter–Todd lattice K12, discovered by Coxeter and Todd (1953), is a 12-dimensional even integral lattice of discriminant 3^6 with no norm-2 vectors. It is the sublattice of the Leech lattice fixed by a certain automorphism of order 3, and is analogous to the Barnes–Wall lattice. The automorphism group of the Coxeter–Todd lattice has order 2^10·3^7·5·7 = 78,382,080, and there are 756 vectors in this lattice of norm 4 (the shortest nonzero vectors in this lattice).
Properties
The Coxeter–Todd lattice can be made into a 6-dimensional lattice that is self-dual over the Eisenstein integers. The automorphism group of this complex lattice has index 2 in the full automorphism group of the Coxeter–Todd lattice and is a complex reflection group (number 34 on the Shephard–Todd list) with structure 6.PSU4(F3).2, called the Mitchell group.
The genus of the Coxeter–Todd lattice has been described and has 10 isometry classes: all of them other than the Coxeter–Todd lattice itself have a root system of maximal rank 12.
Construction
Based on the description on Nebe's web page, K12 can be defined using the following 6 vectors in 6-dimensional complex coordinates, where ω is a complex cube root of unity, i.e. ω³ = 1:
(1,0,0,0,0,0), (0,1,0,0,0,0), (0,0,1,0,0,0),
½(1,ω,ω,1,0,0), ½(ω,1,ω,0,1,0), ½(ω,ω,1,0,0,1).
By adding vectors having scalar product −½ and multiplying by ω, all lattice vectors can be obtained. The 15 placements of the two zero coordinates times 16 possible signs give 240 vectors; together with the 6 unit vectors and their negatives, this gives 240 + 12 = 252 vectors. Multiplying these by the powers of ω yields the 756 minimal vectors of the K12 lattice.
Further reading
The Coxeter–Todd lattice is described in detail in the references below.
References
External links
Coxeter–Todd lattice in Sloane's lattice catalogue
Quadratic forms
Lattice points | Coxeter–Todd lattice | Mathematics | 474 |
2,142,405 | https://en.wikipedia.org/wiki/Palm%20i705 | The Palm i705 was an upgrade from the last series of Palm PDAs to use the now-discontinued Palm.net service via Mobitex to access the World Wide Web from Palm devices. It featured 8 MB of onboard memory and an SD/MMC slot for additional storage or SDIO cards. It used the Motorola Dragonball VZ 33 MHz processor and ran Palm OS 4.1. It was noted as being the first Palm.net-capable device without a flip-out antenna and with an internal rechargeable battery, although it was the third and final of the three models manufactured by Palm that were capable of utilizing this network.
See also
Palm.net
Palm (PDA)
Palm OS
PalmSource, Inc.
Palm, Inc.
Graffiti (Palm OS)
External links
Palm i705 Handheld Debuts: Only Secure, Integrated Wireless, Email Solution With Web Access, Palm Press Release, January 28, 2002
i705
68k-based mobile devices | Palm i705 | Technology | 196 |
2,909,609 | https://en.wikipedia.org/wiki/Forest-fire%20model | In applied mathematics, a forest-fire model is any of a number of dynamical systems displaying self-organized criticality. Note, however, that according to Pruessner et al. (2002, 2004) the forest-fire model does not behave critically on very large, i.e. physically relevant, scales. Early versions go back to Henley (1989) and Drossel and Schwabl (1992). The model is defined as a cellular automaton on a grid with L^d cells, where L is the side length of the grid and d is its dimension. A cell can be empty, occupied by a tree, or burning. The model of Drossel and Schwabl (1992) is defined by four rules which are executed simultaneously:
A burning cell turns into an empty cell
A tree will burn if at least one neighbor is burning
A tree ignites with probability f even if no neighbor is burning
An empty space fills with a tree with probability p
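A minimal NumPy sketch of one synchronous update implementing these four rules might look as follows; the periodic boundary conditions, grid size, and parameter values are illustrative assumptions rather than part of the model definition:

import numpy as np

EMPTY, TREE, FIRE = 0, 1, 2

def step(grid, p, f, rng):
    # All four rules are evaluated against the old state and applied at once.
    burning = (grid == FIRE)
    # Von Neumann neighborhood, wrapped periodically via np.roll (an assumption).
    neighbor_on_fire = (np.roll(burning, 1, axis=0) | np.roll(burning, -1, axis=0) |
                        np.roll(burning, 1, axis=1) | np.roll(burning, -1, axis=1))
    new = grid.copy()
    new[burning] = EMPTY                                      # rule 1: a burning cell becomes empty
    new[(grid == TREE) & neighbor_on_fire] = FIRE             # rule 2: a tree with a burning neighbor burns
    struck = (grid == TREE) & ~neighbor_on_fire & (rng.random(grid.shape) < f)
    new[struck] = FIRE                                        # rule 3: lightning ignites a tree with probability f
    grown = (grid == EMPTY) & (rng.random(grid.shape) < p)
    new[grown] = TREE                                         # rule 4: an empty cell grows a tree with probability p
    return new

rng = np.random.default_rng(seed=1)
grid = rng.choice([EMPTY, TREE], size=(128, 128))             # L = 128, d = 2
for _ in range(2000):
    grid = step(grid, p=0.01, f=0.00005, rng=rng)             # p/f = 200 trees per lightning strike
print("tree density:", (grid == TREE).mean())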
The controlling parameter of the model is p/f, which gives the average number of trees planted between two lightning strikes (see Schenk et al. (1996) and Grassberger (1993)). In order to exhibit a fractal frequency-size distribution of clusters, a double separation of time scales is necessary,
f ≪ p ≪ 1/Tsmax,
where Tsmax is the burn time of the largest cluster. The scaling behavior is not simple, however (Grassberger 1993, 2002 and Pruessner et al. 2002, 2004).
A cluster is defined as a coherent set of cells, all of which have the same state. Cells are coherent if they can reach each other via nearest neighbor relations. In most cases, the von Neumann neighborhood (four adjacent cells) is considered.
The first condition allows large structures to develop, while the second condition keeps trees from popping up alongside a cluster while it is burning.
In landscape ecology, the forest fire model is used to illustrate the role of the fuel mosaic in the wildfire regime. The importance of the fuel mosaic on wildfire spread is debated. Parsimonious models such as the forest fire model can help to explore the role of the fuel mosaic and its limitations in explaining observed patterns.
References
Henley, C. L. (1989), "Self-organized percolation: a simpler model." Bull. Am. Phys. Soc. 34, 838.
External links
An HTML 5 demo of the forest fire model
Self-organization | Forest-fire model | Mathematics | 491 |
507,209 | https://en.wikipedia.org/wiki/Hyperfactorial | In mathematics, and more specifically number theory, the hyperfactorial of a positive integer n is the product of the numbers of the form k^k from 1^1 to n^n.
Definition
The hyperfactorial H(n) of a positive integer n is the product of the numbers 1^1, 2^2, ..., n^n. That is, H(n) = 1^1 · 2^2 · ... · n^n.
Following the usual convention for the empty product, the hyperfactorial of 0 is 1. The sequence of hyperfactorials, beginning with H(1) = 1, is: 1, 4, 108, 27648, 86400000, ...
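Since the definition is a finite product, it translates directly into a few lines of Python (a minimal sketch; math.prod requires Python 3.8 or later):

from math import prod

def hyperfactorial(n):
    # H(n) = 1^1 * 2^2 * ... * n^n; the empty product gives H(0) = 1.
    return prod(k**k for k in range(1, n + 1))

print([hyperfactorial(n) for n in range(6)])
# [1, 1, 4, 108, 27648, 86400000]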
Interpolation and approximation
The hyperfactorials were studied beginning in the 19th century by Hermann Kinkelin and James Whitbread Lee Glaisher. As Kinkelin showed, just as the factorials can be continuously interpolated by the gamma function, the hyperfactorials can be continuously interpolated by the K-function.
Glaisher provided an asymptotic formula for the hyperfactorials, analogous to Stirling's formula for the factorials:
H(n) ~ A n^((6n^2 + 6n + 1)/12) e^(−n^2/4),
where A ≈ 1.28243 is the Glaisher–Kinkelin constant.
Other properties
According to an analogue of Wilson's theorem on the behavior of factorials modulo prime numbers, when p is an odd prime number,
H(p − 1) ≡ (−1)^((p−1)/2) (p − 1)!! (mod p),
where !! is the notation for the double factorial.
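A quick numerical check of this congruence for the first few odd primes (a sketch reusing the hyperfactorial function defined above):

from math import prod

def hyperfactorial(n):
    return prod(k**k for k in range(1, n + 1))

def double_factorial(n):
    # n!! = n * (n-2) * (n-4) * ... down to 1 or 2.
    return prod(range(n, 0, -2))

for p in (3, 5, 7, 11, 13):
    lhs = hyperfactorial(p - 1) % p
    rhs = ((-1) ** ((p - 1) // 2) * double_factorial(p - 1)) % p
    print(p, lhs == rhs)   # prints True for each listed prime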
The hyperfactorials give the sequence of discriminants of Hermite polynomials in their probabilistic formulation.
References
External links
Integer sequences
Factorial and binomial topics | Hyperfactorial | Mathematics | 269 |
5,913,863 | https://en.wikipedia.org/wiki/Pantetheine | Pantetheine is the cysteamine amide analog of pantothenic acid (vitamin B5). The dimer of this compound, pantethine, is more commonly known and is considered to be the most potent form of vitamin B5. Pantetheine is an intermediate in the catabolism of coenzyme A by the body.
Metabolism
Pantetheine is the product of dephosphorylation of phosphopantetheine:
phosphopantetheine → pantetheine + Pi
In E. coli, this reaction is catalyzed, for example, by alkaline phosphatase. The reverse reaction, phosphopantetheine synthesis, is catalyzed by various kinases:
pantetheine + ATP → phosphopantetheine + ADP
These kinases are able to act upon pantothenic acid as well and are present in both microorganisms and animal livers.
Pantetheine is degraded by pantetheinase, which splits it into cysteamine and pantothenic acid:
pantetheine → cysteamine + pantothenate
Prebiotic evolution
Since pantetheine is a part of coenzyme A, a common cofactor, it is thought to have been present in prebiotic soup. A synthesis mechanism has also been suggested.
References
Carboxamides
Thiols
Vitamins
Diols | Pantetheine | Chemistry | 300 |
49,392,240 | https://en.wikipedia.org/wiki/Pentafluorothiophenol | Pentafluorothiophenol is an organosulfur compound with the formula C6F5SH. It is a colorless volatile liquid. The compound is prepared by the reaction of sodium hydrosulfide and hexafluorobenzene. With a pKa of 2.68, it is one of the most acidic thiols. Its conjugate base has been used as a ligand in coordination chemistry.
Related compounds
Pentafluorophenol
References
Thiols
Fluoroarenes
Foul-smelling chemicals | Pentafluorothiophenol | Chemistry | 112 |
2,155,746 | https://en.wikipedia.org/wiki/Mobile%20phone%20feature | A mobile phone feature is a capability, service, or application that a mobile phone offers to its users. Mobile phones are often referred to as feature phones, and offer basic telephony. Handsets with more advanced computing ability through the use of native code try to differentiate their own products by implementing additional functions to make them more attractive to consumers. This has led to great innovation in mobile phone development over the past 20 years.
The common components found on all phones are:
A number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips.
A battery (typically a lithium-ion battery), providing the power source for the phone functions.
An input mechanism to allow the user to interact with the phone. The most common input mechanism is a keypad, but touch screens are also found in smartphones.
Basic mobile phone services to allow users to make calls and send text messages.
All GSM phones use a SIM card to allow an account to be swapped among devices. Some CDMA devices also have a similar card called an R-UIM.
Individual GSM, WCDMA, IDEN and some satellite phone devices are uniquely identified by an International Mobile Equipment Identity (IMEI) number.
All mobile phones are designed to work on cellular networks and contain a standard set of services that allow phones of different types and in different countries to communicate with each other. However, they can also support other features added by various manufacturers over the years:
roaming which permits the same phone to be used in multiple countries, providing that the operators of both countries have a roaming agreement.
send and receive data and faxes (if a computer is attached), access WAP services, and provide full Internet access using technologies such as GPRS.
applications like a clock, alarm, calendar, contacts, and calculator and a few games.
Sending and receiving pictures and videos (without using the internet) through MMS, and for short distances with e.g. Bluetooth.
In multimedia phones, Bluetooth is a common and important feature.
GPS receivers integrated or connected (e.g. using Bluetooth) to cell phones, primarily to aid in dispatching emergency responders and road tow truck services. This feature is generally referred to as E911.
Push to Talk over Cellular, available on some mobile phones, is a feature that allows the user to be heard only while the talk button is held, similar to a walkie-talkie.
A hardware notification LED on some phones.
MOS integrated circuit chips
A typical smartphone contains a number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, which in turn contain billions of tiny MOS field-effect transistors (MOSFETs). A typical smartphone contains the following MOS IC chips.
Application processor (CMOS system-on-a-chip)
Flash memory (floating-gate MOS memory)
Cellular modem (baseband RF CMOS)
RF transceiver (RF CMOS)
Phone camera image sensor (CMOS image sensor)
Power management integrated circuit (power MOSFETs)
Display driver (LCD or LED driver)
Wireless communication chips (Wi-Fi, Bluetooth, GPS receiver)
Sound chip (audio codec and power amplifier)
Gyroscope
Capacitive touchscreen controller (ASIC and DSP)
RF power amplifier (LDMOS)
User interface
Besides the number keypad and buttons for accepting and declining calls (typically from left to right and coloured green and red respectively), button mobile phones commonly feature two option keys, one to the left and one to the right, and a four-directional D-pad which may feature a center button which acts in resemblance to an "Enter" and "OK" button.
A pushable scroll wheel has been implemented in the 1990s on the Nokia 7110.
Software, applications and services
In the early stages, every mobile phone company had its own user interface, which can be considered a "closed" operating system, since there was minimal configurability. A limited variety of basic applications (usually games, accessories like a calculator or conversion tool, and so on) was usually included with the phone, and those were not available otherwise. Early mobile phones included a basic web browser for reading simple WAP pages. Handhelds (personal digital assistants like the Palm, running Palm OS) were more sophisticated and also included a more advanced browser and a touch screen (for use with a stylus), but these were not broadly used compared to standard phones. Other capabilities, like pulling and pushing email or working with a calendar, were also made more accessible, but they usually required physical (not wireless) syncing. The BlackBerry 850, an email pager released January 19, 1999, was the first device to integrate email.
A major step towards a more "open" mobile OS was the Symbian S60 OS, which could be expanded by downloading software (written in C++, Java or Python), and whose appearance was more configurable. In July 2008, Apple introduced its App Store, which made downloading mobile applications more accessible. In October 2008, the HTC Dream was the first commercially released device to use the Linux-based Android OS, which was purchased and further developed by Google and the Open Handset Alliance to create an open competitor to other major smartphone platforms of the time (mainly the Symbian operating system, BlackBerry OS, and iOS). The operating system offered a customizable graphical user interface and a notification system showing a list of recent messages pushed from apps.
The most commonly used data application on mobile phones is SMS text messaging. The first SMS text message was sent from a computer to a mobile phone in 1992 in the UK, while the first person-to-person SMS from phone to phone was sent in Finland in 1993.
The first mobile news service, delivered via SMS, was launched in Finland in 2000. Mobile news services are expanding with many organizations providing "on-demand" news services by SMS. Some also provide "instant" news pushed out by SMS.
Mobile payments were first trialled in Finland in 1998 when two Coca-Cola vending machines in Espoo were enabled to work with SMS payments. Eventually, the idea spread and in 1999 the Philippines launched the first commercial mobile payments systems, on the mobile operators Globe and Smart. Today, mobile payments ranging from mobile banking to mobile credit cards to mobile commerce are very widely used in Asia and Africa, and in selected European markets. Usually, the SMS services utilize short code.
Some network operators have utilized USSD for information, entertainment or finance services (e.g. M-Pesa).
Other non-SMS data services used on mobile phones include mobile music, downloadable logos and pictures, gaming, gambling, adult entertainment and advertising. The first downloadable mobile content was sold to a mobile phone in Finland in 1998, when Radiolinja (now Elisa) introduced the downloadable ringtone service. In 1999, Japanese mobile operator NTT DoCoMo introduced its mobile Internet service, i-Mode, which today is the world's largest mobile Internet service.
Even after the appearance of smartphones, network operators have continued to offer information services, although in some places, those services have become less common.
Power supply
Mobile phones generally obtain power from rechargeable batteries. There are a variety of ways used to charge cell phones, including USB, portable batteries, mains power (using an AC adapter), cigarette lighters (using an adapter), or a dynamo. In 2009, the first wireless charger was released for consumer use. Some manufacturers have been experimenting with alternative power sources, including solar cells.
Various initiatives, such as the EU Common External Power Supply have been announced to standardize the interface to the charger, and to promote energy efficiency of mains-operated chargers. A star rating system is promoted by some manufacturers, where the most efficient chargers consume less than 0.03 watts and obtain a five-star rating.
Battery
Most modern mobile phones use a lithium-ion battery. A popular early mobile phone battery was the nickel metal-hydride (NiMH) type, due to its relatively small size and low weight. Lithium-ion batteries later became commonly used, as they are lighter and do not have the voltage depression due to long-term over-charging that nickel metal-hydride batteries do. Many mobile phone manufacturers use lithium–polymer batteries as opposed to the older lithium-ion, the main advantages being even lower weight and the possibility to make the battery a shape other than strict cuboid.
SIM card
GSM mobile phones require a small microchip called a Subscriber Identity Module or SIM card, to function. The SIM card is approximately the size of a small postage stamp and is usually placed underneath the battery in the rear of the unit. The SIM securely stores the service-subscriber key (IMSI) used to identify a subscriber on mobile telephony devices (such as mobile phones and computers). The SIM card allows users to change phones by simply removing the SIM card from one mobile phone and inserting it into another mobile phone or broadband telephony device.
A SIM card contains its unique serial number, internationally unique number of the mobile user (IMSI), security authentication and ciphering information, temporary information related to the local network, a list of the services the user has access to and two passwords (PIN for usual use and PUK for unlocking).
SIM cards are available in three standard sizes. The first is the size of a credit card (85.60 mm × 53.98 mm x 0.76 mm, defined by ISO/IEC 7810 as ID-1). The newer, most popular miniature version has the same thickness but a length of 25 mm and a width of 15 mm (ISO/IEC 7810 ID-000), and has one of its corners truncated (chamfered) to prevent misinsertion. The newest incarnation known as the 3FF or micro-SIM has dimensions of 15 mm × 12 mm. Most cards of the two smaller sizes are supplied as a full-sized card with the smaller card held in place by a few plastic links; it can easily be broken off to be used in a device that uses the smaller SIM.
The first SIM card was made in 1991 by Munich smart card maker Giesecke & Devrient for the Finnish wireless network operator Radiolinja. Giesecke & Devrient sold the first 300 SIM cards to Elisa (ex. Radiolinja).
Those cell phones that do not use a SIM card have the data programmed into their memory. This data is accessed by using a special digit sequence to access the "NAM" as in "Name" or number programming menu. From there, information can be added, including a new number for the phone, new Service Provider numbers, new emergency numbers, new Authentication Key or A-Key code, and a Preferred Roaming List or PRL. However, to prevent the phone being accidentally disabled or removed from the network, the Service Provider typically locks this data with a Master Subsidiary Lock (MSL). The MSL also locks the device to a particular carrier when it is sold as a loss leader.
The MSL applies only to the SIM, so once the contract has expired, the MSL still applies to the SIM. The phone, however, is also initially locked by the manufacturer into the Service Provider's MSL. This lock may be disabled so that the phone can use other Service Providers' SIM cards. Most phones purchased outside the U.S. are unlocked phones because there are numerous Service Providers that are close to one another or have overlapping coverage. The cost to unlock a phone varies but is usually very cheap and is sometimes provided by independent phone vendors.
A similar module called a Removable User Identity Module or RUIM card is present in some CDMA networks, notably in China and Indonesia.
Multi-card hybrid phones
A hybrid mobile phone can take more than one SIM card, even of different types. The SIM and RUIM cards can be mixed together, and some phones also support three or four SIMs.
From 2010 onwards they became popular in India and Indonesia and other emerging markets, attributed to the desire to obtain the lowest on-net calling rate. In Q3 2011, Nokia shipped 18 million of its low cost dual SIM phone range in an attempt to make up lost ground in the higher end smartphone market.
Display
Mobile phones have a display device, some of which are also touch screens. The screen size varies greatly by model and is usually specified either as width and height in pixels or the diagonal measured in inches.
Some phones have more than one display, for example the Kyocera Echo, an Android smartphone with a dual 3.5 inch screen. The screens can also be combined into a single 4.7 inch tablet style computer.
Central processing unit
Mobile phones have central processing units (CPUs), similar to those in computers, but optimised to operate in low power environments. In smartphones, the CPU is typically integrated in a system-on-a-chip (SoC) application processor.
Mobile CPU performance depends not only on the clock rate (generally given in multiples of hertz) but also on the memory hierarchy, which greatly affects overall performance. Because of these factors, the performance of mobile phone CPUs is often more appropriately given by scores derived from various standardized tests measuring the real effective performance in commonly used applications.
Miscellaneous features
Other features that may be found on mobile phones include GPS navigation, music (MP3) and video (MP4) playback, RDS radio receiver, built-in projector, vibration and other "silent" ring options, alarms, memo recording, personal digital assistant functions, ability to watch streaming video, video download, video calling, built-in cameras (1.0+ Mpx) and camcorders (video recording), with autofocus and flash, ringtones, games, PTT, memory card reader (SD), USB (2.0), dual line support, infrared, Bluetooth (2.0) and WiFi connectivity, NFC, instant messaging, Internet e-mail and browsing and serving as a wireless modem.
The first smartphone was the Nokia 9000 Communicator in 1996 which added PDA functionality to the basic mobile phone at the time. As miniaturization and increased processing power of microchips has enabled ever more features to be added to phones, the concept of the smartphone has evolved, and what was a high-end smartphone five years ago, is a standard phone today.
Several phone series have been introduced to address a given market segment, such as the RIM BlackBerry focusing on enterprise/corporate customer email needs; the SonyEricsson Walkman series of musicphones and Cybershot series of cameraphones; the Nokia Nseries of multimedia phones, the Palm Pre the HTC Dream and the Apple iPhone.
Nokia and the University of Cambridge demonstrated a bendable cell phone called the Morph. Some phones have an electromechanical transducer on the back which changes the electrical voice signal into mechanical vibrations. The vibrations flow through the cheek bones or forehead allowing the user to hear the conversation. This is useful in the noisy situations or if the user is hard of hearing.
As of 2018, there are smartphones that offer reverse wireless charging.
Multi-mode and multi-band mobile phones
Most mobile phone networks are digital and use the GSM, CDMA or iDEN standards, which operate at various radio frequencies. These phones can only be used with a service plan from the same company. For example, a Verizon phone cannot be used with a T-Mobile service, and vice versa.
A multi-mode phone operates across different standards whereas a multi-band phone (also known more specifically as dual, tri or quad band) mobile phone is a phone which is designed to work on more than one radio frequency. Some multi-mode phones can operate on analog networks as well (for example, dual band, tri-mode: AMPS 800 / CDMA 800 / CDMA 1900).
For a GSM phone, dual-band usually means 850 / 1900 MHz in the United States and Canada, 900 / 1800 MHz in Europe and most other countries. Tri-band means 850 / 1800 / 1900 MHz or 900 / 1800 / 1900 MHz. Quad-band means 850 / 900 / 1800 / 1900 MHz, also called a world phone, since it can work on any GSM network.
Multi-band phones have been valuable to enable roaming whereas multi-mode phones helped to introduce WCDMA features without customers having to give up the wide coverage of GSM. Almost every single true 3G phone sold is actually a WCDMA/GSM dual-mode mobile. This is also true of 2.75G phones such as those based on CDMA-2000 or EDGE.
Challenges in producing multi-mode phones
The special challenge involved in producing a multi-mode mobile is in finding ways to share the components between the different standards. The phone keypad and display should be shared, otherwise it would be hard to treat as one phone. Beyond that, though, there are challenges at each level of integration. How difficult these challenges are depends on the differences between systems. When talking about IS-95/GSM multi-mode phones, for example, or AMPS/IS-95 phones, the base band processing is very different from system to system. This leads to real difficulties in component integration and so to larger phones.
An interesting special case of multi-mode phones is the WCDMA/GSM phone. The radio interfaces are very different from each other, but mobile to core network messaging has strong similarities, meaning that software sharing is quite easy. Probably more importantly, the WCDMA air interface has been designed with GSM compatibility in mind. It has a special mode of operation, known as punctured mode, in which, instead of transmitting continuously, the mobile is able to stop sending for a short period and try searching for GSM carriers in the area. This mode allows for safe inter-frequency handovers with channel measurements which can only be approximated using "pilot signals" in other CDMA based systems.
A final interesting case is that of mobiles covering the DS-WCDMA and MC-CDMA 3G variants of the CDMA-2000 protocol. Initially, the chip rate of these phones was incompatible. As part of the negotiations related to patents, it was agreed to use compatible chip rates. This should mean that, despite the fact that the air and system interfaces are quite different, even on a philosophical level, much of the hardware for each system inside a phone should be common with differences being mostly confined to software.
Data communications
Mobile phones are now heavily used for data communications, such as SMS messages, browsing mobile web sites, and even streaming audio and video files. The main limiting factors are the size of the screen, lack of a keyboard, processing power and connection speed. Most cellphones that support data communications can be used as wireless modems (via cable or Bluetooth) to connect a computer to the internet. Such an access method is slow and expensive, but it can be available in very remote areas.
With newer smartphones, screen resolution and processing power have improved substantially. Some new phone CPUs run at over 1 GHz. Many complex programs are now available for the various smartphones, such as Symbian and Windows Phone.
Connection speed is based on network support. Originally, data transfers over GSM networks were possible only over CSD (circuit-switched data), which has a bandwidth of 9,600 bit/s and is usually billed by connection time (from the network's point of view, it does not differ much from a voice call). Later, an improved version of CSD was introduced – HSCSD (high-speed CSD), which can use multiple time slots for the downlink, improving speed. The maximum speed for HSCSD is ~42 kbit/s; it is also billed by time. Later, GPRS (general packet radio service) was introduced, which operates on a completely different principle. It can also use multiple time slots for transfer, but it does not tie up radio resources when not transferring data (as opposed to CSD and the like). GPRS is usually prioritized below voice and CSD, so latencies are large and variable. Later, GPRS was upgraded to EDGE, which differs mainly by radio modulation, squeezing more data capacity into the same radio bandwidth. GPRS and EDGE are usually billed by data traffic volume. Some phones also feature full QWERTY keyboards, such as the LG enV.
As of April 2006, several models, such as the Nokia 6680, support 3G communications. Such phones have access to the Web via a free download of the Opera web browser. Verizon Wireless models come with Internet Explorer pre-loaded onto the phone.
Vulnerability to viruses
As more complex features are added to phones, they become more vulnerable to viruses which exploit weaknesses in these features. Even text messages can be used in attacks by worms and viruses. Advanced phones capable of e-mail can be susceptible to viruses that can multiply by sending messages through a phone's address book. In some phone models, the USSD was exploited for inducing a factory reset, resulting in clearing the data and resetting the user settings.
A virus may allow unauthorized users to access a phone to find passwords or corporate data stored on the device. Moreover, they can be used to commandeer the phone to make calls or send messages at the owner's expense.
Mobile phones used to have proprietary operating system unique only to the manufacturer which had the beneficial effect of making it harder to design a mass attack. However, the rise of software platforms and operating systems shared by many manufacturers such as Java, Microsoft operating systems, Linux, or Symbian OS, may increase the spread of viruses in the future.
Bluetooth is a feature now found in many higher-end phones, and the virus Caribe hijacked this function, making Bluetooth phones infect other Bluetooth phones running the Symbian OS. In early November 2004, several web sites began offering a specific piece of software promising ringtones and screensavers for certain phones. Those who downloaded the software found that it turned each icon on the phone's screen into a skull-and-crossbones and disabled their phones, so they could no longer send or receive text messages or access contact lists or calendars. The virus has since been dubbed "Skulls" by security experts. The Commwarrior-A virus was identified in March 2005, and it attempts to replicate itself through MMS to others on the phone's contact list. Like Cabir, Commwarrior-A also tries to communicate via Bluetooth wireless connections with other devices, which can eventually lead to draining the battery. The virus requires user intervention for propagation however.
Bluetooth phones are also subject to bluejacking, which although not a virus, does allow for the transmission of unwanted messages from anonymous Bluetooth users.
Cameras
Most current phones also have a built-in digital camera (see camera phone), which can have resolutions as high as 108 megapixels.
This gives rise to some concern about privacy, in view of possible voyeurism, for example in swimming pools. South Korea has ordered manufacturers to ensure that all new handsets emit a beep whenever a picture is taken.
Sound recording and video recording is often also possible. Most people do not walk around with a video camera, but do carry a phone. The arrival of video camera phones is transforming the availability of video to consumers, and helps fuel citizen journalism.
See also
Mobile game
Ringtone
Smartphone
Mobile phone form factor
Wallpaper
References
Mobile phone
Mobile phones | Mobile phone feature | Technology | 5,023 |
11,305,098 | https://en.wikipedia.org/wiki/Pestalotiopsis%20sydowiana | Pestalotiopsis sydowiana is a plant pathogen infecting azaleas, heather, loquats, and rhododendrons.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Ornamental plant pathogens and diseases
sydowiana
Fungus species | Pestalotiopsis sydowiana | Biology | 61 |
1,637,931 | https://en.wikipedia.org/wiki/Elinvar | Elinvar is a nickel–iron–chromium alloy notable for having a modulus of elasticity which does not change much with temperature changes.
Metal
The name is a contraction of the French élasticité invariable ('invariable elasticity'). It was invented by Charles Édouard Guillaume, a Swiss physicist who also invented Invar, another alloy of nickel and iron with very low thermal expansion. Guillaume won the 1920 Nobel Prize in Physics for these discoveries, which shows how important these alloys were for scientific instruments.
Elinvar originally consisted of 52% iron, 36% nickel, and 12% chromium. It is almost non-magnetic, and corrosion resistant.
Other variations of the Elinvar alloy are
Iron- and cobalt-based ferromagnetic Elinvar alloy
Manganese- and chromium-based antiferromagnetic Elinvar alloy
Palladium-based non-magnetic Elinvar alloy
The largest use of Elinvar was in balance springs for mechanical watches and chronometers. A major cause of inaccuracy in watches and clocks was that ordinary steels used in springs lost elasticity slightly as the temperature increased, so the balance wheel would oscillate more slowly back and forth, and the clock would lose time. Chronometers and precision watches required complex temperature-compensated balance wheels for accurate timekeeping. Springs made of Elinvar, and other low temperature coefficient alloys such as Nivarox that followed, were minimally affected by temperature, so they made the temperature-compensated balance wheel obsolete.
References
Ferrous alloys
Nickel alloys
Horology
External links
. Explanation of Elinvar use in pocket watch balance wheel hair springs. | Elinvar | Physics,Chemistry | 346 |
6,292,708 | https://en.wikipedia.org/wiki/IEEE%20P1619 | IEEE P1619 is an Institute of Electrical and Electronics Engineers (IEEE) standardization project for encryption of stored data, but more generically refers to the Security in Storage Working Group (SISWG), which includes a family of standards for protection of stored data and for the corresponding cryptographic key management.
Standards
SISWG oversees work on the following standards:
The base IEEE 1619 Standard Architecture for Encrypted Shared Storage Media uses the XTS-Advanced Encryption Standard (XEX-based Tweaked CodeBook mode (TCB) with ciphertext stealing (CTS); the proper name should be XTC (XEX TCB CTS), but that acronym is already used to denote the drug ecstasy). A usage sketch of XTS-AES is given after this list of standards.
The P1619.1 Authenticated Encryption with Length Expansion for Storage Devices uses the following algorithms:
Counter mode with CBC-MAC (CCM)
Galois/Counter Mode (GCM)
Cipher Block Chaining (CBC) with HMAC-Secure Hash Algorithm
XTS-HMAC-Secure Hash Algorithm
The P1619.2 Standard for Wide-Block Encryption for Shared Storage Media has proposed algorithms including:
Extended Codebook (XCB)
Encrypt Mix Encrypt V2 (EME2)
The P1619.3 Standard for Key Management Infrastructure for Cryptographic Protection of Stored Data defines a system for managing encryption data at rest security objects which includes architecture, namespaces, operations, messaging and transport.
P1619 also standardized the key backup in the XML format.
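As an illustration of how the narrow-block XTS-AES mode is typically used for sector-level encryption, here is a minimal sketch using the third-party Python cryptography library; treating the sector number as a little-endian tweak is a common convention, and the key size and sector size shown are illustrative assumptions, not requirements of the standard:

# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)                 # AES-256-XTS: two 256-bit keys concatenated
tweak = (42).to_bytes(16, "little")  # 128-bit tweak derived from the sector number (here 42)

sector = os.urandom(512)             # one 512-byte disk sector
enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
ciphertext = enc.update(sector) + enc.finalize()

# Decryption with the same key and tweak recovers the sector.
dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
assert dec.update(ciphertext) + dec.finalize() == sector

Note that XTS provides confidentiality only; it is not an authenticated mode, which is why P1619.1 specifies separate authenticated-encryption algorithms for devices that can store the length expansion.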
Narrow-block vs. wide-block encryption
An encryption algorithm used for data storage has to support independent encryption and decryption of portions of data. So-called narrow-block algorithms operate on relatively small portions of data, while the wide-block algorithms encrypt or decrypt a whole sector. Narrow-block algorithms have the advantage of more efficient hardware implementation. On the other hand, smaller block size provides finer granularity for data modification attacks. There is no standardized "acceptable granularity"; however, for example, the possibility of data modification with the granularity of one bit (bit-flipping attack) is generally considered unacceptable.
For these reasons, the working group selected the narrow-block (128 bits) encryption with no authentication in the standard P1619, assuming that the added efficiency warrants the additional risk. But recognizing that wide-block encryption might be useful in some cases, another project P1619.2 has been started to study the usage of wide-block encryption.
The project is maintained by the IEEE Security in Storage Working Group (SISWG). Both the disk storage standard P1619 (sometimes called P1619.0) and the tape storage standard P1619.1 were standardized in December 2007.
A discussion was ongoing on standardization of the wide-block encryption for disk drives, like CMC and EME as P1619.2, and on key management as P1619.3.
LRW issue
From 2004 to 2006, drafts of the P1619 standards used the Advanced Encryption Standard (AES) in LRW mode. In the 30 Aug 2006 meeting of the SISWG, a straw poll showed that most members would not approve P1619 as it was. Consequently, LRW-AES has been replaced by the XEX-AES tweakable block cipher in P1619.0 Draft 7 (and renamed to XTS-AES in Draft 11). Some members of the group found it non-trivial to abandon LRW, because it had been available for public peer-review for many years (unlike most of the newly suggested variants). The issues of LRW were:
An attacker can derive the LRW tweak key K2 from the ciphertext if the plaintext contains K2||0^n or 0^n||K2. Here || is the concatenation operator and 0^n is a zero block. This may be an issue for software that encrypts the partition of an operating system under which this encryption software is running (at the same time). The operating system could write the LRW tweak key to an encrypted swap/hibernation file.
If the tweak key K2 is known, LRW does not offer indistinguishability under chosen plaintext attack (IND-CPA) anymore, and the same input block permutation attacks of ECB mode are possible. Leak of the tweak key does not affect the confidentiality of the plaintext.
See also
Comparison of disk encryption software
Disk encryption
Encryption
Full disk encryption
Key management
Key Management Interoperability Protocol
On-the-fly encryption
References
External links
SISWG home page
IEEE 1619-2007 IEEE Standard for Cryptographic Protection of Data on Block-Oriented Storage Devices
IEEE 1619-2018 IEEE Standard for Cryptographic Protection of Data on Block-Oriented Storage Devices
IEEE 1619.2-2021 IEEE Standard for Wide-Block Encryption for Shared Storage Media
Email archive for SISWG in general and P1619 in particular
Email archive for P1619.1 (Authenticated Encryption)
Email archive for P1619.2 (Wide-block Encryption)
Email archive for P1619.3 (Key Management) (withdrawn)
Cryptography standards
IEEE standards
Disk encryption | IEEE P1619 | Technology | 1,087 |
46,756,296 | https://en.wikipedia.org/wiki/Josef%20Sch%C3%A4chter | Josef Schächter (September 16, 1901, in Kundrynce, Galicia – March 27, 1994, in Haifa) was an Austrian rabbi, philosopher and member of the Vienna Circle from 1925 to 1936.
Life
Schächter was the son of Shoel Schächter and Sarah, née Distenfield. He trained as a rabbi and was ordained in 1926. He worked as a Talmud teacher from 1922 to 1929 at the Hebraic school in Vienna and from 1935 to 1938 at the Bible Rambam Institute.
At the same time, he studied philosophy, primarily with Moritz Schlick and completed his studies in 1931 with a dissertation under Schlick with the title “Critical Account of N. Hartmann’s ‘Grundzüge einer Metaphysik der Erkenntnis’” („Kritische Darstellung von N. Hartmanns‚ Grundzüge einer Metaphysik der Erkenntnis‘“).
From 1925 to 1936 Schächter attended the meetings of the Vienna Circle. His work Prolegomena zu einer kritischen Grammatik (Prolegomena to a Critical Grammar) was published with a preface by Schlick in the Circle’s book series Schriften zur wissenschaftlichen Weltauffassung (Monographs on the Scientific World-Conception) in 1935. This work was influenced by Schlick, Friedrich Waismann, and Ludwig Wittgenstein. After Schlick’s murder, Schächter intermittently substituted Friedrich Waismann in running philosophical seminars.
In 1938 Schächter emigrated to Palestine. He taught at secondary schools, first in Tel Aviv until 1940 and then in Haifa until 1950. In 1943 he married the teacher Netti Dlugacz. From 1951 to 1952 he was superintendent of schools in the Israeli school system. Later he worked as a lecturer for Bible and Aggadah at the teacher’s seminar in Haifa.
At the beginning of the 1950s a group of his students founded the Kibbuz "Yodefat" in Galilee in order to put Schächter’s ideas into practice.
Schächter published numerous works on classical Judaism, on language, meaning, and belief in the context of science and religion.
Selected works
"Kritische Darstellung von N. Hartmanns 'Grundzüge einer Metaphysik der Erkenntnis'", Diss., Vienna 1931.
Prolegomena zu einer kritischen Grammatik (= Schriften zur wissenschaftlichen Weltauffassung, 10), Vienna 1935. – Reedited as Prolegomena to a Critical Grammar, Preface by J. F. Staal. Reidel, Dordrecht-Boston 1973.
Mavo Kazar L'Logistikah (A brief outline of logistics [Hebrew]), with a preface by Hugo Bergmann, Vienna 1937.
"Der Sinn pessimistischer Sätze", in: Synthese 3, 1938, 223-233.
"Über das Verstehen", in: Synthese 8, 1950/51, 367-384.
"The Task of the Modern Intellectual", in: An Anthology of Hebrew Essays II, 1966, 299-310.
(Together with Heinrich Melzer), „Über den Physikalismus“, in: B. McGuinness (ed.), Zurück zu Schlick. Eine Neubewertung von Werk und Wirkung, Hölder-Pichler-Tempsky, Vienna 1985, 92-103.
Bibliography
Stadler, Friedrich. The Vienna Circle. Studies in the Origins, Development, and Influence of Logical Empiricism. New York: Springer, 2001. – 2nd Edition: Dordrecht: Springer, 2015. – Biobibliographical presentation of Schächter: 720-721.
Moritz Schlick: „Geleitwort“ [in: Josef Schächter, Prolegomena zu einer kritischen Grammatik], in: Moritz Schlick Gesamtausgabe, Abteilung I, Band 6, Die Wiener Zeit, ed. by Johannes Friedl, Heiner Rutte, 635-642. (German)
Friedrich Waismann, Josef Schächter und Moritz Schlick: Ethics and the Will. Essays, ed. and with an introduction by Brian McGuinness and Joachim Schulte, Kluwer, Dordrecht-Boston-London 1994.
J. S. Diamond: "Josef Schächter: An Approach to ›Jewish Consciousness‹," in: Reconstructionist, Annual Israel Issue 30, 1964, S. 17-24.
Volker Thurm (ed.): Wien und der Wiener Kreis: Orte einer unvollendeten Moderne; ein Begleitbuch, in collaboration with Elisabeth Nemeth, Schriftenreihe Wissenschaftliche Weltauffassung und Kunst; Sonderbd., WUV, Vienna 2003, , 348 f. (German)
External links
Publications of Schächter on WorldCat
Notes
20th-century Austrian rabbis
Austrian philosophers
Jewish philosophers
Vienna Circle
Logical positivism
1901 births
1994 deaths | Josef Schächter | Mathematics | 1,116 |
3,461,109 | https://en.wikipedia.org/wiki/Stamp%20program | The stamp program of a postal organization is an umbrella term for the entire process of postage stamp issuance and distribution by the organization. Aspects include deciding which stamps to issue, what postal rates they will pay, postage stamp design, printing, and publicity for the new stamps. The stamp program is generally managed by a specialized department within the organization, which balances demands from the rest of the organization, the nation's government, stamp collectors, and the public who actually buy and use the stamps.
Originally, the choice of stamps to issue was primarily driven by changes to postal rates, and by major changes in the government (for instance, the accession of a new monarch meant that the stamp portrait had to change). Research shows that the process was often very hasty and reactive, with sometimes only a few days or weeks elapsing between the identification of a need and the beginning of printing and distribution.
Through the 20th century, the process became more organized; for instance, it proved possible to sell more commemorative stamps if the public was formally informed of their availability, which inspired first day of issue ceremonies. In turn, simultaneous availability nationwide meant that everything had to be planned out ahead of time. Postal administrations also discovered that collectors were not a bottomless well of money, and that excessive stamp issues would simply go unpurchased, so they now decide ahead of time how many stamps constitute a "reasonable" number.
The upshot is that much of a year's stamp program is known and can be announced in advance. In 2005 for instance, Canada Post announced its 2006 stamps in July 2005, while the United States Postal Service (USPS) announced at the end of November.
The existence of a preannounced program does not preclude last-minute changes; a souvenir sheet commemorating the Mars Pathfinder mission was issued by the USPS December 10, 1997, some five months after the touchdown, while the "United We Stand" stamp in response to the September 11, 2001 attacks came out on October 2, just three weeks later.
External links
Canada Post press release for 2006 stamp program, dated July 2005
Lengthy USPS news release describing 2006 stamp program, November 2005
USPS release for 2005 stamp program
Philatelic terminology
Postal systems | Stamp program | Technology | 456 |
1,505,128 | https://en.wikipedia.org/wiki/Harvard%E2%80%93Smithsonian%20Center%20for%20Astrophysics | The Center for Astrophysics | Harvard & Smithsonian (CfA), previously known as the Harvard–Smithsonian Center for Astrophysics, is an astrophysics research institute jointly operated by the Harvard College Observatory and Smithsonian Astrophysical Observatory. Founded in 1973 and headquartered in Cambridge, Massachusetts, United States, the CfA leads a broad program of research in astronomy, astrophysics, Earth and space sciences, as well as science education. The CfA either leads or participates in the development and operations of more than fifteen ground- and space-based astronomical research observatories across the electromagnetic spectrum, including the forthcoming Giant Magellan Telescope (GMT) and the Chandra X-ray Observatory, one of NASA's Great Observatories.
Hosting more than 850 scientists, engineers, and support staff, the CfA is among the largest astronomical research institutes in the world. Its projects have included Nobel Prize-winning advances in cosmology and high energy astrophysics, the discovery of many exoplanets, and the first image of a black hole. The CfA also serves a major role in the global astrophysics research community: the CfA's Astrophysics Data System (ADS), for example, has been universally adopted as the world's online database of astronomy and physics papers. Known for most of its history as the "Harvard-Smithsonian Center for Astrophysics", the CfA rebranded in 2018 to its current name in an effort to reflect its unique status as a joint collaboration between Harvard University and the Smithsonian Institution. The CfA's current director (since 2022) is Lisa Kewley, who succeeds Charles R. Alcock (Director from 2004 to 2022), Irwin I. Shapiro (Director from 1982 to 2004) and George B. Field (Director from 1973 to 1982).
History of the CfA
The Center for Astrophysics | Harvard & Smithsonian is not formally an independent legal organization, but rather an institutional entity operated under a memorandum of understanding between Harvard University and the Smithsonian Institution. This collaboration was formalized on July 1, 1973, with the goal of coordinating the related research activities of the Harvard College Observatory (HCO) and the Smithsonian Astrophysical Observatory (SAO) under the leadership of a single director, and housed within the same complex of buildings on the Harvard campus in Cambridge, Massachusetts. The CfA's history is therefore also that of the two fully independent organizations that comprise it. With a combined history of more than 300 years, HCO and SAO have been host to major milestones in astronomical history that predate the CfA's founding. These are briefly summarized below.
History of the Smithsonian Astrophysical Observatory (SAO)
Samuel Pierpont Langley, the third Secretary of the Smithsonian, founded the Smithsonian Astrophysical Observatory on the south yard of the Smithsonian Castle (on the U.S. National Mall) on March 1, 1890. The Astrophysical Observatory's initial, primary purpose was to "record the amount and character of the Sun's heat". Charles Greeley Abbot was named SAO's first director, and the observatory operated solar telescopes to take daily measurements of the Sun's intensity in different regions of the optical electromagnetic spectrum. In doing so, the observatory enabled Abbot to make critical refinements to the Solar constant, as well as to serendipitously discover Solar variability. It is likely that SAO's early history as a solar observatory was part of the inspiration behind the Smithsonian's "sunburst" logo, designed in 1965 by Crimilda Pontes.
In 1955, the scientific headquarters of SAO moved from Washington, D.C. to Cambridge, Massachusetts, to affiliate with the Harvard College Observatory (HCO). Fred Lawrence Whipple, then the chairman of the Harvard Astronomy Department, was named the new director of SAO. The collaborative relationship between SAO and HCO therefore predates the official creation of the CfA by 18 years. SAO's move to Harvard's campus also resulted in a rapid expansion of its research program. Following the launch of Sputnik (the world's first human-made satellite) in 1957, SAO accepted a national challenge to create a worldwide satellite-tracking network, collaborating with the United States Air Force on Project Space Track.
With the creation of NASA the following year and throughout the Space Race, SAO led major efforts in the development of orbiting observatories and large ground-based telescopes, laboratory and theoretical astrophysics, as well as the application of computers to astrophysical problems.
History of Harvard College Observatory (HCO)
Partly in response to renewed public interest in astronomy following the 1835 return of Halley's Comet, the Harvard College Observatory was founded in 1839, when the Harvard Corporation appointed William Cranch Bond as an "Astronomical Observer to the University". For its first four years of operation, the observatory was situated at the Dana-Palmer House (where Bond also resided) near Harvard Yard, and consisted of little more than three small telescopes and an astronomical clock. In his 1840 book recounting the history of the college, then Harvard President Josiah Quincy III noted that "there is wanted a reflecting telescope equatorially mounted". This telescope, the 15-inch "Great Refractor", opened seven years later (in 1847) at the top of Observatory Hill in Cambridge (where it still exists today, housed in the oldest of the CfA's complex of buildings). The telescope was the largest in the United States from 1847 until 1867. William Bond and pioneer photographer John Adams Whipple used the Great Refractor to produce the first clear Daguerrotypes of the Moon (winning them an award at the 1851 Great Exhibition in London). Bond and his son, George Phillips Bond (the second director of HCO), used it to discover Saturn's 8th moon, Hyperion (which was also independently discovered by William Lassell).
Under the directorship of Edward Charles Pickering from 1877 to 1919, the observatory became the world's major producer of stellar spectra and magnitudes, established an observing station in Peru, and applied mass-production methods to the analysis of data. It was during this time that HCO became host to a series of major discoveries in astronomical history, powered by the observatory's so-called "Computers" (women hired by Pickering as skilled workers to process astronomical data). These "Computers" included Williamina Fleming, Annie Jump Cannon, Henrietta Swan Leavitt, Florence Cushman and Antonia Maury, all widely recognized today as major figures in scientific history. Henrietta Swan Leavitt, for example, discovered the so-called period-luminosity relation for Classical Cepheid variable stars, establishing the first major "standard candle" with which to measure the distance to galaxies. Now called "Leavitt's law", the discovery is regarded as one of the most foundational and important in the history of astronomy; astronomers like Edwin Hubble, for example, would later use Leavitt's law to establish that the Universe is expanding, the primary piece of evidence for the Big Bang model.
Upon Pickering's retirement in 1921, the directorship of HCO fell to Harlow Shapley (a major participant in the so-called "Great Debate" of 1920). This era of the observatory was made famous by the work of Cecilia Payne-Gaposchkin, who became the first woman to earn a PhD in astronomy from Radcliffe College (a short walk from the observatory). Payne-Gaposchkin's 1925 thesis proposed that stars were composed primarily of hydrogen and helium, an idea thought ridiculous at the time. Between Shapley's tenure and the formation of the CfA, the observatory was directed by Donald H. Menzel and then Leo Goldberg, both of whom maintained widely recognized programs in solar and stellar astrophysics. Menzel played a major role in encouraging the Smithsonian Astrophysical Observatory to move to Cambridge and collaborate more closely with HCO.
Joint history as the Center for Astrophysics (CfA)
The collaborative foundation for what would ultimately give rise to the Center for Astrophysics began with SAO's move to Cambridge in 1955. Fred Whipple, who was already chair of the Harvard Astronomy Department (housed within HCO since 1931), was named SAO's new director at the start of this new era; an early test of the model for a unified directorship across HCO and SAO. The following 18 years would see the two independent entities merge ever closer together, operating effectively (but informally) as one large research center.
This joint relationship was formalized as the new Harvard–Smithsonian Center for Astrophysics on July 1, 1973. George B. Field, then affiliated with Berkeley, was appointed as its first director. That same year, a new astronomical journal, the CfA Preprint Series was created, and a CfA/SAO instrument flying aboard Skylab discovered coronal holes on the Sun. The founding of the CfA also coincided with the birth of X-ray astronomy as a new, major field that was largely dominated by CfA scientists in its early years. Riccardo Giacconi, regarded as the "father of X-ray astronomy", founded the High Energy Astrophysics Division within the new CfA by moving most of his research group (then at American Sciences and Engineering) to SAO in 1973. That group would later go on to launch the Einstein Observatory (the first imaging X-ray telescope) in 1976, and ultimately lead the proposals and development of what would become the Chandra X-ray Observatory. Chandra, the second of NASA's Great Observatories and still the most powerful X-ray telescope in history, continues operations today as part of the CfA's Chandra X-ray Center. Giacconi would later win the 2002 Nobel Prize in Physics for his foundational work in X-ray astronomy.
Shortly after the launch of the Einstein Observatory, the CfA's Steven Weinberg won the 1979 Nobel Prize in Physics for his work on electroweak unification. The following decade saw the start of the landmark CfA Redshift Survey (the first attempt to map the large scale structure of the Universe), as well as the release of the "Field Report", a highly influential Astronomy and Astrophysics Decadal Survey chaired by the outgoing CfA Director George Field. He would be replaced in 1982 by Irwin Shapiro, who during his tenure as director (1982 to 2004) oversaw the expansion of the CfA's observing facilities around the world, including the newly named Fred Lawrence Whipple Observatory, the Infrared Telescope (IRT) aboard the Space Shuttle, the 6.5-meter Multiple Mirror Telescope (MMT), the SOHO satellite, and the launch of Chandra in 1999. CfA-led discoveries throughout this period include canonical work on Supernova 1987A, the "CfA2 Great Wall" (then the largest known coherent structure in the Universe), the best-yet evidence for supermassive black holes, and the first convincing evidence for an extrasolar planet.
The 1980s also saw the CfA play a distinct role in the history of computer science and the internet: in 1986, SAO started developing SAOImage, one of the world's first X11-based applications made publicly available (its successor, DS9, remains the most widely used astronomical FITS image viewer worldwide). During this time, scientists and software developers at the CfA also began work on what would become the Astrophysics Data System (ADS), one of the world's first online databases of research papers. By 1993, the ADS was running the first routine transatlantic queries between databases, a foundational aspect of the internet today.
The CfA today
Research at the CfA
Charles Alcock, known for a number of major works related to massive compact halo objects, was named the third director of the CfA in 2004. Today the CfA is one of the largest and most productive astronomical institutes in the world, with more than 850 staff and an annual budget in excess of $100 million. The Harvard Department of Astronomy, housed within the CfA, maintains a continual complement of approximately 60 PhD students, more than 100 postdoctoral researchers, and roughly 25 undergraduate astronomy and astrophysics majors from Harvard College. SAO, meanwhile, hosts a long-running and highly rated REU Summer Intern program as well as many visiting graduate students. The CfA estimates that roughly 10% of the professional astrophysics community in the United States spent at least a portion of their career or education there.
The CfA is either a lead or major partner in the operations of the Fred Lawrence Whipple Observatory, the Submillimeter Array, MMT Observatory, the South Pole Telescope, VERITAS, and a number of other smaller ground-based telescopes. The CfA's 2019–2024 Strategic Plan includes the construction of the Giant Magellan Telescope as a driving priority for the center.
Along with the Chandra X-ray Observatory, the CfA plays a central role in a number of space-based observing facilities, including the recently launched Parker Solar Probe, Kepler space telescope, the Solar Dynamics Observatory (SDO), and Hinode. The CfA, via the Smithsonian Astrophysical Observatory, recently played a major role in the Lynx X-ray Observatory, a NASA-funded large mission concept study commissioned as part of the 2020 Astronomy and Astrophysics Decadal Survey ("Astro2020"). If launched, Lynx would be the most powerful X-ray observatory constructed to date, enabling order-of-magnitude advances in capability over Chandra.
SAO is one of the 13 stakeholder institutes for the Event Horizon Telescope Board, and the CfA hosts its Array Operations Center. In 2019, the project revealed the first direct image of a black hole. The result is widely regarded as a triumph not only of observational astronomy, but of its intersection with theoretical astrophysics. Union of the observational and theoretical subfields of astrophysics has been a major focus of the CfA since its founding.
In 2018, the CfA rebranded, changing its official name to the "Center for Astrophysics | Harvard & Smithsonian" in an effort to reflect its unique status as a joint collaboration between Harvard University and the Smithsonian Institution. Today, the CfA receives roughly 70% of its funding from NASA, 22% from Smithsonian federal funds, and 4% from the National Science Foundation. The remaining 4% comes from contributors including the United States Department of Energy, the Annenberg Foundation, as well as other gifts and endowments.
Organizational structure
Research across the CfA is organized into six divisions and seven research centers:
Scientific divisions within the CfA
Atomic and Molecular Physics (AMP)
High Energy Astrophysics (HEA)
Optical and Infrared Astronomy (OIR)
Radio and Geoastronomy (RG)
Solar, Stellar, and Planetary Sciences (SSP)
Theoretical Astrophysics (TA)
Centers hosted at the CfA
Chandra X-ray Center (CXC), the science operations center for NASA's Chandra X-ray Observatory
Institute for Theory and Computation (ITC)
Institute for Theoretical Atomic, Molecular, and Optical Physics (ITAMP)
Center for Parallel Astrophysical Computing (CPAC)
Minor Planet Center (MPC)
Telescope Data Center (TDC)
Radio Telescope Data Center (RTDC)
Solar & Stellar X-ray Group (SSXG)
The CfA is also host to the Harvard University Department of Astronomy, large central engineering and computation facilities, the Science Education Department, the John G. Wolbach Library, the world's largest database of astronomy and physics papers (ADS), and the world's largest collection of astronomical photographic plates.
Observatories operated with CfA participation
Ground-based observatories
Fred Lawrence Whipple Observatory
Magellan telescopes
MMT Observatory
Event Horizon Telescope
South Pole Telescope
Submillimeter Array
1.2-Meter Millimeter-Wave Telescope
Very Energetic Radiation Imaging Telescope Array System (VERITAS)
Space-based observatories and probes
Chandra X-ray Observatory
Transiting Exoplanet Survey Satellite (TESS)
Parker Solar Probe
Hinode
Kepler
Solar Dynamics Observatory (SDO)
Solar and Heliospheric Observatory (SOHO)
Spitzer Space Telescope
Planned future observatories
Lynx X-ray Observatory
Giant Magellan Telescope
Murchison Widefield Array
Square Kilometer Array
Pan-STARRS
Vera C. Rubin Observatory (formerly called the Large Synoptic Survey Telescope)
See also
Clara Sousa-Silva, research scientist
List of astronomical observatories
References
External links
Astronomical observatories in Massachusetts
Astronomy institutes and departments
Astrophysics research institutes
Harvard University research institutes
Smithsonian Institution research programs
Research institutes established in 1973
1973 establishments in Massachusetts
Harvard University buildings | Harvard–Smithsonian Center for Astrophysics | Physics,Astronomy | 3,351 |
6,919,308 | https://en.wikipedia.org/wiki/Nicole%20Stott | Nicole Marie Passonno Stott (born November 19, 1962) is an American engineer and a retired NASA astronaut. She served as a flight engineer on ISS Expedition 20 and Expedition 21 and was a mission specialist on STS-128 and STS-133. After 27 years of working at NASA, the space agency announced her retirement effective June 1, 2015. She is married to Christopher Stott, a Manx-born American space entrepreneur.
Early life and education
Stott was born in Albany, New York and resides in St. Petersburg, Florida. She attended St. Petersburg College studying aviation administration, graduated with a B.S. degree in aeronautical engineering from Embry-Riddle Aeronautical University in 1987, and received her M.S. degree in Engineering Management from the University of Central Florida in 1992. Nicole Stott began her career in 1987 as a structural design engineer with Pratt & Whitney Government Engines in West Palm Beach, Florida. She spent a year with the Advanced Engines Group performing structural analyses of advanced jet engine component designs. Stott is an instrument rated private pilot.
NASA career
In 1988, Stott joined NASA at the Kennedy Space Center (KSC), Florida as an Operations Engineer in the Orbiter Processing Facility (OPF). After six months, she was detailed to the Director of Shuttle Processing as part of a two-person team tasked with assessing the overall efficiency of Shuttle processing flows, and implementing tools for measuring the effectiveness of improvements. She was the NASA KSC Lead for a joint Ames/KSC software project to develop intelligent scheduling tools. The Ground Processing Scheduling System (GPSS) was developed as the technology demonstrator for this project. GPSS was a success at KSC, and also a commercial success that is part of the PeopleSoft suite of software products. During her time at KSC, Stott also held a variety of positions within NASA Shuttle Processing, including Vehicle Operations Engineer; NASA Convoy Commander; assistant to the Flow Director for Space Shuttle Endeavour; and Orbiter Project Engineer for Columbia. During her last two years at KSC, she was a member of the Space Station Hardware Integration Office and relocated to Huntington Beach, California where she served as the NASA Project Lead for the ISS truss elements under construction at the Boeing Space Station facility. In 1998, she joined the Johnson Space Center (JSC) team in Houston, Texas as a member of the NASA Aircraft Operations Division, where she served as a Flight Simulation Engineer (FSE) on the Shuttle Training Aircraft (STA).
Selected as a mission specialist by NASA in July 2000, Stott reported for astronaut candidate training in August 2000. Following the completion of two years of training and evaluation, she was assigned technical duties in the Astronaut Office Station Operations Branch, where she performed crew evaluations of station payloads. She also worked as a support astronaut and CAPCOM for the ISS Expedition 10 crew. In April 2006, she was a crew member on the NEEMO 9 mission (NASA Extreme Environment Mission Operations) where she lived and worked with a six-person crew for 18 days on the Aquarius undersea research habitat. Stott was previously assigned to Expedition 20 and Expedition 21. She was launched to the International Space Station with the crew of STS-128, participating in the first spacewalk of that mission, and returned on STS-129, thus becoming the last Expedition crew-member to return to Earth via the space shuttle. Stott completed her second spaceflight on STS-133, the third to last (antepenultimate) flight of the space shuttle.
First live tweet-up from space
On October 21, 2009, Stott and her Expedition 21 crewmate Jeff Williams participated in the first NASA Tweetup from the station with members of the public gathered at NASA Headquarters in Washington, D.C. This involved the first live Twitter connection for the astronauts. Previously, astronauts on board the Space Shuttle or ISS had sent the messages they desired to send as tweets down to Mission Control which then posted them via the Internet to Twitter.
Post NASA
Stott was featured in a Super Bowl LIV commercial promoting Girls Who Code. Stott has also written Back To Earth, described as "What Life in Space Taught Me About Our Home Planet and Our Mission to Protect It". She is also an artist and brought a small watercolor kit on ISS Expedition 21, where she became the first person to paint with watercolor in space. Her current works often relate to astronomy, including her Earth Observation collection and Spacecraft collection. In 2022, she provided the narration for a piece performed by the Schenectady Symphony Orchestra, Glen Cortese's "Voyager: A Journey to the Stars."
References
External links
Nicole Stott – Spacefacts biography
Nicole Stott – Video-opinion (4:19) (NYT; April 26, 2020)
1962 births
Living people
American people of German descent
American people of Italian descent
American astronauts
Aquanauts
Clearwater High School alumni
University of Central Florida alumni
Crew members of the International Space Station
American women astronauts
Space art
Space artists
Space Shuttle program astronauts
Spacewalkers | Nicole Stott | Astronomy | 1,031 |
7,671,308 | https://en.wikipedia.org/wiki/Cytolysin | Cytolysin refers to the substance secreted by microorganisms, plants or animals that is specifically toxic to individual cells, in many cases causing their dissolution through lysis. Cytolysins that have a specific action for certain cells are named accordingly. For instance, the cytolysins responsible for the destruction of red blood cells, thereby liberating hemoglobins, are named hemolysins, and so on. Cytolysins may be involved in immunity as well as in venoms.
Hemolysin is also used by certain bacteria, such as Listeria monocytogenes, to disrupt the phagosome membrane of macrophages and escape into the cytoplasm of the cell.
History and background
The term "Cytolysin" or "Cytolytic toxin" was first introduced by Alan Bernheimer to describe membrane damaging toxins (MDTs) that have cytolytic effects to cells. The first kind of cytolytic toxin discovered have hemolytic effects on erythrocytes of certain sensitive species, such as Human. For this reason "Hemolysin" was first used to describe any MDTs. In the 1960s certain MDTs were proved to be destructive on cells other than erythrocytes, such as leukocytes. The term "Cytolysin" is then introduced by Bernheimer to replace "Hemolysin". Cytolysins can destruct membranes without creating lysis to cells. Therefore, "membrane damaging toxins" (MDTs) describes the essential actions of cytolysins. Cytolysins comprise more than 1/3 of all bacterial protein toxins. Bacterial protein toxins can be highly poisonous to human. For example, Botulinum is 3x105 more toxic than snake venom to human and its toxic dose is only 0.8x10−8 mg. A wide variety of gram-positive and gram-negative bacteria use cytolysin as their primary weapon for creating diseases, such as Enterococcus faecalis, Staphylococcus and Clostridium perfringens.
A diverse range of studies has been conducted on cytolysins. Since the 1970s, more than 40 new cytolysins have been discovered and grouped into different families. At the genetic level, the genetic structures of about 70 cytolysin proteins have been studied and published. The detailed process of membrane damage has also been investigated. Rossjohn et al. presented the crystal structure of perfringolysin O, a thiol-activated cytolysin that creates membrane holes in eukaryotic cells, and constructed a detailed model of membrane channel formation that reveals the membrane insertion mechanism. Shatursky et al. studied the membrane insertion mechanism of perfringolysin O (PFO), a cholesterol-dependent pore-forming cytolysin produced by pathogenic Clostridium perfringens: instead of using a single amphipathic β hairpin per polypeptide, the PFO monomer contains two amphipathic β hairpins, each of which spans the whole membrane. Larry et al. focused on the membrane-penetrating models of RTX toxins, a family of MDTs secreted by many gram-negative bacteria, revealing the insertion and transport process of RTX proteins into target lipid membranes.
Classification
The membrane-damaging cytolysins can be classified into three types based on their damaging mechanism:
Cytolysins which attack eukaryotic cells' bilayer membranes by dissolving their phospholipids. Representative cytolysins include C. perfringens α-toxin (phospholipase C), S. aureus β-toxin (sphingomyelinase C) and the Vibrio damsela cytolysin (phospholipase D). Macfarlane et al. recognized the molecular mechanism of C. perfringens α-toxin in 1941, the pioneering work on any bacterial protein toxin.
Cytolysins which attack the hydrophobic regions of membranes and act like "detergents". Examples of this type include the 26-amino-acid δ-toxins from Straphylococcus aureus, S. haemolyticus and S. lugdunensis, Bacillus subtilis toxin and the cytolysin from Pseudomonas aeruginosa.
Cytolysins which form pores in target cells' membranes. These cytolysins are also known as pore-forming toxins (PFTs) and comprise the largest portion of all cytolysins. Examples include perfringolysin O from Clostridium perfringens, hemolysin from Escherichia coli, and listeriolysin from Listeria monocytogenes. Their targets range from general cell membranes to more specific components, such as cholesterol-containing membranes and phagocyte membranes.
Pore forming cytolysins
Pore-forming cytolysins (PFCs) comprise nearly 65% of all membrane-damaging cytolysins. The first pore-forming cytolysin was discovered by Manfred Mayer in 1972, in the form of the C5–C9 complement insertion into erythrocytes. PFCs can be produced by a wide variety of sources, such as bacteria, fungi and even plants. The pathogenic process of PFCs normally involves forming channels or pores in the target cells' membranes. The pores can have many structures: a porin-like structure allows molecules of certain sizes to pass through, with electric fields distributed unevenly across the pore enabling selection of the molecules that can get through; this type of structure is seen in staphylococcal α-hemolysin. A pore can also be formed through membrane fusion: controlled by Ca2+, the membrane fusion of vesicles forms water-filled pores from proteolipids. Pore-forming cytolysins such as perforin are used by cytotoxic killer T and NK cells to destroy infected cells.
Pore forming process
A more complex pore formation process involves the oligomerization of several PFC monomers. The pore forming process comprises three basic steps. First, the cytolysins are produced by certain microorganisms. Sometimes the producer organism needs to create a pore in its own membrane to release such cytolysins, as in the case of colicins produced by Escherichia coli. In this step, cytolysins are released as protein monomers in a water-soluble state. Note that cytolysins are often toxic to their producing hosts as well; for example, colicins degrade the nucleic acids of cells using several enzymes. To prevent such toxicity, host cells produce immunity proteins that bind cytolysins before they can do any damage.
In the second step, cytolysins adhere to target cell membranes by matching the "receptors" on the membranes. Most receptors are proteins, but they can be other molecules as well, such as lipids or sugars. With the help of receptors, cytolysin monomers combine with each other and form clusters of oligomers. During this stage, cytolysins complete transition from water-soluble monomers state into oligomers state.
Finally, the formed cytolysin clusters penetrate target cells' membranes and form membrane pores. The size of these pores varies from 1–2 nm (S. aureus α-toxin, E. coli α-hemolysin, Aeromonas aerolysin) to 25–30 nm (streptolysin O, pneumolysin).
Depending on how the pores are formed, pore-forming cytolysins fall into two categories: those forming pores with α-helices are named α-PFTs (pore-forming toxins), and those forming pores with β-barrel structures are named β-PFTs. Some of the common α-PFTs and β-PFTs are listed in the table below.
Consequences of cytolysins
The lethal effects of pore-forming cytolysins arise from the disruption of influx and efflux in a single cell. Pores that allow ions like Na+ to pass through create an imbalance in the target cell that exceeds its ion-balancing capacity. Attacked cells therefore swell until they lyse. When target cell membranes are destroyed, the bacteria that produce the cytolysins can consume intracellular components of the cell, such as iron and cytokines, and enzymes that decompose the target cell's critical structures can enter the cell unobstructed.
Cholesterol-dependent cytolysin
One specific type of cytolysin is the cholesterol-dependent cytolysin (CDC). CDCs exist in many Gram-positive bacteria. The pore forming process of CDCs requires the presence of cholesterol in target-cell membranes. The pores created by CDCs are large (25–30 nm) due to the oligomerization of the cytolysins. Cholesterol is not always necessary during the attachment phase; for example, intermedilysin requires only the presence of protein receptors when attaching to target cells, with cholesterol required only at pore formation. The formation of pores by CDCs involves an additional step beyond those described above: the water-soluble monomers oligomerize to form an intermediate product named the "pre-pore" complex, and then a β-barrel is inserted into the membrane.
See also
Hemolysis (microbiology)
Thiol-activated cytolysin
Sea anemone cytotoxic protein
References
Cell biology
Peripheral membrane proteins | Cytolysin | Biology | 2,087 |
24,970,625 | https://en.wikipedia.org/wiki/RNA-based%20evolution | RNA-based evolution is a theory that posits that RNA is not merely an intermediate between Watson and Crick model of the DNA molecule and proteins, but rather a far more dynamic and independent role-player in determining phenotype. By regulating the transcription in DNA sequences, the stability of RNA, and the capability of messenger RNA to be translated, RNA processing events allow for a diverse array of proteins to be synthesized from a single gene. Since RNA processing is heritable, it is subject to natural selection suggested by Darwin and contributes to the evolution and diversity of most eukaryotic organisms.
Role of RNA in conventional evolution
In accordance with the central dogma of molecular biology, RNA passes information between the DNA of a genome and the proteins expressed within an organism. Therefore, from an evolutionary standpoint, a mutation within the DNA bases results in an alteration of the RNA transcripts, which in turn leads to a direct difference in phenotype.
RNA is also believed to have been the genetic material of the first life on Earth. The role of RNA in the origin of life is best supported by the ease of forming RNA from basic chemical building blocks (such as amino acids, sugars, and hydroxy acids) that were likely present 4 billion years ago. Molecules of RNA have also been shown to effectively self-replicate, catalyze basic reactions, and store heritable information. As life progressed and evolved over time, only DNA, which is much more chemically stable than RNA, could support large genomes, and DNA eventually took over the role of major carrier of genetic information.
Single-Stranded RNA can fold into complex structures
Single-stranded RNA molecules can fold into complex structures on their own. The molecules fold into secondary and tertiary structures by intramolecular base pairing, with a fine balance of disorder and order facilitating efficient structure formation. Complementary regions of an RNA strand base pair with one another, and the paired strand folds in on itself to produce a three-dimensional shape. The secondary structure results from base pairing by hydrogen bonds, while the tertiary structure results from folding of the RNA; the three-dimensional structure consists of grooves and helices. The formation of these complex structures gives reason to suspect that early life could have been based on RNA.
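The base-pair-maximizing logic behind such folding can be illustrated with the classic Nussinov dynamic-programming algorithm, a standard textbook method included here only as an illustration (it is not discussed in this article):

```python
def nussinov(seq, min_loop=3):
    """Maximum number of complementary base pairs (A-U, G-C, G-U) an RNA
    sequence can form, with at least `min_loop` unpaired bases per hairpin."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # distance between i and j
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                  # base i left unpaired
            if (seq[i], seq[j]) in pairs:
                best = max(best, dp[i + 1][j - 1] + 1)   # pair i with j
            for k in range(i + 1, j):            # split into two subproblems
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

print(nussinov("GGGAAAUCC"))   # 3: the toy hairpin pairs G-C, G-C, G-U
```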
Variability of RNA processing
Research within the past decade has shown that strands of RNA are not merely transcribed from regions of DNA and translated into proteins. Rather RNA has retained some of its former independence from DNA and is subject to a network of processing events that alter the protein expression from that bounded by just the genomic DNA. Processing of RNA influences protein expression by managing the transcription of DNA sequences, the stability of RNA, and the translation of messenger RNA.
Alternative splicing
Splicing is the process by which non-coding regions of RNA are removed. The number and combination of splicing events varies greatly based on differences in transcript sequence and environmental factors. Variation in phenotype caused by alternative splicing is best seen in the sex determination of D. melanogaster. In male flies, the transcript of the Tra gene, a determinant of sex, yields a truncated protein because splicing fails to remove an exon containing a stop codon that limits the length of the product. In females, the stop signal is removed from the final RNA molecule and a functional Tra protein is produced, resulting in the female phenotype. Thus, alternative RNA splicing events allow differential phenotypes, regardless of the identity of the coding DNA sequence.
RNA stability
Phenotype may also be determined by the number of RNA molecules, as more RNA transcripts lead to greater expression of protein. Short tails of repetitive nucleotides are often added to the ends of RNA molecules to prevent degradation, effectively increasing the number of RNA strands able to be translated into protein. During mammalian liver regeneration, RNA molecules encoding growth factors increase in number due to the addition of such tails. With more transcripts present, the growth factors are produced at a higher rate, aiding the rebuilding of the organ.
RNA silencing
Silencing of RNA occurs when double stranded RNA molecules are processed by a series of enzymatic reactions, resulting in RNA fragments that degrade complementary RNA sequences. By degrading transcripts, a lower amount of protein products are translated and the phenotype is altered by yet another RNA processing event.
RNA and Protein
In Earth's early developmental history, RNA was the primary substance of life: it served as the blueprint for genetic material and as the catalyst that multiplied that blueprint. Today RNA acts mainly through the formation of proteins, and protein enzymes carry out catalytic reactions. RNAs are critical in gene expression, which depends on mRNA, rRNA, and tRNA. The relationship between proteins and RNAs could suggest a mutual transfer of energy or information. In vitro RNA selection experiments have produced RNAs that bind tightly to amino acids, and the selected RNA sequences have been shown to contain a disproportionately high frequency of codons for those amino acids. It is possible that the direct association of amino acids with specific RNA sequences yielded a limited genetic code.
Evolutionary mechanism
Most RNA processing events work in concert with one another and produce networks of regulating processes that allow a greater variety of proteins to be expressed than those strictly directed by the genome. These RNA processing events can also be passed on from generation to generation via reverse transcription into the genome. Over time, RNA networks that produce the fittest phenotypes will be more likely to be maintained in a population, contributing to evolution. Studies have shown that RNA processing events have been especially critical to the fast phenotypic evolution of vertebrates, with large jumps in phenotype explained by changes in RNA processing events. Human genome searches have also revealed RNA processing events that have provided significant "sequence space for more variability". On the whole, RNA processing expands the possible phenotypes of a given genotype and contributes to the evolution and diversity of life.
RNA virus evolution
RNA virus evolution appears to be facilitated by a high mutation rate caused by the lack of a proofreading mechanism during viral genome replication. In addition to mutation, RNA virus evolution is also facilitated by genetic recombination. Genetic recombination can occur when at least two RNA viral genomes are present in the same host cell and has been studied in numerous RNA viruses. RNA recombination appears to be a major driving force in viral evolution among Picornaviridae ((+)ssRNA) (e.g. poliovirus). In the Retroviridae ((+)ssRNA) (e.g. HIV), damage in the RNA genome appears to be avoided during reverse transcription by strand switching, a form of genetic recombination. Recombination also occurs in the Coronaviridae ((+)ssRNA) (e.g. SARS). Recombination in RNA viruses appears to be an adaptation for coping with genome damage. Recombination can occur infrequently between animal viruses of the same species but of divergent lineages. The resulting recombinant viruses may sometimes cause an outbreak of infection in humans.
See also
RNA world
References
Evolutionary biology
RNA | RNA-based evolution | Biology | 1,489 |
55,591,926 | https://en.wikipedia.org/wiki/Legal%20status%20of%20ayahuasca%20by%20country | This is an overview of the legality of ayahuasca by country. DMT, one of the active ingredients in ayahuasca, is classified as a Schedule I drug under the United Nations 1971 Convention on Psychotropic Substances, meaning that international trade in DMT is supposed to be closely monitored; use of DMT is supposed to be restricted to scientific research and medical use. Natural materials containing DMT, including ayahuasca, are not regulated under the 1971 Psychotropic Convention. The majority of the world's nations classify DMT as a scheduled drug; however, few countries seem to have laws specifically addressing the possession or use of ayahuasca.
References
Further reading
Labate, Bia; Cavnar, Clancy (2023). Religious Freedom and the Global Regulation of Ayahuasca. ISBN 978-0367028756
External links
Country-by-country map of ayahuasca's legal status on ICEERS.org
Drug control law
Drug policy by country
Entheogens
Ayahuasca
Ayahuasca
Psychoactive fungi | Legal status of ayahuasca by country | Chemistry | 224 |
339,208 | https://en.wikipedia.org/wiki/Vitaly%20Ginzburg | Vitaly Lazarevich Ginzburg, ForMemRS (; 4 October 1916 – 8 November 2009) was a Russian physicist who was honored with the Nobel Prize in Physics in 2003, together with Alexei Abrikosov and Anthony Leggett for their "pioneering contributions to the theory of superconductors and superfluids."
He spent his career in the Soviet Union and was one of the leading figures in the Soviet nuclear weapons program, working on designs for thermonuclear devices. He became a member of the Russian Academy of Sciences and succeeded Igor Tamm as head of the Department of Theoretical Physics of the Lebedev Physical Institute of the Russian Academy of Sciences (FIAN). In his later life, Ginzburg became an outspoken atheist and was critical of the clergy's influence in Russian society.
Biography
Vitaly Ginzburg was born to a Jewish family in Moscow on 4 October 1916, the son of an engineer, Lazar Yefimovich Ginzburg, and a doctor, Augusta Wildauer. He graduated from the Physics Faculty of Moscow State University in 1938, defended his candidate's (Kandidat Nauk) dissertation in 1940, and his doctoral (Doktor Nauk) thesis in 1942. In 1944, he became a member of the Communist Party of the Soviet Union. Among his achievements are a partially phenomenological theory of superconductivity, the Ginzburg–Landau theory, developed with Lev Landau in 1950; the theory of electromagnetic wave propagation in plasmas (for example, in the ionosphere); and a theory of the origin of cosmic radiation. He is also known to biologists as being part of the group of scientists that helped bring down the reign of the politically connected anti-Mendelian agronomist Trofim Lysenko, thus allowing modern genetic science to return to the USSR.
In 1937, Ginzburg married Olga Zamsha. In 1946, he married his second wife, Nina Ginzburg (nee Yermakova), who had spent more than a year in custody on fabricated charges of plotting to assassinate the Soviet leader Joseph Stalin.
As a renowned professor and researcher, Ginzburg was an obvious candidate for the Soviet bomb project. From 1948 through 1952 Ginzburg worked under Igor Kurchatov to help develop the hydrogen bomb. Ginzburg and Igor Tamm both proposed ideas that made it possible to build a hydrogen bomb. When the bomb project moved to Arzamas-16 to continue in even greater secrecy, Ginzburg was not allowed to follow. Instead he stayed in Moscow and supported the work from afar, remaining under watch due to his background and past. As the work became increasingly classified, Ginzburg was phased out of the project and allowed to pursue his true passion, superconductors. During the Cold War, the thirst for knowledge and technological advancement was never-ending, and this was no different with research on superconductors. The Soviet Union believed that superconductor research would place it ahead of its American counterparts, and both sides sought to leverage the potential military applications of superconductors.
Ginzburg was the editor-in-chief of the scientific journal Uspekhi Fizicheskikh Nauk. He also headed the Academic Department of Physics and Astrophysics Problems, which Ginzburg founded at the Moscow Institute of Physics and Technology in 1968.
Ginzburg identified as a secular Jew, and following the collapse of communism in the former Soviet Union, he was very active in Jewish life, especially in Russia, where he served on the board of directors of the Russian Jewish Congress. He is also well known for fighting anti-Semitism and supporting the state of Israel.
In the 2000s (decade), Ginzburg was politically active, supporting the Russian liberal opposition and human rights movement. He defended Igor Sutyagin and Valentin Danilov against charges of espionage put forth by the authorities. On 2 April 2009, in an interview to the Radio Liberty Ginzburg denounced the FSB as an institution harmful to Russia and the ongoing expansion of its authority as a return to Stalinism.
Ginzburg worked at the P. N. Lebedev Physical Institute of the Soviet and later Russian Academy of Sciences in Moscow from 1940. The Russian Academy of Sciences is a major institution where almost all of Russia's Nobel laureates in physics have studied or conducted research.
Stance on religion
Ginzburg was an avowed atheist, both under the militantly atheist Soviet government and in post-Communist Russia when religion made a strong revival. He criticized clericalism in the press and wrote several books devoted to the questions of religion and atheism. Because of this, some Orthodox Christian groups denounced him and said no science award could excuse his verbal attacks on the Russian Orthodox Church. He was one of the signers of the Open letter to the President Vladimir V. Putin from the Members of the Russian Academy of Sciences against clericalisation of Russia.
Nobel Prize
Vitaly Ginzburg, along with Anthony Leggett and Alexei Abrikosov were awarded the Nobel Prize in Physics in 2003 for their groundbreaking work on the theory of superconductors. The Nobel Prize recognized Ginzburg's work in theoretical physics, specifically his contributions to understanding the behavior of matter at extremely low temperatures.
His collaboration with Lev Landau in 1950 led to the development of the Ginzburg–Landau theory, which became foundational to later work on superconductors. Landau had been working in the field for years before their partnership, publishing many papers between 1941 and 1947 on the properties of quantum fluids at extremely low temperatures; he would later receive the 1962 Nobel Prize for his research on the properties of superfluid liquid helium. Before their collaboration, Landau had worked only on liquid helium and other quantum fluids, but the partnership with Ginzburg allowed them to go a step further.
Ginzburg introduced the concept of an order parameter to characterize the state of the superconductor. From it, they derived a set of equations describing the behavior of the superconductor. These equations provided a model from which researchers can understand the transition between the normal and superconducting states, and predict various properties of other superconductors. Using these equations, they also introduced the Ginzburg–Landau parameter, which classifies a material as a Type-I or Type-II superconductor. This advancement allowed Anthony Leggett to build on their work and complete his own research on superconductors.
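For reference, the standard textbook form of the theory (not quoted from this article) writes the Ginzburg–Landau free-energy density of the superconducting state and the dimensionless parameter as:

```latex
f_s = f_n + \alpha\,|\psi|^2 + \frac{\beta}{2}\,|\psi|^4
      + \frac{1}{2m^{*}}\left|\left(-i\hbar\nabla - e^{*}\mathbf{A}\right)\psi\right|^{2}
      + \frac{|\mathbf{B}|^{2}}{2\mu_0},
\qquad
\kappa = \frac{\lambda}{\xi},
```

where ψ is the order parameter, λ the magnetic penetration depth, ξ the coherence length, and e* = 2e the charge of a Cooper pair; κ < 1/√2 corresponds to Type-I superconductors and κ > 1/√2 to Type-II.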
This research on superconductors allowed many new technological advancements to unfold, including some we can see in everyday life. The use of superconductors can be seen in MRI machines, engines, and new Maglev trains.
Death
A spokeswoman for the Russian Academy of Sciences announced that Ginzburg died in Moscow on 8 November 2009 from cardiac arrest. He had been suffering from ill health for several years, and three years before his death said "In general, I envy believers. I am 90, and [am] being overcome by illnesses. For believers, it is easier to deal with them and with life's other hardships. But what can be done? I cannot believe in resurrection after death."
Prime Minister of Russia Vladimir Putin sent his condolences to Ginzburg's family, saying "We bid farewell to an extraordinary personality whose outstanding talent, exceptional strength of character and firmness of convictions evoked true respect from his colleagues". President of Russia Dmitry Medvedev, in his letter of condolences, described Ginzburg as a "top physicist of our time whose discoveries had a huge impact on the development of national and world science."
Ginzburg was buried on 11 November in the Novodevichy Cemetery in Moscow, the resting place of many famous politicians, writers and scientists of Russia.
Family
His first wife (1937–1946) was Olga Ivanovna Zamsha (born 1915, Yeysk), a graduate of the Faculty of Physics of Moscow State University (1938), candidate of physical and mathematical sciences (1945), associate professor at MEPhI (1949–1985), and author of the "Collection of Problems on General Physics" (with co-authors, 1968, 1972, 1975).
His second wife (from 1946) was the experimental physicist Nina Ivanovna Ginzburg (née Ermakova; October 2, 1922 – May 19, 2019), a graduate of the Faculty of Mechanics and Mathematics of Moscow State University.
Daughter — Irina Vitalievna Dorman (born 1939), graduate of the Faculty of Physics of Moscow State University (1961), candidate of physical and mathematical sciences, historian of science (her husband is a cosmophysicist, doctor of physical and mathematical sciences Leib (Lev) Isaakovich Dorman).
Granddaughter — Victoria Lvovna Dorman, American physicist, graduate of the physics department of Moscow State University and Princeton University, deputy dean for academic affairs at the Princeton School of Engineering and Applied Science; her husband is physicist and writer Mikhail Petrov.
Great cousin — Mark Ginzburg.
Other honors and awards
Medal "For Valiant Labour in the Great Patriotic War 1941–1945" (1946)
Medal "In Commemoration of the 800th Anniversary of Moscow" (1948)
Stalin Prize in 1953
Order of Lenin (1954)
Order of the Badge of Honour, twice (1954, 1975)
Order of the Red Banner of Labour, twice (1956, 1986)
Lenin Prize in 1966
Medal "For Valiant Labour. To commemorate the 100th anniversary of the birth of Vladimir Ilyich Lenin" (1970)
Marian Smoluchowski Medal (1984)
Elected a Foreign Member of the Royal Society (ForMemRS) in 1987
Gold Medal of the Royal Astronomical Society in 1991
Wolf Prize in Physics in 1994/5
Vavilov Gold Medal (1995) – for outstanding work in physics, including a series of papers on the theory of radiation by uniformly moving sources
Lomonosov Gold Medal in 1995 – for outstanding achievement in the field of theoretical physics and astrophysics
Order "For Merit to the Fatherland", 3rd class (3 October 1996) – for outstanding scientific achievements and the training of highly qualified personnel
Elected a Fellow of the American Physical Society in 2003.
Order "For Merit to the Fatherland", 1st class (4 October 2006) – for outstanding contribution to the development of national science and many years of fruitful activity
See also
List of Jewish Nobel laureates
References
External links
including the Nobel Lecture On Superconductivity and Superfluidity
Ginzburg's homepage
Curriculum Vitae
Open letter to the President of the Russian Federation Vladimir V. Putin
Obituary The Daily Telegraph 11 Nov 2009.
Obituary The Independent November 14, 2009 (by Martin Childs).
Biography
Obituary
Archival collections
Vitalii Ginzburg papers, 1992, Niels Bohr Library & Archives
1916 births
2009 deaths
Scientists from Moscow
People from Moskovsky Uyezd
Communist Party of the Soviet Union members
Members of the Congress of People's Deputies of the Soviet Union
Russian atheism activists
Jewish atheists
Jewish Russian physicists
Nuclear weapons program of the Soviet Union people
Soviet astronomers
Soviet inventors
Soviet physicists
Russian theoretical physicists
Superconductivity
Academic journal editors
Moscow State University alumni
Academic staff of the Moscow Institute of Physics and Technology
Full Members of the USSR Academy of Sciences
Full Members of the Russian Academy of Sciences
Foreign associates of the National Academy of Sciences
Foreign fellows of the Indian National Science Academy
Fellows of the American Physical Society
Foreign members of the Royal Society
Recipients of the Stalin Prize
Recipients of the Lenin Prize
Recipients of the Gold Medal of the Royal Astronomical Society
Recipients of the Lomonosov Gold Medal
Recipients of the Order "For Merit to the Fatherland", 1st class
Recipients of the Order of Lenin
Recipients of the Order of the Red Banner of Labour
Nobel laureates in Physics
Russian Nobel laureates
Wolf Prize in Physics laureates
UNESCO Niels Bohr Medal recipients
Burials at Novodevichy Cemetery
Russian scientists | Vitaly Ginzburg | Physics,Materials_science,Technology,Engineering | 2,500 |
58,507,154 | https://en.wikipedia.org/wiki/Aspergillus%20bicolor | Aspergillus bicolor is a species of fungus in the genus Aspergillus. It is from the Aenei section. The species was first described in 1978. It has been reported to produce sterigmatocystin, versicolorins, and some anthraquinones.
Growth on agar plates
Aspergillus bicolor has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
bicolor
Fungi described in 1978
Fungus species | Aspergillus bicolor | Biology | 133 |
1,402,463 | https://en.wikipedia.org/wiki/Hydroinformatics | Hydroinformatics is a branch of informatics which concentrates on the application of information and communications technologies (ICTs) in addressing the increasingly serious problems of the equitable and efficient use of water for many different purposes. Growing out of the earlier discipline of computational hydraulics, the numerical simulation of water flows and related processes remains a mainstay of hydroinformatics, which encourages a focus not only on the technology but on its application in a social context.
On the technical side, in addition to computational hydraulics, hydroinformatics has a strong interest in the use of techniques originating in the so-called artificial intelligence community, such as artificial neural networks or, more recently, support vector machines and genetic programming. These might be used with large collections of observed data for the purpose of data mining for knowledge discovery, or with data generated from an existing, physically based model in order to generate a computationally efficient emulator of that model for some purpose.
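As a toy illustration of the emulator idea described above (a sketch only, not from any cited source: the "physically based model" here is a stand-in function, and scikit-learn is an assumed dependency), one can sample an expensive simulator and train a neural network to reproduce it cheaply:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def physically_based_model(rain, soil_moisture):
    # stand-in for an expensive simulator: a toy nonlinear rainfall-runoff rule
    return np.maximum(rain - 0.5 * (1.0 - soil_moisture), 0.0) ** 1.5

rng = np.random.default_rng(42)
rain = rng.uniform(0.0, 10.0, 5000)
soil = rng.uniform(0.0, 1.0, 5000)
X = np.column_stack([rain, soil])
y = physically_based_model(rain, soil)

# train a cheap neural-network emulator on the simulator output
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
print(f"emulator R^2 on held-out samples: {emulator.score(X_te, y_te):.3f}")
```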
Hydroinformatics recognises the inherently social nature of the problems of water management and of decision-making processes, and strives to understand the social processes by which technologies are brought into use. Since the problems of water management are most severe in the majority world, while the resources to obtain and develop technological solutions are concentrated in the hands of the minority, the need to examine these social processes are particularly acute.
Hydroinformatics draws on and integrates hydraulics, hydrology, environmental engineering and many other disciplines. It sees application at all points in the water cycle from atmosphere to ocean, and in artificial interventions in that cycle such as urban drainage and water supply systems. It provides support for decision making at all levels from governance and policy through management to operations.
Hydroinformatics has a growing world-wide community of researchers and practitioners, and postgraduate programmes in Hydroinformatics are offered by many leading institutions. The Journal of Hydroinformatics provides a specific outlet for Hydroinformatics research, and the community gathers to exchange ideas at the biennial conferences. These activities are coordinated by the joint IAHR, IWA, IAHS Hydroinformatics Section.
Classic Soft-Computing Techniques is the first of three volumes in the Handbook of HydroInformatics series (Elsevier), edited by Saeid Eslamian.
Handbook of HydroInformatics, Volume II: Advanced Machine Learning Techniques presents both the art of designing good learning algorithms and the science of analyzing an algorithm's computational and statistical properties and performance guarantees.
Handbook of HydroInformatics, Volume III: Water Data Management Best Practices presents the latest data processing techniques that are fundamental to water science and engineering disciplines.
References
External links
Hydroinformatics Lab at Brigham Young University
Hydroinformatics Lab at the University of Iowa - Research and Community Platform.
IHE Delft MSc / PhD in Hydroinformatics.
EuroAquae - European master course of Hydroinformatics and Water Management.
Hydroinformatics MSc at Newcastle University.
The Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System.
Environmental engineering
Hydrology
Information science by discipline
Computational fields of study | Hydroinformatics | Chemistry,Technology,Engineering,Environmental_science | 640 |
68,219,713 | https://en.wikipedia.org/wiki/ExoKyoto | ExoKyoto is a database written in C++ that includes over 3,500 confirmed exoplanets as well as more than 120,000 stars. The database is led by Professor Yosuke Yamashiki of the Graduate School of Advanced Leadership Studies, at Kyoto University. ExoKyoto is particularly useful to visualize the habitable zone of different stars and compare their planets with the Solar System in terms of irradiance. Together with the Extrasolar Planets Encyclopaedia, the NASA Exoplanet Archive, the Open Exoplanet Catalogue, and the Exoplanet Data Explorer, ExoKyoto is a popular exoplanet database that is widely used.
See also
NASA Exoplanet Archive
References
Databases in Japan
Astronomical databases | ExoKyoto | Astronomy | 161 |
247,733 | https://en.wikipedia.org/wiki/Bath%20brick | The bath brick (also known as Patent Scouring or Flanders bricks), patented in 1823 by William Champion and John Browne, was a predecessor of the scouring pad used for cleaning and polishing.
Bath bricks were made by a number of companies in the town of Bridgwater, England, from fine clay dredged from the River Parrett near Dunball. The silt, which was collected from the river on either side of the Town Bridge, contained fine particles of alumina and silica. It was collected from beds of brick rubble left in the rain for the salt to be washed out and then put into a "pugging mill" which was powered by a horse to be mixed, before being shaped into moulds and dried. These would be wrapped in paper and boxed for sale in England and throughout the British Empire. By the end of the 19th century around 24 million bath bricks had been produced in Bridgwater for the home and international markets.
The brick, similar in size to an ordinary house brick, could be used in a number of ways. A mild abrasive powder could be scraped from the brick and used as a scouring powder on floors and other surfaces. Powder could also be moistened with water for use on a cloth for polishing or as a kind of sand paper. Items such as knives might be polished directly on a wetted brick.
See also
List of cleaning products
References
Cleaning products
Bridgwater | Bath brick | Chemistry | 298 |
11,363,542 | https://en.wikipedia.org/wiki/Thinned-array%20curse | The thinned-array curse (sometimes, sparse-array curse) is a theorem in electromagnetic theory of antennas. It states that a transmitting antenna which is synthesized from a coherent phased array of smaller antenna apertures that are spaced apart will have a smaller minimum beam spot size, but the amount of power that is beamed into this main lobe is reduced by an exactly proportional amount, so that the total power density in the beam is constant.
The origin of the term is not clear. Robert L. Forward cites use of the term in unpublished Hughes Research Laboratories reports dating from 1976.
Example
Consider a number of small sub-apertures that are mutually adjacent to one another, so that they form a filled aperture array. Suppose that they are in orbit, beaming microwaves at a spot on the ground. Now, suppose you hold constant the number of sub-apertures and the power emitted by each, but separate the sub-apertures (while keeping them mutually phased) so as to synthesize a larger aperture. The spot size on the ground is reduced in size proportionally to the diameter of the synthesized array (and hence the area is reduced proportionally to the diameter of the synthesized array squared), but the power density at the ground is unchanged.
Thus:
The array is radiating the same amount of power (since each individual sub-aperture making the array radiates a constant amount of power whether or not it is adjacent the next aperture).
It has the same power per unit area at the center of the receiving spot on the ground.
The receiving spot on the ground is smaller.
From these three facts, it is clear that if the synthesized aperture has an area A, and the total area of it that is filled by active transmitters is a, then at most a fraction a/A of the radiated power reaches the target, and the fraction 1 - a/A is lost. This loss shows up in the form of power in side lobes.
This theorem can also be derived in more detail by considering a partially filled transmitter array as being the superposition of a fully filled array plus an array consisting of only the gaps, broadcasting exactly out of phase with the filled array. The interference pattern between the two reduces the power in the main beam lobe by exactly the factor 1 - a/A.
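A minimal numerical sketch of the theorem (an illustration, not from the source): for a uniform one-dimensional array of N isotropic, mutually coherent elements, the far-field array factor is AF(u) = Σ exp(ikxₙu) with u = sin θ; integrating |AF|² over the visible region and over the main lobe shows the main-lobe power fraction falling in proportion to the fill factor as the element spacing grows.

```python
import numpy as np

WAVELENGTH = 1.0
K = 2.0 * np.pi / WAVELENGTH
N = 16                                     # number of radiating elements

def main_lobe_fraction(spacing):
    """Fraction of total radiated power inside the main lobe of a uniform
    1-D array of N isotropic, mutually coherent elements."""
    u = np.linspace(-1.0, 1.0, 100_001)    # u = sin(theta), visible region
    x = spacing * np.arange(N)             # element positions
    af = np.exp(1j * K * np.outer(u, x)).sum(axis=1)
    power = np.abs(af) ** 2
    # first nulls of the main lobe lie at u = +/- wavelength / (N * spacing)
    lobe = np.abs(u) <= WAVELENGTH / (N * spacing)
    return power[lobe].sum() / power.sum()  # uniform grid: sums ~ integrals

# doubling the spacing (halving the fill factor a/A) halves the power
# delivered to the main lobe; the rest reappears in grating side lobes
for s in (0.5, 1.0, 2.0):
    print(f"spacing {s:.1f} lambda -> main-lobe power fraction "
          f"{main_lobe_fraction(s):.3f}")
```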
Note that the thinned array curse applies only to mutually coherent sources. If the transmitting sources are not mutually coherent, the size of the ground spot does not depend on the relationship of the individual sources to one another, but is simply the sum of the individual spots from each source.
Consequences
The thinned array curse means that while synthesized apertures are useful for receivers with high angular resolution, they are not useful for power transmitters. It also means that if a filled array transmitter has gaps between individual elements, the main lobe of the beam will lose an amount of power proportional to the area of the gaps. Likewise, if a transmitter comprises multiple individual transmitters, some of which fail, the power lost from the main lobe will exceed the power of the lost transmitter, because power will also be diverted into the side lobes.
The thinned array curse has consequences for microwave power transmission and wireless energy transfer concepts such as solar power satellites; it suggests that it is not possible to make a smaller beam and hence reduce the size of a receiver (called a rectenna for microwave power beaming) by phasing together beams from many small satellites.
A short derivation of the thinned array curse, focusing on the implications for use of lasers to provide impulse for an interstellar probe (an application of beam-powered propulsion), can be found in Robert Forward's paper "Roundtrip Interstellar Travel Using Laser Pushed Lightsails."
See also
Radiation pattern
Notes
References
The general theory of phased array antennas, from which the thinned array curse can be derived, can be found in Chapter 19 of Sophocles J. Orfanidis, Electromagnetic Waves and Antennas (electronic version accessed July 20, 2009).
See also Constantine A. Balanis: “Antenna Theory, Analysis and Design”, John Wiley & Sons, Inc., 2nd ed. 1982
Interferometry
Electromagnetic radiation | Thinned-array curse | Physics | 840 |
39,013,900 | https://en.wikipedia.org/wiki/Non-linear%20coherent%20states | Coherent states are quasi-classical states that may be defined in different ways, for instance as eigenstates of the annihilation operator
,
or as a displacement from the vacuum
,
where is the Sudarshan-Glauber displacement operator.
One may think of a non-linear coherent state by generalizing the
annihilation operator:
,
and then using any of the above definitions by exchanging by . The above definition is also known as an -deformed annihilation operator.
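As a numerical check on these definitions (a sketch, not from the source), the eigenstate condition Â|α⟩ = α|α⟩ fixes the Fock-space coefficients recursively via c_{n+1} = α c_n / (√(n+1) f(n+1)); the choice f(n) = 1/√n (the so-called harmonious states) is an illustrative assumption, and f ≡ 1 recovers the ordinary coherent state:

```python
import numpy as np

dim, alpha = 60, 0.6     # Fock-space truncation; eigenvalue (|alpha| < 1 here)

def f(n):
    """Illustrative nonlinearity f(n) = 1/sqrt(n) ("harmonious states").
    Its value at n = 0 never matters, since a|0> = 0; set it to 1."""
    n = np.asarray(n, dtype=float)
    return np.where(n > 0, 1.0 / np.sqrt(np.maximum(n, 1.0)), 1.0)

n = np.arange(dim)
a = np.diag(np.sqrt(np.arange(1, dim, dtype=float)), k=1)  # annihilation op.
A = a @ np.diag(f(n))                                      # A = a f(n_hat)

# eigenstate coefficients: c_{m+1} = alpha * c_m / (sqrt(m+1) * f(m+1))
c = np.zeros(dim)
c[0] = 1.0
for m in range(dim - 1):
    c[m + 1] = alpha * c[m] / (np.sqrt(m + 1.0) * float(f(m + 1)))
c /= np.linalg.norm(c)

# residual is ~1e-14, limited only by the truncation of the Fock space
print(np.linalg.norm(A @ c - alpha * c))
```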
References
Quantum mechanics | Non-linear coherent states | Physics | 104 |
70,021,966 | https://en.wikipedia.org/wiki/Abell%2063 | Abell 63 is a planetary nebula with an eclipsing binary central star system in the northern constellation of Sagitta. Based on parallax measurements of the central star, it is located at a distance of approximately 8,810 light years from the Sun. The systemic radial velocity of the nebula is . The nuclear star system is the progenitor of the nebula and it has a combined apparent visual magnitude of 14.67. During mid eclipse the magnitude drops to 19.24.
The star H.V. 5452 was found to be a candidate eclipsing binary system in 1932 by Dorrit Hoffleit, and it was given the variable star designation UU Sagittae (UU Sge). In 1955, George O. Abell discovered a nebula in the same region of the sky from photographic plates taken by the National Geographic Society – Palomar Observatory Sky Survey. The identifier 'Abell 63' comes from a follow-up publication by Abell in 1966, which identified the nebula as a homogeneous disk in diameter with a central star of magnitude 14.67. In 1976, Howard E. Bond noted that the positions of the variable star and the center of the nebula coincide. That same year, J. S. Miller and associates confirmed that UU Sge is an eclipsing binary, finding a period of 11h 09.6m with an eclipse duration of 70 minutes. The deep eclipse decreased the brightness of the pair by ~4.3 magnitudes.
The general shape of this nebula appears to be a hollow tube with a prominent hyperbolic-shaped waist. The bright central rim has faint extensions leading to end caps; the primary axis of the tube being aligned along a position angle of 34°. The overall profile has a 7:1 aspect ratio spanning an angular size of , with the ends at an equal angular distance from the center. The nebula is expanding with a velocity of . Surrounding the bright central rim is a faint circular shell, which may be the remnant of the stellar wind produced as the central star passed through the asymptotic giant branch.
The central system is a close detached binary with an orbital period of 11.2 hours. The length of the total eclipse of the primary component by the secondary is 13.4 minutes. They have a projected separation of at least 2.45 times the radius of the Sun. The primary is an O-type subdwarf star (sdO) that has passed through the asymptotic giant branch stage, during which it ejected the surrounding planetary nebula. It has 63% of the mass of the Sun and 35% of the Sun's radius, with an effective temperature of ~78,000 K. The secondary has the mass of an M-type main-sequence star, or 29% of the mass of the Sun. However, the effective temperature of 6,136 K is much higher than expected for an M dwarf, and the radius of 56% of the Sun is too large. This is because the point on the secondary facing the primary is being heated by its much hotter companion. The hot primary is also providing the illumination of the surrounding nebula.
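As a consistency check on the quoted figures (arithmetic added here, not from the source), Kepler's third law applied to the component masses of 0.63 and 0.29 solar masses and the 11.2-hour period reproduces the stated orbital separation of about 2.45 solar radii:

```python
import math

G     = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30            # kg
R_SUN = 6.957e8             # m

m_total = (0.63 + 0.29) * M_SUN     # quoted component masses
period  = 11.2 * 3600.0             # quoted orbital period, seconds

# Kepler's third law: a^3 = G * M_total * P^2 / (4 * pi^2)
a = (G * m_total * period**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
print(f"orbital separation ~ {a / R_SUN:.2f} R_sun")   # ~2.46, matching ~2.45
```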
References
Further reading
Planetary nebulae
Eclipsing binaries
O-type subdwarfs
M-type main-sequence stars
Sagitta
Sagittae, UU | Abell 63 | Astronomy | 680 |
71,797,666 | https://en.wikipedia.org/wiki/Robert%20H.%20Carter%20III | Robert H. Carter III (January 12, 1847 - January 13, 1908) was an American pharmacist. He was the first African American certified pharmacist from Massachusetts.
By 1871, Carter worked for the druggist and chemist William P. S. Caldwell at 49 Purchase Street. He married hairdresser Parthenia M. Harris on July 8, 1869. They had six children. Between 1876 and 1907, he owned and managed pharmacies in Boston and New Bedford, Massachusetts. On January 5, 1896, he was certified as a registered pharmacist. He worked as a pharmacist for 37 years. He was a member of the National Negro Business League.
He died January 13, 1908, in Brighton, Boston of tuberculosis.
References
1847 births
1908 deaths
Pharmacists from Massachusetts
African-American pharmacists
19th-century American pharmacists
19th-century American businesspeople
African-American businesspeople
Businesspeople from Boston
Tuberculosis deaths in Massachusetts | Robert H. Carter III | Chemistry | 205 |
70,087,251 | https://en.wikipedia.org/wiki/Fairfield%20Enterprises | Fairfiel Enterprises was a leading British-based machine distributor and supplier of tooling and support services to the printing and packaging industries. The company was taken over in 2000 by the Swiss company Bobst AG, a supplier of machinery and services to the packaging industry.
History
The business was founded in London in 1884 by the Jewish-German immigrant Oscar Friedheim (1858–1928), at first trading in the supply of cardboard and paper. Five years later the company turned to the import and distribution of machinery, starting with a German-made card-cutting and scoring machine for the production of visiting cards. From then on, Oscar Friedheim focused on building up connections with overseas machine manufacturers in the paper and packaging industry (e.g. Bobst, Faber & Schleicher, Muller Martini and, after the Second World War, Winkler + Dünnebier).
In 1913 the company was incorporated as a limited company with a nominal capital of £17,000. During The Blitz, its head office in Water Lane was hit twice; most of the machinery and the company records were destroyed, so a provisional office had to be set up at Mill Hill. In 1948 Oscar Friedheim Ltd. bought the engineers' and sundries business of John Haddon & Co, a London-based printing and advertising company.
In 1970 Fairfield Enterprises Ltd. was created as a holding company of Oscar Friedheim Ltd. To extend its business into spare parts Fairfield bought 50% of Lasercomb Dies Ltd. (Redditch) in 1984 and purchased the remaining 50% in 1991.
In August 1997 Fairfield was listed on the London Stock Exchange at 80p per share to raise funds for acquisitions. In 1998 Fairfield bought Palatine Engraving (Leeds) and Kennedy Grinding (London). In 1999 the headquarters of Fairfield Enterprises was moved from London to Redditch.
In 2000 Bobst AG of Switzerland acquired Fairfield Enterprises, paying 200p per Share. Fairfield's activities concerning the distribution of Bobst machinery in the UK and Ireland were renamed Bobst UK Holdings Ltd. The remaining parts of Fairfield, Friedheim International and Lasercomb Group (including Palatine Engraving), were sold to their management in 2005 and 2006 respectively.
References
Further reading
Roy Brewer: Friedheim - A Century of Service - 1884–1984, Oscar Friedheim Ltd, London 1984.
External links
Homepage of Friedheim International
Homepage of Lasercomb Group
British companies established in 1884
1884 establishments in England
Companies based in London
Distribution companies of the United Kingdom
Business services companies of the United Kingdom
Papermaking in the United Kingdom
Stationers of the United Kingdom
Packaging industry
Printing devices
Companies formerly listed on the London Stock Exchange
2000 mergers and acquisitions
History of London
Economy of the City of London | Fairfield Enterprises | Physics,Technology | 541 |
78,404,878 | https://en.wikipedia.org/wiki/ARKA%20descriptors%20in%20QSAR | One of the most commonly used in silico approaches for assessing new molecules' activity/property/toxicity is the Quantitative Structure-Activity/Property/Toxicity Relationship (QSAR/QSPR/QSTR), which generates predictive models for efficiently predicting query compounds . QSAR/QSPR/QSTR uses numerical chemical information in the form of molecular descriptors and correlates these to the response activity/property/toxicity using statistical techniques. While QSAR is essentially a similarity-based approach, the occurrence of activity/property cliffs may greatly reduce the predictive accuracy of the developed models. The novel Arithmetic Residuals in K-groups Analysis (ARKA) approach is a supervised dimensionality reduction technique that can easily identify activity cliffs in a data set. Activity cliffs are similar in their structures but differ considerably in their activity. The basic idea of the ARKA descriptors is to group the conventional QSAR descriptors based on a predefined criterion and then assign weightage to each descriptor in each group. ARKA descriptors have also been used to develop classification-based and regression-based QSAR models with acceptable quality statistics.
References
Cheminformatics
Dimension reduction | ARKA descriptors in QSAR | Chemistry | 249 |
5,976,406 | https://en.wikipedia.org/wiki/BM-14 | The BM-14 (BM for Boyevaya Mashina, 'combat vehicle') is a Soviet-made 140mm multiple launch rocket system (MLRS), normally mounted on a truck.
The BM-14 can fire 140 mm M-14 rockets with a high-explosive fragmentation warhead, a smoke warhead or a chemical warhead. It is similar to the BM-13 "Katyusha" and was partly replaced in service by the 122 mm BM-21 Grad.
Launchers were built in 16 and 17-round variants. The rockets have a maximum range of .
The weapon is not accurate as there is no guidance system, but it is extremely effective in saturation fire.
Variants
BM-14 (8U32) - 16-round model (two rows of 8), launcher mounted on the ZIS-151 truck. Entered service in 1952. Also known as BM-14-16.
BM-14M (2B2) - modified model, mounted on the ZIL-157.
BM-14MM (2B2R) - final upgrade, mounted on the ZIL-131.
BM-14-17 (8U35) - 17-round (8+9 launch tubes) launcher, mounted on the GAZ-63A. Developed in 1959. This launcher was also used on naval vessels, for example Project 1204 patrol boats.
BM-14-17M (8U35M) - modified model, mounted on the GAZ-66.
RPU-14 (8U38) - towed 16-round version, based on the carriage of the 85mm gun D-44 and used by Soviet Airborne Troops, where it was replaced by the 122mm BM-21V "Grad-V".
Ammunition
The BM-14 launcher and its variants can fire 140mm rockets of the M-14-series (also called Soviet-made M14 artillery rockets). They have a minimum range of and a maximum range of . The M-14 series consist of three known types:
M-14-OF - an M-14 rocket with a high-explosive fragmentation warhead containing of TNT.
M-14-D - an M-14 rocket with a smoke warhead containing white phosphorus.
M-14-S - an M-14 rocket with a chemical warhead containing of sarin.
Use
During the Syrian Civil War, a rocket engine from a 140 mm M-14-series rocket was identified on 26 August 2013 by the U.N. fact-finding mission in the Muadamiyat al-Sham district southwest of Damascus, allegedly originating from the chemical attack on Western Ghouta on 21 August 2013.
The rocket's nozzle assembly had 10 jet nozzles arranged evenly in a circle with an electrical contact plate in the middle. The bottom ring of the rocket engine had the lot number "Г ИШ 4 25 - 6 7 - 179 К" engraved, which means it was produced in 1967 by factory 179 (Sibselmash plant in Novosibirsk). However, no warhead was observed at the impact site and none of the 13 environmental samples taken in the Western Ghouta area tested positive for sarin, although three had "degradation and/or by-products" possibly originating from sarin. On 18 September, the Russian Presidential Chief of Staff Sergei Ivanov commented on the U.N. mission's findings. He said "these rockets were supplied to dozens of countries", but that "the Soviet Union never supplied warheads with sarin to anyone". Another type of rocket was used in the Eastern Ghouta attack.
Operators
Current operators
− 48 BM-14/16
− 20 BM-14-16
− 32
− BM-14-17 mounted on Shmel-class (Project 1204) patrol boats as of 2023
− 200 BM-14 purchased in 1967. Was in service as late as 2016
− fielded during the Vietnam War from 1967
Former operators
? / / / -
/ − A number destroyed during the Angolan Civil War. Operated BM-14-16s as late as 2005
/ /
- Indonesian Marine Corps (Korps Marinir) operated 36 BM-14-17 launchers. Replaced by the RM-70 in 2003
Federation of Arab Republics (1972-1977)
- − retired
− BM-14-16 and BM-14-17
− Passed on to successor states in 1991
− 15 BM-14 in 1989. Passed on to the unified Yemeni state
Similar designs
The Type 63 130mm multiple rocket launcher (not to be confused with the towed Type 63 of 107mm) is the Chinese version of the BM-14-17. It has a slightly smaller calibre but is fitted with 19 instead of 17 launch tubes. The Type 63 MRL is based on the Nanjing NJ-230 or 230A 4x4 truck, a licence-produced version of the Soviet GAZ-63/63A.
The WP-8z () was a Polish towed rocket launcher that was developed in 1960. The weapon was subsequently produced between 1964 and 1965. It fired the same rockets as the RPU-14 but had only 8 launch tubes. The main operator was the 6th Pomeranian Airborne Division (), with 12–18 WP-8s in its inventory.
See also
BM-12 multiple rocket launcher
Katyusha World War II multiple rocket launchers (BM-13, BM-8, and BM-31)
M16 (rocket), U.S. 4.5 inch multiple rocket launcher
BM-21 Grad 122 mm multiple rocket launcher
BM-27 Uragan 220 mm multiple rocket launcher
References
External links
Use of BM-14 by the Taliban
Description of BM-14
Use of BM-14 by Cuban Armed Forces
Range and Payload
Algerian use of BM-14 as of 1993
Walk-around of Type 63 130mm MRL
Bibliography
Cold War artillery of the Soviet Union
Multiple rocket launchers of the Soviet Union
Chemical weapon delivery systems
Military equipment introduced in the 1950s | BM-14 | Chemistry | 1,258 |
40,663,753 | https://en.wikipedia.org/wiki/Power-egg | A power-egg is a complete "unitized" modular engine installation, consisting of engine and all ancillary equipment, which can be swapped between suitably designed equipment, with standardised quick-changing attachment points and connectors.
In aircraft so designed, the power-egg is typically removed before mean time to failure is reached and a fresh one installed, the removed engine then being sent for maintenance. Spare power-eggs may be stored in sealed containers, to be opened when needed.
The power-egg or Kraftei format was used in some German Second World War era aircraft, particularly for twin or multi-engined airframe designs. It existed in two differing formats: the initial Motoranlage format, which used some specialized added components depending on which airframe it was intended for, and the Triebwerksanlage format, a more complete unitization format usually including exhaust and oil cooling systems.
Applications
Germany
Inline and radial engines were both incorporated into the Kraftei concept. The Junkers Jumo 211 was a pioneering example of engine unitization: on the Junkers Ju 88 it used a novel annular radiator for both main engine coolant and engine oil cooling needs (viewable on the National Museum of the U.S. Air Force's restored Ju 88D-1 reconnaissance aircraft), and exactly the same nacelle packaging was used to power the Messerschmitt Me 264 V1's first flights. Both the inline-engined examples of the Dornier Do 217 medium bomber and the Axis Powers' largest-flown powered aircraft of any type, the Blohm & Voss BV 238 flying boat, used essentially the same unitized Daimler-Benz DB 603 powerplants, complete with "chin" radiators under the nacelles as integral components. A differing Kraftei physical packaging is also believed to have been crafted by the Heinkel firm for the DB 603 engines used on its Heinkel He 219A night fighter, as what appears to be the same engine installation design used for the He 219A was also used for the quartet of ordered airframes of the same firm's He 177B four-DB 603-engined heavy bomber prototype series; both airframe types' engine "units" used annular radiators enclosed in cylindrical cowls of identical appearance.
The air-cooled BMW 801 fourteen-cylinder, twin-row radial engine was also provided in both formats for a number of German designs, especially for twin and multi-engined airframes, with the "M" or "T" first suffix letter designating whether it was a Motoranlage (the original format of the Kraftei concept) or the more comprehensively consolidated Triebwerksanlage format unitized powerplant – the BMW-designed forward cowling ring always used with the 801 incorporated the engine's oil cooler, making it an easy task for aviation engineers to use for such a "unitized" mounting concept.
One known surviving Motoranlage-packaged BMW 801 radial still exists and is on restored display at the New England Air Museum, Bradley International Airport, Windsor Locks, CT, with preserved examples of a Ju 88R-1 night fighter and Ju 388L-1 reconnaissance aircraft, one each in the United Kingdom and the United States respectively, also having unitized Kraftei-installation BMW 801 radials on them.
Soviet Union
Project 651E, originally envisaged as a modification of the Juliett-class submarine, added a small, mostly self-contained 600 kW nuclear reactor, model VAU-6, the so-called Dollezhal egg. This nuclear powerpack aimed to greatly prolong the submerged endurance of what was otherwise a normal diesel-electric submarine, permitting long-duration idling and underwater recharging of batteries. The system was developed but, according to unclassified sources, had not seen service through 1985.
United Kingdom
A scheme for unitised engine installations was initiated by the Air Ministry in 1937 and after consultation with the Society of British Aircraft Constructors (SBAC) a system was devised allowing standardised dimensions and bulkhead fittings for both inline and radial engine installations of similar power.
The Bristol Aeroplane Company devised an installation known as a "power egg" for the Hercules engine in 1938, an example of which was exhibited at the 1938 Paris Aeronautical Salon. The Hercules installation was used on the Bristol Beaufighter, Armstrong Whitworth Albemarle, Vickers Wellington, Short Stirling, and Handley Page Halifax.
After an early "Power Unit" installation was devised by Rolls-Royce (RR) for the Merlin X and used in the Armstrong Whitworth Whitley and Vickers Wellington, a more advanced "Power Plant" design was devised for the Merlin XX, a unitized Merlin XX-series engine installation and nacelle being designed and first used on the Beaufighter Mark II which was later also used on the Miles M.20, Avro Lancaster and Avro York, and the post-war CASA 2.111. Merlin Power Plant production rose from just over 100 in 1939 to nearly 14,000 by 1944, mostly destined for the Lancaster.
A new installation was subsequently designed as the "Universal Power Plant" (UPP) radiator and cowling installation developed for the Avro Lincoln (Merlin 65, 68, and 85) and also used on the Vickers Windsor (Merlin 85), and subsequently used on the Avro Tudor (Merlin 100-series), Canadair North Star/Argonaut (Merlin 600-series), and Avro Shackleton (Griffon 61 and 62).
Capable of mounting either the 27 litre Merlin or the larger 37 litre Griffon, the UPP attached to the nacelle firewall via the SBAC standard circular bulkhead. In the North Star (A Canadian-built variant of the Douglas DC-4) the UPP design had to be changed slightly due to having to use the non-standard Douglas DC-4 bulkhead attachment, resulting in the North Star's cowling panels being tapered slightly rather than parallel-sided. The UPP installation had the advantage that all engines were interchangeable between nacelle positions, i.e., an inboard engine could be exchanged with an outboard engine, and engine types (Merlin or Griffon) and Mark No.s could be mixed and flown on the same aircraft, a Hucknall Lancaster test bed being flown with two Merlins for the North Star in one position, and with two Merlins for the Tudor in the others.
Rolls-Royce continued the practice of unitised engine packages post-war with the Dart and Tyne turboprops, and later with podded jet engines such as the Conway and RB211 being supplied as complete RR-designed units with all cowling panels and nacelle fittings, including thrust reverser, ready for attachment to the engine pylon.
United States
In the United States, Pratt & Whitney produced an R-2180-E Twin Wasp E "power egg" installation, certificated in 1945 as an engine upgrade for the Douglas DC-4; however, it found few buyers and was eventually used only on the Saab 90 Scandia.
See also
Avia S-199, Bf 109 airframes fitted with engines and propellers of the Heinkel He 111 bomber
Powerpack (drivetrain)
Power module
Prime mover
References
External links
"Interchangeability" a 1939 Flight article
"Engine Mountings" a 1944 Flight article
"The Hercules Power Plant" a 1942 Flight article
"Rolls-Royce Power Plants" a 1942 advertisement in Flight
Aircraft engines | Power-egg | Technology | 1,557 |
59,185,446 | https://en.wikipedia.org/wiki/Carrier%20aggregation | In wireless communication, carrier aggregation is a technique used to increase the data rate per user, whereby multiple frequency blocks (called component carriers) are assigned to the same user. The maximum possible data rate per user increases with the number of frequency blocks assigned to that user. The sum data rate of a cell is increased as well because of better resource utilization. In addition, load balancing is possible with carrier aggregation. Channel selection schemes for CA systems, which take into account the optimal values for the training length and power, the number of probed sub-channels and the feedback threshold so as to maximize the sum rate, are also important for achieving optimal capacity.
Types of carrier aggregation
Depending on the positions of the component carriers, three cases of carrier aggregation are distinguished:
The case where the component carriers are contiguous in the same frequency band is called intra-band contiguous carrier aggregation.
If the component carriers are in the same frequency band but are separated by a gap, the carrier aggregation is called intra-band non-contiguous.
The most complex case is when the component carriers lie in different frequency bands. This is called inter-band carrier aggregation and is applied to heterogeneous networks.
There is no difference between these three cases from a baseband perspective. However, the complexity from a radio frequency (RF) point of view is increased in the case of inter-band carrier aggregation.
Applications
UMTS/HSPA+
The channel bandwidth for UMTS/HSPA+ is about 3.8 MHz with a carrier spacing of 5 MHz. Carrier aggregation is also called Dual Cell in the context of UMTS/HSPA+.
Since Release 8, carrier aggregation (part of the UMTS extension HSPA+) allows two downlink carriers to be assigned to one user. Release 10 supports four-carrier aggregation, and eight-carrier aggregation has been supported since Release 11. For the uplink, 3GPP has standardized carrier aggregation for HSPA+ with up to two component carriers since Release 9.
LTE/LTE-Advanced
Since its first release, LTE supports channel bandwidths of 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz and 20 MHz. Since LTE-Advanced Rel. 10, any two channels (of possibly different bandwidths) may be aggregated and assigned to a single user. A difference between two aggregated 10 MHz component carriers and a single ordinary 20 MHz channel is that in the case of carrier aggregation the control information is transmitted on both component carriers.
LTE Advanced with carrier aggregation enables Gigabit LTE. This is made possible through higher-order modulation (256QAM), carrier aggregation and 4x4 MIMO. Since LTE Release 10, up to 5 component carriers may be aggregated, allowing for transmission bandwidths of up to 100 MHz. Using five aggregated component carriers, MIMO and 256QAM allows theoretical data rates of up to 2 gigabits per second. A management architecture that aggregates particular systems, networks, and terminals, taking into account their traffic requirements and technical capabilities in order to better manage the collection of available resources at a heterogeneous system level, has been considered for the LTE-A system with potential deployment in 5G networks.
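The roughly 2 gigabit per second figure can be reproduced with a back-of-the-envelope calculation. The sketch below assumes the usual LTE numerology (100 resource blocks per 20 MHz carrier, 12 subcarriers × 14 OFDM symbols per resource block per 1 ms subframe) and a flat 25% deduction for control and coding overhead; that overhead figure is an assumption chosen for illustration.

```python
def lte_peak_rate(carriers=5, bw_mhz=20, layers=4, bits_per_symbol=8,
                  overhead=0.25):
    """Rough peak downlink rate (bit/s) for LTE-A carrier aggregation."""
    resource_blocks = 5 * bw_mhz        # 100 resource blocks at 20 MHz
    re_per_rb_per_ms = 12 * 14          # subcarriers x OFDM symbols
    re_per_second = resource_blocks * re_per_rb_per_ms * 1000
    raw_rate = carriers * layers * bits_per_symbol * re_per_second
    return raw_rate * (1 - overhead)

print(lte_peak_rate() / 1e9)  # ~2.0 Gbit/s with 5 CCs, 4x4 MIMO and 256QAM
```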
References
Wireless networking | Carrier aggregation | Technology,Engineering | 656 |
1,831,507 | https://en.wikipedia.org/wiki/Ethylmercury | Ethylmercury (sometimes ethyl mercury) is a cation composed of an organic CH3CH2— species (an ethyl group) bound to a mercury(II) centre, making it a type of organometallic cation, and giving it a chemical formula C2H5Hg+. The main source of ethylmercury is thimerosal.
Synthesis and structure
Ethylmercury (C2H5Hg+) is a substituent of compounds: it occurs as a component of compounds of the formula C2H5HgX where X = chloride, thiolate, or another organic group. Most famously X = the mercaptide group of thiosalicylic acid as in thiomersal. In the body, ethylmercury is most commonly encountered as derivatives with a thiolate attached to the mercury. In these compounds, Hg(II) has a linear or sometimes trigonal coordination geometry. Given the comparable electronegativities of mercury and carbon, the mercury-carbon bond is described as covalent.
Toxicity
The toxicity of ethylmercury is well studied. Like methylmercury, ethylmercury distributes to all body tissues, crossing the blood–brain barrier and the placental barrier, and moves freely throughout the body. Risk assessments for effects on the human nervous system have been made by extrapolating from dose-response relationships for methylmercury. Estimates have suggested that ethylmercury clears from blood with a half-life of 3–7 days in adult humans. In monkeys, it clears from brain tissue with a half-life of 24 days and from blood in 7 days.
It is a fungicide but has been banned from use in the U.S. on food grain and even on seeds only used to grow crops.
Public health concerns
Concerns based on extrapolations of the effect of methylmercury caused thimerosal to be removed from U.S. childhood vaccines in 1999, but it remains in use in some multi-dose vaccines and flu shots (though many single-use vaccines without thimerosal are available). Researchers have argued that risk assessments based on methylmercury were overly conservative in light of observations that ethylmercury is eliminated from the body and the brain significantly faster than methylmercury. Moreover, the same researchers have argued that inorganic mercury metabolized from ethylmercury, despite its much longer half-life in the brain, is much less toxic than the inorganic mercury produced from mercury vapor, for reasons not yet understood.
See also
Diethylmercury
Mercury poisoning
References and notes
Further reading
External links
EPA Organic Mercury TEACH Chemical Summary, 2007.
EPA Chemistry Dashboard, Ethyl Mercury Ion, 2017.
ATSDR Toxicological Profile for Mercury, search "Organic Mercury".
Organomercury compounds
Cations
Mercury(II) compounds | Ethylmercury | Physics,Chemistry | 604 |
61,566,443 | https://en.wikipedia.org/wiki/Monroe%20Avenue%20Water%20Filtration%20Plant | The Monroe Avenue Water Filtration Plant is a municipal water treatment plant located at 430 Monroe Avenue NW in Grand Rapids, Michigan. Built in 1910, it was likely the first water filtration plant in Michigan. In 1945, the plant was the site of the first public introduction of water fluoridation in the United States. It was listed on the National Register of Historic Places in 2002. The building now serves as an event center, known as Clearwater Place.
History
By the 1870s, the city of Grand Rapids realized the need for a city-wide water system. Bonds were issued in 1874, and a reservoir constructed. However, by the 1900s, there was increasing pressure to find a new source of clean water for the city. In 1910 bonds were issued to construct a filtration plant to clean water from the Grand River. The city hired nationally known New York City engineers Rudolph Hering and George Warren Fuller of Hering and Fuller Engineers to design the new plant, and construction began in 1910. Gentz Brothers of Grand Rapids was the general contractor. The plant was first put on line in 1912, and was an immediate success, substantially reducing water-borne diseases in the city. By the 1920s, however, the plant already needed to be expanded. A large addition, designed by R. E. Harrison, was constructed in 1922–24. Additional expansion was done in 1935.
In 1944, the Grand Rapids City Commission authorized fluoridation of the city's water supply, making Grand Rapids the first city in the United States to do so. Actual application to the water began in early 1945. In 1961, Grand Rapids constructed a large regional filtration plant using water from Lake Michigan, relegating the Monroe Avenue plant to use as a backup facility. In 1988, the plant was designated as a Michigan Historic Civil Engineering Landmark by the American Society of Civil Engineers. The plant was closed in 1992. In 2005, DeVries Development began renovating the building into a mixed-use space, including offices and apartments, named "Clearwater Place." Renovation was completed in 2008. In 2017, the building was renovated into an events center.
Description
The Monroe Avenue Water Filtration Plant consists of two buildings (only one of which, the main building, is historically significant) and two wash tanks. The main building is a simple two-story, red brick Romanesque Revival structure sitting on a concrete base. It has a hipped roof covered in green tile, some of which has been replaced with asphalt shingles. Square towers are sited on the corners of the front facade. These towers contain side entrances under triple sets of arches. Prominently visible is a large, hip-roofed central tower, located at the rear, known as the "head house." The two wash tanks are large brick structures located to either side of the main building. They have conical, low-pitched roofs clad in green tile, and a single row of small rectangular windows.
See also
Glendive City Water Filtration Plant, NRHP-listed in Glendive, Montana
References
External links
Clearwater Place
Monroe Avenue Water Filtration Plant at the Historical Marker Database
Further reading
"Monroe Water Filtration Plant Turns 100 (November/December 2024). Michigan History p. 56. Lansing, Michigan: Historical Society of Michigan. ISSN 0026-2196. Retrieved via Gale OneFile
National Register of Historic Places in Grand Rapids, Michigan
Romanesque Revival architecture in Michigan
Industrial buildings completed in 1912
Water supply infrastructure on the National Register of Historic Places
Water treatment facilities
Water in Michigan
1912 establishments in Michigan | Monroe Avenue Water Filtration Plant | Chemistry | 718 |
36,183,207 | https://en.wikipedia.org/wiki/Digermane | Digermane is an inorganic compound with the chemical formula Ge2H6. One of the few hydrides of germanium, it is a colourless liquid. Its molecular geometry is similar to that of ethane.
Synthesis
Digermane was first synthesized and examined in 1924 by Dennis, Corey, and Moore. Their method involves the hydrolysis of magnesium germanide using hydrochloric acid. Many of the properties of digermane and trigermane were determined in the following decade using electron diffraction studies. Further considerations of the compound involved examinations of various reactions such as pyrolysis and oxidation.
Digermane is produced together with germane by the reduction of germanium dioxide with sodium borohydride. Although the major product is germane, a quantifiable amount of digermane is produced in addition to traces of trigermane. It also arises by the hydrolysis of magnesium-germanium alloys.
Reactions
The reactions of digermane exhibit some differences between analogous compounds of the Group 14 elements carbon and silicon. However, there are still some similarities seen, especially in regard to pyrolysis reactions.
The oxidation of digermane takes place at lower temperatures than monogermane. The product of the reaction, germanium oxide, has been shown to act in turn as a catalyst of the reaction. This exemplifies a fundamental difference between germanium and the other Group 14 elements carbon and silicon (carbon dioxide and silicon dioxide do not exhibit the same catalytic properties).
In liquid ammonia, digermane undergoes disproportionation. Ammonia acts as a weakly basic catalyst. Products of the reaction are hydrogen, germane, and a solid polymeric germanium hydride.
Pyrolysis of digermane is proposed to follow multiple steps:
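A sketch of such a scheme is given below. The initial dissociation into germane and the germylene GeH2 is the commonly cited first step; the polymerization and hydrogen-release steps are written schematically as an assumption consistent with the description that follows, not as the exact published elementary reactions.

Ge2H6 → GeH4 + GeH2
n GeH2 → (GeH2)n
(GeH2)n → n Ge + n H2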
This pyrolysis has been found to be more endothermic than the pyrolysis of disilane. This difference is attributed to the greater strength of the Ge-H bond versus the Si-H bond. As seen in the final steps of the mechanism above, pyrolysis of digermane may induce polymerization of the GeH2 group, where GeH2 acts as a chain propagator and molecular hydrogen gas is released. The dehydrogenation of digermane on gold leads to the formation of germanium nanowires.
Digermane is a precursor to , where E is either sulfur or selenium. These trifluoromethylthio (CF3S) and trifluoromethylseleno (CF3Se) derivatives possess a markedly higher thermal stability than digermane itself.
Applications
Digermane has a limited number of applications; germane itself is the preferred volatile germanium hydride. Generally, digermane is primarily used a precursor to germanium for use in various applications. Digermane can be used to deposit Ge-containing semiconductors via chemical vapor deposition.
References
Germanium compounds
Metal hydrides
Substances discovered in the 1920s
Chemical compounds containing metal–metal bonds | Digermane | Chemistry | 614 |
6,058,615 | https://en.wikipedia.org/wiki/Calcein | Calcein, also known as fluorexon or fluorescein complex, is a fluorescent dye with excitation and emission wavelengths of 495 and 515 nm, respectively, and has the appearance of orange crystals. Calcein self-quenches at concentrations above 70 mM and is commonly used as an indicator of lipid vesicle leakage. It has also traditionally been used as a complexometric indicator for titration of calcium ions with EDTA, and for fluorometric determination of calcium.
Applications
The non-fluorescent acetomethoxy derivative of calcein (calcein AM, AM = acetoxymethyl) is used in biology as it can be transported through the cellular membrane into live cells, which makes it useful for testing cell viability and for short-term labeling of cells. Alternatively, Fura-2, Furaptra, Indo-1 and aequorin may be used. An acetomethoxy group obscures the part of the molecule that chelates Ca2+, Mg2+, Zn2+ and other ions. After transport into the cells, intracellular esterases remove the acetomethoxy group; the molecule is trapped inside and gives out strong green fluorescence. As dead cells lack active esterases, only live cells are labeled and counted by flow cytometry.
Calcein is now rarely used as a Ca2+ or Mg2+ indicator because its fluorescence is directly sensitive to these ions only at strongly alkaline pH, and thus it is not particularly useful for measuring Ca2+ or Mg2+ in cells. Fluorescence of calcein is quenched strongly by Co2+, Ni2+ and Cu2+ and appreciably by Fe3+ and Mn2+ at physiological pH. This fluorescence quenching response can be exploited for detecting the opening of the mitochondrial permeability transition pore (mPTP) and for measuring cell volume changes. Calcein is commonly used for cell tracing and in studies of endocytosis, cell migration, and gap junctions.
The acetoxymethyl ester of calcein is also used to detect drug interactions with multidrug resistance proteins (ABC transporters ATP-binding cassette transporter genes) in intact cells as it is an excellent substrate of the multidrug resistance transporter 1 (MDR1) P-glycoprotein and the Multidrug Resistance-Associated Protein (MRP1). The calcein AM assay can be used as a model for drug-drug interactions, for screening transporter substrates and/or inhibitors; and also to determine in vitro drug resistance of cells, including samples from patients.
Calcein is also used for marking freshly hatched fish and for labeling of bones in live animals.
References
Cell culture reagents
Lactones
Amines
Fluorone dyes
Complexometric indicators
Acetic acids
Spiro compounds | Calcein | Chemistry,Materials_science,Biology | 608 |
22,878,918 | https://en.wikipedia.org/wiki/Ursa%20Major%20%28excavator%29 | The Ursa Major (lit. Great Bear) at Black Thunder Coal Mine, Wyoming, is the largest dragline excavator currently in use in North America and the third largest ever built. It is a Bucyrus-Erie 2570WS model and cost US$50 million. The Ursa Major was one of five large walking draglines operated at Black Thunder, with the next two largest in the dragline fleet being Thor, a B-E 1570W - which has a boom and a bucket - and Walking Stick, a B-E 1300W with a boom and a bucket.
Specifications
Its bucket is , and it has a boom. It weighs .
History
Shortly before the scrapping of Big Muskie, the largest dragline ever built, in 1999, construction of another ultra-heavy dragline excavator commenced. Although not as large as Big Muskie, the Ursa Major was still a large and substantial excavator.
It first began operation around early 2001, when its newly cast bucket was delivered to Black Thunder Mine. To deliver the 165,000-pound (82.5-ton) bucket, Bucyrus had to obtain special permits for an overweight and oversized load. The company also had to check with the power company to make sure the load would not hit any power lines on the way to the mine.
See also
Dragline excavator
References
Engineering vehicles
Excavators
Industrial equipment
Draglines
Bucyrus-Erie | Ursa Major (excavator) | Engineering | 313 |
37,690,619 | https://en.wikipedia.org/wiki/Phi2%20Lupi | {{DISPLAYTITLE:Phi2 Lupi}}
Phi2 Lupi, Latinized from φ2 Lupi, is a solitary star in the southern constellation of Lupus. With an apparent magnitude of 4.535, it is bright enough to be seen with the naked eye. Based upon an annual parallax shift of 6.28 mas as seen from Earth, it is located around 520 light years from the Sun. At that distance, the visual magnitude of the star is diminished by an extinction factor of due to interstellar dust. It is a member of the Upper Centaurus–Lupus subgroup of the Scorpius–Centaurus association.
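The quoted distance follows directly from the parallax via the standard relation d [parsec] = 1 / parallax [arcsec]; a quick check (the parsec-to-light-year factor is rounded):

```python
parallax_mas = 6.28                    # annual parallax shift quoted above
distance_pc = 1000.0 / parallax_mas    # d [pc] = 1 / parallax [arcsec]
distance_ly = distance_pc * 3.26156    # 1 parsec is about 3.26156 light years
print(f"{distance_pc:.1f} pc ~ {distance_ly:.0f} ly")  # 159.2 pc ~ 519 ly
```

This agrees with the figure of around 520 light years given above.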
This is an ordinary B-type main sequence star with a stellar classification of B4 V. It has an estimated 6.1 times the mass of the Sun and about 3.4 times the Sun's radius. The star is roughly 40 million years old and is spinning with a projected rotational velocity of 141 km/s. It is radiating about 800 times the solar luminosity from its photosphere at an effective temperature of 16,780 K.
References
B-type main-sequence stars
Lupus (constellation)
Lupi, Phi2
136664
075304
5712
Durchmusterung objects
Upper Centaurus Lupus | Phi2 Lupi | Astronomy | 266 |
22,223,374 | https://en.wikipedia.org/wiki/Multiscale%20geometric%20analysis | Multiscale geometric analysis or geometric multiscale analysis is an emerging area of high-dimensional signal processing and data analysis.
See also
Wavelet
Scale space
Multi-scale approaches
Multiresolution analysis
Singular value decomposition
Compressed sensing
Further reading
Signal processing
Spatial analysis | Multiscale geometric analysis | Physics,Technology,Engineering | 53 |
28,769,399 | https://en.wikipedia.org/wiki/Identity%20line | In a 2-dimensional Cartesian coordinate system, with x representing the abscissa and y the ordinate, the identity line or line of equality is the y = x line. The line, sometimes called the 1:1 line, has a slope of 1. When the abscissa and ordinate are on the same scale, the identity line forms a 45° angle with the abscissa, and is thus also, informally, called the 45° line. The line is often used as a reference in a 2-dimensional scatter plot comparing two sets of data expected to be identical under ideal conditions. When corresponding data points from the two data sets are equal to each other, they fall exactly on the identity line.
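As an illustration of the scatter-plot usage just described, the following minimal matplotlib sketch draws the reference line across the data range (the data arrays are placeholders):

```python
import numpy as np
import matplotlib.pyplot as plt

observed = np.array([1.0, 2.1, 2.9, 4.2])    # placeholder data set 1
predicted = np.array([1.1, 1.9, 3.2, 4.0])   # placeholder data set 2

fig, ax = plt.subplots()
ax.scatter(observed, predicted)
lo = min(observed.min(), predicted.min())
hi = max(observed.max(), predicted.max())
ax.plot([lo, hi], [lo, hi], "k--", label="identity line (y = x)")
ax.set_aspect("equal")   # equal scales put the line at 45 degrees
ax.legend()
plt.show()
```

Setting an equal aspect ratio matters here: the line only appears as the informal 45° line when the abscissa and ordinate are on the same scale, as noted above.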
In economics, an identity line is used in the Keynesian cross diagram to identify equilibrium, as only on the identity line does aggregate demand equal aggregate supply.
References
Coordinate systems
Statistical charts and diagrams
Economics curves | Identity line | Mathematics | 196 |
14,881,761 | https://en.wikipedia.org/wiki/ZNF217 | Zinc finger protein 217, also known as ZNF217, is a protein which in humans is encoded by the ZNF217 gene.
Function
ZNF217 can attenuate apoptotic signals resulting from telomere dysfunction and may promote neoplastic transformation and later stages of malignancy. ZNF217 has been shown to be a prognostic biomarker and therapeutic target during breast cancer progression.
See also
Zinc finger
References
Further reading
External links
Transcription factors | ZNF217 | Chemistry,Biology | 104 |
2,331,351 | https://en.wikipedia.org/wiki/Air%20gap%20%28plumbing%29 | An air gap, as related to the plumbing trade, is the unobstructed vertical space between the water outlet and the flood level of a fixture. Air gaps of appropriate design are legally required by water health and safety regulations in many countries. An air gap is the simplest form of a backflow prevention device.
Function
A simple example is the vertical space between a wall-mounted faucet and the sink rim (this space is the air gap). Water can easily fall from the faucet into the sink, but there is no way that water can be drawn up from the sink into the faucet. This arrangement prevents any contaminants in the sink from entering into the potable water system by siphonage; this is the simplest form of backflow prevention.
A common use of the term "air gap" in domestic plumbing refers to a specialized fixture that provides backflow prevention for an installed dishwasher. This "air gap" is visible above the sink as a small cylindrical fixture mounted near the faucet. In the base cabinet under the sink, the drain hose from the dishwasher feeds the "top" of the air gap, and the "bottom" of the air gap is plumbed into the sink drain below the basket, or into a garbage disposal unit. When installed and maintained properly, the air gap works as described above, and prevents drain water from the sink from backing up into the dishwasher, possibly contaminating dishes.
To further illustrate the air gap, consider what could happen if the air gap were eliminated by attaching a hose to the faucet and lowering the hose into a sink full of contaminated water. Under the right conditions (if the water supply loses pressure and the sink is higher than the point at which the water supply enters the house, for instance), the dirty water in the sink could be siphoned backwards into the water pipes through the hose and faucet. The dirty water could then be dispersed throughout the drinking water system.
Standards and codes
All plumbing codes require backflow prevention in several ways. The fixtures must be manufactured and installed to meet these codes. Plumbers must not build cross-connections during their daily work practices, and plumbing inspectors look for improper designs or connections of piping and plumbing fixtures. A common misconception is that a "high loop" (routing a continuous drain line above a sink's flood level, for instance) will provide the same function as an air gap; this is not true, because the continuous connection in such a case will still allow backflow through siphoning.
According to the International Residential Code 2003, the length of an air gap must be at least two times the effective inner diameter of the pipe (2×D) to be sufficient.
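As a worked example of the 2×D rule, the sketch below encodes only the rule quoted above; actual plumbing codes may also impose absolute minimum gaps and other conditions that are not modelled here.

```python
def min_air_gap(effective_inner_diameter):
    """Minimum air gap under the 2xD rule: twice the effective
    inner diameter of the pipe, in the same units as the input."""
    return 2 * effective_inner_diameter

# A supply opening with a 3/4 in effective inner diameter
# needs at least 1.5 in of vertical air gap:
print(min_air_gap(0.75))  # 1.5
```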
A standard widely used in the United States is:
A112.1.2 Air Gaps in Plumbing Systems (For Plumbing Fixtures and Water-Connected Receptors)
In the United Kingdom, legislation is by statutory instrument and varies by country, but includes the Water Supply (Water Quality) Regulations 2016 and the Water Supply (Water Quality) Regulations (Wales). The categorization of air gaps is standardized by European standards, which cover the basic designs and dimensions appropriate to different uses.
EN 13076 -- Devices to prevent pollution by backflow of potable water - Unrestricted air gap - Family A - Type A
EN 13077 -- Devices to prevent pollution by backflow of potable water - Air gap with non-circular overflow (unrestricted) - Family A - Type B
EN 13078 -- Devices to prevent pollution by backflow of potable water - Air gap with submerged feed incorporating air inlet plus overflow - Family A, type C
and others for each family and type of air gap
See also
Hydrostatic loop
Pressure vacuum breaker
Double check valve
Chemigation valve
Reduced pressure zone device
Atmospheric vacuum breaker
Upstream contamination
References
Plumbing
Backflow | Air gap (plumbing) | Engineering | 797 |
48,697,908 | https://en.wikipedia.org/wiki/GS1%20EDI | GS1 EDI is a set of global electronic messaging standards for business documents used in Electronic Data Interchange (EDI). The standards are developed and maintained by GS1. GS1 EDI is part of the overall GS1 system, fully integrated with other GS1 standards, increasing the speed and accuracy of the supply chain.
Examples of GS1 EDI standards include messages such as: Order, Despatch Advice (Shipping Notice), Invoice, Transport Instruction, etc.
The development and maintenance of all GS1 standards is based on a rigorous process called the Global Standard Management Process (GSMP). GS1 develops its global supply chain standards in partnership with the industries using them. Any organization can submit a request to modify the standard. Maintenance releases of GS1 EDI standards are typically published every two years, while code lists can be updated up to 4 times a year.
Standards
GS1 developed the following sets of complementary EDI standards:
GS1 EANCOM - a subset of UN/EDIFACT, which comprises a set of internationally agreed UN standards, directories and guidelines for EDI. EANCOM is fully compliant with UN/EDIFACT.
GS1 XML - a GS1 set of electronic messages developed using XML, a language designed for information exchange over internet. GS1 XML is based on UN/CEFACT Core Component Technical Specification (CCTS) and UN/CEFACT Modeling Methodology (UMM).
GS1 UN/XML - GS1 has also developed its own profiles of four UN/CEFACT XML standards (Cross Industry Order, Order Response, Invoice and Despatch Advice), which are fully compliant with UN/XML.
These groups of standards are being implemented in parallel by various users; GS1 supports and maintains all of them.
GS1 EDI standards are designed to work together with other GS1 standards for the identification and labeling of goods, locations, parties and packages. This means that information and product flows can be combined to provide businesses with tools enabling traceability, visibility and safety.
In EDI, it is essential to unambiguously identify products, services and parties involved in the transaction. In GS1 EDI standard messages, each product, party and location is identified by a unique GS1 identification key, e.g.:
products by Global Trade Item Number (GTIN)
parties, such as buyer, seller, and any third parties involved in the transaction as well as locations by Global Location Number (GLN)
logistic units by Serial Shipping Container Code (SSCC)
other GS1 ID keys, used e.g. for shipment and consignment identification
Using the GS1 ID Keys enables master data alignment between trading partners before any trading transaction takes place. This ensures data quality, eliminates errors and removes the need to send redundant information in electronic messages (such as product specifications, party addresses, etc.).
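As an illustration of how such keys are built and carried in a message, the sketch below computes the standard GS1 mod-10 check digit (shared by GTIN, GLN and SSCC) and places the resulting keys in a minimal order-line structure. The dictionary layout and the example numbers are invented for the illustration; they are not the actual GS1 EANCOM or GS1 XML message formats.

```python
def gs1_check_digit(body: str) -> int:
    """GS1 mod-10 check digit: weights 3 and 1 alternate,
    starting with 3 on the rightmost digit of the body."""
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10

gtin_body = "0001234512345"              # first 13 digits of a GTIN-14
gtin = gtin_body + str(gs1_check_digit(gtin_body))

order_line = {                           # invented example structure
    "buyer_gln": "5412345000013",        # example GLN with a valid check digit
    "item_gtin": gtin,
    "ordered_quantity": 48,
}
print(order_line)
```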
Collaboration with other global standard organizations and industry associations
GS1 EDI standards are developed based on other global standards, such as:
ISO – e.g. code lists re-use
UN/CEFACT – global methodologies applied, EDIFACT is a base for GS1 EANCOM standard
W3C – XML syntax
User companies are involved in the development of GS1 standards, either directly or via industry associations, such as The Consumer Goods Forum.
Implementation of GS1 EDI standards
GS1 EDI standards are globally used by companies and organizations from different sectors and applied in various processes like Retail Up- and Downstream, Transport and Warehouse Management, Healthcare, Defense, Finance, Packaging (collaborative artwork development), Cash Handling, public administration and much more.
See also
List of GS1 member organizations
Global Trade Item Number
Global Location Number
Global Data Synchronization Network (GDSN)
Serial Shipping Container Code
References
External links
GS1 EDI at GS1 website
“Crossfire Cloud, EDI” Crossfire
“EDI Document Standards” EDI Basics
“A Survey and Analysis of Electronic Business Document Standards” Middle East Technical University (METU), Turkey
GS1 standards
Technical specifications
Electronic data interchange
Markup languages
Supply chain management
Technical communication | GS1 EDI | Technology | 844 |
31,803,163 | https://en.wikipedia.org/wiki/BF-graph | In graph theory, a BF-graph is a type of directed hypergraph where each hyperedge is directed either to one particular vertex or away from one particular vertex.
In a directed hypergraph, each hyperedge may be directed away from some of its vertices (its tails) and towards some others of its vertices (its heads).
A hyperedge that is directed to a single head vertex, and away from all its other vertices, is called a B-arc. Symmetrically, a hyperedge that is directed away from a single tail vertex, and towards all its other vertices, is called an F-arc.
A hypergraph with only B-arcs is a B-graph, and a hypergraph with only F-arcs is an F-graph.
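A minimal sketch of these definitions as a data structure (the class and function names are chosen for the example, not drawn from the literature):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hyperarc:
    tails: frozenset   # vertices the hyperedge is directed away from
    heads: frozenset   # vertices the hyperedge is directed towards

def is_b_arc(a: Hyperarc) -> bool:
    return len(a.heads) == 1      # directed to a single head vertex

def is_f_arc(a: Hyperarc) -> bool:
    return len(a.tails) == 1      # directed away from a single tail vertex

def is_bf_graph(arcs) -> bool:
    """A directed hypergraph is a BF-graph when every hyperarc
    is either a B-arc or an F-arc."""
    return all(is_b_arc(a) or is_f_arc(a) for a in arcs)

# One B-arc ({u, v} -> {w}) and one F-arc ({u} -> {v, w}):
arcs = [Hyperarc(frozenset("uv"), frozenset("w")),
        Hyperarc(frozenset("u"), frozenset("vw"))]
print(is_bf_graph(arcs))  # True
```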
References
Hypergraphs | BF-graph | Mathematics | 157 |
22,803,616 | https://en.wikipedia.org/wiki/Lilleby%20smelteverk | Lilleby smelteverk was a smelting plant located in Lilleby, Trondheim, Sør-Trøndelag county, Norway, next to City Lade. It is well known for having produced the world's cleanest ferrosilicon (an alloy of iron and silicon) for NASA. It was shut down on December 20, 2002, and production moved to Mo i Rana.
The building has since been demolished.
Early years (1927–1949)
Professor Harald Christian Pedersen founded the A/S Ila and Lilleby smelteverk melting facilities in the 1920s. He worked with a chemical process which was later called the Pedersen-2 process. It consists of smelting iron ore, which yields ferrosilicon as a by-product.
Occupation years (1940–1945)
Lilleby closed on the same day Norway was attacked by Nazi Germany, but it did not remain closed for long. The Norwegian aluminium industry was of great strategic importance for the German government, which requested that Lilleby resume operation right away. Birger Solberg was placed in charge, because Professor Pedersen had left with his family for Sweden. Solberg was dismissed the day that Pedersen returned, a dismissal undoubtedly related to his views on the occupation, which differed from Pedersen's: Pedersen was a supporter of Nazi Germany and wanted to collaborate with the occupation.
During the war, the plant was geared mostly towards aluminium, which was more important for the German war effort; however, many employees sabotaged the work in order to keep productivity low.
Post-World War II
After the war, Birger Solberg resumed control, but the economics and equipment of the facility had become unfavorable. Feeling empathy for the former workers, he devised a new business plan based on collecting German plane wrecks and other debris in central Norway and re-melting them.
The facility was closed on December 20, 2002 and production moved to Mo i Rana.
References
Metallurgical facilities
Buildings and structures in Trondheim | Lilleby smelteverk | Chemistry,Materials_science | 407 |
1,435,893 | https://en.wikipedia.org/wiki/Little%20green%20men | Little green men is the stereotypical portrayal of extraterrestrials as little humanoid creatures with green skin and sometimes with antennae on their heads. The term is also sometimes used to describe gremlins, mythical creatures said to cause problems in airplanes and mechanical devices.
Although there have been references to small, green-colored men or children going back much further, the term "little green men" came into popular usage in reference to aliens during the reports of flying saucers in the 1950s. In one classic case, the Kelly-Hopkinsville sighting in 1955, two rural Kentucky men described a supposed encounter with metallic-silver, somewhat humanoid-looking aliens no more than in height. Employing journalistic licence and deviating from the witnesses' accounts, The Evansville Courier used the term "little green men" in writing up the story. Other media then followed suit.
History of the term
Usage of the term clearly predates the 1955 incident; for example, in England reference to little green men or children dates back to the 12th century green children of Woolpit, although exactly when the term was first applied to extraterrestrial aliens has been difficult to pin down. In his historical satire A History of New York (1809), American author Washington Irving described Lunatics (or men from the Moon) as "pea green", in contrast to the "white" inhabitants of Earth. Science fiction scholar Adam Roberts writes that these may be the first green aliens in literature.
Folklore researcher Chris Aubeck has used electronic searches of old newspapers and found a number of instances dating from around the turn of the 20th century referring to green aliens. Aubeck found one story from 1899 in the Atlanta Constitution, about a little green-skinned alien, in a tale called Green Boy From Hurrah, "Hurrah" being another planet, perhaps Mars. Edgar Rice Burroughs referred to the "green men of Mars" and "green Martian women" in his first science fiction novel A Princess of Mars (1912), although at tall, they were hardly "little". However, the first use of the specific phrase "little green man" in reference to extraterrestrials that Aubeck found dates to 1908 in the Daily Kennebec Journal (Augusta, Maine), in this case the aliens again being Martians.
In 1910 (or 1915), a "little green man" was allegedly captured from his crashed spaceship in Apulia, in south-east Italy.
Green aliens soon came to commonly portray extraterrestrials and adorned the covers of many of the 1920s to 1950s science fiction pulp magazines with such things as pictures of Buck Rogers and Flash Gordon battling green alien monsters. The first documented print example specifically linking "little green men" to extraterrestrial spaceships is in a newspaper column satirizing the public panic following Orson Welles' famous "War of the Worlds" Halloween broadcast of October 31, 1938. The column by reporter Bill Barnard in the Corpus Christi Times the next day begins, "Thirteen little green men from Mercury stepped out of their space ship at Cliff Maus Field [local airport] late yesterday afternoon for a good-will visit to Corpus Christi" and ends with: "Then the 13 little green men got in their space ship and flew away." The familiarity with which the term was used suggests that this probably was not the first instance where it was applied to extraterrestrials in spaceships.
In 1946, Harold M. Sherman published a pulp science fiction book entitled The Green Man: A Visitor From Space. The cover illustration was of a normal-looking and proportioned human being, albeit with a green skin.
Nationally syndicated columns by humorist Hal Boyle spoke of a green man from Mars in his flying saucer in early July 1947 during the height of the brand new flying saucer phenomenon in the U.S. that started June 24 after Kenneth Arnold's famous sighting and the Roswell UFO incident. However, Boyle did not describe his green Martian as "small".
The 1951 science fiction book The Case of the Little Green Men, by Mack Reynolds, tells of a private detective hired to investigate disguised aliens living among the human population. As he was being hired, the detective referred derisively and familiarly to the aliens in the flying saucers being "little green men". The cover illustration is notable for depicting the LGM with the classic antennae sticking out of the head. Mack Reynolds would go on to write the first Star Trek novel in 1968 (Mission to Horatius).
By early 1950, stories began circulating in newspapers about little beings being recovered from flying saucer crashes. Though largely considered to be hoaxes, some of the stories from the sources about little aliens eventually made it into the popular 1950 book Behind the Flying Saucers by Variety magazine columnist Frank Scully.
A witness reporting a flying saucer sighting to a Wichita, Kansas newspaper in June 1950 stated that he saw "absolutely no little green men with egg on their whiskers".
The term "little green men" was specifically used in reference to science fiction and flying saucers by at least 1951 in The New York Times and The Washington Post (in the Post, a book review of a mystery/science fiction novel called The Little Green Man), and 1952 in the Los Angeles Times and the Chicago Tribune (the Tribune mocking flying saucer reports using a "little green man with pink polka dots"). The New York Times used the term in 1955 in a book review of the sci-fi satire Martians, Go Home, saying the Martians were obnoxious "little green men" whose appearance was "true to prophecy".
Following a nationally publicized flurry of UFO sightings in November 1957, syndicated Washington columnist Frederick Othman wrote: "New Flying Saucer Epidemic On. All over this land again are flying saucers ... No little green men have climbed out of these celestial vehicles so far, but in another couple of days I wouldn't be surprised ..."
Origins and other uses
The term also shows up much earlier in other contexts. Film gossip columnist Hedda Hopper used it in 1939 referring to small cast members of The Wizard of Oz (1939), and admonished against drinking on the set. In 1942, The Los Angeles Times used the term in a pictorial on Marines training for jungle combat. In this case, "little green men" referred to camouflaged Japanese soldiers. The Washington Post in 1942 likewise used the term "little green man" in reference to a camouflaged Japanese sniper who nearly killed one of their war correspondents.
Before its more modern application to aliens, little green men was commonly used to describe various supernatural beings in old legends and folklore and in later fairy tales and children's books such as goblins. Aubeck noted several examples of the latter in 19th and early 20th century literature. As an example, Rudyard Kipling had a "little green man" in Puck of Pook's Hill from 1906.
Another example, and the earliest use of little green man in The New York Times and the Chicago Tribune, dates from 1902, in a review of a children's book called The Gift of the Magic Staff, where a supernatural "Little Green Man" is a boy's friend and helps him visit the cloudland fairies. The next use in The New York Times was in 1950, and references a planned film by Walt Disney Company of a 1927 novel by poet/novelist Robert Nathan called The Woodcutter's House. The only animated character in the picture was to be Nathan's "Little Green Man", a confidant of the woodland animals. (The film was never made.)
In 1923, a serialized romance, When Hearts Command by Elizabeth York Miller, which appeared in newspapers such as the Chicago Tribune and The Washington Post, has a former mental patient who still sees "little green men" and who simultaneously comments that a fellow patient "conversed with the inhabitants of Mars".
Other instances of imaginary small green beings have been found in a newspaper column from 1936 sarcastically discussing doctors and their medical advice, saying these are the same people who have breakdowns in middle age and start hallucinating "a little green man with big ears". Syndicated columnist Sydney J. Harris used "little green man" in 1948 as a child's imaginary friend while condemning the age-old tradition of frightening children with stories of "boogeymen".
These examples illustrate that use of little green men was already deeply ingrained in English vernacular long before the flying saucer era, used for a variety of supernatural, imaginary, or mythical beings. It also seems to have easily extended beyond the imaginary to real people, such as the reference to small actors in the Wizard of Oz or camouflaged Japanese soldiers. Similarly, Aubeck and others suspect that when flying saucers came along in 1947, with subsequent speculation about alien origins, the term naturally and quickly attached itself to the modern-age equivalent. The Mekon, the green-skinned adversary in Dan Dare, Pilot of the Future, from Eagle comic's long-running series, first appeared in 1950. It is also clear that by the early 1950s, the term was already commonly used as a sarcastic reference to the occupants of flying saucers. By 1954, the image of little green men had become inscribed in the public's collective consciousness.
Further electronic searches suggest that the term became increasingly more common in the 1960s and always used in a derisive or humorous way. The Chicago Tribune in 1960 carried a front-page story on the speculations of a Harvard anthropologist about how aliens might look and alien sex. The article opens with the comment, "If there really are 'little green men' out there in space, there are probably also little green women–and sex." A cartoon was attached showing two amorous centaur-like male and female aliens with antennae sticking out of their heads. The article also enigmatically states, "The 'little green men' designation came from Dr. Otto Struve, director of the national radio astronomy observatory, Green Bank, W. Va. He said that's what the possible outerspacers are called 'among themselves'."
The term even penetrated into the commentary of The Wall Street Journal. First use in the Journal was 1960 in an article on the Brookings Report commissioned by NASA, studying the possible social effects of the discovery of extraterrestrial life. The Journal commented that they thought the report overly pessimistic, assuming that "the little green men with the wiggly antennae" would be hostile. Another Journal use of the term occurred in 1968 in an editorial on a planned Congressional investigation of UFOs. The writer sarcastically asked how they planned to subpoena "a little green man". In 1969, they commented that the Condon Committee UFO study commissioned by the Air Force was a waste of money. The editorial stated that even if they did prove that "UFOs were people with little green men", what were we supposed to do about it?
A green-skinned little green man had even appeared in The Flintstones as a recurring character. The Great Gazoo (introduced in Episode 145) typified the representation of a little green man with his short, green stature and helmet with antennae. However, the 1960s also marked a transition in the way people imagined a stereotypical alien. In alien abduction stories they are often small but grey beings and in Arthur C. Clarke's 2001: A Space Odyssey (1968) they are unseen.
Current usage
Aliens
Little green aliens and the term "little green men" have fallen out of general use in serious science fiction circles and are most commonly used to ridicule the notion that aliens may exist, with a few exceptions, such as Yoda in the Star Wars movie saga. A derisive usage can be seen in the original Star Trek episode "Tomorrow Is Yesterday", set in 1969, as Captain Kirk, captured by the US Air Force while attempting to steal film showing the Enterprise in Earth's atmosphere, calls himself a "little green man from Alpha Centauri" when interrogated by the base security officer. Earlier in the same episode, a rescued Air Force captain brought aboard the Enterprise tells Kirk he's never believed in little green men, immediately before meeting the obviously alien Mr. Spock (who replies, "Neither have I"). In the 1988 Doctor Who serial Remembrance of the Daleks, the line is parodied when the Doctor states that the Daleks are aliens. Group Captain Gilmore asks if he's fighting little green men, to which the Doctor says "no, little green blobs in bonded polycarbide armour".
Instead, the little green alien image seems to have migrated mainly to the world of children's media where it can still be found in abundance. Examples include
The small, green squeeze toy aliens from Pizza Planet in the 1995 film Toy Story and its sequels). In some pieces of Toy Story media, most prominently the cartoon Buzz Lightyear of Star Command, they are even referred to as the "LGMs".
The Pokémon species "Elgyem" is based on little green men ("LGM") in its design, characteristics, and name.
The Irkens from Invader Zim bear a similarity to green little men.
In the space-simulation game Kerbal Space Program, Kerbals are the only species in the game and are portrayed as little green men with a large head compared to their bodies.
The Saibamen in the anime Dragon Ball Z are depicted as little green men.
In Destroy All Humans!, many of the human characters refer to the main character Crypto as a little green man, much to his annoyance, where Crypto himself resembles a stereotypical grey alien.
"Unidentified defending objects"
The pro-Russian uniformed "local self-defence" forces, with camouflage and modern Russian weaponry but no identifying badges or insignia, operating in 2014 during the Russo-Ukrainian War, were also called "martians" or "little green men" by the locals and the media.
Astronomy
In 1967, Jocelyn Bell Burnell and Antony Hewish of the University of Cambridge, UK dubbed the first discovered pulsar LGM-1 for "little green men" because the regular oscillations of its signal suggested a possible intelligent origin. Its designation was later changed to CP 1919, and it is now known as PSR B1919+21.
See also
Bug-eyed monster
Extraterrestrial life
Fairies
Green Man (folklore and ornamentation)
Grey alien
Irkens
Jinn
Leprechaun
List of alleged extraterrestrial beings
Little people (mythology)
Men in black
Nordic aliens
Orbit (mascot)
Reptilians
The Awful Green Things from Outer Space
References
Further reading
Karyl, Anna. The Kelly Incident, 2004.
Roth, Christopher F. (2005) "Ufology as Anthropology: Race, Extraterrestrials, and the Occult." In E.T. Culture: Anthropology in Outerspaces, ed. by Debbora Battaglia. Durham, N.C.: Duke University Press.
Vallee, Jacques. Anatomy of a Phenomenon: Unidentified Objects in Space, 1965.
External links
Summary of folklore LGM research by Chris Aubeck
Summary of electronic LGM search of New York Times and Wall Street Journal by David Rudiak
Extraterrestrial life
Alleged extraterrestrial beings
Stock characters
Gremlins | Little green men | Astronomy,Biology | 3,133 |
38,640,490 | https://en.wikipedia.org/wiki/Internet%20rush%20hour | Internet rush hour is the time period when the majority of Internet users are online at the same time. Typically, in the UK the peak hours are between 7 and 11 pm. During this time frame, users commonly experience slowness while browsing or downloading content. The congestion experienced during the rush hour is similar to transportation rush hour, where demand for resources outweighs capacity.
In contrast to the hours cited above from a source dated 2011, a Google Analytics report dated 2017 indicates very strongly that daily web use peaks between 9 am and midday, falls off steadily throughout the day with a modest levelling-off between 7 pm and 10 pm, and then collapses to a base at 4 am.
Reasons
Growth
Global Internet usage has increased significantly, from 12,881,000 hosts in 1998 to 908,585,739 hosts in 2012. Internet use has surged with the introduction of mobile devices and tablet computing. Internet access has also changed during this time frame, from 56 kbit/s dial-up to high-speed bandwidth access at 100 Mbit/s or higher. The increases in Internet users and in access bandwidth are contributing factors to the Internet rush hour.
The table below shows the big picture of world internet usage versus the population.
Infrastructure
End users connect to the Internet through Internet service providers (ISPs). The Tier 1 ISPs own the infrastructure, which includes routers, switches and fiber optic footprints. The backbone of the Internet is connected through Tier 1 ISPs that peer with other Tier 1 ISPs in a transit-free network. These peering agreements between Tier 1 ISPs have no overt settlements, meaning there is no money exchanged for the right to pass traffic between Tier 1 peers. Tier 2 and Tier 3 ISPs are customers of the Tier 1 ISPs and rely on the Tier 1 ISPs to route their traffic across the Internet. This is a disadvantage for the lower-tier ISPs due to the number of traffic hops and the shared common gateways to Tier 1 ISPs. The shared common gateways are choke points that contribute to the Internet rush hour. Each Tier 1 ISP has a peering policy that defines how IP traffic exchanges are created and guidelines for managing peer traffic.
Tier 1 Internet Service Providers
AT&T | AS 7018
DTAG | AS 3320
XO | AS 2828
Telecom Italia Sparkle | AS 6762
Inteliquent | AS 3257
Verizon | AS 701
Sprint | AS 1239
Telia | AS 1299
NTT | AS 2914
Level 3 | AS 1 / 3356 / 3549
Tata | AS 6453
Telefónica | AS 12956
Zayo | AS 6461
Bandwidth throttling
Some ISPs have been criticized for implementing bandwidth throttling to intentionally slow down a user's internet service at various points on the network. The key problem at peak hours, however, is peering capacity: a peering point acts like a door between two networks, and if everyone wants to go through at the same time, it gets jammed. The constant increase in Internet traffic requires regular increases in the size of the peering points, and there is a dispute as to who is going to cover the cost of these increases. For instance, Netflix and Google were reported to represent more than 50% of peak-hour downstream traffic in the US in a study by Sandvine (figures for October 2013). There is a commercial and legal battle in progress to determine who will pay for the costs induced by peak-hour traffic. Three positions exist:
The users must cover the cost through higher subscription fees because they generate the higher traffic.
The ISPs must cover the cost because they receive money from the users to access the Internet and it is part of their job to guarantee that the users can access the services.
The top content providers must cover the cost because they receive money from the user to access their service and they use an unfair share of the Internet infrastructure.
As an example of these disputes, Comcast went up against the Federal Communications Commission (FCC) in regard to net neutrality, or keeping their networks open regardless of the content. The federal appeals court ruled that the FCC had no authority to stop Comcast from slowing internet traffic.
Performance
British broadband
A study by the British telecom regulator Ofcom and SamKnows Broadband determined that the average British broadband connection actually achieves less than half its advertised speed. That study found an average broadband speed of 3.6 Mbit/s against an advertised speed of 7.2 Mbit/s, with speed declining 30% during the Internet rush hour.
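As a worked illustration of these figures, the following back-of-the-envelope sketch reads the 30% rush-hour decline as relative to the overall 3.6 Mbit/s average (an assumption on our part; the source does not state the baseline):

```python
advertised_mbits = 7.2
average_mbits = 3.6            # about half the advertised speed
rush_hour_decline = 0.30       # 30% slower during the rush hour

rush_hour_mbits = average_mbits * (1 - rush_hour_decline)
print(rush_hour_mbits)                     # 2.52 Mbit/s
print(rush_hour_mbits / advertised_mbits)  # 0.35 -> 35% of the advertised rate
```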
Netflix
Netflix rated various ISPs in 2011 on their ability to stream content to consumers, to determine what bandwidth each ISP provides. Netflix tested HD streams, which use about 4800 kilobits per second, and averaged what ISPs were able to provide.
FCC broadband study
The United States Federal Communications Commission (FCC) measuring broadband program, begun in August 2011, is a study of performance in the United States covering 80% of the residential market. The report measures ISPs' delivered performance against advertised bandwidth rates. In 2011, the average ISP delivered 87 percent of advertised rates, increasing to 96 percent in 2012. Average broadband speeds increased from 11.1 Mbit/s in 2011 to 14.3 Mbit/s in 2012. Further increases may be inhibited by outdated modems, which prevent ISPs from controlling broadband performance.
References
Internet architecture | Internet rush hour | Technology | 1,094 |
634,266 | https://en.wikipedia.org/wiki/FR-4 | FR-4 (or FR4) is a NEMA grade designation for glass-reinforced epoxy laminate material. FR-4 is a composite material composed of woven fiberglass cloth with an epoxy resin binder that is flame resistant (self-extinguishing).
"FR" stands for "flame retardant", and does not denote that the material complies with the standard UL94V-0 unless testing is performed to UL 94, Vertical Flame testing in Section 8 at a compliant lab. The designation FR-4 was created by NEMA in 1968.
FR-4 glass epoxy is a popular and versatile high-pressure thermoset plastic laminate grade with a good strength-to-weight ratio. With near-zero water absorption, FR-4 is most commonly used as an electrical insulator possessing considerable mechanical strength. The material is known to retain its high mechanical values and electrical insulating qualities in both dry and humid conditions. These attributes, along with good fabrication characteristics, lend utility to this grade for a wide variety of electrical and mechanical applications.
Grade designations for glass epoxy laminates are: G-10, G-11, FR-4, FR-5 and FR-6. Of these, FR-4 is the grade most widely in use today. G-10, the predecessor to FR-4, lacks FR-4's self-extinguishing flammability characteristics. Hence, FR-4 has since replaced G-10 in most applications.
FR-4 epoxy resin systems typically employ bromine, a halogen, to facilitate flame-resistant properties in FR-4 glass epoxy laminates. Some applications where thermal destruction of the material is a desirable trait still use non-flame-resistant G-10.
Properties
Which materials fall into the "FR-4" category is defined in the NEMA LI 1-1998 standard. Typical physical and electrical properties of FR-4 are as follows. The abbreviations LW (lengthwise, warp yarn direction) and CW (crosswise, fill yarn direction) refer to the conventional perpendicular fiber orientations in the XY plane of the board (in-plane). In terms of Cartesian coordinates, lengthwise is along the x-axis, crosswise is along the y-axis, and the z-axis is the through-plane direction. The values shown below are an example from one manufacturer's material; another manufacturer's material will usually have slightly different values. Checking the actual values for any particular material from the manufacturer's datasheet can be very important, for example in high-frequency applications.
where:
LW = lengthwise
CW = crosswise
PF = perpendicular to laminate face
Applications
FR-4 is a common material for printed circuit boards (PCBs). A thin layer of copper foil is typically laminated to one or both sides of an FR-4 glass epoxy panel. These are commonly referred to as copper clad laminates. The copper thickness or copper weight can vary and so is specified separately.
FR-4 is also used in the construction of relays, switches, standoffs, busbars, washers, arc shields, transformers and screw terminal strips.
See also
FR-2
Polyimide
G-10 (material)
References
Further reading
Printed circuit board manufacturing
Fibre-reinforced polymers | FR-4 | Engineering | 692 |
67,505,568 | https://en.wikipedia.org/wiki/Niihari%20temple%20ruins | The is an archaeological site with the ruins of a Buddhist temple located in the Kujira neighborhood of the city of Chikusei, Ibaraki, Japan. The temple no longer exists, but the temple grounds were designated a National Historic Site in 1942.
Overview
The Niihari temple site is located approximately 200 to 300 meters north of the Niihari Gunga ruins, and therefore is most likely the official temple associated with that Nara period county-level government administration complex. It is located on a river terrace on the bank of the Kokai River, and Japan National Route 50 cuts through the southern end of the ruins. The site has been known since ancient times, and fragments of roof tiles and earthenware have been found in the locale. Three archaeological excavations from 1939 have found the foundations of the Kondō, east and west Pagodas, and Lecture Hall. A corridor from the Middle Gate surrounds the main hall and twin pagodas, and connects with the lecture hall. The arrangement of structures was the same as the famed Yakushi-ji in Nara. From the numerous roof tiles uncovered, the temple also had a strong connection to the Shimotsuke Yakushi-ji, which dates from the same period, and with the Yūki temple ruins. From the layout and artifacts, this temple ruin indicates the spread of Buddhism into the northern Kantō region from an early date, with strong Kansai influences.
The site was backfilled after excavation and is now an empty field with stone markers indicating the locations of the various building foundations. The site is located about 30 minutes on foot from Niihari Station on the JR East Mito Line.
Uenohara Tile Kiln Site
The is an archaeological site with the ruins of the kiln that was used to make the roof tiles found at the Niihari temple ruins. The kiln ruin is located in the neighboring city of Sakuragawa. The large, flat-style kiln had a length of 13.8 meters and width of 3.64 meters, of which only the bottom portion has survived. The kiln ruins are part of the National Historic Site designation.
See also
List of Historic Sites of Japan (Ibaraki)
References
External links
Niihari Abandoned Temple Ibaraki Prefecture Board of Education official site
Tile Kiln Site Ibaraki Prefecture Board of Education official site
Chikusei city home page
Sakuragawa city home page
Historic Sites of Japan
Chikusei
Sakuragawa, Ibaraki
Shimotsuke Province
Nara period
Buddhist archaeological sites in Japan
Japanese pottery kiln sites
Former Buddhist temples | Niihari temple ruins | Chemistry,Engineering | 518 |
3,740,391 | https://en.wikipedia.org/wiki/Comparison%20of%20operating%20system%20kernels | A kernel is a component of a computer operating system. A comparison of system kernels can provide insight into the design and architectural choices made by the developers of particular operating systems.
Comparison criteria
The following tables compare general and technical information for a number of widely used and currently available operating system kernels. Please see the individual products' articles for further information.
Even though there are a large number and variety of available Linux distributions, all of these kernels are grouped under a single entry in these tables, since the differences among them are at the patch level. See comparison of Linux distributions for a detailed comparison. Linux distributions that have highly modified kernels — for example, real-time computing kernels — should be listed separately. There are also a wide variety of minor BSD operating systems, many of which can be found at comparison of BSD operating systems.
The tables specifically do not include subjective viewpoints on the merits of each kernel or operating system.
Feature overview
The major contemporary general-purpose kernels are shown in comparison. Only an overview of the technical features is detailed.
Realtime support
Transport protocol support
In-kernel security
In-kernel virtualization
In-kernel server support
Binary format support
A comparison of OS support for different binary formats (executables):
File system support
Physical file systems:
Networked file system support
Supported CPU instruction sets and microarchitectures
Supported GPU processors
Supported kernel execution environment
This table indicates, for each kernel, what operating systems' executable images and device drivers can be run by that kernel.
Supported cipher algorithms
This may be useful in some situations, such as file system encryption.
Supported compression algorithms
This may be useful in some situations, such as compressed file systems.
Supported message digest algorithms
Supported Bluetooth protocols
Audio support
Graphics support
See also
Comparison of open-source operating systems
Comparison of Linux distributions
Comparison of BSD operating systems
Comparison of Microsoft Windows versions
List of operating systems
Comparison of file systems
Comparison of operating systems
Footnotes
Kernels
Computing platforms
Operating system kernels | Comparison of operating system kernels | Technology | 404 |
12,053,305 | https://en.wikipedia.org/wiki/Cepaea | Cepaea is a genus of large air-breathing land snails, terrestrial pulmonate gastropod molluscs in the family Helicidae. The shells are often brightly coloured and patterned with brown stripes. The two species in this genus, C. nemoralis and C. hortensis, are widespread and common in Western and Central Europe and have been introduced to North America. Both have been influential model species for ongoing studies of genetics and natural selection. Like many Helicidae, these snails use love darts during mating.
Species
For a long time, four species were classified in the genus Cepaea. However, molecular phylogenetic studies suggested that two of them should be placed in the genera Macularia and Caucasotachea, which are not immediate relatives of either Cepaea or each other:
Cepaea hortensis (O. F. Müller, 1774) – white-lipped snail or garden banded snail
Cepaea nemoralis (Linnaeus, 1758) – brown-lipped snail or grove snail
Cepaea sylvatica (Draparnaud, 1801), now Macularia sylvatica
Cepaea vindobonensis (Férussac, 1821), now Caucasotachea vindobonensis
Interspecific relations
The range of C. hortensis extends further north than that of C. nemoralis in Scotland and Scandinavia and it is the only one of the two species in Iceland. Likewise in the Swiss Alps C. hortensis is found as high as 2050 m, but C. nemoralis only up to 1600 m. Conversely, the southern edge of the range lies further north in C. hortensis; unlike C. nemoralis it does not occur in Italy, and in Spain it has a more restricted distribution (in the north-east corner).
Where the ranges overlap C. hortensis prefers cooler sites with longer and damper vegetation. But the two species often co-occur at a site, in which situation the densities of both affect each other's growth, fecundity and mortality. However, they differ somewhat in their behaviour: C. hortensis is more active at lower temperatures, aestivates higher on the vegetation and is more diurnal, although this appears to be independent of whether the other species is present or not.
When given no choice of partner in the laboratory, the two Cepaea species can form hybrids, which will backcross with the parental species, but the fertility is very low.
Shell polymorphism
Description and genetics
The two Cepaea species share a genetic polymorphism for the colour and banding pattern of the shell.
The background colour of the shell ranges from dark brown, through pink to yellow or even approaching white. This variation is continuous, but there are peaks in the distribution corresponding to brown, pink and yellow morphs. The colour is mainly determined by alleles at a single locus with brown dominant to pink, which is dominant to yellow.
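A minimal sketch of how this dominance hierarchy maps a genotype to a shell colour (the function and allele representation are ours, for illustration only):

```python
# Dominance order at the colour locus: brown > pink > yellow.
DOMINANCE = ["brown", "pink", "yellow"]

def shell_colour(allele_a, allele_b):
    """Return the phenotype of a diploid genotype under simple dominance:
    whichever allele sits earlier in the dominance order is expressed."""
    return min(allele_a, allele_b, key=DOMINANCE.index)

print(shell_colour("yellow", "pink"))    # pink   (pink dominant to yellow)
print(shell_colour("yellow", "yellow"))  # yellow (shown only by homozygotes)
```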
Up to five bands (very rarely more) run spirally around the shell, numbered 1 to 5 with the larger numbers further from the shell apex. The conventional scoring annotation is to write 12345 if all bands are present and separated, but to replace a number with 0 if a band is absent from its usual position and to enclose numbers in parentheses if bands are fused with their neighbours. Thus 003(45) would mean that the top two bands are absent and the lower two fused.
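A small sketch of decoding this scoring annotation (a hypothetical helper for illustration, not an established tool):

```python
def describe_bands(formula):
    """Decode a Cepaea band formula such as '12345', '00300' or '003(45)'.
    '0' marks a band absent from its usual position; parentheses mark
    runs of bands fused with their neighbours."""
    notes, fused, position = [], False, 0
    for ch in formula:
        if ch == "(":
            fused = True
        elif ch == ")":
            fused = False
        else:
            position += 1
            if ch == "0":
                notes.append(f"band {position}: absent")
            else:
                notes.append(f"band {ch}: present" + (", fused" if fused else ""))
    return notes

print(describe_bands("003(45)"))
# ['band 1: absent', 'band 2: absent', 'band 3: present',
#  'band 4: present, fused', 'band 5: present, fused']
```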
A dominant allele at one locus causes the absence of all bands, a dominant allele at another locus causes the loss of all bands except band 3, and a dominant allele at a third locus causes the loss of just bands 1 and 2. The first of these three loci is closely linked to the locus determining shell colour, to another influencing the spread of the band pigment, and to one determining the colour of the lip and bands. This collection of linked loci are part of a supergene. A consequence of this arrangement is that the shells of different background colours within a population often exhibit different ratios of banded to unbanded shells: this is an example of linkage disequilibrium.
The bands are usually dark brown, but this is affected by genes influencing intensity and colouration (e.g. black or orange). Another locus (part of the supergene) determines whether the band is continuous or forms a sequence of spots. The genetics underlying the fusion of adjacent bands is not well understood.
Evolutionary explanations
In both species, most populations exhibit polymorphism in one or more of these shell characters. Nevertheless, statistically we can detect systematic variation at continental scales, and also between habitats, and at various scales down to a few tens of metres. There is also statistical evidence of change with time, based both on comparisons between sub-fossil and modern shells, and on resampling the same sites some decades apart, although the latter has more often found little change over the period (stasis). Very much research in ecological genetics has addressed the reasons for both the variation and the systematic trends.
The two selection pressures that might most feasibly act on the appearance of shells are climatic selection and predation. Darker shells heat up more quickly in the sun, which might well be advantageous for cold-blooded animals in shaded woodland but risks causing overheating and death in open habitats. This trade-off is also presumed to be responsible for the greater proportion of yellow C. nemoralis to the south, but it is curious why the trend is not present in C. hortensis. Contrary to predictions, recent global warming has not led to a detectable increase in yellow morphs on a continental scale. The use of photosensitive paint has shown that paler morphs spend more time exposed to the sun, which may imply that the shell polymorphism allows different morphs to coexist at a site by occupying different microhabitats.
Both temperature regulation and predation make the same prediction of pale shells in open habitats and dark shells in woodland, so—although the prediction has often been confirmed—it is difficult to test which is the more important explanation. However, song thrushes (Turdus philomelos) break open Cepaea shells on stones ("anvils"), allowing a comparison of those they predate with those present in the local environment. Besides the directional selection favouring camouflaged individuals, visually searching predators might cause apostatic selection. The hypothesis is that they form a search image for the commonest morphs, favouring whichever morphs are locally rare, thus promoting diversity. As well as its visual effect, the shell pigments are associated with differences in shell strength, so may affect predation by predators searching non-visually, for instance at night.
Several studies have demonstrated a predicted evolutionary response of shell appearance to a change of habitat. However, the association of shell appearance and habitat is not always consistent, especially in more disturbed environments, so it is believed that random effects are also influential, particularly founder effects. The two Cepaea species colonised much of Europe only within the last 4000 generations, so the time available for selection to act has been limited, and local anthropogenic disturbances must often have reversed which morphs are optimal. Moreover, snails disperse more slowly than many other animals, so the most suitable genes may be locally absent.
For instance, biologists were at one time puzzled by the phenomenon of "area effects"; the same morph of Cepaea may be found consistently over a wide area but in adjacent areas of similar habitat a different set of morphs predominate instead, with a sharp transition between. The explanation accepted nowadays is that relatively recently a change of habitat allowed the rapid colonisation of vacant areas by descendants of a few founder individuals until the colony had expanded out to areas occupied by other populations; subsequently intraspecific competition slowed the dispersal of genes into the neighbouring, occupied areas. Nevertheless, occasional transfer of genes between areas of different habitat is proposed to be important in maintaining the local diversity of phenotypes.
References
Further reading
Helicidae
Gastropod genera
Model organisms
Ecological genetics
Polymorphism (biology) | Cepaea | Biology | 1,677 |
52,797 | https://en.wikipedia.org/wiki/Digital%20camera | A digital camera, also called a digicam, is a camera that captures photographs in digital memory. Most cameras produced today are digital, largely replacing those that capture images on photographic film or film stock. Digital cameras are now widely incorporated into mobile devices like smartphones with the same or more capabilities and features of dedicated cameras. High-end, high-definition dedicated cameras are still commonly used by professionals and those who desire to take higher-quality photographs.
Digital and digital movie cameras share an optical system, typically using a lens with a variable diaphragm to focus light onto an image pickup device. The diaphragm and shutter admit a controlled amount of light to the image, just as with film, but the image pickup device is electronic rather than chemical. However, unlike film cameras, digital cameras can display images on a screen immediately after being recorded, and store and delete images from memory. Many digital cameras can also record moving videos with sound. Some digital cameras can crop and stitch pictures and perform other kinds of image editing.
History
The first semiconductor image sensor was the charge-coupled device (CCD), invented by Willard S. Boyle and George E. Smith at Bell Labs in 1969, based on MOS capacitor technology. The NMOS active-pixel sensor was later invented by Tsutomu Nakamura's team at Olympus in 1985, which led to the development of the CMOS active-pixel sensor (CMOS sensor) at the NASA Jet Propulsion Laboratory in 1993.
In the 1960s, Eugene F. Lally of the Jet Propulsion Laboratory was thinking about how to use a mosaic photosensor to capture digital images. His idea was to take pictures of the planets and stars while travelling through space to give information about the astronauts' position. As with Texas Instruments employee Willis Adcock's filmless camera (US patent 4,057,830) in 1972, the technology had yet to catch up with the concept.
In 1972, the Landsat 1 satellite's multispectral scanner (MSS) started taking digital images of Earth. The MSS, designed by Virginia Norwood at Hughes Aircraft Company starting in 1969, captured and transmitted image data from green, red, and two infrared bands with 6 bits per channel, using a mechanical rocking mirror and an array of 24 detectors. Operating for six years, it transmitted more than 300,000 digital photographs of Earth while orbiting the planet about 14 times per day.
Also in 1972, Thomas McCord from MIT and James Westphal from Caltech together developed a digital camera for use with telescopes. Their 1972 "photometer-digitizer system" used an analog-to-digital converter and a digital frame memory to store 256 x 256-pixel images of planets and stars, which were then recorded on digital magnetic tape. CCD sensors were not yet commercially available, and the camera used a silicon diode vidicon tube detector, which was cooled using dry ice to reduce dark current, allowing exposure times of up to one hour.
The Cromemco Cyclops was an all-digital camera introduced as a commercial product in 1975. Its design was published as a hobbyist construction project in the February 1975 issue of Popular Electronics magazine. It used a 32×32 metal–oxide–semiconductor (MOS) image sensor, which was a modified MOS dynamic RAM (DRAM) memory chip.
Steven Sasson, an engineer at Eastman Kodak, built a self-contained electronic camera that used a monochrome Fairchild CCD image sensor in 1975. Around the same time, Fujifilm began developing CCD technology in the 1970s. Early uses were mainly military and scientific, followed by medical and news applications.
The first filmless SLR (single lens reflex) camera was publicly demonstrated by Sony in August 1981. The Sony "Mavica" (magnetic still video camera) used a color-striped 2/3" format CCD sensor with 280K pixels, along with analogue video signal processing and recording. The Mavica electronic still camera recorded FM-modulated analog video signals on a newly developed 2" magnetic floppy disk, dubbed the "Mavipak". The disk format was later standardized as the "Still Video Floppy", or "SVF".
The Canon RC-701, introduced in May 1986, was the first SVF camera (and the first electronic SLR camera) sold in the US. It employed an SLR viewfinder, included a 2/3" format color CCD sensor with 380K pixels, and was sold along with a removable 11-66mm and 50-150mm zoom lens.
Over the next few years, many other companies began selling SVF cameras. These analog electronic cameras included the Nikon QV-1000C, which had an SLR viewfinder and a 2/3" format monochrome CCD sensor with 380K pixels and recorded analog black-and-white images on a Still Video Floppy.
At Photokina 1988, Fujifilm introduced the FUJIX DS-1P, the first fully digital camera, which recorded digital images using a semiconductor memory card. The camera's memory card had a capacity of 2 MB of SRAM (static random-access memory) and could hold up to ten photographs. In 1989, Fujifilm released the FUJIX DS-X, the first fully digital camera to be commercially released. In 1996, Toshiba's 40 MB flash memory card was adopted for several digital cameras.
The first commercial camera phone was the Kyocera Visual Phone VP-210, released in Japan in May 1999. It was called a "mobile videophone" at the time, and had a 110,000-pixel front-facing camera. It stored up to 20 JPEG digital images, which could be sent over e-mail, or the phone could send up to two images per second over Japan's Personal Handy-phone System (PHS) cellular network. The Samsung SCH-V200, released in South Korea in June 2000, was also one of the first phones with a built-in camera. It had a TFT liquid-crystal display (LCD) and stored up to 20 digital photos at 350,000-pixel resolution. However, it could not send the resulting image over the telephone function but required a computer connection to access photos. The first mass-market camera phone was the J-SH04, a Sharp J-Phone model sold in Japan in November 2000. It could instantly transmit pictures via cell phone telecommunication. By the mid-2000s, higher-end cell phones had an integrated digital camera, and by the early 2010s, almost all smartphones had an integrated digital camera.
Image sensors
The two major types of digital image sensors are CCD and CMOS. A CCD sensor has one amplifier for all the pixels, while each pixel in a CMOS active-pixel sensor has its own amplifier. Compared to CCDs, CMOS sensors use less power. Cameras with a small sensor use a back-side-illuminated CMOS (BSI-CMOS) sensor. The image processing capabilities of the camera determine the outcome of the final image quality much more than the sensor type.
Sensor resolution
The resolution of a digital camera is often limited by the image sensor that turns light into discrete signals. The brighter the image at a given point on the sensor, the larger the value that is read for that pixel.
Depending on the physical structure of the sensor, a color filter array may be used, which requires demosaicing to recreate a full-color image. The number of pixels in the sensor determines the camera's "pixel count".
In a typical sensor, the pixel count is the product of the number of rows and the number of columns. For example, a 1,000 by 1,000-pixel sensor would have 1,000,000 pixels, or 1 megapixel.
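For example, the arithmetic just described amounts to (a trivial sketch):

```python
def megapixels(rows, cols):
    """Pixel count is the product of sensor rows and columns."""
    return rows * cols / 1_000_000

print(megapixels(1000, 1000))  # 1.0  -> the 1-megapixel example above
print(megapixels(4000, 6000))  # 24.0 -> a typical 24 MP sensor
```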
Resolution options
The firmware's resolution selector allows the user to optionally lower the resolution, reducing the file size per picture and extending lossless digital zooming. The lowest resolution option is typically 640×480 pixels (0.3 megapixels).

A lower resolution increases the number of photos that fit in the remaining free space, postponing the exhaustion of storage. This is useful where no further data storage device is available, and for captures of lower significance, where the benefit of lower storage consumption outweighs the disadvantage of reduced detail.
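The trade-off can be made concrete with a small sketch (the per-photo file sizes below are illustrative assumptions, not measured values):

```python
free_space_mb = 512.0

# Rough JPEG sizes per picture at two resolutions (illustrative assumptions).
size_per_photo_mb = {"12 MP": 4.0, "0.3 MP (640x480)": 0.1}

for resolution, size_mb in size_per_photo_mb.items():
    print(resolution, "->", int(free_space_mb / size_mb), "photos remaining")
# 12 MP -> 128 photos remaining
# 0.3 MP (640x480) -> 5120 photos remaining
```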
Image sharpness
An image's sharpness is conveyed through crisp detail, defined lines, and depicted contrast. Sharpness is the product of multiple systems in a DSLR camera: its ISO setting, resolution, lens and lens settings, the environment of the image, and its post-processing. An image can be over-sharpened, but it can never be too much in focus.

A digital camera's resolution is determined by its digital sensor, which indicates how much sharpness can be produced through the amount of noise and grain tolerated through the lens of the camera. Resolution in digital stills and digital movies reflects the camera's ability to resolve detail at a distance, measured by frame size and by pixel type, number, and organization. Although some DSLR cameras have limited resolutions, it is almost always possible to achieve adequate sharpness for an image. The ISO chosen when taking a photo affects the quality of the image: high ISO settings yield a less sharp image due to the increased amount of noise admitted into the image, while too little noise can also produce an image that is not sharp.
Methods of image capture
Since the first digital backs were introduced, there have been three main methods of capturing the image, each based on the hardware configuration of the sensor and color filters.
Single-shot capture systems use either one sensor chip with a Bayer filter mosaic, or three separate image sensors (one each for the primary additive colors red, green, and blue) which are exposed to the same image via a beam splitter (see Three-CCD camera).
Multi-shot exposes the sensor to the image in a sequence of three or more openings of the lens aperture. There are several methods of application of the multi-shot technique. The most common was originally to use a single image sensor with three filters passed in front of the sensor in sequence to obtain the additive color information. Another multiple-shot method is called microscanning. This method uses a single sensor chip with a Bayer filter and physically moves the sensor on the focus plane of the lens to construct a higher resolution image than the native resolution of the chip. A third version combines these two methods without a Bayer filter on the chip.
The third method is called scanning because the sensor moves across the focal plane much like the sensor of an image scanner. The linear or tri-linear sensors in scanning cameras utilize only a single line of photosensors, or three lines for the three colors. Scanning may be accomplished by moving the sensor (for example, when using color co-site sampling) or by rotating the whole camera. A digital rotating line camera offers images consisting of a total resolution that is very high.
The choice of method for a given capture is determined largely by the subject matter. It is usually inappropriate to attempt to capture a subject that moves with anything but a single-shot system. However, the higher color fidelity and larger file sizes and resolutions that are available with multi-shot and scanning backs make them more attractive for commercial photographers who are working with stationary subjects and large-format photographs.
Improvements in single-shot cameras and image file processing at the beginning of the 21st century made single-shot cameras almost completely dominant, even in high-end commercial photography.
Filter mosaics, interpolation, and aliasing
Most current consumer digital cameras use a Bayer filter mosaic in combination with an optical anti-aliasing filter to reduce the aliasing due to the reduced sampling of the different primary-color images.
A demosaicing algorithm is used to interpolate color information to create a full array of RGB image data.
Cameras that use a beam-splitter single-shot 3CCD approach, three-filter multi-shot approach, color co-site sampling or Foveon X3 sensor do not use anti-aliasing filters, nor demosaicing.
Firmware in the camera, or software in a raw converter program such as Adobe Camera Raw, interprets the raw data from the sensor to obtain a full-color image, because the RGB color model requires three intensity values for each pixel: one each for the red, green, and blue (other color models, when used, also require three or more values per pixel).
A single sensor element cannot simultaneously record these three intensities, so a color filter array (CFA) must be used to selectively filter a particular color for each pixel.
The Bayer filter pattern is a repeating 2x2 mosaic pattern of light filters, with green ones at opposite corners and red and blue in the other two positions. The high proportion of green takes advantage of the properties of the human visual system, which determines brightness mostly from green and is far more sensitive to brightness than to hue or saturation. Sometimes a 4-color filter pattern is used, often involving two different hues of green. This provides potentially more accurate color, but requires a slightly more complicated interpolation process.
The color intensity values not captured for each pixel can be interpolated from the values of adjacent pixels which represent the color being calculated.
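A minimal sketch of this interpolation idea, as bilinear demosaicing of an RGGB Bayer mosaic, assuming NumPy and SciPy are available (an illustration only, not any manufacturer's actual pipeline; the function name and layout are our own):

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """Bilinear demosaic of a 2-D RGGB Bayer mosaic laid out as
        R G R G ...
        G B G B ...
    Missing colour values at each pixel are filled with a weighted
    average of the nearest measured samples of that colour."""
    h, w = raw.shape
    planes = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3))
    planes[0::2, 0::2, 0] = raw[0::2, 0::2]; masks[0::2, 0::2, 0] = 1  # red
    planes[0::2, 1::2, 1] = raw[0::2, 1::2]; masks[0::2, 1::2, 1] = 1  # green
    planes[1::2, 0::2, 1] = raw[1::2, 0::2]; masks[1::2, 0::2, 1] = 1  # green
    planes[1::2, 1::2, 2] = raw[1::2, 1::2]; masks[1::2, 1::2, 2] = 1  # blue
    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
    out = np.empty_like(planes)
    for c in range(3):
        num = convolve2d(planes[:, :, c], kernel, mode="same")
        den = convolve2d(masks[:, :, c], kernel, mode="same")
        interp = num / np.maximum(den, 1e-9)
        # Keep the measured sample where one exists; interpolate elsewhere.
        out[:, :, c] = np.where(masks[:, :, c] > 0, planes[:, :, c], interp)
    return out
```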
Sensor size and angle of view
Cameras with digital image sensors that are smaller than the typical 35 mm film size have a smaller field or angle of view when used with a lens of the same focal length. This is because the angle of view is a function of both focal length and the sensor or film size used.
The crop factor is relative to the 35mm film format. If a smaller sensor is used, as in most digicams, the field of view is cropped by the sensor to smaller than the 35 mm full-frame format's field of view. This narrowing of the field of view may be described as crop factor, a factor by which a longer focal length lens would be needed to get the same field of view on a 35 mm film camera. Full-frame digital SLRs utilize a sensor of the same size as a frame of 35 mm film.
Common values for field of view crop in DSLRs using active pixel sensors include 1.3x for some Canon (APS-H) sensors, 1.5x for Sony APS-C sensors used by Nikon, Pentax and Konica Minolta and for Fujifilm sensors, 1.6 (APS-C) for most Canon sensors, ~1.7x for Sigma's Foveon sensors and 2x for Kodak and Panasonic 4/3-inch sensors currently used by Olympus and Panasonic. Crop factors for non-SLR consumer compact and bridge cameras are larger, frequently 4x or more.
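The crop factor follows directly from the sensor diagonal. A short sketch of the calculation (the helper is hypothetical and the sensor dimensions used are approximate):

```python
import math

FULL_FRAME_DIAGONAL = math.hypot(36.0, 24.0)   # ~43.3 mm (35 mm film frame)

def crop_factor(width_mm, height_mm):
    """Crop factor = full-frame diagonal / sensor diagonal."""
    return FULL_FRAME_DIAGONAL / math.hypot(width_mm, height_mm)

aps_c = crop_factor(23.6, 15.7)  # Sony/Nikon-style APS-C
print(round(aps_c, 2))           # ~1.53
print(round(50 * aps_c))         # a 50 mm lens frames like ~76 mm on 35 mm film
```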
Pixels are typically square, with an aspect ratio of 1. On full-frame sensors (i.e., 24 mm × 36 mm), some cameras offer images with 20–25 million pixels captured by photosites of about 7.5 µm, a surface roughly 50 times larger than the photosites found in small-sensor cameras.
Types of digital cameras
Digital cameras come in a wide range of sizes, prices, and capabilities. In addition to general-purpose digital cameras, specialized cameras including multispectral imaging equipment and astrographs are used for scientific, military, medical, and other special purposes.
Compacts
Compact cameras are intended to be portable (pocketable) and are particularly suitable for casual "snapshots". Point-and-shoot cameras usually fall under this category.
Many incorporate a retractable lens assembly that provides optical zoom. In most models, an auto-actuating lens cover protects the lens from elements. Most ruggedized or water-resistant models do not retract, and most with superzoom capability do not retract fully.
Compact cameras are usually designed to be easy to use. Almost all include an automatic mode, or "auto mode", which automatically makes all camera settings for the user. Some also have manual controls. Compact digital cameras typically contain a small sensor that trades off picture quality for compactness and simplicity; images can usually only be stored using lossy compression (JPEG). Most have a built-in flash, usually of low power, sufficient for nearby subjects. A few high-end compact digital cameras have a hotshoe for connecting to an external flash. Live preview is almost always used to frame the photo on an integrated LCD. In addition to being able to take still photographs, almost all compact cameras have the ability to record video.
Compacts often have macro capability and zoom lenses, but the zoom range (up to 30x) is generally enough for candid photography but less than is available on bridge cameras (more than 60x), or the interchangeable lenses of DSLR cameras available at a much higher cost. Autofocus systems in compact digital cameras generally are based on a contrast-detection methodology using the image data from the live preview feed of the main imager. Some compact digital cameras use a hybrid autofocus system similar to what is commonly available on DSLRs.
Typically, compact digital cameras incorporate a nearly silent leaf shutter into the lens but play a simulated camera sound for skeuomorphic purposes.
For low cost and small size, these cameras typically use image sensor formats with a diagonal between 6 and 11 mm, corresponding to a crop factor between 7 and 4. This gives them weaker low-light performance, greater depth of field, generally closer focusing ability, and smaller components than cameras using larger sensors. Some cameras use a larger sensor, including, at the high end, a pricey full-frame compact camera such as the Sony Cyber-shot DSC-RX1, with capability near that of a DSLR.
A variety of additional features are available depending on the model of the camera. Such features include GPS, compass, barometers and altimeters.
Starting in 2010, some compact digital cameras can take 3D still photos. These 3D compact stereo cameras can capture 3D panoramic photos with dual lens or even a single lens for playback on a 3D TV.
In 2013, Sony released two add-on camera models without display, to be used with a smartphone or tablet, controlled by a mobile application via WiFi.
Rugged compacts
Rugged compact cameras typically include protection against submersion, hot and cold conditions, shock, and pressure. Terms used to describe such properties include waterproof, freezeproof, heatproof, shockproof, and crushproof, respectively. Nearly all major camera manufacturers have at least one product in this category. Some are waterproof to a considerable depth, up to 100 feet (30 m); others only 10 feet (3 m), but only a few will float. Rugged cameras often lack some of the features of an ordinary compact camera, but they have video capability and the majority can record sound. Most have image stabilization and built-in flash. Touchscreen LCDs and GPS do not work underwater.
Action cameras
GoPro and other brands offer action cameras that are rugged, small, and can be easily attached to helmets, arms, bicycles, etc. Most have a wide angle and fixed focus and can take still pictures and video, typically with sound.
360-degree cameras
The 360-degree camera can take pictures or video in 360 degrees using two lenses back-to-back shooting at the same time. Examples include the Ricoh Theta S, Nikon KeyMission 360 and Samsung Gear 360. The Nico360 was launched in 2016 and claimed to be the world's smallest 360-degree camera, with a size of 46 x 46 x 28 mm (1.8 x 1.8 x 1.1 in) and a price of less than $200. With built-in stitching for a virtual reality mode, Wi-Fi, and Bluetooth, live streaming can be done. As it is also water resistant, the Nico360 can be used as an action camera.
There is a trend for action cameras to be able to shoot 360 degrees with at least 4K resolution.
Bridge cameras
Bridge cameras physically resemble DSLRs, and are sometimes called DSLR-shape or DSLR-like. They provide some similar features but, like compacts, they use a fixed lens and a small sensor. Some compact cameras also have a PSAM mode. Most use live preview to frame the image. Their usual autofocus is by the same contrast-detect mechanism as compacts, but many bridge cameras have a manual focus mode and some have a separate focus ring for greater control.
The big physical size and small sensor allow superzoom and wide aperture. Bridge cameras generally include an image stabilization system to enable longer handheld exposures, sometimes performing better than DSLRs in low-light conditions.
As of 2014, bridge cameras come in two principal classes in terms of sensor size, firstly the more traditional 1/2.3" sensor (as measured by image sensor format) which gives more flexibility in lens design and allows for handholdable zoom from 20 to 24 mm (35 mm equivalent) wide angle all the way up to over 1000 mm supertele, and secondly a 1" sensor that allows better image quality particularly in low light (higher ISO) but puts greater constraints on lens design, resulting in zoom lenses that stop at 200 mm (constant aperture, e.g. Sony RX10) or 400 mm (variable aperture, e.g. Panasonic Lumix FZ1000) equivalent, corresponding to an optical zoom factor of roughly 10 to 15.
Some bridge cameras have a lens thread to attach accessories such as wide-angle or telephoto converters as well as filters such as UV or Circular Polarizing filter and lens hoods. The scene is composed by viewing the display or the electronic viewfinder (EVF). Most have a slightly longer shutter lag than a DSLR. Many of these cameras can store images in a raw format in addition to supporting JPEG. The majority have a built-in flash, but only a few have a hotshoe.
In bright sun, the quality difference between a good compact camera and a digital SLR is minimal but bridge cameras are more portable, cost less and have a greater zoom ability. Thus a bridge camera may better suit outdoor daytime activities, except when seeking professional-quality photos.
Mirrorless interchangeable-lens cameras
In late 2008, a new type of camera emerged, called a mirrorless interchangeable-lens camera. It is technically an interchangeable-lens camera like a DSLR, but it does not require a reflex mirror, a key component of the latter. While a typical DSLR has a mirror that reflects light from the lens up to the optical viewfinder, in a mirrorless camera there is no optical viewfinder. The image sensor is exposed to light at all times, giving the user a digital preview of the image either on the built-in rear LCD screen or an electronic viewfinder (EVF).
These are simpler and more compact than DSLRs due to not having a lens reflex system. MILCs, or mirrorless cameras for short, come with various sensor sizes depending on the brand and manufacturer, including: a small 1/2.3 inch sensor, as is commonly used in bridge cameras, such as the original Pentax Q (more recent Pentax Q versions have a slightly larger 1/1.7 inch sensor); a 1-inch sensor; a Micro Four Thirds sensor; an APS-C sensor, found in the Sony NEX series and α "DSLR-likes", Fujifilm X series, Pentax K-01, and Canon EOS M; and some, such as the Sony α7, use a full frame (35 mm) sensor, with the Hasselblad X1D being the first medium format mirrorless camera. Some MILCs have a separate electronic viewfinder to compensate for the lack of an optical one. In other cameras, the back display is used as the primary viewfinder in the same way as in compact cameras. One disadvantage of mirrorless cameras compared to a typical DSLR is their battery life, due to the energy consumption of the electronic viewfinder, but this can be mitigated by a setting inside the camera in some models. Many mirrorless cameras have a hotshoe.
Olympus and Panasonic released many Micro Four Thirds cameras with interchangeable lenses that are fully compatible with each other without any adapter, while others have proprietary mounts. In 2014, Kodak released its first Micro Four Thirds system camera.
Mirrorless cameras are fast becoming appealing to amateurs and professionals alike due to their simplicity, compatibility with some DSLR lenses, and features that match most DSLRs today.
Modular cameras
While most digital cameras with interchangeable lenses feature a lens-mount of some kind, there are also a number of modular cameras, where the shutter and sensor are incorporated into the lens module.
The first such modular camera was the Minolta Dimâge V in 1996, followed by the Minolta Dimâge EX 1500 in 1998 and the Minolta MetaFlash 3D 1500 in 1999. In 2009, Ricoh released the Ricoh GXR modular camera.
At CES 2013, Sakar International announced the Polaroid iM1836, an 18MP camera with 1"-sensor with interchangeable sensor-lens. An adapter for Micro Four Thirds, Nikon and K-mount lenses was planned to ship with the camera.
There are also a number of add-on camera modules for smartphones, they are called lens-style cameras (lens camera or smart lens). They contain all the essential components of a digital camera inside a DSLR lens-shaped module, hence the name, but lack any sort of viewfinder and most controls of a regular camera. Instead, they are connected wirelessly and/or mounted to a smartphone to be used as its display output and operate the camera's various controls.
Lens-style cameras include:
Sony Cyber-shot QX series "Smart Lens" or "SmartShot" cameras, announced and released in mid 2013 with the Cyber-shot DSC-QX10. In January 2014, a firmware update was announced for the DSC-QX10 and DSC-QX100. In September 2014, Sony announced the Cyber-shot DSC-QX30 as well as the Alpha ILCE-QX1, the former an ultrazoom with a built-in 30x optical zoom lens, the latter opting for an interchangeable Sony E-mount instead of a built-in lens.
Kodak PixPro smart lens camera series, announced in 2014. These include: the 5X optical zoom SL5, 10X optical zoom SL10, and the 25X optical zoom SL25; all featuring 16MP sensors and 1080p video recording, except for the SL5 which caps at 720p.
ViviCam IU680 smart lens camera from Sakar-owned brand, Vivitar, announced in 2014.
Olympus Air A01 lens camera, announced in 2014 and released in 2015, the lens camera is an open platform with an Android operating system and can detach into 2 parts (sensor module and lens), just like the Sony QX1, and all compatible Micro Four Thirds lenses can then be attached to the built-in lens mount of the camera's sensor module.
Digital single-lens reflex cameras (DSLR)
A digital single-lens reflex camera (DSLR) is a camera with a digital sensor that utilizes a reflex mirror to direct light into the viewfinder to frame an image. The reflex mirror blocks light from reaching the camera's sensor and reflects it into the camera's pentaprism, which allows the scene to be seen through the viewfinder. When the shutter release is fully pressed, the reflex mirror flips out of the light path below the pentaprism, briefly darkening the viewfinder, and the sensor is exposed, creating the photo. The digital image is produced by the sensor, an array of photoreceptors on a microchip capable of recording light values. Many modern DSLRs offer "live view", the framing of the subject from the sensor onto a digital screen, and many have a hotshoe.
The sensor, which at the high end is a full-frame sensor, is much larger than in most other camera types, typically 18 mm to 36 mm on the diagonal (crop factor 2, 1.6, or 1). The larger sensor permits more light to be received by each pixel; this, combined with the relatively large lenses, provides superior low-light performance. For the same field of view and the same aperture, a larger sensor gives shallower focus. DSLRs can equip interchangeable lenses for versatility, removed at the lens mount of the camera, typically a silver ring on the front side of DSLRs. These lenses work in tandem with the mechanics of the DSLR to adjust aperture and focus. Autofocus is accomplished using sensors in the mirror box, and on most modern lenses it can be activated from the lens itself, triggered upon shutter release.
Digital Still Cameras (DSC)
A digital still camera (DSC), such as the Sony DSC series, is a type of camera that does not use a reflex mirror. DSCs resemble point-and-shoot cameras and are the most common type of camera, due to their affordable price and quality.
For examples, see the list of Sony Cyber-shot cameras.
Fixed-mirror DSLT cameras
Cameras with fixed semi-transparent mirrors, also known as DSLT cameras, such as the Sony SLT series, are single-lens cameras without a moving reflex mirror as in a conventional DSLR. A semi-transparent mirror transmits some of the light to the image sensor and reflects some of the light along the path to a pentaprism/pentamirror, which then goes to an optical viewfinder (OVF), as is done with a reflex mirror in DSLR cameras. The total amount of light is not changed; some of the light travels one path and some travels the other. The consequence is that DSLT cameras should shoot a half stop differently from DSLRs. Unlike a DSLR, a DSLT has no blind moment while a reflex mirror moves to send light to the sensor instead of the viewfinder. Because there is no time at which light is not traveling along both paths, DSLT cameras get the benefit of continuous auto-focus tracking. This is especially beneficial for burst-mode shooting in low-light conditions and also for tracking when taking video.
Digital rangefinders
A rangefinder is a device to measure subject distance, with the intent to adjust the focus of a camera's objective lens accordingly (open-loop controller). The rangefinder and lens focusing mechanism may or may not be coupled. In common parlance, the term "rangefinder camera" is interpreted very narrowly to denote manual-focus cameras with a visually-read out optical rangefinder based on parallax. Most digital cameras achieve focus through analysis of the image captured by the objective lens and distance estimation, if it is provided at all, is only a byproduct of the focusing process (closed-loop controller).
Line-scan camera systems
A line-scan camera traditionally has a single row of pixel sensors, instead of a matrix of them. The lines are continuously fed to a computer that joins them to each other and makes an image. This is most commonly done by connecting the camera output to a frame grabber which resides in a PCI slot of an industrial computer. The frame grabber acts to buffer the image and sometimes provide some processing before delivering to the computer software for processing. Industrial processes often require height and width measurements performed by digital line-scan systems.
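In software terms, the frame grabber's role reduces to buffering successive rows and stacking them into a 2-D image. A toy sketch (the function and the synthetic data are our own illustration):

```python
import numpy as np

def assemble_image(line_source, n_lines):
    """Join successive 1-D rows from a line-scan camera into a 2-D image,
    as a frame grabber would (toy model; `line_source` yields rows)."""
    return np.vstack([next(line_source) for _ in range(n_lines)])

# Example with synthetic rows of 2048 pixels each:
rows = (np.random.randint(0, 255, 2048) for _ in range(1000))
image = assemble_image(rows, 1000)
print(image.shape)  # (1000, 2048)
```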
Multiple rows of sensors may be used to make colored images, or to increase sensitivity by TDI (time delay and integration).
Many industrial applications require a wide field of view. Traditionally, maintaining consistent light over large 2D areas is quite difficult. With a line-scan camera, all that is necessary is to provide even illumination across the "line" currently being viewed by the camera. This makes it possible to take sharp pictures of objects that pass the camera at high speed.
Such cameras are also commonly used to make photo finishes, to determine the winner when multiple competitors cross the finishing line at nearly the same time. They can also be used as industrial instruments for analyzing fast processes.
Line-scan cameras are also extensively used in imaging from satellites (see push broom scanner). In this case the row of sensors is perpendicular to the direction of satellite motion. Line-scan cameras are widely used in scanners. In this case, the camera moves horizontally.
Superzoom cameras
Digital superzoom cameras are digital cameras that can zoom in very far. These superzoom cameras are suitable for people who have nearsightedness.
The HX series contains Sony's superzoom cameras, such as the HX20V, HX90V and the newest HX99. HX stands for HyperXoom.
Light-field camera
This type of digital camera captures information about the light field emanating from a scene; that is, the intensity of light in a scene, and also the direction that the light rays are traveling in space. This contrasts with a conventional digital camera, which records only light intensity.
Event camera
Instead of measuring the intensity of light over some predetermined time interval (the exposure time), event cameras detect when the intensity of light changes by some threshold for each pixel independently, usually with microsecond precision.
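A toy model of per-pixel event generation (our own illustrative sketch, not any particular sensor's design):

```python
import numpy as np

def generate_events(ref_log, frame, threshold=0.2):
    """Emit +1/-1 per pixel when log intensity has moved by more than
    `threshold` since that pixel's last event; 0 means no event."""
    log_i = np.log(frame + 1e-6)
    delta = log_i - ref_log
    polarity = np.zeros(frame.shape, dtype=int)
    polarity[delta > threshold] = 1
    polarity[delta < -threshold] = -1
    # The per-pixel reference level advances only where an event fired.
    new_ref = np.where(polarity != 0, log_i, ref_log)
    return polarity, new_ref
```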
Integration into other devices
Many devices have a built-in digital camera, including, for example, smartphones, mobile phones, PDAs and laptop computers. Built-in cameras generally store the images in the JPEG file format, although cameras in Apple's iPhone line have used the HEIC format since 2017.
Mobile phones incorporating digital cameras were introduced in Japan in 2001 by J-Phone. In 2003 camera phones outsold stand-alone digital cameras, and in 2006 they outsold film and digital stand-alone cameras. Five billion camera phones were sold in five years, and by 2007 more than half of the installed base of all mobile phones were camera phones. Sales of separate cameras peaked in 2008.
Notable digital camera manufacturers
There are many manufacturers that lead in the production of digital cameras (commonly DSLRs). Each brand embodies a different mission statement that differentiates it from the others beyond the physical technology it produces. While most manufacturers share modern features across their cameras, some specialize in specific details, either physically on the camera or within the system and image quality.
Market trends
Sales of traditional digital cameras have declined due to the increasing use of smartphones for casual photography, which also enable easier manipulation and sharing of photos through the use of apps and web-based services. "Bridge cameras", in contrast, have held their ground with functionality that most smartphone cameras lack, such as optical zoom and other advanced features. DSLRs have also lost ground to mirrorless interchangeable-lens cameras (MILCs) offering the same sensor size in a smaller camera. A few expensive models use a full-frame sensor, just like professional DSLR cameras.
In response to the convenience and flexibility of smartphone cameras, some manufacturers produced "smart" digital cameras that combine features of traditional cameras with those of a smartphone. In 2012, Nikon and Samsung released the Coolpix S800c and Galaxy Camera, the first two digital cameras to run the Android operating system. Since this software platform is used in many smartphones, they can integrate with some of the same services (such as e-mail attachments, social networks and photo sharing sites) that smartphones do and use other Android-compatible software.
In an inversion, some phone makers have introduced smartphones with cameras designed to resemble traditional digital cameras. Nokia released the 808 PureView and Lumia 1020 in 2012 and 2013; the two devices respectively run the Symbian and Windows Phone operating systems, and both include a 41-megapixel camera (along with a camera grip attachment for the latter). Similarly, Samsung introduced the Galaxy S4 Zoom, having a 16-megapixel camera and 10x optical zoom, combining traits from the Galaxy S4 Mini with the Galaxy Camera. The Panasonic Lumix DMC-CM1 is an Android KitKat 4.4 smartphone with a 20 MP 1" sensor, the largest sensor in a smartphone at the time, a fixed Leica lens equivalent to 28 mm at F2.8, the ability to take RAW images and 4K video, and a thickness of 21 mm. Furthermore, the 2018 Huawei P20 Pro, an Android Oreo 8.1 smartphone, has triple Leica lenses on its back: a 40 MP 1/1.7" RGB sensor as the first lens, a 20 MP 1/2.7" monochrome sensor as the second, and an 8 MP 1/4" RGB sensor with 3x optical zoom as the third. Combining the first and second lenses produces bokeh images with a wider dynamic range, while combining the high-megapixel first lens with the optical zoom yields up to 5x zoom without loss of quality by reducing the image size to 8 MP.
Light-field cameras were introduced in 2013 with one consumer product and several professional ones.
After a big dip of sales in 2012, consumer digital camera sales declined again in 2013 by 36 percent. In 2011, compact digital cameras sold 10 million per month. In 2013, sales fell to about 4 million per month. DSLR and MILC sales also declined in 2013 by 10–15% after almost ten years of double digit growth.
Worldwide unit sales of digital cameras declined continuously from 148 million in 2011 to 58 million in 2015, and the trend continued in the following years.
Film camera sales hit their peak at about 37 million units in 1997, while digital camera sales began in 1989. By 2008, the film camera market had collapsed and digital camera sales hit their peak at 121 million units in 2010. Cell phones with an integrated camera were introduced in 2002, and by 2003 they were selling 80 million units per year. By 2011, cell phones with an integrated camera were selling hundreds of millions per year, causing a decline in digital cameras. In 2015, digital camera sales were 35 million units, less than a third of their peak and slightly below film camera sales at their 1997 peak.
Connectivity
Transferring photos
Many digital cameras can connect directly to a computer to transfer data:
Early cameras used the PC serial port. USB is now the most widely used method (most cameras are viewable as USB mass storage), though some have a FireWire port. Some cameras use USB PTP mode for connection instead of USB MSC; some offer both modes.
Other cameras use wireless connections, via Bluetooth or IEEE 802.11 Wi-Fi, such as the Kodak EasyShare One. Wi-Fi integrated Memory cards (SDHC, SDXC) can transmit stored images, video and other files to computers or smartphones. Mobile operating systems such as Android allow automatic upload and backup or sharing of images over Wi-Fi to photo sharing and cloud services.
In addition to transferring media, cameras with integrated Wi-Fi or dedicated Wi-Fi adapters usually allow remote camera control (tethering), including shutter release and exposure control, from computer or smartphone apps.
Cameraphones and some high-end stand-alone digital cameras also use cellular networks to share images. The most common standard on cellular networks is the Multimedia Messaging Service (MMS), commonly called "picture messaging". Another method on smartphones is to send a picture as an email attachment; many older cameraphones, however, do not support email.
A common alternative is the use of a card reader which may be capable of reading several types of storage media, as well as high speed transfer of data to the computer. Use of a card reader also avoids draining the camera battery during the download process. An external card reader allows convenient direct access to the images on a collection of storage media. But if only one storage card is in use, moving it back and forth between the camera and the reader can be inconvenient. Many computers have a card reader built in, at least for SD cards.
Printing photos
Many modern cameras support the PictBridge standard, which allows them to send data directly to a PictBridge-capable printer without the need for a computer. PictBridge uses PTP to transfer images and control information.
Wireless connectivity can also provide for printing photos without a cable connection.
An instant-print camera is a digital camera with a built-in printer. This confers functionality similar to an instant camera, which uses instant film to quickly generate a physical photograph. Such non-digital cameras were popularized by Polaroid with the SX-70 in 1972.
Displaying photos
Many digital cameras include a video output port. Usually S-Video, it sends a standard-definition video signal to a television, allowing the user to show one picture at a time. Buttons or menus on the camera allow the user to select the photo, advance from one to another, or automatically send a "slide show" to the TV.
HDMI has been adopted by many high-end digital camera makers, to show photos in their high-resolution quality on an HDTV.
In January 2008, Silicon Image announced a new technology, Mobile High-Definition Link (MHL), for sending video from mobile devices to a television in digital form. MHL sends pictures as a video stream, up to 1080p resolution, and is compatible with HDMI.
Some DVD recorders and television sets can read memory cards used in cameras; alternatively several types of flash card readers have TV output capability.
Weather-sealing and waterproofing
Cameras can be equipped with a varying amount of environmental sealing to provide protection against splashing water, moisture (humidity and fog), dust and sand, or complete waterproofness to a certain depth and for a certain duration. The latter is one of the approaches to allow underwater photography, the other approach being the use of waterproof housings. Many waterproof digital cameras are also shockproof and resistant to low temperatures.
Some waterproof cameras can be fitted with a waterproof housing to increase the operational depth range. The Olympus 'Tough' range of compact cameras is an example.
Modes
Many digital cameras have preset modes for different applications. Within the constraints of correct exposure various parameters can be changed, including exposure, aperture, focusing, light metering, white balance, and equivalent sensitivity. For example, a portrait might use a wider aperture to render the background out of focus, and would seek out and focus on a human face rather than other image content.
A few cameras are equipped with a voice note (audio-only) recording feature.
Scene modes
Vendors implement a variety of scene modes in camera firmware for various purposes, such as a "landscape mode" which prevents focusing on rainy or stained window glass such as a windshield, and a "sports mode" which reduces motion blur of moving subjects by shortening exposure time with the help of increased light sensitivity. Firmware may also be able to select a suitable scene mode automatically through artificial intelligence.
Image data storage
Many camera phones and most stand-alone digital cameras store image data in flash memory cards or other removable media. Most stand-alone cameras use the SD format, while a few use CompactFlash, CFexpress or other types. In January 2012, a faster XQD card format was announced. In early 2014, some high-end cameras had two hot-swappable memory slots; photographers can swap one of the memory cards while the camera is on. Each memory slot can accept either CompactFlash or SD cards. All new Sony cameras also have two memory slots, one for its Memory Stick and one for SD cards, but these are not hot-swappable.
Throughout use, the firmware estimates the approximate number of photos that can still be stored and indicates it in the viewfinder, preparing the user for an impending hot swap of the memory card and/or file offload.
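A rough illustration of how such a counter can be derived, as free card space divided by a running average of recent file sizes (illustrative Python; all byte figures are made-up examples):

def shots_remaining(free_bytes, recent_file_sizes):
    # Estimate how many more photos fit, from the average recent file size.
    if not recent_file_sizes:
        return 0
    average = sum(recent_file_sizes) / len(recent_file_sizes)
    return int(free_bytes // average)

# ~8 GB free, recent JPEGs around 4 MB each (made-up figures):
print(shots_remaining(8_000_000_000, [4_200_000, 3_900_000, 4_500_000]))  # 1904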
A few cameras used other removable storage such as Microdrives (very small hard disk drives), CD singles (185 MB), and 3.5" floppy disks (e.g. the Sony Mavica). Other unusual formats include:
Onboard (internal) flash memory — cheap cameras and cameras secondary to the device's main use (such as a camera phone). Some have small capacities of 100 megabytes or less, where the intended use is buffer storage for uninterrupted operation during a memory card hot swap.
SuperDisk (LS120) used in two Panasonic digital cameras, the PV-SD4090 and PV-SD5000, which allowed them to use both SuperDisk and 3.5" floppy disks
PC Card hard drives — early professional cameras (discontinued)
PC Card flash memory cards
Thermal printer — known only in the Casio Petit Colle ZR-1 and ZR-10 which printed images immediately rather than storing
Zink technology — printing images immediately rather than storing
PocketZip — media used in the Agfa ePhoto CL30 Clik!
Most manufacturers of digital cameras do not provide drivers and software to allow their cameras to work with Linux or other free software. Still, many cameras use the standard USB mass storage and/or Media Transfer Protocol, and are thus widely supported. Other cameras are supported by the gPhoto project, and many computers are equipped with a memory card reader.
File formats
The Joint Photographic Experts Group standard (JPEG) is the most common file format for storing image data. Other file types include Tagged Image File Format (TIFF) and various raw image formats.
Many cameras, especially high-end ones, support a raw image format. A raw image is the unprocessed set of pixel data directly from the camera's sensor, often saved in a proprietary format. Adobe Systems has released the DNG format, a royalty-free raw image format used by at least 10 camera manufacturers.
Raw files initially had to be processed in specialized image editing programs, but over time many mainstream editing programs, such as Google's Picasa, have added support for raw images. Rendering to standard images from raw sensor data allows more flexibility in making major adjustments without losing image quality or retaking the picture.
Formats for movies are AVI, DV, MPEG, MOV (often containing motion JPEG), WMV, and ASF (basically the same as WMV). Recent formats include MP4, which is based on the QuickTime format and uses newer compression algorithms to allow longer recording times in the same space.
Other formats that are used in cameras (but not for pictures) are the Design rule for Camera File system (DCF), an ISO specification used in almost all cameras since 1998, which defines an internal file structure and naming. Also used is the Digital Print Order Format (DPOF), which dictates what order images are to be printed in and how many copies. DCF 1998 defines a logical file system with 8.3 filenames and makes the usage of either FAT12, FAT16, FAT32 or exFAT mandatory for its physical layer in order to maximize platform interoperability.
Most cameras include Exif data that provides metadata about the picture. Exif data may include aperture, exposure time, focal length, date and time taken. Some are able to tag the location.
Directory and file structure
In order to guarantee interoperability, DCF specifies the file system for image and sound files to be used on formatted DCF media (like removable or non-removable memory) as FAT12, FAT16, FAT32, or exFAT. Media with a capacity of more than 2 GB must be formatted using FAT32 or exFAT.
The filesystem in a digital camera contains a DCIM (Digital Camera IMages) directory, which can contain multiple subdirectories with names such as "123ABCDE" that consist of a unique directory number (in the range 100...999) and five alphanumeric characters, which may be freely chosen and often refer to a camera maker. These directories contain files with names such as "ABCD1234.JPG" that consist of four alphanumeric characters (often "100_", "DSC0", "DSCF", "IMG_", "MOV_", or "P000"), followed by a number. Handling of directories with possibly user-created duplicate numbers may vary among camera firmwares.
DCF 2.0 adds support for DCF optional files recorded in an optional color space (that is, Adobe RGB rather than sRGB). Such files must be indicated by a leading "_" (as in "_DSC" instead of "100_" or "DSC0").
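A sketch of these naming rules as validation patterns (illustrative Python; the file-extension list is an assumption, and real firmware behavior varies):

import re

# Directory: unique number 100-999 plus five freely chosen alphanumerics.
DIR_RE = re.compile(r"^[1-9][0-9]{2}[0-9A-Z]{5}$")
# File: four characters (a leading "_" marks DCF 2.0 Adobe RGB files),
# then a four-digit number; the extension list here is an assumption.
FILE_RE = re.compile(r"^[0-9A-Z_]{4}[0-9]{4}\.(JPG|THM|MOV|RAW)$")

for name in ("100CANON", "099CANON", "DSC01234.JPG", "_DSC0042.JPG"):
    pattern = FILE_RE if "." in name else DIR_RE
    print(name, "valid" if pattern.match(name) else "invalid")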
Thumbnail files
To enable loading many images in miniature view quickly and efficiently, and to retain metadata, some vendors' firmware generates accompanying low-resolution thumbnail files for videos and raw photos. For example, those of Canon cameras end with .THM. JPEG files can already embed a thumbnail image on their own.
Batteries
Digital cameras have become smaller over time, resulting in an ongoing need to develop a battery small enough to fit in the camera and yet able to power it for a reasonable length of time.
Digital cameras utilize either proprietary or standard consumer batteries. Most cameras today use proprietary lithium-ion batteries, while some use standard AA batteries; others primarily use a proprietary lithium-ion rechargeable battery pack but have an optional AA battery holder available.
Proprietary
The most common class of battery used in digital cameras is proprietary battery formats. These are built to a manufacturer's custom specifications. Almost all proprietary batteries are lithium-ion. In addition to being available from the OEM, aftermarket replacement batteries are commonly available for most camera models.
Standard consumer batteries
Digital cameras that utilize off-the-shelf batteries are typically designed to be able to use both single-use disposable and rechargeable batteries, but not with both types in use at the same time. The most common off-the-shelf battery size used is AA. CR2, CR-V3 batteries, and AAA batteries are also used in some cameras. The CR2 and CR-V3 batteries are lithium based, intended for a single use. Rechargeable RCR-V3 lithium-ion batteries are also available as an alternative to non-rechargeable CR-V3 batteries.
Some battery grips for DSLRs come with a separate holder to accommodate AA cells as an external power source.
Conversion of film cameras to digital
When digital cameras became common, many photographers asked whether their film cameras could be converted to digital. The answer was not immediately clear, as it differed among models. For the majority of 35 mm film cameras the answer is no; the reworking and cost would be too great, especially as lenses have been evolving as well as cameras. For most, a conversion to digital, giving enough space for the electronics and allowing a liquid crystal display for previewing, would require removing the back of the camera and replacing it with a custom-built digital unit.
Many early professional SLR cameras, such as the Kodak DCS series, were developed from 35 mm film cameras. The technology of the time, however, meant that rather than being digital "backs" the bodies of these cameras were mounted on large, bulky digital units, often bigger than the camera portion itself. These were factory built cameras, however, not aftermarket conversions.
A notable exception is the Nikon E2 and Nikon E3, which used additional optics to convert the 35 mm format to a 2/3-inch CCD sensor.
A few 35 mm cameras have had digital camera backs made by their manufacturer, Leica being a notable example with the Leica R8–R9. Medium format and large format cameras (those using film stock greater than 35 mm), have a low unit production, and typical digital backs for them cost over $10,000. These cameras also tend to be highly modular, with handgrips, film backs, winders, and lenses available separately to fit various needs.
The very large sensors these backs use lead to enormous image sizes. For example, Phase One's P45 39 MP image back creates a single TIFF image of up to 224.6 MB, and even greater pixel counts are available. Medium format digitals such as this are geared more towards studio and portrait photography than their smaller DSLR counterparts; the ISO speed in particular tends to have a maximum of 400, versus 6400 for some DSLR cameras (the Canon EOS-1D Mark IV and Nikon D3S reach ISO 12800 plus Hi-3 ISO 102400, and the Canon EOS-1D X reaches ISO 204800).
Digital camera backs
In the industrial and high-end professional photography market, some camera systems use modular (removable) image sensors. For example, some medium format SLR cameras, such as the Mamiya 645D series, allow installation of either a digital camera back or a traditional photographic film back.
Area array
CCD
CMOS
Linear array
CCD (monochrome)
3-strip CCD with color filters
Linear array cameras are also called scan backs.
Single-shot
Multi-shot (three-shot, usually)
Most earlier digital camera backs used linear array sensors, moving vertically to digitize the image. Many of them capture only grayscale images. The relatively long exposure times, in the range of seconds or even minutes, generally limit scan backs to studio applications, where all aspects of the photographic scene are under the photographer's control.
Some other camera backs use CCD arrays similar to typical cameras. These are called single-shot backs.
Since it is much easier to manufacture a high-quality linear CCD array with only thousands of pixels than a CCD matrix with millions, very high resolution linear CCD camera backs were available much earlier than their CCD matrix counterparts. For example, one could buy an (albeit expensive) camera back with over 7,000 pixels of horizontal resolution in the mid-1990s, whereas comparable CCD matrix cameras remained difficult to buy for years afterwards. Rotating line cameras, with about 10,000 color pixels in their sensor line, are able to capture about 120,000 lines during one full 360-degree rotation, thereby creating a single digital image of 1,200 megapixels.
Most modern digital camera backs use CCD or CMOS matrix sensors. The matrix sensor captures the entire image frame at once, instead of incrementally scanning the frame area during a prolonged exposure. For example, in 2008 Phase One produced a 39 million pixel digital camera back with a 49.1 x 36.8 mm CCD. This CCD array is a little smaller than a frame of 120 film and much larger than a 35 mm frame (36 x 24 mm). In comparison, consumer digital cameras use CMOS sensor arrays ranging from 36 x 24 mm (full frame on high-end consumer DSLRs) down to 1.28 x 0.96 mm (on camera phones).
See also
List of digital camera brands
Computational photography
DigitaOS
Magic Lantern (firmware)
Pixel shift
Smart camera
Video camera
Digital signal processor
Vision processing unit
Image sensor
Notes
References
External links
History of the digital camera and digital imaging, Digital Camera Museum
American inventions
Audiovisual introductions in 1975
1975 in the arts
1975 in technology
Computer-related introductions in 1975
20th-century inventions | Digital camera | Technology | 11,570 |
54,427 | https://en.wikipedia.org/wiki/Computer%20algebra%20system | A computer algebra system (CAS) or symbolic algebra system (SAS) is any mathematical software with the ability to manipulate mathematical expressions in a way similar to the traditional manual computations of mathematicians and scientists. The development of the computer algebra systems in the second half of the 20th century is part of the discipline of "computer algebra" or "symbolic computation", which has spurred work in algorithms over mathematical objects such as polynomials.
Computer algebra systems may be divided into two classes: specialized and general-purpose. The specialized ones are devoted to a specific part of mathematics, such as number theory, group theory, or teaching of elementary mathematics.
General-purpose computer algebra systems aim to be useful to a user working in any scientific field that requires manipulation of mathematical expressions. To be useful, a general-purpose computer algebra system must include various features such as:
a user interface allowing a user to enter and display mathematical formulas, typically from a keyboard, menu selections, mouse or stylus.
a programming language and an interpreter (the result of a computation commonly has an unpredictable form and an unpredictable size; therefore user intervention is frequently needed),
a simplifier, which is a rewrite system for simplifying mathematical formulas,
a memory manager, including a garbage collector, needed because of the huge intermediate data that may appear during a computation,
arbitrary-precision arithmetic, needed to handle the huge integers that may occur,
a large library of mathematical algorithms and special functions.
The library must not only provide for the needs of the users, but also the needs of the simplifier. For example, the computation of polynomial greatest common divisors is systematically used for the simplification of expressions involving fractions.
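For example, a sketch in the SymPy library (a Python-based CAS) of how cancelling a fraction reduces to a polynomial GCD computation:

from sympy import symbols, gcd, cancel

x = symbols('x')
num = x**2 - 1          # (x - 1)(x + 1)
den = x**2 - 3*x + 2    # (x - 1)(x - 2)
print(gcd(num, den))     # x - 1
print(cancel(num/den))   # (x + 1)/(x - 2)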
This large amount of required computer capabilities explains the small number of general-purpose computer algebra systems. Significant systems include Axiom, GAP, Maxima, Magma, Maple, Mathematica, and SageMath.
History
In the 1950s, while computers were mainly used for numerical computations, there were some research projects into using them for symbolic manipulation. Computer algebra systems began to appear in the 1960s and evolved out of two quite different sources—the requirements of theoretical physicists and research into artificial intelligence.
A prime example of the first development was the pioneering work conducted by the later Nobel Prize laureate in physics Martinus Veltman, who designed a program for symbolic mathematics, especially high-energy physics, called Schoonschip (Dutch for "clean ship") in 1963. Other early systems include FORMAC.
Using Lisp as the programming basis, Carl Engelman created MATHLAB in 1964 at MITRE within an artificial-intelligence research environment. Later MATHLAB was made available to users on PDP-6 and PDP-10 systems running TOPS-10 or TENEX in universities. Today it can still be used on SIMH emulations of the PDP-10. MATHLAB ("mathematical laboratory") should not be confused with MATLAB ("matrix laboratory"), which is a system for numerical computation built 15 years later at the University of New Mexico.
In 1987, Hewlett-Packard introduced the first hand-held calculator CAS with the HP-28 series. Other early handheld calculators with symbolic algebra capabilities included the Texas Instruments TI-89 series and TI-92 calculator, and the Casio CFX-9970G.
The first popular computer algebra systems were muMATH, Reduce, Derive (based on muMATH), and Macsyma; a copyleft version of Macsyma is called Maxima. Reduce became free software in 2008. Commercial systems include Mathematica and Maple, which are commonly used by research mathematicians, scientists, and engineers. Freely available alternatives include SageMath (which can act as a front-end to several other free and nonfree CAS). Other significant systems include Axiom, GAP, Maxima and Magma.
The movement to web-based applications in the early 2000s saw the release of WolframAlpha, an online search engine and CAS which includes the capabilities of Mathematica.
More recently, computer algebra systems have been implemented using artificial neural networks, though as of 2020 they are not commercially available.
Symbolic manipulations
The symbolic manipulations supported typically include:
simplification to a smaller expression or some standard form, including automatic simplification with assumptions and simplification with constraints
substitution of symbols or numeric values for certain expressions
change of form of expressions: expanding products and powers, partial and full factorization, rewriting as partial fractions, constraint satisfaction, rewriting trigonometric functions as exponentials, transforming logic expressions, etc.
partial and total differentiation
some indefinite and definite integration (see symbolic integration), including multidimensional integrals
symbolic constrained and unconstrained global optimization
solution of linear and some non-linear equations over various domains
solution of some differential and difference equations
taking some limits
integral transforms
series operations such as expansion, summation and products
matrix operations including products, inverses, etc.
statistical computation
theorem proving and verification which is very useful in the area of experimental mathematics
optimized code generation
In the above, the word some indicates that the operation cannot always be performed. The short sketch below illustrates several of these manipulations in the SymPy CAS.
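A minimal sketch (Python with the SymPy library; the specific expressions are arbitrary examples):

from sympy import symbols, expand, factor, diff, integrate, limit, series, sin, exp, oo

x = symbols('x')
print(expand((x + 1)**3))                    # x**3 + 3*x**2 + 3*x + 1
print(factor(x**2 - 2*x + 1))                # (x - 1)**2
print(diff(sin(x**2), x))                    # 2*x*cos(x**2)
print(integrate(exp(-x**2), (x, -oo, oo)))   # sqrt(pi)
print(limit(sin(x)/x, x, 0))                 # 1
print(series(exp(x), x, 0, 4))               # 1 + x + x**2/2 + x**3/6 + O(x**4)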
Additional capabilities
Many also include:
a programming language, allowing users to implement their own algorithms
arbitrary-precision numeric operations
exact integer arithmetic and number theory functionality
editing of mathematical expressions in two-dimensional form
plotting graphs and parametric plots of functions in two and three dimensions, and animating them
drawing charts and diagrams
APIs for linking to external programs such as databases, or for using the computer algebra system from another programming language
string manipulation such as matching and searching
add-ons for use in applied mathematics such as physics, bioinformatics, computational chemistry and packages for physical computation
solvers for differential equations
Some include:
graphic production and editing such as computer-generated imagery and signal processing as image processing
sound synthesis
Some computer algebra systems focus on specialized disciplines; these are typically developed in academia and are free. They can be inefficient for numeric operations as compared to numeric systems.
Types of expressions
The expressions manipulated by the CAS typically include polynomials in multiple variables; standard functions of expressions (sine, exponential, etc.); various special functions (Γ, ζ, erf, Bessel functions, etc.); arbitrary functions of expressions; optimization; derivatives, integrals, simplifications, sums, and products of expressions; truncated series with expressions as coefficients, matrices of expressions, and so on. Numeric domains supported typically include floating-point representation of real numbers, integers (of unbounded size), complex (floating-point representation), interval representation of reals, rational number (exact representation) and algebraic numbers.
Use in education
There have been many advocates for increasing the use of computer algebra systems in primary and secondary-school classrooms. The primary reason for such advocacy is that computer algebra systems represent real-world mathematics better than paper-and-pencil or hand-calculator based mathematics does.
This push for increasing computer usage in mathematics classrooms has been supported by some boards of education. It has even been mandated in the curriculum of some regions.
Computer algebra systems have been extensively used in higher education. Many universities offer either specific courses on developing their use, or they implicitly expect students to use them for their course work. The companies that develop computer algebra systems have pushed to increase their prevalence among university and college programs.
CAS-equipped calculators are not permitted on the ACT, the PLAN, and in some classrooms, though they may be permitted on all of College Board's calculator-permitted tests, including the SAT, some SAT Subject Tests, and the AP Calculus, Chemistry, Physics, and Statistics exams.
Mathematics used in computer algebra systems
Knuth–Bendix completion algorithm
Root-finding algorithms
Symbolic integration via e.g. Risch algorithm or Risch–Norman algorithm
Hypergeometric summation via e.g. Gosper's algorithm
Limit computation via e.g. Gruntz's algorithm
Polynomial factorization via e.g., over finite fields, Berlekamp's algorithm or Cantor–Zassenhaus algorithm.
Greatest common divisor via e.g. Euclidean algorithm (see the sketch after this list)
Gaussian elimination
Gröbner basis via e.g. Buchberger's algorithm; generalization of Euclidean algorithm and Gaussian elimination
Padé approximant
Schwartz–Zippel lemma and testing polynomial identities
Chinese remainder theorem
Diophantine equations
Landau's algorithm (nested radicals)
Derivatives of elementary functions and special functions. (e.g. See derivatives of the incomplete gamma function.)
Cylindrical algebraic decomposition
Quantifier elimination over real numbers via cylindrical algebraic decomposition
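As referenced above, a bare-bones version of the Euclidean algorithm on univariate polynomials (illustrative Python using exact rational arithmetic; coefficient lists are ordered lowest degree first, and inputs are assumed nonzero):

from fractions import Fraction

def degree(p):
    # p is a coefficient list, lowest degree first; -1 marks the zero polynomial.
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1

def poly_rem(a, b):
    # Remainder of a modulo b over the rationals.
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    da, db = degree(a), degree(b)
    while da >= db:
        coef, shift = a[da] / b[db], da - db
        for i in range(db + 1):
            a[i + shift] -= coef * b[i]
        da = degree(a)
    return a

def poly_gcd(a, b):
    # Euclidean algorithm; the result is normalized to be monic.
    while degree(b) >= 0:
        a, b = b, poly_rem(a, b)
    lead = a[degree(a)]
    return [c / lead for c in a]

# gcd(x**2 - 1, x**2 - 3*x + 2) = x - 1, i.e. coefficients [-1, 1, 0]:
print(poly_gcd([-1, 0, 1], [2, -3, 1]))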
See also
List of computer algebra systems
Scientific computation
Statistical package
Automated theorem proving
Algebraic modeling language
Constraint-logic programming
Satisfiability modulo theories
References
External links
Curriculum and Assessment in an Age of Computer Algebra Systems - From the Education Resources Information Center Clearinghouse for Science, Mathematics, and Environmental Education, Columbus, Ohio.
Richard J. Fateman. "Essays in algebraic simplification." Technical report MIT-LCS-TR-095, 1972. (Of historical interest in showing the direction of research in computer algebra; available at the MIT LCS website.)
Algebra education | Computer algebra system | Mathematics | 1,891 |
77,933,869 | https://en.wikipedia.org/wiki/Icom%20IC-V82 | The Icom IC-V82 is a VHF-band handheld transceiver designed by Icom for radio amateurs and professionals who require VHF communication. Although a little outdated (launched in 2004 and discontinued in 2014), the IC-V82 is still valued on the second-hand market for a number of additional features, such as the ability to convert it, by adding a module, into a digital device, which makes it ideal for certain applications requiring voice and/or data encryption.
Features
It is a portable VHF transceiver with coverage in the two-meter band (144–146 MHz) and a maximum output power of 7 watts. It was manufactured and sold by Icom from 2004 to 2014.
Frequency: VHF 136–174 MHz
Output power: 7 W (high), 4 W (medium), 0.5 W (low)
Modulation: FM (frequency modulation)
Channel memory: 207 channels
Screen: LCD with backlight
Battery: BP-222N (Ni-Cd) or BP-227 (Li-Ion)
Digital Module
One of the most notable features of the IC-V82 is the ability to convert it into a digital device using the additional UT-118 module sold by Icom Inc. This module adds advanced digital communication and encryption capabilities, including a trunking DMR protocol, digital voice communication, and low-speed data in the D-STAR format.
History
In June 2022, United Against Nuclear Iran, a U.S. advocacy organization, identified the Icom IC-V82 as being used by Hezbollah, a U.S.-designated Foreign Terrorist Organization. It sent a letter to Icom outlining its concerns about the dual-use capability of the transceiver (analog plus encrypted digital) and about Icom's business ties to Power Group (Icom's representative in Lebanon) and Faza Gostrar, which claims to be the "Official ICOM representative in Iran".
Many of the devices purchased by Hezbollah that later played a role in the 2024 Lebanon electronic device attacks, which killed at least 25 people and wounded more than 700, were reported as being IC-V82s. Icom opened an investigation into the case on September 19, 2024, while a sales executive at the company's U.S. subsidiary said the devices involved appeared to be counterfeit units.
Counterfeit models and controversy
After Icom discontinued the IC-V82 in 2014, counterfeit models emerged in China. In addition, another counterfeit model was sold to Hezbollah, and many of the devices used by this group, including pagers like the Gold Apollo AR924, were detonated in the September 2024 attacks.
After ceasing production, Icom issued an advisory warning about counterfeit transceivers, including the IC-V82. In October 2018, the company issued a cease-and-desist order against a Chinese manufacturer suspected of producing counterfeit Icom products; it also noted that this was not the first time it had taken such steps.
Protocols
IDAS
IDAS is Icom's implementation of the NXDN protocol for two-way digital radio products intended for commercial private land mobile radios (PLMRs) and low-end public safety communications systems. NXDN is a Common Air Interface (CAI) technical standard for mobile communications, jointly developed by Icom and Kenwood Corporation.
D-STAR
The "open" D-STAR radio system was developed by Icom based on digital radio protocols developed by the Japan Amateur Radio League and funded by the Ministry of Posts and Telecommunications of Japan. This system is designed to provide advanced voice and data communications over amateur radio using open standards.
Accessories and options
The IC-V82 has a variety of accessories that improve its functionality and ease of use:
Antenna: high-gain antenna to improve reception and transmission.
Belt clip: for comfortable and safe transport.
Optional batteries: available in different capacities and technologies (Ni-Cd, Li-Ion).
References
External links
World official website (in English)
Old information from Icom Archived (in English)
Complete list of all radio amateur equipment manufactured by Icom
Walkie-talkies
Consumer electronics
Israeli–Lebanese conflict
Mobile telecommunications user equipment
Amateur radio transceivers | Icom IC-V82 | Technology | 868 |
24,423,507 | https://en.wikipedia.org/wiki/C27H34O11 | The molecular formula C27H34O11 (molar mass: 534.55 g/mol, exact mass: 534.2101 u) may refer to:
Arctiin, a lignan
Phillyrin, a lignan | C27H34O11 | Chemistry | 67 |
42,934 | https://en.wikipedia.org/wiki/Cryostasis%20%28clathrate%20hydrates%29 | The term cryostasis was introduced to name a reversible preservation technology for living biological objects based on the use of clathrate-forming gaseous substances under increased hydrostatic pressure and at hypothermic temperatures.
Living tissues cooled below the freezing point of water are damaged by the dehydration of the cells as ice is formed between the cells. The mechanism of freezing damage in living biological tissues has been elucidated by Renfret.
The vapor pressure of the ice is lower than the vapor pressure of the water in the solution within the surrounding cells, so as heat is removed at the freezing point of the solution, the ice crystals grow between the cells, extracting water from them. As the ice crystals grow, the volume of the cells shrinks, and the cells are crushed between the ice crystals. Additionally, as the cells shrink, the solutes inside them are concentrated in the remaining water, increasing the intracellular ionic strength and interfering with the organization of the proteins and other organized intracellular structures. Eventually, the solute concentration inside the cells reaches the eutectic point and freezes. The final state of frozen tissue is pure ice in the former extracellular spaces and, inside the cell membranes, a mixture of concentrated cellular components in ice and bound water. In general, this process is not reversible to the point of restoring the tissues to life.
Cryostasis utilizes clathrate-forming gases that penetrate and saturate the biological tissues, causing clathrate hydrate formation (under specific pressure–temperature conditions) inside the cells and in the extracellular matrix. Clathrate hydrates are a class of solids in which gas molecules occupy "cages" made up of hydrogen-bonded water molecules. These "cages" are unstable when empty, collapsing into a conventional ice crystal structure, but they are stabilised by the inclusion of the gas molecule within them. Most low-molecular-weight gases (including CH4, H2S, Ar, Kr, and Xe) will form a hydrate under some pressure–temperature conditions.
Clathrate formation prevents the biological tissues from dehydrating, which would otherwise cause irreversible inactivation of intracellular enzymes.
See also
Cryopreservation
Cryoprotectant
Hibernation
References
Cryobiology | Cryostasis (clathrate hydrates) | Physics,Chemistry,Biology | 465 |
9,075,104 | https://en.wikipedia.org/wiki/Dark%20Energy%20Survey | The Dark Energy Survey (DES) is an astronomical survey designed to constrain the properties of dark energy. It uses images taken in the near-ultraviolet, visible, and near-infrared to measure the expansion of the universe using Type Ia supernovae, baryon acoustic oscillations, the number of galaxy clusters, and weak gravitational lensing. The collaboration is composed of research institutions and universities from the United States, Australia, Brazil, the United Kingdom, Germany, Spain, and Switzerland. The collaboration is divided into several scientific working groups. The director of DES is Josh Frieman.
The DES began by developing and building Dark Energy Camera (DECam), an instrument designed specifically for the survey. This camera has a wide field of view and high sensitivity, particularly in the red part of the visible spectrum and in the near infrared. Observations were performed with DECam mounted on the 4-meter Víctor M. Blanco Telescope, located at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. Observing sessions ran from 2013 to 2019; the DES collaboration has published results from the first three years of the survey.
DECam
DECam, short for the Dark Energy Camera, is a large camera built to replace the previous prime focus camera on the Victor M. Blanco Telescope. The camera consists of three major components: mechanics, optics, and CCDs.
Mechanics
The mechanics of the camera consist of a filter changer with an 8-filter capacity and a shutter. There is also an optical barrel that supports 5 corrector lenses, the largest of which is 98 cm in diameter. These components are attached to the CCD focal plane, which is cooled with liquid nitrogen in order to reduce thermal noise in the CCDs. The focal plane is also kept in an extremely low vacuum to prevent the formation of condensation on the sensors. The entire camera with lenses, filters, and CCDs weighs approximately 4 tons. When mounted at the prime focus it was supported with a hexapod system allowing for real-time focal adjustment.
Optics
The camera is outfitted with u, g, r, i, z, and Y filters spanning roughly 340–1070 nm, similar to those used in the Sloan Digital Sky Survey (SDSS). This allows DES to obtain photometric redshift measurements to z≈1. DECam also contains five lenses acting as corrector optics to extend the telescope's field of view to a diameter of 2.2°, one of the widest fields of view available for ground-based optical and infrared imaging. One significant difference between previous charge-coupled devices (CCDs) at the Victor M. Blanco Telescope and DECam is the improved quantum efficiency in the red and near-infrared wavelengths.
CCDs
The scientific sensor array on DECam is an array of 62 2048×4096 pixel back-illuminated CCDs totaling 520 megapixels; an additional 12 2048×2048 pixel CCDs (50 Mpx) are used for guiding the telescope, monitoring focus, and alignment. The full DECam focal plane contains 570 megapixels. The CCDs for DECam use high-resistivity silicon manufactured by Dalsa and LBNL with 15×15 micron pixels. By comparison, the OmniVision Technologies back-illuminated CCD that was used in the iPhone 4 has a 1.75×1.75 micron pixel with 5 megapixels. The larger pixels allow DECam to collect more light per pixel, improving low-light sensitivity, which is desirable for an astronomical instrument. DECam's CCDs also have a 250-micron crystal depth; this is significantly larger than in most consumer CCDs. The additional crystal depth increases the path length travelled by entering photons. This, in turn, increases the probability of interaction and allows the CCDs to have an increased sensitivity to lower-energy photons, extending the wavelength range to 1050 nm. Scientifically this is important because it allows one to look for objects at a higher redshift, increasing statistical power in the studies mentioned above. When placed in the telescope's focal plane, each pixel has a width of 0.27 arcseconds on the sky, resulting in a total field of view of 3 square degrees.
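A back-of-the-envelope check that the quoted pixel count and pixel scale reproduce the quoted field of view (illustrative Python):

n_ccds = 62
pixels_per_ccd = 2048 * 4096
pixel_scale_deg = 0.27 / 3600            # 0.27 arcseconds per pixel, in degrees

total_pixels = n_ccds * pixels_per_ccd   # ~520 megapixels
fov_deg2 = total_pixels * pixel_scale_deg**2
print(f"{total_pixels/1e6:.0f} Mpx -> {fov_deg2:.2f} deg^2")  # ~2.9, i.e. ~3 deg^2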
Survey
DES imaged 5,000 square degrees of the southern sky in a footprint that overlaps with the South Pole Telescope and Stripe 82 (in large part avoiding the Milky Way). The survey took 758 observing nights spread over six annual sessions between August and February to complete, covering the survey footprint ten times in five photometric bands (g, r, i, z, and Y). The survey reached a depth of 24th magnitude in the i band over the entire survey area. Longer exposure times and faster observing cadence were made in five smaller patches totaling 30 square degrees to search for supernovae.
First light was achieved on 12 September 2012; after a verification and testing period, scientific survey observations started in August 2013. The last observing session was completed on 9 January 2019.
Other surveys using DECam
After completion of the Dark Energy Survey, the Dark Energy Camera was used for other sky surveys:
Dark Energy Camera Legacy Survey (DECaLS) covers the sky below 32° declination, excluding the Milky Way. This survey covers over 9000 square degrees.
The DESI Legacy Imaging Surveys (Legacy Surveys), as of data release 10, include DECaLS, BASS and MzLS. They also incorporate additional DECam data, which means they cover almost the entire extragalactic southern sky, including parts of the Magellanic Clouds. The purpose of the Legacy Surveys is to find targets for the Dark Energy Spectroscopic Instrument.
Dark Energy Camera Plane Survey (DECaPS), covers the Milky Way in the southern sky.
Observing
Each year from August through February, observers stay in dormitories on the mountain. During a weeklong period of work, observers sleep during the day and use the telescope and camera at night. Some DES members work at the telescope console to monitor operations, while others monitor camera operations and data processing.
For the wide-area footprint observations, DES takes a new image roughly every two minutes: the exposures are typically 90 seconds long, with another 30 seconds for reading out the camera data and slewing to point the telescope at its next target. Despite the restrictions on each exposure, the team also needs to account for different sky conditions, such as moonlight and cloud cover.
To get better images, the DES team uses a computer algorithm called the "Observing Tactician" (ObsTac) to help sequence observations. It optimizes among different factors, such as the date and time, weather conditions, and the position of the moon. ObsTac automatically points the telescope in the best direction and selects the exposure, using the best light filter. It also decides whether to take a wide-area or time-domain survey image, depending on whether or not the exposure will also be used for supernova searches.
Results
Cosmology
The Dark Energy Survey collaboration has published several papers presenting its cosmology results, most of them based on the first-year and third-year data. These results were obtained with a multi-probe methodology, which mainly combines data from galaxy-galaxy lensing, weak lensing in its different forms, cosmic shear, galaxy clustering, and the photometric data set.
For the first-year data collected by DES, the collaboration presented cosmological constraints from galaxy clustering and weak lensing and from cosmic shear measurements, reporting parameter constraints at 68% confidence limits for both the ΛCDM and wCDM models. Combining the most significant measurements of cosmic shear in a galaxy survey yielded further ΛCDM constraints at 68% confidence limits. Other cosmological analyses from the first-year data included a derivation and validation of redshift distribution estimates, and their uncertainties, for the galaxies used as weak lensing sources. The DES team also published a paper summarizing the full photometric data set for cosmology from its first year.
For the third-year data collected by DES, the collaboration updated the cosmological constraints for the ΛCDM model with new cosmic shear measurements, and refined them further, at 68% confidence limits, with the third-year galaxy clustering and weak lensing results. Similarly, the DES team published its third-year photometric data set for cosmology, comprising nearly 5000 deg2 of imaging in the south Galactic cap and including nearly 390 million objects, with depth reaching S/N ~ 10 for extended objects up to ~ 23.0 and top-of-the-atmosphere photometric uniformity < 3 mmag.
Weak lensing
Weak lensing was measured statistically by measuring the shear-shear correlation function, a two-point function, or its Fourier transform, the shear power spectrum. In April 2015, the Dark Energy Survey released mass maps using cosmic shear measurements of about 2 million galaxies from the science verification data taken between August 2012 and February 2013. In 2021, weak lensing was used to map the dark matter in a region of the southern-hemisphere sky; in 2022, it was combined with galaxy clustering data to give new cosmological constraints; and in 2023, it was combined with data from the Planck telescope and the South Pole Telescope to give further improved constraints.
Another important part of the weak lensing analysis is calibrating the redshifts of the source galaxies. In December 2020 and June 2021, the DES team published two papers showing their results on using weak lensing to calibrate the redshifts of the source galaxies in order to map the matter density field with gravitational lensing.
Gravitational waves
After LIGO detected the first gravitational wave signal from GW170817, DES made follow-up observations of GW170817 using DECam. With DECam's independent discovery of the optical source, the DES team established its association with GW170817 by showing that none of the 1500 other sources found within the event localization region could plausibly be associated with the event. The DES team monitored the source for over two weeks and provided the light curve data as a machine-readable file. From the observational data set, DES concluded that the optical counterpart it had identified near NGC 4993 is associated with GW170817. This discovery ushered in the era of multi-messenger astronomy with gravitational waves and demonstrated the power of DECam to identify the optical counterparts of gravitational-wave sources.
Dwarf galaxies
In March 2015, two teams released their discoveries of several new potential dwarf galaxy candidates found in Year 1 DES data. In August 2015, the Dark Energy Survey team announced the discovery of eight additional candidates in Year 2 DES data. Later, the team found more dwarf galaxies, and with these additional results it was able to examine the properties of the detected dwarf galaxies in more detail, such as their chemical abundances, the structure of their stellar populations, and their stellar kinematics and metallicities. In February 2019, the team also discovered a sixth star cluster in the Fornax dwarf spheroidal galaxy and a tidally disrupted ultra-faint dwarf galaxy.
Baryon acoustic oscillations
The signature of baryon acoustic oscillations (BAO) can be observed in the distribution of tracers of the matter density field and used to measure the expansion history of the Universe. BAO can also be measured using purely photometric data, though at lower significance. The DES observational sample consists of 7 million galaxies distributed over a footprint of 4100 deg2, with a typical redshift uncertainty of 0.03(1+z). The analysis combines the likelihoods derived from angular correlations and spherical harmonics to constrain the ratio of the comoving angular diameter distance at the effective redshift of the sample to the sound horizon scale at the drag epoch.
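A toy version of this distance ratio, the comoving angular diameter distance divided by the sound horizon, in a flat ΛCDM model (illustrative Python; the parameter values and effective redshift are assumptions, not the DES best fit):

import numpy as np

H0, OMEGA_M, R_D = 70.0, 0.3, 147.0   # illustrative values, not DES best fits
C_KMS = 299792.458                     # speed of light in km/s

def comoving_distance(z, n=100_000):
    # D_C = c * integral_0^z dz' / H(z') in flat LambdaCDM; equals the
    # comoving angular diameter distance D_M when the universe is flat.
    zs = np.linspace(0.0, z, n)
    hz = H0 * np.sqrt(OMEGA_M * (1 + zs)**3 + (1 - OMEGA_M))
    return C_KMS * np.trapz(1.0 / hz, zs)   # in Mpc

z_eff = 0.835   # assumed effective redshift of the galaxy sample
print(comoving_distance(z_eff) / R_D)  # the dimensionless BAO observable D_M / r_d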
Type Ia supernova observations
In May 2019, the Dark Energy Survey team published its first cosmology results using Type Ia supernovae, based on the DES-SN3YR data. The team found Ωm = 0.331 ± 0.038 with a flat ΛCDM model and Ωm = 0.321 ± 0.018, w = −0.978 ± 0.059 with a flat wCDM model. Analyzing the same DES-SN3YR data, the team also derived a new measurement of the Hubble constant, in excellent agreement with the Hubble constant measurement from the Planck satellite collaboration in 2018. In June 2019, the DES team published a follow-up paper discussing the systematic uncertainties and the validation of using supernovae to measure the cosmology results mentioned above. The team also published its photometric pipeline and light curve data in another paper the same month.
Minor planets
Several minor planets were discovered by DECam in the course of the Dark Energy Survey, including high-inclination trans-Neptunian objects (TNOs).
{| class="wikitable" style="font-size:89%; float:left; text-align:center; width:27em; margin-right:1em; line-height:1.65em !important; height:155px;"
|+ List of DES discovered minor planets
|-
! Numbered MP designation !! Discovery date
!style="width:3em;" |
! Ref
|-
|
| 19 November 2012
|
|
|-
|
| 8 September 2013
|
|
|-
|
| 18 August 2014
|
|
|-
|
| 19 August 2014
|
|
|-
|
| 15 November 2012
|
|
|-
|
| 15 November 2012
|
|
|-
|
| 28 September 2012
|
|
|-
|
| 12 November 2012
|
|
|-
|
| 13 October 2013
|
|
|-
!colspan=4 style="font-weight:normal; text-align:center; padding:4px 12px;"| Discoveries are credited either to "DECam" or "Dark Energy Survey".
|}
The MPC has assigned the IAU code W84 for DECam's observations of small Solar System bodies. As of October 2019, the MPC inconsistently credits the discovery of nine numbered minor planets, all of them trans-Neptunian objects, to either "DECam" or "Dark Energy Survey". The list does not contain any unnumbered minor planets potentially discovered by DECam, as discovery credits are only given upon a body's numbering, which in turn depends on a sufficiently secure orbit determination.
See also
Cosmic Evolution Survey
References
External links
Dark Energy Survey website
Dark Energy Survey Science Program (PDF)
Dark Energy Survey Data Management
Dark Energy Camera (DECam)
Astronomical surveys
Dark energy
Fermilab experiments
Minor-planet discovering observatories | Dark Energy Survey | Physics,Astronomy | 3,080 |
295,066 | https://en.wikipedia.org/wiki/Source%20lines%20of%20code | Source lines of code (SLOC), also known as lines of code (LOC), is a software metric used to measure the size of a computer program by counting the number of lines in the text of the program's source code. SLOC is typically used to predict the amount of effort that will be required to develop a program, as well as to estimate programming productivity or maintainability once the software is produced.
Measurement methods
Many useful comparisons involve only the order of magnitude of lines of code in a project. Using lines of code to compare a 10,000-line project to a 100,000-line project is far more useful than when comparing a 20,000-line project with a 21,000-line project. While it is debatable exactly how to measure lines of code, discrepancies of an order of magnitude can be clear indicators of software complexity or man-hours.
There are two major types of SLOC measures: physical SLOC (LOC) and logical SLOC (LLOC). Specific definitions of these two measures vary, but the most common definition of physical SLOC is a count of lines in the text of the program's source code excluding comment lines.
Logical SLOC attempts to measure the number of executable "statements", but their specific definitions are tied to specific computer languages (one simple logical SLOC measure for C-like programming languages is the number of statement-terminating semicolons). It is much easier to create tools that measure physical SLOC, and physical SLOC definitions are easier to explain; however, physical SLOC measures are more sensitive to logically irrelevant formatting and style conventions than logical SLOC. Moreover, SLOC measures are often stated without giving their definition, and logical SLOC can often be significantly different from physical SLOC.
Consider this snippet of C code as an example of the ambiguity encountered when determining SLOC:
for (i = 0; i < 100; i++) printf("hello"); /* How many lines of code is this? */
In this example we have:
1 physical line of code (LOC),
2 logical lines of code (LLOC) (for statement and printf statement),
1 comment line.
Depending on the programmer and coding standards, the above "line" of code could be written on many separate lines:
/* Now how many lines of code is this? */
for (i = 0; i < 100; i++)
{
printf("hello");
}
In this example we have:
4 physical lines of code (LOC): is placing braces work to be estimated?
2 logical lines of code (LLOC): what about all the work writing non-statement lines?
1 comment line: tools must account for all code and comments regardless of comment placement.
Even the "logical" and "physical" SLOC values can have a large number of varying definitions. Robert E. Park (while at the Software Engineering Institute) and others developed a framework for defining SLOC values, to enable people to carefully explain and define the SLOC measure used in a project. For example, most software systems reuse code, and determining which (if any) reused code to include is important when reporting a measure.
Origins
At the time when SLOC was introduced as a metric, the most commonly used languages, such as FORTRAN and assembly language, were line-oriented languages. These languages were developed at the time when punched cards were the main form of data entry for programming. One punched card usually represented one line of code. It was one discrete object that was easily counted. It was the visible output of the programmer, so it made sense to managers to count lines of code as a measurement of a programmer's productivity, with lines even being referred to as "card images". Today, the most commonly used computer languages allow a lot more leeway for formatting. Text lines are no longer limited to 80 or 96 columns, and one line of text no longer necessarily corresponds to one line of code.
Usage of SLOC measures
SLOC measures are somewhat controversial, particularly in the way that they are sometimes misused. Experiments have repeatedly confirmed that effort is highly correlated with SLOC, that is, programs with larger SLOC values take more time to develop. Thus, SLOC can be effective in estimating effort. However, functionality is less well correlated with SLOC: skilled developers may be able to develop the same functionality with far less code, so one program with fewer SLOC may exhibit more functionality than another similar program. Counting SLOC as productivity measure has its caveats, since a developer can develop only a few lines and yet be far more productive in terms of functionality than a developer who ends up creating more lines (and generally spending more effort). Good developers may merge multiple code modules into a single module, improving the system yet appearing to have negative productivity because they remove code. Furthermore, inexperienced developers often resort to code duplication, which is highly discouraged as it is more bug-prone and costly to maintain, but it results in higher SLOC.
SLOC counting exhibits further accuracy issues when comparing programs written in different languages, unless adjustment factors are applied to normalize languages. Various computer languages balance brevity and clarity in different ways; as an extreme example, most assembly languages would require hundreds of lines of code to perform the same task as a few characters in APL. A classic comparison is a "hello world" program written in BASIC, C, and COBOL (a language known for being particularly verbose): the BASIC version needs only a single line, while the COBOL version requires many times as many.
Another increasingly common problem in comparing SLOC metrics is the difference between auto-generated and hand-written code. Modern software tools often have the capability to auto-generate enormous amounts of code with a few clicks of a mouse. For instance, graphical user interface builders automatically generate all the source code for graphical control elements simply by dragging an icon onto a workspace. The work involved in creating this code cannot reasonably be compared to the work necessary to write a device driver, for instance. By the same token, a hand-coded custom GUI class could easily be more demanding than a simple device driver; hence the shortcoming of this metric.
There are several cost, schedule, and effort estimation models which use SLOC as an input parameter, including the widely used Constructive Cost Model (COCOMO) series of models by Barry Boehm et al., PRICE Systems True S and Galorath's SEER-SEM. While these models have shown good predictive power, they are only as good as the estimates (particularly the SLOC estimates) fed to them. Many have advocated the use of function points instead of SLOC as a measure of functionality, but since function points are highly correlated to SLOC (and cannot be automatically measured) this is not a universally held view.
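As a worked illustration of how such models consume SLOC, the basic COCOMO effort equation for an "organic" project is effort = 2.4 × (KLOC)^1.05 person-months; a hypothetical 10 KLOC project thus estimates at about 2.4 × 10^1.05 ≈ 27 person-months. The project size here is invented for illustration, and the point is only that any error in the SLOC estimate propagates almost linearly into the effort estimate.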
Example
According to Vincent Maraia, the SLOC values for various operating systems in Microsoft's Windows NT product line are as follows: Windows NT 3.1 (1993) comprised about 4.5 million lines, Windows NT 3.5 (1994) about 7.5 million, Windows NT 4.0 (1996) about 11.5 million, Windows 2000 more than 29 million, Windows XP (2001) about 40 million, and Windows Server 2003 about 50 million.
David A. Wheeler studied the Red Hat distribution of the Linux operating system, and reported that Red Hat Linux version 7.1 (released April 2001) contained over 30 million physical SLOC. He also extrapolated that, had it been developed by conventional proprietary means, it would have required about 8,000 person-years of development effort and would have cost over $1 billion (in year 2000 U.S. dollars).
A similar study was later made of Debian GNU/Linux version 2.2 (also known as "Potato"); this operating system was originally released in August 2000. This study found that Debian GNU/Linux 2.2 included over 55 million SLOC, and if developed in a conventional proprietary way would have required 14,005 person-years and cost US$1.9 billion to develop. Later runs of the tools used report that the following release of Debian had 104 million SLOC, and that the newest release was expected to include over 213 million SLOC.
Utility
Advantages
Scope for automation of counting: since line of code is a physical entity, manual counting effort can be easily eliminated by automating the counting process. Small utilities may be developed for counting the LOC in a program. However, a logical code counting utility developed for a specific language cannot be used for other languages due to the syntactical and structural differences among languages. Physical LOC counters, however, have been produced which count dozens of languages.
An intuitive metric: line of code serves as an intuitive metric for measuring the size of software because it can be seen, and its effect can be visualized. Function points are said to be more of an objective metric which cannot be imagined as a physical entity; it exists only in the logical space. This way, LOC comes in handy to express the size of software among programmers with low levels of experience.
Ubiquitous measure: LOC measures have been around since the earliest days of software. As such, it is arguable that more LOC data is available than any other size measure.
Disadvantages
Lack of accountability: lines-of-code measure suffers from some fundamental problems. Some think that it isn't useful to measure the productivity of a project using only results from the coding phase, which usually accounts for only 30% to 35% of the overall effort.
Lack of cohesion with functionality: experiments have repeatedly confirmed that while effort is highly correlated with LOC, functionality is less well correlated with LOC. That is, skilled developers may be able to develop the same functionality with far less code, so one program with less LOC may exhibit more functionality than another similar program. In particular, LOC is a poor productivity measure of individuals, because a developer who develops only a few lines may still be more productive than a developer creating more lines of code – even more: some good refactoring, such as "extract method" to get rid of redundant code and keep it clean, will mostly reduce the lines of code.
Adverse impact on estimation: because of the problems described in the first point, estimates based on lines of code can easily go badly wrong.
Developer's experience: implementation of a specific logic differs based on the level of experience of the developer. Hence, number of lines of code differs from person to person. An experienced developer may implement certain functionality in fewer lines of code than another developer of relatively less experience does, though they use the same language.
Difference in languages: consider two applications that provide the same functionality (screens, reports, databases). One of the applications is written in C++ and the other application written in a language like COBOL. The number of function points would be exactly the same, but aspects of the application would be different. The lines of code needed to develop the application would certainly not be the same. As a consequence, the amount of effort required to develop the application would be different (hours per function point). Unlike lines of code, the number of function points will remain constant.
Advent of GUI tools: with the advent of GUI-based programming languages and tools such as Visual Basic, programmers can write relatively little code and achieve high levels of functionality. For example, instead of writing a program to create a window and draw a button, a user with a GUI tool can use drag-and-drop and other mouse operations to place components on a workspace. Code that is automatically generated by a GUI tool is not usually taken into consideration when using LOC methods of measurement. This results in variation between languages; the same task that can be done in a single line of code (or no code at all) in one language may require several lines of code in another.
Problems with multiple languages: in today's software scenario, software is often developed in more than one language. Very often, a number of languages are employed depending on the complexity and requirements. Tracking and reporting of productivity and defect rates poses a serious problem in this case, since defects cannot be attributed to a particular language subsequent to integration of the system. Function points stand out as the best measure of size in this case.
Lack of counting standards: there is no standard definition of what a line of code is. Do comments count? Are data declarations included? What happens if a statement extends over several lines? – These are the questions that often arise. Though organizations like SEI and IEEE have published some guidelines in an attempt to standardize counting, it is difficult to put these into practice especially in the face of newer and newer languages being introduced every year.
Psychology: a programmer whose productivity is being measured in lines of code will have an incentive to write unnecessarily verbose code. The more management is focusing on lines of code, the more incentive the programmer has to expand their code with unneeded complexity. This is undesirable, since increased complexity can lead to increased cost of maintenance and increased effort required for bug fixing.
In the PBS documentary Triumph of the Nerds, Microsoft executive Steve Ballmer criticized the practice of counting lines of code:
In IBM there's a religion in software that says you have to count K-LOCs, and a K-LOC is a thousand lines of code. How big a project is it? Oh, it's sort of a 10K-LOC project. This is a 20K-LOCer. And this is 50K-LOCs. And IBM wanted to sort of make it the religion about how we got paid. How much money we made off OS/2, how much they did. How many K-LOCs did you do? And we kept trying to convince them – hey, if we have – a developer's got a good idea and he can get something done in 4K-LOCs instead of 20K-LOCs, should we make less money? Because he's made something smaller and faster, less K-LOC. K-LOCs, K-LOCs, that's the methodology. Ugh! Anyway, that always makes my back just crinkle up at the thought of the whole thing.
According to the Computer History Museum, Apple developer Bill Atkinson found problems with this practice in 1982:
When the Lisa team was pushing to finalize their software in 1982, project managers started requiring programmers to submit weekly forms reporting on the number of lines of code they had written. Bill Atkinson thought that was silly. For the week in which he had rewritten QuickDraw's region calculation routines to be six times faster and 2000 lines shorter, he put "-2000" on the form. After a few more weeks the managers stopped asking him to fill out the form, and he gladly complied.
See also
Software development effort estimation
Estimation (project management)
Cost estimation in software engineering
Notes
References
Further reading
External links
Definitions of Practical Source Lines of Code Resource Standard Metrics (RSM) defines "effective lines of code" as a realistic code metric independent of programming style.
Effective Lines of Code eLOC Metrics for popular Open Source Software Linux Kernel 2.6.17, Firefox, Apache HTTPD, MySQL, PHP using RSM.
Tanenbaum, Andrew S. Modern Operating Systems (2nd ed.). Prentice Hall.
Folklore.org: Macintosh Stories: -2000 Lines Of Code
Software metrics | Source lines of code | Mathematics,Engineering | 3,127 |
37,800 | https://en.wikipedia.org/wiki/Dendrochronology | Dendrochronology (or tree-ring dating) is the scientific method of dating tree rings (also called growth rings) to the exact year they were formed in a tree. As well as dating them, this can give data for dendroclimatology, the study of climate and atmospheric conditions during different periods in history from the wood of old trees. Dendrochronology derives from the Ancient Greek (), meaning "tree", (), meaning "time", and (), "the study of".
Dendrochronology is useful for determining the precise age of samples, especially those that are too recent for radiocarbon dating, which always produces a range rather than an exact date. However, for a precise date of the death of the tree a full sample to the edge is needed, which most trimmed timber will not provide. It also gives data on the timing of events and rates of change in the environment (most prominently climate) and also in wood found in archaeology or works of art and architecture, such as old panel paintings. It is also used as a check in radiocarbon dating to calibrate radiocarbon ages.
New growth in trees occurs in a layer of cells near the bark. A tree's growth rate changes in a predictable pattern throughout the year in response to seasonal climate changes, resulting in visible growth rings. Each ring marks a complete cycle of seasons, or one year, in the tree's life. As of 2023, securely dated tree-ring data for Germany and Ireland are available going back 13,910 years. A new method is based on measuring variations in oxygen isotopes in each ring, and this 'isotope dendrochronology' can yield results on samples which are not suitable for traditional dendrochronology due to too few or too similar rings. Some regions have "floating sequences", with gaps which mean that earlier periods can only be approximately dated. As of 2024, only three areas have continuous sequences going back to prehistoric times: the foothills of the Northern Alps, the southwestern United States and the British Isles. Miyake events, which are major spikes in cosmic rays at known dates, are visible in tree rings and can fix the dating of a floating sequence.
History
The Greek botanist Theophrastus (c. 371 – c. 287 BC) first mentioned that the wood of trees has rings. In his Trattato della Pittura (Treatise on Painting), Leonardo da Vinci (1452–1519) was the first person to mention that trees form rings annually and that their thickness is determined by the conditions under which they grew. In 1737, French investigators Henri-Louis Duhamel du Monceau and Georges-Louis Leclerc de Buffon examined the effect of growing conditions on the shape of tree rings. They found that in 1709, a severe winter produced a distinctly dark tree ring, which served as a reference for subsequent European naturalists. In the U.S., Alexander Catlin Twining (1801–1884) suggested in 1833 that patterns among tree rings could be used to synchronize the dendrochronology of various trees and thereby to reconstruct past climates across entire regions. The English polymath Charles Babbage proposed using dendrochronology to date the remains of trees in peat bogs or even in geological strata (1835, 1838).
During the latter half of the nineteenth century, the scientific study of tree rings and the application of dendrochronology began. In 1859, the German-American Jacob Kuechler (1823–1893) used crossdating to examine oaks (Quercus stellata) in order to study the record of climate in western Texas. In 1866, the German botanist, entomologist, and forester Julius Theodor Christian Ratzeburg (1801–1871) observed the effects on tree rings of defoliation caused by insect infestations. By 1882, this observation was already appearing in forestry textbooks. In the 1870s, the Dutch astronomer Jacobus Kapteyn (1851–1922) was using crossdating to reconstruct the climates of the Netherlands and Germany. In 1881, the Swiss-Austrian forester Arthur von Seckendorff-Gudent (1845–1886) was using crossdating. From 1869 to 1901, Robert Hartig (1839–1901), a German professor of forest pathology, wrote a series of papers on the anatomy and ecology of tree rings. In 1892, the Russian physicist Fyodor Shvedov (1841–1905) wrote that he had used patterns found in tree rings to predict droughts in 1882 and 1891.
During the first half of the twentieth century, the astronomer A. E. Douglass founded the Laboratory of Tree-Ring Research at the University of Arizona. Douglass sought to better understand cycles of sunspot activity and reasoned that changes in solar activity would affect climate patterns on earth, which would subsequently be recorded by tree-ring growth patterns (i.e., sunspots → climate → tree rings).
Methods
Growth rings
Horizontal cross sections cut through the trunk of a tree can reveal growth rings, also referred to as tree rings or annual rings. Growth rings result from new growth in the vascular cambium, a layer of cells near the bark that botanists classify as a lateral meristem; this growth in diameter is known as secondary growth. Visible rings result from the change in growth speed through the seasons of the year; thus, critical for the title method, one ring generally marks the passage of one year in the life of the tree. Removal of the bark of the tree in a particular area may cause deformation of the rings as the plant overgrows the scar.
The rings are more visible in trees which have grown in temperate zones, where the seasons differ more markedly. The inner portion of a growth ring forms early in the growing season, when growth is comparatively rapid (hence the wood is less dense) and is known as "early wood" (or "spring wood", or "late-spring wood"); the outer portion is the "late wood" (sometimes termed "summer wood", often being produced in the summer, though sometimes in the autumn) and is denser.
Many trees in temperate zones produce one growth-ring each year, with the newest adjacent to the bark. Hence, for the entire period of a tree's life, a year-by-year record or ring pattern builds up that reflects the age of the tree and the climatic conditions in which the tree grew. Adequate moisture and a long growing season result in a wide ring, while a drought year may result in a very narrow one.
Direct reading of tree ring chronologies is a complex science, for several reasons. First, contrary to the single-ring-per-year paradigm, alternating poor and favorable conditions, such as mid-summer droughts, can result in several rings forming in a given year. In addition, particular tree species may present "missing rings", and this influences the selection of trees for study of long time-spans. For instance, missing rings are rare in oak and elm trees.
Critical to the science, trees from the same region tend to develop the same patterns of ring widths for a given period of chronological study. Researchers can compare and match these patterns ring-for-ring with patterns from trees which have grown at the same time in the same geographical zone (and therefore under similar climatic conditions). When one can match these tree-ring patterns across successive trees in the same locale, in overlapping fashion, chronologies can be built up—both for entire geographical regions and for sub-regions. Moreover, wood from ancient structures with known chronologies can be matched to the tree-ring data (a technique called 'cross-dating'), and the age of the wood can thereby be determined precisely. Dendrochronologists originally carried out cross-dating by visual inspection; more recently, they have harnessed computers to do the task, applying statistical techniques to assess the matching. To eliminate individual variations in tree-ring growth, dendrochronologists take the smoothed average of the tree-ring widths of multiple tree-samples to build up a 'ring history', a process termed replication. A tree-ring history whose beginning- and end-dates are not known is called a 'floating chronology'. It can be anchored by cross-matching a section against another chronology (tree-ring history) whose dates are known.
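The statistical matching step can be sketched in code. The sketch below is illustrative only; the ring-width values, lengths, and function names are hypothetical. It slides a floating series along a master chronology and reports the offset with the highest Pearson correlation. Real dendrochronology software additionally detrends and standardizes the series and applies significance statistics, so a raw correlation like this captures only the core idea of cross-dating.

#include <math.h>
#include <stdio.h>

/* Pearson correlation of two ring-width series of length n. */
static double pearson(const double *x, const double *y, int n)
{
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; syy += y[i] * y[i];
        sxy += x[i] * y[i];
    }
    return (sxy - sx * sy / n) /
           sqrt((sxx - sx * sx / n) * (syy - sy * sy / n));
}

/* Slide `sample` (length s) along `master` (length m >= s); return the
 * offset into the master chronology giving the best correlation. */
static int crossdate(const double *master, int m,
                     const double *sample, int s, double *best_r)
{
    int best = 0;
    *best_r = -2.0;                     /* below any possible correlation */
    for (int off = 0; off + s <= m; off++) {
        double r = pearson(master + off, sample, s);
        if (r > *best_r) { *best_r = r; best = off; }
    }
    return best;
}

int main(void)
{
    /* Hypothetical ring widths in millimetres. */
    double master[] = {1.2, 0.8, 1.5, 0.4, 0.9, 1.1, 0.3, 1.4, 0.7, 1.0};
    double sample[] = {0.4, 0.9, 1.1, 0.3, 1.4};   /* floating series */
    double r;
    int off = crossdate(master, 10, sample, 5, &r);
    printf("best match at ring offset %d (r = %.2f)\n", off, r);
    return 0;
}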
A fully anchored and cross-matched chronology for oak and pine in central Europe extends back 12,460 years, and an oak chronology goes back 7,429 years in Ireland and 6,939 years in England. Comparison of radiocarbon and dendrochronological ages supports the consistency of these two independent dendrochronological sequences. Another fully anchored chronology that extends back 8,500 years exists for the bristlecone pine in the Southwest US (White Mountains of California).
Dendrochronological equation
The dendrochronological equation defines the law of growth of tree rings. It was proposed by Russian biophysicist Alexandr N. Tetearing in his work "Theory of populations", and expresses the width ΔL of the annual ring as a function of time t (in years), the density ρ of the wood, a coefficient kv, and the function M(t) of mass growth of the tree.
Ignoring the natural sinusoidal oscillations in tree mass, the changes in the annual ring width can be approximated by a simpler formula involving coefficients c1, c2, and c4 and positive constants a1 and a2. The formula is useful for correctly approximating sample data before the data normalization procedure.
Sampling and dating
Dendrochronology allows specimens of once-living material to be accurately dated to a specific year. Dates are often represented as estimated calendar years B.P., for before present, where "present" refers to 1 January 1950.
Timber cores are sampled and used to measure the width of annual growth rings; by taking samples from different sites within a particular region, researchers can build a comprehensive historical sequence. The techniques of dendrochronology are more consistent in areas where trees grew in marginal conditions such as aridity or semi-aridity, where ring growth is more sensitive to the environment, than in humid areas where tree-ring growth is more uniform (complacent). In addition, some genera of trees are more suitable than others for this type of analysis. For instance, the bristlecone pine is exceptionally long-lived and slow growing, and has been used extensively for chronologies; still-living and dead specimens of this species provide tree-ring patterns going back thousands of years, in some regions more than 10,000 years. Currently, the maximum span for fully anchored chronology is a little over 11,000 years B.P.
IntCal20 is the 2020 "Radiocarbon Age Calibration Curve", which provides a calibrated carbon 14 dated sequence going back 55,000 years. The most recent part, going back 13,900 years, is based on tree rings.
Reference sequences
European chronologies derived from wooden structures initially found it difficult to bridge the gap in the fourteenth century when there was a building hiatus, which coincided with the Black Death. However, there do exist unbroken chronologies dating back to prehistoric times, for example the Danish chronology dating back to 352 BC.
Given a sample of wood, the variation of the tree-ring growths not only provides a match by year, but can also match location because climate varies from place to place. This makes it possible to determine the source of ships as well as smaller artifacts made from wood, but which were transported long distances, such as panels for paintings and ship timbers.
Miyake events
Miyake events, such as the ones in 774–775 and 993–994, can provide fixed reference points in an unknown time sequence as they are due to cosmic radiation. As they appear as spikes in carbon 14 in tree rings for that year all round the world, they can be used to date historical events to the year. For example, wooden houses in the Viking site at L'Anse aux Meadows in Newfoundland were dated by finding the layer with the 993 spike, which showed that the wood is from a tree felled in 1021. Researchers at the University of Bern have provided exact dating of a floating sequence in a Neolithic settlement in northern Greece by tying it to a spike in cosmogenic radiocarbon in 5259 BC.
Frost rings
Frost ring is a term used to designate a layer of deformed, collapsed tracheids and traumatic parenchyma cells in tree ring analysis. They are formed when air temperature falls below freezing during a period of cambial activity. They can be used in dendrochronology to indicate years that are colder than usual.
Applications
Radiocarbon dating calibration
Dates from dendrochronology can be used as a calibration and check of radiocarbon dating. This can be done by checking radiocarbon dates against long master sequences: Californian bristlecone pines, studied in Arizona, were used to develop this method of calibration, because the longevity of the trees (up to c. 4,900 years), together with the use of dead samples, meant that a long, unbroken tree-ring sequence reaching back thousands of years (some 8,500 in the anchored bristlecone chronology) could be developed. Additional studies of European oak trees, such as the central European master sequence extending back more than 12,000 years, can also be used to back up and further calibrate radiocarbon dates.
Climatology
Dendroclimatology is the science of determining past climates from trees primarily from the properties of the annual tree rings. Other properties of the annual rings, such as maximum latewood density (MXD) have been shown to be better proxies than simple ring width. Using tree rings, scientists have estimated many local climates for hundreds to thousands of years previous.
Art history
Dendrochronology has become important to art historians in the dating of panel paintings. However, unlike analysis of samples from buildings, which are typically sent to a laboratory, wooden supports for paintings usually have to be measured in a museum conservation department, which places limitations on the techniques that can be used.
In addition to dating, dendrochronology can also provide information as to the source of the panel. Many Early Netherlandish paintings have turned out to be painted on panels of "Baltic oak" shipped from the Vistula region via ports of the Hanseatic League. Oak panels were used in a number of northern countries such as England, France and Germany. Wooden supports other than oak were rarely used by Netherlandish painters.
Since panels of seasoned wood were used, an uncertain number of years has to be allowed for seasoning when estimating dates. Panels were trimmed of the outer rings, and often each panel only uses a small part of the radius of the trunk. Consequently, dating studies usually result in a terminus post quem (earliest possible) date, and a tentative date for the arrival of a seasoned raw panel using assumptions as to these factors. As a result of establishing numerous sequences, it was possible to date 85–90% of the 250 paintings from the fourteenth to seventeenth century analysed between 1971 and 1982; by now a much greater number have been analysed.
A portrait of Mary, Queen of Scots in the National Portrait Gallery, London was believed to be an eighteenth-century copy. However, dendrochronology revealed that the wood dated from the second half of the sixteenth century. It is now regarded as an original sixteenth-century painting by an unknown artist.
On the other hand, dendrochronology was applied to four paintings depicting the same subject, that of Christ expelling the money-lenders from the Temple. The results showed that the age of the wood was too late for any of them to have been painted by Hieronymus Bosch.
While dendrochronology has become an important tool for dating oak panels, it is not effective in dating the poplar panels often used by Italian painters because of the erratic growth rings in poplar.
The sixteenth century saw a gradual replacement of wooden panels by canvas as the support for paintings, which means the technique is less often applicable to later paintings. In addition, many panel paintings were transferred onto canvas or other supports during the nineteenth and twentieth centuries.
Archaeology
The dating of buildings with wooden structures and components is also done by dendrochronology; dendroarchaeology is the term for the application of dendrochronology in archaeology. While archaeologists can date wood and when it was felled, it may be difficult to definitively determine the age of a building or structure in which the wood was used; the wood could have been reused from an older structure, may have been felled and left for many years before use, or could have been used to replace a damaged piece of wood. The dating of buildings via dendrochronology thus requires knowledge of the history of building technology. Many prehistoric forms of buildings used "posts" that were whole young tree trunks; where the bottom of the post has survived in the ground these can be especially useful for dating.
Examples:
The Post Track and Sweet Track, ancient timber trackways in the Somerset levels, England, have been dated to 3838 BC and 3807 BC.
Navan Fort where in Prehistoric Ireland a large structure was built with more than two hundred posts. The central oak post was felled in 95 BC.
The Fairbanks House in Dedham, Massachusetts. While the house had long been claimed to have been built around 1636 (and to be the oldest wood-framed house in North America), core samples of wood taken from a summer beam confirmed the wood was from an oak tree felled in 1637–8, and wood was not seasoned before use in building at that time in New England. An additional sample from another beam yielded a date of 1641, thus confirming the house had been constructed starting in 1638 and finished sometime after 1641.
The burial chamber of Gorm the Old, who died c. 958, was constructed from wood of timbers felled in 958.
Veliky Novgorod, where, between the tenth and the fifteenth century, numerous consecutive layers of wooden log pavement have been placed over the accumulating dirt.
Measurement platforms, software, and data formats
There are many different file formats used to store tree ring width data. An effort at standardisation was made with the development of TRiDaS. Further development led to the database software Tellervo, which is based on the new standard format whilst being able to import many different data formats. The desktop application can be attached to measurement devices and works with the database server, which is installed separately.
Continuous sequence
Bard et al. write in 2023: "The oldest tree-ring series are known as floating since, while their constituent rings can be counted to create a relative internal chronology, they cannot be dendro-matched with the main Holocene absolute chronology. However, 14C analyses performed at high resolution on overlapped absolute and floating tree-rings series enable one to link them almost absolutely and hence to extend the calibration on annual tree rings until ≈13 900 cal yr BP."
Related chronologies
Herbchronology is the analysis of annual growth rings (or simply annual rings) in the secondary root xylem of perennial herbaceous plants. Similar seasonal patterns also occur in ice cores and in varves (layers of sediment deposition in a lake, river, or sea bed). The deposition pattern in the core will vary for a frozen-over lake versus an ice-free lake, and with the fineness of the sediment. Sclerochronology is the analogous study of growth increments in the hard tissues of organisms such as mollusc shells and corals.
Some columnar cacti also exhibit similar seasonal patterns in the isotopes of carbon and oxygen in their spines (acanthochronology). These are used for dating in a manner similar to dendrochronology, and such techniques are used in combination with dendrochronology, to plug gaps and to extend the range of the seasonal data available to archaeologists and paleoclimatologists.
A similar technique is used to estimate the age of fish stocks through the analysis of growth rings in the otolith bones.
See also
Dendrology
International Tree-Ring Data Bank
Post excavation
Timeline of dendrochronology timestamp events
References
External links
Nottingham Tree-Ring Dating Laboratory
Oxford Tree-Ring Laboratory
Dendrochronology and Art History of Painted Ceilings (Historic Environment Scotland, 2017).
Video & commentary on medullary rays, heart wood, and tree rings.
Video & commentary on Tree Rings – Formation and Purpose
Bibliography of Dendrochronology
Multilingual Glossary of Dendrochronology
Digital Collaboratory for Cultural Dendrochronology (DCCD)
International Tree-Ring Data Bank
Laboratory of Tree-Ring Research University of Arizona
"Tree Ring Science", the academic site of Prof. Henri D. Grissino-Mayer, Department of Geography, The University of Tennessee, and the Laboratory of Tree-Ring Science
American inventions
Art history
Conservation and restoration of cultural heritage
Dating methods
Dendrology
Incremental dating
Paleoecology | Dendrochronology | Biology | 4,347 |
68,013,481 | https://en.wikipedia.org/wiki/Gallium%20palladide | Gallium palladide (GaPd or PdGa) is an intermetallic combination of gallium and palladium. It has the iron monosilicide crystal structure. The compound has been suggested as an improved catalyst for hydrogenation reactions. In principle, gallium palladide can be a more selective catalyst since, unlike in substituted compounds, the palladium atoms are spaced out in a regular crystal structure rather than randomly.
References
Intermetallics
Palladium compounds
Gallium compounds
Iron monosilicide structure type | Gallium palladide | Physics,Chemistry,Materials_science | 107 |
45,318,437 | https://en.wikipedia.org/wiki/Ecosystem%20decay | Ecosystem decay is a term coined by Thomas Lovejoy for the process by which species become locally extinct following habitat fragmentation. This process contributed to the extinction of several species, including the Irish elk. Ecosystem decay can be attributed mainly to population isolation, which leads to inbreeding and, in turn, to a decline in the populations of local species. Another factor is the absence of competition, which prevents the mechanisms of natural selection from benefiting the population and leaves animals without the traits needed to adjust and adapt to a new environment. Habitat fragmentation and loss lead to smaller habitat sizes, and ecosystem decay predicts that ecological processes are changed so heavily in smaller habitats that the loss in diversity is more extreme than expected from fragmentation alone.
Although related to forest fragmentation and island biogeography, ecosystem decay describes what results within the remaining fragments once forest fragmentation has occurred.
Overview
Ecosystem decay is a natural phenomenon that has several resulting features.
Decline of native populations of animals
Decrease in genetic diversity
Decrease of the interior:edge ratio
Isolation of an area of viable habitat
Reduction in viable habitats and often extreme separation
Process
The process through which ecosystem decay occurs can be long and complicated or short and hasty. Overall, it still follows some basic guidelines. First, a piece of habitat is surrounded, and thus isolated, by farmland or cities.
Second, pollination of the plants ceases and the number of species thins out. Third, through generations of inbreeding (with mortality outpacing the rate of successful births) and increasingly infertile soil, the forest fragment slowly declines to nothing.
Causes
Ecosystem decay is commonly caused by the harvesting of rain forest, whether in compliance with certain laws or illegally for profit. Countries such as Brazil prohibit the harvesting of Brazil nut trees, so trees and groves of this species are left standing while surrounding forest is cleared, causing forest fragmentation and thus ecosystem decay. Cities, roads, farms and any other substantial barrier impeding an animal's habitat can be a direct or an indirect cause. Naturally occurring fires and rising sea levels on low land can also cause habitat fragmentation and thus ecosystem decay. Although this process is much more lengthy, many species, such as the Irish elk and several species of ancient Australian marsupials, have been indirectly killed this way, with contributions from climate change, glaciation and forest fires.
Studies
Eleonore Setz studied a patch of equatorial rainforest named reserve #1202, containing Pithecia pithecia (white-faced sakis), to examine the effects of ecosystem decay. The 9.2-hectare (less than 25-acre) area had been isolated for five years when David Quammen reported on the fragmentation of the sakis' habitat, which had left them stranded. The population of P. pithecia was slowly declining at the time of the study and had fallen to six individuals.
References
General references
Harris, Larry D. (1984). The Fragmented Forest: Island Biogeography Theory and the Preservation of Biotic Diversity. The University of Chicago Press. .
Ecosystem Decay of Amazonian Forest Fragments: a 22-Year Investigation (Conservation Biology, Pages 605–618, Volume 16, No. 3, June 2002) William F. Laurance, Thomas E. Lovejoy, Heraldo L. Vasconcelos, Emilio M. Bruna, Raphael K. Didham, Philip C. Stouffer, Claude Gascon, Richard O. Bierregaard, Susan Laurance and Erica Sampaio
Habitat
Ecology | Ecosystem decay | Biology | 687 |
11,421,646 | https://en.wikipedia.org/wiki/Prime%20k-tuple | In number theory, a prime k-tuple is a finite collection of values representing a repeatable pattern of differences between prime numbers. For a k-tuple (a1, a2, ..., ak), the positions where the k-tuple matches a pattern in the prime numbers are given by the set of integers n such that all of the values (n + a1, n + a2, ..., n + ak) are prime. Typically the first value in the k-tuple is 0 and the rest are distinct positive even numbers.
Named patterns
Several of the shortest k-tuples are known by other common names: (0, 2) are twin primes, (0, 4) cousin primes, and (0, 6) sexy primes; (0, 2, 6) and (0, 4, 6) are prime triplets; and (0, 2, 6, 8) are prime quadruplets.
An OEIS sequence covers 7-tuples (prime septuplets) and contains an overview of related sequences, e.g. the three sequences corresponding to the three admissible 8-tuples (prime octuplets), and the union of all 8-tuples. The first term in these sequences corresponds to the first prime in the smallest prime constellation shown below.
Admissibility
In order for a k-tuple to have infinitely many positions at which all of its values are prime, there cannot exist a prime p such that the tuple includes every different possible value modulo p. If such a prime p existed, then no matter which value of n was chosen, one of the values formed by adding n to the tuple would be divisible by p, so the only possible placements would have to include p itself, and there are at most k of those. For example, the numbers in a k-tuple cannot take on all three values 0, 1, and 2 modulo 3; otherwise the resulting numbers would always include a multiple of 3 and therefore could not all be prime unless one of the numbers is 3 itself.
A k-tuple that includes every possible residue modulo p is said to be inadmissible modulo p. This is only possible when p ≤ k. A tuple which is not inadmissible modulo any prime is called admissible.
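This test is straightforward to mechanize. The sketch below (an illustration under the assumption k ≤ 64, with hypothetical function names) checks each prime p ≤ k and reports a tuple as inadmissible if its offsets cover all p residue classes; primes p > k never need checking, because k offsets cannot cover more than k residues.

#include <stdbool.h>
#include <stdio.h>

/* Returns true if the k-tuple of offsets `a` is admissible, i.e. for
 * every prime p <= k the offsets do NOT cover all residues modulo p. */
static bool admissible(const int *a, int k)
{
    for (int p = 2; p <= k; p++) {
        /* Trial-division primality test for the small modulus p. */
        bool prime = true;
        for (int d = 2; d * d <= p; d++)
            if (p % d == 0) { prime = false; break; }
        if (!prime)
            continue;
        bool seen[64] = {false};       /* assumes p <= k <= 64 */
        int covered = 0;
        for (int i = 0; i < k; i++) {
            int r = ((a[i] % p) + p) % p;
            if (!seen[r]) { seen[r] = true; covered++; }
        }
        if (covered == p)              /* every residue mod p is hit */
            return false;              /* inadmissible modulo p */
    }
    return true;
}

int main(void)
{
    int t1[] = {0, 2, 4};              /* inadmissible modulo 3 */
    int t2[] = {0, 2, 6};              /* admissible (a prime triplet pattern) */
    printf("(0,2,4): %s\n", admissible(t1, 3) ? "admissible" : "inadmissible");
    printf("(0,2,6): %s\n", admissible(t2, 3) ? "admissible" : "inadmissible");
    return 0;
}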
It is conjectured that every admissible k-tuple matches infinitely many positions in the sequence of prime numbers. However, there is no tuple for which this has been proven except the trivial 1-tuple (0). In that case, the conjecture is equivalent to the statement that there are infinitely many primes. Nevertheless, Yitang Zhang proved in 2013 that there exists at least one 2-tuple which matches infinitely many positions; subsequent work showed that some 2-tuple with values differing by 246 or less matches infinitely many positions.
Positions matched by inadmissible patterns
Although (0, 2, 4) is inadmissible modulo 3, it does produce the single set of primes, (3, 5, 7).
Because an all-prime match of a tuple that is inadmissible modulo p must include the prime p itself, an inadmissible tuple can match in only a handful of positions. A tuple that is inadmissible modulo 2 (one containing both even and odd offsets) can only match at a position that includes the prime 2, and a tuple inadmissible modulo 3, as above, only at a position that includes 3.
Inadmissible k-tuples can have more than one all-prime solution if they are admissible modulo 2 and 3, and inadmissible modulo a larger prime. This of course implies that there must be at least five numbers in the tuple. The shortest inadmissible tuple with more than one solution is the 5-tuple (0, 2, 8, 14, 26), which has two solutions: (3, 5, 11, 17, 29) and (5, 7, 13, 19, 31), where all values mod 5 are included in both cases. Examples with three or more solutions also exist.
Prime constellations
The diameter of a k-tuple is the difference of its largest and smallest elements. An admissible prime k-tuple with the smallest possible diameter (among all admissible k-tuples) is a prime constellation. For all sufficiently large n, a constellation match will always consist of consecutive primes. (Recall that the n are the integers for which all the values n + a1, ..., n + ak are prime.)
This means that, for large n: p_(n+k−1) − p_n = d, where d is the diameter and p_n denotes the nth prime number.
The first few prime constellations are (0, 2), with diameter 2; (0, 2, 6) and (0, 4, 6), with diameter 6; (0, 2, 6, 8), with diameter 8; (0, 2, 6, 8, 12) and (0, 4, 6, 10, 12), with diameter 12; and (0, 4, 6, 10, 12, 16), with diameter 16.
The diameter as a function of k is sequence A008407 in the OEIS.
A prime constellation is sometimes referred to as a prime k-tuplet, but some authors reserve that term for instances that are not part of longer k-tuplets.
The first Hardy–Littlewood conjecture predicts that the asymptotic frequency of any prime constellation can be calculated. While the conjecture is unproven it is considered likely to be true. If that is the case, it implies that the second Hardy–Littlewood conjecture, in contrast, is false.
Prime arithmetic progressions
A prime k-tuple of the form (0, n, 2n, 3n, ..., (k − 1)n) is said to be a prime arithmetic progression. In order for such a k-tuple to meet the admissibility test, n must be a multiple of the primorial of k.
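As a worked example: for k = 4 the common difference n must be a multiple of the primorial 4# = 2 · 3 = 6, and the admissible tuple (0, 6, 12, 18) matches at 5, giving the prime arithmetic progression (5, 11, 17, 23).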
Skewes numbers
The Skewes numbers for prime k-tuples are an extension of the definition of Skewes' number to prime k-tuples, based on the first Hardy–Littlewood conjecture. Let P = (p, p + i1, p + i2, ..., p + ik) denote a prime (k + 1)-tuple, let π_P(x) denote the number of primes p below x such that p, p + i1, p + i2, ..., p + ik are all prime, let li_P(x) = ∫_2^x dt / (ln t)^(k+1), and let C_P denote its Hardy–Littlewood constant (see first Hardy–Littlewood conjecture). Then the first prime p that violates the Hardy–Littlewood inequality for the (k + 1)-tuple P, i.e., such that π_P(p) > C_P li_P(p) (if such a prime exists), is the Skewes number for P.
Skewes numbers have been computed for a number of short k-tuples; for example, the Skewes number for the twin primes (p, p + 2) is 1369391.
The Skewes number (if it exists) for sexy primes is still unknown.
References
Prime numbers | Prime k-tuple | Mathematics | 1,106 |
3,428,418 | https://en.wikipedia.org/wiki/International%20Prize%20for%20Biology | The International Prize for Biology is an annual award for "outstanding contribution to the advancement of research in fundamental biology." Although it is not always awarded to a biologist, the Prize is one of the most prestigious honours a natural scientist can receive. There are no restrictions on the nationality of the recipient.
Past laureates include John B. Gurdon, Motoo Kimura, Edward O. Wilson, Ernst Mayr, Thomas Cavalier-Smith, Yoshinori Ohsumi and many other leading biologists from around the world.
Information
The International Prize for Biology was created in 1985 to commemorate the 60-year reign of Emperor Shōwa of Japan and his longtime interest in and support of biology. The selection and award of the prize are managed by the Japan Society for the Promotion of Science. The laureate is awarded a medal and 10 million yen, and an international symposium on the scientist's area of research is held in Tokyo. The prize ceremony is held in the presence of the Emperor of Japan.
The first International Prize for Biology was awarded to E. J. H. Corner, a prominent scientist in the field of systematic biology, because Emperor Shōwa had taken a longstanding interest in, and worked in, this field.
Criteria
The Prize is awarded in accordance with the following criteria:
The Prize shall be made by the Committee every year, commencing in 1985.
The Prize shall consist of a medal and a prize of ten million (10,000,000) yen.
There shall be no restrictions on the nationality of the recipient.
The Prize shall be awarded to an individual who, in the judgment of the members of the Committee, has made an outstanding contribution to the advancement of research in fundamental biology.
The specialty within the field of biology for which the Prize will be awarded shall be decided upon annually by the Committee.
The Committee shall be advised on suitable candidates for the Prize by a selection committee, which will consist of Japanese and overseas members.
The selection committee shall invite nominations of candidates from such relevant individuals and organizations at home and abroad as the selection committee may deem appropriate.
The selection committee shall submit to the Committee a report containing recommendations of the candidate for the Prize and supporting statement.
The Prize shall be presented every year. In conjunction with the ceremony, an international symposium is held in which the Prize recipient is invited to give a special lecture.
Background
The Emperors of Japan have been famous for their special interest in science, in particular biology. Emperor Akihito has strived over many years to advance the study of the taxonomy of gobioid fishes.
Laureates
Source: Japan Society for the Promotion of Science
See also
Japan Society for the Promotion of Science
List of biology awards
External links
International Prize for Biology
References
Academic awards
Awards established in 1985
Biology awards
Hirohito
International awards
Japanese science and technology awards
1985 establishments in Japan | International Prize for Biology | Technology | 561 |
68,396,728 | https://en.wikipedia.org/wiki/HAT-P-65 | HAT-P-65 is a faint star located in the equatorial constellation Equuleus. With an apparent magnitude of 13.16, it requires a telescope to be seen. The star lies at a great distance from Earth but is drifting closer, with a radial velocity of −48 km/s.
Properties
HAT-P-65 has a spectral type similar to that of the Sun. However, it is 21% more massive and 86% larger than the Sun. HAT-P-65 is slightly hotter, with an effective temperature of 5,916 K compared to the Sun's 5,778 K. It also has a higher luminosity and metallicity, with an iron content 26% greater than the Sun's.
Planetary system
In 2016, an inflated hot Jupiter was discovered orbiting the star in a tight two-day orbit. As of 2019, the planet has been found to be undergoing orbital decay due to its proximity to the star.
References
G-type subgiants
Planetary systems with one confirmed planet
Equuleus | HAT-P-65 | Astronomy | 202 |
48,108,635 | https://en.wikipedia.org/wiki/Refuge%20Water%20Supply%20Program | The Refuge Water Supply Program (RWSP) is administered within the United States Department of the Interior jointly by the Bureau of Reclamation and the Fish and Wildlife Service. It is tasked with acquiring a portion of, and delivering, a total of 555,515 acre-feet (AF) of water annually to 19 specific protected wetland areas in the Central Valley of California, as mandated by the Central Valley Project Improvement Act signed on October 30, 1992, by President George H. W. Bush.
Background
The Central Valley Project (CVP)
The Central Valley (California) once contained over 4 million acres of naturally occurring wetlands that provided habitat: land, food, and shelter for resident and migratory birds and wildlife. The Central Valley, historically and today, constitutes a significant portion of the Pacific Flyway used by millions of migrating birds each year.
The Central Valley's winter flood-prone geography and summer dry climate were natural constraints to permanent human settlement. The Central Valley Project (CVP), an interconnected engineered system of reservoirs, aqueducts, and flood control measures constructed by the US Bureau of Reclamation, managed flooding and provided reliable water supplies year-round with highly managed and calculated water storage, release, and conveyance infrastructure. Along with the construction of similar facilities by others, the CVP's flood control and water delivery systems created a stable environment suitable for permanent human development in the Central Valley.
Controlling and manipulating the water supply for human benefit dramatically and quickly transformed the landscape. All but 400,000 acres of natural wetlands were transformed for development, a reduction in wetland area of 90%. The loss of wetlands concentrated the migrating and resident wildlife on less land and required their sharing of and dependence on less water. This unhealthy crowding caused bird populations to decline as they suffered from disease and the lack of necessary food, shelter, and water. Compounding the problem, human activity, in some cases, polluted the waters that flowed into the remaining wetlands. The Kesterson Reservoir disaster provided a clear indication that wildlife was suffering in the modified Central Valley and helped inspire actions to mitigate the CVP's effects on bird and fish populations.
Central Valley Project Improvement Act
The Central Valley Project Improvement Act (CVPIA) was signed into law on October 30, 1992, as mitigation and remedy for some of the CVP's adverse environmental effects, specifically, to increase the population and improve the health of the Central Valley's anadromous fish and increase the acreage and health of wetlands used by migratory birds and other resident wildlife. The CVPIA is managed by the United States Department of Interior through collaboration between the Bureau of Reclamation and the Fish and Wildlife Service.
Refuge Water Supply Program
CVPIA Section 3406(d) mandates that 555,515 AF of water of suitable quality be delivered to maintain and improve wetland habitat areas in 19 wetland areas specifically identified in the Report on Refuge Water Supply Investigations (March 1989) and the San Joaquin Basin Action Plan/Kesterson Mitigation Action Plan (December 1989), collectively referred to as 'the Refuges'. These Refuges comprise nearly 200,000 acres of wetlands and as such represent almost 50% of the wetlands remaining in California's Central Valley. Reclamation created the Refuge Water Supply Program (RWSP) to manage and administer the activities necessary to ensure the acquisition and delivery of this water as required under this section. Like the CVPIA, the RWSP is administered jointly by the Bureau of Reclamation (from the Mid-Pacific Regional Office in Sacramento, CA) and the Fish and Wildlife Service (from the Pacific Southwest Regional Office in Sacramento, CA).
The Refuges
National Wildlife Refuges
The following Refuges, benefiting from CVPIA legislation, are administered by the Department of the Interior, Fish and Wildlife Service as National Wildlife Refuges. In some instances, the specific Refuge named in the CVPIA is currently a constituent part of an FWS administrative complex of refuges that includes several such refuges and/or other non-benefiting lands.
The following CVPIA benefiting refuges are components of the Sacramento National Wildlife Refuge Complex: Sacramento National Wildlife Refuge, Delevan National Wildlife Refuge, Colusa National Wildlife Refuge, Sutter National Wildlife Refuge.
The following CVPIA benefiting Refuges are components of the San Luis National Wildlife Refuge Complex: San Luis Unit, West Bear Creek Unit, East Bear Creek Unit, Kesterson Unit, Freitas Unit and Merced National Wildlife Refuge. The refuges currently identified as 'Units' were separate refuges at the time the legislation was written and passed.
The following CVPIA benefiting Refuges are components of the Kern National Wildlife Refuge Complex: Kern National Wildlife Refuge and Pixley National Wildlife Refuge
California State Wildlife Areas
The following Refuges, benefiting from CVPIA legislation, are administered by the State of California, Department of Fish and Wildlife as Wildlife Areas. In some instances, the specific Refuge named in the CVPIA is currently a part of a DFW administrative unit that includes several such refuges and/or other non-benefiting lands.
Gray Lodge Wildlife Area; Los Banos Wildlife Area (portion); the following currently designated 'units' of the North Grasslands Wildlife Area: China Island Unit and Salt Slough Unit; and the Volta Wildlife Area (portion). Note: 'portion' is used to indicate that the current existing wildlife area boundary is larger than it was in the defining report. CVP water obligated for the RWSP is only permitted to be used on that portion of the wildlife area specifically described in the defining report and legislation.
The Grasslands Resource Conservation District
The Grassland Resource Conservation District (GRCD) comprises 75,000 acres of land including: the Grassland Water District (GWD) which provides water to 165 hunting clubs; the Kesterson and Freitas Units of the San Luis National Wildlife Refuge (NWR); Volta Wildlife Management Area (WMA); Los Banos WMA; and privately owned wetlands. As such, the GRCD includes 60,000 acres of privately owned hunting clubs, 12,000 acres of land owned by the Federal and state governments, and 3,000 acres of cropland. The federal and state Refuges identified in the CVPIA legislation that are within the GRCD do not share GRCD's water allocation.
The Water
The water associated with the program is categorized as either Level 2 or Incremental Level 4, and there are different supply quantities and characteristics of each. The goal of the program is to provide the 'Full Level 4' water quantity, which is the cumulative sum of the full quantity of each category for each Refuge. Program-wide, typically between 75% and 85% of Full Level 4 is delivered annually.
The water the RWSP provides accounts for varying portions of an individual Refuge's total water supply. Because some Refuges do not have adequate conveyance capacity to them (Pixley NWR, Merced NWR, Sutter NWR, East Bear Creek Unit and Gray Lodge WA), delivered water supplies vary annually with hydrological and climatic conditions. Construction projects enabling these Refuges to receive water supplies have been identified and in some cases are progressing, but funding limitations will likely cause this condition to persist.
Full Level 4
The amount of water identified as being required for the optimal management of a designated wetland is defined as that refuge's 'Full Level 4' quantity. The 555,515 AF of water the RWSP is tasked with providing is the sum of all of the specified refuges' Full Level 4 quantities. Full Level 4 is a contractually obligated amount of water that consists of two blocks, Level 2 and Incremental Level 4. Each refuge has a 'Full Level 4' quantity which is the sum of its total Level 2 and total Incremental Level 4 quantities of water. These amounts are provided in the table "RWSP Contract Water Quantities".
Level 2 Water
Each of the 19 benefiting Refuges has its own Level 2 water quantity which is based on the average water supplies necessary to maintain the wetland areas in existence prior to the passing of the CVPIA or equate to its prior dependably delivered quantity (regardless of water quality) and collectively totals 422,251 AF. For this reason, the delivery of a Refuge's Level 2 allocation is considered to be essential for a Refuge's successful operation.
For those refuges that have the infrastructure to receive it, Level 2 water comes from the CVP, meaning a fixed portion of the federal water supply stored and delivered by the CVP Project is automatically dedicated annually for Refuge use and thus provides a perennially reliable water source. The RWSP manages and funds several long-term contracts (5 – 40 years) with a variety of water agencies to convey this water from its CVP source to a Refuges' boundaries.
It is important to note that the individual Refuges determine the amount of this water to be delivered, per month, at their discretion. This is a unique condition because most CVP water contracts impose limitations on both the total monthly delivery amount and the months in which deliveries may occur.
Incremental Level 4 Water
The incremental difference between a Refuge's Full Level 4 allocation and its Level 2 allocation defines Incremental Level 4 (IL4) and represents the quantity of water necessary for Refuges to ideally manage all lands identified in the refuge reports for the benefit of waterfowl. In most cases, IL4 water is needed to fully support an expanded wetland footprint. Like Level 2 water, each refuge has its own Incremental Level 4 quantity but, unlike Level 2 supplies, this water is not dedicated from CVP supply and must be acquired from other sources, such as willing sellers or from those relinquishing their federal or state supplies. The RWSP manages and funds contracts of varied duration to acquire and convey this water from its source to the refuges' boundaries. The suppliers, availability, and cost of water available as Incremental Level 4 are less predictable than Level 2 supplies because of unpredictable region-wide water needs and usage; the potential lack of sufficient conveyance infrastructure; inconsistent annual natural conditions, specifically rainfall; and occasional water quality concerns.
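As a worked figure implied solely by the contract totals quoted above (not a separately sourced number), the program-wide Incremental Level 4 obligation is the difference between the two totals: 555,515 AF (Full Level 4) − 422,251 AF (Level 2) = 133,264 AF.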
Additionally, individual refuges retain the right to refuse to accept water that the RWSP has the ability to acquire if it is not of suitable quality or does not benefit the refuge at the time it is available. Thus, water supplies delivered in a year may be less than those that were potentially available.
RWSP Program Components
The RWSP's efforts are concentrated into 3 components, Water Acquisition, Facility Construction and Water Conveyance.
Water Acquisitions
CVPIA Section 3406(d)(2) requires the acquisition of IL4 Water for critical wetland habitat supporting resident and migratory waterfowl, threatened and endangered species, and wetland-dependent aquatic biota on the Refuges. These supplies are ideally used to allow refuges to optimally manage the preserved land for the improvement of waterfowl populations.
IL4 water consists of long-term and annual purchases from willing sellers of both surface and groundwater supplies; supplies at no cost, e.g., water exchanges; water delivered under a mitigation agreement with the Federal Energy Regulatory Commission and 'permanent water', water that the program has purchased a permanent right to take under specific conditions.
North Valley Regional Recycled Water Program
California, and the Central Valley, experienced persistent drought conditions for much of the early part of the twenty-first century. With global warming expected to alter historic conditions, the long-term availability of IL4 supplies is questionable. To meet its acquisition and delivery mandate, the RWSP was challenged to find reliable and affordable sources of Incremental Level 4 water to deliver to the refuges well into this uncertain future. The North Valley Regional Recycled Water Program will make treated, recycled water from the Cities of Turlock and Modesto available for re-use at the Refuges. The RWSP took an active role in this pioneering program's development and in return, in 2016, signed a 40-year contract for water deliveries from it.
Facility Construction
CVPIA Section 3406(d)(5) provides for facilities construction to benefit the mandate of supplying refuge water. This component funds projects that identify, construct and/or maintain infrastructure supporting the long-term delivery of firm, reliable water supplies to the boundary of the Refuges. The RWSP's goal is to have the necessary facilities in place to allow delivery of Full Level 4 water supplies to the Refuges in a way that meets their timing and scheduling requirements. A total of 46 new major structures, modifications to existing structures, and/or actions were identified to provide the capacity needed for the delivery of Full Level 4 surface supplies to these refuges.
Water Conveyance (Wheeling)
CVPIA Sections 3406(d)(1), (2) and (5) describe the functions and responsibilities of the Refuge Water Conveyance (Wheeling) Component. The use of a water conveyance facility by someone other than the owner or operator to transport water is referred to as "wheeling." The conveyance component is responsible for ensuring the delivery of a refuge's Level 2 and acquired water supplies through contracts and agreements that allow these water supplies to move from source to refuge destination.
The reservoirs that hold water destined for a refuge are connected to the refuges by a network of channels owned and operated by multiple entities. Similar to a network of roadways, there are conveyance channels of many kinds (with names like aqueduct, canal, slough, and ditch) and sizes. Some channels, like rivers, are free for the RWSP to use, and some require payment, such as those built and maintained by a water district. The RWSP negotiated contracts coordinating the delivery of water. Reclamation currently has nine long-term (15–50 year) conveyance agreements administered by the RWSP; one Service 40-year conveyance agreement; cooperative agreements to reimburse delivering entities for the costs of conveying L2 and IL4 water supplies through federal, state, and private water distribution systems to the refuges; and agreements to reimburse costs for groundwater pumping in instances where groundwater is pumped at the refuge itself. Deliveries are monitored throughout the system as water enters and exits metered channels.
Water that is transferred over any distance suffers what are termed 'conveyance losses', meaning that the amount of water released at the start is not the same amount that ultimately arrives. The difference between what is sent and what is received is the conveyance loss, which can result from evaporation or seepage (soaking into the land). In some cases, water travels over 300 miles to reach its final refuge destination.
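As a minimal illustration of the bookkeeping involved (the figures below are hypothetical, not RWSP data), the release needed at the source can be back-calculated from a delivery target and an assumed fractional loss:

```python
def required_release(delivery_af: float, loss_fraction: float) -> float:
    """Acre-feet to release at the source so that delivery_af arrives.

    loss_fraction is the share lost to evaporation and seepage en route.
    """
    if not 0 <= loss_fraction < 1:
        raise ValueError("loss_fraction must be in [0, 1)")
    return delivery_af / (1 - loss_fraction)

# Example: delivering 10,000 acre-feet through a channel that loses 8% en route
release = required_release(10_000, 0.08)
print(f"Release {release:,.0f} acre-feet to deliver 10,000")  # about 10,870
```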
Accomplishments and Benefits to Nature
The CVPIA was enacted to increase the population and improve the health of the Central Valley's anadromous fish and increase the acreage and health of wetlands used by migratory birds and other resident wildlife both of which suffered as a result of the construction of the CVP. The RWSP focuses on the health of wetlands by acquiring and conveying necessary water supplies. Since CVPIA was enacted numerous biological benefits have resulted from supplying the Refuges with a reliable year-round water supply that adequately meets the refuge-specific water delivery schedules developed for individual wetland management requirements. Prior to CVPIA, refuge managers had to concentrate the majority of their water use in the fall and early winter months, when Central Valley waterfowl numbers peaked. With the passage of CVPIA, the habitat calendar was expanded to the full year. These increased and reliable supplies of water enable managers to enhance existing habitats, expand their wetland base, and provide increased benefits to a greater number of wetland-dependent species.
Habitat and Biodiversity
Refuge activities are water dependent. Having a firm and adequate supply of water available when most beneficial throughout the year enables managers to implement improved management techniques and allows them to better manage lands and activities. This increased efficiency and reliability have both increased wetland acreage and improved wetland health by affording refuge managers the ability to manage a diverse mix of habitat types that more fully satisfy the year-round environmental requirements of many wildlife species.
Benefits of Water Reliability
Before the CVPIA, refuge managers concentrated the majority of water use in the fall and early winter months (October–December), when Central Valley waterfowl numbers peaked. With the passage of CVPIA, the habitat calendar was expanded to the full year, allowing refuge managers to provide habitat to an extended group of migratory birds as well as other wildlife, and to grow plant materials that provide a food supply or habitat for food sources. Under CVPIA programs, moist soil food plant irrigations are carried out since water is reliably available during August and September to satisfy the needs of the early-arriving migrant waterfowl and shorebirds; maintenance flows are applied throughout the winter months to improve water quality and decrease avian disease outbreaks; and during spring and summer, when wetland habitat can be particularly limited by hydrology, water provides critical nesting habitat for waterfowl and colonial birds as well as habitat for resident wildlife and their young. Wintering wildlife also benefits from this habitat diversity, as seasonal wetlands are now managed to coincide with peak migration times of shorebirds and waterfowl.
With the increased frequency and acreage of irrigated moist soil food plants, there has been a doubling in desirable plant biomass, which equates to more high-quality, high-energy food available to waterfowl. In some refuges, waterfowl food production has increased tenfold. Timely de-watering and irrigation promote the germination and growth of important moist-soil food plants, such as swamp timothy and watergrass. These plants provide a high-energy food source through both their seeds and associated invertebrate communities. The increase in supply reliability allows wetland managers to lower water depths to make seeds and invertebrates available without the fear of having wetlands completely evaporate.
Benefits of Increased Wetland Acreage
Waterfowl, shorebirds, and other wetland-dependent wildlife have benefited as their habitats have been expanded and enhanced. Central Valley wetlands receiving CVP water supplies have increased by more than 20,000 acres since the passage of the CVPIA while tens of thousands of acres of habitat have been enhanced. This wetland acreage helps explain the 75% decrease in waterfowl disease-related mortality in some wetland areas as the birds spread out over a greater area.
Benefits of Improved Water Quality
At least as important as the increase in acreage is the improvement in the quality of previously existing wetlands that has resulted from the delivery of suitable water. Increasing water supplies to wetlands also has the effect of improving water quality, both on and off refuges. For example, providing firm, quality water supplies has reduced the exposure to contaminants of waterfowl and shorebirds spending the winter in the Grasslands area of the valley. In a report on selenium in aquatic birds from the Central Valley, 1986–1994, FWS scientists noted that the application of freshwater resulted in a decline in selenium contamination. Improved maintenance flows through refuge ponds improve water quality and reduce avian disease. The ability to battle avian disease outbreaks, such as botulism and cholera, is greatly assisted by applying additional water and creating a "flow-through" system of water delivery and drainage. This "flow-through" also helps deal with wetland areas high in salinity, which are often lower in productivity and diversity. CVPIA water allows wetland managers to "flush" salts from wetland basins and improve soil quality.
Migratory Waterfowl
Since the passage of the CVPIA, Sacramento Valley areas receiving CVP water have seen a 20% increase in waterfowl use and a significant decline in water-borne wildlife diseases. Waterfowl use in the early fall has recorded increases of 800 percent, from 2 million to over 18 million waterfowl use days per year.
White-faced Ibis and Sandhill Cranes are excellent examples of how the availability of adequate water supplies enabled refuge managers to provide habitat for endemic species that had been in severe decline for decades. Improved water supplies first led to an increase in the numbers of frogs, snails, aquatic insects, and small fish. This, in turn, provided the ibis and cranes with habitat for late-spring and summer nesting, essential components for these species. The increased and improved breeding habitat resulted in a steady upswing in bird numbers.
Other Wildlife
Wetlands are diverse ecosystems. The increased wetland acreage and improved water quality have benefited more than bird populations. Improved water supplies led to an increase in the numbers of frogs, snails, aquatic insects, and small fish. Habitat that is now available during the months of August and early September is utilized by resident wildlife and their young during a critical time of the year when wetland habitat is otherwise reduced. Introducing water for semi-permanent and permanent wetland habitats in the spring and summer directly benefits the recovery of special status species such as the giant garter snake and tri-colored blackbirds.
Budget
The CVPIA is a federal program and receives funding every year through Congressional appropriation. Achieving the goals of the RWSP would require between $50 and $60 million annually. The 2017 budget provided approximately $22 million.
References
1992 establishments in California
Joint ventures
United States Bureau of Reclamation
United States Fish and Wildlife Service
Central Valley Project | Refuge Water Supply Program | Engineering | 4,373 |
2,364,264 | https://en.wikipedia.org/wiki/Ibn%20al-Rawandi | Abu al-Hasan Ahmad ibn Yahya ibn Ishaq al-Rawandi, commonly known as Ibn al-Rawandi (827–911 CE), was a scholar and theologian. In his early days, he was a Mu'tazilite scholar, but then rejected the Mu'tazilite doctrine. Afterwards, he became a Shia scholar; there is some debate about whether he stayed a Shia until his death or became a skeptic, though most sources confirm his eventual rejection of all religion and becoming an atheist. Although none of his works have survived, his opinions had been preserved through his critics and the surviving books that answered him. His book with the most preserved fragments (through an Ismaili book refuting al-Rawandi's ideology) is the Kitab al-Zumurrud (The Book of the Emerald).
Life
Abu al-Hasan Ahmad ibn Yahya ibn Ishaq al-Rawandi was born in 827 CE. Sources differ on his birthplace: some place it in Greater Khorasan, in modern-day northwest Afghanistan, while others state that he was born in Basra during the reign of the Abbasid Caliph Al-Ma'mun. His father, Yahya, was a Persian Jewish scholar who converted to Islam and schooled Muslims on refuting the Talmud.
He joined the Mu'tazili of Baghdad and gained prominence among them. However, he eventually became estranged from his fellow Mu'tazilites and formed close alliances with Shia Muslims and then with non-Muslims (Manichaeans, Jews and perhaps also Christians). Al-Rawandi then became a follower of the Manichaean zindiq Abu Isa al-Warraq before eventually rejecting religion in general, writing several books that criticized all religion, particularly Islam.
Philosophy
Most sources agree that he spent time as a Mu'tazilite and a Shia before eventually denouncing all religion. Some sources look for the roots of his views in his connections with Shia Islam and Mu'tazilia, and claim that his heresy was exaggerated by his rivals.
Ibn al-Rawandi spent time as a Mu'tazilite and later a Shia scholar before eventually turning to atheism. He never denied God; rather, he denounced all religions and criticized the Abrahamic deity. Most of his 114 books have been lost, but those with at least some remaining fragments include The Scandal of the Mu'tazilites (Fadihat al-Mu'tazila), which presents the arguments of various Mu'tazilite theologians and then makes the case that they are internally inconsistent; The Refutation (ad-Damigh), which attacks the Quran; and The Book of the Emerald (Kitab al-Zumurrud), which critiques prophecy and rejects Islam. Among his arguments, he critiques dogma as antithetical to reason, argues that miracles are fake, that prophets (including Muhammad) are just magicians, and that the Paradise described by the Quran is not desirable.
Some scholars also try to account for the more positive view of Ibn al-Rawandi in some Muslim sources. Josef van Ess has suggested an original interpretation that aims at accommodating all the contradictory information. He notes that the sources which portray Ibn al-Rawandi as a heretic are predominantly Mutazilite and stem from Iraq, whereas in eastern texts he appears in a more positive light. As an explanation for this difference, van Ess suggests "a collision of two different intellectual traditions," i.e., those in Iran and in Iraq. He further suggests that Ibn al-Rawandi's notoriety was the result of the fact that after Ibn al-Rawandi left Baghdad, "his colleagues in Baghdad ... profiting from his absence ... could create a black legend." In other words, van Ess believes that Ibn al-Rawandi, although eccentric and disputatious, was not a heretic at all. However, these views are discounted by most scholars given the weight of evidence to the contrary.
Subjects discussed in the Kitab al-Zumurrud
Muslim traditions
According to the Zumurrud, traditions concerning miracles are inevitably problematic. At the time of the performance of a supposed miracle, only a small number of people could be close enough to the Prophet to observe his deeds. Reports given by such a small number of people cannot be trusted, for such a small group can easily have conspired to lie. The Muslim tradition thus falls into the category of flimsy traditions, those based on a single authority (khabar al-ahad) rather than on multiple authorities (khabar mutawatir). These religious traditions are lies endorsed by conspiracies.
The Zumurrud points out that Muhammad's own presuppositions (wad) and system (qanun) show that religious traditions are not trustworthy. The Jews and Christians say that Jesus really died, but the Qur'an contradicts them.
Ibn al-Rawandi also points out specific Muslim traditions, and tries to show that they are laughable. The tradition that the angels rallied round to help Muhammad is not logical, because it implies that the angels of Badr were weaklings, able to kill only seventy of the Prophet's enemies. And if the angels were willing to help Muhammad at Badr, where were they at Uhud when their help was so badly needed?
The Zumurrud criticizes prayer, preoccupation with ritual purity, and the ceremonies of the hajj; throwing stones, circumambulating a house that cannot respond to prayers, running between stones that can neither help nor harm. It goes on to ask why Safa and Marwa are venerated and what difference there is between them and any other hill in the vicinity of Mecca, for example, the hill of Abu Qubays, and why the Kaaba is any better than any other house.
See also
Turan Dursun
Baron d'Holbach
Further reading
References
External links
Mehmet Karabela, IBN AL-RAWANDI,The Oxford Encyclopedia of Philosophy, Science, and Technology in Islam, vol. 1, New York: Oxford University Press, 2014
Encyclopedia Iranica, "EBN RĀVANDĪ, ABU’l-ḤOSAYN AḤMAD" b. Yaḥyā (d. 910?), Muʿtazilite theologian and “heretic” of Ḵorāsānī origin
The blinding emerald: Ibn al-Rawandi's 'Kitab al-Zumurrud.'
İşte 1.000 yıl önceki Turan Dursun in Turkish.
827 births
911 deaths
Critics of Sunni Islam
Freethought
Former Muslims
Rawandi
Iranian people of Jewish descent
Rawandi
People from Kashan | Ibn al-Rawandi | Physics | 1,410 |
5,614,016 | https://en.wikipedia.org/wiki/Medical%20science%20liaison | A medical science liaison (MSL) is a healthcare consulting professional who is employed by pharmaceutical, biotechnology, medical device, and managed care companies. Other job titles for medical science liaisons may include medical liaisons, clinical science liaisons, medical science managers, regional medical scientists, and regional medical directors.
The term "MSL" was originally trademarked by Upjohn as "Education services – namely, initiation of drug studies in laboratory and clinical settings and development of workshops, symposia, and seminars for physicians, medical societies, specialty organizations, academicians, in concert, concerned with drug related medical topics" in 1967 and with first use in commerce in 1967.
As the number of MSL programs in healthcare increased, subsequent peer-reviewed journal publications and books became available to examine the emerging role of medical affairs and the use of MSLs in an increasingly vertically integrated biotechnology industry.
Role
MSLs build relationships with key opinion leaders or thought leaders and health care providers, providing critical windows of insight into the market and competition. Through such monitoring, MSLs can gain access to key influencers by interacting with national and regional societies and organizations. Moreover, MSLs specialize in a particular therapeutic area and have scientific knowledge related to it. The educational background of MSLs consists primarily of MD, DMSc, PharmD, and PhD professionals. Other professionals who work as MSLs include physician assistants and nurses. According to the program's advocates, the Board Certified Medical Affairs Specialist (BCMAS) program is the recognized board certification for MSL professionals. MSLs are now highly involved in activities related to clinical trials.
Responsibilities
The medical science liaison role is varied, and day-to-day activities include (but are not limited to):
Managing investigator initiated studies
Performing KOL stakeholder mapping
Developing collaborative relationships with KOLs
Organising advisory boards
Maintaining a high level of therapeutic area knowledge
Training sales representatives
Providing medical review to ensure all company materials are compliant and accurately reflect the body of scientific evidence
Delivering insights from KOLs to inform the medical affairs strategy
See also
References
Pharmaceutical industry
Promotion and marketing communications | Medical science liaison | Chemistry,Biology | 424 |
46,974,885 | https://en.wikipedia.org/wiki/Penicillium%20onobense | Penicillium onobense is an anamorph species in the genus Penicillium which was isolated from beech forest in Navarra in Spain.
References
Further reading
onobense
Fungi described in 1981
Fungus species | Penicillium onobense | Biology | 48 |
7,695,962 | https://en.wikipedia.org/wiki/Sympetrum | Sympetrum is a genus of small to medium-sized skimmer dragonflies, known as darters in the UK and as meadowhawks in North America. The more than 50 species predominantly live in the temperate zone of the Northern Hemisphere; 15 species are native to North America. No Sympetrum species is native to Australia.
Most North American darters fly in late summer and autumn, breeding in ponds and foraging over meadows. Commonly, they are yellow-gold as juveniles, with mature males and some females becoming bright red on part or all of their bodies. An exception to this color scheme is the black darter (Sympetrum danae).
The genus includes the following species:
Sympetrum ambiguum – blue-faced meadowhawk
Sympetrum anomalum
Sympetrum arenicolor
Sympetrum baccha
Sympetrum chaconi
Sympetrum commixtum
Sympetrum cordulegaster
Sympetrum corruptum – variegated meadowhawk
Sympetrum costiferum – saffron-winged meadowhawk
Sympetrum croceolum
Sympetrum daliensis
Sympetrum danae – black darter, black meadowhawk
Sympetrum darwinianum
Sympetrum depressiusculum – spotted darter
Sympetrum dilatatum – St. Helena darter
Sympetrum durum
Sympetrum eroticum
Sympetrum evanescens
Sympetrum flaveolum – yellow-winged darter
Sympetrum fonscolombii – red-veined darter, nomad
Sympetrum frequens
Sympetrum gilvum
Sympetrum gracile
Sympetrum haematoneura
Sympetrum haritonovi – dwarf darter
Sympetrum himalayanum
Sympetrum hypomelas
Sympetrum illotum – cardinal meadowhawk
Sympetrum imitans
Sympetrum infuscatum
Sympetrum internum – cherry-faced meadowhawk
Sympetrum kunckeli
Sympetrum maculatum
Sympetrum madidum – red-veined meadowhawk
Sympetrum meridionale – southern darter
Sympetrum nigrifemur – island darter
Sympetrum nigrocreatum – Talamanca meadowhawk
Sympetrum nomurai
Sympetrum obtrusum – white-faced meadowhawk
Sympetrum orientale
Sympetrum pallipes – striped meadowhawk
Sympetrum paramo
Sympetrum parvulum
Sympetrum pedemontanum – banded darter
Sympetrum risi
Sympetrum roraimae
Sympetrum rubicundulum – ruby meadowhawk
Sympetrum ruptum
Sympetrum sanguineum – ruddy darter
Sympetrum semicinctum – band-winged meadowhawk
Sympetrum signiferum
Sympetrum sinaiticum – desert darter
Sympetrum speciosum
Sympetrum striolatum – common darter
Sympetrum tibiale
Sympetrum uniforme
Sympetrum verum
Sympetrum vicinum – yellow-legged meadowhawk, autumn meadowhawk
Sympetrum villosum
Sympetrum vulgatum – vagrant darter, moustached darter
Sympetrum xiaoi
References
External links
Animal migration
Anisoptera genera
Libellulidae
Taxa named by Edward Newman | Sympetrum | Biology | 747 |
29,260,379 | https://en.wikipedia.org/wiki/Jig%20concentrators | Jig concentrators are devices used mainly in the mining industry for mineral processing, to separate particles within the ore body, based on their specific gravity (relative density).
The particles would usually be of a similar size, often crushed and screened prior to being fed over the jig bed. There are many variations in design; however the basic principles are constant: The particles are introduced to the jig bed (usually a screen) where they are thrust upward by a pulsing water column or body, resulting in the particles being suspended within the water. As the pulse dissipates, the water level returns to its lower starting position and the particles once again settle on the jig bed. As the particles are exposed to gravitational energy whilst in suspension within the water, those with a higher specific gravity (density) settle faster than those with a lower density, resulting in a concentration of material with higher density at the bottom, on the jig bed. The particles are now concentrated according to density and can be extracted from the jig bed separately. In the mining of most heavy minerals, the denser material would be the desired mineral and the rest would be discarded as floats (or tailings).
There are some minerals, notably coal, that are lighter (lower in density) than the surrounding rock and in such instances the process of extraction would work in reverse, i.e. the coal would settle on top with the rock below (on the jig bed). There are several designs and methods of extraction from the jig bed.
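As a rough illustration of why denser particles report to the jig bed first, the sketch below compares terminal settling velocities of equally sized spheres using Stokes' law. This is only a first approximation under assumed low-Reynolds-number conditions; real jigs involve hindered settling and pulsed flow, and the particle sizes and densities here are illustrative rather than taken from the source.

```python
def stokes_settling_velocity(d_m: float, rho_particle: float,
                             rho_fluid: float = 1000.0,
                             mu: float = 1.0e-3, g: float = 9.81) -> float:
    """Terminal settling velocity (m/s) of a small sphere in water (Stokes' law).

    Valid only for fine particles at low Reynolds number; used here just to
    show how settling speed scales with particle density.
    """
    return g * d_m**2 * (rho_particle - rho_fluid) / (18 * mu)

# 50-micron particles: gold (~19,300 kg/m^3) vs quartz gangue (~2,650 kg/m^3)
for name, rho in [("gold", 19_300.0), ("quartz", 2_650.0)]:
    print(f"{name}: {stokes_settling_velocity(50e-6, rho):.4f} m/s")
# Gold settles roughly 11x faster, so it concentrates on the jig bed
# while the lighter gangue reports to the floats (tailings).
```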
See also
Mineral jig
References
External links
Pulsating jig at Free Patents Online
Mining equipment | Jig concentrators | Engineering | 332 |
55,085,766 | https://en.wikipedia.org/wiki/Creepiness | Creepiness is the state of being creepy, or causing an unpleasant feeling of fear or unease to someone and/or something. Certain traits or hobbies may make people seem creepy to others; interest in horror or the macabre might come across as 'creepy', and often people who are perverted or exhibit predatory behavior are called 'creeps'. The internet, especially some functions of social media, has been described as increasingly creepy. Adam Kotsko has compared the modern conception of creepiness to the Freudian concept of . The term has also been used to describe paranormal or supernatural phenomena some people have phobias which are an irrational fear which can make them see something as creepy.
History and studies
In the abstract the feeling of "creepiness" is subjective: for example some dolls have been described as creepy, while what makes something "creepy" or "strange" to someone might seem totally normal to someone else, as the concept varies from person to person.
The adjective "creepy", referring to a feeling of creeping in the flesh, was first used in 1831, but it was Charles Dickens who coined and popularized the term "the creeps" in his 1849 novel David Copperfield.
During the 20th century, an association was made between involuntary celibacy and creepiness.
The concept of creepiness has only recently been formally addressed in social media marketing.
The sensation of creepiness has only recently been the subject of psychological research, despite the widespread colloquial use of the word throughout the years. Frank McAndrew of Knox College is the first psychologist to do an empirical study on creepiness.
Causes
The state of creepiness has been associated with "feeling scared, nervous, anxious or worried", "awkward or uncomfortable", and "vulnerable or violated" in a study conducted by Watt et al. This state arises in the presence of a creepy element, which can be an individual or, as recently observed, a new technology.
Individuals
Creepiness can be caused by the appearance of an individual.
Another study investigated the characteristics that make people creepy. Creepy people were thought to be more often male than female by an overwhelming majority of participants (around 95% of both male and female participants). Another study conducted by Watt et al. also found that participants associated the ectomorphic (more linear) body type with creepiness more than the other two body types (51%, vs. 24% for mesomorphic and 23% for endomorphic). Other cues of creepiness included low hygiene, especially according to female participants, and a disheveled appearance. Participants also identified the face as an area with potentially creepy features, in particular the eyes and the teeth. Both of those physical features were deemed creepy not only for their unpleasant appearance (e.g., squinty eyes or crooked teeth) but also for the movements and expressions they engaged in (e.g., darting eye movements and odd smiles).
In fact, appearance does not seem to be the only factor making an individual creepy: behaviors provide cues as well. Behaviors such as "being unusually quiet and staring (34%), following or lurking (15%), behaving abnormally (21%), or in a socially awkward, "sketchy" or suspicious way (20%)" all contribute to a feeling of creepiness, as described in Watt et al.'s study.
Technology
In addition to other individuals, new technologies, such as marketing's targeted ads and AI, have been qualified as creepy.
A study by Moore et al. described what aspects of marketing participants considered creepy. The main three reasons are the following: using invasive tactics, causing discomfort, and violating norms. Invasive tactics are practiced by marketers who know so much about the consumer that the ads are "creepily" personalized. Secondly, some ads create discomfort by making the consumer question "the motives of the company advertising the product". Finally, some ads violate social norms by having inappropriate content, for example by unnecessarily sexualizing it.
It is marketing's extensive knowledge used in an improper way, together with a certain loss of control over personal data, that creates a feeling of creepiness.
Another creepy aspect of technology is human-looking AI: this phenomenon is called the uncanny valley.
Humans find robots creepy when they start closely resembling humans. It has been hypothesized that the reason why they are viewed as creepy is because they violate our notion of how a robot should look. A study focusing on children's responses to this phenomenon found evidence to support the hypothesis.
Evolutionary explanation
Several studies have hypothesized that creepiness is an evolutionary response to potentially dangerous situations. It could be linked to a mechanism called agent detection which makes individuals expect malignant agents to be responsible for small changes in the environment. McAndrew et al. illustrates the idea with the example of a person hearing some noises while walking in a dark alley. That person would go in high alert, fearing that some dangerous individual was there. If that was not the case the loss would be small. If, on the other hand, a dangerous individual was actually in the alley and the person had not been alerted by this creepy feeling, the loss could have been significant.
Creepiness would therefore serve the purpose of alerting us in situations in which the danger is not outright obvious but rather ambiguous. In this case, ambiguity both refers to the possible presence of a threat and to its nature, sexual or physical for example. Creepiness "may reside in between the unknowing and the fear" in the sense that individuals experiencing it are unsure if there truly is something to fear or not. Creepy characteristics are not simply caused by threat potential: in fact, ectomorphic body types are not the most powerful bodies and facial expressions are not a proxy of physical strength either. Therefore, creepiness is not only related to how threatening a characteristic is, in the sense of how dangerous and strong the individual can be. There are more facets to consider.
Another characteristic of creepiness is unpredictable behavior. Unpredictability links back to this idea of ambiguity. When an individual is unpredictable it is not possible to tell when their behavior will turn violent: this adds to the ambiguity of a potentially dangerous situation. This theory is endorsed by studies. Not only is unpredictability directly listed as a creepy characteristic, but other behaviors, such as norm-breaking behaviors are indirectly linked with unpredictability. Such behaviors show that the individual does not conform to some social standards others would expect in a given situation. For example, the aforementioned staring at strangers or lack of hygiene—behaviors that make us uneasy or creeped out because they do not fit the norm and therefore are not expected. More generally, participants tended to define creepiness as "different" in the sense of not behaving, or looking, socially acceptable. Such differences point towards a "social mismatch".
Humans have a natural system for detecting such a mismatch: a physical feeling of coldness. When individuals are creeped out, they report feeling "cold chills". This phenomenon has been studied by Leander et al. in relation to nonverbal mimicry in social interactions, meaning the unintentional copying of another's behavior. Inappropriate mimicry may leave a person feeling like something is off about the other. Absence of non-verbal mimicry in a friendly interaction, or the presence of it in a professional setting, raises suspicion as it does not follow the relevant social norms. Individuals are left wondering what other unusual behavior the other might engage in.
See also
Ableism
Internet privacy
Social skills
References
General citations
Fear
Feeling
Emotions
Social media | Creepiness | Technology | 1,558 |
19,317,802 | https://en.wikipedia.org/wiki/Decision%20list | Decision lists are a representation for Boolean functions which can be easily learnable from examples. Single term decision lists are more expressive than disjunctions and conjunctions; however, 1-term decision lists are less expressive than the general disjunctive normal form and the conjunctive normal form.
The language specified by a k-length decision list includes as a subset the language specified by a k-depth decision tree.
Learning decision lists can be used for attribute efficient learning.
Definition
A decision list (DL) of length $r$ is of the form:

if $f_1$ then output $b_1$
else if $f_2$ then output $b_2$
...
else if $f_r$ then output $b_r$

where $f_i$ is the $i$th formula and $b_i$ is the $i$th boolean for $i = 1, \dots, r$. The last if-then-else is the default case, which means formula $f_r$ is always equal to true. A $k$-DL is a decision list where all of the formulas have at most $k$ terms. Sometimes "decision list" is used to refer to a 1-DL, where all of the formulas are either a variable or its negation.
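To make the evaluation semantics concrete, here is a minimal Python sketch of a decision list evaluator; representing formulas as callables on an assignment dictionary is an illustrative choice, not something specified by the source.

```python
from typing import Callable, List, Tuple

# A decision list: ordered (formula, output) pairs; the last formula is the
# constant-true default case.
DecisionList = List[Tuple[Callable[[dict], bool], bool]]

def evaluate(dl: DecisionList, x: dict) -> bool:
    """Return the output of the first rule whose formula is true on x."""
    for formula, output in dl:
        if formula(x):
            return output
    raise ValueError("no default case: the last formula must be constant true")

# Example 1-DL over variables x1, x2: each formula is a literal.
dl: DecisionList = [
    (lambda x: x["x1"], True),       # if x1 then output True
    (lambda x: not x["x2"], False),  # else if not x2 then output False
    (lambda x: True, True),          # default case
]

print(evaluate(dl, {"x1": False, "x2": False}))  # False (second rule fires)
print(evaluate(dl, {"x1": False, "x2": True}))   # True  (default case)
```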
See also
Decision stump
References
Machine learning | Decision list | Engineering | 213 |
77,609,886 | https://en.wikipedia.org/wiki/Indolylpropylaminopentane | Indolylpropylaminopentane (IPAP), also known as α,N-dipropyltryptamine (α,N-DPT), is a monoaminergic activity enhancer (MAE) that is closely related to benzofuranylpropylaminopentane (BPAP) and phenylpropylaminopentane (PPAP). It is a tryptamine derivative and the corresponding analogue of PPAP and BPAP with an indole ring instead of a benzene ring or benzofuran ring, respectively. IPAP is also a positional isomer of N,N-dipropyltryptamine (N,N-DPT).
MAEs are agents that enhance the action potential-mediated release of monoamine neurotransmitters. IPAP is a MAE of serotonin, norepinephrine, and dopamine. However, IPAP acts preferentially as a MAE of serotonin and is about 10-fold more potent in enhancing serotonin than in enhancing norepinephrine or dopamine. This is in contrast to BPAP, which is of similar potency as a MAE of serotonin and the catecholamines. It is also in contrast to PPAP and selegiline, which act exclusively as catecholaminergic activity enhancers (CAEs) and do not enhance serotonin. Hence, IPAP is a representative selective serotonergic activity enhancer (SAE) at lower doses.
IPAP is more potent as a MAE than PPAP and selegiline but is less potent than BPAP. As with BPAP and PPAP, the negative enantiomer (i.e., R(–)-IPAP) is more biologically active as a MAE and is often the employed compound. The effects of MAEs appear to be mediated by intracellular TAAR1 agonism coupled with uptake by monoamine transporters into monoaminergic neurons.
In contrast to amphetamines, IPAP has no classical monoamine releasing agent actions. It is a weak MAO-A inhibitor similarly to BPAP.
IPAP was first described in the scientific literature in 2001, following BPAP in 1999. It was discovered by József Knoll and colleagues.
References
Alpha-Alkyltryptamines
Drugs with unknown mechanisms of action
Enantiopure drugs
Experimental drugs
Monoaminergic activity enhancers | Indolylpropylaminopentane | Chemistry | 520 |
1,236,542 | https://en.wikipedia.org/wiki/Projective%20hierarchy | In the mathematical field of descriptive set theory, a subset $A$ of a Polish space $X$ is projective if it is $\boldsymbol{\Sigma}^1_n$ for some positive integer $n$. Here $A$ is
$\boldsymbol{\Sigma}^1_1$ if $A$ is analytic
$\boldsymbol{\Pi}^1_n$ if the complement of $A$, $X \setminus A$, is $\boldsymbol{\Sigma}^1_n$
$\boldsymbol{\Sigma}^1_{n+1}$ if there is a Polish space $Y$ and a $\boldsymbol{\Pi}^1_n$ subset $C \subseteq X \times Y$ such that $A$ is the projection of $C$ onto $X$; that is, $A = \{\, x \in X \mid \exists y \in Y\ (x,y) \in C \,\}$
The choice of the Polish space in the third clause above is not very important; it could be replaced in the definition by a fixed uncountable Polish space, say Baire space or Cantor space or the real line.
Relationship to the analytical hierarchy
There is a close relationship between the relativized analytical hierarchy on subsets of Baire space (denoted by lightface letters $\Sigma$ and $\Pi$) and the projective hierarchy on subsets of Baire space (denoted by boldface letters $\boldsymbol{\Sigma}$ and $\boldsymbol{\Pi}$). Not every $\boldsymbol{\Sigma}^1_n$ subset of Baire space is $\Sigma^1_n$. It is true, however, that if a subset X of Baire space is $\boldsymbol{\Sigma}^1_n$ then there is a set of natural numbers A such that X is $\Sigma^{1,A}_n$. A similar statement holds for $\boldsymbol{\Pi}^1_n$ sets. Thus the sets classified by the projective hierarchy are exactly the sets classified by the relativized version of the analytical hierarchy. This relationship is important in effective descriptive set theory. Stated in terms of definability, a set of reals is projective iff it is definable in the language of second-order arithmetic from some real parameter.
A similar relationship between the projective hierarchy and the relativized analytical hierarchy holds for subsets of Cantor space and, more generally, subsets of any effective Polish space.
See also
Borel hierarchy
References
Descriptive set theory
Mathematical logic hierarchies | Projective hierarchy | Mathematics | 324 |
3,399,064 | https://en.wikipedia.org/wiki/Total%20air%20temperature | In aviation, stagnation temperature is known as total air temperature and is measured by a temperature probe mounted on the surface of the aircraft. The probe is designed to bring the air to rest relative to the aircraft. As the air is brought to rest, kinetic energy is converted to internal energy. The air is compressed and experiences an adiabatic increase in temperature. Therefore, total air temperature is higher than the static (or ambient) air temperature.
Total air temperature is an essential input to an air data computer in order to enable the computation of static air temperature and hence true airspeed.
The relationship between static and total air temperatures is given by:

$\dfrac{T_{\text{total}}}{T_s} = 1 + \dfrac{\gamma - 1}{2}\,M^2 \qquad (1)$

where:

$T_s$ = static air temperature, SAT (kelvins or degrees Rankine)
$T_{\text{total}}$ = total air temperature, TAT (kelvins or degrees Rankine)
$M$ = Mach number
$\gamma$ = ratio of specific heats, approx 1.400 for dry air

In practice, the total air temperature probe will not perfectly recover the energy of the airflow, and the temperature rise may not be entirely due to the adiabatic process. In this case, an empirical recovery factor (less than 1) may be introduced to compensate:

$\dfrac{T_{\text{total}}}{T_s} = 1 + e\,\dfrac{\gamma - 1}{2}\,M^2 \qquad (2)$

where $e$ is the recovery factor (also noted $C_t$)
Typical recovery factors
Platinum wire ratiometer thermometer ("flush bulb type"): e ≈ 0.75 − 0.9
Double platinum tube ratiometer thermometer ("TAT probe"): e ≈ 1
Other notations
Total air temperature (TAT) is also called: indicated air temperature (IAT) or ram air temperature (RAT)
Static air temperature (SAT) is also called: outside air temperature (OAT) or true air temperature
Ram rise
The difference between TAT and SAT is called ram rise (RR) and is caused by compressibility and friction of the air at high velocities.
In practice the ram rise is negligible for aircraft flying at (true) airspeeds under Mach 0.2. For airspeeds (TAS) over Mach 0.2, as airspeed increases the temperature exceeds that of still air. This is caused by a combination of kinetic (friction) heating and adiabatic compression.
Kinetic heating. As the airspeed increases, more and more molecules of air per second hit the aircraft. This causes a temperature rise in the direct-reading thermometer probe of the aircraft due to friction. Because the airflow is treated as compressible and isentropic, which, by definition, is adiabatic and reversible, the equations used in this article do not take account of friction heating. This is why the calculation of static air temperature requires the use of the recovery factor, $e$. Kinetic heating for modern passenger jets is almost negligible.
Adiabatic compression. As described above, this is caused by a conversion of energy and not by direct application of heat. At airspeeds over Mach 0.2, in the Remote Reading temperature probe (TAT-probe), the outside airflow, which may be several hundred knots, is brought virtually to rest very rapidly. The energy (Specific Kinetic Energy) of the moving air is then released (converted) in the form of a temperature rise (Specific Enthalpy). Energy cannot be destroyed but only transformed; this means that according to the first law of thermodynamics, the total energy of an isolated system must remain constant.
The total of kinetic heating and adiabatic temperature change (caused by adiabatic compression) is the Total Ram Rise.
Combining equations (1) and (2), we get:

$\mathrm{RR} = T_{\text{total}} - T_s = e\,T_s\,\dfrac{\gamma - 1}{2}\,M^2$

If we use the Mach number equation for dry air:

$M = \dfrac{V}{a}$

where $a = \sqrt{\gamma\,R_{sp}\,T_s}$ is the local speed of sound, we get

$\mathrm{RR} = \dfrac{e\,(\gamma - 1)\,V^2}{2\,\gamma\,R_{sp}}$

Which can be simplified to:

$\mathrm{RR} = \dfrac{e\,V^2}{2\,c_p} \qquad (3)$

by using $c_p = \dfrac{\gamma\,R_{sp}}{\gamma - 1}$ and

$a$ = local speed of sound.
$\gamma$ = adiabatic index (ratio of heat capacities), assumed for aviation purposes to be 7/5 = 1.400.
$R_{sp}$ = specific gas constant. The approximate value of $R_{sp}$ for dry air is 286.9 J·kg⁻¹·K⁻¹.
$c_p$ = heat capacity constant for constant pressure.
$c_v$ = heat capacity constant for constant volume.
$T_s$ = static air temperature, SAT, measured in kelvins.
$V$ = true airspeed of the aircraft, TAS.
$e$ = recovery factor, which has an approximate value of 0.98, typical for a modern TAT-probe.

By solving (3) for the above values with TAS in knots, a simple accurate formula for the ram rise is then:

$\mathrm{RR}\,[\text{K}] \approx \left(\dfrac{V\,[\text{knots}]}{88}\right)^2$
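As a quick numerical check of these relations, the following Python sketch computes TAT from SAT and Mach number, and the ram rise from true airspeed; the function names and the sample flight condition are illustrative, not from the source.

```python
GAMMA = 1.4         # ratio of specific heats for dry air
R_SP = 286.9        # specific gas constant for dry air, J/(kg*K)
C_P = GAMMA * R_SP / (GAMMA - 1)   # ~1004 J/(kg*K)
KNOT = 0.514444     # metres per second in one knot

def total_air_temperature(sat_k: float, mach: float, e: float = 0.98) -> float:
    """TAT in kelvins from SAT (K), Mach number and probe recovery factor (eq. 2)."""
    return sat_k * (1 + e * (GAMMA - 1) / 2 * mach**2)

def ram_rise(tas_knots: float, e: float = 0.98) -> float:
    """Ram rise in kelvins from true airspeed in knots (eq. 3)."""
    v = tas_knots * KNOT
    return e * v**2 / (2 * C_P)

# Cruise example: SAT -56.5 C (216.65 K) at Mach 0.82
tat = total_air_temperature(216.65, 0.82)
print(f"TAT = {tat:.1f} K ({tat - 273.15:.1f} C)")        # ~245 K, about -28 C
print(f"Ram rise at 470 kt TAS = {ram_rise(470):.1f} K")  # ~(470/88)^2 = ~28.5 K
```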
See also
Stagnation point
Stagnation temperature
Outside air temperature
Mach number
Speed of sound
Adiabatic process
Isentropic process
Specific enthalpy
External links
In-Flight Temperature Measurements
Measurement of Temperature on Aircraft
TAT Sensor Operation and Equations
TAT Sensor Heater Error Effect
High speed flight - Viscous Interaction
Atmospheric thermodynamics
Aircraft instruments
Atmospheric temperature | Total air temperature | Technology,Engineering | 956 |
68,186,603 | https://en.wikipedia.org/wiki/Mike%20Short | Michael John Short CBE FREng FIET (born 19 July 1953) is a British telecommunications engineer and businessman. He helped to get the mobile telecommunications industry off the ground in the UK, when head of technology at Cellnet, and since 2017 has been Chief Scientific Adviser at the Department for International Trade (DIT).
Early life
He was born in Surrey. He lived abroad and attended several foreign schools, including a French secondary school (he speaks fluent French), and later attended Vyners Grammar School in west London (formerly Middlesex). Because he changed schools, he did not achieve the O-levels required to study Physics and Double Maths at A-level, so he chose Pure and Applied Maths, Economics and Geography instead.
He gained a degree in Economics and Maths, and was treasurer of his student union in his second year.
Career
Mobile telecommunications
He worked in the research and development site of BT.
He became head of technology at Cellnet, where in 1998 he was responsible for negotiating with other mobile telecommunications companies to allow text messages to be sent across networks, and not simply to customers on their own individual network.
IET
He was president from 2011 to 2012 of the IET.
DIT
He became the first Chief Scientific Adviser at the DIT in December 2017.
Personal life
He lives in west London, near the M4.
He was awarded the CBE in the 2012 Birthday Honours.
See also
Peter Erskine (businessman)
Institute for Communication Systems (ICS, former Centre for Communications Systems Research) at the University of Surrey
References
External links
DIT biography
1953 births
British telecommunications engineers
British telecommunications industry businesspeople
Commanders of the Order of the British Empire
Department for International Trade
Fellows of the Institute of Engineering and Technology
Fellows of the Royal Academy of Engineering
History of mobile telecommunications in the United Kingdom
O2 (UK)
Living people | Mike Short | Technology | 373 |
197,767 | https://en.wikipedia.org/wiki/Radioactive%20decay | Radioactive decay (also known as nuclear decay, radioactivity, radioactive disintegration, or nuclear disintegration) is the process by which an unstable atomic nucleus loses energy by radiation. A material containing unstable nuclei is considered radioactive. Three of the most common types of decay are alpha, beta, and gamma decay. The weak force is the mechanism that is responsible for beta decay, while the other two are governed by the electromagnetic and nuclear forces.
Radioactive decay is a random process at the level of single atoms. According to quantum theory, it is impossible to predict when a particular atom will decay, regardless of how long the atom has existed. However, for a significant number of identical atoms, the overall decay rate can be expressed as a decay constant or as a half-life. The half-lives of radioactive atoms have a huge range: from nearly instantaneous to far longer than the age of the universe.
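Concretely, the surviving fraction of a large population of identical nuclei follows N(t) = N₀·e^(−λt), where the decay constant λ = ln 2 / t½ is set by the half-life. A minimal Python sketch (the choice of cobalt-60 as the example isotope is just for illustration):

```python
import math

def remaining_fraction(t: float, half_life: float) -> float:
    """Fraction of an initial population of nuclei surviving after time t.

    t and half_life must be in the same units.
    """
    decay_constant = math.log(2) / half_life   # lambda = ln(2) / t_half
    return math.exp(-decay_constant * t)

# Example: cobalt-60 has a half-life of about 5.27 years
for years in (5.27, 10.54, 20.0):
    print(f"after {years:5.2f} y: {remaining_fraction(years, 5.27):.3f} remaining")
# after  5.27 y: 0.500 remaining
# after 10.54 y: 0.250 remaining
# after 20.00 y: 0.072 remaining
```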
The decaying nucleus is called the parent radionuclide (or parent radioisotope), and the process produces at least one daughter nuclide. Except for gamma decay or internal conversion from a nuclear excited state, the decay is a nuclear transmutation resulting in a daughter containing a different number of protons or neutrons (or both). When the number of protons changes, an atom of a different chemical element is created.
There are 28 naturally occurring chemical elements on Earth that are radioactive, consisting of 35 radionuclides (seven elements have two different radionuclides each) that date before the time of formation of the Solar System. These 35 are known as primordial radionuclides. Well-known examples are uranium and thorium, but also included are naturally occurring long-lived radioisotopes, such as potassium-40. Each of the heavy primordial radionuclides participates in one of the four decay chains.
History of discovery
Henri Poincaré laid the seeds for the discovery of radioactivity through his interest in and studies of X-rays, which significantly influenced physicist Henri Becquerel. Radioactivity was discovered in 1896 by Becquerel and independently by Marie Curie, while working with phosphorescent materials. These materials glow in the dark after exposure to light, and Becquerel suspected that the glow produced in cathode-ray tubes by X-rays might be associated with phosphorescence. He wrapped a photographic plate in black paper and placed various phosphorescent salts on it. All results were negative until he used uranium salts. The uranium salts caused a blackening of the plate in spite of the plate being wrapped in black paper. These radiations were given the name "Becquerel Rays".
It soon became clear that the blackening of the plate had nothing to do with phosphorescence, as the blackening was also produced by non-phosphorescent salts of uranium and by metallic uranium. It became clear from these experiments that there was a form of invisible radiation that could pass through paper and was causing the plate to react as if exposed to light.
At first, it seemed as though the new radiation was similar to the then recently discovered X-rays. Further research by Becquerel, Ernest Rutherford, Paul Villard, Pierre Curie, Marie Curie, and others showed that this form of radioactivity was significantly more complicated. Rutherford was the first to realize that all such elements decay in accordance with the same mathematical exponential formula. Rutherford and his student Frederick Soddy were the first to realize that many decay processes resulted in the transmutation of one element to another. Subsequently, the radioactive displacement law of Fajans and Soddy was formulated to describe the products of alpha and beta decay.
The early researchers also discovered that many other chemical elements, besides uranium, have radioactive isotopes. A systematic search for the total radioactivity in uranium ores also guided Pierre and Marie Curie to isolate two new elements: polonium and radium. Except for the radioactivity of radium, the chemical similarity of radium to barium made these two elements difficult to distinguish.
Marie and Pierre Curie's study of radioactivity is an important factor in science and medicine. After their research on Becquerel's rays led them to the discovery of both radium and polonium, they coined the term "radioactivity" to define the emission of ionizing radiation by some heavy elements. (Later the term was generalized to all elements.) Their research on the penetrating rays in uranium and the discovery of radium launched an era of using radium for the treatment of cancer. Their exploration of radium could be seen as the first peaceful use of nuclear energy and the start of modern nuclear medicine.
Early health dangers
The dangers of ionizing radiation due to radioactivity and X-rays were not immediately recognized.
X-rays
The discovery of X‑rays by Wilhelm Röntgen in 1895 led to widespread experimentation by scientists, physicians, and inventors. Many people began recounting stories of burns, hair loss and worse in technical journals as early as 1896. In February of that year, Professor Daniel and Dr. Dudley of Vanderbilt University performed an experiment involving X-raying Dudley's head that resulted in his hair loss. A report by Dr. H.D. Hawks, of his suffering severe hand and chest burns in an X-ray demonstration, was the first of many other reports in Electrical Review.
Other experimenters, including Elihu Thomson and Nikola Tesla, also reported burns. Thomson deliberately exposed a finger to an X-ray tube over a period of time and suffered pain, swelling, and blistering. Other effects, including ultraviolet rays and ozone, were sometimes blamed for the damage, and many physicians still claimed that there were no effects from X-ray exposure at all.
Despite this, there were some early systematic hazard investigations, and as early as 1902 William Herbert Rollins wrote almost despairingly that his warnings about the dangers involved in the careless use of X-rays were not being heeded, either by industry or by his colleagues. By this time, Rollins had proved that X-rays could kill experimental animals, could cause a pregnant guinea pig to abort, and that they could kill a foetus. He also stressed that "animals vary in susceptibility to the external action of X-light" and warned that these differences be considered when patients were treated by means of X-rays.
Radioactive substances
However, the biological effects of radiation due to radioactive substances were less easy to gauge. This gave the opportunity for many physicians and corporations to market radioactive substances as patent medicines. Examples were radium enema treatments, and radium-containing waters to be drunk as tonics. Marie Curie protested against this sort of treatment, warning that "radium is dangerous in untrained hands". Curie later died from aplastic anaemia, likely caused by exposure to ionizing radiation. By the 1930s, after a number of cases of bone necrosis and death of radium treatment enthusiasts, radium-containing medicinal products had been largely removed from the market (radioactive quackery).
Radiation protection
Only a year after Röntgen's discovery of X-rays, the American engineer Wolfram Fuchs (1896) gave what is probably the first protection advice, but it was not until 1925 that the first International Congress of Radiology (ICR) was held and considered establishing international protection standards. The effects of radiation on genes, including the effect of cancer risk, were recognized much later. In 1927, Hermann Joseph Muller published research showing genetic effects and, in 1946, was awarded the Nobel Prize in Physiology or Medicine for his findings.
The second ICR was held in Stockholm in 1928 and proposed the adoption of the röntgen unit, and the International X-ray and Radium Protection Committee (IXRPC) was formed. Rolf Sievert was named chairman, but a driving force was George Kaye of the British National Physical Laboratory. The committee met in 1931, 1934, and 1937.
After World War II, the increased range and quantity of radioactive substances being handled as a result of military and civil nuclear programs led to large groups of occupational workers and the public being potentially exposed to harmful levels of ionising radiation. This was considered at the first post-war ICR convened in London in 1950, when the present International Commission on Radiological Protection (ICRP) was born.
Since then the ICRP has developed the present international system of radiation protection, covering all aspects of radiation hazards.
In 2020, Hauptmann and another 15 international researchers from eight nations (among them: Institutes of Biostatistics, Registry Research, Centers of Cancer Epidemiology, Radiation Epidemiology, and also the U.S. National Cancer Institute (NCI), International Agency for Research on Cancer (IARC) and the Radiation Effects Research Foundation of Hiroshima) studied definitively through meta-analysis the damage resulting from the "low doses" that have afflicted survivors of the atomic bombings of Hiroshima and Nagasaki and also in numerous accidents at nuclear plants that have occurred. These scientists reported, in JNCI Monographs: Epidemiological Studies of Low Dose Ionizing Radiation and Cancer Risk, that the new epidemiological studies directly support excess cancer risks from low-dose ionizing radiation. In 2021, Italian researcher Sebastiano Venturi reported the first correlations between radio-caesium and pancreatic cancer with the role of caesium in biology, in pancreatitis and in diabetes of pancreatic origin.
Units
The International System of Units (SI) unit of radioactive activity is the becquerel (Bq), named in honor of the scientist Henri Becquerel. One Bq is defined as one transformation (or decay or disintegration) per second.
An older unit of radioactivity is the curie, Ci, which was originally defined as "the quantity or mass of radium emanation in equilibrium with one gram of radium (element)". Today, the curie is defined as 3.7 × 10¹⁰ disintegrations per second, so that 1 curie (Ci) = 3.7 × 10¹⁰ Bq.
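As a worked example tying the units to the decay constant (a minimal sketch; the sample quantities are illustrative), the activity of a pure sample follows the standard relation A = λN and can be expressed in both becquerels and curies:

```python
import math

AVOGADRO = 6.022e23
CI_IN_BQ = 3.7e10   # 1 curie in becquerels

def activity_bq(mass_g: float, molar_mass: float, half_life_s: float) -> float:
    """Activity A = lambda * N of a pure sample, in becquerels."""
    n_atoms = mass_g / molar_mass * AVOGADRO
    decay_constant = math.log(2) / half_life_s
    return decay_constant * n_atoms

# 1 gram of radium-226 (molar mass ~226 g/mol, half-life ~1600 years)
a = activity_bq(1.0, 226.0, 1600 * 365.25 * 24 * 3600)
print(f"{a:.2e} Bq = {a / CI_IN_BQ:.2f} Ci")
# ~3.7e10 Bq, i.e. ~1 Ci, matching the historical one-gram-of-radium definition
```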
For radiological protection purposes, although the United States Nuclear Regulatory Commission permits the use of the unit curie alongside SI units, the European Union's units of measurement directives required that its use for "public health ... purposes" be phased out by 31 December 1985.
The effects of ionizing radiation are often measured in units of gray, for absorbed energy, or sievert, for damage to tissue.
Types
Radioactive decay results in a reduction of summed rest mass, once the released energy (the disintegration energy) has escaped in some way. Although decay energy is sometimes defined as the difference between the mass of the parent nuclide and the mass of the decay products, this is true only of rest mass measurements, where some energy has been removed from the product system. This is true because the decay energy must always carry mass with it, wherever it appears (see mass in special relativity) according to the formula E = mc². The decay energy is initially released as the energy of emitted photons plus the kinetic energy of massive emitted particles (that is, particles that have rest mass). If these particles come to thermal equilibrium with their surroundings and photons are absorbed, then the decay energy is transformed to thermal energy, which retains its mass.
Decay energy, therefore, remains associated with a certain measure of the mass of the decay system, called invariant mass, which does not change during the decay, even though the energy of decay is distributed among decay particles. The energy of photons, the kinetic energy of emitted particles, and, later, the thermal energy of the surrounding matter, all contribute to the invariant mass of the system. Thus, while the sum of the rest masses of the particles is not conserved in radioactive decay, the system mass and system invariant mass (and also the system total energy) is conserved throughout any decay process. This is a restatement of the equivalent laws of conservation of energy and conservation of mass.
Alpha, beta and gamma decay
Early researchers found that an electric or magnetic field could split radioactive emissions into three types of beams. The rays were given the names alpha, beta, and gamma, in increasing order of their ability to penetrate matter. Alpha decay is observed only in heavier elements of atomic number 52 (tellurium) and greater, with the exception of beryllium-8 (which decays to two alpha particles). The other two types of decay are observed in all the elements. Lead, atomic number 82, is the heaviest element to have any isotopes stable (to the limit of measurement) to radioactive decay. Radioactive decay is seen in all isotopes of all elements of atomic number 83 (bismuth) or greater. Bismuth-209, however, is only very slightly radioactive, with a half-life greater than the age of the universe; radioisotopes with extremely long half-lives are considered effectively stable for practical purposes.
In analyzing the nature of the decay products, it was obvious from the direction of the electromagnetic forces applied to the radiations by external magnetic and electric fields that alpha particles carried a positive charge, beta particles carried a negative charge, and gamma rays were neutral. From the magnitude of deflection, it was clear that alpha particles were much more massive than beta particles. Passing alpha particles through a very thin glass window and trapping them in a discharge tube allowed researchers to study the emission spectrum of the captured particles, and ultimately proved that alpha particles are helium nuclei. Other experiments showed beta radiation, resulting from decay and cathode rays, were high-speed electrons. Likewise, gamma radiation and X-rays were found to be high-energy electromagnetic radiation.
The relationship between the types of decays also began to be examined: For example, gamma decay was almost always found to be associated with other types of decay, and occurred at about the same time, or afterwards. Gamma decay as a separate phenomenon, with its own half-life (now termed isomeric transition), was found in natural radioactivity to be a result of the gamma decay of excited metastable nuclear isomers, which were in turn created from other types of decay. Although alpha, beta, and gamma radiations were most commonly found, other types of emission were eventually discovered. Shortly after the discovery of the positron in cosmic ray products, it was realized that the same process that operates in classical beta decay can also produce positrons (positron emission), along with neutrinos (classical beta decay produces antineutrinos).
Electron capture
In electron capture, some proton-rich nuclides were found to capture their own atomic electrons instead of emitting positrons, and subsequently, these nuclides emit only a neutrino and a gamma ray from the excited nucleus (and often also Auger electrons and characteristic X-rays, as a result of the re-ordering of electrons to fill the place of the missing captured electron). These types of decay involve the nuclear capture of electrons or emission of electrons or positrons, and thus act to move a nucleus toward the ratio of neutrons to protons that has the least energy for a given total number of nucleons. This consequently produces a more stable (lower energy) nucleus.
A hypothetical process of positron capture, analogous to electron capture, is theoretically possible in antimatter atoms, but has not been observed, as complex antimatter atoms beyond antihelium are not experimentally available. Such a decay would require antimatter atoms at least as complex as beryllium-7, which is the lightest known isotope of normal matter to undergo decay by electron capture.
Nucleon emission
Shortly after the discovery of the neutron in 1932, Enrico Fermi realized that certain rare beta-decay reactions immediately yield neutrons as an additional decay particle, so called beta-delayed neutron emission. Neutron emission usually happens from nuclei that are in an excited state, such as the excited 17O* produced from the beta decay of 17N. The neutron emission process itself is controlled by the nuclear force and therefore is extremely fast, sometimes referred to as "nearly instantaneous". Isolated proton emission was eventually observed in some elements. It was also found that some heavy elements may undergo spontaneous fission into products that vary in composition. In a phenomenon called cluster decay, specific combinations of neutrons and protons other than alpha particles (helium nuclei) were found to be spontaneously emitted from atoms.
More exotic types of decay
Other types of radioactive decay were found to emit previously seen particles but via different mechanisms. An example is internal conversion, which results in an initial electron emission, and then often further characteristic X-rays and Auger electrons emissions, although the internal conversion process involves neither beta nor gamma decay. A neutrino is not emitted, and none of the electron(s) and photon(s) emitted originate in the nucleus, even though the energy to emit all of them does originate there. Internal conversion decay, like isomeric transition gamma decay and neutron emission, involves the release of energy by an excited nuclide, without the transmutation of one element into another.
Rare events that involve a combination of two beta-decay-type events happening simultaneously are known (see below). Any decay process that does not violate the conservation of energy or momentum laws (and perhaps other particle conservation laws) is permitted to happen, although not all have been detected. An interesting example, discussed in a final section, is bound state beta decay of rhenium-187. In this process, the beta electron-decay of the parent nuclide is not accompanied by beta electron emission, because the beta particle has been captured into the K-shell of the emitting atom. An antineutrino is emitted, as in all negative beta decays.
If energy circumstances are favorable, a given radionuclide may undergo many competing types of decay, with some atoms decaying by one route, and others decaying by another. An example is copper-64, which has 29 protons, and 35 neutrons, which decays with a half-life of about 12.7 hours. This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay to the other particle, which has opposite isospin. This particular nuclide (though not all nuclides in this situation) is more likely to decay through beta plus decay than through electron capture. The excited energy states resulting from these decays which fail to end in a ground energy state also produce later internal conversion and gamma decay in almost 0.5% of decays.
List of decay modes
Decay chains and multiple modes
The daughter nuclide of a decay event may also be unstable (radioactive). In this case, it too will decay, producing radiation. The resulting second daughter nuclide may also be radioactive. This can lead to a sequence of several decay events called a decay chain (see this article for specific details of important natural decay chains). Eventually, a stable nuclide is produced. Any decay daughters that are the result of an alpha decay will also result in helium atoms being created.
Some radionuclides may have several different paths of decay. For example, about 36% of bismuth-212 decays, through alpha-emission, to thallium-208 while about 64% of bismuth-212 decays, through beta-emission, to polonium-212. Both thallium-208 and polonium-212 are radioactive daughter products of bismuth-212, and both decay directly to stable lead-208.
Occurrence and applications
According to the Big Bang theory, stable isotopes of the lightest three elements (H, He, and traces of Li) were produced very shortly after the emergence of the universe, in a process called Big Bang nucleosynthesis. These lightest stable nuclides (including deuterium) survive to today, but any radioactive isotopes of the light elements produced in the Big Bang (such as tritium) have long since decayed. Isotopes of elements heavier than boron were not produced at all in the Big Bang, and these first five elements do not have any long-lived radioisotopes. Thus, all radioactive nuclei are relatively young with respect to the birth of the universe, having formed later in various other types of nucleosynthesis in stars (in particular, supernovae), and also during ongoing interactions between stable isotopes and energetic particles. For example, carbon-14, a radioactive nuclide with a half-life of only 5,730 years, is constantly produced in Earth's upper atmosphere due to interactions between cosmic rays and nitrogen.
Nuclides that are produced by radioactive decay are called radiogenic nuclides, whether they themselves are stable or not. There exist stable radiogenic nuclides that were formed from short-lived extinct radionuclides in the early Solar System. The extra presence of these stable radiogenic nuclides (such as xenon-129 from extinct iodine-129) against the background of primordial stable nuclides can be inferred by various means.
Radioactive decay has been put to use in the technique of radioisotopic labeling, which is used to track the passage of a chemical substance through a complex system (such as a living organism). A sample of the substance is synthesized with a high concentration of unstable atoms. The presence of the substance in one or another part of the system is determined by detecting the locations of decay events.
On the premise that radioactive decay is truly random (rather than merely chaotic), it has been used in hardware random-number generators. Because the process is not thought to vary significantly in mechanism over time, it is also a valuable tool in estimating the absolute ages of certain materials. For geological materials, the radioisotopes and some of their decay products become trapped when a rock solidifies, and can then later be used (subject to many well-known qualifications) to estimate the date of the solidification. These include checking the results of several simultaneous processes and their products against each other, within the same sample. In a similar fashion, and also subject to qualification, given the rate of formation of carbon-14 in various eras, the date of formation of organic matter within a certain period related to the isotope's half-life may be estimated, because the carbon-14 becomes trapped when the organic matter grows and incorporates the new carbon-14 from the air. Thereafter, the amount of carbon-14 in organic matter decreases according to decay processes that may also be independently cross-checked by other means (such as checking the carbon-14 in individual tree rings, for example).
Szilard–Chalmers effect
The Szilard–Chalmers effect is the breaking of a chemical bond as a result of a kinetic energy imparted from radioactive decay. It operates by the absorption of neutrons by an atom and subsequent emission of gamma rays, often with significant amounts of kinetic energy. This kinetic energy, by Newton's third law, pushes back on the decaying atom, which causes it to move with enough speed to break a chemical bond. This effect can be used to separate isotopes by chemical means.
The Szilard–Chalmers effect was discovered in 1934 by Leó Szilárd and Thomas A. Chalmers. They observed that after bombardment by neutrons, the breaking of a bond in liquid ethyl iodide allowed radioactive iodine to be removed.
Origins of radioactive nuclides
Radioactive primordial nuclides found in the Earth are residues from ancient supernova explosions that occurred before the formation of the Solar System. They are the fraction of radionuclides that survived from that time, through the formation of the primordial solar nebula, through planet accretion, and up to the present time. The naturally occurring short-lived radiogenic radionuclides found in today's rocks, are the daughters of those radioactive primordial nuclides. Another minor source of naturally occurring radioactive nuclides are cosmogenic nuclides, that are formed by cosmic ray bombardment of material in the Earth's atmosphere or crust. The decay of the radionuclides in rocks of the Earth's mantle and crust contribute significantly to Earth's internal heat budget.
Aggregate processes
While the underlying process of radioactive decay is subatomic, historically and in most practical cases it is encountered in bulk materials with very large numbers of atoms. This section discusses models that connect events at the atomic level to observations in aggregate.
Terminology
The decay rate, or activity, of a radioactive substance is characterized by the following time-independent parameters:
The half-life, t1/2, is the time taken for the activity of a given amount of a radioactive substance to decay to half of its initial value.
The decay constant, λ ("lambda"), the reciprocal of the mean lifetime (in s^−1), sometimes referred to as simply the decay rate.
The mean lifetime, τ ("tau"), the average lifetime (1/e life) of a radioactive particle before decay.
Although these are constants, they are associated with the statistical behavior of populations of atoms. In consequence, predictions using these constants are less accurate for minuscule samples of atoms.
In principle a half-life, a third-life, or even a (1/√2)-life, could be used in exactly the same way as half-life; but the mean life and half-life have been adopted as standard times associated with exponential decay.
Those parameters can be related to the following time-dependent parameters:
Total activity (or just activity), A, is the number of decays per unit time of a radioactive sample.
Number of particles, N, in the sample.
Specific activity, SA, is the number of decays per unit time per amount of substance of the sample at time set to zero (t = 0). "Amount of substance" can be the mass, volume or moles of the initial sample.
These are related as follows:

A = −dN/dt = λN

SA = A(t = 0)/a0 = λN0/a0

where N0 is the initial amount of active substance — substance that has the same percentage of unstable particles as when the substance was formed — and a0 is the initial amount of the sample (in mass, volume, or moles).
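As a concrete numerical illustration of A = λN (a minimal sketch, not from the source: the isotope, the microgram sample size, and the cobalt-60 half-life of about 5.27 years are chosen only for the example):

import math

T_HALF = 5.27 * 365.25 * 24 * 3600        # cobalt-60 half-life, seconds
LAMBDA = math.log(2) / T_HALF             # decay constant, 1/s
N0 = 1e-6 / 59.93 * 6.02214076e23         # nuclei in 1 microgram of Co-60

def activity(t):
    """A(t) = lambda * N(t) = lambda * N0 * exp(-lambda * t), in becquerels."""
    return LAMBDA * N0 * math.exp(-LAMBDA * t)

print(activity(0.0))                      # roughly 4e7 decays per second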
Assumptions
The mathematics of radioactive decay depend on a key assumption that a nucleus of a radionuclide has no "memory" or way of translating its history into its present behavior. A nucleus does not "age" with the passage of time. Thus, the probability of its breaking down does not increase with time but stays constant, no matter how long the nucleus has existed. This constant probability may differ greatly between one type of nucleus and another, leading to the many different observed decay rates. However, whatever the probability is, it does not change over time. This is in marked contrast to complex objects that do show aging, such as automobiles and humans. These aging systems do have a chance of breakdown per unit of time that increases from the moment they begin their existence.
Aggregate processes, like the radioactive decay of a lump of atoms, for which the single-event probability of realization is very small but in which the number of time-slices is so large that there is nevertheless a reasonable rate of events, are modelled by the Poisson distribution, which is discrete. Radioactive decay and nuclear particle reactions are two examples of such aggregate processes. The mathematics of Poisson processes reduce to the law of exponential decay, which describes the statistical behaviour of a large number of nuclei, rather than one individual nucleus. In the following formalism, the number of nuclei or the nuclei population N, is of course a discrete variable (a natural number)—but for any physical sample N is so large that it can be treated as a continuous variable. Differential calculus is used to model the behaviour of nuclear decay.
One-decay process
Consider the case of a nuclide A that decays into another B by some process A → B (emission of other particles, like electron neutrinos and electrons e− as in beta decay, are irrelevant in what follows). The decay of an unstable nucleus is entirely random in time so it is impossible to predict when a particular atom will decay. However, it is equally likely to decay at any instant in time. Therefore, given a sample of a particular radioisotope, the number of decay events −dN expected to occur in a small interval of time dt is proportional to the number of atoms present N, that is

−(dN/dt) ∝ N

Particular radionuclides decay at different rates, so each has its own decay constant λ. The expected decay −dN/N is proportional to an increment of time, dt:

−dN/N = λ dt

The negative sign indicates that N decreases as time increases, as the decay events follow one after another. The solution to this first-order differential equation is the function:

N = N0 e^(−λt)

where N0 is the value of N at time t = 0, with the decay constant expressed as λ.

We have for all time t:

NA + NB = Ntotal = NA0,

where Ntotal is the constant number of particles throughout the decay process, which is equal to the initial number of A nuclides since A is the initial substance.

If the number of non-decayed A nuclei is:

NA = NA0 e^(−λt)

then the number of nuclei of B (i.e. the number of decayed A nuclei) is

NB = NA0 − NA = NA0 (1 − e^(−λt)).

The number of decays observed over a given interval obeys Poisson statistics. If the average number of decays is ⟨N⟩, the probability of a given number N of decays is

P(N) = ⟨N⟩^N e^(−⟨N⟩) / N!
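Because the counts are Poisson-distributed, repeated fixed-length measurements scatter around the mean. A short simulation makes this concrete (an illustrative sketch; the mean of 7.3 decays per interval is an arbitrary choice):

import math, random

def poisson_sample(mean):
    """Draw one Poisson-distributed count (Knuth's multiplication method)."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

mean_decays = 7.3                          # expected decays per counting interval
print([poisson_sample(mean_decays) for _ in range(10)])   # counts scattered around 7.3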
Chain-decay processes
Chain of two decays
Now consider the case of a chain of two decays: one nuclide A decaying into another B by one process, then B decaying into another C by a second process, i.e. A → B → C. The previous equation cannot be applied to the decay chain, but can be generalized as follows. Since A decays into B, then B decays into C, the activity of A adds to the total number of B nuclides in the present sample, before those B nuclides decay and reduce the number of nuclides leading to the later sample. In other words, the number of second generation nuclei B increases as a result of the first generation nuclei decay of A, and decreases as a result of its own decay into the third generation nuclei C. The sum of these two terms gives the law for a decay chain for two nuclides:

dNB/dt = −λB NB + λA NA

The rate of change of NB, that is dNB/dt, is related to the changes in the amounts of A and B; NB can increase as B is produced from A and decrease as B produces C.

Re-writing using the previous results:

dNB/dt = −λB NB + λA NA0 e^(−λA t)

The subscripts simply refer to the respective nuclides, i.e. NA is the number of nuclides of type A; NA0 is the initial number of nuclides of type A; λA is the decay constant for A – and similarly for nuclide B. Solving this equation for NB gives:

NB = (λA / (λB − λA)) NA0 (e^(−λA t) − e^(−λB t))

In the case where B is a stable nuclide (λB = 0), this equation reduces to the previous solution:

NB = NA0 (1 − e^(−λA t))

as shown above for one decay. The solution can be found by the integration factor method, where the integrating factor is e^(λB t). This case is perhaps the most useful since it can derive both the one-decay equation (above) and the equation for multi-decay chains (below) more directly.
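Evaluating this closed-form solution numerically (a sketch with made-up decay constants; it is valid only for λA ≠ λB) shows the characteristic rise and fall of the daughter population:

import math

def daughter_population(t, n_a0, lam_a, lam_b):
    """N_B(t) for the chain A -> B -> C, with N_B(0) = 0 and lam_a != lam_b."""
    return n_a0 * lam_a / (lam_b - lam_a) * (math.exp(-lam_a * t) - math.exp(-lam_b * t))

lam_a = math.log(2) / 10.0                 # parent half-life: 10 hours
lam_b = math.log(2) / 1.0                  # daughter half-life: 1 hour
for t in (0.5, 1.0, 5.0, 20.0):            # time in hours
    print(t, daughter_population(t, 1e20, lam_a, lam_b))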
Chain of any number of decays
For the general case of any number of consecutive decays in a decay chain, i.e. A1 → A2 → ··· → Ai → ··· → AD, where D is the number of decays and i is a dummy index (i = 1, 2, 3, ..., D), each nuclide population can be found in terms of the previous population. In this case N2 = 0, N3 = 0, ..., ND = 0 at t = 0. Using the above result in a recursive form:

dNj/dt = −λj Nj + λj−1 Nj−1

The general solution to the recursive problem is given by Bateman's equations:

ND = (N1(0)/λD) × Σ (i = 1 to D) [ λi ci e^(−λi t) ],  with  ci = Π (j = 1 to D, j ≠ i) λj/(λj − λi)
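Bateman's solution is straightforward to evaluate numerically. The sketch below uses illustrative decay constants and assumes, as the closed form requires, that all decay constants in the chain are distinct:

import math

def bateman_last(t, n1_0, lams):
    """N_D(t) for a chain A1 -> A2 -> ... -> AD with only A1 present at t = 0.
    Requires all decay constants in lams to be distinct."""
    total = 0.0
    for i, lam_i in enumerate(lams):
        c = 1.0
        for j, lam_j in enumerate(lams):
            if j != i:
                c *= lam_j / (lam_j - lam_i)
        total += lam_i * c * math.exp(-lam_i * t)
    return n1_0 * total / lams[-1]

print(bateman_last(5.0, 1e20, [0.07, 0.7, 0.02]))   # three-member chain, constants in 1/h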
Multiple products
In all of the above examples, the initial nuclide decays into just one product. Consider the case of one initial nuclide A that can decay into either of two products, that is A → B and A → C in parallel. For example, in a sample of potassium-40, 89.3% of the nuclei decay to calcium-40 and 10.7% to argon-40. We have for all time t:

N = NA + NB + NC

which is constant, since the total number of nuclides remains constant. Differentiating with respect to time:

dNA/dt = −(λB + λC) NA

defining the total decay constant λ in terms of the sum of the partial decay constants λB and λC:

λ = λB + λC

Solving this equation for NA:

NA = NA0 e^(−λt)

where NA0 is the initial number of nuclide A. When measuring the production of one nuclide, one can only observe the total decay constant λ. The decay constants λB and λC determine the probability for the decay to result in products B or C as follows:

NB = (λB/λ) NA0 (1 − e^(−λt)),

NC = (λC/λ) NA0 (1 − e^(−λt)),

because the fraction λB/λ of nuclei decay into B while the fraction λC/λ of nuclei decay into C.
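For the potassium-40 example just given, the partial decay constants follow from the branching fractions and the total decay constant (a sketch; the commonly quoted half-life of about 1.25 × 10^9 years is used):

import math

T_HALF_K40 = 1.25e9                        # potassium-40 half-life, years
lam = math.log(2) / T_HALF_K40             # total decay constant, 1/yr
lam_ca = 0.893 * lam                       # partial constant, decay to calcium-40
lam_ar = 0.107 * lam                       # partial constant, decay to argon-40
print(lam, lam_ca + lam_ar)                # the partial constants sum to the total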
Corollaries of laws
The above equations can also be written using quantities related to the number of nuclide particles N in a sample:
The activity: A = λN.
The amount of substance: n = N/NA.
The mass: m = Mn = MN/NA.
where NA = 6.02214076 × 10^23 mol^−1 is the Avogadro constant, M is the molar mass of the substance in kg/mol, and the amount of the substance n is in moles.
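Chaining these corollaries together recovers, for instance, the historical fact that one curie (3.7 × 10^10 Bq) is roughly the activity of one gram of radium-226 (a sketch; radium-226's half-life is taken as about 1600 years):

import math

N_AVOGADRO = 6.02214076e23                 # Avogadro constant, 1/mol

def mass_from_activity(activity_bq, half_life_s, molar_mass_kg):
    """Sample mass implied by a given activity, via A = lambda*N and m = M*N/N_A."""
    lam = math.log(2) / half_life_s
    return molar_mass_kg * (activity_bq / lam) / N_AVOGADRO

half_life_ra226 = 1600 * 365.25 * 86400    # seconds
print(mass_from_activity(3.7e10, half_life_ra226, 0.226))   # ~0.001 kg, i.e. ~1 g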
Decay timing: definitions and relations
Time constant and mean-life
For the one-decay solution A → B:

N = N0 e^(−λt) = N0 e^(−t/τ)

the equation indicates that the decay constant λ has units of t^−1, and can thus also be represented as 1/τ, where τ is a characteristic time of the process called the time constant.
In a radioactive decay process, this time constant is also the mean lifetime for decaying atoms. Each atom "lives" for a finite amount of time before it decays, and it may be shown that this mean lifetime is the arithmetic mean of all the atoms' lifetimes, and that it is τ, which again is related to the decay constant as follows:

τ = 1/λ
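That the mean lifetime equals 1/λ is a one-line expectation value over the lifetime distribution λe^(−λt) (a standard computation, restated here for completeness):

\tau = \langle t \rangle = \int_0^\infty t \, \lambda e^{-\lambda t} \, dt = \frac{1}{\lambda}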
This form is also true for two simultaneous decay processes A → B and A → C; inserting the equivalent values of decay constants (as given above)

λ = λB + λC

into the decay solution leads to:

1/τ = λ = λB + λC
Half-life
A more commonly used parameter is the half-life t1/2. Given a sample of a particular radionuclide, the half-life is the time taken for half the radionuclide's atoms to decay. For the case of one-decay nuclear reactions:

N = N0 e^(−λt)

the half-life is related to the decay constant as follows: set N = N0/2 and t = t1/2 to obtain

t1/2 = ln(2)/λ = τ ln(2)
This relationship between the half-life and the decay constant shows that highly radioactive substances are quickly spent, while those that radiate weakly endure longer. Half-lives of known radionuclides vary by almost 54 orders of magnitude, from more than 2.2 × 10^24 years (6.9 × 10^31 s) for the very nearly stable nuclide 128Te, to about 8.6 × 10^−23 seconds for the highly unstable nuclide 5H.
The factor of ln(2) in the above relations results from the fact that the concept of "half-life" is merely a way of selecting a different base other than the natural base e for the lifetime expression. The time constant τ is the e^−1-life, the time until only 1/e remains, about 36.8%, rather than the 50% in the half-life of a radionuclide. Thus, τ is longer than t1/2. The following equation can be shown to be valid:

N(t) = N0 e^(−t/τ) = N0 2^(−t/t1/2)
Since radioactive decay is exponential with a constant probability, each process could as easily be described with a different constant time period that (for example) gave its "(1/3)-life" (how long until only 1/3 is left) or "(1/10)-life" (a time period until only 10% is left), and so on. Thus, the choice of τ and t1/2 for marker-times is only for convenience, and from convention. They reflect a fundamental principle only in so much as they show that the same proportion of a given radioactive substance will decay, during any time-period that one chooses.
Mathematically, the (1/n)-life for the above situation would be found in the same way as above, by setting N = N0/n and t = t1/n, and substituting into the decay solution to obtain

t1/n = ln(n)/λ = τ ln(n)
Example for carbon-14
Carbon-14 has a half-life of 5,730 years and a decay rate of 14 disintegrations per minute (dpm) per gram of natural carbon.
If an artifact is found to have radioactivity of 4 dpm per gram of its present C, we can find the approximate age of the object using the above equation:

N = N0 e^(−t/τ)

where:

N/N0 = 4/14 ≈ 0.286, τ = t1/2/ln(2) ≈ 8267 years, and t = τ ln(N0/N) ≈ 10,360 years.
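The same computation can be written as a few lines of code (a sketch using exactly the numbers of the example above):

import math

T_HALF = 5730.0                            # carbon-14 half-life, years
tau = T_HALF / math.log(2)                 # mean lifetime, about 8267 years
dpm_now, dpm_living = 4.0, 14.0            # decays per minute per gram of carbon
age = tau * math.log(dpm_living / dpm_now)
print(round(age))                          # about 10,360 years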
Changing rates
The radioactive decay modes of electron capture and internal conversion are known to be slightly sensitive to chemical and environmental effects that change the electronic structure of the atom, which in turn affects the presence of 1s and 2s electrons that participate in the decay process. A small number of nuclides are affected. For example, chemical bonds can affect the rate of electron capture to a small degree (in general, less than 1%) depending on the proximity of electrons to the nucleus. In 7Be, a difference of 0.9% has been observed between half-lives in metallic and insulating environments. This relatively large effect is because beryllium is a small atom whose valence electrons are in 2s atomic orbitals, which are subject to electron capture in 7Be because (like all s atomic orbitals in all atoms) they naturally penetrate into the nucleus.
In 1992, Jung et al. of the Darmstadt Heavy-Ion Research group observed an accelerated β− decay of 163Dy66+. Although neutral 163Dy is a stable isotope, the fully ionized 163Dy66+ undergoes β− decay into the K and L shells to 163Ho66+ with a half-life of 47 days.
Rhenium-187 is another spectacular example. 187Re normally undergoes beta decay to 187Os with a half-life of 41.6 × 109 years, but studies using fully ionised 187Re atoms (bare nuclei) have found that this can decrease to only 32.9 years. This is attributed to "bound-state β− decay" of the fully ionised atom – the electron is emitted into the "K-shell" (1s atomic orbital), which cannot occur for neutral atoms in which all low-lying bound states are occupied.
A number of experiments have found that decay rates of other modes of artificial and naturally occurring radioisotopes are, to a high degree of precision, unaffected by external conditions such as temperature, pressure, the chemical environment, and electric, magnetic, or gravitational fields. Comparison of laboratory experiments over the last century, studies of the Oklo natural nuclear reactor (which exemplified the effects of thermal neutrons on nuclear decay), and astrophysical observations of the luminosity decays of distant supernovae (which occurred far away so the light has taken a great deal of time to reach us), for example, strongly indicate that unperturbed decay rates have been constant (at least to within the limitations of small experimental errors) as a function of time as well.
Recent results suggest the possibility that decay rates might have a weak dependence on environmental factors. It has been suggested that measurements of decay rates of silicon-32, manganese-54, and radium-226 exhibit small seasonal variations (of the order of 0.1%). However, such measurements are highly susceptible to systematic errors, and a subsequent paper has found no evidence for such correlations in seven other isotopes (22Na, 44Ti, 108Ag, 121Sn, 133Ba, 241Am, 238Pu), and sets upper limits on the size of any such effects. The decay of radon-222 was once reported to exhibit large 4% peak-to-peak seasonal variations, which were proposed to be related to either solar flare activity or the distance from the Sun, but detailed analysis of the experiment's design flaws, along with comparisons to other, much more stringent and systematically controlled, experiments refute this claim.
GSI anomaly
An unexpected series of experimental results for the rate of decay of heavy highly charged radioactive ions circulating in a storage ring has provoked theoretical activity in an effort to find a convincing explanation. The rates of weak decay of two radioactive species with half lives of about 40 s and 200 s are found to have a significant oscillatory modulation, with a period of about 7 s.
The observed phenomenon is known as the GSI anomaly, as the storage ring is a facility at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. As the decay process produces an electron neutrino, some of the proposed explanations for the observed rate oscillation invoke neutrino properties. Initial ideas related to flavour oscillation met with skepticism. A more recent proposal involves mass differences between neutrino mass eigenstates.
Nuclear processes
A nuclide is considered to "exist" if it has a half-life greater than 2 × 10^−14 s. This is an arbitrary boundary; shorter half-lives are considered resonances, such as a system undergoing a nuclear reaction. This time scale is characteristic of the strong interaction which creates the nuclear force. Only nuclides are considered to decay and produce radioactivity.
Nuclides can be stable or unstable. Unstable nuclides decay, possibly in several steps, until they become stable. There are 251 known stable nuclides. The number of unstable nuclides discovered has grown, with about 3000 known in 2006.
The most common and consequently historically the most important forms of natural radioactive decay involve the emission of alpha-particles, beta-particles, and gamma rays. Each of these correspond to a fundamental interaction predominantly responsible for the radioactivity:
alpha-decay -> strong interaction,
beta-decay -> weak interaction,
gamma-decay -> electromagnetism.
In alpha decay, a particle containing two protons and two neutrons, equivalent to a He nucleus, breaks out of the parent nucleus. The process represents a competition between the electromagnetic repulsion between the protons in the nucleus and the attractive nuclear force, a residual of the strong interaction. The alpha particle is an especially strongly bound nucleus, helping it win the competition more often. However some nuclei break up or fission into larger particles, and artificial nuclei decay with the emission of single protons, double protons, and other combinations.
Beta decay transforms a neutron into a proton or vice versa. When a neutron inside a parent nuclide decays to a proton, an electron, an antineutrino, and a nuclide with a higher atomic number result. When a proton in a parent nuclide transforms to a neutron, a positron, a neutrino, and a nuclide with a lower atomic number result. These changes are a direct manifestation of the weak interaction.
Gamma decay resembles other kinds of electromagnetic emission: it corresponds to transitions between an excited quantum state and lower energy state. Any of the particle decay mechanisms often leave the daughter in an excited state, which then decays via gamma emission.
Other forms of decay include neutron emission, electron capture, internal conversion, and cluster decay.
Hazard warning signs
See also
Actinides in the environment
Background radiation
Chernobyl disaster
Crimes involving radioactive substances
Decay chain
Decay correction
Fallout shelter
Geiger counter
Induced radioactivity
Lists of nuclear disasters and radioactive incidents
National Council on Radiation Protection and Measurements
Nuclear engineering
Nuclear pharmacy
Nuclear physics
Nuclear power
Nuclear chain reaction
Particle decay
Poisson process
Radiation therapy
Radioactive contamination
Radioactivity in biology
Radiometric dating
Stochastic
Transient equilibrium
Notes
References
External links
The Lund/LBNL Nuclear Data Search – Contains tabulated information on radioactive decay types and energies.
Nomenclature of nuclear chemistry
Specific activity and related topics.
The Live Chart of Nuclides – IAEA
Interactive Chart of Nuclides
Health Physics Society Public Education Website
Annotated bibliography for radioactivity from the Alsos Digital Library for Nuclear Issues
Stochastic Java applet on the decay of radioactive atoms by Wolfgang Bauer
Stochastic Flash simulation on the decay of radioactive atoms by David M. Harrison
"Henri Becquerel: The Discovery of Radioactivity", Becquerel's 1896 articles online and analyzed on BibNum [click 'à télécharger' for English version].
"Radioactive change", Rutherford & Soddy article (1903), online and analyzed on Bibnum [click 'à télécharger' for English version]
Exponentials
Poisson point processes | Radioactive decay | Physics,Chemistry,Mathematics | 8,959 |
1,267,762 | https://en.wikipedia.org/wiki/Variable%20speed%20of%20light | A variable speed of light (VSL) is a feature of a family of hypotheses stating that the speed of light may in some way not be constant, for example, that it varies in space or time, or depending on frequency. Accepted classical theories of physics, and in particular general relativity, predict a constant speed of light in any local frame of reference; in some situations they predict apparent variations of the speed of light depending on the frame of reference, but this article does not refer to such variations as a variable speed of light. Various alternative theories of gravitation and cosmology, many of them non-mainstream, incorporate variations in the local speed of light.
Attempts to incorporate a variable speed of light into physics were made by Robert Dicke in 1957, and by several researchers starting from the late 1980s.
VSL should not be confused with faster-than-light theories, with the dependence of the speed of light on a medium's refractive index, or with its measurement in a remote observer's frame of reference in a gravitational potential. In this context, the "speed of light" refers to the limiting speed c of the theory rather than to the velocity of propagation of photons.
Historical proposals
Background
Einstein's equivalence principle, on which general relativity is founded, requires that in any local, freely falling reference frame, the speed of light is always the same. This leaves open the possibility, however, that an inertial observer inferring the apparent speed of light in a distant region might calculate a different value. Spatial variation of the speed of light in a gravitational potential as measured against a distant observer's time reference is implicitly present in general relativity. The apparent speed of light will change in a gravity field and, in particular, go to zero at an event horizon as viewed by a distant observer. In deriving the gravitational redshift due to a spherically symmetric massive body, a radial speed of light dr/dt can be defined in Schwarzschild coordinates, with t being the time recorded on a stationary clock at infinity. The result is

dr/dt = 1 − 2m/r
where m is MG/c2 and where natural units are used such that c0 is equal to one.
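A small numerical check (a sketch in the same natural units, with r expressed in multiples of m = MG/c^2, so c0 = 1) shows the apparent radial speed vanishing at the horizon r = 2m and approaching 1 far from the mass:

def radial_light_speed(r):
    """Apparent radial coordinate speed of light, dr/dt = 1 - 2m/r, with m = 1."""
    return 1.0 - 2.0 / r

for r in (2.0, 2.5, 10.0, 1000.0):
    print(r, radial_light_speed(r))        # 0.0 at the horizon, approaching 1 as r grows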
Dicke's proposal (1957)
Robert Dicke, in 1957, developed a VSL theory of gravity, a theory in which (unlike general relativity) the speed of light measured locally by a free-falling observer could vary. Dicke assumed that both frequencies and wavelengths could vary, which, since c = νλ, resulted in a relative change of c. Dicke assumed a refractive index n = 1 + 2GM/(rc^2) (eqn. 5) and proved it to be consistent with the observed value for light deflection. In a comment related to Mach's principle, Dicke suggested that, while the right part of the term in eq. 5 is small, the left part, 1, could have "its origin in the remainder of the matter in the universe".
Given that in a universe with an increasing horizon more and more masses contribute to the above refractive index, Dicke considered a cosmology where c decreased in time, providing an alternative explanation to the cosmological redshift.
Subsequent proposals
Variable speed of light models, including Dicke's, have been developed which agree with all known tests of general relativity.
Other models make a link to Dirac large numbers hypothesis.
Several hypotheses for varying speed of light, seemingly in contradiction to general relativity theory, have been published, including those of Giere and Tan (1986) and Sanejouand (2009). In 2003, Magueijo gave a review of such hypotheses.
Cosmological models with varying speeds of light have been proposed independently by Jean-Pierre Petit in 1988, John Moffat in 1992, and the team of Andreas Albrecht and João Magueijo in 1998 to explain the horizon problem of cosmology and propose an alternative to cosmic inflation.
Relation to other constants and their variation
Gravitational constant G
In 1937, Paul Dirac and others began investigating the consequences of natural constants changing with time. For example, Dirac proposed a change of only 5 parts in 10^11 per year of the Newtonian constant of gravitation G to explain the relative weakness of the gravitational force compared to other fundamental forces. This has become known as the Dirac large numbers hypothesis.
However, Richard Feynman showed that the gravitational constant most likely could not have changed this much in the past 4 billion years based on geological and solar system observations, although this may depend on assumptions about G varying in isolation. (See also strong equivalence principle.)
Fine-structure constant α
One group, studying distant quasars, has claimed to detect a variation of the fine-structure constant at the level of one part in 10^5. Other authors dispute these results. Other groups studying quasars claim no detectable variation at much higher sensitivities.
The natural nuclear reactor of Oklo has been used to check whether the atomic fine-structure constant α might have changed over the past 2 billion years. That is because α influences the rate of various nuclear reactions. For example, 149Sm captures a neutron to become 150Sm, and since the rate of neutron capture depends on the value of α, the ratio of the two samarium isotopes in samples from Oklo can be used to calculate the value of α from 2 billion years ago. Several studies have analysed the relative concentrations of radioactive isotopes left behind at Oklo, and most have concluded that nuclear reactions then were much the same as they are today, which implies α was the same too.
Paul Davies and collaborators have suggested that it is in principle possible to disentangle which of the dimensionful constants (the elementary charge, the Planck constant, and the speed of light) of which the fine-structure constant is composed is responsible for the variation. However, this has been disputed by others and is not generally accepted.
Criticisms of various VSL concepts
Dimensionless and dimensionful quantities
To clarify what a variation in a dimensionful quantity actually means, since any such quantity can be changed merely by changing one's choice of units, John Barrow wrote:
"[An] important lesson we learn from the way that pure numbers like α define the world is what it really means for worlds to be different. The pure number we call the fine-structure constant and denote by α is a combination of the electron charge, e, the speed of light, c, and the Planck constant, h. At first we might be tempted to think that a world in which the speed of light was slower would be a different world. But this would be a mistake. If c, h, and e were all changed so that the values they have in metric (or any other) units were different when we looked them up in our tables of physical constants, but the value of α remained the same, this new world would be observationally indistinguishable from our world. The only thing that counts in the definition of worlds are the values of the dimensionless constants of Nature. If all masses were doubled in value [including the Planck mass mP] you cannot tell because all the pure numbers defined by the ratios of any pair of masses are unchanged."
Any equation of physical law can be expressed in a form in which all dimensional quantities are normalized against like-dimensioned quantities (called nondimensionalization), resulting in only dimensionless quantities remaining. Physicists can choose their units so that the physical constants c, G, ħ = h/(2π), 4πε0, and kB take the value one, resulting in every physical quantity being normalized against its corresponding Planck unit. For that, it has been claimed that specifying the evolution of a dimensional quantity is meaningless and does not make sense. When Planck units are used and such equations of physical law are expressed in this nondimensionalized form, no dimensional physical constants such as c, G, ħ, ε0, nor kB remain, only dimensionless quantities, as predicted by the Buckingham π theorem. Short of their anthropometric unit dependence, there is no speed of light, gravitational constant, nor the Planck constant, remaining in mathematical expressions of physical reality to be subject to such hypothetical variation. For example, in the case of a hypothetically varying gravitational constant, G, the relevant dimensionless quantities that potentially vary ultimately become the ratios of the Planck mass to the masses of the fundamental particles. Some key dimensionless quantities (thought to be constant) that are related to the speed of light (among other dimensional quantities such as ħ, e, ε0), notably the fine-structure constant or the proton-to-electron mass ratio, could in principle have meaningful variance and their possible variation continues to be studied.
General critique of varying c cosmologies
From a very general point of view, G. F. R. Ellis and Jean-Philippe Uzan expressed concerns that a varying c would require a rewrite of much of modern physics to replace the current system which depends on a constant c. Ellis claimed that any varying c theory (1) must redefine distance measurements; (2) must provide an alternative expression for the metric tensor in general relativity; (3) might contradict Lorentz invariance; (4) must modify Maxwell's equations; and (5) must be done consistently with respect to all other physical theories. VSL cosmologies remain out of mainstream physics.
References
External links
Is the speed of light constant? "Varying constants"
Hypotheses
Electromagnetic radiation
Light
Physical cosmological concepts
Special relativity
Fringe physics
de:Physikalische Konstante#Konstanz der Naturkonstanten | Variable speed of light | Physics | 1,965 |
7,291,166 | https://en.wikipedia.org/wiki/Strange%20B%20meson | The strange B meson (B0s) is a meson composed of a bottom antiquark and a strange quark. Its antiparticle, the anti-B0s meson, is composed of a bottom quark and a strange antiquark.
B–B oscillations
Strange B mesons are noted for their ability to oscillate between matter and antimatter via a box diagram, with an oscillation frequency Δms ≈ 17.77 ps^−1 measured by the CDF experiment at Fermilab.
That is, a meson composed of a bottom quark and a strange antiquark, the strange meson, can spontaneously change into a bottom antiquark and strange quark pair, the strange antimeson, and vice versa.
On 25 September 2006, Fermilab announced its claimed discovery of the previously only-theorized Bs meson oscillation, which it described in a press release.
Ronald Kotulak, writing for the Chicago Tribune, called the particle "bizarre" and stated that the meson "may open the door to a new era of physics" with its proven interactions with the "spooky realm of antimatter".
Better understanding of the meson is one of the main objectives of the LHCb experiment conducted at the Large Hadron Collider. On 24 April 2013, CERN physicists in the LHCb collaboration announced that they had observed CP violation in the decay of strange mesons for the first time. Scientists found the Bs meson decaying into two muons for the first time, with Large Hadron Collider experiments casting doubt on the scientific theory of supersymmetry.
CERN physicist Tara Shears described the CP violation observations as "verification of the validity of the Standard Model of physics".
Rare decays
The rare decays of the Bs meson are an important test of the Standard Model. The branching fraction of the strange b-meson to a pair of muons is very precisely predicted with a value of Br(Bs → μ+μ−)SM = (3.66 ± 0.23) × 10^−9. Any variation from this rate would indicate possible physics beyond the Standard Model, such as supersymmetry. The first definitive measurement was made from a combination of LHCb and CMS experiment data:

Br(Bs → μ+μ−) = (2.8 +0.7/−0.6) × 10^−9
This result is compatible with the Standard Model and set limits on possible extensions.
See also
B meson
B–B oscillation
References
External links
Mesons
Strange quark
B physics | Strange B meson | Physics | 494 |
33,176,131 | https://en.wikipedia.org/wiki/Design%20classic | A design classic is an industrially manufactured object with timeless aesthetic value. It serves as a standard of its kind and remains up to date regardless of the year of its design.
Whether a particular object is a design classic might often be debatable and the term is sometimes abused, but there exists a body of acknowledged classics of product designs from the 19th and 20th century.
For an object to become a design classic requires time, and whatever lasting impact the design has had on society, together with its influence on later designs, play large roles in determining whether something becomes a design classic. Thus, design classics are often strikingly simple, going to the essence, and are described with words like iconic, neat, valuable or having meaning.
References
Industrial design | Design classic | Engineering | 150 |
31,294,625 | https://en.wikipedia.org/wiki/Median%20center%20of%20the%20United%20States%20population | The median center of U.S. population is determined by the United States Census Bureau from the results of each census. The Bureau defines it to be the point at which half of the country's population lives to the north and half to the south, and half lives to the east and half to the west.
As of the 2020 U.S. census, this places roughly 165.7 million Americans living on each side of a longitude line passing through a location in Gibson County, Indiana, and the same number living on each side of a latitude line through the same point.
During the 20th century the median center of U.S. population moved roughly southwest, from a location in Randolph County, Indiana to a location in Daviess County, Indiana. The majority of this southwest shift happened in the second half of the century, as the center shifted within a narrow circular band between 1900 and 1950, staying close to the 1900 starting point in Randolph County.
See also
Mean center of the United States population
Center of population
Geographic center of the United States
Geographic center of the contiguous United States
References
Demographic history of the United States
Center of population | Median center of the United States population | Physics,Mathematics | 197 |
33,207,610 | https://en.wikipedia.org/wiki/Caffeine%20%28data%20page%29 | This page provides supplementary chemical data on caffeine.
References
Chemical data pages
Caffeine
Chemical data pages cleanup | Caffeine (data page) | Chemistry | 24 |
30,871,197 | https://en.wikipedia.org/wiki/Avira | Avira Operations GmbH & Co. KG is a German multinational computer security software company mainly known for its Avira Free Security antivirus software. Although founded in 2006, the Avira antivirus application has been under active development since 1986 through its predecessor company H+BEDV Datentechnik GmbH. Since 2021, Avira has been owned by American software company NortonLifeLock (now Gen Digital), which also operates Norton, Avast and AVG. It was previously owned by investment firm Investcorp.
The company also has offices in the United States, China, Romania, and Japan.
Technology
Virus definition
Avira periodically "cleans out" its virus definition files, replacing specific signatures with generic ones for a general increase in performance and scanning speed. A 15MB database clean-out was made on 27 October 2008, causing problems to the users of the Free edition because of its large size and Avira's slow Free edition servers. Avira responded by reducing the size of the individual update files, delivering less data in each update. Nowadays there are 32 smaller definition files that are updated regularly in order to avoid peaks in the download of the updates.
Its file-by-file scanning feature has jokingly been titled "Luke Filewalker" by the developers, as a reference to the Star Wars media franchise character "Luke Skywalker".
Advance heuristic
Avira products contain heuristics that proactively uncover unknown malware, before a special virus signature to combat the damaging element has been created and before a virus guard update has been sent.
Heuristic virus detection involves extensive analysis and investigation of the affected codes for functions typical of malware. If the code being scanned exhibits these characteristic features it is reported as being suspicious, although not necessarily malware; the user decides whether to act on or ignore the warning.
ProActiv
The ProActiv component uses rule sets developed by the Avira Malware Research Center to identify suspicious behavior. The rule sets are supplied by Avira databases. ProActiv sends information on suspicious programs to the Avira databases for logging.
Firewall
Avira removed its own firewall technology in 2014, with protection supplied instead by Windows Firewall (Windows 7 and after), because the Microsoft Certification Program for Windows 8 and later requires developers to use the interfaces introduced in Windows Vista.
Protection Cloud
Avira Protection Cloud (APC) was first introduced in version 2013. It uses information available via the Internet (cloud computing) to improve detection and affect system performance less. This technology was implemented in all paid 2013 products.
APC was initially only used during a manual quick system scan; later it was extended to real-time protection.
System requirements
Windows: operating system Windows 7, 8/8.1, 10, 11; processor speed 1.6 GHz; requires 256 MB of RAM; hard disk memory 2 GB.
Mac: operating system MacOS 10.15 (Catalina) or higher; 500 MB free hard disk space
Partners
Avira offers its antivirus engine in the form of a software development kit to implement in complementary products. Strategic and technology partners of Avira include Canonical, CYAN Networks, IBM, intelligence AG, Microsoft, Novell, OPSWAT, Synergy Systems and others.
On 4 September 2014, Avira announced a partnership with Dropbox, to combine Avira's security with Dropbox's "sync and share" capabilities.
Tjark Auerbach, the founder of Avira sold almost 100% of his stake in the company to the Investcorp Group of Manama (Bahrain) in April 2020. The stakes were reportedly sold at a price of 180 million dollars. The Investcorp Group has invested in several other firms from the cybersecurity sector in the past. The directors of Investcorp Group belong to several royal families of Middle East countries like Kuwait, Bahrain, Saudi Arabia, etc. However, 20% of its total ordinary and preferred shares are owned by the Abu Dhabi-based Mubadala Group since 2017.
On December 7, 2020, NortonLifeLock announced acquisition of Avira for approximately US$360 million from Investcorp Technology Partners. The acquisition was closed in January 2021.
In February 2021, BullGuard joined Avira as part of NortonLifeLock.
Products
Windows
Avira offers the following security products and tools for Microsoft Windows:
Avira Free Antivirus: The free edition antivirus/anti-spyware, for non-commercial use, with promotional pop-ups.
Avira Antivirus Pro: The premium edition antivirus/anti-spyware.
Avira System Speedup Free: A free suite of PC tune-up tools.
Avira System Speedup Pro: The premium edition of the suite of PC tune-up tools.
Avira Internet Security Suite: Consists of Antivirus Pro + System Speedup + Firewall Manager.
Avira Ultimate Protection Suite: Consists of Internet Security Suite + additional PC maintenance tools (e.g. SuperEasy Driver Updater).
Avira Rescue System: A set of free tools that include a utility used to write a Linux-based bootable CD. It can be used to clean an unbootable PC, and is also able to find malware that hides when the host's operating system is active (e.g., some rootkits). The tool contains the antivirus program and the virus database current at the time of download. It boots the machine into the antivirus program, then scans for and removes malware, and restores normal boot and operation if necessary.
MacOS
Avira Free Mac Security for Mac: Runs on MacOS 10.9 and above.
Android and iOS
Avira offers the following security applications for mobile devices running Android and iOS:
Avira Antivirus Security for Android: Free application for Android, runs on versions 2.2 and above.
Avira Antivirus Security Pro for Android: Premium edition for Android, runs on versions 2.2 and above. Available as an upgrade from within the free application, it provides additional safe browsing, hourly update and free tech support.
Avira Mobile Security for iOS: Free edition for iOS devices, such as iPhone and iPad.
Other products
Avira Phantom VPN: Avira's virtual private network software for Android, iOS, macOS and Windows.
Avira Prime: In April 2017 Avira launched a single-user, multi-device subscription-based product designed to provide a complete set of all Avira products available for the duration of the license along with premium support.
Avira Prime is compatible with Windows, macOS, iOS, and Android operating systems and related devices and is available to consumers in 5- and 25-device editions, dubbed "Avira Prime" and "Avira Prime Unlimited" respectively.
Subscriptions are in 30-day and 1-year increments.
Discontinued platforms
Avira formerly offered free antivirus software for Unix and Linux. It was discontinued in 2013, although updates were supplied until June 2016.
Security vulnerabilities
In 2005, Avira was hit by an ACE archive buffer overflow vulnerability. A remote attacker could have exploited this vulnerability by crafting an ACE archive and delivering it via a malicious web page or e-mail. A buffer overflow could occur when Avira scanned the malicious archive. That would have allowed the attacker to execute arbitrary code on the affected system.
In 2010, Avira Management Console was hit by the use-after-free remote code execution vulnerability. The vulnerability allowed remote attackers to execute arbitrary code on vulnerable installations of Avira Management Console. Authentication was not required to exploit the vulnerability.
In 2013, Avira engines were hit by a 0-day vulnerability that allowed attackers to get access to a customer's PC. The bug was found in the avipbb.sys driver file and allowed privilege escalation.
Awards and reviews
In January 2008, Anti-Malware Test Lab gave Avira "gold" status for proactive virus detection and detection/removal of rootkits.
AV-Comparatives awarded Avira its "AV Product of the Year" award in its "Summary Report 2008."
In April 2009, PC Pro awarded Avira Premium Security Suite 9 the maximum six stars and a place on its A-list for Internet security software.
In August 2009, Avira performed at a 98.9% percent overall malware detection rate, and was the fastest for both on-demand scans and on-access scans conducted by PC World magazine, which ranked it first on its website.
Avira was among the first companies to receive OESIS OK Gold Certification, indicating that both the antispyware and antivirus components of several of its security products achieved the maximum compatibility score with widespread network technologies such as SSL/TLS VPN and network access control from companies including Juniper Networks, Cisco Systems, and SonicWALL.
In February 2010, testing by firm AV-TEST, Avira tied for first place (with another German company) in the "malware on demand" detection test and earned a 99% score in the "adware/spyware on demand" test.
AV-Comparatives gave Avira its Silver award (for 99.5% detection rate) in its "Summary Report 2010."
For 2012, AV-Comparatives awarded Avira with "gold" status for its 99.6% performance in the "On-Demand Malware Detection" category and classified Avira as a "Top Rated" product overall for that year.
In the AV-Comparatives August 2014 "Real-World Protection Test," with 669 total test cases tried against various security products, Avira tied for first place.
AV-Comparatives awarded Avira its "AV Product of the Year" award in its "Summary Report 2016."
See also
Comparison of antivirus software
Comparison of firewalls
Comparison of virtual private network services
References
External links
Auerbach Stiftung (Foundation)
Antivirus software
Freeware
Software companies established in 1986
Computer security software
Computer security software companies
Software companies of Germany
Windows security software
Linux security software
MacOS security software
Android (operating system) software
1986 establishments in West Germany
2020 mergers and acquisitions
2021 mergers and acquisitions
Gen Digital acquisitions
Gen Digital software
German brands | Avira | Engineering | 2,061 |
17,010,869 | https://en.wikipedia.org/wiki/Variants%20of%20PCR | The versatility of polymerase chain reaction (PCR) has led to modifications of the basic protocol being used in a large number of variant techniques designed for various purposes. This article summarizes many of the most common variations currently or formerly used in molecular biology laboratories; familiarity with the fundamental premise by which PCR works and corresponding terms and concepts is necessary for understanding these variant techniques.
Basic modifications
Often only a small modification needs to be made to the standard PCR protocol to achieve a desired goal:
Multiplex-PCR uses several pairs of primers annealing to different target sequences. This permits the simultaneous analysis of multiple targets in a single sample. For example, in testing for genetic mutations, six or more amplifications might be combined. In the standard protocol for DNA fingerprinting, the targets assayed are often amplified in groups of 3 or 4. Multiplex Ligation-dependent Probe Amplification (MLPA) permits multiple targets to be amplified using only a single pair of primers, avoiding the resolution limitations of multiplex PCR. Multiplex PCR has also been used for analysis of microsatellites and SNPs.
Variable Number of Tandem Repeats (VNTR) PCR targets repetitive areas of the genome that exhibit length variation. Analysis of the genotypes in the samples usually involves sizing of the amplification products by gel electrophoresis. Analysis of smaller VNTR segments known as short tandem repeats (or STRs) is the basis for DNA fingerprinting databases such as CODIS.
Asymmetric PCR preferentially amplifies one strand of a double-stranded DNA target. It is used in some sequencing methods and hybridization probing to generate one DNA strand as product. Thermocycling is carried out exactly as in conventional PCR, but with a limiting amount of one of the primers, or with one primer left out entirely. When the limiting primer becomes depleted, replication increases arithmetically rather than exponentially through extension of the excess primer. A modification of this process, named Linear-After-The-Exponential-PCR (or LATE-PCR), uses a limiting primer with a higher melting temperature (Tm) than the excess primer in order to maintain reaction efficiency as the limiting primer concentration decreases mid-reaction. See also overlap-extension PCR.
Some modifications are needed to perform long PCR. The original Klenow-based PCR process did not generate products that were larger than about 400 bp. Taq polymerase can however amplify targets of up to several thousand bp long. Since then, modified protocols with Taq enzyme have allowed targets of over 50 kb to be amplified.
Nested PCR is used to increase the specificity of DNA amplification. Two sets of primers are used in two successive reactions. In the first PCR, one pair of primers is used to generate DNA products, which may contain products amplified from non-target areas. The products from the first PCR are then used as template in a second PCR, using one ('hemi-nesting') or two different primers whose binding sites are located (nested) within the first set, thus increasing specificity. Nested PCR is often more successful in specifically amplifying long DNA products than conventional PCR, but it requires more detailed knowledge of the sequence of the target.
Quantitative PCR (qPCR) is used to measure the specific amount of target DNA (or RNA) in a sample. By measuring amplification only within the phase of true exponential increase, the amount of measured product more accurately reflects the initial amount of target. Special thermal cyclers are used that monitor the amount of product during the amplification.
Quantitative Real-Time PCR (QRT-PCR), sometimes simply called Real-Time PCR (RT-PCR), refers to a collection of methods that use fluorescent dyes, such as Sybr Green, or fluorophore-containing DNA probes, such as TaqMan, to measure the amount of amplified product in real time as the amplification progresses.
Hot-start PCR is a technique performed manually by heating the reaction components to the DNA melting temperature (e.g. 95 °C) before adding the polymerase. In this way, non-specific amplification at lower temperatures is prevented. Alternatively, specialized reagents inhibit the polymerase's activity at ambient temperature, either by the binding of an antibody, or by the presence of covalently bound inhibitors that only dissociate after a high-temperature activation step. 'Hot-start/cold-finish PCR' is achieved with new hybrid polymerases that are inactive at ambient temperature and are only activated at elevated temperatures.
In touchdown PCR, the annealing temperature is gradually decreased in later cycles. The annealing temperature in the early cycles is usually 3–5 °C above the standard Tm of the primers used, while in the later cycles it is a similar amount below the Tm. The initial higher annealing temperature leads to greater specificity for primer binding, while the lower temperatures permit more efficient amplification at the end of the reaction (a sketch of such a temperature schedule appears at the end of this list).
Assembly PCR (also known as Polymerase Cycling Assembly or PCA) is the synthesis of long DNA structures by performing PCR on a pool of long oligonucleotides with short overlapping segments, to assemble two or more pieces of DNA into one piece. It involves an initial PCR with primers that have an overlap and a second PCR using the products as the template that generates the final full-length product. This technique may substitute for ligation-based assembly.
In colony PCR, bacterial colonies are screened directly by PCR, for example, to screen for correct DNA vector constructs. Colonies are sampled with a sterile pipette tip and a small quantity of cells transferred into a PCR mix. To release the DNA from the cells, the PCR is either started with an extended time at 95 °C (when standard polymerase is used), or with a shortened denaturation step at 100 °C and a special chimeric DNA polymerase.
The digital polymerase chain reaction simultaneously amplifies thousands of samples, each in a separate droplet within an emulsion or a partition within a micro-well.
Suicide PCR is typically used in paleogenetics or other studies where avoiding false positives and ensuring the specificity of the amplified fragment is the highest priority. It was originally described in a study to verify the presence of the microbe Yersinia pestis in dental samples obtained from 14th-century graves of people supposedly killed by plague during the medieval Black Death epidemic. The method prescribes the use of any primer combination only once in a PCR (hence the term "suicide"), which should never have been used in any positive-control PCR reaction, and the primers should always target a genomic region never amplified before in the lab using this or any other set of primers. This ensures that no contaminating DNA from previous PCR reactions is present in the lab, which could otherwise generate false positives.
COLD-PCR (co-amplification at lower denaturation temperature-PCR) is a modified protocol that enriches variant alleles from a mixture of wild-type and mutation-containing DNA samples.
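The following is a toy simulation of the two kinetic phases of asymmetric PCR described above: exponential amplification while both primers remain, then arithmetic accumulation of single-stranded product once the limiting primer is depleted. All quantities and the bookkeeping are illustrative simplifications, not a chemical model.

```python
# Toy simulation of asymmetric PCR kinetics: exponential growth while the
# limiting primer lasts, then linear (arithmetic) accumulation of
# single-stranded product driven by the excess primer alone.

def asymmetric_pcr(cycles=30, limiting_primer=1e4, template=10):
    double_stranded = template  # copies that can still be primed on both strands
    single_stranded = 0.0       # product of the excess primer alone
    for _ in range(cycles):
        if limiting_primer >= double_stranded:
            limiting_primer -= double_stranded  # one limiting primer per copy
            double_stranded *= 2                # exponential phase
        else:
            single_stranded += double_stranded  # arithmetic phase
    return double_stranded, single_stranded

ds, ss = asymmetric_pcr()
print(f"double-stranded: {ds:.0f}, single-stranded: {ss:.0f}")
# with these numbers: exponential for ~9 cycles, linear afterwards
```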
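And here is a minimal sketch of a touchdown PCR annealing-temperature schedule of the kind described above: starting a few degrees above the primer Tm and stepping down each cycle to a floor a similar amount below it. The Tm, offsets, and step size are illustrative values, not a validated protocol.

```python
# Generate a touchdown PCR annealing-temperature schedule: begin above Tm,
# step down each cycle, then hold at a floor below Tm for the remaining cycles.

def touchdown_schedule(tm, start_offset=5.0, end_offset=5.0,
                       step=0.5, total_cycles=35):
    temps = []
    t = tm + start_offset
    floor = tm - end_offset
    for _ in range(total_cycles):
        temps.append(round(t, 1))
        t = max(t - step, floor)  # decrease until the floor, then hold there
    return temps

print(touchdown_schedule(tm=60.0)[:6])  # [65.0, 64.5, 64.0, 63.5, 63.0, 62.5]
```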
Pretreatments and extensions
The basic PCR process can sometimes precede or follow another technique.
RT-PCR (or Reverse Transcription PCR) is used to reverse-transcribe and amplify RNA to cDNA. PCR is preceded by a reaction using reverse transcriptase, an enzyme that converts RNA into cDNA. The two reactions may be combined in a tube, with the initial heating step of PCR being used to inactivate the transcriptase. The Tth polymerase (described below) has RT activity, and can carry out the entire reaction. RT-PCR is widely used in expression profiling, which detects the expression of a gene. It can also be used to obtain sequence of an RNA transcript, which may aid the determination of the transcription start and termination sites (by RACE-PCR) and facilitate mapping of the location of exons and introns in a gene sequence.
Two-tailed PCR uses a single primer that binds to a microRNA target with both 3' and 5' ends, known as hemiprobes. Both ends must be complementary for binding to occur. The 3'-end is then extended by reverse transcriptase forming a long cDNA. The cDNA is then amplified using two target specific PCR primers. The combination of two hemiprobes, both targeting the short microRNA target, makes the Two-tailed assay exceedingly sensitive and specific.
Ligation-mediated PCR uses small DNA oligonucleotide 'linkers' (or adaptors) that are first ligated to fragments of the target DNA. PCR primers that anneal to the linker sequences are then used to amplify the target fragments. This method is deployed for DNA sequencing, genome walking, and DNA footprinting. A related technique is amplified fragment length polymorphism, which generates diagnostic fragments of a genome.
Methylation-specific PCR (MSP) is used to identify patterns of DNA methylation at cytosine-guanine (CpG) islands in genomic DNA. Target DNA is first treated with sodium bisulfite, which converts unmethylated cytosine bases to uracil, which is complementary to adenosine in PCR primers. Two amplifications are then carried out on the bisulfite-treated DNA: one primer set anneals to DNA with cytosines (corresponding to methylated cytosine), and the other set anneals to DNA with uracil (corresponding to unmethylated cytosine). MSP used in quantitative PCR provides quantitative information about the methylation state of a given CpG island.
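To make the bisulfite logic that MSP relies on concrete, here is a small in-silico sketch of the conversion step: unmethylated cytosines read as thymine after conversion and PCR, while methylated cytosines are protected. It is purely illustrative; real protocols act on both strands and on CpG context, which this toy ignores.

```python
# In-silico bisulfite conversion: unmethylated C -> U (read as T after PCR),
# methylated C (marked here by position) remains C.

def bisulfite_convert(seq, methylated_positions):
    out = []
    for i, base in enumerate(seq):
        if base == "C" and i not in methylated_positions:
            out.append("T")   # unmethylated C is deaminated to U, read as T
        else:
            out.append(base)  # methylated C and all other bases unchanged
    return "".join(out)

seq = "ACGTCGACGT"
print(bisulfite_convert(seq, methylated_positions={4}))  # 'ATGTCGATGT'
```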
Other modifications
Adjustment of the components in PCR is commonly used to achieve optimal performance.
The divalent magnesium ion (Mg++) is required for PCR polymerase activity. Lower concentrations of Mg++ will increase replication fidelity, while higher concentrations will introduce more mutations.
Denaturants (such as DMSO) can increase amplification specificity by destabilizing non-specific primer binding. Other chemicals, such as glycerol, are stabilizers for the activity of the polymerase during amplification. Detergents (such as Triton X-100) can prevent the polymerase from sticking to itself or to the walls of the reaction tube.
DNA polymerases occasionally incorporate mismatched bases into the extending strand. High-fidelity PCR employs enzymes with 3'-5' exonuclease activity that decrease this rate of mis-incorporation. Examples of enzymes with proofreading activity include Pfu; adjustments of the Mg++ and dNTP concentrations may help maximize the number of products that exactly match the original target DNA.
Primer modifications
Adjustments to the synthetic oligonucleotides used as primers in PCR are a rich source of modification:
Normally PCR primers are chosen from an invariant part of the genome, and might be used to amplify a polymorphic area between them. In allele-specific PCR the opposite is done. At least one of the primers is chosen from a polymorphic area, with the mutations located at (or near) its 3'-end. Under stringent conditions, a mismatched primer will not initiate replication, whereas a matched primer will. The appearance of an amplification product therefore indicates the genotype. (For more information, see SNP genotyping.)
InterSequence-Specific PCR (or ISSR-PCR) is a method for DNA fingerprinting that uses primers selected from segments repeated throughout a genome to produce a unique fingerprint of amplified product lengths. The use of primers from a commonly repeated segment is called Alu-PCR, and can help amplify sequences adjacent to (or between) these repeats.
Primers can also be designed to be 'degenerate' – able to initiate replication from a large number of target locations. Whole genome amplification (or WGA) is a group of procedures that allow amplification to occur at many locations in an unknown genome, and which may only be available in small quantities. Other techniques use degenerate primers that are synthesized using multiple nucleotides at particular positions (the polymerase 'chooses' the correctly matched primers). Also, the primers can be synthesized with the nucleoside analog inosine, which hybridizes to three of the four normal bases. A similar technique can force PCR to perform site-directed mutagenesis (see also overlap extension polymerase chain reaction).
Normally the primers used in PCR are designed to be fully complementary to the target. However, the polymerase is tolerant to mis-matches away from the 3' end. Tailed-primers include non-complementary sequences at their 5' ends. A common procedure is the use of linker-primers, which ultimately place restriction sites at the ends of the PCR products, facilitating their later insertion into cloning vectors.
An extension of the 'colony-PCR' method (above) is the use of vector primers. Target DNA fragments (or cDNA) are first inserted into a cloning vector, and a single set of primers is designed for the areas of the vector flanking the insertion site. Amplification occurs for whatever DNA has been inserted.
PCR can easily be modified to produce a labeled product for subsequent use as a hybridization probe. One or both primers might be used in PCR with a radioactive or fluorescent label already attached, or labels might be added after amplification. These labeling methods can be combined with 'asymmetric-PCR' (above) to produce effective hybridization probes.
RNase H-dependent PCR (rhPCR) can reduce primer-dimer formation, and increase the number of assays in multiplex PCR. The method utilizes primers with a cleavable block on the 3’ end that is removed by the action of a thermostable RNase HII enzyme.
DNA Polymerases
There are several DNA polymerases that are used in PCR.
The Klenow fragment, derived from the original DNA Polymerase I from E. coli, was the first enzyme used in PCR. Because of its lack of stability at high temperature, it needs to be replenished during each cycle, and therefore is not commonly used in PCR.
The bacteriophage T4 DNA polymerase (family B) was also initially used in PCR. It has a higher fidelity of replication than the Klenow fragment, but is also destroyed by heat. T7 DNA polymerase (family A) has similar properties and purposes. It has been applied to site-directed mutagenesis and Sanger sequencing.
Taq polymerase, the DNA Polymerase I from Thermus aquaticus, was the first thermostable polymerase used in PCR, and is still the one most commonly used. The enzyme can be isolated from its native source, or from its cloned gene expressed in E. coli. A 61 kDa truncated form lacking 5'-3' exonuclease activity, known as the Stoffel fragment, is expressed in E. coli. The lack of exonuclease activity may allow it to amplify longer targets than the native enzyme. It has been commercialized as AmpliTaq and Klentaq. A variant designed for hot-start PCR called the "Faststart polymerase" has also been produced. It requires strong heat activation, thereby avoiding non-specific amplification due to polymerase activity at low temperature. Many other variants have been created.
Other Thermus polymerases, such as Tth polymerase I from Thermus thermophilus, have seen some use. Tth has reverse transcriptase activity in the presence of Mn2+ ions, allowing PCR amplification from RNA targets.
The archaeal genus Pyrococcus has proven a rich source of thermostable polymerases with proofreading activity. Pfu DNA polymerase, isolated from P. furiosus, shows a 5-fold decrease in the error rate of replication compared to Taq. Since errors increase as PCR progresses, Pfu is the preferred polymerase when products are to be individually cloned for sequencing or expression. Other, less commonly used polymerases from this genus include Pwo from Pyrococcus woesei, Pfx from an unnamed species, and "Deep Vent" polymerase from strain GB-D.
Vent or Tli polymerase is an extremely thermostable DNA polymerase isolated from Thermococcus litoralis. The polymerase from Thermococcus fumicolans (Tfu) has also been commercialized.
Mechanism modifications
Sometimes even the basic mechanism of PCR can be modified.
Unlike normal PCR, Inverse PCR allows amplification and sequencing of DNA that surrounds a known sequence. It involves initially subjecting the target DNA to a series of restriction enzyme digestions, and then circularizing the resulting fragments by self ligation. Primers are designed to be extended outward from the known segment, resulting in amplification of the rest of the circle. This is especially useful in identifying sequences to either side of various genomic inserts.
Similarly, thermal asymmetric interlaced PCR (or TAIL-PCR) is used to isolate unknown sequences flanking a known area of the genome. Within the known sequence, TAIL-PCR uses a nested pair of primers with differing annealing temperatures. A 'degenerate' primer is used to amplify in the other direction from the unknown sequence.
Isothermal amplification methods
Some DNA amplification protocols have been developed that may be used as alternatives to PCR. They are isothermal, meaning that they are run at a constant temperature.
Helicase-dependent amplification (HDA) is similar to traditional PCR, but uses a constant temperature rather than cycling through denaturation and annealing/extension steps. DNA Helicase, an enzyme that unwinds DNA, is used in place of thermal denaturation. Loop-mediated isothermal amplification is a similar idea, but done with a strand-displacing polymerase.
Nicking enzyme amplification reaction (NEAR) and its cousin strand displacement amplification (SDA) are isothermal, replicating DNA at a constant temperature using a polymerase and nicking enzyme.
Recombinase Polymerase Amplification (RPA) uses a recombinase to specifically pair primers with double-stranded DNA on the basis of homology, thus directing DNA synthesis from defined DNA sequences present in the sample. Presence of the target sequence initiates DNA amplification, and no thermal or chemical melting of DNA is required. The reaction progresses rapidly and results in specific DNA amplification from just a few target copies to detectable levels typically within 5–10 minutes. The entire reaction system is stable as a dried formulation and does not need refrigeration. RPA can be used to replace PCR in a variety of laboratory applications and users can design their own assays.
Other types of isothermal amplification include whole genome amplification (WGA), Nucleic acid sequence-based amplification (NASBA), and transcription-mediated amplification (TMA).
See also
Vectorette PCR
References
External links
PCR Applications Manual (from Roche Diagnostics).
The Reference in qPCR -- an Academic & Industrial Information Platform
www.eConferences.de streaming portal -- Amplify your knowledge in qPCR, dPCR and NGS!
Polymerase chain reaction | Variants of PCR | Chemistry,Biology | 4,148 |
8,753,589 | https://en.wikipedia.org/wiki/Steam%20digester | The steam digester or bone digester (also known as Papin’s digester) is a high-pressure cooker invented by French physicist Denis Papin in 1679. It is a device for extracting fats from bones in a high-pressure steam environment, which also renders them brittle enough to be easily ground into bone meal. It is the forerunner of the autoclave and the domestic pressure cooker.
The steam-release valve, which was invented for Papin's digester following various explosions of the earlier models, inspired the development of the piston-and-cylinder steam engine.
History
The artificial vacuum was first produced in 1643 by Italian scientist Evangelista Torricelli and further developed by German scientist Otto von Guericke with his Magdeburg hemispheres. Guericke's demonstration was documented by Gaspar Schott, in a book that was read by Robert Boyle. Boyle and his assistant Robert Hooke improved Guericke's air pump design and built their own. From this, through various experiments, they formulated what is called Boyle's law, which states that the volume of a body of an ideal gas is inversely proportional to its pressure. Soon Jacques Charles formulated Charles' Law, which states that the volume of a gas at a constant pressure is proportional to its temperature. Boyle's and Charles' Laws were combined into the ideal gas law.
Based on these concepts, in 1679 Boyle's associate Denis Papin built a bone digester, which is a closed vessel with a tightly fitting lid that confines steam until a high pressure is generated. Later designs implemented a steam release valve to keep the machine from exploding. By watching the valve rhythmically moving up and down, Papin conceived the idea of a piston and cylinder engine. He did not, however, follow through with his design. In 1697, independent of Papin's designs, engineer Thomas Savery built the world's first steam engine. By 1712 an improved design based on Papin's ideas was developed by Thomas Newcomen.
Boyle speaks of Papin as having gone to England in the hope of finding a place in which he could satisfactorily pursue his favorite studies. Boyle himself had already been long engaged in the study of pneumatics, and had been especially interested in the investigations which had been original with Guericke. He admitted young Papin into his laboratory, and the two philosophers worked together at these attractive problems.
He probably invented his "Digester" while in England, and it was first described in a brochure written in English, under the title, "The New Digester." It was subsequently published in Paris.
This was a vessel with a safety valve, which can be tightly closed by a screw and a lid. Food can be cooked along with water in the vessel when the vessel is heated, and the vessel's internal temperature can be raised well above the normal boiling point, as far as the pressure inside the vessel safely permits. The maximum pressure is limited by a weight placed on the safety valve lever. If the pressure exceeds this limit, the safety valve will be forced open and steam will escape until the pressure drops sufficiently for the weight to close the valve again.
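As a back-of-the-envelope illustration of how the weight sets the maximum pressure, the sketch below balances the steam force on the valve seat against the force transmitted through the weighted lever. Every number in it (valve area, lever ratio, mass) is invented for illustration.

```python
# Toy model of the safety valve: steam lifts the valve when its force on the
# valve seat exceeds the force the weighted lever applies to hold it shut.

g = 9.81            # m/s^2
valve_area = 1e-4   # m^2 (a 1 cm^2 valve opening; assumed value)
lever_ratio = 8.0   # mechanical advantage of the lever arm (assumed)
mass_kg = 2.0       # weight hung on the lever (assumed)

max_gauge_pressure = mass_kg * g * lever_ratio / valve_area  # Pa
print(f"valve opens near {max_gauge_pressure / 1e5:.1f} bar gauge")
# ~15.7 bar above atmospheric with these made-up numbers
```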
It is probable that this essential attachment to the steam boiler had previously been used for other purposes; but Papin is given the credit of having first made use of it to control the pressure of steam. In 1787, Antoine Lavoisier, in his Elements of Chemistry, refers to "Papin's digester" as an example of an environment where high pressure prevents evaporation when he explains that the pressure caused by evaporation of fluid prevents further evaporation.
See also
Steam engine
History of thermodynamics
References
External links
Papin's Digester - Good Quality Image
Robert Boyle - has drawing of Papin's digester
French inventions
Steam power
Thermodynamics | Steam digester | Physics,Chemistry,Mathematics | 795 |
45,374,363 | https://en.wikipedia.org/wiki/Single-cell%20DNA%20template%20strand%20sequencing | Single-cell DNA template strand sequencing, or Strand-seq, is a technique for the selective sequencing of a daughter cell's parental template strands.
This technique offers a wide variety of applications, including the identification of sister chromatid exchanges in the parental cell prior to segregation, the assessment of non-random segregation of sister chromatids, the identification of misoriented contigs in genome assemblies, de novo genome assembly of both haplotypes in diploid organisms including humans, whole-chromosome haplotyping, and the identification of germline and somatic genomic structural variation, the latter of which can be detected robustly even in single cells.
Background
Strand-seq (single-cell and single-strand sequencing) was one of the first single-cell sequencing protocols, described in 2012. This genomic technique selectively sequences the parental template strands in DNA libraries from single daughter cells. As a proof of concept study, the authors demonstrated the ability to acquire sequence information from the Watson and/or Crick chromosomal strands in an individual DNA library, depending on the mode of chromatid segregation; a typical DNA library will always contain DNA from both strands. The authors were specifically interested in showing the utility of Strand-seq in detecting sister chromatid exchanges (SCEs) at high resolution. They successfully identified eight putative SCEs in a murine (mouse) embryonic stem (mES) cell line with resolution up to 23 bp. This methodology has also been shown to hold great utility in discerning patterns of non-random chromatid segregation, especially in stem cell lineages. Furthermore, SCEs have been implicated as diagnostic indicators of genome stress, information that has utility in cancer biology. Most research on this topic involves observing the assortment of chromosomal template strands through many cell development cycles and correlating non-random assortment with particular cell fates. Single-cell sequencing protocols were foundational in the development of this technique, but they differ in several aspects.
Methodology
Similar methods
Past methods have been used to track the inheritance patterns of chromatids on a per-strand basis and elucidate the process of non-random segregation:
Pulse-chase
Pulse-chase experiments have been used for determining the segregation patterns of chromosomes in addition to studying other time-dependent cellular processes. Briefly, pulse-chase assays allow researchers to track radioactively labelled molecules in the cell. In experiments used to study non-random chromosome assortment, stem cells are labeled or "pulsed" with a nucleotide analog that is incorporated into the replicated DNA strands. This allows the nascent strands to be tracked through many rounds of replication. Unfortunately, this method is found to have poor resolution, as segregation can only be observed at the chromatid level.
Chromosome-orientation fluorescence in situ hybridization (CO-FISH)
CO-FISH, or strand-specific fluorescence in situ hybridization, facilitates strand-specific targeting of DNA with fluorescently-tagged probes. It exploits the uniform orientation of major satellites relative to the direction of telomeres, thus allowing strands to be unambiguously designated as "Watson" or "Crick" strands. Using unidirectional probes that recognize major satellite regions, coupled to fluorescently labelled dyes, individual strands can be bound. To ensure that only the template strand is labelled, the newly formed strands must be degraded by BrdU incorporation and photolysis. This protocol offers improved cytogenetic resolution, allowing researchers to observe single strands as opposed to whole chromatids with pulse-chase experiments. Moreover, non-random segregation of chromatids can be directly assayed by targeting major satellite markers.
Wet lab protocols
Cells of interest are cultured either in vivo or in vitro. During S-phase, cells are treated with bromodeoxyuridine (BrdU) which is then incorporated into their nascent DNA, acting as a substitute for thymidine. After at least one replication event has occurred, the daughter cells are synchronized at the G2 phase and individually separated by fluorescence-activated cell sorting (FACS). The cells are directly sorted into lysis buffer and their DNA is extracted. Having been arrested at a specified number of generations (usually one), the inheritance patterns of sister chromatids can be assessed. The following methods concentrate on the DNA sequencing of a single daughter cell's DNA. At this point the chromosomes are composed of nascent strands with BrdU in place of thymidine and the original template strands are primed for DNA sequencing library preparation. Since this protocol was published in 2012, the canonical methodology is only well described for Illumina sequencing platforms; the protocol could very easily be adapted for other sequencing platforms, depending on the application. Next, the DNA is incubated with a special dye such that when the BrdU-dye complex is excited by UV light, nascent strands are nicked by photolysis. This process inhibits polymerase chain reaction (PCR) amplification of the nascent strand, allowing only the parental template strands to be amplified. Library construction proceeds as normal for Illumina paired-end sequencing. Multiplexing PCR primers are then ligated to the PCR amplicons, with hexamer barcodes identifying the cell from which each fragment is derived. Unlike other single-cell sequencing protocols, Strand-seq does not utilize multiple displacement amplification or MALBAC for DNA amplification. Rather, it is solely dependent on PCR.
Bioinformatic processing
The majority of current applications for Strand-seq start by aligning sequenced reads to a reference genome. Alignment can be performed using a variety of short-read aligners such as BWA and Bowtie. By aligning Strand-seq reads from a single cell to the reference genome, the inherited template strands can be determined. If the cell was sequenced after more than one generation, a pattern of chromatid assortment can be ascertained for the particular cell lineage at hand.
The Bioinformatic Analysis of Inherited Templates (BAIT) was the first bioinformatic software to exclusively analyze reads generated from the Strand-seq methodology. It begins by aligning the reads to a reference sequence, binning the genome into sections, and finally counting the number of Watson and Crick reads falling within each bin. From here, BAIT enables the identification of SCE events, misoriented contigs in the reference genome, aneuploid chromosomes and modes of sister chromatid segregation. It can also aid in assembling early-build genomes and assigning orphan scaffolds to locations within late-build genomes. Following BAIT, numerous bioinformatics tools have recently been introduced that use Strand-seq data for a variety of applications (see, for example, the following sections on haplotyping, de novo genome assembly, and discovery of structural variations in single cells, with reference to the respective linked articles).
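As an illustration of the core binning idea used by BAIT-style analyses, the sketch below counts forward-strand ("Crick") and reverse-strand ("Watson") reads in fixed-size bins and classifies each bin's template-strand state. The function name, bin size, and purity threshold are hypothetical choices for illustration, not parameters of the BAIT software itself.

```python
# Count Watson (reverse-strand) and Crick (forward-strand) reads per bin and
# classify each bin's template-strand state (WW, WC, or CC). A switch such as
# WW -> WC along a chromosome in one library is the signature of a sister
# chromatid exchange; a WW -> CC switch seen in every library points to a
# misoriented contig in the reference.

def classify_bins(reads, chrom_length, bin_size=200_000, purity=0.8):
    """reads: iterable of (position, is_reverse) tuples for one chromosome."""
    n_bins = chrom_length // bin_size + 1
    watson = [0] * n_bins   # reverse-strand reads
    crick = [0] * n_bins    # forward-strand reads
    for pos, is_reverse in reads:
        b = pos // bin_size
        if is_reverse:
            watson[b] += 1
        else:
            crick[b] += 1

    states = []
    for w, c in zip(watson, crick):
        total = w + c
        if total == 0:
            states.append("NA")
        elif w / total >= purity:
            states.append("WW")   # both templates Watson
        elif c / total >= purity:
            states.append("CC")   # both templates Crick
        else:
            states.append("WC")   # one template of each
    return states

states = classify_bins([(10_000, True), (250_000, True), (410_000, False)],
                       chrom_length=600_000)
print(states)  # ['WW', 'WW', 'CC', 'NA']
```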
Limitations
Strand-seq requires cells undergoing cell division for BrdU labeling, and thus is not applicable to formalin-fixed specimens or non-dividing cells. But it may be applied to normal mitotic cells and tissues, organoids, as well as leukemia and tumor samples using fresh or frozen primary specimens. Strand-seq uses Illumina sequencing, and applications that require sequence information from different sequencing technologies require new protocols, or alternatively integration of data generated using distinct sequencing platforms, as recently showcased.
Authors from the initial papers describing Strand-seq showed that they were able to attain a 23 bp resolution for mapping SCEs, and other large chromosomal abnormalities are likely to share this mapping resolution (if breakpoint fine-mapping is performed). Resolution, however, depends on a combination of the sequencing platform used, library preparation protocols, the number of cells analysed, and the depth of sequencing per cell. Precision would be expected to increase further with sequencing technologies that do not incur errors in homopolymeric repeats.
Applications and utility
Identifying sister chromatid exchanges
Strand-seq was initially proposed as a tool to identify sister chromatid exchanges. Being a process that is localized to individual cells, DNA sequencing of more than one cell would naturally scatter these effects and suggest an absence of SCE events. Moreover, classic single cell sequencing techniques are unable to show these events due to heterogeneous amplification biases and dual-strand sequence information, thereby necessitating Strand-seq. Using the reference alignment information, researchers can identify an SCE if the directionality of an inherited template strand changes.
Identifying misoriented contigs
Misoriented contigs are present in reference genomes at significant rates (ex. 1% in the mouse reference genome). Strand-seq, in contrast to conventional sequencing methods, can detect these misorientations. Misoriented contigs are present where strand inheritance changes from one homozygous state to the other (ex. WW to CC, or CC to WW). Moreover, this state change is visible in every Strand-seq library, reinforcing the presence of a misoriented contig.
Identifying non-random segregation of sister chromatids
Prior to the 1960s, it was assumed that sister chromatids were segregated randomly into daughter cells. However, non-random segregation of sister chromatids has been observed in mammalian cells ever since. There have been a few hypotheses proposed to explain the non-random segregation, including the Immortal Strand Hypothesis and the Silent Sister Hypothesis, one of which may hopefully be verified by methods involving Strand-seq.
Immortal Strand Hypothesis
Mutations occur every time a cell divides. Certain long-lived cells (ex. stem cells) may be particularly affected by these mutations. The Immortal Strand Hypothesis proposes that these cells avoid mutation accumulation by consistently retaining parental template strands. For this hypothesis to be true, sister chromatids from each and every chromosome must segregate in a non-random fashion. Additionally, one cell will retain the exact same set of template strands after each division, giving the rest to the other cell products of the division.
Silent Sister Hypothesis
This hypothesis states that sister chromatids have differing epigenetic signatures, thereby also differing expression regulation. When replication occurs, non-random segregation of sister chromatids ensures the fates of the daughter cells. Assessing the validity of this hypothesis would require a joint analysis of Strand-seq and gene expression profiles for both daughter cells.
Discovery of structural variations & aneuploid chromosomes
The output of BAIT shows the inheritance of parental template strands along the genome. Normally, two template strands are inherited for each autosome, and any deviation from this number indicates an instance of aneuploidy, which can be visualised in single cells.
Inversions are a class of copy-number balanced structural variation, which lead to a change in strand directionality readily visualised by Strand-seq. Strand-seq can hence be used to readily detect polymorphic inversions in humans and primates, including Megabase-sized events embedded in large segmental duplications known to be inaccessible to Illumina sequencing.
A study published online in 2019 further demonstrated that using Strand-seq, all classes of structural variation ≥200kb including deletions, duplications, inversions, inverted duplications, balanced translocations, unbalanced translocations, breakage-fusion-bridge cycle mediated complex DNA rearrangements, and chromothripsis events are sensitively detected in single cells or subclones, using single-cell tri-channel processing (scTRIP). scTRIP works via joint modelling of read-orientation, read-depth, and haplotype-phase to discover SVs in single cells. Using scTRIP, structural variants are resolved by chromosome-length haplotype, which confers higher sensitivity and specificity for single-cell structural variant calling than other current technologies. Since scTRIP does not require reads (or read pairs) traversing the boundaries (or breakpoints) of structural variants in single cells for variant calling, it does not suffer from known artefacts of single-cell methods based on whole genome amplification (i.e. so-called read chimera) which tend to confound structural variation analysis in single cells.
Haplotyping, genome assembly & generation of high-resolution human genetic variation maps
Early-build genomes are quite fragmented, with unordered and unoriented contigs. Using Strand-seq provides directionality information to accompany the sequence, which ultimately helps resolve the placement of contigs. Contigs present in the same chromosome will exhibit the same directionality, provided SCE events have not occurred. Conversely, contigs present in different chromosomes will only exhibit the same directionality in 50% of the Strand-seq libraries.
Scaffolds, successive contigs intersected by a gap, can be localized in the same manner.
The same principle of using strand direction to distinguish large DNA molecules enables the use of Strand-seq as a tool to construct whole-chromosome haplotypes of genetic variation, from telomere to telomere.
Recent reports have shown that Strand-seq can be computationally integrated with long-read sequencing technology, with the unique advantages of both technologies enabling the generation of highly contiguous haplotype-resolved de novo human genome assemblies. These genomic assemblies integrate all forms of genetic variation including single nucleotide variants, indels and structural variation even across complex genomic loci, and have recently been applied to generate comprehensive haplotype-aware maps of structural variation in a diversity panel of humans from distinct ancestries.
Considerations
The possibility that BrdU substituted for thymidine in the genomic DNA could induce double-stranded chromosomal breaks, and specifically result in SCEs, has been previously discussed in the literature. Additionally, BrdU incorporation has been suggested to interfere with strand segregation patterns. If this is the case, there would be an inflation in false positive SCEs which may be annotated. Therefore, many cells should be analyzed using the Strand-seq protocol to ensure that SCEs are in fact present in the population. For structural variants detected in single cells, detection of the same variant (on the same haplotype) in more than one cell can exclude BrdU incorporation as a possible cause.
The number of single cell strands that need to be sequenced in order for an annotation to be accepted has yet to be proposed and is highly dependent on the questions being asked. As Strand-seq is founded on single cell sequencing techniques, one must consider the problems faced with single cell sequencing as well. These include the lack of standards for cell isolation and amplification. Even though previous Strand-seq studies isolated cells using FACS, microfluidics also serves as an attractive alternative. PCR has been shown to produce more erroneous amplification products than strand-displacement-based methods such as MDA and MALBAC, whereas the latter two techniques generate chimeric reads as a byproduct that can result in erroneous structural variation calls. MDA and MALBAC also generate more dropouts than Strand-seq during SV detection because they require reads that cross the breakpoint of an SV to enable its detection (this is not required for any of the different SV classes that Strand-seq can detect). Strand displacement amplification also tends to generate more sequence and longer products, which could be beneficial for long-read sequencing technologies.
References
A wikibook on next generation sequencing
A free didactic directory for DNA sequencing analysis.
DNA sequencing
Genomics techniques
2012 in biotechnology
2012 introductions | Single-cell DNA template strand sequencing | Chemistry,Biology | 3,268 |
21,518,774 | https://en.wikipedia.org/wiki/Neofavolus%20alveolaris | Neofavolus alveolaris, commonly known as the hexagonal-pored polypore, is a species of fungus in the family Polyporaceae. It causes a white rot of dead hardwoods. Found on sticks and decaying logs, its distinguishing features are its yellowish to orange scaly cap, and the hexagonal or diamond-shaped pores. It is widely distributed in North America, and also found in Asia, Australia, and Europe.
Taxonomy
The first scientific description of the fungus was published in 1815 by Augustin Pyramus de Candolle, under the name Merulius alveolaris. A few years later in 1821 it was sanctioned by Elias Magnus Fries as Cantharellus alveolaris. It was transferred to the genus Polyporus in a 1941 publication by Appollinaris Semenovich Bondartsev and Rolf Singer. It was then transferred to its current genus in 2012.
The genus name is derived from the Greek meaning "many pores", while the specific epithet alveolaris means "with small pits or hollows".
Description
The fruit bodies of P. alveolaris are in diameter, rounded to kidney- or fan-shaped. Fruit bodies sometimes have stems, but they are also found attached directly to the growing surface. The cap surface is dry, covered with silk-like fibrils, and is an orange-yellow or reddish-orange color, which weathers to cream to white. The context is thin (2 mm), tough, and white. Tubes are radially elongated, with the pore walls breaking down in age. The pores are large—compared to other species in this genus—typically 0.5–3 mm wide, angular (diamond-shaped) or hexagonal; the pore surface is a white to buff color. The stipe, if present, is 0.5–2 cm long by 1.5–5 mm thick, placed either laterally or centrally, and has a white to tan color. The pores extend decurrently on the stipe. The spore deposit is white.
Microscopic features
Spores are narrowly elliptical and smooth, hyaline, with dimensions of 11–14.5 × 4–5 μm. The basidia are club-shaped and four-spored, with dimensions of 28–42 × 7–9 μm.
Similar species
Polyporus craterellus bears a resemblance to P. alveolaris, but the former species has a more prominent stalk and does not have the reddish-orange colors observed in the latter.
Edibility
This mushroom is edible when young. It has been described as "edible but tough," with toughness increasing with age, and not having "all that distinctive of a flavor." Another reference lists the species as inedible.
Habitat and distribution
Neofavolus alveolaris is found growing singly or grouped together on branches and twigs of hardwoods, commonly on shagbark hickory in the spring and early summer. It has been reported growing on the dead hardwoods of genera Acer, Castanea, Cornus, Corylus, Crataegus, Erica, Fagus, Fraxinus, Juglans, Magnolia, Morus, Populus, Pyrus, Robinia, Quercus, Syringa, Tilia, and Ulmus.
This species is widely distributed in North America, and has also been collected in Australia, China, and Europe (Czechoslovakia, Italy and Portugal).
Antifungal compounds
A polypeptide with antifungal properties has been isolated from the fresh fruit bodies of this species. Named alveolarin, it inhibits the growth of the species Botrytis cinerea, Fusarium oxysporum, Mycosphaerella arachidicola, and Physalospora piricola.
References
alveolaris
Edible fungi
Fungi described in 1815
Fungi of Australia
Fungi of China
Fungi of Europe
Fungi of North America
Fungi of Asia
Polyporaceae
Taxa named by Augustin Pyramus de Candolle
Fungus species | Neofavolus alveolaris | Biology | 843 |
6,172,305 | https://en.wikipedia.org/wiki/Helvella%20crispa | Helvella crispa, also known as the fluted white elfin saddle, white saddle, elfin saddle or common helvel, is an ascomycete fungus of the family Helvellaceae. The mushroom is readily identified by its irregularly shaped whitish cap, fluted stem, and fuzzy undersurfaces. It is found in eastern North America and in Europe, near deciduous trees in summer and autumn.
Etymology
The fungus was originally described as Phallus crispus by the naturalist Giovanni Antonio Scopoli in 1772. Its specific epithet is the Latin adjective crispa, meaning 'wrinkled' or 'curly'. The generic name was originally a type of Italian herb but became associated with morels.
Description
Helvella crispa is creamy white in colour, in length, with a cap 2–6 cm (1–2 in) in diameter. It is striking due to its irregularly shaped lobes on the cap, but with a robust creamy-white base (2–8×1–2.5 cm in size). Its flesh is thin and brittle. The stem is 3–10 cm (1¼–4 in) long, white or pinkish in colour and ornately ribbed. It gives off a pleasant aroma, but is not edible raw.
The spore print is white, the oval spores average 19 x 11.5 μm. Occasionally white-capped forms are found. It can be distinguished from occasional white forms of Helvella lacunosa by its furry cap undersurface and inrolled margins when young.
Distribution and habitat
H. crispa is found in China, Japan, Europe and eastern North America, though is replaced by the related Helvella lacunosa in western parts.
It grows in grass as well as in humid hardwoods, such as beech (not so well in resinous ones), along the side of pathways, in hedges and on the talus of meadows. They can be spotted from the end of summer until the end of autumn.
Edibility and adverse effects
Although some guidebooks list this species as edible, there is speculation that it may contain monomethylhydrazine, which can cause severe intoxication, and may be carcinogenic. It has been reported to cause gastrointestinal symptoms when eaten raw. If consumed, it is recommended to cook it thoroughly and eat only small amounts.
Recent evidence suggests that this fungus and similar species containing gyromitrin may cause the potentially fatal disease amyotrophic lateral sclerosis (ALS) after many years or even decades.
References
External links
Helvella crispa on Mushroomexpert.com
crispa
Fungi of Asia
Fungi of Europe
Fungi described in 1822
Fungi of North America
Fungus species | Helvella crispa | Biology | 562 |
826,956 | https://en.wikipedia.org/wiki/Clausius%E2%80%93Mossotti%20relation | In electromagnetism, the Clausius–Mossotti relation, named for O. F. Mossotti and Rudolf Clausius, expresses the dielectric constant (relative permittivity, $\varepsilon_\mathrm{r}$) of a material in terms of the atomic polarizability, $\alpha$, of the material's constituent atoms and/or molecules, or a homogeneous mixture thereof. It is equivalent to the Lorentz–Lorenz equation, which relates the refractive index (rather than the dielectric constant) of a substance to its polarizability. It may be expressed as:

$$\frac{\varepsilon_\mathrm{r} - 1}{\varepsilon_\mathrm{r} + 2} = \frac{N\alpha}{3\varepsilon_0}$$
where
$\varepsilon_\mathrm{r}$ is the dielectric constant of the material, which for non-magnetic materials is equal to $n^2$, where $n$ is the refractive index;
$\varepsilon_0$ is the permittivity of free space;
$N$ is the number density of the molecules (number per cubic meter);
$\alpha$ is the molecular polarizability in SI units [C·m²/V].
In the case that the material consists of a mixture of two or more species, the right hand side of the above equation would consist of the sum of the molecular polarizability contribution from each species, indexed by $i$, in the following form:

$$\frac{\varepsilon_\mathrm{r} - 1}{\varepsilon_\mathrm{r} + 2} = \frac{1}{3\varepsilon_0}\sum_i N_i \alpha_i$$
In the CGS system of units the Clausius–Mossotti relation is typically rewritten to show the molecular polarizability volume $\alpha' = \alpha/(4\pi\varepsilon_0)$, which has units of volume [m³]:

$$\frac{\varepsilon_\mathrm{r} - 1}{\varepsilon_\mathrm{r} + 2} = \frac{4\pi}{3} N \alpha'$$

Confusion may arise from the practice of using the shorter name "molecular polarizability" for both $\alpha$ and $\alpha'$ within literature intended for the respective unit system.
The Clausius–Mossotti relation assumes only an induced dipole relevant to its polarizability and is thus inapplicable for substances with a significant permanent dipole. It is applicable to gases at sufficiently low densities and pressures. For example, the Clausius–Mossotti relation is accurate for N2 gas up to 1000 atm between 25 °C and 125 °C. Moreover, the Clausius–Mossotti relation may be applicable to substances if the applied electric field is at sufficiently high frequencies such that any permanent dipole modes are inactive.
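As a numerical illustration, the sketch below inverts the Clausius–Mossotti relation to obtain the dielectric constant of a dilute gas from a molecular polarizability. The density and polarizability used are round illustrative values of the order found for simple gases, not measured constants.

```python
# Solve the Clausius-Mossotti relation for eps_r given N and alpha:
# (eps_r - 1)/(eps_r + 2) = N*alpha/(3*eps0)  =>  eps_r = (1 + 2x)/(1 - x)

eps0 = 8.8541878128e-12   # F/m, vacuum permittivity
N = 2.5e25                # molecules per m^3 (gas near ambient conditions; assumed)
alpha = 2.0e-40           # C*m^2/V, illustrative molecular polarizability

x = N * alpha / (3 * eps0)     # the value of (eps_r - 1)/(eps_r + 2)
eps_r = (1 + 2 * x) / (1 - x)  # invert the relation
print(f"eps_r = {eps_r:.6f}")  # slightly above 1, as expected for a dilute gas
```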
Lorentz–Lorenz equation
The Lorentz–Lorenz equation is similar to the Clausius–Mossotti relation, except that it relates the refractive index (rather than the dielectric constant) of a substance to its polarizability. The Lorentz–Lorenz equation is named after the Danish mathematician and scientist Ludvig Lorenz, who published it in 1869, and the Dutch physicist Hendrik Lorentz, who discovered it independently in 1878.
The most general form of the Lorentz–Lorenz equation is (in Gaussian-CGS units)

$$\frac{n^2 - 1}{n^2 + 2} = \frac{4\pi}{3} N \alpha$$

where $n$ is the refractive index, $N$ is the number of molecules per unit volume, and $\alpha$ is the mean polarizability.
This equation is approximately valid for homogeneous solids as well as liquids and gases.
When the square of the refractive index is $n^2 \approx 1$, as it is for many gases, the equation reduces to:

$$n^2 - 1 \approx 4\pi N \alpha$$

or simply

$$n \approx 1 + 2\pi N \alpha$$

This applies to gases at ordinary pressures. The refractive index of the gas can then be expressed in terms of the molar refractivity $A$ as:

$$n \approx \sqrt{1 + \frac{3Ap}{RT}}$$

where $p$ is the pressure of the gas, $R$ is the universal gas constant, and $T$ is the (absolute) temperature, which together determine the number density $N$.
References
Bibliography
Lorenz, Ludvig, "Experimentale og theoretiske Undersogelser over Legemernes Brydningsforhold", Vidensk Slsk. Sckrifter 8,205 (1870) https://www.biodiversitylibrary.org/item/48423#page/5/mode/1up
O. F. Mossotti, Discussione analitica sull'influenza che l'azione di un mezzo dielettrico ha sulla distribuzione dell'elettricità alla superficie di più corpi electrici disseminati in esso, Memorie di Mathematica e di Fisica della Società Italiana della Scienza Residente in Modena, vol. 24, p. 49-74 (1850).
Electrodynamics
Electromagnetism
Electric and magnetic fields in matter
Eponymous equations of physics | Clausius–Mossotti relation | Physics,Chemistry,Materials_science,Mathematics,Engineering | 845 |
31,860,373 | https://en.wikipedia.org/wiki/Moss%E2%80%93Burstein%20effect | The Moss-Burstein effect, also known as the Burstein–Moss shift, is the phenomenon in which the apparent band gap of a semiconductor is increased as the absorption edge is pushed to higher energies as a result of some states close to the conduction band being populated. This is observed for a degenerate electron distribution such as that found in some degenerate semiconductors and is known as a Moss–Burstein shift.
The effect occurs when the electron carrier concentration exceeds the conduction band edge density of states, which corresponds to degenerate doping in semiconductors. In nominally doped semiconductors, the Fermi level lies between the conduction and valence bands. For example, in n-doped semiconductor, as the doping concentration is increased, electrons populate states within the conduction band which pushes the Fermi level to higher energy. In the case of degenerate level of doping, the Fermi level lies inside the conduction band. The "apparent" band gap of a semiconductor can be measured using transmission/reflection spectroscopy. In the case of a degenerate semiconductor, an electron from the top of the valence band can only be excited into conduction band above the Fermi level (which now lies in conduction band) since all the states below the Fermi level are occupied states. Pauli's exclusion principle forbids excitation into these occupied states. Thus we observe an increase in the apparent band gap: apparent band gap = actual band gap + Moss–Burstein shift.
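For a parabolic conduction band, the magnitude of the shift is often estimated (neglecting the valence-band contribution) from the Fermi energy of the degenerate electron gas, ΔE ≈ ħ²(3π²n)^(2/3) / (2m*). The sketch below evaluates this with an illustrative effective mass and carrier density; both are assumed values, not data for a specific material.

```python
import math

# Rough estimate of the Burstein-Moss shift from the free-electron-gas Fermi
# energy with an effective mass (parabolic-band approximation).

hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg
m_eff = 0.3 * m_e        # illustrative effective mass (assumed)
n = 1e26                 # electrons per m^3 (1e20 cm^-3, degenerate doping; assumed)

k_f = (3 * math.pi**2 * n) ** (1 / 3)       # Fermi wavevector
shift_J = hbar**2 * k_f**2 / (2 * m_eff)    # energy shift in joules
print(f"Burstein-Moss shift ~ {shift_J / 1.602176634e-19:.2f} eV")  # ~0.26 eV
```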
Negative Burstein shifts can also occur. These are due to band structure changes due to doping.
References
Further reading
Electronic band structures | Moss–Burstein effect | Physics,Chemistry,Materials_science | 343 |
5,158,653 | https://en.wikipedia.org/wiki/Breakthrough%20of%20the%20Year | The Breakthrough of the Year is an annual award for the most significant development in scientific research made by the AAAS journal Science, an academic journal covering all branches of science.
Originating in 1989 as the Molecule of the Year, and inspired by Time Person of the Year, it was renamed the Breakthrough of the Year in 1996.
Molecule of the Year
1989 PCR and DNA polymerase
1990 the manufacture of synthetic diamonds
1991 buckminsterfullerene
1992 nitric oxide
1993 p53
1994 DNA repair enzyme
Breakthrough of the Year
1996: Understanding HIV
1997: Dolly the sheep, the first mammal to be cloned from adult cells
1998: Accelerating universe
1999: Prospective stem-cell therapies
2000: Full genome sequencing
2001: Nanocircuits or Molecular circuit
2002: RNA interference
2003: Dark energy
2004: Spirit rover landed on Mars
2005: Evolution in action
2006: Proof of the Poincaré conjecture
2007: Human genetic variation
2008: Cellular reprogramming
2009: Ardipithecus ramidus
2010: The first quantum machine
2011: HIV treatment as prevention (HPTN 052)
2012: Discovery of the Higgs boson
2013: Cancer immunotherapy
2014: Rosetta comet mission
2015: CRISPR genome-editing method
2016: First observation of gravitational waves
2017: Neutron star merger (GW170817)
2018: Single-cell sequencing
2019: A black hole made visible
2020: COVID-19 vaccine, developed and tested at record speed
2021: An AI brings protein structures to all
2022: James Webb Space Telescope debut
2023: GLP-1 Drugs
2024: Lenacapavir
See also
Physics World, also has a Breakthrough of the Year award
References
Science and technology awards
American Association for the Advancement of Science
Awards established in 1989
Awards established in 1996
Scientific research awards | Breakthrough of the Year | Technology | 368 |
21,506 | https://en.wikipedia.org/wiki/Numerical%20analysis | Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of numerical methods that attempt to find approximate solutions of problems rather than the exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences like economics, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology.
Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid 20th century, computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms.
The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection (YBC 7289), gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square.
Numerical analysis continues this long tradition: rather than giving exact symbolic answers translated into digits and applicable only to real-world measurements, approximate solutions within specified error bounds are used.
Applications
The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to a wide variety of hard problems, many of which are infeasible to solve symbolically:
Advanced numerical methods are essential in making numerical weather prediction feasible.
Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations.
Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically.
In the financial field, hedge funds (private investment funds) and other financial institutions use quantitative finance tools from numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants.
Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. Historically, such algorithms were developed within the overlapping field of operations research.
Insurance companies use numerical programs for actuarial analysis.
History
The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method. The origins of modern numerical analysis are often linked to a 1947 paper by John von Neumann and Herman Goldstine, but others consider modern numerical analysis to go back to work by E. T. Whittaker in 1912.
To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.
The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
The Leslie Fox Prize for Numerical Analysis was initiated in 1985 by the Institute of Mathematics and its Applications.
Key concepts
Direct and iterative methods
Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability).
In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps, even if infinite precision were possible. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.
Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.
As an example, consider the problem of solving
3x³ + 4 = 28
for the unknown quantity x.
For the iterative method, apply the bisection method to f(x) = 3x³ − 24. The initial values are a = 0, b = 3, f(a) = −24, f(b) = 57.

a        b        mid      f(mid)
0        3        1.5      −13.875
1.5      3        2.25     10.172
1.5      2.25     1.875    −4.225
1.875    2.25     2.0625   2.321

From this table it can be concluded that the solution is between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2.
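The iteration above is short enough to transcribe directly; the following sketch reproduces the four bisection steps for f(x) = 3x³ − 24 on the starting interval [0, 3].

```python
# Bisection: halve the bracketing interval, keeping the half whose endpoints
# have opposite signs of f, so a root always remains inside.

def bisect(f, a, b, iterations):
    for _ in range(iterations):
        mid = (a + b) / 2
        if f(a) * f(mid) < 0:
            b = mid   # sign change in the left half: root lies there
        else:
            a = mid   # otherwise the root lies in the right half
    return a, b

f = lambda x: 3 * x**3 - 24
print(bisect(f, 0, 3, 4))   # (1.875, 2.0625), bracketing the root x = 2
```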
Conditioning
Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem.
Well-conditioned problem: By contrast, evaluating the same function near x = 10 is a well-conditioned problem. For instance, f(10) = 1/9 ≈ 0.111 and f(11) = 0.1: a modest change in x leads to a modest change in f(x).
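Both cases can be checked directly; a minimal sketch:

```python
# The same small change in x gives very different changes in f(x) = 1/(x - 1)
# depending on where f is evaluated.

f = lambda x: 1 / (x - 1)
print(f(1.1), f(1.001))   # ~10 and ~1000: ill-conditioned near x = 1
print(f(10), f(11))       # ~0.111 and 0.1: well-conditioned near x = 10
```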
Discretization
Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called 'discretization'. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points in its domain, even though this domain is a continuum.
Generation and propagation of errors
The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced in the solution of the problem.
Round-off
Round-off errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are).
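A classic illustration in Python: decimal fractions such as 0.1 have no exact binary representation, so even a single addition is already slightly off:

```python
print(0.1 + 0.2)           # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)    # False
```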
Truncation and discretization error
Truncation errors are committed when an iterative method is terminated or a mathematical procedure is approximated and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. In the example above, computing the solution of 3x³ + 4 = 28 by bisection, the calculated root after ten iterations is roughly 1.99. Therefore, the truncation error is roughly 0.01.
Once an error is generated, it propagates through the calculation. For example, the operation + on a computer is inexact. A calculation of the type a + b + c + d + e is even more inexact, since each intermediate sum is rounded.
A truncation error is created when a mathematical procedure is approximated. To integrate a function exactly, an infinite sum of regions must be found, but numerically only a finite sum of regions can be computed, hence only an approximation of the exact solution is obtained. Similarly, to differentiate a function, the differential element must approach zero, but numerically only a nonzero value of the differential element can be chosen.
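A short sketch of the differentiation point: a forward-difference quotient with nonzero step h incurs a truncation error that shrinks with h until round-off takes over (the test function sin and the step sizes are illustrative):

```python
import math

def forward_diff(f, x, h):
    # Approximates f'(x); the exact derivative needs h -> 0,
    # but numerically h must stay nonzero.
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # d/dx sin(x) at x = 1
for h in (1e-1, 1e-4, 1e-8, 1e-12):
    approx = forward_diff(math.sin, 1.0, h)
    print(h, abs(approx - exact))
```

The error first decreases with h (truncation dominates) and then grows again for very small h, where the subtraction of nearly equal floating-point numbers lets round-off dominate.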
Numerical stability and well-posed problems
An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much larger during the calculation. Stability is helped when the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. Conversely, if a problem is ill-conditioned, then any small error in the data can grow to be a large error in the solution.
Both the original problem and the algorithm used to solve that problem can be well-conditioned or ill-conditioned, and any combination is possible.
So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. The art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem.
Areas of study
The field of numerical analysis includes many sub-disciplines. Some of the major ones are:
Computing values of functions
One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging the number into the formula, is sometimes not very efficient. For polynomials, a better approach is the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating-point arithmetic.
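A minimal sketch of the Horner scheme, which evaluates a degree-n polynomial with n multiplications and n additions (the coefficient list below is illustrative):

```python
def horner(coeffs, x):
    """Evaluate a polynomial given coefficients from highest to lowest degree.

    Rewrites a*x^3 + b*x^2 + c*x + d as ((a*x + b)*x + c)*x + d,
    so each degree costs one multiplication and one addition.
    """
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

print(horner([3, 0, 0, -24], 2))  # 3x^3 - 24 at x = 2 -> 0.0
```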
Interpolation, extrapolation, and regression
Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?
Extrapolation is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found.
Regression is also similar, but it takes into account that the data are imprecise. Given some points, and a measurement of the value of some function at these points (with an error), a function that approximately fits the data can be determined. The least-squares method is one way to achieve this.
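A sketch contrasting the three tasks with NumPy (the sample data are invented for illustration):

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, 2.7, 7.4, 20.1])   # noisy samples of some unknown function

# Interpolation: value between the given points (piecewise linear here).
print(np.interp(1.5, xs, ys))

# Regression: least-squares fit of a degree-2 polynomial to the noisy data.
coeffs = np.polyfit(xs, ys, 2)
print(coeffs)

# Extrapolation: evaluating the fitted model outside [0, 3] -- riskier.
print(np.polyval(coeffs, 4.0))
```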
Solving equations and systems of equations
Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation 2x + 5 = 3 is linear while 2x² + 5 = 3 is not.
Much effort has been put into the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian) positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss–Seidel method, successive over-relaxation and the conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a matrix splitting.
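A small sketch of both routes: a factorization-based direct solve via NumPy and a hand-written Jacobi iteration (the 2×2 system is invented and diagonally dominant, so Jacobi converges):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Direct: factorization-based solve.
print(np.linalg.solve(A, b))

# Iterative: Jacobi -- update every unknown from the previous iterate.
x = np.zeros(2)
D = np.diag(A)                  # diagonal entries of A
R = A - np.diagflat(D)          # off-diagonal part of A
for _ in range(25):
    x = (b - R @ x) / D
print(x)                        # converges to the same solution
```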
Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations.
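A minimal Newton iteration for the running example 3x³ − 24 = 0, whose derivative 9x² is available in closed form:

```python
def newton(f, fprime, x, steps=10):
    # Repeatedly follow the tangent line at x down to its root.
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

print(newton(lambda x: 3 * x**3 - 24,
             lambda x: 9 * x**2,
             x=3.0))  # converges rapidly to 2.0
```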
Solving eigenvalue or singular value problems
Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis.
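A sketch of the idea behind spectral compression: truncate the singular value decomposition to its k largest terms (the matrix and the rank k are illustrative):

```python
import numpy as np

A = np.arange(12.0).reshape(3, 4)       # stand-in for an image
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 1                                   # keep only the largest singular value
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(s)                                # singular values, largest first
print(np.linalg.norm(A - A_k))          # error of the rank-1 approximation
```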
Optimization
Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints.
The field of optimization is further split in several subfields, depending on the form of the objective function and the constraint. For instance, linear programming deals with the case that both the objective function and the constraints are linear. A famous method in linear programming is the simplex method.
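A sketch of a tiny linear program solved with SciPy's linprog, which minimizes by default (the problem data are invented):

```python
from scipy.optimize import linprog

# Minimize -x - 2y (i.e. maximize x + 2y) subject to
#   x + y <= 4,  x <= 3,  x >= 0,  y >= 0.
res = linprog(c=[-1, -2],
              A_ub=[[1, 1], [1, 0]],
              b_ub=[4, 3],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # optimum at (0, 4) with objective -8
```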
The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems.
Evaluating integrals
Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration), or, in modestly large dimensions, the method of sparse grids.
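A sketch of the divide-and-conquer idea using the composite midpoint rule, one of the Newton–Cotes formulas (the integrand and interval are illustrative):

```python
import math

def midpoint_rule(f, a, b, n):
    # Split [a, b] into n small subintervals and sum the
    # midpoint-rectangle areas over each of them.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# The integral of sin over [0, pi] is exactly 2.
print(midpoint_rule(math.sin, 0.0, math.pi, 1000))  # very close to 2
```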
Differential equations
Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations.
Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation.
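As a sketch of discretization for an ordinary differential equation, the forward Euler method replaces the derivative by a difference quotient on a time grid (the test problem y′ = −y with y(0) = 1 is illustrative; its exact solution is e^(−t)):

```python
import math

def euler(f, y0, t0, t1, n):
    # March the solution along n equal time steps of size h,
    # using y_{k+1} = y_k + h * f(t_k, y_k).
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 100)
print(approx, math.exp(-1.0))  # the discretization error shrinks as n grows
```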
Software
Since the late twentieth century, most algorithms have been implemented in a variety of programming languages. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free-software alternative is the GNU Scientific Library.
Over the years the Royal Statistical Society published numerous algorithms in its Applied Statistics (code for these "AS" functions is here);
ACM similarly, in its Transactions on Mathematical Software ("TOMS" code is here).
The Naval Surface Warfare Center several times published its Library of Mathematics Subroutines (code here).
There are several popular numerical computing applications such as MATLAB, TK Solver, S-PLUS, and IDL as well as free and open-source alternatives such as FreeMat, Scilab, GNU Octave (similar to Matlab), and IT++ (a C++ library). There are also programming languages such as R (similar to S-PLUS), Julia, and Python with libraries such as NumPy, SciPy and SymPy. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude.
Many computer algebra systems such as Mathematica also benefit from the availability of arbitrary-precision arithmetic which can provide more accurate results.
Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis.
Excel, for example, has hundreds of available functions, including matrix functions, which may be used in conjunction with its built-in "solver".
See also
:Category:Numerical analysts
Analysis of algorithms
Approximation theory
Computational science
Computational physics
Gordon Bell Prize
Interval arithmetic
List of numerical analysis topics
Local linearization method
Numerical differentiation
Numerical Recipes
Probabilistic numerics
Symbolic-numeric computation
Validated numerics
References
David Kincaid and Ward Cheney: Numerical Analysis: Mathematics of Scientific Computing, 3rd Ed., AMS, ISBN 978-0-8218-4788-6 (2002).
External links
Journals
Numerische Mathematik, volumes 1–..., Springer, 1959–
volumes 1–66, 1959–1994 (searchable; pages are images).
SIAM Journal on Numerical Analysis (SINUM), volumes 1–..., SIAM, 1964–
Online texts
Numerical Recipes, William H. Press (free, downloadable previous editions)
First Steps in Numerical Analysis (archived), R.J.Hosking, S.Joe, D.C.Joyce, and J.C.Turner
CSEP (Computational Science Education Project), U.S. Department of Energy (archived 2017-08-01)
Numerical Methods, ch 3. in the Digital Library of Mathematical Functions
Numerical Interpolation, Differentiation and Integration, ch 25. in the Handbook of Mathematical Functions (Abramowitz and Stegun)
Online course material
Numerical Methods, Stuart Dalziel, University of Cambridge
Lectures on Numerical Analysis, Dennis Deturck and Herbert S. Wilf, University of Pennsylvania
Numerical methods, John D. Fenton, University of Karlsruhe
Numerical Methods for Physicists, Anthony O’Hare, Oxford University
Lectures in Numerical Analysis (archived), R. Radok, Mahidol University
Introduction to Numerical Analysis for Engineering, Henrik Schmidt, Massachusetts Institute of Technology
Numerical Analysis for Engineering, D. W. Harder, University of Waterloo
Introduction to Numerical Analysis, Doron Levy, University of Maryland
Numerical Analysis - Numerical Methods (archived), John H. Mathews, California State University Fullerton
Mathematical physics
Computational science