id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
23,842,907 | https://en.wikipedia.org/wiki/Gymnopilus%20fibrillosipes | Gymnopilus fibrillosipes is a species of mushroom in the family Hymenogastraceae.
See also
List of Gymnopilus species
External links
Gymnopilus fibrillosipes at Index Fungorum
fibrillosipes
Taxa named by William Alphonso Murrill
Fungus species | Gymnopilus fibrillosipes | Biology | 68 |
35,035,936 | https://en.wikipedia.org/wiki/C13H18ClNO2 | The molecular formula C13H18ClNO2 (molar mass: 255.74 g/mol, exact mass: 255.1026 u) may refer to:
Alaproclate
(2R,3R)-Hydroxybupropion
Cloforex (Oberex)
Hydroxybupropion
Radafaxine
Molecular formulas | C13H18ClNO2 | Physics,Chemistry | 78 |
2,506,410 | https://en.wikipedia.org/wiki/NACA%20duct | A NACA duct, also sometimes called a NACA scoop or NACA inlet, is a common form of low-drag air inlet design, originally developed by the U.S. National Advisory Committee for Aeronautics (NACA), the precursor to NASA, in 1945.
Design
Prior submerged inlet experiments showed poor pressure recovery due to the slow-moving boundary layer entering the inlet. The NACA design is believed to work because the combination of the gentle ramp angle and the curvature profile of the walls creates counter-rotating vortices which deflect the boundary layer away from the inlet and draw in the faster-moving air, while avoiding the form drag and flow separation that can occur with protruding scoop designs.
Aircraft applications
When properly implemented, a NACA duct allows air to flow into an internal duct, often for cooling purposes, with a minimal disturbance to the flow. The design was originally called a submerged inlet, since it consists of a shallow ramp with curved walls recessed into the exposed surface of a streamlined body, such as an aircraft.
This type of flush inlet generally cannot achieve the greater ram pressures and flow volumes of an external design, and so is rarely used for the jet engine intake application for which it was originally designed; rare examples include the North American YF-93 and the Short SB.4 Sherpa. It is commonly used for piston engine and ventilation intakes.
Automobile applications
It is especially favored in racing car design. Sports cars featuring prominent NACA ducts include the Ferrari F40, the Lamborghini Countach, the 1996–2002 Dodge Viper, the 1971–1973 Ford Mustang, the 1973 Pontiac GTO, the 1979 Porsche 924 Turbo, the Maserati Biturbo, the Nissan S130, and the Porsche 911 GT2. It is also prevalent in some motorcycle designs, such as the 1994–1997 Honda VFR750F or the 1994–1998 Ducati 916.
See also
NACA airfoil
NACA cowling
References
Further reading
Aircraft aerodynamics
Engine technology
Duct | NACA duct | Technology | 408 |
52,188,012 | https://en.wikipedia.org/wiki/Fly%20algorithm | The Fly Algorithm is a computational method within the field of evolutionary algorithms, designed for direct exploration of 3D spaces in applications such as computer stereo vision, robotics, and medical imaging. Unlike traditional image-based stereovision, which relies on matching features to construct 3D information, the Fly Algorithm operates by generating a 3D representation directly from random points, termed "flies." Each fly is a coordinate in 3D space, evaluated for its accuracy by comparing its projections in a scene. By iteratively refining the positions of flies based on fitness criteria, the algorithm can construct an optimized spatial representation. The Fly Algorithm has expanded into various fields, including applications in digital art, where it is used to generate complex visual patterns.
History
The Fly Algorithm is a type of cooperative coevolution based on the Parisian approach. It was first developed in 1999 in the scope of applying evolutionary algorithms to computer stereo vision. Unlike the classical image-based approach to stereovision, which extracts image primitives and then matches them in order to obtain 3-D information, the Fly Algorithm is based on the direct exploration of the 3-D space of the scene. A fly is defined as a 3-D point described by its coordinates (x, y, z). Once a random population of flies has been created in a search space corresponding to the field of view of the cameras, its evolution (based on the Evolutionary Strategy paradigm) uses a fitness function that evaluates how likely it is that the fly lies on the visible surface of an object, based on the consistency of its image projections. To this end, the fitness function uses the grey levels, colours and/or textures of the calculated fly's projections.
The first application field of the Fly Algorithm was stereovision. While classical 'image priority' approaches use matching features from the stereo images in order to build a 3-D model, the Fly Algorithm directly explores the 3-D space and uses image data to evaluate the validity of 3-D hypotheses. A variant called the "Dynamic Flies" defines the fly as a 6-tuple (x, y, z, x', y', z') that includes the fly's velocity. The velocity components are not explicitly taken into account in the fitness calculation, but are used when updating the flies' positions and are subject to similar genetic operators (mutation, crossover).
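To make the idea concrete, the sketch below scores one fly by projecting it into a calibrated stereo pair and comparing the grey-level neighbourhoods around its two projections. The camera matrices, patch size and squared-difference score are assumptions of this illustration, not the published fitness function.

```python
import numpy as np

def fly_fitness_stereo(fly_xyz, img_left, img_right, P_left, P_right, k=3):
    """Consistency of a fly's two image projections (lower = more consistent).

    P_left / P_right are assumed to be 3x4 pinhole camera projection matrices;
    img_left / img_right are grey-level images. No image-boundary handling.
    """
    X = np.append(np.asarray(fly_xyz, dtype=float), 1.0)  # homogeneous 3-D point
    patches = []
    for P, img in ((P_left, img_left), (P_right, img_right)):
        u, v, w = P @ X                                    # project the fly
        col, row = int(round(u / w)), int(round(v / w))    # pixel coordinates
        patches.append(img[row - k:row + k + 1, col - k:col + k + 1].astype(float))
    # A fly lying on a visible surface projects onto similar neighbourhoods.
    return np.mean((patches[0] - patches[1]) ** 2)
```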
The application of Flies to obstacle avoidance in vehicles exploits the fact that the population of flies is a time-compliant, quasi-continuously evolving representation of the scene to directly generate vehicle control signals from the flies. The use of the Fly Algorithm is not strictly restricted to stereo images, as other sensors may be added (e.g. acoustic proximity sensors) as additional terms to the fitness function being optimised. Odometry information can also be used to speed up the updating of the flies' positions, and conversely the flies' positions can be used to provide localisation and mapping information.
Another application field of the Fly Algorithm is reconstruction for emission tomography in nuclear medicine. The Fly Algorithm has been successfully applied in single-photon emission computed tomography and positron emission tomography. Here, each fly is considered a photon emitter and its fitness is based on the conformity of the simulated illumination of the sensors with the actual pattern observed on the sensors. Within this application, the fitness function has been re-defined to use the new concept of 'marginal evaluation': the fitness of one individual is calculated as its (positive or negative) contribution to the quality of the global population. It is based on the leave-one-out cross-validation principle. A global fitness function evaluates the quality of the population as a whole; only then is the fitness of an individual (a fly) calculated, as the difference between the global fitness values of the population with and without the particular fly whose individual fitness has to be evaluated. The fitness of each fly can also be considered as a 'level of confidence': it is used during the voxelisation process to tweak the fly's individual footprint using implicit modelling (such as metaballs), which produces smooth results that are more accurate.
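A minimal sketch of this leave-one-out calculation, assuming each fly's simulated sensor illumination is pre-computed in a `contributions` array; the names and the mean-squared-error metric are illustrative:

```python
import numpy as np

def marginal_fitness(i, contributions, p_reference):
    """Leave-one-out fitness of fly i: the population's performance
    without the fly minus its performance with the fly."""
    def global_fitness(p_estimated):               # error metric: lower is better
        return np.mean((p_reference - p_estimated) ** 2)

    p_all = contributions.sum(axis=0)              # whole population's projections
    g_with = global_fitness(p_all)
    g_without = global_fitness(p_all - contributions[i])
    return g_without - g_with                      # > 0: the fly helps (keep it)
```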
More recently, it has been used in digital art to generate mosaic-like images or spray paint. Examples of images can be found on YouTube.
Parisian evolution
Here, the population of individuals is considered as a society where the individuals collaborate toward a common goal.
This is implemented using an evolutionary algorithm that includes all the common genetic operators (e.g. mutation, crossover, selection).
The main difference is in the fitness function.
Here two levels of fitness function are used:
A local fitness function to assess the performance of a given individual (usually used during the selection process).
A global fitness function to assess the performance of the whole population. Maximising (or minimising depending on the problem considered) this global fitness is the goal of the population.
In addition, a diversity mechanism is required to avoid individuals gathering in only a few areas of the search space.
Another difference is in the extraction of the problem solution once the evolutionary loop terminates. In classical evolutionary approaches, the best individual corresponds to the solution and the rest of the population is discarded.
Here, all the individuals (or individuals of a sub-group of the population) are collated to build the problem solution.
The way the fitness functions are constructed and the way the solution extraction is made are of course problem-dependent.
Examples of Parisian Evolution applications include:
The Fly algorithm.
Text-mining.
Hand gesture recognition.
Modelling complex interactions in industrial agrifood processes.
Positron Emission Tomography reconstruction.
Disambiguation
Parisian approach vs cooperative coevolution
Cooperative coevolution is a broad class of evolutionary algorithms where a complex problem is solved by decomposing it into subcomponents that are solved independently.
The Parisian approach shares many similarities with the cooperative coevolutionary algorithm, but it makes use of a single population, whereas multiple species may be used in a cooperative coevolutionary algorithm.
Similar internal evolutionary engines are considered in classical evolutionary algorithms, cooperative coevolutionary algorithms and Parisian evolution.
The difference between cooperative coevolutionary algorithm and Parisian evolution resides in the population's semantics.
A cooperative coevolutionary algorithm divides a big problem into sub-problems (groups of individuals) and solves each of them separately, combining the partial solutions to address the big problem. There is no interaction/breeding between individuals of different sub-populations, only between individuals of the same sub-population.
However, Parisian evolutionary algorithms solve a whole problem as a big component.
All population's individuals cooperate together to drive the whole population toward attractive areas of the search space.
Fly Algorithm vs particle swarm optimisation
Cooperative coevolution and particle swarm optimisation (PSO) share many similarities. PSO is inspired by the social behaviour of bird flocking or fish schooling.
It was initially introduced as a tool for realistic animation in computer graphics.
It uses complex individuals that interact with each other in order to build visually realistic collective behaviours through adjusting the individuals' behavioural rules (which may use random generators).
In mathematical optimisation, every particle of the swarm somehow follows its own random path biased toward the best particle of the swarm.
In the Fly Algorithm, the flies aim at building spatial representations of a scene from actual sensor data; flies do not communicate or explicitly cooperate, and do not use any behavioural model.
Both algorithms are search methods that start with a set of random solutions, which are iteratively corrected toward a global optimum.
However, the solution of the optimisation problem in the Fly Algorithm is the population (or a subset of the population): the flies implicitly collaborate to build the solution. In PSO the solution is a single particle, the one with the best fitness. Another main difference between the Fly Algorithm and PSO is that the Fly Algorithm is not based on any behavioural model; it only builds a geometrical representation.
Applications of the Fly algorithm
Computer stereo vision
Obstacle avoidance
Simultaneous localization and mapping (SLAM)
Single-photon emission computed tomography (SPECT) reconstruction
Positron emission tomography (PET) reconstruction
Digital art
Example: Tomography reconstruction
Tomography reconstruction is an inverse problem that is often ill-posed due to missing data and/or noise. The answer to the inverse problem is not unique, and in the case of an extreme noise level it may not even exist. The input data of a reconstruction algorithm may be given as the Radon transform or sinogram $Y$ of the data $f$ to reconstruct; $f$ is unknown and $Y$ is known.
The data acquisition in tomography can be modelled as:

$$Y = P[f] + \epsilon$$

where $P$ is the system matrix or projection operator and $\epsilon$ corresponds to some Poisson noise.
In this case the reconstruction corresponds to the inversion of the Radon transform:

$$f = P^{-1}[Y]$$

Note that $P^{-1}$ can account for noise, acquisition geometry, etc.
The Fly Algorithm is an example of iterative reconstruction. Iterative methods in tomographic reconstruction are relatively easy to model:

$$\hat{f} = \operatorname*{arg\,min}_{f} \left\| Y - P[f] \right\|_2^2$$

where $\hat{f}$ is an estimate of $f$ that minimises an error metric (here the $\ell_2$-norm, but other error metrics could be used) between $Y$ and $P[\hat{f}]$. Note that a regularisation term can be introduced to prevent overfitting and to smooth noise whilst preserving edges.
Iterative methods can be implemented as follows:
(i) The reconstruction starts using an initial estimate of the image (generally a constant image),
(ii) Projection data is computed from this image,
(iii) The estimated projections are compared with the measured projections,
(iv) Corrections are made to the estimated image, and
(v) The algorithm iterates until convergence of the estimated and measured projection sets.
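As a sketch, the five steps map onto a generic loop such as the one below, where `project` stands for the forward model P and `correct` for whichever update rule the chosen method uses; both callables and the tolerance are assumptions of this illustration.

```python
import numpy as np

def iterative_reconstruction(p_measured, project, correct, f0, tol=1e-6, max_iter=100):
    f = f0.copy()                            # (i)   initial estimate of the image
    for _ in range(max_iter):
        p_estimated = project(f)             # (ii)  compute projection data
        residual = p_measured - p_estimated  # (iii) compare with measured projections
        f = correct(f, residual)             # (iv)  correct the estimated image
        if np.mean(residual ** 2) < tol:     # (v)   iterate until convergence
            break
    return f
```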
The pseudocode below is a step-by-step description of the Fly Algorithm for tomographic reconstruction. The algorithm follows the steady-state paradigm. For illustrative purposes, advanced genetic operators, such as mitosis, dual mutation, etc. are ignored. A JavaScript implementation can be found on Fly4PET.
algorithm fly-algorithm is
input: number of flies (N),
input projection data (preference)
output: the fly population (F),
the projections estimated from F (pestimated)
the 3-D volume corresponding to the voxelisation of F (VF)
postcondition: the difference between pestimated and preference is minimal.
START
1. // Initialisation
2. // Set the position of the N flies, i.e. create initial guess
3. for each fly i in fly population F do
4. F(i)x ← random(0, 1)
5. F(i)y ← random(0, 1)
6. F(i)z ← random(0, 1)
7. Add F(i)'s projection to pestimated
8.
9. // Compute the population's performance (i.e. the global fitness)
10. Gfitness(F) ← Errormetrics(preference, pestimated)
11.
12. fkill ← Select a random fly of F
13.
14. Remove fkill's contribution from pestimated
15.
16. // Compute the population's performance without fkill
17. Gfitness(F-{fkill}) ← Errormetrics(preference, pestimated)
18.
19. // Compare the performances, i.e. compute the fly's local fitness
20. Lfitness(fkill) ← Gfitness(F-{fkill}) - Gfitness(F)
21.
22. If the local fitness is greater than 0, // Thresholded-selection of a bad fly that can be killed
23. then go to Step 26. // fkill is a good fly (the population's performance is better when fkill is included): we should not kill it
24. else go to Step 28. // fkill is a bad fly (the population's performance is worse when fkill is included): we can get rid of it
25.
26. Restore the fly's contribution, then go to Step 12.
27.
28. Select a genetic operator
29.
30. If the genetic operator is mutation,
31. then go to Step 34.
32. else go to Step 50.
33.
34. freproduce ← Select a random fly of F
35.
36. Remove freproduce's contribution from pestimated
37.
38. // Compute the population's performance without freproduce
39. Gfitness(F-{freproduce}) ← Errormetrics(preference, pestimated)
40.
41. // Compare the performances, i.e. compute the fly's local fitness
42. Lfitness(freproduce) ← Gfitness(F-{freproduce}) - Gfitness(F)
43.
44. Restore the fly's contribution
45.
46. If the local fitness is lower than or equal to 0, // Thresholded-selection of a good fly that can reproduce
47. then go to Step 34. // freproduce is a bad fly: we should not allow it to reproduce
48. else go to Step 53. // freproduce is a good fly: we can allow it to reproduce
49.
50. // New blood / Immigration
51. Replace fkill by a new fly with a random position, go to Step 57.
52.
53. // Mutation
54. Copy freproduce into fkill
55. Slightly and randomly alter fkill's position
56.
57. Add the new fly's contribution to the population
58.
59. If the stopping criterion is met,
60. then go to Step 63.
61. else go to Step 10.
62.
63. // Extract solution
64. VF ← voxelisation of F
65.
66. return VF
END
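The pseudocode translates almost line for line into the Python sketch below. The `project` callable (returning one fly's contribution to the estimated projections), the mean-squared-error metric and all parameter values are assumptions of this sketch rather than part of the published algorithm.

```python
import numpy as np

def fly_algorithm(p_reference, project, n_flies=5000, n_iterations=100000,
                  p_mutation=0.5, sigma=0.01, seed=0):
    rng = np.random.default_rng(seed)
    error = lambda a, b: np.mean((a - b) ** 2)        # global fitness (error metric)

    flies = rng.random((n_flies, 3))                  # Steps 3-6: random (x, y, z)
    contrib = np.stack([project(f) for f in flies])   # Step 7: per-fly projections
    p_estimated = contrib.sum(axis=0)

    for _ in range(n_iterations):
        g_full = error(p_reference, p_estimated)      # Step 10: global fitness

        # Steps 12-26: thresholded selection of a bad fly (retry loops uncapped here).
        while True:
            kill = rng.integers(n_flies)
            without = p_estimated - contrib[kill]
            if error(p_reference, without) - g_full <= 0:   # local fitness <= 0
                break
        p_estimated = without                         # Step 14: remove its contribution

        if rng.random() < p_mutation:                 # Steps 28-32: pick an operator
            # Steps 34-48: thresholded selection of a good fly to reproduce.
            while True:
                parent = rng.integers(n_flies)
                if parent == kill:                    # fkill is logically removed
                    continue
                g_ref = error(p_reference, p_estimated)
                g_wo = error(p_reference, p_estimated - contrib[parent])
                if g_wo - g_ref > 0:                  # removing it hurts: good fly
                    break
            # Steps 53-55: mutation, a perturbed copy of the parent.
            flies[kill] = flies[parent] + rng.normal(0.0, sigma, 3)
        else:
            flies[kill] = rng.random(3)               # Steps 50-51: new blood

        contrib[kill] = project(flies[kill])          # Step 57: add new contribution
        p_estimated += contrib[kill]

    return flies, p_estimated                         # Steps 63-66 would voxelise F
```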
Example: Digital arts
In this example, an input image is to be approximated by a set of tiles (for example as in an ancient mosaic). A tile has an orientation (angle θ), three colour components (R, G, B), a size (w, h) and a position (x, y, z). If there are N tiles, there are 9N unknown floating point numbers to guess. In other words, for 5,000 tiles there are 45,000 numbers to find. Using a classical evolutionary algorithm where the answer of the optimisation problem is the best individual, the genome of an individual would be made up of 45,000 genes. This approach would be extremely costly in terms of complexity and computing time. The same applies to any classical optimisation algorithm. Using the Fly Algorithm, every individual mimics a tile and can be individually evaluated using its local fitness to assess its contribution to the population's performance (the global fitness). Here an individual has 9 genes instead of 9N, and there are N individuals. The problem can be solved as a reconstruction problem as follows:
$$\hat{F} = \operatorname*{arg\,min}_{F} \sum_{x=1}^{W} \sum_{y=1}^{H} \left( Y(x,y) - P[F](x,y) \right)^2$$

where $Y$ is the input image, $x$ and $y$ are the pixel coordinates along the horizontal and vertical axes respectively, $W$ and $H$ are the image width and height in number of pixels respectively, $F$ is the fly population, and $P$ is a projection operator that creates an image from flies. This projection operator can take many forms. In her work, Z. Ali Aboodd uses OpenGL to generate different effects (e.g. mosaics, or spray paint). To speed up the evaluation of the fitness functions, OpenCL is used too.
The algorithm starts with a population $F$ that is randomly generated (see Line 3 in the algorithm above). $F$ is then assessed using the global fitness to compute $G_{fitness}(F)$ (see Line 10). $G_{fitness}(F)$ is the objective function that has to be minimised.
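A sketch of the representation, with a hypothetical `render` callable standing in for the OpenGL projection operator P:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000                        # number of tiles (flies)
F = rng.random((N, 9))          # 9 genes per individual: x, y, z, angle, R, G, B, w, h

def global_fitness(Y, F, render):
    """l2 error between the input image Y and the image P[F] rendered from F."""
    return np.mean((Y.astype(float) - render(F).astype(float)) ** 2)
```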
See also
Mathematical optimization
Metaheuristic
Search algorithm
Stochastic optimization
Evolutionary computation
Evolutionary algorithm
Genetic algorithm
Mutation (genetic algorithm)
Crossover (genetic algorithm)
Selection (genetic algorithm)
References
Optimization algorithms and methods
Genetic algorithms
Evolutionary algorithms
Heuristics
Nature-inspired metaheuristics
Evolutionary computation | Fly algorithm | Biology | 3,279 |
3,791,086 | https://en.wikipedia.org/wiki/Hidden%20sector | In particle physics, the hidden sector, also known as the dark sector, is a hypothetical collection of yet-unobserved quantum fields and their corresponding hypothetical particles. The interactions between the hidden sector particles and the Standard Model particles are weak, indirect, and typically mediated through gravity or other new particles. Examples of new hypothetical mediating particles in this class of theories include the dark photon, sterile neutrino, and axion.
In many cases, hidden sectors include a new gauge group that is independent of the Standard Model gauge group. Hidden sectors are commonly predicted by models from string theory. They may be relevant as a source of dark matter and supersymmetry breaking, and as a way of solving the muon g−2 anomaly and the beryllium-8 decay anomaly.
See also
Fifth force
Dark energy
Dark matter
Dark radiation
Higgs sector
References
Physics beyond the Standard Model | Hidden sector | Physics | 175 |
12,591,953 | https://en.wikipedia.org/wiki/Crithidia%20fasciculata | Crithidia fasciculata is a species of parasitic excavates. C. fasciculata, like other species of Crithidia have a single host life cycle with insect host, in the case of C. fasciculata this is the mosquito. C. fasciculata have low host species specificity and can infect many species of mosquito.
Life cycle
C. fasciculata is found in two morphologically different life cycle stages – the free swimming choanomastigote form, which has a long external flagellum for motility, and the attached, immotile, amastigote form in the mosquito gut. Amastigotes excreted in the faeces contaminate the mosquito habitat; contamination of flowers during nectar feeding is common.
Transmission of C. fasciculata primarily occurs when amastigotes, washed into standing water, are ingested by mosquito larvae. The amastigotes are typically found in the rectum of a larva. Each molt of the larva results in loss of the infection, but it is generally quickly re-acquired from the environment by ingestion of more amastigotes. When the fourth-instar larva pupates, the amastigote infection is maintained in the gut through metamorphosis, giving rise to an infected adult mosquito.
Role in research
C. fasciculata is an example of a non-human infective trypanosomatid and is related to several human parasites, including Trypanosoma brucei (which causes African trypanosomiasis) and Leishmania spp. (which cause Leishmaniasis). C. fasciculata parasitizes several species of insects and has been widely used to test new therapeutic strategies against parasitic infections. C. fasciculata is often used as a model organism in research into trypanosomatid biology that may then be applied to understanding the biology of the human infective species.
As is typical of the trypanosomatids, but unlike many other protists, C. fasciculata possesses a single mitochondrion. The mitochondrial DNA is found in a single structure, the kinetoplast, at the base of the single flagellum. As is common with parasitic species, C. fasciculata requires a nutrient-rich broth (including heme and folic acid) in which to grow under laboratory conditions.
References
Further reading
Trypanosomatida
Parasitic excavates
Model organisms
Parasites of Diptera
Protists described in 1902
Euglenozoa species | Crithidia fasciculata | Biology | 534 |
41,065,295 | https://en.wikipedia.org/wiki/Satellite%20surface%20salinity | Satellite surface salinity refers to measurements of surface salinity made by remote sensing satellites. The radiative properties of the ocean surface are exploited in order to estimate the salinity of the water's surface layer.
The depth of the water column that a satellite surface salinity measurement is sensitive to depends on the frequency (or wavelength) of the radiance that is being measured. For instance, the optical depth for seawater at the 1.413 GHz microwave frequency, used for the Aquarius mission, is about 1–2 cm.
Background
As with many passive remote sensing satellite products, satellites measure surface salinity by initially taking radiance measurements emitted by the Earth's atmosphere and ocean. If the object emitting the measured radiance is considered to be a black body, then the object's temperature and the measured radiance can be related, at a given frequency, through the Planck function (or Planck's law):
$$B_\nu(T) = \frac{2 h \nu^3}{c^2} \, \frac{1}{e^{h\nu / (k_B T)} - 1}$$

where
$B_\nu$ (the intensity or brightness) is the amount of energy emitted per unit surface per unit time per unit solid angle in the frequency range between $\nu$ and $\nu + d\nu$; $T$ is the temperature of the black body; $h$ is the Planck constant; $\nu$ is the frequency; $c$ is the speed of light; and $k_B$ is the Boltzmann constant.
This equation can be rewritten to express the temperature, $T$, in terms of the measured radiance at a particular frequency. The temperature derived from the Planck function in this way is referred to as the brightness temperature (see that article for the derivation).
For ideal black bodies, the brightness temperature is also the directly measurable temperature. For objects in nature, often called gray bodies, the actual temperature is only a fraction of the brightness temperature. The fraction of brightness temperature to actual temperature is defined as the emissivity. The relationship between brightness temperature and temperature can be written as:

$$T_b = e\,T$$

where $T_b$ is the brightness temperature, $e$ is the emissivity, and $T$ is the temperature of the surface sea water. The emissivity describes the ability of an object to emit energy by radiation. Several factors can affect the emissivity of water, including temperature, emission angle, wavelength, and chemical composition. The emissivity of sea water has been modeled as a function of its temperature, salinity, and radiant energy frequency.
Measurement technique
Studies have shown that measurements of seawater brightness temperature at 1.413 GHz (L-band) are sufficient to make reasonably accurate measurements of seawater surface salinity. The emissivity of seawater can be described in terms of its polarized components of emissivity as:

$$e_V = 1 - \left| \frac{\varepsilon \cos\theta - \sqrt{\varepsilon - \sin^2\theta}}{\varepsilon \cos\theta + \sqrt{\varepsilon - \sin^2\theta}} \right|^2$$

$$e_H = 1 - \left| \frac{\cos\theta - \sqrt{\varepsilon - \sin^2\theta}}{\cos\theta + \sqrt{\varepsilon - \sin^2\theta}} \right|^2$$

The above equations are governed by the Fresnel equations, the instrument viewing angle from nadir $\theta$, and the dielectric coefficient $\varepsilon$. Microwave radiometers can be further equipped to measure the vertical and horizontal components of the surface seawater's brightness temperature, which relate to the corresponding components of the emissivity as:

$$T_{b,V} = e_V\, T, \qquad T_{b,H} = e_H\, T,$$

where $T_b$ refers to the brightness temperature and $T$ is simply the temperature of the surface seawater. Since the viewing angle from nadir is typically set by the remote sensing instrument, measurements of the polarized components of the brightness temperature can be related to the surface seawater's temperature and dielectric coefficient.
Several models have been proposed to estimate the dielectric constant of sea water given its salinity and temperature. The "Klein and Swift" dielectric model function is a common and well-tested model used to compute the dielectric coefficient of seawater at a given salinity, temperature, and frequency. The Klein and Swift model is based on the Debye equation and fitted with laboratory measurements of the dielectric coefficient.
Using this model, if the temperature of the seawater is known from external sources, then measurements of the brightness temperature can be used to compute the salinity of surface seawater directly. Figure 1 shows an example of the brightness temperature curves associated with sea surface salinity, as a function of sea surface temperature.
When looking at the polarized components of the brightness temperature, the spread of the brightness temperature curves will be different depending on the component. The vertical component of the brightness temperature shows a greater spread in constant salinity curves than the horizontal component. This implies a greater sensitivity to salinity in the vertical component of brightness temperature than in the horizontal.
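Given the Klein and Swift model and an externally supplied sea surface temperature, the retrieval reduces to numerically inverting the forward model. A sketch, where `tb_model` is a hypothetical callable standing in for the full chain (dielectric coefficient, Fresnel emissivity, brightness temperature) and the search bracket is illustrative:

```python
from scipy.optimize import brentq

def retrieve_salinity(tb_measured, sst, theta, tb_model, s_min=2.0, s_max=42.0):
    """Numerically invert a forward model Tb(S, T, theta) for salinity S.

    Assumes tb_model(s, sst, theta) is monotonic in s over [s_min, s_max]
    (at L-band, brightness temperature typically decreases with salinity),
    so the bracketed root is unique.
    """
    return brentq(lambda s: tb_model(s, sst, theta) - tb_measured, s_min, s_max)
```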
Sources of measurement error
There are many sources of error associated with measurements of sea surface salinity:
Radiometer
Antenna
System pointing
Roughness (of sea surface)
Solar
Galactic
Rain (total liquid water)
Ionosphere
Atmosphere (other)
Sea surface temperature
Antenna gain near land and ice
Model function
Most of the error sources on the previous list stem either from standard instrument errors (antenna, system pointing, etc.) or from noise from external sources contaminating the measurement signal (solar, galactic, etc.). However, the largest error source is the effect of ocean surface roughness. A rough ocean surface tends to cause an increase in the measured brightness temperature as a result of multiple scattering and shadowing effects. Quantifying the influence of ocean roughness on the measured brightness temperature is crucial to making an accurate measurement. Some instruments use radar scatterometers to measure the surface roughness to account for this source of error.
List of satellite instruments measuring sea surface salinity
Soil Moisture and Ocean Salinity satellite
Aquarius (SAC-D instrument)
References
Satellite surface salinity | Satellite surface salinity | Physics,Environmental_science | 1,099 |
25,756,037 | https://en.wikipedia.org/wiki/Bed%20skirt | A bed skirt, sometimes spelled bedskirt, a bed ruffle, a dust ruffle in North America, a valance, or a valance sheet in the British Isles, is a piece of decorative fabric that is placed between the mattress and the box spring of a bed that extends to the floor around the sides. In addition to its aesthetics, a bed skirt is used to hide the ensemble fabric, wheels and other unsightly objects underneath the bed, or as protection against dust.
Popularized in the early 20th century, though dating back to the late 18th century in their earliest usage, valances were strictly utilitarian up until the 1930s and 1940s, when many women began to lavishly decorate their bedrooms. For about a century, bed skirts have been considered as intrinsic pieces of bedding and as crucial as the bedcover itself. Bed skirts generally measure between and in their drop.
History
Although bed skirts became more generally used from the early 1900s, artworks of bedrooms in the Regency, Georgian and Victorian eras displayed beds with fancy valances, typically in upper class or royal settings.
In the early 1900s, conventional cotton and coil mattresses with coordinated box springs supplanted wool or feather-filled mattresses. Valances were initially used to block drafts which could chill the undersides of beds (from the floor upwards). They were also used to conceal box springs, bed frames and badly shaped bed posts. Furthermore, people who had bed skirts discovered that bed bugs and dust mites were a minor issue in their homes, as the valance kept dust away.
Between the 1920s and 1930s, women adorned beds with more choices than ever, where traditional bedding components began to include valances. In the 1940s, the bedspread and valance was popular. In children's rooms, the valances stored or hid toys and assorted items beneath the bed.
Up until the 1960s, fabrics and materials such as cotton, satin, chiffon, and wool were fashionable bed skirt cloths. Towards the 1970s and 1980s, the term "dust ruffles" was superseded by "bed skirt". From the 1980s and onwards, valances were still pivotal to the bedroom's general décor.
In recent years, tailored or pleated bed skirts have become more efficient. Springmaid, Martha Stewart, Sears, JCPenney and Laura Ashley were some bed skirt-making brands popular with consumers within the last three decades. By the late 2010s, valance styles such as ruching, lace, and ruffle declined in use as minimalists popularized the straight bed skirt. More recently, mattresses have been getting thicker, and bed skirts are therefore becoming more uncommon, as such bulky mattresses tend to touch the floor.
Purpose
Valances can preserve the bed by keeping it from coming into contact with the floor. They can act as a buffer to preserve the unity of linens and bedding, since continual friction between the bed and the ground may lead to tearing or wearing down of material; therefore sheets and bedspreads will reduce quality over time. They are also used to prevent dust and allergens from accumulating under the bed, particularly by those who suffer from respiratory problems.
Aesthetically, the valance's function is to provide a snazzy look to a bed, in addition to reducing exposure to the box spring, assorted items, clutter or any space beneath the bed that can be used for storage. Embellished bed boots may be used to conceal bed posts and improve the décor when the bed skirts do not reach the floor. Valances can also be used to increase the aesthetics of the bedroom that complements the surrounding decoration. Some bed skirts can also be adjusted, depending on the size of the mattresses.
Types
There are three types of bed skirts:
Traditional bed skirts, which are generally straight-pieced, tailored fabric that will fit around the circumference of the mattress and box spring, where they reach the floor.
Box pleat valances, which have pleats at the corners and sides, giving the bed skirt a tailored appearance in a formal bedding setting.
The wraparound bed skirt, which does not require removing the mattress for installation, as it features an elasticized design that wraps around the box spring.
Generally, there exists a base valance, which is placed below the mattress and covers the bottom of the bed, and a sheet valance, which is fitted over the mattress, where it acts as both a fitted sheet and a bed skirt.
Gallery
References
Bedding
Linens
Beds
Insulators
18th-century introductions
Ornaments
Interior design
Textile patterns | Bed skirt | Biology | 934 |
69,881,829 | https://en.wikipedia.org/wiki/John%20Hinchley | John William Hinchley (1871-1931) was a chemical engineer who was the first Secretary of the Institution of Chemical Engineers.
Early life and education
Hinchley was born 21 January 1871 in Grantham, and studied at Lincoln Grammar School. From 1887 to 1890 he served an engineering apprenticeship at Ruston, Proctor and Company while attending science classes in the evening, being a prizewinner in chemistry, followed by a year as a science teacher. A national scholarship and the support of a friend enabled him to go to Imperial College, London where he graduated in 1895 with first class honours. He successfully sat the exam for a Whitworth Scholarship.
Career
After Imperial College, he went to Dublin to assist Professor John Joly with the development of colour photography. Returning to London, he became assistant to a designer of acid plants and acetone production, work which stopped when his employer was killed in a road accident, so he became a chemical engineering consultant. In 1903 he went to Siam to be the technical head of the new Royal Mint of Bangkok, successfully developing a process for melting 2.5 tons of silver a day and producing coinage to British Royal Mint standards. Back in London he was again a consultant, designing and erecting a variety of chemical plants.
In 1909 he was invited to give a series of 25 lectures on chemical engineering at Battersea Technical College, the first regular curriculum in the subject in the UK. These were popular, and in 1911 he was appointed lecturer in chemical engineering for two days a week at Imperial College, becoming assistant professor in 1917, all the while continuing with his professional work but passing on the course at Battersea. The same year he was promoted to the class of Fellows of the Institute of Chemistry. In 1926 he was made full Professor, and the same year he wrote the article on chemical engineering in the Encyclopædia Britannica.
Institution of Chemical Engineers
George E. Davis proposed the formation of a Society of Chemical Engineers, but instead the Society of Chemical Industry (SCI) was formed. In 1918 Hinchley, who was a Council Member of the SCI, petitioned it to form a Chemical Engineers Group, which was done, with him as chairman and 510 members. In 1920 this group voted to form a separate Institution of Chemical Engineers, which was achieved in 1922 with Hinchley as the Secretary, a role he held until his death.
According to the editor of Chemical Age just after his death, "The establishment, a few years later, of the Institution of Chemical Engineers was due to him perhaps more than any single person." The journal Nature described him as instrumental in its formation.
Personal life
It was while at Imperial College that he was introduced to a student at the Royal College of Art, Edith Mary Mason. She was later a member of the Royal Society of Miniature Painters, Sculptors and Gravers. They were married on 4 August 1903. She designed the Seal for the Institution of Chemical Engineers, which was executed by medallist Cecil Thomas, a fellow member of the same Royal Society.
While in Siam, he became a freemason and was involved in setting up the Imperial College Masonic lodge.
He died 13 August 1931 after a long illness. He was cremated at Golders Green Crematorium and the ashes scattered in the Garden of Rest, where there is now a memorial.
Legacy
The Institution of Chemical Engineers instituted an annual Hinchley Memorial Lecture in 1932 and a Hinchley Medal in 1943 for the most meritorious student of chemical engineering at Imperial College. The Medal continues, but is now directly awarded by the college.
References
Bibliography
British chemical engineers
History of the chemical industry
People from Grantham
Institution of Chemical Engineers
Academics of Imperial College London
Alumni of Imperial College London
People educated at Lincoln Grammar School
English Freemasons
1871 births
1931 deaths | John Hinchley | Chemistry,Engineering | 751 |
1,334,259 | https://en.wikipedia.org/wiki/Ernest%20Hart%20%28medical%20journalist%29 | Ernest Abraham Hart (26 June 18357 January 1898) was an English medical journalist. He was the editor of The British Medical Journal.
Biography
Hart was born in London, the son of a Jewish dentist. He was educated at the City of London school, and became a student at St George's hospital. In 1856, he became a member of the Royal College of Surgeons, making a specialty of diseases of the eye. He was appointed ophthalmic surgeon at St Mary's hospital at the age of 28, and occupied various other posts, introducing into ophthalmic practice some modifications since widely adopted. His name, too, is associated with a method of treating popliteal aneurism, which he was the first to use in Great Britain.
His real life-work, however, was as a medical journalist, beginning with the Lancet in 1857. He was appointed editor of the British Medical Journal on 11 August 1866. During this time, the British Medical Journal's harsh criticism of Isaac Baker Brown led to the complete destruction of Brown's career. As editor, Hart can be held accountable in part for this (as can the editor before him, William Orlando Markham). His campaigning editorials could be vicious. They were usually sententious and often self-congratulatory.
On 22 November 1866 Hart was appointed as a poor-law inspector, as his colleague William Orlando Markham rejected the position. He took a leading part in the exposures which led to the inquiry into the state of London workhouse infirmaries, and to the reform of the treatment of the sick poor throughout England; the Infant Life Protection Act of 1872, aimed at the evils of baby-farming, was also largely due to his efforts. The record of his public work covers nearly the whole field of sanitary legislation during the last thirty years of his life.
He had a hand in the amendments of the Public Health and of the Medical Acts, always promoting the medical profession above others in the public health field; in the measures relating to notification of infectious disease, to vaccination, to the registration of plumbers; in the improvement of factory legislation; in the remedy of legitimate grievances of Army and Navy medical officers; in the removal of abuses and deficiencies in crowded barrack schools; in denouncing the sanitary shortcomings of the Indian government, particularly in regard to the prevention of cholera.
His work on behalf of the British Medical Association is shown by the increase from 2,000 to 19,000 in the number of members, and the growth of the British Medical Journal from 20 to 64 pages, during his editorship. From 1872 to 1897 he was chairman of the Association's Parliamentary Bill Committee.
Beginning his collections by contacting Tadamasa Hayashi in 1882, Hart became a prominent collector of Japanese art and later joined the Japan Society, frequently giving lectures on subjects such as lacquerware (see Notes on the History of Lacquer: A Paper Read Before the Japan Society of London, Ernest Hart, Japan Society, 1893). In 1891 he travelled to Japan with his second wife, Alice Hart.
Hart was also editor of The Sanitary Record for a period, and chairman of the National Health Society.
Vaccination
In 1880, Hart authored the book The Truth About Vaccination. It refuted the arguments made by anti-vaccinators: each anti-vaccination allegation was disproved with medical and statistical evidence. Drawing on a vast body of evidence, Hart demonstrated the advantage a vaccinated person has in resisting an attack of smallpox compared to the unvaccinated.
Selected publications
An account of the condition of the infirmaries of London workhouses, 1866
The Truth About Vaccination: An Examination and Refutation of the Assertions of the Anti-Vaccinators (1880)
Hypnotism, Mesmerism and the New Witchcraft (1896)
Family
Hart married his first wife, Rosetta Levy, in 1855. In 1872 he married his second wife, Alice Marion Rowland, the sister of social reformer Henrietta Barnett. Rowland had herself studied medicine in London and Paris, and was no less interested than her husband in philanthropic reform. She was most active in her encouragement of Irish cottage industries, and was the founder of the Donegal Industrial Fund.
References
Further reading
External links
1835 births
1898 deaths
People from the City of London
British medical writers
British ophthalmologists
19th-century English medical doctors
English male journalists
English Jews
Medical journalists
19th-century English journalists
Medical journal editors
19th-century English male writers
Vaccination advocates | Ernest Hart (medical journalist) | Biology | 936 |
46,880,460 | https://en.wikipedia.org/wiki/ATAC-seq | ATAC-seq (Assay for Transposase-Accessible Chromatin using sequencing) is a technique used in molecular biology to assess genome-wide chromatin accessibility. In 2013, the technique was first described as an alternative advanced method for MNase-seq, FAIRE-Seq and DNase-Seq. ATAC-seq is a faster analysis of the epigenome than DNase-seq or MNase-seq.
Description
ATAC-seq identifies accessible DNA regions by probing open chromatin with hyperactive mutant Tn5 Transposase that inserts sequencing adapters into open regions of the genome. While naturally occurring transposases have a low level of activity, ATAC-seq employs the mutated hyperactive transposase. In a process called "tagmentation", Tn5 transposase cleaves and tags double-stranded DNA with sequencing adaptors. The tagged DNA fragments are then purified, PCR-amplified, and sequenced using next-generation sequencing. Sequencing reads can then be used to infer regions of increased accessibility as well as to map regions of transcription factor binding sites and nucleosome positions. The number of reads for a region correlates with how open that chromatin is, at single-nucleotide resolution. ATAC-seq requires no sonication or phenol-chloroform extraction like FAIRE-seq; no antibodies like ChIP-seq; and no sensitive enzymatic digestion like MNase-seq or DNase-seq. ATAC-seq preparation can be completed in under three hours.
Applications
ATAC-Seq analysis is used to investigate a number of chromatin-accessibility signatures. The most common use is nucleosome mapping experiments, but it can be applied to mapping transcription factor binding sites, adapted to map DNA methylation sites, or combined with sequencing techniques.
The utility of high-resolution enhancer mapping ranges from studying the evolutionary divergence of enhancer usage (e.g. between chimps and humans) during development and uncovering a lineage-specific enhancer map used during blood cell differentiation.
ATAC-Seq has also been applied to defining the genome-wide chromatin accessibility landscape in human cancers, and revealing an overall decrease in chromatin accessibility in macular degeneration. Computational footprinting methods can be performed on ATAC-seq to find cell specific binding sites and transcription factors with cell specific activity.
Single-cell ATAC-seq
Modifications to the ATAC-seq protocol have been made to accommodate single-cell analysis. Microfluidics can be used to separate single nuclei and perform ATAC-seq reactions individually. With this approach, single cells are captured by either a microfluidic device or a liquid deposition system before tagmentation. An alternative technique that does not require single-cell isolation is combinatorial cellular indexing. This technique uses barcoding to measure chromatin accessibility in thousands of individual cells; it can generate epigenomic profiles from 10,000–100,000 cells per experiment. However, combinatorial cellular indexing requires additional, custom-engineered equipment or a large quantity of custom, modified Tn5. Recently, a pooled barcode method called sci-CAR was developed, allowing joint profiling of chromatin accessibility and gene expression of single cells.
Computational analysis of scATAC-seq is based on the construction of a count matrix with the number of reads per open chromatin region. Open chromatin regions can be defined, for example, by standard peak calling on pseudo-bulk ATAC-seq data. Further steps include data reduction with PCA and clustering of cells. scATAC-seq matrices can be extremely large (hundreds of thousands of regions) and are extremely sparse, i.e. less than 3% of entries are non-zero. Therefore, imputation of the count matrix is another crucial step, performed using various methods such as non-negative matrix factorization. As with bulk ATAC-seq, scATAC-seq allows finding regulators like transcription factors that control the gene expression of cells. This can be achieved by looking at the number of reads around TF motifs, or by footprinting analysis.
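A minimal sketch of that workflow, using latent semantic indexing (TF-IDF followed by truncated SVD, a common stand-in for the PCA step) and k-means; the parameter values are illustrative:

```python
from scipy.sparse import csr_matrix
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def cluster_cells(counts, n_components=30, n_clusters=10, seed=0):
    """Cells x peaks count matrix -> TF-IDF -> LSI embedding -> clusters."""
    X = csr_matrix(counts)                       # extremely sparse (<3% non-zero)
    X = (X > 0).astype(float)                    # binarise accessibility
    X = TfidfTransformer().fit_transform(X)      # down-weight ubiquitous peaks
    Z = TruncatedSVD(n_components=n_components, random_state=seed).fit_transform(X)
    return KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(Z)
```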
References
External links
ATAC-seq probes open-chromatin state (figure)
ATAC-seq: Fast and sensitive epigenomic profiling
HINT-ATAC: Identification of Transcription Factor Binding Sites using ATAC-seq
Molecular biology techniques | ATAC-seq | Chemistry,Biology | 940 |
304,799 | https://en.wikipedia.org/wiki/Equivalent%20potential%20temperature | Equivalent potential temperature, commonly referred to as theta-e , is a quantity that is conserved during changes to an air parcel's pressure (that is, during vertical motions in the atmosphere), even if water vapor condenses during that pressure change. It is therefore more conserved than the ordinary potential temperature, which remains constant only for unsaturated vertical motions (pressure changes).
is the temperature a parcel of air would reach if all the water vapor in the parcel were to condense, releasing its latent heat, and the parcel was brought adiabatically to a standard reference pressure, usually 1000 hPa (1000 mbar) which is roughly equal to atmospheric pressure at sea level.
Use in estimating atmospheric stability
Stability of incompressible fluid
Like a ball balanced on top of a hill, denser fluid lying above less dense fluid would be dynamically unstable: overturning motions (convection) can lower the center of gravity, and thus will occur spontaneously, rapidly producing a stable stratification (see also stratification (water)) which is thus the observed condition almost all the time. The condition for stability of an incompressible fluid is that density decreases monotonically with height.
Stability of compressible air: Potential temperature
If a fluid is compressible like air, the criterion for dynamic stability instead involves potential density, the density of the fluid at a fixed reference pressure. For an ideal gas (see gas laws), the stability criterion for an air column is that potential temperature increases monotonically with height.
To understand this, consider dry convection in the atmosphere, where the vertical variation in pressure is substantial and adiabatic temperature change is important: As a parcel of air moves upward, the ambient pressure drops, causing the parcel to expand. Some of the internal energy of the parcel is used up in doing the work required to expand against the atmospheric pressure, so the temperature of the parcel drops, even though it has not lost any heat. Conversely, a sinking parcel is compressed and becomes warmer even though no heat is added.
Air at the top of a mountain is usually colder than the air in the valley below, but the arrangement is not unstable: if a parcel of air from the valley were somehow lifted up to the top of the mountain, when it arrived it would be even colder than the air already there, due to adiabatic cooling; it would be heavier than the ambient air, and would sink back toward its original position. Similarly, if a parcel of cold mountain-top air were to make the trip down to the valley, it would arrive warmer and lighter than the valley air, and would float back up the mountain.
So cool air lying on top of warm air can be stable, as long as the temperature decrease with height is less than the adiabatic lapse rate; the dynamically important quantity is not the temperature, but the potential temperature—the temperature the air would have if it were brought adiabatically to a reference pressure. The air around the mountain is stable because the air at the top, due to its lower pressure, has a higher potential temperature than the warmer air below.
Effects of water condensation: Equivalent potential temperature
A rising parcel of air containing water vapor, if it rises far enough, reaches its lifted condensation level: it becomes saturated with water vapor (see Clausius–Clapeyron relation). If the parcel of air continues to rise, water vapor condenses and releases its latent heat to the surrounding air, partially offsetting the adiabatic cooling. A saturated parcel of air therefore cools less than a dry one would as it rises (its temperature changes with height at the moist adiabatic lapse rate, which is smaller than the dry adiabatic lapse rate). Such a saturated parcel of air can achieve buoyancy, and thus accelerate further upward, a runaway condition (instability) even if potential temperature increases with height. The sufficient condition for an air column to be absolutely stable, even with respect to saturated convective motions, is that the equivalent potential temperature must increase monotonically with height.
Formula
The definition of the equivalent potential temperature is:

$$\theta_e = T \left( \frac{p_0}{p} \right)^{\frac{R_d}{c_{pd} + r_t c_w}} H^{-\frac{r_v R_v}{c_{pd} + r_t c_w}} \exp\!\left[ \frac{L_v r_v}{(c_{pd} + r_t c_w)\, T} \right]$$

Where:
$T$ is the temperature [K] of air at pressure $p$,
$p_0$ is a reference pressure that is taken as 1000 hPa,
$p$ is the pressure at the point,
$R_d$ and $R_v$ are the specific gas constants of dry air and of water vapour, respectively,
$c_{pd}$ and $c_w$ are the specific heat capacities of dry air and of liquid water, respectively,
$r_t$ and $r_v$ are the total water and water vapour mixing ratios, respectively,
$H$ is the relative humidity,
$L_v$ is the latent heat of vapourisation of water.
A number of approximate formulations are used for calculating equivalent potential temperature, since it is not easy to compute integrations along the motion of the parcel. Bolton (1980) gives a review of such procedures with estimates of error. His best approximation formula is used when accuracy is needed:

$$\theta_e = \theta_L \exp\!\left[ \left( \frac{3036}{T_L} - 1.78 \right) r \left( 1 + 0.448\, r \right) \right]$$

$$\theta_L = T \left( \frac{p_0}{p - e} \right)^{\kappa} \left( \frac{T}{T_L} \right)^{0.28\, r}$$

$$T_L = \frac{1}{\frac{1}{T_d - 56} + \frac{\ln(T / T_d)}{800}} + 56$$

Where:
$\theta_L$ is the (dry) potential temperature [K] at the lifted condensation level (LCL),
$T_L$ is the (approximated) temperature [K] at the LCL,
$T_d$ is the dew point temperature at pressure $p$,
$e$ is the water vapor pressure (to obtain $p - e$ for dry air),
$\kappa = R_d / c_{pd}$ is the ratio of the specific gas constant to the specific heat of dry air at constant pressure (0.2854),
$r$ is the mixing ratio of water vapor mass per mass [kg/kg] (sometimes the value is given in [g/kg], and that should be divided by 1000).
A slightly more theoretical formula is commonly used in the literature, for example in Holton (1972), when a theoretical explanation is important:

$$\theta_e \cong \theta \exp\!\left( \frac{L_v\, r_s}{c_{pd}\, T_L} \right)$$

Where:
$r_s$ is the saturated mixing ratio of water at temperature $T_L$, the temperature at the saturation level of the air,
$L_v$ is the latent heat of evaporation at temperature $T_L$ (2406 kJ/kg at 40 °C to 2501 kJ/kg at 0 °C), and
$c_{pd}$ is the specific heat of dry air at constant pressure (1005.7 J/(kg·K)).
A further simplified formula is used (for example in Stull 1988, §13.1, p. 546) if it is desirable to avoid computing $T_L$:

$$\theta_e = T_e \left( \frac{p_0}{p} \right)^{R_d / c_{pd}}, \qquad T_e = T + \frac{L_v}{c_{pd}}\, r$$

Where:
$T_e$ is the equivalent temperature,
$R_d$ is the specific gas constant for air (287.04 J/(kg·K)).
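For concreteness, Bolton's best approximation formula above translates into a small Python function. This is a sketch: the saturation vapour pressure helper used to obtain $e$ and $r$ from the dew point is also taken from Bolton (1980), and the example value is this sketch's own output.

```python
import numpy as np

def theta_e_bolton(T, Td, p):
    """Bolton (1980) approximation of theta-e. T, Td in kelvin; p in hPa."""
    Tc = Td - 273.15
    e = 6.112 * np.exp(17.67 * Tc / (Tc + 243.5))        # vapour pressure [hPa]
    r = 0.622 * e / (p - e)                              # mixing ratio [kg/kg]
    TL = 56.0 + 1.0 / (1.0 / (Td - 56.0) + np.log(T / Td) / 800.0)  # LCL temp [K]
    theta_L = T * (1000.0 / (p - e)) ** 0.2854 * (T / TL) ** (0.28 * r)
    return theta_L * np.exp((3036.0 / TL - 1.78) * r * (1.0 + 0.448 * r))

# e.g. theta_e_bolton(298.15, 293.15, 1000.0) -> about 341.6 K with this sketch
```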
Usage
This applies on the synoptic scale for the characterisation of air masses. For instance, in a study of the North American Ice Storm of 1998, professors Gyakum (McGill University, Montreal) and Roebber (University of Wisconsin–Milwaukee) demonstrated that the air masses involved had originated from the high Arctic, at the 300 to 400 hPa level, the previous week, descended toward the surface as they moved to the Tropics, then moved back up along the Mississippi Valley toward the St. Lawrence Valley. The back trajectories were evaluated using the constant equivalent potential temperatures.
In the mesoscale, equivalent potential temperature is also a useful measure of the static stability of the unsaturated atmosphere. Under normal, stably stratified conditions, the potential temperature increases with height,

$$\frac{\partial \theta_e}{\partial z} > 0$$

and vertical motions are suppressed. If the equivalent potential temperature decreases with height,

$$\frac{\partial \theta_e}{\partial z} < 0$$

the atmosphere is unstable to vertical motions, and convection is likely. Situations in which the equivalent potential temperature decreases with height, indicating instability in saturated air, are quite common.
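On a sounding, this criterion is a simple monotonicity check. A minimal sketch, with `z` and `theta_e` assumed to be bottom-up 1-D profile arrays (for instance computed with the Bolton function above):

```python
import numpy as np

def potentially_unstable_layers(z, theta_e):
    """True for each layer where theta-e decreases with height."""
    return np.diff(theta_e) / np.diff(z) < 0
```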
See also
Meteorology
Moist static energy
Potential temperature
Weather forecasting
Bibliography
M. K. Yau and R. R. Rogers, Short Course in Cloud Physics, Third Edition, Butterworth-Heinemann, 1 January 1989, 304 pages.
References
Atmospheric thermodynamics
Equivalent units | Equivalent potential temperature | Mathematics | 1,548 |
69,171,533 | https://en.wikipedia.org/wiki/COVID%20Moonshot | The COVID Moonshot is a collaborative open-science project started in March 2020 with the goal of developing an un-patented oral antiviral drug to treat SARS-CoV-2, the virus causing COVID-19.
COVID Moonshot researchers are targeting the proteins the virus needs to produce functioning new viral proteins. They are particularly interested in proteases such as the 3C-like protease (Mpro), a coronavirus nonstructural protein that mediates the cleavage of viral polyproteins, a step required for replication.
COVID Moonshot may be the first open-science community effort for the development of an antiviral drug. Hundreds of scientists around the world, from academic and industrial organizations, have shared their expertise, resources, data, and results to more rapidly identify, screen, and test candidate compounds for the treatment of COVID-19.
Project history
Development of antiviral drugs is a complicated and time-consuming multistage process. The public sharing of information in the early stages of genome identification and protein structure identification has accelerated the process of searching for COVID-19 treatments and established a basis for the COVID Moonshot initiative.
Genome identification
On January 3, 2020, Chinese virologist Yong-Zhen Zhang of Fudan University and the Shanghai Public Health Clinical Center received a test sample from Wuhan, China, where patients had a pneumonia-like illness. By January 5, Zhang and his team had sequenced a virus from the sample and deposited its genome on GenBank, an international research database maintained by the United States National Center for Biotechnology Information.
By January 11, 2020, Edward C. Holmes of the University of Sydney had Zhang's permission to publicly release the genome.
Protein structures
With that information, structural biologists world-wide began examining its protein structures. Investigators from the Center for Structural Genomics of Infectious Diseases (CSGID) and other groups began working to characterize the 3D structure of the proteins, sharing their results via the Protein Data Bank (PDB).
Scientists were able to identify a key protein in the virus: 3C-like protease (Mpro).
Crucial early X-ray crystallography was done by Zihe Rao and Haitao Yang in Shanghai, China. On January 26, 2020, they submitted a structure of Mpro bound to an inhibitor to the Protein Data Bank. It was released as of February 5, 2020.
Rao began coordinating with David Stuart and Martin Walsh at Diamond Light Source, the United Kingdom's synchrotron facility. The Diamond group was able to develop and release a high-resolution crystal structure of unbound Mpro.
Approaches to accelerating drug development have been suggested, but identification of proteins and drug development commonly take years. It was possible to sequence the virus and characterize key proteins extremely quickly because the new virus was somewhat familiar. It had a 70–80% sequence similarity to the proteins in the SARS-CoV coronavirus that caused the SARS outbreak in 2002. Researchers could therefore build on what was already known about previous coronaviruses.
Possible targets
Identifying and recreating viral proteins in the lab is a first step to developing drugs to attack them and vaccines to protect against them. The COVID Moonshot initiative follows an approach to structure-based drug design in which researchers attempt to find a molecule that will bind tightly to a drug target and prevent it from carrying out its normal activities.
In the case of SARS-CoV-2, the coronavirus enters the body and then replicates its genomic RNA, building new copies that are incorporated into new, rapidly spreading viral particles. Protease enzymes or proteases are often desirable drug targets, because proteases are important in the formation and spreading of viral particles. Inhibition of viral proteases can inhibit the virus's ability to replicate itself and spread.
3C-like protease (Mpro), a coronavirus nonstructural protein, is one of the main proteins involved in the replication and transcription of SARS-CoV-2. By understanding Mpro's structure and the ways in which it functions, scientists can identify possible candidates to preemptively bind to Mpro and block its activity. Mpro is not the only possible target for drug design, but it is a highly interesting one.
Fragment screening
In collaboration with the University of Oxford and the Weizmann Institute of Science in Rehovot, Israel, the facilities at Diamond Light Source were used to develop fragment screens using crystallography and mass spectrometry.
Nir London's laboratory at the Weizmann Institute contributed technology for identifying compounds that bind irreversibly to target proteins.
Frank von Delft and the Nuffield Department of Medicine at the University of Oxford provided technology for rapid crystallographic fragment screening.
Researchers examined thousands of possible fragments from diverse screening libraries and identified at least 71 possible protein–ligand crystal structures, chemical fragments which might have the potential to bind to Mpro.
These results were immediately made available online.
Designing candidates
The open release of the data and its announcement on Twitter on March 7, 2020, marked a critical point in the formation of COVID Moonshot. The scientists shared their information and challenged chemists worldwide to use that information to design potential openly available antiviral drug candidates. They expected a couple of hundred submissions. By May 2020, more than 4,600 design submissions for potential inhibitors had been received. By January 2021, the number of unique compound designs had risen to 14,000. In response, those involved began to shift from a spontaneous virtual collaboration to a larger and more organized network of partners with specialized skills and well-articulated goals.
The design submissions were stored in Collaborative Drug Discovery's CDD Vault, a database used for large-scale management of chemical structures, experimental protocols and experimental results.
Alpha Lee and Matt Robinson brought computational expertise from PostEra to the project. PostEra used techniques from artificial intelligence and machine learning to develop analysis tools for computational drug discovery, chemical synthesis and biochemical assays. When COVID Moonshot's appeal resulted in not hundreds but thousands of responses, they built a platform capable of triaging large numbers of compounds and designing routes for their synthesis.
Supercomputer access was provided through the COVID-19 High Performance Computing (HPC) Consortium, accelerating the speed at which designs could be examined and compared. The distributed supercomputing initiative Folding@home has carried out multiple sprints to model novel protein structures and target desirable structures as a part of COVID Moonshot.
Many of the criteria for selecting drug candidates were determined by the group's goals. An ideal drug candidate would be effective in treating COVID-19. It also would be easily and cheaply made, so that as many countries and companies as possible could produce and distribute it. The ingredients to make it should be easy to obtain, and the processes involved should be as simple as possible. A drug should not require special handling (such as refrigeration), and it should be easy to administer (a pill rather than an injection).
In a matter of months, researchers were able to identify more than 200 promising crystal structure designs and to begin creating and testing them in the lab.
Chris Schofield at the University of Oxford synthesized and tested four of the most promising of the novel designed peptides to demonstrate their ability to block and inhibit Mpro.
Freely available data from COVID Moonshot has also been used to assess the predictive ability of docking scores in suggesting the potency of SARS-CoV-2 Mpro inhibitors.
To go beyond the design phase, possible drug candidates must be created and tested for both effectiveness and safety in animal and human trials. The Wellcome Trust has committed to key initial funding to support this process. Synthesis of candidates is being carried out in parallel, at sites including Ukraine (Enamine), India (Sai Life Sciences) and China (WuXi). Annette von Delft of the University of Oxford and the National Institute for Health Research (NIHR)'s Oxford Biomedical Research Centre (BRC) is leading pre-clinical small molecule research related to COVID Moonshot.
Potential for antiviral treatments
COVID Moonshot anticipates that they will select three pre-clinical candidates by March 2022, to be followed by preclinical safety and toxicology testing and identification of needed chemistry, manufacturing and control (CMC) steps. Based on that data, the most promising candidate will be chosen. Phase-1 clinical trials, the first stage of testing in human subjects, are projected to begin by June 2023.
Unlike a vaccine, which increases immunity and protects against catching an infectious disease, an antiviral drug treats someone who is already sick by attacking the virus and countering its effects, potentially lessening both symptoms and further transmission.
Mpro is present in other coronaviruses that cause disease, so an antiviral drug that targets Mpro may also be effective against coronaviruses such as SARS and MERS and future pandemics.
Mpro does not mutate easily, so it is less likely that variants of the virus will adapt that can avoid the effects of such a drug.
Open science
Among the many participants in the COVID Moonshot project are the
University of Oxford,
University of Cambridge,
Diamond Light Source,
Weizmann Institute of Science in Rehovot, Israel,
Temple University,
Memorial Sloan Kettering Cancer Center,
PostEra,
University of Johannesburg,
and the
Drugs for Neglected Diseases initiative (DNDi) in Switzerland.
Support for the project has come from a variety of philanthropic sources including the Wellcome Trust,
COVID-19 Therapeutics Accelerator (CTA),
Bill & Melinda Gates Foundation,
LifeArc,
and through crowdsourcing.
Because COVID Moonshot is based in open science and shared open data, any drug that the project develops can be manufactured and sold by whoever wishes to produce it, worldwide. Countries that are unable to buy or manufacture expensive licensed drugs would therefore have the opportunity to produce their own supplies, and competition between suppliers is likely to result in greater availability and reduced prices for consumers.
This would circumvent issues around the time needed to vaccinate people worldwide. As of July 2021, it was estimated that at current rates, this was likely to take several years. Inequities in distribution will increase both the spreading of the virus and the risk that new and more dangerous variants will emerge.
Supporters of the COVID Moonshot initiative have argued that open-science drug discovery is an essential model for combating both current and future pandemics, and that the prevention of the spread of pandemic diseases is an essential public service.
References
External links
Antiviral drugs
Collaborative projects
Genome databases
International medical and health organizations
Open data
Open science
Proteins
SARS-CoV-2
Scientific organisations based in England | COVID Moonshot | Chemistry,Biology | 2,212 |
2,066,604 | https://en.wikipedia.org/wiki/Tutu%20%28plant%29 | Tutu is a common name of Māori origin for plants in the genus Coriaria found in New Zealand.
Several New Zealand native species are known by the name, including:
Coriaria angustissima
Coriaria arborea
Coriaria lurida
Coriaria plumosa
Coriaria pteridoides
They are shrubs or trees; some are endemic to New Zealand. Most of the plant parts are poisonous, containing the neurotoxin tutin and its derivative hyenanchin. The widespread species Coriaria arborea is most often linked to cases of poisoning.
Honey containing tutin can be produced by honey bees feeding on honeydew produced by sap-sucking vine hopper insects (genus Scolypopa) feeding on tutu. The last recorded deaths from eating honey containing tutin were in the 1890s, although sporadic outbreaks of toxic honey poisoning continue to occur. Poisoning symptoms include delirium, vomiting, and coma.
Food, medical and musical uses
Tutu had a variety of food, medical and musical uses.
References
External links
Tutu, 1966 Encyclopedia of New Zealand
Coriariaceae
Flora of New Zealand
Plant common names | Tutu (plant) | Biology | 234 |
55,296,834 | https://en.wikipedia.org/wiki/Venus%20In%20situ%20Composition%20Investigations | Venus In situ Composition Investigations (VICI) is a concept lander mission to Venus in order to answer long-standing questions about its origins and evolution, and provide new insights needed to understand terrestrial planet formation, evolution, and habitability.
VICI was one of 12 proposals considered for New Frontiers 4, but it was not one of the two missions selected as finalists in late 2017.
Overview
The mission concept was proposed in 2017 to NASA's New Frontiers program to compete for funding and development, but it was not selected. However, on 20 December 2017, it was awarded technology development funds to prepare it for future mission competitions. The funds are meant to further develop the Venus Element and Mineralogy Camera to operate under the extreme heat and pressure on Venus. The instrument uses lasers on a lander to measure the mineralogy and elemental composition of rocks on the surface of Venus.
If selected and developed at some future opportunity, the VICI mission would send two identical landers to unexplored Tesserae regions thought to be ancient exposed surfaces that had not undergone volcanic resurfacing. The two landers would measure atmospheric composition and structure during their descent at a level of detail that has not been possible on earlier missions. The landers would also analyze surface chemistry, mineralogy, and morphology at their landing site.
Scientific payload
VICI's proposed payload includes a copy of the neutral mass spectrometer and tunable laser spectrometer currently used by the Curiosity rover to provide surface mineralogy and elemental composition. A gamma-ray spectrometer would perform measurements of naturally radioactive elements to a depth of ~10 cm.
See also
Venus In Situ Atmospheric and Geochemical Explorer (VISAGE), a competing mission concept to Venus
Venus In Situ Explorer (VISE), a concept mission to Venus
Venus Origins Explorer (VOX), a competing mission concept to Venus
References
External links
Goddard Venus Lander Prototype at YouTube (42 seconds video)
Missions to Venus
Extraterrestrial atmosphere entry
New Frontiers program proposals
Proposed NASA space probes | Venus In situ Composition Investigations | Astronomy | 410 |
49,105,396 | https://en.wikipedia.org/wiki/Vegetation%20index | A vegetation index (VI) is a spectral imaging transformation of two or more image bands designed to enhance the contribution of vegetation properties and allow reliable spatial and temporal inter-comparisons of terrestrial photosynthetic activity and canopy structural variations.
There are many VIs, with many being functionally equivalent. Many of the indices make use of the inverse relationship between red and near-infrared reflectance associated with healthy green vegetation. Since the 1960s scientists have used satellite remote sensing to monitor fluctuation in vegetation at the Earth's surface. Measurements of vegetation attributes include leaf area index (LAI), percent green cover, chlorophyll content, green biomass and absorbed photosynthetically active radiation (APAR).
VIs have been historically classified based on a range of attributes, including the number of spectral bands (2 or greater than 2); the method of calculations (ratio or orthogonal), depending on the required objective; or by their historical development (classified as first generation VIs or second generation VIs). For the sake of comparison of the effectiveness of different VIs, Lyon, Yuan et al. (1998) classified 7 VIs based on their computation methods (Subtraction, Division or Rational Transform). Due to advances in hyperspectral remote sensing technology, high-resolution reflectance spectrums are now available, which can be used with traditional multispectral VIs. In addition, VIs have been developed to be used specifically with hyperspectral data, such as the use of Narrow Band Vegetation Indices.
Uses
Vegetation indices have been used to:
examine climate trends;
estimate water content of soils remotely;
monitor drought;
schedule crop irrigation and support crop management;
monitor evaporation and plant transpiration;
assess changes in biodiversity;
classify vegetation;
detect and quantify crop diseases.
Types of vegetation index
Multispectral Vegetation Index
Ratio Vegetation Index (RVI): Defined as the ratio between the Red and Near Infrared bands of multispectral images
Normalised Difference Vegetation Index (NDVI): The most commonly used remote sensing index, calculated as the ratio of the difference to the sum of the Near Infrared and Red bands of multispectral images. It normally takes values between −1 and +1. It is mostly used in vegetation dynamics monitoring, including biomass quantification (a computation sketch follows this list).
Kauth-Thomas Tasseled Cap Transformation: A spectral enhancement index that transforms the spectral information of satellite data into spectral features
Infrared Index
Normalized difference water index
Perpendicular Vegetation Index
Greenness Above Bare Soil
Moisture Stress Index: A spectral index that measures the level of moisture stress in leaves
Leaf Water Content Index (LWCI)
MidIR Index
Soil-Adjusted Vegetation Index (SAVI): An adjusted form of NDVI developed to minimize the effects of soil brightness on spectral vegetation indices, particularly in sparsely vegetated areas with exposed soil
Modified SAVI: Mostly applied to areas with low NDVI values.
Atmospherically Resistant Vegetation Index
Soil and Atmospherically Resistant Vegetation Index
Enhanced Vegetation Index (EVI): Very similar to NDVI. The only difference is that it corrects atmospheric and canopy background noise, particularly in regions with high biomass
New Vegetation Index
Aerosol Free Vegetation Index
Triangular Vegetation Index
Reduced Simple Ratio
Visible Atmospherically Resistant Index
Normalised Difference Built-Up Index
Weighted Difference Vegetation Index (WDVI)
Fraction of absorbed photosynthetically active radiation (FAPAR)
Normalised Difference Greenness index (NDGI)
Temperature Vegetation Water Stress Index (TVWSI)
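The ratio-based indices above amount to simple per-pixel arithmetic on band reflectances. The following sketch (not part of the original article) illustrates NDVI, RVI and SAVI with NumPy; the reflectance values and the SAVI soil-adjustment factor L = 0.5 are illustrative assumptions.

```python
# Minimal sketch of three multispectral indices on per-pixel
# reflectance arrays; the band values below are made up for illustration.
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index, in [-1, +1]."""
    return (nir - red) / (nir + red)

def rvi(nir, red):
    """Ratio Vegetation Index (here taken as NIR / Red)."""
    return nir / red

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L damps soil-brightness effects."""
    return (1.0 + L) * (nir - red) / (nir + red + L)

nir = np.array([0.50, 0.30])  # vegetated pixel, bare-soil pixel
red = np.array([0.08, 0.25])
print(ndvi(nir, red))  # ~[0.72, 0.09]: vegetation scores much higher
print(rvi(nir, red))   # ~[6.25, 1.20]
print(savi(nir, red))  # ~[0.58, 0.07]: soil-adjusted variant of NDVI
```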
Hyperspectral Vegetation Index
With the advent of hyperspectral data, vegetation indices have been developed specifically for such data.
Discrete-Band Normalised Difference Vegetation Index
Yellowness Index
Photochemical Reflectance Index
Discrete-Band Normalised Difference Water Index
Red Edge Position Determination
Crop Chlorophyll Content Prediction
Moment distance index (MDI)
Advanced Vegetation Indices
With the emergence of machine learning, certain algorithms can be used to derive vegetation indices from data. This makes it possible to take all spectral bands into account and to discover hidden parameters that can strengthen these vegetation indices. As a result, such indices can be more robust to lighting variations, shadows, or even uncalibrated images, provided such artifacts exist in the training data.
Synthesis of Vegetation Indices Using Genetic Programming
A soft computing approach for selecting and combining spectral bands
DeepIndices: Remote Sensing Indices Based on Approximation of Functions through Deep Learning
See also
Crop coefficient
References
Remote sensing
Biogeography | Vegetation index | Biology | 890 |
42,841,555 | https://en.wikipedia.org/wiki/De%20materia%20medica | (Latin name for the Greek work , , both meaning "On Medical Material") is a pharmacopoeia of medicinal plants and the medicines that can be obtained from them. The five-volume work was written between 50 and 70 CE by Pedanius Dioscorides, a Greek physician in the Roman army. It was widely read for more than 1,500 years until supplanted by revised herbals in the Renaissance, making it one of the longest-lasting of all natural history and pharmacology books.
The work describes many drugs known to be effective, including aconite, aloes, colocynth, colchicum, henbane, opium and squill. In total, about 600 plants are covered, along with some animals and mineral substances, and around 1000 medicines made from them.
was circulated as illustrated manuscripts, copied by hand, in Greek, Latin, and Arabic throughout the medieval period. From the 16th century onwards, Dioscorides' text was translated into Italian, German, Spanish, French, and into English in 1655. It served as the foundation for herbals in these languages by figures such as Leonhart Fuchs, Valerius Cordus, Lobelius, Rembert Dodoens, Carolus Clusius, John Gerard, and William Turner. Over time, these herbals incorporated increasing numbers of direct observations, gradually supplementing and eventually supplanting the classical text.
Several manuscripts and early printed versions of De materia medica survive, including the illustrated Vienna Dioscurides manuscript written in the original Greek in 6th-century Constantinople; it was used there by the Byzantines as a hospital text for just over a thousand years. Sir Arthur Hill saw a monk on Mount Athos still using a copy of Dioscorides to identify plants in 1934.
Book
Between 50 and 70 AD, a Greek physician in the Roman army, Dioscorides, wrote a five-volume book in his native Greek, Περὶ ὕλης ἰατρικῆς (Peri hylēs iatrikēs, "On Medical Material"), known more widely in Western Europe by its Latin title De materia medica. He had studied pharmacology at Tarsus in Roman Anatolia (now Turkey). The book became the principal reference work on pharmacology across Europe and the Middle East for over 1,500 years, and was thus the precursor of all modern pharmacopoeias.
In contrast to many classical authors, De materia medica was not "rediscovered" in the Renaissance, because it never left circulation; indeed, Dioscorides' text eclipsed the Hippocratic Corpus. In the medieval period, De materia medica was circulated in Latin, Greek, and Arabic. In the Renaissance from 1478 onwards, it was printed in Italian, German, Spanish, and French as well. In 1655, John Goodyer made an English translation from a printed version, probably not corrected from the Greek.
While being reproduced in manuscript form through the centuries, the text was often supplemented with commentary and minor additions from Arabic and Indian sources. Several illustrated manuscripts of De materia medica survive. The most famous is the lavishly illustrated Vienna Dioscurides (the Juliana Anicia Codex), written in the original Greek in Byzantine Constantinople in 512/513 AD; its illustrations are sufficiently accurate to permit identification, something not possible with later medieval drawings of plants; some of them may be copied from a lost volume owned by Juliana Anicia's great-grandfather, Theodosius II, in the early 5th century. The Naples Dioscurides and Morgan Dioscurides are somewhat later Byzantine manuscripts in Greek, while other Greek manuscripts survive today in the monasteries of Mount Athos. Densely-illustrated Arabic copies survive from the 12th and 13th centuries. The result is a complex set of relationships between manuscripts, involving translation, copying errors, additions of text and illustrations, deletions, reworkings, and a combination of copying from one manuscript and correction from another.
De materia medica is the prime historical source of information about the medicines used by the Greeks, Romans, and other cultures of antiquity. The work also records the Dacian names for some plants, which otherwise would have been lost. The work presents about 600 medicinal plants in all, along with some animals and mineral substances, and around 1,000 medicines made from these sources. Botanists have not always found Dioscorides' plants easy to identify from his short descriptions, partly because he had naturally described plants and animals from southeastern Europe, whereas by the 16th century his book was in use all over Europe and across the Islamic world. This meant that people attempted to force a match between the plants they knew and those described by Dioscorides, sometimes with catastrophic results.
Approach
Each entry gives a substantial amount of detail on the plant or substance in question, concentrating on medicinal uses but giving such mention of other uses (such as culinary) and help with recognition as is considered necessary. For example, on the "Mekon Agrios and Mekon Emeros", the opium poppy and related species, Dioscorides states that the seed of one is made into bread: it has "a somewhat long little head and white seed", while another "has a head bending down" and a third is "more wild, more medicinal and longer than these, with a head somewhat long—and they are all cooling." After this brief description, he moves at once into pharmacology, saying that they cause sleep; other uses are to treat inflammation and erysipelas, and if boiled with honey to make a cough mixture. The account thus combines recognition, pharmacological effect, and guidance on drug preparation. Its effects are summarized, accompanied by a caution.
Dioscorides then describes how to tell a good from a counterfeit preparation. He mentions the recommendations of other physicians, Diagoras (according to Eristratus), Andreas, and Mnesidemus, only to dismiss them as false and not borne out by experience. He ends with a description of how the liquid is gathered from poppy plants, and lists names used for it: chamaesyce, mecon rhoeas, oxytonon; papaver to the Romans, and wanti to the Egyptians.
As late as in the Tudor and Stuart periods in Britain, herbals often still classified plants in the same way as Dioscorides and other classical authors, not by their structure or apparent relatedness but by how they smelt and tasted, whether they were edible, and what medicinal uses they had. Only when European botanists like Matthias de l'Obel, Andrea Cesalpino and Augustus Quirinus Rivinus (Bachmann) had done their best to match plants they knew to those listed in Dioscorides did they go further and create new classification systems based on similarity of parts, whether leaves, fruits, or flowers.
Contents
The book is divided into five volumes. Dioscorides organized the substances by certain similarities, such as their being aromatic, or vines; these divisions do not correspond to any modern classification. In David Sutton's view the grouping is by the type of effect on the human body.
Volume I: Aromatics
Volume I covers aromatic oils, the plants that provide them, and ointments made from them. They include what are probably cardamom, nard, valerian, cassia or senna, cinnamon, balm of Gilead, hops, mastic, turpentine, pine resin, bitumen, heather, quince, apple, peach, apricot, lemon, pear, medlar, plum and many others.
Volume II: Animals to herbs
Volume II covers an assortment of topics: animals including sea creatures such as sea urchin, seahorse, whelk, mussel, crab, scorpion, electric ray, viper, cuttlefish and many others; dairy produce; cereals; vegetables such as sea kale, beetroot, asparagus; and sharp herbs such as garlic, leek, onion, caper and mustard.
Volume III: Roots, seeds and herbs
Volume III covers roots, seeds and herbs. These include plants that may be rhubarb, gentian, liquorice, caraway, cumin, parsley, lovage, fennel and many others.
Volume IV: Roots and herbs, continued
Volume IV describes further roots and herbs not covered in Volume III. These include herbs that may be betony, Solomon's seal, clematis, horsetail, daffodil and many others.
Volume V: Vines, wines and minerals
Volume V covers the grapevine, wine made from it, grapes and raisins; but also strong medicinal potions made by boiling many other plants including mandrake, hellebore, and various metal compounds, such as what may be zinc oxide, verdigris and iron oxide.
Influence and effectiveness
In Europe
Writing in The Great Naturalists, the historian of science David Sutton describes De materia medica as "one of the most enduring works of natural history ever written", one that "formed the basis for Western knowledge of medicines for the next 1,500 years."
The historian of science Marie Boas writes that herbalists depended entirely on Dioscorides and Theophrastus until the 16th century, when they finally realized they could work on their own. She notes also that herbals by different authors, such as Leonhart Fuchs, Valerius Cordus, Lobelius, Rembert Dodoens, Carolus Clusius, John Gerard and William Turner, were dominated by Dioscorides, his influence only gradually weakening as the 16th-century herbalists "learned to add and substitute their own observations".
Early science and medicine historian Paula Findlen, writing in the Cambridge History of Science: Early Modern Science, calls De materia medica "one of the most successful and enduring herbals of antiquity, [which] emphasized the importance of understanding the natural world in light of its medicinal efficiency", in contrast to Pliny's Natural History (which emphasized the wonders of nature) or the natural history studies of Aristotle and Theophrastus (which emphasized the causes of natural phenomena). Medicine historian Vivian Nutton, in Ancient Medicine, writes that Dioscorides's "five books in Greek On Materia medica attained canonical status in Late Antiquity." Science historian Brian Ogilvie calls Dioscorides "the greatest ancient herbalist" and his book "the summa of ancient descriptive botany", observing that its success was such that few other books in his domain have survived from classical times. Further, his approach matched the Renaissance liking for detailed description, unlike the philosophical search for essential nature (as in Theophrastus's botanical works). A critical moment was the decision by Niccolò Leoniceno and others to use Dioscorides "as the model of the careful naturalist—and his book as the model for natural history."
The Dioscorides translator and editor Tess Anne Osbaldeston notes that "For almost two millennia Dioscorides was regarded as the ultimate authority on plants and medicine", and that he "achieved overwhelming commendation and approval because his writings addressed the many ills of mankind most usefully." To illustrate this, she states that "Dioscorides describes many valuable drugs including aconite, aloes, bitter apple, colchicum, henbane, and squill". The work mentions the painkillers willow (leading ultimately to aspirin, she writes), autumn crocus and opium, which however is also narcotic. Many other substances that Dioscorides describes remain in modern pharmacopoeias as "minor drugs, diluents, flavouring agents, and emollients ... [such as] ammoniacum, anise, cardamoms, catechu, cinnamon, colocynth, coriander, crocus, dill, fennel, galbanum, gentian, hemlock, hyoscyamus, lavender, linseed, mastic, male fern, marjoram, marshmallow, mezereon, mustard, myrrh, orris (iris), oak galls, olive oil, pennyroyal, pepper, peppermint, poppy, psyllium, rhubarb, rosemary, rue, saffron, sesame, squirting cucumber (elaterium), starch, stavesacre (delphinium), storax, stramonium, sugar, terebinth, thyme, white hellebore, white horehound, and couch grass—the last still used as a demulcent diuretic." She notes that medicines such as wormwood, juniper, ginger, and calamine also remain in use, while "Chinese and Indian physicians continue to use liquorice". She observes that the many drugs listed to reduce the spleen may be explained by the frequency of malaria in his time. Dioscorides lists drugs for women to cause abortion and to treat urinary tract infection; palliatives for toothache, such as colocynth, and others for intestinal pains; and treatments for skin and eye diseases. As well as these useful substances, she observes that "A few superstitious practices are recorded in ," such as using Echium as an amulet to ward off snakes, or Polemonia (Jacob's ladder) for scorpion stings.
In the view of the historian Paula De Vos, De materia medica formed the core of the European pharmacopoeia until the end of the 19th century, suggesting that "the timelessness of Dioscorides' work resulted from an empirical tradition based on trial and error; that it worked for generation after generation despite social and cultural changes and changes in medical theory".
At Mount Athos in northern Greece, Dioscorides's text was still in use in its original Greek into the 20th century, as observed in 1934 by Sir Arthur Hill, Director of the Royal Botanic Gardens, Kew.
Arabic medicine
Along with his fellow physicians of Ancient Rome, Aulus Cornelius Celsus, Galen, Hippocrates and Soranus of Ephesus, Dioscorides had a major and long-lasting effect on Arabic medicine as well as medical practice across Europe. De materia medica was one of the first scientific works to be translated from Greek into Arabic (Arabic: Hayūlā ʿilāj al-ṭibb). It was translated first into Syriac and then into Arabic in 9th-century Baghdad. The translators were most often Syriac Christians, such as Hunayn ibn Ishaq, and their work is known to have been sponsored by local rulers, such as the Artuqids.
Manuscripts
Leiden Dioscurides (1083)
Manuscript (Or. 289), dated 1083, is an illustrated Arabic translation of Dioscurides' De materia medica. The work was originally translated from Greek into Arabic via Syriac by Hunayn ibn Ishaq (810–873) with the collaboration of Stephanus b. Bāsīl between 847 and 861. This translation was slightly revised by Ḥusayn b. Ibrāhīm al-Nātilī in 990–991. The current copy is based on an exemplar in the hand of al-Nātilī. The work was offered to the amīr of Samarqand, Abū ʿAlī al-Simǧūrī. Acquired by Levinus Warner (1619–1665) and bequeathed to Leiden University Library on his death.
A digitized version is available via Leiden's Digital Collections.
1224 manuscript
One manuscript is dated to 1224, but its provenance is uncertain. It is generally cautiously attributed to "Iraq or Northern Jazira, possibly Baghdad". Its folios have been dispersed among multiple institutions and collectors.
Istanbul, Topkapı Palace, Ahmet II 2127 (1229)
This copy was created by Abd Al-Jabbar ibn Ali in 1229.
References
Cited sources
(subscription required for online access)
Further reading
Manuscripts and editions
Note: Editions may vary by both text and numbering of chapters
Arabic
Digitized version of Kitāb al-Ḥašāʾiš fī hāyūlā al-ʿilāg ̌al-ṭibbī Or. 289 Illustrated Arabic De Materia Medica of Dioscorides from Digital Collections at Leiden University Libraries
English
The Greek Herbal of Dioscorides ... Englished by John Goodyer A. D. 1655, edited by R.T. Gunter (1933).
De materia medica, translated by Lily Y. Beck (2005). Hildesheim: Olms-Weidman.
(from the Latin, after John Goodyer 1655)
French
Edition of Martin Mathee, Lyon (1559) in six books
German
Edition of J Berendes, Stuttgart 1902
Greek
Naples Dioscurides: Codex ex Vindobonensis Graecus 1 ca 500 AD, at Biblioteca Nazionale di Napoli site
English description, World Digital Library
Edition of Karl Gottlob Kühn, being Volume XXV of his Medicorum Graecorum Opera, Leipzig 1829, together with annotation and parallel text in Latin
Book I – Book II – Book III – Book IV – Book V – Indices
Edition of Max Wellman, Berlin
Books I, II – Books III, IV – Book V
Greek and Latin
(Index in frontispiece)
Latin
Edition of Jean Ruel 1552
Index – Preface – Book I – Book II – Book III – Book IV – Book V
De Medica Materia : libri sex, Ioanne Ruellio Suesseionensi interprete, translated by Jean Ruel (1546).
De Materia medica : libri V Eiusdem de Venenis Libri duo. Interprete Iano Antonio Saraceno Lugdunaeo, Medico, translated by Janus Antonius Saracenus (1598).
Spanish
Edition of Andres de Laguna 1570 site
Andres de Laguna, published at Antwerp 1555 , at Biblioteca Nacional de España site
Dioscórides Interactivo Ediciones Universidad Salamanca. Spanish and Greek.
External links
Ancient Roman medicine
Medical manuals
Herbals
History of pharmacy
Natural history books
Pharmacology literature
Pharmacopoeias
1st-century books in Latin | De materia medica | Chemistry | 3,747 |
6,362,086 | https://en.wikipedia.org/wiki/Advanced%20Wireless%20Services | Advanced Wireless Services (AWS) is a wireless telecommunications spectrum band used for mobile voice and data services, video, and messaging. AWS is used in the United States, Argentina, Canada, Colombia, Mexico, Chile, Paraguay, Peru, Ecuador, Trinidad and Tobago, Uruguay and Venezuela. It replaces some of the spectrum formerly allocated to Multipoint Multichannel Distribution Service (MMDS), sometimes referred to as Wireless Cable, that existed from 2150 to 2162 MHz.
The AWS band uses microwave frequencies in several segments between 1695 and 2200 MHz. The service is intended to be used by mobile devices such as wireless phones for mobile voice, data, and messaging services. Most manufacturers of smartphone mobile handsets provide versions of their phones that include radios that can communicate using the AWS spectrum. Though initially limited, device support for AWS has steadily improved the longer the band has been in general use, with most high-end and many mid-range handsets supporting it over UMTS, LTE and 5G NR.
Changes
The AWS band defined in 2002 (AWS-1), used microwave frequencies in two segments, from 1710 to 1755 MHz for uplink, and from 2110 to 2155 MHz for downlink. The service is intended to be used by mobile devices such as wireless phones for mobile voice, data, and messaging services. Most manufacturers of smartphone mobile handsets provide versions of their phones that include radios that can communicate using the AWS spectrum. Since for downlink AWS uses a subset of UMTS frequency band I (2100 MHz) some UMTS2100 capable handsets do detect AWS networks but cannot register on them due to the difference in uplink frequencies (1710–1755 MHz for AWS versus 1920–1980 MHz for UMTS2100).
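As a rough sketch (not from the article), the paired ranges above imply a fixed 400 MHz duplex spacing between AWS-1 uplink and downlink carriers, which also makes the UMTS2100 handset mismatch easy to see; the helper below is illustrative, not a real API.

```python
# Minimal sketch: pairing AWS-1 (band 4) uplink and downlink carriers.
# The 400 MHz spacing follows from the ranges above (2110 - 1710 = 400).
AWS1_UPLINK = (1710.0, 1755.0)    # MHz
AWS1_DOWNLINK = (2110.0, 2155.0)  # MHz
DUPLEX_SPACING = 400.0            # MHz, downlink minus uplink

def downlink_for(uplink_mhz: float) -> float:
    lo, hi = AWS1_UPLINK
    if not lo <= uplink_mhz <= hi:
        raise ValueError(f"{uplink_mhz} MHz is outside the AWS-1 uplink band")
    return uplink_mhz + DUPLEX_SPACING

print(downlink_for(1732.5))  # 2132.5 MHz, a mid-band pair

# A UMTS2100 handset transmits at 1920-1980 MHz: its receiver overlaps
# the AWS downlink range, but its transmitter cannot reach the AWS
# uplink band -- the registration failure described above.
```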
Though initially limited, device support for AWS has steadily improved the longer the frequency has been in general use, with most high-end and many mid-range handsets supporting it over HSPA, LTE, or both. In Canada, almost all available LTE handsets support AWS, as it was the first frequency over which LTE was offered there, and as of August 2014 it was still the most commonly supported frequency for LTE in Canada.
In 2012 the FCC released rules for the 'H' block (AWS-2), covering the frequencies 1915–1920 MHz and 1995–2000 MHz.
In 2013 the FCC regulated the AWS-3 block, covering the bands 1695–1710 MHz, 1755–1780 MHz and 2155–2180 MHz.
In 2012 there was a proposal regarding the AWS-4 block, which regulated use of 2000–2020 MHz and 2180–2200 MHz. These frequencies were initially proposed for use with the Mobile Satellite Service (MSS), but additional uses were later introduced.
Canada
In Canada, Industry Canada held the auction for AWS spectrum in 2008. Freedom Mobile (formerly Wind Mobile) had licensed AWS spectrum in every province, and began offering voice and data services on December 16, 2009. Its Saskatchewan and Manitoba spectrum was later sold off to SaskTel and MTS, respectively. Freedom operates only in British Columbia, Alberta and Ontario, although it has roaming agreements with Rogers, Telus and Bell at extra cost.
Mobilicity also used the AWS spectrum and began offering services in May 2010, operating in similar areas as Wind but with a smaller network footprint. Its AWS network was combined with Rogers when the latter company acquired Mobilicity in 2015.
Quebecor licensed AWS spectrum throughout the province of Quebec and began offering service with its Vidéotron Mobile brand on September 9, 2010.
Shaw Communications licensed AWS spectrum in western Canada and northern Ontario, began to build some infrastructure for providing wireless phone service, but subsequently decided to cancel further development and did not launch this service. The licenses were eventually sold to Rogers, with some transferred to Wind. Shaw re-entered the mobile services market when it acquired Wind Mobile in 2016.
Halifax-based EastLink obtained licenses in eastern Canada, with a small amount of spectrum bought in Ontario and Alberta, and is currently building up infrastructure to launch mobile phone and data services in Nova Scotia and PEI in 2012. This Service has since launched and is available in numerous markets around Atlantic Canada with roaming through Rogers and Bell.
Rogers Wireless, Bell Mobility, Telus Mobility, SaskTel, and Manitoba Telecom Services (MTS) all received licenses for AWS spectrum, which they are now using for their LTE networks. Freedom Mobile has subsequently refarmed some spectrum in its UMTS network and deployed LTE on bands 4 and 66.
United States
In the United States, the service is administered by the Federal Communications Commission. The licenses were broken up into 6 blocks (A-F). Block A consisted of 734 Cellular Market Areas (CMA). Blocks B and C were each divided into 176 Economic Areas (EA), sometimes referred to as BEA by the FCC. Blocks D, E, and F were each broken up into 12 Regional Economic Area Groupings (REAG), sometimes referred to as REA by the FCC.
Bidding for this new spectrum started on August 9, 2006 and the majority of the frequency blocks were sold to T-Mobile USA to deploy their 3G wireless network in the United States. This move effectively killed the former MMDS and/or Wireless Cable service in the United States.
Operators
The following mobile network operators are known to use AWS. Indicated in the list are the launch dates and city.
Antigua and Barbuda
FLOW – November 2014
Digicel
Argentina
Movistar – December 2014
Personal – December 2014
Claro Argentina – June 2015
Canada
Primary network (UMTS)
Vidéotron Mobile – September 9, 2010 in Montreal and Québec City, QC
UMTS and LTE
Freedom Mobile – December 16, 2009 (primary) and December 2016 in Toronto, ON.
Eastlink Wireless – Feb 2013 (primary) in Halifax, NS.
LTE only
Rogers Wireless – July 2011
Bell Mobility – November 2011
Telus Mobility – February 2012
MTS Wireless – September 2012
SaskTel Wireless – January 2013
Chile
Nextel Chile – April 2012 to June 2015
WOM Chile – July 2015
VTR Chile - July 2015
Colombia
Tigo-ETB – December 2013 in Bogotá, Medellín, Barranquilla, Cali, Pereira, Manizales, Armenia
Movistar – December 2013 in Bogotá, Medellín, Barranquilla, Cali, Pereira, Bucaramanga, Cartagena
WOM Colombia - August 2014
Dominican Republic
Claro República Dominicana – July 2014
Ecuador
CNT EP – Fall 2013 starting in Guayaquil and Quito
Jamaica
Digicel - 2019.
FLOW - December 2016.
Mexico
Nextel Mexico – September 2012
Telcel – November 2012
Movistar – September 2014, (moved to B2/PCS 1900MHz)
AT&T – September 2015
Paraguay
VOX Copaco – February 2012
CLARO - was assigned with 1700/2100 AWS spectrum in 2015
Tigo Paraguay
Perú
Entel Perú
Movistar Perú
Trinidad and Tobago
Digicel - August 2019
bmobile - November 2022
United States
T-Mobile US – May 1, 2008 in New York City
Big River Telephone – 2007 in Bollinger, Cape Girardeau, Madison, Perry, St. Francois, Ste. Genevieve, Washington and Wayne Counties Missouri.
i wireless
AWN Alaska
Mosaic Telecom
Cricket Wireless
MetroPCS – March 31, 2008 in Las Vegas
Verizon Wireless – Fall 2013 starting in New York City.
AT&T Mobility
Uruguay
Antel – 2013
Venezuela
Movistar, February 2015
Movilnet, January 2017
See also
Federal Communications Commission (FCC)
List of AWS-1 devices
UMTS frequency bands
White Spaces Coalition
Personal Communications Services (PCS)
References
External links
FCC: Advanced Wireless Services
PhoneScoop's Visual Guide to AWS
Mobile technology | Advanced Wireless Services | Technology | 1,610 |
35,504,917 | https://en.wikipedia.org/wiki/Andromeda%20XIX | Andromeda XIX is a satellite galaxy of the Andromeda Galaxy (M31), a member of the Local Group, like the Milky Way Galaxy. Andromeda XIX is considered "the most extended dwarf galaxy known in the Local Group", and has been shown to have a half-light radius of 1.7 kiloparsec (kpc). It was discovered by the Canada–France–Hawaii Telescope, and is thought to be a dwarf galaxy.
As with other dwarf galaxies, Andromeda XIX is not producing new stars: 90% of its star formation occurred over 9 billion years ago. However, compared to dwarf galaxies of similar mass, Andromeda XIX is extremely diffuse, like Antlia II.
History
A survey using the 1 deg² MegaPrime/MegaCam camera on the Canada–France–Hawaii Telescope (CFHT) mapped roughly one quarter of the Andromeda Galaxy's stellar halo out to ~150 kpc. The survey confirmed the clumpiness of Andromeda's stellar halo and revealed the existence of multiple other dwarf galaxies, including Andromeda XI, XII, XIII, XV, XVI, XVIII, XIX, and XX.
See also
List of Andromeda's satellite galaxies
References
Andromeda (constellation)
Dwarf galaxies
Interacting galaxies
5056919
Andromeda Subgroup | Andromeda XIX | Astronomy | 291 |
51,089,566 | https://en.wikipedia.org/wiki/Unlimited%20Cities | Unlimited Cities (in French Villes sans limites) are methods and apps to facilitate the civil society involvement in urban transformations. Unlimited Cities DIY is an open-source upgrade of the application linked with the New Urban Agenda of the United Nations "Habitat III" Conference.
Use
The apps run on mobile devices (tablets and smartphones) and let people express their views on the evolution of a neighbourhood before future developments are outlined by professionals. "Through a simple interface, they make up a realistic representation of their expectations for a given site. Six cursors can be played with: urban density, nature, mobility, neighbourhood life, digital, creativity/art in the city. Designed by the UFO urban planning agency in partnership with the HOST architectural and urban planning firm, the apps provide upstream information to urban project developers, as well as to people, to query their design wishes and thus to appropriate the future project." Thus the Unlimited Cities method gives civil society the opportunity to act and co-construct with professional urban developers without being subject to solutions predetermined by experts and public authorities.
According to one of its creators, the urban architect Alain Renk: "Today the future of cities and metropoles lies less in the poetic, imaginary and solitary techniques found in Jules Verne's novels than in capacities offered by digital mediations to imagine, represent and openly share knowledge, through the collective intelligence, offering opportunities to consider less standardized and prioritized lifestyles, freer creativity, shorter design and manufacturing circuits of circular economies and, ultimately, preservation of common goods."
Background
The project originated in 2002 at the ArchiLab international meetings in Orléans, with the publication of the book Construire la Ville Complexe? (Building the Complex City?), issued by Jean-Michel Place, a well-known publisher in the world of architecture. It continued in 2007 with research using digital urban ecosystem simulators under the Plan Construction Architecture of the Ministère du développement durable (the Ministry of Sustainable Development's construction and architecture programme). A crossed interview with Alain Renk and the sociologist Marc Augé discusses the potential of simulators to harness collective intelligence.
In 2009, the HOST agency, responsible for the creation of the Civic-Tech UFO, obtained certification from the Cap Digital and Advancity competitiveness clusters to set up the UrbanD collaborative research program, intended to lay the theoretical and technical basis of collaborative software for evaluating and representing the quality of urban life in order to inform decisions. This 3-year program (from early 2010 to late 2012) was the basis for the creation of the "Unlimited Cities" apps and required an €800,000 budget, half of which was funded by European Regional Development Fund (ERDF) subsidies.
In June 2011 a beta version of Unlimited Cities PRO was presented in Paris at the Futur en Seine festival, with real-world tests with visitors; it was then shown in Tokyo in November and in Rio de Janeiro in December of the same year.
On October 2, 2012, the first operational deployment was implemented by the town hall of Rennes: "the first tests were carried out in the area around the TGV train station and the prison demolition site in Rennes, and we discovered that when they were able to build what they wanted, users quickly forgot reluctances such as those about urban density, and they conceived urban projects that often went against conventional wisdom. The idea of urban density and tall buildings is often rejected, but it is accepted as soon as people can adapt it to their own logic."
The tool was then implemented in Montpellier in June 2013, and in June, July and August of that year in Evreux, where UFO worked on the conversion of the former Saint-Louis Hospital downtown.
In June 2015 in Grenoble, "the application was used to imagine, jointly with the population, solutions to give more visibility to the transport offer. It is a different way of working. We no longer turn only to the planners but go directly to the locals and ask for their opinion, their vision. The purpose is obviously to increase bus utilization, but it is also to have people satisfied with the arrangements put in place."
The first cities to use Unlimited Cities PRO attracted attention thanks to the mediators' ability to engage people in the street, often off guard, with an appealing, playful tablet. Their presence in the neighbourhoods for several weeks, right where people live and work, meant that the number of participants was much higher (over 1,600 people in Evreux) than with conventional methods of consultation, which struggle to get people to go to places allocated for this. These achievements aroused the interest of researchers, who analysed the resulting changes in the attitudes of urban professionals and citizens. Can we talk about a rebirth of participatory democracy? Are those images, which belong to hyperrealism, misleading, or conversely are they accessible to all kinds of people? Are the open-source dimension of the collected data and its accessibility in real time involved in building trust between experts and non-experts? The method is the topic of several scientific articles, and has been honoured with several awards in France, as well as with the Open Cities Award from the European Commission.
UN-Habitat and Unlimited Cities DIY
The first requests came from Rio in 2011, where associations wanted to use the software in the favelas; similar requests followed recurrently from Africa, South America and India. In parallel, associations and groups in Europe also wanted to be free to implement the collaborative urban planning device in their territories independently, without needing financing beyond users' support.
In June 2013 the Civic-Tech UFO presented the Unlimited Cities DIY prototype at the Futur en Seine festival: an open-source, free and easy-to-implement upgrade. Presentations of the beta version then followed non-stop: September 2013 in Nantes for the Ecocity symposium; November 2013 in Barcelona at the Open Cities Award ceremony; January 2014 in Rennes for a meeting at the Institute of Urbanism; March 2014 in Le Havre at a conference on collaborative urbanism; May 2014 in London at the Franco-British symposium on smart cities; July 2014 in Berlin for the Open Knowledge Festival; early October 2014 in Hyderabad, India, for the Metropolis congress; late October 2014 in Wuhan, China, for the conference of the Sino-French ecocity Caiden; 2015 in Wroclaw for the Hacking of the Social Operating System; September 2015 in Lyon at the annual conference of the national federation of planning agencies; and many other workshops that confirmed recurring requests for an open-source version that was easy to implement.
2016 saw a strong acceleration in the expansion of the open-source version: several workshops were organized with the University of Lyon in April to redesign the campus of the École Centrale (Wikibuilding-campus project), followed in China by several conferences and uses of the software with farmers (Wikibuilding-Village project), with children (Wikibuilding-Natur project), and with students and faculty of the HUST university in Wuhan.
The first contacts between the UN-Habitat agency and the Unlimited Cities DIY software were made in October 2014 in Hyderabad with the City Resilience Profiling Program, and then in Barcelona in 2015. The connection took concrete form at the subsequent Habitat III Conference. Held every twenty years, the Habitat conferences organized by the UN form a sounding board that accelerates the consideration of major urban issues in public policy. In 2016, the preparatory document for the Habitat III Conference in Quito highlighted the need to move towards urban planning carried out together with civil society. The non-profit organisation "7 Milliards d'urbanistes" (7 billion urban planners) was to be present in Quito to introduce the open-source Unlimited Cities DIY software to delegates of the 197 member countries, so that collaborative urban planning could become available to the greatest possible number of people.
Honours and awards
In 2015, the Wikibuilding project designed for the future Paris Rive Gauche was preselected in the "Réinventer Paris" competition.
2013 Winner of Printemps du numérique (Rural TIC)
2013 Winner of Territoires innovants (Interconnected)
2013 Winner of Open Cities Awards (European Commission)
2011 Winner of the call for projects for Futur en Seine (Cap digital)
2011 Nominee of the 2011 Prix de la croissance verte numérique Award (Acidd)
2010 Selection of the Carrefours Innovations&Territoires (CDC)
Publications
(fr) Créer virtuellement un urbanisme collectif by Julie Nicolas and Xavier Crépin, Le Moniteur - N°5813, April 2015.
(fr) L’urbanisme collaboratif, expérience et contexte by Nancy Ottaviano, GIS Symposium Participation.
(fr) Clément Marquet, Nancy Ottaviano and Alain Renk, « Pour une ville contributive », Urbanisme dossier "Villes numériques, villes intelligentes?", Autumn 2014, p. 53-55.
(fr) L’appropriation de la ville par le numérique by Clément Marquet : Undergoing Thesis, Institut Mines Telecom.
(fr) Et si on inventait l’enquête d’imagination publique? by Sylvain Rolland, La Tribune hors-série Grand-Paris.
(fr) Villes sans limite, un outil pour stimuler l’imagination publique by Karim Ben Merien and Xavier Opige, Les Cahiers de l’IAU idf
(fr) Wikibuilding : l’urbanisme participatif de demain ? by Ludovic Clerima, Explorimmo, 2015
Alain Renk, Urban Diversity: Cities Of Differences Create Different Cities, in WorldCrunch.com, November 12, 2013 (visited on May 28, 2016)
(fr) Philippe Gargov, Samsung et son safari imaginaire : l’urbanisme collaboratif is now mainstream, on pop-up-urbain.com, December 2012 (visited on June 13, 2016)
July 8, 2011, radio broadcast: Qu’est-ce que la ville numérique? : The field of the possible, France Culture, 2011
References
Urban planning
Open source | Unlimited Cities | Engineering | 2,094 |
8,527,545 | https://en.wikipedia.org/wiki/Maxdata | Maxdata is the name of two German information technology companies.
The original Maxdata was founded in 1987 by Holger Lampatz in Marl, North Rhine-Westphalia, as Maxdata Computer GmbH. It began selling personal computers in 1990. Maxdata used its own name for B2B products while selling notebooks and displays for the consumer market under the Belinea brand.
In 1997, Maxdata was majority-owned by Vobis, itself a fully-owned subsidiary of Metro AG. In 2003, the company was restructured as Maxdata AG and listed in the Prime Standard. Maxdata filed for insolvency at the Local Court in Essen on Wednesday, 25 June 2008. It had 1,000 employees at the time of closure. The Belinea brand was sold to the Brunen IT Group while the Maxdata name was sold to S&T.
The Maxdata Computer AG, a fully-owned Swiss subsidiary of the Maxdata AG, was taken over by Brunen IT as Belinea AG before being sold to S&T and re-named Maxdata (Schweiz) AG. In 2016, S&T stopped the production of computers and notebooks.
S&T founded a new Maxdata Deutschland GmbH in Mendig in 2014 as a fully-owned subsidiary, but renamed it S&T Deutschland GmbH in 2016.
Products
Its product lines included servers, desktop computers, notebooks and the Belinea series of monitors.
References
Companies based in North Rhine-Westphalia
Defunct computer hardware companies
Defunct computer systems companies
German brands
Defunct technology companies of Germany
Technology companies disestablished in 2008 | Maxdata | Technology | 345 |
76,407,981 | https://en.wikipedia.org/wiki/Biomass%20Energy%20and%20Alcohol%20Fuels%20Act | The Biomass Energy and Alcohol Fuels Act of 1980 is a statute that addresses general biomass energy development in its various forms, and the use of gasohol. It was one of six acts enacted by the U.S. Energy Security Act.
The purpose of the statute is to reduce the dependence of the United States on imported petroleum and natural gas. Enacted by the U.S. Congress, it provides for the production and use of biomass energy, including municipal waste biomass energy and rural, agricultural, and forestry biomass energy.
The Biomass Energy and Alcohol Fuels Act (BEAFA) consists of four subsections:
General Biomass Energy Development
Municipal Waste Biomass Energy
Rural, Agricultural, and Forestry Biomass Energy, and
Miscellaneous Biomass Provisions (The use of gasohol in Federal motor vehicles)
Roles of the Secretary of Agriculture and Secretary of Energy
For general biomass energy development, the Secretary of Agriculture and the Secretary of Energy were required by the act to jointly prepare, and transmit to the President and the Congress, a plan for maximizing biomass energy production and use. The act required the plan to be designed to achieve a total level of alcohol production and use within the United States of at least 60,000 barrels per day of alcohol by December 31, 1982.
For municipal waste biomass energy, the Secretary of Energy was to prepare a report and transmit it to the President and Congress.
The Secretary of Agriculture was to prepare such a report for rural, agricultural, and forestry biomass energy.
References
Energy policy
Biomass | Biomass Energy and Alcohol Fuels Act | Environmental_science | 306 |
32,872,804 | https://en.wikipedia.org/wiki/Vortex%20core%20line | In scientific visualization, a vortex core line is a line-like feature tracing the center of a vortex with in a velocity field.
Detection methods
Several methods exist to detect vortex core lines in a flow field. One comparative study examined nine methods for vortex detection, including five methods for the identification of vortex core lines. Although this list is incomplete, the authors considered it representative of the state of the art (as of 2004).
One of these five methods is the eigenvector criterion of Sujudi and Haimes: in a velocity field v(x,t), a point x lies on a vortex core line if v(x,t) is an eigenvector of the velocity gradient tensor ∇v(x,t) and the other (non-corresponding) eigenvalues are complex.
Another is the Lambda2 method, which is Galilean invariant and thus produces the same results when a uniform velocity field is added to the existing velocity field or when the field is translated.
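A minimal point-wise sketch of both criteria (an illustration under simplifying assumptions, not a production detector) can be written from the 3×3 velocity gradient tensor J = ∇v:

```python
# Illustrative sketch: evaluating the eigenvector (core-line) test and
# the Lambda2 test at one point, given velocity v and gradient J = grad v.
import numpy as np

def core_line_test(J, v, tol=1e-8):
    """True if v is parallel to the sole real eigenvector of J while the
    remaining pair of eigenvalues is complex."""
    w, V = np.linalg.eig(J)
    real = np.abs(w.imag) < tol
    if np.count_nonzero(real) != 1:       # need exactly one real eigenvalue
        return False
    e = np.real(V[:, real]).ravel()
    e /= np.linalg.norm(e)
    u = v / np.linalg.norm(v)
    return bool(abs(e @ u) > 1.0 - tol)   # v aligned with that eigenvector

def lambda2(J):
    """Second-smallest eigenvalue of S^2 + Omega^2; negative in a vortex."""
    S = 0.5 * (J + J.T)                   # strain-rate tensor
    Om = 0.5 * (J - J.T)                  # spin (vorticity) tensor
    return np.linalg.eigvalsh(S @ S + Om @ Om)[1]  # ascending order

# Rigid-body rotation about the z-axis, flow along the axis: an
# idealised vortex core.
J = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])
v = np.array([0.0, 0.0, 1.0])
print(core_line_test(J, v))  # True: the point lies on the core line
print(lambda2(J))            # -1.0 < 0: inside a vortex region
```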
See also
Flow visualization
References
Visualization (graphics)
Vortices | Vortex core line | Chemistry,Mathematics,Technology | 196 |
19,000,739 | https://en.wikipedia.org/wiki/Car%20Design%20News | Car Design News (CDN) () is an online news and information service for the international automotive design community. CDN covers production and concept cars, the career moves of significant car designers, major international auto shows, design competitions and student exhibitions at the major transportation design colleges. It is based in the UK and published by Ultima Media, part of German publisher Süddeutscher Verlag.
CDN offers both free and paywall-protected content.
History
CDN was founded in November 1999 by Brett Patterson, an automotive designer working at the time for General Motors on assignment in Detroit. In 2004, Patterson relocated to London, UK, taking on fellow car designers Nick Hull and Sam Livingstone as partners.
Eric Gallina came on board in 2005 and helped grow the site's editorial content as well as its reach. He became Editor just before the company was acquired by Ultima Media Ltd in 2008. Gallina continued to work under new ownership, running the website and commissioning contributors from 2008 through 2012.
The Car Design News portfolio of activities currently includes Car Design of the Year, Car Design Night, Car Design Awards China, Car Design News Webinars, Interior Motives (a print magazine illustrating in detail the design development of selected car interiors), Interior Motives conference, and the Interior Motives Student Design Awards.
External links
Car Design News
References
Automobile magazines published in the United States
Online magazines published in the United States
Automotive websites
Design magazines
Magazines established in 1999
Magazines published in Detroit
1999 establishments in Michigan | Car Design News | Engineering | 302 |
2,229,292 | https://en.wikipedia.org/wiki/Stirling%20numbers%20of%20the%20second%20kind | In mathematics, particularly in combinatorics, a Stirling number of the second kind (or Stirling partition number) is the number of ways to partition a set of n objects into k non-empty subsets and is denoted by $S(n,k)$ or $\left\{{n \atop k}\right\}$. Stirling numbers of the second kind occur in the field of mathematics called combinatorics and the study of partitions. They are named after James Stirling.
The Stirling numbers of the first and second kind can be understood as inverses of one another when viewed as triangular matrices. This article is devoted to specifics of Stirling numbers of the second kind. Identities linking the two kinds appear in the article on Stirling numbers.
Definition
The Stirling numbers of the second kind, written $\left\{{n \atop k}\right\}$ or $S(n,k)$ or with other notations, count the number of ways to partition a set of $n$ labelled objects into $k$ nonempty unlabelled subsets. Equivalently, they count the number of different equivalence relations with precisely $k$ equivalence classes that can be defined on an $n$-element set. In fact, there is a bijection between the set of partitions and the set of equivalence relations on a given set. Obviously,
$$\left\{{n \atop n}\right\} = 1 \quad \text{for } n \ge 0, \qquad \text{and} \qquad \left\{{n \atop 1}\right\} = 1 \quad \text{for } n \ge 1,$$
as the only way to partition an n-element set into n parts is to put each element of the set into its own part, and the only way to partition a nonempty set into one part is to put all of the elements in the same part. Unlike Stirling numbers of the first kind, they can be calculated using a one-sum formula:
$$\left\{{n \atop k}\right\} = \frac{1}{k!} \sum_{j=0}^{k} (-1)^{j} \binom{k}{j} (k-j)^{n}.$$
The Stirling numbers of the second kind may also be characterized as the numbers that arise when one expresses powers of an indeterminate x in terms of the falling factorials $(x)_k = x(x-1)(x-2)\cdots(x-k+1)$:
$$x^{n} = \sum_{k=0}^{n} \left\{{n \atop k}\right\} (x)_{k}.$$
(In particular, $(x)_0 = 1$ because it is an empty product.)
Stirling numbers of the second kind satisfy the relation
Notation
Various notations have been used for Stirling numbers of the second kind. The brace notation was used by Imanuel Marx and Antonio Salmeri in 1962 for variants of these numbers (Antonio Salmeri, "Introduzione alla teoria dei coefficienti fattoriali", Giornale di Matematiche di Battaglini 90 (1962), pp. 44–54). This led Knuth to use it, as shown here, in the first volume of The Art of Computer Programming (1968) (Donald E. Knuth, Fundamental Algorithms, Reading, Mass.: Addison–Wesley, 1968). According to the third edition of The Art of Computer Programming, this notation was also used earlier by Jovan Karamata in 1935 (Jovan Karamata, "Théorèmes sur la sommabilité exponentielle et d'autres sommabilités s'y rattachant", Mathematica (Cluj) 9 (1935), pp. 164–178). The notation S(n, k) was used by Richard Stanley in his book Enumerative Combinatorics and also, much earlier, by many other writers.
The notations used on this page for Stirling numbers are not universal, and may conflict with notations in other sources.
Relation to Bell numbers
Since the Stirling number $\left\{{n \atop k}\right\}$ counts set partitions of an n-element set into k parts, the sum
$$B_{n} = \sum_{k=0}^{n} \left\{{n \atop k}\right\}$$
over all values of k is the total number of partitions of a set with n members. This number is known as the nth Bell number.
Analogously, the ordered Bell numbers can be computed from the Stirling numbers of the second kind via
$$a(n) = \sum_{k=0}^{n} k! \left\{{n \atop k}\right\}.$$
Table of values
Below is a triangular array of values for the Stirling numbers of the second kind $\left\{{n \atop k}\right\}$ (rows n = 0 to 6, columns k = 0 to 6):

 n\k   0    1    2    3    4    5    6
  0    1
  1    0    1
  2    0    1    1
  3    0    1    3    1
  4    0    1    7    6    1
  5    0    1   15   25   10    1
  6    0    1   31   90   65   15    1

As with the binomial coefficients, this table could be extended to k > n, but the entries would all be 0.
Properties
Recurrence relation
Stirling numbers of the second kind obey the recurrence relation
$$\left\{{n+1 \atop k}\right\} = k \left\{{n \atop k}\right\} + \left\{{n \atop k-1}\right\}$$
with initial conditions
$$\left\{{0 \atop 0}\right\} = 1 \quad\text{and}\quad \left\{{n \atop 0}\right\} = \left\{{0 \atop n}\right\} = 0 \ \text{for } n > 0.$$
For instance, the number 25 in column k = 3 and row n = 5 is given by 25 = 7 + (3×6), where 7 is the number above and to the left of 25, 6 is the number above 25 and 3 is the column containing the 6.
To prove this recurrence, observe that a partition of the $n+1$ objects into k nonempty subsets either contains the $(n+1)$-th object as a singleton or it does not. The number of ways that the singleton is one of the subsets is given by
$$\left\{{n \atop k-1}\right\},$$
since we must partition the remaining $n$ objects into the available $k-1$ subsets. In the other case the $(n+1)$-th object belongs to a subset containing other objects. The number of ways is given by
$$k \left\{{n \atop k}\right\},$$
since we partition all objects other than the $(n+1)$-th into k subsets, and then we are left with k choices for inserting object $n+1$. Summing these two values gives the desired result.
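The recurrence translates directly into a short dynamic-programming routine. The following Python sketch (function name ours) builds the triangle row by row, checks the worked example above, and reproduces the Bell number mentioned in the previous section as a row sum.

def stirling2(n, k):
    # S(i, j) = j*S(i-1, j) + S(i-1, j-1), with S(0, 0) = 1
    S = [[0] * (k + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[n][k]

assert stirling2(5, 3) == 25                         # the table entry discussed above
assert sum(stirling2(5, j) for j in range(6)) == 52  # 5th Bell number, B_5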
Another recurrence relation is given by
$$\left\{{n \atop k}\right\} = \frac{k^{n}}{k!} - \sum_{j=1}^{k-1} \frac{\left\{{n \atop j}\right\}}{(k-j)!},$$
which follows from evaluating the falling-factorial expansion of $x^n$ above at $x = k$.
Simple identities
Some simple identities include
$$\left\{{n \atop n-1}\right\} = \binom{n}{2}.$$
This is because dividing n elements into n − 1 sets necessarily means dividing it into one set of size 2 and n − 2 sets of size 1. Therefore we need only pick those two elements;
and
$$\left\{{n \atop 2}\right\} = 2^{n-1} - 1.$$
To see this, first note that there are 2^n ordered pairs of complementary subsets A and B. In one case, A is empty, and in another B is empty, so 2^n − 2 ordered pairs of subsets remain. Finally, since we want unordered pairs rather than ordered pairs we divide this last number by 2, giving the result above.
Another explicit expansion of the recurrence-relation gives identities in the spirit of the above example.
Identities
The table in section 6.1 of Concrete Mathematics provides a plethora of generalized forms of finite sums involving the Stirling numbers. Several particular finite sums relevant to this article include
Explicit formula
The Stirling numbers of the second kind are given by the explicit formula:
$$\left\{{n \atop k}\right\} = \frac{1}{k!} \sum_{j=0}^{k} (-1)^{j} \binom{k}{j} (k-j)^{n} = \sum_{j=0}^{k} \frac{(-1)^{k-j}\, j^{n}}{(k-j)!\, j!}.$$
This can be derived by using inclusion-exclusion to count the surjections from n to k and using the fact that the number of such surjections is $k! \left\{{n \atop k}\right\}$.
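The explicit formula can be checked directly against the recurrence; a short Python version (function name ours) is:

from math import comb, factorial

def stirling2_explicit(n, k):
    # (1/k!) * sum_{j=0}^{k} (-1)^j * C(k, j) * (k - j)^n; the sum is
    # always divisible by k!, so integer division is exact.
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1)) // factorial(k)

assert stirling2_explicit(5, 3) == 25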
Additionally, this formula is a special case of the kth forward difference of the monomial $x^n$ evaluated at x = 0:
$$\left\{{n \atop k}\right\} = \frac{1}{k!} \Delta^{k} x^{n} \Big|_{x=0}.$$
Because the Bernoulli polynomials may be written in terms of these forward differences, one immediately obtains a relation in the Bernoulli numbers:
$$B_{m} = \sum_{k=0}^{m} \frac{(-1)^{k} k!}{k+1} \left\{{m \atop k}\right\}.$$
The evaluation of the incomplete exponential Bell polynomial Bn,k(x1,x2,...) on the sequence of ones equals a Stirling number of the second kind:
$$B_{n,k}(1,1,\ldots,1) = \left\{{n \atop k}\right\}.$$
Another explicit formula given in the NIST Handbook of Mathematical Functions is
Parity
The parity of a Stirling number of the second kind is equal to the parity of a related binomial coefficient:
$$\left\{{n \atop k}\right\} \equiv \binom{z}{w} \pmod 2,$$
where
$$z = n - \left\lceil \frac{k+1}{2} \right\rceil, \qquad w = \left\lfloor \frac{k-1}{2} \right\rfloor.$$
This relation is specified by mapping n and k coordinates onto the Sierpiński triangle.
More directly, let two sets contain positions of 1's in the binary representations of the results of the respective expressions:
$$\mathbb{A}:\ \text{positions of 1's in}\ (n-k), \qquad \mathbb{B}:\ \text{positions of 1's in}\ \left\lfloor \frac{k-1}{2} \right\rfloor.$$
One can mimic a bitwise AND operation by intersecting these two sets:
$$\left\{{n \atop k}\right\} \bmod 2 = \begin{cases} 0, & \mathbb{A} \cap \mathbb{B} \ne \emptyset \\ 1, & \mathbb{A} \cap \mathbb{B} = \emptyset \end{cases}$$
to obtain the parity of a Stirling number of the second kind in O(1) time. In pseudocode, the test is
$$\left\{{n \atop k}\right\} \bmod 2 := \left[ \left( (n-k)\ \&\ \left\lfloor \frac{k-1}{2} \right\rfloor \right) = 0 \right],$$
where $[\,b\,]$ is the Iverson bracket.
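The article's original pseudocode is not preserved in this extract; the following Python sketch (function name ours) implements the same constant-time bitwise test, using the fact that z − w = n − k under the definitions of z and w above.

def stirling2_parity(n, k):
    # S(n, k) is odd iff the binary 1-positions of n - k and
    # floor((k - 1) / 2) are disjoint, i.e. iff their bitwise AND is 0.
    if k == 0:
        return 1 if n == 0 else 0
    return 1 if ((n - k) & ((k - 1) >> 1)) == 0 else 0

assert stirling2_parity(5, 3) == 25 % 2  # 25 is odd
assert stirling2_parity(4, 3) == 6 % 2   # 6 is even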
The parity of a central Stirling number of the second kind $\left\{{2n \atop n}\right\}$ is odd if and only if $n$ is a fibbinary number, a number whose binary representation has no two consecutive 1s.
Generating functions
For a fixed integer n, the ordinary generating function for Stirling numbers of the second kind is given by
$$\sum_{k=0}^{n} \left\{{n \atop k}\right\} x^{k} = T_{n}(x),$$
where $T_{n}(x)$ are Touchard polynomials. If one sums the Stirling numbers against the falling factorial instead, one can show the following identities, among others:
where are Touchard polynomials. If one sums the Stirling numbers against the falling factorial instead, one can show the following identities, among others:
and
which has special case
For a fixed integer k, the Stirling numbers of the second kind have rational ordinary generating function
$$\sum_{n=k}^{\infty} \left\{{n \atop k}\right\} x^{n} = \frac{x^{k}}{(1-x)(1-2x)\cdots(1-kx)}$$
and have an exponential generating function given by
$$\sum_{n=k}^{\infty} \left\{{n \atop k}\right\} \frac{x^{n}}{n!} = \frac{(e^{x}-1)^{k}}{k!}.$$
A mixed bivariate generating function for the Stirling numbers of the second kind is
$$\sum_{k=0}^{\infty} \sum_{n=k}^{\infty} \left\{{n \atop k}\right\} \frac{x^{n}}{n!} y^{k} = e^{y(e^{x}-1)}.$$
Lower and upper bounds
If $n \ge 2$ and $1 \le k \le n-1$, then
$$\frac{1}{2}(k^{2}+k+2)\,k^{n-k-1} - 1 \;\le\; \left\{{n \atop k}\right\} \;\le\; \frac{1}{2} \binom{n}{k} k^{n-k}.$$
Asymptotic approximation
For fixed value of $k$, the asymptotic value of the Stirling numbers of the second kind as $n \to \infty$ is given by
$$\left\{{n \atop k}\right\} \underset{n\to\infty}{\sim} \frac{k^{n}}{k!}.$$
If (where o denotes the little o notation) then
A uniformly valid approximation also exists: for all such that , one has
where , and is the unique solution to . Relative error is bounded by about .
Unimodality
For fixed $n$, $\left\{{n \atop k}\right\}$ is unimodal, that is, the sequence increases and then decreases. The maximum is attained for at most two consecutive values of k. That is, there is an integer $k_n$ such that
$$\left\{{n \atop 1}\right\} < \left\{{n \atop 2}\right\} < \cdots < \left\{{n \atop k_n}\right\} \ge \left\{{n \atop k_n+1}\right\} > \cdots > \left\{{n \atop n}\right\}.$$
Looking at the table of values above, the first few values for $k_n$ are 1, 1, 2, 2, 3, 3, 4.
When $n$ is large,
$$k_n \sim \frac{n}{\ln n},$$
and the maximum value of the Stirling number can be approximated with
Applications
Moments of the Poisson distribution
If X is a random variable with a Poisson distribution with expected value λ, then its n-th moment is
$$E(X^{n}) = \sum_{k=0}^{n} \left\{{n \atop k}\right\} \lambda^{k}.$$
In particular, the nth moment of the Poisson distribution with expected value 1 is precisely the number of partitions of a set of size n, i.e., it is the nth Bell number (this fact is Dobiński's formula).
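A quick numerical check of this identity (helper name ours):

from functools import lru_cache

@lru_cache(None)
def S(n, k):  # Stirling numbers of the second kind, by the recurrence
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

lam, n = 1.0, 5
moment = sum(S(n, k) * lam ** k for k in range(n + 1))  # E[X^n] for X ~ Poisson(lam)
assert moment == 52  # with lam = 1 this is the Bell number B_5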
Moments of fixed points of random permutations
Let the random variable X be the number of fixed points of a uniformly distributed random permutation of a finite set of size m. Then the nth moment of X is
$$E(X^{n}) = \sum_{k=1}^{m} \left\{{n \atop k}\right\}.$$
Note: The upper bound of summation is m, not n.
In other words, the nth moment of this probability distribution is the number of partitions of a set of size n into no more than m parts.
This is proved in the article on random permutation statistics, although the notation is a bit different.
Rhyming schemes
The Stirling numbers of the second kind can represent the total number of rhyme schemes for a poem of n lines. $\left\{{n \atop k}\right\}$ gives the number of possible rhyming schemes for n lines using k unique rhyming syllables. As an example, for a poem of 3 lines, there is 1 rhyme scheme using just one rhyme (aaa), 3 rhyme schemes using two rhymes (aab, aba, abb), and 1 rhyme scheme using three rhymes (abc).
Variants
r-Stirling numbers of the second kind
The r-Stirling number of the second kind $\left\{{n \atop k}\right\}_r$ counts the number of partitions of a set of n objects into k non-empty disjoint subsets, such that the first r elements are in distinct subsets. These numbers satisfy the recurrence relation
$$\left\{{n \atop k}\right\}_r = k \left\{{n-1 \atop k}\right\}_r + \left\{{n-1 \atop k-1}\right\}_r \quad (n > r),$$
with $\left\{{r \atop k}\right\}_r = 1$ if $k = r$ and $0$ otherwise.
Some combinatorial identities and a connection between these numbers and context-free grammars can be found in
Associated Stirling numbers of the second kind
An r-associated Stirling number of the second kind is the number of ways to partition a set of n objects into k subsets, with each subset containing at least r elements. It is denoted by $S_r(n, k)$ and obeys the recurrence relation
$$S_r(n+1,\, k) = k\, S_r(n,\, k) + \binom{n}{r-1} S_r(n-r+1,\, k-1).$$
The 2-associated numbers appear elsewhere as "Ward numbers" and as the magnitudes of the coefficients of Mahler polynomials.
Reduced Stirling numbers of the second kind
Denote the n objects to partition by the integers 1, 2, ..., n. Define the reduced Stirling numbers of the second kind, denoted $S^{d}(n, k)$, to be the number of ways to partition the integers 1, 2, ..., n into k nonempty subsets such that all elements in each subset have pairwise distance at least d. That is, for any integers i and j in a given subset, it is required that $|i - j| \ge d$. It has been shown that these numbers satisfy
$$S^{d}(n, k) = S(n - d + 1,\, k - d + 1), \quad n \ge k \ge d$$
(hence the name "reduced"). Observe (both by definition and by the reduction formula) that $S^{1}(n, k) = S(n, k)$, the familiar Stirling numbers of the second kind.
See also
Stirling number
Stirling numbers of the first kind
Bell number – the number of partitions of a set with n members
Stirling polynomials
Twelvefold way
References
Calculator for Stirling Numbers of the Second Kind
Set Partitions: Stirling Numbers
Permutations
Factorial and binomial topics
Triangles of numbers
Operations on numbers
19,189,627 | https://en.wikipedia.org/wiki/PLEXIL | PLEXIL (Plan Execution Interchange Language) is an open source technology for automation, created and currently in development by NASA.
Overview
PLEXIL is a programming language for representing plans for automation.
PLEXIL is used in automation technologies such as the NASA K10 rover, Mars Curiosity rover's percussion drill, Deep Space Habitat and Habitat Demonstration Unit, Edison Demonstration of Smallsat Networks, LADEE, Autonomy Operating System (AOS) and procedure automation for the International Space Station.
The PLEXIL Executive is an execution engine that implements PLEXIL and can be interfaced (using a provided software framework) with external systems to be controlled and/or queried. PLEXIL has been used to demonstrate automation technologies targeted at future NASA space missions.
The binaries and documentation are widely available as BSD licensed open source from SourceForge.net.
Nodes
The fundamental programming unit of PLEXIL is the Node. A node is a data structure formed of two primary components: a set of conditions that drive the execution of the node and another set which specifies what the node accomplishes after execution.
A hierarchical composition of nodes is called a plan. A plan is a tree in which the nodes close to the root are high-level nodes, while the leaf nodes represent primitive actions such as variable assignments or the sending of commands to the external system.
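To make the node concept concrete, here is a minimal Python sketch of a condition-driven node; the class, field names, and simplified state handling are illustrative assumptions and are not actual PLEXIL syntax.

class Node:
    # A node pairs gating conditions with what it accomplishes: a leaf
    # action (command, assignment, ...) or child nodes (a List node).
    def __init__(self, name, start=lambda: True, end=lambda: True,
                 invariant=lambda: True, action=None, children=()):
        self.name = name
        self.start, self.end, self.invariant = start, end, invariant
        self.action = action
        self.children = list(children)
        self.state = "Inactive"

    def tick(self):
        # One execution step driven purely by the node's conditions
        # (child scheduling and failure handling omitted in this sketch).
        if self.state == "Inactive":
            self.state = "Waiting"
        elif self.state == "Waiting" and self.start():
            self.state = "Executing"
        elif self.state == "Executing":
            if not self.invariant():
                self.state = "Failing"
            elif self.end():
                if self.action:
                    self.action()
                self.state = "Finished"

# A two-node plan: a parent List-style node with one Command-style leaf.
leaf = Node("take_image", action=lambda: print("CMD: camera.capture()"))
plan = Node("mission", children=[leaf])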
Node Types:
As of September 2008, NASA had implemented seven types of nodes.
List nodes: List nodes are the internal nodes in a plan. These nodes have child nodes that can be of any type.
Command nodes: These nodes issue commands that drive the system.
Assignment nodes: Performs a local operation and assigns a value to a variable.
Function call nodes: These nodes access external functions that perform computations, but do not alter the state of the system.
Update nodes: Provides information to the planning and decision support interface.
Library call nodes: These nodes invoke nodes in an external library.
Empty nodes: Nodes that contain attributes and do not perform any actions.
Node states:
Each node can be in only one state. They are:
Inactive
Waiting
Executing
Finishing
Iteration_Ended
Failing
Finished
Node transitions:
SkipCondition T : The skip condition changes from unknown or false to true.
StartCondition T : The start condition changes from unknown or false to true.
InvariantCondition F/U : Invariant condition changes from true to false or unknown.
EndCondition T : End condition changes to true.
Ancestor_inv_condition F/U : The invariant condition of any ancestor changes to false or unknown.
Ancestor_end_condition T : The end condition of any ancestor changes to true.
All_children_waiting_or_finished T : This is true when all child nodes are either in node state waiting or finished.
Command_abort_complete T : When the abort for a command action is completed.
Function_abort_complete T : The abort of a function call is completed.
Parent_waiting T : The (single) parent of the node transitions to node state waiting.
Parent_executing T : The (single) parent of the node transitions to node state executing.
RepeatCondition T/F : the repeat condition changes from unknown to either true or false.
References
External links
PLEXIL at NASA
PLEXIL Manual
PLEXIL at SourceForge
See also
Spacecraft command language
Cybernetics
Space exploration
Domain-specific programming languages
Robotics software | PLEXIL | Engineering | 687 |
3,527,338 | https://en.wikipedia.org/wiki/Mathomatic | Mathomatic is a free, portable, general-purpose computer algebra system (CAS) that can symbolically solve, simplify, combine and compare algebraic equations, and can perform complex number, modular, and polynomial arithmetic, along with standard arithmetic. It can perform symbolic calculus (derivative, extrema, Taylor series, and polynomial integration and Laplace transforms), numerical integration, and can handle all elementary algebra except logarithms. Trigonometric functions can be entered and manipulated using complex exponentials, with the GNU m4 preprocessor. Not currently implemented are general functions such as f(x), arbitrary-precision and interval arithmetic, as well as matrices.
Features
Mathomatic is capable of solving, differentiating, simplifying, calculating, and visualizing elementary algebra. It also can perform summations, products, and automated display of calculations of any length by plugging sequential or test values into any formula, then approximating and simplifying before display.
Intermediate results (showing the work) may be displayed by previously typing "set debug 1" (see the session example); this works for solving and almost every command in Mathomatic. "set debug 2" shows more details about the work done.
The software does not include a GUI, except for the trademark-authorized versions for smartphones and tablets running iOS or Android. The Mathomatic software, available on the official Mathomatic website, is authorized for use in any other type of software, due to its permissive free software license (GNU LGPL). It is available as a free software library, and as a free console mode application that uses a color command-line interface with pretty-print output that runs in a terminal emulator under any operating system. The console interface is simple and requires learning the basic algebra notation to start. All input and output is line-at-a-time ASCII text. By default, input is standard input and output is standard output. Mathomatic is typically compiled with editline or GNU readline for easier input.
There is no programming capability; the interpreter works like an algebraic calculator. Expressions and equations are entered in standard algebraic infix notation. Operations are performed on them by entering simple English commands.
Because all numeric arithmetic is double precision floating point, and round-off error is not tracked, Mathomatic is not suitable for applications requiring high precision, such as astronomical calculations. It is useful for symbolic-numeric calculations of about 14 decimal digits accuracy, although many results will be exact, if possible.
Mathomatic can be used as a floating point or integer arithmetic code generating tool, simplifying and converting equations into optimized assignment statements in the Python, C, and Java programming languages. The output can be made compatible with most other mathematics programs, except that TeX and MathML format input/output are currently not available. The set of ASCII characters allowed in Mathomatic variable names is configurable, allowing TeX-format variable names.
The Mathomatic source code can be compiled as a symbolic math library with an API, which can be linked to C compatible programs that need to use the Mathomatic symbolic math engine.
Session examples
Solving and code generation example, where the work is shown:
1-> x = (a+1)*(b+2)
#1: x = (a + 1)*(b + 2)
1-> set debug 1
Success.
1-> solve for b
level 1: x = (a + 1)*(b + 2)
Subtracting "(a + 1)*(b + 2)" from both sides of the equation:
level 1: x - ((a + 1)*(b + 2)) = 0
Subtracting "x" from both sides of the equation:
level 1: -1*(a + 1)*(b + 2) = -1*x
Dividing both sides of the equation by "-1":
level 1: (a + 1)*(b + 2) = x
Dividing both sides of the equation by "a + 1":
level 1: b + 2 = x/(a + 1)
Subtracting "2" from both sides of the equation:
level 1: b = (x/(a + 1)) - 2
Solve completed:
level 1: b = (x/(a + 1)) - 2
Solve successful:
x
#1: b = ------- - 2
(a + 1)
1-> code C ; output C programming language code
b = ((x/(a + 1.0)) - 2.0);
1-> variables C ; define the variables for the C compiler
double x;
double a;
double b;
1->
History
Development of Mathomatic was started in the year 1986 by George Gesslein II, as an experiment in computerized mathematics. It was originally written in Microsoft C for MS-DOS. Versions 1 and 2 were published by Dynacomp of Rochester, New York in 1987 and 1988 as a scientific software product for DOS. Afterwards it was released as shareware and then emailware, with a 2D equation graphing program. At the turn of the century, Mathomatic was ported to the GNU C Compiler under Linux and became free software. The graphing program was discontinued; 2D/3D graphing of equations is now accomplished with gnuplot.
The name "Mathomatic" is a portmanteau of "math" and "automatic", and was inspired by the naming and automation of Rog-O-Matic, which was an early experiment in artificial intelligence.
Development has ceased as a result of the death of the author on February 24, 2013.
Available platforms
Mathomatic is available for almost all platforms, including Microsoft Windows using MinGW. It is available for Mac OS X, for iOS, for Android, and for the Nintendo DS under DSLinux and stand-alone. Fedora Linux, Slackware, Debian, Ubuntu, Gentoo Linux, and all of the main BSD Unix distributions include Mathomatic as an automatically installable package. There is a port to JavaScript using Emscripten, allowing Mathomatic to run in a web browser. The ports are all maintained by separate individuals.
Requirements
Building from source requires a C compiler with the standard POSIX C libraries. If Mathomatic is compiled with the GCC C compiler or the Tiny C Compiler for a Unix-like operating system, no changes need to be made to the source code. Mathomatic uses no compiler-specific code, so it will usually compile easily with any C compiler. Use of the Mathomatic Symbolic Math Library allows mixing programming languages and is operating system independent.
Mathomatic can be ported to any computer with at least 1 megabyte of free RAM. The Mathomatic standard distribution memory requirement defaults to a maximum of 400 megabytes, depending on the size of the equation spaces and how many expressions have been entered. Equation spaces are fixed size arrays that are allocated as needed, the size of which is set during compilation or startup. Each algebraic expression or equation entered at the main prompt is stored in an equation space.
Mathomatic is written to do most symbolic manipulations with memory moves, like an assembly language program. This causes Mathomatic to crash when used with the new LLVM backend, which does not appear to handle the standard C library function memmove(3) correctly. To use Mathomatic with a C compiler that uses an LLVM backend, disable all optimizations with "-O0" on the C compiler command line; otherwise the regression tests will loop endlessly. This is most certainly an optimization bug in LLVM. As a hint for those trying to debug this optimization error: Mathomatic fails when LLVM optimizes the simplification of (32^.5) to 4*(2^.5), and the like, going into an endless loop every time.
See also
Comparison of computer algebra systems
Maxima – a more complete CAS with similar functionality, also free
References
Mathomatic on ORMS
External links
Additional documentation in Italian for Ubuntu
Mathematics on a UNIX workstation
Mathomatic at MacUpdate
1987 software
Android (operating system) software
C (programming language) libraries
Command-line software
Computer algebra system software for Linux
Computer algebra system software for macOS
Computer algebra system software for Windows
Cross-platform free software
Embedded Linux
Free computer algebra systems
Free educational software
Free software programmed in C
IOS software
Portable software
Nintendo DS software | Mathomatic | Technology | 1,793 |
29,603,509 | https://en.wikipedia.org/wiki/BIM-1 | BIM-1 (GF 109203X) and the related compounds BIM-2, BIM-3, and BIM-8 are bisindolylmaleimide-based protein kinase C (PKC) inhibitors. These inhibitors also inhibit PDK1, which explains the higher inhibitory potential of LY33331, a bisindolylmaleimide inhibitor, toward PDK1 compared with the other BIM compounds.
Function
BIM-1 is present in the structure of PKCiota (near the turn motif, residue 574), with which it was co-crystallized as an asymmetric pair. PKCiota needs to be phosphorylated for its full activation; this phosphorylation is mediated by 3-phosphoinositide-dependent protein kinase-1 (PDK1), of which PKCs and PKB/AKT are downstream targets. Site-directed mutagenesis has also been used to probe a PKCbeta-specific inhibitor site.
Scope
The bound BIM-1 inhibitor blocks the ATP-binding site: BIM-1 is an ATP-competitive inhibitor, and a related analog, 2-methyl-1H-indol-3-yl-BIM-1, has been crystallized. The crystal structure of a kinase catalytic subunit with MgATP and a 20-amino acid substrate-analog inhibitor peptide is bilobal, providing a more precise description of the binding site, whose shape is influenced by lobe-lobe interactions; a portion of the inhibitor peptide and a lysine residue have been shown to be involved in ATP binding. Such kinase-inhibitor complexes have also been studied computationally (Poisson-Boltzmann calculations) and in cells expressing both forms of the enzyme.
Interactions
The PKCiota/BIM-1 complex has been characterized together with lambda-interacting protein (LIP), a selective activator of lambda/iota PKC that interacts with the zinc finger of lambda/iota PKC. Phosphorylation of a PKC induces a conformation leading to import of the PKC into the nucleus. The entire 587-amino acid coding region of a new PKC isoform, PKC iota, has been cloned; Thr-412, in the activation loop of the kinase domain, is a phosphorylation site. PKCiota/lambda phosphorylates glyceraldehyde-3-phosphate dehydrogenase (GAPDH), which is involved in sorting cargo to the anterograde pathway, among other phosphorylation targets, including itself. The bound BIM-1 inhibitor blocks the ATP-binding site and puts the kinase domain into an intermediate open conformation. Bisindolylmaleimides can form configurationally stable atropisomers that adopt two limiting diastereomeric (syn and anti) conformations; in the crystal, the complex is an asymmetric pair in which the two kinase domains bind two different inhibitor conformers in different orientations, and the hinge-region binding resembles that of staurosporine in Proto-oncogene serine/threonine-protein kinase Pim-1, a biosynthetically related indolocarbazole analog. BIM-1 is also a modulator of the 5-HT2A receptor.
References
Ligands (biochemistry)
Protein kinase inhibitors
Bisindolylmaleimides
Dimethylamino compounds | BIM-1 | Chemistry | 741 |
21,979,677 | https://en.wikipedia.org/wiki/Shorty%20Awards | The Shorty Awards (also known as "The Shortys") are awards for outstanding and innovative work in digital and social media content by brands, advertising agencies, and creators. The awards, which generally focus on short-form content, honor achievements in content creation on Twitter, Facebook, YouTube, Instagram, TikTok, Twitch, and other social networking sites. The Shorty Awards began in 2008 and initially recognized achievements by independent creators on Twitter, with the first formal awards ceremony occurring in February 2009. Since then, the awards, which are now awarded each spring, have shifted their focus to recognize content across numerous platforms.
Entrant work is judged on the merits of excellence in creativity, strategy, and engagement by the Real Time Academy, a group of industry professionals selected by the Shorty Awards on the basis of their professional reputations, industry knowledge, and personal achievements (which may include previous Shorty wins). An additional public voting component, known as Audience Honor Voting, is also used to select Shorty Awards contenders.
Notable Shorty Award winners include Malala Yousafzai, Trevor Noah, Michelle Obama, Conan O’Brien, Lady Gaga, Bill Nye, and Lizzo. Brands and organizations such as Chipotle, Duolingo, Marvel Studios, HBO, Red Bull, Airbnb, Nestle, BMW, UNICEF and the Human Rights Campaign have also been awarded.
The Shorty Awards also produces an annual award program called The Shorty Impact Awards, a competition dedicated to showcasing digital and social media-based projects by brands, agencies, and organizations that seek to make the world a better place.
List of ceremonies
1st Shorty Awards
The awards were created in 2008 by tech entrepreneurs Greg Galant, Adam Varga, and Lee Semel of Sawhorse Media. They invited Twitter account holders to nominate the best Twitter users in general categories such as humor, news, food, and design. Winners were chosen by more than 30,000 Twitter users during the voting period. The founders of Twitter first heard about the awards after the contest had gotten underway and expressed support for it.
The first Shorty Awards ceremony was held on February 11, 2009, at the Galapagos Art Space in Brooklyn, New York. Approximately 300 people attended the event. The event was hosted by CNN anchor Rick Sanchez and featured appearances by prominent Twitter users MC Hammer and Gary Vaynerchuk and a video appearance by Shaquille O'Neal. The awards, in 26 categories, were voted on by Twitter users.
2nd Shorty Awards
Voting for the second Shorty Awards opened in January 2010 in 26 official categories. A Real-Time Photo of the Year category was added to the list of official categories for the first time, recognizing the best photo posted to services such as Twitpic, Yfrog, or Facebook.
The second Shorty Awards competition introduced a panel of judges called the Real-Time Academy of Short Form Arts & Sciences whose members were Craig Newmark, David Pogue, Kurt Andersen, Caterina Fake, Joi Ito, Frank Moss, Alberto Ibargüen, Sreenath Sreenivasan, MC Hammer, Alyssa Milano and Jimmy Wales. After public nominations determined the finalists, the academy decided on the winners.
Winners were announced at a ceremony held in the Times Center in The New York Times building in Manhattan that was also streamed online. The ceremony was hosted by CNN anchor Rick Sanchez, who presented awards in the official categories as well as the newly added Real-Time Photo of the Year and a special humanitarian award.
3rd Shorty Awards
The nomination period for the third annual Shorty Awards opened in January 2011 and ran through February 11, 2011, except for new categories that had extended nomination deadlines. There were 30 official categories and five special categories. In addition to Real-Time Photo of the Year, for the first time the awards accepted nominations for Foursquare Mayor of the Year, Foursquare Location of the Year, Microblog of the Year on Tumblr, and a Connecting People award. The awards also introduced new Shorty Industry Awards to recognize the best uses of social media by brands and agencies. Winners were announced at a ceremony on March 28, 2011, hosted by Aasif Mandvi in the Times Center. Other Shorty Awards presenters were scheduled to include Kiefer Sutherland, Jerry Stiller, Anne Meara, Stephen Wallem, Miss USA Rima Fakih, and Miss Teen USA Kamie Crawford.
4th Shorty Awards
The 4th Annual Shorty Awards featured Ricky Gervais and Tiffani Thiessen. 1.6 million tweeted nominations were made across all the categories to honor the top users on Twitter, Facebook, Tumblr, Foursquare, YouTube and other internet platforms.
5th Shorty Awards
The 5th Annual Shorty Awards ceremony featured Felicia Day, James Urbaniak, Kristian Nairn, Hannibal Buress, Carrie Keagan, Chris Hardwick, David Karp and Coco Rocha. 2.4 million tweeted nominations were made across all the categories to honor the top users on Twitter, Facebook, Tumblr, Foursquare, YouTube and other internet sites.
6th Shorty Awards
The ceremony took place on April 7, 2014, at the New York TimesCenter and was hosted by Comedian Natasha Leggero. The show included appearances by Patton Oswalt, Jamie Oliver, Kristen Bell, Jerry Seinfeld, Moshe Kasher, Julie Klausner, Erin Brady, Guy Kawasaki, Matt Walsh, Retta, Us the Duo, Big Boi, Gilbert Gottfried, Thomas Middleditch, Billie Jean King and Leandra Medine. Winners included Jerry Seinfeld and Will Ferrell.
7th Shorty Awards
The Seventh Annual Shorty Awards was hosted by comedian Rachel Dratch and took place on April 20, 2015, at The Times Center in NYC. The Real-Time Academy, the judging body of the Shortys, tripled in size for the 7th annual Awards and included Alton Brown, Mamrie Hart, Nikki Glaser, OK Go, The Fine Bros, Debbie Sterling, Dan Savage, Deena Varshavskaya and Palmer Luckey. Panic! at the Disco was the musical guest at the ceremony. On-stage presenters included Kevin Jonas, Bill Nye, Bella Thorne, Wyclef Jean, Emily Kinney and Tyler Oakley.
8th Shorty Awards
The Eighth Annual Shorty Awards were held in NYC at the TimesCenter on April 11, 2016. They were hosted by YouTuber, Writer and Comedian Mamrie Hart with musical performances from Nico & Vinz. Winners of the night included Bill Wurtz, DJ Khaled, Misty Copeland, Casey Neistat, Dwayne Johnson, Hannah Hart, Troye Sivan, Baddie Winkle, Kevin Hart, Taraji P. Henson, King Bach, and Zach King.
9th Shorty Awards
The Ninth Annual Shorty Awards were held in NYC at the PlayStation Theater on April 23, 2017. They were hosted by two-time Emmy Award winner Tony Hale with a musical performance by Lizzo. Winners of the night included Bill Nye, Shay Mitchell, Doug the Pug, Gigi Gorgeous, Simone Biles, Mara Wilson, Gaten Matarazzo and Chrissy Teigen.
10th Shorty Awards
The 10th Annual Shorty Awards took place on April 15, 2018, at the PlayStation Theater, New York City. The ceremony was hosted by actress, singer, and songwriter Keke Palmer with a musical performance by Betty Who.
11th Shorty Awards
The 11th Annual Shorty Awards were held on May 5, 2019, in New York City at the PlayStation Theater. The ceremony was hosted by American actress and comedian Kathy Griffin, with a musical performance by Tank and the Bangas.
12th Shorty Awards
The 12th Annual Shorty Awards were held on May 3, 2020. Due to the COVID-19 pandemic, the ceremony took place online for the first time, with presenters and award winners filming from their own homes. The ceremony was hosted by actor J.B. Smoove and featured a remixed performance of Trap Queen by Fetty Wap. Award winners included Jack Stauber, Supercar Blondie, Rose and Rosie, and Greta Thunberg.
13th Shorty Awards
The 13th Annual Shorty Awards took place from April 26 to May 14, 2021. The ceremony was hosted on different social media platforms, such as Instagram and Clubhouse, to create a more tailored experience. Winners were announced from May 11 to May 14, with 10 winners being revealed each hour from 1 to 4 p.m. EST on the Shorty Awards Instagram account.
14th Shorty Awards
The 14th Annual Shorty Awards were held virtually on May 15, 2022, honoring the best in social media and digital content. Hosted by Jay Shetty, the event recognized influencers, brands, and organizations across various categories, celebrating excellence in digital storytelling and innovative online campaigns. Notable winners included Tabitha Brown for her food content and the D'Amelio Family for their contributions to family and parenting content. The event highlighted the role of digital media in connecting and inspiring audiences during challenging times.
15th Shorty Awards
The 15th Annual Shorty Awards celebrated the best in social media and digital content on May 24, 2023, at Tribeca 360° in New York City. Hosted by Jay Pharoah, the event honored creators, brands, and organizations across 148 categories, recognizing excellence in digital storytelling, innovative content, and impactful social media campaigns. Notable winners included Jay Shetty, Smile Train, and Paramount. The event also introduced the Elevate Creatives Fund, a $100,000 initiative aimed at supporting digital creators in building sustainable businesses.
See also
The Streamer Awards
Streamy Awards
Webby Awards
References
External links
Official Shorty Awards Site
The Real-Time Academy
Web awards
Blog awards
Twitter
Social media
Awards established in 2008
Advertising awards
Podcasting awards | Shorty Awards | Technology | 2,025 |
20,759,609 | https://en.wikipedia.org/wiki/Creative%20city | A creative city is a city where creativity is a strategic factor in urban development. A creative city provides places, experiences, attractions, and opportunities to foster creativity among its citizens.
Early developments
Partners initially focused on design and culture as resources for livability. In the early 1980s, partners launched a program to document the economic value of design and cultural amenities. The Economics of Amenity program explored how cultural amenities and the quality of life in a community are linked to economic development and job creation. This work was the catalyst for a significant array of economic impact studies of the arts across the globe.
Core concepts used by partners were cultural planning and cultural resources, which they saw as the planning of urban resources including quality design, architecture, parks, the natural environment, animation and especially arts activity and tourism.
From the late 1970s onwards, UNESCO and the Council of Europe began to investigate the cultural industries. From the perspective of cities, it was Nick Garnham who, when seconded to the Greater London Council in 1983/4, set up a cultural industries unit to put the cultural industries on the agenda. Drawing on, re-reading, and adapting the original work by Theodor Adorno and Walter Benjamin in the 1930s, which had seen the culture industry as a kind of monster, and influenced also by Hans Magnus Enzensberger, he saw the cultural industries as a potentially liberating force. This investigation into the cultural industries of the time found that a city or nation that emphasized the development of its cultural industries added value, exports, and new jobs, supported competitiveness, and continued to expand its growth in the global economy.
The first mention of the creative city as a concept was in a seminar organized by the Australia Council, the City of Melbourne, the Ministry of Planning and Environment (Victoria) and the Ministry for the Arts (Victoria) in September 1988. Its focus was to explore how arts and cultural concerns could be better integrated into the planning process for city development. A keynote speech by David Yencken, former Secretary for Planning and Environment for Victoria, spelled out a broader agenda stating that whilst efficiency of cities is important there is much more needed: "[The city] should be emotionally satisfying and stimulate creativity amongst its citizens".
Another important early player was Comedia, founded in 1978 by Charles Landry. Its 1991 study, Glasgow: The Creative City and its Cultural Economy, was followed in 1994 by a study on urban creativity called The Creative City in Britain and Germany.
Anatomy
As well as being the centre of a creative economy and being home to a sizeable creative class, creative cities have also been theorized to embody a particular structure. This structure comprises three categories of people, spaces, organizations, and institutions: the upper-ground, the underground, and the middle-ground.
The upper-ground consists of firms and businesses engaged in creative industries. These are the organizations that create the economic growth one hopes to find in a creative city, by taking the creative product of the city's residents and converting it into a good or service that can be sold.
The underground consists of the individual creative people—for example, artists, writers, or innovators—who produce this creative product.
The middle-ground bridges the gap between the polished upper-ground and the raw energy of the underground. It can be vibrant neighborhoods, buzzing galleries, or collaborative art collectives. In these spaces, underground creativity takes form, disparate ideas coalesce into tangible products, and connections spark between individuals across the spectrum. This fertile middle-ground fosters cross-pollination of ideas and talent, fueling innovation and propelling the creative ecosystem forward.
To unlock the economic power of creative industries, cities must nurture all levels of the ecosystem, not just the polished upper-ground. Urban planning initiatives can create vibrant middle-ground spaces, while targeted policies can attract and empower the often-overlooked "creative class" of the underground. This holistic approach fosters innovation, diversity, and ultimately, economic growth.
Richard Florida works on quantifying various measures of the "creative potential" of a city, and then ranks cities based on his "creativity index". This, in turn, encourages cities to compete with one another for higher rankings and the attendant economic benefits that supposedly come with them. In order to do this, city governments will hire consulting firms to advise them on how to boost their creative potential, thus creating an industry and a class of expertise centred around creative cities.
The emergence of the creative economy and creative class
There have been critiques of the creative city idea claiming it is only targeted at hipsters, property developers and those who gentrify areas or seek to glamorize them thus destroying local distinctiveness. This has happened in places, but it is not inevitable. The creative challenge is to find appropriate regulations and incentives to obviate the negative aspects. A valid concern has been the conscious use of artists to be the vanguard of gentrification, to lift property values and to make areas safe before others move in, otherwise referred to as artwashing.
Critiques of the creative city and of the creative and cultural industries highlight them as a neoliberal tool to extract value from a city's culture and creativity. This approach treats the cultural resources of a city as raw materials that can be used as assets in the 21st century, just as coal, steel, and gold were assets of the city in the 20th century.
Florida's work has been criticized by scholars such as Jamie Peck as, "work[ing] quietly with the grain of extant 'neoliberal' development agendas, framed around interurban competition, gentrification, middle-class consumption and place-marketing". In other words, Florida's prescriptions in favor of fostering a creative class are, rather than being revolutionary, simply a way of bolstering the conventional economic model of the city. The idea of the creative class serves to create a cultural hierarchy, and as such reproduce inequalities; indeed, even Florida himself has even acknowledged that the areas he himself touts as hotspots of the creative class are at the same time home to shocking disparities in economic status among their residents. In order to explain this, he points to the inflation of housing prices that an influx of creatives can bring to an area, as well as to the creative class' reliance on service industries that typically pay their employees low wages.
Critics argue that the creative city idea has now become a catch-all phrase in danger of losing its meaning and in danger of hollowing out by general overuse of the word 'creative' as applied to people, activities, organizations, urban neighbourhoods or cities that objectively are not especially creative. Cities still tend to restrict its meaning to the arts and cultural activities within the creative economy professions, calling any cultural plan a creative city plan, when such activities are only one aspect of a community's creativity. There is a tendency for cities to adopt the term without thinking through its real organizational consequences and the need to change their mindset. The creativity implied in the term, the creative city, is about lateral and integrative thinking in all aspects of city planning and urban development, placing people, not infrastructure, at the centre of planning processes.
Landry's original Creative City vision, focused on holistic urban transformation, has yielded to a Florida-centric model prioritizing economic innovation and its skilled workforce. This shift has reduced the Creative City to a mere business tool, a far cry from its initial ambition to reshape urban policy. Now, the "thesis" is palatable to existing power structures, neatly fitting into the global economic order. Yet, the debate simmers on. While some cling to the holistic vision of city-wide creativity, others equate the Creative City solely with the economic engine of the creative class.
Global impact
In 2004, UNESCO established the Creative Cities Network (UCCN). UCCN was established to share best practices and partnerships that can help sustain and improve a city's creativity. All cities recognized as a member of the UCCN agree that creativity acts as a strategic factor of sustainable development.
The UCCN have seven creative fields: crafts and folk art, design, film, gastronomy, literature, media arts, and music.
See also
Creative industries
Smart city
References
Urban planning
Urban studies and planning terminology | Creative city | Engineering | 1,707 |
4,248,332 | https://en.wikipedia.org/wiki/Biopharmaceutics%20Classification%20System | The Biopharmaceutics Classification System (BCS) is a system to differentiate drugs on the basis of their solubility and permeability.
This system restricts the prediction to the two parameters solubility and intestinal permeability. The solubility classification is based on a United States Pharmacopoeia (USP) apparatus. The intestinal permeability classification is based on a comparison to intravenous injection. All these factors are highly important because 85% of the most-sold drugs in the United States and Europe are orally administered.
Classes
According to the Biopharmaceutics Classification System (BCS) drug substances are classified to four classes upon their solubility and permeability:
Class I – high permeability, high solubility
Example: metoprolol, paracetamol
Those compounds are well absorbed and their absorption rate is usually higher than excretion.
Class II – high permeability, low solubility
Example: glibenclamide, bicalutamide, ezetimibe, aceclofenac
The bioavailability of those products is limited by their solvation rate. A correlation between the in vivo bioavailability and the in vitro solvation can be found.
Class III – low permeability, high solubility
Example: cimetidine
The absorption is limited by the permeation rate but the drug is solvated very fast. If the formulation does not change the permeability or gastro-intestinal duration time, then class I criteria can be applied.
Class IV – low permeability, low solubility
Example: bifonazole
Those compounds have a poor bioavailability. Usually they are not well absorbed over the intestinal mucosa and a high variability is expected.
Definitions
The drugs are classified in BCS on the basis of solubility and permeability.
Solubility class boundaries are based on the highest dose strength of an immediate release product. A drug is considered highly soluble when the highest dose strength is soluble in 250 ml or less of aqueous media over the pH range of 1 to 6.8. The volume estimate of 250 ml is derived from typical bioequivalence study protocols that prescribe administration of a drug product to fasting human volunteers with a glass of water.
Permeability class boundaries are based indirectly on the extent of absorption of a drug substance in humans and directly on the measurement of rates of mass transfer across human intestinal membrane. Alternatively non-human systems capable of predicting drug absorption in humans can be used (such as in-vitro culture methods). A drug substance is considered highly permeable when the extent of absorption in humans is determined to be 85% or more of the administered dose based on a mass-balance determination or in comparison to an intravenous dose.
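The two boundaries lend themselves to a small decision routine. The following Python sketch uses the thresholds quoted above; the function name and its inputs are illustrative assumptions, not a regulatory tool.

def bcs_class(dose_mg, min_solubility_mg_per_ml, fraction_absorbed):
    # dose_mg: highest dose strength of an immediate-release product
    # min_solubility_mg_per_ml: worst-case solubility over pH 1 to 6.8
    # fraction_absorbed: extent of absorption in humans (0 to 1)
    highly_soluble = dose_mg / min_solubility_mg_per_ml <= 250  # fits in 250 ml
    highly_permeable = fraction_absorbed >= 0.85
    if highly_permeable and highly_soluble:
        return "Class I"
    if highly_permeable:
        return "Class II"
    if highly_soluble:
        return "Class III"
    return "Class IV"

# A 100 mg dose needing 2 liters of water to dissolve, 90% absorbed:
print(bcs_class(100, 0.05, 0.90))  # -> Class II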
See also
ADME
Partition coefficient
Bioavailability
Drug metabolism
First pass effect
Polar surface area
IVIVC
References
Further reading
External links
BCS guidance of the U.S. Food and Drug Administration
Pharmacological classification systems
Pharmacy in the United States | Biopharmaceutics Classification System | Chemistry | 642 |
11,237,695 | https://en.wikipedia.org/wiki/Pentominium | The Pentominium was a planned 122-storey, supertall skyscraper located in Dubai, United Arab Emirates. Construction on the tower was halted in August 2011. It was designed by Andrew Bromberg of architects Aedas and funded by Trident International Holdings. The AED 1.46 billion (US$400 million) construction contract was awarded to Arabian Construction Company (ACC).
Construction started on 26 July 2008. Before construction stopped, the building was expected to be completed in 2013. By May 2011, 22 floors had been completed. However, in August 2011, construction stopped after Trident International Holdings fell behind on payments for a US$20.4 million loan following the global financial crisis.
Had the project been completed as scheduled, the Pentominium would have been the second tallest building in Dubai after Burj Khalifa and the tallest residential building in the world, surpassing the Central Park Tower in New York City.
Six Senses Residences Dubai Marina
After being abandoned for 12 years, the long-stalled Pentominium Tower was acquired by Select Group in December 2023. The Pentominium was completely redesigned from the ground up by Woods Bagot, incorporating the already-built structure into the new design. In March 2024, the tower was rebranded and renamed as Six Senses Residences Dubai Marina.
Upon its completion, Six Senses Residences Dubai Marina will have 122 stories and will be 517 m / 1,696 ft tall, making it the tallest building in the Dubai Marina and potentially the tallest residential building in the world. Construction restarted in 2024. It is expected to be completed by 2028.
See also
List of tallest buildings in Dubai
List of tallest residential buildings in Dubai
List of buildings with 100 floors or more
References
External links
Construction Week
Residential skyscrapers in Dubai
Proposed buildings and structures in Dubai
Buildings and structures under construction in Dubai
Expressionist architecture
Futurist architecture
Architecture in Dubai
High-tech architecture
Andrew Bromberg buildings
Aedas buildings
Postmodern architecture | Pentominium | Engineering | 390 |
23,793,064 | https://en.wikipedia.org/wiki/CrystalGraphics | CrystalGraphics is the developer of the PowerPoint sharing website PowerShow.com, as well as templates and plug-ins for PowerPoint and Office products. Some of CrystalGraphics' products include PowerPoint templates, 2D and 3D special-effects, video backgrounds, charting, animations and other add-ons. The company was founded by Dennis Ricks in 1986 and is based in Santa Clara, California. The company was the first company to introduce 3D transitions for PowerPoint.
TOPAS
AT&T/Crystal TOPAS was a pioneering 3D computer graphics software package for x86-based personal computers. It was a fully integrated 3D modeling, rendering, and animation package. It included texture mapping tools and tools to easily integrate 3D artwork with photographic backgrounds and digital images.
References
External links
Presentation
1986 establishments in California
American companies established in 1986
Companies based in Santa Clara, California | CrystalGraphics | Technology | 180 |
41,824,003 | https://en.wikipedia.org/wiki/MIMO-OFDM | Multiple-input, multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) is the dominant air interface for 4G and 5G broadband wireless communications. It combines multiple-input, multiple-output (MIMO) technology, which multiplies capacity by transmitting different signals over multiple antennas, and orthogonal frequency-division multiplexing (OFDM), which divides a radio channel into a large number of closely spaced subchannels to provide more reliable communications at high speeds. Research conducted during the mid-1990s showed that while MIMO can be used with other popular air interfaces such as time-division multiple access (TDMA) and code-division multiple access (CDMA), the combination of MIMO and OFDM is most practical at higher data rates.
MIMO-OFDM is the foundation for most advanced wireless local area network (wireless LAN) and mobile broadband network standards because it achieves the greatest spectral efficiency and, therefore, delivers the highest capacity and data throughput. Greg Raleigh invented MIMO in 1996 when he showed that different data streams could be transmitted at the same time on the same frequency by taking advantage of the fact that signals transmitted through space bounce off objects (such as the ground) and take multiple paths to the receiver. That is, by using multiple antennas and precoding the data, different data streams could be sent over different paths. Raleigh suggested and later proved that the processing required by MIMO at higher speeds would be most manageable using OFDM modulation, because OFDM converts a high-speed data channel into a number of parallel lower-speed channels.
Operation
In modern usage, the term "MIMO" indicates more than just the presence of multiple transmit antennas (multiple input) and multiple receive antennas (multiple output). While multiple transmit antennas can be used for beamforming, and multiple receive antennas can be used for diversity, the word "MIMO" refers to the simultaneous transmission of multiple signals (spatial multiplexing) to multiply spectral efficiency (capacity).
Traditionally, radio engineers treated natural multipath propagation as an impairment to be mitigated. MIMO is the first radio technology that treats multipath propagation as a phenomenon to be exploited. MIMO multiplies the capacity of a radio link by transmitting multiple signals over multiple, co-located antennas. This is accomplished without the need for additional power or bandwidth. Space–time codes are employed to ensure that the signals transmitted over the different antennas are orthogonal to each other, making it easier for the receiver to distinguish one from another. Even when there is line of sight access between two stations, dual antenna polarization may be used to ensure that there is more than one robust path.
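As an illustration of spatial multiplexing, the following NumPy toy sends two symbols at once over a known 2x2 channel and separates them by inverting the channel matrix (zero-forcing detection, one of several possible receivers); noise, coding, and channel estimation are omitted, and all names are ours.

import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # 2x2 multipath channel
s = np.array([1 + 1j, -1 - 1j])   # two QPSK symbols, same time and frequency
y = H @ s                          # observations at the two receive antennas
s_hat = np.linalg.inv(H) @ y       # zero-forcing detection with known H
assert np.allclose(s_hat, s)       # both streams recovered (noiseless case)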
OFDM enables reliable broadband communications by distributing user data across a number of closely spaced, narrowband subchannels. This arrangement makes it possible to eliminate the biggest obstacle to reliable broadband communications, intersymbol interference (ISI). ISI occurs when the overlap between consecutive symbols is large compared to the symbols' duration. Normally, high data rates require shorter duration symbols, increasing the risk of ISI. By dividing a high-rate data stream into numerous low-rate data streams, OFDM enables longer duration symbols. A cyclic prefix (CP) may be inserted to create a (time) guard interval that prevents ISI entirely. If the guard interval is longer than the delay spread (the difference in delays experienced by symbols transmitted over the channel), then there will be no overlap between adjacent symbols and consequently no intersymbol interference. Though the CP slightly reduces spectral capacity by consuming a small percentage of the available bandwidth, the elimination of ISI makes it an exceedingly worthwhile tradeoff.
A key advantage of OFDM is that fast Fourier transforms (FFTs) may be used to simplify implementation. Fourier transforms convert signals back and forth between the time domain and frequency domain. Consequently, Fourier transforms can exploit the fact that any complex waveform may be decomposed into a series of simple sinusoids. In signal processing applications, discrete Fourier transforms (DFTs) are used to operate on real-time signal samples. DFTs may be applied to composite OFDM signals, avoiding the need for the banks of oscillators and demodulators associated with individual subcarriers. Fast Fourier transforms are numerical algorithms used by computers to perform DFT calculations.
FFTs also enable OFDM to make efficient use of bandwidth. The subchannels must be spaced apart in frequency just enough to ensure that their time-domain waveforms are orthogonal to each other. In practice, this means that the subchannels are allowed to partially overlap in frequency.
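The IFFT/FFT pair and the cyclic prefix can be demonstrated in a few lines of NumPy; the subcarrier count, prefix length, and channel taps below are arbitrary illustrative values.

import numpy as np

N, cp = 64, 16                     # subcarriers and cyclic-prefix length
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, (2, N))
data = (2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)   # one QPSK symbol per subcarrier

tx = np.fft.ifft(data)                   # parallel subcarriers -> one time-domain OFDM symbol
tx_cp = np.concatenate([tx[-cp:], tx])   # cyclic prefix guards against ISI

h = np.array([1.0, 0.4, 0.2])            # multipath channel shorter than the CP
rx = np.convolve(tx_cp, h)[cp:cp + N]    # after dropping the CP, the channel acts circularly
eq = np.fft.fft(rx) / np.fft.fft(h, N)   # one complex multiply (equalizer tap) per subcarrier
assert np.allclose(eq, data)             # data recovered without intersymbol interference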
MIMO-OFDM is a particularly powerful combination because MIMO does not attempt to mitigate multipath propagation and OFDM avoids the need for signal equalization. MIMO-OFDM can achieve very high spectral efficiency even when the transmitter does not possess channel state information (CSI). When the transmitter does possess CSI (which can be obtained through the use of training sequences), it is possible to approach the theoretical channel capacity. CSI may be used, for example, to allocate different size signal constellations to the individual subcarriers, making optimal use of the communications channel at any given moment of time.
More recent MIMO-OFDM developments include multi-user MIMO (MU-MIMO), higher order MIMO implementations (greater number of spatial streams), and research concerning massive MIMO and cooperative MIMO (CO-MIMO) for inclusion in coming 5G standards.
MU-MIMO is part of the IEEE 802.11ac standard, the first Wi-Fi standard to offer speeds in the gigabit per second range. MU-MIMO enables an access point (AP) to transmit to up to four client devices simultaneously. This eliminates contention delays, but requires frequent channel measurements to properly direct the signals. Each user may employ up to four of the available eight spatial streams. For example, an AP with eight antennas can talk to two client devices with four antennas, providing four spatial streams to each. Alternatively, the same AP can talk to four client devices with two antennas each, providing two spatial streams to each.
Multi-user MIMO beamforming even benefits single spatial stream devices. Prior to MU-MIMO beamforming, an access point communicating with multiple client devices could only transmit to one at a time. With MU-MIMO beamforming, the access point can transmit to up to four single stream devices at the same time on the same channel.
The 802.11ac standard also supports speeds up to 6.93 Gbit/s using eight spatial streams in single-user mode. The maximum data rate assumes use of the optional 160 MHz channel in the 5 GHz band and 256 QAM (quadrature amplitude modulation). Chipsets supporting six spatial streams have been introduced and chipsets supporting eight spatial streams are under development.
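That 6.93 Gbit/s figure can be checked from the 802.11ac PHY parameters (468 data subcarriers in a 160 MHz channel, 8 bits per subcarrier for 256-QAM, rate-5/6 coding, and a 3.6 µs symbol including the short guard interval):

```python
data_subcarriers = 468      # 160 MHz VHT channel
bits_per_symbol  = 8        # 256-QAM carries 8 bits per subcarrier
coding_rate      = 5 / 6    # highest 802.11ac code rate
symbol_time      = 3.6e-6   # 3.2 us symbol + 0.4 us short guard interval
streams          = 8

rate = data_subcarriers * bits_per_symbol * coding_rate / symbol_time * streams
print(f"{rate / 1e9:.2f} Gbit/s")   # -> 6.93 Gbit/s
```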
Massive MIMO consists of a large number of base station antennas operating in a MU-MIMO environment. While LTE networks already support handsets using two spatial streams, and handset antenna designs capable of supporting four spatial streams have been tested, massive MIMO can deliver significant capacity gains even to single spatial stream handsets. Again, MU-MIMO beamforming is used to enable the base station to transmit independent data streams to multiple handsets on the same channel at the same time. However, one question still to be answered by research is: When is it best to add antennas to the base station and when is it best to add small cells?
Another focus of research for 5G wireless is CO-MIMO. In CO-MIMO, clusters of base stations work together to boost performance. This can be done using macro diversity for improved reception of signals from handsets or multi-cell multiplexing to achieve higher downlink data rates. However, CO-MIMO requires high-speed communication between the cooperating base stations.
History
Gregory Raleigh was the first to advocate the use of MIMO in combination with OFDM. In a theoretical paper, he proved that with the proper type of MIMO system—multiple, co-located antennas transmitting and receiving multiple information streams using multidimensional coding and decoding—multipath propagation could be exploited to multiply the capacity of a wireless link. Up to that time, radio engineers tried to make real-world channels behave like ideal channels by mitigating the effects of multipath propagation. However, mitigation strategies have never been fully successful. In order to exploit multipath propagation, it was necessary to identify modulation and coding techniques that perform robustly over time-varying, dispersive, multipath channels. Raleigh published additional research on MIMO-OFDM under time-varying conditions, MIMO-OFDM channel estimation, MIMO-OFDM synchronization techniques, and the performance of the first experimental MIMO-OFDM system.
Raleigh solidified the case for OFDM by analyzing the performance of MIMO with three leading modulation techniques in his PhD dissertation: quadrature amplitude modulation (QAM), direct sequence spread spectrum (DSSS), and discrete multi-tone (DMT). QAM is representative of narrowband schemes such as TDMA that use equalization to combat ISI. DSSS uses rake receivers to compensate for multipath and is used by CDMA systems. DMT uses interleaving and coding to eliminate ISI and is representative of OFDM systems. The analysis was performed by deriving the MIMO channel matrix models for the three modulation schemes, quantifying the computational complexity and assessing the channel estimation and synchronization challenges for each. The models showed that for a MIMO system using QAM with an equalizer or DSSS with a rake receiver, computational complexity grows quadratically as data rate is increased. In contrast, when MIMO is used with DMT, computational complexity grows log-linearly (i.e., n log n) as data rate is increased.
Raleigh subsequently founded Clarity Wireless in 1996 and Airgo Networks in 2001 to commercialize the technology. Clarity developed specifications in the Broadband Wireless Internet Forum (BWIF) that led to the IEEE 802.16 (commercialized as WiMAX) and LTE standards, both of which support MIMO. Airgo designed and shipped the first MIMO-OFDM chipsets for what became the IEEE 802.11n standard. MIMO-OFDM is also used in the 802.11ac standard and is expected to play a major role in 802.11ax and fifth generation (5G) mobile phone systems.
Several early papers on multi-user MIMO were authored by Ross Murch et al. at Hong Kong University of Science and Technology. MU-MIMO was included in the 802.11ac standard (developed starting in 2011 and approved in 2014). MU-MIMO capability appears for the first time in what have become known as "Wave 2" products. Qualcomm announced chipsets supporting MU-MIMO in April 2014.
Broadcom introduced the first 802.11ac chipsets supporting six spatial streams for data rates up to 3.2 Gbit/s in April 2014. Quantenna says it is developing chipsets to support eight spatial streams for data rates up to 10 Gbit/s.
Massive MIMO, Cooperative MIMO (CO-MIMO), and HetNets (heterogeneous networks) are currently the focus of research concerning 5G wireless. The development of 5G standards is expected to begin in 2016. Prominent researchers to date include Jakob Hoydis (of Alcatel-Lucent), Robert W. Heath (at the University of Texas at Austin), Helmut Bölcskei (at ETH Zurich), and David Gesbert (at EURECOM).
Trials of 5G technology have been conducted by Samsung. Japanese operator NTT DoCoMo plans to trial 5G technology in collaboration with Alcatel-Lucent, Ericsson, Fujitsu, NEC, Nokia, and Samsung.
References
IEEE 802
Information theory
Mobile telecommunications standards
Radio resource management | MIMO-OFDM | Mathematics,Technology,Engineering | 2,470 |
2,686,488 | https://en.wikipedia.org/wiki/Autofluorescence | Autofluorescence is the natural fluorescence of biological structures such as mitochondria and lysosomes, in contrast to fluorescence originating from artificially added fluorescent markers (fluorophores).
The most commonly observed autofluorescing molecules are NADPH and flavins; the extracellular matrix can also contribute to autofluorescence because of the intrinsic properties of collagen and elastin.
Generally, proteins containing an increased amount of the amino acids tryptophan, tyrosine, and phenylalanine show some degree of autofluorescence.
Autofluorescence also occurs in non-biological materials found in many papers and textiles. Autofluorescence from U.S. paper money has been demonstrated as a means for discerning counterfeit currency from authentic currency.
Microscopy
Autofluorescence can be problematic in fluorescence microscopy. Light-emitting stains (such as fluorescently labelled antibodies) are applied to samples to enable visualisation of specific structures.
Autofluorescence interferes with detection of specific fluorescent signals, especially when the signals of interest are very dim — it causes structures other than those of interest to become visible.
In some microscopes (mainly confocal microscopes), it is possible to make use of the different lifetimes of the excited states of the added fluorescent markers and the endogenous molecules to exclude most of the autofluorescence.
In a few cases, autofluorescence may actually illuminate the structures of interest, or serve as a useful diagnostic indicator.
For example, cellular autofluorescence can be used as an indicator of cytotoxicity without the need to add fluorescent markers.
The autofluorescence of human skin can be used to measure the level of advanced glycation end-products (AGEs), which are present in higher quantities during several human diseases.
Optical imaging systems that utilize multispectral imaging can reduce signal degradation caused by autofluorescence while adding enhanced multiplexing capabilities.
The super-resolution microscopy method SPDM has revealed autofluorescent cellular objects that are not detectable under conventional fluorescence imaging conditions.
Autofluorescent molecules
{| class="wikitable sortable" style="text-align:center;"
|- style="vertical-align:bottom;"
! Molecule
! Excitation(nm)
! Fluorescence(nm) Peak
!
!
!
!
|-
| NAD(P)H
| 340
| 450
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Chlorophyll
| 465–665
| 673–726
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
|Collagen
| 270–370
| 305–450
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Retinol
|
| 500
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Riboflavin
|
| 550
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Cholecalciferol
|
| 380–460
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Folic acid
|
| 450
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Pyridoxine
|
| 400
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Tyrosine
| 270
| 305
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Dityrosine
| 325
| 400
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Excimer-likeaggregate(collagen)
| 270
| 360
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Glycation adduct
| 370
| 450
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Indolamine
|
|
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Lipofuscin
| 410–470
| 500–695
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Lignin(a polyphenol)
| 335–488
| 455–535
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Tryptophan
| 280
| 300–350
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
|Flavin
| 380–490
| 520–560
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
| Melanin
| 340–400
| 360–560
|style="text-align:center;"|
|style="text-align:center;"|
|style="text-align:center;"|
|
|}
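For illustration, the excitation ranges in the table can be encoded to estimate which endogenous fluorophores a given laser line may excite (a hypothetical helper; the values come from the table above, and molecules without a listed excitation range are omitted):

```python
# Excitation ranges (nm) of common autofluorescent molecules, from the table.
EXCITATION_NM = {
    "NAD(P)H": (340, 340),
    "Chlorophyll": (465, 665),
    "Collagen": (270, 370),
    "Tyrosine": (270, 270),
    "Dityrosine": (325, 325),
    "Excimer-like aggregate (collagen)": (270, 270),
    "Glycation adduct": (370, 370),
    "Lipofuscin": (410, 470),
    "Lignin": (335, 488),
    "Tryptophan": (280, 280),
    "Flavin": (380, 490),
    "Melanin": (340, 400),
}

def excited_by(laser_nm, tolerance=10):
    """List molecules whose excitation range lies within `tolerance` nm
    of the laser line -- a rough guide to expected autofluorescence."""
    return [name for name, (lo, hi) in EXCITATION_NM.items()
            if lo - tolerance <= laser_nm <= hi + tolerance]

print(excited_by(488))   # common argon-laser line: chlorophyll, lignin, flavin
```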
See also
Autoluminescence
Phosphorescence
Fluorescence in the life sciences
References
Microscopy | Autofluorescence | Chemistry | 1,436 |
23,934,029 | https://en.wikipedia.org/wiki/Skew%20coordinates | A system of skew coordinates is a curvilinear coordinate system where the coordinate surfaces are not orthogonal, in contrast to orthogonal coordinates.
Skew coordinates tend to be more complicated to work with compared to orthogonal coordinates since the metric tensor will have nonzero off-diagonal components, preventing many simplifications in formulas for tensor algebra and tensor calculus. The nonzero off-diagonal components of the metric tensor are a direct result of the non-orthogonality of the basis vectors of the coordinates, since by definition:

$$g_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j$$

where $g_{ij}$ is the metric tensor and $\mathbf{e}_i$ the (covariant) basis vectors.
These coordinate systems can be useful if the geometry of a problem fits well into a skewed system. For example, solving Laplace's equation in a parallelogram will be easiest when done in appropriately skewed coordinates.
Cartesian coordinates with one skewed axis
The simplest 3D case of a skew coordinate system is a Cartesian one where one of the axes (say the x axis) has been bent by some angle $\phi$, staying orthogonal to one of the remaining two axes. For this example, the x axis of a Cartesian coordinate system has been bent toward the z axis by $\phi$, remaining orthogonal to the y axis.
Algebra and useful quantities
Let $\mathbf{e}_1$, $\mathbf{e}_2$, and $\mathbf{e}_3$ respectively be unit vectors along the $x$, $y$, and $z$ axes. These represent the covariant basis; computing their dot products gives the metric tensor:

$$g_{ij} = \begin{pmatrix} 1 & 0 & \sin\phi \\ 0 & 1 & 0 \\ \sin\phi & 0 & 1 \end{pmatrix}, \qquad g^{ij} = \frac{1}{\cos^2\phi}\begin{pmatrix} 1 & 0 & -\sin\phi \\ 0 & \cos^2\phi & 0 \\ -\sin\phi & 0 & 1 \end{pmatrix}$$

where

$$g \equiv \det(g_{ij}) = \cos^2\phi$$

and $g^{ij}$ is the inverse (contravariant) metric, which are quantities that will be useful later on.
The contravariant basis is given by

$$\mathbf{e}^1 = \frac{\mathbf{e}_1 - \sin\phi\,\mathbf{e}_3}{\cos^2\phi}, \qquad \mathbf{e}^2 = \mathbf{e}_2, \qquad \mathbf{e}^3 = \frac{\mathbf{e}_3 - \sin\phi\,\mathbf{e}_1}{\cos^2\phi}$$

The contravariant basis isn't a very convenient one to use, however it shows up in definitions so must be considered. We'll favor writing quantities with respect to the covariant basis.
Since the basis vectors are all constant, vector addition and subtraction will simply be familiar component-wise adding and subtraction. Now, let

$$\mathbf{a} = \sum_i a^i \mathbf{e}_i \qquad \text{and} \qquad \mathbf{b} = \sum_i b^i \mathbf{e}_i$$

where the sums indicate summation over all values of the index (in this case, i = 1, 2, 3). The contravariant and covariant components of these vectors may be related by

$$a_i = \sum_j a^j g_{ij}$$

so that, explicitly,

$$a_1 = a^1 + a^3 \sin\phi, \qquad a_2 = a^2, \qquad a_3 = a^3 + a^1 \sin\phi$$
The dot product in terms of contravariant components is then

$$\mathbf{a}\cdot\mathbf{b} = a^1 b^1 + a^2 b^2 + a^3 b^3 + \sin\phi\,(a^1 b^3 + a^3 b^1)$$

and in terms of covariant components

$$\mathbf{a}\cdot\mathbf{b} = \frac{a_1 b_1 + a_3 b_3 - \sin\phi\,(a_1 b_3 + a_3 b_1)}{\cos^2\phi} + a_2 b_2$$
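A short numerical check of these formulas, using an arbitrary skew angle and arbitrary component values:

```python
import numpy as np

phi = 0.4                                   # arbitrary skew angle
s, c = np.sin(phi), np.cos(phi)

# Covariant basis: x axis bent toward z by phi; y and z axes as usual
e1, e2, e3 = np.array([c, 0, s]), np.array([0, 1, 0]), np.array([0, 0, 1])
E = np.stack([e1, e2, e3])                  # rows are the basis vectors
g = E @ E.T                                 # metric tensor g_ij = e_i . e_j

# Two vectors given by (arbitrary) contravariant components
a_up = np.array([1.0, 2.0, 3.0])
b_up = np.array([-1.0, 0.5, 2.0])

dot_cartesian = (E.T @ a_up) @ (E.T @ b_up)     # ordinary dot product
dot_metric = a_up @ g @ b_up                    # a^i g_ij b^j
dot_formula = (a_up[0]*b_up[0] + a_up[1]*b_up[1] + a_up[2]*b_up[2]
               + s*(a_up[0]*b_up[2] + a_up[2]*b_up[0]))

assert np.isclose(dot_cartesian, dot_metric)
assert np.isclose(dot_metric, dot_formula)
```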
Calculus
By definition, the gradient of a scalar function f is

$$\nabla f = \sum_i \frac{\partial f}{\partial q^i}\,\mathbf{e}^i$$

where $q^i$ are the coordinates $x$, $y$, $z$ indexed. Recognizing this as a vector written in terms of the contravariant basis, it may be rewritten:

$$\nabla f = \frac{1}{\cos^2\phi}\left(\frac{\partial f}{\partial x} - \sin\phi\,\frac{\partial f}{\partial z}\right)\mathbf{e}_1 + \frac{\partial f}{\partial y}\,\mathbf{e}_2 + \frac{1}{\cos^2\phi}\left(\frac{\partial f}{\partial z} - \sin\phi\,\frac{\partial f}{\partial x}\right)\mathbf{e}_3$$
The divergence of a vector is

$$\nabla\cdot\mathbf{a} = \sum_i \frac{\partial a^i}{\partial q^i}$$

and of a tensor

$$\nabla\cdot\mathbf{T} = \sum_{i,j} \frac{\partial T^{ij}}{\partial q^i}\,\mathbf{e}_j$$
The Laplacian of f is

$$\nabla^2 f = \frac{1}{\cos^2\phi}\left(\frac{\partial^2 f}{\partial x^2} - 2\sin\phi\,\frac{\partial^2 f}{\partial x\,\partial z} + \frac{\partial^2 f}{\partial z^2}\right) + \frac{\partial^2 f}{\partial y^2}$$
and, since the covariant basis is normal and constant, the vector Laplacian is the same as the componentwise Laplacian of a vector written in terms of the covariant basis.
While both the dot product and gradient are somewhat messy in that they have extra terms (compared to a Cartesian system), the advection operator, which combines a dot product with a gradient, turns out very simple:

$$(\mathbf{a}\cdot\nabla) = \sum_i a^i \frac{\partial}{\partial q^i}$$

which may be applied to both scalar functions and vector functions, componentwise when expressed in the covariant basis.
Finally, the curl of a vector is

$$\nabla\times\mathbf{a} = \sum_{i,j} \frac{\partial a^j}{\partial q^i}\,\mathbf{e}^i \times \mathbf{e}_j$$
References
Coordinate systems | Skew coordinates | Mathematics | 639 |
77,245,306 | https://en.wikipedia.org/wiki/4C%20%2B26.42 | 4C +26.42 is an elliptical galaxy located in the constellation of Boötes. It has a redshift of 0.063, estimating the galaxy to be located 863 million light-years from Earth. It has an active galactic nucleus and is the brightest cluster galaxy (BCG) in Abell 1795, an X-ray luminous rich cluster (LX 1045 ergs s−1), with an estimated cooling-flow rate of 300 M yr−1.
Properties
4C +26.42 is one of the powerful radio galaxies that inhabit cluster centers. Radio-loud, of low luminosity, and classified as Fanaroff–Riley class I, the galaxy contains a strong double-lobed radio source that stretches ≈10 kpc on both sides of the nuclear region, measuring P_1.4 ≈ 10^25 W Hz−1 and lying inside the cluster's cooling flow. It has a radial velocity of 365 km s−1, with a complex core structure and a pole-on dispersion across its diameter indicating a marginal intrinsic dispersion. Furthermore, it is a LINER galaxy, with an emission spectrum characterized by weak emission lines from low-ionization species.
The galaxy is known to have a pair of filaments coiled together, known as the "SE filament". They are estimated to have lengths of ~42 and ~35 respectively, but have unresolved widths in Hα (<0.7–1 kpc); their thin appearance is reminiscent of magnetic field lines.
A structure has been discovered inside the envelope of 4C +26.42. Traced to a previous merger with another giant subcluster galaxy, the structure extends 400 kpc from the center and protrudes in the north–south direction. At the largest radius, a low-surface-brightness region is found, angled slightly toward the east. According to researchers, the total I-band magnitude of the galaxy and its envelope is −26.6, making 4C +26.42 among the brightest galaxies known.
Nebula line emission
4C +26.42 contains coruscating nebular line emission. With a luminosity of L(Hα) ≈ 10^42 erg s−1 within 20 kpc of the central galaxy, the line emission is embedded in a filament extending 80 kpc toward the southern nuclear region. Excess blue light is also found, probably emitted by both young stars in globular clusters and massive star populations located inside the galaxy, similar to Hydra A.
Molecular outflows by radio bubbles
4C +26.42 is known to manifest robust molecular gas flows of ~10^9 M⊙, with the molecular gas positioned in a pair of filaments. With estimated lengths of ~7 kpc north and south of the nucleus, the filaments are draped around the outer edges of two inflated radio bubbles, created as plasma is evacuated by heated radio jets launched by the galaxy.
Results show that the north filament is flat, with a velocity gradient that increases from the systemic velocity at the nucleus to a maximum velocity of −370 km s−1. The south filament shows the opposite, with a shallow velocity gradient, having practically collapsed through starbursts. Comparing the two filaments shows a close correspondence, indicating that these filaments are indeed gas flows driven by the expansion of the radio bubbles. Researchers concluded that the total molecular gas mass is 3.2 ± 0.2 × 10^9 M⊙.
Estimated star formation
Estimates of the star formation rate in 4C +26.42 vary between observations. Several studies place it below 1–20 M⊙ yr−1, depending on the data and methods used; based on ultraviolet imaging, it is said to be between 5 and 20 M⊙ yr−1. Assuming a top-heavy initial mass function (IMF) with a slope of 3.3, researchers suggested the star formation in 4C +26.42 could be extremely high, reaching rates of 581 and 758 M⊙ yr−1.
Researchers later recalculated the actual star formation rate in 4C +26.42. Detecting Lyman-alpha emitted from the galaxy, they found a significant number of O-type stars within a luminosity at 1500 Å of L_1500 = 1.9 × 10^42 erg s−1. Applying methods such as the Galactic extinction law, an extinction value of E(B−V) = 0.14, and foreground-screen dust models, they predicted the number of O-type stars to be 5.3 × 10^4. A more accurate count based on modern spectroscopy gives 2.4 × 10^4 O-type stars, indicating that the actual star formation rate is only in the range of 8–23 M⊙ yr−1. Finally, researchers used a star formation model matched to far-ultraviolet colors and found a star production rate in 4C +26.42 of 5–10 M⊙ yr−1 over the past 5 billion years.
Radio morphology
According to Very Long Baseline Array observations at 1.6, 5, 8.4 and 22 GHz, 4C +26.42 has a two-sided source, with a geometrical Z-structure located ~5 mas from the core region. The radio morphology is found on small scales, with a 5 GHz core power of log P_core, 5 GHz = 23.70 W/Hz, together with a total radio power, log P_tot, 0.4 GHz, measured at 0.4 GHz.
Faraday rotational measure
The Faraday rotation measure in 4C +26.42 is extremely high, exceeding 2000 rad m−2 in high-resolution (0.6 arcsec) VLA images, in which the radio source is found to be 10% to 30% polarized. The magnitude and scale are comparable with those of the hot (10^8 K) and dense (0.03 cm−3) X-ray-emitting gas. Based on the degree of ordering, the magnetic field is in the range of 20 to 100 μG.
References
Boötes
Radio galaxies
Elliptical galaxies
4C objects
049005
LINER galaxies
+05-33-005 | 4C +26.42 | Astronomy | 1,280 |
43,934,614 | https://en.wikipedia.org/wiki/James%20C.%20Liao | James C. Liao (; born 1958) is a Taiwanese-American chemist. He is the Parsons Foundation Professor and Chair of the Department of Chemical and Biomolecular Engineering at the University of California, Los Angeles, and is the co-founder and lead scientific advisor of Easel Biotechnologies, LLC.
He is best known for his work in metabolic engineering, synthetic biology, and bioenergy. Liao has been recognized for the biosynthesis and production of higher alcohols such as isobutanol from sugars, cellulose, waste protein, or carbon dioxide.
He was named the president of Academia Sinica, Taiwan, in June 2016.
Education and career
Liao holds both Taiwanese and American citizenship. After graduating from National Taiwan University with a Bachelor of Science (B.S.) in 1980, Liao earned his Ph.D. from the University of Wisconsin–Madison in 1987 under the guidance of chemical engineer Edwin N. Lightfoot, co-author of Transport Phenomena. Liao then worked as a research scientist for Eastman Kodak from 1987 to 1989. In 1990, he joined the Department of Chemical Engineering at Texas A&M University as an assistant professor and three years later he became an associate professor. In 1997, Liao became a professor for the Department of Chemical and Biomolecular Engineering at University of California, Los Angeles.
Research
Liao's research interests include biological synthesis of fuels and chemicals, carbon and nitrogen assimilation, metabolic engineering and synthetic biology, transcriptional and metabolic networks analysis, fatty acid metabolism.
Protein based biofuels
Liao and his team are researching protein-based biofuels, which use proteins, rather than fats or carbohydrates, as a significant raw material for biorefining and biofuel production. The benefit of using protein is that protein metabolism is much faster than the fatty acid metabolism used in, for example, algae biofuels, which leads to higher production.
Electrofuels
Liao's lab recently participated in the US Department of Energy's Electrofuels program, proposing to convert solar energy into liquid fuels such as isobutanol. A new bioreactor could store electricity as liquid fuel with the help of genetically engineered microbes and carbon dioxide. The isobutanol produced would have an energy density close to that of gasoline.
Non-oxidative glycolysis
Liao has also worked on the creation of a non-oxidative glycolysis pathway. Natural metabolic pathways degrade sugars in an oxidative way that loses one-third of the carbon as carbon dioxide in fermentation. The Liao laboratory has developed a pathway, called non-oxidative glycolysis (NOG), that allows 100% carbon conservation in various fermentation processes.
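A back-of-the-envelope carbon balance illustrates the contrast, using the textbook conversion of glucose into two-carbon acetyl units (a sketch of the published stoichiometry, not of the pathway's enzymatic details):

```python
# Carbon balance per glucose molecule (6 carbon atoms)
glucose_carbons = 6

# Conventional glycolysis + fermentation: 2 two-carbon acetyl units,
# with the remaining carbon released as CO2
emp_retained = 2 * 2                         # 4 of 6 carbons kept
print(emp_retained / glucose_carbons)        # 0.667 -> one-third lost as CO2

# Non-oxidative glycolysis (NOG): 3 two-carbon acetyl units, no CO2 release
nog_retained = 3 * 2                         # 6 of 6 carbons kept
print(nog_retained / glucose_carbons)        # 1.0 -> full carbon conservation
```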
Awards and honors
Samson-Prime Minister's Prize for Innovation in Alternative Energy and Smart Mobility for Transportation, Israel, 2020
Elected to The World Academy of Sciences 2019
Elected to National Academy of Sciences 2015
Elected Academician of Academia Sinica
Industrial Application of Science from National Academy of Sciences 2014
Elected to National Academy of Engineering 2013
ENI award for Renewable Energy 2013
White House Champion of Change in Renewable Energy, 2012
Presidential Green Chemistry Award from EPA 2010
James E. Bailey Award, Society for Biological Engineering, 2009
Alpha Chi Sigma Award, American Institute of Chemical Engineers, 2009
Marvin J. Johnson Award, Biochemical Technology Division, American Chemical Society, 2009
Charles Thom Award, Society for Industrial Microbiology, 2008
Merck Award for Metabolic Engineering, 2006
FPBE Division Award of American Institute of Chemical Engineers, 2006
Fellow, American Institute for Medical and Biological Engineering, 2002
National Science Foundation Young Investigator Award, 1992
Personal
Liao is originally from Taiwan. He is married to Kelly Liao and has two daughters, Carol and Clara Liao.
References
External links
James C. Liao, President of Academia Sinica
White House Champion of Change for Renewable Energy - James Liao
James C. Liao, Metabolic Engineering and Synthetic Biology Laboratory Homepage
Living people
University of Wisconsin–Madison alumni
National Taiwan University alumni
UCLA Henry Samueli School of Engineering and Applied Science faculty
Taiwanese chemical engineers
Members of Academia Sinica
Members of the United States National Academy of Sciences
Fellows of the American Institute for Medical and Biological Engineering
Synthetic biologists
Members of the United States National Academy of Engineering
Year of birth missing (living people)
American biomedical engineers
TWAS fellows | James C. Liao | Biology | 888 |
76,045,590 | https://en.wikipedia.org/wiki/Susan%20Chomba | Susan Chomba is a Kenyan scientist and environmentalist. She is a director at the World Resources Institute.
Biography
Chomba grew up in poverty in Kirinyaga County. Chomba was largely raised by her grandmother as her mother, a single parent, was always working. Chomba's mother grew capsicum and French beans on a small plot of land owned by a step-uncle and created a farming cooperative.
When Chomba was nine, a local boarding school rejected her due to her poverty, so she attended one further away, in Western Kenya. When her mother was no longer able to afford to send her there, Chomba returned to Kirinyaga to attend the provincial high school. Each student in the school was given a patch of land to farm. Chomba experimented with organic farming, growing cabbage to withstand the cold climate.
Although Chomba had hoped to study law or agricultural economics, she was placed in a forestry course at Moi University. In her third year, when taking an agroforestry class, she found her calling.
Chomba joined the International Centre for Research in Agroforestry, where she led Regreening Africa, an eight-country land restoration program that restored one million hectares of degraded land in Africa.
Chomba was a member of the first cohort to graduate with a dual European master's degree in Sustainable Tropical Forestry from Bangor University and the University of Copenhagen. She completed fieldwork in Tanzania. She went on to earn her PhD in forest governance at the University of Copenhagen.
In 2021, Chomba joined the World Resources Institute as their Director of Vital Landscapes for Africa, where she leads their work on "Forests, Food systems and People." She is also a global ambassador for the Race to Zero and Race to Resilience under the UN High Level Champions for Climate Action.
Awards
Peter Henry Forestry Postgraduate Award, first recipient, Bangor University
2016: 16 Women Restoring the Earth, Global Landscapes Forum
2022: 25 women shaping climate action globally, Greenbiz
2023: 100 Women (BBC), which features 100 inspiring and influential women from around the world.
References
Kenyan scientists
Environmental scientists
Kenyan women scientists
Women agronomists
Alumni of Bangor University
University of Copenhagen alumni
Moi University alumni
Year of birth missing (living people)
Living people | Susan Chomba | Environmental_science | 462 |
27,613,441 | https://en.wikipedia.org/wiki/Reflectance%20difference%20spectroscopy | Reflectance difference spectroscopy (RDS) is a spectroscopic technique which measures the difference in reflectance of two beams of light that are shone in normal incident on a surface with different linear polarizations. It is also known as reflectance anisotropy spectroscopy (RAS).
It is calculated as:

$$\frac{\Delta r}{r} = 2\,\frac{r_\alpha - r_\beta}{r_\alpha + r_\beta}$$

where $r_\alpha$ and $r_\beta$ are the reflectances in the two different polarizations.
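A minimal sketch of evaluating this quantity for a measured spectrum (the reflectance arrays are illustrative values only):

```python
import numpy as np

# Reflectance measured at each wavelength for two orthogonal
# linear polarizations (illustrative values)
r_a = np.array([0.300, 0.310, 0.305])
r_b = np.array([0.298, 0.312, 0.301])

# Reflectance-difference signal, normalized to the mean reflectance
rd = 2 * (r_a - r_b) / (r_a + r_b)
print(rd)   # small anisotropy signal at each wavelength, typical of RDS/RAS
```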
The method was introduced in 1985 for the study of the optical properties of the cubic semiconductors silicon and germanium. Due to its high surface sensitivity and its independence from ultra-high vacuum, its use has expanded to in situ monitoring of epitaxial growth and of the interaction of surfaces with adsorbates. To assign specific features in the signal to their origin in morphology and electronic structure, theoretical modelling by density functional theory is required.
References
Materials science
Analytical chemistry
Scientific techniques | Reflectance difference spectroscopy | Physics,Chemistry,Materials_science,Engineering | 168 |
57,800,159 | https://en.wikipedia.org/wiki/PDS%2070 | PDS 70 (V1032 Centauri) is a very young T Tauri star in the constellation Centaurus. Located from Earth, it has a mass of and is approximately 5.4 million years old. The star has a protoplanetary disk containing two nascent exoplanets, named PDS 70b and PDS 70c, which have been directly imaged by the European Southern Observatory's Very Large Telescope. PDS 70b was the first confirmed protoplanet to be directly imaged.
Discovery and naming
The "PDS" in this star's name stands for Pico dos Dias Survey, a survey that looked for pre-main-sequence stars based on the star's infrared colors measured by the IRAS satellite.
PDS 70 was identified as a T Tauri variable star in 1992, from these infrared colors. PDS 70's brightness varies quasi-periodically with an amplitude of a few hundredths of a magnitude in visible light. Measurements of the star's period in the astronomical literature are inconsistent, ranging from 3.007 days to 5.1 or 5.6 days.
Protoplanetary disk
The protoplanetary disk around PDS 70 was first hypothesized in 1992 and fully imaged in 2006 with a phase-mask coronagraph on the VLT. The disk has a radius of approximately 140 AU. In 2012 a large gap in the disk was discovered, which was thought to be caused by planetary formation.
The gap was later found to have multiple regions: large dust grains were absent out to 80 au, while small dust grains were only absent out to the previously observed gap radius. There is an asymmetry in the overall shape of the gap; these factors indicate that there are likely multiple planets affecting the shape of the gap and the dust distribution.
The James Webb Space Telescope has been used to detect water vapor in the inner part of the disk, where terrestrial planets may be forming.
Planetary system
In results published in 2018, a planet in the disk, named PDS 70 b, was imaged with the SPHERE planet imager at the Very Large Telescope (VLT). With a mass estimated to be a few times greater than Jupiter's, the planet is thought to have a temperature of around 1,200 K and an atmosphere with clouds; its orbit has an approximate radius of 22 AU, taking around 120 years for a revolution.
The emission spectrum of the planet PDS 70 b is gray and featureless, and no molecular species were detected by 2021.
A second planet, named PDS 70 c, was discovered in 2019 using the VLT's MUSE integral field spectrograph. The planet orbits its host star at a distance of approximately 34 AU, farther away than PDS 70 b. PDS 70 c is in a near 1:2 orbital resonance with PDS 70 b, meaning that PDS 70 c completes nearly one revolution once every time PDS 70 b completes nearly two.
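As a rough consistency check, Kepler's third law applied to the approximate orbital radii quoted above, with the star's roughly three-quarters-solar mass (taken here as 0.76 solar masses), reproduces the ~120-year period of PDS 70 b and a period ratio near 1:2 (all inputs are the approximate values from this article):

```python
# Kepler's third law in convenient units:
# P [years] = sqrt(a^3 / M), with a in AU and M in solar masses
def period_years(a_au, m_sun):
    return (a_au ** 3 / m_sun) ** 0.5

m_star = 0.76                      # approximate mass of PDS 70 (solar masses)
p_b = period_years(22, m_star)     # PDS 70 b
p_c = period_years(34, m_star)     # PDS 70 c

print(round(p_b), round(p_c), round(p_c / p_b, 2))
# -> roughly 118 227 1.92 : a ~120-year orbit and a near 1:2 period ratio
```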
Circumplanetary disks
Modelling predicts that PDS 70 b has acquired its own accretion disk. The accretion disk was first observationally supported in 2019; however, in 2020 evidence was presented that the current data favor a model with a single blackbody component for the planet. The accretion rate was measured to be at least 5 × 10−7 Jupiter masses per year. A 2021 study with newer methods and data suggested a lower accretion rate of 1.4 × 10−8 Jupiter masses per year. It is not clear how to reconcile these results with each other or with existing planetary accretion models; future research into accretion mechanisms and Hα emission production should offer clarity.
The photospheric blackbody radius of the planet is 3.0 Jupiter radii. Its bolometric temperature is 1193 K, while only upper limits on these quantities can be derived for the optically thick accretion disk, which is significantly larger than the planet itself. However, only weak evidence is found that the current data favor a model with a single blackbody component.
In July 2019, astronomers using the Atacama Large Millimeter Array (ALMA) reported the first-ever detection of a moon-forming circumplanetary disk. The disk was detected around PDS 70 c, with a potential disk observed around PDS 70 b. The two planets and the superposition of PDS 70 c and the protoplanetary disk were confirmed by Caltech-led researchers using the W. M. Keck Observatory in Mauna Kea, whose research was published in May 2020. An image of the circumplanetary disk around PDS 70 c, separated from the protoplanetary disk, finally confirmed the circumplanetary disk; it was published in November 2021.
Possible planet d
VLT/SPHERE observations showed a third object 0.12 arcseconds from the star. Its spectrum is very blue, possibly due to starlight reflected by dust, and it could be a feature of the inner disk. The possibility still exists that this object is a planetary-mass object enshrouded by a dust envelope; in that scenario, the mass of the planet would be on the order of a few tens of Earth masses. JWST NIRCam observations also detected this object. It is located at around 13.5 AU, and if it is a planet, it would be in a 1:2:4 mean-motion resonance with the other protoplanets.
Possible co-orbital body
In July 2023, the likely detection of a cloud of debris co-orbital with the planet PDS 70 b was announced. This debris is thought to have a mass 0.03-2 times that of the Moon, and could be evidence of a Trojan planet or one in the process of forming.
Gallery
See also
List of brightest stars
List of nearest bright stars
Lists of stars
Historical brightest stars
References
External links
(ESO; July 2021)
Centaurus
Planetary systems with two confirmed planets
IRAS catalogue objects
K-type stars
Centauri, V1032
T Tauri stars
J14081015-4123525 | PDS 70 | Astronomy | 1,224 |
453,566 | https://en.wikipedia.org/wiki/Phoenix%20%28spacecraft%29 | Phoenix was an uncrewed space probe that landed on the surface of Mars on May 25, 2008, and operated until November 2, 2008. Phoenix was operational on Mars for sols ( days). Its instruments were used to assess the local habitability and to research the history of water on Mars. The mission was part of the Mars Scout Program; its total cost was $420 million, including the cost of launch.
The multi-agency program was led by the Lunar and Planetary Laboratory at the University of Arizona, with project management by NASA's Jet Propulsion Laboratory. Academic and industrial partners included universities in the United States, Canada, Switzerland, Denmark, Germany, the United Kingdom, NASA, the Canadian Space Agency, the Finnish Meteorological Institute, Lockheed Martin Space Systems, MacDonald Dettwiler & Associates (MDA) in partnership with Optech Incorporated (Optech) and other aerospace companies. It was the first NASA mission to Mars led by a public university.
Phoenix was NASA's sixth successful landing on Mars, from seven attempts, and the first in Mars' polar region. The lander completed its mission in August 2008, and made a last brief communication with Earth on November 2 as available solar power dropped with the Martian winter. The mission was declared concluded on November 10, 2008, after engineers were unable to re-contact the craft. After unsuccessful attempts to contact the lander by the Mars Odyssey orbiter up to and past the Martian summer solstice on May 12, 2010, JPL declared the lander to be dead. The program was considered a success because it completed all planned science experiments and observations.
Mission overview
The mission had two goals. One was to study the geological history of water, the key to unlocking the story of past climate change. The second was to evaluate past or potential planetary habitability in the ice-soil boundary. Phoenix's instruments were suitable for uncovering information on the geological and possibly biological history of the Martian Arctic. Phoenix was the first mission to return data from either of the poles, and contributed to NASA's main strategy for Mars exploration, "Follow the water."
The primary mission was anticipated to last 90 sols (Martian days), just over 92 Earth days. However, the craft exceeded its expected operational lifetime by a little over two months before succumbing to the increasing cold and dark of an advancing Martian winter. Researchers had hoped that the lander would survive into the Martian winter so that it could witness polar ice developing around it; perhaps a layer of solid carbon dioxide ice could have appeared. Even had it survived some of the winter, the intense cold would have prevented it from lasting all the way through. The mission was chosen to be a fixed lander rather than a rover because:
costs were reduced through reuse of earlier equipment (though this claim is disputed by some observers);
the area of Mars where Phoenix landed is thought to be relatively uniform, thus traveling on the surface is of less value; and
the weight budget needed for mobility could instead be used for more and better scientific instruments.
The 2003–2004 observations of methane gas on Mars were made remotely by three teams working with separate data. If the methane is truly present in the atmosphere of Mars, then something must be producing it on the planet now, because the gas is broken down by radiation on Mars within 300 years; therefore, it was considered important to determine the biological potential or habitability of the Martian arctic's soils. Methane could also be the product of a geochemical process or the result of volcanic or hydrothermal activity.
History
While the proposal for Phoenix was being written, the Mars Odyssey orbiter used its gamma-ray spectrometer and found the distinctive signature of hydrogen on some areas of the Martian surface, and the only plausible source of hydrogen on Mars would be water in the form of ice, frozen below the surface. The mission was therefore funded on the expectation that Phoenix would find water ice on the arctic plains of Mars. In August 2003 NASA selected the University of Arizona "Phoenix" mission for launch in 2007. It was hoped this would be the first in a new line of smaller, low-cost, Scout missions in the agency's exploration of Mars program. The selection was the result of an intense two-year competition with proposals from other institutions. The $325 million NASA award is more than six times larger than any other single research grant in University of Arizona history.
Peter H. Smith of the University of Arizona Lunar and Planetary Laboratory, as Principal Investigator, along with 24 Co-Investigators, was selected to lead the mission. The mission was named after the Phoenix, a mythological bird that is repeatedly reborn from its own ashes. The Phoenix spacecraft contains several previously built components. The lander used was the modified Mars Surveyor 2001 Lander (canceled in 2000), along with several of the instruments from both that and the previous unsuccessful Mars Polar Lander mission. Lockheed Martin, who built the lander, had kept the nearly complete lander in an environmentally controlled clean room from 2001 until the mission was funded by the NASA Scout Program. Phoenix was a partnership of universities, NASA centers, and the aerospace industry. The science instruments and operations were a University of Arizona responsibility. NASA's Jet Propulsion Laboratory in Pasadena, California, managed the project and provided mission design and control. Lockheed Martin Space Systems built and tested the spacecraft. The Canadian Space Agency provided a meteorological station, including an innovative laser-based atmospheric sensor. The co-investigator institutions included Malin Space Science Systems (California), Max Planck Institute for Solar System Research (Germany), NASA Ames Research Center (California), NASA Johnson Space Center (Texas), MacDonald, Dettwiler and Associates (Canada), Optech Incorporated (Canada), SETI Institute, Texas A&M University, Tufts University, University of Colorado, University of Copenhagen (Denmark), University of Michigan, University of Neuchâtel (Switzerland), University of Texas at Dallas, University of Washington, Washington University in St. Louis, and York University (Canada). Scientists from Imperial College London and the University of Bristol provided hardware for the mission and were part of the team operating the microscope station.
On June 2, 2005, following a critical review of the project's planning progress and preliminary design, NASA approved the mission to proceed as planned. The purpose of the review was to confirm NASA's confidence in the mission.
Specifications
Launched mass
About 670 kg, including the lander, aeroshell (backshell and heatshield), parachutes, and cruise stage.
Lander Mass
About 350 kg.
Lander Dimensions
About 5.5 m long with the solar panels deployed. The science deck by itself is about 1.5 m in diameter. From the ground to the top of the MET mast, the lander measures about 2.2 m tall.
Communications
X-band throughout the cruise phase of the mission and for its initial communication after separating from the third stage of the launch vehicle. UHF links, relayed through Mars orbiters during the entry, descent and landing phase and while operating on the surface of Mars. The UHF system on Phoenix is compatible with relay capabilities of NASA's Mars Odyssey, Mars Reconnaissance Orbiter and with the European Space Agency's Mars Express. The interconnections use the Proximity-1 protocol.
Power
Power for the cruise phase is generated using two decagonal gallium arsenide solar panels mounted to the cruise stage, and for the lander, via two gallium arsenide solar array panels deployed from the lander after touchdown on the Martian surface. A NiH2 battery with a capacity of 16 A·h provides energy storage.
Lander systems include a RAD6000-based computer system for commanding the spacecraft and handling data. Other parts of the lander are an electrical system containing solar arrays and batteries, a guidance system to land the spacecraft, eight monopropellant hydrazine engines (of two thrust classes) built by Aerojet-Redmond Operations for the cruise phase, twelve Aerojet monopropellant hydrazine thrusters to land Phoenix, mechanical and structural elements, and a heater system to ensure the spacecraft does not get too cold.
Scientific payload
Phoenix carried improved versions of University of Arizona panoramic cameras and a volatiles-analysis instrument from the ill-fated Mars Polar Lander, as well as experiments that had been built for the canceled Mars Surveyor 2001 Lander, including a JPL trench-digging robotic arm, a set of wet chemistry laboratories, and optical and atomic force microscopes. The science payload also included a descent imager and a suite of meteorological instruments.
During EDL, the Atmospheric Structure Experiment was conducted. This used accelerometer and gyroscope data recorded during the lander's descent through the atmosphere to create a vertical profile of the temperature, pressure, and density of the atmosphere above the landing site, at that point in time.
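Such experiments typically recover density from the measured deceleration via the drag equation; the sketch below illustrates the idea, with all vehicle parameters and data points invented rather than taken from Phoenix:

```python
import numpy as np

def density_from_drag(decel, speed, mass, drag_coeff, ref_area):
    """Atmospheric density inferred from measured aerodynamic deceleration,
    via the drag equation a = 0.5 * rho * v^2 * Cd * A / m."""
    return 2.0 * mass * decel / (drag_coeff * ref_area * speed ** 2)

# Invented sample points along a descent (SI units)
decel = np.array([5.0, 20.0, 40.0])          # m/s^2, from the accelerometer
speed = np.array([5600.0, 4800.0, 3500.0])   # m/s, from trajectory integration
rho = density_from_drag(decel, speed, mass=600.0, drag_coeff=1.7, ref_area=5.5)
print(rho)  # kg/m^3 at each point; pressure and temperature then follow from
            # hydrostatic equilibrium and the ideal gas law
```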
Robotic arm and camera
The robotic arm was designed to extend from its base on the lander, and had the ability to dig down to about 0.5 m below a sandy surface. It took samples of dirt and ice that were analyzed by other instruments on the lander. The arm was designed and built for the Jet Propulsion Laboratory by Alliance Spacesystems, LLC (now MDA US Systems, LLC) in Pasadena, California. A rotating rasp-tool located in the heel of the scoop was used to cut into the strong permafrost. Cuttings from the rasp were ejected into the heel of the scoop and transferred to the front for delivery to the instruments. The rasp tool was conceived of at the Jet Propulsion Laboratory. The flight version of the rasp was designed and built by HoneyBee Robotics. Commands were sent for the arm to be deployed on May 28, 2008, beginning with the pushing aside of a protective covering intended to serve as a redundant precaution against potential contamination of Martian soil by Earthly life-forms.
The Robotic Arm Camera (RAC) attached to the robotic arm just above the scoop was able to take full-color pictures of the area, as well as verify the samples that the scoop returned, and examined the grains of the area where the robotic arm had just dug. The camera was made by the University of Arizona and Max Planck Institute for Solar System Research, Germany.
Surface stereo imager
The Surface Stereo Imager (SSI) was the primary camera on the lander. It is a stereo camera that is described as "a higher resolution upgrade of the imager used for Mars Pathfinder and the Mars Polar Lander". It took several stereo images of the Martian Arctic, and also used the Sun as a reference to measure the atmospheric distortion of the Martian atmosphere due to dust, air and other features. The camera was provided by the University of Arizona in collaboration with the Max Planck Institute for Solar System Research.
Thermal and evolved gas analyzer
The Thermal and Evolved Gas Analyzer (TEGA) is a combination of a high-temperature furnace with a mass spectrometer. It was used to bake samples of Martian dust and determine the composition of the resulting vapors. It has eight ovens, each about the size of a large ball-point pen, which were able to analyze one sample each, for a total of eight separate samples. Team members measured how much water vapor and carbon dioxide gas were given off, how much water ice the samples contained, and what minerals are present that may have formed during a wetter, warmer past climate. The instrument also measured organic volatiles, such as methane, down to 10 parts per billion. TEGA was built by the University of Arizona and University of Texas at Dallas.
On May 29, 2008 (sol 3), electrical tests indicated an intermittent short circuit in TEGA, resulting from a glitch in one of the two filaments responsible for ionizing volatiles. NASA worked around the problem by configuring the backup filament as the primary and vice versa.
In early June, first attempts to get soil into TEGA were unsuccessful as it seemed too "cloddy" for the screens.
On June 11 the first of the eight ovens was filled with a soil sample after several tries to get the soil sample through the screen of TEGA. On June 17, it was announced that no water was found in this sample; however, since it had been exposed to the atmosphere for several days prior to entering the oven, any initial water ice it might have contained could have been lost via sublimation.
Mars Descent Imager
The Mars Descent Imager (MARDI) was intended to take pictures of the landing site during the last three minutes of descent. As originally planned, it would have begun taking pictures after the aeroshell departed, several kilometers above the Martian soil.
Before launch, testing of the assembled spacecraft uncovered a potential data corruption problem with an interface card that was designed to route MARDI image data as well as data from various other parts of the spacecraft. The potential problem could occur if the interface card were to receive a MARDI picture during a critical phase of the spacecraft's final descent, at which point data from the spacecraft's Inertial Measurement Unit could have been lost; this data was critical to controlling the descent and landing. This was judged to be an unacceptable risk, and it was decided to not use MARDI during the mission. As the flaw was discovered too late for repairs, the camera remained installed on Phoenix but it was not used to take pictures, nor was its built-in microphone used.
MARDI images had been intended to help pinpoint exactly where the lander landed, and possibly help find potential science targets. It was also to be used to learn if the area where the lander lands is typical of the surrounding terrain. MARDI was built by Malin Space Science Systems. It would have used only 3 watts of power during the imaging process, less than most other space cameras. It had originally been designed and built to perform the same function on the Mars Surveyor 2001 Lander mission; after that mission was canceled, MARDI spent several years in storage until it was deployed on the Phoenix lander.
Microscopy, electrochemistry, and conductivity analyzer
The Microscopy, Electrochemistry, and Conductivity Analyzer (MECA) is an instrument package originally designed for the canceled Mars Surveyor 2001 Lander mission. It consists of a wet chemistry lab (WCL), optical and atomic force microscopes, and a thermal and electrical conductivity probe. The Jet Propulsion Laboratory built MECA. A Swiss consortium led by the University of Neuchatel contributed the atomic force microscope.
Using MECA, researchers examined soil particles as small as 16 μm across; additionally, they attempted to determine the chemical composition of water-soluble ions in the soil. They also measured electrical and thermal conductivity of soil particles using a probe on the robotic arm scoop.
Sample wheel and translation stage
This instrument presents 6 of 69 sample holders to an opening in the MECA instrument to which the robotic arm delivers the samples and then brings the samples to the optical microscope and the atomic force microscope. Imperial College London provided the microscope sample substrates.
Optical microscope
The optical microscope, designed by the University of Arizona, is capable of making images of the Martian regolith with a resolution of 256 pixels/mm, or 16 micrometers/pixel. The field of view of the microscope covers the sample holder to which the robotic arm delivers the sample. The sample is illuminated either by 9 red, green and blue LEDs or by 3 LEDs emitting ultraviolet light. The electronics for the readout of the CCD chip are shared with the Robotic Arm Camera, which has an identical CCD chip.
Atomic force microscope
The atomic force microscope has access to a small area of the sample delivered to the optical microscope. The instrument scans over the sample with one of 8 silicon crystal tips and measures the repulsion of the tip from the sample. The maximum resolution is 0.1 micrometres. A Swiss consortium led by the University of Neuchatel contributed the atomic force microscope.
Wet Chemistry Laboratory (WCL)
The wet chemistry lab (WCL) sensor assembly and leaching solution were designed and built by Thermo Fisher Scientific. The WCL actuator assembly was designed and built by Starsys Research in Boulder, Colorado. Tufts University developed the reagent pellets, barium ISE, and ASV electrodes, and performed the preflight characterization of the sensor array.
The robotic arm scooped up some soil and put it in one of four wet chemistry lab cells, where water was added, and, while stirring, an array of electrochemical sensors measured a dozen dissolved ions such as sodium, magnesium, calcium, and sulfate that leached out from the soil into the water. This provided information on the biological compatibility of the soil, both for possible indigenous microbes and for possible future Earth visitors.
All of the four wet chemistry labs were identical, each containing 26 chemical sensors and a temperature sensor. The polymer Ion Selective Electrodes (ISE) were able to determine the concentration of ions by measuring the change in electric potential across their ion-selective membranes as a function of concentration. Two gas sensing electrodes for oxygen and carbon dioxide worked on the same principle but with gas-permeable membranes. A gold micro-electrode array was used for the cyclic voltammetry and anodic stripping voltammetry. Cyclic voltammetry is a method to study ions by applying a waveform of varying potential and measuring the current–voltage curve. Anodic stripping voltammetry first deposits the metal ions onto the gold electrode with an applied potential. After the potential is reversed, the current is measured while the metals are stripped off the electrode.
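Ion-selective electrodes of this kind follow the Nernst relation; a small sketch of converting a measured electrode potential into an ion activity (the calibration constants and measured potential are illustrative placeholders):

```python
import math

R, F = 8.314, 96485.0                 # gas constant (J/mol/K), Faraday (C/mol)

def ion_activity(e_measured, e0, z, temp_k=300.0):
    """Invert the Nernst relation E = E0 + (R*T / (z*F)) * ln(a) to get
    the ion activity a from a measured electrode potential (volts)."""
    slope = R * temp_k / (z * F)      # ~25.9 mV per e-fold for z = +1
    return math.exp((e_measured - e0) / slope)

# Illustrative calibration: E0 from a standard solution; z = +1 for Na+
print(ion_activity(e_measured=0.075, e0=0.020, z=1))   # relative activity
```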
Thermal and Electrical Conductivity Probe (TECP)
The MECA contains a Thermal and Electrical Conductivity Probe (TECP). The TECP, designed by Decagon Devices, has four probes that made the following measurements: Martian soil temperature, relative humidity, thermal conductivity, electrical conductivity, dielectric permittivity, wind speed, and atmospheric temperature.
Three of the four probes have tiny heating elements and temperature sensors inside them. One probe uses internal heating elements to send out a pulse of heat, recording the time the pulse is sent and monitoring the rate at which the heat is dissipated away from the probe. Adjacent needles sense when the heat pulse arrives. The speed that the heat travels away from the probe as well as the speed that it travels between probes allows scientists to measure thermal conductivity, specific heat (the ability of the regolith to conduct heat relative to its ability to store heat) and thermal diffusivity (the speed at which a thermal disturbance is propagated in the soil).
The probes also measured the dielectric permittivity and electrical conductivity, which can be used to calculate moisture and salinity of the regolith. Needles 1 and 2 work in conjunction to measure salts in the regolith, heat the soil to measure thermal properties (thermal conductivity, specific heat and thermal diffusivity) of the regolith, and measure soil temperature. Needles 3 and 4 measure liquid water in the regolith. Needle 4 is a reference thermometer for needles 1 and 2.
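The conductivity estimate from such a heat pulse conventionally uses the line-heat-source approximation, in which the temperature rises linearly with the logarithm of time; a minimal sketch with invented readings (this shows the generic method, not the mission's actual data pipeline):

```python
import numpy as np

# Line-heat-source method: during heating, the temperature rise follows
# roughly T(t) = (q / (4*pi*k)) * ln(t) + C, with q the power per unit length.
t = np.array([2.0, 4.0, 8.0, 16.0, 32.0])        # s (invented sample times)
T = np.array([0.50, 0.71, 0.92, 1.13, 1.34])     # K rise (invented readings)

slope, _ = np.polyfit(np.log(t), T, 1)           # K per ln-second
q = 1.0                                          # W/m, assumed heater power
k = q / (4.0 * np.pi * slope)                    # thermal conductivity
print(f"thermal conductivity ~ {k:.2f} W/(m K)")
```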
The TECP humidity sensor is a relative humidity sensor, so it must be coupled with a temperature sensor in order to measure absolute humidity. Both the relative humidity sensor and a temperature sensor are attached directly to the circuit board of the TECP and are, therefore, assumed to be at the same temperature.
Meteorological station
The Meteorological Station (MET) recorded the daily weather of Mars during the course of the Phoenix mission. It is equipped with a wind indicator and pressure and temperature sensors. The MET also contains a lidar (light detection and ranging) device for sampling the number of dust particles in the air. It was designed in Canada by Optech and MDA, supported by the Canadian Space Agency. A team initially led by York University's Professor Diane Michelangeli until her death in 2007, when Professor James Whiteway took over, oversaw the science operations of the station. The York University team includes contributions from the University of Alberta, University of Aarhus (Denmark), Dalhousie University, Finnish Meteorological Institute, Optech, and the Geological Survey of Canada. Canadarm maker MacDonald Dettwiler and Associates (MDA) of Richmond, B.C. built the MET.
The surface wind velocity, pressure, and temperature were also monitored over the mission (from the tell-tale, pressure, and temperature sensors) and show the evolution of the atmosphere with time. To measure dust and ice contribution to the atmosphere, a lidar was employed. The lidar collected information about the time-dependent structure of the planetary boundary layer by investigating the vertical distribution of dust, ice, fog, and clouds in the local atmosphere.
There are three temperature sensors (thermocouples) on a vertical mast (shown in its stowed position) at heights of approximately 25, 50, and 100 cm above the lander deck. The sensors were referenced to a measurement of absolute temperature at the base of the mast. A pressure sensor built by the Finnish Meteorological Institute is located in the Payload Electronics Box, which sits on the surface of the deck and houses the acquisition electronics for the MET payload. The pressure and temperature sensors commenced operations on Sol 0 (May 26, 2008) and operated continuously, sampling once every 2 seconds.
The Telltale is a joint Canadian/Danish instrument (right) which provides a coarse estimate of wind speed and direction. The speed is based on the amount of deflection from vertical that is observed, while the wind direction is provided by which way this deflection occurs. A mirror, located under the telltale, and a calibration "cross," above (as observed through the mirror) are employed to increase the accuracy of the measurement. Either camera, SSI or RAC, could make this measurement, though the former was typically used. Periodic observations both day and night aid in understanding the diurnal variability of wind at the Phoenix landing site.
The wind speeds ranged from . The usual average speed was .
The vertical-pointing lidar was capable of detecting multiple types of backscattering (for example Rayleigh scattering and Mie Scattering), with the delay between laser pulse generation and the return of light scattered by atmospheric particles determining the altitude at which scattering occurs. Additional information was obtained from backscattered light at different wavelengths (colors), and the Phoenix system transmitted both 532 nm and 1064 nm. Such wavelength dependence may make it possible to discriminate between ice and dust, and serve as an indicator of the effective particle size.
The Phoenix lidar's laser was a passive Q-switched Nd:YAG laser with the dual wavelengths of 1064 nm and 532 nm. It operated at 100 Hz with a pulse width of 10 ns. The scattered light was received by two detectors (green and IR) and the green signal was collected in both analog and photon counting modes.
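The altitude assignment in such a lidar follows from the round-trip light travel time; a small sketch converting digitizer sample delays into scattering altitudes (the sampling rate is an invented placeholder):

```python
import numpy as np

C = 299_792_458.0            # speed of light, m/s

def bin_altitudes(n_bins, sample_rate_hz):
    """Altitude of each backscatter sample for a vertically pointing lidar:
    half the round-trip distance light covers in each sample interval."""
    delays = np.arange(n_bins) / sample_rate_hz   # s after the laser pulse
    return C * delays / 2.0                       # m above the instrument

# e.g. a 10 MHz digitizer gives ~15 m vertical resolution per bin
alts = bin_altitudes(n_bins=5, sample_rate_hz=10e6)
print(alts)    # [ 0.  15.  30.  45.  60. ] metres (approximately)
```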
The lidar was operated for the first time at noon on Sol 3 (May 29, 2008), recording the first surface extraterrestrial atmospheric profile. This first profile indicated well-mixed dust in the first few kilometers of the atmosphere of Mars, where the planetary boundary layer was observed as a marked decrease in the scattering signal. The contour plot (right) shows the amount of dust as a function of time and altitude, with warmer colors (red, orange) indicating more dust, and cooler colors (blue, green) indicating less dust. There is also an instrumentation effect of the laser warming up, causing the appearance of dust increasing with time. An elevated layer can be observed in the plot, which could be extra dust or (less likely, given the time of sol at which it was acquired) a low-altitude ice cloud.
The image on the left shows the lidar laser operating on the surface of Mars, as observed by the SSI looking straight up; the laser beam is the nearly-vertical line just right of center. Overhead dust can be seen both moving in the background, as well as passing through the laser beam in the form of bright sparkles. The fact that the beam appears to terminate is the result of the extremely small angle at which the SSI is observing the laser—it sees farther up along the beam's path than there is dust to reflect the light back down to it.
The laser device discovered snow falling from clouds; this was not known to occur before the mission. It was also determined that cirrus clouds formed in the area.
Mission highlights
Launch

Phoenix was launched on August 4, 2007, at 5:26:34 a.m. EDT (09:26:34 UTC) on a Delta II 7925 launch vehicle from Pad 17-A of the Cape Canaveral Air Force Station. The launch was nominal with no significant anomalies. The Phoenix lander was placed on a trajectory of such precision that its first trajectory course correction burn, performed on August 10, 2007, at 7:30 a.m. EDT (11:30 UTC), was only 18 m/s. The launch took place during a launch window extending from August 3, 2007, to August 24, 2007. Due to the small launch window, the rescheduled launch of the Dawn mission (originally planned for July 7) had to take place after Phoenix, in September. The Delta II rocket was chosen due to its successful launch history, which includes launches of the Spirit and Opportunity Mars Exploration Rovers in 2003 and Mars Pathfinder in 1996.
A noctilucent cloud was created by the exhaust gas from the Delta II 7925 rocket used to launch Phoenix. The colors in the cloud formed from the prism-like effect of the ice particles present in the exhaust trail.
Cruise
Entry, descent, and landing
The Jet Propulsion Laboratory made adjustments to the orbits of its two active satellites around Mars, Mars Reconnaissance Orbiter and Mars Odyssey, and the European Space Agency similarly adjusted the orbit of its Mars Express spacecraft to be in the right place on May 25, 2008, to observe Phoenix as it entered the atmosphere and then landed on the surface. This information helps designers to improve future landers. The projected landing area was an ellipse covering terrain which has been informally named "Green Valley" and contains the largest concentration of water ice outside the poles.

Phoenix entered the Martian atmosphere at nearly , and within 7 minutes had decreased its speed to before touching down on the surface. Confirmation of atmospheric entry was received at 4:46 p.m. PDT (23:46 UTC). Radio signals received at 4:53:44 p.m. PDT confirmed that Phoenix had survived its difficult descent and landed 15 minutes earlier, thus completing a 680 million km (422 million miles) flight from Earth.
For unknown reasons, the parachute was deployed about 7 seconds later than expected, leading to a landing position some east, near the edge of the predicted 99% landing ellipse. Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment (HiRISE) camera photographed Phoenix suspended from its parachute during its descent through the Martian atmosphere. This marked the first time one spacecraft photographed another in the act of landing on a planet (the Moon not being a planet, but a satellite). The same camera also imaged Phoenix on the surface with enough resolution to distinguish the lander and its two solar cell arrays. Ground controllers used Doppler tracking data from Odyssey and Mars Reconnaissance Orbiter to determine the lander's precise location.

Phoenix landed in the Green Valley of Vastitas Borealis on May 25, 2008, in the late Martian northern hemisphere spring (Ls=76.73), where the Sun shone on its solar panels the whole Martian day. By the Martian northern summer solstice (June 25, 2008), the Sun appeared at its maximum elevation of 47.0 degrees. Phoenix experienced its first sunset at the start of September 2008.
The landing was made on a flat surface, with the lander reporting only 0.3 degrees of tilt. Just before landing, the craft used its thrusters to orient its solar panels along an east–west axis to maximize power generation. The lander waited 15 minutes before opening its solar panels, to allow dust to settle. The first images from the lander became available around 7:00 p.m. PDT (2008-05-26 02:00 UTC). The images show a surface strewn with pebbles and incised with small troughs into polygons about across and high, with the expected absence of large rocks and hills.
Like the 1970s era Viking spacecraft, Phoenix used retrorockets for its final descent. Experiments conducted by Nilton Renno, mission co-investigator from the University of Michigan, and his students have investigated how much surface dust would be kicked up on landing. Researchers at Tufts University, led by co-investigator Sam Kounaves, conducted additional in-depth experiments to identify the extent of the ammonia contamination from the hydrazine propellant and its possible effects on the chemistry experiments. In 2007, a report to the American Astronomical Society by Washington State University professor Dirk Schulze-Makuch, suggested that Mars might harbor peroxide-based life forms which the Viking landers failed to detect because of the unexpected chemistry. The hypothesis was proposed long after any modifications to Phoenix could be made. One of the Phoenix mission investigators, NASA astrobiologist Chris McKay, stated that the report "piqued his interest" and that ways to test the hypothesis with Phoenix's instruments would be sought.
Surface mission
Communications from the surface
The robotic arm's first movement was delayed by one day when, on May 27, 2008, commands from Earth were not relayed to the Phoenix lander on Mars. The commands went to NASA's Mars Reconnaissance Orbiter as planned, but the orbiter's Electra UHF radio system for relaying commands to Phoenix temporarily shut off. Without new commands, the lander instead carried out a set of backup activities. On May 27 the Mars Reconnaissance Orbiter relayed images and other information from those activities back to Earth.
The robotic arm was a critical part of the Phoenix Mars mission. On May 28, scientists leading the mission sent commands to unstow the robotic arm and take more images of its landing site. The images revealed that the spacecraft had landed where it had access to digging down a polygon, across the trough, and into its center.
The lander's robotic arm touched soil on Mars for the first time on May 31, 2008 (sol ). It scooped dirt and started sampling the Martian soil for ice after days of testing its systems.
Presence of shallow subsurface water ice
The polygonal cracking at the landing zone had previously been observed from orbit, and is similar to patterns seen in permafrost areas in polar and high-altitude regions of Earth. Phoenix's robotic arm camera took an image underneath the lander on sol 5 that shows patches of a smooth bright surface uncovered when thruster exhaust blew off overlying loose soil. It was later shown to be water ice (Mellon, M., et al. 2009. "The periglacial landscape at the Phoenix landing site." Journal of Geophysical Research 114, E00E07).
On June 19, 2008 (sol ), NASA announced that dice-sized clumps of bright material in the "Dodo-Goldilocks" trench dug by the robotic arm had vaporized over the course of four days, strongly implying that they were composed of water ice which sublimed following exposure. While dry ice also sublimes, under the conditions present it would do so at a rate much faster than observed.
On July 31, 2008 (sol ), NASA announced that Phoenix confirmed the presence of water ice on Mars, as predicted in 2002 by the Mars Odyssey orbiter. During the initial heating cycle of a new sample, TEGA's mass spectrometer detected water vapor when the sample temperature reached 0 °C.
Liquid water cannot exist on the surface of Mars with its present low atmospheric pressure, except at the lowest elevations for short periods.
With Phoenix in good working order, NASA announced operational funding through September 30, 2008 (sol ). The science team worked to determine whether the water ice ever thaws enough to be available for life processes and if carbon-containing chemicals and other raw materials for life are present.
Additionally during 2008 and early 2009 a debate emerged within NASA over the presence of 'blobs' which appeared on photos of the vehicle's landing struts, which have been variously described as being either water droplets or 'clumps of frost'. Due to the lack of consensus within the Phoenix science project, the issue had not been raised in any NASA news conferences.
One scientist thought that the lander's thrusters splashed a pocket of brine from just below the Martian surface onto the landing strut during the vehicle's landing. The salts would then have absorbed water vapor from the air, which would have explained how they appeared to grow in size during the first 44 sols (Martian days) before slowly evaporating as Mars temperature dropped.
Wet chemistry
On June 24, 2008 (sol 29), NASA's scientists launched a series of scientific tests. The robotic arm scooped up more soil and delivered it to three different on-board analyzers: an oven that baked it and tested the emitted gases, a microscopic imager, and a wet chemistry laboratory (WCL). The lander's robotic arm scoop was positioned over the Wet Chemistry Lab delivery funnel on Sol 29 (the 29th Martian day after landing, i.e. June 24, 2008). The soil was transferred to the instrument on sol 30 (June 25, 2008), and Phoenix performed the first wet chemistry tests. On Sol 31 (June 26, 2008), Phoenix returned the wet chemistry test results with information on the salts in the soil and its acidity. The wet chemistry lab was part of the suite of tools called the Microscopy, Electrochemistry and Conductivity Analyzer (MECA).
A 360-degree panorama assembled from images taken on sols 1 and 3 after landing. The upper portion has been vertically stretched by a factor of 8 to bring out details. Visible near the horizon at full resolution are the backshell and parachute (a bright speck above the right edge of the left solar array, about distant) and the heat shield and its bounce mark (two end-to-end dark streaks above the center of the left solar array, about distant); on the horizon, left of the weather mast, is a crater.
End of the mission
The solar-powered lander operated two months longer than its three-month prime mission. The lander was designed to last 90 days, and had been running on bonus time since the successful end of its primary mission in August 2008. On October 28, 2008 (sol ), the lander went into safe mode due to power constraints based on the insufficient amount of sunlight reaching the lander, as expected at this time of year. It was decided then to shut down the four heaters that keep the equipment warm, and upon bringing the lander back from safe mode, commands were sent to turn off two of the heaters rather than only one as was originally planned for the first step. The heaters involved provide heat to the robotic arm, TEGA instrument and a pyrotechnic unit on the lander that were unused since landing, so these three instruments were also shut down.
On November 10, Phoenix Mission Control reported the loss of contact with the Phoenix lander; the last signal was received on November 2. The demise of the craft occurred as a result of a dust storm that reduced power generation even further. While the spacecraft's work ended, the analysis of data from the instruments was in its earliest stages.
Communication attempts 2010
Though it was not designed to survive the frigid Martian winter, the spacecraft's safe mode kept the option open to reestablish communications if the lander could recharge its batteries during the next Martian spring. However, its landing location is in an area that is usually part of the north polar ice cap during the Martian winter, and the lander was seen from orbit to be encased in dry ice. It is estimated that, at its peak, the layer of CO2 ice in the lander's vicinity would total about 30 g/cm², which is enough to make a dense slab of dry ice at least thick. It was considered unlikely that the spacecraft could endure these conditions, as its fragile solar panels would likely break off under so much weight.
Scientists attempted to make contact with Phoenix starting January 18, 2010 (sol ), but were unsuccessful. Further attempts in February and April also failed to pick up any signal from the lander ("Frost-Covered Phoenix Lander Seen in Winter Images", November 4, 2009). Project manager Barry Goldstein announced on May 24, 2010, that the project was being formally ended. Images from the Mars Reconnaissance Orbiter showed that its solar panels were apparently irretrievably damaged by freezing during the Martian winter.
Results of the mission
Landscape
Unlike some other places visited on Mars with landers (Viking and Pathfinder), nearly all the rocks near Phoenix are small. For about as far as the camera can see, the land is flat but shaped into polygons between in diameter, bounded by troughs that are deep. These shapes are due to ice in the soil expanding and contracting in response to major temperature changes. The microscope showed that the soil on top of the polygons is composed of flat particles (probably a type of clay) and rounded particles. Also, unlike other places visited on Mars, the site has no ripples or dunes. Ice is present a few inches below the surface in the middle of the polygons, and along their edges the ice is at least deep. When the ice is exposed to the Martian atmosphere it slowly sublimates. Some dust devils were observed.
Weather
Snow was observed to fall from cirrus clouds. The clouds formed at a level in the atmosphere that was around , so the clouds would have to be composed of water-ice, rather than carbon dioxide-ice (dry ice) because, at the low pressure of the Martian atmosphere, the temperature for forming carbon dioxide ice is much lower—less than . It is now thought that water ice (snow) would have accumulated later in the year at this location. This represents a milestone in understanding Martian weather. Wind speeds ranged from . The usual average speed was . These speeds seem high, but the atmosphere of Mars is very thin—less than 1% of the Earth's—and so did not exert much force on the spacecraft. The highest temperature measured during the mission was , while the coldest was .
Climate cycles
Interpretation of the data transmitted from the craft was published in the journal Science. According to the peer-reviewed data, the presence of water ice was confirmed, and the site had a wetter and warmer climate in the recent past. The discovery of calcium carbonate in the Martian soil leads scientists to think that the site had been wet or damp in the geological past. During diurnal cycles, and over seasonal or longer periods, water may have been present as thin films. The tilt or obliquity of Mars changes far more than that of the Earth; hence periods of higher humidity are probable.
Surface chemistry
Chemistry results showed the surface soil to be moderately alkaline, with a pH of 7.7 ±0.5. The overall level of salinity is modest. TEGA analysis of its first soil sample indicated the presence of bound water and CO2 that were released during the final (highest-temperature, 1,000 °C) heating cycle.
The elements detected and measured in the samples are chloride, bicarbonate, magnesium, sodium, potassium, calcium, and sulfate. Further data analysis indicated that the soil contains soluble sulfate (SO₄²⁻) at a minimum of 1.1% and provided a refined formulation of the soil.
Analysis of the Phoenix WCL also showed that the Ca(ClO₄)₂ in the soil has not interacted with liquid water of any form, perhaps for as long as 600 million years. If it had, the highly soluble Ca(ClO₄)₂ in contact with liquid water would have formed only CaSO₄. This suggests a severely arid environment, with minimal or no liquid water interaction. The pH and salinity level were viewed as benign from the standpoint of biology.
Perchlorate
On August 1, 2008, Aviation Week reported that "The White House has been alerted by NASA about plans to make an announcement soon on major new Phoenix lander discoveries concerning the 'potential for life' on Mars, scientists tell Aviation Week & Space Technology." This led to subdued media speculation on whether some evidence of past or present life had been discovered. To quell the speculation, NASA released the preliminary findings stating that Martian soil contains perchlorate (ClO₄⁻) and thus may not be as life-friendly as previously thought. The presence of almost 0.5% perchlorate in the soil was an unexpected finding with broad implications.
Laboratory research published in July 2017 demonstrated that when irradiated with a simulated Martian UV flux, perchlorates become bactericidal. Two other compounds of the Martian surface, iron oxides and hydrogen peroxide, act in synergy with irradiated perchlorates to cause a 10.8-fold increase in cell death when compared to cells exposed to UV radiation alone after 60 seconds of exposure. It was also found that abraded silicates (quartz and basalt) lead to the formation of toxic reactive oxygen species. The results leave the question of the presence of organic compounds open, since heating samples containing perchlorate would have broken down any organics present. However, in the cold subsurface of Mars, which provides substantial protection against UV radiation, halotolerant organisms might survive enhanced perchlorate concentrations through physiological adaptations similar to those observed in the yeast Debaryomyces hansenii, which has been exposed in lab experiments to increasing NaClO₄ concentrations.
Perchlorate (ClO₄⁻) is a strong oxidizer, so it has the potential to be used as rocket fuel and as a source of oxygen for future missions. Also, when mixed with water, perchlorate can greatly lower the freezing point of water, in a manner similar to how salt is applied to roads to melt ice. So perchlorate may be allowing small amounts of liquid water to form on the surface of Mars today. Gullies, which are common in certain areas of Mars, may have formed from perchlorate melting ice and causing water to erode soil on steep slopes. Perchlorates have also been detected at the landing site of the Curiosity rover, nearer equatorial Mars, and in the Martian meteorite EETA79001, suggesting a "global distribution of these salts". Only highly refractory and/or well-protected organic compounds are likely to be preserved in the frozen subsurface. Therefore, the MOMA instrument planned to fly on the 2022 ExoMars rover will employ a method that is unaffected by the presence of perchlorates to detect and measure sub-surface organics.
Phoenix DVD
Attached to the deck of the lander (next to the US flag) is a special DVD compiled by The Planetary Society. The disc contains Visions of Mars, a multimedia collection of literature and art about the Red Planet. Works include the text of H.G. Wells' 1897 novel War of the Worlds (and the 1938 radio broadcast by Orson Welles), Percival Lowell's 1908 book Mars as the Abode of Life with a map of his proposed canals, Ray Bradbury's 1950 novel The Martian Chronicles, and Kim Stanley Robinson's 1993 novel Green Mars. There are also messages directly addressed to future Martian visitors or settlers from, among others, Carl Sagan and Arthur C. Clarke. In 2006, The Planetary Society collected a quarter of a million names submitted through the Internet and placed them on the disc, which claims, on the front, to be "the first library on Mars." This DVD is made of a special silica glass designed to withstand the Martian environment, lasting for hundreds (if not thousands) of years on the surface while it awaits retrieval by future explorers. This is similar in concept to the Voyager Golden Record that was sent on the Voyager 1 and Voyager 2 missions.
The text just below the center of the disk reads:
A previous CD version was supposed to have been sent with the Russian spacecraft Mars 94, intended to land on Mars in Fall 1995.
References
External links
LPL, LMSS, JPL and NASA links
Phoenix mission lead home page
Phoenix summary at JPL
Phoenix Profile by NASA's Solar System Exploration
NASA's Phoenix Photojournal
NASA's Phoenix Analyst's Notebook for accessing mission data and documents
NASA Archives of raw Phoenix mission images , most recent first
NASA TV broadcast of Phoenix landing (YouTube copy of NASA broadcast from 8 minutes before until 2 minutes after touchdown)
Blogs of the scientists and engineers of the Phoenix team from launch through the end of the mission.
Other links
Complete List of Works on the Phoenix DVD
Written Introduction to the Visions of Mars Project
Phoenix Mission Details Video
Mars Express Support to Phoenix Landing – includes animation of Phoenix's descent and landing, plus KSC images of pre-flight processing and launch
Article and news footage on the Phoenix landing
Canadian contribution at the University of Alberta
Mars Phoenix Blog Page
Software Behind the Mars Phoenix Lander (Audio Interview)
Color panorama of the landing site
Phoenix EDL Reconstruction 1B – 9 minute video simulation based on actual EDL data
Phoenix Mars Lander wins 2009 John L. "Jack" Swigert, Jr., Award for Space Achievement
Chris McKay: Results of the Phoenix Mission to Mars and Analog Sites on Earth
Mars Scout Program
Missions to Mars
NASA space probes
Mare Boreum quadrangle
Space probes launched in 2007
Spacecraft launched by Delta II rockets
Solar-powered robots
Derelict landers (spacecraft)
Soft landings on Mars
Astrobiology space missions
Message artifacts
2008 on Mars | Phoenix (spacecraft) | Astronomy | 9,408 |
29,214,429 | https://en.wikipedia.org/wiki/List%20of%20exoplanet%20firsts | This is a list of exoplanet discoveries that were the first by several criteria, including:
the detection method used,
the planet type,
the planetary system type,
the star type,
and others.
The first
The choice of "first" depends on definition and confirmation, as below. The three systems detected prior to 1994 each have a drawback, with Gamma Cephei b being unconfirmed until 2002; while the PSR B1257+12 planets orbit a pulsar. This leaves 51 Pegasi b (discovered and confirmed 1995) as the first confirmed exoplanet around a normal star.
By discovery method
By detection method
Some of these planets had already been discovered by another method but were the first to be detected by the listed method.
By system type
By star type
By planet type
Other
See also
Lists of exoplanets
List of exoplanet extremes
Methods of detecting exoplanets
Notes
References
Planetary firsts
Extrasolar planet firsts
Extrasolar planets
Firsts | List of exoplanet firsts | Astronomy | 208 |
35,840,461 | https://en.wikipedia.org/wiki/PhytoPath | PhytoPath was a joint scientific project between the European Bioinformatics Institute and Rothamsted Research, running from January 2012 to May 30, 2017. The project aimed to enable the exploitation of the growing body of “-omics” data being generated for phytopathogens, their plant hosts and related model species. Gene mutant phenotypic information is directly displayed in genome browsers.
Background
PhytoPath was a bioinformatics resource launched in 2012, which integrated genome-scale data from important plant pathogenic species with literature-curated information about the phenotypes of host infection available from the Pathogen-Host Interaction database (PHI-base). It provided access to complete genome assemblies and gene models from priority crop and model phytopathogenic species of fungi and oomycetes through the Ensembl Genomes browser interface. PhytoPath also linked directly from individual gene sequence models within the Ensembl genome browser to the peer-reviewed phenotype information curated within PHI-base. The PhytoPath resource aimed to provide tools for comparative analysis of fungal and oomycete genomes. As of its final update in May 2017, the database made accessible 275 genomic sequences from 113 fungal, 25 protist, and 137 bacterial species in genome browsers. Support for community annotation of gene models was provided for some species using the WebApollo online gene editor.
References
External links
PhytoPath
Pathogen-Host Interaction database
Ensembl Fungi
Ensembl
Biological databases
Genetic engineering in the United Kingdom
Rothamsted Experimental Station
Science and technology in Cambridgeshire
Science and technology in Hertfordshire
South Cambridgeshire District
Wellcome Trust | PhytoPath | Biology | 345 |
29,539,388 | https://en.wikipedia.org/wiki/Franco%20Levi | Franco Levi (September 20, 1914 in Turin – January 10, 2009) was an Italian engineer.
He is known for his involvement in drafting the first Eurocode as a leading member of European regulatory bodies, and was a prominent academic involved in structural engineering research.
Education
Levi received his degrees in engineering from the École Centrale in Paris and from the Polytechnic University of Turin in 1936 and 1937, respectively. Already an assistant to Professor Gustavo Colonnetti in Turin, in 1938 he was forced into exile, first in France and later in Switzerland, due to the anti-Semitic laws.
Career
Research in Turin
Back in Italy in 1945, he was able to resume his research at the Polytechnic University of Turin. This work covered the most recent topics in structural mechanics and engineering, and he published papers and books on the theory of states of coaction, on plastic theory, and on the time-dependent behaviour of concrete structures, with particular regard to creep effects. He designed the Torino Palavela.
His attention was drawn to the need for a rapid transfer of scientific advances into the practical design and construction of structures. The opportunity for this came with the development of the new technique of prestressed concrete, in which he had been scientifically involved since 1938, during his first research period. After the war, from 1945 to 1961, as Director of the CNR Consiglio Nazionale delle Ricerche (Italian National Research Council) in Turin, Franco Levi had an essential role in the international discussion on the theoretical and practical aspects of this innovative technique and in the establishment of internationally agreed design rules.
Material innovation
The innovation of prestressed concrete, together with the new approaches of plastic design and probabilistic safety criteria, led the most advanced specialists in structural design and analysis to establish a new committee in 1953 (Comité Européen du Béton – CEB), with the objective of coordinating and synthesizing research and of creating and internationally harmonizing the principles and rules for the conception, calculation, construction, and maintenance of concrete structures according to the new approaches.
Involvement in European scientific organisations
Franco Levi was appointed President of CEB in 1957 and held this position until 1968, leading the organization to the publication of the first and second sets of CEB Recommendations.
Between 1966 and 1970 he was also President of the Fédération Internationale de la Précontrainte (FIP), which had the role of promoting the innovative technique in the practical field.
In 1979 the European Community considered the work of CEB mature enough to become the basis for the first Eurocode; Franco Levi was appointed Chairman of the Drafting Committee for Eurocode 2 (Concrete Structures). Eurocode 2 was printed by the CEC in the well-known Luxembourg edition of 1988, together with EC1, EC3, EC6, and EC8. Franco Levi had an essential role in coordinating the drafting of these five Eurocodes, all based on the same "Limit States" criteria and on what was called the "semiprobabilistic approach". This format still remains the format of the subsequent set of Eurocodes issued by CEN TC 250 (Structural Eurocodes).
During the same period, he was Professor of Structural Analysis at the University of Venice and at the Polytechnic University of Turin, and Director of the Department of Structural Engineering and Soil Mechanics until 1989, when he became Emeritus.
Recognition
In 1986 he became Full Member of the Academy of Sciences of Turin.
He received honours from the Universities of Liège, Waterloo, and Venice, from the American Concrete Institute and AICAP, and was awarded the Trasenster, Freyssinet, Mörsch, Caquot, and Torroja Medals, as well as the Golden Medal of the Italian Government.
References
External links
Brief biography
1914 births
2009 deaths
Structural engineers
20th-century Italian engineers
Engineers from Turin
National Research Council (Italy) people
École Centrale Paris alumni
Polytechnic University of Turin alumni
Academic staff of the Ca' Foscari University of Venice
Academic staff of the Polytechnic University of Turin | Franco Levi | Engineering | 803 |
67,197,507 | https://en.wikipedia.org/wiki/Gouy%E2%80%93Stodola%20theorem | In thermodynamics and thermal physics, the Gouy-Stodola theorem is an important theorem for the quantification of irreversibilities in an open system, and aids in the exergy analysis of thermodynamic processes. It asserts that the rate at which work is lost during a process, or at which exergy is destroyed, is proportional to the rate at which entropy is generated, and that the proportionality coefficient is the temperature of the ambient heat reservoir. In the literature, the theorem often appears in a slightly modified form, changing the proportionality coefficient.
The theorem is named jointly after the French physicist Georges Gouy and Slovak physicist Aurel Stodola, who demonstrated the theorem in 1889 and 1905 respectively. Gouy used it while working on exergy and utilisable energy, and Stodola while working on steam and gas engines.
Overview
The Gouy-Stodola theorem is often applied to an open thermodynamic system, which can exchange heat with one or more thermal reservoirs. It holds both for systems that cannot exchange mass and for systems that mass can enter and leave.
Observe such a system, as sketched in the image shown, as it goes through some process. It is in contact with multiple reservoirs, of which one, at temperature $T_0$, is the environment reservoir. During the process, the system produces work and generates entropy. Under these conditions, the theorem has two general forms.
Work form
The reversible work $\dot{W}_\text{rev}$ is the maximal useful work which can be obtained, and can only be fully utilized in an ideal reversible process. An irreversible process produces some work $\dot{W}$, which is less than $\dot{W}_\text{rev}$. The lost work is then $\dot{W}_\text{lost} = \dot{W}_\text{rev} - \dot{W}$; in other words, $\dot{W}_\text{lost}$ is the work which was lost or not exploited during the process due to irreversibilities. In terms of lost work, the theorem generally states

$$\dot{W}_\text{lost} = T_0 \, \dot{S}_\text{gen}$$

where $\dot{W}_\text{lost}$ is the rate at which work is lost, and $\dot{S}_\text{gen}$ is the rate at which entropy is generated. Time derivatives are denoted by dots. The theorem, as stated above, holds only for the entire thermodynamic universe - the system along with its surroundings, together:

$$\dot{W}_\text{lost,tot} = T_0 \, \dot{S}_\text{gen,tot}$$

where the index "tot" denotes the total quantities produced within or by the entire universe.
Note that $\dot{W}_\text{lost}$ is a relative quantity, in that it is measured in relation to a specific thermal reservoir. In the above equations, $\dot{W}_\text{lost}$ is defined in reference to the environment reservoir, at $T_0$. When comparing the actual process to an ideal, reversible process between the same endpoints (in order to evaluate $\dot{W}_\text{rev}$, so as to find the value of $\dot{W}_\text{lost}$), only the heat interaction with the reference reservoir is allowed to vary. The heat interactions between the system and other reservoirs are kept the same. So, if a different reference reservoir, at temperature $T_R$, is chosen, the theorem would read $\dot{W}_\text{lost} = T_R \, \dot{S}_\text{gen}$, where this time $\dot{W}_\text{lost}$ is in relation to $T_R$, and in the corresponding reversible process, only the heat interaction with $T_R$ is different.
By integrating over the lifetime of the process, the theorem can also be expressed in terms of final quantities, rather than rates: $W_\text{lost} = T_0 \, S_\text{gen}$.
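As a worked illustration of this bookkeeping, the following Python sketch evaluates the lost work from the ambient temperature and the total entropy generated; all numbers are made up for demonstration and are not tied to any particular system:

```python
# Illustrative sketch of the work form W_lost = T0 * S_gen.
def lost_work(t_ambient: float, s_gen: float) -> float:
    """Work lost to irreversibility (J), for ambient temperature in K
    and total entropy generated in J/K."""
    return t_ambient * s_gen

w_rev = 5000.0                    # assumed reversible (maximal) work, J
w_lost = lost_work(298.15, 4.0)   # ambient reservoir at 298.15 K, 4 J/K generated
w_actual = w_rev - w_lost         # work actually obtained
print(w_lost, w_actual)           # 1192.6 J lost, 3807.4 J obtained
```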
Adiabatic case
The theorem also holds for adiabatic processes. That is, for closed systems, which are not in thermal contact with any heat reservoirs.
Similarly to the non-adiabatic case, the lost work is measured relative to some reference reservoir at temperature $T_R$. Even though the process itself is adiabatic, the corresponding reversible process may not be, and might require heat exchange with the reference reservoir. Thus, this can be thought of as a special case of the above statement of the theorem - an adiabatic process is one for which the heat interactions with all reservoirs are zero, and in the reversible process, only the heat interaction with the reference thermal reservoir may be different.
The adiabatic case of the theorem holds also for the other formulation of the theorem, presented below.
Exergy form
The exergy of the system is the maximal amount of useful work that the system can generate, during a process which brings it to equilibrium with its environment, or the amount of energy available. During an irreversible process, such as heat exchanges with reservoirs, exergy is destroyed. Generally, the theorem states that

$$\dot{X}_\text{destroyed} = T_0 \, \dot{S}_\text{gen}$$

where $\dot{X}_\text{destroyed}$ is the rate at which exergy is destroyed, and $\dot{S}_\text{gen}$ is the rate at which entropy is generated. As above, time derivatives are denoted by dots.
Unlike the lost work formulation, this version of the theorem holds for both the system (the control volume) and for its surroundings (the environment and the thermal reservoirs) separately:

$$\dot{X}_\text{destroyed,sys} = T_0 \, \dot{S}_\text{gen,sys}$$

and

$$\dot{X}_\text{destroyed,surr} = T_0 \, \dot{S}_\text{gen,surr}$$

where the index "sys" denotes quantities produced within or by the system itself, and "surr" within or by the surroundings. Therefore, summing these two forms, the theorem also holds for the thermodynamic universe as a whole:

$$\dot{X}_\text{destroyed,tot} = T_0 \, \dot{S}_\text{gen,tot}$$

where the index "tot" denotes the total quantities of the entire universe.
Thus, the exergy formulation of the theorem is less limited, as it can be applied on different regions separately. Nevertheless, the work form is used more often.
The proof of the theorem, in both forms, uses the first law of thermodynamics, writing out the work, heat, and entropy terms in the relevant regions, and comparing them.
Modified coefficient and effective temperature
In many cases, it is preferable to use a slightly modified version of the Gouy-Stodola theorem in work form, where $T_0$ is replaced by some effective temperature. When this is done, it often enlarges the scope of the theorem, and adapts it to be applicable to more systems or situations. For example, the corrections elaborated below are only necessary when the system exchanges heat with more than one reservoir - if it exchanges heat only at the environmental temperature $T_0$, the simple form above holds true. Additionally, modifications may change the reversible process to which the real process is compared in calculating $\dot{W}_\text{rev}$.
The modified theorem then reads

$$\dot{W}_\text{lost} = T_\text{eff} \, \dot{S}_\text{gen}$$

where $T_\text{eff}$ is the effective temperature.
For a flow process, let $s_1$ denote the specific entropy (entropy per unit mass) at the inlet, where mass flows in, and $s_2$ the specific entropy at the outlet, where mass flows out. Similarly, denote the specific enthalpies by $h_1$ and $h_2$. The inlet and outlet, in this case, function as the initial and final states of a process: mass enters the system at an initial state (the inlet, indexed "1"), undergoes some process, and then leaves at a final state (the outlet, indexed "2").
This process is then compared to a reversible process, with the same initial state, but with a (possibly) different final state. The theoretical specific entropy and enthalpy after this ideal, isentropic process are given by $s_{2s}$ and $h_{2s}$, respectively. When the actual process is compared to this theoretical reversible process and $\dot{W}_\text{lost}$ is evaluated, the proper effective temperature is given by

$$T_\text{eff} = \frac{h_2 - h_{2s}}{s_2 - s_{2s}}$$

In general, $T_\text{eff}$ lies somewhere in between the final temperature in the actual process, $T_2$, and the final temperature in the theoretical reversible process, $T_{2s}$.
This equation can sometimes be simplified. If both the pressure and the specific heat capacity $c_p$ remain constant, then the changes in enthalpy and entropy can be written in terms of the temperatures,

$$h_2 - h_{2s} = c_p \,(T_2 - T_{2s})$$

and

$$s_2 - s_{2s} = c_p \ln\!\left(\frac{T_2}{T_{2s}}\right)$$

so that the effective temperature reduces to the logarithmic mean $T_\text{eff} = \dfrac{T_2 - T_{2s}}{\ln(T_2 / T_{2s})}$. However, it is important to note that this version of the theorem doesn't relate the exact values which the original theorem does. Specifically, in comparing the actual process to a reversible one, the modified version allows the final state to be different between the two. This is in contrast to the original version, wherein the reversible process is constructed to match, so that the final states are the same.
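Under the constant-pressure, constant-heat-capacity assumption just described, the effective temperature is the logarithmic mean of the actual and isentropic outlet temperatures. A minimal Python sketch (illustrative temperatures only, not drawn from any measured process):

```python
import math

# Logarithmic-mean effective temperature between the actual outlet
# temperature t2 and the isentropic outlet temperature t2s, in kelvin.
def effective_temperature(t2: float, t2s: float) -> float:
    if t2 == t2s:          # degenerate case: the two outlets coincide
        return t2
    return (t2 - t2s) / math.log(t2 / t2s)

t2, t2s = 450.0, 400.0     # hypothetical outlet temperatures, K
print(effective_temperature(t2, t2s))  # ~424.5 K, between t2s and t2
```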
Applications
In general, the Gouy-Stodola theorem is used to quantify irreversibilities in a system and to perform exergy analysis. That is, it allows one to take a thermodynamic system and better understand how inefficient it is (energy-wise), how much work is lost, how much room there is for improvement and where. The second law of thermodynamics states, in essence, that the entropy of a system only increases. Over time, thermodynamic systems tend to gain entropy and lose energy (in approaching equilibrium): thus, the entropy is "somehow" related to how much exergy or potential for useful work a system has. The Gouy-Stodola theorem provides a concrete link. For the most part, this is how the theorem is used - to find and quantify inefficiencies in a system.
Flow processes
A flow process is a type of thermodynamic process, where matter flows in and out of an open system called the control volume. Such a process may be steady, meaning that the matter and energy flowing into and out of the system are constant through time. It can also be unsteady, or transient, meaning that the flows may change and differ at different times.
Many proofs of the theorem demonstrate it specifically for flow systems. Thus, the theorem is particularly useful in performing exergy analysis on such systems.
Vapor compression and absorption
The Gouy-Stodola theorem is often applied to refrigeration cycles. These are thermodynamic cycles or mechanical systems where external work can be used to move heat from low temperature sources to high temperature sinks, or vice versa. Specifically, the theorem is useful in analyzing vapor compression and vapor absorption refrigeration cycles.
The theorem can help identify which components of a system have major irreversibilities, and how much exergy they destroy. It can be used to find at which temperatures the performance is optimal, or what size system should be constructed. Overall, that is, the Gouy-Stodola theorem is a tool to find and quantify inefficiencies in a system, and can point to how to minimize them - this is the goal of exergy analysis. When the theorem is used for these purposes, it is usually applied in its modified form.
In ecology
Macroscopically, the theorem may be useful environmentally, in ecophysics. An ecosystem is a complex system, where many factors and components interact, some biotic and some abiotic. The Gouy-Stodola theorem can find how much entropy is generated by each part of the system, or how much work is lost. Where there is human interference in an ecosystem, whether the ecosystem continues to exist or is lost may depend on how many irreversibilities it can support. The amount of entropy which is generated or the amount of work the system can perform may vary. Hence, two different states (for example, a healthy forest versus one which has undergone significant deforestation) of the same ecosystem may be compared in terms of entropy generation, and this may be used to evaluate the sustainability of the ecosystem under human interference.
In biology
The theorem is also useful on a more microscopic scale, in biology. Living systems, such as cells, can be analyzed thermodynamically. They are rather complex systems, where many energy transformations occur, and they often waste heat. Hence, the Gouy-Stodola theorem may be useful, in certain situations, to perform exergy analysis on such systems. In particular, it may help to highlight differences between healthy and diseased cells.
Generally, the theorem may find applications in fields of biomedicine, or where biology and physics cross over, such as biochemical engineering thermodynamics.
As a variational principle
A variational principle in physics, such as the principle of least action or Fermat's principle in optics, allows one to describe the system in a global manner and to solve it using the calculus of variations. In thermodynamics, such a principle would allow a Lagrangian formulation. The Gouy-Stodola theorem can be used as the basis for such a variational principle, in thermodynamics. It has been proven to satisfy the necessary conditions.
This is fundamentally different from most of the theorem's other uses - here, it isn't being applied in order to locate components with irreversibilities or loss of exergy, but rather helps give some more general information about the system.
References
Thermodynamics | Gouy–Stodola theorem | Physics,Chemistry,Mathematics | 2,479 |
604,707 | https://en.wikipedia.org/wiki/Truth%20function | In logic, a truth function is a function that accepts truth values as input and produces a unique truth value as output. In other words: the input and output of a truth function are all truth values; a truth function will always output exactly one truth value, and inputting the same truth value(s) will always output the same truth value. The typical example is in propositional logic, wherein a compound statement is constructed using individual statements connected by logical connectives; if the truth value of the compound statement is entirely determined by the truth value(s) of the constituent statement(s), the compound statement is called a truth function, and any logical connectives used are said to be truth functional.
Classical propositional logic is a truth-functional logic, in that every statement has exactly one truth value which is either true or false, and every logical connective is truth functional (with a correspondent truth table), thus every compound statement is a truth function. On the other hand, modal logic is non-truth-functional.
Overview
A logical connective is truth-functional if the truth-value of a compound sentence is a function of the truth-value of its sub-sentences. A class of connectives is truth-functional if each of its members is. For example, the connective "and" is truth-functional since a sentence like "Apples are fruits and carrots are vegetables" is true if, and only if, each of its sub-sentences "apples are fruits" and "carrots are vegetables" is true, and it is false otherwise. Some connectives of a natural language, such as English, are not truth-functional.
Connectives of the form "x believes that ..." are typical examples of connectives that are not truth-functional. If e.g. Mary mistakenly believes that Al Gore was President of the USA on April 20, 2000, but she does not believe that the moon is made of green cheese, then the sentence
"Mary believes that Al Gore was President of the USA on April 20, 2000"
is true while
"Mary believes that the moon is made of green cheese"
is false. In both cases, each component sentence (i.e. "Al Gore was president of the USA on April 20, 2000" and "the moon is made of green cheese") is false, but each compound sentence formed by prefixing the phrase "Mary believes that" differs in truth-value. That is, the truth-value of a sentence of the form "Mary believes that..." is not determined solely by the truth-value of its component sentence, and hence the (unary) connective (or simply operator since it is unary) is non-truth-functional.
The class of classical logic connectives (e.g. &, →) used in the construction of formulas is truth-functional. Their values for various truth-values as argument are usually given by truth tables. Truth-functional propositional calculus is a formal system whose formulae may be interpreted as either true or false.
Table of binary truth functions
In two-valued logic, there are sixteen possible truth functions, also called Boolean functions, of two inputs P and Q. Any of these functions corresponds to a truth table of a certain logical connective in classical logic, including several degenerate cases such as a function not depending on one or both of its arguments. Truth and falsehood are denoted as 1 and 0, respectively, in the following truth tables for the sake of brevity.
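The sixteen functions can also be enumerated mechanically, as in the following Python sketch (an illustration, not a replacement for the truth tables themselves), which lists each binary truth function by its output column over the four input rows:

```python
from itertools import product

# Enumerate all 16 binary truth functions of two-valued logic. Each
# function is identified by its outputs over the four input rows
# (P, Q) = (1,1), (1,0), (0,1), (0,0).
rows = list(product((1, 0), repeat=2))

for outputs in product((0, 1), repeat=4):
    table = dict(zip(rows, outputs))
    print(outputs, table)

# 2^(2^2) = 16 functions are printed, matching the count in the text.
```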
Functional completeness
Because a function may be expressed as a composition, a truth-functional logical calculus does not need to have dedicated symbols for all of the above-mentioned functions to be functionally complete. This is expressed in a propositional calculus as logical equivalence of certain compound statements. For example, classical logic has $\neg P \lor Q$ equivalent to $P \to Q$. The conditional operator "→" is therefore not necessary for a classical-based logical system if "¬" (not) and "∨" (or) are already in use.
A minimal set of operators that can express every statement expressible in the propositional calculus is called a minimal functionally complete set. A minimally complete set of operators is achieved by NAND alone {↑} and NOR alone {↓}.
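As a quick check of single-operator completeness, the following Python sketch (the function names are ad hoc) builds NOT, OR, and AND from NOR alone and verifies them over all inputs:

```python
# Verifying that NOR alone ({↓}) expresses NOT, OR and AND, one of the
# two single-operator functionally complete sets named above.
def nor(a: bool, b: bool) -> bool:
    return not (a or b)

def not_(a):          # ¬a  =  a ↓ a
    return nor(a, a)

def or_(a, b):        # a ∨ b  =  ¬(a ↓ b)
    return nor(nor(a, b), nor(a, b))

def and_(a, b):       # a ∧ b  =  ¬a ↓ ¬b
    return nor(not_(a), not_(b))

for a in (True, False):
    assert not_(a) == (not a)
    for b in (True, False):
        assert or_(a, b) == (a or b)
        assert and_(a, b) == (a and b)
```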
The following are the minimal functionally complete sets of operators whose arities do not exceed 2:
One element {↑}, {↓}.
Two elements , , , , , , , , , , , , , , , , , .
Three elements , , , , , .
Algebraic properties
Some truth functions possess properties which may be expressed in the theorems containing the corresponding connective. Some of those properties that a binary truth function (or a corresponding logical connective) may have are:
associativity: Within an expression containing two or more of the same associative connectives in a row, the order of the operations does not matter as long as the sequence of the operands is not changed.
commutativity: The operands of the connective may be swapped without affecting the truth-value of the expression.
distributivity: A connective denoted by · distributes over another connective denoted by +, if a · (b + c) = (a · b) + (a · c) for all operands a, b, c.
idempotence: Whenever the operands of the operation are the same, the connective gives the operand as the result. In other words, the operation is both truth-preserving and falsehood-preserving (see below).
absorption: A pair of connectives satisfies the absorption law if a · (a + b) = a for all operands a, b.
A set of truth functions is functionally complete if and only if for each of the following five properties it contains at least one member lacking it (a brute-force check of these properties appears after this list):
monotonic: If f(a1, ..., an) ≤ f(b1, ..., bn) for all a1, ..., an, b1, ..., bn ∈ {0,1} such that a1 ≤ b1, a2 ≤ b2, ..., an ≤ bn. E.g., ∨, ∧.
affine: For each variable, changing its value either always or never changes the truth-value of the operation, for all fixed values of all other variables. E.g., ¬, ↔.
self dual: To read the truth-value assignments for the operation from top to bottom on its truth table is the same as taking the complement of reading it from bottom to top; in other words, f(¬a1, ..., ¬an) = ¬f(a1, ..., an). E.g., ¬.
truth-preserving: The interpretation under which all variables are assigned a truth value of true produces a truth value of true as a result of these operations. E.g., ∧, ∨. (see validity)
falsehood-preserving: The interpretation under which all variables are assigned a truth value of false produces a truth value of false as a result of these operations. E.g., ∧, ∨. (see validity)
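A brute-force test of these five properties for a binary truth function might look like the following Python sketch (the helper names are invented for illustration); it confirms that NAND lacks all five, which is why {↑} alone is functionally complete:

```python
from itertools import product

# A binary truth function is given as a dict over 0/1 input pairs.
def truth_preserving(f):  return f[(1, 1)] == 1
def false_preserving(f):  return f[(0, 0)] == 0

def monotonic(f):
    pts = list(product((0, 1), repeat=2))
    return all(f[a] <= f[b] for a in pts for b in pts
               if all(x <= y for x, y in zip(a, b)))

def self_dual(f):
    return all(f[(1 - a, 1 - b)] == 1 - f[(a, b)]
               for a, b in product((0, 1), repeat=2))

def affine(f):  # f is affine iff it equals c0 ^ c1*a ^ c2*b for some bits c
    pts = list(product((0, 1), repeat=2))
    return any(all(f[(a, b)] == c0 ^ (c1 & a) ^ (c2 & b) for a, b in pts)
               for c0, c1, c2 in product((0, 1), repeat=3))

nand = {(a, b): 1 - (a & b) for a, b in product((0, 1), repeat=2)}
# NAND lacks all five properties, so {NAND} alone is functionally complete.
print([p(nand) for p in (truth_preserving, false_preserving,
                         monotonic, self_dual, affine)])  # five Falses
```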
Arity
A concrete function may be also referred to as an operator. In two-valued logic there are 2 nullary operators (constants), 4 unary operators, 16 binary operators, 256 ternary operators, and $2^{2^n}$ n-ary operators. In three-valued logic there are 3 nullary operators (constants), 27 unary operators, 19683 binary operators, 7625597484987 ternary operators, and $3^{3^n}$ n-ary operators. In k-valued logic, there are k nullary operators, $k^k$ unary operators, $k^{k^2}$ binary operators, $k^{k^3}$ ternary operators, and $k^{k^n}$ n-ary operators. An n-ary operator in k-valued logic is a function from a set of $k^n$ input tuples to the set of k truth values. Therefore, the number of such operators is $k^{k^n}$, which is how the above numbers were derived.
However, some of the operators of a particular arity are actually degenerate forms that perform a lower-arity operation on some of the inputs and ignore the rest of the inputs. Out of the 256 ternary Boolean operators cited above, 38 of them are such degenerate forms of binary or lower-arity operators, as found using the inclusion–exclusion principle. The ternary operator $f(x, y, z) = \lnot x$ is one such operator which is actually a unary operator applied to one input, ignoring the other two inputs.
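The degenerate count can also be verified by direct enumeration, as in this Python sketch (illustrative only):

```python
from itertools import product

# Count the ternary Boolean operators that are degenerate, i.e. ignore
# at least one of their three inputs, by checking every output table.
def ignores(table, i):
    """True if the function given as an 8-output tuple ignores input i."""
    pts = list(product((0, 1), repeat=3))
    f = dict(zip(pts, table))
    return all(f[p] == f[p[:i] + (1 - p[i],) + p[i + 1:]] for p in pts)

degenerate = sum(
    any(ignores(table, i) for i in range(3))
    for table in product((0, 1), repeat=8)
)
print(degenerate)  # 38, matching the inclusion-exclusion count in the text
```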
"Not" is a unary operator, it takes a single term (¬P). The rest are binary operators, taking two terms to make a compound statement (P ∧ Q, P ∨ Q, P → Q, P ↔ Q).
The set of logical operators $\Omega$ may be partitioned into disjoint subsets as follows:

$$\Omega = \Omega_0 \cup \Omega_1 \cup \ldots \cup \Omega_j \cup \ldots \cup \Omega_m$$

In this partition, $\Omega_j$ is the set of operator symbols of arity $j$.
In the more familiar propositional calculi, $\Omega$ is typically partitioned as follows:
nullary operators: $\Omega_0 = \{\bot, \top\}$
unary operators: $\Omega_1 = \{\lnot\}$
binary operators: $\Omega_2 \supseteq \{\land, \lor, \to, \leftrightarrow\}$
Principle of compositionality
Instead of using truth tables, logical connective symbols can be interpreted by means of an interpretation function and a functionally complete set of truth-functions (Gamut 1991), as detailed by the principle of compositionality of meaning.
Let I be an interpretation function, let Φ, Ψ be any two sentences and let the truth function fnand be defined as:
fnand(T,T) = F; fnand(T,F) = fnand(F,T) = fnand(F,F) = T
Then, for convenience, fnot, for, fand and so on are defined by means of fnand:
fnot(x) = fnand(x,x)
for(x,y) = fnand(fnot(x), fnot(y))
fand(x,y) = fnot(fnand(x,y))
or, alternatively, fnot, for, fand and so on are defined directly:
fnot(T) = F; fnot(F) = T;
for(T,T) = for(T,F) = for(F,T) = T; for(F,F) = F
fand(T,T) = T; fand(T,F) = fand(F,T) = fand(F,F) = F
Then

I(¬Φ) = fnot(I(Φ))
I(Φ ∨ Ψ) = for(I(Φ), I(Ψ))
I(Φ ∧ Ψ) = fand(I(Φ), I(Ψ))

etc.
Thus if S is a sentence that is a string of symbols consisting of logical symbols v1...vn representing logical connectives, and non-logical symbols c1...cn, then provided that v1 to vn have been interpreted by means of fnand (or any other functionally complete set of truth-functions), the truth-value of I(S) is determined entirely by the truth-values of c1...cn, i.e. of I(c1)...I(cn). In other words, as expected and required, S is true or false only under an interpretation of all its non-logical symbols.
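A direct Python rendering of these definitions (an illustrative sketch; the example atomic sentences and the dictionary standing in for I are invented here) shows how the truth-value of a compound sentence is computed entirely from its parts:

```python
# Interpreting connectives by a single truth function fnand, following
# the definitions above.
def fnand(x: bool, y: bool) -> bool:
    return not (x and y)

def fnot(x):     return fnand(x, x)
def for_(x, y):  return fnand(fnot(x), fnot(y))
def fand(x, y):  return fnot(fnand(x, y))

# I maps atomic (non-logical) sentences to truth-values; the compound's
# value is then a function of the component values alone.
I = {"apples are fruits": True, "carrots are vegetables": True}
compound = fand(I["apples are fruits"], I["carrots are vegetables"])
print(compound)  # True, determined entirely by the two component values
```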
Computer science
Logical operators are implemented as logic gates in digital circuits. Practically all digital circuits (the major exception is DRAM) are built up from NAND, NOR, NOT, and transmission gates. NAND and NOR gates with 3 or more inputs rather than the usual 2 inputs are fairly common, although they are logically equivalent to a cascade of 2-input gates. All other operators are implemented by breaking them down into a logically equivalent combination of 2 or more of the above logic gates.
The "logical equivalence" of "NAND alone", "NOR alone", and "NOT and AND" is similar to Turing equivalence.
The fact that all truth functions can be expressed with NOR alone is demonstrated by the Apollo guidance computer.
See also
Bertrand Russell and Alfred North Whitehead,Principia Mathematica, 2nd edition
Ludwig Wittgenstein,Tractatus Logico-Philosophicus, Proposition 5.101
Bitwise operation
Binary function
Boolean domain
Boolean logic
Boolean-valued function
List of Boolean algebra topics
Logical constant
Modal operator
Propositional calculus
Truth-functional propositional logic
Notes
References
Further reading
Józef Maria Bocheński (1959), A Précis of Mathematical Logic, translated from the French and German versions by Otto Bird, Dordrecht, South Holland: D. Reidel.
Alonzo Church (1944), Introduction to Mathematical Logic, Princeton, NJ: Princeton University Press. See the Introduction for a history of the truth function concept.
Mathematical logic
Logical truth | Truth function | Mathematics | 2,556 |
1,989,166 | https://en.wikipedia.org/wiki/Phagosome | In cell biology, a phagosome is a vesicle formed around a particle engulfed by a phagocyte via phagocytosis. Professional phagocytes include macrophages, neutrophils, and dendritic cells (DCs).
A phagosome is formed by the fusion of the cell membrane around a microorganism, a senescent cell or an apoptotic cell. Phagosomes have membrane-bound proteins to recruit and fuse with lysosomes to form mature phagolysosomes. The lysosomes contain hydrolytic enzymes and reactive oxygen species (ROS) which kill and digest the pathogens. Phagosomes can also form in non-professional phagocytes, but they can only engulf a smaller range of particles, and do not contain ROS. The useful materials (e.g. amino acids) from the digested particles are moved into the cytosol, and waste is removed by exocytosis. Phagosome formation is crucial for tissue homeostasis and both innate and adaptive host defense against pathogens.
However, some bacteria can exploit phagocytosis as an invasion strategy. They either reproduce inside of the phagolysosome (e.g. Coxiella spp.) or escape into the cytoplasm before the phagosome fuses with the lysosome (e.g. Rickettsia spp.). Many Mycobacteria, including Mycobacterium tuberculosis and Mycobacterium avium paratuberculosis, can manipulate the host macrophage to prevent lysosomes from fusing with phagosomes and creating mature phagolysosomes. Such incomplete maturation of the phagosome maintains an environment favorable to the pathogens inside it.
Formation
Phagosomes are large enough to degrade whole bacteria, or apoptotic and senescent cells, which are usually >0.5μm in diameter. This means a phagosome is several orders of magnitude bigger than an endosome, which is measured in nanometres.
Phagosomes are formed when pathogens or opsonins bind to a transmembrane receptor, which are randomly distributed on the phagocyte cell surface. Upon binding, "outside-in" signalling triggers actin polymerisation and pseudopodia formation, which surrounds and fuses behind the microorganism. Protein kinase C, phosphoinositide 3-kinase, and phospholipase C (PLC) are all needed for signalling and controlling particle internalisation. More cell surface receptors can bind to the particle in a zipper-like mechanism as the pathogen is surrounded, increasing the binding avidity. Fc receptor (FcR), complement receptors (CR), mannose receptor and dectin-1 are phagocytic receptors, which means that they can induce phagocytosis if they are expressed in non-phagocytic cells such as fibroblasts. Other proteins such as Toll-like receptors are involved in pathogen pattern recognition and are often recruited to phagosomes but do not specifically trigger phagocytosis in non-phagocytic cells, so they are not considered phagocytic receptors.
Opsonisation
Opsonins are molecular tags such as antibodies and complements that attach to pathogens and up-regulate phagocytosis. Immunoglobulin G (IgG) is the major type of antibody present in the serum. It is part of the adaptive immune system, but it links to the innate response by recruiting macrophages to phagocytose pathogens. The antibody binds to microbes with the variable Fab domain, and the Fc domain binds to Fc receptors (FcR) to induce phagocytosis.
Complement-mediated internalisation has much less significant membrane protrusions, but the downstream signalling of both pathways converge to activate Rho GTPases. They control actin polymerisation which is required for the phagosome to fuse with endosomes and lysosomes.
Non-phagocytic cells
Other non-professional phagocytes have some degree of phagocytic activity, such as thyroid and bladder epithelial cells that can engulf erythrocytes and retinal epithelial cells that internalise retinal rods. However non-professional phagocytes do not express specific phagocytic receptors such as FcR and have a much lower rate of internalisation.
Some invasive bacteria can also induce phagocytosis in non-phagocytic cells to mediate host uptake. For example, Shigella can secrete toxins that alter the host cytoskeleton and enter the basolateral side of enterocytes.
Structure
As the membrane of the phagosome is formed by the fusion of the plasma membrane, the basic composition of the phospholipid bilayer is the same. Endosomes and lysosomes then fuse with the phagosome to contribute to the membrane, especially when the engulfed particle is very big, such as a parasite. They also deliver various membrane proteins to the phagosome and modify the organelle structure.
Phagosomes can engulf artificial low-density latex beads and can then be purified along a sucrose concentration gradient, allowing the structure and composition to be studied. By purifying phagosomes at different time points, the maturation process can also be characterised. Early phagosomes are characterised by Rab5, which transitions into Rab7 as the vesicle matures into a late phagosome.
Maturation process
The nascent phagosome is not inherently bactericidal. As it matures, it becomes more acidic, from pH 6.5 to pH 4, and gains characteristic protein markers and hydrolytic enzymes. The different enzymes function at different optimal pH values, so that each works within a narrow stage of the maturation process, and enzyme activity can be fine-tuned by modifying the pH level, allowing for greater flexibility. The phagosome moves along microtubules of the cytoskeleton, fusing with endosomes and lysosomes sequentially in a dynamic "kiss-and-run" manner. This intracellular transport depends on the size of the phagosome: larger organelles (with a diameter of about 3 μm) are transported very persistently from the cell periphery towards the perinuclear region, whereas smaller organelles (with a diameter of about 1 μm) are transported more bidirectionally, back and forth between the cell center and cell periphery. Vacuolar proton pumps (v-ATPases) are delivered to the phagosome to acidify the organelle compartment, creating a more hostile environment for pathogens and facilitating protein degradation. The bacterial proteins are denatured at low pH and become more accessible to the proteases, which are unaffected by the acidic environment. The enzymes are later recycled from the phagolysosome before egestion, so they are not wasted. The composition of the phospholipid membrane also changes as the phagosome matures.
Fusion may take minutes to hours depending on the contents of the phagosome; FcR- or mannose receptor-mediated fusion lasts less than 30 minutes, but phagosomes containing latex beads may take several hours to fuse with lysosomes. It is suggested that the composition of the phagosome membrane affects the rate of maturation. Mycobacterium tuberculosis has a very hydrophobic cell wall, which is hypothesised to prevent membrane recycling and the recruitment of fusion factors, so the phagosome does not fuse with lysosomes and the bacterium avoids degradation.
Smaller lumenal molecules are transferred by fusion faster than larger molecules, which suggests that a small aqueous channel forms between the phagosome and other vesicles during "kiss-and-run", through which only limited exchange is allowed.
Fusion regulation
Shortly after internalisation, F-actin depolymerises from the newly formed phagosome so it becomes accessible to endosomes for fusion and delivery of proteins. The maturation process is divided into early and late stages depending on characteristic protein markers, regulated by small Rab GTPases. Rab5 is present on early phagosomes, and controls the transition to late phagosomes marked by Rab7.
Rab5 recruits PI-3 kinase and other tethering proteins such as Vps34 to the phagosome membrane, so endosomes can deliver proteins to the phagosome. Rab5 is partially involved in the transition to Rab7, via the CORVET complex and the HOPS complex in yeast. The exact maturation pathway in mammals is not well understood, but it is suggested that HOPS can bind Rab7 and displace the guanosine nucleotide dissociation inhibitor (GDI). Rab11 is involved in membrane recycling.
Phagolysosome
The phagosome fuses with lysosomes to form a phagolysosome, which has various bactericidal properties. The phagolysosome contains reactive oxygen and nitrogen species (ROS and RNS) and hydrolytic enzymes. The compartment is also acidic due to proton pumps (v-ATPases) that transport H+ across the membrane, used to denature the bacterial proteins.
The exact properties of phagolysosomes vary depending on the type of phagocyte. Those in dendritic cells have weaker bactericidal properties than those in macrophages and neutrophils. Also, macrophages are divided into pro-inflammatory "killer" M1 and "repair" M2. The phagolysosomes of M1 can metabolise arginine into highly reactive nitric oxide, while M2 use arginine to produce ornithine to promote cell proliferation and tissue repair.
Function
Pathogen degradation
Macrophages and neutrophils are professional phagocytes in charge of most of the pathogen degradation, but they have different bactericidal methods. Neutrophils have granules that fuse with the phagosome. The granules contain NADPH oxidase and myeloperoxidase, which produce toxic oxygen and chlorine derivatives to kill pathogens in an oxidative burst. Proteases and anti-microbial peptides are also released into the phagolysosome. Macrophages lack granules, and rely more on phagolysosome acidification, glycosidases, and proteases to digest microbes. Phagosomes in dendritic cells are less acidic and have much weaker hydrolytic activity, due to a lower concentration of lysosomal proteases and even the presence of protease inhibitors.
Inflammation
Phagosome formation is tied to inflammation via common signalling molecules. PI-3 kinase and PLC are involved in both the internalisation mechanism and triggering inflammation. The two proteins, along with Rho GTPases, are important components of the innate immune response, inducing cytokine production and activating the MAP kinase signalling cascade. Pro-inflammatory cytokines including IL-1β, IL-6, TNFα, and IL-12 are all produced.
The process is tightly regulated, and the inflammatory response varies depending on the particle type within the phagosome. Pathogen-infected apoptotic cells will trigger inflammation, but damaged cells that are degraded as part of normal tissue turnover do not. The response also differs according to the type of opsonin mediating phagocytosis. FcR- and mannose receptor-mediated reactions produce pro-inflammatory reactive oxygen species and arachidonic acid molecules, but CR-mediated reactions do not result in those products.
Antigen presentation
Immature dendritic cells (DCs) can phagocytose, but mature DCs cannot due to changes in Rho GTPases involved in cytoskeleton remodelling. The phagosomes of DCs are less hydrolytic and acidic than those of macrophages and neutrophils, as DCs are mainly involved in antigen presentation rather than pathogen degradation. They need to retain protein fragments of a suitable size for specific bacterial recognition, so the peptides are only partially degraded. Peptides from the bacteria are trafficked to the Major Histocompatibility Complex (MHC). The peptide antigens are presented to lymphocytes, where they bind to T-cell receptors and activate T-cells, bridging the gap between innate and adaptive immunity. This is specific to mammals, birds, and jawed fish, as insects do not have adaptive immunity.
Nutrient
Ancient single-celled organisms such as amoebae use phagocytosis as a way to acquire nutrients rather than as an immune strategy. They engulf other smaller microbes and digest them within the phagosome at a rate of around one bacterium per minute, which is much faster than professional phagocytes. For the soil amoeba Dictyostelium discoideum, the main food source is the bacterium Legionella pneumophila, which causes Legionnaire's disease in humans. Phagosome maturation in amoebae is very similar to that in macrophages, so they are used as a model organism to study the process.
Tissue clearance
Phagosomes degrade senescent cells and apoptotic cells to maintain tissue homeostasis. Erythrocytes have one of the highest turnover rates in the body, and they are phagocytosed by macrophages in the liver and spleen. In the embryo, the process of removing dead cells is not well characterised, but it is not performed by macrophages or other cells derived from hematopoietic stem cells. It is only in the adult that apoptotic cells are phagocytosed by professional phagocytes. Whereas inflammation is triggered by certain pathogen- or damage-associated molecular patterns (PAMPs or DAMPs), the removal of senescent cells is non-inflammatory.
Autophagosome
Autophagosomes differ from phagosomes in that they are mainly used to selectively degrade damaged cytosolic organelles such as mitochondria (mitophagy). However, when the cell is starved or stressed, autophagosomes can also non-selectively degrade organelles to provide the cell with amino acids and other nutrients. Autophagy is not limited to professional phagocytes; it was first discovered in rat hepatocytes by the cell biologist Christian de Duve. Autophagosomes have a double membrane: the inner one comes from the engulfed organelle, while the outer membrane is speculated to be formed from the endoplasmic reticulum or the ER-Golgi Intermediate Compartment (ERGIC). The autophagosome also fuses with lysosomes to degrade its contents. When M. tuberculosis inhibits phagosome acidification, interferon gamma can induce autophagy and rescue the maturation process.
Bacterial evasion and manipulation
Many bacteria have evolved to evade the bactericidal properties of phagosomes or even exploit phagocytosis as an invasion strategy.
Mycobacterium tuberculosis targets M2 macrophages in the lower parts of the respiratory pathway, which do not produce ROS. M. tuberculosis can also manipulate the signalling pathways by secreting phosphatases such as PtpA and SapM, which disrupt protein recruitment and block phagosome acidification.
Legionella pneumophila can re-model the phagosome membrane to imitate vesicles in other parts of the secretory pathway, so lysosomes do not recognise the phagosome and do not fuse with it. The bacterium secretes toxins that interfere with host trafficking, so the Legionella-containing vacuole recruits membrane proteins usually found on the endoplasmic reticulum or the ERGIC. This re-directs secretory vesicles to the modified phagosome and deliver nutrients to the bacterium.
Listeria monocytogenes secretes a pore-forming protein, listeriolysin O, so that the bacterium can escape the phagosome into the cytosol. Listeriolysin is activated by the acidic environment of the phagosome. In addition, Listeria secretes two phospholipase C enzymes that facilitate phagosome escape.
See also
Autophagosome
Phagocyte
References
Cell biology
Vesicles | Phagosome | Biology | 3,449 |
4,511,669 | https://en.wikipedia.org/wiki/Range%20Safety%20and%20Telemetry%20System | Range Safety and Telemetry System (RSTS) is a GPS-based, S-band telemetry receiving and UHF command destruct system, with two 5.4-meter telemetry and command destruct auto-tracking antennas. The system was built by Honeywell International. It can provide four redundant telemetry links and can be expanded to receive additional telemetry links.
The prime purpose of the RSTS is to provide the range safety and telemetry functions necessary to track and verify a safe rocket flight within prescribed boundaries or safely terminate an errant rocket.
One of the two operationally identical RSTS systems is located at the Kodiak Launch Center (KLC) in Alaska, while the second is a mobile unit deployed at varying mission-specific sites, such as off-axis locations. Either system can operate in conjunction with the other or as a stand-alone unit.
The RSTS is similar in design to the Ballistic Missile Range Safety Technology (BMRST) operated at Cape Canaveral Space Force Station, Florida.
See also
Index of aviation articles
External links
RSTS Operating at KLC
KLC Users Manual (11.7MB PDF)
Honeywell Range Safety
Telemetry | Range Safety and Telemetry System | Astronomy | 243 |
965,698 | https://en.wikipedia.org/wiki/Little%20Dumbbell%20Nebula | The Little Dumbbell Nebula, also known as Messier 76, NGC 650/651, the Barbell Nebula, or the Cork Nebula, is a planetary nebula in the northern constellation of Perseus. It was discovered by Pierre Méchain in 1780 and included in Charles Messier's catalog of comet-like objects as number 76. It was first classified as a planetary nebula in 1918 by the astronomer Heber Doust Curtis. However, others might have previously recognized it as a planetary nebula; for example, William Huggins found its spectrum indicated it was a nebula (instead of a galaxy or a star cluster), and Isaac Roberts in 1891 suggested that M76 might be similar to the Ring Nebula (M57), but seen from the side.
M76 is currently classed as a type of bipolar planetary nebula (BPN), composed of a ring, which we see edge-on as the central bar structure, and two lobes on either opening of the ring. The progenitor star ejected the ring when it was on the asymptotic giant branch, before it had become a planetary nebula. Soon afterward the star expelled the rest of its outer layers, creating the two lobes and leaving a white dwarf as the remnant of the star's core. The distance to M76 is currently estimated to be 780 parsecs or 2,500 light years, making its average dimensions about 0.378 pc (1.23 ly) across.
The total nebula shines at an apparent magnitude of +10.1, with its central white dwarf or planetary nebula nucleus (PNN) at magnitude +15.9v (16.1B). The nucleus has a surface temperature of about 88,400 K. The nebula has a radial velocity of −19.1 km/s.
The Little Dumbbell Nebula derives its common name from its resemblance to the Dumbbell Nebula (M27) in the constellation of Vulpecula. It was originally thought to consist of two separate emission nebulae so it bears the New General Catalogue numbers NGC 650 and 651.
See also
The Dumbbell (M27), Ring (M57), and Helix (NGC 7293) Nebulae (three other nebulae of the same type as M76)
List of Messier objects
List of planetary nebulae
References
External links
NightSkyInfo.com – M76, the Little Dumbbell Nebula
Little Dumbbell Nebula (M76, NGC 650 and 651)
The Little Dumbbell Nebula @ SEDS Messier pages
Messier objects
NGC objects
Perseus (constellation)
Planetary nebulae
Orion–Cygnus Arm
Astronomical objects discovered in 1780
Discoveries by Pierre Méchain | Little Dumbbell Nebula | Astronomy | 554 |
24,885,129 | https://en.wikipedia.org/wiki/Virtually%20safe%20dose | A virtually safe dose (VSD) may be determined for those carcinogens not assumed to have a threshold. Virtually safe doses are calculated by regulatory agencies to represent the level of exposure to such carcinogenic agents at which an excess of cancers beyond the level accepted by society is not expected.
See also
Dose-response relationship
Linear no-threshold model
References
Toxicology | Virtually safe dose | Environmental_science | 76 |
8,543,739 | https://en.wikipedia.org/wiki/Gamebird%20hybrids | Gamebird hybrids are the result of crossing species of game birds, including ducks, with each other and with domestic poultry. These hybrid species may sometimes occur naturally in the wild or more commonly through the deliberate or inadvertent intervention of humans.
Charles Darwin described hybrids of game birds and domestic fowl in The Variation of Animals and Plants Under Domestication:
Mr. Hewitt, who has had great experience in crossing tame cock-pheasants with fowls belonging to five breeds, gives as the character of all 'extraordinary wildness' (13/42. 'The Poultry Book' by Tegetmeier 1866 pages 165, 167.); but I have myself seen one exception to this rule. Mr. S. J. Salter (13/43. 'Natural History Review' 1863 April page 277.) who raised a large number of hybrids from a bantam-hen by Gallus sonneratii, states that 'all were exceedingly wild.' [...] utterly sterile male hybrids from the pheasant and the fowl act in the same manner, 'their delight being to watch when the hens leave their nests, and to take on themselves the office of a sitter.' (13/57. 'Cottage Gardener' 1860 page 379.) [...] Mr. Hewitt gives it as a general rule with fowls, that crossing the breed increases their size. He makes this remark after stating that hybrids from the pheasant and fowl are considerably larger than either progenitor: so again, hybrids from the male golden pheasant and female common pheasant 'are of far larger size than either parent-bird.' (17/39. Ibid 1866 page 167; and 'Poultry Chronicle' volume 3 1855 page 15.)
Pheasant and grouse hybrids
Hybrids have been obtained between the "ornamental" species of pheasants e.g. Lady Amherst's, silver and Reeves's pheasants.
Natural pheasant and grouse hybrids have been reported:
Capercaillie or wood grouse (Tetrao urogallus) and black grouse (Tetrao tetrix) in the UK
Dusky or blue grouse (Dendragapus obscurus) and common pheasant (Phasianus colchicus) near Portland, Oregon, United States
Sharp-tailed grouse (Tympanuchus phasianellus) and prairie chicken (Tympanuchus cupido)
Willow ptarmigan (Lagopus lagopus) and spruce grouse (Falcipennis canadensis)
Chicken hybrids
Charles Darwin mentioned crosses between domestic fowl and pheasants in Origin of Species [...] from observations communicated to me by Mr. Hewitt, who has had great experience in hybridising pheasants and fowls and later in The Variation of Animals and Plants Under Domestication (top of this page), where he mentioned effeminate behaviour in the male hybrids.
In her book Bird Hybrids, A. P. Gray lists numerous crosses between chickens (Gallus gallus) and other types of fowl. Domestic fowl can be crossed, and produce fertile offspring, with silver pheasants, red junglefowl and green junglefowl. They have also produced hybrids with peafowl, chachalacas, capercaillie, grouse, quail, curassows, pheasants and guans.
Domestic fowl have been crossed with guineafowl and also with common pheasant (Phasianus colchicus). Domestic fowl/pheasant hybrids have also occurred naturally. Domestic chickens and Japanese quail (Coturnix japonica) have been hybridised using artificial insemination.
The peafowl (Pavo cristatus) from Asia and the common guineafowl (Numida meleagris) from Africa have been crossed.
Chicken and turkey hybrids
There have been attempted crosses between domestic turkeys (Meleagris gallopavo) and chickens. According to Gray, no hybrids hatched in twelve studies. Other reports found only a few fertile eggs were produced and very few resulted in advanced embryos. According to Olsen, 23 hybrids were obtained from 302 embryos which resulted from 2,132 eggs. Dark Cornish cockerels and Rhode Island Red cockerels successfully fertilised turkey eggs. Harada & Buss reported hybridisation experiments between Beltsville Small White Turkeys and two strains of chickens. When male chickens inseminated female turkeys, both male and female embryos form, but the males are much less viable and usually die in the early stages of development. When male turkeys inseminated female chickens, no hybrids resulted; however, the unfertilised chicken eggs began to divide. According to Olsen, turkey-chicken crosses produced all males.
A supposed turkey × pheasant hybrid was reported by Edwards in 1761.
A hybrid between a turkey and Ocellated turkey was reported in 1956.
Duck hybrids
Charles Darwin also described duck hybrids in The Variation of Animals and Plants Under Domestication:
Hybrids are often raised between the common and musk duck, and I have been assured by three persons, who have kept these crossed birds, that they were not wild; but Mr. Garnett (13/45. As stated by Mr. Orton in his 'Physiology of Breeding' page 12.) observed that his hybrids were wild, and exhibited 'migratory propensities' of which there is not a vestige in the common or musk duck.
Hybrids between mallard ducks and Aylesbury ducks (a white domestic breed derived from the mallard) are frequently seen in British parks where the two types are present. The hybrids often resemble a dark-coloured mallard with a white breast. Mallard ducks also hybridise with the Muscovy duck, producing pied offspring.
Hybrids between the ruddy duck and white-headed duck are undesirable in parts of Europe where the introduced ruddy duck has bred with native white-headed ducks. The increasing number of ruddy ducks and hybrids threatens the existence of the white-headed ducks, resulting in shooting campaigns to remove the introduced species. This is controversial as some believe that nature should be allowed to take its course, even though this favours the more successful introduced species.
A duck–chicken chimera was prepared by transferring donor germ cells into the embryonic cavity of a zygote. The transfer of dermal cells into recipient embryos to produce chimerism provides a basis for studying the barriers to fertilization in interspecific reproductive chimerism. This will help protect endangered birds, contribute to a better understanding of poultry physiology and embryonic development, and provide technical methods for poultry transgenics.
Hybrid ducks of the genus Aythya include birds that are a mixture of tufted duck, greater scaup, pochard, ferruginous duck and ring-necked duck.
List of duck hybrids:
Northern pintail × mallard
Ruddy duck × white-headed duck
Ruddy shelduck × shelduck
White-faced whistling duck × plumed whistling duck
Baikal teal × northern pintail
Hooded merganser × smew
Eurasian wigeon × American wigeon
Mallard × grey duck, a subspecies of the Pacific black duck.
See also Mariana mallard.
Goose hybrids
Goose hybrids include Canada goose × greylag goose, Canada goose × domesticated geese, emperor goose × Canada goose, red-breasted goose × Canada goose, Canada goose × white-fronted goose and barnacle goose × Canada goose.
See also
Bird hybrid
Haldane's rule
References
Darwin, Charles. The Variation of Animals and Plants Under Domestication.
Darwin, Charles. Origin of Species.
External links
Bird Hybrids Database
Hybrid Stifftails in Spain
Ruddy ducks: a conservation problem
This article uses content from Hybrid Fowl licensed under the GFDL.
Hybridisation in birds
Hybrids
Intergeneric hybrids | Gamebird hybrids | Biology | 1,648 |
77,746,656 | https://en.wikipedia.org/wiki/Project%20Bergamot | Project Bergamot is a joint project between several European universities and Mozilla for the development of machine translation software based on artificial neural networks, which is intended for local execution on end-user devices.
The software library that was created and the associated language models were made available to the general public as Free Software. Execution requires an x86 CPU with SSE4.1 instruction set extensions. In 2022, Devin Coldewey of TechCrunch judged the translation quality to be "more than adequate", but considered Firefox Translations not yet fully mature.
Usage
Mozilla used the Bergamot Translator to expand its web browser Firefox with a feature for translating web pages, the absence of which was previously considered an important gap in Firefox's feature set. It is often compared to the much older corresponding feature in Google Chrome, which utilizes a cloud-based background service. In contrast, Firefox Translations does not require any data to leave the user's computer, resulting in advantages in terms of data protection, availability and possibly response times. Only the installation of a new language model needs to take place the first time a new language is encountered. Greater independence from large technology companies and their interests is also mentioned as an important advantage. Mozilla thus strengthened its position as an alternative software vendor with a particular focus on data protection and security. Mozilla followed up with the similar feature of speech recognition for spoken user input, based on whisperfile.
On the other hand, slow translation times have been observed, especially on older devices. Also, Firefox Translations initially supported far fewer language pairs than other major translation services and is only gradually adding new models. On that matter, the training pipeline is also made available to interested parties to enable the creation of missing language models.
TranslateLocally is Firefox-independent translation software based on the Bergamot Translator. It is also available as an (Electron-based) standalone application or as an extension for Chromium-based web browsers.
History
Mozilla had already tried to get a (cloud-based) web content translation feature into Firefox a few years before Project Bergamot, but had failed because of the financial challenge. Microsoft had already delivered offline capabilities for its translation software in 2018. Google soon followed suit, Apple two years later. The software is based on the free translation framework Marian, which the University of Edinburgh had previously developed in cooperation with Microsoft, and which is itself based on the Nematus toolkit presented in 2017. Under the leadership of the University of Edinburgh, a development consortium was formed with the Mozilla Corporation and the additional European universities of Prague, Sheffield and Tartu. In 2018, it secured 3 million euros of funding from the EU's Horizon 2020 programme. Firefox Translations was initially provided as an add-on. A first functional demonstration prototype was presented in October 2019. Beta version 117 had the feature integrated directly into the browser; the official release came in version 118 in September 2023. Both as an add-on module and as part of Firefox, the code and the models are subject to version 2 of the Mozilla Public License. Since 2022, the EU-funded HPLT project has been creating new language models. It involves additional partners, including the universities of Helsinki, Turku and Oslo, and other partners from Spain, Norway and the Czech Republic.
References
Notes
External links
Project website
Machine translation | Project Bergamot | Technology | 710 |
2,252,727 | https://en.wikipedia.org/wiki/Solar%20Designer | Alexander Peslyak () (born 1977), better known as Solar Designer, is a security specialist from Russia. He is best known for his publications on exploitation techniques, including the return-to-libc attack and the first generic heap-based buffer overflow exploitation technique, as well as computer security protection techniques such as privilege separation for daemon processes.
Peslyak is the author of the widely popular password cracking tool John the Ripper. His code has also been used in various third-party operating systems, such as OpenBSD and Debian.
Work
Peslyak has been the founder and leader of the Openwall Project since 1999. He is the founder of Openwall, Inc. and has been the CTO since 2003. He served as an advisory board member at the Open Source Computer Emergency Response Team (oCERT) from 2008 until oCERT's conclusion in August 2017. He also co-founded oss-security.
He has spoken at many international conferences, including FOSDEM and CanSecWest. He wrote the foreword to Michał Zalewski's 2005 book Silence on the Wire.
Alexander received the 2009 "Lifetime Achievement Award" during the annual Pwnie Awards at the Black Hat Security Conference. In 2015, Qualys acknowledged his help with the disclosure of a GNU C Library gethostbyname function buffer overflow (CVE-2015-0235).
See also
Security-focused operating system
References
External links
Openwall Project home page
Solar Designer's pseudo homepage
http://phrack.org/issues/69/2.html#article
1977 births
Living people
Hackers | Solar Designer | Technology | 334 |
36,836,784 | https://en.wikipedia.org/wiki/Iota%20Piscis%20Austrini | Iota Piscis Austrini (ι Piscis Austrini) is a solitary, blue-white hued star in the southern constellation of Piscis Austrinus. It has an apparent visual magnitude of +4.35 and is around 500 light years from the Sun. This is an A-type main sequence star with a stellar classification of A0 V. It has a magnitude 11.4 visual companion located at an angular separation of 20 arc seconds along a position angle of 290°, as of 1910.
Iota Piscis Austrini is moving through the Galaxy at a speed of 29.7 km/s relative to the Sun. Its projected Galactic orbit carries it between 18,400 and 24,300 light years from the center of the Galaxy.
Naming
In Chinese, the asterism known as Celestial Money consists of ι Piscis Austrini, 13 Piscis Austrini, θ Piscis Austrini, μ Piscis Austrini and τ Piscis Austrini. Consequently, ι Piscis Austrini itself takes its Chinese name from its membership in this asterism.
References
B-type main-sequence stars
Piscis Austrini, Iota
Piscis Austrinus
Durchmusterung objects
Piscis Austrini, 09
107380
206742
8305 | Iota Piscis Austrini | Astronomy | 292 |
11,511,378 | https://en.wikipedia.org/wiki/Psychogenic%20disease | Classified as a "conversion disorder" by the DSM-IV, a psychogenic disease is a condition in which mental stressors cause physical symptoms matching other disorders. The manifestation of physical symptoms without biologically identifiable cause results from disruptions in normal brain function due to psychological stress. During a psychogenic episode, neuroimaging has shown that neural circuits affecting functions such as emotion, executive functioning, perception, movement, and volition are inhibited. These disruptions become strong enough to prevent the brain from voluntarily allowing certain actions (e.g. moving a limb). When the brain is unable to signal to the body to perform an action voluntarily, physical symptoms of a disorder arise. Examples of diseases that are deemed to be psychogenic in origin include psychogenic seizures, psychogenic polydipsia, psychogenic tremor, and psychogenic pain.
The term psychogenic disease is often used similarly to psychosomatic disease. However, the term psychogenic usually implies that psychological factors played a key causal role in the development of the illness. The term psychosomatic is often used more broadly to describe illnesses with a known medical cause where psychological factors may nonetheless play a role (e.g., asthma as exacerbated by anxiety).
Diagnosis
With the advent of medical screening technologies such as electroencephalography (EEG) monitoring, psychogenic diseases are being diagnosed more frequently, as medical professionals have increasingly precise tools to evaluate patients. When a patient does not display the typical markers of a disorder that would normally show up on medical exams, physicians may diagnose the patient's symptoms as psychogenic. Research into understanding psychogenic disorders has led to the development of electronic diagnostic tests for ruling out the usual biological markers of a disorder, as well as new clinical observation procedures. One test a physician may employ to identify a psychogenic disorder is to see if the symptom changes with suggestion; for example, a patient may be told to use a tuning fork to aid symptoms in a movement disorder.
Despite the understanding of psychogenic symptoms, it is not assumed that all medically unexplained illness must have a psychological cause. It remains possible that genetic, biochemical, electrophysiological, or other abnormalities may be present which we do not understand and cannot identify. Some patients may have their symptoms misdiagnosed as psychogenic even with a lack of concrete evidence to suggest there are psychological causes. Misdiagnoses of psychogenic disease may be accidental, or may arise intentionally due to bias or ignorance. For example, a doctor with a bias towards men may tell women that their symptoms are psychogenic, despite actual symptoms of a physical disorder.
See also
Functional symptom
Habit cough
Mass psychogenic illness
Psychogenic amnesia
Psychological trauma
Psychoneuroimmunology
References
Further reading
Lim, Erle C. H.; Seet, Raymond C. S. (2007). "What Is the Place for Placebo in the Management of Psychogenic Disease?". Journal of the Royal Society of Medicine. 100 (2): 60–61. doi:10.1258/jrsm.100.2.60. PMC 1790983. PMID 17277261.
Sykes, Richard (2010). "Medically Unexplained Symptoms and the Siren 'Psychogenic Inference'". Philosophy, Psychiatry, & Psychology. 17 (4): 289–299. doi:10.1353/ppp.2010.0034. ISSN 1086-3303
Jannini, E. A., McCabe, M. P., Salonia, A., Montorsi, F., & Sachs, B. D. (2010). Controversies in sexual medicine: Organic vs. psychogenic? The Manichean diagnosis in sexual medicine. The journal of sexual medicine, 7(5), 1726–1733.
Colligan, M. J. (1981). Mass psychogenic illness: Some clarification and perspectives. Journal of Occupational Medicine, 23(9), 635–638.
Bransfield, R. C., & Friedman, K. J. (2019, December). Differentiating Psychosomatic, Somatopsychic, Multisystem Illnesses and Medical Uncertainty. In Healthcare (Vol. 7, No. 4, p. 114). Multidisciplinary Digital Publishing Institute.
Behavioural sciences
Types of mental disorders
Mind–body interventions | Psychogenic disease | Biology | 909 |
48,335,870 | https://en.wikipedia.org/wiki/Soundiiz | Soundiiz is a playlist converter/manager for several music streaming sites.
It provides automated transfer of playlists, as well as a single interface from which to manage and synchronize them across services such as Deezer, Apple Music, SoundCloud, Amazon Music, YouTube, Qobuz, Spotify, Napster, Tidal and Discogs, among others.
In April 2015, Tidal partnered with Soundiiz.
Since May 2020, Soundiiz has provided a SmartLink feature that lets customers share playlists and releases regardless of which music service they use.
References
Streaming media systems | Soundiiz | Technology | 125 |
2,903,601 | https://en.wikipedia.org/wiki/Chi%20Bo%C3%B6tis | Chi Boötis, Latinised from χ Boötis, is a single, white-hued star in the northern constellation Boötes, near the eastern constellation border with Corona Borealis. It is faintly visible to the naked eye with an apparent visual magnitude of +5.3. Based upon an annual parallax shift of as seen from the Earth, it is located about 251 light-years from the Sun. The star is moving closer to the Sun with a radial velocity of −16 km/s.
This is an A-type main-sequence star with a stellar classification of A2 V, which indicates it is generating energy via hydrogen fusion at its core. It is about 340 million years old with a projected rotational velocity of 84 km/s. The star has double the mass of the Sun, 2.24 times the Sun's radius, and is emitting 37 times the Sun's luminosity from its photosphere at an effective temperature of around . It displays an infrared excess at an emission temperature of 65 K, indicating there is a circumstellar disk of dust orbiting the star at a distance of around .
References
External links
A-type main-sequence stars
Circumstellar disks
Boötes
Bootis, Chi
BD+29 2640
Bootis, 48
135502
074596
5676 | Chi Boötis | Astronomy | 274 |
31,877,832 | https://en.wikipedia.org/wiki/Ball%20tree | In computer science, a ball tree, balltree or metric tree, is a space partitioning data structure for organizing points in a multi-dimensional space. A ball tree partitions data points into a nested set of balls. The resulting data structure has characteristics that make it useful for a number of applications, most notably nearest neighbor search.
Informal description
A ball tree is a binary tree in which every node defines a D-dimensional ball containing a subset of the points to be searched. Each internal node of the tree partitions the data points into two disjoint sets which are associated with different balls. While the balls themselves may intersect, each point is assigned to one or the other ball in the partition according to its distance from the ball's center. Each leaf node in the tree defines a ball and enumerates all data points inside that ball.
Each node in the tree defines the smallest ball that contains all data points in its subtree. This gives rise to the useful property that, for a given test point t outside the ball, the distance to any point in a ball B in the tree is greater than or equal to the distance from t to the surface of the ball. Formally:
D(t, B) ≥ max(‖t − B.pivot‖ − B.radius, 0)
where D(t, B) is the minimum possible distance from any point in the ball B to some point t.
Ball-trees are related to the M-tree, but only support binary splits, whereas in the M-tree each level splits many-fold, leading to a shallower tree structure that needs fewer distance computations, which usually yields faster queries. Furthermore, M-trees can better be stored on disk, which is organized in pages. The M-tree also keeps the distances from the parent node precomputed to speed up queries.
Vantage-point trees are also similar, but they perform a binary split into one ball and the remaining data, instead of using two balls.
Construction
A number of ball tree construction algorithms are available. The goal of such an algorithm is to produce a tree that will efficiently support queries of the desired type (e.g. nearest-neighbor) in the average case. The specific criteria of an ideal tree will depend on the type of question being answered and the distribution of the underlying data. However, a generally applicable measure of an efficient tree is one that minimizes the total volume of its internal nodes. Given the varied distributions of real-world data sets, this is a difficult task, but there are several heuristics that partition the data well in practice. In general, there is a tradeoff between the cost of constructing a tree and the efficiency achieved by this metric.
This section briefly describes the simplest of these algorithms. A more in-depth discussion of five algorithms was given by Stephen Omohundro.
k-d construction algorithm
The simplest such procedure is termed the "k-d Construction Algorithm", by analogy with the process used to construct k-d trees. This is an offline algorithm, that is, an algorithm that operates on the entire data set at once. The tree is built top-down by recursively splitting the data points into two sets. Splits are chosen along the single dimension with the greatest spread of points, with the sets partitioned by the median value of all points along that dimension. Finding the split for each internal node requires linear time in the number of samples contained in that node, yielding an algorithm with time complexity O(n log n), where n is the number of data points.
Pseudocode
function construct_balltree is
    input: D, an array of data points.
    output: B, the root of a constructed ball tree.
    if a single point remains then
        create a leaf B containing the single point in D
        return B
    else
        let c be the dimension of greatest spread
        let p be the central point selected considering c
        let L, R be the sets of points lying to the left and right of the median along dimension c
        create B with two children:
            B.pivot := p
            B.child1 := construct_balltree(L)
            B.child2 := construct_balltree(R)
            let B.radius be maximum distance from p among children
        return B
    end if
end function
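The pseudocode above maps directly onto a short runnable implementation. The following Python sketch is illustrative only: the names BallNode and build_balltree are invented here, points are assumed to be NumPy arrays, and Euclidean distance is assumed.

import numpy as np

class BallNode:
    # A ball: a pivot point, a covering radius, and either two
    # children (internal node) or the stored points (leaf node).
    def __init__(self, pivot, radius, child1=None, child2=None, points=None):
        self.pivot = pivot
        self.radius = radius
        self.child1 = child1
        self.child2 = child2
        self.points = points

def build_balltree(data, leaf_size=1):
    # k-d construction algorithm: recursively split at the median of
    # the dimension with the greatest spread (assumes len(data) >= 1).
    if len(data) <= leaf_size:
        pivot = data.mean(axis=0)
        radius = float(np.max(np.linalg.norm(data - pivot, axis=1)))
        return BallNode(pivot, radius, points=data)
    c = np.argmax(data.max(axis=0) - data.min(axis=0))  # dimension of greatest spread
    data = data[np.argsort(data[:, c])]                 # order points along that dimension
    mid = len(data) // 2
    node = BallNode(pivot=data[mid], radius=0.0,
                    child1=build_balltree(data[:mid], leaf_size),
                    child2=build_balltree(data[mid:], leaf_size))
    # smallest pivot-centred ball covering every point in the subtree
    node.radius = float(np.max(np.linalg.norm(data - node.pivot, axis=1)))
    return node

For example, build_balltree(np.random.default_rng(0).normal(size=(100, 3))) returns the root of a ball tree over 100 random points in three dimensions. Note that sorting makes each split O(m log m) rather than the linear-time median selection assumed above; this simplification is chosen for brevity.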
Nearest-neighbor search
An important application of ball trees is expediting nearest neighbor search queries, in which the objective is to find the k points in the tree that are closest to a given test point by some distance metric (e.g. Euclidean distance). A simple search algorithm, sometimes called KNS1, exploits the distance property of the ball tree. In particular, if the algorithm is searching the data structure with a test point t, and has already seen some point p that is closest to t among the points encountered so far, then any subtree whose ball is further from t than p can be ignored for the rest of the search.
Description
The ball tree nearest-neighbor algorithm examines nodes in depth-first order, starting at the root. During the search, the algorithm maintains a max-first priority queue (often implemented with a heap), denoted Q here, of the k nearest points encountered so far. At each node B, it may perform one of three operations, before finally returning an updated version of the priority queue:
If the distance from the test point t to the current node B is greater than the distance from t to the furthest point in Q, ignore B and return Q.
If B is a leaf node, scan through every point enumerated in B and update the nearest-neighbor queue appropriately. Return the updated queue.
If B is an internal node, call the algorithm recursively on B's two children, searching the child whose center is closer to t first. Return the queue after each of these calls has updated it in turn.
Performing the recursive search in the order described in point 3 above increases the likelihood that the further child will be pruned entirely during the search.
Pseudocode
function knn_search is
    input:
        t, the target point for the query
        k, the number of nearest neighbors of t to search for
        Q, max-first priority queue containing at most k points
        B, a node, or ball, in the tree
    output:
        Q, containing the k nearest neighbors from within B
    if distance(t, B.pivot) - B.radius ≥ distance(t, Q.first) then
        return Q unchanged
    else if B is a leaf node then
        for each point p in B do
            if distance(t, p) < distance(t, Q.first) then
                add p to Q
                if size(Q) > k then
                    remove the furthest neighbor from Q
                end if
            end if
        repeat
    else
        let child1 be the child node closest to t
        let child2 be the child node furthest from t
        knn_search(t, k, Q, child1)
        knn_search(t, k, Q, child2)
    end if
    return Q
end function
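Continuing the construction sketch above (same assumed BallNode structure), the following hedged Python version of KNS1 uses the standard library heapq to emulate the max-first priority queue: distances are stored negated, so queue[0] is always the furthest of the current best candidates.

import heapq
import numpy as np

def knn_search(t, k, node, queue=None):
    # Depth-first search; queue holds up to k pairs (-distance, point).
    if queue is None:
        queue = []
    if len(queue) == k:
        # Rule 1: prune this ball if no point inside it can beat the
        # current k-th nearest candidate.
        if np.linalg.norm(t - node.pivot) - node.radius >= -queue[0][0]:
            return queue
    if node.points is not None:
        # Rule 2: leaf node; scan every stored point.
        for p in node.points:
            d = float(np.linalg.norm(t - p))
            if len(queue) < k:
                heapq.heappush(queue, (-d, tuple(p)))
            elif d < -queue[0][0]:
                heapq.heapreplace(queue, (-d, tuple(p)))
    else:
        # Rule 3: internal node; search the closer child first so the
        # further child is more likely to be pruned entirely.
        near, far = sorted((node.child1, node.child2),
                           key=lambda c: np.linalg.norm(t - c.pivot))
        knn_search(t, k, near, queue)
        knn_search(t, k, far, queue)
    return queue

Calling knn_search(np.zeros(3), 5, root) on the tree built earlier returns (negated distance, point) pairs for the five nearest neighbours of the origin.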
Performance
In comparison with several other data structures, ball trees have been shown to perform fairly well on the nearest-neighbor search problem, particularly as their number of dimensions grows.
However, the best nearest-neighbor data structure for a given application will depend on the dimensionality, number of data points, and underlying structure of the data.
References
Trees (data structures)
Machine learning
Articles with example pseudocode | Ball tree | Engineering | 1,464 |
51,865,173 | https://en.wikipedia.org/wiki/Spit%20hood | A spit hood, spit mask, mesh hood or spit guard is a restraint device intended to prevent a person from spitting or biting. The use of the hoods has been controversial, as they are a potential suffocation risk.
Justification for use
Proponents, often including police unions and associations, say the spit hoods can help protect personnel from exposure to serious infections like hepatitis and that in London, 59% of injecting drug users test positive for hepatitis C. According to Occupational Safety and Health Administration regulations in the United States, saliva is considered potentially infectious for hepatitis C, HIV and other bloodborne pathogens only if visible blood is present.
Opposition to use
Several studies have concluded that the risk of transmission of disease from spitting was low.
The spit hoods have been criticised for breaching human rights guidelines. Critics describe the hoods as primitive, cruel, and degrading.
There is a risk of death. According to The New York Times, spit hoods have been involved in several deaths in law enforcement custody.
Use around the world
Australia
The use of spit hoods and restraint chairs at the Don Dale Youth Detention Centre in the Northern Territory, Australia, led to the establishment of the Royal Commission into the Protection and Detention of Children in the Northern Territory.
The Australian Federal Police (AFP) banned the usage of spit hoods in 2023. While the ban was welcomed by the Australian Human Rights Commission (HRC), there was backlash from the Australian Federal Police Association (AFPA).
Five years after the death of Aboriginal man Wayne Fella Morrison in custody in South Australia in September 2016, the use of spit hoods was banned in the state. South Australia remains the only state to legislate a ban on spit hoods, with a bill to ban spit hoods tabled in the New South Wales parliament in 2023. Morrison’s family led the state-wide campaign to establish ‘Fella’s Bill’, now extended into the National Ban Spit hoods Coalition. Spit hoods are also banned in the Australian Capital Territory (ACT). In Queensland, the use of spit hoods is banned in watchhouses but not in correctional facilities such as prisons and youth detention centres. In Western Australia, they are still used by police and in prisons but are banned in youth detention centres. There have also been calls for a formal ban on the use of spit hoods in the Northern Territory, where they are banned by institutions such as youth detention centres despite no legislation prohibiting them. While not formally banned, spit hoods are not used by police in New South Wales, Tasmania and Victoria.
While the use of spit hoods is opposed by police forces in Australia, their usage is still supported by several police unions.
New Zealand
New Zealand does not ban the usage of spit hoods and their usage has grown. In 2011, they were used by police 12 times, compared to 257 times in 2019, a 2,000% increase in eight years.
United Kingdom
Some British police chiefs have privately expressed concerns that the hoods are reminiscent of those used at the Guantanamo Bay detention camp. A decision by the Metropolitan Police Service to start using spit hoods was condemned by the human rights group Amnesty International, the civil rights group Liberty and the campaign group Inquest. Many major British police forces have chosen not to use spit hoods.
Canada
In 2016, a spit hood was used shortly before the death of Soleiman Faqiri in the Central East Correctional Centre. In 2023, Faqiri's death was ruled a homicide. In 2021, 21-year-old Nicous D'Andre Spring was pepper-sprayed while wearing a spit hood during his illegal detainment in Bordeaux jail in Montreal and died the following day. Quebec police and the Chief Coroner's office began an investigation and public inquiry into his death in 2023.
See also
Muzzle (device)
Restraint chair
References
Law enforcement equipment
Physical restraint
Masks in law | Spit hood | Biology | 793 |
46,975,228 | https://en.wikipedia.org/wiki/Success%20Talks | Success Talks is an organisation that holds interviews with successful individuals from across the world, with a focus on digital media and mobile phone applications. The majority of the content, which promotes key messages that challenge current stereotypes about success, can be found on the organisation's YouTube channel, which was started in December 2012. Other interviews can also be found on its Facebook page and mobile phone app. Its channel has gained thousands of views as well as a following on Facebook and Twitter.
Success Talks has been quoted in publications such as the Financial Times. In recent years, it has diversified into creating events and has worked with organizations such as PwC, Linklaters, Credit Suisse, Pearson and Teach First.
History
The concept for Success Talks arose when founder Dennis Owusu-Sem was having a conversation with a colleague about successful black individuals who were not in the fields of music, sport or entertainment. From this, he decided to interview as many people as he could to answer this and many other questions around success.
Interviews
Success Talks has amassed over 40 interviews, which have been released on YouTube, Facebook, podcasts and its own app. Below are some of the individuals currently featured in the series:
Baroness Patricia Scotland - Barrister and previous Attorney General for England and Wales and Advocate General for Northern Ireland
Ken Olisa - Chairman Restoration Partners
Karen Blackett - CEO Mediacom UK
Christine Ohuruogu - Great Britain Athlete and Olympian
Marc Hare - Founder of Mr Hare
Piers Linney - Co-CEO Outsourcery and "Dragon" on Dragons' Den
Raoul Shah - Founder and CEO of Exposure
Errol Douglas - Celebrity Hair Stylist
Courtenay Griffiths QC - Barrister at Bedford Court Chambers
Sandie Okoro - General Counsel HSBC
Anne-Marie Imafidon - Founder Stemettes
Samantha Tross - Orthopaedic Surgeon
Damon Buffini - Businessman and Ex- Managing Director Permira
Atul Kochhar - Chef, restaurateur and television personality
Adrien Sauvage - British fashion designer
Jamal Edwards - Founder and CEO, SB.TV
Walter White - Partner, McGuireWoods LLP
Vanessa Kingori - Publisher, British GQ
M. S. Banga (Vindi Banga) - Partner, Clayton, Dubilier & Rice
Paul Cleal - Partner, PwC
Success Quotes
References
External links
Success Talks Website
Digital media
Interviews
Communications and media organisations based in the United Kingdom | Success Talks | Technology | 483 |
22,285,493 | https://en.wikipedia.org/wiki/CB-13 | CB-13 (CRA13, SAB-378) is a cannabinoid drug, which acts as a potent agonist at both the CB1 and CB2 receptors, but has poor blood–brain barrier penetration, and so produces only peripheral effects at low doses, with symptoms of central effects such as catalepsy only appearing at much higher dose ranges. It has antihyperalgesic properties in animal studies, and has progressed to preliminary human trials.
Legal status
As of October 2015 CB-13 is a controlled substance in China.
CB-13 is a Schedule I controlled substance in North Dakota.
See also
A-PONASA
AM-6545
AZ-11713908
Bunamidine
References
Cannabinoids
Designer drugs
Naphthalenes
Aromatic ketones
Naphthol ethers
Peripherally selective drugs
Ethers | CB-13 | Chemistry | 173 |
42,321,551 | https://en.wikipedia.org/wiki/Rings%20of%20Chariklo | The rings of Chariklo are a set of two narrow rings around the minor planet 10199 Chariklo. Chariklo, with a diameter of about , is the second-smallest celestial object with confirmed rings (with 2060 Chiron being the smallest) and the fifth ringed celestial object discovered in the Solar System, after the gas giants and ice giants. Orbiting Chariklo is a bright ring system consisting of two narrow and dense bands, 6–7 km (4 mi) and 2–4 km (2 mi) wide, separated by a gap of . The rings orbit at distances of about from the centre of Chariklo, a thousandth the distance between Earth and the Moon. The discovery was made by a team of astronomers using ten telescopes at various locations in Argentina, Brazil, Chile and Uruguay in South America during observation of a stellar occultation on 3 June 2013, and was announced on 26 March 2014.
The existence of a ring system around a minor planet was unexpected because it had been thought that rings could only be stable around much more massive bodies. Ring systems around minor bodies had not previously been discovered despite the search for them through direct imaging and stellar occultation techniques. Chariklo's rings should disperse over a period of at most a few million years, so either they are very young, or they are actively contained by shepherd moons with a mass comparable to that of the rings. The team nicknamed the rings Oiapoque (the inner, more substantial ring) and Chuí (the outer ring), after the two rivers that form the northern and southern coastal borders of Brazil. A request for formal names will be submitted to the IAU at a later date.
Discovery and observations
Chariklo is the largest confirmed member of a class of small bodies known as centaurs, which orbit the Sun between Saturn and Uranus in the outer Solar System. Forecasts had shown that, as seen from South America, it would pass in front of the 12.4-magnitude star UCAC4 248-108672, located in the constellation Scorpius, on 3 June 2013.
With the aid of thirteen telescopes located in Argentina, Brazil, Chile, and Uruguay, a team of astronomers led by Felipe Braga Ribas, a post-doctoral astronomer of the National Observatory (ON) in Rio de Janeiro, and 65 other researchers from 34 institutions in 12 countries, was able to observe this occultation event, a phenomenon during which a star disappears behind its occulting body. The 1.54-metre Danish National Telescope at La Silla Observatory, due to the much faster data acquisition rate of its 'Lucky Imager' camera (10 Hz), was the only telescope able to resolve the individual rings.
During this event, the observed brightness was predicted to dip from magnitude 14.7 (star + Chariklo) to 18.5 (Chariklo alone) for at most 19.2 seconds. This increase of 3.8 magnitudes is equivalent to a decrease in brightness by a factor 32.5. The primary occultation event was accompanied by four additional small decreases in the overall intensity of the light curve, which were observed seven seconds before the beginning of the occultation and seven seconds after the end of the occultation. These secondary occultations indicated that something was partially blocking the light of the background star. The symmetry of the secondary occultations and multiple observations of the event in various locations helped reconstruct not only the shape and size of the object, but also the thickness, orientation, and location of the ring planes. The relatively consistent ring properties inferred from several secondary occultation observations discredit alternative explanations for these features, such as cometary-like outgassing.
Telescopes that observed the occultation included the Danish National Telescope and the survey telescope TRAPPIST of La Silla Observatory, the PROMPT Telescopes (Cerro Tololo Inter-American Observatory), the Brazilian Southern Astrophysical Research Telescope or SOAR (Cerro Pachón), the 0.45-metre ASH telescope (Cerro Burek), and those of the State University of Ponta Grossa Observatory, the Polo Astronomical Pole Casimiro Montenegro Filho (at the Itaipu Technological Park Foundation, in Foz do Iguaçu), the Universidad Católica Observatory of the Pontifical Catholic University of Chile (Santa Martina, Chile) and several at Estación Astrofísica de Bosque Alegre, operated by the National University of Córdoba. Negative detections were recorded by El Catalejo Observatory (Santa Rosa, La Pampa, Argentina), the 20-inch Planewave telescope (part of the Searchlight Observatory Network) at San Pedro de Atacama, Chile and the OALM instrument at Los Molinos Astronomical Observatory in Uruguay. Some of the other participating instruments were those at the National Observatory in Rio de Janeiro, the Valongo Observatory (at the Federal University of Rio de Janeiro), the Oeste do Paraná State University Observatory or Unioeste (in the state of Paraná), the Pico dos Dias Observatory or OPL (in Minas Gerais) and the São Paulo State University (UNESP – Guaratinguetá) in São Paulo.
On 18 October 2022, the NIRCam instrument onboard the James Webb Space Telescope (JWST) was used to observe the occultation of the star Gaia DR3 6873519665992128512 by Chariklo's rings, capturing the characteristic dual decrease in the star's brightness as the rings obscured the starlight at two points.
Properties
The orientation of the rings is consistent with an edge-on view from Earth in 2008, explaining the observed dimming of Chariklo between 1997 and 2008 by a factor of 1.75, as well as the gradual disappearance of water ice and other materials from its spectrum as the observed surface area of the rings decreased. Also consistent with this edge-on orientation is that since 2008, the Chariklo system has increased in brightness by a factor of 1.5 again, and the infrared water-ice spectral features have reappeared. This suggests that the rings are composed at least partially of water ice. An icy ring composition is also consistent with the expected density of a disrupted body within Chariklo's Roche limit.
Inner ring (2013C1R or Oiapoque)
The equivalent depth (a parameter related to the total amount of material contained in the ring based on the viewing geometry) of C1R was observed to vary by 21% over the course of the observation. Similar asymmetries have been observed during occultation observations of Uranus's narrow rings, and may be due to resonant oscillations responsible for modulating the width and optical depth of the rings. The column density of C1R is estimated to be 30–100 g/cm2.
Outer ring (2013C2R or Chuí)
C2R is half the width of the brighter ring, and resides just outside it, at . With an optical depth of about 0.06, it is markedly more diffuse than its companion. Altogether, it has approximately a twelfth of the mass of C1R.
Origin
The origin of the rings is unknown, but both are likely to be remnants of a debris disk, which could have formed via an impact on Chariklo, a collision with or between one or more pre-existing moons, tidal disruption of a former retrograde moon, or from material released from the surface by cometary activity or rotational disruption. If the rings formed through an impact event with Chariklo, the object must have impacted at a low velocity to prevent ring particles from being ejected beyond Chariklo's Hill sphere.
Impact velocities in the outer Solar System are typically ≈ 1 km/s (compared with the escape velocity at the surface of Chariklo of ≈ 0.1 km/s), and were even lower before the Kuiper belt was dynamically excited, supporting the possibility that the rings formed in the Kuiper belt before Chariklo was transferred to its current orbit less than 10 Myr ago. Impact velocities in the asteroid belt are much higher (≈ 5 km/s), which could explain the absence of such ring features in minor bodies within the asteroid belt. Collisions between ring particles would cause the ring to widen substantially, and Poynting–Robertson drag would cause the ring particles to fall onto the central body within a few million years, requiring either an active source of ring particles or dynamical confinement by small (kilometre-sized) embedded or shepherd moons yet to be discovered. Such moons would be very challenging to detect via direct imaging from Earth due to the small radial separation of the ring system and Chariklo.
Simulations
As the smallest known celestial body with its own ring system, Chariklo and its rings are the first to have been fully simulated by numerically solving the N-body problem. The assumptions made included the planetoid and ring particles being spherical, and all particles having equal radii between 2.5 and 10 m. Depending on parameters, the simulations involved between 21 million and 345 million particles interacting with each other through gravity and collisions. The goal of the simulations was to assess under what conditions the rings remain stable; that is, do not cluster into few bigger bodies.
The first conclusion coming from the simulations is that the density of Chariklo has to be greater than that of the ring matter simply in order to keep the rings in orbit. Secondly, for all tested ring particle radii and ring spatial densities, the rings did cluster on relatively short time scales. The authors suggest three main explanations:
the ring particles are much smaller, on the order of 1 cm, than assumed in the simulations
the rings are very young (below 100 years)
there is a relatively massive, as-yet-undetected body in the system, which acts as a shepherd moon
They additionally noted that the effects of some of the assumptions, for instance complete absence of eccentricity of the rings, have not been evaluated.
References
External links
Universidad Católica Observatory
20140326
Articles containing video clips
20130603
Chariklo
Solar System | Rings of Chariklo | Astronomy | 2,082 |
14,972 | https://en.wikipedia.org/wiki/Idempotence | Idempotence is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application. The concept of idempotence arises in a number of places in abstract algebra (in particular, in the theory of projectors and closure operators) and functional programming (in which it is connected to the property of referential transparency).
The term was introduced by American mathematician Benjamin Peirce in 1870 in the context of elements of algebras that remain invariant when raised to a positive integer power, and literally means "(the quality of having) the same power", from idem + potence (same + power).
Definition
An element x of a set S equipped with a binary operator · is said to be idempotent under · if

x · x = x.

The binary operation · is said to be idempotent if

x · x = x for all x in S.
Examples
In the monoid of the natural numbers with multiplication, only 0 and 1 are idempotent. Indeed, 0 × 0 = 0 and 1 × 1 = 1.
In the monoid of the natural numbers with addition, only 0 is idempotent. Indeed, 0 + 0 = 0.
In a magma (M, ·), an identity element e or an absorbing element a, if it exists, is idempotent. Indeed, e · e = e and a · a = a.
In a group (G, ·), the identity element e is the only idempotent element. Indeed, if x is an element of G such that x · x = x, then x · x = x · e and finally x = e by multiplying on the left by the inverse element of x.
In the monoids (P(E), ∪) and (P(E), ∩) of the power set P(E) of the set E with set union ∪ and set intersection ∩ respectively, ∪ and ∩ are idempotent. Indeed, A ∪ A = A for all A in P(E), and A ∩ A = A for all A in P(E).
In the monoids ({false, true}, ∨) and ({false, true}, ∧) of the Boolean domain with logical disjunction ∨ and logical conjunction ∧ respectively, ∨ and ∧ are idempotent. Indeed, x ∨ x = x and x ∧ x = x for all x in {false, true}.
In a GCD domain (for instance in the integers Z), the operations of GCD and LCM are idempotent.
In a Boolean ring, multiplication is idempotent.
In a Tropical semiring, addition is idempotent.
In a ring of square matrices, the determinant of an idempotent matrix is either 0 or 1. If the determinant is 1, the matrix necessarily is the identity matrix.
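For instance (a standard illustration, not drawn from the cited sources), the 2 × 2 projection matrix below is idempotent with determinant 0, while an idempotent matrix with determinant 1 must be the identity:

\[
P = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad
P^2 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = P, \qquad \det P = 0.
\]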
Idempotent functions
In the monoid of the functions from a set E to itself (see set exponentiation) with function composition ∘, idempotent elements are the functions f: E → E such that f ∘ f = f, that is such that f(f(x)) = f(x) for all x in E (in other words, the image f(x) of each element x is a fixed point of f). For example:
the absolute value is idempotent. Indeed, abs ∘ abs = abs, that is ||x|| = |x| for all x;
constant functions are idempotent;
the identity function is idempotent;
the floor, ceiling and fractional part functions are idempotent;
the real part function Re of a complex number is idempotent: Re(Re(z)) = Re(z);
the subgroup generated function from the power set of a group to itself is idempotent;
the convex hull function from the power set of an affine space over the reals to itself is idempotent;
the closure and interior functions of the power set of a topological space to itself are idempotent;
the Kleene star and Kleene plus functions of the power set of a monoid to itself are idempotent;
the idempotent endomorphisms of a vector space are its projections.
If the set E has n elements, we can partition it into k chosen fixed points and n − k non-fixed points under f, and then k^(n−k) is the number of different idempotent functions. Hence, taking into account all possible partitions,

∑ from k = 1 to n of C(n, k) k^(n−k)

is the total number of possible idempotent functions on the set. The integer sequence of the number of idempotent functions as given by the sum above for n = 0, 1, 2, 3, 4, 5, 6, 7, 8, ... starts with 1, 1, 3, 10, 41, 196, 1057, 6322, 41393, ... .
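The sum above is easy to check numerically. The following minimal C sketch (illustrative, not part of the original text) evaluates it for small n and reproduces the sequence 1, 3, 10, 41, ...:

#include <stdio.h>

/* binomial coefficient C(n, k), exact in 64-bit arithmetic for small n */
static unsigned long long binom(int n, int k) {
    unsigned long long r = 1;
    for (int i = 1; i <= k; i++)
        r = r * (n - k + i) / i;
    return r;
}

/* integer power k^e, with the convention 0^0 = 1 */
static unsigned long long ipow(unsigned long long k, int e) {
    unsigned long long r = 1;
    while (e-- > 0) r *= k;
    return r;
}

int main(void) {
    for (int n = 1; n <= 8; n++) {
        unsigned long long total = 0;
        for (int k = 1; k <= n; k++)   /* k = number of fixed points */
            total += binom(n, k) * ipow(k, n - k);
        printf("n = %d: %llu idempotent functions\n", n, total);
    }
    return 0;   /* prints 1, 3, 10, 41, 196, 1057, 6322, 41393 */
}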
Neither the property of being idempotent nor that of being not is preserved under function composition. As an example for the former, f(x) = x mod 3 and g(x) = max(x, 5) are both idempotent, but f ∘ g is not, although g ∘ f happens to be. As an example for the latter, the negation function ¬ on the Boolean domain is not idempotent, but ¬ ∘ ¬ is. Similarly, unary negation of real numbers is not idempotent, but its composition with itself is. In both cases, the composition is simply the identity function, which is idempotent.
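The composition example can likewise be checked directly (a small sketch; f and g are as in the text above):

#include <stdio.h>

static int f(int x) { return x % 3; }          /* idempotent on non-negative integers */
static int g(int x) { return x > 5 ? x : 5; }  /* max(x, 5), idempotent */

int main(void) {
    int fg = f(g(7));                                      /* f∘g applied once: 1 */
    printf("f(g(7)) = %d, twice = %d\n", fg, f(g(fg)));    /* 1 vs 2: f∘g is not idempotent */
    int gf = g(f(7));                                      /* g∘f applied once: 5 */
    printf("g(f(7)) = %d, twice = %d\n", gf, g(f(gf)));    /* 5 vs 5: g∘f is idempotent */
    return 0;
}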
Computer science meaning
In computer science, the term idempotence may have a different meaning depending on the context in which it is applied:
in imperative programming, a subroutine with side effects is idempotent if multiple calls to the subroutine have the same effect on the system state as a single call, in other words if the function from the system state space to itself associated with the subroutine is idempotent in the mathematical sense given in the definition;
in functional programming, a pure function is idempotent if it is idempotent in the mathematical sense given in the definition.
This is a very useful property in many situations, as it means that an operation can be repeated or retried as often as necessary without causing unintended effects. With non-idempotent operations, the algorithm may have to keep track of whether the operation was already performed or not.
Computer science examples
A function looking up a customer's name and address in a database is typically idempotent, since this will not cause the database to change. Similarly, a request for changing a customer's address to XYZ is typically idempotent, because the final address will be the same no matter how many times the request is submitted. However, a customer request for placing an order is typically not idempotent since multiple requests will lead to multiple orders being placed. A request for canceling a particular order is idempotent because no matter how many requests are made the order remains canceled.
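The contrast can be made concrete with a toy model (a hypothetical sketch, not any real system's API): setting an address is idempotent, while placing an order is not:

#include <stdio.h>
#include <string.h>

static char address[64] = "old address";
static int orders = 0;

/* idempotent: repeating the request leaves the same final state */
static void set_address(const char *a) { strncpy(address, a, sizeof address - 1); }

/* not idempotent: every request changes the state further */
static void place_order(void) { orders++; }

int main(void) {
    set_address("XYZ");
    set_address("XYZ");   /* retrying is harmless */
    place_order();
    place_order();        /* retrying creates a duplicate order */
    printf("address=%s, orders=%d\n", address, orders);   /* address=XYZ, orders=2 */
    return 0;
}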
A sequence of idempotent subroutines where at least one subroutine is different from the others, however, is not necessarily idempotent if a later subroutine in the sequence changes a value that an earlier subroutine depends on—idempotence is not closed under sequential composition. For example, suppose the initial value of a variable is 3 and there is a subroutine sequence that reads the variable, then changes it to 5, and then reads it again. Each step in the sequence is idempotent: both steps reading the variable have no side effects and the step changing the variable to 5 will always have the same effect no matter how many times it is executed. Nonetheless, executing the entire sequence once produces the output (3, 5), but executing it a second time produces the output (5, 5), so the sequence is not idempotent.
#include <stdio.h>   /* needed for printf */

int x = 3;
void inspect() { printf("%d\n", x); }
void change() { x = 5; }
void sequence() { inspect(); change(); inspect(); }
int main() {
sequence(); // prints "3\n5\n"
sequence(); // prints "5\n5\n"
return 0;
}
In the Hypertext Transfer Protocol (HTTP), idempotence and safety are the major attributes that separate HTTP methods. Of the major HTTP methods, GET, PUT, and DELETE should be implemented in an idempotent manner according to the standard, but POST doesn't need to be. GET retrieves the state of a resource; PUT updates the state of a resource; and DELETE deletes a resource. As in the example above, reading data usually has no side effects, so it is idempotent (in fact nullipotent). Updating and deleting given data are each usually idempotent as long as the request uniquely identifies the resource and only that resource again in the future. PUT and DELETE with unique identifiers reduce to the simple case of assignment to a variable of either a value or the null-value, respectively, and are idempotent for the same reason; the end result is always the same as the result of the initial execution, even if the response differs.
Violation of the unique identification requirement in storage or deletion typically causes violation of idempotence. For example, storing or deleting a given set of content without specifying a unique identifier: POST requests, which do not need to be idempotent, often do not contain unique identifiers, so the creation of the identifier is delegated to the receiving system which then creates a corresponding new record. Similarly, PUT and DELETE requests with nonspecific criteria may result in different outcomes depending on the state of the system - for example, a request to delete the most recent record. In each case, subsequent executions will further modify the state of the system, so they are not idempotent.
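A toy in-memory model (hypothetical, not a real HTTP implementation) shows why the unique identifier matters: PUT writes to a client-chosen slot, while POST mints a fresh record each time:

#include <stdio.h>
#include <string.h>

#define SLOTS 8
static char store[SLOTS][32];
static int next_id = 0;

/* PUT: the client supplies the identifier, so repeats overwrite one record */
static void put(int id, const char *body) { strncpy(store[id], body, 31); }

/* POST: the server assigns a fresh identifier, so repeats create new records */
static int post(const char *body) { int id = next_id++; strncpy(store[id], body, 31); return id; }

int main(void) {
    put(3, "hello");
    put(3, "hello");     /* still one record */
    post("hello");
    post("hello");       /* two distinct records */
    printf("records created by POST: %d\n", next_id);   /* prints 2 */
    return 0;
}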
In event stream processing, idempotence refers to the ability of a system to produce the same outcome, even if the same file, event or message is received more than once.
In a load–store architecture, instructions that might possibly cause a page fault are idempotent. So if a page fault occurs, the operating system can load the page from disk and then simply re-execute the faulted instruction. In a processor where such instructions are not idempotent, dealing with page faults is much more complex.
When reformatting output, pretty-printing is expected to be idempotent. In other words, if the output is already "pretty", there should be nothing to do for the pretty-printer.
In service-oriented architecture (SOA), a multiple-step orchestration process composed entirely of idempotent steps can be replayed without side-effects if any part of that process fails.
Many operations that are idempotent often have ways to "resume" a process if it is interrupted – ways that finish much faster than starting all over from the beginning. For example, resuming a file transfer, synchronizing files, creating a software build, installing an application and all of its dependencies with a package manager, etc.
Applied examples
Applied examples that many people could encounter in their day-to-day lives include elevator call buttons and crosswalk buttons. The initial activation of the button moves the system into a requesting state, until the request is satisfied. Subsequent activations of the button between the initial activation and the request being satisfied have no effect, unless the system is designed to adjust the time for satisfying the request based on the number of activations.
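The button behaviour amounts to a one-way latch, sketched below (illustrative only): the first press changes the state, and further presses are no-ops until the request is served:

#include <stdio.h>
#include <stdbool.h>

static bool requested = false;

static void press(void) { requested = true; }   /* idempotent: true stays true */
static void serve(void) { requested = false; }  /* the request is satisfied */

int main(void) {
    press();
    press();   /* same state as a single press */
    printf("requested = %d\n", requested);   /* prints 1 */
    serve();
    printf("requested = %d\n", requested);   /* prints 0 */
    return 0;
}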
See also
Biordered set
Closure operator
Fixed point (mathematics)
Idempotent of a code
Idempotent analysis
Idempotent matrix
Idempotent relation, a generalization of idempotence to binary relations
Idempotent (ring theory)
Involution (mathematics)
Iterated function
List of matrices
Nilpotent
Pure function
Referential transparency
References
Further reading
"idempotent" at the Free On-line Dictionary of Computing
p. 443
Peirce, Benjamin. Linear Associative Algebra 1870.
Properties of binary operations
Algebraic properties of elements
Closure operators
Mathematical relations
Theoretical computer science | Idempotence | Mathematics | 2,349 |
52,057,614 | https://en.wikipedia.org/wiki/NGC%20312 | NGC 312 is an elliptical galaxy in the constellation Phoenix. It was discovered on September 5, 1836, by John Herschel. NGC 312 is situated south of the celestial equator and, as such, it is more easily visible from the southern hemisphere. Given its B magnitude of 13.4, NGC 312 is visible with the help of a telescope having an aperture of 10 inches (250mm) or more.
References
Phoenix (constellation)
Elliptical galaxies | NGC 312 | Astronomy | 99 |
245,982 | https://en.wikipedia.org/wiki/Buoyancy | Buoyancy, or upthrust, is a net upward force exerted by a fluid that opposes the weight of a partially or fully immersed object. In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid. Thus, the pressure at the bottom of a column of fluid is greater than at the top of the column. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object. The pressure difference results in a net upward force on the object. The magnitude of the force is proportional to the pressure difference, and (as explained by Archimedes' principle) is equivalent to the weight of the fluid that would otherwise occupy the submerged volume of the object, i.e. the displaced fluid.
For this reason, an object whose average density is greater than that of the fluid in which it is submerged tends to sink. If the object is less dense than the liquid, the force can keep the object afloat. This can occur only in a non-inertial reference frame, which either has a gravitational field or is accelerating due to a force other than gravity defining a "downward" direction.
Buoyancy also applies to fluid mixtures, and is the most common driving force of convection currents. In these cases, the mathematical modelling is altered to apply to continua, but the principles remain the same. Examples of buoyancy driven flows include the spontaneous separation of air and water or oil and water.
Buoyancy is a function of the force of gravity or other source of acceleration on objects of different densities, and for that reason is considered an apparent force, in the same way that centrifugal force is an apparent force as a function of inertia. Buoyancy can exist without gravity in the presence of an inertial reference frame, but without an apparent "downward" direction of gravity or other source of acceleration, buoyancy does not exist.
The center of buoyancy of an object is the center of gravity of the displaced volume of fluid.
Archimedes' principle
Archimedes' principle is named after Archimedes of Syracuse, who first discovered this law in 212 BC. For objects, floating and sunken, and in gases as well as liquids (i.e. a fluid), Archimedes' principle may be stated thus in terms of forces:

Any object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object

—with the clarifications that for a sunken object the volume of displaced fluid is the volume of the object, and for a floating object on a liquid, the weight of the displaced liquid is the weight of the object.
More tersely: buoyant force = weight of displaced fluid.
Archimedes' principle does not consider the surface tension (capillarity) acting on the body, but this additional force modifies only the amount of fluid displaced and the spatial distribution of the displacement, so the principle that buoyancy = weight of displaced fluid remains valid.
The weight of the displaced fluid is directly proportional to the volume of the displaced fluid (if the surrounding fluid is of uniform density). In simple terms, the principle states that the buoyancy force on an object is equal to the weight of the fluid displaced by the object, or the density of the fluid multiplied by the submerged volume times the gravitational acceleration, g. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy. This is also known as upthrust.
Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting upon it. Suppose that when the rock is lowered into water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyancy force: 10 − 3 = 7 newtons. Buoyancy reduces the apparent weight of objects that have sunk completely to the sea floor. It is generally easier to lift an object up through the water than it is to pull it out of the water.
Assuming Archimedes' principle to be reformulated as follows,

apparent immersed weight = weight − weight of displaced fluid,

then inserted into the quotient of weights, which has been expanded by the mutual volume

density / density of fluid = weight / weight of displaced fluid,

yields the formula below. The density of the immersed object relative to the density of the fluid can easily be calculated without measuring any volumes:

density of object / density of fluid = weight / (weight − apparent immersed weight)
(This formula is used for example in describing the measuring principle of a dasymeter and of hydrostatic weighing.)
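Using the rock figures from the example above, the formula can be evaluated directly (a minimal C sketch):

#include <stdio.h>

int main(void) {
    double weight = 10.0;     /* weight in vacuum, newtons */
    double apparent = 7.0;    /* apparent weight when immersed, newtons */
    /* density of object relative to the fluid: W / (W - W_apparent) */
    double relative_density = weight / (weight - apparent);
    printf("relative density = %.2f\n", relative_density);   /* about 3.33 */
    return 0;
}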
Example: If you drop wood into water, buoyancy will keep it afloat.
Example: A helium balloon in a moving car. During a period of increasing speed, the air mass inside the car moves in the direction opposite to the car's acceleration (i.e., towards the rear). The balloon is also pulled this way. However, because the balloon is buoyant relative to the air, it ends up being pushed "out of the way", and will actually drift in the same direction as the car's acceleration (i.e., forward). If the car slows down, the same balloon will begin to drift backward. For the same reason, as the car goes round a curve, the balloon will drift towards the inside of the curve.
Forces and equilibrium
The equation to calculate the pressure inside a fluid in equilibrium is:

f + div σ = 0

where f is the force density exerted by some outer field on the fluid, and σ is the Cauchy stress tensor. In this case the stress tensor is proportional to the identity tensor:

σ_ij = −p δ_ij.

Here δ_ij is the Kronecker delta. Using this the above equation becomes:

f = ∇p.

Assuming the outer force field is conservative, that is it can be written as the negative gradient of some scalar valued function:

f = −∇Φ.

Then:

∇(p + Φ) = 0, so p + Φ = constant.

Therefore, the shape of the open surface of a fluid equals the equipotential plane of the applied outer conservative force field. Let the z-axis point downward. In this case the field is gravity, so Φ = −ρf gz, where g is the gravitational acceleration and ρf is the mass density of the fluid. Taking the pressure as zero at the surface, where z is zero, the constant will be zero, so the pressure inside the fluid, when it is subject to gravity, is

p = ρf gz.
So pressure increases with depth below the surface of a liquid, as z denotes the distance from the surface of the liquid into it. Any object with a non-zero vertical depth will have different pressures on its top and bottom, with the pressure on the bottom being greater. This difference in pressure causes the upward buoyancy force.
The buoyancy force exerted on a body can now be calculated easily, since the internal pressure of the fluid is known. The force exerted on the body can be calculated by integrating the stress tensor over the surface of the body which is in contact with the fluid:

B = ∮ σ · dA.

The surface integral can be transformed into a volume integral with the help of the Gauss theorem:

B = ∫ div σ dV = −∫ f dV,
where V is the measure of the volume in contact with the fluid, that is the volume of the submerged part of the body, since the fluid does not exert force on the part of the body which is outside of it.
The magnitude of buoyancy force may be appreciated a bit more from the following argument. Consider any object of arbitrary shape and volume V surrounded by a liquid. The force the liquid exerts on an object within the liquid is equal to the weight of the liquid with a volume equal to that of the object. This force is applied in a direction opposite to gravitational force, that is of magnitude:

B = ρf Vdisp g,

where ρf is the density of the fluid, Vdisp is the volume of the displaced body of liquid, and g is the gravitational acceleration at the location in question.
If this volume of liquid is replaced by a solid body of exactly the same shape, the force the liquid exerts on it must be exactly the same as above. In other words, the "buoyancy force" on a submerged body is directed in the opposite direction to gravity and is equal in magnitude to

B = ρf Vdisp g.
Though the above derivation of Archimedes principle is correct, a recent paper by the Brazilian physicist Fabio M. S. Lima brings a more general approach for the evaluation of the buoyant force exerted by any fluid (even non-homogeneous) on a body with arbitrary shape. Interestingly, this method leads to the prediction that the buoyant force exerted on a rectangular block touching the bottom of a container points downward! Indeed, this downward buoyant force has been confirmed experimentally.
The net force on the object must be zero if it is to be a situation of fluid statics such that Archimedes principle is applicable, and is thus the sum of the buoyancy force and the object's weight:

Fnet = 0 = B − W.
If the buoyancy of an (unrestrained and unpowered) object exceeds its weight, it tends to rise. An object whose weight exceeds its buoyancy tends to sink. Calculation of the upwards force on a submerged object during its accelerating period cannot be done by the Archimedes principle alone; it is necessary to consider dynamics of an object involving buoyancy. Once it fully sinks to the floor of the fluid or rises to the surface and settles, Archimedes principle can be applied alone. For a floating object, only the submerged volume displaces water. For a sunken object, the entire volume displaces water, and there will be an additional force of reaction from the solid floor.
In order for Archimedes' principle to be used alone, the object in question must be in equilibrium (the sum of the forces on the object must be zero), therefore

B = W,

and therefore

ρf Vdisp g = mg, that is Vdisp = m/ρf,

showing that the depth to which a floating object will sink, and the volume of fluid it will displace, is independent of the gravitational field regardless of geographic location.
(Note: If the fluid in question is seawater, it will not have the same density (ρ) at every location, since the density depends on temperature and salinity. For this reason, a ship may display a Plimsoll line.)
It can be the case that forces other than just buoyancy and gravity come into play. This is the case if the object is restrained or if the object sinks to the solid floor. An object which tends to float requires a tension restraint force T in order to remain fully submerged. An object which tends to sink will eventually have a normal force of constraint N exerted upon it by the solid floor. The constraint force can be tension in a spring scale measuring its weight in the fluid, and is how apparent weight is defined.
If the object would otherwise float, the tension to restrain it fully submerged is:

T = ρf V g − mg.
When a sinking object settles on the solid floor, it experiences a normal force of:

N = mg − ρf V g.
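The two constraint-force formulas can be evaluated with illustrative numbers (a sketch; the values are placeholders):

#include <stdio.h>

int main(void) {
    double rho_f = 1000.0;   /* fluid density, kg/m^3 (water) */
    double V = 0.002;        /* submerged volume, m^3 */
    double m = 1.5;          /* object mass, kg */
    double g = 9.81;         /* gravitational acceleration, m/s^2 */
    double buoyancy = rho_f * V * g;   /* 19.62 N */
    double weight = m * g;             /* about 14.7 N */
    if (buoyancy > weight)
        printf("tension to hold it down: T = %.2f N\n", buoyancy - weight);
    else
        printf("normal force from the floor: N = %.2f N\n", weight - buoyancy);
    return 0;
}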
Another possible formula for calculating buoyancy of an object is by finding the apparent weight of that particular object in the air (calculated in Newtons), and apparent weight of that object in the water (in Newtons). To find the force of buoyancy acting on the object when in air, using this particular information, this formula applies:
Buoyancy force = weight of object in empty space − weight of object immersed in fluid
The final result would be measured in Newtons.
Air's density is very small compared to most solids and liquids. For this reason, the weight of an object in air is approximately the same as its true weight in a vacuum. The buoyancy of air is neglected for most objects during a measurement in air because the error is usually insignificant (typically less than 0.1% except for objects of very low average density such as a balloon or light foam).
Simplified model
A simplified explanation for the integration of the pressure over the contact area may be stated as follows:
Consider a cube immersed in a fluid with the upper surface horizontal.
The sides are identical in area, and have the same depth distribution, therefore they also have the same pressure distribution, and consequently the same total force resulting from hydrostatic pressure, exerted perpendicular to the plane of the surface of each side.
There are two pairs of opposing sides, therefore the resultant horizontal forces balance in both orthogonal directions, and the resultant force is zero.
The upward force on the cube is the pressure on the bottom surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal bottom surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the bottom surface.
Similarly, the downward force on the cube is the pressure on the top surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal top surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the top surface.
As this is a cube, the top and bottom surfaces are identical in shape and area, and the pressure difference between the top and bottom of the cube is directly proportional to the depth difference, and the resultant force difference is exactly equal to the weight of the fluid that would occupy the volume of the cube in its absence.
This means that the resultant upward force on the cube is equal to the weight of the fluid that would fit into the volume of the cube, and the downward force on the cube is its weight, in the absence of external forces.
This analogy is valid for variations in the size of the cube.
If two cubes are placed alongside each other with a face of each in contact, the pressures and resultant forces on the sides or parts thereof in contact are balanced and may be disregarded, as the contact surfaces are equal in shape, size and pressure distribution, therefore the buoyancy of two cubes in contact is the sum of the buoyancies of each cube. This analogy can be extended to an arbitrary number of cubes.
An object of any shape can be approximated as a group of cubes in contact with each other, and as the size of the cube is decreased, the precision of the approximation increases. The limiting case for infinitely small cubes is the exact equivalence.
Angled surfaces do not nullify the analogy as the resultant force can be split into orthogonal components and each dealt with in the same way.
Static stability
A floating object is stable if it tends to restore itself to an equilibrium position after a small displacement. For example, floating objects will generally have vertical stability, as if the object is pushed down slightly, this will create a greater buoyancy force, which, unbalanced by the weight force, will push the object back up.
Rotational stability is of great importance to floating vessels. Given a small angular displacement, the vessel may return to its original position (stable), move away from its original position (unstable), or remain where it is (neutral).
Rotational stability depends on the relative lines of action of forces on an object. The upward buoyancy force on an object acts through the center of buoyancy, being the centroid of the displaced volume of fluid. The weight force on the object acts through its center of gravity. A buoyant object will be stable if the center of gravity is beneath the center of buoyancy because any angular displacement will then produce a 'righting moment'.
The stability of a buoyant object at the surface is more complex, and it may remain stable even if the center of gravity is above the center of buoyancy, provided that when disturbed from the equilibrium position, the center of buoyancy moves further to the same side that the center of gravity moves, thus providing a positive righting moment. If this occurs, the floating object is said to have a positive metacentric height. This situation is typically valid for a range of heel angles, beyond which the center of buoyancy does not move enough to provide a positive righting moment, and the object becomes unstable. It is possible to shift from positive to negative or vice versa more than once during a heeling disturbance, and many shapes are stable in more than one position.
Fluids and objects
As a submarine expels water from its buoyancy tanks, it rises because its volume is constant (the volume of water it displaces if it is fully submerged) while its mass is decreased.
Compressible objects
As a floating object rises or falls, the forces external to it change and, as all objects are compressible to some extent or another, so does the object's volume. Buoyancy depends on volume and so an object's buoyancy reduces if it is compressed and increases if it expands.
If an object at equilibrium has a compressibility less than that of the surrounding fluid, the object's equilibrium is stable and it remains at rest. If, however, its compressibility is greater, its equilibrium is then unstable, and it rises and expands on the slightest upward perturbation, or falls and compresses on the slightest downward perturbation.
Submarines
Submarines rise and dive by filling large ballast tanks with seawater. To dive, the tanks are opened to allow air to exhaust out the top of the tanks, while the water flows in from the bottom. Once the weight has been balanced so the overall density of the submarine is equal to the water around it, it has neutral buoyancy and will remain at that depth. Most military submarines operate with a slightly negative buoyancy and maintain depth by using the "lift" of the stabilizers with forward motion.
Balloons
The height to which a balloon rises tends to be stable. As a balloon rises it tends to increase in volume with reducing atmospheric pressure, but the balloon itself does not expand as much as the air on which it rides. The average density of the balloon decreases less than that of the surrounding air. The weight of the displaced air is reduced. A rising balloon stops rising when it and the displaced air are equal in weight. Similarly, a sinking balloon tends to stop sinking.
Divers
Underwater divers are a common example of the problem of unstable buoyancy due to compressibility. The diver typically wears an exposure suit which relies on gas-filled spaces for insulation, and may also wear a buoyancy compensator, which is a variable volume buoyancy bag which is inflated to increase buoyancy and deflated to decrease buoyancy. The desired condition is usually neutral buoyancy when the diver is swimming in mid-water, and this condition is unstable, so the diver is constantly making fine adjustments by control of lung volume, and has to adjust the contents of the buoyancy compensator if the depth varies.
Density
If the weight of an object is less than the weight of the displaced fluid when fully submerged, then the object has an average density that is less than the fluid and when fully submerged will experience a buoyancy force greater than its own weight. If the fluid has a surface, such as water in a lake or the sea, the object will float and settle at a level where it displaces the same weight of fluid as the weight of the object. If the object is immersed in the fluid, such as a submerged submarine or air in a balloon, it will tend to rise.
If the object has exactly the same density as the fluid, then its buoyancy equals its weight. It will remain submerged in the fluid, but it will neither sink nor float, although a disturbance in either direction will cause it to drift away from its position.
An object with a higher average density than the fluid will never experience more buoyancy than weight and it will sink.
A ship will float even though it may be made of steel (which is much denser than water), because it encloses a volume of air (which is much less dense than water), and the resulting shape has an average density less than that of the water.
See also
References
External links
Falling in Water
W. H. Besant (1889) Elementary Hydrostatics from Google Books.
NASA's definition of buoyancy
Fluid mechanics
Force | Buoyancy | Physics,Mathematics,Engineering | 4,097 |
78,862,393 | https://en.wikipedia.org/wiki/Red%20Books%20of%20Humphry%20Repton | The Red Books were books created by the landscape designer Humphry Repton to illustrate his designs for his clients.
The books were a way for Repton to describe his landscape design plans for their property. More than one hundred of the estimated four hundred Red Books created by Repton are still extant. The books acquired their name from their distinctive binding in red Morocco leather.
The Morgan Library & Museum in New York City holds the Red Books for Ferney Hall in Shropshire, commissioned by Samuel Phipps in 1789, and Hatchlands Park in Surrey, commissioned by William Brightwell Sumner in 1800.
The Red Books for Shrublands Hall in Suffolk from 1788 and Brondesbury Park in Middlesex in 1790, are in the collection of Dumbarton Oaks in Georgetown (Washington, D.C.).
References
Landscape architecture
Books about gardening
18th-century works | Red Books of Humphry Repton | Engineering | 171 |
26,406,793 | https://en.wikipedia.org/wiki/Yamaha%20YM2414 | The YM2414, a.k.a. OPZ, is an eight-channel sound chip developed by Yamaha. It was used in many mid-market phase/frequency modulation-based synthesizers, including Yamaha's TX81Z (the first product to feature the chip, and named after it), the DX11, the YS200 family, the Korg Z3 guitar synthesizer, and many other devices. A successor was released as the upgraded OPZII/YM2424, used only in the Yamaha V50.
The OPZ has the following features:
Eight concurrent FM synthesis channels
Four operators per channel
Eight selectable waveforms
Fixed-frequency mode, which can go much lower in the OPZII, enabling 0 Hz carriers or low rates for native chorusing
Dual low frequency oscillators
Products
The chip was used in the PortaTone PSR-80 and PSR-6300, the Yamaha TX81Z rack-mounted FM synthesizer, the Yamaha DX11, DSR1000 and 2000, YS100, YS200 and DS55 synthesizers, the TQ5 Tone Generator and the Yamaha EMT-1 half-rack FM Sound Expander module. It was also used in the Yamaha WT11 wind tone generator.
Its upgraded variant, the YM2424 (OPZII), was used exclusively in the Yamaha V50 music workstation.
See also
List of Yamaha products
References
External links
Korg Z3 at joness.com
DX11 at vintagesynth.com
YM2414 | Yamaha YM2414 | Technology | 321 |
2,226,899 | https://en.wikipedia.org/wiki/Notification%20system | In information technology, a notification system is a combination of software and hardware that provides a means of delivering a message to a set of recipients. It commonly shows activity related to an account. Such systems constitute an important aspect of modern Web applications.
The widespread adoption of notification systems was a major technological development of the 20th century. A notification is a combination of software, hardware, and psychology that provides a means of delivering a message to a group of recipients. Notifications show activity that relate to an event, account, or person. A push notification is a message that appears on a mobile device such as a text, sports score, limited-time deal, or an e-mail announcing when a computer network will be down for a scheduled maintenance. Notifications are sent from app publishers at any time, in an effort to get users to open up their app or website. Notifications appear on a user's lock screen and also at the top of their phone screen when the phone is unlocked and in use. Push notifications can be valuable and convenient for both the app user and the developer due to the immediacy and display location of notifications. Notifications also pair with sounds to reach multiple senses of a user and get maximum attention. For app publishers, push notifications are a way for them to speak directly to the user without being caught by spam filters or being pushed to the side by the flood of emails within an inbox. Because of this, these push click-through rates can be twice as high as email. They invite users to open an app or spend time and money in a certain way by the app publisher, even when the app isn't open. This means that for developers, publishers, and businesses, notifications are the most effective way to take attention and ultimately make money.
Notifications utilize a concept known as variable rewards, which is a technique that slot machines use to hook gamblers. Similarly, variable reward systems keep users compulsively checking their phones due to the possibility of social approval awaiting them. Notifications have taken over our world and are now utilized by every software, website, program, and person in the world.
Ramsay Brown, co-founder of FKA Dopamine Lab, CEO of Mission Control, and leader of AI Responsibility Lab, says "The brain isn't particularly craving any one little feel-good signal as much as it does a good rhythm and pattern". Social media apps time the notifications they deliver so that users receive literal hits of dopamine at algorithmically determined moments. Oftentimes these companies will stockpile these notifications before delivering them all in a batch in order to maximize the emotional impact that a user experiences. Jonathan Haidt, a social psychologist at NYU Stern School of Business, points to mental health concerns directly relating to social media and the notification system. He points to the increase in depression and suicide rates among teens and young adults since the early 2000s, and states that this trend began the year social media became available on cell phones. Tristan Harris, former design ethicist at Google and co-founder of the Center for Humane Technology, states that there is a "disinformation-for-profit business model" and companies profit by allowing "unregulated messages to reach anyone for the best price". This becomes problematic as companies have unlimited and often unwarranted access to users and their focus through the notification system. This access is used to drive larger profits, whether companies use notifications simply to promote their newest product or subtly try to pull users back onto the app to take more of their time. There is overwhelming evidence that notifications are associated with decreased productivity, poorer concentration, and increased distraction at work, school, and home.
See also
Emergency notification system
Emergency communication system
Emergency broadcast system
Emergency alert system
Emergency telephone number
ePrompter, an e-mail notification system
References
Human–computer interaction
Information systems | Notification system | Technology,Engineering | 811 |
1,152,416 | https://en.wikipedia.org/wiki/Antitrust%20%28film%29 | Antitrust (also titled Conspiracy.com and Startup) is a 2001 American techno-thriller film written by Howard Franklin and directed by Peter Howitt.
Antitrust portrays young idealistic programmers and a large corporation (NURV) that offers a significant salary, an informal working environment, and creative opportunities for those talented individuals willing to work for them. The charismatic CEO of NURV (Tim Robbins) seems to be good-natured, but new employee and protagonist Milo Hoffman (Ryan Phillippe) begins to unravel the terrible hidden truth of NURV's operation.
The film stars Phillippe, Rachael Leigh Cook, Claire Forlani, and Robbins. Antitrust opened in the United States on January 12, 2001, and was generally panned by critics.
Plot
Working with his three friends at their new software development company Skullbocks, Stanford graduate Milo Hoffman is recruited by Gary Winston, the CEO of the software corporation NURV. Milo is offered an attractive programming position with a large paycheck, an almost-unrestrained working environment, and extensive creative control over his work. After accepting, Hoffman and his girlfriend, Alice Poulson (Forlani), move to NURV headquarters in Portland, Oregon.
Despite development of the flagship product (Synapse, a worldwide media distribution network) being well on schedule, Hoffman soon becomes suspicious of the excellent source code that Winston personally provides to him, seemingly when needed most, while refusing to divulge the code's origin.
After his best friend and fellow computer programmer, Teddy Chin, is murdered, Hoffman discovers that NURV is stealing the code they need from programmers around the world—including Chin—and then killing them. NURV not only employs an extensive surveillance system to observe and steal code, the company has infiltrated the Justice Department and most mainstream media. Even Hoffman's girlfriend is a plant, an ex-con hired by the company to spy on and manipulate him.
In a secret NURV database of employee surveillance dossiers, Hoffman discovers highly-sensitive personal information about Lisa Calighan (Cook), a friendly co-worker. When he says he knows the company has this information about her, she agrees to help him expose NURV's crimes. Coordinating with Brian Bissel, Hoffman's old start-up friend, they plan to use a local public-access television station to hijack Synapse and globally broadcast their charges against NURV. However, Calighan is actually Winston's accomplice and foils Hoffman.
When the plan fails, and as Winston prepares to kill Hoffman, a backup plan is put into motion. Off-screen, Hoffman had previously confronted and convinced Poulson to turn against NURV; she, the fourth member of Skullbocks, and NURV's incorruptible security contractors usurp one of NURV's own work centers—"Building 21"—and transmit incriminating evidence with the Synapse code. Calighan, Winston, and his entourage are arrested by the FBI for their crimes. After amicably parting ways with the redeemed Poulson, Hoffman rejoins Skullbocks.
Cast
Allusions
Roger Ebert found Gary Winston to be a thinly disguised pastiche of entrepreneur Bill Gates; so much so that he was "surprised [the writers] didn't protect against libel by having the villain wear a name tag saying, 'Hi! I'm not Bill!'" Similarly, Ebert felt NURV "seems a whole lot like Microsoft". Parallels between the fictional and real-world software giants were also drawn by Lisa Bowman of ZDNet UK, James Berardinelli of ReelViews, and Rita Kempley of The Washington Post. Microsoft spokesman Jim Cullinan said, "From the trailers, we couldn't tell if the movie was about Microsoft or Oracle."
Production
Principal photography for Antitrust took place in Vancouver, British Columbia, California, and Portland, Oregon.
Stanley Park in Vancouver served as the grounds for Gary Winston's house, although the gate house at its entrance was faux. The exterior of Winston's house itself was wholly computer-generated; only the paved walkway and body of water in the background are physically present in the park. For later shots of Winston and Hoffman walking along a beach near the house, the CG house was placed in the background of Bowen Island, the shooting location. Catherine Hardwicke designed the interior sets for Winston's house, which featured several different units, or "pods", e.g., personal, work, and recreation units. No scenes take place in any of the personal areas, however; only public areas made it to the screen. While the digital paintings in Winston's home were created with green screen technology, the concept was based on technology that was already available in the real world. The characters even refer to Bill Gates' house which, in real life, had such art. The paintings which appeared for Hoffman were of a cartoon character, "Alien Kitty", developed by Floyd Hughes specifically for the film.
Simon Fraser University's Burnaby campus stood in for external shots of NURV headquarters.
The Chan Centre for the Performing Arts at the University of British Columbia (UBC) was used for several internal locations. The centre's foyer area became the NURV canteen; the set decoration for which was inspired by Apple's canteen, which the producers saw during a visit to their corporate headquarters. The inside of the Chan—used for concerts—served as the shape for "The Egg", or "The NURV Center", where Hoffman's cubicle is located. Described as "a big surfboard freak" by director Peter Howitt, production designer Catherine Hardwicke surrounded "The Egg" set with surfboards mounted to the walls; Howitt has said, "The idea was to make NURV a very cool looking place." Both sets for NURV's Building 21 were also on UBC's campus. The internal set was an art gallery on campus, while the exterior was built for the film on the university's grounds. According to Howitt, UBC students kept attempting to steal the Building 21 set pieces.
Hoffman and Poulson's new home—a real house in Vancouver—was a "very tight" shooting location and a very rigorous first week for shooting because, as opposed to a set, the crew could not move the walls. The painting in the living room is the product of a young Vancouver artist, and was purchased by Howitt as his first piece of art.
The new Skullbocks office was a real loft, also in Vancouver, on Beatty Street.
Open source
Antitrust's pro–open source story excited industry leaders and professionals, with the prospects of expanding the public's awareness and knowledge of the availability of open-source software. The film heavily features Linux and its community, using screenshots of the Gnome desktop, consulting Linux professionals, and including cameos by Miguel de Icaza and Scott McNealy (the latter appearing in the film's trailers). Jon Hall, executive director of Linux International and consultant on the film, said "[Antitrust] is a way of bringing the concept of open source and the fact that there is an alternative to the general public, who often don't even know that there is one."
Despite the film's message about open source computing, MGM did not follow through with their marketing: the official website for Antitrust featured some videotaped interviews which were only available in Apple's proprietary QuickTime format.
Reception
Antitrust received mainly negative reviews, and has a "Rotten" consensus of 24% on Rotten Tomatoes, based on 106 reviews, with an average score of 4 out of 10. The summary states "Due to its use of clichéd and ludicrous plot devices, this thriller is more predictable than suspenseful. Also, the acting is bad." The film also has a score of 31 out of 100, based on 29 reviews, on Metacritic. Audiences polled by CinemaScore gave the film a grade of "B+" on a scale of A to F.
Roger Ebert of the Chicago Sun-Times gave the film two stars out of four. Linux.com appreciated the film's open-source message, but felt the film overall was lackluster, saying "AntiTrust is probably worth a $7.50 ticket on a night when you've got nothing else planned."
James Keith La Croix of Detroit's Metro Times gave the film four stars, impressed that "Antitrust is a thriller that actually thrills."
The film won both the Golden Goblet for Best Feature Film, and Best Director for Howitt, at the 2001 Shanghai International Film Festival.
Home media
Antitrust was released as a "Special Edition" DVD on May 15, 2001, and on VHS on December 26, 2001. The DVD features audio commentary by the director and editor, an exclusive documentary, deleted scenes and alternative opening and closing sequences with director's commentary, Everclear's music video for "When It All Goes Wrong Again" (which is played over the beginning of the closing credits), and the original theatrical trailer. The DVD was re-released August 1, 2006. It was released on Blu-ray Disc on September 22, 2015.
See also
List of films featuring surveillance
References
External links
2001 films
2001 thriller films
2000s American films
2000s English-language films
American thriller films
Films about computer and internet entrepreneurs
Films about security and surveillance
Films directed by Peter Howitt
Films scored by Don Davis (composer)
Films set in Portland, Oregon
Films shot in California
Films shot in Portland, Oregon
Films shot in Vancouver
Films with screenplays by Howard Franklin
Hyde Park Entertainment films
Metro-Goldwyn-Mayer films
Techno-thriller films
Works about free software
English-language thriller films | Antitrust (film) | Technology | 2,008 |
44,933,795 | https://en.wikipedia.org/wiki/Space%20cloth | Space cloth is a hypothetical infinite plane of conductive material having a resistance of η ohms per square, where η is the impedance of free space. η ≈ 376.7 ohms. If a transmission line composed of straight parallel perfect conductors in free space is terminated by space cloth that is normal to the transmission line then that transmission line is terminated by its characteristic impedance. The calculation of the characteristic impedance of a transmission line composed of straight, parallel good conductors may be replaced by the calculation of the D.C. resistance between electrodes placed on a two-dimensional resistive surface. This equivalence can be used in reverse to calculate the resistance between two conductors on a resistive sheet if the arrangement of the conductors is the same as the cross section of a transmission line of known impedance. For example, a pad surrounded by a guard ring on a printed circuit board (PCB) is similar to the cross section of a coaxial cable transmission line.
Examples
Calculating characteristic impedance from the surface resistance
The figure to the right shows a coaxial cable terminated by space cloth. In the case of a closed structure like a coaxial cable, the space cloth may be trimmed to the boundary of the outer conductor. The computation of resistance between the conductors can be computed with 2D electromagnetic field solver methods including the relaxation method and analog methods using resistance paper.
In the case of a coaxial cable, there is a closed-form solution. The resistive surface is considered to be a series of infinitesimal annular rings, each having a width of dρ and a resistance of (η/2πρ) dρ. The resistance between the inner electrode and the outer electrode is just the integral over all such rings:

R = ∫ from d/2 to D/2 of (η/2πρ) dρ = (η/2π) ln(D/d),

where d is the diameter of the inner conductor and D is the inner diameter of the outer conductor. This is exactly the equation for the characteristic impedance of a coaxial cable in free space.
Calculating surface resistance from characteristic impedance
The characteristic impedance of a two parallel wire transmission line is given by

Z0 = (η/π) arcosh(D/d),

where d is the diameter of the wire and D is the center to center separation between the wires.
If the second figure is taken to be two round pads on a printed circuit board that has surface contamination resulting in a surface resistivity of Rs (50 MΩ per square, for example), then the resistance between the two pads is given by:

R = (Rs/π) arcosh(D/d).
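The formula can be evaluated numerically (a sketch; the geometry values are placeholders):

#include <stdio.h>
#include <math.h>

int main(void) {
    const double pi = 3.14159265358979;
    double Rs = 50e6;   /* surface resistivity, ohms per square */
    double D = 10.0;    /* center-to-center pad separation, mm */
    double d = 2.0;     /* pad diameter, mm */
    /* R = (Rs / pi) * arcosh(D / d), by analogy with the two-wire line */
    double R = Rs / pi * acosh(D / d);
    printf("leakage resistance = %.3g ohms\n", R);   /* about 3.65e7 ohms */
    return 0;
}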
Multi-mode transmission line
The figure shows the cross section of a three conductor transmission line. The structure has two transmission eigen-modes which are the differential mode (conductors a and b driven with equal amplitude but opposite phase voltages with respect to conductor c) and the common mode (conductors a and b driven with the same voltages with respect to conductor c). In general, the eigen-modes have different characteristic impedances.
Provided the structure's dimensions satisfy the conditions given with the original figure, the field in regions IV and V can be ignored.
The resistance of each of regions I–III scales with η, the impedance of space cloth (unit: ohms per square), multiplied by a geometric factor determined by the dimensions of the corresponding region in the figure.
In the common mode, conductors a and b are at the same voltage so there is no effect from region I. The common mode characteristic impedance is the resistance of region II in parallel with region III.
In the differential mode, the characteristic impedance is the resistance of region I in parallel with the series combination of regions II and III.
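Those two statements reduce to simple resistor algebra, sketched below (the region resistances are placeholders, since the figure's formulas are not reproduced here):

#include <stdio.h>

/* resistance of two resistors in parallel */
static double parallel(double a, double b) { return a * b / (a + b); }

int main(void) {
    double R1 = 200.0, R2 = 150.0, R3 = 150.0;   /* space-cloth resistances of regions I-III, ohms */
    double Z_common = parallel(R2, R3);          /* region II in parallel with region III */
    double Z_diff = parallel(R1, R2 + R3);       /* region I in parallel with II and III in series */
    printf("Z_common = %.1f ohms, Z_diff = %.1f ohms\n", Z_common, Z_diff);   /* 75.0 and 120.0 */
    return 0;
}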
See also
Resistance paper
Teledeltos
Notes
References
Electromagnetic radiation
Transmission lines | Space cloth | Physics | 677 |
12,696,909 | https://en.wikipedia.org/wiki/Puto%20%28food%29 | Puto is a Filipino steamed rice cake, traditionally made from slightly fermented rice dough (galapong). It is eaten as is or as an accompaniment to a number of savoury dishes (most notably, dinuguan). Puto is also an umbrella term for various kinds of indigenous steamed cakes, including those made without rice. It is a sub-type of kakanin (rice cakes).
Description
Puto is made from rice soaked overnight to allow it to ferment slightly. Yeast may sometimes be added to aid this process. It is then ground (traditionally with stone mills) into a rice dough known as galapong. The mixture is then steamed.
The most common shape of the putuhán steamer used in making puto is round, ranging from in diameter and between deep. These steamers are rings made of either soldered sheet metal built around a perforated pan, or of thin strips of bent bamboo enclosing a flat basket of split bamboo slats (similar to a dim sum steamer basket). The cover is almost always conical to allow the condensing steam to drip along the perimeter instead of on the cakes.
A sheet of muslin (katsâ) is stretched over the steamer ring and the prepared rice batter poured directly on it; an alternative method uses banana leaf as a liner. The puto is then sold as large, thick cakes in flat baskets called bilao lined with banana leaf, either as whole loaves or sliced into smaller, lozenge-shaped individual portions.
Properly prepared puto imparts the slightly yeasty aroma of fermented rice galapong, which may be enhanced by the fragrance of banana leaves. It is neither sticky nor dry and crumbly, but soft, moist, and with a fine, uniform grain. The essential flavour is of freshly cooked rice, but it may be sweetened a bit if eaten by itself as a snack instead of as accompaniment to savory dishes. Most puto cooked in the Tagalog-speaking regions may contain a small quantity of wood ash lye.
Puto eaten on its own is commonly topped with cheese, butter/margarine, hard-boiled eggs, meat, or freshly grated coconut. In Bulacan, puto with cheese toppings are humorously called putong bakla ("homosexual puto"), while puto with egg toppings are called putong lalaki ("man's puto") and those filled with meat are called putong babae ("woman's puto").
Variants
Puto is also an umbrella term for various kinds of indigenous steamed cakes, including those made without rice. The key characteristics are that they are cooked by steaming and are made with some type of flour (to contrast with bibingka, which are baked cakes). There are exceptions, however, like puto seko which is a baked dry cookie. The traditional puto made with galapong is sometimes referred to as putong puti ("white puto") or putong bigas ("rice puto") to distinguish it from other dishes also called puto. It is also similar to potu in Guam.
Modern variants of puto may also use non-traditional ingredients like ube (purple yam), vanilla, or chocolate. Notable variants of puto, as well as other dishes classified as puto, include the following:
Rice-based puto
Puto bagas - a puto shaped like a concave disc that is made from ground rice (maaw). Unlike other puto it is baked until crunchy. It originates from the Bicol Region.
Puto bao - a puto from the Bicol region traditionally cooked in halved coconut shells lined with a banana leaf. It distinctively has a filling of sweetened coconut meat (bukayo).
Puto bumbong – traditionally made from a special variety of sticky or glutinous rice (called pirurutong) which has a distinctly purple colour. The rice mixture is soaked in saltwater and dried overnight and then poured into bumbóng (bamboo tube) and then steamed until steam rises out of the bamboo tubes. It is served topped with butter or margarine and shredded coconut mixed with moscovado sugar. It is commonly eaten during Christmas in the Philippines along with bibingka, another type of rice cake.
Puto dahon or puto dahon saging - a puto from the Hiligaynon people that is traditionally cooked wrapped in a banana leaf.
Puto kutsinta (typically just called kutsinta or cuchinta)- a steamed rice cake similar to putong puti, but is made using lye. It is characteristically moist and chewy, and can range in color from reddish brown to yellow or orange in coloration. It is typically topped with shredded coconut meat.
Putong lusong - an anise-flavored puto from Pampanga typically served in square or rectangular slices.
Puto Manapla – a variant specifically flavored with anise and lined with banana leaves. It is named after the municipality of Manapla where it originates.
Puto maya – more accurately, a type of biko. It is made from glutinous rice (usually purple glutinous rice called tapol) soaked in water, drained and then placed into a steamer for 30 minutes. This rice mixture is then combined with coconut milk, salt, sugar and ginger juice and returned to the steamer for another 25 to 30 minutes. It is popular in the Cebuano-speaking regions of the Philippines. It is traditionally served as small patties and eaten very early in the morning with sikwate (hot chocolate). It is also commonly paired with ripe sweet mangoes.
Puto pandan – puto cooked with a knot of pandan leaves, which imparts additional fragrance and a light green color.
Puto-Pao – a combination of siopao (meat-filled bun) and puto. It uses the traditional puto recipe but incorporates a spiced meat filling. It is similar to some traditional variants of puto (especially in Bulacan) that also have meat fillings.
Putong pula - a Tagalog puto from the Rizal Province which uses brown muscovado sugar, giving it a brownish color.
Putong pulo or putong polo - small spherical puto from Tagalog regions that typically use achuete seeds for coloring, giving the puto a light brown to orange color. They are traditionally served with a topping of cheese or grated young coconut.
Putong sulot - a version of puto bumbong that uses white glutinous rice. Unlike puto bumbong, it is available year-round. It originates from the provinces of Pampanga and Batangas.
Sayongsong – also known as sarungsong or alisuso, they are steamed ground mixture of glutinous rice, regular rice, and young coconut or roasted peanuts, with coconut milk, sugar, and calamansi juice. It is distinctively served in cone-shaped banana leaves. It is a specialty of Surigao del Norte and the Caraga Region, as well as the southeastern Visayas.
Others
Puto flan (also called leche puto, or puto leche) – a combination of a steamed muffin and leche flan (custard). It uses regular flour, though there are versions that use rice flour.
Putong kamotengkahoy - also known as puto binggala in Visayan and puto a banggala in Maranao. A small cupcake made from cassava, grated coconut, and sugar. It is very similar to cassava cake, except it is steamed rather than baked.
Puto lanson – puto from Iloilo which is made of grated cassava, and is foamy when cooked.
Puto mamón – a puto mixture that has no rice but combines egg yolks, salt and sugar. A mixture of milk and water and another of flour are alternately mixed into the yolks, then egg whites are beaten and folded in before the dough is poured into muffin cups and steamed for 15 to 20 minutes. It is a steamed variant of mamón, a traditional Filipino chiffon cake.
Puto seco (also spelled puto seko) – a type of powdery cookie made from corn flour. The name literally means "dry puto" in Spanish. It is baked rather than steamed. Sometimes also called puto masa (literally "corn dough puto"; not to be confused with masa podrida, a Filipino shortbread cookie).
Gallery
See also
Bibingka
Espasol
Idli
Kakanin
Kalamay
Panyalam
Rice cake
Sapin-sapin
Piutu
Puttu
Appam
List of steamed foods
References
Fermented foods
Foods containing coconut
Philippine rice dishes
Rice cakes
Steamed foods | Puto (food) | Biology | 1,864 |
4,406,343 | https://en.wikipedia.org/wiki/Chronology%20of%20computation%20of%20%CF%80 | The table below is a brief chronology of computed numerical values of, or bounds on, the mathematical constant pi (π). For more detailed explanations for some of these calculations, see Approximations of π.
As of July 2024, π has been calculated to 202,112,290,000,000 (approximately 202 trillion) decimal digits. The last 100 decimal digits of the latest world record computation are:
7034341087 5351110672 0525610978 1945263024 9604509887 5683914937 4658179610 2004394122 9823988073 3622511852
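The record computations use far more sophisticated methods, but the basic shape of a digit computation can be illustrated in a few lines. The Python sketch below is illustrative only and unrelated to any record attempt: it evaluates Machin's 1706 formula π/4 = 4·arctan(1/5) − arctan(1/239) with fixed-point integer arithmetic; the function names and the ten guard digits are choices made here, not taken from any reference implementation.

def arctan_inv(x: int, digits: int) -> int:
    """arctan(1/x) scaled by 10**digits, summed from the Taylor series."""
    scale = 10 ** digits
    term = scale // x          # first term 1/x, in fixed point
    total, n, sign = term, 1, 1
    while term:
        term //= x * x         # next odd power of 1/x
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def machin_pi(digits: int) -> str:
    guard = 10                 # extra working digits to absorb truncation
    d = digits + guard
    pi = 4 * (4 * arctan_inv(5, d) - arctan_inv(239, d))
    s = str(pi)
    return s[0] + "." + s[1:digits + 1]

print(machin_pi(50))           # 3.14159265358979323846...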
Before 1400
1400–1949
1949–2009
2009–present
See also
History of pi
Approximations of π
References
External links
Borwein, Jonathan, "The Life of Pi"
Kanada Laboratory home page
Stu's Pi page
Takahashi's page
Google's web service making all 100 trillion digits available
History of mathematics
Pi
Pi algorithms | Chronology of computation of π | Mathematics | 208 |
29,415,121 | https://en.wikipedia.org/wiki/Bis%28acetonitrile%29palladium%20dichloride | Bis(acetonitrile)palladium dichloride is the coordination complex with the formula PdCl2(NCCH3)2. It is the adduct of two acetonitrile ligands with palladium(II) chloride. It is a yellow-brown solid that is soluble in organic solvents. The compound is a reagent and a catalyst for reactions that require soluble Pd(II). The compound is similar to bis(benzonitrile)palladium dichloride. It reacts with 1,5-cyclooctadiene to give dichloro(1,5-cyclooctadiene)palladium.
References
Palladium compounds
Homogeneous catalysis
Coordination complexes
Chloro complexes
Nitriles | Bis(acetonitrile)palladium dichloride | Chemistry | 166 |
173,547 | https://en.wikipedia.org/wiki/Cayley%E2%80%93Hamilton%20theorem | In linear algebra, the Cayley–Hamilton theorem (named after the mathematicians Arthur Cayley and William Rowan Hamilton) states that every square matrix over a commutative ring (such as the real or complex numbers or the integers) satisfies its own characteristic equation.
The characteristic polynomial of an n × n matrix A is defined as p_A(λ) = det(λI_n − A), where det is the determinant operation, λ is a variable scalar element of the base ring, and I_n is the n × n identity matrix. Since each entry of the matrix λI_n − A is either constant or linear in λ, the determinant of λI_n − A is a degree-n monic polynomial in λ, so it can be written as
p_A(λ) = λ^n + c_{n−1}λ^{n−1} + ⋯ + c_1 λ + c_0.
By replacing the scalar variable λ with the matrix A, one can define an analogous matrix polynomial expression,
p_A(A) = A^n + c_{n−1}A^{n−1} + ⋯ + c_1 A + c_0 I_n.
(Here, A is the given matrix—not a variable, unlike λ—so p_A(A) is a constant rather than a function.)
The Cayley–Hamilton theorem states that this polynomial expression is equal to the zero matrix, which is to say that p_A(A) = 0; that is, the characteristic polynomial p_A is an annihilating polynomial for A.
One use for the Cayley–Hamilton theorem is that it allows A^n to be expressed as a linear combination of the lower matrix powers of A:
A^n = −c_{n−1}A^{n−1} − ⋯ − c_1 A − c_0 I_n.
When the ring is a field, the Cayley–Hamilton theorem is equivalent to the statement that the minimal polynomial of a square matrix divides its characteristic polynomial.
A special case of the theorem was first proved by Hamilton in 1853 in terms of inverses of linear functions of quaternions. This corresponds to the special case of certain 4 × 4 real or 2 × 2 complex matrices. Cayley in 1858 stated the result for 3 × 3 and smaller matrices, but only published a proof for the 2 × 2 case. As for n × n matrices, Cayley stated "..., I have not thought it necessary to undertake the labor of a formal proof of the theorem in the general case of a matrix of any degree". The general case was first proved by Ferdinand Frobenius in 1878.
Examples
1 × 1 matrices
For a 1 × 1 matrix A = (a), the characteristic polynomial is given by p(λ) = λ − a, and so p(A) = (a) − a(1) = 0 is trivial.
2 × 2 matrices
As a concrete example, let
A = [[1, 2], [3, 4]].
Its characteristic polynomial is given by
p(λ) = det(λI_2 − A) = det [[λ − 1, −2], [−3, λ − 4]] = (λ − 1)(λ − 4) − (−2)(−3) = λ^2 − 5λ − 2.
The Cayley–Hamilton theorem claims that, if we define
p(X) = X^2 − 5X − 2I_2,
then
p(A) = A^2 − 5A − 2I_2 = [[0, 0], [0, 0]].
We can verify by computation that indeed,
A^2 − 5A − 2I_2 = [[7, 10], [15, 22]] − [[5, 10], [15, 20]] − [[2, 0], [0, 2]] = [[0, 0], [0, 0]].
For a generic 2 × 2 matrix,
A = [[a, b], [c, d]],
the characteristic polynomial is given by p(λ) = λ^2 − (a + d)λ + (ad − bc), so the Cayley–Hamilton theorem states that
p(A) = A^2 − (a + d)A + (ad − bc)I_2 = [[0, 0], [0, 0]];
which is indeed always the case, evident by working out the entries of A^2.
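The claim is also easy to confirm numerically. The following Python sketch (an illustration, not part of the theorem's content) uses NumPy's poly to obtain the characteristic coefficients and Horner's rule to evaluate p(A) for the example above:

import numpy as np

# Verification sketch: check p(A) = 0 for the example matrix above.
A = np.array([[1, 2], [3, 4]], dtype=float)

# np.poly returns the characteristic polynomial coefficients,
# highest degree first: here [1, -5, -2] for t^2 - 5t - 2.
c = np.poly(A)

# Evaluate p(A) with Horner's rule, using matrix products throughout.
p_A = np.zeros_like(A)
for coeff in c:
    p_A = p_A @ A + coeff * np.eye(2)

print(np.allclose(p_A, 0))   # True, as Cayley-Hamilton predicts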
Applications
Determinant and inverse matrix
For a general invertible n × n matrix A, i.e., one with nonzero determinant, A^{−1} can thus be written as an order-(n − 1) polynomial expression in A:
A^{−1} = −(1/c_0)(A^{n−1} + c_{n−1}A^{n−2} + ⋯ + c_1 I_n).
As indicated, the Cayley–Hamilton theorem amounts to the identity
A^n + c_{n−1}A^{n−1} + ⋯ + c_1 A + c_0 I_n = 0.
The coefficients c_i are given by the elementary symmetric polynomials of the eigenvalues of A. Using Newton identities, the elementary symmetric polynomials can in turn be expressed in terms of power sum symmetric polynomials of the eigenvalues,
s_k = Σ_{i=1}^n λ_i^k = tr(A^k),
where tr(A^k) is the trace of the matrix A^k. Thus, we can express the c_i in terms of the traces of powers of A.
In general, the formula for the coefficients is given in terms of complete exponential Bell polynomials as
c_{n−k} = ((−1)^k / k!) B_k(s_1, −1!·s_2, 2!·s_3, …, (−1)^{k−1}(k−1)!·s_k).
In particular, the determinant of A equals (−1)^n c_0. Thus, the determinant can be written as the trace identity:
det A = (1/n!) B_n(s_1, −1!·s_2, 2!·s_3, …, (−1)^{n−1}(n−1)!·s_n).
Likewise, the characteristic polynomial equation p(A) = 0 can be rearranged as
−c_0 I_n = A(A^{n−1} + c_{n−1}A^{n−2} + ⋯ + c_1 I_n),
and, by multiplying both sides by A^{−1} (note (−1)^n c_0 = det A ≠ 0), one is led to an expression for the inverse of A as a trace identity,
A^{−1} = ((−1)^{n−1} / det A)(A^{n−1} + c_{n−1}A^{n−2} + ⋯ + c_1 I_n).
Another method for obtaining these coefficients c_k for a general n × n matrix, provided no root be zero, relies on the following alternative expression for the determinant,
p(λ) = det(λI_n − A) = λ^n exp(tr(log(I_n − A/λ))).
Hence, by virtue of the Mercator series,
p(λ) = λ^n exp(−tr Σ_{m=1}^∞ (A/λ)^m / m),
where the exponential only needs be expanded to order λ^{−n}, since p(λ) is of order λ^n, the net negative powers of λ automatically vanishing by the C–H theorem. (Again, this requires a ring containing the rational numbers.) Differentiation of this expression with respect to λ allows one to express the coefficients of the characteristic polynomial for general n as determinants of m × m matrices.
Examples
For instance, the first few Bell polynomials are B_0 = 1, B_1(x_1) = x_1, B_2(x_1, x_2) = x_1^2 + x_2, and B_3(x_1, x_2, x_3) = x_1^3 + 3x_1x_2 + x_3.
Using these to specify the coefficients of the characteristic polynomial of a 2 × 2 matrix yields
c_1 = −s_1 = −tr A,   c_0 = (1/2)(s_1^2 − s_2) = (1/2)((tr A)^2 − tr(A^2)).
The coefficient c_0 gives the determinant of the 2 × 2 matrix and c_1 gives minus its trace, while its inverse is given by
A^{−1} = ((tr A) I_2 − A) / det A.
It is apparent from the general formula for c_{n−k}, expressed in terms of Bell polynomials, that the expressions
−tr A   and   (1/2)((tr A)^2 − tr(A^2))
always give the coefficients c_{n−1} of λ^{n−1} and c_{n−2} of λ^{n−2} in the characteristic polynomial of any n × n matrix, respectively. So, for a 3 × 3 matrix A, the statement of the Cayley–Hamilton theorem can also be written as
A^3 − (tr A)A^2 + (1/2)((tr A)^2 − tr(A^2))A − det(A) I_3 = 0,
where the right-hand side designates a 3 × 3 matrix with all entries reduced to zero. Likewise, this determinant in the n = 3 case, is now
det A = (1/6)((tr A)^3 − 3 tr A tr(A^2) + 2 tr(A^3)).
This expression gives the negative of the coefficient c_{n−3} of λ^{n−3} in the general case, as seen below.
Similarly, one can write for a 4 × 4 matrix A,
A^4 − (tr A)A^3 + (1/2)((tr A)^2 − tr(A^2))A^2 − (1/6)((tr A)^3 − 3 tr A tr(A^2) + 2 tr(A^3))A + det(A) I_4 = 0,
where, now, the determinant is det A,
det A = (1/24)((tr A)^4 − 6 (tr A)^2 tr(A^2) + 3 (tr(A^2))^2 + 8 tr A tr(A^3) − 6 tr(A^4)),
and so on for larger matrices. The increasingly complex expressions for the coefficients are deducible from Newton's identities or the Faddeev–LeVerrier algorithm.
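As a sketch of the Faddeev–LeVerrier algorithm just mentioned, the Python code below computes the characteristic coefficients and, as a by-product, the inverse; the function name and return convention are choices made here for illustration:

import numpy as np

def faddeev_leverrier(A):
    """Characteristic coefficients [1, c_{n-1}, ..., c_0] and A^{-1}
    via the Faddeev-LeVerrier recursion."""
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)        # M_0 = 0
    coeffs = [1.0]                           # leading coefficient of t^n
    for k in range(1, n + 1):
        M = A @ M + coeffs[-1] * np.eye(n)   # M_k = A M_{k-1} + c_{n-k+1} I
        coeffs.append(-np.trace(A @ M) / k)  # c_{n-k} = -tr(A M_k) / k
    c0 = coeffs[-1]
    A_inv = -M / c0 if c0 != 0 else None     # since A M_n = -c_0 I
    return coeffs, A_inv

A = np.array([[1.0, 2.0], [3.0, 4.0]])
coeffs, A_inv = faddeev_leverrier(A)
print(coeffs)      # [1.0, -5.0, -2.0]  ->  t^2 - 5t - 2
print(A @ A_inv)   # identity matrix (up to rounding)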
n-th power of matrix
The Cayley–Hamilton theorem always provides a relationship between the powers of A (though not always the simplest one), which allows one to simplify expressions involving such powers, and evaluate them without having to compute the power A^n or any higher powers of A.
As an example, for A = [[1, 2], [3, 4]] the theorem gives
A^2 = 5A + 2I_2.
Then, to calculate A^4, observe
A^3 = (5A + 2I_2)A = 5A^2 + 2A = 5(5A + 2I_2) + 2A = 27A + 10I_2.
Likewise,
A^4 = A^3 A = (27A + 10I_2)A = 27A^2 + 10A = 27(5A + 2I_2) + 10A = 145A + 54I_2.
Notice that we have been able to write the matrix power as the sum of two terms. In fact, matrix power of any order k can be written as a matrix polynomial of degree at most n − 1, where n is the size of a square matrix. This is an instance where the Cayley–Hamilton theorem can be used to express a matrix function, which we will discuss below systematically.
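This reduction of powers is mechanical enough to automate. A minimal Python sketch for the example matrix above, assuming only the relation A^2 = 5A + 2I_2 derived there (names are illustrative):

# Sketch: express A^k as x*A + y*I using only A^2 = 5A + 2I from above;
# no matrix larger than A itself is ever multiplied.
def power_coeffs(k):
    """Return (x, y) with A^k = x*A + y*I for the example matrix."""
    x, y = 1, 0                      # A^1 = 1*A + 0*I
    for _ in range(k - 1):
        # A^(m+1) = A*(x*A + y*I) = x*A^2 + y*A = x*(5A + 2I) + y*A
        x, y = 5 * x + y, 2 * x
    return x, y

print(power_coeffs(3))   # (27, 10):  A^3 = 27A + 10I
print(power_coeffs(4))   # (145, 54): A^4 = 145A + 54I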
Matrix functions
Given an analytic function
f(x) = Σ_{k=0}^∞ a_k x^k
and the characteristic polynomial p(x) of degree n of an n × n matrix A, the function can be expressed using long division as
f(x) = q(x) p(x) + r(x),
where q(x) is some quotient polynomial and r(x) is a remainder polynomial such that 0 ≤ deg r(x) < n.
By the Cayley–Hamilton theorem, replacing x by the matrix A gives p(A) = 0, so one has
f(A) = r(A).
Thus, the analytic function of the matrix A can be expressed as a matrix polynomial of degree less than n.
Let the remainder polynomial be
r(x) = c_0 + c_1 x + ⋯ + c_{n−1} x^{n−1}.
Since p(λ_i) = 0, evaluating the function f(x) at the n eigenvalues of A yields
f(λ_i) = r(λ_i) = c_0 + c_1 λ_i + ⋯ + c_{n−1} λ_i^{n−1},   for i = 1, 2, …, n.
This amounts to a system of n linear equations, which can be solved to determine the coefficients c_i. Thus, one has
f(A) = Σ_{k=0}^{n−1} c_k A^k.
When the eigenvalues are repeated, that is λ_i = λ_j for some i ≠ j, two or more equations are identical; and hence the linear equations cannot be solved uniquely. For such cases, for an eigenvalue λ with multiplicity m, the first m − 1 derivatives of p(x) vanish at the eigenvalue. This leads to the extra m − 1 linearly independent solutions
f^{(k)}(λ) = r^{(k)}(λ)   for k = 1, 2, …, m − 1,
which, combined with others, yield the required n equations to solve for the c_i.
Finding a polynomial that passes through the points (λ_i, f(λ_i)) is essentially an interpolation problem, and can be solved using Lagrange or Newton interpolation techniques, leading to Sylvester's formula.
For example, suppose the task is to find the polynomial representation of
f(A) = e^{At},   where   A = [[1, 2], [0, 3]].
The characteristic polynomial is p(x) = (x − 1)(x − 3) = x^2 − 4x + 3, and the eigenvalues are λ = 1, 3. Let r(x) = c_0 + c_1 x. Evaluating f(λ) = e^{λt} at the eigenvalues, one obtains two linear equations, e^t = c_0 + c_1 and e^{3t} = c_0 + 3c_1.
Solving the equations yields c_0 = (3e^t − e^{3t})/2 and c_1 = (e^{3t} − e^t)/2. Thus, it follows that
e^{At} = c_0 I_2 + c_1 A = [[e^t, e^{3t} − e^t], [0, e^{3t}]].
If, instead, the function were f(A) = sin(At), then the coefficients would have been c_0 = (3 sin t − sin 3t)/2 and c_1 = (sin 3t − sin t)/2; hence
sin(At) = c_0 I_2 + c_1 A = [[sin t, sin 3t − sin t], [0, sin 3t]].
As a further example, when considering
f(A) = e^{At},   where   A = [[0, 1], [−1, 0]],
then the characteristic polynomial is p(x) = x^2 + 1, and the eigenvalues are λ = ±i.
As before, evaluating the function at the eigenvalues gives us the linear equations e^{it} = c_0 + i c_1 and e^{−it} = c_0 − i c_1; the solution of which gives c_0 = cos t and c_1 = sin t. Thus, for this case,
e^{At} = (cos t) I_2 + (sin t) A = [[cos t, sin t], [−sin t, cos t]],
which is a rotation matrix.
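The eigenvalue-interpolation recipe above translates directly into code. The following Python sketch reproduces the rotation-matrix example and checks it against SciPy's general-purpose matrix exponential; it assumes, as in the example, that the eigenvalues are distinct:

import numpy as np
from scipy.linalg import expm

# Sketch: e^{At} as c0*I + c1*A by matching f at the eigenvalues,
# for the rotation example above (distinct eigenvalues +/- i).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
t = 0.7

lam = np.linalg.eigvals(A)                    # the eigenvalues i and -i
V = np.vander(lam, N=2, increasing=True)      # rows [1, lambda_i]
c0, c1 = np.linalg.solve(V, np.exp(lam * t))  # f(lambda_i) = c0 + c1*lambda_i

F = c0.real * np.eye(2) + c1.real * A         # coefficients are real here
print(np.allclose(F, expm(A * t)))            # True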
A standard example of such usage is the exponential map from the Lie algebra of a matrix Lie group into the group. It is given by a matrix exponential,
exp(tX) = Σ_{k=0}^∞ (tX)^k / k!.
Such expressions have long been known for SU(2),
e^{iθ(n̂·σ)} = I_2 cos θ + i(n̂·σ) sin θ,
where the σ are the Pauli matrices and n̂ is a unit vector, and for SO(3),
e^{iθ(n̂·J)} = I_3 + i(n̂·J) sin θ + (n̂·J)^2 (cos θ − 1),
which is Rodrigues' rotation formula. For the notation, see 3D rotation group#A note on Lie algebras.
More recently, expressions have appeared for other groups, like the Lorentz group SO(3, 1), O(4, 2) and SU(2, 2), as well as GL(n, R). The group O(4, 2) is the conformal group of spacetime, SU(2, 2) its simply connected cover (to be precise, the simply connected cover of the connected component of O(4, 2)). The expressions obtained apply to the standard representation of these groups. They require knowledge of (some of) the eigenvalues of the matrix to exponentiate. For SU(2) (and hence for SO(3)), closed expressions have been obtained for all irreducible representations, i.e. of any spin.
Algebraic number theory
The Cayley–Hamilton theorem is an effective tool for computing the minimal polynomial of algebraic integers. For example, given a finite extension Q(α_1, …, α_k) of Q and an algebraic integer α ∈ Q(α_1, …, α_k) which is a non-zero linear combination of the products α_1^{n_1} α_2^{n_2} ⋯ α_k^{n_k}, we can compute the minimal polynomial of α by finding a matrix representing the Q-linear transformation
T_α : Q(α_1, …, α_k) → Q(α_1, …, α_k),   x ↦ αx.
If we call this transformation matrix A, then we can find the minimal polynomial by applying the Cayley–Hamilton theorem to A.
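As a concrete sketch of this method, take α = √2 + √3 in Q(√2, √3) with basis (1, √2, √3, √6). The multiplication-by-α matrix below is worked out by hand; SymPy then returns its characteristic polynomial, which here coincides with the minimal polynomial (the matrix layout and variable name are choices made for this illustration):

from sympy import Matrix, symbols

# Columns are the images of the basis vectors under multiplication
# by alpha = sqrt(2) + sqrt(3); e.g. alpha * sqrt(2) = 2 + sqrt(6).
T = Matrix([
    [0, 2, 3, 0],
    [1, 0, 0, 3],
    [1, 0, 0, 2],
    [0, 1, 1, 0],
])

lam = symbols('lambda')
print(T.charpoly(lam).as_expr())   # lambda**4 - 10*lambda**2 + 1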
Proofs
The Cayley–Hamilton theorem is an immediate consequence of the existence of the Jordan normal form for matrices over algebraically closed fields. In this section, direct proofs are presented.
As the examples above show, obtaining the statement of the Cayley–Hamilton theorem for an n × n matrix
A = (a_{ij})
requires two steps: first the coefficients c_i of the characteristic polynomial are determined by development as a polynomial in t of the determinant
p(t) = det(t I_n − A) = t^n + c_{n−1} t^{n−1} + ⋯ + c_1 t + c_0,
and then these coefficients are used in a linear combination of powers of A that is equated to the n × n zero matrix:
A^n + c_{n−1} A^{n−1} + ⋯ + c_1 A + c_0 I_n = 0.
The left-hand side can be worked out to an n × n matrix whose entries are (enormous) polynomial expressions in the set of entries a_{ij} of A, so the Cayley–Hamilton theorem states that each of these n^2 expressions equals 0. For any fixed value of n, these identities can be obtained by tedious but straightforward algebraic manipulations. None of these computations, however, can show why the Cayley–Hamilton theorem should be valid for matrices of all possible sizes n, so a uniform proof for all n is needed.
Preliminaries
If a vector v of size n is an eigenvector of A with eigenvalue λ, in other words if A·v = λv, then
p(A)·v = A^n·v + c_{n−1} A^{n−1}·v + ⋯ + c_1 A·v + c_0 v = λ^n v + c_{n−1} λ^{n−1} v + ⋯ + c_1 λ v + c_0 v = p(λ)v,
which is the zero vector since p(λ) = 0 (the eigenvalues of A are precisely the roots of p(t)). This holds for all possible eigenvalues λ, so the two matrices equated by the theorem certainly give the same (null) result when applied to any eigenvector. Now if A admits a basis of eigenvectors, in other words if A is diagonalizable, then the Cayley–Hamilton theorem must hold for A, since two matrices that give the same values when applied to each element of a basis must be equal.
Consider now the function e : M(n, C) → M(n, C) which maps n × n matrices to n × n matrices given by the formula e(A) = p_A(A), i.e. which takes a matrix A and plugs it into its own characteristic polynomial. Not all matrices are diagonalizable, but for matrices with complex coefficients many of them are: the set of diagonalizable complex square matrices of a given size is dense in the set of all such square matrices (for a matrix to be diagonalizable it suffices for instance that its characteristic polynomial not have any multiple roots). Now viewed as a function e : C^{n^2} → C^{n^2} (since n × n matrices have n^2 entries) we see that this function is continuous. This is true because the entries of the image of a matrix are given by polynomials in the entries of the matrix. Since
e vanishes on the set of diagonalizable matrices,
and since the set is dense, by continuity this function must map the entire set of n × n matrices to the zero matrix. Therefore, the Cayley–Hamilton theorem is true for complex numbers, and must therefore also hold for R- or Q-valued matrices.
While this provides a valid proof, the argument is not very satisfactory, since the identities represented by the theorem do not in any way depend on the nature of the matrix (diagonalizable or not), nor on the kind of entries allowed (for matrices with real entries the diagonalizable ones do not form a dense set, and it seems strange one would have to consider complex matrices to see that the Cayley–Hamilton theorem holds for them). We shall therefore now consider only arguments that prove the theorem directly for any matrix using algebraic manipulations only; these also have the benefit of working for matrices with entries in any commutative ring.
There is a great variety of such proofs of the Cayley–Hamilton theorem, of which several will be given here. They vary in the amount of abstract algebraic notions required to understand the proof. The simplest proofs use just those notions needed to formulate the theorem (matrices, polynomials with numeric entries, determinants), but involve technical computations that render somewhat mysterious the fact that they lead precisely to the correct conclusion. It is possible to avoid such details, but at the price of involving more subtle algebraic notions: polynomials with coefficients in a non-commutative ring, or matrices with unusual kinds of entries.
Adjugate matrices
All proofs below use the notion of the adjugate matrix adj(M) of an n × n matrix M, the transpose of its cofactor matrix. This is a matrix whose coefficients are given by polynomial expressions in the coefficients of M (in fact, by certain (n − 1) × (n − 1) determinants), in such a way that the following fundamental relations hold,
adj(M)·M = M·adj(M) = det(M) I_n.
These relations are a direct consequence of the basic properties of determinants: evaluation of the (i, j) entry of the matrix product on the left gives the expansion by column j of the determinant of the matrix obtained from M by replacing column i by a copy of column j, which is det(M) if i = j and zero otherwise; the matrix product on the right is similar, but for expansions by rows.
Being a consequence of just algebraic expression manipulation, these relations are valid for matrices with entries in any commutative ring (commutativity must be assumed for determinants to be defined in the first place). This is important to note here, because these relations will be applied below for matrices with non-numeric entries such as polynomials.
A direct algebraic proof
This proof uses just the kind of objects needed to formulate the Cayley–Hamilton theorem: matrices with polynomials as entries. The matrix t I_n − A whose determinant is the characteristic polynomial of A is such a matrix, and since polynomials form a commutative ring, it has an adjugate
B = adj(t I_n − A).
Then, according to the right-hand fundamental relation of the adjugate, one has
(t I_n − A)·B = det(t I_n − A) I_n = p(t) I_n.
Since B is also a matrix with polynomials in t as entries, one can, for each i, collect the coefficients of t^i in each entry to form a matrix B_i of numbers, such that one has
B = Σ_{i=0}^{n−1} t^i B_i.
(The way the entries of B are defined makes clear that no powers higher than t^{n−1} occur). While this looks like a polynomial with matrices as coefficients, we shall not consider such a notion; it is just a way to write a matrix with polynomial entries as a linear combination of constant matrices, and the coefficient t^i has been written to the left of the matrix to stress this point of view.
Now, one can expand the matrix product in our equation by bilinearity:
p(t) I_n = (t I_n − A)·B = (t I_n − A) Σ_{i=0}^{n−1} t^i B_i = Σ_{i=0}^{n−1} t^{i+1} B_i − Σ_{i=0}^{n−1} t^i A B_i = t^n B_{n−1} + Σ_{i=1}^{n−1} t^i (B_{i−1} − A B_i) − A B_0.
Writing
p(t) I_n = t^n I_n + c_{n−1} t^{n−1} I_n + ⋯ + c_1 t I_n + c_0 I_n,
one obtains an equality of two matrices with polynomial entries, written as linear combinations of constant matrices with powers of t as coefficients.
Such an equality can hold only if in any matrix position the entry that is multiplied by a given power t^i is the same on both sides; it follows that the constant matrices with coefficient t^i in both expressions must be equal. Writing these equations then for i from n down to 0, one finds
B_{n−1} = I_n,   B_{i−1} − A B_i = c_i I_n for 1 ≤ i ≤ n − 1,   −A B_0 = c_0 I_n.
Finally, multiply the equation of the coefficients of t^i from the left by A^i, and sum up:
A^n B_{n−1} + Σ_{i=1}^{n−1} (A^i B_{i−1} − A^{i+1} B_i) − A B_0 = A^n + c_{n−1} A^{n−1} + ⋯ + c_1 A + c_0 I_n.
The left-hand sides form a telescoping sum and cancel completely; the right-hand sides add up to p(A):
0 = p(A).
This completes the proof.
A proof using polynomials with matrix coefficients
This proof is similar to the first one, but tries to give meaning to the notion of polynomial with matrix coefficients that was suggested by the expressions occurring in that proof. This requires considerable care, since it is somewhat unusual to consider polynomials with coefficients in a non-commutative ring, and not all reasoning that is valid for commutative polynomials can be applied in this setting.
Notably, while arithmetic of polynomials over a commutative ring models the arithmetic of polynomial functions, this is not the case over a non-commutative ring (in fact there is no obvious notion of polynomial function in this case that is closed under multiplication). So when considering polynomials in with matrix coefficients, the variable must not be thought of as an "unknown", but as a formal symbol that is to be manipulated according to given rules; in particular one cannot just set to a specific value.
Let M = M(n, R) be the ring of n × n matrices with entries in some ring R (such as the real or complex numbers) that has A as an element. Matrices with polynomials in t as coefficients, such as t I_n − A or its adjugate B in the first proof, are elements of M(n, R[t]).
By collecting like powers of t, such matrices can be written as "polynomials" in t with constant matrices as coefficients; write M(n, R)[t] for the set of such polynomials. Since this set is in bijection with M(n, R[t]), one defines arithmetic operations on it correspondingly, in particular multiplication is given by
(Σ_i M_i t^i)(Σ_j N_j t^j) = Σ_{i,j} (M_i N_j) t^{i+j},
respecting the order of the coefficient matrices from the two operands; obviously this gives a non-commutative multiplication.
Thus, the identity
p(t) I_n = (t I_n − A)·B
from the first proof can be viewed as one involving a multiplication of elements in M(n, R)[t].
At this point, it is tempting to simply set t equal to the matrix A, which makes the first factor on the left equal to the zero matrix, and the right hand side equal to p(A); however, this is not an allowed operation when coefficients do not commute. It is possible to define a "right-evaluation map" P ↦ P(A), which replaces each t^i by the matrix power A^i of A, where one stipulates that the power is always to be multiplied on the right to the corresponding coefficient. But this map is not a ring homomorphism: the right-evaluation of a product differs in general from the product of the right-evaluations. This is so because multiplication of polynomials with matrix coefficients does not model multiplication of expressions containing unknowns: a product M t^i · N t^j = (M N) t^{i+j} is defined assuming that t commutes with N, but this may fail if t is replaced by the matrix A.
One can work around this difficulty in the particular situation at hand, since the above right-evaluation map does become a ring homomorphism if the matrix A is in the center of the ring of coefficients, so that it commutes with all the coefficients of the polynomials (the argument proving this is straightforward, exactly because commuting t with coefficients is now justified after evaluation).
Now, A is not always in the center of M, but we may replace M with a smaller ring provided it contains all the coefficients of the polynomials in question: I_n, A, and the coefficients B_i of the polynomial B. The obvious choice for such a subring is the centralizer Z of A, the subring of all matrices that commute with A; by definition A is in the center of Z.
This centralizer obviously contains I_n and A, but one has to show that it contains the matrices B_i. To do this, one combines the two fundamental relations for adjugates, writing out the adjugate B as a polynomial:
(Σ_{i=0}^{n−1} B_i t^i)(t I_n − A) = (t I_n − A)(Σ_{i=0}^{n−1} B_i t^i).
Equating the coefficients of each t^i shows that for each i, we have A B_i = B_i A as desired. Having found the proper setting in which right-evaluation at A is indeed a homomorphism of rings, one can complete the proof as suggested above:
p(A) = (A − A)·B(A) = 0.
This completes the proof.
A synthesis of the first two proofs
In the first proof, one was able to determine the coefficients B_i of B based on the right-hand fundamental relation for the adjugate only. In fact the first n equations derived can be interpreted as determining the quotient B of the Euclidean division of the polynomial p(t) I_n on the left by the monic polynomial t I_n − A, while the final equation expresses the fact that the remainder is zero. This division is performed in the ring of polynomials with matrix coefficients. Indeed, even over a non-commutative ring, Euclidean division by a monic polynomial P is defined, and always produces a unique quotient and remainder with the same degree condition as in the commutative case, provided it is specified at which side one wishes P to be a factor (here that is to the left).
To see that quotient and remainder are unique (which is the important part of the statement here), it suffices to write the difference of two such decompositions as (q − q′)·P = r′ − r and observe that, since P is monic, (q − q′)·P cannot have a degree less than that of P, unless q = q′.
But the dividend p(t) I_n and divisor t I_n − A used here both lie in the subring (R′)[t], where R′ is the subring of the matrix ring M(n, R) generated by A: the R-linear span of all powers of A. Therefore, the Euclidean division can in fact be performed within that commutative polynomial ring, and of course it then gives the same quotient B and remainder 0 as in the larger ring; in particular this shows that B in fact lies in (R′)[t].
But, in this commutative setting, it is valid to set t to A in the equation
p(t) I_n = (t I_n − A)·B;
in other words, to apply the evaluation map at t = A,
which is a ring homomorphism, giving
p(A) = (A − A)·B(A) = 0,
just like in the second proof, as desired.
In addition to proving the theorem, the above argument tells us that the coefficients B_i of B are polynomials in A, while from the second proof we only knew that they lie in the centralizer Z of A; in general Z is a larger subring than R′, and not necessarily commutative. In particular the constant term B_0 = adj(−A) lies in R′. Since A is an arbitrary square matrix, this proves that adj(A) can always be expressed as a polynomial in A (with coefficients that depend on A).
In fact, the equations found in the first proof allow successively expressing B_{n−1}, …, B_1, B_0 as polynomials in A, which leads to the identity
adj(−A) = Σ_{i=1}^{n} c_i A^{i−1}   (with c_n = 1),
valid for all n × n matrices, where
p(t) = t^n + c_{n−1} t^{n−1} + ⋯ + c_1 t + c_0
is the characteristic polynomial of A.
Note that this identity also implies the statement of the Cayley–Hamilton theorem: one may move adj(−A) to the right hand side, multiply the resulting equation (on the left or on the right) by A, and use the fact that
−A·adj(−A) = det(−A) I_n = c_0 I_n.
A proof using matrices of endomorphisms
As was mentioned above, the matrix p(A) in statement of the theorem is obtained by first evaluating the determinant and then substituting the matrix A for t; doing that substitution into the matrix before evaluating the determinant is not meaningful. Nevertheless, it is possible to give an interpretation where is obtained directly as the value of a certain determinant, but this requires a more complicated setting, one of matrices over a ring in which one can interpret both the entries of , and all of itself. One could take for this the ring of matrices over , where the entry is realised as , and as itself. But considering matrices with matrices as entries might cause confusion with block matrices, which is not intended, as that gives the wrong notion of determinant (recall that the determinant of a matrix is defined as a sum of products of its entries, and in the case of a block matrix this is generally not the same as the corresponding sum of products of its blocks!). It is clearer to distinguish from the endomorphism of an -dimensional vector space V (or free -module if is not a field) defined by it in a basis , and to take matrices over the ring End(V) of all such endomorphisms. Then is a possible matrix entry, while designates the element of whose entry is endomorphism of scalar multiplication by ; similarly will be interpreted as element of . However, since is not a commutative ring, no determinant is defined on ; this can only be done for matrices over a commutative subring of . Now the entries of the matrix all lie in the subring generated by the identity and , which is commutative. Then a determinant map is defined, and evaluates to the value of the characteristic polynomial of at (this holds independently of the relation between and ); the Cayley–Hamilton theorem states that is the null endomorphism.
In this form, the following proof can be obtained from that of (which in fact is the more general statement related to the Nakayama lemma; one takes for the ideal in that proposition the whole ring ). The fact that is the matrix of in the basis means that
One can interpret these as components of one equation in , whose members can be written using the matrix-vector product that is defined as usual, but with individual entries and in being "multiplied" by forming ; this gives:
where is the element whose component is (in other words it is the basis of written as a column of vectors). Writing this equation as
one recognizes the transpose of the matrix considered above, and its determinant (as element of is also p(φ). To derive from this equation that , one left-multiplies by the adjugate matrix of , which is defined in the matrix ring , giving
the associativity of matrix-matrix and matrix-vector multiplication used in the first step is a purely formal property of those operations, independent of the nature of the entries. Now component of this equation says that ; thus vanishes on all , and since these elements generate it follows that , completing the proof.
One additional fact that follows from this proof is that the matrix whose characteristic polynomial is taken need not be identical to the value substituted into that polynomial; it suffices that be an endomorphism of satisfying the initial equations
for some sequence of elements that generate (which space might have smaller dimension than , or in case the ring is not a field it might not be a free module at all).
A bogus "proof":
One persistent elementary but incorrect argument for the theorem is to "simply" take the definition
and substitute for , obtaining
There are many ways to see why this argument is wrong. First, in the Cayley–Hamilton theorem, is an matrix. However, the right hand side of the above equation is the value of a determinant, which is a scalar. So they cannot be equated unless (i.e. is just a scalar). Second, in the expression , the variable λ actually occurs at the diagonal entries of the matrix . To illustrate, consider the characteristic polynomial in the previous example again:
If one substitutes the entire matrix for in those positions, one obtains
in which the "matrix" expression is simply not a valid one. Note, however, that if scalar multiples of identity matrices
instead of scalars are subtracted in the above, i.e. if the substitution is performed as
then the determinant is indeed zero, but the expanded matrix in question does not evaluate to ; nor can its determinant (a scalar) be compared to p(A) (a matrix). So the argument that still does not apply.
Actually, if such an argument holds, it should also hold when other multilinear forms instead of the determinant are used. For instance, if we consider the permanent function and define q(λ) = perm(λ I_n − A), then by the same argument, we should be able to "prove" that q(A) = 0. But this statement is demonstrably wrong: in the 2-dimensional case, for instance, the permanent of a matrix is given by
perm [[a, b], [c, d]] = ad + bc.
So, for the matrix A in the previous example,
q(λ) = perm(λ I_2 − A) = perm [[λ − 1, −2], [−3, λ − 4]] = (λ − 1)(λ − 4) + (−2)(−3) = λ^2 − 5λ + 10.
Yet one can verify that
q(A) = A^2 − 5A + 10 I_2 = 12 I_2 ≠ 0.
One of the proofs for Cayley–Hamilton theorem above bears some similarity to the argument that . By introducing a matrix with non-numeric coefficients, one can actually let live inside a matrix entry, but then is not equal to , and the conclusion is reached differently.
Proofs using methods of abstract algebra
Basic properties of Hasse–Schmidt derivations on the exterior algebra of some R-module M (supposed to be free and of finite rank) have been used to prove the Cayley–Hamilton theorem.
A combinatorial proof
A proof based on developing the Leibniz formula for the characteristic polynomial was given by Straubing and a generalization was given using trace monoid theory of Foata and Cartier.
Abstraction and generalizations
The above proofs show that the Cayley–Hamilton theorem holds for matrices with entries in any commutative ring R, and that p(φ) = 0 will hold whenever φ is an endomorphism of an R-module generated by elements e_1, …, e_n that satisfies
φ(e_j) = Σ_i a_{ij} e_i,
where p is the characteristic polynomial of the matrix (a_{ij}).
This more general version of the theorem is the source of the celebrated Nakayama lemma in commutative algebra and algebraic geometry.
The Cayley–Hamilton theorem also holds for matrices over the quaternions, a noncommutative ring.
See also
Companion matrix
Remarks
Notes
References
"Classroom Note: A Simple Proof of the Leverrier--Faddeev Characteristic Polynomial Algorithm"
External links
A proof from PlanetMath.
The Cayley–Hamilton theorem at MathPages
Theorems in linear algebra
Articles containing proofs
Matrix theory
William Rowan Hamilton | Cayley–Hamilton theorem | Mathematics | 5,794 |
77,746,826 | https://en.wikipedia.org/wiki/Caloglossa | Caloglossa is a genus of algae in the Delesseriaceae.
Description
Caloglossa has thalli that resemble branching leaves. This "exogenous primary branching" differentiates the genus from other members of the Delesseriaceae, other than the closely related genus Taenioma.
Species of Caloglossa are red to brown in color. Each thallus has a conspicuous midrib which is formed by a row of elongated cells. In fresh water, populations spread vegetatively. In brackish water, the plants may reproduce sexually.
Distribution
Caloglossa is a common genus worldwide, and is distributed in littoral zones from tropical to temperate waters. They can grow in habitats of varying salinity, and may be found growing on stones on marine coasts, in brackish estuaries, epiphytically in saltmarsh and mangrove habitat, and in total freshwater areas.
Use
The genus sees use in aquascaping and may be found in the aquarium trade. One species in particular, Caloglossa cf. beccarii, is popular as it exhibits a variety of colors and is easy to cultivate.
Caloglossa beccarii has also been investigated as a potential food item in Thailand. It was found to have insignificant toxicity while providing a potentially rich nutritional benefit.
Taxonomy
Some authors have considered the taxon authority to be Jacob Georg Agardh instead of Georg Matthias von Martens. King & Puttock (1994) argued that Martens did not formally elevate Caloglossa to genus rank in his 1869 publication, preferring to follow Agardh's 1876 treatment instead. The diversity of species within Caloglossa has been heavily studied and subject to much revision. A mix of morphological and DNA analysis has informed researchers on the phylogeny of the genus.
As of 2024, there are 22 species recognized by AlgaeBase.
Caloglossa adhaerens
Caloglossa apicula
Caloglossa apomeiotica
Caloglossa beccarii
Caloglossa bengalensis
Caloglossa confusa
Caloglossa continua
Caloglossa fluviatilis
Caloglossa fonticola
Caloglossa intermedia
Caloglossa kamiyana
Caloglossa leprieurii
Caloglossa manaticola
Caloglossa monosticha
Caloglossa ogasawaraensis
Caloglossa postiae
Caloglossa rotundata
Caloglossa ruetzleri
Caloglossa saigonensis
Caloglossa stipitata
Caloglossa triclada
Caloglossa vieillardii
References
Delesseriaceae
Edible algae
Red algae genera
Aquarium plants
Taxa described in 1869 | Caloglossa | Biology | 563 |
44,358,091 | https://en.wikipedia.org/wiki/IEC%2062682 | IEC 62682 is a technical standard titled Management of alarms systems for the process industries.
Scope
The standard specifies principles and processes for the management of alarm systems based on distributed control systems and computer-based Human-Machine Interface (HMI) technology for the process industries. It covers alarms from all systems presented to the operator, which can include basic process control systems, annunciator panels, safety instrumented systems, fire and gas systems, and emergency response systems. The practices are applicable to continuous, batch, and discrete processes. The process industry sector includes many types of manufacturing processes, such as refineries, petrochemical, chemical, pharmaceutical, pulp and paper, and power.
Standard
The standard addresses all lifecycle phases (development, design, installation, and operation) for alarm management in the process industries. The standard defines the terminology and work processes recommended to effectively maintain an alarm system throughout the lifecycle. The standard was written as an extension of the existing ISA 18.2-2009 standard which utilized numerous industry alarm management guidance documents in its development such as EEMUA 191. Ineffective alarm systems have often been cited as contributing factors in the investigation reports following major process incidents. The standard is intended to provide a methodology that will result in the improved safety of the process industries.
See also
Butterfleye
External links
IEC 62682
Alarm Network
Safety
Alarms
Industrial processes
Electrical standards | IEC 62682 | Physics,Technology | 282 |
11,253,941 | https://en.wikipedia.org/wiki/Flight%20feather | Flight feathers (Pennae volatus) are the long, stiff, asymmetrically shaped, but symmetrically paired pennaceous feathers on the wings or tail of a bird; those on the wings are called remiges (), singular remex (), while those on the tail are called rectrices ( or ), singular rectrix (). The primary function of the flight feathers is to aid in the generation of both thrust and lift, thereby enabling flight. The flight feathers of some birds perform additional functions, generally associated with territorial displays, courtship rituals or feeding methods. In some species, these feathers have developed into long showy plumes used in visual courtship displays, while in others they create a sound during display flights. Tiny serrations on the leading edge of their remiges help owls to fly silently (and therefore hunt more successfully), while the extra-stiff rectrices of woodpeckers help them to brace against tree trunks as they hammer on them. Even flightless birds still retain flight feathers, though sometimes in radically modified forms.
The remiges are divided into primary and secondary feathers based on their position along the wing. There are typically 11 primaries attached to the manus (six attached to the metacarpus and five to the phalanges), but the outermost primary, called the remicle, is often rudimentary or absent; certain birds, notably the flamingos, grebes, and storks, have seven primaries attached to the metacarpus and 12 in all. Secondary feathers are attached to the ulna. The fifth secondary remex (numbered inwards from the carpal joint) was formerly thought to be absent in some species, but the modern view of this diastataxy is that there is a gap between the fourth and fifth secondaries. Tertiary feathers growing upon the adjoining portion of the brachium are not considered true remiges.
The moult of their flight feathers can cause serious problems for birds, as it can impair their ability to fly. Different species have evolved different strategies for coping with this, ranging from dropping all their flight feathers at once (and thus becoming flightless for some relatively short period of time) to extending the moult over a period of several years.
Remiges
Remiges (from the Latin for "oarsman") are located on the posterior side of the wing. Ligaments attach the long calami (quills) firmly to the wing bones, and a thick, strong band of tendinous tissue known as the postpatagium helps to hold and support the remiges in place. Corresponding remiges on individual birds are symmetrical between the two wings, matching to a large extent in size and shape (except in the case of mutation or damage), though not necessarily in the pattern. They are given different names depending on their position along the wing.
Primaries
Primaries are connected to the manus (the bird's "hand", composed of carpometacarpus and phalanges); these are the longest and narrowest of the remiges (particularly those attached to the phalanges), and they can be individually rotated. These feathers are especially important for flapping flight, as they are the principal source of thrust, moving the bird forward through the air. The mechanical properties of primaries are important in supporting flight. Most thrust is generated on the downstroke of flapping flight. However, on the upstroke (when the bird often draws its wing in close to its body), the primaries are separated and rotated, reducing air resistance while still helping to provide some thrust. The flexibility of the remiges on the wingtips of large soaring birds also allows for the spreading of those feathers, which helps to reduce the creation of wingtip vortices, thereby reducing drag. The barbules on these feathers, friction barbules, are specialized with large lobular barbicels that help grip and prevent slippage of overlying feathers and are present in most of the flying birds.
Species vary somewhat in the number of primaries they possess. The number in non-passerines generally varies between nine and 11, but grebes, storks and flamingos have 12, and ostriches have 16. While most modern passerines have ten primaries, some have only nine. Those with nine are missing the most distal primary (sometimes called the remicle) which is typically very small and sometimes rudimentary in passerines.
The outermost primaries—those connected to the phalanges—are sometimes known as pinions.
Secondaries
Secondaries are connected to the ulna. In some species, the ligaments that bind these remiges to the bone connect to small, rounded projections, known as quill knobs, on the ulna; in other species, no such knobs exist. Secondary feathers remain close together in flight (they cannot be individually separated like the primaries can) and help to provide lift by creating the airfoil shape of the bird's wing. Secondaries tend to be shorter and broader than primaries, with blunter ends (see illustration). They vary in number from six in hummingbirds to as many as 40 in some species of albatross. In general, larger and longer-winged species have a larger number of secondaries.
Birds in more than 40 non-passerine families seem to be missing the fifth secondary feather on each wing, a state known as diastataxis (those that do have the fifth secondary are said to be eutaxic). In these birds, the fifth set of secondary covert feathers does not cover any remiges, possibly due to a twisting of the feather papillae during embryonic development. Loons, grebes, pelicans, hawks and eagles, cranes, sandpipers, gulls, parrots, and owls are among the families missing this feather.
Tertials
Tertials arise in the brachial region and are not considered true remiges as they are not supported by attachment to the corresponding bone, in this case the humerus. These elongated "true" tertials act as a protective cover for all or part of the folded primaries and secondaries, and do not qualify as flight feathers as such. However, many authorities use the term tertials to refer to the shorter, more symmetrical innermost secondaries of passerines (arising from the olecranon and performing the same function as true tertials) in an effort to distinguish them from other secondaries. The term humeral is sometimes used for birds such as the albatrosses and pelicans that have a long humerus.
Tectrices
The calami of the flight feathers are protected by a layer of non-flight feathers called covert feathers or tectrices (singular tectrix), at least one layer of them both above and beneath the flight feathers of the wings as well as above and below the rectrices of the tail. These feathers may vary widely in size – in fact, the upper tail tectrices of the male peafowl, rather than its rectrices, are what constitute its elaborate and colorful "train".
Emargination
The outermost primaries of large soaring birds, particularly raptors, often show a pronounced narrowing at some variable distance along the feather edges. These narrowings are called either notches or emarginations depending on the degree of their slope. An emargination is a gradual change, and can be found on either side of the feather. A notch is an abrupt change, and is only found on the wider trailing edge of the remex. (Both are visible on the primary in the photo showing the feathers; they can be found about halfway along both sides of the left hand feather—a shallow notch on the left, and a gradual emargination on the right.) The presence of notches and emarginations creates gaps at the wingtip; air is forced through these gaps, increasing the generation of lift.
Alula
Feathers on the alula or bastard wing are not generally considered to be flight feathers in the strict sense; though they are asymmetrical, they lack the length and stiffness of most true flight feathers. However, alula feathers are definitely an aid to slow flight. These feathers—which are attached to the bird's "thumb" and normally lie flush against the anterior edge of the wing—function in the same way as the slats on an airplane wing, allowing the wing to achieve a higher than normal angle of attack – and thus lift – without resulting in a stall. By manipulating its thumb to create a gap between the alula and the rest of the wing, a bird can avoid stalling when flying at low speeds or landing.
Delayed development in hoatzins
The development of the remiges (and alulae) of nestling hoatzins is much delayed compared to the development of these feathers in other young birds, presumably because young hoatzins are equipped with claws on their first two digits. They use these small rounded hooks to grasp branches when clambering about in trees, and feathering on these digits would presumably interfere with that functionality. Most youngsters shed their claws sometime between their 70th and 100th day of life, but some retain them—though callused-over and unusable—into adulthood.
Rectrices
Rectrices (singular rectrix) from the Latin word for "helmsman", help the bird to brake and steer in flight. These feathers lie in a single horizontal row on the rear margin of the anatomic tail. Only the central pair are attached (via ligaments) to the tail bones; the remaining rectrices are embedded into the rectricial bulbs, complex structures of fat and muscle that surround those bones. Rectrices are always paired, with a vast majority of species having six pairs. They are absent in grebes and some ratites, and greatly reduced in size in penguins. Many grouse species have more than 12 rectrices. In some species (including ruffed grouse, hazel grouse and common snipe), the number varies among individuals. Domestic pigeons have a highly variable number as a result of changes brought about over centuries of selective breeding.
Numbering conventions
In order to make the discussion of such topics as moult processes or body structure easier, ornithologists assign a number to each flight feather. By convention, the numbers assigned to primary feathers always start with the letter P (P1, P2, P3, etc.), those of secondaries with the letter S, those of tertials with T and those of rectrices with R.
Most authorities number the primaries descendantly, starting from the innermost primary (the one closest to the secondaries) and working outwards; others number them ascendantly, from the most distal primary inwards. There are some advantages to each method. Descendant numbering follows the normal sequence of most birds' primary moult. In the event that a species is missing the small distal tenth primary, as some passerines are, its lack does not impact the numbering of the remaining primaries. Ascendant numbering, on the other hand, allows for uniformity in the numbering of non-passerine primaries, as they almost invariably have four attached to the manus regardless of how many primaries they have overall. This method is particularly useful for indicating wing formulae, as the outermost primary is the one with which the measurements begin.
Secondaries are always numbered ascendantly, starting with the outermost secondary (the one closest to the primaries) and working inwards. Tertials are also numbered ascendantly, but in this case, the numbers continue on consecutively from that given to the last secondary (e.g. ... S5, S6, T7, T8, ... etc.).
Rectrices are always numbered from the centermost pair outwards in both directions.
Specialized flight feathers
The flight feathers of some species provide additional functionality. In some species, for example, either remiges or rectrices make a sound during flight. These sounds are most often associated with courtship or territorial displays. The outer primaries of male broad-tailed hummingbirds produce a distinctive high-pitched trill, both in direct flight and in power-dives during courtship displays; this trill is diminished when the outer primaries are worn, and absent when those feathers have been moulted. During the northern lapwing's zigzagging display flight, the bird's outer primaries produce a humming sound. The outer primaries of the male American woodcock are shorter and slightly narrower than those of the female, and are likely the source of the whistling and twittering sounds made during his courtship display flights. Male club-winged manakins use modified secondaries to make a clear trilling courtship call. A curve-tipped secondary on each wing is dragged against an adjacent ridged secondary at high speeds (as many as 110 times per second—slightly faster than a hummingbird's wingbeat) to create a stridulation much like that produced by some insects. Both Wilson's and common snipe have modified outer tail feathers which make noise when they are spread during the birds' roller coaster display flights; as the bird dives, wind flows through the modified feathers and creates a series of rising and falling notes, which is known as "winnowing". Differences between the sounds produced by these two former conspecific subspecies—and the fact that the outer two pairs of rectrices in Wilson's snipe are modified, while only the single outermost pair are modified in common snipe—were among the characteristics used to justify their splitting into two distinct and separate species.
Flight feathers are also used by some species in visual displays. Male standard-winged and pennant-winged nightjars have modified P2 primaries (using the descendant numbering scheme explained above) which are displayed during their courtship rituals. In the standard-winged nightjar, this modified primary consists of an extremely long shaft with a small "pennant" (actually a large web of barbules) at the tip. In the pennant-winged nightjar, the P2 primary is an extremely long (but otherwise normal) feather, while P3, P4 and P5 are successively shorter; the overall effect is a broadly forked wingtip with a very long plume beyond the lower half of the fork.
Males of many species, ranging from the widely introduced ring-necked pheasant to Africa's many whydahs, have one or more elongated pairs of rectrices, which play an often-critical role in their courtship rituals. The outermost pair of rectrices in male lyrebirds are extremely long and strongly curved at the ends. These plumes are raised up over the bird's head (along with a fine spray of modified uppertail coverts) during his extraordinary display. Rectrix modification reaches its pinnacle among the birds of paradise, which display an assortment of often bizarrely modified feathers, ranging from the extremely long plumes of the ribbon-tailed astrapia (nearly three times the length of the bird itself) to the dramatically coiled twin plumes of the magnificent bird-of-paradise.
Owls have remiges which are serrated rather than smooth on the leading edge. This adaptation disrupts the flow of air over the wings, eliminating the noise that airflow over a smooth surface normally creates, and allowing the birds to fly and hunt silently.
The rectrices of woodpeckers are proportionately short and very stiff, allowing them to better brace themselves against tree trunks while feeding. This adaptation is also found, though to a lesser extent, in some other species that feed along tree trunks, including treecreepers and woodcreepers.
Scientists have not yet determined the function of all flight feather modifications. Male swallows in the genera Psalidoprocne and Stelgidopteryx have tiny recurved hooks on the leading edges of their outer primaries, but the function of these hooks is not yet known; some authorities suggest they may produce a sound during territorial or courtship displays.
Vestigiality in flightless birds
Over time, a small number of bird species have lost their ability to fly. Some of these, such as the steamer ducks, show no appreciable changes in their flight feathers. Some, such as the Titicaca grebe and a number of the flightless rails, have a reduced number of primaries.
The remiges of ratites are soft and downy; they lack the interlocking hooks and barbules that help to stiffen the flight feathers of other birds. In addition, the emu's remiges are proportionately much reduced in size, while those of the cassowaries are reduced both in number and structure, consisting merely of five to six bare quills. Most ratites have completely lost their rectrices; only the ostrich still has them.
Penguins have lost their differentiated flight feathers. As adults, their wings and tail are covered with the same small, stiff, slightly curved feathers as are found on the rest of their bodies.
The ground-dwelling kākāpō, which is the world's only flightless parrot, has remiges which are shorter, rounder and more symmetrically vaned than those of parrots capable of flight; these flight feathers also contain fewer interlocking barbules near their tips.
Moult
Once they have finished growing, feathers are essentially dead structures. Over time, they become worn and abraded, and need to be replaced. This replacement process is known as moult (molt in the United States). The loss of wing and tail feathers can affect a bird's ability to fly (sometimes dramatically) and in certain families can impair the ability to feed or perform courtship displays. The timing and progression of flight feather moult therefore varies among families.
For most birds, moult begins at a certain specific point, called a focus (plural foci), on the wing or tail and proceeds in a sequential manner in one or both directions from there. For example, most passerines have a focus between the innermost primary (P1, using the numbering scheme explained above) and outermost secondary (S1), and a focus point in the middle of the center pair of rectrices. As passerine moult begins, the two feathers closest to the focus are the first to drop. When replacement feathers reach roughly half of their eventual length, the next feathers in line (P2 and S2 on the wing, and both R2s on the tail) are dropped. This pattern of drop and replacement continues until moult reaches either end of the wing or tail. The speed of the moult can vary somewhat within a species. Some passerines that breed in the Arctic, for example, drop many more flight feathers at once (sometimes becoming briefly flightless) in order to complete their entire wing moult prior to migrating south, while those same species breeding at lower latitudes undergo a more protracted moult.
In many species, there is more than one focus along the wing. Here, moult begins at all foci simultaneously, but generally proceeds only in one direction. Most grouse, for example, have two wing foci: one at the wingtip, the other between feathers P1 and S1. In this case, moult proceeds descendantly from both foci. Many large, long-winged birds have multiple wing foci.
Birds that are heavily "wing-loaded"—that is, heavy-bodied birds with relatively short wings—have great difficulty flying with the loss of even a few flight feathers. A protracted moult like the one described above would leave them vulnerable to predators for a sizeable portion of the year. Instead, these birds lose all their flight feathers at once. This leaves them completely flightless for a period of three to four weeks, but means their overall period of vulnerability is significantly shorter than it would otherwise be. Eleven families of birds, including loons, grebes and most waterfowl, have this moult strategy.
The cuckoos show what is called saltatory or transilient wing moults. In simple forms, this involves the moulting and replacement of odd-numbered primaries and then the even-numbered primaries. There are however complex variations with differences based on life history.
Arboreal woodpeckers, which depend on their tails—particularly the strong central pair of rectrices—for support while they feed, have a unique tail moult. Rather than moulting their central tail feathers first, as most birds do, they retain these feathers until last. Instead, the second pair of rectrices (both R2 feathers) are the first to drop. (In some species in the genera Celeus and Dendropicos, the third pair is the first dropped.) The pattern of feather drop and replacement proceeds as described for passerines (above) until all other rectrices have been replaced; only then are the central tail rectrices moulted. This provides some protection to the growing feathers, since they're always covered by at least one existing feather, and also ensures that the bird's newly strengthened tail is best able to cope with the loss of the crucial central rectrices. Ground-feeding woodpeckers, such as the wrynecks, do not have this modified moult strategy; in fact, wrynecks moult their outer tail feathers first, with moult proceeding proximally from there.
Age differences in flight feathers
There are often substantial differences between the remiges and rectrices of adults and juveniles of the same species. Because all juvenile feathers are grown at once—a tremendous energy burden to the developing bird—they are softer and of poorer quality than the equivalent feathers of adults, which are moulted over a longer period of time (as long as several years in some cases). As a result, they wear more quickly.
As feathers grow at variable rates, these variations lead to visible dark and light bands in the fully formed feather. These growth bars and their widths have been used to determine the daily nutritional status of birds. Each light and dark bar correspond to around 24 hours and the use of this technique has been called ptilochronology (analogous to dendrochronology).
In general, juveniles have feathers which are narrower and more sharply pointed at the tip. This can be particularly visible when the bird is in flight, especially in the case of raptors. The trailing edge of the wing of a juvenile bird can appear almost serrated, due to the feathers' sharp tips, while that of an older bird will be straighter-edged. The flight feathers of a juvenile bird will also be uniform in length, since they all grew at the same time. Those of adults will be of various lengths and levels of wear, since each is moulted at a different time.
The flight feathers of adults and juveniles can differ considerably in length, particularly among the raptors. Juveniles tend to have slightly longer rectrices and shorter, broader wings (with shorter outer primaries, and longer inner primaries and secondaries) than do adults of the same species. However, there are many exceptions. In longer-tailed species, such as swallow-tailed kite, secretary bird and European honey buzzard, for example, juveniles have shorter rectrices than adults do. Juveniles of some Buteo buzzards have narrower wings than adults do, while those of large juvenile falcons are longer. It is theorized that the differences help young birds compensate for their inexperience, weaker flight muscles and poorer flying ability.
Wing formula
A wing formula describes the shape of distal end of a bird's wing in a mathematical way. It can be used to help distinguish between species with similar plumages, and thus is particularly useful for those who ring (band) birds.
To determine a bird's wing formula, the distance between the tip of the most distal primary and the tip of its greater covert (the longest of the feathers that cover and protect the shaft of that primary) is measured in millimeters. In some cases, this results in a positive number (i.e., the primary extends beyond its greater covert), while in other cases it is a negative number (i.e., the primary is completely covered by the greater covert, as happens in some passerine species). Next, the longest primary feather is identified, and the differences between the length of that primary and that of all remaining primaries and of the longest secondary are also measured, again in millimeters. If any primary shows a notch or emargination, this is noted, and the distance between the feather's tip and any notch is measured, as is the depth of the notch. All distance measurements are made with the bird's wing closed, so as to maintain the relative positions of the feathers.
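These measurements can be collected into a simple structured record. The sketch below is a minimal illustration in Python, with hypothetical feather labels and measurement values; it is not a standardized ringing-scheme format.

```python
from dataclasses import dataclass, field

@dataclass
class WingFormula:
    # mm by which the most distal primary extends beyond its greater
    # covert (negative if the covert completely covers the primary)
    distal_primary_vs_covert_mm: int
    longest_primary: str                      # e.g. "P8"
    # shortfall (mm) of each remaining feather relative to the longest primary
    shortfall_mm: dict = field(default_factory=dict)
    # feather -> (distance from tip to notch, notch depth), both in mm
    notches_mm: dict = field(default_factory=dict)

# Hypothetical measurements for a single bird, taken with the wing closed:
wf = WingFormula(
    distal_primary_vs_covert_mm=-2,           # primary hidden by its covert
    longest_primary="P8",
    shortfall_mm={"P9": 4, "P7": 1, "P6": 5, "longest secondary": 14},
    notches_mm={"P9": (18, 3)},               # notched outer primary
)
print(wf)
```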
While there can be considerable variation across members of a species—and while the results are obviously impacted by the effects of moult and feather regeneration—even very closely related species show clear differences in their wing formulas.
Primary extension
The distance that a bird's longest primaries extend beyond its longest secondaries (or tertials) when its wings are folded is referred to as the primary extension or primary projection. As with wing formulae, this measurement is useful for distinguishing between similarly plumaged birds; however, unlike wing formulae, it is not necessary to have the bird in-hand to make the measurement. Rather, this is a useful relative measurement—some species have long primary extensions, while others have shorter ones. Among the Empidonax flycatchers of the Americas, for example, the dusky flycatcher has a much shorter primary extension than does the very similarly plumaged Hammond's flycatcher. Europe's common skylark has a long primary projection, while that of the near-lookalike Oriental skylark is very short.
As a general rule, species which are long-distance migrants will have longer primary projection than similar species which do not migrate or migrate shorter distances.
See also
Bird anatomy
Bird flight
Drumming (snipe)
Pinioning
Plumage
Delayed feathering in chickens
Notes
References
External links
Wing Feathers – US Fish and Wildlife Service document. Contains excellent photographic examples of emargination and notching in raptor remiges.
Video of feeding Magellanic woodpecker (Campephilus magellanicus). Shows use of rectrices for bracing.
Video of singing male superb lyrebird (Menura novaehollandiae). Shows long modified rectrices which are used in display (though the video doesn't show the full display).
Video of male club-winged manakin (Machaeropterus deliciosus). Shows use of secondary remiges to produce sound.
Cornell Laboratory of Ornithology's American woodcock (Scolopax minor) recordings #94216 has a good example of the sounds made by remiges during courtship display flight, starting at about 2:32.
Sound made by rectrices in courtship flight of common snipe (Gallinago gallinago)
Birds
Feathers
Bird flight | Flight feather | Biology | 5,617 |
1,208,353 | https://en.wikipedia.org/wiki/Meridian%20%28Chinese%20medicine%29 | The meridian system (, also called channel network) is a pseudoscientific concept from traditional Chinese medicine (TCM) that alleges meridians are paths through which the life-energy known as "qi" (ch'i) flows.
Meridians are not real anatomical structures: scientists have found no evidence that supports their existence. One historian of medicine in China says that the term is "completely unsuitable and misguided, but nonetheless it has become a standard translation." Major proponents of their existence have not come to any consensus as to how they might work or be tested in a scientific context.
History
The concept of meridians is first attested in two works recovered from the Mawangdui and Zhangjiashan tombs of the Han-era Changsha Kingdom, the Cauterization Canon of the Eleven Foot and Arm Channels (Zúbì Shíyī Mài Jiǔjīng) and the Cauterization Canon of the Eleven Yin and Yang Channels (Yīnyáng Shíyī Mài Jiǔjīng). In these texts, the meridians are referred to as mài rather than jīngmài.
Main concepts
The meridian network is typically divided into two categories, the jingmai or meridian channels and the luomai or associated vessels (sometimes called "collaterals"). The jingmai contain the 12 tendinomuscular meridians, the 12 divergent meridians, the 12 principal meridians, and the eight extraordinary vessels, as well as the Huato channel, a set of bilateral points on the lower back whose discovery is attributed to the ancient physician Hua Tuo. The collaterals contain 15 major arteries that connect the 12 principal meridians in various ways, in addition to the interaction with their associated internal Zang Fu (臟腑) organs and other related internal structures. The collateral system also incorporates a branching expanse of capillary-like vessels which spread throughout the body, namely in the 12 cutaneous regions, as well as emanating from each point on the principal meridians. If one counts the number of unique points on each meridian, the total comes to 361, matching the number of days in a year in the lunar calendar system. Note that this method ignores the fact that the bulk of acupoints are bilateral, making the actual total 670.
There are about 400 acupuncture points (not counting bilateral points twice), most of which are situated along the major 20 pathways (i.e. 12 primary and eight extraordinary channels). However, by the second century AD, 649 acupuncture points were recognized in China (reckoned by counting bilateral points twice). There are "12 Principal Meridians", each of which corresponds to either a hollow or a solid organ, interacts with it, and extends along a particular extremity (i.e. an arm or leg). There are also "Eight Extraordinary Channels", two of which have their own sets of points, with the remaining ones connecting points on other channels.
12 standard meridians
The 12 standard meridians, also called Principal Meridians, are divided into Yin and Yang groups. The Yin meridians of the arm are the Lung, Heart, and Pericardium. The Yang meridians of the arm are the Large Intestine, Small Intestine, and Triple Burner. The Yin Meridians of the leg are the Spleen, Kidney, and Liver. The Yang meridians of the leg are Stomach, Bladder, and Gall Bladder.
The table below gives a more systematic list of the 12 standard meridians:

   Arm, Yin:  Lung, Heart, Pericardium
   Arm, Yang: Large Intestine, Small Intestine, Triple Burner
   Leg, Yin:  Spleen, Kidney, Liver
   Leg, Yang: Stomach, Bladder, Gall Bladder
Eight extraordinary meridians
The eight extraordinary meridians are of pivotal importance to the study of Traditional Chinese medicine that incorporates the modalities and practices of Qigong, Taijiquan and Chinese alchemy. These eight extra meridians differ from the standard twelve organ meridians in that they are considered to be storage vessels likened to oceans, fields, or reservoirs of energy that are not associated directly with the Zang Fu, i.e. internal organs but have a general influence upon them. Within Traditional Chinese medicine they are thought to bring about large functional and physiological changes within clinical practice. These channels were studied in the "Spiritual Axis" chapters 17, 21 and 62, the "Classic of Difficulties" chapters 27, 28 and 29 and the "Study of the 8 Extraordinary vessels" (Qi Jing Ba Mai Kao), written in 1578.
The eight extraordinary vessels are:
Conception Vessel (Ren Mai)
Governing Vessel (Du Mai)
Penetrating Vessel (Chong Mai)
Girdle Vessel (Dai Mai)
Yin Linking Vessel (Yin Wei Mai)
Yang Linking Vessel (Yang Wei Mai)
Yin Heel Vessel (Yin Qiao Mai)
Yang Heel Vessel (Yang Qiao Mai)
Scientific view of meridian theory
Scientists have found no evidence that supports their existence. The historian of medicine in China Paul U. Unschuld adds that there "is no evidence of a concept of 'energy' -- either in the strictly physical sense or even in the more colloquial sense -- anywhere in Chinese medical theory."
Some advocates of traditional Chinese medicine believe that meridians function as electrical conduits based on observations that the electrical impedance of a current through meridians is lower than other areas of the body. A 2008 review of studies found that the studies were of poor quality and could not support the claims.
Some proponents of the Primo Vascular System propose that the putative primo vessels, very thin (less than 30 μm wide) conduits found in many mammals, may be a factor explaining some of the suggested effects of the meridian system.
According to Steven Novella, neurologist involved in the Skeptical movement, "there is no evidence that the meridians actually exist. At the risk of sounding redundant, they are as made up and fictional as the ether, phlogiston, Bigfoot, and unicorns."
The National Council Against Health Fraud concluded that "[t]he meridians are imaginary; their locations do not relate to internal organs, and therefore do not relate to human anatomy."
See also
Acupuncture point
Chakra
List of acupuncture points
Marma adi
Nadi (yoga)
Pressure points
Glossary of alternative medicine
References
Acupuncture
Qigong
Vitalism
Pseudoscience | Meridian (Chinese medicine) | Biology | 1,277 |
23,240,139 | https://en.wikipedia.org/wiki/Phi%20coefficient | In statistics, the phi coefficient (or mean square contingency coefficient and denoted by φ or rφ) is a measure of association for two binary variables.
In machine learning, it is known as the Matthews correlation coefficient (MCC) and used as a measure of the quality of binary (two-class) classifications, introduced by biochemist Brian W. Matthews in 1975.
Introduced by Karl Pearson, and also known as the Yule phi coefficient from its introduction by Udny Yule in 1912, this measure is similar to the Pearson correlation coefficient in its interpretation.
Definition
A Pearson correlation coefficient estimated for two binary variables will return the phi coefficient.
Two binary variables are considered positively associated if most of the data falls along the diagonal cells. In contrast, two binary variables are considered negatively associated if most of the data falls off the diagonal.
If we have a 2×2 table for two random variables x and y:

            y = 1    y = 0    total
   x = 1     n11      n10      n1.
   x = 0     n01      n00      n0.
   total     n.1      n.0      n

where n11, n10, n01, n00 are non-negative counts of numbers of observations that sum to n, the total number of observations, and n1., n0., n.1, n.0 are the row and column totals. The phi coefficient that describes the association of x and y is

\[ \varphi = \frac{n_{11}\,n_{00} - n_{10}\,n_{01}}{\sqrt{n_{1\bullet}\,n_{0\bullet}\,n_{\bullet 1}\,n_{\bullet 0}}} \]
Phi is related to the point-biserial correlation coefficient and Cohen's d and estimates the extent of the relationship between two variables (2×2).
The phi coefficient can also be expressed using only \( n \), \( n_{11} \), \( n_{1\bullet} \), and \( n_{\bullet 1} \), as

\[ \varphi = \frac{n\,n_{11} - n_{1\bullet}\,n_{\bullet 1}}{\sqrt{n_{1\bullet}\,n_{\bullet 1}\,(n - n_{1\bullet})(n - n_{\bullet 1})}} \]
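For concreteness, here is a minimal Python sketch of the cell-count formula; the example counts are illustrative only, not data from the sources cited here.

```python
import math

def phi_coefficient(n11, n10, n01, n00):
    """Phi coefficient from the four cells of a 2x2 table.

    n11: x=1,y=1   n10: x=1,y=0   n01: x=0,y=1   n00: x=0,y=0
    """
    row1, row0 = n11 + n10, n01 + n00        # marginals over x
    col1, col0 = n11 + n01, n10 + n00        # marginals over y
    denom = math.sqrt(row1 * row0 * col1 * col0)
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

print(phi_coefficient(20, 5, 10, 15))  # ~0.408, a positive association
```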
Maximum values
Although computationally the Pearson correlation coefficient reduces to the phi coefficient in the 2×2 case, they are not in general the same. The Pearson correlation coefficient ranges from −1 to +1, where ±1 indicates perfect agreement or disagreement, and 0 indicates no relationship. The phi coefficient has a maximum value that is determined by the distribution of the two variables if one or both variables can take on more than two values. See Davenport and El-Sanhury (1991) for a thorough discussion.
Machine learning
The MCC is defined identically to the phi coefficient, introduced by Karl Pearson and also known as the Yule phi coefficient from its introduction by Udny Yule in 1912. Despite these antecedents, which predate Matthews's use by several decades, the term MCC is widely used in the fields of bioinformatics and machine learning.
The coefficient takes into account true and false positives and negatives and is generally regarded as a balanced measure which can be used even if the classes are of very different sizes. The MCC is in essence a correlation coefficient between the observed and predicted binary classifications; it returns a value between −1 and +1. A coefficient of +1 represents a perfect prediction, 0 no better than random prediction and −1 indicates total disagreement between prediction and observation. However, if MCC equals neither −1, 0, or +1, it is not a reliable indicator of how similar a predictor is to random guessing because MCC is dependent on the dataset. MCC is closely related to the chi-square statistic for a 2×2 contingency table:

\[ |\mathrm{MCC}| = \sqrt{\frac{\chi^2}{n}} \]

where n is the total number of observations.
While there is no perfect way of describing the confusion matrix of true and false positives and negatives by a single number, the Matthews correlation coefficient is generally regarded as being one of the best such measures. Other measures, such as the proportion of correct predictions (also termed accuracy), are not useful when the two classes are of very different sizes. For example, assigning every object to the larger set achieves a high proportion of correct predictions, but is not generally a useful classification.
The MCC can be calculated directly from the confusion matrix using the formula:

\[ \mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}} \]
In this equation, TP is the number of true positives, TN the number of true negatives, FP the number of false positives and FN the number of false negatives. If exactly one of the four sums in the denominator is zero, the denominator can be arbitrarily set to one; this results in a Matthews correlation coefficient of zero, which can be shown to be the correct limiting value. In case two or more sums are zero (e.g. both labels and model predictions are all positive or negative), the limit does not exist.
The MCC can also be calculated with the formula:

\[ \mathrm{MCC} = \sqrt{PPV \times TPR \times TNR \times NPV} - \sqrt{FDR \times FNR \times FPR \times FOR} \]

using the positive predictive value (PPV), the true positive rate (TPR), the true negative rate (TNR), the negative predictive value (NPV), the false discovery rate (FDR), the false negative rate (FNR), the false positive rate (FPR), and the false omission rate (FOR).
The original formula as given by Matthews was:

\[ N = TN + TP + FN + FP \]
\[ S = \frac{TP + FN}{N} \qquad P = \frac{TP + FP}{N} \]
\[ \mathrm{MCC} = \frac{TP/N - S \times P}{\sqrt{P\,S\,(1 - S)(1 - P)}} \]
This is equal to the formula given above. As a correlation coefficient, the Matthews correlation coefficient is the geometric mean of the regression coefficients of the problem and its dual. The component regression coefficients of the Matthews correlation coefficient are Markedness (Δp) and Youden's J statistic (Informedness or Δp'). Markedness and Informedness correspond to different directions of information flow and generalize Youden's J statistic, the p statistics, while their geometric mean generalizes the Matthews Correlation Coefficient to more than two classes.
Some scientists claim the Matthews correlation coefficient to be the most informative single score to establish the quality of a binary classifier prediction in a confusion matrix context.
Example
Given a sample of 12 pictures, 8 of cats and 4 of dogs, where cats belong to class 1 and dogs belong to class 0,
actual = [1,1,1,1,1,1,1,1,0,0,0,0],
assume that a classifier that distinguishes between cats and dogs is trained, and we take the 12 pictures and run them through the classifier, and the classifier makes 9 accurate predictions and misses 3: 2 cats wrongly predicted as dogs (first 2 predictions) and 1 dog wrongly predicted as a cat (last prediction).
prediction = [0,0,1,1,1,1,1,1,0,0,0,1]
With these two labelled sets (actual and predictions) we can create a confusion matrix that will summarize the results of testing the classifier:

                 Predicted cat   Predicted dog
   Actual cat          6               2
   Actual dog          1               3
In this confusion matrix, of the 8 cat pictures, the system judged that 2 were dogs, and of the 4 dog pictures, it predicted that 1 was a cat. All correct predictions are located in the diagonal of the table (highlighted in bold), so it is easy to visually inspect the table for prediction errors, as they will be represented by values outside the diagonal.
In abstract terms, the confusion matrix is as follows:

                 Predicted P   Predicted N
   Actual P          TP             FN
   Actual N          FP             TN
where P = Positive; N = Negative; TP = True Positive; FP = False Positive; TN = True Negative; FN = False Negative.
Plugging the numbers (TP = 6, FN = 2, FP = 1, TN = 3) into the formula:

\[ \mathrm{MCC} = \frac{6 \times 3 - 1 \times 2}{\sqrt{(6+1)(6+2)(3+1)(3+2)}} = \frac{16}{\sqrt{1120}} \approx 0.478 \]
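The same computation in a short, self-contained Python sketch (plain Python, no external libraries; the zero-denominator guard follows the convention described above):

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: if any factor is zero, treat the denominator as 1,
    # giving MCC = 0 (the correct limiting value).
    return (tp * tn - fp * fn) / denom if denom else 0.0

actual     = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
prediction = [0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, prediction))
fn = sum(a == 1 and p == 0 for a, p in zip(actual, prediction))
fp = sum(a == 0 and p == 1 for a, p in zip(actual, prediction))
tn = sum(a == 0 and p == 0 for a, p in zip(actual, prediction))

print(mcc(tp, fp, tn, fn))  # ~0.478
```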
Confusion matrix
Let us define an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows:

                 Predicted P   Predicted N   Total
   Actual P          TP             FN         P
   Actual N          FP             TN         N
Multiclass case
The Matthews correlation coefficient has been generalized to the multiclass case. The generalization, called the \( R_K \) statistic (for K different classes), is defined in terms of a \( K \times K \) confusion matrix \( C \), where \( C_{ij} \) is the number of samples with true class i that were predicted as class j.
When there are more than two labels the MCC will no longer range between −1 and +1. Instead the minimum value will be between −1 and 0 depending on the true distribution. The maximum value is always +1.
This formula can be more easily understood by defining intermediate variables:

\( t_k = \sum_{j} C_{kj} \), the number of times class k truly occurred,
\( p_k = \sum_{i} C_{ik} \), the number of times class k was predicted,
\( c = \sum_{k} C_{kk} \), the total number of samples correctly predicted,
\( s = \sum_{i} \sum_{j} C_{ij} \), the total number of samples.

This allows the formula to be expressed as:

\[ \mathrm{MCC} = \frac{c \times s - \sum_k p_k \times t_k}{\sqrt{\left(s^2 - \sum_k p_k^2\right)\left(s^2 - \sum_k t_k^2\right)}} \]
Using the above formula to compute the MCC for the cat and dog example discussed above, treating the 2×2 confusion matrix as a multiclass example: t = (8, 4), p = (7, 5), c = 9 and s = 12, giving

\[ \mathrm{MCC} = \frac{9 \times 12 - (7 \times 8 + 5 \times 4)}{\sqrt{(144 - 74)(144 - 80)}} = \frac{32}{\sqrt{4480}} \approx 0.478 \]
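A minimal Python sketch of the multiclass formula, using the intermediate variables defined above (the 2×2 cat/dog matrix is reused as input; it reproduces the binary result):

```python
import math

def multiclass_mcc(C):
    """R_K / multiclass MCC; C[i][j] = samples of true class i predicted as j."""
    K = len(C)
    s = sum(sum(row) for row in C)                            # total samples
    c = sum(C[k][k] for k in range(K))                        # correct predictions
    t = [sum(C[k][j] for j in range(K)) for k in range(K)]    # true counts per class
    p = [sum(C[i][k] for i in range(K)) for k in range(K)]    # predicted counts per class
    num = c * s - sum(pk * tk for pk, tk in zip(p, t))
    den = math.sqrt((s**2 - sum(pk**2 for pk in p)) *
                    (s**2 - sum(tk**2 for tk in t)))
    return num / den if den else 0.0

# Rows: true (cat, dog); columns: predicted (cat, dog)
print(multiclass_mcc([[6, 2], [1, 3]]))  # ~0.478, matching the binary MCC
```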
An alternative generalization of the Matthews Correlation Coefficient to more than two classes was given by Powers by the definition of Correlation as the geometric mean of Informedness and Markedness.
Several generalizations of the Matthews correlation coefficient to more than two classes, along with new multivariate correlation metrics for multinary classification, have been presented by P. Stoica and P. Babu.
Advantages over accuracy and F1 score
As explained by Davide Chicco in his paper "Ten quick tips for machine learning in computational biology" (BioData Mining, 2017) and "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation" (BMC Genomics, 2020), the Matthews correlation coefficient is more informative than F1 score and accuracy in evaluating binary classification problems, because it takes into account the balance ratios of the four confusion matrix categories (true positives, true negatives, false positives, false negatives).
The former article explains, for Tip 8:
Chicco's passage might be read as endorsing the MCC score in cases with imbalanced data sets. This, however, is contested; in particular, Zhu (2020) offers a strong rebuttal.
Note that the F1 score depends on which class is defined as the positive class. In the first example above, the F1 score is high because the majority class is defined as the positive class. Inverting the positive and negative classes results in the following confusion matrix:

TP = 0, FP = 0; TN = 95, FN = 5
This gives an F1 score = 0%.
The MCC does not depend on which class is defined as the positive one, giving it an advantage over the F1 score: it avoids the risk that an arbitrarily or incorrectly chosen positive class distorts the evaluation.
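This asymmetry can be checked directly. The sketch below assumes the first configuration was TP = 95, FP = 5, TN = 0, FN = 0 (illustrative values consistent with the text above, not taken verbatim from the cited papers):

```python
import math

def f1(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def mcc(tp, fp, tn, fn):
    d = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / d if d else 0.0

# Assumed imbalanced example: 95 positives, 5 negatives, everything
# predicted positive.
tp, fp, tn, fn = 95, 5, 0, 0
print(f1(tp, fp, fn), mcc(tp, fp, tn, fn))   # ~0.974, 0.0

# Relabel: swap the positive and negative classes.
tp, fp, tn, fn = tn, fn, tp, fp
print(f1(tp, fp, fn), mcc(tp, fp, tn, fn))   # 0.0, 0.0  (MCC unchanged)
```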
See also
Cohen's kappa
Contingency table
Cramér's V, a similar measure of association between nominal variables.
F1 score
Fowlkes–Mallows index
Polychoric correlation (subtype: Tetrachoric correlation), when variables are seen as dichotomized versions of (latent) continuous variables
References
Bioinformatics
Cheminformatics
Computational chemistry
Information retrieval evaluation
Machine learning
Statistical classification
Statistical ratios
Summary statistics for contingency tables | Phi coefficient | Chemistry,Engineering,Biology | 1,999 |
6,924,337 | https://en.wikipedia.org/wiki/Ebullioscopic%20constant | In thermodynamics, the ebullioscopic constant relates molality to boiling point elevation. It is the ratio of the latter to the former:
is the van 't Hoff factor, the number of particles the solute splits into or forms when dissolved.
is the molality of the solution.
A formula to compute the ebullioscopic constant is:
is the ideal gas constant.
is the molar mass of the solvent.
is boiling point of the pure solvent in kelvin.
is the molar enthalpy of vaporization of the solvent.
Through the procedure called ebullioscopy, a known constant can be used to calculate an unknown molar mass. The term ebullioscopy means "boiling measurement" in Latin. This is related to cryoscopy, which determines the same value from the cryoscopic constant (of freezing point depression).
This property of elevation of boiling point is a colligative property. It means that the property, in this case , depends on the number of particles dissolved into the solvent and not the nature of those particles.
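As a worked example, the formula can be evaluated numerically for water. The sketch below uses commonly quoted handbook values (assumed here, since the solvent table did not survive in this copy); the result should be close to the accepted value of about 0.512 K·kg/mol:

```python
R = 8.314            # J/(mol*K), ideal gas constant
M = 0.018015         # kg/mol, molar mass of water
T_b = 373.15         # K, boiling point of pure water
dH_vap = 40660.0     # J/mol, molar enthalpy of vaporization of water

K_b = R * M * T_b**2 / dH_vap
print(round(K_b, 3))  # ~0.513 K*kg/mol (literature value 0.512)

# Boiling-point elevation for 0.5 mol/kg NaCl (van 't Hoff factor i = 2):
i, m = 2, 0.5
print(round(i * K_b * m, 3))  # ~0.513 K
```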
Values for some solvents
See also
Ebullioscope
List of boiling and freezing information of solvents
Boiling-point elevation
Colligative properties
References
External links
Ebullioscopic constant calculator
Phase transitions | Ebullioscopic constant | Physics,Chemistry | 274 |
45,223,395 | https://en.wikipedia.org/wiki/Helsinki%20Central%20Library%20Oodi | The Helsinki Central Library Oodi (; ), commonly referred to as Oodi (), is a public library in Helsinki, Finland. The library is situated in the Kluuvi district, close to Helsinki Central Station and next to Helsinki Music Centre and Kiasma Museum of Contemporary Art. Despite its name, the library is not the main library in the Helsinki City Library system, which is located in Pasila instead; "central" refers to its location in the city centre.
History
A design competition in 2012 to build the library was won by the Finnish architectural firm ALA Architects, with structural design by Ramboll Finland. ALA Architects won the commission over 543 other competitors. The library was planned as a three-story building, to include a sauna (which has not materialised) and a ground-floor movie theatre. In January 2015, the Helsinki City Council voted 75–8 to launch the building project. The state agreed to pay part of the estimated cost in connection with the centenary of Finland's independence in 2017, with the City of Helsinki budgeting for the remainder of the building.
On 31 December 2016, it was announced that the new library would be named in Finnish and in Swedish. The name was selected from a pool of some 1,600 names proposed by the public. According to Helsinki Deputy City Director Ritva Viljanen, "Oodi" was chosen because it's easy to remember, easy to say, and easy to translate. The selection jury also did not want to name the new library after a person.
The library was built in the Töölönlahti district next to Helsinki Music Centre and Kiasma Museum of Contemporary Art and inaugurated on 5 December 2018 on the eve of the Finnish Independence Day.
Awards
In 2019, the International Federation of Library Associations (IFLA) named Oodi as the best Public Library of the Year.
Services
Specially designed robots transport books to the third floor, which houses the area designated for books. The rest of the space is designed for meetings and events.
The National Audiovisual Institute (KAVI) organizes regular archival film screenings at the Kino Regina cinema, located since 2019 in the Helsinki Central Library Oodi.
Energy use and environmental impact
The building is regarded as very energy-efficient due to its use of local materials and of natural light. It employs passive solar building design, allowing it to consume very little energy.
Gallery
See also
Helsinki Metropolitan Area Libraries
Helsinki University Library
National Library of Finland
Tampere Central Library Metso
Turku Main Library
Seinäjoki Library
References
External links
Concept designs and images of the upcoming Helsinki Central Library
Helsinki Central Library project description
Is Finland's Wood City the future of building? BBC News. 27 September 2022.
2018 establishments in Finland
Buildings and structures in Helsinki
Libraries established in 2018
Libraries in Finland
Low-energy building
Solar architecture
Solar design
Sustainable urban planning
Wooden architecture
Wooden buildings and structures in Finland | Helsinki Central Library Oodi | Engineering | 595 |
44,443,391 | https://en.wikipedia.org/wiki/Americium%28II%29%20iodide | Americium(II) iodide is the inorganic compound with the formula AmI2. It is a black solid which crystallizes in the same motif as strontium bromide.
References
Americium compounds
Iodides
Actinide halides | Americium(II) iodide | Chemistry | 54 |
63,029,619 | https://en.wikipedia.org/wiki/Pokhozhaev%27s%20identity | Pokhozhaev's identity is an integral relation satisfied by stationary localized solutions to a nonlinear Schrödinger equation or nonlinear Klein–Gordon equation. It was obtained by S.I. Pokhozhaev and is similar to the virial theorem. This relation is also known as G.H. Derrick's theorem. Similar identities can be derived for other equations of mathematical physics.
The Pokhozhaev identity for the stationary nonlinear Schrödinger equation
Here is a general form due to H. Berestycki and P.-L. Lions.
Let \( g : \mathbb{R} \to \mathbb{R} \) be continuous and real-valued, with \( g(0) = 0 \).
Denote \( G(s) = \int_0^s g(t)\,dt \).
Let

\[ u \in L^\infty_{\mathrm{loc}}(\mathbb{R}^n), \qquad \nabla u \in L^2(\mathbb{R}^n), \qquad G(u) \in L^1(\mathbb{R}^n), \qquad n \in \mathbb{N}, \]

be a solution to the equation

\[ \nabla^2 u + g(u) = 0, \qquad x \in \mathbb{R}^n, \]

in the sense of distributions.
Then \( u \) satisfies the relation

\[ \frac{n-2}{2} \int_{\mathbb{R}^n} |\nabla u(x)|^2 \, dx = n \int_{\mathbb{R}^n} G\bigl(u(x)\bigr) \, dx. \]
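A heuristic derivation, given here only as a sketch (assuming sufficient decay of \( u \) at infinity to justify the integrations by parts), pairs the equation with the dilation generator \( x \cdot \nabla u \):

```latex
\int_{\mathbb{R}^n} \nabla^2 u \,(x \cdot \nabla u)\,dx
   = \frac{n-2}{2}\int_{\mathbb{R}^n} |\nabla u|^2\,dx ,
\qquad
\int_{\mathbb{R}^n} g(u)\,(x \cdot \nabla u)\,dx
   = \int_{\mathbb{R}^n} x \cdot \nabla\bigl(G(u)\bigr)\,dx
   = -\,n \int_{\mathbb{R}^n} G(u)\,dx .
```

Adding the two identities and using \( \nabla^2 u + g(u) = 0 \) yields the stated relation.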
The Pokhozhaev identity for the stationary nonlinear Dirac equation
There is a form of the virial identity for the stationary nonlinear Dirac equation in three spatial dimensions (and also the Maxwell-Dirac equations) and in arbitrary spatial dimension.
Let \( n \ge 1 \) and \( N = 2^{\lfloor (n+1)/2 \rfloor} \),
and let \( \alpha^i \) (for \( 1 \le i \le n \)) and \( \beta \) be the self-adjoint Dirac matrices of size \( N \times N \):

\[ \alpha^i \alpha^j + \alpha^j \alpha^i = 2\delta_{ij} I_N, \qquad \alpha^i \beta + \beta \alpha^i = 0, \qquad \beta^2 = I_N. \]

Let \( D_0 = -\mathrm{i}\,\alpha \cdot \nabla \) be the massless Dirac operator.
Let \( g : \mathbb{R} \to \mathbb{R} \) be continuous and real-valued, with \( g(0) = 0 \).
Denote \( G(s) = \int_0^s g(t)\,dt \).
Let be a spinor-valued solution that satisfies the stationary form of the nonlinear Dirac equation,
in the sense of distributions,
with some .
Assume that
Then satisfies the relation
See also
Virial theorem
Derrick's theorem
References
Mathematical_identities
Theorems in mathematical physics
Physics theorems | Pokhozhaev's identity | Physics,Mathematics | 300 |
73,078,834 | https://en.wikipedia.org/wiki/Ethylphosphonoselenoic%20dichloride | Ethylphosphonoselenoic dichloride is a selenium-containing organophosphorus compound. It's the precursor to selenophos, the selenium analog of the VE nerve agent.
See also
Methylphosphonyl dichloride
References
Organophosphorus compounds
Organoselenium compounds | Ethylphosphonoselenoic dichloride | Chemistry | 73 |
5,176,156 | https://en.wikipedia.org/wiki/Conserved%20name | A conserved name or nomen conservandum (plural nomina conservanda, abbreviated as nom. cons.) is a scientific name that has specific nomenclatural protection. That is, the name is retained, even though it violates one or more rules which would otherwise prevent it from being legitimate. Nomen conservandum is a Latin term, meaning "a name to be conserved". The terms are often used interchangeably, such as by the International Code of Nomenclature for Algae, Fungi, and Plants (ICN), while the International Code of Zoological Nomenclature favours the term "conserved name".
The process for conserving botanical names is different from that for zoological names. Under the botanical code, names may also be "suppressed", nomen rejiciendum (plural nomina rejicienda or nomina utique rejicienda, abbreviated as nom. rej.), or rejected in favour of a particular conserved name, and combinations based on a suppressed name are also listed as “nom. rej.”.
Botany
Conservation
In botanical nomenclature, conservation is a nomenclatural procedure governed by Article 14 of the ICN. Its purpose is
"to avoid disadvantageous nomenclatural changes entailed by the strict application of the rules, and especially of the principle of priority [...]" (Art. 14.1).
Conservation is possible only for names at the rank of family, genus or species.
It may effect a change in original spelling, type, or (most commonly) priority.
Conserved spelling (orthographia conservanda, orth. cons.) allows spelling usage to be preserved even if the name was published with another spelling: Euonymus (not Evonymus), Guaiacum (not Guajacum), etc. (see orthographical variant).
Conserved types (typus conservandus, typ. cons.) are often made when it is found that a type in fact belongs to a different taxon from the description, when a name has subsequently been generally misapplied to a different taxon, or when the type belongs to a small group separate from the monophyletic bulk of a taxon.
Conservation of a name against an earlier taxonomic (heterotypic) synonym (which is termed a rejected name, nomen rejiciendum, nom. rej.) is relevant only if a particular taxonomist includes both types in the same taxon.
Rejection
Besides conservation of names of certain ranks (Art. 14), the ICN also offers the option of outright rejection of a name (nomen utique rejiciendum) also called suppressed name under Article 56, another way of creating a nomen rejiciendum that cannot be used anymore. Outright rejection is possible for a name at any rank.
Rejection (suppression) of individual names is distinct from suppression of works (opera utique oppressa) under Article 34, which allows for listing certain taxonomic ranks in certain publications which are considered not to include any validly published names.
Effects
Conflicting conserved names are treated according to the normal rules of priority. Separate proposals (informally referred to as "superconservation" proposals) may be made to protect a conserved name that would be overtaken by another. However, conservation has different consequences depending on the type of name that is conserved:
A conserved family name is protected against all other family names based on genera that are considered by the taxonomist to be part of the same family.
A conserved genus or species name is conserved against any homonyms, homotypic synonyms, and those specific heterotypic synonyms that are simultaneously declared nomina rejicienda (as well as their own homotypic synonyms). As taxonomic changes are made, other names may require new proposals for conservation and/or rejection.
Documentation
Conserved and rejected names (and suppressed names) are listed in the appendices to the ICN. As of the 2012 (Melbourne) edition, a separate volume holds the bulk of the appendices (except appendix I, on names of hybrids). The substance of the second volume is generated from a database which also holds a history of published proposals and their outcomes, the binding decisions on whether a name is validly published (article 38.4) and on whether it is a homonym (article 53.5). The database can be queried online.
Procedure
The procedure starts by submitting a proposal to the journal Taxon (published by the IAPT). This proposal should present the case both for and against conservation of a name. Publication notifies anybody concerned that the matter is being considered and makes it possible for those interested to write in. Publication is the start of the formal procedure: it counts as referring the matter "to the appropriate Committee for study" and Rec 14A.1 comes into effect. The name in question is (somewhat) protected by this Recommendation ("... authors should follow existing usage as far as possible ...").
After reviewing the matter, judging the merits of the case, "the appropriate Committee" makes a decision either against ("not recommended") or in favor ("recommended"). Then the matter is passed to the General Committee.
After reviewing the matter, mostly from a procedural angle, the General Committee makes a decision, either against ("not recommended") or in favor ("recommended"). At this point Article 14.16 comes into effect. Art 14.16 authorizes all users to indeed use that name.
The General Committee reports to the Nomenclature Section of the International Botanical Congress, stating which names (including types and spellings) it recommends for conservation. Then, by Div.III.1, the Nomenclature Section makes a decision on which names (including types, spellings) are accepted into the Code. At this stage the de facto decision is made to modify the Code.
The Plenary Session of that same International Botanical Congress receives the "resolution moved by the Nomenclature Section of that Congress" and makes a de jure decision to modify the Code. By long tradition this step is ceremonial in nature only.
In the course of time there have been different standards for the majority required for a decision. However, for decades the Nomenclature Section has required a 60% majority for an inclusion in the Code, and the Committees have followed this example, in 1996 adopting a 60% majority for a decision.
Zoology
For zoology, the term "conserved name", rather than nomen conservandum, is used in the International Code of Zoological Nomenclature, although informally both terms are used interchangeably.
In the glossary of the International Code of Zoological Nomenclature (the code for names of animals, one of several nomenclature codes), this definition is given:
conserved name
A name otherwise unavailable or invalid that the Commission, by the use of its plenary power, has enabled to be used as a valid name by removal of the known obstacles to such use.
This is a more generalized definition than the one for nomen protectum, which is specifically a conserved name that is either a junior synonym or homonym that is in use because the senior synonym or homonym has been made a nomen oblitum ("forgotten name").
An example of a conserved name is the dinosaur genus name Pachycephalosaurus, which was formally described in 1943. Later, Tylosteus (which was formally described in 1872) was found to be the same genus as Pachycephalosaurus (a synonym). By the usual rules, the genus Tylosteus has precedence and would normally be the correct name. But the International Commission on Zoological Nomenclature (ICZN) ruled that the name Pachycephalosaurus was to be given precedence and treated as the valid name, because it was in more common use and better known to scientists.
The ICZN's procedural details are different from those in botany, but the basic operating principle is the same, with petitions submitted to the commission for review.
See also
Opinion 2027, an example of name conservation as applied by ICZN
Glossary of scientific naming
References
Biological classification
Botanical nomenclature
Zoological nomenclature
Taxonomy (biology) | Conserved name | Biology | 1,698 |
705,621 | https://en.wikipedia.org/wiki/Web%20Services%20Interoperability | The Web Services Interoperability Organization (WS-I) was an industry consortium created in 2002 and chartered to promote interoperability amongst the stack of web services specifications. WS-I did not define standards for web services; rather, it created guidelines and tests for interoperability.
In July 2010, WS-I joined OASIS, a standardization consortium, as a member section.
It operated until December 2017.
The WS-I standards were then maintained by relevant technical committees within OASIS.
It was governed by a board of directors consisting of the founding members (IBM, Microsoft, BEA Systems, SAP, Oracle, Fujitsu, Hewlett-Packard, and Intel) and two elected members (Sun Microsystems and webMethods). After it joined OASIS, other organizations have joined the WS-I technical committee including CA Technologies, JumpSoft and Booz Allen Hamilton.
The organization's deliverables included profiles, sample applications that demonstrate the profiles' use, and test tools to help determine profile conformance.
WS-I Profiles
According to WS-I, a profile is
A set of named web services specifications at specific revision levels, together with a set of implementation and interoperability guidelines recommending how the specifications may be used to develop interoperable web services.
WS-I Basic Profile
WS-I Basic Security Profile
Simple Soap Binding Profile
WS-I Profile Compliance
The WS-I is not a certifying authority; thus, every vendor can claim to be compliant with a profile. However, the use of the test tool is required before a company can claim a product to be compliant. See WS-I Trademarks and Compliance claims requirements.
See also
Web Services Resource Framework
OASIS
References
External links
WS-I consortium's Home Page
WS-I OASIS member section Home page (2010-2017, maintained as archive by OASIS)
The Microsoft - WS-I controversy, cnet news, May 2002
Web services
Interoperability | Web Services Interoperability | Engineering | 404 |
706,374 | https://en.wikipedia.org/wiki/Antihomomorphism | In mathematics, an antihomomorphism is a type of function defined on sets with multiplication that reverses the order of multiplication. An antiautomorphism is an invertible antihomomorphism, i.e. an antiisomorphism, from a set to itself. From bijectivity it follows that antiautomorphisms have inverses, and that the inverse of an antiautomorphism is also an antiautomorphism.
Definition
Informally, an antihomomorphism is a map that switches the order of multiplication. Formally, an antihomomorphism between structures \( X \) and \( Y \) is a homomorphism \( \phi : X \to Y^{\mathrm{op}} \), where \( Y^{\mathrm{op}} \) equals \( Y \) as a set, but has its multiplication reversed to that defined on \( Y \). Denoting the (generally non-commutative) multiplication on \( Y \) by \( \cdot \), the multiplication on \( Y^{\mathrm{op}} \), denoted by \( * \), is defined by \( x * y := y \cdot x \). The object \( Y^{\mathrm{op}} \) is called the opposite object to \( Y \) (respectively, opposite group, opposite algebra, opposite category etc.).
This definition is equivalent to that of a homomorphism \( X^{\mathrm{op}} \to Y \) (reversing the operation before or after applying the map is equivalent). Formally, sending \( X \) to \( X^{\mathrm{op}} \) and acting as the identity on maps is a functor (indeed, an involution).
Examples
In group theory, an antihomomorphism is a map between two groups that reverses the order of multiplication. So if \( \phi : X \to Y \) is a group antihomomorphism,
φ(xy) = φ(y)φ(x)
for all x, y in X.
The map that sends x to x−1 is an example of a group antiautomorphism. Another important example is the transpose operation in linear algebra, which takes row vectors to column vectors. Any vector-matrix equation may be transposed to an equivalent equation where the order of the factors is reversed.
With matrices, an example of an antiautomorphism is given by the transpose map. Since inversion and transposing both give antiautomorphisms, their composition is an automorphism. This involution is often called the contragredient map, and it provides an example of an outer automorphism of the general linear group GL(n, F), where F is a field, except when |F| = 2 and n = 1 or 2, or |F| = 3 and n = 1 (i.e., for the groups GL(1, 2), GL(2, 2), and GL(1, 3)).
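A quick numerical check of the transpose example (a sketch using NumPy; the matrices are arbitrary illustrative values chosen to be invertible):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])   # det = -2, invertible
B = np.array([[0., 1.], [5., 6.]])   # det = -5, invertible

# Antihomomorphism property of the transpose: (AB)^T = B^T A^T
assert np.allclose((A @ B).T, B.T @ A.T)

# Inversion is also an antiautomorphism, so the composition
# g -> (g^T)^(-1) (the contragredient map) is an automorphism:
contra = lambda M: np.linalg.inv(M.T)
assert np.allclose(contra(A @ B), contra(A) @ contra(B))
print("(AB)^T == B^T A^T, and the contragredient map is multiplicative")
```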
In ring theory, an antihomomorphism is a map between two rings that preserves addition, but reverses the order of multiplication. So \( \phi : X \to Y \) is a ring antihomomorphism if and only if:
φ(1) = 1
φ(x + y) = φ(x) + φ(y)
φ(xy) = φ(y)φ(x)
for all x, y in X.
For algebras over a field K, φ must be a K-linear map of the underlying vector space. If the underlying field has an involution, one can instead ask φ to be conjugate-linear, as in conjugate transpose, below.
Involutions
It is frequently the case that antiautomorphisms are involutions, i.e. the square of the antiautomorphism is the identity map; these are also called involutive antiautomorphisms. For example, in any group the map that sends x to its inverse x−1 is an involutive antiautomorphism.
A ring with an involutive antiautomorphism is called a *-ring, and these form an important class of examples.
Properties
If the source X or the target Y is commutative, then an antihomomorphism is the same thing as a homomorphism.
The composition of two antihomomorphisms is always a homomorphism, since reversing the order twice preserves order. The composition of an antihomomorphism with a homomorphism gives another antihomomorphism.
See also
Semigroup with involution
References
Morphisms | Antihomomorphism | Mathematics | 811 |
11,774,737 | https://en.wikipedia.org/wiki/CBM-CFS3 | CBM-CFS3 (Carbon Budget Model of the Canadian Forest Sector) is a Windows-based software modelling framework for stand- and landscape-level forest ecosystem carbon accounting. It is used to calculate forest carbon stocks and stock changes for the past (monitoring) or into the future (projection). It can be used to create, simulate and compare various forest management scenarios in order to assess impacts on carbon. It is compliant with requirements under the Kyoto Protocol and with the Good Practice Guidance for Land Use, Land-Use Change and Forestry (2003) report published by the Intergovernmental Panel on Climate Change (IPCC).
It is the central model of the Government of Canada's National Forest Carbon Monitoring, Accounting and Reporting System (NFCMARS). The CBM-CFS3 was developed through a collaboration between Natural Resources Canada's Canadian Forest Service (CFS) and the Canadian Model Forest Network, and is currently supported by the CFS. The CBM-CFS3 is distributed at no charge by the Canadian Forest Service through Canada's National Forest Information System web site. Technical support is available by contacting Stephen Kull, Carbon Model Extension Forester, at the CFS.
See also
Carbon accounting
References
External links
Canadian Forest Service, Forest Carbon Accounting Web Site
Canadian Forest Service CBM-CFS3 Web Site
Natural Resources Canada Web Site
Canadian Forest Service Web Site
The Canadian Model Forest Network
Good Practice Guidance for Land Use, Land-Use Change and Forestry
Forest models
Climate change in Canada | CBM-CFS3 | Biology,Environmental_science | 309 |
51,456,706 | https://en.wikipedia.org/wiki/Isotropic%20beacon | An isotropic beacon is a hypothetical type of transmission beacon that emits a uniform EM signal in all directions for the purposes of communication with extraterrestrial intelligence.
Isotropic beacons and their relation to SETI
An isotropic beacon can be any transmitter that emits a uniform electromagnetic field. However, the term is most commonly used to describe a transmitter used by a civilization to call attention to itself over interstellar distances to extraterrestrial listeners. The energy budget required for such a beacon is often discussed in terms of the Kardashev scale, a method of measuring a civilization's level of technological advancement based on the amount of energy it is able to use. The measure was proposed by Soviet astronomer Nikolai Kardashev in 1964. The Kardashev scale has three designated categories: a Type I civilization, also called a planetary civilization, can use and store all of the energy available on its planet; a Type II civilization, also called a stellar civilization, can use and control energy at the scale of its planetary system; and a Type III civilization, also called a galactic civilization, can control energy at the scale of its entire host galaxy. Project Cyclops was one of the first studies of the theoretical framework behind building and detecting such a device.
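Because the beacon radiates uniformly in all directions, the received flux falls off with the inverse square of distance. The sketch below estimates the flux such a beacon would deliver at interstellar range; the transmitter power is an assumed, illustrative figure, not a value from the sources cited here.

```python
import math

LY = 9.4607e15          # metres in one light year

def isotropic_flux(power_w, distance_m):
    """Received flux (W/m^2) from an isotropic transmitter: S = P / (4*pi*d^2)."""
    return power_w / (4 * math.pi * distance_m**2)

P = 1e13                # W, assumed beacon power (roughly humanity's total output)
d = 100 * LY            # 100 light years

print(f"{isotropic_flux(P, d):.1e} W/m^2")  # ~8.9e-25 W/m^2; such tiny fluxes
                                            # are why SETI favors narrowband searches
```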
References
Exploratory engineering
Hypothetical astronomical objects
Search for extraterrestrial intelligence
Astronomy projects
Hypothetical technology | Isotropic beacon | Astronomy,Technology | 286 |
11,421,289 | https://en.wikipedia.org/wiki/Mir-395%20microRNA%20precursor%20family | mir-395 is a non-coding RNA called a microRNA that was identified in both Arabidopsis thaliana and Oryza sativa computationally and was later experimentally verified. mir-395 is thought to target mRNAs coding for ATP sulphurylases. The mature sequence is excised from the 3' arm of the hairpin.
miR-395 is upregulated in Arabidopsis during sulphate-limited conditions, when the mature miRNA then regulates sulphur transporters and ATP sulphurylases.
References
External links
miRBase family entry for mir-395
MicroRNA
MicroRNA precursor families | Mir-395 microRNA precursor family | Chemistry | 132 |
52,376,402 | https://en.wikipedia.org/wiki/OnePlus%203T | The OnePlus 3T (also abbreviated as OP3T) is a smartphone made by OnePlus. It is the successor to the OnePlus 3 and was revealed on 15 November 2016.
It is an incremental update to the company's flagship phone, released only six months after its predecessor. It features the identical Optic AMOLED display, the same Sony IMX298 rear camera sensor, and the same Dash Charge technology as the OnePlus 3.
The OnePlus 3T is also the first phone from OnePlus to be available for immediate delivery (in Europe and North America), without long waiting times from being out of stock.
Release
The OnePlus 3T was released on 15 November 2016 via a Facebook live video. It went on sale in a new Gunmetal colour, with the Soft Gold option released soon after, much like the OnePlus 3. The OnePlus 3T has 6 GB of RAM, the option of 64 or 128 GB of UFS 2.0 storage, and a Qualcomm Snapdragon 821 processor.
To celebrate the 20th anniversary of Colette, a French high fashion, streetwear, and accessory retailer, OnePlus and Colette partnered to produce 250 exclusive limited edition black versions of the phone featuring a Colette logo at the back of the phone. This exclusive edition of the phone went on sale in Paris at a Colette store on 21 March 2017.
Specifications
Hardware
The OnePlus 3T has the same metal back design, compared with the OnePlus 3, with anodised aluminium and curved edges. The device is available in two colors, a new Gunmetal (black/gray), which is slightly darker than the Graphite color used in the OnePlus 3 and Soft Gold (white/gold) which was released on 6 January 2017. A limited edition Midnight Black colored phone was available to purchase on 28 March 2017, it featured an all black backing similar to the Colette exclusive limited edition but without the logo.
The OnePlus 3T's darker aluminium backing is the only physical characteristic that distinguishes it from the OnePlus 3. Users can also purchase the same protective covers in Bamboo, Rosewood, Black Apricot, Karbon and Sandstone, which accommodate the camera hump so that it sits flush with the cover.
It features the same Optic AMOLED display with Corning Gorilla Glass 4 protection, in the same casing of the OnePlus 3. The OnePlus 3T now comes in options for 64 GB or 128 GB of UFS 2.0 storage. The phone also features the same alert slider which users can use to quickly toggle between alert modes from silent to priority to all notifications. It includes NFC and the same fast finger scanner which can unlock the device in approximately 0.3 seconds.
The OnePlus 3T contains the same rear-facing Sony IMX298 sensor with 16 MP, 1.1 μm, f/2.0, Optical image stabilization (OIS) and Electronic Image Stabilization (EIS), as the predecessor. The phone has an upgraded front-facing Samsung 3P8SP sensor with 16 MP, 1.0 μm, f/2.0 and EIS.
The phone once again has OnePlus's quick-charging capability, named Dash Charge, with the ability to regain 60% of the charge in 30 minutes. According to the company, this is accomplished by doing all the power transforming required for direct input to the battery in the supplied power brick, not within the phone itself, reducing heat on the device. Additionally, the power brick can contain larger, dedicated electronics, whereas any power processing on a phone has to use smaller and cooler equipment, reducing the speed of charging.
The phone comes with a faster Qualcomm Snapdragon 821 processor and a bigger 3400 mAh battery.
Software
The OnePlus 3T came out of the box with Android 6.0.1 Marshmallow, as OxygenOS version 3.5. The last version of 6.0.1 released was OxygenOS version 3.5.4 on 14 December 2016.
OnePlus released the first stable version of Android 7.0 Nougat on 31 December 2016, with the release of OxygenOS 4.0. The last version of 7.0 was released as OxygenOS version 4.0.3 on 9 February 2017.
OnePlus then released a first stable version of Android 7.1.1 Nougat on 16 March 2017, with the release of OxygenOS version 4.1.0. The last version of 7.1.1 as OxygenOS 4.1.x was released as version 4.1.7 on 22 August 2017.
A major stable update (still based on Android 7.1.1 Nougat) was released on 25 September 2017, as OxygenOS version 4.5.0. The last version of 7.1.1 was released as OxygenOS version 4.5.1 on 16 October 2017.
After owners of the phone expressed concern whether their phone would be abandoned after the release of the OnePlus 5, it was confirmed by Pete Lau, the CEO, that the phone, and its predecessor would eventually receive a software update to Android O. It was also announced that it would be the last major update.
On 19 November 2017, the first stable Android 8.0 Oreo update was released, as OxygenOS version 5.0.
In May 2018, OnePlus released OxygenOS version 5.0.3 with the same Face Unlock feature present on the OnePlus 5T for the 3T and its predecessor several months after promising it.
On 30 July 2018, OnePlus updated their support policy choosing to provide Android P as the last update for the OnePlus 3T instead of Android 8.1.
The last version of 8.0 was released as OxygenOS 5.0.8 on 27 November 2018.
Stable Android 9.0 Pie for the 3T was released on 22 May 2019 as OxygenOS version 9.0.3 (OnePlus choosing to now synchronise their OS version numbering with Android).
The last version of 9.0 Pie was released on 20 November 2019, as OxygenOS 9.0.6. It was also confirmed this would be the last official update released for the 3T by OnePlus.
Network compatibility
The OnePlus 3T has the following compatibility.
Reception
OnePlus 3T generally received a good reception. It was especially praised for its good performance and price to quality ratio. The OnePlus 3T was considered too similar to OnePlus 3 and the increase of the front camera's megapixels was thought to be ineffective in many reviews. Many critics felt the difference was so small that OnePlus 3 owners should not upgrade to OnePlus 3T.
Criticism
XDA Developers discovered that, with the launch of the Android 7.0 Nougat update, OnePlus had introduced a software defeat device into the code of the OnePlus 3 and the OnePlus 3T, relaxing thermal throttling and increasing clock speeds when the phone detected that it was running a benchmark app, in order to boost benchmark scores. This surprised much of the Android enthusiast community, as every major manufacturer had removed benchmark-cheating code following the massive backlash that occurred when the practice was originally discovered on other devices in 2013. OnePlus immediately stated that it would remove the benchmark cheating from future software versions, and that it was not sure how the code made it into a production build. OnePlus later reversed this decision with the OnePlus 5, reintroducing software that locked clock speeds to their maximum while in a benchmark.
References
External links
OnePlus mobile phones
Mobile phones introduced in 2016
Discontinued flagship smartphones
Mobile phones with 4K video recording | OnePlus 3T | Technology | 1,623 |
4,639,751 | https://en.wikipedia.org/wiki/Dental%20impression | A dental impression is a negative imprint of hard and soft tissues in the mouth from which a positive reproduction, such as a cast or model, can be formed. It is made by placing an appropriate material in a dental impression tray which is designed to roughly fit over the dental arches. The impression material is liquid or semi-solid when first mixed and placed in the mouth. It then sets to become an elastic solid, which usually takes a few minutes depending upon the material. This leaves an imprint of a person's dentition and surrounding structures of the oral cavity.
Digital impressions using computerized scanning are now available.
Uses
Impressions, and the study models, are used in several areas of dentistry including:
diagnosis and treatment planning
prosthodontics (such as making dentures)
orthodontics
restorative dentistry (e.g. to make impressions of teeth which have been prepared to receive indirect extracoronal restorations such as crowns, bridges, inlays and onlays)
maxillofacial prosthetics (prosthetic rehabilitation of intra-oral and extra-oral defects due to trauma, congenital defects, and surgical resection of tumors)
oral and maxillofacial surgery for both intra-oral and or extra-oral aims (e.g. dental implants)
The required type of material for taking an impression and the area that it covers will depend on the clinical indication. Common materials used for dental impressions are:
non rigid materials:
reversible hydrocolloids: agar
irreversible hydrocolloids: sodium alginate
elastomeric materials:
silicones (polyvinyl siloxane): condensation-cured silicones, addition silicones, vinyl polyether silicones (VPES)
polyethers
polysulphides
rigid materials:
plaster of Paris
impression compound
zinc oxide and eugenol-based impression paste
Techniques for taking impression
Impressions can also be described as mucostatic or mucocompressive, defined both by the impression material used and the type of impression tray used (i.e. spaced or closely adapted). Mucostatic means that the impression is taken with the mucosa in its normal resting position. These impressions will generally lead to a denture which has a good fit during rest, but during chewing the denture will tend to pivot around incompressible areas (e.g. torus palatinus) and dig into compressible areas. Mucocompressive means that the impression is taken when the mucosa is subject to compression. These impressions will generally lead to a denture that is most stable during function but not at rest. Dentures are at rest most of the time, so it could be argued that mucostatic impressions make better dentures; however, in reality it is likely that tissue adaptation to the presence of a denture made with either a mucostatic or a mucocompressive technique makes little difference between the two in the long term.
Another type of impression technique is selective pressure technique in which stress bearing areas are compressed and stress relief areas are relieved such that both the advantages of muco static and muco compressive techniques are achieved.
Special techniques
"Wash impression" – this is a very thin layer of low viscosity impression material which is used to record fine details. Usually it is the second stage, where the runny impression material is used after an initial impression taken with a more viscous material.
Two phase one stage: the putty and low body weight impression materials are inserted to the mouth at once .
Two phase two stage: first the putty is set in the mouth then low body weight material is added on the top of ready impression and inserted to the mouth to get the final accurate impression
Functional impression (also known as secondary impression)
Neutral zone impression
Window technique
Altered cast technique
Applegate technique
Impression for provision of fixed prosthesis
When taking impressions for crown and bridge work, the preparation margin must be accurately captured by the light-bodied impression material. As a result, the gingival tissues must be pushed away from the preparation margin so that the impression material can reach it. Inserting a retraction cord into the gingival crevice is one method of retracting the gingival tissues away from the tooth.
Impression materials
Impression materials can be considered as follows:
Rigid
Plaster of Paris (impression plaster)
Plaster of Paris is traditionally used as a casting material once the impression has been taken; however, its use as an impression material is occasionally useful in edentate patients. The tissues are not displaced during impression taking, hence the material is termed mucostatic. Mainly composed of β-calcium sulphate hemihydrate, impression plaster has a similar composition and setting reaction to the casting material, with an increase in certain components to control the initial expansion that is observed with plaster of Paris. Additionally, more water is added to the powder than with the casting material to aid good flow during impression taking. As the impression material is very similar to the casting material to be used, it requires the incorporation of a separating medium (e.g. sodium alginate) to aid in separating the cast from the impression. If a special tray is to be used, impression plaster requires 1–1.5 mm of spacing for adequate thickness.
Advantages:
Hydrophilic
Good detail reproduction
Good dimensional stability (contraction on setting)
Good patient tolerance
2–3 minutes working time
Disadvantages:
Brittle
No recovery from deformation. Therefore, if an undercut is present the material will have to be broken off the impression and then glued back together prior to casting
Excess salivation by the patient could have adverse effect on detail reproduction
Impression compound
Impression compound has been used for many years as an impression material for removable prostheses, although its use has recently declined with the advent of better materials. Due to its poor flow characteristics, it is unable to reproduce fine detail, and so its use is somewhat limited to the following scenarios:
Primary impressions of complete dentures
Border moulding of trays
Extension of trays
Achieving mucocompression in the post-dam area when working impressions are taken for complete dentures
Impression compound is a thermoplastic material; it is presented as a sheet of material, which is warmed in hot water (> 55–60 °C) for one minute, and loaded on a tray prior to impression taking. Once in the mouth, the material will harden and record the detail of the soft tissues. The impression can further be hardened by placing it in cold water after use. Impressions with compound should be poured within an hour as the material exhibits poor dimensional stability. There are two main presentations of impression compound: red compound and greenstick. The latter is mainly used for border moulding and recording the post-dam area.
Vinyl polysiloxane impression material
Vinyl polysiloxane (VPS) is a dental impression material used for making accurate impressions with excellent reproducibility. It is available in putty and light-body consistencies to help dentists make accurate impressions for the fabrication of crowns, bridges, inlays, onlays and veneers.
Example: Flexceed
Advantages:
Better detail reproduction with two viscosities (putty and light body)
Exhibits pseudo-plastic properties, giving a precision not found in alginates
Superior tear strength compared with other VPS materials
Better dimensional stability – multiple models can be poured for up to two weeks
Good hydrophilicity
Compatible with gypsum products
Superior wetting characteristics, ensuring the gypsum working cast is hard with a smooth surface
Can be subjected to cold sterilization without compromising the details and dimensional stability of the impression
Zinc-oxide eugenol plaster (impression paste)
Impression paste is traditionally used to take the working (secondary) impressions for a complete denture. When used with a special tray it requires 1 mm of spacing to allow for enough thickness of the material; this is also termed a close fitting special tray. It is available as a two-paste system:
Base paste: zinc oxide
Catalyst paste: eugenol
The two pastes should be used in equal amounts and blended together with a stainless steel spatula (Clarident spatula) on a paper pad. Zinc-oxide Eugenol plaster will produce a mucostatic impression.
Advantages:
Thermoplastic – can be heated to aid removal from the casting material
Good detail reproduction
Good dimensional stability (0.15% shrinkage on setting)
Disadvantages:
Rigid – the presence of undercuts can distort the final material or cause the engaged section to separate from the resultant impression
Impression waxes
Non rigid
Hydrocolloid
Agar
Agar is a material which provides high accuracy. Therefore, it is used in fixed prosthodontics (crowns, bridges) or when a dental model has to be duplicated by a dental technician. Agar is a truly hydrophilic material, hence the teeth do not need to be dried before placing it into the mouth. It is a reversible hydrocolloid, which means that its physical state can be changed by altering its temperature, allowing the material to be reused multiple times. The material comes in the form of tubes or cartridges. Special hardware is required in the process of taking agar impressions, namely a water bath and rim-lock trays with coiled edges allowing passage of cold water to cool the material so that it sets while in the mouth. The bath consists of three containers filled with water at different temperatures: the first is set at 100 °C to liquefy the agar, the second is used to lower the temperature of the material for safe intra-oral use (usually set at 43–46 °C) and the third is used for storage and is set at 63–66 °C. The storage container can maintain agar tubes and cartridges at 63–66 °C for several days for convenient immediate use. The tray is connected to a hose, the material is loaded onto the tray and placed in the mouth over the preparation – an adequate thickness of the material is required, otherwise distortion may occur upon removal from the mouth. The other end of the hose is connected to a cold water source. The hydrocolloid is then cooled through the tray wall, which results in setting of the material. The models should be poured as soon as possible to avoid changes in dimensional stability.
Modern dentistry offers other materials (e.g. elastomerics) which provide high accuracy impressions and are easier to use hence agar is used less frequently.
Advantages:
high accuracy
hydrophilic
reusable
Disadvantages:
complex procedural steps
significant start-up cost of the hardware
Alginate
Alginate, on the other hand, is an irreversible hydrocolloid. It exists in two phases: either as a viscous liquid, or a solid gel, the transition generated by a chemical reaction.
The impression material is created by adding water to the powdered alginate, which contains a mixture of sodium and potassium salts of alginic acid. The overall setting reaction is a double decomposition, as follows:
Potassium (sodium) alginate + calcium sulphate dihydrate + water → calcium alginate + potassium (sodium) sulphate
Sodium phosphate is added as a retarder which preferentially reacts with calcium ions to delay the set of the material.
Alginate has a mixing time of 45–60 seconds, and a working time of 45 seconds (fast set) or 75 seconds (regular set). The setting time can be between 1 and 4.5 minutes and can be varied by the temperature of the water used: the cooler the water, the slower the set, and vice versa. The material must be fully set before removal from the mouth.
The water content that the completed impression is exposed to must be controlled. Improper storage can either result in syneresis (the material contracts upon standing and exudes liquid) or imbibition (water uptake which is uncontrolled in extent and direction). Therefore, the impression must be stored correctly, which involves wrapping the set material in a damp tissue and storing it in a sealed polythene bag until the impression can be cast. Alginate is used in dental circumstances when less accuracy is required. For example, this includes the creation of study casts to plan dental cases and design prosthesis, and also to create the primary and working impressions for denture construction.
Several faults can be encountered when using an alginate impression material, but these can generally be avoided through adequate mixing, correct spatulation, correct storage of the set material, and timely pouring of the impression.
Due to the increased accuracy of elastomers, they are recommended for taking secondary impressions over alginate. Patients both preferred the overall experience of having an impression taken with an elastomer than with alginate, and also favoured the resultant dentures produced.
Advantages:
Easy flow
Cheap
Reproduction of adequate detail
Fast setting time
Minimal tissue displacement in the mouth
Disadvantages:
It has poor dimensional stability
Poor tear strength
If it is unsupported, it distorts
Easy to include air during mixing
A minimum thickness of 3 mm is required, which is difficult to achieve in thin areas between the teeth
Non-aqueous elastomeric impression materials
As stated above, there are times clinically where the accuracy of an alginate impression is not acceptable, particularly for the construction of fixed prosthodontics. Agar may be used but as discussed has a number of technical difficulties in its use. As such elastomers were developed to capture the fine detail and accuracy required.
Polysulphides
Polysulphides have become increasingly unpopular due to their unpleasant taste and smell. The material is presented as a paste-to-paste system mixed by a dental nurse prior to use. The material sets by a condensation polymerisation reaction. Initially the polymer chains increase in length, causing a slight increase in temperature of 3–4 °C. This is followed by cross-linking of the polymer chains and finally the release of water as a by-product. This latter reaction slightly contracts the material, making it stiffer and more resistant to permanent deformation. When poured and cast, this slight contraction means the resulting model is slightly larger, which creates space for the luting cement.
Advantages:
Good tear resistance
Dimensionally stable – some shrinkage on set with release of by-product
Good Accuracy
Most flexible elastomer
Disadvantages:
Reduced patient satisfaction – distinct unpleasant taste and smell
Long setting time
Requires excellent moisture control
Difficult to mix
Polyethers
Polyethers are the most hydrophilic of the otherwise hydrophobic elastomeric impression materials. This property makes polyether a commonly used material in general practice, as it is more likely to capture preparation margins when moisture control is not perfect.
Presented as a paste-to-paste system, the material is often used with a monophase impression technique, meaning both the material syringed around the preparation and the bulk within the tray are the same material. Note that when mixing polyether, the base-to-accelerator ratio is not 1:1 as with most elastomers, but 1:4.
Advantages:
Most hydrophilic elastomeric impression material
Dimensionally stable – minimal shrinkage on set with release of by-product
Good accuracy
Monophase impression
Good tear resistance
Disadvantages:
Can be too stiff – deep undercuts and space under a bridge pontic should be blocked out with soft (modelling) wax to prevent inadvertently removing bridge with impression
Indications:
Indirect cast restorations, especially in cases where moisture control cannot be guaranteed
Locating and the pick up of implant analogues in preparation for placement of superstructure
Functional impression taking in removable prosthodontics
Silicones
There are two types of silicone resin impression material, addition and condensation (reflecting each of their setting reactions). Silicones are inherently hydrophobic and as such require excellent moisture control for optimal use.
Addition silicone
Addition silicones have become the most used impression material in advanced restorative dentistry. There are many forms available, based on their differing amounts of filler content. This dictates the flow properties of each type with more filler resulting in a thicker, less flowable material. The most common forms are: extra light-bodied (low filler content), light-bodied, universal or medium-bodied, heavy-bodied and putty (high filler content). However each type follows the same addition polymerisation reaction and is presented as a paste to paste system. The reaction does not produce any by-product making it dimensionally stable and very accurate.
Advantages:
Good detail reproduction
Excellent dimensional stability – no shrinkage on set
High patient acceptance
More than one model can be poured from one cast
Disadvantages:
Hydrophobic – requires excellent moisture control
Too accurate – impression may not be compensated for during investment and casting, resulting in too small a die being produced and subsequently too small a restoration.
Poor tear resistance
Expensive
Indications:
Indirect cast restorations
Multiple models required
Impressions for removable prosthodontics
Bite registration material
Contraindications
Inadequate moisture control
Condensation silicone
Condensation silicones are commonly used as a putty, paste or light-bodied material. The systems are usually presented as a paste or putty with a liquid/paste catalyst, meaning accurate proportioning is difficult to achieve, resulting in varied outcomes. For example, the setting reaction of putty is started by kneading a low-viscosity paste accelerator into a bulk of silicone with high filler content.
As stated, the material sets by a condensation reaction, forming a three-dimensional silicone matrix whilst releasing ethyl alcohol as a by-product. This in turn results in a minimally exothermic set with marked shrinkage on setting (shrinkage being relative to filler content, with high filler content giving reduced shrinkage).
Advantages:
Accurate
High patient acceptance
Disadvantages:
Hydrophobic – requires excellent moisture control
Unreliable dimensional stability – difficult to accurately proportion components leading to variable results
Marked shrinkage on setting with release of by-product
Indications:
Indirect cast restorations
Matrices for indirect/direct restoration
Working impressions for metal based removable prosthodontics and relines
Lab putty
Impression trays
An impression tray is a container which holds the impression material as it sets, and supports the set impression until after casting. Impression trays can be separated into two main categories- stock trays and special trays.
Stock trays
Stock trays are used to take primary impressions and come in a range of sizes and shapes, and can be plastic or metal. Stock trays can be rounded (designed to fit the mouths of people with no remaining teeth) or squared (designed to fit people with some remaining teeth). They can be full arch, covering all the teeth in either the upper or lower jaw in one impression, or a partial coverage tray, designed to fit over about three teeth (used when making crowns). The stock tray with the closest size and shape to the patient's own arch dimensions is selected for impressions.
Stock trays must meet various requirements in order to obtain a satisfactory impression. A good stock tray will:
Be rigid enough to endure the force of the dental impression material being positioned in the mouth. Flexure of the tray under force would cause the tray to be distorted, so when the impression tray is removed the impression would be narrower and distorted. This is particularly important for plastic stock trays.
Be of appropriate dimensions to obtain the most accurate impression of the region being reproduced possible. Inadequate extension of an impression tray where the impression material is not supported would likely cause distortions in the impression of the area.
Be loose enough around the dental arch to not touch the soft tissues of the mouth.
Have a sturdy handle to allow the tray to be easily removed from the mouth.
Be easily disinfected, unless for single use. Disposable trays are often preferred due to increasing legislation about infection control in medicine and dentistry.
Stock trays can be dentate or edentulous, and perforated or non-perforated. Perforated trays are used with alginate: the impression material runs through the holes, increasing its bond to the tray when set.
Plastic stock trays
Plastic stock trays are generally injection moulded from a high-impact styrene such as polystyrene. The Triple Tray is a type of plastic tray used for taking impressions for a crown using the double arch or dual bite impression technique. It is used for taking impressions of the tooth preparation and the opposing teeth, using a special impression material, usually an elastomer. The accuracy of the results is, however, subject to the ability of the patient to close their teeth while the tray is present in the mouth. It cannot produce results of the complete arch; therefore its usefulness is limited.
Metal stock trays
Metal stock trays are often preferred over plastic stock trays, due to the lack of rigidity in plastic stock trays. Although expensive to purchase, they have the benefit of being reusable, so can be more cost-efficient in the long-term.
Custom trays
A special tray is an impression tray custom made for an individual patient by a denturist (dental technician), usually from acrylic, such as polymethyl methacrylate, or from shellac. A stock tray is used to make a preliminary impression, from which a model can be cast. Wax is then laid down on the model to create space, and the tray is made over it. The thickness of the wax corresponds to specific spacing: trays can be classed as spaced, where about 3 mm of space is left between the tray and the mucosa for the impression material to occupy, or closely adapted, where less space is left for the impression material. This is determined by the impression material to be used.
Specific features can be given to the special tray to improve the accuracy of the impression, such as a window, which can help to record displaceable tissues such as flabby ridges when used with a less viscous impression material. Special trays can be given perforations if required by drilling holes in the tray.
Customised trays have been less frequently used since the advent of putties. This is due to the putty providing good support for light bodied material, and showing very little dimensional change which provides a fine detailed dental impression. There is now a large increase in the variety of stock trays available.
Tray adhesives
Tray adhesives are used to ensure the retention of the impression material in the impression tray, with or without the presence of perforations, and are based on contact adhesive technology. Maximum retention can be achieved with the presence of both a tray adhesive and perforations in the impression tray. The adhesive is applied to the internal surface of the tray, as well as over the margins to ensure the binding of the outer edge of the impression material to the tray. A suitable amount of adhesive (usually two thin coats) should be applied to the tray to prevent pooling of the adhesive which can weaken the bond between the tray and impression material. The adhesive should be completely dried prior to impression-taking.
Tray adhesives usually come in a screw-top bottle with a brush attached to the lid that can be used for applying the adhesive. Over time, the adhesive can accumulate around the cap, causing the evaporation of the solvent and consequently the thickening of the adhesive. This can reduce the efficacy of the adhesive in binding to the tray.
Types
Various tray adhesives are available, corresponding to the impression material used.
Digital impressions
Digital impressions using extra-oral or intra-oral scanner systems are being adopted in dentistry. A model can be produced from the digital scan by milling or stereolithography.
References
Dentistry
Dental materials | Dental impression | Physics | 4,863 |
22,965,231 | https://en.wikipedia.org/wiki/Supersingular%20prime%20%28algebraic%20number%20theory%29 | In algebraic number theory, a supersingular prime for a given elliptic curve is a prime number with a certain relationship to that curve. If the curve E is defined over the rational numbers, then a prime p is supersingular for E if the reduction of E modulo p is a supersingular elliptic curve over the residue field Fp.
Noam Elkies showed that every elliptic curve over the rational numbers has infinitely many supersingular primes. However, the set of supersingular primes has asymptotic density zero (if E does not have complex multiplication). Lang and Trotter conjectured that the number of supersingular primes less than a bound X is within a constant multiple of \sqrt{X}/\ln X, using heuristics involving the distribution of eigenvalues of the Frobenius endomorphism. As of 2019, this conjecture is open.
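For small bounds this can be made concrete with elementary point counting: for a prime p ≥ 5 of good reduction, the Hasse bound |a_p| ≤ 2√p < p implies that p is supersingular for E exactly when the trace of Frobenius a_p = p + 1 − #E(F_p) equals zero. The following Python sketch (naive O(p) counting, so suitable only for small bounds; the function names are illustrative, not from any particular library) lists such primes for a curve y² = x³ + ax + b:

```python
def count_points(a, b, p):
    """Count points on y^2 = x^3 + a*x + b over F_p, including infinity."""
    def chi(u):  # Legendre symbol via Euler's criterion
        if u % p == 0:
            return 0
        return 1 if pow(u, (p - 1) // 2, p) == 1 else -1
    # Each x gives 1 + chi(x^3 + a*x + b) affine points; +1 for infinity.
    return 1 + sum(1 + chi(x * x * x + a * x + b) for x in range(p))

def supersingular_primes(a, b, bound):
    """Primes 5 <= p < bound of good reduction with a_p = 0."""
    primes = [p for p in range(5, bound)
              if all(p % q for q in range(2, int(p ** 0.5) + 1))]
    result = []
    for p in primes:
        if (4 * a ** 3 + 27 * b ** 2) % p == 0:  # bad reduction: skip
            continue
        if p + 1 - count_points(a, b, p) == 0:   # a_p = 0
            result.append(p)
    return result

# y^2 = x^3 + x has complex multiplication; every prime p = 3 (mod 4)
# is supersingular for it:
print(supersingular_primes(1, 0, 100))  # [7, 11, 19, 23, 31, 43, ...]
```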
More generally, if K is any global field—i.e., a finite extension either of Q or of Fp(t)—and A is an abelian variety defined over K, then a supersingular prime for A is a finite place v of K such that the reduction of A modulo v is a supersingular abelian variety.
See also
Supersingular prime (moonshine theory)
References
Classes of prime numbers
Algebraic number theory
Unsolved problems in number theory | Supersingular prime (algebraic number theory) | Mathematics | 273 |
3,209,246 | https://en.wikipedia.org/wiki/Abraham%E2%80%93Lorentz%20force | In the physics of electromagnetism, the Abraham–Lorentz force (also known as the Lorentz–Abraham force) is the reaction force on an accelerating charged particle caused by the particle emitting electromagnetic radiation by self-interaction. It is also called the radiation reaction force, the radiation damping force, or the self-force. It is named after the physicists Max Abraham and Hendrik Lorentz.
The formula, although predating the theory of special relativity, was initially calculated for non-relativistic velocity approximations and was later extended to arbitrary velocities by Max Abraham; it was shown to be physically consistent by George Adolphus Schott. The non-relativistic form is called the Lorentz self-force, while the relativistic version is called the Lorentz–Dirac force; collectively they are known as the Abraham–Lorentz–Dirac force. The equations are in the domain of classical physics, not quantum physics, and therefore may not be valid at distances of roughly the Compton wavelength or below. There are, however, two analogs of the formula that are both fully quantum and relativistic: one is called the "Abraham–Lorentz–Dirac–Langevin equation", the other is the self-force on a moving mirror.
The force is proportional to the square of the object's charge, multiplied by the jerk that it is experiencing. (Jerk is the rate of change of acceleration.) The force points in the direction of the jerk. For example, in a cyclotron, where the jerk points opposite to the velocity, the radiation reaction is directed opposite to the velocity of the particle, providing a braking action. The Abraham–Lorentz force is the source of the radiation resistance of a radio antenna radiating radio waves.
There are pathological solutions of the Abraham–Lorentz–Dirac equation in which a particle accelerates in advance of the application of a force, so-called pre-acceleration solutions. Since this would represent an effect occurring before its cause (retrocausality), some theories have speculated that the equation allows signals to travel backward in time, thus challenging the physical principle of causality. One resolution of this problem was discussed by Arthur D. Yaghjian and was further discussed by Fritz Rohrlich and Rodrigo Medina. Furthermore, some authors argue that a radiation reaction force is unnecessary, introducing a corresponding stress-energy tensor that naturally conserves energy and momentum in Minkowski space and other suitable spacetimes.
Definition and description
The Lorentz self-force, derived for the non-relativistic velocity approximation v ≪ c, is given in SI units by:

\mathbf{F}_\mathrm{rad} = \frac{\mu_0 q^2}{6 \pi c} \dot{\mathbf{a}} = \frac{q^2}{6 \pi \varepsilon_0 c^3} \dot{\mathbf{a}}

or in Gaussian units by

\mathbf{F}_\mathrm{rad} = \frac{2}{3} \frac{q^2}{c^3} \dot{\mathbf{a}}

where \mathbf{F}_\mathrm{rad} is the force, \dot{\mathbf{a}} is the derivative of acceleration, or the third derivative of displacement, also called jerk, μ0 is the magnetic constant, ε0 is the electric constant, c is the speed of light in free space, and q is the electric charge of the particle.
Physically, an accelerating charge emits radiation (according to the Larmor formula), which carries momentum away from the charge. Since momentum is conserved, the charge is pushed in the direction opposite the direction of the emitted radiation. In fact the formula above for radiation force can be derived from the Larmor formula, as shown below.
The Abraham–Lorentz force, a generalization of Lorentz self-force for arbitrary velocities is given by:
where \gamma is the Lorentz factor associated with \mathbf{v}, the velocity of the particle. The formula is consistent with special relativity and reduces to Lorentz's self-force expression in the low-velocity limit.
The covariant form of the radiation reaction deduced by Dirac, valid for an arbitrary shape of elementary charge, is found to be:
History
The first calculation of electromagnetic radiation energy due to current was given by George Francis FitzGerald in 1883, in which radiation resistance appears. However, dipole antenna experiments by Heinrich Hertz made a bigger impact and gathered commentary by Poincaré on the amortissement or damping of the oscillator due to the emission of radiation. Qualitative discussions surrounding damping effects of radiation emitted by accelerating charges was sparked by Henry Poincaré in 1891. In 1892, Hendrik Lorentz derived the self-interaction force of charges for low velocities but did not relate it to radiation losses. Suggestion of a relationship between radiation energy loss and self-force was first made by Max Planck. Planck's concept of the damping force, which did not assume any particular shape for elementary charged particles, was applied by Max Abraham to find the radiation resistance of an antenna in 1898, which remains the most practical application of the phenomenon.
In the early 1900s, Abraham formulated a generalization of the Lorentz self-force to arbitrary velocities, the physical consistency of which was later shown by George Adolphus Schott. Schott was able to derive the Abraham equation and attributed "acceleration energy" to be the source of energy of the electromagnetic radiation. Originally submitted as an essay for the 1908 Adams Prize, he won the competition and had the essay published as a book in 1912. The relationship between self-force and radiation reaction became well-established at this point. Wolfgang Pauli first obtained the covariant form of the radiation reaction and in 1938, Paul Dirac found that the equation of motion of charged particles, without assuming the shape of the particle, contained Abraham's formula within reasonable approximations. The equations derived by Dirac are considered exact within the limits of classical theory.
Background
In classical electrodynamics, problems are typically divided into two classes:
Problems in which the charge and current sources of fields are specified and the fields are calculated, and
The reverse situation, problems in which the fields are specified and the motion of particles are calculated.
In some fields of physics, such as plasma physics and the calculation of transport coefficients (conductivity, diffusivity, etc.), the fields generated by the sources and the motion of the sources are solved self-consistently. In such cases, however, the motion of a selected source is calculated in response to fields generated by all other sources. Rarely is the motion of a particle (source) due to the fields generated by that same particle calculated. The reason for this is twofold:
Neglect of the "self-fields" usually leads to answers that are accurate enough for many applications, and
Inclusion of self-fields leads to problems in physics such as renormalization, some of which are still unsolved, that relate to the very nature of matter and energy.
These conceptual problems created by self-fields are highlighted in a standard graduate text. [Jackson]
The difficulties presented by this problem touch one of the most fundamental aspects of physics, the nature of the elementary particle. Although partial solutions, workable within limited areas, can be given, the basic problem remains unsolved. One might hope that the transition from classical to quantum-mechanical treatments would remove the difficulties. While there is still hope that this may eventually occur, the present quantum-mechanical discussions are beset with even more elaborate troubles than the classical ones. It is one of the triumphs of comparatively recent years (~ 1948–1950) that the concepts of Lorentz covariance and gauge invariance were exploited sufficiently cleverly to circumvent these difficulties in quantum electrodynamics and so allow the calculation of very small radiative effects to extremely high precision, in full agreement with experiment. From a fundamental point of view, however, the difficulties remain.
The Abraham–Lorentz force is the result of the most fundamental calculation of the effect of self-generated fields. It arises from the observation that accelerating charges emit radiation. The Abraham–Lorentz force is the average force that an accelerating charged particle feels in the recoil from the emission of radiation. The introduction of quantum effects leads one to quantum electrodynamics. The self-fields in quantum electrodynamics generate a finite number of infinities in the calculations that can be removed by the process of renormalization. This has led to a theory that is able to make the most accurate predictions that humans have made to date. (See precision tests of QED.) The renormalization process fails, however, when applied to the gravitational force. The infinities in that case are infinite in number, which causes the failure of renormalization. Therefore, general relativity has an unsolved self-field problem. String theory and loop quantum gravity are current attempts to resolve this problem, formally called the problem of radiation reaction or the problem of self-force.
Derivation
The simplest derivation of the self-force is found for periodic motion from the Larmor formula for the power radiated by a point charge that moves with velocity much lower than the speed of light:

P = \frac{q^2 a^2}{6 \pi \varepsilon_0 c^3}.

If we assume the motion of the charged particle is periodic, then the average work done on the particle by the Abraham–Lorentz force is the negative of the Larmor power integrated over one period from t_1 to t_2:

\int_{t_1}^{t_2} \mathbf{F}_\mathrm{rad} \cdot \mathbf{v} \, dt = -\int_{t_1}^{t_2} P \, dt = -\int_{t_1}^{t_2} \frac{q^2}{6 \pi \varepsilon_0 c^3} \, \dot{\mathbf{v}} \cdot \dot{\mathbf{v}} \, dt.

The above expression can be integrated by parts. Since the motion is periodic, the boundary term in the integration by parts disappears:

\int_{t_1}^{t_2} \mathbf{F}_\mathrm{rad} \cdot \mathbf{v} \, dt = -\frac{q^2}{6 \pi \varepsilon_0 c^3} \, \dot{\mathbf{v}} \cdot \mathbf{v} \Big|_{t_1}^{t_2} + \int_{t_1}^{t_2} \frac{q^2}{6 \pi \varepsilon_0 c^3} \, \ddot{\mathbf{v}} \cdot \mathbf{v} \, dt = \int_{t_1}^{t_2} \frac{q^2}{6 \pi \varepsilon_0 c^3} \, \ddot{\mathbf{v}} \cdot \mathbf{v} \, dt.

Clearly, we can identify the Lorentz self-force equation, applicable to slowly moving particles, as:

\mathbf{F}_\mathrm{rad} = \frac{q^2}{6 \pi \varepsilon_0 c^3} \, \ddot{\mathbf{v}} = \frac{\mu_0 q^2}{6 \pi c} \, \dot{\mathbf{a}}.
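The integration-by-parts identity above can be checked numerically for a concrete periodic motion. The following sketch (illustrative constants only; k stands for the prefactor q²/(6πε₀c³)) verifies for circular motion that the work done by the self-force k·(da/dt) over one period equals minus the integrated Larmor power:

```python
import numpy as np

# Circular motion x(t) = (R cos wt, R sin wt) over one full period.
k, R, w = 1.0, 2.0, 3.0                      # arbitrary illustrative values
t = np.linspace(0.0, 2 * np.pi / w, 100001)

v    = np.stack([-R * w * np.sin(w * t),      R * w * np.cos(w * t)])
a    = np.stack([-R * w**2 * np.cos(w * t),  -R * w**2 * np.sin(w * t)])
jerk = np.stack([ R * w**3 * np.sin(w * t),  -R * w**3 * np.cos(w * t)])

work     = np.trapz(np.sum(k * jerk * v, axis=0), t)  # work by F_rad = k*jerk
radiated = np.trapz(k * np.sum(a * a, axis=0), t)     # integrated Larmor power
print(work, -radiated)   # the two agree to numerical precision
```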
A more rigorous derivation, which does not require periodic motion, was found using an effective field theory formulation.
A generalized equation for arbitrary velocities was formulated by Max Abraham, which is found to be consistent with special relativity. An alternative derivation, making use of theory of relativity which was well established at that time, was found by Dirac without any assumption of the shape of the charged particle.
Signals from the future
Below is an illustration of how a classical analysis can lead to surprising results. The classical theory can be seen to challenge standard pictures of causality, thus signaling either a breakdown or a need for extension of the theory. In this case the extension is to quantum mechanics and its relativistic counterpart quantum field theory. See the quote from Rohrlich in the introduction concerning "the importance of obeying the validity limits of a physical theory".
For a particle in an external force \mathbf{F}_\mathrm{ext}, we have

m \dot{\mathbf{v}} = \mathbf{F}_\mathrm{rad} + \mathbf{F}_\mathrm{ext} = m t_0 \ddot{\mathbf{v}} + \mathbf{F}_\mathrm{ext},

where

t_0 = \frac{\mu_0 q^2}{6 \pi m c}.

This equation can be integrated once to obtain

m \dot{\mathbf{v}}(t) = \frac{1}{t_0} \int_t^{\infty} \exp\left(-\frac{t'-t}{t_0}\right) \mathbf{F}_\mathrm{ext}(t') \, dt'.

The integral extends from the present to infinitely far in the future. Thus future values of the force affect the acceleration of the particle in the present. The future values are weighted by the factor

\exp\left(-\frac{t'-t}{t_0}\right),

which falls off rapidly for times greater than t_0 in the future. Therefore, signals from an interval approximately t_0 into the future affect the acceleration in the present. For an electron, this time is approximately 10^{-24} seconds, which is the time it takes for a light wave to travel across the "size" of an electron, the classical electron radius. One way to define this "size" is as follows: it is (up to some constant factor) the distance L such that two electrons placed at rest at a distance L apart and allowed to fly apart would have sufficient energy to reach half the speed of light. In other words, it forms the length (or time, or energy) scale where something as light as an electron would be fully relativistic. It is worth noting that this expression does not involve the Planck constant at all, so although it indicates something is wrong at this length scale, it does not directly relate to quantum uncertainty, or to the frequency–energy relation of a photon. Although it is common in quantum mechanics to treat \hbar \to 0 as a "classical limit", some speculate that even the classical theory needs renormalization, no matter how the Planck constant would be fixed.
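The integral formula above makes pre-acceleration easy to exhibit numerically. In the following sketch (natural units with m = F₀ = t₀ = 1, an assumption for illustration; the step force and function names are likewise hypothetical), the computed acceleration is already nonzero for t < 0, growing as e^{t/t₀}:

```python
import numpy as np

m, F0, t0 = 1.0, 1.0, 1.0                # natural units, illustrative only

def F_ext(tp):
    return np.where(tp > 0.0, F0, 0.0)   # step force switched on at t = 0

def acceleration(t, horizon=50.0, n=200001):
    # a(t) = (1 / (m * t0)) * integral_t^inf exp(-(t' - t)/t0) F_ext(t') dt'
    tp = np.linspace(t, t + horizon, n)
    integrand = np.exp(-(tp - t) / t0) * F_ext(tp)
    return np.trapz(integrand, tp) / (m * t0)

for t in (-3.0, -1.0, -0.1, 0.5):
    print(f"a({t:+.1f}) = {acceleration(t):.4f}")
# For t < 0 this reproduces (F0/m) * exp(t/t0): the charge starts
# accelerating before the force is applied (pre-acceleration).
```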
Abraham–Lorentz–Dirac force
To find the relativistic generalization, Dirac renormalized the mass in the equation of motion with the Abraham–Lorentz force in 1938. This renormalized equation of motion is called the Abraham–Lorentz–Dirac equation of motion.
Definition
The expression derived by Dirac is given in signature (− + + +) by
With Liénard's relativistic generalization of Larmor's formula in the co-moving frame,
one can show this to be a valid force by manipulating the time average equation for power:
Paradoxes
Pre-acceleration
Similar to the non-relativistic case, there are pathological solutions using the Abraham–Lorentz–Dirac equation that anticipate a change in the external force and according to which the particle accelerates in advance of the application of a force, so-called preacceleration solutions. One resolution of this problem was discussed by Yaghjian, and is further discussed by Rohrlich and Medina.
Runaway solutions
Runaway solutions are solutions to the ALD equation in which the acceleration of a particle increases exponentially over time, even in the absence of an external force; they are considered unphysical.
Hyperbolic motion
The ALD force is known to vanish for constant acceleration, i.e., hyperbolic motion, in a Minkowski spacetime diagram. Whether electromagnetic radiation exists in such conditions was a matter of debate until Fritz Rohrlich resolved the problem by showing that hyperbolically moving charges do emit radiation. Subsequently, the issue has been discussed in the context of energy conservation and the equivalence principle, and is classically resolved by considering the "acceleration energy" or Schott energy.
Self-interactions
However the antidamping mechanism resulting from the Abraham–Lorentz force can be compensated by other nonlinear terms, which are frequently disregarded in the expansions of the retarded Liénard–Wiechert potential.
Landau–Lifshitz radiation damping force
The Abraham–Lorentz–Dirac force leads to some pathological solutions. In order to avoid this, Lev Landau and Evgeny Lifshitz came up with the following formula for the radiation damping force, which is valid when the radiation damping force is small compared with the Lorentz force in some frame of reference (assuming it exists),
so that the equation of motion of the charge in an external field can be written as
Here u^\mu is the four-velocity of the particle, \gamma is the Lorentz factor and \mathbf{v} is the three-dimensional velocity vector. The three-dimensional Landau–Lifshitz radiation damping force can be written as
where d/dt is the total derivative.
Experimental observations
While the Abraham–Lorentz force is largely neglected for many experimental considerations, it gains importance for plasmonic excitations in larger nanoparticles due to large local field enhancements. Radiation damping acts as a limiting factor for the plasmonic excitations in surface-enhanced Raman scattering. The damping force was shown to broaden surface plasmon resonances in gold nanoparticles, nanorods and clusters.
The effects of radiation damping on nuclear magnetic resonance were also observed by Nicolaas Bloembergen and Robert Pound, who reported its dominance over spin–spin and spin–lattice relaxation mechanisms for certain cases.
The Abraham–Lorentz force has been observed in the semiclassical regime in experiments which involve the scattering of a relativistic beam of electrons with a high intensity laser. In the experiments, a supersonic jet of helium gas is intercepted by a high-intensity (1018–1020 W/cm2) laser. The laser ionizes the helium gas and accelerates the electrons via what is known as the “laser-wakefield” effect. A second high-intensity laser beam is then propagated counter to this accelerated electron beam. In a small number of cases, inverse-Compton scattering occurs between the photons and the electron beam, and the spectra of the scattered electrons and photons are measured. The photon spectra are then compared with spectra calculated from Monte Carlo simulations that use either the QED or classical LL equations of motion.
Collective effects
The effects of radiation reaction are often considered within the framework of single-particle dynamics. However, interesting phenomena arise when a collection of charged particles is subjected to strong electromagnetic fields, such as in a plasma. In such scenarios, the collective behavior of the plasma can significantly modify its properties due to radiation reaction effects.
Theoretical studies have shown that in environments with strong magnetic fields, like those found around pulsars and magnetars, radiation reaction cooling can alter the collective dynamics of the plasma. This modification can lead to instabilities within the plasma. Specifically, in the high magnetic fields typical of these astrophysical objects, the momentum distribution of particles is bunched and becomes anisotropic due to radiation reaction forces, potentially driving plasma instabilities and affecting overall plasma behavior. Among these instabilities, the firehose instability can arise due to the anisotropic pressure, and electron cyclotron maser due to population inversion in the rings.
See also
Lorentz force
Cyclotron radiation
Synchrotron radiation
Electromagnetic mass
Radiation resistance
Radiation damping
Wheeler–Feynman absorber theory
Magnetic radiation reaction force
References
Further reading
See sections 11.2.2 and 11.2.3
Donald H. Menzel (1960) Fundamental Formulas of Physics, Dover Publications Inc., , vol. 1, p. 345.
Stephen Parrott (1987) Relativistic Electrodynamics and Differential Geometry, § 4.3 Radiation reaction and the Lorentz–Dirac equation, pages 136–45, and § 5.5 Peculiar solutions of the Lorentz–Dirac equation, pp. 195–204, Springer-Verlag .
External links
MathPages – Does A Uniformly Accelerating Charge Radiate?
Feynman: The Development of the Space-Time View of Quantum Electrodynamics
EC. del Río: Radiation of an accelerated charge
Electrodynamics
Electromagnetic radiation
Radiation
Hendrik Lorentz | Abraham–Lorentz force | Physics,Chemistry,Mathematics | 3,569 |
595,605 | https://en.wikipedia.org/wiki/Nuclear%20Energy%20Agency | The Nuclear Energy Agency (NEA) is an intergovernmental agency that is organized under the Organisation for Economic Co-operation and Development (OECD). Originally formed on 1 February 1958 with the name European Nuclear Energy Agency (ENEA)—the United States participated as an Associate Member—the name was changed on 20 April 1972 to its current name after Japan became a member.
The mission of the NEA is to "assist its member countries in maintaining and further developing, through international co-operation, the scientific, technological and legal bases required for the safe, environmentally friendly and economical use of nuclear energy for peaceful purposes."
History
The creation of the European Nuclear Energy Agency (ENEA) was agreed by the OEEC Council of Ministers on December 20, 1957.
Members
NEA currently consists of 33 countries from Europe, North America and the Asia-Pacific region. In 2021, Bulgaria acceded to the NEA as its most recent member. In 2022, following Russia's invasion of Ukraine, Russia's membership was suspended.
Together they account for approximately 85% of the world's installed nuclear capacity. Nuclear power accounts for almost a quarter of the electricity produced in NEA Member countries. The NEA works closely with the International Atomic Energy Agency (IAEA) in Vienna and with the European Commission in Brussels.
Within the OECD, there is close co-ordination with the International Energy Agency and the Environment Directorate, as well as contacts with other directorates, as appropriate.
Areas of work
Nuclear safety and regulation
Nuclear energy development
Radioactive waste management
Radiation protection and public health
Nuclear law and liability
Nuclear science
Data bank
Information and communication
European Nuclear Energy Tribunal
Structure
Since 1 September 2014, the Director-General of the NEA has been William D. Magwood, IV, who replaced Luis E. Echávarri in the post. The NEA Secretariat serves seven specialised standing technical committees under the leadership of the Steering Committee for Nuclear Energy—the governing body of the NEA—which reports directly to the OECD Council.
The standing technical committees, representing each of the seven major areas of the Agency's programme, are composed of member country experts who are both contributors to the programme of work and beneficiaries of its results. The approach is highly cost-efficient as it enables the Agency to pursue an ambitious programme with a relatively small staff that co-ordinates the work. The substantive value of the standing technical committees arises from the numerous important functions they perform, including: providing a forum for in-depth exchanges of technical and programmatic information; stimulating development of useful information by initiating and carrying out co-operation/research on key problems; developing common positions, including "consensus opinions", on technical and policy issues; identifying areas where further work is needed and ensuring that NEA activities respond to real needs; organising joint projects to enable interested countries to carry out research on particular issues on a cost-sharing basis.
NEA Annual Report
The NEA Annual Report, issued in English and French, is a definitive guide to the agency's yearly undertakings, major publications, and the evolving global nuclear energy sector. It aims to equip governments, stakeholders, and industry specialists with in-depth analysis and foresight on nuclear technology developments.
The 2022 edition highlights that there were 423 nuclear reactors in operation worldwide, providing a total of 379 GWe. NEA member countries manage 312 of these reactors, constituting roughly 80% of the global capacity. Additionally, the year witnessed the grid connection of six new reactors, contributing 7,360 MWe, and the construction of 57 reactors, reflecting a dynamic and expanding nuclear industry.
See also
International Energy Agency
International Atomic Energy Agency
European Organization for Nuclear Research
References
External links
– OECD Nuclear Energy Agency
International organizations based in France
International nuclear energy organizations
Nuclear organizations
Radiation protection organizations
OECD | Nuclear Energy Agency | Engineering | 782 |
52,802,478 | https://en.wikipedia.org/wiki/Numerical%20algebraic%20geometry | Numerical algebraic geometry is a field of computational mathematics, particularly computational algebraic geometry, which uses methods from numerical analysis to study and manipulate the solutions of systems of polynomial equations.
Homotopy continuation
The primary computational method used in numerical algebraic geometry is homotopy continuation, in which a homotopy is formed between two polynomial systems, and the isolated solutions (points) of one are continued to the other. This is a specialization of the more general method of numerical continuation.
Let x represent the variables of the system. By abuse of notation, and to facilitate the spectrum of ambient spaces over which one can solve the system, we do not use vector notation for x. Similarly for the polynomial systems F and G.
Current canonical notation calls the start system G, and the target system, i.e., the system to solve, F. A very common homotopy, the straight-line homotopy, between F and G is

H(x,t) = (1-t) F(x) + t G(x).

In the above homotopy, one starts the path variable at t = 1 and continues toward t = 0. Another common choice is to run from t = 0 to t = 1. In principle, the choice is completely arbitrary. In practice, regarding endgame methods for computing singular solutions using homotopy continuation, the target time being t = 0 can significantly ease analysis, so this perspective is here taken.
Regardless of the choice of start and target times, H ought to be formulated such that H(x,1) = G(x) and H(x,0) = F(x).
One has a choice in the start system G, including
Roots of unity
Total degree
Polyhedral
Multi-homogeneous
and beyond these, specific start systems that closely mirror the structure of F may be formed for particular systems. The choice of start system impacts the computational time it takes to solve F, in that those that are easy to formulate (such as total degree) tend to have higher numbers of paths to track, and those that take significant effort (such as the polyhedral method) are much sharper. There is currently no good way to predict which will lead to the quickest time to solve.
Actual continuation is typically done using predictor–corrector methods, with additional features as implemented. Predicting is done using a standard ODE predictor method, such as Runge–Kutta, and correction often uses Newton–Raphson iteration.
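As a concrete illustration of the predictor–corrector loop, here is a minimal univariate sketch in Python (not a production tracker such as those in Bertini or HomotopyContinuation.jl; the target polynomial, step counts and the constant gamma are arbitrary assumptions). It tracks the roots of the total-degree start system g(x) = x^d − 1 along a straight-line homotopy, with a generic complex constant included in the usual "gamma trick" to avoid singularities along real paths:

```python
import numpy as np

f = np.poly1d([1.0, 0.0, -2.0, 3.0])              # target: x^3 - 2x + 3
d = f.order
g = np.poly1d([1.0] + [0.0] * (d - 1) + [-1.0])   # start: x^d - 1
df, dg = f.deriv(), g.deriv()
gamma = np.exp(1j * 0.7)                          # generic complex constant

def H(x, t):  return (1 - t) * f(x) + gamma * t * g(x)
def Hx(x, t): return (1 - t) * df(x) + gamma * t * dg(x)   # dH/dx
def Ht(x, t): return gamma * g(x) - f(x)                   # dH/dt

def track(x, steps=500, corrections=5):
    # Follow H(x(t), t) = 0 from t = 1 down to t = 0 using the
    # Davidenko relation Hx * dx + Ht * dt = 0.
    ts = np.linspace(1.0, 0.0, steps + 1)
    for t_old, t_new in zip(ts[:-1], ts[1:]):
        x = x - (t_new - t_old) * Ht(x, t_old) / Hx(x, t_old)  # Euler predictor
        for _ in range(corrections):                           # Newton corrector
            x = x - H(x, t_new) / Hx(x, t_new)
    return x

starts = np.exp(2j * np.pi * np.arange(d) / d)    # roots of unity solve g
roots = np.sort_complex(np.array([track(x) for x in starts]))
print(roots)
print(np.sort_complex(np.roots(f)))               # agreement check
```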
Because F and G are polynomial, homotopy continuation in this context is theoretically guaranteed to compute all solutions of F, due to Bertini's theorem. However, this guarantee is not always achieved in practice, because of issues arising from limitations of the modern computer, most notably finite precision. That is, despite the strength of the probability-1 argument underlying this theory, without using a priori certified tracking methods, some paths may fail to track perfectly for various reasons.
Witness set
A witness set is a data structure used to describe algebraic varieties. The witness set for an affine variety X that is equidimensional consists of three pieces of information. The first piece of information is a system of equations F. These equations define the algebraic variety X that is being studied. The second piece of information is a linear space L. The dimension of L is the codimension of X, and L is chosen to intersect X transversely. The third piece of information is the list of points in the intersection X ∩ L. This intersection has finitely many points, and the number of points is the degree of the algebraic variety X. Thus, witness sets encode the answer to the first two questions one asks about an algebraic variety: What is the dimension, and what is the degree? Witness sets also allow one to perform a numerical irreducible decomposition, component membership tests, and component sampling. This makes witness sets a good description of an algebraic variety.
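As a toy numerical illustration (a numpy sketch with an assumed random complex slicing line; real software such as Bertini automates this), consider the plane curve X = V(x² + y² − 1). It has dimension 1 and degree 2, so a witness set consists of the defining equation, one generic line L, and the two points of X ∩ L:

```python
import numpy as np

rng = np.random.default_rng(0)
# Generic complex line L: a*x + b*y = c (codimension 1, like X).
a, b, c = rng.normal(size=3) + 1j * rng.normal(size=3)

# Substitute y = (c - a*x)/b into x^2 + y^2 - 1 = 0, which gives
#   (a^2 + b^2) x^2 - 2*a*c x + (c^2 - b^2) = 0.
xs = np.roots([a**2 + b**2, -2 * a * c, c**2 - b**2])
witness_points = [(x, (c - a * x) / b) for x in xs]

for x, y in witness_points:             # two points, matching deg X = 2
    print(x, y, abs(x**2 + y**2 - 1))   # residual ~ 0
```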
Certification
Solutions to polynomial systems computed using numerical algebraic geometric methods can be certified, meaning that the approximate solution is "correct". This can be achieved in several ways, either a priori using a certified tracker, or a posteriori by showing that the point is, say, in the basin of convergence for Newton's method.
Software
Several software packages implement portions of the theoretical body of numerical algebraic geometry. These include, in alphabetic order:
alphaCertified
Bertini
Hom4PS
HomotopyContinuation.jl
Macaulay2 (core implementation of homotopy tracking and NumericalAlgebraicGeometry package)
MiNuS: Optimized C++ framework for fast homotopy continuation. Fastest solver for certain 100-320 degree square problems to date.
PHCPack
References
External links
Bertini home page
Hom4PS-3
HomotopyContinuation.jl
MiNuS fast C++ framework
Algebraic geometry
Computational geometry
Computational fields of study | Numerical algebraic geometry | Mathematics,Technology | 909 |
29,305,046 | https://en.wikipedia.org/wiki/Center%20for%20Mathematics%20and%20Theoretical%20Physics | The Center for Mathematics and Theoretical Physics (CMTP) is an Italian institution supporting research in mathematics and theoretical physics. The CMTP was founded on November 17, 2009 as an interdepartmental research center of the three Roman universities: Sapienza, Tor Vergata and Roma Tre. The CMTP's director is Roberto Longo, from the Mathematics Department of Tor Vergata University, and its scientific secretaries are Alberto De Sole, from Sapienza University, and Alessandro Giuliani, from Roma Tre University.
The center does not have a permanent location; however, it is temporarily hosted in Tor Vergata's Mathematics Department.
The aim of the CMTP, according to its Web site, is to "take advantage of the high quality and wide spectrum of research in mathematical physics presently carried on in Roma [sic] in order to promote cross fertilization of mathematics and theoretical physics at the highest level by fostering creative interactions of leading experts from both subjects."
Activities of the center
The CMTP promotes scientific research by organizing workshops, congresses, and periods of thematic research; sending invitations to scientists; and assigning study grants. The CMTP's goal is to attract foreign scientists of international prestige and young talented foreigners to Rome by offering a natural place for scientific education and a base of cultural interchange with other scientific centers abroad.
The opening activity of the center was to present the Seminal Interactions between Mathematics and Physics conference hosted by the Accademia Nazionale dei Lincei in Rome. The invited speakers included, among others, four Fields medalists (Alain Connes, Andrei Okounkov, Stanislav Smirnov and Cédric Villani) and an Abel Prize winner, Isadore Singer.
As part of the conference, the center organized two evening public lectures for the general audience, held by Ludvig Faddeev and Singer.
Among its activities, the center runs the Levi Civita colloquia.
References
See also
Institute for Theoretical Physics (disambiguation)
Center for Theoretical Physics (disambiguation)
External links
Home page: http://cmtp.uniroma2.it/index.php
http://cmtp.uniroma2.it/documents/100804Sole24ore.pdf
http://cmtp.uniroma2.it/documents/100917Messaggero.pdf
https://web.archive.org/web/20110722041533/http://www.lswn.it/en/conferences/2010/seminal_interactions_between_mathematics_and_physics
http://www.adnkronos.com/IGN/News/Cronaca/Ricerca-con-il-Cmtp-e-nato-a-Roma-un-nuovo-gruppo-di-ragazzi-di-Via-Panisperna_985806198.html
http://cmtp.uniroma2.it/documents/100921DNews.pdf
http://matematica.unibocconi.it/news/apre-roma-un-nuovo-centro-la-ricerca-matematica-e-fisica-teorica
http://www3.lastampa.it/scienza/sezioni/news/articolo/lstp/333282/
http://cmtp.uniroma2.it/documents/100922ManifestoB.pdf
http://cmtp.uniroma2.it/documents/100922CorrieredellaSera.pdf
http://roma.corriere.it/roma/notizie/tempo_libero/10_settembre_22/casa-jazz-1703811852126.shtml
https://web.archive.org/web/20101114164434/http://news.sciencemag.org/scienceinsider/2010/09/romes-mathematical-physicists-in.html
http://www.scienzainrete.it/contenuto/articolo/il-centro-dove-matematica-e-fisica-teorica-si-incontrano
http://cmtp.uniroma2.it/documents/PublicServiceReview.pdf
Roberto Longo
Alberto De Sole
Alessandro Giuliani
Seminal Interactions between Mathematics and Physics
Levi Civita colloquia
Mathematical institutes
Higher education in Italy
Physics research institutes
Educational institutions established in 2009
University of Rome Tor Vergata
Theoretical physics institutes
2009 establishments in Italy | Center for Mathematics and Theoretical Physics | Physics | 989 |
47,835,006 | https://en.wikipedia.org/wiki/Veniam | Veniam was a technology startup focused on building large WiFi mesh networks using moving vehicles like city buses or taxis.
The company is headquartered in Mountain View, California and was founded in 2012. The company received US$4.9 million in 2014 in a funding round from True Ventures, USV and Cane Investments. Veniam's technology is being used in Porto's city buses, with about 230,000 users and onboard units (OBUs) installed on over 600 buses, taxis and garbage trucks. The company aimed to equip many moving things with wireless hotspots, creating a mesh that could be used to support sensors and make the city smarter. Each vehicle is equipped with a NetRider, a multi-network unit with Wi-Fi (802.11p), DSRC, GPS and 4G/LTE connectivity. Veniam was acquired by Nexar in 2022.
Company
Veniam was founded by João Barros, its CEO, Roy Russell, former Zipcar CTO, Susana Sargento, a professor at the University of Aveiro, and Robin Chase, former CEO of Zipcar and Buzzcar.
Products
Veniam Platform
Awards
2018 named Best Connected Product/Service at TU Automotive
2017 ScaleUp Portugal Award Tech Winner
2017 Telecom Council Spiffy Winner - San Andreas Award for the Most Disruptive Technology
2017 CNBC 50 Disruptors - list of companies whose "innovations are changing the world"
Winner of TU Automotive Best Auto Mobility Product/Service 2016
2016 CNBC 50 Disruptors - list of companies whose "innovations are revolutionizing the business landscape"
Winner of the “Best New Venture” at the WBA 2015 Wi-Fi Industry Awards
Winner of WBA Scale Up Award 2015 for the outstanding innovation and solutions brought to market
Winner of the 2015 Red Herring Top 100 Award
Winner of the NOS Innovation Award 2015
Winner of the Portuguese Venture Competition “Building Global Innovators” (ISCTE–IUL; MIT Portugal)
Most Likely to Succeed Idea within the Cable Industry at CableLabs’ Innovation Showcase
Named 2015 Gartner “Cool Vendor” in Smart Cities
Named FierceWireless “Fierce 15” Top Wireless Company List of 2015
Investors
Cane Investments
Cisco Investments
Liberty Global
Orange Digital Ventures
True Ventures
USV
Verizon Ventures
Yamaha Motor Ventures And Laboratory Silicon Valley
Institutional Partners
Carnegie Mellon Portugal
European Union
Instituto de Telecomunicações
ISCTE – University Institute of Lisbon
MIT Portugal
O NOVO NORTE
Quadro de Referência Estratégico Nacional
University of Aveiro
University of Porto
University Technology Entreprise Network Portugal
References
American companies established in 2012
Companies based in Mountain View, California
Internet of things companies
Mesh networking | Veniam | Technology | 549 |
9,252,226 | https://en.wikipedia.org/wiki/Pitchers%20%28ceramic%20material%29 | Pitchers are pottery that has been broken in the course of manufacture. Biscuit (unglazed) pitchers can be crushed, ground and re-used, either as a low-percentage addition to the virgin raw materials on the same factory, or elsewhere as grog. Because of the adhering glaze, glost pitchers find less use. The crushed material can also be used in other industries as an inert filler.
Archaeologists call ancient pitchers sherds or shards; inscribed pieces are known as ostraca (singular: ostracon).
References
Ceramic materials | Pitchers (ceramic material) | Physics,Engineering | 111 |