Pitch shifting is a sound recording technique in which the original pitch of a sound is raised or lowered. Effects units that raise or lower pitch by a pre-designated musical interval ( transposition ) are known as pitch shifters . The simplest methods are used to increase pitch and reduce durations or, conversely, reduce pitch and increase duration. This can be done by replaying a sound waveform at a different speed than it was recorded. It could be accomplished on an early reel-to-reel tape recorder by changing the diameter of the capstan or using a different motor. As for vinyl records, placing a finger on the turntable to give friction will slow it, while giving it a "spin" can advance it. As technologies improved, motor speed and pitch control could be achieved electronically by servo drive system circuits. [ 1 ] A pitch shifter is a sound effects unit that raises or lowers the pitch of an audio signal by a preset interval . For example, a pitch shifter set to increase the pitch by a fourth will raise each note three diatonic intervals above the notes actually played. Simple pitch shifters raise or lower the pitch by one or two octaves , while more sophisticated devices offer a range of interval alterations. Pitch shifters are included in most audio processors today. A harmonizer is a type of pitch shifter that combines the pitch-shifted signal with the original to create a two or more note harmony. The Eventide H910 Harmonizer , [ 2 ] released in 1975, was one of the first commercially available pitch-shifters and digital multi-effects units. On November 10, 1976, Eventide filed a trademark registration for "Harmonizer" and continues to maintain its rights to the Harmonizer trademark today. [ 3 ] In digital recording , pitch shifting is accomplished through digital signal processing . Older digital processors could often shift pitch only in post-production , whereas many modern devices using computer processing technology can change pitch values virtually in real time. [ 4 ] Pitch correction is a form of pitch shifting and is found in software such as Auto-Tune and Melodyne to correct intonation inaccuracies in a recording or performance. Pitch shifting may raise or lower all sounds in a recording by the same amount, whereas in practice, pitch correction may make different changes from note to note. [ 5 ] Pitch shifting can be used in DJing for harmonic mixing , a technique of matching the musical key of tracks in a DJ mix to avoid dissonance and create harmonious mixes or mashups . If a DJ wishes to mix two tracks which are not in compatible keys, they can shift the pitch of one track so that its key is compatible with the other, allowing seamless transitions between tracks which might otherwise sound dissonant when played together. Numerous cartoons have used pitch shifters to produce distinctive animal voices. Alvin and the Chipmunks recordings with David Seville (aka Ross Bagdasarian ) were created by recording vocal tracks at slow speeds, then playing them back at normal speeds. Voice artist Mel Blanc used pitch shifting techniques to create the voices of Tweety and Daffy Duck . [ 6 ] In the 1970s, reruns of shows like I Love Lucy were sped up in order to run more advertisements during commercial breaks. The Eventide H910 Harmonizer was used to downward pitch-shift the characters' voices back to normal after the episode was sped up. [ 7 ] South Park creators Trey Parker and Matt Stone have used pitch shifting for most of their characters throughout the show's run. 
[ 8 ] One notable early practitioner of pitch shifting in music is Chuck Berry , who used the technique to make his voice sound younger. Many of the Beatles ' records from 1966 and 1967 were made by recording instrumental tracks a half-step higher and the vocals correspondingly lower. Examples include " Rain ", " I'm Only Sleeping ", and " When I'm Sixty-Four ". Electronic musician Burial is known for including pitch-shifted samples of vocal melodies in his songs. [ 9 ] Goregrind and occasionally death metal use vocals that are often pitch-shifted to sound unnaturally low and guttural. The famous bass intro to the song " Seven Nation Army " by The White Stripes is the result of guitarist Jack White playing an electric guitar through a pitch-shifting effects pedal set to an octave below. The band, a duo, lacked a bassist and had never previously used one in any of their music, choosing instead to mimic the sound of a bass guitar. [ 10 ] From 1986 to 1988, American musician Prince used pitch shifting to create his “Camille” vocals. The coda in the song “ The Bewlay Brothers ” by David Bowie features Bowie’s voice distorted by varispeeding; this effect also appears throughout Bowie’s 1967 song “ The Laughing Gnome ”.
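The simplest method described above, replaying a recorded waveform at a different speed, can be sketched in a few lines of Python. This is an illustrative sketch, not code from any particular effects unit: the function name and the linear-interpolation resampler are assumptions made for the example, and this naive approach changes duration along with pitch, whereas dedicated pitch shifters combine it with time-stretching to keep duration constant.

import numpy as np

def shift_pitch_by_resampling(samples: np.ndarray, semitones: float) -> np.ndarray:
    """Crude pitch shift: resample the waveform so it plays back faster or slower.

    Raising the pitch by `semitones` shortens the clip by the same factor,
    just as speeding up a tape or record would.
    """
    factor = 2 ** (semitones / 12.0)               # frequency ratio for the interval
    old_idx = np.arange(len(samples))
    new_len = int(round(len(samples) / factor))    # fewer samples -> higher pitch at playback
    new_idx = np.linspace(0, len(samples) - 1, new_len)
    return np.interp(new_idx, old_idx, samples)    # simple linear-interpolation resampler

# Example: a 440 Hz tone shifted up a fourth (5 semitones) plays back at about 587 Hz
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
shifted = shift_pitch_by_resampling(tone, 5)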
https://en.wikipedia.org/wiki/Pitch_shifting
In aerodynamics , the pitching moment on an airfoil is the moment (or torque ) produced by the aerodynamic force with respect to the aerodynamic center on the airfoil . The pitching moment on the wing of an airplane is part of the total moment that must be balanced using the lift on the horizontal stabilizer . [ 1 ] : Section 5.3 More generally, a pitching moment is any moment acting on the pitch axis of a moving body. The lift on an airfoil is a distributed force that can be said to act at a point called the center of pressure. However, as the angle of attack of a cambered airfoil changes, the center of pressure moves forward and aft. This makes analysis difficult when the concept of the center of pressure is used. One of the remarkable properties of a cambered airfoil is that, even though the center of pressure moves forward and aft, if the lift is imagined to act at a point called the aerodynamic center , the moment of the lift force changes in proportion to the square of the airspeed. If the moment is divided by the dynamic pressure , the area and the chord of the airfoil, the result is known as the pitching moment coefficient. This coefficient changes only a little over the operating range of angle of attack of the airfoil. The moment coefficient for a whole airplane is not the same as that of its wing. In a plot of pitching moment against angle of attack for a stable airplane, the negative slope for positive α indicates stability in pitch. The combination of the two concepts of aerodynamic center and pitching moment coefficient makes it relatively simple to analyse some of the flight characteristics of an aircraft. [ 1 ] : Section 5.10 The aerodynamic center of an airfoil is usually close to 25% of the chord behind the leading edge of the airfoil. When making tests on a model airfoil, such as in a wind tunnel, if the force sensor is not aligned with the quarter-chord of the airfoil but is offset by a distance x , the measured moment must be corrected to obtain the pitching moment about the quarter-chord point, M_{c/4}, using the indicated values of D and L , the drag and lift on the model as measured by the force sensor. The pitching moment coefficient is important in the study of the longitudinal static stability of aircraft and missiles. The pitching moment coefficient C_m is defined as follows: [ 1 ] : Section 5.4 C_m = M / (q S c), where M is the pitching moment, q is the dynamic pressure , S is the wing area , and c is the length of the chord of the airfoil. C_m is a dimensionless coefficient, so consistent units must be used for M , q , S and c . The pitching moment coefficient is fundamental to the definition of the aerodynamic center of an airfoil. The aerodynamic center is defined to be the point on the chord line of the airfoil at which the pitching moment coefficient does not vary with angle of attack, [ 1 ] : Section 5.10 or at least does not vary significantly over the operating range of angle of attack of the airfoil. In the case of a symmetric airfoil, the lift force acts through one point for all angles of attack, and the center of pressure does not move as it does in a cambered airfoil. Consequently, the pitching moment coefficient about this point for a symmetric airfoil is zero. The pitching moment is, by convention, considered to be positive when it acts to pitch the airfoil in the nose-up direction.
Conventional cambered airfoils supported at the aerodynamic center pitch nose-down so the pitching moment coefficient of these airfoils is negative. [ 2 ]
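As a concrete illustration of the definition above, here is a minimal Python sketch that evaluates the pitching moment coefficient from a measured moment using C_m = M / (q S c); the function name and the example numbers are invented for illustration and are not taken from the cited textbook.

def pitching_moment_coefficient(M: float, q: float, S: float, c: float) -> float:
    """C_m = M / (q * S * c), dimensionless if consistent units are used.

    M : pitching moment (N*m), nose-up positive by convention
    q : dynamic pressure (Pa), q = 0.5 * rho * V**2
    S : wing reference area (m^2)
    c : reference chord length (m)
    """
    return M / (q * S * c)

# Example: M = -1200 N*m, rho = 1.225 kg/m^3, V = 50 m/s, S = 16 m^2, c = 1.5 m
q = 0.5 * 1.225 * 50**2
print(pitching_moment_coefficient(-1200.0, q, 16.0, 1.5))  # about -0.033 (nose-down)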
https://en.wikipedia.org/wiki/Pitching_moment
A pitfall trap is a trapping pit for small animals, such as insects, amphibians and reptiles. Pitfall traps are a sampling technique, mainly used for ecology studies and ecologic pest control. [ 1 ] Animals that enter a pitfall trap are unable to escape. This is a form of passive collection, as opposed to active collection where the collector catches each animal (by hand or with a device such as a butterfly net ). Active collection may be difficult or time-consuming, especially in habitats where it is hard to see the animals such as in thick grass. Pitfall traps come in a variety of sizes and designs. They come in two main forms: dry and wet pitfall traps. Dry pitfall traps consist of a container (tin, jar or drum) buried in the ground with its rim at surface level used to trap mobile animals that fall into it. Wet pitfall traps are basically the same, but contain a solution designed to kill and preserve the trapped animals. The fluids that can be used in these traps include formalin (10% formaldehyde), methylated spirits, alcohol, ethylene glycol , trisodium phosphate, picric acid or even (with daily checked traps) plain water. A little detergent is usually added to break the surface tension of the liquid to promote quick drowning. The opening is usually covered by a sloped stone or lid or some other object. This is done to reduce the amount of rain and debris entering the trap, and to prevent animals in dry traps from drowning (when it rains) or overheating (during the day) as well as to keep out predators. One or more fence-lines of some sort may be added to channel targets into the trap. [ 2 ] Traps may also be baited. Lures or baits of varying specificity can be used to increase the capture rate of a certain target species or group by placing them in, above or near the trap. Examples of baits include meat , dung , fruit and pheromones . Pitfall traps can be used for various purposes: There are inevitably biases in pitfall sampling when it comes to comparison of different groups of animals and different habitats in which the trapping occurs. An animal's trappability depends on the structure of its habitat (e.g. density of vegetation, type of substrate). Gullan and Cranston (2005) recommend measuring and controlling for such variations. Intrinsic properties of the animal itself also affect its trappability: some taxa are more active than others (e.g. higher physiological activity or ranging over a wider area), more likely to avoid the trap, less likely to be found on the ground (e.g. tree-dwelling species that occasionally move across the terrain), or too large to be trapped (or large enough to escape if trapped). Trappability can also be affected by conditions such as temperature or rain, which may alter the animal's behaviour. The capture rate is therefore proportional not only to how abundant a given type of animal is (which is often the factor of interest), but how easily they are trapped. Comparisons between different groups must therefore take into account variation in habitat structure and complexity, changes in ecological conditions over time and the innate differences in species. [ 4 ]
https://en.wikipedia.org/wiki/Pitfall_trap
The Pithecometra principle or Pithecometra thesis ( German : Pithecometra-Satz ) describes the evolution of humans; the pithecometra law is analogous to the concept that "man evolved within apes" or "man descended from apes" as advocated by Thomas Henry Huxley . Huxley first developed the concept of the "Pithecometra principle", which was discussed by Charles Darwin and Ernst Haeckel , in his 1863 work Evidence as to Man's Place in Nature , stating that humanity was more closely related to apes than the apes were to monkeys. [ 1 ] Huxley added that, to hunt for evidence of this close ancestry between apes and humans, the regions where modern apes are found should be the focal point, hence Africa . [ 1 ] The pithecometra principle has been most notable in evolution theory for placing humanity as an offshoot of animal species, rather than a separate divine creation, and thus pithecometra has generated intense religious controversy for decades. Another of Darwin's colleagues was Ernst Heinrich Haeckel (1834–1919). [ 1 ] Haeckel agreed with Huxley on several aspects of the pithecometra thesis. However, Haeckel frequently lectured on the Asian origin of the "missing link" between apes and humans. [ 1 ] Consequently, Eugene Dubois , a student of Haeckel's indoctrinated with the idea of Asian hominid origins, traveled to Java, Indonesia in 1890–1892. It was during this expedition that Dubois made the discovery of Homo erectus fossils in Asia . Also known as Java Man , that specimen was validation of humanity's deep ancestry outside of Europe. [ 1 ] The pithecometra thesis, together with the work of Darwin, Huxley and Haeckel, helped liberate the European scientific community from its Eurocentric biases. [ 1 ] However, their work did not directly produce a change. It required the later revolution in evolutionary thought, the Neo-Darwinian Synthesis of the mid-20th century, to cause a change in the recovery of fossils from regions outside Europe. The Piltdown hoax is evidence of the refusal to accept the fossils that began to be found in Asia and Africa after the late 19th century. [ 1 ] The perpetrator of the Piltdown hoax is uncertain, but the year and location indicate a rejection of growing evidence for man's ancestry outside of Europe. In 1912 in England, a fossil was presented and named Piltdown Man . [ 1 ] This specimen was the combination of ape and human features the scientific community had been seeking in order to argue human/ape affinities. The high, globular braincase signified human-like features while the robust jaw and molars resembled ape skulls. This fossil was used as proof of human evolution having occurred in England. With the discovery of the Piltdown specimen, actual fossil specimens of the prehistoric australopithecine genus coming from Africa were being ignored. [ 1 ] Raymond Dart , who obtained a fossil skull of an actual hominid showing human-ape affinities from South Africa, was treated with disdain. [ 1 ] Later in the 1950s, as the Neo-Darwinian Synthesis had thoroughly saturated the European scientific community, fewer people chose to ignore the significant Australopithecus fossils coming from Africa, and the Piltdown Man fossil was re-examined. Upon closer inspection, the cranium was judged to be of a modern human and the jaw matched a modern orangutan . [ 1 ] The molars had been filed down to appear like human upper molars, and the surface of the Piltdown specimen had been painted to give it the illusion of having been buried a long time.
The rejection of the Piltdown fossil in the 1950s removed a significant barrier that had blocked the European scientific community's view of more accurate human origins. [ 1 ]
https://en.wikipedia.org/wiki/Pithecometra_principle
Pithoviridae is a family of viruses . [ 1 ] Pithoviridae contains two subfamilies, each of which is monotypic down to its sole genus. [ 2 ]
https://en.wikipedia.org/wiki/Pithoviridae
A pitot tube ( / ˈ p iː t oʊ / PEE -toh ; also pitot probe ) measures fluid flow velocity . It was invented by French engineer Henri Pitot during his work with aqueducts and published in 1732, [ 1 ] and modified to its modern form in 1858 by Henry Darcy . [ 2 ] It is widely used to determine the airspeed of aircraft; [ 3 ] the water speed of boats; and the flow velocity of liquids, air, and gases in industry. The basic pitot tube consists of a tube pointing directly into the oncoming fluid flow. Pressure in the tube can be measured as the moving fluid cannot escape and stagnates. This pressure is the stagnation pressure of the fluid, also known as the total pressure or (particularly in aviation) the pitot pressure . The measured stagnation pressure cannot by itself be used to determine the fluid flow velocity (airspeed in aviation) directly. However, if the static pressure is also measured, the velocity can be determined using Bernoulli's equation , which states that the stagnation pressure equals the static pressure plus the dynamic pressure: p_t = p_s + ρ u² / 2. This can also be written p_t − p_s = ρ u² / 2. Solving for flow velocity gives u = √( 2 ( p_t − p_s ) / ρ ), where u is the flow velocity, p_t is the stagnation (total) pressure, p_s is the static pressure, and ρ is the density of the fluid. This equation applies only to fluids that can be treated as incompressible. Liquids are treated as incompressible under almost all conditions. Gases under certain conditions can be approximated as incompressible. See Compressibility . The dynamic pressure is the difference between the stagnation pressure and the static pressure. The dynamic pressure is then determined using a diaphragm inside an enclosed container. If the air on one side of the diaphragm is at the static pressure, and the other at the stagnation pressure, then the deflection of the diaphragm is proportional to the dynamic pressure. In aircraft, the static pressure can be measured using static ports on the side of the fuselage. The dynamic pressure measured can be used to determine the indicated airspeed of the aircraft. The diaphragm arrangement described above can be contained within the airspeed indicator , which converts the dynamic pressure to an airspeed reading by means of mechanical levers. Instead of separate pitot and static ports, a pitot-static tube (also called a Prandtl tube) may be employed, which has a second tube coaxial with the pitot tube with holes on the sides, outside the direct airflow, to measure the static pressure. [ 4 ] If a liquid column manometer is used to measure the pressure difference Δp ≡ p_t − p_s , then Δp = ρ_l g Δh, where ρ_l is the density of the manometer liquid, g is the gravitational acceleration, and Δh is the difference in the heights of the liquid columns. Therefore, the flow velocity is u = √( 2 g Δh ρ_l / ρ ). A pitot-static system is a system of pressure-sensitive instruments that is most often used in aviation to determine an aircraft's airspeed , Mach number , altitude , and altitude trend . A pitot-static system generally consists of a pitot tube, a static port, and the pitot-static instruments. [ 5 ] Errors in pitot-static system readings can be extremely dangerous as the information obtained from the pitot-static system, such as airspeed, is potentially safety-critical. Several commercial airline incidents and accidents have been traced to a failure of the pitot-static system. Examples include Austral Líneas Aéreas Flight 2553 , Northwest Airlines Flight 6231 , Birgenair Flight 301 and one of the two X-31s . [ 6 ] The French air safety authority BEA said that pitot tube icing was a contributing factor in the crash of Air France Flight 447 into the Atlantic Ocean . [ 7 ] In 2008 Air Caraïbes reported two incidents of pitot tube icing malfunctions on its A330s.
[ 8 ] Birgenair Flight 301 had a fatal pitot tube failure which investigators suspected was due to insects creating a nest inside the pitot tube; the prime suspect is the black and yellow mud dauber wasp. Aeroperú Flight 603 had a fatal pitot-static system failure due to the cleaning crew leaving the static port blocked with tape. In industry, the flow velocities being measured are often those of flows in ducts and tubing, where measurements by an anemometer would be difficult to obtain. In these kinds of measurements, the most practical instrument to use is the pitot tube. The pitot tube can be inserted through a small hole in the duct, with the pitot connected to a U-tube water gauge or some other differential pressure gauge, for determining the flow velocity inside the duct. One use of this technique is to determine the volume of air that is being delivered to a conditioned space. The fluid flow rate in a duct can then be estimated as the measured velocity, averaged over the duct cross-section, multiplied by the cross-sectional area of the duct. In aviation, airspeed is typically measured in knots . In weather stations with high wind speeds, the pitot tube is modified to create a special type of anemometer called a pitot tube static anemometer . [ 9 ] In many modern carburetors a pitot tube at the intake is fed to the fuel float chamber as an alternative to feeding ambient air pressure there, to better control the air/fuel ratio.
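The incompressible-flow relation given above translates directly into a short calculation. The following Python sketch is illustrative only; the function name, the default density (1.225 kg/m^3, sea-level air) and the example pressures are assumptions made for the example, not values from the cited sources.

import math

def flow_velocity(p_total: float, p_static: float, rho: float = 1.225) -> float:
    """Velocity from the incompressible Bernoulli relation: u = sqrt(2*(p_t - p_s)/rho).

    p_total, p_static : stagnation and static pressure in pascals
    rho               : fluid density in kg/m^3 (1.225 = air at sea level, 15 degC)
    """
    dp = p_total - p_static                    # dynamic pressure
    if dp < 0:
        raise ValueError("stagnation pressure must not be below static pressure")
    return math.sqrt(2.0 * dp / rho)

# Example: a 1225 Pa dynamic pressure corresponds to about 44.7 m/s (~87 knots)
print(flow_velocity(102550.0, 101325.0))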
https://en.wikipedia.org/wiki/Pitot_tube
Pittcon Editors’ Awards honoured the best new products on show at the Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy , or Pittcon, [ 1 ] for 20 years from 1996, having been established by Dr Gordon Wilkinson, managing editor of Analytical Instrument Industry Report (later Instrumenta ). On 8 March 2015, the event returned to the Morial Convention Center in New Orleans and this was the last occasion when the awards were presented. The independent awards, which represented the results of an informal poll of leading editors, had become an important feature of the world's largest trade show for the laboratory equipment industry. Pittcon organisers and media center supported the scheme and provided details and photographs on the exhibition's Press and Media Information page. [ 2 ] In 2016 the group of editors and journalists that formed the core of the judging panel reluctantly decided to discontinue the awards program, citing gradually dwindling support from ever-busier media representatives. The awards were started because of the challenge that editors faced of effectively covering the trade show , which in 2015 hosted 925 exhibitors. New exhibitors at the Morial Convention Center totalled 130 companies. Walking past every booth at an event such as this represents a trek of over 10 kilometres (6.2 mi). [ 3 ] Accredited media representatives, of whom there were more than 150 per year, were invited to list up to three new products on a nomination form provided on registration at the Media Center. Editors were invited to attend a judging session towards the end of the trade show. They reviewed entries and voted on the nominated products. The only criterion was that products must have appeared at the exhibition for the first time, but winning products usually featured innovations in technology or industrial design , or enabled new analytical applications . [ 4 ] Gold, Silver and Bronze winners were determined and plaques were awarded to the booth personnel of the winning companies on the final morning of the four-day exposition. Other nominated products received an Honourable Mention . [ 5 ] Winners for the period 1996 to 2015, together with the names of their products, were listed by year. [ 6 ] [ 10 ] [ 14 ] [ 19 ] [ 23 ] [ 28 ] [ 29 ] [ 30 ] Of the award winners, the majority were the largest instrument makers in the industry, but over 30 small companies or start-ups went home with awards, illustrating that editors were able to use their technical expertise to spot innovations irrespective of the marketing budgets of exhibitors. Company names are listed in the format used at the date of the award, although they may have since changed as a result of changes of ownership. Trademarks are acknowledged, but not indicated; readers should check corporate literature or websites for current intellectual property rights. Web links are only provided for award-winning products up to five years old. Products introduced earlier have usually been updated with more recent models.
https://en.wikipedia.org/wiki/Pittcon_Editors'_Awards
Pitting corrosion , or pitting , is a form of extremely localized corrosion that leads to the random creation of small holes in metal. The driving force for pitting corrosion is the depassivation of a small area, which becomes anodic (oxidation reaction) while an unknown but potentially vast area becomes cathodic (reduction reaction), leading to very localized galvanic corrosion . The corrosion penetrates the mass of the metal, with a limited diffusion of ions. A related term, the pitting factor, is defined as the ratio of the depth of the deepest pit (resulting from corrosion) to the average penetration, which can be calculated from the weight loss. According to Frankel (1998), who reviewed pitting corrosion, it develops in three successive steps: (1) initiation (or nucleation ) by breakdown of the passive film protecting the metal surface from oxidation, (2) growth of metastable pits (growing up to the micron scale and then repassivating), and (3) the growth of larger and stable pits. [ 1 ] The evolution of the pit density (number of pits per surface area) as a function of time follows a sigmoid curve with the characteristic shape of a logistic function curve, or a hyperbolic tangent . [ 2 ] Guo et al. (2018), after a statistical analysis of hundreds of individual pits observed on carbon steel surfaces at the nano-to-micro scales, distinguish three stages of pitting corrosion: induction, propagation, and saturation. [ 2 ] Pit formation can essentially be regarded as a two-step process: nucleation followed by growth. The process of pit nucleation is initiated by the depassivation of the protective oxide layer isolating the metal substrate from the aggressive solution. The depassivation of the protective oxide layer is the least well understood step in pitting corrosion, and its very local and random appearance is probably its most enigmatic characteristic. Mechanical or physical damage may locally disrupt the protective layer. Crystalline defects, or impurity inclusions, pre-existing in the base metal can also serve as nucleation points (especially metal sulfide inclusions). The chemical conditions prevailing in the solution and the nature of the metal, or the alloy composition, are also important factors to take into consideration. Several theories have been elaborated to explain the depassivation process. Anions with weak or strong ligand properties, such as chloride ( Cl − ) and thiosulfate ( S 2 O 2− 3 ) respectively, can complex the metallic cations (Me n+ ) present in the protective oxide layer and so contribute to its local dissolution. Chloride anions could also compete with hydroxide ions ( OH − ) for sorption onto the oxide layer and start to diffuse into the porosity or the crystal lattice of the oxide layer. Finally, according to the point-defect model elaborated by Digby Macdonald, the migration of crystal defects inside the oxide layer could explain its random localized disappearance. [ 3 ] [ 4 ] [ 5 ] The main merit of the point-defect model is that it explains the stochastic character of the pitting corrosion process. The more common explanation for pitting corrosion is that it is an autocatalytic process driven by the random formation of small electrochemical cells with separate anodic and cathodic zones.
The random local breakdown of the protective oxide layer and the subsequent oxidation of the underlying metal in the anodic zones result in the local formation of a pit where acid conditions are maintained by the spatial separation of the cathodic and anodic half-reactions. This creates a gradient of electrical potential and is responsible for the electromigration of aggressive anions into the pit. [ 6 ] For example, when a metal is exposed to an oxygenated aqueous solution containing sodium chloride (NaCl) as electrolyte , the pit acts as anode (metal oxidation) and the metal surface acts as cathode (oxygen reduction). In the case of pitting corrosion of iron , or carbon steel , by atmospheric oxygen dissolved in acidic water ( pH < 7) in contact with the exposed metal surface, the reactions occurring at the anode and cathode zones can respectively be written as follows: anode (oxidation): Fe → Fe 2+ + 2 e − ; cathode (reduction): O 2 + 4 H + + 4 e − → 2 H 2 O. Acidic conditions favor the redox reaction according to Le Chatelier's principle, because the H + ions added to the reactant side displace the reaction equilibrium to the right and also increase the solubility of the released Fe 2+ cations . Under neutral to alkaline conditions ( pH > 7), the set of redox reactions given above becomes the following: anode: Fe → Fe 2+ + 2 e − ; cathode: O 2 + 2 H 2 O + 4 e − → 4 OH − , followed by Fe 2+ + 2 OH − → Fe(OH) 2 . The precipitation of Fe(OH) 2 ( green rust ) can also contribute to driving the reaction towards the right. However, the solubility of Fe(OH) 2 ( Fe 2+ ) is relatively high (~100 times that of Fe 3+ ), though it strongly decreases when pH increases because of the common-ion effect with OH − . In the two examples given above: – Iron is a reductant, giving electrons while being oxidized. – Oxygen is an oxidant, taking up electrons while being reduced. The formation of anodic and cathodic zones creates an electrochemical cell ( i.e. , a small electric battery ) at the surface of the affected metal. The difference in Gibbs free energy (ΔG) drives the reaction because ΔG is negative and the system releases energy ( enthalpy , ΔH < 0) while increasing entropy (ΔG = ΔH - TΔS). The transport of dissolved ions occurs in the aqueous solution in contact with the corroding metal, while electrons are transported from the anode (giving e − ) to the cathode (accepting e − ) via the base metal ( electrical conductor ). The localized production of positive metal cations (Me n+ , Fe 2+ in the example above) in the pit (oxidation: anode) gives a local excess of positive charges which attract the negative ions (e.g., the highly mobile chloride anions Cl − ) from the surrounding electrolyte to maintain the electroneutrality of the ion species in aqueous solution in the pit. The pit contains a high concentration of metal (Me) chloride (MeCl n ) which is hydrolyzed by water to produce the corresponding metal hydroxide (Me(OH) n ), and n H + and n Cl − ions, accelerating the corrosion process. [ 7 ] In the case of metallic iron, or steel, the process can be schematized as follows: [ 8 ] FeCl 2 + 2 H 2 O → Fe(OH) 2 + 2 H + + 2 Cl − . Under basic conditions, such as the alkaline conditions prevailing in concrete, the hydrolysis reaction directly consumes hydroxide ions ( OH − ) while releasing chloride ions: FeCl 2 + 2 OH − → Fe(OH) 2 + 2 Cl − . So, when chloride ions present in solution come into contact with the steel surface, they react with Fe 2+ of the passive layer protecting the steel surface and form an iron–chloride complex. Then, the iron–chloride complex reacts with the OH − anions produced by the dissociation of water and precipitates ferrous hydroxide ( Fe(OH) 2 ) while releasing chloride ions and new H + ions available to continue the corrosion process.
In the pit, the oxygen concentration is essentially zero and all of the cathodic oxygen reactions take place on the metal surface outside the pit. The pit is anodic (oxidation) and the locus of rapid dissolution of the metal. [ 9 ] The metal corrosion initiation is autocatalytic in nature however its propagation is not. This kind of corrosion is often difficult to detect and so is extremely insidious, as it causes little loss of material with the small effect on its surface, while it damages the deep structures of the metal. The pits on the surface are often obscured by corrosion products. Pitting can be initiated by a small surface defect, being a scratch or a local change in the alloy composition (or local impurities, e.g. metallic sulfide inclusions such as MnS or NiS ), [ 10 ] [ 11 ] or a damage to the protective coating. Polished surfaces display a higher resistance to pitting. [ 12 ] In order to maintain the solution electroneutrality inside the pit populated by cations released by oxidation in the anodic zone (e.g., Fe 2+ in case of steel), anions need to migrate inside the narrow pit. It is worth to notice that the electromobilities of thiosulfate ( S 2 O 2− 3 ) and chloride ( Cl − ) anions are the highest after these of H + and OH − ions in aqueous solution. Moreover, the molar conductivity of thiosulfate ions is even higher than that of chloride ions because they are twice negatively charged (weak base reluctant to accept a proton). In capillary electrophoresis , thiosulfate moves faster than chloride and eluates before this latter. The high electromobility of both anions could also be one of the many factors explaining their harmful impact for pitting corrosion when compared with other much less damaging ion species such as SO 2− 4 and NO − 3 . Pitting corrosion is defined by localized attack, ranging from microns to millimeters in diameter, in an otherwise passive surface and only occurs for specific alloy and environmental combinations. Thus, this type of corrosion typically occurs in alloys that are protected by a tenacious (passivating) oxide film such as stainless steels, nickel alloys, aluminum alloys in environments that contain an aggressive species such as chlorides (Cl – ) or thiosulfates (S 2 O 3 2– ). In contrast, alloy/environment combinations where the passive film is not very protective usually will not produce pitting corrosion. A good example of the importance of alloy/environment combinations is carbon steel . In environments where the pH value is lower than 10, carbon steel does not form a passivating oxide film and the addition of chloride results in uniform attack over the entire surface. However, at pH greater than 10 (alkaline) the oxide is protective and the addition of chloride results in pitting corrosion. [ 13 ] Besides chlorides, other anions implicated in pitting include thiosulfates (S 2 O 3 2− ), fluorides and iodides . Stagnant water conditions with low concentrations of dissolved oxygen also favor pitting. Thiosulfates are particularly aggressive species and are formed by partial oxidation of pyrite ( FeS 2 , a ferrous disulfide), or partial sulfate reduction by microorganisms , a.o. by sulfate reducing bacteria (SRB). Thiosulfates are a concern for corrosion in many industries handling sulfur-derived compounds: sulfide ores processing, oil wells and pipelines transporting soured oils, kraft paper production plants, photographic industry, methionine and lysine factories. 
Although in the aforementioned example, oxic conditions were always considered with the reduction of dissolved O 2 in the cathodic zones, pitting corrosion may also occur under anoxic, or reducing, conditions. Indeed, the very harmful reduced species of sulfur ( H 2 S , HS − , S 2− , HS–S − , − S–S − , S 0 and S 2 O 2− 3 ) can only subsist under reducing conditions. [ 14 ] Moreover, in the case of steel and stainless steel, reducing conditions are conducive to the dissolution of the protective oxide layer (dense γ- Fe 2 O 3 ) because Fe 2+ is much more soluble than Fe 3+ , and so reducing conditions contribute to the breakdown of the protective oxide layer (initiation, nucleation of the pit). Reductants exert thus an antagonist effect with respect to the oxidants (chromate, nitrite) used as corrosion inhibitors to induce steel repassivation via the formation of a dense γ- Fe 2 O 3 protective layer. Pitting corrosion can thus occur both under oxidizing and reducing conditions and can be aggravated in poorly oxygenated waters by differential aeration, or by drying/wetting cycles. Under strongly reducing conditions , in the absence of dissolved oxygen in water, or pore water of the ground, the electron acceptor ( oxidizing agent ) at the cathodic sites , where reduction occurs, can be the protons ( H + ) of water itself, the protons of hydrogen sulfide ( H 2 S ), or in acidic conditions in case of severe pyrite oxidation in a former oxic atmosphere, dissolved ferric ions ( Fe 3+ ), known to be very potent oxidizers . The presence of harmful reduced species of sulfur and microbial activity feeding the sulfur cycle ( sulfide oxidation possibly followed by bacterial sulfate reduction ) have also to be taken into account. Strictly abiotic ( i.e. inorganic) corrosion processes are generally slower under anoxic conditions than under oxic conditions, but the presence of bacteria and biofilms can aggravate the degradation conditions and causes unexpected problems. Critical infrastructures and metallic components with very long service life may be susceptible to pitting corrosion: for example the metallic canisters and overpacks aimed to contain vitrified high-level radioactive waste (HLW) and spent nuclear fuel and to confine them in a water-tight enveloppe for several tenths of thousands years in deep geologic repositories. Different types of corrosion inhibitor exist. Among them, oxidizing species such as chromate ( CrO 2− 4 ) and nitrite ( NO − 2 ) were the first used to re-establish the state of passivation in the protective oxide layer. In the specific case of steel, the Fe 2+ cation being a relatively soluble species, it contributes to favor the dissolution of the oxide layer which so loses its passivity. To restore the passivity, the principle simply consists to prevent the dissolution of the oxide layer by converting the soluble divalent Fe 2+ cation into the much less soluble trivalent Fe 3+ cation. This approach is also at the basis of the chromate conversion coating used to passivate steel , aluminium , zinc , cadmium , copper , silver , titanium , magnesium , and tin alloys. [ 15 ] : p.1265 [ 16 ] As hexavalent chromate is a known carcinogen, its aqueous effluents can no longer be freely discharged into the environment and its maximum concentration acceptable in water is very low. Nitrite is also an oxidizing species and has been used as corrosion inhibitor since the 1950's. 
[ 17 ] [ 18 ] [ 19 ] Under the basic conditions prevailing in concrete pore water nitrite converts the relatively soluble Fe 2+ ions into the much less soluble Fe 3+ ions, and so protects the carbon-steel reinforcement bars by forming a new and denser layer of γ- Fe 2 O 3 as follows: Corrosion inhibitors, when present in sufficient amount, can provide protection against pitting. However, too low level of them can aggravate pitting by forming local anodes. A single pit in a critical point can cause a great deal of damage. One example is the explosion in Guadalajara , Mexico, on 22 April 1992, when gasoline fumes accumulated in sewers destroyed kilometers of streets. The vapors originated from a leak of gasoline through a single hole formed by corrosion between a steel gasoline pipe and a zinc -plated water pipe. [ 20 ] Firearms can also suffer from pitting, most notably in the bore of the barrel when corrosive ammunition is used and the barrel is not cleaned soon afterwards. [ 21 ] Deformities in the bore caused by pitting can greatly reduce the firearm's accuracy. [ 22 ] To reduce pitting in firearm bores, most modern firearms have a bore lined with chromium . [ 23 ] Pitting corrosion can also help initiate stress corrosion cracking , as happened when a single eyebar on the Silver Bridge in West Virginia , United States, failed and killed 46 people on the bridge in December 1967. [ 24 ] In laboratories, pitting corrosion may damage equipment, reducing its performance or longevity. Fume hoods are of particular concern, as the material constitution of their ductwork must suit the primary effluent(s) intended for exhaust. [ 25 ] If the chosen vent material is unsuitable for the primary effluent(s), consequent pitting corrosion will prevent the fume hood from effectively containing harmful airborne particles. [ 26 ] Academic library – Free online college e-textbooks (2022). "Chloride-induced corrosion" . Ebrary . Retrieved 2022-02-12 . Hirao, Hiroshi; Yamada, Kazuo; Takahashi, Haruka; Zibara, Hassan (2005). "Chloride binding of cement estimated by binding isotherms of hydrates" . Journal of Advanced Concrete Technology . 3 (1): 77– 84. doi : 10.3151/jact.3.77 . eISSN 1347-3913 . ISSN 1346-8014 . Retrieved 2022-02-19 . Galan, Isabel; Glasser, Fredrik P. (2015-02-01). "Chloride in cement" . Advances in Cement Research . 27 (2): 63– 97. doi : 10.1680/adcr.13.00067 . eISSN 1751-7605 . ISSN 0951-7197 . Retrieved 2022-02-19 . Newman, Roger (2010-01-01). "Pitting corrosion of metals" . The Electrochemical Society Interface . 19 (1): 33– 38. Bibcode : 2010ECSIn..19a..33N . doi : 10.1149/2.F03101if . ISSN 1944-8783 . S2CID 138876686 . Macdonald, Digby D.; Roberts, Bruce; Hyne, James B. (1978-01-01). "The corrosion of carbon steel by wet elemental sulphur" . Corrosion Science . 18 (5): 411– 425. Bibcode : 1978Corro..18..411M . doi : 10.1016/S0010-938X(78)80037-7 . ISSN 0010-938X . Retrieved 2022-02-13 . Choudhary, Lokesh; Macdonald, Digby D.; Alfantazi, Akram (2015-06-01). "Role of thiosulfate in the corrosion of steels: A review" . Corrosion . 71 (9): 1147– 1168. doi : 10.5006/1709 . ISSN 0010-9312 . Retrieved 2022-02-13 . Paik, C. H.; White, H. S.; Alkire, R. C. (2000-11-01). "Scanning electrochemical microscopy detection of dissolved sulfur species from inclusions in stainless steel" . Journal of the Electrochemical Society . 147 (11): 4120– 4124. Bibcode : 2000JElS..147.4120P . doi : 10.1149/1.1394028 . eISSN 1945-7111 . ISSN 0013-4651 . Retrieved 2018-03-25 . Newman, R. C.; Isaacs, H. S.; Alman, B. 
(1982-05-01). "Effects of sulfur compounds on the pitting behavior of type 304 stainless steel in near-neutral chloride solutions" . Corrosion . 38 (5): 261– 265. doi : 10.5006/1.3577348 . ISSN 0010-9312 . Retrieved 2018-03-25 . Hesketh, J.; Dickinson, E. J. F.; Martin, M. L.; Hinds, G.; Turnbull, A. (2021-04-15). "Influence of H 2 S on the pitting corrosion of 316L stainless steel in oilfield brine" . Corrosion Science . 182 : 109265. Bibcode : 2021Corro.18209265H . doi : 10.1016/j.corsci.2021.109265 . ISSN 0010-938X . PMC 8276138 . PMID 34267394 .
https://en.wikipedia.org/wiki/Pitting_corrosion
Pitting resistance equivalent number ( PREN ) is a predictive measurement of a stainless steel's resistance to localized pitting corrosion based on its chemical composition. In general, the higher the PREN value, the more resistant the stainless steel is to localized pitting corrosion by chloride . PREN is frequently specified when stainless steels will be exposed to seawater or other high-chloride solutions. In some instances stainless steels with PREN values > 32 may provide useful resistance to pitting corrosion in seawater, but this is dependent on optimal conditions. However, crevice corrosion is also a significant possibility, and a PREN > 40 is typically specified for seawater service. [ 1 ] [ 2 ] [ 3 ] These alloys need to be manufactured and heat treated correctly to be seawater corrosion resistant to the expected level. PREN alone is not an indicator of corrosion resistance. The value should be calculated for each heat to ensure compliance with minimum requirements; this is because of chemistry variation within the specified composition limits. There are several PREN formulas. They commonly range from PREN = %Cr + 3.3 × %Mo + 16 × %N to PREN = %Cr + 3.3 × %Mo + 30 × %N. For the few stainless steels which add tungsten (W), the following formula is used: PREN = %Cr + 3.3 × (%Mo + 0.5 × %W) + 16 × %N. All % values of elements must be expressed by mass, or weight (wt. %), and not by volume. Tolerance on element measurements can be ignored as the PREN value is indicative only. A number of online calculators are available for computing PREN values, often accompanied by technical documentation. Exact pitting test procedures are specified in the ASTM G48 standard. [ 5 ]
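A short Python sketch of the composition-based calculation follows. The function name, the example composition and the default nitrogen weighting of 16 are assumptions made for this illustration; the applicable specification should be checked before relying on any particular variant of the formula.

def pren(cr: float, mo: float, n: float, w: float = 0.0, n_factor: float = 16.0) -> float:
    """Pitting resistance equivalent number from alloy composition in weight percent.

    Uses PREN = %Cr + 3.3*(%Mo + 0.5*%W) + n_factor*%N, one of the commonly
    quoted forms; n_factor is often 16, and up to 30 in some variants.
    """
    return cr + 3.3 * (mo + 0.5 * w) + n_factor * n

# Example: a typical 316L-like composition (16.8% Cr, 2.1% Mo, 0.05% N) gives PREN ~ 24.5
print(pren(cr=16.8, mo=2.1, n=0.05))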
https://en.wikipedia.org/wiki/Pitting_resistance_equivalent_number
Pittsburgh Life Sciences Greenhouse ( PLSG ) is an investment firm based in the South Side neighborhood of Pittsburgh, Pennsylvania, that provides resources and tools to entrepreneurial life sciences enterprises in Pittsburgh and western Pennsylvania in order to advance research and patient care. Since PLSG began operations in 2002, it has assisted more than 435 life sciences companies and has affected more than 10,000 jobs in western Pennsylvania. PLSG has provided 34 companies with office or laboratory space, and 14 have been relocated to Pittsburgh from outside the region. PLSG has invested over $20 million in 77 companies, which has leveraged over $1.5 billion in additional capital to the region. PLSG guides researchers, entrepreneurs and emerging companies through the challenges faced in the early stages of company development. They provide support to companies developing product and service innovations in biotechnology tools, diagnostics/screening, healthcare IT, medical devices and therapeutics. PLSG also helps in the expansion of more mature life science companies by supporting new product and market developments and connecting them to investors. [ 1 ] Pittsburgh Life Sciences Greenhouse grew out of an original plan known as BioVenture, developed by CMU and Pitt. [ 2 ] The initiative received a major boost in 2001 when money from the state's settlement with the tobacco industry was pledged to create a life science greenhouse in Western Pennsylvania . [ 3 ] In 2003, Pittsburgh Biomedical Corporation, a non-profit established in 1988 by the Pittsburgh Technology Council, consolidated with PLSG. [ 4 ] Today, PLSG exists as a partnership between the Commonwealth of Pennsylvania , University of Pittsburgh , Carnegie Mellon University , University of Pittsburgh Medical Center and the regional foundation community. Their mission is to "create, nurture and help establish a globally dominant life sciences industry in western Pennsylvania." [ 1 ]
https://en.wikipedia.org/wiki/Pittsburgh_Life_Sciences_Greenhouse
Gene identifiers: Entrez 5449 and 18736; Ensembl ENSG00000064835 and ENSMUSG00000004842; UniProt P28069 and Q00286; RefSeq mRNA NM_001122757, NM_000306, NM_008849, NM_001362468; RefSeq protein NP_000297, NP_001116229, NP_032875, NP_001349397. POU class 1 homeobox 1 , also known as pituitary-specific positive transcription factor 1 ( PIT1 ), POU domain, class 1, transcription factor 1 ( POU1F1 ) and growth hormone factor 1 (GHF1), is a transcription factor for growth hormone encoded by the gene POU1F1 . [ 5 ] [ 6 ] PIT1 is part of the POU family of transcription factors. [ 7 ] It is expressed by somatotrophic cells, [ 8 ] as well as thyrotrophs [ 9 ] and lactotrophs [ 10 ] of the anterior pituitary gland. It contains a C-terminal domain for transactivation. [ 9 ] Another domain is DNA-binding: its C-terminal portion is homologous to the homeodomain consensus, [ 8 ] common to many genes involved in development, while the other portion is POU-specific; it affords PIT1 specificity in its transcriptional activation of the prolactin and growth hormone genes and is involved in protein-protein interactions. [ 9 ] Activity on thyroid stimulating hormone-beta expression is also known for PIT1. [ 9 ] Pituitary-specific positive transcription factor 1 has been shown to interact with GATA2 [ 11 ] and PITX1 . [ 12 ]
https://en.wikipedia.org/wiki/Pituitary-specific_positive_transcription_factor_1
Pitzer equations [ 1 ] are important for the understanding of the behaviour of ions dissolved in natural waters such as rivers, lakes and sea-water. [ 2 ] [ 3 ] [ 4 ] They were first described by physical chemist Kenneth Pitzer . [ 5 ] The parameters of the Pitzer equations are linear combinations of parameters, of a virial expansion of the excess Gibbs free energy , which characterise interactions amongst ions and solvent. The derivation is thermodynamically rigorous at a given level of expansion. The parameters may be derived from various experimental data such as the osmotic coefficient , mixed ion activity coefficients, and salt solubility. They can be used to calculate mixed ion activity coefficients and water activities in solutions of high ionic strength for which the Debye–Hückel theory is no longer adequate. They are more rigorous than the equations of specific ion interaction theory (SIT theory), but Pitzer parameters are more difficult to determine experimentally than SIT parameters. A starting point for the development can be taken as the virial equation of state for a gas, PV = RT + BP + CP^2 + DP^3 + ⋯, where P is the pressure, V is the volume, T is the temperature and B, C, D, ... are known as virial coefficients . The first term on the right-hand side is for an ideal gas . The remaining terms quantify the departure from the ideal gas law with changing pressure, P . It can be shown by statistical mechanics that the second virial coefficient arises from the intermolecular forces between pairs of molecules, the third virial coefficient involves interactions between three molecules, etc. This theory was developed by McMillan and Mayer. [ 6 ] Solutions of uncharged molecules can be treated by a modification of the McMillan-Mayer theory. However, when a solution contains electrolytes , electrostatic interactions must also be taken into account. The Debye–Hückel theory [ 7 ] was based on the assumption that each ion was surrounded by a spherical "cloud" or ionic atmosphere made up of ions of the opposite charge. Expressions were derived for the variation of single-ion activity coefficients as a function of ionic strength . This theory was very successful for dilute solutions of 1:1 electrolytes and, as discussed below, the Debye–Hückel expressions are still valid at sufficiently low concentrations. The values calculated with Debye–Hückel theory diverge more and more from observed values as the concentrations and/or ionic charges increase. Moreover, Debye–Hückel theory takes no account of the specific properties of ions such as size or shape. Brønsted had independently proposed an empirical equation, [ 8 ] in which the activity coefficient depended not only on ionic strength, but also on the concentration, m , of the specific ion through the parameter β . This is the basis of SIT theory . It was further developed by Guggenheim. [ 9 ] Scatchard [ 10 ] extended the theory to allow the interaction coefficients to vary with ionic strength. Note that the second form of Brønsted's equation is an expression for the osmotic coefficient; measurement of osmotic coefficients provides one means for determining mean activity coefficients. The exposition begins with a virial expansion of the excess Gibbs free energy, [ 11 ] G^ex / ( W_w R T ) = f(I) + ∑_i ∑_j λ_ij(I) b_i b_j + ∑_i ∑_j ∑_k μ_ijk b_i b_j b_k , where W_w is the mass of the water in kilograms, b_i , b_j ... are the molalities of the ions and I is the ionic strength. The first term, f(I) , represents the Debye–Hückel limiting law.
The quantities λ_ij(I) represent the short-range interactions in the presence of solvent between solute particles i and j . This binary interaction parameter, or second virial coefficient, depends on ionic strength, on the particular species i and j , and on the temperature and pressure. The quantities μ_ijk represent the interactions between three particles. Higher terms may also be included in the virial expansion. Next, the free energy is expressed as the sum of chemical potentials , or partial molal free energies, and an expression for the activity coefficient is obtained by differentiating the virial expansion with respect to a molality b . For the osmotic coefficient this gives ϕ − 1 = ( ∑_i b_i )^−1 [ I f′ − f + ∑_i ∑_j ( λ_ij + I λ′_ij ) b_i b_j + 2 ∑_i ∑_j ∑_k μ_ijk b_i b_j b_k + ⋯ ]. For a simple electrolyte M_p X_q , at a concentration m , made up of ions M^z+ and X^z− , the parameters f^ϕ , B^ϕ_MX and C^ϕ_MX are introduced. The term f^ϕ is essentially the Debye–Hückel term. Terms involving μ_MMM and μ_XXX are not included, as interactions between three ions of the same charge are unlikely to occur except in very concentrated solutions. The B parameter was found empirically to show an ionic strength dependence (in the absence of ion-pairing ) which could be expressed as B^ϕ_MX = β^(0)_MX + β^(1)_MX exp( −α √I ). With these definitions, the expression for the osmotic coefficient can be written in terms of f^ϕ , B^ϕ_MX and C^ϕ_MX ; a similar expression is obtained for the mean activity coefficient. These equations were applied to an extensive range of experimental data at 25 °C with excellent agreement to about 6 mol kg^−1 for various types of electrolyte. [ 12 ] [ 13 ] The treatment can be extended to mixed electrolytes [ 14 ] and to include association equilibria. [ 15 ] Values for the parameters β^(0) , β^(1) and C for inorganic and organic acids, bases and salts have been tabulated. [ 16 ] Temperature and pressure variation is also discussed. One area of application of Pitzer parameters is to describe the ionic strength variation of equilibrium constants measured as concentration quotients. Both SIT and Pitzer parameters have been used in this context. For example, both sets of parameters were calculated for some uranium complexes and were found to account equally well for the ionic strength dependence of the stability constants. [ 17 ] Pitzer parameters and SIT theory have been extensively compared. There are more parameters in the Pitzer equations than in the SIT equations. Because of this, the Pitzer equations provide for more precise modelling of mean activity coefficient data and equilibrium constants. However, the greater number of Pitzer parameters means that they are more difficult to determine. [ 18 ] Besides the set of parameters obtained by Pitzer et al. in the 1970s mentioned in the previous section, Kim and Frederick [ 19 ] [ 20 ] published the Pitzer parameters for 304 single salts in aqueous solutions at 298.15 K, extending the model to the concentration range up to the saturation point. Those parameters are widely used; however, many complex electrolytes, including ones with organic anions or cations that are very significant in some related fields, were not summarized in their paper.
For some complex electrolytes, Ge et al. [ 21 ] obtained a new set of Pitzer parameters using up-to-date measured or critically reviewed osmotic coefficient or activity coefficient data. Besides the well-known Pitzer-like equations, there is a simple and easy-to-use semi-empirical model, called the three-characteristic-parameter correlation (TCPC) model, first proposed by Lin et al. [ 22 ] It combines the Pitzer long-range interaction with a short-range solvation effect. Ge et al. [ 23 ] modified this model and obtained TCPC parameters for a larger number of single-salt aqueous solutions. The model was also extended to a number of electrolytes dissolved in methanol, ethanol, 2-propanol, and so on. [ 24 ] Temperature-dependent parameters for a number of common single salts have also been compiled. [ 25 ] The performance of the TCPC model in correlating measured activity coefficients or osmotic coefficients is found to be comparable with that of Pitzer-like models. Due to its empirical aspects, the Pitzer modelling framework has a number of well-known limitations. [ 26 ] Most importantly, to improve the fits to experimental data, different variations of the equations have been described. Extrapolations, especially in the temperature and pressure domain, are generally problematic. One alternative modelling approach [ 27 ] has been specifically designed to address this extrapolation issue by reducing the number of equation parameters while maintaining similar predictive precision and accuracy.
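To make the single-salt form of the model concrete, here is a minimal Python sketch of the osmotic coefficient of a 1:1 electrolyte using the standard Pitzer single-salt expression. The function name is invented for the example, and the NaCl parameter values (beta0 ~ 0.0765, beta1 ~ 0.2664, C_phi ~ 0.00127) and the Debye-Hückel slope A_phi ~ 0.392 at 25 °C are commonly tabulated figures quoted here from memory; they should be checked against a primary compilation before use.

import math

def osmotic_coefficient_1_1(m: float, beta0: float, beta1: float, c_phi: float,
                            a_phi: float = 0.392, b: float = 1.2, alpha: float = 2.0) -> float:
    """Pitzer osmotic coefficient for a 1:1 electrolyte of molality m (mol/kg).

    phi - 1 = f_phi + m*B_phi + m^2*C_phi, with
    f_phi = -a_phi*sqrt(I)/(1 + b*sqrt(I)) and
    B_phi = beta0 + beta1*exp(-alpha*sqrt(I)); for a 1:1 salt, I = m.
    """
    I = m                                    # ionic strength equals molality for a 1:1 salt
    sqrt_I = math.sqrt(I)
    f_phi = -a_phi * sqrt_I / (1.0 + b * sqrt_I)
    B_phi = beta0 + beta1 * math.exp(-alpha * sqrt_I)
    return 1.0 + f_phi + m * B_phi + m * m * c_phi

# Example: NaCl at 1 mol/kg and 25 degC gives phi of roughly 0.94 with these parameters
print(osmotic_coefficient_1_1(1.0, beta0=0.0765, beta1=0.2664, c_phi=0.00127))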
https://en.wikipedia.org/wiki/Pitzer_equations
2,2-Dimethylpropanoyl chloride , also known as pivaloyl chloride, is a branched-chain acyl chloride . [ 1 ] It was first made by Aleksandr Butlerov in 1874 by reacting pivalic acid with phosphorus pentachloride . [ 2 ] Pivaloyl chloride is used as an input in the manufacture of some drugs, insecticides and herbicides.
https://en.wikipedia.org/wiki/Pivaloyl_chloride
The Pixel 8 and Pixel 8 Pro are a pair of Android smartphones designed, developed, and marketed by Google as part of the Google Pixel product line. They serve as the successors to the Pixel 7 and Pixel 7 Pro , respectively. Visually, the phones resemble their respective predecessors, with incremental upgrades to their displays and performance. Powered by the third-generation Google Tensor system-on-chip , Google placed heavy emphasis on their artificial intelligence –powered features, especially in the realm of generative AI and photo editing . The Pixel 8 and Pixel 8 Pro were officially announced on October 4, 2023, at the annual Made by Google event and were released in the United States on October 12. They received generally positive reviews from critics, who praised both the hardware and software despite their modest upgrades. The phones' AI features, Google's historic promise of seven years of software updates , and the Pro model's unconventional inclusion of a temperature sensor received significant attention and was heavily scrutinized, drawing mixed reactions. The mid-range variant Pixel 8a was released in May 2024. In May 2023, 9to5Google reported that Google intended to launch the Pixel 8 and Pixel 8 Pro in late 2023. [ 4 ] The phones were approved by the Federal Communications Commission (FCC) in August of that year. [ 5 ] After previewing the phones in September, [ 6 ] Google officially announced the phones on October 4, alongside the Pixel Watch 2 , at the annual Made by Google event. [ 7 ] Pre-orders became available the same day, [ 8 ] and the phones became available in 21 countries on October 12. [ 9 ] [ 10 ] Google hardware chief Rick Osterloh announced later that month that the company would begin manufacturing its Pixel phones in India beginning in 2024 with the Pixel 8, following Apple 's lead with the iPhone 15 series. Bloomberg News reported that Dixon Technologies and Foxconn were among the top contenders for the job. [ 11 ] [ 12 ] The Pixel 8 and Pixel 8 Pro are visually similar to the Pixel 7 and Pixel 7 Pro , respectively, [ 13 ] with minor refinements such as a flatter screen, more rounded corners, and softer edges. The Pro model also features a matte finish. [ 8 ] [ 9 ] [ 14 ] They were each available in three colors, [ 8 ] with a fourth "Mint" color added in January 2024: [ 15 ] The Pixel 8 has a 6.2 in (157 mm) FHD+ 1080p OLED display at 428 ppi with a 2400 × 1080 pixel resolution and a 20:9 aspect ratio, while the Pixel 8 Pro has a 6.7 in (170 mm) QHD+ 1440p LTPO OLED display at 489 ppi with a 2992 × 1344 pixel resolution and a 20:9 aspect ratio. [ 16 ] The Pixel 8 has a variable refresh rate of 60–120 Hz, while the Pixel 8 Pro has variable refresh rate of 1–120 Hz. Both phones contain a wide and a ultrawide rear camera, with the Pixel 8 Pro featuring an additional 48 megapixel telephoto 5× optical zoom rear camera. The front camera on both phones contains a 10.5 megapixel ultrawide lens. [ 9 ] As with the Pixel 7 series, the Face Unlock facial recognition system is enabled by software and the front camera, but adds support for secure biometric authentication. [ 17 ] [ 18 ] The phones are powered by the third-generation Google Tensor system-on-chip (SoC), marketed as " Google Tensor G3 ", and the Titan M2 security co-processor . [ 18 ] [ 19 ] The OLED display, marketed as "Actua" and "Super Actua" on the Pixel 8 and Pixel 8 Pro, respectively, boasts "better color accuracy and higher brightness". 
[ 8 ] [ 18 ] The Pro model also features a temperature sensor on its rear camera bar, an unconventional feature for a smartphone. [ 18 ] It was launched with its use on humans pending approval from the Food and Drug Administration . [ 20 ] The Pixel 8 and Pixel 8 Pro were among the first phones on the market to support Wi-Fi 7 , the latest wireless standard. [ 21 ] The Pixel 8 and Pixel 8 Pro shipped with Android 14 at launch, [ 14 ] coinciding with the stable release of Android 14 on the Android Open Source Project (AOSP), [ 22 ] along with version 9.1 of the newly renamed Pixel Camera app. [ 23 ] They will receive seven years of major OS upgrades, with support extending to 2030, a significant extension compared to previous generations that places the Pixel on par with Apple's typical support lifetime for iPhones . [ 14 ] [ 18 ] Google also stated that it would stock spare parts for the devices for seven years. [ 24 ] Wired and The Verge noted that these two commitments were potentially linked to California's impending right to repair act requiring companies to provide support for devices costing $100 or more for seven years. [ 18 ] [ 24 ] As with previous Pixel smartphones, artificial intelligence and software advancements took center stage during the Made by Google launch event. New camera features announced include Best Take, an upgraded Magic Eraser, Night Sight Video, Magic Editor, Audio Magic Eraser, and Real Tone on video. [ 8 ] [ 18 ] Exclusive to the Pixel 8 Pro were Video Boost and manual "Pro" camera controls, [ 14 ] [ 25 ] although the latter was restricted to the Pro model only via software. [ 26 ] As part of Google's ongoing response to OpenAI 's ChatGPT , Google also announced Assistant with Bard , a new version of the Google Assistant virtual assistant that integrates the company's recently introduced Bard chatbot. [ 27 ] Other generative AI features included improved call screening , faster voice typing , grammar suggestions on Gboard , upgrades to the Recorder app, and a new magnifier app. [ 28 ] The Pixel 8 Pro was touted as the first piece of hardware to run Google's generative AI large language models fully on-device, [ 29 ] with Gemini Nano later being integrated into both models. [ 30 ] [ 31 ] [ 32 ] On launch day, Google partnered with X Corp. to include an Easter egg on X , formerly known as Twitter, when users searched the hashtag #GooglePixel. [ 33 ] In November 2023, Google set up a "Google Pixel Experience Space" pop-up store in Taiwan to showcase the Pixel 8 and Pixel 8 Pro. [ 34 ] In continuation of Google's multi-year sponsorship of the NBA , the Pixel 8's "Built Different" advertising campaign spanned the NBA's 2023–2024 season . A series of commercials, produced in collaboration with Robot Agency, featured numerous NBA athletes and personalities such as Jimmy Butler , Giannis Antetokounmpo , Thanasis Antetokounmpo , Chiney Ogwumike , Flau'jae Johnson , Jamad Fiin, Chris Brickley, Cameron Look, Richard Jefferson , and Crissa Jackson . [ 35 ] [ 36 ] Google also collaborated with The New York Times to capture street-style video for the publication's "Style Outside" column. [ 37 ] [ 38 ] To promote the introduction of the "Mint" color in January 2024, Google partnered with street artist Ricardo Gonzalez to paint over a Pixel 8 billboard in New York City. [ 39 ] In February 2024, Google released a commercial titled "Javier in Frame" which advertised the Pixel 8's Guided Frame feature, ahead of its airing during Super Bowl LVIII .
Directed by Adam Morse and telling the story of a blind man named Javier who uses Guided Frame to "document important moments in his life", the 60-second commercial marked Google's third Super Bowl spot in a row to market the Pixel. [ 40 ] [ 41 ] In early reactions, three aspects particularly piqued commentators' interest: the Pixel 8 Pro's temperature sensor, Google's promise of seven years of updates, and the heavy emphasis on AI. The temperature sensor drew varied reactions: some found it a potentially useful novelty, [ 8 ] [ 25 ] [ 42 ] while others were bewildered and dismissed it as a strange gimmick. [ 9 ] [ 13 ] [ 43 ] The response to Google's seven-year pledge was similarly divided: several journalists welcomed the move, hailing it as astonishing and monumental; [ 44 ] [ 45 ] [ 46 ] others questioned whether Google would fulfill its promise. [ 47 ] [ 48 ] [ 49 ] The Washington Post 's Chris Velazco opined that the phones reflected "a deepening obsession with AI", [ 50 ] with The Verge 's Jon Porter describing the launch event as "a parade of AI", observing that the phrase "AI" had been invoked over fifty times. [ 51 ] As the Pixel 8 was "the first mainstream phone to bake generative AI directly into the photo creation process at no extra cost", computer science professor Ren Ng at the University of California, Berkeley described it as a pivotal milestone in the area of imagery. [ 52 ] Nicole Nguyen of The Wall Street Journal raised concerns about the implications of the Pixel 8's photo editing features, fearing that they could lead to an influx of " fauxtography ", the malicious manipulation of photographs. [ 53 ] The AI features themselves received mixed responses. Writing for Wired , Julian Chokkattu expressed excitement that these features, hitherto limited to those proficient with image or video editing software , were now being made accessible to a wider audience; [ 54 ] Ben Sin of XDA Developers found them "fun and scary". [ 55 ] Porter felt that some of the features showcased were unnecessary, concluding that Google was continuing to attempt to reassert its position as a leader in AI after ChatGPT's meteoric rise earlier that year had caught Google executives off-guard . [ 51 ] Also writing for The Verge , Allison Johnson described the features as "complicated and messy", [ 56 ] while her colleague Jay Peters contemplated the question, "What is a photo?" [ 57 ] Reviews were largely positive, though Mashable observed prevalent discontent with the phones' battery life, temperature sensor, and higher prices. [ 58 ] Writing for The Guardian , Samuel Gibbs praised the phones' affordability and build quality, [ 59 ] [ 60 ] while Digital Spy 's Jason Murdock highlighted their cameras, performance, and displays. [ 61 ] [ 62 ] Chokkattu was thoroughly impressed by the phones' AI features, but was less pleased with the battery life and Face Unlock system. [ 63 ] PCMag 's Iyaz Akhtar echoed these sentiments, [ 64 ] [ 65 ] while June Wan of ZDNET and Daniel Howley of Yahoo! Finance also emphasized the usefulness of AI. [ 66 ] [ 67 ] Marques Brownlee thought the phones were a mixed bag, finding the AI features hit-or-miss. [ 68 ] CNN Underscored reviewer Max Buondonno offered glowing praise of both phones. [ 69 ] [ 70 ] The Verge 's Allison Johnson was more skeptical, finding the AI features "useful [but] troubling", lamenting the higher prices, and questioning Google's seven-year-update promise.
[ 71 ] Mark Knapp of IGN appreciated the phones' modest hardware and performance upgrades, but felt they were inferior to Samsung 's latest Android phones. [ 72 ] [ 73 ] Ron Amadeo of Ars Technica commended Google for abandoning curved screens in favor of a flat one, as well as praising its commitment to Tensor and software updates; however, he lambasted the Pro's temperature sensor as "embracing the worst of junky smartphone gimmicks". [ 74 ] Forbes staff writer Rebecca Isaacs deemed the phones "a solid choice for casual users". [ 75 ] Ryan Reith, an analyst at the International Data Group , predicted that Google could achieve higher sales numbers "if supported by strong marketing", considering its emphasis on AI. [ 20 ] An opinion piece published in the Financial Times was headlined: "Price, not AI, will lift [the] Pixel's market share". [ 76 ] Multiple publications have labeled the phones Google's latest subdued effort to compete with Apple's dominant iPhone sales. [ 20 ] [ 77 ]
https://en.wikipedia.org/wiki/Pixel_8
https://en.wikipedia.org/wiki/Pixel_8_Pro
The Pixel 9 , Pixel 9 Pro , and Pixel 9 Pro XL are a group of Android smartphones designed, developed, and marketed by Google as part of the Google Pixel product line. They serve as the successors to the Pixel 8 and Pixel 8 Pro . Sporting a redesigned appearance and powered by the fourth-generation Google Tensor system-on-chip , the phones are heavily integrated with Gemini -branded artificial intelligence features. The Pixel 9, Pixel 9 Pro, and Pixel 9 Pro XL were officially announced on August 13, 2024, at the annual Made by Google event, and were released in the United States on August 22 and September 4. The Pixel 9 series was approved by the Federal Communications Commission (FCC) in July 2024. [ 3 ] After previewing the Pro model the same month, [ 4 ] [ 5 ] Google officially announced the Pixel 9, Pixel 9 Pro, and Pixel 9 Pro XL on August 13, alongside the Pixel 9 Pro Fold and Pixel Watch 3 , at the annual Made by Google event. [ 6 ] Numerous observers noted the unusually early timing of the launch event, which was traditionally held in September after Apple 's annual launch of the new iPhone . Commentators described this as an attempt to "outshine" Apple, its longtime rival, and demonstrate its artificial intelligence (AI) prowess. [ a ] Several also took note of Google's unusually frequent veiled attacks targeting Apple. [ 13 ] [ 11 ] All three phones became available for pre-order the same day; the Pixel 9 and Pixel 9 Pro XL were made available on August 22 while the Pro was available on September 4, the latter alongside the Pixel 9 Pro Fold, in 32 countries. [ 14 ] [ 15 ] Pixel 9 owners on Verizon in the US can send text messages via satellite from outside network coverage. [ 16 ] The Pixel 9, Pixel 9 Pro, and Pixel 9 Pro XL feature a redesigned appearance while retaining the overall design language that began with the Pixel 6 series, with the edges now flat rather than curved and the camera bar taking the shape of "an elongated, free-floating [...] oval". [ 17 ] [ 18 ] They are each available in four colors. [ 6 ] In a departure from previous generations, the Pixel 9 series was offered in three models: the base model, a "Pro" model, and a new "Pro XL" model. The Pixel 9 and Pixel 9 Pro are near-identical in size, with a 6.3 in (160 mm) screen size, while the Pixel 9 Pro XL is slightly larger at 6.7 in (171 mm). [ 17 ] [ 19 ] A key distinction between the base and Pro models lies in the camera setup, with the higher-end models sporting a 48- megapixel telephoto rear camera in addition to the standard 50- and 48-megapixel wide and ultrawide lenses; the Pro models also include a 42-megapixel ultrawide front camera compared to 10.5 megapixels on the base. [ 6 ] All three phones are powered by the fourth-generation Google Tensor system-on-chip (SoC), marketed as "Google Tensor G4", and the Titan M2 security co-processor . [ 20 ] [ 21 ] The upgraded Samsung Exynos 5400 modem on the new Tensor chip enhances the Pixel 9's satellite connectivity, enabling users to contact emergency services via satellite , similar to the feature introduced by Apple on the iPhone 14 , and making the Pixel 9 series the first Android phones to be equipped with this technology. Dubbed "Satellite SOS", the feature was developed in partnership with satellite network provider Skylo and SOS dispatch center Garmin and was made available for free for two years. [ 22 ] [ 23 ] [ 24 ] Tensor G4 is also the first SoC to run Gemini Nano , a version of the Gemini large language model (LLM), with multimodality .
[ 25 ] As with prior Pixel generations, the Pixel 9 series is equipped with numerous AI-powered features, with the Associated Press calling it a "vessel for the AI technology that is expected to reshape the way people live and work". [ 11 ] Google dedicated the first half-hour of its launch event to discussing its advances in the field before unveiling its new devices. [ 26 ] Gemini , a generative AI–powered chatbot launched in 2023 in response to OpenAI 's ChatGPT , was frequently spotlighted, replacing the Google Assistant as the new default virtual assistant on Pixel and being heavily integrated into the Pixel 9 series. [ 27 ] [ 28 ] In order to facilitate on-device AI processing, the RAM on the Pixel 9 series was substantially increased. [ 28 ] [ 20 ] Google also debuted Gemini Live, a new voice chat mode powered by the Imagen 3 text-to-image model . [ 29 ] [ 30 ] Other AI-powered features included Pixel Studio, an image generation app; Pixel Screenshots, a screenshot management and analysis app; Add Me, the ability to retroactively add subjects to photos; Pixel Weather, a new weather forecast app; Call Notes, which summarizes phone calls while running on-device; and miscellaneous camera updates. [ 31 ] [ 32 ] [ 33 ] Breaking with tradition, the Pixel 9 series was shipped with the year-old Android 14 rather than Android 15 , likely due to the earlier-than-usual timeframe; [ 34 ] [ 35 ] the phones were updated with Android 15 via a "Pixel Drop" software update, formerly known as Feature Drops, on October 15. [ 36 ] Continuing the Pixel 8 's trend, the phones will receive seven years of major OS upgrades, with support extending to 2031. [ 6 ] [ 21 ] An "after party" livestream hosted by actress Keke Palmer and featuring celebrity guest appearances followed the Made by Google event. [ 37 ] Days after the phones' launch, Google generated controversy after several social media influencers who were part of the seven-year-old #TeamPixel marketing program posted screenshots of a new clause stipulating that participants must not show preference for competitors when creating content with the Pixel 9. Missing context led to confusion online regarding the extent of the restriction, which only applied to #TeamPixel influencers. Google later apologized and removed the clause from the agreement. [ 38 ] [ 39 ] [ 40 ] In February 2025, Google released a commercial titled "Dream Job" which advertised Gemini on the Pixel 9, ahead of its airing during Super Bowl LIX . One of Google's two Super Bowl spots that year, the 60-second commercial featured a father using Gemini on his Pixel 9 to prepare for a job interview. Google also ran a third commercial entitled "Party Blitz" online, in which a man "attempts to impress his girlfriend's family by using Gemini [on his Pixel 9] to become a football expert". [ 41 ] [ 42 ] [ 43 ] In his initial reaction to the Pixel 9 series, Android Police 's Rajesh Pandey praised the overall design but disliked the iPhone-esque flat edges and polished metal frame. [ 18 ] His colleague Taylor Kerns questioned the absence of an "XL" version of the base model, [ 44 ] while Rebecca Isaacs of Forbes welcomed the addition of a small-sized Pro model and the enhanced build quality. [ 45 ] Pandey and Kerns' colleague Will Sattelberg concurred but had mixed reactions to the AI-powered features. [ 46 ] Allison Johnson of The Verge was impressed by the camera features, writing in a headline, "The Pixel 9 Pro XL showed me the future of AI photography".
[ 47 ] Writing for Mashable , Kimberly Gedeon was drawn to the design of the 9 Pro XL, praising the upgraded Super Res Zoom feature and AI-powered features. [ 48 ] PCMag 's Iyaz Akhtar called the rear design of the phones "divisive" but "sleek". [ 49 ] Kyle Barr of Gizmodo and Philip Michaels of Tom's Guide both found themselves particularly attracted to the Pixel Screenshots app. [ 50 ] [ 51 ] Kerry Wan of ZDNET predicted that the phones would be a " sleeper hit ". [ 52 ] The Pixel 9's Tensor G4 processor has also received mixed reviews. While it was praised for improved AI capabilities, some have criticized its poor efficiency under heavy load and lack of performance improvements over the Tensor G3, especially when compared to other flagship processors at the time. [ 53 ] [ 54 ] [ 55 ] Soniya Jobanputra, a lead member of the Pixel's product management team, told The Financial Express that the G4 was not designed to "beat some specific benchmark that's out there. We're designing it to meet our use cases". [ 56 ]
https://en.wikipedia.org/wiki/Pixel_9
https://en.wikipedia.org/wiki/Pixel_9_Pro
https://en.wikipedia.org/wiki/Pixel_9_Pro_XL
The Pixel Watch is a Wear OS smartwatch designed, developed, and marketed by Google as part of the Google Pixel product line. First previewed in May 2022 during the Google I/O keynote, it features a round dome-shaped display as well as deep integration with Fitbit , which Google acquired in 2021. Two Pixel-branded smartwatches had been in development at Google by July 2016, but were canceled ahead of their release due to hardware chief Rick Osterloh 's concerns that they did not fit well with other Pixel devices. Development on a new Pixel-branded watch began shortly after Google's acquisition of Fitbit. The Pixel Watch was officially announced on October 6, 2022, at the annual Made by Google event, and was released in the United States on October 13. It was succeeded by the Pixel Watch 2 in 2023. In July 2016, Google was reportedly developing two smartwatches , codenamed "Swordfish" and "Angelfish", which were to be powered by the Android Wear operating system and expected to be released under the Nexus brand name. [ 2 ] According to Business Insider , these watches were canceled ahead of the 2016 Made by Google launch event due to concerns from Google hardware chief Rick Osterloh that they did not sync well with the company's new Pixel devices; the smartwatches were eventually "salvaged" by LG and released as the LG Watch Style and LG Watch Sport in February 2017. [ 3 ] [ 4 ] Android Wear was rebranded as Wear OS in March 2018. [ 5 ] In August, Wear OS director of engineering Miles Barr dispelled rumors that the company planned to release a Pixel-branded smartwatch that year. [ 6 ] In January 2019, smartwatch manufacturer Fossil Group agreed to sell some of its intellectual property on smartwatch technology to Google for $40 million, as well as transfer a portion of its research and development team over. [ 7 ] In November, Google announced that it would acquire smartwatch and fitness tracker maker Fitbit for $2.1 billion, [ 8 ] which Osterloh stated would pave the way for Google-developed wearables . [ 9 ] The acquisition was completed in January 2021 following a prolonged investigation by the U.S. Department of Justice , [ 10 ] [ 11 ] with Fitbit absorbed into Google's hardware division. [ 12 ] Fitbit co-founder James Park was subsequently appointed head of Google's wearables division. [ 13 ] During the 2021 Google I/O keynote in May, Google announced Wear OS 3, a version of Wear OS co-developed with Samsung and Fitbit which incorporates elements of the former's Tizen operating system. [ 14 ] [ 15 ] [ 16 ] In October, Osterloh revealed that Google and Fitbit were in the process of developing a Wear OS-powered smartwatch. [ 17 ] [ 18 ] Two months later, Business Insider reported that a Pixel-branded smartwatch codenamed "Rohan" was being targeted for a 2022 release, featuring a round bezel -less design, integration with Fitbit, proprietary watch bands, and health-tracking capabilities. [ 19 ] [ 20 ] Evidence unearthed that month indicated that the watch would be powered by either Samsung's Exynos system-on-chip (SoC) or Google's own Tensor chip, the latter of which had recently debuted on the company's Pixel 6 smartphone line. [ 21 ] In April 2022, the "Fitbit" category was renamed "Watches" on the online Google Store , in anticipation of the Pixel Watch's impending launch. [ 22 ] The same month, Google filed a trademark for the "Pixel Watch" name with the U.S. 
Patent and Trademark Office , [ 23 ] while three models of the smartwatch were approved by the Bluetooth Special Interest Group . [ 24 ] A prototype of the Pixel Watch was found at a restaurant in the U.S., an incident which drew parallels to Gizmodo 's leak of Apple 's iPhone 4 in 2010. [ 25 ] [ 26 ] Osterloh unveiled a preview of the Pixel Watch on May 11, during the 2022 Google I/O keynote. [ 27 ] [ 28 ] [ 29 ] In an interview with CNET , Park stated that there were no plans to shut down Fitbit, adding that the Google Fit app would co-exist with Fitbit on the Pixel Watch. [ 30 ] Google CEO Sundar Pichai was seen wearing a Pixel Watch in September during an interview at the Code 2022 conference. [ 31 ] [ 32 ] Google officially announced the Pixel Watch on October 6, alongside the Pixel 7 and Pixel 7 Pro smartphones, at the annual Made by Google event. [ 33 ] It became available for pre-orders on the same day, before being released in nine countries on October 13. [ 34 ] When asked why Google waited so long before launching the device, Osterloh cited their acquisition of Fitbit and its expansive health platform as the primary catalyst which convinced Google to greenlight the Pixel Watch, adding that the company was committed to first-party wearables. [ 35 ] The Pixel Watch sports a round watch face with a domed design, physical crown, and watch frame made of recycled stainless steel attached to custom-designed bands. [ 36 ] 18 families of watch faces are available, each of which are highly customizable. It was available in four case–band color pairs: [ 37 ] The Pixel Watch is available in two models, one with and one without support for cellular connectivity. [ 38 ] Its case has a diameter of 41 mm (1.6 in) and a Gorilla Glass 5 display. Powered by Samsung's Exynos 9110 SoC alongside the ARM Cortex-M33 co-processor, it contains a 294 mAh battery and 2 GB of RAM, as well as multiple sensors and wireless technologies. [ 39 ] The watch features a USB-C charging mechanism manufactured by Compal Electronics . [ 40 ] Due to the base's curved design, it can only be wirelessly charged with Google's proprietary magnetic charger, [ 41 ] though some users were able to charge the device using other Qi chargers or via reverse wireless charging on their phones. [ 42 ] At launch, the Pixel Watch was only compatible with proprietary bands designed by Google, though the company stated that it planned to partner with third parties to develop additional bands in the future. [ 43 ] [ 44 ] By default, each Pixel Watch comes with a proprietary Active Band, with several other proprietary band options available at an added cost. [ 45 ] [ 46 ] Counterpoint Research calculated that the LTE version of the Pixel Watch cost an estimated US$ 123 to manufacture. [ 47 ] The Pixel Watch shipped with Wear OS 3.5 , [ 48 ] and features deep integration with Fitbit. [ 49 ] It is compatible with Android smartphones running Android 8.0 or above, [ 50 ] and is accompanied by a Pixel Watch mobile app available for download on the Play Store . [ 51 ] [ 52 ] iPhones are not supported. [ 53 ] Google added fall detection capabilities in February 2023. [ 54 ] It was updated to Wear OS 4.0 in October 2023. [ 55 ] Actor Simu Liu , who previously served as brand ambassador for the Pixel 6 series in Canada, [ 56 ] participated in an advertising campaign developed by Cossette for the Pixel Watch in May 2023. 
[ 57 ] Following the announcement of the Pixel Watch and Pixel Tablet at the 2022 Google I/O, Jon Porter of The Verge opined that Google was taking subtle aim at Apple's " walled garden " ecosystem strategy. [ 58 ] This was echoed by International Data Corporation research director Ramon Llamas, who believed that Google was aiming to become a "head-on competitor to Apple". [ 59 ] Kate Kozuch of Tom's Guide praised the watch's sleek visual design. [ 60 ] Victoria Song of The Verge quelled fears over the watch's reported 24-hour battery life , declaring it was "decent" when compared to similar smartwatches. [ 61 ] The Pixel Watch was positively received upon its launch. Lisa Eadicicco of CNET and Cherlynn Low of Engadget lauded its design and health features, with Eadicicco likening it to "a hybrid of Fitbit and the Apple Watch", but both criticized the battery life. [ 62 ] [ 63 ] Song called the Pixel Watch "good-but-not-yet-great". [ 64 ] Wired 's Julian Chokkattu echoed these sentiments, but argued that its "accuracy, elegance, and comfort" compensated for its shortcomings. [ 65 ] CNN Underscored reviewer Max Buondonno praised the Pixel Watch's sleek design and the performance of Wear OS 3.5, but felt that the battery life was subpar and the screen was not large enough. [ 66 ] Nicole Nguyen of The Wall Street Journal did not find the smartwatch particularly astounding and noted several software bugs, but ultimately deemed it a worthy companion to the Pixel phone. [ 67 ] Analyst firm Canalys calculated that Google shipped an estimated 880,000 Pixel Watches during the fourth quarter of 2022, constituting 22 percent of Google's total wearable sales, which include Fitbit products. The Pixel Watch's launch allowed Google to obtain 8 percent of the wearable market share , jumping 16 percent from fourth place to second place, behind Apple. [ 68 ] The Pixel Watch Android app had amassed more than 500,000 downloads by February 2023. [ 69 ] The Pixel Watch was succeeded by the Pixel Watch 2 in October 2023. [ 70 ]
https://en.wikipedia.org/wiki/Pixel_Watch
The Pixel Watch 2 is a Wear OS smartwatch designed, developed, and marketed by Google as part of the Google Pixel product line. It serves as the successor to the first-generation Pixel Watch . The Pixel Watch 2 was officially announced on October 4, 2023, at the annual Made by Google event, and was released in the United States on October 12. In May 2023, 9to5Google reported that Google intended to release a successor to the Pixel Watch , a Wear OS –powered smartwatch, in October. [ 3 ] Two codenames for the watch, believed to be in reference to the Wi-Fi and cellular models, were later discovered to be "Eos" and "Aurora". [ 4 ] Three models were approved by the Federal Communications Commission (FCC) in August, [ 5 ] while the Eos model was listed on the Google Play Console device catalog for developers. [ 6 ] After previewing the watch in September, [ 7 ] Google officially announced the Pixel Watch 2 on October 4, alongside the Pixel 8 and Pixel 8 Pro , at the annual Made by Google event. [ 8 ] Pre-orders became available the same day, and the watch was released in 30 countries on October 12. [ 9 ] [ 10 ] The watch suffered from significant shipping delays at the online Google Store. [ 11 ] Visually, the Pixel Watch 2 is near-identical to its predecessor, save a "slightly redesigned haptic crown". Six new families of watch faces were made available at launch. [ 9 ] [ 12 ] It is available in four case–band color pairs. [ 13 ] The Pixel Watch 2 is made of recycled aluminum, a departure from the original Pixel Watch's stainless steel watch frame. Google stated that the change was made to make the watch lighter and more comfortable for users. [ 14 ] [ 9 ] It is powered by Qualcomm 's Snapdragon SW5100 system-on-chip (SoC), a departure from its predecessor's Samsung Exynos chip. [ 6 ] The watch's new circular sensor array consists of several new sensors. [ 15 ] A multipath heart rate sensor boasts more accurate readings; a skin temperature sensor tracks sleep but not menstruation; and an electrodermal activity sensor detects sweat beads to assess the wearer's mood. The Pixel Watch 2 is not compatible with the first generation's proprietary magnetic charger, instead requiring a newer and faster one. [ 9 ] The Pixel Watch 2 shipped with Wear OS 4.0 . [ 6 ] Like its predecessor, the watch features heavy Fitbit integration, given Google's acquisition of the company in 2021. [ 16 ] New personal safety features include emergency location sharing, Safety Check, and Safety Signal. [ 9 ] In her review for The Verge , Victoria Song praised the Pixel Watch 2's improvements over the first generation on all fronts, especially battery life, [ 17 ] as did Yahoo! Finance reviewer Daniel Howley and Digital Spy reviewer Jason Murdock. [ 18 ] [ 19 ] Julian Chokkattu of Wired concurred, writing, "I get a watch that actually comes with everything I wish the original did out of the box. Hooray!" [ 20 ] Matthew Miller of ZDNET highlighted the watch's deep Fitbit integration and safety features, but was ambivalent toward its small size. [ 21 ] Will Greenwald of PCMag praised the watch's design, performance, and health features, [ 22 ] while Mark Knapp of IGN called it "elegant and performant" but "still not a killer". [ 23 ] CNN Underscored 's Max Buondonno and TheStreet 's Jason Cipriani hailed its health, performance, and battery life enhancements.
[ 24 ] [ 25 ] Writing for The Guardian , Samuel Gibbs appreciated the improved performance and battery life but was disappointed with the lack of advanced health features and repairability. [ 26 ] Engadget 's Cherlynn Low was conflicted, commending Google's efforts to close the gap with other smartwatches but still finding it mediocre overall; [ 27 ] Inverse 's Raymond Wong agreed, calling it "a better smartwatch, but not the best". [ 28 ] Elizabeth de Luna of Mashable described the watch as "playing catch-up to the Apple Watch ", [ 13 ] while Robert Leedham of GQ thought it was the ideal smartwatch for those indifferent toward smartwatches. [ 29 ]
https://en.wikipedia.org/wiki/Pixel_Watch_2
The Pixel Watch 3 is a Wear OS smartwatch designed, developed, and marketed by Google as part of the Google Pixel product line. It is the successor to the second-generation Pixel Watch . The Pixel Watch 3 was officially announced on August 13, 2024, at the annual Made by Google event, and was released in the United States on September 10. 9to5Google reported in January 2024 that Google was planning to release the Pixel Watch 3 in two sizes. [ 1 ] The device was approved by the Federal Communications Commission (FCC) in July of that year. [ 2 ] Google officially announced the Pixel Watch 3 on August 13, alongside the Pixel 9 , Pixel 9 Pro , and Pixel 9 Pro Fold , at the annual Made by Google event. [ 3 ] The Pixel Watch 3 retains the same basic design as the prior Pixel Watch 1 and 2 and is marketed in two sizes, 41 and 45 mm in diameter. While the 41 mm watch matches the size of prior Pixel Watches, the bezels are slightly smaller and there is 10% more display area. The 45 mm watch has 40% more display area compared to prior models. For each size, the maximum brightness is 2000 nits. [ 4 ]
https://en.wikipedia.org/wiki/Pixel_Watch_3
A pixel aspect ratio ( PAR ) is a mathematical ratio that describes how the width of a pixel in a digital image compares to the height of that pixel. Most digital imaging systems display an image as a grid of tiny, square pixels. However, some imaging systems, especially those that must be compatible with standard-definition television motion pictures, display an image as a grid of rectangular pixels, in which the pixel width and height are different. Pixel aspect ratio describes this difference. Use of pixel aspect ratio mostly involves pictures pertaining to standard-definition television and some other exceptional cases. Most other imaging systems, including those that comply with SMPTE standards and practices, use square pixels. PAR is also known as sample aspect ratio and abbreviated SAR , though it can be confused with storage aspect ratio . The ratio of the width to the height of an image is known as the aspect ratio , or more precisely the display aspect ratio (DAR) – the aspect ratio of the image as displayed; for TV, DAR was traditionally 4:3 (a.k.a. fullscreen), with 16:9 (a.k.a. widescreen) now the standard for HDTV. In digital images , there is a distinction with the storage aspect ratio (SAR), which is the ratio of pixel dimensions . If an image is displayed with square pixels, then these ratios agree; if not, then non-square, "rectangular" pixels are used, and these ratios disagree. The aspect ratio of the pixels themselves is known as the pixel aspect ratio (PAR) – for square pixels this is 1:1 – and these are related by the identity DAR = PAR × SAR. Rearranging (solving for PAR) yields PAR = DAR ÷ SAR. For example, a 640 × 480 image has a storage aspect ratio of 640:480 = 4:3; displayed on a 4:3 screen (DAR = 4:3), it has square pixels and thus a PAR of 1:1. In analog images such as film there is no notion of pixel, nor notion of SAR or PAR, but in the digitization of analog images the resulting digital image has pixels, hence SAR (and accordingly PAR, if displayed at the same aspect ratio as the original). Non-square pixels arise often in early digital TV standards, related to digitization of analog TV signals – whose vertical and "effective" horizontal resolutions differ and are thus best described by non-square pixels – and also in some digital video cameras and computer display modes , such as Color Graphics Adapter (CGA). Today they arise also in transcoding between resolutions with different SARs. Actual displays do not generally have non-square pixels, though digital sensors might; they are rather a mathematical abstraction used in resampling images to convert between resolutions. There are several complicating factors in understanding PAR, particularly as it pertains to digitization of analog video. Video is presented as a sequential series of images called video frames. Historically, video frames were created and recorded in analog form. As digital display technology, digital broadcast technology, and digital video compression evolved separately, differences arose between video frame formats that must be addressed using pixel aspect ratio. Digital video frames are generally defined as a grid of pixels used to present each sequential image. The horizontal component is defined by pixels (or samples), and is known as a video line. The vertical component is defined by the number of lines, as in 480 lines. Standard-definition television standards and practices were developed as broadcast technologies and intended for terrestrial broadcasting, and were therefore not designed for digital video presentation.
Such standards define an image as an array of well-defined horizontal "lines", a well-defined vertical "line duration" and a well-defined picture center. However, there is not a standard-definition television standard that properly defines image edges or explicitly demands a certain number of picture elements per line. Furthermore, analog video systems such as NTSC 480i and PAL 576i , instead of employing progressively displayed frames, employ fields or interlaced half-frames displayed in an interwoven manner to reduce flicker and double the image rate for smoother motion. As a result of computers becoming powerful enough to serve as video editing tools, video digital-to-analog converters and analog-to-digital converters were made to overcome this incompatibility. To convert analog video lines into a series of square pixels, the industry adopted a default sampling rate at which luma values were extracted into pixels. The luma sampling rate for 480i pictures was 12 + 3 ⁄ 11 MHz and for 576i pictures was 14 + 3 ⁄ 4 MHz . The term pixel aspect ratio was first coined when ITU-R BT.601 (commonly known as Rec. 601 ) specified that standard-definition television pictures are made of lines of exactly 720 non-square pixels. ITU-R BT.601 did not define the exact pixel aspect ratio but did provide enough information to calculate the exact pixel aspect ratio based on industry practices: the standard luma sampling rate of precisely 13 + 1 ⁄ 2 MHz. Based on this information, the pixel aspect ratio works out to ( 12 + 3 ⁄ 11 ) ÷ 13.5 = 10:11 for 480i and ( 14 + 3 ⁄ 4 ) ÷ 13.5 = 59:54 for 576i . SMPTE RP 187 further attempted to standardize the pixel aspect ratio values for 480i and 576i . It designated 177:160 for 480i or 1035:1132 for 576i . However, due to significant difference with practices in effect by industry and the computational load that they imposed upon the involved hardware, SMPTE RP 187 was simply ignored. SMPTE RP 187 information annex A.4 further suggested the use of 10:11 for 480i . As of this writing, ITU-R BT.601-6, which is the latest edition of ITU-R BT.601, still implies that the pixel aspect ratios mentioned above are correct. As stated above, ITU-R BT.601 specified that standard-definition television pictures are made of lines of 720 non-square pixels, sampled with a precisely specified sampling rate. A simple mathematical calculation reveals that a 704 pixel width would be enough to contain a 480i or 576i standard 4:3 picture: 480 × 4 ⁄ 3 ÷ 10 ⁄ 11 = 704 pixels for 480i and 576 × 4 ⁄ 3 ÷ 59 ⁄ 54 ≈ 703 pixels for 576i . Unfortunately, not all standard TV pictures are exactly 4:3: As mentioned earlier, in analog video, the center of a picture is well-defined but the edges of the picture are not standardized. As a result, some analog devices (mostly PAL devices but also some NTSC devices) generated motion pictures that were horizontally (slightly) wider. This also proportionately applies to anamorphic widescreen (16:9) pictures. Therefore, to maintain a safe margin of error, ITU-R BT.601 required sampling 16 more non-square pixels per line (8 more at each edge) to ensure saving all video data near the margins. This requirement, however, had implications for PAL motion pictures. PAL pixel aspect ratios for standard (4:3) and anamorphic wide screen (16:9), respectively 59:54 and 118:81, were awkward for digital image processing, especially for mixing PAL and NTSC video clips.
Therefore, video editing products chose the almost equivalent values, respectively 12:11 and 16:11, which were more elegant and could create PAL digital images at exactly 704 pixels wide, as illustrated by 576 × 4 ⁄ 3 ÷ 12 ⁄ 11 = 704 and 576 × 16 ⁄ 9 ÷ 16 ⁄ 11 = 704. Commonly found on the Internet and in various other published media are numerous sources that introduce different and highly incompatible values as the pixel aspect ratios of various video pictures and video systems. (See the Supplementary sources section.) To judge the accuracy and feasibility of these sources, note that because digital motion pictures were invented years after traditional motion pictures, all video pictures targeted for standard-definition television and compatible media, digital or otherwise, have (and must have) specifications compatible with standard-definition television. Therefore, the pixel aspect ratio of digital video must be calculated from the specification of common traditional equipment rather than the specifications of digital video. Otherwise, any pixel aspect ratio that is calculated from a digital video source is only usable in certain cases for the same kind of video sources and cannot be considered or used as a general pixel aspect ratio of any standard-definition television system. In addition, unlike digital video, which has well-defined picture edges, traditional video systems have never standardized a well-defined edge for the picture. Therefore, the pixel aspect ratio of common standard television systems cannot be calculated based on edges of pictures. Such a calculated aspect ratio value would not be entirely wrong, but it also cannot be considered the general pixel aspect ratio of any specific video system. The use of such values would be restricted only to certain cases. In modern digital imaging systems and high-definition televisions , especially those that comply with SMPTE standards and practices, only square pixels are used for broadcast and display. However, some formats (e.g., HDV , DVCPRO HD ) use non-square pixels internally for image storage, as a way to reduce the amount of data that must be processed, thus limiting the necessary transfer rates and maintaining compatibility with existing interfaces. Directly mapping an image with a certain pixel aspect ratio onto a device whose pixel aspect ratio is different makes the image look unnaturally stretched or squashed in either the horizontal or vertical direction. For example, a circle generated for a computer display with square pixels looks like a vertical ellipse on a standard-definition NTSC television that uses vertically rectangular pixels. This issue is more evident on wide-screen TVs. Pixel aspect ratio must be taken into consideration by video editing software products that edit video files with non-square pixels, especially when mixing video clips with different pixel aspect ratios. This would be the case when creating a video montage from various cameras employing different video standards (a relatively rare situation). Special effects software products must also take the pixel aspect ratio into consideration, since some special effects require calculation of the distances from a certain point so that they look visually correct. Examples of such effects are radial blur, motion blur, or even a simple image rotation. Pixel aspect ratio values are used mainly in digital video software, where motion pictures must be converted or reconditioned to use video systems other than the original.
Video player software may use pixel aspect ratio to properly render digital video on screen. Video editing software uses pixel aspect ratio to properly scale and render a video into a new format. Pixel aspect ratio support is also required to display, without distortion, legacy digital images from computer standards and video games of the 1980s. In that generation, square pixels were too expensive to produce, so machines and video cards such as the SNES , CGA , EGA , Hercules , C64 , MSX , PC-88 , and X68000 had non-square pixels. [ 1 ] Pixel aspect ratio is often confused with other types of image aspect ratio, that is, the ratio of image width to height. Due to the non-squareness of pixels in standard-definition TV, there are two types of such aspect ratios: storage aspect ratio ( SAR ) and display aspect ratio (abbreviated DAR , also known as image aspect ratio and picture aspect ratio ). Pixel aspect ratio ( PAR ) is also known as sample aspect ratio (abbreviated SAR ) in some industrial standards (such as H.264 [ 2 ] ) and in the output of programs (such as ffmpeg [ 3 ] ). Note the reuse of the abbreviations PAR and SAR . This article uses only the terms pixel aspect ratio, display aspect ratio and storage aspect ratio to avoid ambiguity. Storage aspect ratio is the ratio of the image width to height in pixels, and can be easily calculated from the video file. Display aspect ratio is the ratio of image width to height (in a unit of length such as centimeters or inches) when displayed on screen, and is calculated from the combination of pixel aspect ratio and storage aspect ratio. However, users who know the definition of these concepts may get confused as well. Poorly crafted user interfaces or poorly written documentation can easily cause such confusion: Some video-editing software applications ask users to specify an "aspect ratio" for their video file, presenting them with the choices of "4:3" and "16:9". Sometimes, these choices may be "PAL 4:3", "NTSC 4:3", "PAL 16:9" and "NTSC 16:9". In such situations, the video editing program is implicitly asking for the pixel aspect ratio of the video file by asking for information about the video system from which the video file originated. The program then uses a table (similar to the one below) to determine the correct pixel aspect ratio value. Generally speaking, to avoid confusion, it can be assumed that video editing products never ask for the storage aspect ratio, as they can directly retrieve or calculate it. Non-square-pixel–aware applications also need only ask for either pixel aspect ratio or display aspect ratio, from either of which they can calculate the other. Pixel aspect ratio values for common standard-definition video formats are listed below. Note that for PAL video formats, two different types of pixel aspect ratio values are listed: the exact values derived from the analog specifications (59:54 for 4:3 and 118:81 for 16:9) and the more practical values adopted by video editing software (12:11 and 16:11). Note that sources differ on PARs for common formats – for example, 576 lines (PAL) displayed at 4:3 (DAR) corresponds to either PAR of 12:11 (if 704×576, SAR = 11:9), or a PAR of 16:15 (if 720×576, SAR = 5:4). See references for sources giving both, and SDTV: Resolution for a table of storage, display and pixel aspect ratios. Also note that CRT televisions do not have pixels, but scanlines.
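To make the relationships above concrete, here is a small numeric sketch in Python (our own illustration, not part of any broadcast standard) that derives PAR from DAR and SAR using exact fractions and computes the square-pixel display width that each stored frame corresponds to. The frame sizes and target display aspect ratios are the conventional standard-definition cases discussed above; the function names are invented for this example.

```python
from fractions import Fraction

def par_from(dar: Fraction, sar: Fraction) -> Fraction:
    """Pixel aspect ratio implied by the identity DAR = SAR x PAR."""
    return dar / sar

def display_width(storage_width: int, par: Fraction) -> Fraction:
    """Width in square pixels when a stored frame is shown with the given PAR."""
    return storage_width * par

# Conventional standard-definition cases (values as discussed above).
cases = [
    ("PAL 4:3, 720x576",  720, 576, Fraction(4, 3)),
    ("PAL 4:3, 704x576",  704, 576, Fraction(4, 3)),
    ("PAL 16:9, 704x576", 704, 576, Fraction(16, 9)),
    ("NTSC 4:3, 704x480", 704, 480, Fraction(4, 3)),
]

for name, width, height, dar in cases:
    sar = Fraction(width, height)
    par = par_from(dar, sar)
    print(f"{name}: SAR={sar}, PAR={par}, display width={float(display_width(width, par)):.0f} px")
```

Run as-is, this reproduces the practical values quoted above: a PAR of 16:15 or 12:11 for PAL 4:3 depending on whether a 720- or 704-pixel stored width is assumed, 16:11 for PAL 16:9, and 10:11 for NTSC 4:3.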
https://en.wikipedia.org/wiki/Pixel_aspect_ratio
Pixels per inch ( ppi ) and pixels per centimetre ( ppcm or pixels/cm ) are measurements of the pixel density of an electronic image device, such as a computer monitor or television display, or image digitizing device such as a camera or image scanner . Horizontal and vertical density are usually the same, as most devices have square pixels , but differ on devices that have non-square pixels. Pixel density is not the same as resolution — where the former describes the amount of detail on a physical surface or device, the latter describes the amount of pixel information regardless of its scale. Considered in another way, a pixel has no inherent size or unit (a pixel is actually a sample), but when it is printed, displayed, or scanned, then the pixel has both a physical size (dimension) and a pixel density (ppi). [ 1 ] Since most digital hardware devices use dots or pixels, the size of the media (in inches) and the number of pixels (or dots) are directly related by the 'pixels per inch'. The following formula gives the number of pixels, horizontally or vertically, given the physical size of a format and the pixels per inch of the output: number of pixels = physical size (in inches) × pixels per inch. Pixels per inch (or pixels per centimetre) describes the detail of an image file when the print size is known. For example, a 100×100 pixel image printed in a 2 inch square has a resolution of 50 pixels per inch. Used this way, the measurement is meaningful when printing an image. In many applications, such as Adobe Photoshop, the program is designed so that one creates new images by specifying the output device and PPI (pixels per inch). Thus the output target is often defined upon creating the image. When moving images between devices, such as printing an image that was created on a monitor, it is important to understand the pixel density of both devices. Consider a 23″ HD monitor (20″ wide) that has a known native resolution of 1920 pixels (horizontal). Let us assume an artist created a new image at this monitor resolution of 1920 pixels, possibly intended for the web without regard to printing. Rewriting the formula above tells us the pixel density (PPI) of the image on the monitor display: 1920 pixels ÷ 20 inches = 96 PPI. Now, let us imagine the artist wishes to print a larger banner at 48″ horizontally. We know the number of pixels in the image, and the size of the output, from which we can use the same formula again to give the PPI of the printed banner: 1920 pixels ÷ 48 inches = 40 PPI. This shows that the output banner will have only 40 pixels per inch. Since a typical printing device is capable of printing at 300 PPI, the resolution of the original image is well below what would be needed to create a decent quality banner, even if it looked good on a monitor for a website. We would say more directly that a 1920 × 1080 pixel image does not have enough pixels to be printed in a large format. Printing on paper is accomplished with different technologies. Newspapers and magazines were traditionally printed using a halftone screen, [ 2 ] which would print dots at a given frequency, the screen frequency, in lines per inch (LPI) by using a purely analog process in which a photographic print is converted into variable sized dots through interference patterns passing through a screen. Modern inkjet printers can print microscopic dots at any location, and don't require a screen grid, with the metric dots per inch (DPI). These are both different from pixel density or pixels per inch (PPI) because a pixel is a single sample of any color, whereas an inkjet print can only print a dot of a specific color either on or off.
Thus a printer translates the pixels into a series of dots using a process called dithering . The dot pitch , smallest size of each dot, is also determined by the type of paper the image is printed on. An absorbent paper surface, uncoated recycled paper for instance, lets ink droplets spread — so has a larger dot pitch. [ 3 ] Often one wishes to know the image quality in pixels per inch (PPI) that would be suitable for a given output device. If the choice is too low, then the quality will be below what the device is capable of—loss of quality—and if the choice is too high then pixels will be stored unnecessarily—wasted disk space. The ideal pixel density (PPI) depends on the output format, output device, the intended use and artistic choice. For inkjet printers measured in DPI it is generally good practice to use half or less than the DPI to determine the PPI. For example, an image intended for a printer capable of 600 dpi could be created at 300 ppi. When using other technologies such as AM or FM screen printing, there are often published screening charts that indicate the ideal PPI for a printing method. [ 4 ] Using the DPI or LPI of a printer remains useful to determine PPI until one reaches larger formats, such as 36" or higher, as the factor of visual acuity then becomes more important to consider. If a print can be viewed close up, then one may choose the printer device limits. However, if a poster, banner or billboard will be viewed from far away then it is possible to use a much lower PPI. [ citation needed ] The PPI/PPCM of a computer display is related to the size of the display in inches / centimetres and the total number of pixels in the horizontal and vertical directions. This measurement is often referred to as dots per inch , though that measurement more accurately refers to the resolution of a computer printer . For example, a 15-inch (38 cm) display whose dimensions work out to 12 inches (30.48 cm) wide by 9 inches (22.86 cm) high, capable of a maximum 1024×768 (or XGA ) pixel resolution, can display around 85 PPI, or 33.46 PPCM, in both the horizontal and vertical directions. This figure is determined by dividing the width (or height) of the display area in pixels by the width (or height) of the display area in inches. It is possible for a display to have different horizontal and vertical PPI measurements (e.g., a typical 4:3 ratio CRT monitor showing a 1280×1024 mode computer display at maximum size, which is a 5:4 ratio, not quite the same as 4:3). The apparent PPI of a monitor depends upon the screen resolution (that is, the number of pixels) and the size of the screen in use; a monitor in 800×600 mode has a lower PPI than does the same monitor in a 1024×768 or 1280×960 mode. The dot pitch of a computer display determines the absolute limit of possible pixel density. Typical circa-2000 cathode-ray tube or LCD computer displays range from 67 to 130 PPI, though desktop monitors have exceeded 200 PPI, and certain smartphone manufacturers' flagship mobile device models have been exceeding 500 PPI since 2014. In January 2008, Kopin Corporation announced a 0.44 inch (1.12 cm) SVGA LCD with a pixel density of 2272 PPI (each pixel only 11.25 μm). [ 5 ] [ 6 ] In 2011 they followed this up with a 3760-DPI 0.21-inch diagonal VGA colour display. [ 7 ] The manufacturer says they designed the LCD to be optically magnified, as in high-resolution eyewear devices. 
Holography applications demand even greater pixel density, as higher pixel density produces a larger image size and wider viewing angle. Spatial light modulators can reduce pixel pitch to 2.5 μm , giving a pixel density of 10,160 PPI. [ 8 ] Some observations indicate that the unaided human eye generally cannot differentiate detail beyond 300 PPI. [ 9 ] However, this figure depends both on the distance between viewer and image, and the viewer's visual acuity . The human eye also responds in a different way to a bright, evenly lit interactive display from how it does to prints on paper. High pixel density display technologies would make supersampled antialiasing obsolete, enable true WYSIWYG graphics and, potentially, enable a practical "paperless office" era. [ 10 ] For perspective, such a device at 15 inch (38 cm) screen size would have to display more than four Full HD screens (or WQUXGA resolution). The PPI pixel density specification of a display is also useful for calibrating a monitor with a printer. Software can use the PPI measurement to display a document at "actual size" on the screen. PPI can be calculated from the screen's diagonal size in inches and the resolution in pixels (width and height). This can be done in two steps: first, calculate the diagonal resolution in pixels as the square root of the sum of the squares of the width and height in pixels; second, divide that diagonal resolution by the diagonal size in inches. For example, a 15-inch display with a resolution of 1024×768 has a diagonal resolution of √(1024² + 768²) = 1280 pixels, giving 1280 ÷ 15 ≈ 85 PPI. These calculations may not be very precise. Frequently, screens advertised as an "X inch screen" have real physical viewable-area dimensions that differ from that nominal figure. Camera manufacturers often quote view screens in 'number of dots'. This is not the same as the number of pixels, because there are 3 'dots' per pixel – red, green and blue. For example, the Canon 50D is quoted as having 920,000 dots. [ 15 ] This translates as 307,200 pixels (×3 = 921,600 dots). Thus the screen is 640×480 pixels. [ 16 ] This must be taken into account when working out the PPI. 'Dots' and 'pixels' are often confused in reviews and specs when viewing information about digital cameras specifically. "PPI" or "pixel density" may also describe image scanner resolution. In this context, PPI is synonymous with samples per inch . In digital photography, pixel density is the number of pixels divided by the area of the sensor. A typical DSLR , circa 2013, has 1–6.2 MP/cm²; a typical compact has 20–70 MP/cm². For example, the Sony Alpha SLT-A58 has 20.1 megapixels on an APS-C sensor, giving 6.2 MP/cm², while a compact camera like the Sony Cyber-shot DSC-HX50V has 20.4 megapixels on a 1/2.3" sensor, giving 70 MP/cm². The professional camera has a lower pixel density than a compact camera because it has larger photodiodes on a far larger sensor. Smartphones use small displays, but modern smartphone displays have a larger PPI rating, such as the Samsung Galaxy S7 with a quad HD display at 577 PPI, the Fujitsu F-02G with a quad HD display at 564 PPI, [ 17 ] the LG G6 with a quad HD display at 564 PPI, or the Oppo Find 7 with 534 PPI on a 5.5-inch display (see the Android density classes below). [ 18 ] Sony 's Xperia XZ Premium has a 4K display with a pixel density of 807 PPI, the highest of any smartphone as of 2017. [ 19 ] Android supports a set of logical DPI values for controlling how large content is displayed. [ 20 ] The digital publishing industry primarily uses pixels per inch but sometimes pixels per centimeter is used, or a conversion factor is given. [ 21 ] [ 22 ] [ 23 ] The PNG image file format only allows the meter as the unit for pixel density. [ 24 ] The following table shows how pixel density is supported by popular image file formats.
The cell colors used do not indicate how feature-rich a given image file format is, but rather what density support can be expected of it. Even though image manipulation software can optionally set density for some image file formats, little other software uses density information when displaying images. Web browsers, for example, ignore any density information. As the table shows, support for density information in image file formats varies enormously, and such information should be used with great care and in a controlled context.
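The calculations in this article are simple enough to script. The snippet below is a small illustrative sketch in Python (the function names are ours, not from any standard): it implements the size-times-density formula used in the banner example above and the two-step diagonal calculation for display PPI.

```python
import math

def ppi_from_diagonal(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Two-step calculation: diagonal resolution in pixels, divided by diagonal size in inches."""
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_in

def pixels_needed(size_in: float, ppi: float) -> int:
    """Number of pixels along one dimension: physical size (inches) times pixels per inch."""
    return round(size_in * ppi)

# A 15-inch 1024x768 display works out to roughly 85 PPI, as noted above.
print(round(ppi_from_diagonal(1024, 768, 15)))   # 85

# The 1920-pixel-wide image printed 48 inches wide gives only 40 PPI.
print(round(1920 / 48))                          # 40

# Printing 48 inches wide at 300 PPI would instead require about 14,400 pixels.
print(pixels_needed(48, 300))                    # 14400
```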
https://en.wikipedia.org/wiki/Pixel_density
Pixhawk is a project responsible for creating open-source standards for the flight controller hardware that can be installed on various unmanned aerial vehicles . Additionally, any flight controller built to the open standards often includes "Pixhawk" in its name and may be referred to as such. An unmanned vehicle's flight controller, also referred to as an FC, FCB (flight control board), FMU (flight management unit), or autopilot, is a combination of hardware and software that is responsible for interfacing with a variety of onboard sensors and control systems in order to facilitate remote control or provide fully autonomous control. [ 1 ] Pixhawk-standardized flight controllers are used in academic, professional, and amateur applications, and are supported by two mainstream autopilot firmware options: PX4 and ArduPilot . Both firmware options allow for a variety of vehicle types through the Pixhawk flight controller system, including configuration options for unmanned boats, rovers, helicopters, planes, VTOLs , and multirotors . [ 2 ] [ 3 ] Many manufacturers have adopted various iterations of the Pixhawk standard, including Holybro and CubePilot. Refer to the UAV-systems hardware chart for a full list of flight controllers that have fully or partially adopted the Pixhawk standard. Pixhawk flight controllers typically feature one or two microcontrollers . In the case of two microcontrollers, a main flight management processor handles all sensor readings, PID calculations, and other resource-heavy computations, while the other handles input/output operations to external motors, switches and radio control receivers. [ 4 ] Onboard sensors include an IMU with a multi-axis accelerometer and gyroscope , a magnetometer used as a compass, and a GPS tracking unit to estimate the vehicle's location. The Pixhawk standards dictate the hardware requirements for manufacturers who are building products to be compatible with the PX4 autopilot software stack. However, due to ArduPilot's adaptation of Pixhawk flight controllers, the standard is able to ensure compatibility with ArduPilot as well. [ 5 ] The open standards consist of a main autopilot reference standard for each iteration of the Pixhawk FMU, as well as various other standards that apply to the general Pixhawk control ecosystem, such as a payload bus standard or a smart battery standard. [ 6 ] The autopilot reference standard is the main section of the Pixhawk open standards, containing all mechanical and electrical specifications for each version of the flight management unit. Currently, versions 1, 2, 3, 4, 4X, 5, 5X, 6X, 6U, and 6C autopilots have been released. [ 7 ] The mechanical design standard includes dimensional drawings of the FMU's PCB , the selected sensor types and their locations, and areas that need additional heat sinking. The electrical standard includes the pin-out of each pin in the main processing microcontroller, and which interface each pin is set to communicate with. [ 4 ] The autopilot bus standard is an extension of the autopilot reference standard specifically for providing more information about manufacturing the latest reference versions of the Pixhawk FMU, such as the 5X and 6X. The main reason for this is that these are the first flight units featuring a system on module design, where the housing of the flight controller module takes the form of a compact prism with a set of extremely high-density, 100-pin connectors between the module and the baseboard.
The baseboard allows users to plug the necessary peripheral devices (such as motors, servos, and radios) into the flight controller, while the system on module design results in an easily swappable flight computer. Additionally, this bus standard details PCB layout guidelines for the system on module along with a catalog of reference schematics for interfaces between the module and the baseboard. [ 8 ] In the connector standard, the Pixhawk project specifies using the JST GH for the vast majority of all interfaces between the flight controller board and pluggable peripherals. Just as importantly, the standard defines a convention for user-facing pin-outs for telemetry , GPS, CAN bus , SPI , power, and debug ports. External pin-out information is critical for anyone developing a vehicle with an autopilot, as improperly plugging in peripherals results in a non-functional system at best, and a dangerous environment with broken hardware at worst. Although there is a great deal of variation within the Pixhawk family in terms of available ports and port types, the standardization of pin-outs for the most popular interfaces is immensely helpful to any user working with multiple generations of Pixhawk flight controllers. Although this section serves as an accessory to the main Autopilot Reference Standard, it concisely details how the Pixhawk standards suggest making additional vehicle payloads that are compatible with a Pixhawk autopilot. [ 9 ] Although it is not strictly enforced across all vehicle payload manufacturers, this facilitates the possibility for users to implement payloads and flight controllers from different manufacturers. The smart battery standard has not been published yet, but it is set to define the interface between a smart battery and a Pixhawk FMU. Such a standard would define the communication protocols, connectors, and capabilities of a battery management system that would be used in a Pixhawk-operated vehicle. [ 10 ] Although there are a variety of radio solutions that can be interfaced with a Pixhawk flight controller, the project does have a short mechanical, electrical, and software definition for a Pixhawk-specific radio communication system. The standard anticipates connections between ground stations and radio modules to be over USB or Ethernet , while connections between local and remote radios could go over traditional radio-frequency links, or LTE . [ 11 ] In 2008, Lorenz Meier, a master's student at ETH Zurich , wanted to make an indoor drone that could use computer vision to autonomously traverse a space and avoid collisions with obstacles. However, such technology did not exist, let alone in a way that was accessible to a university student. Motivated by participating in the indoor autonomy category of a European Micro Air Vehicle competition, Lorenz leveraged the help of professor Marc Pollefeys and assembled a group of 14 teammates to spend nine tireless months creating custom flight controller hardware, firmware , and high-level software. The team, named "Pixhawk," won first place in their category in 2009, being the first competitors to successfully implement computer vision for obstacle avoidance. [ 12 ] Revisiting the project in subsequent years, Lorenz realized that there were not a lot of existing industry tools that could be used to accomplish what he and his team did. As a result, the Pixhawk team made the entire project open source. 
The ground control software that allowed the team to interface with the drone while it was in flight, the MAVLink communication protocol that was custom developed for streaming telemetry back to the ground station, the PX4 autopilot software that was responsible for controlling the drone, and the Pixhawk flight controller hardware that the autopilot ran on were all released to the public for further development. [ 13 ] Over time, the released project began to grow. MAVLink was picked up by the open-source ArduPilot autopilot software development project, and the ground control software QGroundControl was subsequently used to interface with MAVLink systems. After a couple of codebase rewrites and hardware development cycles, Lorenz and a worldwide team of open-source maintainers were able to support a manufacturer that would build a flight controller to their standards. In 2013, 3D Robotics became the first manufacturer of commercial Pixhawk flight controllers, officially lowering the barrier to entry for autonomous flight for enthusiasts and corporations worldwide. [ 13 ] Now, anyone could purchase an extremely capable autonomous flight controller, flash it with free, open-source PX4 or ArduPilot firmware, and have a university research-level drone platform. Lorenz heavily credits the open-source community with the extensive success of the Pixhawk platform, as the combined development power seemed to be greater than that of a well-resourced company. [ 12 ] In order to help standardize various developments across the project and ensure that it remained accessible and open-source, the Dronecode organization was founded in 2014. Dronecode is currently a non-profit organization under the Linux Foundation , and it has been responsible for facilitating conversations that define the Pixhawk standards. [ 10 ]
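As a practical illustration of how a Pixhawk-style autopilot is typically talked to over MAVLink, the sketch below uses the open-source pymavlink library (part of the MAVLink/ArduPilot ecosystem) to wait for a heartbeat and read a few telemetry messages. It is a minimal example and not part of the Pixhawk standards themselves; the connection string and message choices are assumptions that would need to match your actual setup (serial port, baud rate, or UDP endpoint).

```python
# Minimal MAVLink telemetry sketch using pymavlink (pip install pymavlink).
from pymavlink import mavutil

# Hypothetical endpoint: a ground station listening on UDP port 14550.
# A direct USB link would instead look like
# mavutil.mavlink_connection('/dev/ttyACM0', baud=115200).
master = mavutil.mavlink_connection('udp:127.0.0.1:14550')

# Block until the autopilot announces itself with a HEARTBEAT message.
master.wait_heartbeat()
print(f"Heartbeat from system {master.target_system}, component {master.target_component}")

# Read a handful of attitude messages (roll/pitch/yaw in radians).
for _ in range(5):
    msg = master.recv_match(type='ATTITUDE', blocking=True, timeout=5)
    if msg is None:
        break
    print(f"roll={msg.roll:.3f} pitch={msg.pitch:.3f} yaw={msg.yaw:.3f}")
```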
https://en.wikipedia.org/wiki/Pixhawk
In elementary geometry , the pizza theorem states the equality of two areas that arise when one partitions a disk in a certain way. The theorem is so called because it mimics a traditional pizza slicing technique. It shows that if two people share a pizza sliced into 8 pieces (or any multiple of 4 greater than 8), and take alternating slices, then they will each get an equal amount of pizza, irrespective of the central cutting point. Let p be an interior point of the disk, and let n be a multiple of 4 that is greater than or equal to 8. Form n sectors of the disk with equal angles by choosing an arbitrary line through p , rotating the line n/2 − 1 times by an angle of 2π/n radians, and slicing the disk on each of the resulting n/2 lines. Number the sectors consecutively in a clockwise or anti-clockwise fashion. Then the pizza theorem states ( Upton 1968 ): The sum of the areas of the odd-numbered sectors equals the sum of the areas of the even-numbered sectors. The pizza theorem was originally proposed as a challenge problem by Upton (1967) . The published solution to this problem, by Michael Goldberg, involved direct manipulation of the algebraic expressions for the areas of the sectors. Carter & Wagon (1994a) provide an alternative proof by dissection . They show how to partition the sectors into smaller pieces so that each piece in an odd-numbered sector has a congruent piece in an even-numbered sector, and vice versa. Frederickson (2012) gave a family of dissection proofs for all cases (in which the number of sectors is 8, 12, 16, ...). The requirement that the number of sectors be a multiple of four is necessary: as Don Coppersmith showed, dividing a disk into four sectors, or a number of sectors that is not divisible by four, does not in general produce equal areas. Mabry & Deiermann (2009) answered a problem of Carter & Wagon (1994b) by providing a more precise version of the theorem that determines which of the two sets of sectors has greater area in the cases that the areas are unequal. Specifically, if the number of sectors is 2 (mod 8) and no slice passes through the center of the disk, then the subset of slices containing the center has smaller area than the other subset, while if the number of sectors is 6 (mod 8) and no slice passes through the center, then the subset of slices containing the center has larger area. An odd number of sectors is not possible with straight-line cuts, and a slice through the center causes the two subsets to be equal regardless of the number of sectors. Mabry & Deiermann (2009) also observe that, when the pizza is divided evenly, then so is its crust (the crust may be interpreted as either the perimeter of the disk or the area between the boundary of the disk and a smaller circle having the same center, with the cut-point lying in the latter's interior), and since the disks bounded by both circles are partitioned evenly so is their difference. However, when the pizza is divided unevenly, the diner who gets the most pizza area actually gets the least crust. As Hirschhorn et al. (1999) note, an equal division of the pizza also leads to an equal division of its toppings, as long as each topping is distributed in a disk (not necessarily concentric with the whole pizza) that contains the central point p of the division into sectors. Hirschhorn et al.
(1999) show that a pizza sliced in the same way as the pizza theorem, into a number n of sectors with equal angles where n is divisible by four, can also be shared equally among n /4 people. For instance, a pizza divided into 12 sectors can be shared equally by three people as well as by two; however, to accommodate all five of the Hirschhorns, a pizza would need to be divided into 20 sectors. Cibulka et al. (2010) and Knauer, Micek & Ueckerdt (2011) study the game theory of choosing free slices of pizza in order to guarantee a large share, a problem posed by Dan Brown and Peter Winkler . In the version of the problem they study, a pizza is sliced radially (without the guarantee of equal-angled sectors) and two diners alternately choose pieces of pizza that are adjacent to an already-eaten sector. If the two diners both try to maximize the amount of pizza they eat, the diner who takes the first slice can guarantee a 4/9 share of the total pizza, and there exists a slicing of the pizza such that he cannot take more. The fair division or cake cutting problem considers similar games in which different players have different criteria for how they measure the size of their share; for instance, one diner may prefer to get the most pepperoni while another diner may prefer to get the most cheese. Brailov (2021) , Brailov (2022) , Ehrenborg, Morel & Readdy (2022) , and Ehrenborg, Morel & Readdy (2023) extend this result to higher dimensions, i.e. for certain arrangements of hyperplanes, the alternating sum of volumes cut out by the hyperplanes is zero. Compare with the ham sandwich theorem , a result about slicing n -dimensional objects. The two-dimensional version implies that any pizza, no matter how misshapen, can have its area and its crust length simultaneously bisected by a single carefully chosen straight-line cut. The three-dimensional version implies the existence of a plane cut that equally shares base, tomato and cheese.
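The equal-area statement also lends itself to a quick numerical sanity check. The following Python sketch is a Monte Carlo estimate written purely for illustration (it is not a proof, and the interior point p, the starting angle, and the sample count are arbitrary choices): it assigns uniformly sampled points in the unit disk to one of n equal-angle sectors cut through an off-centre point and compares the odd- and even-numbered shares.

```python
import math
import random

def pizza_shares(p, n, samples=1_000_000, theta0=0.3, seed=1):
    """Estimate the fractions of the disk's area lying in the even- and odd-numbered
    sectors when n equal-angle sectors are cut through the interior point p."""
    rng = random.Random(seed)
    counts = [0, 0]  # counts[0]: even-numbered sectors, counts[1]: odd-numbered sectors
    px, py = p
    for _ in range(samples):
        # Rejection-sample a point uniformly in the unit disk.
        while True:
            x, y = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
            if x * x + y * y <= 1.0:
                break
        # The sector index is determined by the angle of the point as seen from p,
        # measured from the first cut line (at angle theta0).
        angle = (math.atan2(y - py, x - px) - theta0) % (2.0 * math.pi)
        sector = int(angle / (2.0 * math.pi / n)) % n
        counts[sector % 2] += 1
    total = sum(counts)
    return counts[0] / total, counts[1] / total

even_share, odd_share = pizza_shares(p=(0.4, 0.25), n=8)
print(f"even sectors: {even_share:.4f}, odd sectors: {odd_share:.4f}")  # both about 0.5
```

With n = 8 and the off-centre point above, both shares come out at about 0.50 to within sampling error; repeating the experiment with n = 4 or n = 6 generally breaks the balance, consistent with Coppersmith's observation mentioned earlier.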
https://en.wikipedia.org/wiki/Pizza_theorem
Pl@ntNet is a citizen science project for automatic plant identification through photographs, based on machine learning . The project, launched in 2009, has been developed by scientists ( computer engineers and botanists ) from a consortium of French research institutes ( Institut de recherche pour le développement (IRD), Centre de coopération internationale en recherche agronomique pour le développement (CIRAD), Institut national de la recherche agronomique (INRA), and Institut national de recherche en informatique et en automatique (INRIA)) and the network Tela Botanica , with the support of Agropolis Fondation . [ 2 ] [ 3 ] An app for smartphones (and a web version) was launched in 2013, [ 4 ] which allows users to identify thousands of plant species from photographs they take. It is available in several languages. As of 2019 it had been downloaded over 10 million times, in more than 180 countries worldwide. [ 1 ] In 2019, Pl@ntNet comprised 22 projects. [ 5 ]
https://en.wikipedia.org/wiki/Pl@ntNet
The placental microbiome is the nonpathogenic , commensal bacteria claimed to be present in a healthy human placenta and is distinct from bacteria that cause infection and preterm birth in chorioamnionitis . [ 1 ] Until recently, the healthy placenta was considered to be a sterile organ, but genera and species have now been identified that reside in the basal layer . [ 2 ] [ 1 ] It should be stressed that the evidence for a placental microbiome is controversial. [ 3 ] [ 4 ] Most studies supporting the existence of a placental microbiome lack the appropriate experimental controls, and it has been found that contamination is most likely responsible for reports of a placental microbiome. [ 3 ] [ 5 ] The placental microbiome more closely resembles the oral microbiome than either the vaginal or rectal microbiome. [ 1 ] Culturable and non-culturable bacterial species have been identified in placentas obtained following normal term pregnancies. In a healthy placental microbiome, the diversity of the species and genera is extensive. [ 1 ] A change in the composition of the microbiota in the placenta is associated with excess gestational weight gain and pre-term birth. [ 10 ] The placental microbiota varies between low birth weight infants and those infants with normal birth weights. [ 13 ] While bacteria are often found in the amniotic fluid of failed pregnancies, they are also found in particulate matter present in about 1% of healthy pregnancies. [ 9 ] In non-human animals, part of the microbiome is passed on to offspring even before the offspring are born. Bacteriologists assume that the same probably holds true for humans. [ 9 ] The fact that germ-free animals can be routinely generated by sterile cesarean section provides strong experimental evidence for the sterile womb hypothesis. Future research may clarify how the microbiota of the female reproductive tract relates to pregnancy , conception, and birth . Animal studies have been used to investigate the relationship between the oral microbiota and the placental microbiota. Mice inoculated with species of oral bacteria demonstrated placental colonization soon afterwards. [ 14 ] Investigations into reproductive-associated microbiomes began around 1885 with Theodor Escherich. He wrote that meconium from the newborn was free of bacteria. This was interpreted as evidence that the uterine environment was sterile. Later investigations used sterile diapers for meconium collection. No bacteria could be cultured from the earliest samples; the amount of bacteria detected was directly proportional to the time between birth and the passage of meconium. A 1927 study demonstrated the presence of bacteria in the amniotic fluid of those that were in labor for longer than six hours. [ 15 ]
https://en.wikipedia.org/wiki/Placental_microbiome
The clothing worn by plague doctors was intended to protect them from airborne diseases during outbreaks of bubonic plague in Europe. [ 2 ] It is often seen as a symbol of death and disease. [ 3 ] Contrary to popular belief, no evidence suggests that the beak mask costume was worn during the Black Death or the Middle Ages. The costume started to appear in the 17th century when physicians studied and treated plague patients. [ 4 ] The costume consists of a leather hat, a mask with glass eyes and a beak, a stick used to remove the clothes of plague victims, gloves, a waxed linen robe, and boots. [ 2 ] The typical mask had glass openings for the eyes and a curved beak shaped like a bird's beak with straps that held the beak in front of the doctor's nose. [ 5 ] The mask had two small nose holes and was a type of respirator that contained aromatic items. [ 6 ] The beak could hold dried flowers (commonly roses and carnations ), herbs (commonly lavender and peppermint ), camphor , or a vinegar sponge, [ 7 ] [ 8 ] as well as juniper berry , ambergris , cloves , labdanum , myrrh , and storax . [ 9 ] The purpose of the mask was to keep away bad smells, such as the smell of decaying bodies. The smell regarded with the most caution was miasma , a noxious form of "bad air", which was thought to be the principal cause of the disease. [ 10 ] Doctors believed the herbs would counter the "evil" smells of the plague and prevent them from becoming infected. [ 11 ] Though these particular theories about the plague's nature were incorrect, it is likely that the costume actually did afford the wearer some protection. The garments covered the body, shielding against splattered blood, lymph, and cough droplets, and the waxed robe prevented fleas (the true carriers of the plague) from touching the body or clinging to the linen. [ 12 ] The wide-brimmed leather hat indicated their profession. [ 2 ] [ 13 ] Doctors used wooden canes in order to point out areas needing attention and to examine patients without touching them. [ 14 ] The canes were also used to keep people away [ 15 ] [ 16 ] and to remove clothing from plague victims. [ 17 ] The exact origins of the costume are unclear, as most depictions come from satirical writings and political cartoons. [ 18 ] An early reference to plague doctors wearing masks dates to 1373, when Johannes Jacobi recommended their use, though he offered no physical description of what these masks looked like. [ 19 ] The beaked plague doctor inspired costumes in Italian theater as a symbol of general horror and death, though some historians insist that the plague doctor was originally fictional and inspired the real plague doctors later. [ 20 ] Depictions of the beaked plague doctor rose in response to superstition and fear about the unknown source of the plague. [ 21 ] Often, these plague doctors were the last thing a patient would see before death; therefore, the doctors were seen as a portent of death. The garments were first described by Charles de Lorme, a physician to King Louis XIII of France , who wrote that during a 1619 plague outbreak in Paris he developed an outfit made of Moroccan goat leather , including boots, breeches, a long coat, a hat, and gloves, [ 22 ] [ 23 ] modeled after a soldier's canvas gown that went from the neck to the ankle. [ 24 ] [ 25 ] [ 26 ] The garment was impregnated with fragrant items similar to those in the mask.
De Lorme wrote that the mask had a "nose half a foot long, shaped like a beak, filled with perfume with only two holes, one on each side near the nostrils, but that can suffice to breathe and to carry along with the air one breathes the impression of the drugs enclosed further along in the beak." [ 27 ] Recent research has revealed that strong caveats must be applied with regard to De Lorme's assertions, however. [ 28 ] The Genevan physician Jean-Jacques Manget , in his 1721 work Treatise on the Plague , written just after the Great Plague of Marseille , describes the costume supposedly worn by plague doctors in Rome in 1656. The costume forms the frontispiece of Manget's 1721 work. [ 29 ] Their robes, leggings, hats, and gloves were also made of Morocco leather. [ 30 ] This costume was also worn by plague doctors during the Naples Plague of 1656 , which killed 145,000 people in Rome and 300,000 in Naples . [ 31 ] [ 32 ] In his work Tractatus de Peste , [ 33 ] published at Toulouse in May 1629, [ 34 ] Irish physician Niall Ó Glacáin references the protective clothing worn by plague doctors, which included leather coats, gauntlets and long beak-like masks filled with fumigants. [ 35 ] [ 36 ] The costume is also associated with a commedia dell'arte character called Il Medico della Peste ('The Plague Doctor'), who wears a distinctive plague doctor's mask . [ 37 ] The Venetian mask was normally white, consisting of a hollow beak and round eye-holes covered with clear glass, and is one of the distinctive masks worn during the Carnival of Venice . [ 38 ] During the COVID-19 pandemic beginning in 2020, the plague doctor costume grew in popularity due to its relevance to the pandemic, with news reports of plague doctor-costumed individuals in public places and photos of people wearing the costume appearing on social media. [ 39 ] [ 40 ]
https://en.wikipedia.org/wiki/Plague_doctor_costume
PlaidML is a portable tensor compiler . Tensor compilers bridge the gap between the universal mathematical descriptions of deep learning operations, such as convolution , and the platform- and chip-specific code needed to perform those operations with good performance. Internally, PlaidML makes use of the Tile eDSL [ 3 ] to generate OpenCL , OpenGL , LLVM , or CUDA code. It enables deep learning on devices where the available computing hardware is either not well supported or the available software stack contains only proprietary components. For example, it does not require the usage of CUDA or cuDNN on Nvidia hardware, while achieving comparable performance. [ 4 ] PlaidML supports the machine learning libraries Keras , ONNX , and nGraph . However, Keras has dropped support for multiple backends, and the latest Keras versions are not compatible with PlaidML. An integration with TensorFlow-Keras is planned as a replacement for the Keras backend. [ 5 ] In August 2018 Intel acquired Vertex.AI , a startup whose mission statement was “deep learning for every platform”. [ 6 ] Intel released PlaidML as free software under the terms of the Apache License (version 2.0) to improve compatibility with nGraph , TensorFlow , and other ecosystem software.
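For context, the usual way PlaidML was used with the older multi-backend Keras was to install it as the Keras backend before importing Keras, either by calling plaidml.keras.install_backend() or by setting the KERAS_BACKEND environment variable to plaidml.keras.backend. The snippet below is a minimal sketch of that pattern; it assumes the plaidml-keras package and a legacy Keras 2.2.x release, and it will not work with the Keras bundled into current TensorFlow.

```python
# Minimal sketch of the classic PlaidML + Keras setup:
#   pip install plaidml-keras "keras<2.3"
#   plaidml-setup   # run once to choose the OpenCL/Metal device
import plaidml.keras
plaidml.keras.install_backend()  # route Keras operations through PlaidML

import keras
from keras.layers import Dense

# A tiny dense model; PlaidML compiles its operations for the selected device.
model = keras.models.Sequential([
    Dense(64, activation="relu", input_shape=(32,)),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```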
https://en.wikipedia.org/wiki/PlaidML
Plakophilins are proteins of the cytoskeleton . [ 1 ] They are involved in regulating the adhesive activity of cadherin . [ 2 ] The three types of plakophilin proteins found in humans are PKP1 , PKP2 , and PKP3 ; all exhibit dual localization in the nucleus as well as in desmosomes . [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Plakophilin
A plan-relief ( French pronunciation: [plɑ̃ ʁəljɛf] ) is a scale model of a landscape and buildings produced for military usage, made to visualize building projects on fortifications or campaigns surrounding fortified locations. The first examples seem to have been used by the Venetian Republic and more generally by the Italian city-states of the Renaissance era. The wood turner Jakob Sandtner ( fl . 1561-1579) produced plans-relief of many Bavarian towns, whilst in France Louis XIV 's war minister Louvois initiated a collection of plans-relief of French strongholds on a 1:600 scale in 1688. This collection included 144 examples according to Vauban 's 1697 inventory and was put on show at the Palais des Tuileries . Vauban's successors expanded the collection as and when operational necessity demanded, right up until 1870, when the models were rendered obsolete by advances in the power of artillery. Some examples from this collection were destroyed, and the collection as a whole fell into disrepair until it was made a Monument historique on 22 July 1927. One hundred examples from this collection survive, of which most are on show in the Museum of Plans-reliefs at Les Invalides and some others in the Palais des Beaux-Arts de Lille , still providing a record of the towns and fortresses of France in this era. After the success of the panorama painting , a spin-off of the plan-relief called the "panstereorama" emerged, this time for popular rather than military use. It was designed to offer an approximation of a balloon ride over a given city. No known models survive. [ 1 ]
https://en.wikipedia.org/wiki/Plan-relief
The Plan of Rome is a model , more precisely a relief map , of ancient Rome in the 4th century. Made of varnished plaster (11 × 6 m), it represents three-fifths of the city at a 1/400 scale, forming a puzzle of around one hundred pieces. It was created by Paul Bigot , an architect and winner of the Grand Prix de Rome in 1900. Initially focused on the Circus Maximus , Bigot's work gradually expanded to cover an area of over 70 m 2 . It has also become a virtual reconstruction project led by the University of Caen since the 1990s. Bigot developed the model as a synthesis of the literary, archaeological, and iconographic knowledge available at the beginning of the 20th century, working on it for four decades. His project followed the tradition of the "Rome submissions," where residents of the Villa Medici presented reconstructions of architectural elements of ancient Rome. It also coincided with the profound renewal of knowledge about the city during major works accompanying its transformation into the capital of modern Italy. The Plan of Rome quickly gained recognition as both an artistic masterpiece and a valuable educational tool, with various international events showcasing it to the public. Following drawings and watercolors, reconstructions of ancient Rome took the form of models in the 20th century. From the late 20th century and early 21st century, with advances in computer technology, reconstructions have increasingly relied on virtual reality . Bigot created four plaster models before his death in 1942, only two of which remained in the early 21st century—one in Caen and the other in Brussels . The Caen model, classified as a historic monument in 1978, has been the focus of dedicated work since the mid-1990s to create a virtual counterpart accessible to the public, integrating current knowledge about ancient Rome's topography. This project saw significant acceleration during the 2010s. The most recent work, using advanced techniques and the virtual model, does not overshadow Bigot's monumental efforts, which remain a testament to early 20th-century knowledge about Rome. Bigot remains a pioneer in the topography of Rome, as well as in ancient architecture and urban planning. His work retains a certain prestige in the early 21st century, even beyond its archaeological accuracy. The virtual model, on the other hand, can evolve with new archaeological discoveries and advances in technology, enabling ongoing updates to the project. Paul Bigot was a Normandy -born architect from Orbec [ 1 ] and the brother of animal painter and sculptor Raymond Bigot [ fr ] (1872-1953). He won the Grand Prix de Rome in architecture in 1900 with his proposal for A Thermal Bath and Casino Establishment (including baths, a hotel, and a casino [ 2 ] ). His work made a strong impression, showcasing a coherent ensemble with distinctly differentiated elements. [ 3 ] The competition granted access to training at the French Academy in Rome . [ 4 ] Upon arriving in Italy, Bigot developed a passion for ancient Rome, which was still largely hidden beneath the modern city. He became so involved in excavations that he spent more time at the French School than at the Villa Medici . At the Villa, he encountered Tony Garnier , [ 5 ] who in 1901 submitted a provocative project [ 6 ] for an Industrial City, [ 7 ] described as a "manifesto for modern urbanism," [ 8 ] which sparked numerous reactions. 
The idea of creating a model to represent a structure within its environment [ 9 ] reportedly emerged from a discussion among residents, all of whom were architects, [ 10 ] that same year. [ 3 ] The school imposed a mandatory exercise for architects to evaluate their skills and progress: [ 11 ] the final-year "Rome submission." This traditionally involved a watercolor reconstruction or restoration of an ancient monument, accompanied by contemporary surveys of buildings [ 12 ] [ 13 ] —a practice dating back to 1778. [ 14 ] These works helped preserve the state of ancient buildings during the 18th and 19th centuries. [ 15 ] Restoration focused on ruins and sought plausibility, while reconstruction aimed to reproduce the ancient state of a structure. Both approaches raised the question of "the relationship between representation and the reality of the depicted object." [ 16 ] Bigot began his work with submissions similar to those of his peers. [ 17 ] In 1902, he signed a petition advocating for greater freedom in the submissions for Villa Medici residents. [ 3 ] Bigot chose to work on a disappeared structure in a densely built area. [ 18 ] His third-year submission [ 19 ] in 1903 was a reconstruction of the Circus Maximus , [ 20 ] which at the time was covered by a gasworks plant. In 1905, he submitted a board titled Research on the Boundaries of the Grand Circus, attempting a plausible reconstruction of a monument that was "almost entirely disappeared." This reconstruction was still considered plausible even at the end of the 20th century despite certain "risky" elements. [ 21 ] To support his hypotheses, Bigot conducted archaeological excavations funded by the Académie des Inscriptions et Belles-Lettres , [ 22 ] publishing the results and drawing analogies with other structures, such as the Circus of Maxentius . [ 22 ] He also produced detailed reports to bolster his funding requests [ 3 ] for this costly project. [ 22 ] He secured a grant of 5,000 francs from the Académie in 1908, [ 23 ] along with private and public funding, including 25,000 francs from the French government in 1909 in exchange for a promise to donate the model to the Sorbonne . [ 24 ] At this point, he decided to create an architectural model , and his Circus Maximus was submitted in 1908. The enthusiastic reception of this initial work, regarded from the start as "an impressive piece for the vivid image it provides of the ancient city," [ 22 ] encouraged him to embark on a project that would ultimately occupy 40 years of his life. [ 9 ] Bigot’s prolonged stay in Rome perplexed figures like Eugène Guillaume and the Académie des Beaux-Arts due to the project's costs. [ 25 ] However, he disregarded their comments, bolstered by support from Louis Duchesne , [ 12 ] the director of the French School of Rome . [ 26 ] He formed connections with prominent experts in Roman topography, including Christian Hülsen and Rodolfo Lanciani . [ 27 ] [ 28 ] [ 19 ] To his masters' surprise, Bigot first proposed a clay model of the Circus Maximus . To convey the building's scale, he then modeled the surrounding district, the city center, and eventually most of Rome [ 29 ] (excluding the Baths of Diocletian and the Vatican ) with a visionary approach that might no longer be evident today. By 1906, he had obtained subsidies to complete his work and was in Rome in 1907-1908 to prepare for the 1911 exhibition. [ 20 ] By 1909, he had the idea of creating a bronze model. [ 30 ] Bigot's stay in Rome extended to seven years. 
[ 7 ] According to Jérôme Carcopino , Bigot "infinitely delayed his departure for Paris, fearing that an error or omission would betray the fidelity of his model." [ 31 ] Carcopino also remarked that "while promotions came and went, Bigot remained, eyes fixed on his plan, unwaveringly faithful to his dream." [ 32 ] Ultimately, he spent eleven years in Rome, living on scholarships [ 33 ] before returning to France in 1912. [ 34 ] Bigot's model took the form of a relief plan . This technique had been developed extensively in France from 1668 under the initiative of Louis XIV 's minister Louvois , primarily for military purposes, representing the kingdom’s major fortified places at a scale of 1/600. [ 15 ] Two years earlier, Jean-Baptiste Colbert had founded the French Academy in Rome to "help artists draw inspiration from Roman models." [ 35 ] Bigot's work continued a long tradition of reconstructing the ancient city, beginning with Flavio Biondo 's Roma Instaurata [ fr ] in 1446, followed by Pirro Ligorio in the 16th century, Giovan Battista Nolli in 1748, [ 36 ] and Luigi Canina in the first half of the 19th century. [ 37 ] The first modern models of Rome appeared at the end of the 18th century, made of cork or plaster, [ 38 ] [ 39 ] intended for wealthy tourists or collectors like Louis-François-Sébastien Fauvel and Louis-François Cassas . [ 40 ] Notably, they included a 3-meter-long Colosseum and a Pantheon displayed at the Johannisburg Castle . [ 41 ] A partial relief model of Rome was made between 1850 and 1853 for military purposes following the siege of the city. This model realistically depicted ancient monuments as part of the backdrop for events, [ 42 ] though they were not its primary focus. From 1860, scholarship recipients were divided between scientific ambitions and a desire for freedom from the Académie des Beaux-Arts' strict rules requiring depictions of the current state and a restored version of a monument. [ 43 ] The inclusion of "atmospheric reconstructions" as annexes to their submissions allowed architects to break free and present realistic scenes, which were popular at the time, inspired by artists like Théodore Chassériau , Jean-Léon Gérôme , and Lawrence Alma-Tadema . [ 44 ] Paul Bigot belongs to the "tradition of architect-archaeologists," [ 45 ] and his work stands at the crossroads of the envois de Rome from the 19th and 20th centuries, situated between studies of monument complexes and those focused on colonial cities such as Selinunte , Priene , or Pompeii . [ 20 ] These studies of ancient newly planned cities are considered "one of the sources of modern urban planning." [ 46 ] His plan aligns with the envois tradition aimed at presenting "a complete image of a monument or site," including representations of the city, buildings, and dwellings. [ 47 ] The envois of this period focused on sanctuaries, thermal complexes, small towns, palaces, villas, or neighborhoods. [ 48 ] Architects became increasingly interested in the urban fabric, as this era also marked the birth of urbanism . [ 49 ] For the residents of the Villa Medici, their Roman stay was a "rediscovery of planned urbanism," distinguishing between two types of cities: newly founded ones and those "shaped by the slow work of time." [ 50 ] They were also part of a movement of interest in the history and archaeology of ancient Rome, which followed the city's transformation into the capital of Italy after 1870 [ 51 ] and the large-scale projects undertaken to make it the capital of a modern country. 
[ 52 ] These infrastructure works isolated the ancient city from the developing suburbs. [ 53 ] This period marked the "historical and archaeological rediscovery of the ancient city." [ 54 ] It was no longer seen as a romantic vision of a city in ruins [ 55 ] but rather as one threatened by "banal modernity." [ 56 ] Ancient monuments became "symbols of patriotic grandeur." [ 57 ] Urban planning debates emerged to beautify the new Italian capital, continuing the tradition of ancient urban design concepts. [ 58 ] This period also saw significant contributions from Rodolfo Lanciani , who published numerous popular works in French and English, [ 54 ] as well as, from 1893 to 1901, the more scientific Forma Urbis Romae , a "turning point in Roman cartography." [ 59 ] The ancient Forma Urbis , discovered behind the Forum of Peace , [ 60 ] continued to be studied and published into the late 20th century. [ 61 ] Created under Emperor Septimius Severus in the 3rd century, it was a marble map of Rome at a 1:240 scale. Only about 15% [ 19 ] (1,019 fragments) have survived. Additional lost fragments, known through sketches, remain valuable for research. [ 62 ] The Severan map, preserved at the Capitoline Museum, became the subject of studies and debates at the end of the 19th century and the beginning of the 20th century. [ 19 ] Despite acknowledged errors, Lanciani's 1:1000 map, which incorporated the evidence of the Severan plan, remained a fundamental source for understanding vanished or poorly documented monuments. [ 28 ] Between 1898 and 1914, a preserved space known as the Passeggiata Archeologica was created. [ 59 ] Giuseppe Marcelliani [ fr ] created a terracotta model of Rome at a 1/100 scale around 1904, [ 63 ] continuing until 1910. [ 64 ] He prioritized "the aesthetic and monumental aspects," [ 65 ] presenting "a succession of monumental complexes." [ 66 ] Marcelliani's approach varied, with some buildings treated "as blocks" and others designed through assembly. Overall, approximations were noted in the representations of monument complexes. [ 67 ] His work, focused on the monumental center, [ 68 ] was highly successful and circulated through postcards. [ 64 ] It was exhibited until 1923 with an entry fee [ 69 ] and was in poor condition by the early 21st century, although partially displayed. [ 70 ] Interest in ancient Rome with ideological objectives reached its peak between 1911 and 1937, shared by both the Italian monarchy and the fascist government , [ 71 ] as part of a "political reclamation of the image of Rome and its empire." [ 72 ] Mussolini , by promoting Romanità, sought to bolster his legitimacy. [ 73 ] However, Mussolini's urban planning projects devastated archaeological remains. [ 74 ] Bigot could not be suspected of adhering to these ideologies, [ 75 ] as he was a pacifist and a supporter of Aristide Briand . [ 76 ] Paul Bigot served as an aviation observer during World War I and, after the conflict, participated in the reconstruction [ fr ] of war-torn towns in northern and eastern France. [ 34 ] Working both on-site and in his Parisian workshop under a dome of the Grand Palais , [ 77 ] he created a masterpiece of miniaturism and precision, which he continued to modify throughout his life.
[ 78 ] He made one final trip to Rome in 1934, [ 79 ] supported by a new subsidy, [ 78 ] to stay informed about recent discoveries related to major projects undertaken in the 1930s, particularly the construction of the Via dei Fori Imperiali , inaugurated on April 9, 1932, between the forums of Caesar and Augustus. He also studied developments at Largo Argentina (1926-1932), the Theatre of Marcellus , the Mausoleum of Augustus , and the Ara Pacis . [ 80 ] The war complicated access to the latest information on these projects. [ 81 ] Bigot's work had "significant resonance" [ 35 ] and was considered "a revelation." [ 82 ] Its success was immediate, with its "artistic, educational, and scientific value" recognized right away, [ 37 ] garnering press attention. [ 20 ] In 1911, the architect exhibited his model at the Mostra archeologica at the Baths of Diocletian , which celebrated the 50th anniversary of the proclamation of the Kingdom of Italy on March 17, 1861. [ 71 ] The exhibition included casts of Roman works from various regions of the Roman Empire. [ 75 ] Bigot's model, covering 50 square meters at the time, [ 83 ] was displayed in a room named after him, [ 84 ] an "exceptional tribute" [ 85 ] (the current "Planetario Room" [ 86 ] ). The organizers recommended that visitors make a point of seeing this room, warning that otherwise, they would gain only "an incomplete idea of the exhibition itself." [ 24 ] Bigot later described this first version of the Plan of Rome as "quite rough." [ 1 ] The exhibition also featured casts of works from across the Empire 's territories, [ 87 ] aiming to express "the revival of national unity, rediscovered through an ancient community of origins." [ 88 ] Bigot received the Medal of Honor at the 1913 Salon d'Architecture of French Artists. [ 89 ] Forced to leave the space loaned by the Italian government, the model was installed at the Grand Palais on April 15, under the glass roof, which became his workshop until his death, [ 24 ] where a version of his model was placed on the fourth floor. [ 90 ] That same year, he was made a Knight of the Legion of Honor . [ 82 ] Monsignor Duchesne proposed transforming the plan into metal. [ 91 ] Georges Clemenceau had a law passed unanimously [ 24 ] to fund the creation of the plan in bronze, with a subsidy of 80,000 francs. However, the outbreak of World War I halted the project. [ 77 ] Clemenceau and Le Figaro launched a subscription campaign, but the war and Bigot's meticulous nature interrupted the endeavor. [ 82 ] After World War I, Bigot received funding from the Rockefeller Foundation to complete his work and produce two additional copies, one for the University of Pennsylvania and the other for the Sorbonne . [ 92 ] The bronze plan project resumed between 1923 and 1925. [ 91 ] [ 24 ] In 1925, Bigot became a professor at the École des Beaux-Arts , and in 1931, he was inducted into the Institut de France [ 1 ] as a member of the Académie des Beaux-Arts , taking Henri Deglane 's seat. As such, he participated in selecting the Prix de Rome winners and managing submissions. [ 93 ] [ 94 ] Throughout his life, for over forty years, [ 89 ] Bigot’s passion was updating his model. As a result, the only two surviving versions — the monochromatic Caen plan, which was his original working model, [ 37 ] and the colored version at the Royal Museums of Art and History in Brussels — are slightly different. Bigot's renown lasted until World War II . 
The French government commissioned a copy for the Sorbonne , and the United States requested one for the University of Pennsylvania in Philadelphia. The model underwent numerous modifications, especially in 1937, [ 94 ] the year marking the commemoration of the bimillennium of Augustus ' birth, [ 71 ] although the details of many updates remain unknown. [ 95 ] The version presented at the 1937 Universal Exposition was described as "the final version of P. Bigot's relief." That same year, the Il Plastico model by Gismondi was exhibited in Rome [ 79 ] at the Mostra Augustea della Romanità , a show initiated by art historian and fascist party deputy Giulio Quirino Giglioli . [ 85 ] The exhibition was a critical and public success [ 96 ] and represented "the apotheosis of the fascist regime as the heir to Rome." [ 88 ] Another exhibition dedicated to the regime opened on the same day as the one on Antiquity. [ 97 ] Appointed a professor at the École des Beaux-Arts in Paris in 1923 and head of a studio two years later, [ 24 ] Bigot, who was also the architect for French historical buildings [ fr ] , collaborated with the Christofle silversmith company to create a bronze casting of the Plan of Rome . However, the project remained incomplete due to the author's perfectionism, as he constantly revised it based on archaeological discoveries. In 1933, the Rector of the Paris Academy granted him 150,000 francs to continue work on the plan. [ 98 ] In 1937, a version of the Plan of Rome was presented at the Universal Exposition at the Palais de Chaillot , [ 1 ] [ 94 ] either the Caen or the Brussels version. [ 99 ] Bigot also designed the exposition's main gate at Place de la Concorde . [ 7 ] André Piganiol provided some assistance with the updates a few years before Bigot's death. [ 100 ] The last modifications to the relief occurred between 1937 and 1942. [ 101 ] The meticulous updating of the plan made it "the visual expression of ongoing archaeological research." [ 102 ] The war hindered the flow of information in the late 1930s and early 1940s. [ 31 ] Bigot passed away unexpectedly on June 8, 1942, [ 45 ] while still working on the bronze plan. [ 1 ] In addition to updating his model, he was also designing monumental architectural projects. [ 103 ] According to François Hinard , Bigot relied on his disciples, Henry Bernard , a future key figure in the reconstruction of Caen [ fr ] , and Paul-Jacques Grillo [ fr ] , to continue updating the plan [ 104 ] and "complete and continue the changes interrupted by the declaration of war." [ 105 ] Bigot's wish that his work be kept up to date after World War II and after his death was never fulfilled. Upon returning from captivity in Germany, Henry Bernard found two complete models in the Grand Palais rotunda, [ 92 ] one of which was donated to the Royal Museums of Art and History in Brussels and the other to the University of Caen . [ 77 ] The donation to the Norman university was contingent upon the allocation of a specific space under the large amphitheaters of the law and literature faculties, a condition accepted by deans Yver, Musset, and de Boüard [ fr ] . [ 106 ] There are four known versions of Bigot's plaster plan, two of which have disappeared. A fifth version, in bronze, was only partially completed. The two preserved versions have "noticeably different appearances due to differences in plaster treatment."
[ 45 ] The Caen model is monochromatic, [ 108 ] with the plaster "tinted and waxed" [ 109 ] in ochre, giving the city "a homogeneous color reminiscent of its appearance at sunset." [ 45 ] Paul Bigot designed a lighting system for the model, using precisely positioned projectors [ 110 ] fitted with colored lenses. This system aimed to reproduce the natural lighting of the city. [ 111 ] [ 112 ] According to Paola Ciancio-Rossetto, the plan preserved at the University of Caen is the one from the 1937 Universal Exposition [ 99 ] [ 113 ] and is the most up-to-date surviving version. [ 114 ] Royo, [ 103 ] Fleury, and Madeleine believe it to be the original model belonging to the architect, [ 37 ] whose updates ceased with his death. [ 108 ] The model, found in Bigot's workshop at the Grand Palais, [ 115 ] was donated to the University of Caen in 1956. [ 35 ] Student and Legatee of Paul Bigot, Henry Bernard , [ 116 ] architect of the post-1945 reconstruction of the city and particularly of the University of Caen, stored the model at the University of Caen in a specially designed room in the basement of the Law building, [ 37 ] which was "almost fortified," according to Hinard. [ 117 ] This was done in agreement with Rector Pierre Daure [ fr ] and the university council. [ 77 ] An Association of Friends of the Plan of Rome was established. The installation, inaugurated on April 28, 1958, [ 108 ] included a sound and light show with illumination of the various monuments represented, along with explanations provided by Hellenist Henri Van Effenterre [ fr ] and historian Pierre Vidal-Naquet . [ 27 ] The explanations were presented by major sections, [ 118 ] "sector by sector." [ 119 ] Even before 1968, the model fell into obscurity. [ 27 ] Its "long descent into oblivion" included the deterioration of the metal structure, theft, damage, [ 108 ] and the dismantling of the lighting system. Several small elements of the model were stolen during this period, including the Arch of Constantine , [ 120 ] the Meta Sudans , [ 121 ] the Colossus of Nero , [ 122 ] small temples (such as three from the sacred area of Largo di Torre Argentina ), and equestrian statues originally represented by Bigot. The model was listed as a historic monument on June 12, 1978, [ 123 ] or in 1987, [ 124 ] as an object of historical importance, granting it legal protection against any modifications. It took more than ten years to rediscover and restore the piece, [ 108 ] which had "come dangerously close to disaster." [ 47 ] François Hinard rediscovered the work after his appointment as a professor of Roman history in 1983 and raised public and official awareness, reactivating the Association of Friends of the Plan of Rome. [ 124 ] In 1987, part of the ceiling collapsed due to household water infiltration in the room, [ 108 ] damaging the model and reigniting interest in the piece. Élisabeth Deniaux devoted specific teaching to it. [ 125 ] A plan to move the model was initially considered but ultimately abandoned, and its preservation at the university was confirmed in 1991. [ 124 ] Since 1995-1996, the model has been housed in the new Maison de la Recherche en Sciences Humaines ( Campus 1 [ fr ] ), [ 126 ] [ 107 ] whose construction began in 1993. [ 124 ] Bigot's work is now displayed on a rotating platform and features lighting and camera systems [ 5 ] as part of an exhibition dedicated to ancient Rome. [ 127 ] The 11-meter diameter rotating platform [ 128 ] is red, reminiscent of imperial purple . 
[ 124 ] Before its installation, the model underwent significant restoration [ 37 ] in the workshop of Philippe Langot, [ 124 ] a conservator-restorer in Semur-en-Auxois . The model had been dirty, cracked, and affected by condensation. [ 129 ] [ 107 ] Following the restoration, some fragments remained unattached, possibly from earlier work before the 1958 inauguration. [ 130 ] The restoration marked a favorable period for highlighting Paul Bigot's work and "opening new research perspectives." The first symposium was held in 1991 under François Hinard 's direction. [ 37 ] The virtual restitution project developed as Bigot's model found its permanent location. [ 131 ] Bigot's work also made an impact in Belgium, as evidenced by the diploma awarded to him by the Royal Academy of Sciences, Letters, and Fine Arts in 1933. The model located in Brussels was donated by Henry Bernard to Henry Lacoste , [ 99 ] an excavator at Apamea , [ 132 ] who, starting in 1955, sought to establish an architecture museum in Brussels. The Plan of Rome at his disposal, a casting dated 1937 and exhibited at the Palais de Chaillot according to Royo, was modified by Bigot until his death [ 115 ] and intended solely for educational purposes. [ 133 ] This version comprises 98 elements [ 134 ] and measures 11 by 4 meters. [ 60 ] The project to acquire a model dates back to February 1938, initiated by students from the Royal Academy of Fine Arts in Brussels after a lecture given by Paul Bigot. [ 135 ] Postponed by World War II , it was realized through the execution of Bigot's testament and Henry Bernard's donation. The model was installed on July 1, 1950, [ 37 ] at the Cinquantenaire Museum under Henry Lacoste's leadership, [ 136 ] with the generous and enthusiastic support of students from the Academy. [ 137 ] The students provided Lacoste with the means to acquire the model and ensured its transport and assembly. According to Fleury, the model was delivered to the Cinquantenaire Museum in 1936. [ 35 ] The Rome Room was envisioned by its promoter as the embryo of a space dedicated to architecture and urban planning, [ 138 ] intended to provide young architects and researchers with various resources: models, plans, surveys, and wash drawings. The presentation of the model emphasized the urban planning projects that have shaped the entire history of the city of Rome. [ 139 ] Panels focused on themes such as the city of Rome during the Middle Ages or the Renaissance , as well as specific topics (gates, urban networks, aqueducts, etc.), accompanied the model. [ 140 ] [ 141 ] A slideshow was also featured. [ 142 ] Lacoste viewed the urban project of Rome as linked to two primary axes, and the plan served as a tool to illustrate these urban theories. [ 143 ] The Brussels plan, "delivered white," [ 45 ] was colored by students shortly before the 1950 inauguration [ 135 ] [ 137 ] to "add realism to the theoretical and scientific vision." [ 144 ] The colors chosen by Henry Lacoste's students [ 111 ] gave the Brussels model a "lifelike impression" inspired by the hues of eternal Rome. [ 112 ] [ 44 ] From 1950 to 1976, visits were conducted by a guide who used a cane with a tip, which caused damage to the model. In the meantime, in 1966, the model was relocated [ 145 ] to a new wing of the museum. It underwent its first major restoration in 1976-1977, including repainting and a new exhibition layout. Visitors could view the model from a balcony situated 3.50 meters high and walk around it. 
In the early 1990s, a sophisticated lighting system was installed to highlight 80 buildings. [ 134 ] This presentation inspired the staging of the Caen model after its relocation to its new destination in the 1990s. [ 124 ] The model serves as an educational tool for detailed presentations and for exhibitions aimed at the general public outside school hours. Documents on the buildings are projected simultaneously with their illumination on the model. [ 146 ] The model is no longer used as a teaching aid for urban planning, as Lacoste had intended. [ 147 ] Françoise Lecocq notes the presence of Astérix and Obélix figurines in the Grand Circus arena. [ 148 ] The model, which has been continuously exhibited and maintained since 1950, underwent renovation in 2003-2004 during the "From Pompeii to Rome" exhibition. [ 82 ] The model, described as "one of the key elements of the classical antiquities section," underwent a thorough cleaning in October 2018. A photogrammetry project was planned for the end of 2018, followed by an eight-month restoration period. The restoration of the sound and light system, including upgrades to the lighting, projections, and integration of new technologies, was estimated at €200,000. The system was expected to be operational by the start of the 2019 school year [ 149 ] but was only completed by early 2020. The copies of the Plan of Rome kept in the United States and Paris have disappeared: having become dust traps for a culture deemed obsolete, one was shredded, and the other was thrown away. [ 150 ] The Paris model, originally intended for the Sorbonne , may have been the original version of the model exhibited in Rome in 1911 and Paris in 1913, although Royo believes this model disappeared as it was used as the basis for the 1933 model. [ 115 ] This first version no longer exists, except in photographs and a plan. [ 151 ] The model was kept at the Institute of Art and Archaeology , whose building was designed by Bigot between 1925 and 1928 [ 152 ] and built in 1930; [ 24 ] this structure "constitutes his major architectural achievement." [ 153 ] Installed on the fourth [ 1 ] and top floor of the building in 1933, [ 91 ] it served as a teaching tool for topography, architecture, and urban planning [ 93 ] of ancient Rome. The model was still being worked on in 1941. [ 91 ] It suffered damage at the end of World War II [ 154 ] because it was located under the building’s skylights, [ 77 ] which were shattered during the bombings of Paris. [ 7 ] The Sorbonne model was destroyed during the events of May 68 [ 60 ] [ 91 ] [ 47 ] as it was thrown out to clear the room. [ 150 ] The elements that survived the occupation, known through photographs, [ 155 ] were destroyed during the restructuring of the Institute. [ 156 ] Other elements present in Bigot’s workshop under the dome of the Grand Palais were destroyed at an unknown time. [ 47 ] Bigot "is sparing with information" about this example of his work, [ 92 ] and the available information is scarce. [ 47 ] The enthusiasm for Bigot's work presented in Rome in 1911 spread across the Atlantic. Petitions were sent, one in 1912 to Andrew Carnegie and another to John Pierpont Morgan , leading to a 1913 news article aimed at acquiring a copy. [ 92 ] The exact date of creation for this model is not known precisely. According to Royo, it was either created before 1914 [ 99 ] or at the end of the 1920s when the author sought to fund his bronze plan. 
[ 92 ] The model seems to have been painted by American artists to enhance the realism of the work. [ 157 ] [ 92 ] [ 158 ] The destruction, the exact date of which is not known, seems to have been deliberate. [ 159 ] Fragments of the model were thought to still exist in Philadelphia in the early 2000s. [ 92 ] The Plan of Rome caused a sensation in 1911 in Rome and then during the 1913 Paris exhibition at the Grand Palais . Georges Clemenceau initiated a law in the National Assembly [ 35 ] that allocated a sum of 80,000 francs to transform the plaster plan into a bronze work, "to make it indestructible." [ 7 ] The very fragile plaster risked disappearing, [ 160 ] and the aim of the transformation was to make "Rome […] eternal through its bronze image." [ 105 ] The transformation of a fragile work into bronze did not take into account the evolution of archaeological data. [ 161 ] Bigot later considered the transformation of his work into metal premature. [ 98 ] A first attempt to cast the relief in bronze was made by Bertrand, then by Christofle, [ 24 ] but it remained unfinished due to World War I [ 162 ] and the continuous demands of the work's author. [ 82 ] A new operation, which generated "major financial and scientific problems," [ 34 ] was carried out by Christofle between 1923 and 1927. [ 99 ] Bigot exasperated Christofle , reportedly overwhelmed by the scale of the task, with incessant requests for modifications, [ 160 ] on which he made payment of the installments conditional. However, Bigot apologized for the changes requested, emphasizing the progress made in the knowledge of the city's topography. [ 105 ] Work continued in 1929, but the plan was not delivered, as Bigot refused to accept it due to new changes that still needed to be made. The elements were finally completed and delivered to the Institute of Art and Archaeology [ 99 ] in 42 crates on November 18, 1932. [ 24 ] Bigot wanted to continue the project but, faced with the company's refusal, postponed it. [ 163 ] The funds allocated for the bronze plan were used to modify the plaster model. [ 164 ] In his 1942 work, Paul Bigot announced a subscription to finish the bronze plan, a project left unrealized at his death. [ 105 ] According to Royo, the architect was aware of the fragility of his work and "intended to protect both time and Rome and his work." [ 104 ] The crates containing the unfinished bronze plan were rediscovered by Hinard in 1986 in the cellar of the building on Rue Michelet, [ 77 ] [ 1 ] where they had been stored without interruption since 1932. [ 165 ] The plan was unboxed in 1989. [ 93 ] Hinard observed that most of the crates had been opened, that several elements had suffered deformation or oxidation , and that it was impossible to assess potential losses. [ 156 ] Some of the crates containing the elements of the bronze plan were kept for a time in the basement of the Law Faculty building at the University of Caen before being returned to the Paris premises. A temporary reassembly of the various parts took place in the mid-1990s, along with photographic documentation. The bronze elements were exhibited in the newly opened Maison de la Recherche en Sciences Humaines at the University of Caen, placed on the purple-colored platform reserved for the work, while the university's own plaster model was away for restoration. [ 124 ] [ 156 ] The Bigot model thus regained its "cultural showcase role at the university."
[ 166 ] The bronze plan was not accessible at the end of the 2010s despite some sporadic exhibitions of several bronze plaques. However, the university's long-term goal remains to restore it. [ 156 ] Despite the works he published, Bigot rarely discussed his method and sources. [ 167 ] The model, in its final version, occupies an area of about 75 m² [ 89 ] and does not correspond to the idea of a small, portable model. [ 168 ] This size "does not offer any privileged perspective from which to... dominate it," posing a problem for the viewer, who can apprehend the object as a whole only from above and see its details only up close. [ 169 ] There is no privileged viewpoint, as the model can be viewed from all sides. [ 170 ] The model consists of about 100 fragments, [ 108 ] 102 elements in the Caen version, [ 107 ] most of them organized around an important building, [ 171 ] made of plaster on a wood [ 172 ] or metal frame. [ 91 ] There are differences between versions of his work that have persisted over time. [ 173 ] The organization into elements facilitates updates and castings of the work. [ 107 ] Bigot gathered Lanciani's plates and annotated them to give himself an overall base plan from which to work in his studio. [ 174 ] The architect also based his work on comparisons with photographs of marble fragments, which were enlarged and then repositioned within the Italian archaeologist's plan. [ 175 ] The relief model is made to be viewed from above, as it is displayed in Brussels or Caen. [ 172 ] The relief of the seven hills of Rome appears flattened, the implied viewpoint being 300 meters above sea level. [ 176 ] According to Bigot, "When we look at the relief, [...] the roughness fades as we move away." [ 157 ] Aerial photographs were first used in archaeology around the turn of the 20th century. [ 177 ] Bigot's work is very rigorous, and he synthesizes knowledge while also relying on intuition. [ 33 ] His intuitions were sometimes confirmed by excavations, such as those of the temple of the nymphs , carried out more than a quarter-century after he had positioned the building on his model. [ 178 ] The Plan of Rome is very precise, as a satellite image of the city confirmed the accurate location of the buildings and streets. [ 5 ] [ 107 ] The buildings still extant in 1992 "are rigorously in their place." [ 179 ] Bigot rarely explains his choices, [ 106 ] which makes understanding the object difficult. He writes little about his sources and method, [ 180 ] though he wrote more than Gismondi. [ 180 ] Bigot was driven by the "concern... not to omit anything from the complexity of the urban phenomenon." [ 181 ] He wanted to "gather all archaeological knowledge," [ 182 ] and thus conducted scrupulous work over four decades. In his submission from Rome, Bigot provides neither sources nor bibliography, because he did not present a report annexed to his work. The lack of both primary and secondary sources [ 183 ] makes apprehending the object difficult. [ 184 ] His 1942 work, Rome Antique au IVe siècle apr. J.-C. , [ 185 ] provides some elements. Bigot presents a "rapid and mostly allusive inventory"; [ 183 ] his work was "something other than the academic result of a historical compilation." [ 186 ] Bigot knew and used classical literary sources, even though he did not give references. [ 187 ] He also knew "artistic and numismatic sources". [ 188 ] [ 189 ] This use of iconographic, artistic, and archaeological sources, moving beyond solely literary sources, is one of the original features of his work.
[ 190 ] He was also familiar with literature on the topography of Rome written during the Renaissance and the 18th century, in particular. He modeled his work after the studies of Piranesi , and the submissions from Rome were also an important source, including Abel Blouet ’s work on the Baths of Caracalla , even though it is unclear why some submissions were excluded from his corpus. [ 191 ] Lanciani's work on the Forma Urbis — the marble map was represented by Bigot in its original location on the wall of the Forum of Peace on his model [ 192 ] [ 193 ] — was fundamental to his approach, [ 194 ] which later inspired Gismondi. The marble map of the Severans is a source for work on the Villa Medici from the mid-19th century. [ 191 ] Lanciani published 46 plates showing the remains of Rome, [ 195 ] and Bigot used half of the plan [ 194 ] at a larger scale, modifying it. [ 81 ] The work on the Severan map posed interpretative problems. [ 196 ] Bigot rarely cites ancient or contemporary sources, even though he "truly drew from all the archaeological and historical literature" present at the French School in Rome, [ 197 ] [ 3 ] unlike other submissions that provided lists of sources. He was aided by his peers and by Duchesne, the director of the school. [ 198 ] He also kept up with current events, which allowed him to make subsequent modifications to his model, though he likely did not leave archives. [ 199 ] His colleagues at the French School assisted him, including Albert Grenier , Jérôme Carcopino , Eugène Albertini , and André Piganiol . [ 200 ] His bibliography seems to date from the period 1904–1911, as well as the changes linked to the refurbishments of the 1930s. [ 80 ] Balty considers that the work "reveals documentation that is already somewhat outdated." [ 31 ] His analysis allowed him to modify certain attributions of fragments of the Severan marble. [ 201 ] Even though Bigot closely followed Lanciani's conclusions for his study of the Forma Urbis , he also appealed to the hypotheses of Hülsen and Gatti. [ 202 ] Bigot followed ongoing archaeological debates and unsolved questions, adhering to certain hypotheses, such as the location of the Actian Apollo temple. After conducting research and discussions with scholars like Jérôme Carcopino and Italo Gismondi , Bigot revised his proposals on his model. His work was, therefore, the product of an "intellectual ferment" [ 203 ] and was not a mere "three-dimensional translation of monuments reconstructed by others." [ 202 ] The Circus Maximus (or Great Circus) fascinated him "for its historical and social significance," [ 18 ] from Romulus to Constantine II , [ 171 ] and it was his "beloved child" that dominated the model, acting as a "kind of guiding thread." [ 113 ] The circus is seen as "the true center of the relief" [ 204 ] and a symbol of the city, "responsible for the birth of the relief." [ 76 ] [ 205 ] The subject was unprecedented at the beginning of the 20th century; [ 11 ] Bigot created a longitudinal reconstruction to define its limits and provided a height for the bleachers, along with a cross-section of the cavea and carceres . His work achieved a seating capacity of 159,000 spectators. [ 206 ] Bigot wanted to represent the Circus Maximus in the relief of the Aventine and Palatine hills to highlight the extraordinary dimensions of this structure. 
[ 13 ] According to Ciancio-Rossetto, the relief model arose from Bigot's need to determine the boundaries of the bleachers and the building's relationship with the city's topography, at a time when knowledge was increasing significantly. [ 63 ] The relief model, a very new approach at the time, [ 207 ] was then extended to set his initial creation in context and also in response to the reactions to his work. [ 208 ] This first work generated great enthusiasm because it was "more expressive than any drawing," according to Bigot, [ 25 ] though he himself had concerns about the project. [ 13 ] Although the Italians considered the exploration of their capital's underground "a national issue," [ 209 ] Bigot conducted excavations on this building between November 1904 and July 1906. He published two articles on the subject in 1908 [ 210 ] to support his argument, particularly regarding the limits of the structure. [ 211 ] The academic community received these works with skepticism, [ 212 ] but they constituted an important step in the research on this structure, despite errors such as an extra tier of bleachers and the placement of the carceres . [ 207 ] Bigot "contributed... to the construction of knowledge about ancient Rome," and his scientific contributions are significant. [ 107 ] Bigot chose to represent the city during the time of Constantine I , at "a moment in the history of Roman urbanism and Roman topography," [ 213 ] before the creation of Constantinople and the proclamation of Christianity as the state religion. [ 214 ] He selected the same period as Giuseppe Gatteschi [ fr ] (1862-1935), whose restoration drawings of ancient Rome, juxtaposed with views of the sites' contemporary state, took thirty years to produce and relied on sources, [ 215 ] some of which dated back to the early 19th century. [ 216 ] The early 4th century corresponds to the "monumental peak of ancient Rome" and also to the last truly ancient state in the strict sense, [ 37 ] the "completion of a work" [ 217 ] at "a level never previously equaled and that will not be surpassed later." [ 157 ] This period allows for the representation of all the city's buildings at the time of "its full blossoming," [ 218 ] and the plan is "the synthesis of discoveries concerning the history of Roman monuments still standing in the 4th century and known in the early 20th century." [ 213 ] This period was also chosen for other representations, including Gismondi's model. [ 219 ] Bigot focused on the monumental center of Rome, not the periphery. [ 220 ] He represented the center, [ 87 ] including part of Trastevere , but excluded the barracks of the Praetorian Guard and the Baths of Diocletian due to their distance from the center, elements that are included in Gismondi's model. He stopped because of the surface area of his work and lack of space, [ 173 ] but also because of the ongoing work required for updates. [ 194 ] He also excluded the port of Rome and the horrea , [ 214 ] as these were not of interest to him. He ceased extending his work out of fatigue and because, in his words, the areas still to be represented were approaching a zone of gardens, while his model had already become vast. [ 221 ] The "burdensome monumentality symbolically transposes the grandeur" of the city. [ 169 ] Given that his work evokes Rome in the 4th century, it is surprising that Bigot excluded areas containing sites of Christian worship such as St. Peter's Basilica and the Lateran .
[ 194 ] [ 222 ] The representations of the most important buildings in the city are the result of a subjective approach that was not explicitly explained and whose genesis is difficult to trace. [ 223 ] He had to make decisions for the reconstructions, focusing on filling in the gaps, and in doing so, made "more or less conscious" mistakes. [ 224 ] Despite similarities to the submissions from resident architects who chose watercolors, Bigot showed a concern for objectivity and the updating of his work. [ 225 ] With his readings, the architect was led to make choices for his reconstruction. [ 190 ] Starting from the Forma Urbis , he interpreted and extrapolated the elements provided by Lanciani, [ 194 ] using them to justify his own choices. [ 192 ] He also sought elements on the ground to confirm or disprove his hypotheses, drawing conclusions from them. [ 19 ] The relief plan was initially perceived as final, but it was with new discoveries that Bigot realized the need to update it. [ 164 ] His desire to produce the "counterpart of reality" would be the reason for the numerous changes the architect undertook. [ 77 ] Throughout his life, he reworked his model as archaeological discoveries allowed him to clarify previously unknown areas or change the identifications he had proposed. [ 226 ] The desire for updating is responsible for the failure to transform the plan into bronze, as the Christofle company was "exasperated by Paul Bigot's meticulous perfectionism and the extra costs it entailed." [ 218 ] He continued to incorporate discoveries until the end of the 1930s, [ 37 ] even though some of these works were not used to update his relief. [ 227 ] In 1942, Bigot apologized for the revisions to his plan, stating, "the image of the city can only be given by approximations," and "this is already a lot." [ 104 ] He noted the significant progress in the knowledge of Rome’s topography since the 1880s. [ 228 ] Bigot never considered his work finished, and the differences between his final version and the initial works are significant, consequences of major work in the Italian capital. [ 113 ] Bigot’s revisions could be observed during the restoration of the Caen model in 1995. [ 128 ] Bigot did not create just one model but several. [ 26 ] Over the course of his work, it expanded and became "a moving surface." Continuously, he made "modifications, additions, or revisions" [ 229 ] to keep up with the latest discoveries in the archaeological topography of ancient Rome. These changes affected 29 modules out of the 102 in the Caen version, about 25%. [ 230 ] Modules were added at the northern entrance of the city, as well as to the north and east of the Esquiline, stopping at the Baths of Diocletian , a well-known and studied building due to its state of preservation. He also modified the southeast of the Aventine with the Baths of Decius and expanded the east side of the Caelius and Esquiline to achieve a more complete vision of the city. He worked on the area of the Imperial Forums, [ 231 ] then on the Campus Martius , and extended his work on the side of the Trastevere . [ 232 ] [ 233 ] The reconsiderations of the choices made initially as knowledge and archaeological excavations advanced in the 1930s [ 234 ] led to changes in the model, [ 98 ] especially in 1937. [ 98 ] The changes made are rarely dated during the interwar years , [ 235 ] except when the author mentions them specifically. [ 236 ] These revisions are a sign of "a scrupulous update of his sources." 
[ 237 ] Bigot removed elements as research progressed, particularly for the 1937 World's Fair , and some modules were "declassified." However, not all his reliefs were modified: the Caen and Sorbonne plans were updated for the location of the Porticus Aemilia , but the plans of Brussels and Philadelphia were not. [ 238 ] Royo notes that the southern district of the Aventine was not modified, even though Bigot had located the Porticus Aemilia and the Horrea Galbana . [ 233 ] The architect changed the location of the Curia of Pompey , integrating it into the Pompeian portico [ fr ] on the plaster model, but the Curia remained integrated into the Largo Argentina in the partial bronze model. [ 239 ] He modified the Imperial Forums using Gismondi's work, raising the question of the relationship between the two architects. The French architect emphasized the importance of "local color" in his 1942 work, and the diversity of materials present in the city. [ 240 ] The first plan of Bigot, as mentioned in the publication of the Rome dispatches, depicted the unknown spaces with "hatching patterns," areas that would be filled in the final version. [ 183 ] Many spaces in the city—monuments, homes, emporia, and roads—were unknown, and Bigot's plan risked having "many voids," [ 28 ] making his work resemble only a skeleton. [ 73 ] The unknown spaces were completed with "local color" as part of a "project of architecture," and he adapted previous works to avoid presenting a plan with gaps. [ 241 ] Faced with the void, the architect chose to relegate "the accuracy of details in favor of an overall impression." [ 6 ] According to Bigot, "One cannot imagine an assembly of partial resurrections separated by voids that evoke interplanetary spaces." [ 242 ] Some sectors are treated "in the manner of an architectural project" with an obvious concern for plausibility. [ 243 ] Bigot uses the Regionnaires and the Severan marble plan to define the average size of the insulae and domus . He proposes a figure for the city's population [ 244 ] and its distribution. Through calculations of the surface area distribution in Rome and by analogy with the population density of Paris , he manages to calculate the size of the population of the capital of the Roman Empire. [ 245 ] He also attempts to give a capacity for the entertainment buildings. [ 87 ] He reproduces the ancient urban fabric [ 87 ] based on the Forma Urbis and the excavations at Ostia. [ 144 ] He places insulae in areas not definitively known to have been occupied by such constructions. [ 246 ] Furthermore, he situates domus with peristyles in the center of Rome. [ 247 ] The architect places buildings and decorative elements with "at best plausible integration, at worst fanciful." He draws upon archaeological data, the contributions of Lanciani, and personal interpretations. [ 248 ] There is a contradiction in Bigot's project between the comprehensiveness sought and the presence in the model of areas treated as projects, plausible images of reality. [ 144 ] However, he mobilizes as many sources as possible to "get as close as possible to archaeological reality," even though his work is, by nature, "always incomplete and yet finished." [ 167 ] Bigot makes aesthetic choices for his model that are sometimes confirmed archaeologically much later: for example, a street near the Vigna Barberini [ fr ] was confirmed in the 1980s. [ 249 ] He places the Temple of Apollo, located near the Theatre of Marcellus, in the correct spot, discovered only in 1939-1940. 
[ 250 ] However, his work is a "recreation... without direct relation to reality." [ 251 ] Royo identified "exceptional errors" in the displacement of buildings or their identification. [ 229 ] Some errors appear to be involuntary, particularly those related to issues with the connections of enlarged plans from Lanciani, while others seem intentional, as the plan was not updated even though more historically accurate information had become available. For example, errors in the orientation of buildings were not corrected, even though archaeology had advanced, particularly regarding the Temple of Peace and the Temple of Apollo at the Circus Flaminius . [ 231 ] Bigot extrapolated from Lanciani's work, and thus the Brussels model partially represents the Naumachia of Augustus [ fr ] . He sometimes places plausible or fanciful buildings. [ 247 ] His decisions regarding the model are sometimes premature, given the knowledge of the buildings discovered at the time. [ 81 ] The knowledge of the Campus Martius , an area that had been the subject of prestigious construction plans, was incomplete at the time of Bigot, even though the district had retained its ancient plot structure. "Bigot's Campus Martius bears... the mark of the knowledge of his time." [ 252 ] Work on the Severan marble plan accelerated in the second half of the 20th century, [ 253 ] particularly in this area. Bigot inverted the locations of the Theatre of Balbus and the Circus Flaminius , an error due to the unknown locations of these two buildings at the time, [ 5 ] an error repeated by his contemporaries, [ 175 ] which was corrected by Gatti only in 1960. [ 254 ] The Temple of the Forum of Trajan is also misplaced. [ 255 ] The knowledge of this forum evolved with excavations in the early 2000s, and it appears that the temple was located next to the Forum of Augustus , not as Bigot had supposed. [ 114 ] The Horologium Augusti is placed correctly in the model for its Augustan configuration, while in the Constantinian period, the space was built over. This representation, a "gross error," [ 217 ] in what is meant to be a model of Rome in the 4th century, is also anachronistic due to disturbances caused by recurrent flooding in the area. Similarly, the author does not represent urban wastelands, even though they are attested to in the 4th century due to a contraction of the city, [ 217 ] which accelerated in the following century. This type of error cannot be involuntary and is a result of the path chosen by the creator of the relief. Bigot was aware of works that could lead to changes in his model but did not always implement them, such as for the Curia Julia [ 31 ] or the Saepta Julia . [ 95 ] The location of the Saepta is modified on the Caen model. [ 233 ] For the Gardens of Adonis, he indicates they were not located on the Palatine Hill but does not apply this information to his model. Excavations in 1931 confirmed Bigot's theory. The Brussels model was modified. The architect does not seem to have shown interest in the Palatine Hill and Alfonso Bartoli 's work on the Flavian Palace in the late 1920s, even though he was interested in the Vigna Barberini [ fr ] . [ 178 ] The Palatine Hill underwent considerable archaeological work, and Bigot's vision was based on the 19th-century understanding of the area. 
Nevertheless, he did modify the sector in the last version of his model, with treatment given to the House of Augustus and the Temple of the Vigna Barberini, leading to relocations of monuments and raising still-relevant questions. [ 256 ] Bigot also sometimes used the Forma Urbis in a selective way, so as not to deviate from his image of the city. He excluded certain fragments of the marble plan, such as perpendicular straight lines, which "clashed with Rome." [ 257 ] Some fragments were artificially integrated, and Bigot invented a Via Septimiana . [ 258 ] Bigot's Plan of Rome is an original work in more than one way; it is also a tribute to the City. Since Bigot was nearly contemporary with Italo Gismondi , another great architect who produced a much more famous model, the question of their relationship and mutual influence arises. Bigot's Envoi de Rome, "an archaeological summa and [...] a picture of urban genesis" [ 167 ] and "an extraordinary scientific achievement," [ 259 ] possesses a triple originality: it represents a city, not an isolated building; it is a model, not a drawing; and it is a work he pursued continually, not a one-time Envoi de Rome. In his other works, the architect displays the same character, which "closely associates the monument and the city, that is, architecture and urbanism." [ 260 ] According to Royo, the work "is emblematic of a certain historical approach and a particular perception of the city at the end of the last century, caught between urban dream and reality." Rome presents "the anarchy of centuries of sedimentation," but Bigot offers a reading of the city's complexity. [ 261 ] Unlike Lacoste, who applies the notion of the urban project (a colonial-type, deliberately planned urbanism) to the Brussels model, Bigot gives a rational vision of the city's development. [ 141 ] The French architect represents the complex urban phenomenon of the accumulation of buildings in the City in relation to streets and residential areas, rather than just isolated buildings or architectural ensembles as Marcelliani did. [ 262 ] Unlike Henry Lacoste , Bigot does not believe that Roman urbanism followed a plan before the Great Fire of Rome in 64 AD. He assigns a central role to Nero in the city's urban planning after the disaster, even if he considers that work to have remained incomplete. [ 263 ] Bigot's desire to represent the City as a whole distinguishes him from his contemporaries, Italo Gismondi and Giuseppe Marcelliani [ fr ] . [ 264 ] The plan is "the sum of the scientific knowledge of an era" concerning the topography of ancient Rome. [ 265 ] It is not only a model but also "a global representation." [ 266 ] In this regard, Bigot is closer to Garnier 's conceptions, as both believe that "urban organization is the response to the fatigue caused by the repetition of a school exercise." [ 6 ] Garnier drew lessons from Antiquity through the organization of his project, even though he rejected "an academic and dusty image of Antiquity." Bigot, on the other hand, remained caught in "an encyclopedic and artistic concern [...] and [...] an exhaustive reconstruction." [ 267 ] However, due to the "analytical qualities and [the] intuitions of its author," the model remains worthy of interest because of the archaeological confirmations of Bigot's proposals, in addition to "the very concrete nature of his work." [ 268 ] His work is "meritorious and visionary" at a time when the ancient city was still hidden in many ways.
[ 200 ] According to Élisabeth Deniaux, the plan is "the visual translation of the culture of an era about the city that transmitted its civilization to the Western world." [ 106 ] Bigot’s Plan of Rome is not just a record of knowledge about the topography of imperial Rome; it is, according to Manuel Royo, "a paradoxical object of art that gives grandeur the aspect of a miniature and eternity the face of history." [ 269 ] It is "also and above all a testament to the architect's veneration for an image of Rome straight out of classical studies," [ 219 ] "an urban utopia [and] a projection of the intimate universe of its creator." [ 270 ] Royo believes the work is a global vision "where the sense of grandeur, diversity, even eternity, comes together." [ 271 ] The plan "summarizes a certain idea of the city," [ 272 ] it is a place of memory , [ 273 ] destined to be exhibited in a museum. [ 274 ] Bigot’s Plan of Rome belongs to the dreamlike world, and this is perhaps the reason for the absence of human figures in Bigot’s vision. [ 275 ] According to Royo, the plan is a singular object with a "contradiction between the encyclopedic desire of its author and his approach, which is as sensitive as it is aesthetic, to Antiquity." [ 109 ] The bronze model conveys his "temptation of eternity," [ 105 ] but "a fragile eternity," [ 276 ] and is "the result of an effort to clarify [the] intertwined layers in favor of just one of them, fragmentary and constantly reworked." [ 276 ] Bigot's work is a cultural effort, but he is also concerned with ensuring the continuation of his interpretative work. [ 276 ] Models of ancient Rome were still created at the very end of the 20th century: thus, in 1980, Augustan Rome was represented on 4 m² at the Antikenmuseum in Berlin, and in 1990, a 20 m² model depicting Rome during the reigns of the Tarquins was created for the Museum of Rome . [ 277 ] A model of archaic Rome was also made around the same time. Bigot’s model does not serve an ideological, political, or military function, unlike other relief plans; [ 278 ] [ 279 ] the plan is also, for its author, an aesthetic object, [ 280 ] driven by both "artistic and scientific approaches." [ 95 ] Through his work, Paul Bigot is concerned with representing the grandeur of the City during Antiquity, giving "a sort of vision of the grandeur of Rome," [ 281 ] but also the urbanism of Rome, which had become the capital of a modern state about a quarter of a century earlier. [ 71 ] The plan expresses "the monumentality of what it represents." [ 282 ] His work evokes both the "grandeur of ancient Rome" and "a vision of an urban planner." Thus, it is an ambiguous object [ 283 ] that aims to gather "the totality of topographical and historical knowledge about ancient Rome." [ 284 ] Bigot's Plan of Rome allows one to "substitute an intact and therefore glorified image for the destroyed Rome." [ 285 ] According to Manuel Royo, Bigot’s Plan of Rome possesses "didactic, artistic, technical, and historical" characteristics. [ 264 ] The object is the result of "a sensitive experience of the City and [...] a theoretical conception of ancient urban space." [ 10 ] The author's concerns are both pedagogical and aesthetic. [ 286 ] Bigot’s work reflects his "historical, archaeological, and urban vision," [ 287 ] and it is an invitation for a "journey through time and space." [ 288 ] The work is also "a cultural icon offered for the veneration of the eye, leading to a sort of virtual resurrection of ancient Rome." 
[ 289 ] With the planned audiovisual installation, the architect sought to give viewers a "global perception of the historical and geographical territory of Rome." [ 290 ] His city is "an entirely theoretical and cultural universe." [ 291 ] Bigot offers a fictive journey [ 10 ] to the viewer, with the visit marked by the main buildings, [ 292 ] which are juxtaposed in an ideal state of conservation. [ 293 ] The model, however, allows for "an infinity of possible journeys [and] a layering of unique stories," [ 294 ] excluding the evolution of the different monuments. [ 217 ] This journey enables the management of various historical layers within a single object. [ 295 ] Paul Bigot is the only one to have written about his relief plan. His brochure was published in 1911 and then reissued in 1933 and 1937. [ 287 ] These works treat the reader as a spectator and therefore have characteristics typical of travel narratives. [ 264 ] The works also contain anecdotes. [ 296 ] The French architect published another book in 1942, which was reissued in a shortened version in Belgium by Lacoste in 1955, accompanied by the brochure's text [ 265 ] and enriched with illustrations and texts conceived as a literary stroll. [ 297 ] The frontispiece of his works features an eagle with outstretched wings at the center of a laurel wreath, based on a relief from the Church of the Holy Apostles . This representation is present on the façade of the Institute of Art and also in the form of a mold inside the building. [ 298 ] This "ambiguous aesthetic," because it was extensively used by Italian fascism , was abandoned in the reissue by Lacoste in 1955. [ 299 ] Lacoste integrates human figures into the photographs of the plan. [ 270 ] This visit is a journey, a "topographical inventory (...) [punctuated] by historical and anecdotal reminders," [ 244 ] an element of a "minimal encyclopedic knowledge." [ 300 ] These journeys are particularly visible in the early works published by Bigot about his model. According to him, the visitor should refer to the "explanatory legend" [ 287 ] and thereby perceive the organization of urban space. [ 301 ] Paul Bigot's scientific and educational work inspired [ 302 ] or was imitated by the architect and archaeologist Italo Gismondi , whose own model, however, was commissioned by Mussolini for propaganda purposes, [ 84 ] combining "antiquity and the present day, models and real constructions." [ 303 ] The French architect also used Gismondi's archaeological works, [ 250 ] especially after his trip to Italy in 1934, but did not use the Italian architect's model. [ 178 ] The emulation created by Bigot's work and the pride it instilled explain Gismondi's work from 1930 onward, which is both "more complete... and more famous." [ 82 ] Gismondi is also, in a way, the successor of Bigot, [ 303 ] and his work has been considered to surpass Bigot's. [ 304 ] Gismondi's model aligns with the fascist vision of the grandeur of Rome, serving "as an ideal reference and substitute for reality," [ 304 ] and allowing for the extrapolation of incomplete archaeological research (such as the imperial forums , interrupted by the construction of the major axis that bears their name) or preserving the memory of destroyed elements, like those on the Velia . [ 96 ] The model, begun in 1933, [ 305 ] was exhibited in 1937 at the Mostra Augustea della Romanità (Augustan Exhibit of Romanity). The stated goal was to celebrate the bimillenary of Augustus ' birth through grand ceremonies.
It played a central role in these ceremonies, alongside models of Augustus' temple and the city of Ancyra , the site of the discovery of the primary source of the Res Gestae . [ 84 ] After the fall of the fascist regime, Gismondi’s work lost its political dimension and regained its status as a "true object of study." [ 303 ] It was reworked by Gismondi for about 40 years, [ 79 ] incorporating new knowledge. [ 305 ] He based it on the works of Lanciani and Guglielmo Gatti [ fr ] . There is no direct evidence of contact between Bigot and Gismondi, but the 1911 exhibition provided Gismondi with "very rich and fundamental documentation." [ 306 ] The Italian archaeologist benefitted from discoveries related to the deep restructuring of Rome in the 1930s. [ 307 ] For spaces or monuments where uncertainties lingered, Gismondi operated by analogy or presented "volumes in the form of large masses" on his plan. He depicted the Servian Wall as a ruin. [ 308 ] Gismondi also utilized Bigot’s work on residential buildings and developed a typology to place them on his model. [ 305 ] Exhibited at the Esposizione Universale di Roma (EUR) in the Museum of Roman Civilization , intended for the World’s Fair planned for 1942, [ 309 ] the Italian model, called Il Plastico , is larger (1/250 scale) [ 79 ] and depicts the entirety of ancient Rome. It was updated until 1970, whereas Bigot’s model reflects the state of knowledge in 1942, the year of his death. Gismondi’s model ultimately represents the entire area within the Aurelian Wall , excluding Trastevere and the Vatican . [ 310 ] A project to restore Region XIV was under consideration in the early 1990s, as well as the creation of "a true and authentic digital map of ancient Rome" to establish a database and create "an illustrated manual for everyone." [ 311 ] Gismondi's model covers 240 m². [ 312 ] It is made from plaster derived from alabaster powder, reinforced with metal and plant fibers. Initially conceived in plaster, the reliefs were particularly worked on and accentuated by 15 to 20%. [ 313 ] The larger scale allows for more details to be displayed. The relief is better represented, and the materials have a more realistic appearance, with green stains for gardens. [ 314 ] Paola Ciancio Rossetto considers the work "more faithful to reality and the discoveries" and believes its author interprets less. [ 79 ] Gismondi used his knowledge of the site of Ostia , which he excavated, [ 315 ] for his model of Rome, and he made two 1/500 scale models of it. [ 308 ] While Bigot recounted his difficulties with the issues his work posed, Gismondi left no documentation other than drawings or sketches of reconstructions. [ 305 ] [ 109 ] His model is "a possible and silent projection of archaeological reality." [ 303 ] However, according to Paola Ciancio Rossetto, Gismondi is "the successor of Bigot’s work," but in a "more concrete, more realistic, and less passionate" manner. [ 316 ] Paul Bigot’s Plan of Rome , however, remains "an irreplaceable model... both for its technical aspects and its topographical documentation." [ 308 ] Bigot's model is a heritage object that cannot be modified. The use of a "virtual double" allows for the representation of the most recent data and provides a tool for teaching and research. [ 317 ] The virtual model serves both a scientific purpose, with direct access to sources, [ 318 ] and an educational and media role, [ 319 ] creating "a form of digital encyclopedia on Rome." 
[ 320 ] The proposed models are interactive and aim at research, pedagogy, and public outreach; they are a "tool for visualizing a reality that is difficult to perceive today." [ 37 ] Projects of this nature are costly, requiring "considerable human, material, and financial resources." [ 321 ] In 1970, Louis Callebat founded the Center for Ancient Studies and Research at the University of Caen, which worked briefly on computer applications for ancient languages. [ 322 ] Since the early 1990s, a multidisciplinary team formed around Philippe Fleury, a Latin professor with a passion for computing, [ 323 ] who began his work in the laboratory for the computerized analysis of texts. Fleury is a specialist in Vitruvius and ancient mechanical systems. [ 324 ] The formation of the "City-architecture, urbanism, and virtual image" [ 124 ] multidisciplinary pole, the partnership, and methodological work took place from September 1993 to December 1995. In 1994, the Plan of Rome team was formed, including members of the Center for Ancient Studies and Myths (CERLAM), with added expertise in architecture, computing, [ 126 ] history, and art history, [ 325 ] at the same time as the construction of the Humanities Research House (MRSH). [ 326 ] In the early 2000s, the team had about ten members. [ 5 ] Every two years, the work is evaluated by a scientific committee. [ 120 ] The team is working on a virtual reconstruction of ancient Rome, contemporaneous with Bigot's model, but "scientifically up to date" and "modifiable at all times." Gérard Jean-François, director of the University of Caen's Center for Computing Resources, and Françoise Lecocq were part of this team. [ 327 ] The reconstruction began in January 1996. [ 317 ] By December 10, 1998, about twenty buildings were modeled, including around ten related to the Forum Boarium and the Temple of Portunus . [ 328 ] The Curia and the Temple of Portunus were the first buildings to be reconstructed. [ 329 ] By the early 2000s, around thirty elements were reconstructed, including buildings and mechanical elements. [ 61 ] Ancient machines, such as war machines and the hydraulic organ, were also reconstructed, as well as other elements like the floods of the Tiber, Augustus' Horologium , and the raising of the obelisk of Constantine. [ 330 ] [ 331 ] In 2002-2003, two projects were initiated: Virtualia , aimed at promoting productions and responding to commissions, and the construction of a virtual reality center. [ 332 ] The initial goal was to complete the model of Rome during the time of Constantine by 2010, but this was revised to 2015. [ 333 ] Major changes in the organization of the team around 2006 complicated the achievement of this ambitious goal. The rapid evolution of computer hardware and software also made the early models obsolete, requiring them to be redone. [ 121 ] Partnerships were established to find a new dynamic. By 2011, 25% of the virtual model was completed during the partnership with the University of Virginia ’s Rome Reborn project, which produced a "more basic but complete" model. [ 37 ] The project was led by a team under Bernard Frischer and Diane Favro in the Cultural Virtual Reality Lab . [ 159 ] Local partnerships allowed for work in the early 2000s on the reconstruction of the city of Saint-Lô before the bombing during the Normandy Invasion . 
[ 61 ] The experience gained was applied to the reconstruction of other buildings destroyed in 1944, such as the former town hall of Caen or the former university of Caen, in partnership with the city of Caen and the Cadomus association, or the reconstruction of famous Norman buildings at various points in their history, such as the Saint-Pierre Church in Thaon [ fr ] . The financial resources in 2003 came from CERLAM, MRSH , the University of Caen, the state, the CNRS , the DRAC , the city of Caen, and the Lower Normandy region. [ 333 ] Sales of products, images, and multimedia materials also generate revenue [ 319 ] and enhance the research work. [ 320 ] Requests for 3D images from the print and audiovisual media have been fulfilled: not only were images related to the Roman world created, but also ones about Native American civilizations or reconstructions of elements from the Atlantic Wall . [ 334 ] These various reconstructions are part of the American project Rome Reborn , and while the method brings "notoriety and... income," it disperses the energies compared to the original project. [ 259 ] Since 2006, the team has been integrated into the Interdisciplinary Center for Virtual Reality [ fr ] , [ 131 ] a member of the French Virtual Reality Association [ fr ] , with the goal of both sharing technical and human resources and promoting the use of virtual reality, "both a science and a technique." [ 335 ] In 2003, around ten research fields at the University of Caen had expressed interest in virtual reality , [ 336 ] and studies began. [ 326 ] [ 337 ] The technical platform set up in 2006 has the necessary equipment for virtual reality as well as the essential staff, also responsible for promoting the use of the technique and providing the necessary support. It initially had an auditorium that could seat 200 people. [ 326 ] In addition to a graphics calculator and a video switching system, the location includes an immersive room designed to display virtual models in high resolution, "a place for enhancing research and... a place for scientific experimentation," according to Philippe Fleury. [ 337 ] Since 2008, the scientific framework for the project has been ERSAM, "Technological Research Team Education, Ancient Sources, Multimedia, and Plural Audiences." [ 338 ] [ 131 ] Additional work is planned from 2011. [ 326 ] A 45-square-meter virtual reality room, whose construction was initially planned for 2007, [ 339 ] has been operational since December 2016 [ 340 ] and was inaugurated on March 2, 2017. [ 341 ] This equipment, costing 1.2 million euros, [ 342 ] was financed 60% by the Normandy region . [ 343 ] The available space in 2016 was 618 square meters. [ 326 ] CIREVE aims to represent "disappeared, degraded, inaccessible, or future environments," experiment in the most diverse fields, and serve as a training tool. [ 326 ] The experience gained from the reconstruction of Rome has provided "methodological gains." [ 344 ] The virtual model has been used in architecture for a long time, and its use has spread in scientific circles as it allows for "immersion and total illusion." [ 345 ] In the Caen project, "advances in technology [are] at the service of ancient Rome." Furthermore, the work has not only a scientific scope but also "an effort to highlight and preserve cultural heritage." [ 321 ] The team began with "the restitution of the visible," of buildings still in existence, such as the Temple of Portunus and the Curia Julia, before focusing on "restoring the invisible." 
[ 346 ] The work is based on the organization of Bigot's model modules. [ 347 ] The team's goal is to build a realistic reconstruction within an interactive virtual model. The goal of the reconstructions is not to create illustrations like those found in non-scientific publications or even video games but to "visually disseminate scientific syntheses and demonstrate the validation of certain hypotheses." [ 348 ] Only buildings for which documentation exists can be fully reconstructed, both inside and outside. [ 320 ] The digital method allows for updating Bigot's model of Rome, which cannot be modified due to its classification. Bigot wrote that a "subject of this kind is subject to perpetual modifications." [ 349 ] The virtual model, by nature, is "reversible and modifiable at any time." [ 350 ] The restitution is created at a 1:1 scale at a specific moment in time, June 21 at 3:00 p.m., and focuses on the city during the time of Constantine, in 320. [ 351 ] This emperor's reign was also the choice made by Paul Bigot for his model. Therefore, a comparison between the two works is possible. [ 320 ] This period is also the richest in archaeological sources [ 37 ] and represents Rome's "monumental peak." [ 325 ] [ 352 ] The team's long-term goal is to propose reconstructions for other periods in Roman history, such as the monarchy , the time of the Scipios , and the end of Augustus ' reign. [ 325 ] The team's work aims to propose architectural and topographical hypotheses, including mechanical systems related to buildings in use during the Roman era ( velum , stage curtain, etc.). They create a "field of visualization and experimentation" to check for inconsistencies and discuss the different possibilities for reconstruction. [ 37 ] The team strives to restore a plausible image, considering both the known elements and those not attested but reconstructed to give an image of the building as it could have appeared. Access to the sources allows the public to access the scientific dossier. [ 353 ] The virtual restitution has helped rule out hypotheses, particularly for the velum of the Theatre of Pompey, with the traditional hypothesis making the lighting of the upper-class seats seem poor. [ 354 ] For the restitution of the machines, the team draws on the works of Vitruvius : in addition to the velum, the team has worked on how to erect an obelisk in the Circus Maximus, lifting machines, weapons (scorpion), and the hydraulic organ . [ 355 ] The team proposes a realistic restitution for 4th-century Romans, without distinguishing in the produced image between what is attested and what is assumed. For unknown spaces or housing, the team has opted for a relationship with the choices made by Bigot. [ 120 ] The analogy with other comparable known elements helps fill any gaps. [ 325 ] This method is also used by Jean-Claude Golvin for his watercolor reconstructions, and he is a partner in the project, particularly for the entertainment buildings. Interactivity and hyperlinks allow access to the sources and verification of the produced model. This method of accessing notes gives the result the character of a scientific publication. [ 37 ] The scientific dossiers are created by students ranging from master's level to doctoral level [ 333 ] and serve as the foundation for the reconstructions. [ 255 ] The sources used to create the model are digitized, [ 126 ] and their analysis requires extensive work. 
[ 356 ] The analysis of the sources is pursued in several directions to create 3D plans of the buildings: [ 356 ] written sources, which vary widely depending on the structures studied, [ 62 ] databases, and iconographic research, sometimes primarily based on the Forma Urbis , whether the fragments that have survived or others that were lost after their discovery but fortunately drawn. Other artifacts, such as coins, reliefs, and paintings, also provide insights into the buildings. Excavation reports are also used. [ 37 ] [ 325 ] In cases where sources are lacking, the team relies on the hypothesis put forward by Bigot, "naturally contestable and to be contested," [ 357 ] based on the Forma Urbis . [ 358 ] The team aims to distinguish between "what is certain, what is probable, and what is merely a hypothesis." [ 359 ] The virtual model provides fixed images of reconstructions, which are hypotheses, [ 359 ] 3D modeling , synthetic animations, and interactive visits. [ 326 ] [ 353 ] The ultimate goal is to have a "digital encyclopedia on Rome." [ 37 ] This objective is part of a broader movement of "scientific study, [...] valorization, and even [...] preservation of heritage through virtual images." [ 337 ] The techniques used include both virtual reality and stereoscopy . [ 352 ] The graphic designers model, apply textures, and work on the lighting. [ 348 ] The modeling process leads to the creation of a 3D model of the element to be reconstructed, after which the next step is to find the appropriate materials from known samples or created but realistic elements. The team finalizes the work using Autodesk 3ds Max to illuminate the created virtual scene. [ 360 ] The process of creating the virtual image then begins, with the use of an interactivity software, Virtools . [ 361 ] The virtual model allows researchers to propose multiple solutions [ 255 ] and serves as "a field of experimentation for studies on Roman topography." [ 179 ] Confronting hypotheses with the interactive visit allows the team to choose one of these variants [ 255 ] and scientifically validate the result. [ 37 ] [ 320 ] The digitization of the Forum Boarium enabled experimentation with interactivity and the principle of virtual tours, but in 2003, Fleury considered the result to be "not yet scientifically satisfactory." [ 347 ] The proposed reconstructions can be revised in the virtual model if new discoveries emerge or if old elements are reinterpreted. [ 37 ] The reconstruction of Rome has allowed for "renewing the techniques for representing Antiquity but also offering new modes of experience, leading to unprecedented results." [ 344 ] The advantages of the virtual model are numerous and have been listed by Françoise Lecocq: in addition to the use of different scales for visits, it can also take place inside the reconstructed buildings; the visit takes place in a virtual world; multiple chronological levels can be reconstructed; the model is "evolving and reversible"; links allow for reference to sources or any interesting elements; the insertion of characters is possible; all the senses can be evoked. The virtual model breaks free from distance constraints as it can be accessed online. [ 362 ] [ 363 ] The 3D model enables immersion and interaction. [ 326 ] The interactivity allows users to visit the virtual model "at human scale" realistically. 
[ 364 ] A character, aside from providing the scale, allows users to choose between an objective visit and a subjective view [ 37 ] —either from behind a character named Marcus or at human height. Visitors' movements within the model are realistic, though shortcuts are available to go to certain points to facilitate the visit. [ 344 ] The model of the Theatre of Pompey, with its dimensions, allows for testing the usefulness of scenic masks, as the viewer can position themselves at the end of the theater, which had a diameter of 158 m. The model also enables verification of the quality of the emperor's reserved seat, offering a view of the pulpitum and the audience. [ 361 ] The interactive visit allows direct access to the archaeological, iconographic, or literary sources that contributed to the hypotheses used to create the reconstruction. [ 37 ] Graphic design and source analysis "are inseparable and complement each other," and the reconstruction requires close collaboration. [ 348 ] After this virtual visit stage, the project aims to propose competing hypotheses and to depict the city at other points in its history. [ 365 ] The goal is to create "an evolving model with overlapping chronological layers," [ 255 ] which is less cumbersome compared to a physical model. [ 255 ] The presentation of the completed work to the public is at the heart of the project: [ 320 ] after visits to the physical model and the digital model, the team participates in local or national events, and the work is available for remote consultation. Paul Bigot's model is open to the public, especially school groups, secondary school students, and university students. In June 1997, the 5,000th visitor was welcomed. [ 366 ] By the early 2000s, the model was receiving 3,000 annual visitors. [ 5 ] The scenography of the model in the new environment of the Maison de la Recherche en Sciences Humaines allows for visits by various audiences, including schools, with educational resources available for teachers. [ 332 ] The virtual model allows the public to imagine and visualize the city of Rome in antiquity, making it "a true tool for representation accessible to diverse audiences." The website also serves as a platform to publicize the team's work, [ 37 ] since its launch in January 1996. [ 367 ] By September 1996, 80,000 connections had been recorded. [ 368 ] Public access to the work is central to the project. [ 353 ] Interactive kiosks are also located in the rotunda above the model. [ 369 ] Since June 2017, the application Roma in Tabula has been available for free download, allowing users to visit ten buildings in Rome from the 4th century: the Julian Senate House, the Temple of Castor and Pollux, the Forums of Peace and Nerva, the Mausoleums of Augustus and Hadrian, the Julian, Constantine, and Aemilian Basilicas, as well as the Colosseum. [ 370 ] Along with research and educational goals, the team also has a media objective, "activating curiosity about Roman Antiquity." [ 37 ] The Plan of Rome and the team's work are highlighted in articles or journals aimed at diverse audiences. [ 371 ] The laboratory strives to present the progress of its research to the general public through popular science publications, media appearances, exhibitions, participation in the Fête de la Science [ fr ] , [ 332 ] Heritage Days , and since 2006, through the Nocturnes du Plan of Rome , public sessions. 
Initially held at the Humanities Research Centre in Caen and relatively confidential, these sessions have gained popularity and were moved to the chemistry amphitheater, and later again to the Pierre Daure amphitheater, which has 768 seats. [ 372 ] The Nocturnes du Plan de Rome are a venue for presenting the team's work to the public. Additionally, an annual session, the Nocturne invitée , invites an external speaker. [ 373 ] The sessions of the Nocturnes du Plan de Rome are also broadcast on YouTube . [ 374 ]
https://en.wikipedia.org/wiki/Plan_of_Rome_(Bigot)
The planar Hall sensor is a type of magnetic sensor based on the planar Hall effect of ferromagnetic materials. [ 1 ] [ 2 ] It measures the change in anisotropic magnetoresistance caused by an external magnetic field in the Hall geometry. As opposed to an ordinary Hall sensor, which measures field components perpendicular to the sensor plane, the planar Hall sensor responds to magnetic field components in the sensor plane. Generally speaking, for ferromagnetic materials, the resistance is larger when the current flows along the direction of magnetization than when it flows perpendicular to the magnetization vector. This creates an asymmetric electric field perpendicular to the current, which depends on the magnetization state of the sensor. Precisely controlling the magnetization state is the key to the operation of the planar Hall sensor. By fabrication, the magnetization is confined to one particular direction in zero applied field, and the application of a field perpendicular to this direction changes the magnetization state in such a way that the electronic readout is linear with respect to the magnitude of the applied field. This is true for applied fields smaller than a fourth of the intrinsic effective anisotropy field (see ref. 1 for details on the working principle). The planar Hall sensor has been demonstrated as a magnetic bead detector [ 3 ] [ 4 ] and as a means of measuring the Earth's field with nanotesla precision. [ 5 ] As a magnetic bead sensor, the planar Hall sensor can be used as the sensing principle in a magnetic bioassay. [ 1 ] In ref. 5, detection of influenza viruses was demonstrated using an immunoassay imitating a sandwich ELISA based on monoclonal antibodies. [ 6 ]
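As an illustration of this linear regime, the following minimal sketch uses a textbook single-domain description rather than the specific model of the cited references: an in-plane field applied perpendicular to the easy axis rotates the magnetization by an angle θ with sin θ = H/H_k, and the planar Hall signal is proportional to sin θ cos θ, which is nearly linear for fields below a quarter of the anisotropy field H_k. All parameter values are arbitrary placeholders.

```python
import numpy as np

# Minimal single-domain sketch of a planar Hall sensor response.
# Easy axis along x, applied field H along y (in the film plane).
# For |H| < H_k the equilibrium magnetization angle satisfies sin(theta) = H / H_k,
# so the planar Hall signal ~ sin(theta)*cos(theta) is nearly linear at small fields.

I = 1e-3        # bias current (A), arbitrary placeholder
dR = 1.0        # amplitude of the anisotropic magnetoresistance (ohm), arbitrary
H_k = 4000.0    # effective anisotropy field (A/m), arbitrary

def phe_voltage(H):
    """Planar Hall voltage for an in-plane field H perpendicular to the easy axis."""
    s = np.clip(H / H_k, -1.0, 1.0)   # sin(theta) from minimising the anisotropy + Zeeman energy
    c = np.sqrt(1.0 - s**2)           # cos(theta)
    return I * dR * s * c

for h in np.linspace(0.0, H_k / 4.0, 6):   # stay below a quarter of the anisotropy field
    exact = phe_voltage(h)
    linear = I * dR * h / H_k              # small-field linear approximation
    print(f"H = {h:7.1f} A/m   V = {exact:.3e} V   linear approx = {linear:.3e} V")
```

At H = H_k/4 the exact signal deviates from the linear approximation by only a few percent, which is the regime the sensor is operated in.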
https://en.wikipedia.org/wiki/Planar_Hall_sensor
Planar cell polarity (PCP) is the protein-mediated signaling that coordinates the orientation of cells in a layer of epithelial tissue . In vertebrates, examples of mature PCP oriented tissue are the stereo-cilia bundles in the inner ear, [ 1 ] motile cilia of the epithelium, [ 2 ] and cell motility in epidermal wound healing. [ 3 ] Additionally, PCP is known to be crucial to major developmental time points including coordinating convergent extension during gastrulation and coordinating cell behavior for neural tube closure. [ 4 ] Cells orient themselves and their neighbors by establishing asymmetric expression of PCP components on opposing cell membranes within cells to establish and maintain the directionality of the cells. Some of these PCP components are transmembrane proteins which can proliferate the orientation signal to the surrounding cells. [ 5 ] Planar cell polarity was first described in insects and then further defined in fruit flies ( Drosophila melanogaster ). Some of the earlier work on gene controlled polarity of fly wings was published by D. Gubb and A. García-Bellido in 1982 describing how the mutation of some genes resulted in a morphology change in the cuticle orientation on the fly body. [ 6 ] The history of the PCP pathway was largely shaped by fly genetics work, which led to the distinctive names of PCP components such as Frizzled , Van-gogh , and Dishevelled . Such names are typical for new genes discovered in flies, which are often based on the visual appearance of the mutant flies for each gene. Early PCP research focused on its role in embryology and genetics, but the discovery that PCP proteins were localized asymmetrically within the cell pushed the topic into the world of cell biology. [ 5 ] There was a surge in interest in the Planar Cell Polarity pathway after conserved PCP genes were found to be involved in important vertebrate processes: gastrulation, mammalian ear patterning and hearing, and neural tube closure. [ 5 ] Discoveries from this wave of PCP research have implicated PCP in polarized ciliary beating in the trachea and brain ventricles , [ 7 ] [ 8 ] oriented cell divisions, [ 9 ] lung branching, [ 10 ] and hair follicle alignment. [ 11 ] [ 12 ] A major challenge to studying PCP is that the in vivo protein and cell contact signaling required to facilitate it are difficult to recapitulate in a cell culture dish. However, the recent advances in imaging technology and the expansion of genetic tools are helping to uncover how PCP works in the living cell and the role it plays in cell development and biology. [ 5 ] The core planar cell polarity genes, all characterized in Drosophila mutants, affect many structures in Drosophila with PCP features, including the hairs and bristles across the fly body. The core PCP genes in Drosophila and in vertebrates are Frizzled (Fz), Flamingo (Fmi), Strabismus (Stbm)/ Van-gogh (Vang), Prickle (Pk), Dishevelled (Dsh), Diego (Dgo), and the trimeric G protein Gαo. [ 13 ] Frizzled ( Fz ) – The first frizzled mutant in Drosophila was identified in 1982 by D. Gubb and A. Garcia-Bellido. The mutant had polarity defects in the wing, notum, haltere, legs, abdominal tergites and abdominal sternites. [ 6 ] Specifically, D. Gubb and A. Garcia-Bellido saw a polarity defect in the cuticular hairs and bristles on the wings.
Later research found that the function of the Frizzled (Fz) gene in Drosophila melanogaster is required to coordinate the cytoskeletons of epidermal cells to orient cuticular hairs and bristles on the surface of the insect. “In Fz mutants it is not the structure of individual hairs and bristles that is altered, but their orientation with respect to their neighbors and the organism as a whole.” [ 14 ] As shown in Figure 2, in the wild-type wing all hairs point towards the distal tip, but in Fz mutants the hairs point in a disordered manner. [ 14 ] Frizzled encodes a seven-pass transmembrane protein and, because of this, gives epithelial cells the ability to transmit and interpret polarity information from neighboring epithelial cells. [ 14 ] Flamingo ( Fmi ) – Another seven-pass transmembrane receptor, Flamingo is also a cadherin that localizes at cell-cell boundaries in the epithelial cells of the Drosophila wing. In the absence of Fmi , planar polarity was distorted. Fmi localization at the proximal/distal cell boundary is first dependent on the localization of Frizzled at the same boundaries. [ 15 ] The PCP signaling pathway includes several components ( Fz, Dsh and Gαo ) of the ‘‘canonical’’ Wnt signaling pathway . However, the core PCP proteins can function independently of β-catenin, producing downstream changes to the cellular cytoskeleton, and are known as a ‘‘noncanonical’’ Wnt pathway. [ 13 ] The hallmark of the PCP system is the asymmetric and polarized membrane expression of PCP proteins. Dishevelled and Diego are cytoplasmic proteins and are recruited to the membrane by each other and by their association with the transmembrane PCP protein Frizzled. [ 13 ] Strabismus/Vang is a four-transmembrane protein and can recruit the cytoplasmic PCP protein Prickle. [ 13 ] It is known that Prickle can interact with Dishevelled and perturb its recruitment by Frizzled. Through a feedback loop of the extracellular domains of Frizzled and Strabismus at the junctions of two neighboring cell membranes, the complex of Strabismus and Prickle and the complex of Frizzled, Dishevelled, and Diego are localized to opposite sides of the cells along the polarization axis. [ 13 ] Flamingo is thought to localize to both sides and plays a role in homophilic adhesion, the adhesion of cells by the interaction of similar cadherin types. [ 13 ] Failure of these PCP proteins to segregate correctly within a cell boundary can lead to a disruption in PCP such as with the hair cells on fly wings and mouse skin. [ 13 ]
https://en.wikipedia.org/wiki/Planar_cell_polarity
Planar chirality , also known as 2D chirality, is the special case of chirality for two dimensions . Most fundamentally, planar chirality is a mathematical term, finding use in chemistry , physics and related physical sciences, for example, in astronomy , optics and metamaterials . Recent occurrences in latter two fields are dominated by microwave and terahertz applications as well as micro- and nanostructured planar interfaces for infrared and visible light . This term is used in chemistry contexts, [ 2 ] e.g., for a chiral molecule lacking an asymmetric carbon atom, but possessing two non- coplanar rings that are each dissymmetric and which cannot easily rotate about the chemical bond connecting them: 2,2'-dimethylbiphenyl is perhaps the simplest example of this case. Planar chirality is also exhibited by molecules like ( E )- cyclooctene , some di- or poly-substituted metallocenes , and certain monosubstituted paracyclophanes . Nature rarely provides planar chiral molecules, cavicularin being an exception. To assign the configuration of a planar chiral molecule, begin by selecting the pilot atom, which is the highest priority of the atoms that is not in the plane, but is directly attached to an atom in the plane. Next, assign the priority of the three adjacent in-plane atoms, starting with the atom attached to the pilot atom as priority 1, and preferentially assigning in order of highest priority if there is a choice. Then set the pilot atom to in front of the three atoms in question. If the three atoms reside in a clockwise direction when followed in order of priority, the molecule is assigned as R; when counterclockwise it is assigned as S. [ 3 ] Papakostas et al. observed in 2003 that planar chirality affects the polarization of light diffracted by arrays of planar chiral microstructures, where large polarization changes of opposite sign were detected in light diffracted from planar structures of opposite handedness. [ 4 ] The study of planar chiral metamaterials has revealed that planar chirality is also associated with an optical effect in non-diffracting structures: the directionally asymmetric transmission (reflection and absorption) of circularly polarized waves. Planar chiral metamaterials, which are also anisotropic and lossy exhibit different total transmission (reflection and absorption) levels for the same circularly polarized wave incident on their front and back. The asymmetric transmission phenomenon arises from different, e.g. left-to-right, circular polarization conversion efficiencies for opposite propagation directions of the incident wave and therefore the effect is referred to as circular conversion dichroism. Like the twist of a planar chiral pattern appears reversed for opposite directions of observation, planar chiral metamaterials have interchanged properties for left-handed and right-handed circularly polarized waves that are incident on their front and back. In particular left-handed and right-handed circularly polarized waves experience opposite directional transmission (reflection and absorption) asymmetries. [ 5 ] [ 6 ] Achiral components may form a chiral arrangement. In this case, chirality is not an intrinsic property of the components, but rather imposed extrinsically by their relative positions and orientations. This concept is typically applied to experimental arrangements, for example, an achiral (meta)material illuminated by a beam of light, where the illumination direction makes the whole experiment different from its mirror image. 
Extrinsic planar chirality results from illumination of any periodically structured interface for suitable illumination directions. Starting from normal incidence onto a periodically structured interface, extrinsic planar chirality arises from tilting the interface around any axis that does not coincide with a line of mirror symmetry of the interface. In the presence of losses, extrinsic planar chirality can result in circular conversion dichroism, as described above. [ 7 ] Conventional mirrors reverse the handedness of circularly polarized waves upon reflection. In contrast, a chiral mirror reflects circularly polarized waves of one handedness without handedness change [ dubious – discuss ] , while absorbing circularly polarized waves of the opposite handedness. A perfect chiral mirror exhibits circular conversion dichroism with ideal efficiency. Chiral mirrors can be realized by placing a planar chiral metamaterial in front of a conventional mirror. [ 8 ] The concept has been exploited in holography to realize independent holograms for left-handed and right-handed circularly polarized electromagnetic waves. [ 9 ] Active chiral mirrors that can be switched between left and right, or chiral mirror and conventional mirror, have been reported. [ 10 ]
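The front/back asymmetry described above can be illustrated schematically with a transmission matrix written in the circular polarization basis. The sketch below assumes a reciprocal, lossy, anisotropic planar chiral structure whose two circular-conversion coefficients are unequal and are interchanged when the structure is illuminated from the back; the matrix entries are arbitrary illustrative numbers, not measured data.

```python
import numpy as np

# Schematic illustration of circular conversion dichroism.
# t[i, j] is the transmission coefficient from incident circular state j to
# transmitted circular state i, with index 0 = right-handed and 1 = left-handed.
# A lossy, anisotropic, planar chiral structure has unequal conversion terms
# t[1, 0] != t[0, 1]; the values below are arbitrary.
t_front = np.array([[0.60 + 0.10j, 0.15 + 0.05j],
                    [0.35 - 0.10j, 0.60 + 0.10j]])

# Assumption stated above: for back illumination of a reciprocal planar structure,
# the two circular-conversion coefficients are interchanged.
t_back = np.array([[t_front[0, 0], t_front[1, 0]],
                   [t_front[0, 1], t_front[1, 1]]])

def total_transmission(t, incident):
    """Total transmitted power for a unit-amplitude circularly polarized wave."""
    out = t[:, incident]
    return float(np.sum(np.abs(out) ** 2))

for label, t in (("front", t_front), ("back", t_back)):
    print(f"{label}: T(RCP) = {total_transmission(t, 0):.3f}, "
          f"T(LCP) = {total_transmission(t, 1):.3f}")

# The directional asymmetry equals the difference of the two conversion efficiencies,
# and it has opposite sign for the two circular polarizations.
delta = abs(t_front[1, 0]) ** 2 - abs(t_front[0, 1]) ** 2
print(f"asymmetry |t_-+|^2 - |t_+-|^2 = {delta:.3f}")
```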
https://en.wikipedia.org/wiki/Planar_chirality
Planar laser-induced fluorescence (PLIF) is an optical diagnostic technique widely used for flow visualization and quantitative measurements. PLIF has been used for velocity, concentration, temperature and pressure measurements. A PLIF setup consists of a source of light (usually a laser ), an arrangement of lenses to form a sheet, a fluorescent medium, collection optics and a detector. The light from the source illuminates the medium, which then fluoresces. This signal is captured by the detector and can be related to the various properties of the medium. The typical lasers used as light sources are pulsed, which provides a higher peak power than continuous-wave lasers. The short pulse duration is also useful for good temporal resolution . Some of the widely used laser sources are Nd:YAG lasers , dye lasers , excimer lasers , and ion lasers . The light from the laser (usually a beam) is passed through a set of lenses and/or mirrors to form a sheet, which is then used to illuminate the medium. This medium is either made up of fluorescent material or can be seeded with a fluorescent substance. The signal is usually captured by a CCD or CMOS camera (sometimes intensified cameras are also used). Timing electronics are often used to synchronize pulsed light sources with intensified cameras. Unlike several other flow imaging techniques, PLIF may be combined with particle image velocimetry (PIV) . This allows for the simultaneous measurement of a fluid velocity field and species concentration.
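For quantitative concentration imaging in the simplest linear (unsaturated) regime, the recorded signal is often modelled as proportional to the local concentration, the laser-sheet intensity profile and a calibration constant, plus a background. The sketch below is a minimal synthetic example of that correction; the array sizes, sheet profile and calibration value are placeholders, not values from any particular experiment.

```python
import numpy as np

# Minimal sketch of a linear-regime PLIF concentration calculation:
#   signal = calibration * sheet_profile * concentration + background
# so concentration = (signal - background) / (calibration * sheet_profile).
# All numbers below are synthetic placeholders.

rng = np.random.default_rng(0)
ny, nx = 64, 128

background = 5.0 + rng.normal(0.0, 0.2, (ny, nx))        # background/dark frame
sheet = np.exp(-((np.arange(ny) - ny / 2) / 20.0) ** 2)   # Gaussian laser-sheet profile
sheet_profile = np.tile(sheet[:, None], (1, nx))
calibration = 1.5e3                                        # counts per unit concentration

true_concentration = rng.uniform(0.0, 1.0, (ny, nx))       # synthetic "flow" field
signal = calibration * sheet_profile * true_concentration + background

# Reconstruct the concentration field from the raw image.
concentration = (signal - background) / (calibration * sheet_profile)

print("max abs reconstruction error:",
      float(np.max(np.abs(concentration - true_concentration))))
```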
https://en.wikipedia.org/wiki/Planar_laser-induced_fluorescence
In engineering , a mechanism is a device that transforms input forces and movement into a desired set of output forces and movement. Mechanisms generally consist of moving components which may include Gears and gear trains ; Belts and chain drives ; cams and followers ; Linkages ; Friction devices, such as brakes or clutches ; Structural components such as a frame, fasteners, bearings, springs, or lubricants; Various machine elements , such as splines, pins, or keys. German scientist Franz Reuleaux defines machine as "a combination of resistant bodies so arranged that by their means the mechanical forces of nature can be compelled to do work accompanied by certain determinate motion". In this context, his use of machine is generally interpreted to mean mechanism . The combination of force and movement defines power , and a mechanism manages power to achieve a desired set of forces and movement. A mechanism is usually a piece of a larger process, known as a mechanical system or machine . Sometimes an entire machine may be referred to as a mechanism; examples are the steering mechanism in a car , or the winding mechanism of a wristwatch . However, typically, a set of multiple mechanisms is called a machine. From the time of Archimedes to the Renaissance , mechanisms were viewed as constructed from simple machines , such as the lever , pulley , screw , wheel and axle , wedge , and inclined plane . Reuleaux focused on bodies, called links , and the connections between these bodies, called kinematic pairs , or joints. To use geometry to study the movement of a mechanism, its links are modelled as rigid bodies . This means that distances between points in a link are assumed to not change as the mechanism moves—that is, the link does not flex. Thus, the relative movement between points in two connected links is considered to result from the kinematic pair that joins them. Kinematic pairs, or joints, are considered to provide ideal constraints between two links, such as the constraint of a single point for pure rotation, or the constraint of a line for pure sliding, as well as pure rolling without slipping and point contact with slipping. A mechanism is modelled as an assembly of rigid links and kinematic pairs. Reuleaux called the ideal connections between links kinematic pairs . He distinguished between higher pairs , with line contact between the two links, and lower pairs , with area contact between the links. J. Phillips [ clarification needed ] shows that there are many ways to construct pairs that do not fit this simple model. Lower pair: A lower pair is an ideal joint that has surface contact between the pair of elements, as in the following cases: Higher pairs: Generally, a higher pair is a constraint that requires a line or point contact between the elemental surfaces. For example, the contact between a cam and its follower is a higher pair called a cam joint . Similarly, the contact between the involute curves that form the meshing teeth of two gears are cam joints. A kinematic diagram reduces machine components to a skeleton diagram that emphasises the joints and reduces the links to simple geometric elements. This diagram can also be formulated as a graph by representing the links of the mechanism as edges and the joints as vertices of the graph. This version of the kinematic diagram has proven effective in enumerating kinematic structures in the process of machine design. 
[ 1 ] An important consideration in this design process is the degree of freedom of the system of links and joints, which is determined using the Chebychev–Grübler–Kutzbach criterion . While all mechanisms in a mechanical system are three-dimensional, they can be analysed using plane geometry if the movement of the individual components is constrained so that all point trajectories are parallel to a single plane. In this case the system is called a planar mechanism . The kinematic analysis of planar mechanisms uses the subset of the special Euclidean group consisting of planar rotations and translations, denoted by SE(2). The group SE(2) is three-dimensional, which means that every position of a body in the plane is defined by three parameters. The parameters are often the x and y coordinates of the origin of a coordinate frame in M , [ clarification needed ] measured from the origin of a coordinate frame in F , and the angle measured from the x -axis in F to the x -axis in M . [ clarification needed ] This is often described by saying that a body in the plane has three degrees of freedom . The pure rotation of a hinge and the linear translation of a slider can be identified with one-parameter subgroups of SE(2); these define hinges and sliders as the one degree-of-freedom joints of planar mechanisms. The cam joint formed by two surfaces in sliding and rotating contact is a two degree-of-freedom joint. It is possible to construct a mechanism such that the point trajectories in all components lie in concentric spherical shells around a fixed point. An example is the gimbaled gyroscope . These devices are called spherical mechanisms. [ 2 ] Spherical mechanisms are constructed by connecting links with hinged joints such that the axes of each hinge pass through the same point. This point becomes the centre of the concentric spherical shells. The movement of these mechanisms is characterised by the group SO(3) [ clarification needed ] of rotations in three-dimensional space. Other examples of spherical mechanisms are the automotive differential and the robotic wrist. The rotation group SO(3) is three-dimensional. An example of the three parameters that specify a spatial rotation are the roll, pitch and yaw angles used to define the orientation of an aircraft. A mechanism in which a body moves through a general spatial movement is called a spatial mechanism . An example is the RSSR linkage, which can be viewed as a four-bar linkage in which the hinged joints of the coupler link are replaced by rod ends , also called spherical joints or ball joints . The rod ends let the input and output cranks of the RSSR linkage be misaligned to the point that they lie in different planes, which causes the coupler link to move in a general spatial movement. Robot arms , Stewart platforms , and humanoid robotic systems are also examples of spatial mechanisms. Bennett's linkage is an example of a spatial overconstrained mechanism , which is constructed from four hinged joints. The group SE(3) [ clarification needed ] is six-dimensional, which means the position of a body in space is defined by six parameters. Three of the parameters define the origin of the moving reference frame relative to the fixed frame. Three other parameters define the orientation of the moving frame relative to the fixed frame. A linkage is a collection of links connected by joints. Generally, the links are the structural elements and the joints allow movement. Perhaps the single most useful example is the planar four-bar linkage .
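For planar mechanisms, the Chebychev–Grübler–Kutzbach criterion mentioned above reduces to the mobility formula M = 3(N − 1) − 2 j1 − j2, where N counts the links including the fixed frame, j1 the one degree-of-freedom joints (hinges and sliders) and j2 the two degree-of-freedom joints (such as cam joints). A short illustrative sketch, applied to the planar four-bar linkage and two other common cases:

```python
# Planar mobility (Chebychev-Gruebler-Kutzbach criterion):
#   M = 3*(N - 1) - 2*j1 - j2
# N  : number of links, counting the fixed frame
# j1 : joints with one degree of freedom (revolute hinges, prismatic sliders)
# j2 : joints with two degrees of freedom (e.g. a cam joint in sliding and rotating contact)

def planar_mobility(n_links: int, j1: int, j2: int = 0) -> int:
    return 3 * (n_links - 1) - 2 * j1 - j2

# Planar four-bar linkage: four links (one of them fixed) joined by four hinges.
# M = 3*3 - 2*4 = 1, so a single input angle drives the whole linkage.
print(planar_mobility(n_links=4, j1=4))        # 1

# Cam, follower and frame: three links, two one-dof joints, one cam (two-dof) joint.
print(planar_mobility(n_links=3, j1=2, j2=1))  # 1

# A value of 0 indicates a rigid, statically determinate structure rather than a mechanism.
print(planar_mobility(n_links=3, j1=3))        # 0 (three links pinned into a triangle)
```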
There are, however, many more special linkages: A compliant mechanism is a series of rigid bodies connected by compliant elements. These mechanisms have many advantages, including reduced part-count, reduced "slop" between joints (no parasitic motion because of gaps between parts [ 3 ] ), energy storage, low maintenance (they don't require lubrication and there is low mechanical wear), and ease of manufacture. [ 4 ] Flexure bearings (also known as flexure joints ) are a subset of compliant mechanisms that produce a geometrically well-defined motion (rotation) on application of a force. A cam and follower mechanism is formed by the direct contact of two specially shaped links. The driving link is called the cam and the link that is driven through the direct contact of their surfaces is called the follower. The shape of the contacting surfaces of the cam and follower determines the movement of the mechanism. In general a cam and follower mechanism's energy is transferred from cam to follower. The camshaft is rotated and, according to the cam profile, the follower moves up and down. Nowadays, slightly different types of eccentric cam followers are also available, in which energy is transferred from the follower to the cam. The main benefit of this type of cam and follower mechanism is that the follower moves slightly and helps to rotate the cam six times more circumference length with 70% of the force. The transmission of rotation between contacting toothed wheels can be traced back to the Antikythera mechanism of Greece and the south-pointing chariot of China. Illustrations by the Renaissance scientist Georgius Agricola show gear trains with cylindrical teeth. The implementation of the involute tooth yielded a standard gear design that provides a constant speed ratio. Some important features of gears and gear trains are: The design of mechanisms to achieve a particular movement and force transmission is known as the kinematic synthesis of mechanisms . [ 5 ] This is a set of geometric techniques which yield the dimensions of linkages, cam and follower mechanisms, and gears and gear trains to perform a required mechanical movement and power transmission. [ 6 ]
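Because involute gearing gives each mesh a constant speed ratio, the overall ratio of a simple gear train follows directly from the tooth counts of the meshing pairs. A small illustrative sketch (the tooth counts are arbitrary examples):

```python
from functools import reduce

def train_ratio(meshes):
    """Overall speed ratio (input speed / output speed) of a simple gear train.

    `meshes` is a list of meshing pairs (driver_teeth, driven_teeth);
    each external mesh also reverses the direction of rotation.
    """
    ratio = reduce(lambda r, pair: r * (pair[1] / pair[0]), meshes, 1.0)
    direction = -1 if len(meshes) % 2 else 1   # sign of the output rotation
    return ratio, direction

# Two stages: 20 -> 60 teeth, then 15 -> 45 teeth gives a 3 * 3 = 9:1 reduction,
# with the output turning in the same direction as the input (two reversals).
ratio, direction = train_ratio([(20, 60), (15, 45)])
print(f"speed reduction {ratio:.1f}:1, direction {'same' if direction > 0 else 'reversed'}")
```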
https://en.wikipedia.org/wiki/Planar_mechanism
The planar process is a manufacturing process used in the semiconductor industry to build the individual components of a transistor and, in turn, to connect those transistors together. It is the primary process by which silicon integrated circuit chips are built, and it is the most commonly used method of producing junctions during the manufacture of semiconductor devices . [ 1 ] The process utilizes the surface passivation and thermal oxidation methods. The planar process was developed at Fairchild Semiconductor in 1959, and the process proved to be one of the most important single advances in semiconductor technology. [ 1 ] The key concept is to view a circuit in its two-dimensional projection (a plane), thus allowing the use of photographic processing concepts such as film negatives to mask the exposure of light-sensitive chemicals. This allows the use of a series of exposures on a substrate ( silicon ) to create silicon oxide (insulators) or doped regions (conductors). Together with the use of metallization, and the concepts of p–n junction isolation and surface passivation , it is possible to create circuits on a single silicon crystal slice (a wafer) from a monocrystalline silicon boule. The process involves the basic procedures of silicon dioxide (SiO 2 ) oxidation, SiO 2 etching and heat diffusion. The final steps involve oxidizing the entire wafer with an SiO 2 layer, etching contact vias to the transistors, and depositing a covering metal layer over the oxide , thus connecting the transistors without manually wiring them together. In 1955 at Bell Labs , Carl Frosch and Lincoln Derick accidentally grew a layer of silicon dioxide over a silicon wafer and observed its surface passivation properties. [ 2 ] [ 3 ] In 1957, Frosch and Derick were able to manufacture the first silicon dioxide field effect transistors, the first transistors in which drain and source were adjacent at the surface, showing that silicon dioxide surface passivation protected and insulated silicon wafers. [ 4 ] At Bell Labs, the importance of Frosch's technique was immediately realized. Results of their work circulated around Bell Labs in the form of BTL memos before being published in 1957. At Shockley Semiconductor , Shockley had circulated the preprint of their article in December 1956 to all his senior staff, including Jean Hoerni . [ 5 ] [ 6 ] [ 7 ] [ 8 ] Later, Hoerni attended a meeting where Atalla presented a paper about passivation based on the previous results at Bell Labs. [ 8 ] Taking advantage of silicon dioxide's passivating effect on the silicon surface, Hoerni proposed to make transistors that were protected by a layer of silicon dioxide. [ 8 ] Jean Hoerni, while working at Fairchild Semiconductor , first patented the planar process in 1959. [ 9 ] [ 10 ] K. E. Daburlos and H. J. Patterson of Bell Laboratories continued the work of C. Frosch and L. Derick, and developed a process similar to Hoerni’s at about the same time. [ 8 ] Together with the use of metallization (to join together the integrated circuits), and the concept of p–n junction isolation (from Kurt Lehovec ), the researchers at Fairchild were able to create circuits on a single silicon crystal slice (a wafer) from a monocrystalline silicon boule . In 1959, Robert Noyce built on Hoerni's work with his conception of an integrated circuit (IC), which added a layer of metal to the top of Hoerni's basic structure to connect different components, such as transistors, capacitors , or resistors , located on the same piece of silicon.
The planar process provided a powerful way of implementing an integrated circuit that was superior to earlier conceptions. [ 11 ] Noyce's invention was the first monolithic IC chip. [ 12 ] [ 13 ] Early versions of the planar process used photolithography with near-ultraviolet light from a mercury vapor lamp. As of 2011, small features are typically made with 193 nm "deep" UV lithography. [ 14 ] As of 2022, the ASML NXE platform uses 13.5 nm extreme ultraviolet (EUV) light, generated by a tin-based plasma source, as part of the extreme ultraviolet lithography process.
https://en.wikipedia.org/wiki/Planar_process
Planar projections are the subset of 3D graphical projections constructed by linearly mapping points in three-dimensional space to points on a two-dimensional projection plane . The projected point on the plane is chosen such that it is collinear with the corresponding three-dimensional point and the centre of projection . The lines connecting these points are commonly referred to as projectors . The centre of projection can be thought of as the location of the observer, while the plane of projection is the surface on which the two-dimensional projected image of the scene is recorded or from which it is viewed (e.g., photographic negative, photographic print, computer monitor). When the centre of projection is at a finite distance from the projection plane, a perspective projection is obtained. When the centre of projection is at infinity, all the projectors are parallel, and the corresponding subset of planar projections is referred to as parallel projections . Mathematically, planar projections are linear transformations acting on a point in three-dimensional space {\displaystyle \mathbf {a} _{x,y,z}} to give a point {\displaystyle \mathbf {b} _{u,v}} on the projection plane. These transformations consist of various compositions of the five transformations: orthographic projection , rotation , shear , translation and perspective . Planar (azimuthal) projections are also used in cartography to map the Earth, other planets, and objects in space; such a projection is most accurate near its centre point, which makes it best suited to maps of small, close-up areas.
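As a minimal illustrative sketch (the function names and the choice of projection plane z = d are our own assumptions, not from the article), the following contrasts a perspective projection, with the centre of projection at the origin, against a parallel (orthographic) projection:

```python
import numpy as np

def perspective_project(points, d=1.0):
    """Project 3D points onto the plane z = d with the centre of projection
    at the origin. Each projector is the line through the origin and the
    point; the image is where that line meets the plane."""
    points = np.asarray(points, dtype=float)
    z = points[:, 2:3]
    return d * points[:, :2] / z          # (u, v) = (d*x/z, d*y/z)

def parallel_project(points):
    """Orthographic (parallel) projection onto the plane z = 0: the centre
    of projection is at infinity, so all projectors are parallel to the
    z-axis and the z-coordinate is simply dropped."""
    points = np.asarray(points, dtype=float)
    return points[:, :2]

pts = [(1.0, 2.0, 4.0), (0.5, -1.0, 2.0)]
print(perspective_project(pts))   # e.g. (0.25, 0.5) for the first point
print(parallel_project(pts))
```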
https://en.wikipedia.org/wiki/Planar_projection
The planar reentry equations are the equations of motion governing the unpowered reentry of a spacecraft , based on the assumptions of planar motion and constant mass, in an Earth-fixed reference frame . [ 1 ] The equations are given by: {\displaystyle {\begin{cases}{\frac {dV}{dt}}&=-{\frac {\rho V^{2}}{2\beta }}+g\sin \gamma \\{\frac {d\gamma }{dt}}&=-{\frac {V\cos \gamma }{r}}-{\frac {\rho V}{2\beta }}\left({\frac {L}{D}}\right)\cos \sigma +{\frac {g\cos \gamma }{V}}\\{\frac {dh}{dt}}&=-V\sin \gamma \end{cases}}} where V is the velocity, γ the flight-path angle (positive below the local horizontal, consistent with dh/dt = −V sin γ), h the altitude, r the distance from the planet's centre, ρ the local atmospheric density, β the ballistic coefficient, L/D the lift-to-drag ratio, σ the bank angle, and g the gravitational acceleration. Harry Allen and Alfred Eggers , based on their studies of ICBM trajectories, were able to derive an analytical expression for the velocity as a function of altitude. [ 2 ] They made several assumptions: the vehicle flies a ballistic (non-lifting) trajectory; drag dominates, so that gravity may be neglected in the velocity equation; the flight-path angle and ballistic coefficient are constant; and the atmospheric density falls off exponentially with altitude, ρ = ρ 0 exp(− h / H ), with constant scale height H . These assumptions are valid for hypersonic speeds , where the Mach number is greater than 5. Then the planar reentry equations for the spacecraft reduce to {\displaystyle {\frac {dV}{dh}}={\frac {\rho V}{2\beta \sin \gamma }}.} Rearranging terms and integrating from the atmospheric interface conditions at the start of reentry {\displaystyle (V_{\text{atm}},h_{\text{atm}})} leads to the expression: {\displaystyle V=V_{\text{atm}}\exp \left[-{\frac {\rho _{0}H}{2\beta \sin \gamma }}\left(e^{-h/H}-e^{-h_{\text{atm}}/H}\right)\right].} The term {\displaystyle \exp(-h_{\text{atm}}/H)} is small and may be neglected, leading to the velocity: {\displaystyle V(h)=V_{\text{atm}}\exp \left(-{\frac {\rho H}{2\beta \sin \gamma }}\right).} Allen and Eggers were also able to calculate the deceleration along the trajectory, in terms of the number of g's experienced {\displaystyle n=g_{0}^{-1}(dV/dt)} , where {\displaystyle g_{0}} is the gravitational acceleration at the planet's surface. The altitude and velocity at maximum deceleration are {\displaystyle h^{*}=H\ln \left({\frac {\rho _{0}H}{\beta \sin \gamma }}\right)} and {\displaystyle V^{*}=V_{\text{atm}}\,e^{-1/2}\approx 0.61\,V_{\text{atm}}.} It is also possible to compute the maximum stagnation point convective heating with the Allen-Eggers solution and a heat transfer correlation; the Sutton-Graves correlation [ 3 ] is commonly chosen. The heat rate {\displaystyle {\dot {q}}''} at the stagnation point, with units of watts per square meter, is assumed to have the form: {\displaystyle {\dot {q}}''=k{\sqrt {\frac {\rho }{r_{n}}}}\,V^{3},} where {\displaystyle r_{n}} is the effective nose radius. The constant {\displaystyle k=1.74153\times 10^{-4}} for Earth. Then the altitude and value of peak convective heating may be found. Another commonly encountered simplification is a lifting entry with a shallow, slowly varying flight path angle. [ 4 ] The velocity as a function of altitude can be derived from two assumptions: the flight-path angle is shallow (cos γ ≈ 1) and it varies slowly ( dγ / dt ≈ 0). From these two assumptions, we may infer from the second equation of motion that: {\displaystyle \left[{\frac {1}{r}}+{\frac {\rho }{2\beta }}\left({\frac {L}{D}}\right)\cos \sigma \right]V^{2}=g\implies V(h)={\sqrt {\frac {gr}{1+{\frac {\rho r}{2\beta }}\left({\frac {L}{D}}\right)\cos \sigma }}}}
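As an illustrative sketch only (the vehicle and atmosphere parameters below are arbitrary assumptions, not values from the article), the ballistic Allen–Eggers case above can be checked by integrating the reduced equation dV/dh = ρV/(2β sin γ) numerically and comparing with the closed-form velocity:

```python
import numpy as np

# Assumed, Earth-like illustrative constants
rho0, H = 1.225, 7200.0          # sea-level density (kg/m^3), scale height (m)
beta = 400.0                     # assumed ballistic coefficient, kg/m^2
gamma = np.radians(30.0)         # constant flight-path angle below horizontal
V_atm, h_atm = 7500.0, 120e3     # entry velocity (m/s), interface altitude (m)

def rho(h):
    return rho0 * np.exp(-h / H)

def v_allen_eggers(h):
    """Closed-form Allen-Eggers velocity, V_atm * exp(-rho*H / (2*beta*sin(gamma)))."""
    return V_atm * np.exp(-rho(h) * H / (2.0 * beta * np.sin(gamma)))

def v_numeric(h_final, n=200_000):
    """Forward-Euler integration of dV/dh = rho*V / (2*beta*sin(gamma))."""
    hs = np.linspace(h_atm, h_final, n)
    dh = hs[1] - hs[0]               # negative: we descend from h_atm
    V = V_atm
    for hh in hs[:-1]:
        V += rho(hh) * V / (2.0 * beta * np.sin(gamma)) * dh
    return V

for h in (60e3, 40e3, 20e3):
    print(f"h = {h/1e3:5.0f} km   closed form {v_allen_eggers(h):8.1f} m/s"
          f"   numeric {v_numeric(h):8.1f} m/s")   # the two closely agree
```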
https://en.wikipedia.org/wiki/Planar_reentry_equations
In physics , Planck's law (also Planck radiation law [ 1 ] : 1305 ) describes the spectral density of electromagnetic radiation emitted by a black body in thermal equilibrium at a given temperature T , when there is no net flow of matter or energy between the body and its environment. [ 2 ] At the end of the 19th century, physicists were unable to explain why the observed spectrum of black-body radiation , which by then had been accurately measured, diverged significantly at higher frequencies from that predicted by existing theories. In 1900, German physicist Max Planck heuristically derived a formula for the observed spectrum by assuming that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, E , that was proportional to the frequency of its associated electromagnetic wave . While Planck originally regarded the hypothesis of dividing energy into increments as a mathematical artifice, introduced merely to get the correct answer, other physicists including Albert Einstein built on his work, and Planck's insight is now recognized to be of fundamental importance to quantum theory . Every physical body spontaneously and continuously emits electromagnetic radiation and the spectral radiance of a body, B ν , describes the spectral emissive power per unit area, per unit solid angle and per unit frequency for particular radiation frequencies. The relationship given by Planck's radiation law, given below, shows that with increasing temperature, the total radiated energy of a body increases and the peak of the emitted spectrum shifts to shorter wavelengths. [ 3 ] According to Planck's distribution law, the spectral energy density (energy per unit volume per unit frequency) at a given temperature is given by: [ 4 ] [ 5 ] {\displaystyle u_{\nu }(\nu ,T)={\frac {8\pi h\nu ^{3}}{c^{3}}}\cdot {\frac {1}{e^{\frac {h\nu }{k_{\mathrm {B} }T}}-1}}.} Alternatively, the law can be expressed for the spectral radiance of a body for frequency ν at absolute temperature T given as: [ 6 ] [ 7 ] [ 8 ] {\displaystyle B_{\nu }(\nu ,T)={\frac {2h\nu ^{3}}{c^{2}}}\cdot {\frac {1}{e^{\frac {h\nu }{k_{\mathrm {B} }T}}-1}}} where k B is the Boltzmann constant , h is the Planck constant , and c is the speed of light in the medium, whether material or vacuum. The SI units of the spectral radiance B ν are W·sr −1 ·m −2 ·Hz −1 , while those of the spectral radiance B λ are W·sr −1 ·m −3 . The cgs units of spectral radiance B ν are erg·s −1 ·sr −1 ·cm −2 ·Hz −1 . The terms B and u are related to each other by a factor of 4π / c , since B is independent of direction and radiation travels at speed c . The spectral radiance can also be expressed per unit wavelength λ instead of per unit frequency. In addition, the law may be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. In the limit of low frequencies (i.e. long wavelengths), Planck's law tends to the Rayleigh–Jeans law , while in the limit of high frequencies (i.e. small wavelengths) it tends to the Wien approximation . Max Planck developed the law in 1900 with only empirically determined constants, and later showed that, expressed as an energy distribution, it is the unique stable distribution for radiation in thermodynamic equilibrium .
[ 2 ] As an energy distribution, it is one of a family of thermal equilibrium distributions which include the Bose–Einstein distribution , the Fermi–Dirac distribution and the Maxwell–Boltzmann distribution . A black-body is an idealised object which absorbs and emits all radiation frequencies. Near thermodynamic equilibrium , the emitted radiation is closely described by Planck's law and because of its dependence on temperature , Planck radiation is said to be thermal radiation, such that the higher the temperature of a body the more radiation it emits at every wavelength. Planck radiation has a maximum intensity at a wavelength that depends on the temperature of the body. For example, at room temperature (~ 300 K ), a body emits thermal radiation that is mostly infrared and invisible. At higher temperatures the amount of infrared radiation increases and can be felt as heat, and more visible radiation is emitted so the body glows visibly red. At higher temperatures, the body is bright yellow or blue-white and emits significant amounts of short wavelength radiation, including ultraviolet and even x-rays . The surface of the Sun (~ 6000 K ) emits large amounts of both infrared and ultraviolet radiation; its emission is peaked in the visible spectrum. This shift due to temperature is called Wien's displacement law . Planck radiation is the greatest amount of radiation that any body at thermal equilibrium can emit from its surface, whatever its chemical composition or surface structure. [ 9 ] The passage of radiation across an interface between media can be characterized by the emissivity of the interface (the ratio of the actual radiance to the theoretical Planck radiance), usually denoted by the symbol ε . It is in general dependent on chemical composition and physical structure, on temperature, on the wavelength, on the angle of passage, and on the polarization . [ 10 ] The emissivity of a natural interface is always between ε = 0 and 1. A body that interfaces with another medium which both has ε = 1 and absorbs all the radiation incident upon it is said to be a black body. The surface of a black body can be modelled by a small hole in the wall of a large enclosure which is maintained at a uniform temperature with opaque walls that, at every wavelength, are not perfectly reflective. At equilibrium, the radiation inside this enclosure is described by Planck's law, as is the radiation leaving the small hole. Just as the Maxwell–Boltzmann distribution is the unique maximum entropy energy distribution for a gas of material particles at thermal equilibrium, so is Planck's distribution for a gas of photons . [ 11 ] [ 12 ] By contrast to a material gas where the masses and number of particles play a role, the spectral radiance, pressure and energy density of a photon gas at thermal equilibrium are entirely determined by the temperature. If the photon gas is not Planckian, the second law of thermodynamics guarantees that interactions (between photons and other particles or even, at sufficiently high temperatures, between the photons themselves) will cause the photon energy distribution to change and approach the Planck distribution. In such an approach to thermodynamic equilibrium, photons are created or annihilated in the right numbers and with the right energies to fill the cavity with a Planck distribution until they reach the equilibrium temperature. It is as if the gas is a mixture of sub-gases, one for every band of wavelengths, and each sub-gas eventually attains the common temperature. 
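A small numerical sketch of the frequency form of the law quoted above (standard CODATA constants; the function names are our own):

```python
import numpy as np

h  = 6.62607015e-34   # Planck constant, J s
c  = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K

def planck_B_nu(nu, T):
    """Spectral radiance B_nu(nu, T) in W m^-2 sr^-1 Hz^-1."""
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

def planck_u_nu(nu, T):
    """Spectral energy density u_nu = (4*pi/c) * B_nu, in J m^-3 Hz^-1."""
    return 4.0 * np.pi / c * planck_B_nu(nu, T)

# Example: radiance of a 5778 K black body (roughly the Sun) at 500 THz (~600 nm)
print(planck_B_nu(5.0e14, 5778.0))
```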
The quantity B ν ( ν , T ) is the spectral radiance as a function of temperature and frequency. It has units of W · m −2 · sr −1 · Hz −1 in the SI system . An infinitesimal amount of power B ν ( ν , T ) cos θ dA d Ω dν is radiated in the direction described by the angle θ from the surface normal from infinitesimal surface area dA into infinitesimal solid angle d Ω in an infinitesimal frequency band of width dν centered on frequency ν . The total power radiated into any solid angle is the integral of B ν ( ν , T ) over those three quantities, and is given by the Stefan–Boltzmann law . The spectral radiance of Planckian radiation from a black body has the same value for every direction and angle of polarization, and so the black body is said to be a Lambertian radiator . Planck's law can be encountered in several forms depending on the conventions and preferences of different scientific fields. The various forms of the law for spectral radiance are summarized in the table below. Forms on the left are most often encountered in experimental fields , while those on the right are most often encountered in theoretical fields . In the fractional bandwidth formulation, x = h ν k B T = h c λ k B T {\textstyle x={\frac {h\nu }{k_{\mathrm {B} }T}}={\frac {hc}{\lambda k_{\mathrm {B} }T}}} , and the integration is with respect to d ( ln ⁡ x ) = d ( ln ⁡ ν ) = d ν ν = − d λ λ = − d ( ln ⁡ λ ) {\textstyle \mathrm {d} (\ln x)=\mathrm {d} (\ln \nu )={\frac {\mathrm {d} \nu }{\nu }}=-{\frac {\mathrm {d} \lambda }{\lambda }}=-\mathrm {d} (\ln \lambda )} . Planck's law can also be written in terms of the spectral energy density ( u ) by multiplying B by ⁠ 4π / c ⁠ : [ 17 ] u i ( T ) = 4 π c B i ( T ) . {\displaystyle u_{i}(T)={\frac {4\pi }{c}}B_{i}(T).} These distributions represent the spectral radiance of blackbodies—the power emitted from the emitting surface, per unit projected area of emitting surface, per unit solid angle , per spectral unit (frequency, wavelength, wavenumber or their angular equivalents, or fractional frequency or wavelength). Since the radiance is isotropic (i.e. independent of direction), the power emitted at an angle to the normal is proportional to the projected area, and therefore to the cosine of that angle as per Lambert's cosine law , and is unpolarized . Different spectral variables require different corresponding forms of expression of the law. In general, one may not convert between the various forms of Planck's law simply by substituting one variable for another, because this would not take into account that the different forms have different units. Wavelength and frequency units are reciprocal. Corresponding forms of expression are related because they express one and the same physical fact: for a particular physical spectral increment, a corresponding particular physical energy increment is radiated. This is so whether it is expressed in terms of an increment of frequency, d ν , or, correspondingly, of wavelength, d λ , or of fractional bandwidth, d ν / ν or d λ / λ . Introduction of a minus sign can indicate that an increment of frequency corresponds with decrement of wavelength. In order to convert the corresponding forms so that they express the same quantity in the same units we multiply by the spectral increment. 
Then, for a particular spectral increment, the particular physical energy increment may be written {\displaystyle B_{\lambda }(\lambda ,T)\,d\lambda =-B_{\nu }(\nu (\lambda ),T)\,d\nu ,} which leads to {\displaystyle B_{\lambda }(\lambda ,T)=-{\frac {d\nu }{d\lambda }}B_{\nu }(\nu (\lambda ),T).} Also, ν ( λ ) = c / λ , so that dν / dλ = − c / λ 2 . Substitution gives the correspondence between the frequency and wavelength forms, with their different dimensions and units. [ 15 ] [ 18 ] Consequently, {\displaystyle {\frac {B_{\lambda }(T)}{B_{\nu }(T)}}={\frac {c}{\lambda ^{2}}}={\frac {\nu ^{2}}{c}}.} Evidently, the location of the peak of the spectral distribution for Planck's law depends on the choice of spectral variable. Nevertheless, in a manner of speaking, this formula means that the shape of the spectral distribution is independent of temperature, according to Wien's displacement law, as detailed below in § Properties §§ Percentiles . The fractional bandwidth form is related to the other forms by ν B ν = λ B λ . [ 16 ] In the above variants of Planck's law, the wavelength and wavenumber variants use the terms 2 hc 2 and hc / k B which comprise physical constants only. Consequently, these terms can be considered as physical constants themselves, [ 19 ] and are therefore referred to as the first radiation constant c 1 L and the second radiation constant c 2 , with {\displaystyle c_{1L}=2hc^{2}} and {\displaystyle c_{2}=hc/k_{\mathrm {B} }.} Using the radiation constants, the wavelength variant of Planck's law can be simplified to {\displaystyle L(\lambda ,T)={\frac {c_{1L}}{\lambda ^{5}}}{\frac {1}{\exp \left({\frac {c_{2}}{\lambda T}}\right)-1}}} and the wavenumber variant can be simplified correspondingly. L is used here instead of B because it is the SI symbol for spectral radiance . The L in c 1 L refers to that. This reference is necessary because Planck's law can be reformulated to give spectral radiant exitance M ( λ , T ) rather than spectral radiance L ( λ , T ) , in which case c 1 replaces c 1 L , with {\displaystyle c_{1}=\pi c_{1L}=2\pi hc^{2},} so that Planck's law for spectral radiant exitance can be written as {\displaystyle M(\lambda ,T)={\frac {c_{1}}{\lambda ^{5}}}{\frac {1}{\exp \left({\frac {c_{2}}{\lambda T}}\right)-1}}} As measuring techniques have improved, the General Conference on Weights and Measures has revised its estimate of c 2 ; see Planckian locus § International Temperature Scale for details. Planck's law describes the unique and characteristic spectral distribution for electromagnetic radiation in thermodynamic equilibrium, when there is no net flow of matter or energy. [ 2 ] Its physics is most easily understood by considering the radiation in a cavity with rigid opaque walls. Motion of the walls can affect the radiation. If the walls are not opaque, then the thermodynamic equilibrium is not isolated. It is of interest to explain how the thermodynamic equilibrium is attained.
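A quick numerical check of the frequency/wavelength correspondence above (a sketch in our own notation; constants as before):

```python
import numpy as np

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def B_nu(nu, T):
    return 2*h*nu**3/c**2 / np.expm1(h*nu/(kB*T))

def B_lam(lam, T):
    return 2*h*c**2/lam**5 / np.expm1(h*c/(lam*kB*T))

T, lam = 5778.0, 600e-9
nu = c / lam
# B_lambda = (c / lambda^2) * B_nu, the Jacobian |d nu / d lambda|
print(B_lam(lam, T), (c/lam**2) * B_nu(nu, T))   # the two values agree
```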
There are two main cases: (a) when the approach to thermodynamic equilibrium is in the presence of matter, when the walls of the cavity are imperfectly reflective for every wavelength or when the walls are perfectly reflective while the cavity contains a small black body (this was the main case considered by Planck); or (b) when the approach to equilibrium is in the absence of matter, when the walls are perfectly reflective for all wavelengths and the cavity contains no matter. For matter not enclosed in such a cavity, thermal radiation can be approximately explained by appropriate use of Planck's law. Classical physics led, via the equipartition theorem , to the ultraviolet catastrophe , a prediction that the total blackbody radiation intensity was infinite. If supplemented by the classically unjustifiable assumption that for some reason the radiation is finite, classical thermodynamics provides an account of some aspects of the Planck distribution, such as the Stefan–Boltzmann law , and the Wien displacement law . For the case of the presence of matter, quantum mechanics provides a good account, as found below in the section headed Einstein coefficients . This was the case considered by Einstein, and is nowadays used for quantum optics. [ 20 ] [ 21 ] For the case of the absence of matter, quantum field theory is necessary, because non-relativistic quantum mechanics with fixed particle numbers does not provide a sufficient account. Quantum theoretical explanation of Planck's law views the radiation as a gas of massless, uncharged, bosonic particles, namely photons, in thermodynamic equilibrium . Photons are viewed as the carriers of the electromagnetic interaction between electrically charged elementary particles. Photon numbers are not conserved. Photons are created or annihilated in the right numbers and with the right energies to fill the cavity with photons described by the Planck distribution. For a photon gas in thermodynamic equilibrium, the internal energy density is entirely determined by the temperature; moreover, the pressure is entirely determined by the internal energy density. This is unlike the case of thermodynamic equilibrium for material gases, for which the internal energy is determined not only by the temperature, but also, independently, by the respective numbers of the different molecules, and independently again, by the specific characteristics of the different molecules. For different material gases at given temperature, the pressure and internal energy density can vary independently, because different molecules can carry independently different excitation energies. Planck's law arises as a limit of the Bose–Einstein distribution , the energy distribution describing non-interactive bosons in thermodynamic equilibrium. In the case of massless bosons such as photons and gluons , the chemical potential is zero and the Bose–Einstein distribution reduces to the Planck distribution. There is another fundamental equilibrium energy distribution: the Fermi–Dirac distribution , which describes fermions , such as electrons, in thermal equilibrium. The two distributions differ because multiple bosons can occupy the same quantum state, while multiple fermions cannot. At low densities, the number of available quantum states per particle is large, and this difference becomes irrelevant. In the low density limit, the Bose–Einstein and the Fermi–Dirac distribution each reduce to the Maxwell–Boltzmann distribution . 
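A small numerical illustration of these limits (our own sketch; ε denotes a photon energy, and the chemical potential μ is zero for photons, as stated above):

```python
import numpy as np

kB = 1.380649e-23
T = 300.0

def bose_einstein(eps, T, mu=0.0):
    """Mean occupation of a state of energy eps; mu = 0 gives the Planck case."""
    return 1.0 / np.expm1((eps - mu) / (kB * T))

def maxwell_boltzmann(eps, T, mu=0.0):
    return np.exp(-(eps - mu) / (kB * T))

for ratio in (0.1, 1.0, 10.0):          # eps expressed in units of kB*T
    eps = ratio * kB * T
    print(ratio, bose_einstein(eps, T), maxwell_boltzmann(eps, T))
# For eps >> kB*T (low occupancy) the two factors coincide; for eps << kB*T
# the Bose-Einstein occupation is much larger than the Maxwell-Boltzmann one.
```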
Kirchhoff's law of thermal radiation is a succinct and brief account of a complicated physical situation. The following is an introductory sketch of that situation, and is very far from being a rigorous physical argument. The purpose here is only to summarize the main physical factors in the situation, and the main conclusions. There is a difference between conductive heat transfer and radiative heat transfer . Radiative heat transfer can be filtered to pass only a definite band of radiative frequencies. It is generally known that the hotter a body becomes, the more heat it radiates at every frequency. In a cavity in an opaque body with rigid walls that are not perfectly reflective at any frequency, in thermodynamic equilibrium, there is only one temperature, and it must be shared in common by the radiation of every frequency. One may imagine two such cavities, each in its own isolated radiative and thermodynamic equilibrium. One may imagine an optical device that allows radiative heat transfer between the two cavities, filtered to pass only a definite band of radiative frequencies. If the values of the spectral radiances of the radiations in the cavities differ in that frequency band, heat may be expected to pass from the hotter to the colder. One might propose to use such a filtered transfer of heat in such a band to drive a heat engine. If the two bodies are at the same temperature, the second law of thermodynamics does not allow the heat engine to work. It may be inferred that for a temperature common to the two bodies, the values of the spectral radiances in the pass-band must also be common. This must hold for every frequency band. [ 22 ] [ 23 ] [ 24 ] This became clear to Balfour Stewart and later to Kirchhoff. Balfour Stewart found experimentally that of all surfaces, one of lamp-black emitted the greatest amount of thermal radiation for every quality of radiation, judged by various filters. Thinking theoretically, Kirchhoff went a little further and pointed out that this implied that the spectral radiance, as a function of radiative frequency, of any such cavity in thermodynamic equilibrium must be a unique universal function of temperature. He postulated an ideal black body that interfaced with its surroundings in just such a way as to absorb all the radiation that falls on it. By the Helmholtz reciprocity principle, radiation from the interior of such a body would pass unimpeded directly to its surroundings without reflection at the interface. In thermodynamic equilibrium, the thermal radiation emitted from such a body would have that unique universal spectral radiance as a function of temperature. This insight is the root of Kirchhoff's law of thermal radiation. One may imagine a small homogeneous spherical material body labeled X at a temperature T X , lying in a radiation field within a large cavity with walls of material labeled Y at a temperature T Y . The body X emits its own thermal radiation. At a particular frequency ν , the radiation emitted from a particular cross-section through the centre of X in one sense in a direction normal to that cross-section may be denoted I ν , X ( T X ) , characteristically for the material of X . At that frequency ν , the radiative power from the walls into that cross-section in the opposite sense in that direction may be denoted I ν , Y ( T Y ) , for the wall temperature T Y . 
For the material of X , defining the absorptivity α ν , X , Y ( T X , T Y ) as the fraction of that incident radiation absorbed by X , that incident energy is absorbed at a rate α ν , X , Y ( T X , T Y ) I ν , Y ( T Y ) . The rate q ( ν , T X , T Y ) of accumulation of energy in one sense into the cross-section of the body can then be expressed q ( ν , T X , T Y ) = α ν , X , Y ( T X , T Y ) I ν , Y ( T Y ) − I ν , X ( T X ) . {\displaystyle q(\nu ,T_{X},T_{Y})=\alpha _{\nu ,X,Y}(T_{X},T_{Y})I_{\nu ,Y}(T_{Y})-I_{\nu ,X}(T_{X}).} Kirchhoff's seminal insight, mentioned just above, was that, at thermodynamic equilibrium at temperature T , there exists a unique universal radiative distribution, nowadays denoted B ν ( T ) , that is independent of the chemical characteristics of the materials X and Y , that leads to a very valuable understanding of the radiative exchange equilibrium of any body at all, as follows. When there is thermodynamic equilibrium at temperature T , the cavity radiation from the walls has that unique universal value, so that I ν , Y ( T Y ) = B ν ( T ) . Further, one may define the emissivity ε ν , X ( T X ) of the material of the body X just so that at thermodynamic equilibrium at temperature T X = T , one has I ν , X ( T X ) = I ν , X ( T ) = ε ν , X ( T ) B ν ( T ) . When thermal equilibrium prevails at temperature T = T X = T Y , the rate of accumulation of energy vanishes so that q ( ν , T X , T Y ) = 0 . It follows that in thermodynamic equilibrium, when T = T X = T Y , 0 = α ν , X , Y ( T , T ) B ν ( T ) − ϵ ν , X ( T ) B ν ( T ) . {\displaystyle 0=\alpha _{\nu ,X,Y}(T,T)B_{\nu }(T)-\epsilon _{\nu ,X}(T)B_{\nu }(T).} Kirchhoff pointed out that it follows that in thermodynamic equilibrium, when T = T X = T Y , α ν , X , Y ( T , T ) = ϵ ν , X ( T ) . {\displaystyle \alpha _{\nu ,X,Y}(T,T)=\epsilon _{\nu ,X}(T).} Introducing the special notation α ν , X ( T ) for the absorptivity of material X at thermodynamic equilibrium at temperature T (justified by a discovery of Einstein, as indicated below), one further has the equality α ν , X ( T ) = ϵ ν , X ( T ) {\displaystyle \alpha _{\nu ,X}(T)=\epsilon _{\nu ,X}(T)} at thermodynamic equilibrium. The equality of absorptivity and emissivity here demonstrated is specific for thermodynamic equilibrium at temperature T and is in general not to be expected to hold when conditions of thermodynamic equilibrium do not hold. The emissivity and absorptivity are each separately properties of the molecules of the material but they depend differently upon the distributions of states of molecular excitation on the occasion, because of a phenomenon known as "stimulated emission", that was discovered by Einstein. On occasions when the material is in thermodynamic equilibrium or in a state known as local thermodynamic equilibrium, the emissivity and absorptivity become equal. Very strong incident radiation or other factors can disrupt thermodynamic equilibrium or local thermodynamic equilibrium. Local thermodynamic equilibrium in a gas means that molecular collisions far outweigh light emission and absorption in determining the distributions of states of molecular excitation. Kirchhoff pointed out that he did not know the precise character of B ν ( T ) , but he thought it important that it should be found out. Four decades after Kirchhoff's insight of the general principles of its existence and character, Planck's contribution was to determine the precise mathematical expression of that equilibrium distribution B ν ( T ) . 
In physics, one considers an ideal black body, here labeled B , defined as one that completely absorbs all of the electromagnetic radiation falling upon it at every frequency ν (hence the term "black"). According to Kirchhoff's law of thermal radiation, this entails that, for every frequency ν , at thermodynamic equilibrium at temperature T , one has α ν , B ( T ) = ε ν , B ( T ) = 1 , so that the thermal radiation from a black body is always equal to the full amount specified by Planck's law. No physical body can emit thermal radiation that exceeds that of a black body, since if it were in equilibrium with a radiation field, it would be emitting more energy than was incident upon it. Though perfectly black materials do not exist, in practice a black surface can be accurately approximated. [ 2 ] As to its material interior, a body of condensed matter, liquid, solid, or plasma, with a definite interface with its surroundings, is completely black to radiation if it is completely opaque. That means that it absorbs all of the radiation that penetrates the interface of the body with its surroundings, and enters the body. This is not too difficult to achieve in practice. On the other hand, a perfectly black interface is not found in nature. A perfectly black interface reflects no radiation, but transmits all that falls on it, from either side. The best practical way to make an effectively black interface is to simulate an 'interface' by a small hole in the wall of a large cavity in a completely opaque rigid body of material that does not reflect perfectly at any frequency, with its walls at a controlled temperature. Beyond these requirements, the component material of the walls is unrestricted. Radiation entering the hole has almost no possibility of escaping the cavity without being absorbed by multiple impacts with its walls. [ 25 ] As explained by Planck, [ 26 ] a radiating body has an interior consisting of matter, and an interface with its contiguous neighbouring material medium, which is usually the medium from within which the radiation from the surface of the body is observed. The interface is not composed of physical matter but is a theoretical conception, a mathematical two-dimensional surface, a joint property of the two contiguous media, strictly speaking belonging to neither separately. Such an interface can neither absorb nor emit, because it is not composed of physical matter; but it is the site of reflection and transmission of radiation, because it is a surface of discontinuity of optical properties. The reflection and transmission of radiation at the interface obey the Stokes–Helmholtz reciprocity principle . At any point in the interior of a black body located inside a cavity in thermodynamic equilibrium at temperature T the radiation is homogeneous, isotropic and unpolarized. A black body absorbs all and reflects none of the electromagnetic radiation incident upon it. According to the Helmholtz reciprocity principle, radiation from the interior of a black body is not reflected at its surface, but is fully transmitted to its exterior. Because of the isotropy of the radiation in the body's interior, the spectral radiance of radiation transmitted from its interior to its exterior through its surface is independent of direction. [ 27 ] This is expressed by saying that radiation from the surface of a black body in thermodynamic equilibrium obeys Lambert's cosine law. 
[ 28 ] [ 29 ] This means that the spectral flux d Φ( dA , θ , d Ω, dν ) from a given infinitesimal element of area dA of the actual emitting surface of the black body, detected from a given direction that makes an angle θ with the normal to the actual emitting surface at dA , into an element of solid angle of detection d Ω centred on the direction indicated by θ , in an element of frequency bandwidth dν , can be represented as [ 30 ] d Φ ( d A , θ , d Ω , d ν ) d Ω = L 0 ( d A , d ν ) d A d ν cos ⁡ θ {\displaystyle {\frac {d\Phi (dA,\theta ,d\Omega ,d\nu )}{d\Omega }}=L^{0}(dA,d\nu )\,dA\,d\nu \,\cos \theta } where L 0 ( dA , dν ) denotes the flux, per unit area per unit frequency per unit solid angle, that area dA would show if it were measured in its normal direction θ = 0 . The factor cos θ is present because the area to which the spectral radiance refers directly is the projection, of the actual emitting surface area, onto a plane perpendicular to the direction indicated by θ . This is the reason for the name cosine law . Taking into account the independence of direction of the spectral radiance of radiation from the surface of a black body in thermodynamic equilibrium, one has L 0 ( dA , dν ) = B ν ( T ) and so d Φ ( d A , θ , d Ω , d ν ) d Ω = B ν ( T ) d A d ν cos ⁡ θ . {\displaystyle {\frac {d\Phi (dA,\theta ,d\Omega ,d\nu )}{d\Omega }}=B_{\nu }(T)\,dA\,d\nu \,\cos \theta .} Thus Lambert's cosine law expresses the independence of direction of the spectral radiance B ν ( T ) of the surface of a black body in thermodynamic equilibrium. The total power emitted per unit area at the surface of a black body ( P ) may be found by integrating the black body spectral flux found from Lambert's law over all frequencies, and over the solid angles corresponding to a hemisphere ( h ) above the surface. P = ∫ 0 ∞ d ν ∫ h d Ω B ν cos ⁡ ( θ ) {\displaystyle P=\int _{0}^{\infty }d\nu \int _{h}d\Omega \,B_{\nu }\cos(\theta )} The infinitesimal solid angle can be expressed in spherical polar coordinates : d Ω = sin ⁡ ( θ ) d θ d ϕ . {\displaystyle d\Omega =\sin(\theta )\,d\theta \,d\phi .} So that: P = ∫ 0 ∞ d ν ∫ 0 π 2 d θ ∫ 0 2 π d ϕ B ν ( T ) cos ⁡ ( θ ) sin ⁡ ( θ ) = σ T 4 {\displaystyle P=\int _{0}^{\infty }d\nu \int _{0}^{\tfrac {\pi }{2}}d\theta \int _{0}^{2\pi }d\phi \,B_{\nu }(T)\cos(\theta )\sin(\theta )=\sigma T^{4}} where σ = 2 k B 4 π 5 15 c 2 h 3 ≈ 5.670374419 × 10 − 8 J s − 1 m − 2 K − 4 {\displaystyle \sigma ={\frac {2k_{\mathrm {B} }^{4}\pi ^{5}}{15c^{2}h^{3}}}\approx 5.670374419\times 10^{-8}\,\mathrm {J\,s^{-1}m^{-2}K^{-4}} } is known as the Stefan–Boltzmann constant . [ 31 ] The equation of radiative transfer describes the way in which radiation is affected as it travels through a material medium. For the special case in which the material medium is in thermodynamic equilibrium in the neighborhood of a point in the medium, Planck's law is of special importance. For simplicity, we can consider the linear steady state, without scattering . The equation of radiative transfer states that for a beam of light going through a small distance d s , energy is conserved: The change in the (spectral) radiance of that beam ( I ν ) is equal to the amount removed by the material medium plus the amount gained from the material medium. If the radiation field is in equilibrium with the material medium, these two contributions will be equal. The material medium will have a certain emission coefficient and absorption coefficient . 
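As a numerical check of this hemispherical integral (a sketch; the frequency integration is carried out in the dimensionless variable x = hν/k_BT to keep the integrand well behaved), the result reproduces the Stefan–Boltzmann value σT⁴:

```python
import numpy as np
from scipy.integrate import quad

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23
T = 500.0

# Integrating B_nu * cos(theta) over the hemisphere contributes a factor pi;
# the remaining frequency integral reduces to int x^3/(e^x - 1) dx = pi^4/15.
integral, _ = quad(lambda x: x**3 / np.expm1(x), 1e-9, 50.0)
P = 2 * np.pi * kB**4 * T**4 / (h**3 * c**2) * integral

sigma = 2 * np.pi**5 * kB**4 / (15 * c**2 * h**3)
print(P, sigma * T**4)      # both about 3.54e3 W/m^2 at 500 K
```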
The absorption coefficient α is the fractional change in the intensity of the light beam as it travels the distance d s , and has units of length −1 . It is composed of two parts, the decrease due to absorption and the increase due to stimulated emission . Stimulated emission is emission by the material body which is caused by and is proportional to the incoming radiation. It is included in the absorption term because, like absorption, it is proportional to the intensity of the incoming radiation. Since the amount of absorption will generally vary linearly as the density ρ of the material, we may define a "mass absorption coefficient" κ ν = ⁠ α / ρ ⁠ which is a property of the material itself. The change in intensity of a light beam due to absorption as it traverses a small distance d s will then be [ 7 ] d I ν = − κ ν ρ I ν d s {\displaystyle dI_{\nu }=-\kappa _{\nu }\rho I_{\nu }\,ds} The "mass emission coefficient" j ν is equal to the radiance per unit volume of a small volume element divided by its mass (since, as for the mass absorption coefficient, the emission is proportional to the emitting mass) and has units of power⋅solid angle −1 ⋅frequency −1 ⋅density −1 . Like the mass absorption coefficient, it too is a property of the material itself. The change in a light beam as it traverses a small distance d s will then be [ 32 ] d I ν = j ν ρ d s {\displaystyle dI_{\nu }=j_{\nu }\rho \,ds} The equation of radiative transfer will then be the sum of these two contributions: [ 33 ] d I ν d s = j ν ρ − κ ν ρ I ν . {\displaystyle {\frac {dI_{\nu }}{ds}}=j_{\nu }\rho -\kappa _{\nu }\rho I_{\nu }.} If the radiation field is in equilibrium with the material medium, then the radiation will be homogeneous (independent of position) so that dI ν = 0 and: κ ν B ν = j ν {\displaystyle \kappa _{\nu }B_{\nu }=j_{\nu }} which is another statement of Kirchhoff's law, relating two material properties of the medium, and which yields the radiative transfer equation at a point around which the medium is in thermodynamic equilibrium: d I ν d s = κ ν ρ ( B ν − I ν ) . {\displaystyle {\frac {dI_{\nu }}{ds}}=\kappa _{\nu }\rho (B_{\nu }-I_{\nu }).} The principle of detailed balance states that, at thermodynamic equilibrium, each elementary process is in equilibrium with its reverse process. In 1916, Albert Einstein applied this principle on an atomic level to the case of an atom radiating and absorbing radiation due to transitions between two particular energy levels, [ 34 ] giving a deeper insight into the equation of radiative transfer and Kirchhoff's law for this type of radiation. If level 1 is the lower energy level with energy E 1 , and level 2 is the upper energy level with energy E 2 , then the frequency ν of the radiation radiated or absorbed will be determined by Bohr's frequency condition: [ 35 ] [ 36 ] E 2 − E 1 = h ν . {\displaystyle E_{2}-E_{1}=h\nu .} If n 1 and n 2 are the number densities of the atom in states 1 and 2 respectively, then the rate of change of these densities in time will be due to three processes: where u ν is the spectral energy density of the radiation field. The three parameters A 21 , B 21 and B 12 , known as the Einstein coefficients, are associated with the photon frequency ν produced by the transition between two energy levels (states). As a result, each line in a spectrum has its own set of associated coefficients. 
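A toy integration of the transfer equation above (a sketch only; the medium parameters are made-up constants, not values from the article) shows the beam radiance relaxing toward the local Planck value B_ν:

```python
import numpy as np

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23
B_nu = lambda nu, T: 2*h*nu**3/c**2 / np.expm1(h*nu/(kB*T))

nu, T = 3.0e13, 500.0          # frequency (Hz) and medium temperature (K)
kappa, rho = 0.05, 1.2         # assumed mass absorption coeff (m^2/kg), density (kg/m^3)

# dI/ds = kappa*rho*(B_nu - I): simple Euler steps along the path
I, ds = 0.0, 0.5
for _ in range(200):
    I += kappa * rho * (B_nu(nu, T) - I) * ds
print(I, B_nu(nu, T))          # I approaches the Planck radiance
```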
When the atoms and the radiation field are in equilibrium, the radiance will be given by Planck's law and, by the principle of detailed balance, the sum of these rates must be zero: 0 = A 21 n 2 + B 21 n 2 4 π c B ν ( T ) − B 12 n 1 4 π c B ν ( T ) {\displaystyle 0=A_{21}n_{2}+B_{21}n_{2}{\frac {4\pi }{c}}B_{\nu }(T)-B_{12}n_{1}{\frac {4\pi }{c}}B_{\nu }(T)} Since the atoms are also in equilibrium, the populations of the two levels are related by the Boltzmann factor : n 2 n 1 = g 2 g 1 e − h ν / k B T {\displaystyle {\frac {n_{2}}{n_{1}}}={\frac {g_{2}}{g_{1}}}e^{-h\nu /k_{\mathrm {B} }T}} where g 1 and g 2 are the multiplicities of the respective energy levels. Combining the above two equations with the requirement that they be valid at any temperature yields two relationships between the Einstein coefficients: A 21 B 21 = 8 π h ν 3 c 3 {\displaystyle {\frac {A_{21}}{B_{21}}}={\frac {8\pi h\nu ^{3}}{c^{3}}}} B 21 B 12 = g 1 g 2 {\displaystyle {\frac {B_{21}}{B_{12}}}={\frac {g_{1}}{g_{2}}}} so that knowledge of one coefficient will yield the other two. For the case of isotropic absorption and emission, the emission coefficient ( j ν ) and absorption coefficient ( κ ν ) defined in the radiative transfer section above, can be expressed in terms of the Einstein coefficients. The relationships between the Einstein coefficients will yield the expression of Kirchhoff's law expressed in the Radiative transfer section above, namely that j ν = κ ν B ν . {\displaystyle j_{\nu }=\kappa _{\nu }B_{\nu }.} These coefficients apply to both atoms and molecules. The distributions B ν , B ω , B ν̃ and B k peak at a photon energy of [ 37 ] E = [ 3 + W ( − 3 e − 3 ) ] k B T ≈ 2.821 k B T , {\displaystyle E=\left[3+W\left(-3e^{-3}\right)\right]k_{\mathrm {B} }T\approx 2.821\ k_{\mathrm {B} }T,} where W is the Lambert W function and e is Euler's number . However, the distribution B λ peaks at a different energy [ 37 ] E = [ 5 + W ( − 5 e − 5 ) ] k B T ≈ 4.965 k B T , {\displaystyle E=\left[5+W\left(-5e^{-5}\right)\right]k_{\mathrm {B} }T\approx 4.965\ k_{\mathrm {B} }T,} The reason for this is that, as mentioned above, one cannot go from (for example) B ν to B λ simply by substituting ν by λ . In addition, one must also multiply by | d ν / d λ | = c / λ 2 {\textstyle \left|{d\nu }/{d\lambda }\right|=c/{\lambda ^{2}}} , which shifts the peak of the distribution to higher energies. These peaks are the mode energy of a photon, when binned using equal-size bins of frequency or wavelength, respectively. Dividing hc ( 14 387 .770 μm·K ) by these energy expression gives the wavelength of the peak. The spectral radiance at these peaks is given by: with x = 3 + W ( − 3 e − 3 ) , {\displaystyle x=3+W(-3e^{-3}),} and B λ , max ( T ) = 2 k B 5 T 5 x 5 h 4 c 3 1 e x − 1 ≈ 4.096 × 10 − 6 W m 2 ⋅ sr × ( T / K ) 5 {\displaystyle {\begin{aligned}B_{\lambda ,{\text{max}}}(T)&={\frac {2k_{\mathrm {B} }^{5}T^{5}x^{5}}{h^{4}c^{3}}}{\frac {1}{e^{x}-1}}\\&\approx 4.096\times 10^{-6}{\frac {\text{W}}{{\text{m}}^{2}\cdot {\text{sr}}}}\times ~(T/{\text{K}})^{5}\end{aligned}}} with x = 5 + W ( − 5 e − 5 ) . {\displaystyle x=5+W(-5e^{-5}).} Meanwhile, the average energy of a photon from a blackbody is E = [ π 4 30 ζ ( 3 ) ] k B T ≈ 2.701 k B T , {\displaystyle E=\left[{\frac {\pi ^{4}}{30\ \zeta (3)}}\right]k_{\mathrm {B} }T\approx 2.701\ k_{\mathrm {B} }T,} where ζ {\displaystyle \zeta } is the Riemann zeta function . In the limit of low frequencies (i.e. 
long wavelengths), Planck's law becomes the Rayleigh–Jeans law [ 38 ] [ 39 ] [ 40 ] B ν ( T ) ≈ 2 ν 2 c 2 k B T {\displaystyle B_{\nu }(T)\approx {\frac {2\nu ^{2}}{c^{2}}}k_{\mathrm {B} }T} or B λ ( T ) ≈ 2 c λ 4 k B T {\displaystyle B_{\lambda }(T)\approx {\frac {2c}{\lambda ^{4}}}k_{\mathrm {B} }T} The radiance increases as the square of the frequency, illustrating the ultraviolet catastrophe . In the limit of high frequencies (i.e. small wavelengths) Planck's law tends to the Wien approximation : [ 40 ] [ 41 ] [ 42 ] B ν ( T ) ≈ 2 h ν 3 c 2 e − h ν k B T {\displaystyle B_{\nu }(T)\approx {\frac {2h\nu ^{3}}{c^{2}}}e^{-{\frac {h\nu }{k_{\mathrm {B} }T}}}} or B λ ( T ) ≈ 2 h c 2 λ 5 e − h c λ k B T . {\displaystyle B_{\lambda }(T)\approx {\frac {2hc^{2}}{\lambda ^{5}}}e^{-{\frac {hc}{\lambda k_{\mathrm {B} }T}}}.} Wien's displacement law in its stronger form states that the shape of Planck's law is independent of temperature. It is therefore possible to list the percentile points of the total radiation as well as the peaks for wavelength and frequency, in a form which gives the wavelength λ when divided by temperature T . [ 43 ] The second column of the following table lists the corresponding values of λT , that is, those values of x for which the wavelength λ is ⁠ x / T ⁠ micrometers at the radiance percentile point given by the corresponding entry in the first column. That is, 0.01% of the radiation is at a wavelength below ⁠ 910 / T ⁠ μm, 20% below ⁠ 2676 / T ⁠ μm , etc. The wavelength and frequency peaks are in bold and occur at 25.0% and 64.6% respectively. The 41.8% point is the wavelength-frequency-neutral peak (i.e. the peak in power per unit change in logarithm of wavelength or frequency). These are the points at which the respective Planck-law functions ⁠ 1 / λ 5 ⁠ , ν 3 and ⁠ ν 2 / λ 2 ⁠ , respectively, divided by exp ( ⁠ hν / k B T ⁠ ) − 1 attain their maxima. The much smaller gap in ratio of wavelengths between 0.1% and 0.01% (1110 is 22% more than 910) than between 99.9% and 99.99% (113374 is 120% more than 51613) reflects the exponential decay of energy at short wavelengths (left end) and polynomial decay at long. Which peak to use depends on the application. The conventional choice is the wavelength peak at 25.0% given by Wien's displacement law in its weak form. For some purposes the median or 50% point dividing the total radiation into two-halves may be more suitable. The latter is closer to the frequency peak than to the wavelength peak because the radiance drops exponentially at short wavelengths and only polynomially at long. The neutral peak occurs at a shorter wavelength than the median for the same reason. Solar radiation can be compared to black-body radiation at about 5778 K (but see graph). The table on the right shows how the radiation of a black body at this temperature is partitioned, and also how sunlight is partitioned for comparison. Also for comparison a planet modeled as a black body is shown, radiating at a nominal 288 K (15 °C) as a representative value of the Earth's highly variable temperature. Its wavelengths are more than twenty times that of the Sun, tabulated in the third column in micrometers (thousands of nanometers). That is, only 1% of the Sun's radiation is at wavelengths shorter than 296 nm, and only 1% at longer than 3728 nm. Expressed in micrometers this puts 98% of the Sun's radiation in the range from 0.296 to 3.728 μm. 
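The peak locations and the quoted band fractions can be checked numerically (a sketch using scipy; the 5778 K temperature and the 0.296–3.728 μm band are the figures quoted above):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import lambertw

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

# Peak photon energies in units of kB*T, from the Lambert W expressions above
x_nu  = 3 + lambertw(-3 * np.exp(-3)).real    # ~2.821 (per-frequency form)
x_lam = 5 + lambertw(-5 * np.exp(-5)).real    # ~4.965 (per-wavelength form)
T = 5778.0
print(x_nu, x_lam, h*c/(x_lam*kB*T))          # last value ~502 nm, the Wien peak

# Fraction of total emitted power between 0.296 and 3.728 micrometres at 5778 K
B_lam = lambda lam: 2*h*c**2/lam**5 / np.expm1(h*c/(lam*kB*T))
band, _ = quad(B_lam, 0.296e-6, 3.728e-6, limit=200)
total = 2 * np.pi**4 / 15 * kB**4 * T**4 / (h**3 * c**2)   # = sigma*T^4/pi
print(band / total)                            # roughly 0.98, as stated above
```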
The corresponding 98% of energy radiated from a 288 K planet is from 5.03 to 79.5 μm, well above the range of solar radiation (or below if expressed in terms of frequencies ν = c / λ instead of wavelengths λ ). A consequence of this more-than-order-of-magnitude difference in wavelength between solar and planetary radiation is that filters designed to pass one and block the other are easy to construct. For example, windows fabricated of ordinary glass or transparent plastic pass at least 80% of the incoming 5778 K solar radiation, which is below 1.2 μm in wavelength, while blocking over 99% of the outgoing 288 K thermal radiation from 5 μm upwards, wavelengths at which most kinds of glass and plastic of construction-grade thickness are effectively opaque. The Sun's radiation is that arriving at the top of the atmosphere (TOA). As can be read from the table, radiation below 400 nm, or ultraviolet , is about 8%, while that above 700 nm, or infrared , starts at about the 48% point and so accounts for 52% of the total. Hence only 40% of the TOA insolation is visible to the human eye. The atmosphere shifts these percentages substantially in favor of visible light as it absorbs most of the ultraviolet and significant amounts of infrared. Consider a cube of side L with conducting walls filled with electromagnetic radiation in thermal equilibrium at temperature T . If there is a small hole in one of the walls, the radiation emitted from the hole will be characteristic of a perfect black body . We will first calculate the spectral energy density within the cavity and then determine the spectral radiance of the emitted radiation. At the walls of the cube, the parallel component of the electric field and the orthogonal component of the magnetic field must vanish. Analogous to the wave function of a particle in a box , one finds that the fields are superpositions of periodic functions. The three wavelengths λ 1 , λ 2 , and λ 3 , in the three directions orthogonal to the walls can be: {\displaystyle \lambda _{i}={\frac {2L}{n_{i}}},} where the n i are positive integers. For each set of integers n i there are two linearly independent solutions (known as modes). The two modes for each set of these n i correspond to the two polarization states of the photon which has a spin of 1. According to quantum theory, the total energy of a mode is given by: {\displaystyle E(r)=\left(r+{\tfrac {1}{2}}\right)\varepsilon .} The number r can be interpreted as the number of photons in the mode. For r = 0 the energy of the mode is not zero. This vacuum energy of the electromagnetic field is responsible for the Casimir effect . In the following we will calculate the internal energy of the box at absolute temperature T . According to statistical mechanics , the equilibrium probability distribution over the energy levels of a particular mode is given by: {\displaystyle P_{r}={\frac {e^{-\beta E\left(r\right)}}{Z\left(\beta \right)}}.} where we use the reciprocal temperature {\displaystyle \beta \ {\stackrel {\mathrm {def} }{=}}\ {\frac {1}{k_{\mathrm {B} }T}}.} The denominator Z ( β ) is the partition function of a single mode. It makes P r properly normalized, and can be evaluated as {\displaystyle Z\left(\beta \right)=\sum _{r=0}^{\infty }e^{-\beta E\left(r\right)}={\frac {e^{-\beta \varepsilon /2}}{1-e^{-\beta \varepsilon }}},} with {\displaystyle \varepsilon =h\nu } being the energy of a single photon.
The average energy in a mode can be obtained from the partition function : {\displaystyle \left\langle E\right\rangle =-{\frac {d\log \left(Z\right)}{d\beta }}={\frac {\varepsilon }{2}}+{\frac {\varepsilon }{e^{\beta \varepsilon }-1}}.} This formula, apart from the first vacuum energy term, is a special case of the general formula for particles obeying Bose–Einstein statistics . Since there is no restriction on the total number of photons, the chemical potential is zero. If we measure the energy relative to the ground state, the total energy in the box follows by summing ⟨ E ⟩ − ε / 2 over all allowed single photon states. This can be done exactly in the thermodynamic limit as L approaches infinity. In this limit, ε becomes continuous and we can then integrate ⟨ E ⟩ − ε / 2 over this parameter. To calculate the energy in the box in this way, we need to evaluate how many photon states there are in a given energy range. If we write the total number of single photon states with energies between ε and ε + dε as g ( ε ) dε , where g ( ε ) is the density of states (which is evaluated below), then the total energy is given by {\displaystyle U=\int _{0}^{\infty }{\frac {\varepsilon }{e^{\beta \varepsilon }-1}}\,g(\varepsilon )\,d\varepsilon .} ( 3 ) To calculate the density of states we rewrite equation ( 2 ) as follows: {\displaystyle \varepsilon \ {=}\ {\frac {hc}{2L}}n,} where n is the norm of the vector n = ( n 1 , n 2 , n 3 ) . For every vector n with integer components larger than or equal to zero, there are two photon states. This means that the number of photon states in a certain region of n -space is twice the volume of that region. An energy range of dε corresponds to a shell of thickness dn = 2 L / hc d ε in n -space. Because the components of n have to be positive, this shell spans an octant of a sphere. The number of photon states g ( ε ) dε , in an energy range dε , is thus given by: {\displaystyle g(\varepsilon )\,d\varepsilon =2{\frac {1}{8}}4\pi n^{2}\,dn={\frac {8\pi L^{3}}{h^{3}c^{3}}}\varepsilon ^{2}\,d\varepsilon .} Inserting this in Eq. ( 3 ) and dividing by volume V = L 3 gives the total energy density {\displaystyle {\frac {U}{V}}=\int _{0}^{\infty }u_{\nu }(T)\,d\nu ,} where the frequency-dependent spectral energy density u ν ( T ) is given by {\displaystyle u_{\nu }(T)={8\pi h\nu ^{3} \over c^{3}}{1 \over e^{h\nu /k_{\mathrm {B} }T}-1}.} Since the radiation is the same in all directions, and propagates at the speed of light, the spectral radiance of radiation exiting the small hole is {\displaystyle B_{\nu }(T)={\frac {u_{\nu }(T)c}{4\pi }},} which yields Planck's law {\displaystyle B_{\nu }(T)={\frac {2h\nu ^{3}}{c^{2}}}~{\frac {1}{e^{h\nu /k_{\mathrm {B} }T}-1}}.} Other forms of the law can be obtained by change of variables in the total energy integral. The above derivation is based on Brehm & Mullin 1989 . For the non-degenerate case, the A and B coefficients can be calculated using the dipole approximation in time-dependent perturbation theory in quantum mechanics. Calculation of A also requires second quantization , since semi-classical theory cannot explain spontaneous emission, which does not go to zero as the perturbing field goes to zero. Hence, the calculated transition rates are (in SI units): [ 45 ] [ 46 ] [ 47 ] Note that the rate-of-transition formula depends on the dipole moment operator.
Higher-order approximations involve the quadrupole moment and other similar terms. The A and B coefficients (which correspond to the angular-frequency energy distribution) are hence: where {\displaystyle \omega _{ab}={\frac {E_{a}-E_{b}}{\hbar }}} and the A and B coefficients satisfy the following ratios for the non-degenerate case: {\displaystyle {\frac {A_{ba}}{B_{ba}}}={\frac {\hbar \omega _{ba}^{3}}{\pi ^{2}c^{3}}},\qquad B_{ab}=B_{ba}.} Another useful ratio comes from the Maxwell-Boltzmann distribution, which says that the number of particles occupying an energy level {\displaystyle E} is proportional to {\displaystyle e^{-\beta E}} . Mathematically: {\displaystyle {\frac {N_{b}}{N_{a}}}=e^{-\beta (E_{b}-E_{a})},} where {\displaystyle N_{a}} and {\displaystyle N_{b}} are the populations of the energy levels {\displaystyle E_{a}} and {\displaystyle E_{b}} respectively, with {\displaystyle E_{b}>E_{a}} . Then, using: {\displaystyle {\frac {dN_{b}}{dt}}=-A_{ba}N_{b}-N_{b}u(\omega _{ba})B_{ba}+N_{a}u(\omega _{ba})B_{ab}=-N_{b}w_{b\rightarrow a}^{s.emi}-N_{b}w_{b\rightarrow a}^{emi}+N_{a}w_{a\rightarrow b}^{abs}} Solving for {\displaystyle u} under the equilibrium condition {\displaystyle {\frac {dN_{b}}{dt}}=0} , and using the derived ratios, we get Planck's law: {\displaystyle u(\omega )={\frac {\hbar \omega ^{3}}{\pi ^{2}c^{3}}}\,{\frac {1}{e^{\beta \hbar \omega }-1}}.} In 1858, Balfour Stewart described his experiments on the thermal radiative emissive and absorptive powers of polished plates of various substances, compared with the powers of lamp-black surfaces, at the same temperature. [ 9 ] Stewart chose lamp-black surfaces as his reference because of various previous experimental findings, especially those of Pierre Prevost and of John Leslie . He wrote "Lamp-black, which absorbs all the rays that fall upon it, and therefore possesses the greatest possible absorbing power, will possess also the greatest possible radiating power." Stewart measured radiated power with a thermo-pile and sensitive galvanometer read with a microscope. He was concerned with selective thermal radiation, which he investigated with plates of substances that radiated and absorbed selectively for different qualities of radiation rather than maximally for all qualities of radiation. He discussed the experiments in terms of rays which could be reflected and refracted, and which obeyed the Helmholtz reciprocity principle (though he did not use an eponym for it). He did not in this paper mention that the qualities of the rays might be described by their wavelengths, nor did he use spectrally resolving apparatus such as prisms or diffraction gratings. His work was quantitative within these constraints. He made his measurements in a room temperature environment, and quickly so as to catch his bodies in a condition near the thermal equilibrium in which they had been prepared by heating to equilibrium with boiling water. His measurements confirmed that substances that emit and absorb selectively respect the principle of selective equality of emission and absorption at thermal equilibrium. Stewart offered a theoretical proof that this should be the case separately for every selected quality of thermal radiation, but his mathematics was not rigorously valid. According to historian D. M. Siegel: "He was not a practitioner of the more sophisticated techniques of nineteenth-century mathematical physics; he did not even make use of the functional notation in dealing with spectral distributions." [ 48 ] He made no mention of thermodynamics in this paper, though he did refer to conservation of vis viva .
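A symbolic sketch of this detailed-balance argument (using sympy; the variable names are our own, and the frequency-form ratio A21/B21 = 8πhν³/c³ quoted earlier is used in place of the angular-frequency form) recovers the Planck spectral energy density:

```python
import sympy as sp

h, nu, c, kB, T = sp.symbols('h nu c k_B T', positive=True)
A21, B12, u = sp.symbols('A21 B12 u', positive=True)
g1 = g2 = 1                                   # non-degenerate levels
B21 = B12 * g1 / g2                           # B21/B12 = g1/g2
n2_over_n1 = sp.exp(-h*nu/(kB*T))             # Boltzmann population ratio

# Detailed balance: A21*n2 + B21*n2*u = B12*n1*u  (divided through by n1)
balance = sp.Eq(A21*n2_over_n1 + B21*n2_over_n1*u, B12*u)
u_sol = sp.solve(balance, u)[0]

# Substituting A21/B21 = 8*pi*h*nu^3/c^3 gives the Planck energy density
u_planck = sp.simplify(u_sol.subs(A21, B21 * 8*sp.pi*h*nu**3/c**3))
print(u_planck)   # equivalent to 8*pi*h*nu**3 / (c**3*(exp(h*nu/(k_B*T)) - 1))
```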
He proposed that his measurements implied that radiation was both absorbed and emitted by particles of matter throughout depths of the media in which it propagated. He applied the Helmholtz reciprocity principle to account for the material interface processes as distinct from the processes in the interior material. He concluded that his experiments showed that, in the interior of an enclosure in thermal equilibrium, the radiant heat, reflected and emitted combined, leaving any part of the surface, regardless of its substance, was the same as would have left that same portion of the surface if it had been composed of lamp-black. He did not mention the possibility of ideally perfectly reflective walls; in particular he noted that highly polished real physical metals absorb very slightly. In 1859, not knowing of Stewart's work, Gustav Robert Kirchhoff reported the coincidence of the wavelengths of spectrally resolved lines of absorption and of emission of visible light. Importantly for thermal physics, he also observed that bright lines or dark lines were apparent depending on the temperature difference between emitter and absorber. [ 49 ] Kirchhoff then went on to consider bodies that emit and absorb heat radiation, in an opaque enclosure or cavity, in equilibrium at temperature T . Here is used a notation different from Kirchhoff's. Here, the emitting power E ( T , i ) denotes a dimensioned quantity, the total radiation emitted by a body labeled by index i at temperature T . The total absorption ratio a ( T , i ) of that body is dimensionless, the ratio of absorbed to incident radiation in the cavity at temperature T . (In contrast with Balfour Stewart's, Kirchhoff's definition of his absorption ratio did not refer in particular to a lamp-black surface as the source of the incident radiation.) Thus the ratio ⁠ E ( T , i ) / a ( T , i ) ⁠ of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power, because a ( T , i ) is dimensionless. Also here the wavelength-specific emitting power of the body at temperature T is denoted by E ( λ , T , i ) and the wavelength-specific absorption ratio by a ( λ , T , i ) . Again, the ratio ⁠ E ( λ , T , i ) / a ( λ , T , i ) ⁠ of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power. In a second report made in 1859, Kirchhoff announced a new general principle or law for which he offered a theoretical and mathematical proof, though he did not offer quantitative measurements of radiation powers. [ 50 ] His theoretical proof was and still is considered by some writers to be invalid. [ 48 ] [ 51 ] His principle, however, has endured: it was that for heat rays of the same wavelength, in equilibrium at a given temperature, the wavelength-specific ratio of emitting power to absorption ratio has one and the same common value for all bodies that emit and absorb at that wavelength. In symbols, the law stated that the wavelength-specific ratio ⁠ E ( λ , T , i ) / a ( λ , T , i ) ⁠ has one and the same value for all bodies, that is for all values of index i . In this report there was no mention of black bodies. 
In 1860, still not knowing of Stewart's measurements for selected qualities of radiation, Kirchhoff pointed out that it was long established experimentally that for total heat radiation, of unselected quality, emitted and absorbed by a body in equilibrium, the dimensioned total radiation ratio ⁠ E ( T , i ) / a ( T , i ) ⁠ , has one and the same value common to all bodies, that is, for every value of the material index i . [ 52 ] Again without measurements of radiative powers or other new experimental data, Kirchhoff then offered a fresh theoretical proof of his new principle of the universality of the value of the wavelength-specific ratio ⁠ E ( λ , T , i ) / a ( λ , T , i ) ⁠ at thermal equilibrium. His fresh theoretical proof was and still is considered by some writers to be invalid. [ 48 ] [ 51 ] But more importantly, it relied on a new theoretical postulate of "perfectly black bodies" , which is the reason why one speaks of Kirchhoff's law. Such black bodies showed complete absorption in their infinitely thin most superficial surface. They correspond to Balfour Stewart's reference bodies, with internal radiation, coated with lamp-black. They were not the more realistic perfectly black bodies later considered by Planck. Planck's black bodies radiated and absorbed only by the material in their interiors; their interfaces with contiguous media were only mathematical surfaces, capable neither of absorption nor emission, but only of reflecting and transmitting with refraction. [ 53 ] Kirchhoff's proof considered an arbitrary non-ideal body labeled i as well as various perfect black bodies labeled BB . It required that the bodies be kept in a cavity in thermal equilibrium at temperature T . His proof intended to show that the ratio ⁠ E ( λ , T , i ) / a ( λ , T , i ) ⁠ was independent of the nature i of the non-ideal body, however partly transparent or partly reflective it was. His proof first argued that for wavelength λ and at temperature T , at thermal equilibrium, all perfectly black bodies of the same size and shape have the one and the same common value of emissive power E ( λ , T , BB) , with the dimensions of power. His proof noted that the dimensionless wavelength-specific absorption ratio a ( λ , T , BB) of a perfectly black body is by definition exactly 1. Then for a perfectly black body, the wavelength-specific ratio of emissive power to absorption ratio ⁠ E ( λ , T , BB) / a ( λ , T , BB) ⁠ is again just E ( λ , T , BB) , with the dimensions of power. Kirchhoff considered, successively, thermal equilibrium with the arbitrary non-ideal body, and with a perfectly black body of the same size and shape, in place in his cavity in equilibrium at temperature T . He argued that the flows of heat radiation must be the same in each case. Thus he argued that at thermal equilibrium the ratio ⁠ E ( λ , T , i ) / a ( λ , T , i ) ⁠ was equal to E ( λ , T , BB) , which may now be denoted B λ ( λ , T ) , a continuous function, dependent only on λ at fixed temperature T , and an increasing function of T at fixed wavelength λ , at low temperatures vanishing for visible but not for longer wavelengths, with positive values for visible wavelengths at higher temperatures, which does not depend on the nature i of the arbitrary non-ideal body. (Geometrical factors, taken into detailed account by Kirchhoff, have been ignored in the foregoing.) 
Thus Kirchhoff's law of thermal radiation can be stated: For any material at all, radiating and absorbing in thermodynamic equilibrium at any given temperature T , for every wavelength λ , the ratio of emissive power to absorptive ratio has one universal value, which is characteristic of a perfect black body, and is an emissive power which we here represent by B λ ( λ , T ) . (For our notation B λ ( λ , T ) , Kirchhoff's original notation was simply e .) [ 7 ] [ 52 ] [ 54 ] [ 55 ] [ 56 ] [ 57 ] Kirchhoff announced that the determination of the function B λ ( λ , T ) was a problem of the highest importance, though he recognized that there would be experimental difficulties to be overcome. He supposed that like other functions that do not depend on the properties of individual bodies, it would be a simple function. That function B λ ( λ , T ) has occasionally been called 'Kirchhoff's (emission, universal) function', [ 58 ] [ 59 ] [ 60 ] [ 61 ] though its precise mathematical form would not be known for another forty years, till it was discovered by Planck in 1900. The theoretical proof for Kirchhoff's universality principle was worked on and debated by various physicists over the same time, and later. [ 51 ] Kirchhoff stated later in 1860 that his theoretical proof was better than Balfour Stewart's, and in some respects it was so. [ 48 ] Kirchhoff's 1860 paper did not mention the second law of thermodynamics, and of course did not mention the concept of entropy which had not at that time been established. In a more considered account in a book in 1862, Kirchhoff mentioned the connection of his law with "Carnot's principle", which is a form of the second law. [ 62 ] According to Helge Kragh, "Quantum theory owes its origin to the study of thermal radiation, in particular to the "blackbody" radiation that Robert Kirchhoff had first defined in 1859–1860." [ 63 ] In 1860, Kirchhoff predicted experimental difficulties for the empirical determination of the function that described the dependence of the black-body spectrum as a function only of temperature and wavelength. And so it turned out. It took some forty years of development of improved methods of measurement of electromagnetic radiation to get a reliable result. [ 64 ] In 1865, John Tyndall described radiation from electrically heated filaments and from carbon arcs as visible and invisible. [ 65 ] Tyndall spectrally decomposed the radiation by use of a rock salt prism, which passed heat as well as visible rays, and measured the radiation intensity by means of a thermopile. [ 66 ] [ 67 ] In 1880, André-Prosper-Paul Crova published a diagram of the three-dimensional appearance of the graph of the strength of thermal radiation as a function of wavelength and temperature. [ 68 ] He determined the spectral variable by use of prisms. He analyzed the surface through what he called "isothermal" curves, sections for a single temperature, with a spectral variable on the abscissa and a power variable on the ordinate. He put smooth curves through his experimental data points. They had one peak at a spectral value characteristic for the temperature, and fell either side of it towards the horizontal axis. [ 69 ] [ 70 ] Such spectral sections are widely shown even today. In a series of papers from 1881 to 1886, Langley reported measurements of the spectrum of heat radiation, using diffraction gratings and prisms, and the most sensitive detectors that he could make. 
He reported that there was a peak intensity that increased with temperature, that the shape of the spectrum was not symmetrical about the peak, that there was a strong fall-off of intensity when the wavelength was shorter than an approximate cut-off value for each temperature, that the approximate cut-off wavelength decreased with increasing temperature, and that the wavelength of the peak intensity decreased with temperature, so that the intensity increased strongly with temperature for short wavelengths that were longer than the approximate cut-off for the temperature. [ 71 ] Having read Langley, in 1888, Russian physicist V.A. Michelson published a consideration of the idea that the unknown Kirchhoff radiation function could be explained physically and stated mathematically in terms of "complete irregularity of the vibrations of ... atoms". [ 72 ] [ 73 ] At this time, Planck was not studying radiation closely, and believed in neither atoms nor statistical physics. [ 74 ] Michelson produced a formula for the spectrum as a function of temperature:

$$I_{\lambda} = B_{1}\,\theta^{3/2}\exp\!\left(-\frac{c}{\lambda^{2}\theta}\right)\lambda^{-6},$$

where I λ denotes specific radiative intensity at wavelength λ and temperature θ, and where B 1 and c are empirical constants.

In 1898, Otto Lummer and Ferdinand Kurlbaum published an account of their cavity radiation source. [ 75 ] Their design has been used largely unchanged for radiation measurements to the present day. It was a platinum box, divided by diaphragms, with its interior blackened with iron oxide. It was an important ingredient for the progressively improved measurements that led to the discovery of Planck's law. [ 76 ] A version described in 1901 had its interior blackened with a mixture of chromium, nickel, and cobalt oxides. [ 77 ] The importance of the Lummer and Kurlbaum cavity radiation source was that it was an experimentally accessible source of black-body radiation, as distinct from radiation from a simply exposed incandescent solid body, which had been the nearest available experimental approximation to black-body radiation over a suitable range of temperatures. The simply exposed incandescent solid bodies that had been used before emitted radiation with departures from the black-body spectrum that made it impossible to find the true black-body spectrum from experiments. [ 78 ] [ 79 ]

Planck first turned his attention to the problem of black-body radiation in 1897. [ 80 ] Theoretical and empirical progress enabled Lummer and Pringsheim to write in 1899 that available experimental evidence was approximately consistent with the specific intensity law $C\lambda^{-5}e^{-c/\lambda T}$, where C and c denote empirically measurable constants, and where λ and T denote wavelength and temperature respectively. [ 81 ] [ 82 ] For theoretical reasons, Planck at that time accepted this formulation, which has an effective cut-off at short wavelengths. [ 83 ] [ 84 ] [ 85 ] Gustav Kirchhoff was Max Planck's teacher; he surmised that there was a universal law for blackbody radiation, and this became known as "Kirchhoff's challenge". [ 86 ] Planck, a theorist, believed that Wilhelm Wien had discovered this law, and Planck expanded on Wien's work, presenting it in 1899 to a meeting of the German Physical Society. Experimentalists Otto Lummer, Ferdinand Kurlbaum, Ernst Pringsheim Sr.,
and Heinrich Rubens did experiments that appeared to support Wien's law, especially at higher-frequency, short wavelengths, which Planck so wholly endorsed at the German Physical Society that it began to be called the Wien-Planck law. [ 87 ] However, by September 1900, the experimentalists had proven beyond a doubt that the Wien-Planck law failed at the longer wavelengths. They would present their data on October 19. Planck was informed by his friend Rubens and quickly created a formula within a few days. [ 88 ] In June of that same year, Lord Rayleigh had created a formula that would work for long, lower-frequency wavelengths, based on the widely accepted theory of equipartition. [ 89 ] So Planck submitted a formula combining both Rayleigh's law (or a similar equipartition theory) and Wien's law, weighted towards one or the other law depending on wavelength, to match the experimental data. However, although this equation worked, Planck himself said that unless he could explain the formula, derived from a "lucky intuition", as one of "true meaning" in physics, it did not have true significance. [ 90 ] Planck explained that thereafter followed the hardest work of his life. Planck did not believe in atoms, nor did he think the second law of thermodynamics should be statistical, because probability does not provide an absolute answer, and Boltzmann's entropy law rested on the hypothesis of atoms and was statistical. But Planck was unable to find a way to reconcile his blackbody equation with continuous laws such as Maxwell's wave equations. So, in what Planck called "an act of desperation", [ 91 ] he turned to Boltzmann's atomic law of entropy as it was the only one that made his equation work. Therefore, he used the Boltzmann constant k and his new constant h to explain the blackbody radiation law, which became widely known through his published paper. [ 92 ] [ 93 ]

Max Planck produced his law on 19 October 1900 [ 94 ] [ 95 ] as an improvement upon the Wien approximation, published in 1896 by Wilhelm Wien, which fit the experimental data at short wavelengths (high frequencies) but deviated from it at long wavelengths (low frequencies). [ 41 ] In June 1900, based on heuristic theoretical considerations, Rayleigh had suggested a formula [ 96 ] that he proposed might be checked experimentally. The suggestion was that the Stewart–Kirchhoff universal function might be of the form $c_{1}T\lambda^{-4}\exp(-c_{2}/\lambda T)$. This was not the celebrated Rayleigh–Jeans formula $8\pi k_{\mathrm{B}}T\lambda^{-4}$, which did not emerge until 1905, [ 38 ] though it did reduce to the latter for long wavelengths, which are the relevant ones here. According to Klein, [ 80 ] one may speculate that it is likely that Planck had seen this suggestion, though he did not mention it in his papers of 1900 and 1901. Planck would have been aware of various other proposed formulas which had been offered. [ 64 ] [ 97 ] On 7 October 1900, Rubens told Planck that in the complementary domain (long wavelength, low frequency), and only there, Rayleigh's 1900 formula fitted the observed data well. [ 97 ] For long wavelengths, Rayleigh's 1900 heuristic formula approximately meant that energy was proportional to temperature, $U_{\lambda} = \mathrm{const.}\,T$. [ 80 ] [ 97 ] [ 98 ] It is known that $dS/dU_{\lambda} = 1/T$, and this leads to $dS/dU_{\lambda} = \mathrm{const.}/U_{\lambda}$ and thence to $d^{2}S/dU_{\lambda}^{2} = -\mathrm{const.}/U_{\lambda}^{2}$ for long wavelengths. But for short wavelengths, the Wien formula leads to $1/T = -\mathrm{const.}\,\ln U_{\lambda} + \mathrm{const.}$
and thence to $d^{2}S/dU_{\lambda}^{2} = -\mathrm{const.}/U_{\lambda}$ for short wavelengths. Planck perhaps patched together these two heuristic formulas, for long and for short wavelengths, [ 97 ] [ 99 ] to produce a formula [ 94 ]

$$\frac{d^{2}S}{dU_{\lambda}^{2}} = \frac{\alpha}{U_{\lambda}(\beta + U_{\lambda})}.$$

This led Planck to the formula

$$B_{\lambda}(T) = \frac{C\lambda^{-5}}{e^{c/\lambda T}-1},$$

where Planck used the symbols C and c to denote empirical fitting constants. Planck sent this result to Rubens, who compared it with his and Kurlbaum's observational data and found that it fitted for all wavelengths remarkably well. On 19 October 1900, Rubens and Kurlbaum briefly reported the fit to the data, [ 100 ] and Planck added a short presentation to give a theoretical sketch to account for his formula. [ 94 ] Within a week, Rubens and Kurlbaum gave a fuller report of their measurements confirming Planck's law. Their technique for spectral resolution of the longer wavelength radiation was called the residual ray method. The rays were repeatedly reflected from polished crystal surfaces, and the rays that made it all the way through the process were 'residual', and were of wavelengths preferentially reflected by crystals of suitably specific materials. [ 101 ] [ 102 ] [ 103 ]

Once Planck had discovered the empirically fitting function, he constructed a physical derivation of this law. His thinking revolved around entropy rather than being directly about temperature. Planck considered a cavity with perfectly reflective walls; inside the cavity, there are finitely many distinct but identically constituted resonant oscillatory bodies of definite magnitude, with several such oscillators at each of finitely many characteristic frequencies. These hypothetical oscillators were for Planck purely imaginary theoretical investigative probes, and he said of them that such oscillators do not need to "really exist somewhere in nature, provided their existence and their properties are consistent with the laws of thermodynamics and electrodynamics". [ 104 ] Planck did not attribute any definite physical significance to his hypothesis of resonant oscillators but rather proposed it as a mathematical device that enabled him to derive a single expression for the black body spectrum that matched the empirical data at all wavelengths. [ 105 ] He tentatively mentioned the possible connection of such oscillators with atoms. In a sense, the oscillators corresponded to Planck's speck of carbon; the size of the speck could be small regardless of the size of the cavity, provided the speck effectively transduced energy between radiative wavelength modes. [ 97 ] Partly following a heuristic method of calculation pioneered by Boltzmann for gas molecules, Planck considered the possible ways of distributing electromagnetic energy over the different modes of his hypothetical charged material oscillators. This acceptance of the probabilistic approach, following Boltzmann, was for Planck a radical change from his former position, which till then had deliberately opposed such thinking proposed by Boltzmann. [ 106 ] In Planck's words, "I considered the [quantum hypothesis] a purely formal assumption, and I did not give it much thought except for this: that I had obtained a positive result under any circumstances and at whatever cost."
[ 107 ] Heuristically, Boltzmann had distributed the energy in arbitrary merely mathematical quanta ϵ, which he had proceeded to make tend to zero in magnitude, because the finite magnitude ϵ had served only to allow definite counting for the sake of mathematical calculation of probabilities, and had no physical significance. Referring to a new universal constant of nature, h, [ 108 ] Planck supposed that, in the several oscillators of each of the finitely many characteristic frequencies, the total energy was distributed to each in an integer multiple of a definite physical unit of energy, ϵ, characteristic of the respective characteristic frequency. [ 95 ] [ 109 ] [ 110 ] [ 111 ] His new universal constant of nature, h, is now known as the Planck constant. Planck explained further [ 95 ] that the respective definite unit, ϵ, of energy should be proportional to the respective characteristic oscillation frequency ν of the hypothetical oscillator, and in 1901 he expressed this with the constant of proportionality h: [ 112 ] [ 113 ]

$$\epsilon = h\nu.$$

Planck did not propose that light propagating in free space is quantized. [ 114 ] [ 115 ] [ 116 ] The idea of quantization of the free electromagnetic field was developed later, and eventually incorporated into what we now know as quantum field theory. [ 117 ] In 1906, Planck acknowledged that his imaginary resonators, having linear dynamics, did not provide a physical explanation for energy transduction between frequencies. [ 118 ] [ 119 ] Present-day physics explains the transduction between frequencies in the presence of atoms by their quantum excitability, following Einstein. Planck believed that in a cavity with perfectly reflecting walls and with no matter present, the electromagnetic field cannot exchange energy between frequency components. [ 120 ] This is because of the linearity of Maxwell's equations. [ 121 ] Present-day quantum field theory predicts that, in the absence of matter, the electromagnetic field obeys nonlinear equations and in that sense does self-interact. [ 122 ] [ 123 ] Such interaction in the absence of matter has not yet been directly measured because it would require very high intensities and very sensitive and low-noise detectors, which are still in the process of being constructed. [ 122 ] [ 124 ] Planck believed that a field with no interactions neither obeys nor violates the classical principle of equipartition of energy, [ 125 ] [ 126 ] and instead remains exactly as it was when introduced, rather than evolving into a black body field. [ 127 ] Thus, the linearity of his mechanical assumptions precluded Planck from having a mechanical explanation of the maximization of the entropy of the thermodynamic equilibrium thermal radiation field. This is why he had to resort to Boltzmann's probabilistic arguments. [ 128 ] [ 129 ] Planck's law may be regarded as fulfilling the prediction of Gustav Kirchhoff that his law of thermal radiation was of the highest importance. In his mature presentation of his own law, Planck offered a thorough and detailed theoretical proof for Kirchhoff's law, [ 130 ] theoretical proof of which until then had been sometimes debated, partly because it was said to rely on unphysical theoretical objects, such as Kirchhoff's perfectly absorbing infinitely thin black surface.
[ 131 ] It was not until five years after Planck made his heuristic assumption of abstract elements of energy or of action that Albert Einstein conceived of really existing quanta of light in 1905 [ 132 ] as a revolutionary explanation of black-body radiation, of photoluminescence, of the photoelectric effect , and of the ionization of gases by ultraviolet light. In 1905, "Einstein believed that Planck's theory could not be made to agree with the idea of light quanta, a mistake he corrected in 1906." [ 133 ] Contrary to Planck's beliefs of the time, Einstein proposed a model and formula whereby light was emitted, absorbed, and propagated in free space in energy quanta localized in points of space. [ 132 ] As an introduction to his reasoning, Einstein recapitulated Planck's model of hypothetical resonant material electric oscillators as sources and sinks of radiation, but then he offered a new argument, disconnected from that model, but partly based on a thermodynamic argument of Wien, in which Planck's formula ϵ = hν played no role. [ 134 ] Einstein gave the energy content of such quanta in the form ⁠ Rβν / N ⁠ . Thus Einstein was contradicting the undulatory theory of light held by Planck. In 1910, criticizing a manuscript sent to him by Planck, knowing that Planck was a steady supporter of Einstein's theory of special relativity, Einstein wrote to Planck: "To me it seems absurd to have energy continuously distributed in space without assuming an aether." [ 135 ] According to Thomas Kuhn , it was not till 1908 that Planck more or less accepted part of Einstein's arguments for physical as distinct from abstract mathematical discreteness in thermal radiation physics. Still in 1908, considering Einstein's proposal of quantal propagation, Planck opined that such a revolutionary step was perhaps unnecessary. [ 136 ] Until then, Planck had been consistent in thinking that discreteness of action quanta was to be found neither in his resonant oscillators nor in the propagation of thermal radiation. Kuhn wrote that, in Planck's earlier papers and in his 1906 monograph, [ 137 ] there is no "mention of discontinuity, [nor] of talk of a restriction on oscillator energy, [nor of] any formula like U = nhν ." Kuhn pointed out that his study of Planck's papers of 1900 and 1901, and of his monograph of 1906, [ 137 ] had led him to "heretical" conclusions, contrary to the widespread assumptions of others who saw Planck's writing only from the perspective of later, anachronistic, viewpoints. [ 138 ] Kuhn's conclusions, finding a period till 1908, when Planck consistently held his 'first theory', have been accepted by other historians. [ 139 ] In the second edition of his monograph, in 1912, Planck sustained his dissent from Einstein's proposal of light quanta. He proposed in some detail that absorption of light by his virtual material resonators might be continuous, occurring at a constant rate in equilibrium, as distinct from quantal absorption. Only emission was quantal. [ 121 ] [ 140 ] This has at times been called Planck's "second theory". [ 141 ] It was not till 1919 that Planck in the third edition of his monograph more or less accepted his 'third theory', that both emission and absorption of light were quantal. [ 142 ] The colourful term " ultraviolet catastrophe " was given by Paul Ehrenfest in 1911 to the paradoxical result that the total energy in the cavity tends to infinity when the equipartition theorem of classical statistical mechanics is (mistakenly) applied to black-body radiation. 
[ 143 ] [ 144 ] But this had not been part of Planck's thinking, because he had not tried to apply the doctrine of equipartition: when he made his discovery in 1900, he had not noticed any sort of "catastrophe". [ 83 ] [ 84 ] [ 85 ] [ 80 ] [ 145 ] It was first noted by Lord Rayleigh in 1900, [ 96 ] [ 146 ] [ 147 ] and then in 1901 [ 148 ] by Sir James Jeans; and later, in 1905, by Einstein when he wanted to support the idea that light propagates as discrete packets, later called 'photons', and by Rayleigh [ 39 ] and by Jeans. [ 38 ] [ 149 ] [ 150 ] [ 151 ] In 1913, Bohr gave another formula with a further different physical meaning to the quantity hν. [ 34 ] [ 35 ] [ 36 ] [ 152 ] [ 153 ] [ 154 ] In contrast to Planck's and Einstein's formulas, Bohr's formula referred explicitly and categorically to energy levels of atoms. Bohr's formula was $W_{\tau_{2}} - W_{\tau_{1}} = h\nu$, where $W_{\tau_{2}}$ and $W_{\tau_{1}}$ denote the energy levels of quantum states of an atom, with quantum numbers $\tau_{2}$ and $\tau_{1}$. The symbol ν denotes the frequency of a quantum of radiation that can be emitted or absorbed as the atom passes between those two quantum states. In contrast to Planck's model, the frequency ν has no immediate relation to frequencies that might describe those quantum states themselves. Later, in 1924, Satyendra Nath Bose developed the theory of the statistical mechanics of photons, which allowed a theoretical derivation of Planck's law. [ 155 ] The actual word 'photon' was invented still later, by G.N. Lewis in 1926, [ 156 ] who mistakenly believed that photons were conserved, contrary to Bose–Einstein statistics; nevertheless the word 'photon' was adopted to express the Einstein postulate of the packet nature of light propagation. In an electromagnetic field isolated in a vacuum in a vessel with perfectly reflective walls, such as was considered by Planck, indeed the photons would be conserved according to Einstein's 1905 model, but Lewis was referring to a field of photons considered as a system closed with respect to ponderable matter but open to exchange of electromagnetic energy with a surrounding system of ponderable matter, and he mistakenly imagined that still the photons were conserved, being stored inside atoms. Ultimately, Planck's law of black-body radiation contributed to Einstein's concept of quanta of light carrying linear momentum, [ 34 ] [ 132 ] which became the fundamental basis for the development of quantum mechanics. The above-mentioned linearity of Planck's mechanical assumptions, not allowing for energetic interactions between frequency components, was superseded in 1925 by Heisenberg's original quantum mechanics. In his paper submitted on 29 July 1925, Heisenberg's theory accounted for Bohr's above-mentioned formula of 1913. It admitted non-linear oscillators as models of atomic quantum states, allowing energetic interaction between their own multiple internal discrete Fourier frequency components, on the occasions of emission or absorption of quanta of radiation. The frequency of a quantum of radiation was that of a definite coupling between internal atomic meta-stable oscillatory quantum states. [ 157 ] [ 158 ] At that time, Heisenberg knew nothing of matrix algebra, but Max Born read the manuscript of Heisenberg's paper and recognized the matrix character of Heisenberg's theory.
Then Born and Jordan published an explicitly matrix theory of quantum mechanics, based on, but in form distinctly different from, Heisenberg's original quantum mechanics; it is the Born and Jordan matrix theory that is today called matrix mechanics. [ 159 ] [ 160 ] [ 161 ] Heisenberg's explanation of the Planck oscillators, as non-linear effects apparent as Fourier modes of transient processes of emission or absorption of radiation, showed why Planck's oscillators, viewed as enduring physical objects such as might be envisaged by classical physics, did not give an adequate explanation of the phenomena. Nowadays, as a statement of the energy of a light quantum, often one finds the formula E = ħω , where ħ = ⁠ h / 2π ⁠ , and ω = 2π ν denotes angular frequency, [ 162 ] [ 163 ] [ 164 ] [ 165 ] [ 166 ] and less often the equivalent formula E = hν . [ 165 ] [ 166 ] [ 167 ] [ 168 ] [ 169 ] This statement about a really existing and propagating light quantum, based on Einstein's, has a physical meaning different from that of Planck's above statement ϵ = hν about the abstract energy units to be distributed amongst his hypothetical resonant material oscillators. An article by Helge Kragh published in Physics World gives an account of this history. [ 111 ]
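As a numerical footnote to the formulas discussed above, the Planck, Wien, and Rayleigh–Jeans spectral forms can be compared directly, and the equivalence of E = hν and E = ħω checked. The sketch below is illustrative only and is not drawn from the cited sources; it uses the spectral energy density per unit wavelength with the common normalization 8πhc/λ⁵, matching the 8πk_BTλ⁻⁴ form of the Rayleigh–Jeans law quoted earlier.

```python
import math

h = 6.62607015e-34      # Planck constant, J s
hbar = h / (2 * math.pi)
c = 2.99792458e8        # speed of light, m/s
kB = 1.380649e-23       # Boltzmann constant, J/K

def planck_u(lam, T):
    """Planck spectral energy density: (8*pi*h*c/lam^5) / (exp(h*c/(lam*kB*T)) - 1)."""
    return 8 * math.pi * h * c / lam**5 / math.expm1(h * c / (lam * kB * T))

def wien_u(lam, T):
    """Wien approximation: same prefactor with a pure exponential fall-off."""
    return 8 * math.pi * h * c / lam**5 * math.exp(-h * c / (lam * kB * T))

def rayleigh_jeans_u(lam, T):
    """Rayleigh-Jeans law 8*pi*kB*T/lam^4, the long-wavelength (equipartition) limit."""
    return 8 * math.pi * kB * T / lam**4

T = 5000.0  # kelvin
for lam in (2e-7, 1e-6, 5e-5, 1e-3):
    print(f"lambda = {lam:7.1e} m  Planck = {planck_u(lam, T):9.3e}"
          f"  Wien = {wien_u(lam, T):9.3e}  R-J = {rayleigh_jeans_u(lam, T):9.3e}")
# Wien tracks Planck at short wavelengths; Rayleigh-Jeans tracks it at long wavelengths.

# E = h*nu and E = hbar*omega are the same statement about a light quantum:
nu = 5.4e14  # Hz (green light)
assert math.isclose(h * nu, hbar * (2 * math.pi * nu))
```

Running the loop shows the historical situation in miniature: the Wien form agrees with Planck's law where the exponential dominates, while the Rayleigh–Jeans form agrees only in the long-wavelength limit and diverges at short wavelengths.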
https://en.wikipedia.org/wiki/Planck's_law
Planck was a space observatory operated by the European Space Agency (ESA) from 2009 to 2013. It was an ambitious project that aimed to map the anisotropies of the cosmic microwave background (CMB) at microwave and infrared frequencies, with high sensitivity and angular resolution. The mission was highly successful and substantially improved upon observations made by the NASA Wilkinson Microwave Anisotropy Probe (WMAP). The Planck observatory was a major source of information relevant to several cosmological and astrophysical issues. One of its key objectives was to test theories of the early Universe and the origin of cosmic structure. The mission provided significant insights into the composition and evolution of the Universe, shedding light on the fundamental physics that governs the cosmos. Planck was initially called COBRAS/SAMBA, which stands for the Cosmic Background Radiation Anisotropy Satellite/Satellite for Measurement of Background Anisotropies. The project started in 1996, and it was later renamed in honor of the German physicist Max Planck (1858–1947), who is widely regarded as the originator of quantum theory, having derived the formula for black-body radiation. Built at the Cannes Mandelieu Space Center by Thales Alenia Space, Planck was created as a medium-sized mission for ESA's Horizon 2000 long-term scientific program. The observatory was launched in May 2009 and reached the Earth/Sun L2 point by July 2009. By February 2010, it had successfully started a second all-sky survey. On 21 March 2013, the Planck team released its first all-sky map of the cosmic microwave background. The map was of exceptional quality and allowed researchers to measure temperature variations in the CMB with unprecedented accuracy. In February 2015, an expanded release was published, which included polarization data. The final papers by the Planck team were released in July 2018, marking the end of the mission. At the end of its mission, Planck was put into a heliocentric graveyard orbit and passivated to prevent it from endangering any future missions. The final deactivation command was sent to Planck in October 2013. The mission was a remarkable success and provided the most precise measurements of several key cosmological parameters. Planck's observations helped determine the age of the universe, the average density of ordinary matter and dark matter in the Universe, and other important characteristics of the cosmos. The mission had a wide variety of scientific aims. [ 2 ] Planck had a higher resolution and sensitivity than WMAP, allowing it to probe the power spectrum of the CMB to much smaller scales (by roughly a factor of three). It also observed in nine frequency bands rather than WMAP's five, with the goal of improving the astrophysical foreground models. Most Planck measurements were expected to be limited by how well foregrounds could be subtracted, rather than by the detector performance or length of the mission, a particularly important factor for the polarization measurements. The dominant foreground radiation depends on frequency, but could include synchrotron radiation from the Milky Way at low frequencies, and dust at high frequencies. The spacecraft carries two instruments: the Low Frequency Instrument (LFI) and the High Frequency Instrument (HFI). [ 2 ] Both instruments can detect both the total intensity and polarization of photons, and together cover a frequency range of nearly 830 GHz (from 30 to 857 GHz).
The cosmic microwave background spectrum peaks at a frequency of 160.2 GHz. Planck's passive and active cooling systems allowed its instruments to maintain a temperature of −273.05 °C (−459.49 °F), or 0.1 °C above absolute zero. [ 3 ] From August 2009, Planck was the coldest known object in space, until its active coolant supply was exhausted in January 2012. [ 4 ] NASA played a role in the development of this mission and contributed to the analysis of scientific data. Its Jet Propulsion Laboratory built components of the science instruments, including bolometers for the high-frequency instrument, a 20-kelvin cryocooler for both the low- and high-frequency instruments, and amplifier technology for the low-frequency instrument. [ 5 ] The LFI has three frequency bands covering the range 30–70 GHz, in the microwave region of the electromagnetic spectrum. The detectors use high-electron-mobility transistors. [ 2 ] The HFI was sensitive between 100 and 857 GHz, using 52 bolometric detectors, manufactured by JPL/Caltech, [ 6 ] optically coupled to the telescope through cold optics, manufactured by Cardiff University's School of Physics and Astronomy, [ 7 ] consisting of a triple horn configuration and optical filters, a similar concept to that used in the Archeops balloon-borne experiment. These detection assemblies are divided into 6 frequency bands (centred at 100, 143, 217, 353, 545 and 857 GHz), each with a bandwidth of 33%. Of these six bands, only the lower four have the capability to measure the polarisation of incoming radiation; the two higher bands do not. [ 2 ] On 13 January 2012, it was reported that the on-board supply of helium-3 used in Planck's dilution refrigerator had been exhausted, and that the HFI would become unusable within a few days. [ 8 ] By this date, Planck had completed five full scans of the CMB, exceeding its target of two. The LFI (cooled by helium-4) was expected to remain operational for another six to nine months. [ 8 ] A common service module (SVM) was designed and built by Thales Alenia Space in its Turin plant, for both the Herschel Space Observatory and Planck missions, combined into one single program. [ 2 ] The overall cost is estimated to be €700 million for the Planck mission [ 9 ] and €1,100 million for the Herschel mission. [ 10 ] Both figures include each mission's spacecraft and payload, (shared) launch and mission expenses, and science operations. Structurally, the Herschel and Planck SVMs are very similar. Both SVMs are octagonal in shape, and each panel is dedicated to accommodating a designated set of warm units, taking into account the dissipation requirements of the different warm units, of the instruments, and of the spacecraft. On both spacecraft, a common design was used for the avionics, attitude control and measurement (ACMS), command and data management (CDMS), power, and tracking, telemetry and command (TT&C) subsystems. All units on the SVM are redundant. On each spacecraft, the power subsystem consists of a solar array, employing triple-junction solar cells, a battery and the power control unit (PCU). The PCU is designed to interface with the 30 sections of each solar array, to provide a regulated 28 volt bus, to distribute this power via protected outputs, and to handle the battery charging and discharging. For Planck, the circular solar array is fixed on the bottom of the satellite, always facing the Sun as the satellite rotates on its vertical axis.
This function is performed by the attitude control computer (ACC), which is the platform for the attitude control and measurement subsystem (ACMS). It was designed to fulfil the pointing and slewing requirements of the Herschel and Planck payloads. The Planck satellite rotates at one revolution per minute, with an aim of an absolute pointing error less than 37 arc-minutes. As Planck is also a survey platform, there is the additional requirement for pointing reproducibility error less than 2.5 arc-minutes over 20 days. The main line-of-sight sensor in both Herschel and Planck is the star tracker . The satellite was successfully launched, along with the Herschel Space Observatory , at 13:12:02 UTC on 14 May 2009 aboard an Ariane 5 ECA heavy launch vehicle from the Guiana Space Centre . The launch placed the craft into a very elliptical orbit ( perigee : 270 km [170 mi], apogee : more than 1,120,000 km [700,000 mi]), bringing it near the L 2 Lagrangian point of the Earth-Sun system , 1,500,000 kilometres (930,000 mi) from the Earth. The manoeuvre to inject Planck into its final orbit around L 2 was successfully completed on 3 July 2009, when it entered a Lissajous orbit with a 400,000 km (250,000 mi) radius around the L 2 Lagrangian point. [ 11 ] The temperature of the High Frequency Instrument reached just a tenth of a degree above absolute zero (0.1 K ) on 3 July 2009, placing both the Low Frequency and High Frequency Instruments within their cryogenic operational parameters, making Planck fully operational. [ 12 ] In January 2012 the HFI exhausted its supply of liquid helium, causing the detector temperature to rise and rendering the HFI unusable. The LFI continued to be used until science operations ended on 3 October 2013. The spacecraft performed a manoeuvre on 9 October to move it away from Earth and its L 2 point , placing it into a heliocentric orbit , while payload deactivation occurred on 19 October. Planck was commanded on 21 October to exhaust its remaining fuel supply; passivation activities were conducted later, including battery disconnection and the disabling of protection mechanisms. [ 13 ] The final deactivation command, which switched off the spacecraft's transmitter, was sent to Planck on 23 October 2013 at 12:10:27 UTC. [ 14 ] Planck started its First All-Sky Survey on 13 August 2009. [ 16 ] In September 2009, the European Space Agency announced the preliminary results from the Planck First Light Survey , which was performed to demonstrate the stability of the instruments and the ability to calibrate them over long periods. The results indicated that the data quality is excellent. [ 17 ] On 15 January 2010 the mission was extended by 12 months, with observation continuing until at least the end of 2011. After the successful conclusion of the First Survey, the spacecraft started its Second All Sky Survey on 14 February 2010. The last observations for the Second All Sky Survey were made on 28 May 2010. [ 11 ] Some planned pointing list data from 2009 has been released publicly, along with a video visualization of the surveyed sky. [ 16 ] On 17 March 2010, the first Planck photos were published, showing dust concentration within 500 light years from the Sun. [ 18 ] [ 19 ] On 5 July 2010, the Planck mission delivered its first all-sky image. [ 20 ] The first public scientific result of Planck is the Early-Release Compact-Source Catalogue, released during the January 2011 Planck conference in Paris. 
[ 21 ] [ 22 ] On 5 May 2014 a map of the galaxy's magnetic field created using Planck was published. [ 23 ] The Planck team and principal investigators Nazzareno Mandolesi and Jean-Loup Puget shared the 2018 Gruber Prize in Cosmology. [ 24 ] Puget was also awarded the 2018 Shaw Prize in Astronomy. [ 25 ] On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background. [ 26 ] [ 27 ] This map suggests the Universe is slightly older than previously thought: according to the map, subtle fluctuations in temperature were imprinted on the deep sky when the Universe was about 370,000 years old. The imprint reflects ripples that arose as early in the existence of the Universe as the first nonillionth (10⁻³⁰) of a second. It is theorised that these ripples gave rise to the present vast cosmic web of galactic clusters and dark matter. The 2013 release found an asymmetry in the statistics of the CMB with respect to viewing angle in the sky, determining that "deviations from isotropy have been found and demonstrated to be robust against component separation algorithm, mask choice and frequency dependence", [ 28 ] more commonly known as the cosmological "axis of evil". According to the team, the Universe is 13.798 ± 0.037 billion years old, and contains 4.82% ± 0.05% ordinary matter, 26.8% ± 0.4% dark matter and 69% ± 1% dark energy. [ 29 ] [ 30 ] [ 31 ] The Hubble constant was also measured to be 67.80 ± 0.77 (km/s)/Mpc. [ 26 ] [ 29 ] [ 32 ] [ 33 ] [ 34 ] Results from an analysis of Planck's full mission were made public on 1 December 2014 at a conference in Ferrara, Italy. [ 35 ] A full set of papers detailing the mission results was released in February 2015. [ 36 ] Project scientists also worked with BICEP2 scientists to release joint research in 2015, addressing whether a signal detected by BICEP2 was evidence of primordial gravitational waves or was simply background noise from dust in the Milky Way galaxy. [ 35 ] Their results suggested the latter. [ 37 ]
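The 160.2 GHz peak frequency quoted earlier follows directly from Planck's law applied to the CMB. The short sketch below is illustrative only; the CMB temperature of 2.725 K is taken here as an assumed input rather than a value stated in this text.

```python
import math

h = 6.62607015e-34   # Planck constant, J s
kB = 1.380649e-23    # Boltzmann constant, J/K

T_cmb = 2.725        # K, approximate CMB temperature (assumed input for this illustration)

def u_nu(nu, T):
    """Shape of the black-body spectral energy density per unit frequency (arbitrary units)."""
    return nu**3 / math.expm1(h * nu / (kB * T))

# Wien displacement law in frequency form: nu_peak = 2.821439... * kB * T / h
nu_peak = 2.821439 * kB * T_cmb / h
print(f"Predicted CMB peak frequency: {nu_peak / 1e9:.1f} GHz")  # about 160 GHz

# Crude numerical confirmation by scanning the spectrum in 1 GHz steps:
best_ghz = max(range(1, 1000), key=lambda f: u_nu(f * 1e9, T_cmb))
print(f"Numerical peak near: {best_ghz} GHz")
```

Both estimates land at roughly 160 GHz, which is why Planck's observing bands bracket that frequency.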
https://en.wikipedia.org/wiki/Planck_(spacecraft)
The Planctomycetota are a phylum of widely distributed bacteria, occurring in both aquatic and terrestrial habitats. [ 5 ] They play a considerable role in global carbon and nitrogen cycles, with many species of this phylum capable of anaerobic ammonium oxidation, also known as anammox. [ 5 ] [ 6 ] Many Planctomycetota occur in relatively high abundance as biofilms, [ 7 ] often associating with other organisms such as macroalgae and marine sponges. [ 8 ] Planctomycetota are included in the PVC superphylum along with Verrucomicrobiota, Chlamydiota, Lentisphaerota, Kiritimatiellaeota, and Candidatus Omnitrophica. [ 9 ] [ 10 ] The phylum Planctomycetota is composed of the classes Planctomycetia and Phycisphaerae. First described in 1924, members of the Planctomycetota were identified as eukaryotes and were only later described as bacteria in 1972. [ 5 ] Early examination of members of the Planctomycetota suggested a cell plan differing considerably from that of other bacteria; they are now confirmed as Gram-negative bacteria, though with many unique characteristics. Bacteria in the Planctomycetota are often small, spherical cells, but a large amount of morphological variation is seen. [ 11 ] Members of the Planctomycetota also display distinct reproductive habits, with many species dividing by budding, in contrast to most other free-living bacteria, which divide by binary fission. [ 5 ] [ 12 ] [ 13 ] Interest in the Planctomycetota is growing with regard to biotechnology and human applications, mainly as a source of bioactive molecules. [ 14 ] In addition, some Planctomycetota were recently described as human pathogens. [ 8 ] The species Gemmata obscuriglobus in particular has been noted for characteristics unique among the Planctomycetota, [ 15 ] [ 16 ] such as the ability to synthesize sterols. [ 5 ] [ 17 ] [ 15 ] The distinct morphological characteristics of bacteria in the Planctomycetota have been discussed extensively. [ 6 ] The common morphology is a spherical cell roughly 2 μm in diameter, as observed in the species Aquisphaera giovannonii; however, cell shape varies greatly across the phylum. Ovoid and pear-shaped cells have been described in some species, and often occur in rosettes of three to 10 cells. [ 11 ] Gemmata obscuriglobus is a well studied species in the Planctomycetota with spherical cells. In contrast, bacteria in the species Planctopirus limnophila have ovoid cells. [ 15 ] Many Planctomycetota species display structures and appendages on the outer surface of the cell. Flagella, common in most bacteria, have also been observed in the species P. limnophila. [ 5 ] [ 11 ] [ 18 ] Many Planctomycetota also have a holdfast, or stalk, which attaches the cell to a surface or substrate. [ 5 ] [ 18 ] Members of some species, though, such as Isosphaera pallida, lack a holdfast. [ 5 ] Unique appendages known as crateriform structures have been observed [ 5 ] [ 11 ] [ 18 ] in species of Planctomycetota belonging to the class Planctomycetia. [ 13 ] The outer surface of cells in the species P. limnophila displays both large and small crateriform structures. Large crateriform structures often cover the cell surface, while small crateriform structures are often only at the end of the cell. Light microscopy demonstrated fibers of both stalk and pili type in P. limnophila and G. obscuriglobus.
The pili fibers in both these species were often associated with large crateriform structures; in contrast, the stalk fibers were associated with small crateriform structures. [ 18 ] Early examination of the Planctomycetota suggested that their cell plan differed considerably from both Gram-positive and Gram-negative bacteria. [ 5 ] Until recently, bacteria in the Planctomycetota were thought to lack peptidoglycans in their cell walls, and were instead suggested to have proteinaceous cell walls. Peptidoglycan is an essential polymer of glycans, present in all free-living bacteria, and its rigidity helps maintain integrity of the cell. Peptidoglycan synthesis is also essential during cell division . Recently, those in the species G. obscuriglobus were found to have peptidoglycan in their cell walls. [ 5 ] [ 18 ] Planctomycetota were once thought to display distinct compartmentalization within the cytosol . [ 5 ] [ 18 ] Three-dimensional electron tomography reconstruction of G. obscuriglobus displayed varying interpretations of this suggested compartmentalization. [ 16 ] The cytosol was suggested to be separated into compartments, both the paryphoplasm and pirellulosome, by an intracytoplasmic membrane. This interpretation has since been demonstrated to be incorrect. In fact, the intracytoplasmic membrane is well known to be the cytoplasmic membrane which displays unique invaginations , giving the appearance of compartmentalization within the cytosol. [ 5 ] [ 16 ] [ 18 ] Planctomycetota therefore display the two compartments typical of Gram-negative bacteria, the cytoplasm and periplasm . The excess membrane observed in G. obscuriglobus triples the surface area of the cell relative to its volume , which is suggested to be associated with sterol synthesis. [ 16 ] Many Planctomycetota species display pink or orange coloring, suggested to result from the production of carotenoid pigments. Carotenoids are produced by plants and fungi , and by some heterotrophic bacteria to protect against oxidative stress . Three different carotenoid pigments have been identified in two different strains of the Planctomycetota. [ 19 ] In marine environments, Planctomycetota are often suspended in the water column or present as biofilms on the surface of macroalgae, and are often exposed to harmful ultraviolet radiation. More highly pigmented species of the Planctomycetota are more resistant to ultraviolet radiation, although this is not yet well understood. [ 20 ] It has since been shown that Planctomycetota synthesize C30 carotenoids from squalene and that this squalene route to C30 carotenoids is the most widespread in prokaryotes. [ 21 ] Bacteria in the Planctomycetota that are anammox-capable form the order Brocadiales. [ 22 ] The cells of anammox bacteria are often coccoid with a diameter of about 0.8 μm, [ 7 ] and are suggested to contain three compartments, each surrounded by a membrane. The outer membrane encloses the cell and the protoplasm and the innermost membrane surrounds the anammoxosome, the central structure of anammox bacteria. [ 18 ] [ 23 ] The anammoxosome membrane is largely composed of unusual ladderane-based lipids. [ 23 ] Planctomycetota species grow slowly, when compared to other bacteria, [ 5 ] [ 10 ] [ 7 ] [ 24 ] often forming rosette structures of 3-5 cells. [ 5 ] [ 24 ] The species P. limnophila is suggested to be relatively fast growing, [ 5 ] [ 25 ] with a doubling time of roughly 6–14 days. In contrast, some other Planctomycetota have doubling times of around 30 days. 
[ 25 ] Their high abundance in many ecosystems is surprising, given their slow growth rates. [ 7 ] [ 10 ] Planctomycetota often perform a lifestyle switch between a sessile stalked stage and a free-swimming stage. [ 24 ] Members of the species P. limnophila perform a lifestyle switch that is often associated with cell division. The sessile mother cell produces a free-swimming daughter cell. The daughter cell must then attach to a surface before starting the cycle over again. However, not all of the Planctomycetota have a motile stage, and the lifestyle switch observed in many species may not be common among all Planctomycetota. [ 5 ] The current understanding of bacterial cell division is based on model organisms such as Escherichia coli. [ 15 ] The dominant form of reproduction observed in almost all bacteria is cell division by binary fission, which involves the synthesis of both peptidoglycans and proteins known as FtsZ. [ 15 ] [ 26 ] In contrast, many bacteria in the Planctomycetota divide by budding. [ 5 ] [ 12 ] [ 13 ] FtsZ proteins are suggested to be similar in structure to tubulin, their counterpart protein in eukaryotes, [ 27 ] and are essential for septal formation during cell division. [ 5 ] [ 6 ] The lack of FtsZ proteins is often lethal. [ 5 ] Peptidoglycan also plays a considerable role in cell division by binary fission. [ 26 ] Planctomycetota is one of the only known phyla whose members lack FtsZ proteins. [ 5 ] [ 26 ] [ 27 ] Bacteria in the Chlamydiales, another member of the PVC superphylum, also lack FtsZ. [ 27 ] Although bacteria in the Planctomycetota lack FtsZ, two distinct modes of cell division have been observed. [ 5 ] Most Planctomycetota divide by binary fission, mainly species of the class Phycisphaerae. In contrast, species of the class Planctomycetia divide by budding. [ 5 ] [ 12 ] [ 13 ] The mechanisms involved in budding have been described extensively for yeast cells. However, bacterial budding observed in Planctomycetota is still poorly understood. [ 15 ] Budding has been observed in both radially symmetric cells, such as bacteria in the species P. limnophila, and axially symmetric cells. [ 13 ] During cell division in members of P. limnophila, the daughter cells originate from the region opposite to the pole with the holdfast or stalk. Considerable diversity has been observed in cell division among bacteria in the Planctomycetota. [ 12 ] [ 13 ] During cell division in Fuerstia marisgermanicae, a tubular structure connects the bud to the mother cell. [ 5 ] [ 22 ] The species Kolteria novifilia forms a distinct clade of Planctomycetota, and is the only known species to divide by lateral budding at the middle of the cell. Lastly, members of the clade Saltatorellus are capable of switching between binary fission and budding. [ 12 ] [ 13 ] Planctomycetota are known for their unusual cellular characteristics, and their distinctness from all other bacteria is additionally supported by the shared presence of two conserved signature indels (CSIs). [ 28 ] These CSIs demarcate the group from neighboring phyla within the PVC group. [ 29 ] An additional CSI has been found that is shared by all Planctomycetota species, with the exception of Kuenenia stuttgartiensis. This supports the idea that K. stuttgartiensis forms a deep branch within the Planctomycetota phylum. A CSI has also been found to be shared by the entire PVC superphylum, including the Planctomycetota.
[ 28 ] [ 29 ] Planctomycetota also contain a conserved signature protein that has been characterized as playing an important housekeeping function and that is exclusive to members of the PVC superphylum. [ 30 ] The genome size of Rhodopirellula baltica has been estimated to be over 7 million bases, making it one of the largest prokaryotic genomes sequenced. Extensive genome duplication takes up about 25% of the genome sequence. [ 6 ] This may be a way for the organism to adapt to mutations, allowing for redundancy if a part of the genome is damaged. The polymerase chain reaction primers used often mismatch with the genes, creating difficulty when sequencing the genome. [ 9 ] When genomes are compared, a defining characteristic for some Planctomycetota is that a single unlinked rRNA operon can be identified near the origin. Changes in genetic material occur through internal chromosomal inversion rather than through lateral gene transfer. This provides a route to diversification among Planctomycetota variants, as multiple transposon genes in these regions occur in reverse orientation, which leads to rearrangements. Some Planctomycetota thrive in regions containing highly concentrated nitrate, [ 6 ] and have genes that are required for heterolactic acid fermentation. The enzyme lactate dehydrogenase plays a key role in this process. The genome also encodes an ultraviolet radiation protection response, associated with the genes recA, lexA, uvrA, uvrB, and uvrC, in addition to a photolyase gene that is expressed when the environment imposes excessive ultraviolet radiation stress. Other stress responses include the decomposition of hydrogen peroxide and responses to oxidative stress. Many Planctomycetota also express sulfatase genes. The genome of Pirellula sp. strain 1 incorporates 110 genes that contribute to encoding proteins that produce sulfatase enzymes. By comparison, in a different prokaryotic species, Pseudomonas aeruginosa, only 6 sulfatases occur, and the genes that express these proteins are contained as two to five pairs, usually clustered in 22 groups. [ 6 ] Planctomycetota originate from within the Bacteria, and the similarities between proteins in Planctomycetales and eukaryotes reflect convergent evolution. Gained protein families in Gemmataceae, a subgroup within Planctomycetota, have low sequence similarity to eukaryotic proteins; however, they show highest sequence similarity to other Gemmataceae protein families. [ 31 ] There is massive emergence of novel protein families within the Gemmataceae. More than one thousand protein families were acquired by duplications and domain rearrangements. The new paralogs function in signal transduction, regulatory systems, and protein interaction pathways. They are related to the functional organisation of the cell, which can be interpreted as an adaptation to a more complex lifestyle. [ 31 ] Proteins are longer in the Gemmataceae than in most other bacteria, and the genes have linkers. There is an overlap between the longest proteins in Planctomycetales and the shortest proteins in eukaryotes. In terms of gene paralogy, protein length, and protein domain structures, prokaryotes and eukaryotes do not have sharp boundaries. [ 31 ] Originally classified as eukaryotes on the basis of their morphology, the Planctomycetota were agreed, with the advent of genetic sequencing, to belong to the domain Bacteria.
[ 5 ] Within that domain, Planctomycetota are classified as their own phylum; however, other researchers have argued they could also be categorized as part of a larger superphylum entitled PVC, which would encompass the phyla Verrucomicrobia, Chlamydiae and Lentisphaerae, and the candidate phylum "Candidatus Omnitrophica". [ 9 ] Within this superphylum, members have been found to be closely related on the basis of 16S rRNA trees. Both the Planctomycetota and Chlamydiota encode proteins for nucleotide transporters, and the Verrucomicrobiota have also been found to have features common among eukaryotic cells. Thus, a common ancestor of this superphylum may have been the start of the eukaryotic lineage. [ 9 ] While this is one possible explanation, because PVC is not the start of the bacterial tree, [ 32 ] the existence of eukaryotic traits and genes is more likely explained through lateral gene transfer, and not a more recent eukaryotic ancestor. [ 9 ] (Cladograms in the source article place the orders Sedimentisphaerales, Tepidisphaerales, Phycisphaerales, Gemmatales, Isosphaerales, Planctomycetales, Pirellulales, "Uabimicrobiales", and "Brocadiales" within the phylum.) Members of the Planctomycetota are found in a diverse range of environments, both geographically and ecologically, [ 39 ] and occur in both aquatic and terrestrial habitats. [ 5 ] In aquatic environments, they are found in both freshwater and marine systems. [ 39 ] Planctomycetota were originally believed to exist exclusively in aquatic environments, but they are now known to be also abundant in soils [ 40 ] and hypersaline environments. [ 41 ] They are widespread on five continents, including Antarctica and Australia. [ 40 ] [ 39 ] Fluorescence in situ hybridization was used to detect Planctomycetota in various environments, and Planctomycetota are found in abundance in sphagnum bogs. Some Planctomycetota were found in the digestive systems of marine lifeforms, while others tend to live among eukaryotes. [ 9 ] Planctomycetota account for roughly 11% of prokaryotic communities in marine systems, and their vast distribution demonstrates their ability to inhabit many different environments. They can also adapt to both aerobic and anaerobic conditions. Many factors can affect their distribution, such as humidity, oxygen levels, and pH levels. Planctomycetota diversity and abundance are strongly associated with relative humidity. The effect of oxygen levels reflects the energy needs of the individual species. Many species of Planctomycetota are chemoheterotrophic, including G. obscuriglobus. Thermostilla marina, a thermophilic anaerobic species occupying hydrothermal vent regions, can use elemental sulfur to generate sulfide and respire with nitrate. Planctomycetota can also inhabit environments with pH ranging from 4.2 to 11.6. [ 8 ] Planctomycetota have a significant impact on global biogeochemistry and climate, with their ability to mineralize and break down detritus particles in the water column. [ 6 ] [ 20 ] Planctomycetota play a considerable role in the global carbon cycle. [ 5 ] [ 6 ] [ 13 ] [ 42 ] As both obligate and facultative aerobic chemoheterotrophs, Planctomycetota use carbohydrates as their primary source of carbon. Many Planctomycetota have the ability to break down extremely complex carbohydrates, making these nutrients available to other organisms.
This ability to recycle carbon has been linked to specific C1 metabolism genes observed in many Planctomycetota, which are suggested to play a significant role, although this area of research is still poorly understood. Planctomycetota also display many sulfatase enzymes, which are capable of breaking down the sulfated heteropolysaccharides produced by many groups of macroalgae; the breakdown products are then used as an energy source. Some Planctomycetota are suggested to be capable of breaking down carrageenan.[42] Planctomycetota have often been observed in association with many organisms, including macroalgae, microalgae, marine sponges, lichens, and plants such as bryophytes.[8] They have also been observed inhabiting deep-sea cold seeps, where they are dominant organisms living on tube worms.[5] Planctomycetota are often associated with marine surfaces high in nutrients, and they occur as biofilms on algal surfaces in relatively high abundance.[7] Macroalgae such as the kelps Laminaria hyperborea and Ecklonia radiata are suggested to be an important habitat for Planctomycetota.[5][43] Roughly 70% of the bacterial community on Ecklonia radiata was found to be Planctomycetota.[5][10] Almost 150 Planctomycetota species have been isolated from the biofilms of macroalgae, and the communities associated with macroalgae are largely independent of geographical distribution, which suggests a symbiotic relationship.[8] Kelp forests dominate the rocky coastlines of temperate regions and provide habitat, shelter, and food for many organisms, including the Planctomycetota.[5] Given the considerable role of kelp forests in coastal primary productivity, the association of the Planctomycetota with kelp could indicate a significant role in coastal habitats.[44] Planctomycetota also play an important role as components of detritus in the water column, also known as marine snow,[5][44] given their ability to attach to surfaces.[45] As the climate continues to warm, the abundance of Planctomycetota associated with macroalgae might increase. When the seaweed Caulerpa taxifolia was incubated under elevated CO2 conditions, the abundance of Planctomycetota increased substantially, by as much as 10 times for some species.[5] While macroalgae are well-known substrates for Planctomycetota communities, their abundance has also been found to correlate with blooms of microalgae such as diatoms.[44][5] Blooms of cyanobacteria, diatoms, and dinoflagellates provide nutrients for Planctomycetota, which could explain the association.[8] Planctomycetota species are often associated with the surfaces of marine sponges.[8][45] They interact with sponges either by attachment with a holdfast or through a symbiotic relationship, and a high diversity of Planctomycetota is present as biofilms on sponges. The symbiotic relationship between sponges and Planctomycetota contributes to the health of the sponge, and the sponge often provides suitable habitat and nutrients to the Planctomycetota.[8] Planctomycetota have been found to be highly abundant, and extremely diverse, in lichen communities throughout northwestern Siberia. Planctomycetota have also been associated with Sphagnum wetlands, which store large amounts of carbon and so contribute to the global carbon cycle.
Planctomycetota play a considerable role in the degradation of sphagnum, accounting for roughly 15% of the bacterial community.[8] Planctomycetota display associations with other bacterial communities, mainly Alphaproteobacteria, Bacteroidota, Gemmatimonadota, and Verrucomicrobiota. The growth of many Planctomycetota is often supported by essential nutrients provided by other bacteria within the community, and some Planctomycetota rely strongly on symbiotic relationships with other bacteria.[8] The existence of membrane coat proteins near the intracytoplasmic membrane has been suggested to support an endocytosis-like uptake system, which would be the first instance of such a function found outside the eukaryotic domain. However, now that the existence of a rigid peptidoglycan cell wall has been confirmed, it seems unlikely that such vesicles could pass through the cell wall. Additionally, deletion of one of these membrane coat proteins in P. limnophila produced no decrease in macromolecule uptake.[18] In addition, cryo-electron tomography-based three-dimensional reconstruction of Planctomycetota has shown that what were originally thought to be vesicles held in the periplasm are actually folds in the cytoplasmic membrane.[5] Yet it has been demonstrated that Planctomycetota can survive on high-molecular-weight polysaccharides as their only source of carbon, meaning they must have some way of incorporating complex carbon substrates into their cytoplasm. Three hypotheses have been put forth. First, the Planctomycetota excrete an enzyme which, outside of the cell wall, degrades the complex substrates into smaller monosaccharides that can more easily be transported through the different membranes. Second, the complex substrates become anchored to the outside of the Planctomycetota, which then slowly break down these substrates into oligosaccharides that can be transported into the periplasm by specialized proteins. The third hypothesis involves the crateriform structures found on the outside of Planctomycetota cell walls: these structures have fibers lining their pits that may be able to absorb whole polysaccharides into the periplasm, where they would then be digested.[18] Almost all bacteria have a cytosol that follows the outer shape of their peptidoglycan cell wall. Eukaryotes differ in that their cytosol is divided into multiple compartments, creating organelles such as a nucleus. Planctomycetota are unusual in that they have large invaginations of their cytoplasmic membrane, which pull away from the peptidoglycan cell wall and leave room for the periplasm. Traditionally, the cytoplasmic membrane has been thought to be responsible for controlling the osmotic pressure of bacterial cells. Yet because of the folds in the cytoplasmic membrane and the existence of large spaces of periplasm within Planctomycetota, their peptidoglycan acts as the osmotic barrier, with the periplasm being isotonic to the cytosol.[5] Anammox is the process of oxidizing ammonium with nitrite as the electron acceptor. This process generates energy for the organism performing the reaction, in the same way that humans gain energy from oxidizing glucose.[46] In a marine environment, this ultimately removes nitrogen from the water, as N2 gas cannot be used by phytoplankton and is released into the atmosphere.
Up to 67% of dinitrogen gas production in the ocean can be attributed to anammox,[47] and about 50% of the nitrogen gas in the atmosphere is thought to have been produced by anammox.[48] Planctomycetota are the most dominant phylum of bacteria capable of performing anammox, and the anammox-capable Planctomycetota therefore play an important role in the global cycling of nitrogen.[49] The synthesis of sterols, often observed in eukaryotes and uncommon among bacteria, has been observed only rarely in Planctomycetota.[5][15] The synthesis of sterols such as lanosterol has been observed in G. obscuriglobus. Lanosterol is common in eukaryotes and in two other groups of bacteria, the methylotrophic Pseudomonadota and the myxobacteria. The sterol synthesis observed in G. obscuriglobus is unique within Planctomycetota. Sterol synthesis is suggested to be associated with regulation of membrane fluidity in Planctomycetota,[15] and has been described as essential to the proper growth and reproduction of G. obscuriglobus.[17] Recently, interest has arisen in the Planctomycetota with regard to their potential roles in biotechnology, mainly as a source of bioactive molecules[8][14] of interest to the pharmaceutical industry. Bioactive compounds are mainly present as secondary metabolites,[14] although little is known about Planctomycetota secondary metabolites.[50] This is unexpected, as the Planctomycetota share several key features with other known producers of bioactive molecules, such as the Myxobacteria.[50] However, a number of ongoing studies represent first steps toward including Planctomycetota in small-molecule drug development for humans. Planctomycetota species are worth considering when challenging current models for the origin of the nucleus, along with other aspects of the origin and evolution of the eukaryotic endomembrane system.[41] Research on Planctomycetota and their uses might be of global significance with regard to nutrient cycling processes and could further the understanding of global marine biogeochemistry. Moreover, given the influence of Planctomycetota on metabolic processes in water and air, they may also play a role in exchanges between the oceans and the atmosphere, potentially affecting climate change.[41] Planctomycetota species were recently identified as opportunistic human pathogens,[citation needed] but a lack of suitable culture media limits studies of Planctomycetota as pathogens of humans.[8]
https://en.wikipedia.org/wiki/Planctomycetota
In physics, the plane-wave expansion expresses a plane wave as a linear combination of spherical waves: {\displaystyle e^{i\mathbf {k} \cdot \mathbf {r} }=\sum _{\ell =0}^{\infty }(2\ell +1)i^{\ell }j_{\ell }(kr)P_{\ell }({\hat {\mathbf {k} }}\cdot {\hat {\mathbf {r} }}),} where i is the imaginary unit, k is the wave vector with magnitude k (the wavenumber), r is the position vector with magnitude r, j_ℓ(kr) is the spherical Bessel function of order ℓ, P_ℓ is the Legendre polynomial of degree ℓ, and the hats denote unit vectors. In the special case where k is aligned with the z axis, {\displaystyle e^{ikr\cos \theta }=\sum _{\ell =0}^{\infty }(2\ell +1)i^{\ell }j_{\ell }(kr)P_{\ell }(\cos \theta ),} where θ is the spherical polar angle of r. With the spherical-harmonic addition theorem the equation can be rewritten as {\displaystyle e^{i\mathbf {k} \cdot \mathbf {r} }=4\pi \sum _{\ell =0}^{\infty }\sum _{m=-\ell }^{\ell }i^{\ell }j_{\ell }(kr)Y_{\ell }^{m}{}({\hat {\mathbf {k} }})Y_{\ell }^{m*}({\hat {\mathbf {r} }}),} where Y_ℓ^m are the spherical harmonics and the asterisk denotes complex conjugation. Note that the complex conjugation can be interchanged between the two spherical harmonics due to symmetry. The plane-wave expansion is applied, for example, in the partial-wave analysis of scattering problems.
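As a concrete check of the expansion above, the following sketch (not part of the original article; it assumes NumPy and SciPy are available) compares a truncated partial-wave sum against the exact exponential for one arbitrary choice of kr and cos θ; the truncation order and test values are illustrative only.

```python
# Numerical check of the plane-wave expansion: compare exp(i*k*r*cos(theta))
# with the truncated sum over l of (2l+1) * i^l * j_l(kr) * P_l(cos(theta)).
import numpy as np
from scipy.special import spherical_jn, eval_legendre

def plane_wave_partial_sum(kr, cos_theta, l_max=40):
    """Truncated partial-wave sum approximating exp(i * kr * cos_theta)."""
    total = 0.0 + 0.0j
    for l in range(l_max + 1):
        total += (2 * l + 1) * (1j ** l) * spherical_jn(l, kr) * eval_legendre(l, cos_theta)
    return total

kr, cos_theta = 5.0, 0.3                 # arbitrary test values
exact = np.exp(1j * kr * cos_theta)
approx = plane_wave_partial_sum(kr, cos_theta)
print(abs(exact - approx))               # converges rapidly once l_max exceeds kr
```

The sum converges quickly once the truncation order exceeds kr, which is why low orders suffice for long wavelengths or small radii.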
https://en.wikipedia.org/wiki/Plane-wave_expansion
In mathematics , a plane is a two-dimensional space or flat surface that extends indefinitely. A plane is the two-dimensional analogue of a point (zero dimensions), a line (one dimension) and three-dimensional space . When working exclusively in two-dimensional Euclidean space , the definite article is used, so the Euclidean plane refers to the whole space. Several notions of a plane may be defined. The Euclidean plane follows Euclidean geometry , and in particular the parallel postulate . A projective plane may be constructed by adding "points at infinity" where two otherwise parallel lines would intersect, so that every pair of lines intersects in exactly one point. The elliptic plane may be further defined by adding a metric to the real projective plane. One may also conceive of a hyperbolic plane , which obeys hyperbolic geometry and has a negative curvature . Abstractly, one may forget all structure except the topology, producing the topological plane, which is homeomorphic to an open disk . Viewing the plane as an affine space produces the affine plane , which lacks a notion of distance but preserves the notion of collinearity . Conversely, in adding more structure, one may view the plane as a 1-dimensional complex manifold , called the complex line . Many fundamental tasks in mathematics, geometry , trigonometry , graph theory , and graphing are performed in a two-dimensional or planar space. [ 1 ] In mathematics , a Euclidean plane is a Euclidean space of dimension two , denoted E 2 {\displaystyle {\textbf {E}}^{2}} or E 2 {\displaystyle \mathbb {E} ^{2}} . It is a geometric space in which two real numbers are required to determine the position of each point . It is an affine space , which includes in particular the concept of parallel lines . It has also metrical properties induced by a distance , which allows to define circles , and angle measurement . A Euclidean plane with a chosen Cartesian coordinate system is called a Cartesian plane . In Euclidean geometry , a plane is a flat two- dimensional surface that extends indefinitely. Euclidean planes often arise as subspaces of three-dimensional space R 3 {\displaystyle \mathbb {R} ^{3}} . A prototypical example is one of a room's walls, infinitely extended and assumed infinitesimal thin. The elliptic plane is the real projective plane provided with a metric . Kepler and Desargues used the gnomonic projection to relate a plane σ to points on a hemisphere tangent to it. With O the center of the hemisphere, a point P in σ determines a line OP intersecting the hemisphere, and any line L ⊂ σ determines a plane OL which intersects the hemisphere in half of a great circle . The hemisphere is bounded by a plane through O and parallel to σ. No ordinary line of σ corresponds to this plane; instead a line at infinity is appended to σ . As any line in this extension of σ corresponds to a plane through O , and since any pair of such planes intersects in a line through O , one can conclude that any pair of lines in the extension intersect: the point of intersection lies where the plane intersection meets σ or the line at infinity. Thus the axiom of projective geometry, requiring all pairs of lines in a plane to intersect, is confirmed. [ 2 ] In mathematics , a projective plane is a geometric structure that extends the concept of a plane . In the ordinary Euclidean plane, two lines typically intersect at a single point, but there are some pairs of lines (namely, parallel lines) that do not intersect. 
A projective plane can be thought of as an ordinary plane equipped with additional "points at infinity" where parallel lines intersect. Thus any two distinct lines in a projective plane intersect at exactly one point. Renaissance artists, in developing the techniques of drawing in perspective , laid the groundwork for this mathematical topic. The archetypical example is the real projective plane , also known as the extended Euclidean plane. [ 4 ] This example, in slightly different guises, is important in algebraic geometry , topology and projective geometry where it may be denoted variously by PG(2, R) , RP 2 , or P 2 (R), among other notations. There are many other projective planes, both infinite, such as the complex projective plane , and finite, such as the Fano plane . In addition to its familiar geometric structure, with isomorphisms that are isometries with respect to the usual inner product, the plane may be viewed at various other levels of abstraction . Each level of abstraction corresponds to a specific category . At one extreme, all geometrical and metric concepts may be dropped to leave the topological plane, which may be thought of as an idealized homotopically trivial infinite rubber sheet, which retains a notion of proximity, but has no distances. The topological plane has a concept of a linear path, but no concept of a straight line. The topological plane, or its equivalent the open disc, is the basic topological neighborhood used to construct surfaces (or 2-manifolds) classified in low-dimensional topology . Isomorphisms of the topological plane are all continuous bijections . The topological plane is the natural context for the branch of graph theory that deals with planar graphs , and results such as the four color theorem . The plane may also be viewed as an affine space , whose isomorphisms are combinations of translations and non-singular linear maps. From this viewpoint there are no distances, but collinearity and ratios of distances on any line are preserved. Differential geometry views a plane as a 2-dimensional real manifold , a topological plane which is provided with a differential structure . Again in this case, there is no notion of distance, but there is now a concept of smoothness of maps, for example a differentiable or smooth path (depending on the type of differential structure applied). The isomorphisms in this case are bijections with the chosen degree of differentiability. In the opposite direction of abstraction, we may apply a compatible field structure to the geometric plane, giving rise to the complex plane and the major area of complex analysis . The complex field has only two isomorphisms that leave the real line fixed, the identity and conjugation . In the same way as in the real case, the plane may also be viewed as the simplest, one-dimensional (in terms of complex dimension , over the complex numbers) complex manifold , sometimes called the complex line . However, this viewpoint contrasts sharply with the case of the plane as a 2-dimensional real manifold. The isomorphisms are all conformal bijections of the complex plane, but the only possibilities are maps that correspond to the composition of a multiplication by a complex number and a translation. In addition, the Euclidean geometry (which has zero curvature everywhere) is not the only geometry that the plane may have. The plane may be given a spherical geometry by using the stereographic projection . 
This can be thought of as placing a sphere tangent to the plane (just like a ball on the floor), removing the top point, and projecting the sphere onto the plane from this point. This is one of the projections that may be used in making a flat map of part of the Earth's surface. The resulting geometry has constant positive curvature. Alternatively, the plane can also be given a metric which gives it constant negative curvature giving the hyperbolic plane . The latter possibility finds an application in the theory of special relativity in the simplified case where there are two spatial dimensions and one time dimension. (The hyperbolic plane is a timelike hypersurface in three-dimensional Minkowski space .) The one-point compactification of the plane is homeomorphic to a sphere (see stereographic projection ); the open disk is homeomorphic to a sphere with the "north pole" missing; adding that point completes the (compact) sphere. The result of this compactification is a manifold referred to as the Riemann sphere or the complex projective line . The projection from the Euclidean plane to a sphere without a point is a diffeomorphism and even a conformal map . The plane itself is homeomorphic (and diffeomorphic) to an open disk . For the hyperbolic plane such diffeomorphism is conformal, but for the Euclidean plane it is not.
https://en.wikipedia.org/wiki/Plane_(mathematics)
In projective geometry, a plane at infinity is the hyperplane at infinity of a three-dimensional projective space, or any plane contained in the hyperplane at infinity of a projective space of higher dimension. This article is concerned solely with the three-dimensional case. There are two approaches to defining the plane at infinity, depending on whether one starts with a projective 3-space or an affine 3-space. If a projective 3-space is given, the plane at infinity is any distinguished projective plane of the space.[1] This point of view emphasizes the fact that this plane is not geometrically different from any other plane. On the other hand, given an affine 3-space, the plane at infinity is a projective plane which is added to the affine 3-space in order to give it closure of incidence properties. That is, the points of the plane at infinity are the points where parallel lines of the affine 3-space meet, and the lines are the lines where parallel planes of the affine 3-space meet. The result of the addition is the projective 3-space, P 3 {\displaystyle P^{3}} . This point of view emphasizes the internal structure of the plane at infinity, but does make it look "special" in comparison to the other planes of the space. If the affine 3-space is real, R 3 {\displaystyle \mathbb {R} ^{3}} , then the addition of a real projective plane R P 2 {\displaystyle \mathbb {R} P^{2}} at infinity produces the real projective 3-space R P 3 {\displaystyle \mathbb {R} P^{3}} . Since any two projective planes in a projective 3-space are equivalent, we can choose a homogeneous coordinate system so that any point on the plane at infinity is represented as (X : Y : Z : 0).[2] Any point in the affine 3-space is then represented as (X : Y : Z : 1). The points on the plane at infinity seem to have three degrees of freedom, but homogeneous coordinates are equivalent up to any rescaling, so the coordinates (X : Y : Z : 0) can be normalized, reducing the degrees of freedom to two (thus, a surface, namely a projective plane). Proposition: Any line which passes through the origin (0:0:0:1) and through a point (X : Y : Z : 1) will intersect the plane at infinity at the point (X : Y : Z : 0). Proof: A line which passes through the points (0:0:0:1) and (X : Y : Z : 1) consists of points which are linear combinations of the two given points: a(0:0:0:1) + b(X:Y:Z:1) = (bX : bY : bZ : a + b). For such a point to lie on the plane at infinity we must have a + b = 0. So, by choosing a = −b, we obtain the point (bX : bY : bZ : 0) = (X : Y : Z : 0), as required. Q.E.D. Any pair of parallel lines in 3-space will intersect each other at a point on the plane at infinity. Also, every line in 3-space intersects the plane at infinity at a unique point. This point is determined by the direction, and only by the direction, of the line. To determine this point, consider a line parallel to the given line but passing through the origin, if the line does not already pass through the origin. Then choose any point, other than the origin, on this second line. If the homogeneous coordinates of this point are (X : Y : Z : 1), then the homogeneous coordinates of the point at infinity through which the first and second line both pass is (X : Y : Z : 0). Example: Consider a line passing through the points (0:0:1:1) and (3:0:1:1). A parallel line passes through points (0:0:0:1) and (3:0:0:1).
This second line intersects the plane at infinity at the point (3:0:0:0). But the first line also passes through this point: when λ + μ = 0 {\displaystyle \lambda +\mu =0} . ■ Any pair of parallel planes in affine 3-space will intersect each other in a projective line (a line at infinity ) in the plane at infinity. Also, every plane in the affine 3-space intersects the plane at infinity in a unique line. [ 3 ] This line is determined by the direction—and only by the direction—of the plane. Since the plane at infinity is a projective plane, it is homeomorphic to the surface of a "sphere modulo antipodes", i.e. a sphere in which antipodal points are equivalent: S 2 /{1, −1} where the quotient is understood as a quotient by a group action (see quotient space ).
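The proposition and example above lend themselves to a short numerical illustration. The sketch below is my own (it assumes NumPy; the helper names are invented for illustration): it represents affine points in homogeneous coordinates and shows that the example line through (0:0:1:1) and (3:0:1:1) and the parallel line through the origin meet the plane at infinity at the same projective point (3:0:0:0).

```python
# Parallel lines in affine 3-space meet at the same point on the plane at
# infinity, using homogeneous coordinates (X : Y : Z : W) with W = 0 at infinity.
import numpy as np

def point_at_infinity(p, q):
    """Direction of the line through affine points p and q, as (X:Y:Z:0)."""
    p_h = np.append(np.asarray(p, float), 1.0)   # (X, Y, Z, 1)
    q_h = np.append(np.asarray(q, float), 1.0)
    return q_h - p_h                             # last coordinate cancels to 0

def same_projective_point(a, b, tol=1e-12):
    """Homogeneous coordinates are equivalent up to a nonzero scale factor."""
    return np.linalg.matrix_rank(np.vstack([a, b]), tol=tol) == 1

line1 = point_at_infinity([0, 0, 1], [3, 0, 1])   # the example line above
line2 = point_at_infinity([0, 0, 0], [3, 0, 0])   # a parallel line through the origin
print(line1, line2, same_projective_point(line1, line2))  # both ~ (3:0:0:0), True
```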
https://en.wikipedia.org/wiki/Plane_at_infinity
A plane mirror is a mirror with a flat (planar) reflective surface.[1][2] For light rays striking a plane mirror, the angle of reflection equals the angle of incidence.[3] The angle of incidence is the angle between the incident ray and the surface normal (an imaginary line perpendicular to the surface), and the angle of reflection is the angle between the reflected ray and the normal; consequently, a collimated beam of light does not spread out after reflection from a plane mirror, except for diffraction effects. A plane mirror makes an image of objects behind the mirror; these images appear to be behind the plane in which the mirror lies. A straight line drawn from part of an object to the corresponding part of its image makes a right angle with, and is bisected by, the surface of the plane mirror. The image formed by a plane mirror is virtual, meaning that the light rays do not actually come from the image, rather than real, in which case the light rays would actually come from the image; it is always upright and of the same shape and size as the object it is reflecting. A virtual image is a copy of an object formed at the location from which the light rays appear to come. The image formed in a plane mirror is reversed front to back, sometimes described as a perverted image (see Perversion); this reversal is commonly confused with lateral (left-right) inversion. If a person is reflected in a plane mirror, the image of their right hand appears to be the left hand of the image. Plane mirrors are the only type of mirror for which an object always produces a virtual, erect image of the same size as the object, irrespective of the object's shape, size and distance from the mirror; other types of mirror (concave and convex) can do the same, but only under specific conditions. The focal length of a plane mirror is infinite;[4] its optical power is zero. Using the mirror equation {\displaystyle {\frac {1}{d_{0}}}+{\frac {1}{d_{i}}}={\frac {1}{f}},} where d 0 {\displaystyle d_{0}} is the object distance, d i {\displaystyle d_{i}} is the image distance, and f {\displaystyle f} is the focal length: since 1 f = 0 {\displaystyle {\frac {1}{f}}=0} , it follows that d i = − d 0 {\displaystyle d_{i}=-d_{0}} , so the image appears as far behind the mirror as the object is in front of it. Concave and convex mirrors (spherical mirrors)[5] are also able to produce images similar to a plane mirror. However, the images they form are not of the same size as the object under all conditions, as they are for a plane mirror, but only under specific ones. In a convex mirror, the virtual image formed is always diminished, whereas in a concave mirror an enlarged virtual image is formed when the object is placed between the focus and the pole. Therefore, in applications where a virtual image of the same size is required, a plane mirror is preferred over spherical mirrors. A plane mirror is made using a highly reflecting and polished surface such as a silver or aluminium surface in a process called silvering.[6] After silvering, a thin layer of red lead oxide is applied at the back of the mirror. The reflecting surface reflects most of the light striking it as long as the surface remains uncontaminated by tarnishing or oxidation. Most modern plane mirrors are designed with a thin piece of plate glass that protects and strengthens the mirror surface and helps prevent tarnishing. Historically, mirrors were simply flat pieces of polished copper, obsidian, brass, or a precious metal. Mirrors made from liquid also exist, as the elements gallium and mercury are both highly reflective in their liquid state.
Mathematically, a plane mirror can be considered to be the limit of either a concave or a convex spherical curved mirror as the radius, and therefore the focal length, becomes infinite.[4]
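The geometric statements above (the virtual image lying on the perpendicular through the object, as far behind the mirror as the object is in front) can be captured in a few lines. The following sketch is not from the article and assumes NumPy; the mirror plane is specified by a point on it and a normal vector of my choosing.

```python
# Locate the virtual image formed by a plane mirror by reflecting the object
# point across the mirror plane (point on the mirror + surface normal).
import numpy as np

def mirror_image(obj, mirror_point, normal):
    """Reflect point `obj` across the plane through `mirror_point` with `normal`."""
    o = np.asarray(obj, float)
    m = np.asarray(mirror_point, float)
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    d = np.dot(o - m, n)          # signed perpendicular distance to the mirror plane
    return o - 2.0 * d * n        # image lies the same distance on the other side

obj = [0.0, 0.0, 2.0]             # object 2 units in front of the mirror
img = mirror_image(obj, mirror_point=[0.0, 0.0, 0.0], normal=[0.0, 0.0, 1.0])
print(img)                        # [0. 0. -2.]: as far behind the z = 0 mirror as in front
```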
https://en.wikipedia.org/wiki/Plane_mirror
In describing reflection and refraction in optics , the plane of incidence (also called the incidence plane or the meridional plane [ citation needed ] ) is the plane which contains the surface normal and the propagation vector of the incoming radiation . [ 1 ] (In wave optics , the latter is the k-vector , or wavevector, of the incoming wave.) When reflection is specular , as it is for a mirror or other shiny surface, the reflected ray also lies in the plane of incidence; when refraction also occurs, the refracted ray lies in the same plane. The condition of co-planarity among incident ray, surface normal, and reflected ray (refracted ray) is known as the first law of reflection ( first law of refraction , respectively). [ 2 ] The orientation of the incident light's polarization with respect to the plane of incidence has an important effect on the strength of the reflection. P-polarized light is incident linearly polarized light with polarization direction lying in the plane of incidence. S-polarized light has polarization perpendicular to the plane of incidence. The s in s-polarized comes from the German word senkrecht , meaning perpendicular. The strength of reflection from a surface is determined by the Fresnel equations , which are different for s- and p-polarized light.
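Because the s/p decomposition is defined relative to the plane of incidence, it can help to see the construction explicitly. The sketch below is an illustration of mine (assuming NumPy; the vectors are arbitrary): it builds unit vectors perpendicular and parallel to the plane of incidence from the incident direction and the surface normal, then projects a polarization vector onto them. It breaks down at exactly normal incidence, where the plane of incidence itself is undefined.

```python
# Construct s (perpendicular to the plane of incidence) and p (in the plane of
# incidence, transverse to the ray) basis vectors for a given incident ray.
import numpy as np

def s_p_basis(k_incident, surface_normal):
    """Return unit vectors (s, p) for the s- and p-polarization directions."""
    k = np.asarray(k_incident, float); k /= np.linalg.norm(k)
    n = np.asarray(surface_normal, float); n /= np.linalg.norm(n)
    s = np.cross(k, n)             # normal to the plane of incidence
    s /= np.linalg.norm(s)         # zero (undefined) at normal incidence
    p = np.cross(s, k)             # in the plane of incidence, perpendicular to k
    return s, p

k = [1.0, 0.0, -1.0]               # incoming ray, 45 degrees onto the z = 0 surface
n = [0.0, 0.0, 1.0]
E = np.array([0.3, 0.8, 0.3])      # an illustrative polarization vector
s, p = s_p_basis(k, n)
print("s-component:", np.dot(E, s), " p-component:", np.dot(E, p))
```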
https://en.wikipedia.org/wiki/Plane_of_incidence
For light and other electromagnetic radiation , the plane of polarization is the plane spanned by the direction of propagation and either the electric vector or the magnetic vector , depending on the convention. It can be defined for polarized light, remains fixed in space for linearly-polarized light, and undergoes axial rotation for circularly-polarized light. Unfortunately the two conventions are contradictory. As originally defined by Étienne-Louis Malus in 1811, [ 2 ] the plane of polarization coincided (although this was not known at the time) with the plane containing the direction of propagation and the magnetic vector. [ 3 ] In modern literature, the term plane of polarization , if it is used at all, is likely to mean the plane containing the direction of propagation and the electric vector, [ 4 ] because the electric field has the greater propensity to interact with matter. [ 5 ] For waves in a birefringent (doubly-refractive) crystal, under the old definition, one must also specify whether the direction of propagation means the ray direction ( Poynting vector ) or the wave- normal direction, because these directions generally differ and are both perpendicular to the magnetic vector (Fig. 1). Malus, as an adherent of the corpuscular theory of light , could only choose the ray direction. But Augustin-Jean Fresnel , in his successful effort to explain double refraction under the wave theory (1822 onward), found it more useful to choose the wave-normal direction, with the result that the supposed vibrations of the medium were then consistently perpendicular to the plane of polarization. [ 6 ] In an isotropic medium such as air, the ray and wave-normal directions are the same, and Fresnel's modification makes no difference. Fresnel also admitted that, had he not felt constrained by the received terminology, it would have been more natural to define the plane of polarization as the plane containing the vibrations and the direction of propagation. [ 7 ] That plane, which became known as the plane of vibration , is perpendicular to Fresnel's "plane of polarization" but identical with the plane that modern writers tend to call by that name! It has been argued that the term plane of polarization , because of its historical ambiguity, should be avoided in original writing. One can easily specify the orientation of a particular field vector; and even the term plane of vibration carries less risk of confusion than plane of polarization . [ 8 ] For electromagnetic (EM) waves in an isotropic medium (that is, a medium whose properties are independent of direction), the electric field vectors ( E and D ) are in one direction, and the magnetic field vectors ( B and H ) are in another direction, perpendicular to the first, and the direction of propagation is perpendicular to both the electric and the magnetic vectors. In this case the direction of propagation is both the ray direction and the wave-normal direction (the direction perpendicular to the wavefront ). For a linearly -polarized wave (also called a plane -polarized wave), the orientations of the field vectors are fixed (Fig. 2). Because innumerable materials are dielectrics or conductors while comparatively few are ferromagnets , the reflection or refraction of EM waves (including light ) is more often due to differences in the electric properties of media than to differences in their magnetic properties. 
That circumstance tends to draw attention to the electric vectors, so that we tend to think of the direction of polarization as the direction of the electric vectors, and the "plane of polarization" as the plane containing the electric vectors and the direction of propagation. Indeed, that is the convention used in the online Encyclopædia Britannica , [ 4 ] and in Feynman 's lecture on polarization. [ 9 ] In the latter case one must infer the convention from the context: Feynman keeps emphasizing the direction of the electric ( E ) vector and leaves the reader to presume that the "plane of polarization" contains that vector — and this interpretation indeed fits the examples he gives. The same vector is used to describe the polarization of radio signals and antennas (Fig. 3). [ 10 ] If the medium is magnetically isotropic but electrically non -isotropic (like a doubly-refracting crystal), the magnetic vectors B and H are still parallel, and the electric vectors E and D are still perpendicular to both, and the ray direction is still perpendicular to E and the magnetic vectors, and the wave-normal direction is still perpendicular to D and the magnetic vectors; but there is generally a small angle between the electric vectors E and D , hence the same angle between the ray direction and the wave-normal direction (Fig. 1). [ 1 ] [ 11 ] Hence D , E , the wave-normal direction, and the ray direction are all in the same plane, and it is all the more natural to define that plane as the "plane of polarization". This "natural" definition, however, depends on the theory of EM waves developed by James Clerk Maxwell in the 1860s — whereas the word polarization was coined about 50 years earlier, and the associated mystery dates back even further. Whether by accident or by design, the plane of polarization has always been defined as the plane containing a field vector and a direction of propagation. In Fig. 1, there are three such planes, to which we may assign numbers for ease of reference: In an isotropic medium, E and D have the same direction, [ Note 1 ] so that the ray and wave-normal directions merge, and the planes (2a) and (2b) become one: Polarization was discovered — but not named or understood — by Christiaan Huygens , as he investigated the double refraction of "Iceland crystal" (transparent calcite , now called Iceland spar ). The essence of his discovery, published in his Treatise on Light (1690), was as follows. When a ray (meaning a narrow beam of light) passes through two similarly oriented calcite crystals at normal incidence, the ordinary ray emerging from the first crystal suffers only the ordinary refraction in the second, while the extraordinary ray emerging from the first suffers only the extraordinary refraction in the second. But when the second crystal is rotated 90° about the incident rays, the roles are interchanged, so that the ordinary ray emerging from the first crystal suffers only the extraordinary refraction in the second, and vice versa. At intermediate positions of the second crystal, each ray emerging from the first is doubly refracted by the second, giving four rays in total; and as the crystal is rotated from the initial orientation to the perpendicular one, the brightnesses of the rays vary, giving a smooth transition between the extreme cases in which there are only two final rays. [ 12 ] Huygens defined a principal section of a calcite crystal as a plane normal to a natural surface and parallel to the axis of the obtuse solid angle. 
[13] This axis was parallel to the axes of the spheroidal secondary waves by which he (correctly) explained the directions of the extraordinary refraction. The term polarization was coined by Étienne-Louis Malus in 1811.[2] In 1808, in the midst of confirming Huygens' geometric description of double refraction (while disputing his physical explanation), Malus had discovered that when a ray of light is reflected off a non-metallic surface at the appropriate angle, it behaves like one of the two rays emerging from a calcite crystal.[14][Note 2] As this behavior had previously been known only in connection with double refraction, Malus described it in that context. In particular, he defined the plane of polarization of a polarized ray as the plane, containing the ray, in which a principal section of a calcite crystal must lie in order to cause only ordinary refraction.[15] This definition was all the more reasonable because it meant that when a ray was polarized by reflection (off an isotropic medium), the plane of polarization was the plane of incidence and reflection (that is, the plane containing the incident ray, the normal to the reflective surface, and the polarized reflected ray). But, as we now know, this plane happens to contain the magnetic vectors of the polarized ray, not the electric vectors.[16] The plane of the ray and the magnetic vectors is the one numbered (2b) above. The implication that the plane of polarization contains the magnetic vectors is still found in the definition given in the online Merriam-Webster dictionary.[17] Even Julius Adams Stratton, having said that "It is customary to define the polarization in terms of E", promptly adds: "In optics, however, the orientation of the vectors is specified traditionally by the 'plane of polarization,' by which is meant the plane normal to E containing both H and the axis of propagation."[10] That definition is identical with Malus's. In 1821, Augustin-Jean Fresnel announced his hypothesis that light waves are exclusively transverse and therefore always polarized in the sense of having a particular transverse orientation, and that what we call unpolarized light is in fact light whose orientation is rapidly and randomly changing.[18][19] Supposing that light waves were analogous to shear waves in elastic solids, and that a higher refractive index corresponded to a higher density of the luminiferous aether, he found that he could account for the partial reflection (including polarization by reflection) at the interface between two transparent isotropic media, provided that the vibrations of the aether were perpendicular to the plane of polarization.[20] Thus the polarization, according to the received definition, was "in" a certain plane if the vibrations were perpendicular to that plane! Fresnel himself found this implication inconvenient, and said so in writing later that year; but he soon felt obliged to make a less radical change. In his successful model of double refraction, the displacement of the medium was constrained to be tangential to the wavefront, while the force was allowed to deviate from the displacement and from the wavefront.[21] Hence, if the vibrations were perpendicular to the plane of polarization, then the plane of polarization contained the wave-normal but not necessarily the ray.
[ 22 ] In his "Second Memoir" on double refraction, Fresnel formally adopted this new definition, acknowledging that it agreed with the old definition in an isotropic medium such as air, but not in a birefringent crystal. [ 6 ] The vibrations normal to Malus's plane of polarization are electric, and the electric vibration tangential to the wavefront is D (Fig. 1). Thus, in terms of the above numbering, Fresnel changed the "plane of polarization" from (2b) to (2a) . Fresnel's definition remains compatible with the Merriam-Webster definition, [ 17 ] which fails to specify the propagation direction. And it remains compatible with Stratton's definition, [ 10 ] because that is given in the context of an isotropic medium, in which planes (2a) and (2b) merge into (2) . What Fresnel called the "more natural" choice was a plane containing D and a direction of propagation. In Fig. 1, the only plane meeting that specification is the one labeled "Plane of vibration" and later numbered (1) — that is, the one that modern authors tend to identify with the "plane of polarization". We might therefore wish that Fresnel had been less deferential to his predecessors. That scenario, however, is less realistic than it may seem, because even after Fresnel's transverse-wave theory was generally accepted, the direction of the vibrations was the subject of continuing debate. The principle that refractive index depended on the density of the aether was essential to Fresnel's aether drag hypothesis . [ 23 ] But it could not be extended to birefringent crystals — in which at least one refractive index varies with direction — because density is not directional. Hence his explanation of refraction required a directional variation in stiffness of the aether within a birefringent medium, plus a variation in density between media. [ 24 ] James MacCullagh and Franz Ernst Neumann avoided this complication by supposing that a higher refractive index corresponded always to the same density but a greater elastic compliance (lower stiffness). To obtain results that agreed with observations on partial reflection, they had to suppose, contrary to Fresnel, that the vibrations were within the plane of polarization. [ 25 ] The question called for an experimental determination of the direction of vibration, and the challenge was answered by George Gabriel Stokes . He defined the plane of vibration as "the plane passing through the ray and the direction of vibration" [ 26 ] (in agreement with Fig. 1). Now suppose that a fine diffraction grating is illuminated at normal incidence. At large angles of diffraction, the grating will appear somewhat edge-on, so that the directions of vibration will be crowded towards the direction parallel to the plane of the grating. If the planes of polarization coincide with the planes of vibration (as MacCullagh and Neumann said), they will be crowded in the same direction; and if the planes of polarization are normal to the planes of vibration (as Fresnel said), the planes of polarization will be crowded in the normal direction. To find the direction of the crowding, one could vary the polarization of the incident light in equal steps, and determine the planes of polarization of the diffracted light in the usual manner. Stokes performed such an experiment in 1849, and it found in favor of Fresnel. [ 26 ] [ 27 ] In 1852, Stokes noted a much simpler experiment that leads to the same conclusion. 
Sunlight scattered from a patch of blue sky 90° from the sun is found, by the methods of Malus, to be polarized in the plane containing the line of sight and the sun. But it is obvious from the geometry that the vibrations of that light can only be perpendicular to that plane. [ 28 ] There was, however, a sense in which MacCullagh and Neumann were correct. If we attempt an analogy between shear waves in a non-isotropic elastic solid, and EM waves in a magnetically isotropic but electrically non-isotropic crystal, the density must correspond to the magnetic permeability (both being non-directional), and the compliance must correspond to the electric permittivity (both being directional). The result is that the velocity of the solid corresponds to the H field, [ 29 ] so that the mechanical vibrations of the shear wave are in the direction of the magnetic vibrations of the EM wave. But Stokes's experiments were bound to detect the electric vibrations, because those have the greater propensity to interact with matter. In short, the MacCullagh-Neumann vibrations were the ones that had a mechanical analog, but Fresnel's vibrations were the ones that were more likely to be detected in experiments. [ Note 4 ] The electromagnetic theory of light further emphasized the electric vibrations because of their interactions with matter, [ 5 ] whereas the old "plane of polarization" contained the magnetic vectors. Hence the electromagnetic theory would have reinforced the convention that the vibrations were normal to the plane of polarization — provided, of course, that one was familiar with the historical definition of the plane of polarization. But if one was influenced by physical considerations alone , then, as Feynman [ 9 ] and the Britannica [ 4 ] illustrate, one would pay attention to the electric vectors and assume that the "plane" of polarization (if one needed such a concept) contained those vectors. However, it is not clear that a "plane of polarization" is needed at all: knowing what field vectors are involved, one can specify the polarization by specifying the orientation of a particular vector, or, as Born and Wolf suggest, by specifying the "plane of vibration" of that vector. [ 5 ] Hecht also prefers the term plane of vibration (or, more usually, plane-of-vibration ), which he defines as the plane of E and the wave-normal, in agreement with Fig. 1 above. [ 30 ] In an optically chiral medium — that is, one in which the direction of polarization gradually rotates as the wave propagates — the choice of definition of the "plane of polarization" does not affect the existence or direction ("handedness") of the rotation. This is one context in which the ambiguity of the term plane of polarization causes no further confusion. [ 31 ] There is also a context in which the original definition might still suggest itself. In a non-magnetic non-chiral crystal of the biaxial class (in which there is no ordinary refraction, but both refractions violate Snell's law ), there are three mutually perpendicular planes for which the speed of light is isotropic within the plane provided that the electric vectors are normal to the plane. [ 32 ] This situation naturally draws attention to a plane normal to the vibrations as envisaged by Fresnel, and that plane is indeed the plane of polarization as defined by Fresnel or Malus. 
In most contexts, however, the concept of a "plane of polarization" distinct from a plane containing the electric "vibrations" has arguably become redundant, and has certainly become a source of confusion. In the words of Born & Wolf, "it is… better not to use this term." [ 33 ]
https://en.wikipedia.org/wiki/Plane_of_polarization
In mathematics, reflection symmetry, line symmetry, mirror symmetry, or mirror-image symmetry is symmetry with respect to a reflection. That is, a figure which does not change upon undergoing a reflection has reflectional symmetry. In 2-dimensional space there is a line/axis of symmetry; in 3-dimensional space there is a plane of symmetry. An object or figure which is indistinguishable from its transformed image is called mirror symmetric. In formal terms, a mathematical object is symmetric with respect to a given operation such as reflection, rotation, or translation, if, when applied to the object, this operation preserves some property of the object.[1] The set of operations that preserve a given property of the object form a group. Two objects are symmetric to each other with respect to a given group of operations if one is obtained from the other by some of the operations (and vice versa). The axis of symmetry of a two-dimensional figure is a line such that, for each perpendicular constructed, if the perpendicular intersects the figure at a distance 'd' from the axis along the perpendicular, then there exists another intersection of the shape and the perpendicular at the same distance 'd' from the axis, in the opposite direction along the perpendicular. Another way to think about the axis of symmetry is that if the shape were folded in half over the axis, the two halves would be identical: the two halves are each other's mirror images.[1] Thus, a square has four axes of symmetry because there are four different ways to fold it and have the edges all match. A circle has infinitely many axes of symmetry, while a cone and a sphere have infinitely many planes of symmetry. Triangles with reflection symmetry are isosceles. Quadrilaterals with reflection symmetry are kites, (concave) deltoids, rhombi,[2] and isosceles trapezoids. All even-sided polygons have two simple reflective forms, one with lines of reflection through vertices, and one through edges. For an arbitrary shape, the axiality of the shape measures how close it is to being bilaterally symmetric; it equals 1 for shapes with reflection symmetry and is between two-thirds and 1 for any convex shape. In three dimensions, the cube has 9 planes of reflective symmetry: three parallel to its faces and six passing through pairs of opposite edges.[3] For more general types of reflection there are correspondingly more general types of reflection symmetry. Animals that are bilaterally symmetric have reflection symmetry around the sagittal plane, which divides the body vertically into left and right halves, with one of each sense organ and limb pair on either side. Most animals are bilaterally symmetric, likely because this supports forward movement and streamlining.[4][5][6] Mirror symmetry is often used in architecture, as in the facade of Santa Maria Novella, Florence.[7] It is also found in the design of ancient structures such as Stonehenge.[8] Symmetry was a core element in some styles of architecture, such as Palladianism.[9]
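As a small illustration of the axis-of-symmetry definition above, the following sketch (my own, assuming NumPy; the tolerance and test angles are arbitrary) reflects a finite set of vertices across candidate axes through the origin and reports which axes map the set onto itself, recovering the four axes of a square.

```python
# Test whether a finite 2-D point set is mirror-symmetric about a line
# through the origin at a given angle, by reflecting every point and
# checking that the reflected set coincides with the original set.
import numpy as np

def reflect(points, angle):
    """Reflect points across the line through the origin at `angle` radians."""
    c, s = np.cos(2 * angle), np.sin(2 * angle)
    R = np.array([[c, s], [s, -c]])      # reflection matrix for that line
    return points @ R.T

def is_axis_of_symmetry(points, angle, tol=1e-9):
    reflected = reflect(points, angle)
    # Symmetric if every reflected point lands on some original point.
    return all(np.min(np.linalg.norm(points - q, axis=1)) < tol for q in reflected)

square = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], float)
for k in range(8):                        # candidate axes every 22.5 degrees
    angle = k * np.pi / 8
    print(round(np.degrees(angle), 1), is_axis_of_symmetry(square, angle))
# Only 0, 45, 90 and 135 degrees succeed: the square's four axes of symmetry.
```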
https://en.wikipedia.org/wiki/Plane_of_symmetry
In Euclidean geometry , a plane is a flat two- dimensional surface that extends indefinitely. Euclidean planes often arise as subspaces of three-dimensional space R 3 {\displaystyle \mathbb {R} ^{3}} . A prototypical example is one of a room's walls, infinitely extended and assumed infinitesimal thin. While a pair of real numbers R 2 {\displaystyle \mathbb {R} ^{2}} suffices to describe points on a plane, the relationship with out-of-plane points requires special consideration for their embedding in the ambient space R 3 {\displaystyle \mathbb {R} ^{3}} . A plane segment or planar region (or simply "plane", in lay use) is a planar surface region ; it is analogous to a line segment . A bivector is an oriented plane segment, analogous to directed line segments . [ a ] A face is a plane segment bounding a solid object . [ 1 ] A slab is a region bounded by two parallel planes. A parallelepiped is a region bounded by three pairs of parallel planes. Euclid set forth the first great landmark of mathematical thought, an axiomatic treatment of geometry. [ 2 ] He selected a small core of undefined terms (called common notions ) and postulates (or axioms ) which he then used to prove various geometrical statements. Although the plane in its modern sense is not directly given a definition anywhere in the Elements , it may be thought of as part of the common notions. [ 3 ] Euclid never used numbers to measure length, angle, or area. The Euclidean plane equipped with a chosen Cartesian coordinate system is called a Cartesian plane ; a non-Cartesian Euclidean plane equipped with a polar coordinate system would be called a polar plane . A plane is a ruled surface . In mathematics , a Euclidean plane is a Euclidean space of dimension two , denoted E 2 {\displaystyle {\textbf {E}}^{2}} or E 2 {\displaystyle \mathbb {E} ^{2}} . It is a geometric space in which two real numbers are required to determine the position of each point . It is an affine space , which includes in particular the concept of parallel lines . It has also metrical properties induced by a distance , which allows to define circles , and angle measurement . A Euclidean plane with a chosen Cartesian coordinate system is called a Cartesian plane . This section is solely concerned with planes embedded in three dimensions: specifically, in R 3 . In a Euclidean space of any number of dimensions, a plane is uniquely determined by any of the following: The following statements hold in three-dimensional Euclidean space but not in higher dimensions, though they have higher-dimensional analogues: In a manner analogous to the way lines in a two-dimensional space are described using a point-slope form for their equations, planes in a three dimensional space have a natural description using a point in the plane and a vector orthogonal to it (the normal vector ) to indicate its "inclination". Specifically, let r 0 be the position vector of some point P 0 = ( x 0 , y 0 , z 0 ) , and let n = ( a , b , c ) be a nonzero vector. The plane determined by the point P 0 and the vector n consists of those points P , with position vector r , such that the vector drawn from P 0 to P is perpendicular to n . Recalling that two vectors are perpendicular if and only if their dot product is zero, it follows that the desired plane can be described as the set of all points r such that n ⋅ ( r − r 0 ) = 0. {\displaystyle {\boldsymbol {n}}\cdot ({\boldsymbol {r}}-{\boldsymbol {r}}_{0})=0.} The dot here means a dot (scalar) product . 
Expanded this becomes a ( x − x 0 ) + b ( y − y 0 ) + c ( z − z 0 ) = 0 , {\displaystyle a(x-x_{0})+b(y-y_{0})+c(z-z_{0})=0,} which is the point–normal form of the equation of a plane. [ 4 ] This is just a linear equation a x + b y + c z + d = 0 , {\displaystyle ax+by+cz+d=0,} where d = − ( a x 0 + b y 0 + c z 0 ) , {\displaystyle d=-(ax_{0}+by_{0}+cz_{0}),} which is the expanded form of − n ⋅ r 0 . {\displaystyle -{\boldsymbol {n}}\cdot {\boldsymbol {r}}_{0}.} In mathematics it is a common convention to express the normal as a unit vector , but the above argument holds for a normal vector of any non-zero length. Conversely, it is easily shown that if a , b , c , and d are constants and a , b , and c are not all zero, then the graph of the equation a x + b y + c z + d = 0 , {\displaystyle ax+by+cz+d=0,} is a plane having the vector n = ( a , b , c ) as a normal. [ 5 ] This familiar equation for a plane is called the general form of the equation of the plane or just the plane equation . [ 6 ] Thus for example a regression equation of the form y = d + ax + cz (with b = −1 ) establishes a best-fit plane in three-dimensional space when there are two explanatory variables. Alternatively, a plane may be described parametrically as the set of all points of the form r = r 0 + s v + t w , {\displaystyle {\boldsymbol {r}}={\boldsymbol {r}}_{0}+s{\boldsymbol {v}}+t{\boldsymbol {w}},} where s and t range over all real numbers, v and w are given linearly independent vectors defining the plane, and r 0 is the vector representing the position of an arbitrary (but fixed) point on the plane. The vectors v and w can be visualized as vectors starting at r 0 and pointing in different directions along the plane. The vectors v and w can be perpendicular , but cannot be parallel. Let p 1 = ( x 1 , y 1 , z 1 ) , p 2 = ( x 2 , y 2 , z 2 ) , and p 3 = ( x 3 , y 3 , z 3 ) be non-collinear points. The plane passing through p 1 , p 2 , and p 3 can be described as the set of all points ( x , y , z ) that satisfy the following determinant equations: | x − x 1 y − y 1 z − z 1 x 2 − x 1 y 2 − y 1 z 2 − z 1 x 3 − x 1 y 3 − y 1 z 3 − z 1 | = | x − x 1 y − y 1 z − z 1 x − x 2 y − y 2 z − z 2 x − x 3 y − y 3 z − z 3 | = 0. {\displaystyle {\begin{vmatrix}x-x_{1}&y-y_{1}&z-z_{1}\\x_{2}-x_{1}&y_{2}-y_{1}&z_{2}-z_{1}\\x_{3}-x_{1}&y_{3}-y_{1}&z_{3}-z_{1}\end{vmatrix}}={\begin{vmatrix}x-x_{1}&y-y_{1}&z-z_{1}\\x-x_{2}&y-y_{2}&z-z_{2}\\x-x_{3}&y-y_{3}&z-z_{3}\end{vmatrix}}=0.} To describe the plane by an equation of the form a x + b y + c z + d = 0 {\displaystyle ax+by+cz+d=0} , solve the following system of equations: a x 1 + b y 1 + c z 1 + d = 0 {\displaystyle ax_{1}+by_{1}+cz_{1}+d=0} a x 2 + b y 2 + c z 2 + d = 0 {\displaystyle ax_{2}+by_{2}+cz_{2}+d=0} a x 3 + b y 3 + c z 3 + d = 0. {\displaystyle ax_{3}+by_{3}+cz_{3}+d=0.} This system can be solved using Cramer's rule and basic matrix manipulations. Let D = | x 1 y 1 z 1 x 2 y 2 z 2 x 3 y 3 z 3 | . {\displaystyle D={\begin{vmatrix}x_{1}&y_{1}&z_{1}\\x_{2}&y_{2}&z_{2}\\x_{3}&y_{3}&z_{3}\end{vmatrix}}.} If D is non-zero (so for planes not through the origin) the values for a , b and c can be calculated as follows: a = − d D | 1 y 1 z 1 1 y 2 z 2 1 y 3 z 3 | {\displaystyle a={\frac {-d}{D}}{\begin{vmatrix}1&y_{1}&z_{1}\\1&y_{2}&z_{2}\\1&y_{3}&z_{3}\end{vmatrix}}} b = − d D | x 1 1 z 1 x 2 1 z 2 x 3 1 z 3 | {\displaystyle b={\frac {-d}{D}}{\begin{vmatrix}x_{1}&1&z_{1}\\x_{2}&1&z_{2}\\x_{3}&1&z_{3}\end{vmatrix}}} c = − d D | x 1 y 1 1 x 2 y 2 1 x 3 y 3 1 | . 
{\displaystyle c={\frac {-d}{D}}{\begin{vmatrix}x_{1}&y_{1}&1\\x_{2}&y_{2}&1\\x_{3}&y_{3}&1\end{vmatrix}}.} These equations are parametric in d . Setting d equal to any non-zero number and substituting it into these equations will yield one solution set. This plane can also be described by the § Point–normal form and general form of the equation of a plane prescription above. A suitable normal vector is given by the cross product n = ( p 2 − p 1 ) × ( p 3 − p 1 ) , {\displaystyle {\boldsymbol {n}}=({\boldsymbol {p}}_{2}-{\boldsymbol {p}}_{1})\times ({\boldsymbol {p}}_{3}-{\boldsymbol {p}}_{1}),} and the point r 0 can be taken to be any of the given points p 1 , p 2 or p 3 [ 7 ] (or any other point in the plane). In Euclidean space , the distance from a point to a plane is the distance between a given point and its orthogonal projection on the plane, the perpendicular distance to the nearest point on the plane. It can be found starting with a change of variables that moves the origin to coincide with the given point then finding the point on the shifted plane a x + b y + c z = d {\displaystyle ax+by+cz=d} that is closest to the origin . The resulting point has Cartesian coordinates ( x , y , z ) {\displaystyle (x,y,z)} : In analytic geometry , the intersection of a line and a plane in three-dimensional space can be the empty set , a point , or a line. It is the entire line if that line is embedded in the plane, and is the empty set if the line is parallel to the plane but outside it. Otherwise, the line cuts through the plane at a single point. When the intersection of a sphere and a plane is not empty or a single point, it is a circle. This can be seen as follows: Let S be a sphere with center O , P a plane which intersects S . Draw OE perpendicular to P and meeting P at E . Let A and B be any two different points in the intersection. Then AOE and BOE are right triangles with a common side, OE , and hypotenuses AO and BO equal. Therefore, the remaining sides AE and BE are equal. This proves that all points in the intersection are the same distance from the point E in the plane P , in other words all points in the intersection lie on a circle C with center E . [ 8 ] This proves that the intersection of P and S is contained in C . Note that OE is the axis of the circle. Now consider a point D of the circle C . Since C lies in P , so does D . On the other hand, the triangles AOE and DOE are right triangles with a common side, OE , and legs EA and ED equal. Therefore, the hypotenuses AO and DO are equal, and equal to the radius of S , so that D lies in S . This proves that C is contained in the intersection of P and S . As a corollary, on a sphere there is exactly one circle that can be drawn through three given points. [ 9 ] The proof can be extended to show that the points on a circle are all a common angular distance from one of its poles. [ 10 ] A plane serves as a mathematical model for many physical phenomena, such as specular reflection in a plane mirror or wavefronts in a traveling plane wave . The free surface of undisturbed liquids tends to be nearly flat (see flatness ). The flattest surface ever manufactured is a quantum-stabilized atom mirror. [ 11 ] In astronomy, various reference planes are used to define positions in orbit. Anatomical planes may be lateral ("sagittal"), frontal ("coronal") or transversal. In geology, beds (layers of sediments) often are planar. Planes are involved in different forms of imaging , such as the focal plane , picture plane , and image plane . 
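The point–normal and three-point constructions above, together with the point-to-plane distance, translate directly into code. The following sketch is not from the article (it assumes NumPy, and the numerical values are illustrative): it builds the plane through three non-collinear points from a cross-product normal and evaluates the perpendicular distance from a point to that plane.

```python
# Plane through three points via a cross-product normal, plus point-to-plane distance.
import numpy as np

def plane_from_points(p1, p2, p3):
    """Return (n, d) with the plane written as n . r + d = 0."""
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)       # normal vector; nonzero iff points are non-collinear
    d = -np.dot(n, p1)
    return n, d

def point_plane_distance(point, n, d):
    """Perpendicular distance from `point` to the plane n . r + d = 0."""
    return abs(np.dot(n, np.asarray(point, float)) + d) / np.linalg.norm(n)

n, d = plane_from_points([1, 0, 0], [0, 1, 0], [0, 0, 1])   # the plane x + y + z = 1
print(n, d)                                                 # [1. 1. 1.] -1.0
print(point_plane_distance([0, 0, 0], n, d))                # 1/sqrt(3) ~= 0.577
```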
The attitude of a lattice plane is the orientation of the line normal to the plane, [ 12 ] and is described by the plane's Miller indices . In three-space a family of planes (a series of parallel planes) can be denoted by its Miller indices ( hkl ), [ 13 ] [ 14 ] so the family of planes has an attitude common to all its constituent planes. Many features observed in geology are planes or lines, and their orientation is commonly referred to as their attitude . These attitudes are specified with two angles. For a line, these angles are called the trend and the plunge . The trend is the compass direction of the line, and the plunge is the downward angle it makes with a horizontal plane. [ 15 ] For a plane, the two angles are called its strike (angle) and its dip (angle) . A strike line is the intersection of a horizontal plane with the observed planar feature (and therefore a horizontal line), and the strike angle is the bearing of this line (that is, relative to geographic north or from magnetic north ). The dip is the angle between a horizontal plane and the observed planar feature as observed in a third vertical plane perpendicular to the strike line.
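As a rough illustration of how an attitude can be computed from a plane's normal vector, the sketch below derives strike and dip from a normal expressed in an east–north–up frame. Conventions differ between texts (for example, whether the right-hand rule or the dip direction is reported), so the choices made here are assumptions for the example rather than a standard prescribed by the sources above:

```python
import numpy as np

def strike_and_dip(normal_enu):
    """Strike and dip (degrees) of a plane from its normal vector.

    Assumes an east-north-up frame, azimuths measured clockwise from north,
    and the right-hand rule (dip direction = strike + 90 degrees).
    """
    n = np.asarray(normal_enu, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:                      # use the upward-pointing normal
        n = -n
    dip = np.degrees(np.arccos(n[2]))                    # 0 for a horizontal plane
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360   # azimuth of steepest descent
    strike = (dip_dir - 90) % 360
    return strike, dip

# A plane dipping 30 degrees due east has strike 0 (north) under the right-hand rule.
print(strike_and_dip([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))]))
# -> approximately (0.0, 30.0)
```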
https://en.wikipedia.org/wiki/Plane_orientation
The plane strain compression test is a specialized test used on materials ranging from metals [ 1 ] to soils. [ 2 ] One variation of the test is also known as the Watts-Ford test. It is an engineering test, and is a particularly specialized way of determining some of the material characteristics of the metal being tested; its specialization can be summarized by this quote: The test is useful when the sheet pieces are too small for a tensile test or a balanced biaxial test. It can give stress-strain curves up to considerably higher strains than tensile tests. [ 3 ] Plane-strain compression testing is typically used for measuring mechanical properties and for exploring microstructure development in the course of thermomechanical treatment. [ 4 ] During the test the specimen is placed between the punches and the constraint plates. When the upper punch is pushed down during the test, the specimen is extended in the horizontal direction. Friction between the tool and the specimen can be reduced by applying lubricants, such as graphite, MoS2, glass or PTFE (Teflon). [ 5 ] The testing essentially consists of a thin metal bar being compressed by two equally wide compressive strips, which are located on opposite sides of the thin bar. Then, over a range of increasing loads on the bar, the compressive forces lead to the thickness of the metal bar being reduced. This change of thickness is then measured sequentially after each loading, and after some mathematics a stress-strain curve can be plotted. The advantages of the Watts-Ford test are that it is convenient for testing thin sheets or strips, it is similar to a rolling process (in manufacturing analyses), frictional effects may be minimized, there is no 'barrelling' as would occur in a cylindrical compression test, and the plane strain deformation eases the analysis. Stress-strain curve The stress-strain curve is the relationship between the stress (force per unit area) and strain (the resulting compression or stretching, known as deformation) that a particular material displays; stress–strain curves of various materials differ widely, and different tensile tests conducted on the same material yield different results depending upon the temperature of the specimen and the speed of the loading. When performing Watts-Ford tests, temperatures of the metal specimens typically range from 800 to 1100 °C, with strain rates of 0.01–10 s−1. [ 6 ] Pressure The average pressure on a unit of area of the contact surface between the punch and the specimen is expressed as P = F/(wb), where F is the force, w is the punch width and b is the specimen width. [ 4 ]
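As a small worked illustration of the quantities just described, the sketch below evaluates the average punch pressure P = F/(wb) and a logarithmic (true) compressive strain from the measured thickness; the numbers, function names and the use of true strain are assumptions made for the example, not values from the cited tests:

```python
import math

def average_pressure(force_n, punch_width_m, specimen_width_m):
    """Average pressure P = F / (w * b) on the punch-specimen contact area."""
    return force_n / (punch_width_m * specimen_width_m)

def true_strain(h0_m, h_m):
    """Logarithmic compressive strain from initial and current thickness."""
    return math.log(h0_m / h_m)

# Hypothetical reading: 50 kN on a 10 mm wide punch over a 40 mm wide strip,
# with the thickness reduced from 2.0 mm to 1.6 mm.
print(average_pressure(50e3, 0.010, 0.040))  # 1.25e8 Pa = 125 MPa
print(true_strain(0.0020, 0.0016))           # ≈ 0.223
```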
https://en.wikipedia.org/wiki/Plane_strain_compression_test
In continuum mechanics, a material is said to be under plane stress if the stress vector is zero across a particular plane. When that situation occurs over an entire element of a structure, as is often the case for thin plates, the stress analysis is considerably simplified, as the stress state can be represented by a tensor of dimension 2 (representable as a 2×2 matrix rather than 3×3). [ 1 ] A related notion, plane strain, is often applicable to very thick members. Plane stress typically occurs in thin flat plates that are acted upon only by load forces that are parallel to them. In certain situations, a gently curved thin plate may also be assumed to have plane stress for the purpose of stress analysis. This is the case, for example, of a thin-walled cylinder filled with a fluid under pressure. In such cases, stress components perpendicular to the plate are negligible compared to those parallel to it. [ 1 ] In other situations, however, the bending stress of a thin plate cannot be neglected. One can still simplify the analysis by using a two-dimensional domain, but the plane stress tensor at each point must be complemented with bending terms. Mathematically, the stress at some point in the material is a plane stress if one of the three principal stresses (the eigenvalues of the Cauchy stress tensor ) is zero. That is, there is a Cartesian coordinate system in which the stress tensor has the form {\displaystyle \sigma ={\begin{bmatrix}\sigma _{11}&\sigma _{12}&0\\\sigma _{21}&\sigma _{22}&0\\0&0&0\end{bmatrix}}.} For example, consider a rectangular block of material measuring 10, 40 and 5 cm along the x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} directions, that is being stretched in the x {\displaystyle x} direction and compressed in the y {\displaystyle y} direction, by pairs of opposite forces with magnitudes 10 N and 20 N, respectively, uniformly distributed over the corresponding faces. The stress tensor inside the block will be {\displaystyle \sigma ={\begin{bmatrix}500&0&0\\0&-4000&0\\0&0&0\end{bmatrix}}{\text{ Pa}}.} More generally, if one chooses the first two coordinate axes arbitrarily but perpendicular to the direction of zero stress, the stress tensor will have the form {\displaystyle \sigma ={\begin{bmatrix}\sigma _{11}&\sigma _{12}&0\\\sigma _{21}&\sigma _{22}&0\\0&0&0\end{bmatrix}}} and can therefore be represented by a 2 × 2 matrix, {\displaystyle \sigma ={\begin{bmatrix}\sigma _{11}&\sigma _{12}\\\sigma _{21}&\sigma _{22}\end{bmatrix}}.} In certain cases, the plane stress model can be used in the analysis of gently curved surfaces. For example, consider a thin-walled cylinder subjected to an axial compressive load uniformly distributed along its rim, and filled with a pressurized fluid. The internal pressure will generate a reactive hoop stress on the wall, a normal tensile stress directed perpendicular to the cylinder axis and tangential to its surface. The cylinder can be conceptually unrolled and analyzed as a flat thin rectangular plate subjected to tensile load in one direction and compressive load in the other direction, both parallel to the plate. If one dimension is very large compared to the others, the principal strain in the direction of the longest dimension is constrained and can be assumed as constant, meaning that there will be effectively zero strain along it, hence yielding a plane strain condition (Figure 7.2). In this case, though all principal stresses are non-zero, the principal stress in the direction of the longest dimension can be disregarded for calculations, allowing a two-dimensional analysis of stresses, e.g. a dam analyzed at a cross section loaded by the reservoir. The corresponding strain tensor is {\displaystyle \varepsilon ={\begin{bmatrix}\varepsilon _{11}&\varepsilon _{12}&0\\\varepsilon _{21}&\varepsilon _{22}&0\\0&0&0\end{bmatrix}}} and the corresponding stress tensor is {\displaystyle \sigma ={\begin{bmatrix}\sigma _{11}&\sigma _{12}&0\\\sigma _{21}&\sigma _{22}&0\\0&0&\sigma _{33}\end{bmatrix}},} in which the non-zero σ 33 {\displaystyle \sigma _{33}\,\!} term arises from the Poisson's effect . 
However, this term can be temporarily removed from the stress analysis to leave only the in-plane terms, effectively reducing the analysis to two dimensions. [ 1 ] Consider a point P {\displaystyle P\,\!} in a continuum under a state of plane stress, or plane strain, with stress components ( σ x , σ y , τ x y ) {\displaystyle (\sigma _{x},\sigma _{y},\tau _{xy})\,\!} and all other stress components equal to zero (Figure 8.1). From static equilibrium of an infinitesimal material element at P {\displaystyle P\,\!} (Figure 8.2), the normal stress σ n {\displaystyle \sigma _{\mathrm {n} }\,\!} and the shear stress τ n {\displaystyle \tau _{\mathrm {n} }\,\!} on any plane perpendicular to the x {\displaystyle x\,\!} - y {\displaystyle y\,\!} plane passing through P {\displaystyle P\,\!} with a unit vector n {\displaystyle \mathbf {n} \,\!} making an angle of θ {\displaystyle \theta \,\!} with the horizontal, i.e. cos ⁡ θ {\displaystyle \cos \theta \,\!} is the direction cosine in the x {\displaystyle x\,\!} direction, is given by: These equations indicate that in a plane stress or plane strain condition, one can determine the stress components at a point on all directions, i.e. as a function of θ {\displaystyle \theta \,\!} , if one knows the stress components ( σ x , σ y , τ x y ) {\displaystyle (\sigma _{x},\sigma _{y},\tau _{xy})\,\!} on any two perpendicular directions at that point. It is important to remember that we are considering a unit area of the infinitesimal element in the direction parallel to the y {\displaystyle y\,\!} - z {\displaystyle z\,\!} plane. The principal directions (Figure 8.3), i.e., orientation of the planes where the shear stress components are zero, can be obtained by making the previous equation for the shear stress τ n {\displaystyle \tau _{\mathrm {n} }\,\!} equal to zero. Thus we have: and we obtain This equation defines two values θ p {\displaystyle \theta _{\mathrm {p} }\,\!} which are 90 ∘ {\displaystyle 90^{\circ }\,\!} apart (Figure 8.3). The same result can be obtained by finding the angle θ {\displaystyle \theta \,\!} which makes the normal stress σ n {\displaystyle \sigma _{\mathrm {n} }\,\!} a maximum, i.e. d σ n d θ = 0 {\displaystyle {\frac {d\sigma _{\mathrm {n} }}{d\theta }}=0\,\!} The principal stresses σ 1 {\displaystyle \sigma _{1}\,\!} and σ 2 {\displaystyle \sigma _{2}\,\!} , or minimum and maximum normal stresses σ m a x {\displaystyle \sigma _{\mathrm {max} }\,\!} and σ m i n {\displaystyle \sigma _{\mathrm {min} }\,\!} , respectively, can then be obtained by replacing both values of θ p {\displaystyle \theta _{\mathrm {p} }\,\!} into the previous equation for σ n {\displaystyle \sigma _{\mathrm {n} }\,\!} . This can be achieved by rearranging the equations for σ n {\displaystyle \sigma _{\mathrm {n} }\,\!} and τ n {\displaystyle \tau _{\mathrm {n} }\,\!} , first transposing the first term in the first equation and squaring both sides of each of the equations then adding them. Thus we have where which is the equation of a circle of radius R {\displaystyle R\,\!} centered at a point with coordinates [ σ a v g , 0 ] {\displaystyle [\sigma _{\mathrm {avg} },0]\,\!} , called Mohr's circle . 
But knowing that for the principal stresses the shear stress τ n = 0 {\displaystyle \tau _{\mathrm {n} }=0\,\!} , then we obtain from this equation: When τ x y = 0 {\displaystyle \tau _{xy}=0\,\!} the infinitesimal element is oriented in the direction of the principal planes, thus the stresses acting on the rectangular element are principal stresses: σ x = σ 1 {\displaystyle \sigma _{x}=\sigma _{1}\,\!} and σ y = σ 2 {\displaystyle \sigma _{y}=\sigma _{2}\,\!} . Then the normal stress σ n {\displaystyle \sigma _{\mathrm {n} }\,\!} and shear stress τ n {\displaystyle \tau _{\mathrm {n} }\,\!} as a function of the principal stresses can be determined by making τ x y = 0 {\displaystyle \tau _{xy}=0\,\!} . Thus we have Then the maximum shear stress τ m a x {\displaystyle \tau _{\mathrm {max} }\,\!} occurs when sin ⁡ 2 θ = 1 {\displaystyle \sin 2\theta =1\,\!} , i.e. θ = 45 ∘ {\displaystyle \theta =45^{\circ }\,\!} (Figure 8.3): Then the minimum shear stress τ m i n {\displaystyle \tau _{\mathrm {min} }\,\!} occurs when sin ⁡ 2 θ = − 1 {\displaystyle \sin 2\theta =-1\,\!} , i.e. θ = 135 ∘ {\displaystyle \theta =135^{\circ }\,\!} (Figure 8.3):
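The transformation relations outlined above can be collected into a short routine. One common sign convention is used below (normal stress positive in tension, shear transformed with a leading minus sign); other references, possibly including the figures cited in this section, adopt the opposite shear sign, so treat this as an illustrative sketch rather than the section's exact convention:

```python
import math

def plane_stress_transform(sx, sy, txy, theta_deg):
    """Normal and shear stress on a plane whose normal is rotated theta
    degrees from the x-axis (one common sign convention)."""
    t = math.radians(theta_deg)
    sn = (sx + sy) / 2 + (sx - sy) / 2 * math.cos(2 * t) + txy * math.sin(2 * t)
    tn = -(sx - sy) / 2 * math.sin(2 * t) + txy * math.cos(2 * t)
    return sn, tn

def principal_stresses(sx, sy, txy):
    """Principal stresses, maximum in-plane shear, and the principal angle."""
    s_avg = (sx + sy) / 2
    R = math.hypot((sx - sy) / 2, txy)         # radius of Mohr's circle
    theta_p = 0.5 * math.degrees(math.atan2(2 * txy, sx - sy))
    return s_avg + R, s_avg - R, R, theta_p    # sigma1, sigma2, tau_max, theta_p

# Example state: sigma_x = 80 MPa, sigma_y = -20 MPa, tau_xy = 30 MPa
s1, s2, tau_max, theta_p = principal_stresses(80.0, -20.0, 30.0)
print(s1, s2, tau_max, theta_p)    # ≈ 88.3, -28.3, 58.3 MPa at ≈ 15.5 degrees
print(plane_stress_transform(80.0, -20.0, 30.0, theta_p))   # shear ≈ 0 on this plane
```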
https://en.wikipedia.org/wiki/Plane_stress
A plane table ( plain table prior to 1830) [ 1 ] is a device used in surveying , site mapping, exploration mapping, coastal navigation mapping, and related disciplines to provide a solid and level surface on which to make field drawings, charts and maps. The early use of the name plain table reflected its simplicity and plainness rather than its flatness. [ 2 ] " Plane " refers to the table being both flat and levelled ( horizontal ). The earliest mention of a plane table dates to 1551 in Abel Foullon 's "Usage et description de l'holomètre" , published in Paris . [ 3 ] However, since Foullon's description was of a complete, fully developed instrument, it must have been invented earlier. [ 2 ] A brief description was also added to the 1591 edition of Digge's Pantometria . [ 3 ] The first mention of the device in English was by Cyprian Lucar in 1590. [ 1 ] Some have credited Johann Richter, also known as Johannes Praetorius , [ 4 ] a Nuremberg mathematician, in 1610 [ 5 ] with the first plane table, but this appears to be incorrect. The plane table became a popular instrument for surveying . [ 2 ] Its use was widely taught. Some considered it a substandard instrument compared to other devices such as the theodolite , since it was relatively easy to use. [ 1 ] By allowing the use of graphical methods rather than mathematical calculations, it could be used by those with less education than other instruments. The addition of a camera to the plane table, as was done from 1890 by Sebastian Finsterwalder in conjunction with a phototheodolite , established photogrammetry in spatial and temporal surveying. A plane table consists of a smooth table surface mounted on a sturdy base. The connection between the table top and the base permits one to level the table precisely, using bubble levels , in a horizontal plane . The base, a tripod, is designed to support the table over a specific point on land. By adjusting the length of the legs, one can bring the table level regardless of the roughness of the terrain. In use, a plane table is set over a point and brought to precise horizontal level. A drawing sheet is attached to the surface and an alidade is used to sight objects of interest. The alidade, in modern examples of the instrument a rule with a telescopic sight , can then be used to construct a line on the drawing that is in the direction of the object of interest. By using the alidade as a surveying level , information on the topography of the site can be directly recorded on the drawing as elevations. Distances to the objects can be measured directly or by the use of stadia marks in the telescope of the alidade.
https://en.wikipedia.org/wiki/Plane_table
In physics , a plane wave is a special case of a wave or field : a physical quantity whose value, at any given moment, is constant through any plane that is perpendicular to a fixed direction in space. [ 1 ] For any position x → {\displaystyle {\vec {x}}} in space and any time t {\displaystyle t} , the value of such a field can be written as F ( x → , t ) = G ( x → ⋅ n → , t ) , {\displaystyle F({\vec {x}},t)=G({\vec {x}}\cdot {\vec {n}},t),} where n → {\displaystyle {\vec {n}}} is a unit-length vector , and G ( d , t ) {\displaystyle G(d,t)} is a function that gives the field's value as dependent on only two real parameters: the time t {\displaystyle t} , and the scalar-valued displacement d = x → ⋅ n → {\displaystyle d={\vec {x}}\cdot {\vec {n}}} of the point x → {\displaystyle {\vec {x}}} along the direction n → {\displaystyle {\vec {n}}} . The displacement is constant over each plane perpendicular to n → {\displaystyle {\vec {n}}} . The values of the field F {\displaystyle F} may be scalars, vectors, or any other physical or mathematical quantity. They can be complex numbers , as in a complex exponential plane wave . When the values of F {\displaystyle F} are vectors, the wave is said to be a longitudinal wave if the vectors are always collinear with the vector n → {\displaystyle {\vec {n}}} , and a transverse wave if they are always orthogonal (perpendicular) to it. Often the term "plane wave" refers specifically to a traveling plane wave , whose evolution in time can be described as simple translation of the field at a constant wave speed c {\displaystyle c} along the direction perpendicular to the wavefronts. Such a field can be written as F ( x → , t ) = G ( x → ⋅ n → − c t ) {\displaystyle F({\vec {x}},t)=G\left({\vec {x}}\cdot {\vec {n}}-ct\right)\,} where G ( u ) {\displaystyle G(u)} is now a function of a single real parameter u = d − c t {\displaystyle u=d-ct} , that describes the "profile" of the wave, namely the value of the field at time t = 0 {\displaystyle t=0} , for each displacement d = x → ⋅ n → {\displaystyle d={\vec {x}}\cdot {\vec {n}}} . In that case, n → {\displaystyle {\vec {n}}} is called the direction of propagation . For each displacement d {\displaystyle d} , the moving plane perpendicular to n → {\displaystyle {\vec {n}}} at distance d + c t {\displaystyle d+ct} from the origin is called a " wavefront ". This plane travels along the direction of propagation n → {\displaystyle {\vec {n}}} with velocity c {\displaystyle c} ; and the value of the field is then the same, and constant in time, at every one of its points. [ 2 ] The term is also used, even more specifically, to mean a "monochromatic" or sinusoidal plane wave : a travelling plane wave whose profile G ( u ) {\displaystyle G(u)} is a sinusoidal function. That is, F ( x → , t ) = A sin ⁡ ( 2 π f ( x → ⋅ n → − c t ) + φ ) {\displaystyle F({\vec {x}},t)=A\sin \left(2\pi f({\vec {x}}\cdot {\vec {n}}-ct)+\varphi \right)} The parameter A {\displaystyle A} , which may be a scalar or a vector, is called the amplitude of the wave; the scalar coefficient f {\displaystyle f} is its "spatial frequency"; and the scalar φ {\displaystyle \varphi } is its " phase shift ". A true plane wave cannot physically exist, because it would have to fill all space. Nevertheless, the plane wave model is important and widely used in physics. 
The waves emitted by any source with finite extent into a large homogeneous region of space can be well approximated by plane waves when viewed over any part of that region that is sufficiently small compared to its distance from the source. That is the case, for example, of the light waves from a distant star that arrive at a telescope. A standing wave is a field whose value can be expressed as the product of two functions, one depending only on position, the other only on time. A plane standing wave , in particular, can be expressed as F ( x → , t ) = G ( x → ⋅ n → ) S ( t ) {\displaystyle F({\vec {x}},t)=G({\vec {x}}\cdot {\vec {n}})\,S(t)} where G {\displaystyle G} is a function of one scalar parameter (the displacement d = x → ⋅ n → {\displaystyle d={\vec {x}}\cdot {\vec {n}}} ) with scalar or vector values, and S {\displaystyle S} is a scalar function of time. This representation is not unique, since the same field values are obtained if S {\displaystyle S} and G {\displaystyle G} are scaled by reciprocal factors. If | S ( t ) | {\displaystyle \left|S(t)\right|} is bounded in the time interval of interest (which is usually the case in physical contexts), S {\displaystyle S} and G {\displaystyle G} can be scaled so that the maximum value of | S ( t ) | {\displaystyle \left|S(t)\right|} is 1. Then | G ( x → ⋅ n → ) | {\displaystyle \left|G({\vec {x}}\cdot {\vec {n}})\right|} will be the maximum field magnitude seen at the point x → {\displaystyle {\vec {x}}} . A plane wave can be studied by ignoring the directions perpendicular to the direction vector n → {\displaystyle {\vec {n}}} ; that is, by considering the function G ( z , t ) = F ( z n → , t ) {\displaystyle G(z,t)=F(z{\vec {n}},t)} as a wave in a one-dimensional medium. Any local operator , linear or not, applied to a plane wave yields a plane wave. Any linear combination of plane waves with the same normal vector n → {\displaystyle {\vec {n}}} is also a plane wave. For a scalar plane wave in two or three dimensions, the gradient of the field is always collinear with the direction n → {\displaystyle {\vec {n}}} ; specifically, ∇ F ( x → , t ) = n → ∂ 1 G ( x → ⋅ n → , t ) {\displaystyle \nabla F({\vec {x}},t)={\vec {n}}\partial _{1}G({\vec {x}}\cdot {\vec {n}},t)} , where ∂ 1 G {\displaystyle \partial _{1}G} is the partial derivative of G {\displaystyle G} with respect to the first argument. The divergence of a vector-valued plane wave depends only on the projection of the vector G ( d , t ) {\displaystyle G(d,t)} in the direction n → {\displaystyle {\vec {n}}} . Specifically, ∇ ⋅ F → ( x → , t ) = n → ⋅ ∂ 1 G ( x → ⋅ n → , t ) {\displaystyle \nabla \cdot {\vec {F}}({\vec {x}},t)\;=\;{\vec {n}}\cdot \partial _{1}G({\vec {x}}\cdot {\vec {n}},t)} In particular, a transverse planar wave satisfies ∇ ⋅ F → = 0 {\displaystyle \nabla \cdot {\vec {F}}=0} for all x → {\displaystyle {\vec {x}}} and t {\displaystyle t} .
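A minimal numerical sketch of a monochromatic travelling plane wave may help make the definitions concrete. The function below evaluates F(x, t) = A sin(2πf(x·n − ct) + φ) and checks two of the properties stated above: the value is unchanged by any displacement perpendicular to n, and it repeats after one period. All names and numbers are invented for the example:

```python
import numpy as np

def plane_wave(x, t, n, A=1.0, f=2.0, c=1.0, phi=0.0):
    """Sinusoidal traveling plane wave F(x, t) = A sin(2*pi*f*(x.n - c*t) + phi).

    x : array of shape (..., 3); n : propagation direction (normalized here).
    """
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    d = np.asarray(x, dtype=float) @ n        # scalar displacement x . n
    return A * np.sin(2 * np.pi * f * (d - c * t) + phi)

n = [1.0, 1.0, 0.0]
p = np.array([0.3, -0.1, 2.0])

# Adding any vector perpendicular to n leaves the field value unchanged:
q = p + np.array([1.0, -1.0, 5.0])            # (1, -1, 5) is orthogonal to (1, 1, 0)
print(np.isclose(plane_wave(p, 0.25, n), plane_wave(q, 0.25, n)))   # True

# Advancing time by one period T = 1/f (with c = 1) returns the same value:
print(np.isclose(plane_wave(p, 0.0, n), plane_wave(p, 0.5, n)))     # True
```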
https://en.wikipedia.org/wiki/Plane_wave
PlanetQuest is NASA 's education and public outreach program centered on the science and technology of NASA's long-term search for habitable planets beyond the Solar System . Begun in January 2002 and based at NASA's Jet Propulsion Laboratory , PlanetQuest is funded by the Exoplanet Exploration Program, a long-term suite of NASA missions designed to detect and characterize Earth-like planets. The main components of Navigator include two ground-based and two space-based missions.
https://en.wikipedia.org/wiki/PlanetQuest
Planet Hunters is a citizen science project to find exoplanets using human eyes. It does this by having users analyze data from the NASA Kepler space telescope and the NASA Transiting Exoplanet Survey Satellite . [ 1 ] [ 2 ] It was launched by a team led by Debra Fischer at Yale University , [ 3 ] as part of the Zooniverse project. [ 4 ] The project was launched on December 16, 2010, after the first Data Release of Kepler data as the Planet Hunters Project. [ 5 ] 300,000 volunteers participated in the project and the project team published 8 scientific papers. On December 14, 2014, the project was re-launched as Planet Hunters 2.0, with an improved website and considering that the volunteers will look at K2 data. [ 6 ] As of November 2018 Planet Hunters had identified 50% of the known planets with an orbital period larger than two years. [ 7 ] In 2017 the project Exoplanet Explorers was launched. It was another planet hunting project at Zooniverse and discovered the system K2-138 and the exoplanet K2-288Bb . This project was launched during the television program Stargazing Live and the discovery of the K2-138 system was announced during the program. [ 8 ] On December 6, 2018, the project Planet Hunters TESS (PHT) was launched and is led by astronomer Nora Eisner. This project uses data from the Transiting Exoplanet Survey Satellite (TESS) and is currently active (as of March 2023). [ 2 ] This project discovered the Saturn-sized exoplanet TOI-813 b [ 9 ] [ 10 ] and many more. Until March 2023 PHT discovered 284 exoplanet candidates (e.g. TIC 35021200.01 [ 11 ] ), 15 confirmed exoplanets (e.g. TOI-5174 b [ 12 ] [ 13 ] ) and countless eclipsing binaries . All discovered exoplanet candidates are uploaded to ExoFOP by Nora Eisner or sometimes by another project member (see TOI and CTOI list provided by ExoFOP [ 14 ] ). All exoplanet candidates are manually checked by multiple project members (volunteers and moderators) and need to pass different tests before they are accepted by Nora Eisner and uploaded to ExoFOP. But it is possible that not all PHT planet candidates become real (confirmed) exoplanets. Some of them may be grazing eclipsing binaries . On October 19, 2021, the project Planet Hunters: NGTS was launched. It uses a dataset from the Next Generation Transit Survey to find transiting planets. It is the first Planet Hunters project that uses data from a ground-based telescope. The project looks at candidates that were already automatically filtered, similar to the Exoplanet Explorers project. [ 15 ] The project found four candidate planets so far. [ 16 ] In the pre-print five candidates are presented. This includes a giant planet candidate around TIC-165227846 , a mid-M dwarf. [ 17 ] This candidate was independently detected by Byrant et al. 2023 [ 18 ] and if confirmed could represent the lowest-mass star to host a close-in giant. [ 17 ] The Planet Hunters project exploits the fact that humans are better at recognising visual patterns than computers. The website displays an image of data collected by the NASA Kepler Space Mission and asks human users (referred to as "Citizen Scientists") to look at the data and see how the brightness of a star changes over time. This brightness data is represented as a graph and referred to as a star's light curve . Such curves are helpful in discovering extrasolar planets due to the brightness of a star decreasing when a planet passes in front of it, as seen from Earth. 
[ 19 ] Periods of reduced brightness can thus provide evidence of planetary transits , but may also be caused by errors in recording, projection, or other phenomena. [ 20 ] From time to time, the project will observe eclipsing binary stars. Essentially these are stars that orbit each other. Much as a planet can interrupt the brightness of a star, another star can too. There is a noticeable difference in the light curves: the binary appears as a large transit (a large dip) and a smaller transit (a smaller dip). [ 21 ] [ 22 ] As of December 2017, there are a total of 621 multiplanet systems , or stars that contain at least two confirmed planets. [ 23 ] In a multiplanet system plot, there are many different patterns of transit. Due to the different sizes of planets, the transits dip down to different points. [ 24 ] Stellar flares are observed when there is an explosion on the surface of a star. This will cause the star's brightness to shoot up considerably, with a steep drop off. [ 25 ] So far, over 12 million observations have been analyzed. Out of those, 34 candidate planets had been found as of July 2012. [ 26 ] In October 2012 it was announced that two volunteers from the Planet Hunters initiative had discovered a novel Neptune -like planet which is part of a four-star double binary system, orbiting one of the pairs of stars while the other pair of stars orbits at a distance of around 1000 AU. This is the first planet discovered to have a stable orbit in such a complex stellar environment. The system is located 7200 light years away, [ 27 ] and the new planet has been designated PH1b , short for Planet Hunters 1 b. [ 28 ] [ 29 ] The project maintains a table of its discoveries, colour-coded by system type (circumbinary planets, planets orbiting one star of a multiple-star system, and host stars with two or more planets); values for the host stars are acquired via SIMBAD [ 30 ] and otherwise are cited, and the apparent magnitudes given are V magnitudes. Planet Hunters TESS (PHT) publishes Community TESS Objects of Interest (CTOIs) at ExoFOP, which can be promoted into TESS Objects of Interest (TOIs). Of the 151 CTOIs submitted by Planet Hunters researchers, 81 were promoted to TOIs (as of September 2022). [ 60 ] The following exoplanets first submitted as PHT CTOIs were later researched by other teams (some examples): TOI-1759 b , [ 61 ] [ 62 ] [ 63 ] TOI-1899 b , [ 64 ] TOI-2180 b , [ 65 ] TOI-4562 b [ 66 ] [ 67 ] and HD 148193 b (TOI-1836). [ 68 ] [ 69 ] In September 2013 the project discovered the unusual cataclysmic variable KIC 9406652 . [ 70 ] In April 2014 the unusually active SU Ursae Majoris-type dwarf nova GALEX J194419.33+491257.0 was discovered. This cataclysmic variable was discovered as a background dwarf nova of KIC 11412044. [ 71 ] In January 2016 unusual dips in KIC 8462852 were announced. The unusual light curve of KIC 8462852 (also known as Boyajian's Star) [ 72 ] has engendered speculation that an alien civilization's Dyson sphere [ 73 ] [ 74 ] is responsible. [ 75 ] In June 2016 the project found 32 likely eclipsing binaries . The work also announced likely exoplanets. [ 76 ] In February 2018 the first transiting exocomets were discovered. The dips were found by one of the authors, a Planet Hunters participant, in a visual search over five months of the complete Q1-Q17 Kepler light curve archive spanning 201250 target stars. 
[ 77 ] [ 78 ] In February 2022 Planet Hunters:TESS announced the discovery of BD+61 2536 (TIC 470710327), a massive hierarchical triple star system. The system is predicted to undergo multiple phases of mass transfer in the future, and likely end up as a double neutron star gravitational wave progenitor or an exotic Thorne-Zytkow object . [ 79 ]
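The transit idea that underlies all of these projects, a planet crossing its star produces a temporary dip in the light curve, can be illustrated with a deliberately simplified sketch. Real pipelines (and the visual inspection done by Planet Hunters volunteers) are far more sophisticated; the threshold rule, synthetic data and function names below are invented for the example:

```python
import numpy as np

def find_dips(time, flux, threshold_sigma=3.0):
    """Flag points whose normalized flux drops well below the overall scatter.

    A toy stand-in for transit detection: returns the indices where the flux
    lies more than `threshold_sigma` standard deviations below the median.
    """
    flux = np.asarray(flux, dtype=float)
    median = np.median(flux)
    scatter = np.std(flux)
    return np.where(flux < median - threshold_sigma * scatter)[0]

# Fake light curve: a flat star with noise, plus a 1% deep transit lasting 3 hours.
rng = np.random.default_rng(0)
time = np.arange(0, 48, 0.5)                 # 48 hours, one point every 30 minutes
flux = 1.0 + rng.normal(0, 0.001, time.size)
flux[(time > 20) & (time < 23)] -= 0.01      # the transit dip

dip_indices = find_dips(time, flux)
print(time[dip_indices])                     # points inside the 20-23 hour window
```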
https://en.wikipedia.org/wiki/Planet_Hunters
Planet Patrol is a NASA citizen science project available in Zooniverse and aimed at discovering new exoplanets with data from the TESS telescope. [ 1 ] The project is built on results produced by a computer algorithm. The algorithm measures the center-of-light of the images and automatically compares it to the catalog position of the corresponding star. [ 2 ] The main difference with Planet Hunters is that Planet Patrol looks at objects that represent a detected planet candidate in TESS data, whereas Planet Hunters searches through all the stars in the TESS databases and asks humans to find such candidates. [ 3 ] As of September 2020, there are 1370 volunteers and 72,938 classifications have been done. [ 4 ] The images representing a possible exoplanet transit show a single bright source near the middle of the image with a dot at the center. Two papers were published by Planet Patrol, vetting 1998 TESS Objects of Interest (TOIs). Of these TOIs 1461 passed as planet candidates, 286 were ruled out as false-positive and 251 were labelled as potential false-positive. The resulting catalog is named TESS Triple 9 (TT9), named after the number of vetted TOIs in each paper being 999. [ 5 ] [ 6 ] The second TT9 paper describes interesting planet candidates, such as TIC 396720998.01 (TOI 709.01), a sub- Jovian around a hot subdwarf , named LB 1721 . [ 7 ] The planet candidate produces a V-shaped transit , which is different from the U-shaped transits that most planets produce. [ 6 ] TOI 709.01 was previously classified as a false-positive by TRICERATOPS, another vetting tool. Because this tool uses pre-existing knowledge of its host star and transit shape, [ 8 ] this tool might have been confused by the small size of the host star and the resulting V-shape of a transit. TOI 709.01 would be the second transiting planet around a degenerate star if confirmed. The first transiting planet around a white dwarf was WD 1856+534 b . The paper also describes two planet candidates in the habitable zone : TOI 715.01 and the already confirmed TOI 1227 b . [ 6 ]
https://en.wikipedia.org/wiki/Planet_Patrol_(project)
The Planet Simulator , also known as a Planetary Simulator , is a climate -controlled simulation chamber designed to aid in the study of the origin of life . The device was announced by researchers at McMaster University on behalf of the Origins Institute on 4 October 2018. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] The project began in 2012 and was funded with $1 million from the Canada Foundation for Innovation , the Ontario government , and McMaster University . It was built and manufactured by Angstrom Engineering Inc of Kitchener, Ontario . [ 1 ] [ 5 ] The device was designed and developed by biophysicist Maikel Rheinstadter and co-principal investigators biochemist Yingfu Li and astrophysicist Ralph Pudritz for researchers to study a theory that suggests life on early Earth began in "warm little ponds" rather than in deep ocean vents nearly four billion years ago. [ 3 ] The device can recreate conditions of the primitive Earth to see whether cellular life can be created and can then later evolve . [ 3 ] The Planet Simulator can mimic the environmental conditions of the early Earth and of other astronomical bodies, including other planets and exoplanets, [ 3 ] by controlling temperature , humidity , pressure , atmosphere and radiation levels within the simulation chamber. [ 2 ]
https://en.wikipedia.org/wiki/Planet_Simulator
Planet V is a hypothetical fifth terrestrial planet posited by NASA scientists John Chambers and Jack J. Lissauer to have once existed between Mars and the asteroid belt . In their hypothesis the Late Heavy Bombardment of the Hadean era began after perturbations from the other terrestrial planets caused Planet V's orbit to cross into the asteroid belt. Chambers and Lissauer presented the results of initial tests of this hypothesis during the 33rd Lunar and Planetary Science Conference , held from March 11 through 15, 2002. [ 1 ] In the Planet V hypothesis, five terrestrial planets were produced during the planetary formation era. The fifth terrestrial planet began on a low- eccentricity orbit between Mars and the asteroid belt with a semi-major axis between 1.8 and 1.9 AU . While long-lived, this orbit was unstable on a time-scale of 600 Myr. Eventually perturbations from the other inner planets drove Planet V onto a high-eccentricity orbit which crossed into the inner asteroid belt. Asteroids were scattered onto Mars-crossing and resonant orbits by close encounters with Planet V. Many of these asteroids then evolved onto Earth-crossing orbits temporarily enhancing the lunar impact rate. This process continued until Planet V was lost most likely by impacting the Sun after entering the ν 6 secular resonance . [ 2 ] As an initial test of the Planet V hypothesis, Chambers and Lissauer conducted 36 computer simulations of the Solar System with an additional terrestrial planet. A variety of parameters were used to determine the impacts of Planet V's initial orbit and mass. The mean time at which Planet V was lost was found to increase from 100 Myr to 400 Myr as its initial semi-major axis was increased from 1.8 to 1.9 AU. Results consistent with the current Solar System were most common with a 0.25 Mars mass Planet V. In cases with a larger mass Planet V collisions between planets were likely. Overall a third of these simulations were deemed successful in that Planet V was removed without impacting another planet. To test whether Planet V could increase the lunar impact rate they added test particles to one of the simulations. After an initial decline the number of particles on Earth-crossing orbits increased after Planet V entered the inner asteroid belt a pattern consistent with the LHB. These results were presented at the 33rd Lunar and Planetary Science Conference. [ 2 ] In a later article published in the journal Icarus in 2007, Chambers reported the results of 96 simulations examining the orbital dynamics of the Solar System with five terrestrial planets. In a quarter of the simulations Planet V was ejected or impacted the Sun without other terrestrial planets suffering collisions. This result was most frequent if Planet V's mass was less than 0.25 of Mars. The other simulations were not considered successful because Planet V either survived for the entire 1 billion year length of the simulations, or collisions occurred between planets. [ 3 ] The terrestrial Planet V hypothesis was examined by Ramon Brasser and Alessandro Morbidelli in 2011. Their work was the first to focus on the magnitude of the bombardment caused by Planet V. Brasser and Morbidelli calculated that to create the Late Heavy Bombardment Planet V would have to remove 95% of the pre-LHB main asteroid belt or 98% of the inner asteroid belt (semi-major axis < 2.5 AU). 
Depleting the main asteroid belt by 95% with a 0.5 Mars-mass Planet V was found to require that it remain in an orbit crossing the entire asteroid belt for 300 million years. This orbital evolution was not observed in any simulations; Planet V typically entered an Earth-crossing orbit, resulting in a short dynamical lifetime, before entering such a belt-crossing orbit. In a few percent of simulations Planet V remained in the inner belt long enough to produce the LHB. However, producing the LHB from the inner asteroid belt would require the inner asteroid belt to have begun with 4–13 times the mass, and 10–24 times the orbital density, of the rest of the asteroid belt. [ 4 ] Brasser and Morbidelli also examined the hypothesis that Planet V caused the LHB by disrupting putative asteroid belts between the terrestrial planets. The authors noted that the lack of present-day detection of the remnants of these belts places a significant constraint on this hypothesis, requiring that they be 99.99% depleted before Planet V was lost. While this occurred in 66% of the simulations compatible with the current Solar System for a Venus-Earth belt, it did not occur in any for the Earth-Mars belt due to its higher stability. Morbidelli and Brasser concluded from this result that an Earth-Mars belt could not have contained a significant population. Although Planet V could generate a Late Heavy Bombardment by disrupting a massive Venus-Earth belt alone, the authors observed that significant differences in these belts have not been produced in planetary formation models. [ 4 ] An impact of Planet V onto Mars, forming the Borealis Basin , has recently been proposed as an explanation for the Late Heavy Bombardment. Debris from this impact would have a different size distribution than the asteroid belt, with a smaller fraction of large bodies, and would result in a lower number of giant impact basins relative to craters. [ 5 ] [ 6 ]
https://en.wikipedia.org/wiki/Planet_V
A planetarium ( pl. : planetariums or planetaria ) is a theatre built primarily for presenting educational and entertaining shows about astronomy and the night sky , or for training in celestial navigation . [ 1 ] [ 2 ] [ 3 ] A dominant feature of most planetariums is the large dome -shaped projection screen onto which scenes of stars , planets , and other celestial objects can be made to appear and move realistically to simulate their motion. The projection can be created in various ways, such as a star ball , slide projector , video , fulldome projector systems, and lasers. Typical systems can be set to simulate the sky at any point in time, past or present, and often to depict the night sky as it would appear from any point of latitude on Earth. Planetaria range in size from the 37 meter dome in St. Petersburg, Russia (called "Planetarium No 1") to three-meter inflatable portable domes where attendees sit on the floor. The largest planetarium in the Western Hemisphere is the Jennifer Chalsty Planetarium at Liberty Science Center in New Jersey , its dome measuring 27 meters in diameter. The Birla Planetarium in Kolkata, India is the largest by seating capacity, having 630 seats. [ 4 ] In North America, the Hayden Planetarium at the American Museum of Natural History in New York City has the greatest number of seats, at 423. The term planetarium is sometimes used generically to describe other devices which illustrate the Solar System , such as a computer simulation or an orrery . Planetarium software refers to a software application that renders a three-dimensional image of the sky onto a two-dimensional computer screen, or in a virtual reality headset for a 3D representation. [ 5 ] The term planetarian is used to describe a member of the professional staff of a planetarium. The ancient Greek polymath Archimedes is attributed with creating a primitive planetarium device that could predict the movements of the Sun and the Moon and the planets. [ 6 ] [ 7 ] The discovery of the Antikythera mechanism proved that such devices already existed during antiquity , though likely after Archimedes' lifetime. Campanus of Novara described a planetary equatorium in his Theorica Planetarum , and included instructions on how to build one. The Globe of Gottorf built around 1650 had constellations painted on the inside. [ 8 ] These devices would today usually be referred to as orreries (named for the Earl of Orrery ). In fact, many planetariums today have projection orreries, which project onto the dome the Solar System (including the Sun and planets up to Saturn ) in their regular orbital paths. In 1229, following the conclusion of the Fifth Crusade , Holy Roman Emperor Frederick II of Hohenstaufen brought back a tent with scattered holes representing stars or planets . The device was operated internally with a spinnable table that rotated the tent. [ 9 ] The small size of typical 18th century orreries limited their impact, and towards the end of that century a number of educators attempted to create a larger sized version. The efforts of Adam Walker (1730–1821) and his sons are noteworthy in their attempts to fuse theatrical illusions with education. Walker's Eidouranion was the heart of his public lectures or theatrical presentations. Walker's son describes this "Elaborate Machine" as "twenty feet high, and twenty-seven in diameter: it stands vertically before the spectators, and its globes are so large, that they are distinctly seen in the most distant parts of the Theatre. 
Every Planet and Satellite seems suspended in space, without any support; performing their annual and diurnal revolutions without any apparent cause". Other lecturers promoted their own devices: R E Lloyd advertised his Dioastrodoxon, or Grand Transparent Orrery, and by 1825 William Kitchener was offering his Ouranologia, which was 42 feet (13 m) in diameter. These devices most probably sacrificed astronomical accuracy for crowd-pleasing spectacle and sensational and awe-provoking imagery. The oldest still-working planetarium can be found in the Frisian city of Franeker . It was built by Eise Eisinga (1744–1828) in the living room of his house. It took Eisinga seven years to build his planetarium, which was completed in 1781. [ 10 ] In 1905 Oskar von Miller (1855–1934) of the Deutsches Museum in Munich commissioned updated versions of a geared orrery and planetarium from M Sendtner, and later worked with Franz Meyer, chief engineer at the Carl Zeiss optical works in Jena , on the largest mechanical planetarium ever constructed, capable of displaying both heliocentric and geocentric motion. This was displayed at the Deutsches Museum in 1924, construction work having been interrupted by the war. The planets travelled along overhead rails, powered by electric motors: the orbit of Saturn was 11.25 m in diameter. 180 stars were projected onto the wall by electric bulbs. While this was being constructed, von Miller was also working at the Zeiss factory with German astronomer Max Wolf , director of the Landessternwarte Heidelberg-Königstuhl observatory of the University of Heidelberg , on a new and novel design, inspired by Wallace W. Atwood 's work at the Chicago Academy of Sciences and by the ideas of Walther Bauersfeld and Rudolf Straubel [ 11 ] at Zeiss . The result was a planetarium design which would generate all the necessary movements of the stars and planets inside the optical projector, and would be mounted centrally in a room, projecting images onto the white surface of a hemisphere. In August 1923, the first (Model I) Zeiss planetarium projected images of the night sky onto the white plaster lining of a 16 m hemispherical concrete dome, erected on the roof of the Zeiss works. The first official public showing was at the Deutsches Museum in Munich on October 21, 1923. [ 12 ] [ 13 ] Zeiss Planetarium became popular, and attracted a lot of attention. Next Zeiss planetariums were opened in Rome (1928, in Aula Ottagona , part of the Baths of Diocletian ), Chicago (1930), Osaka (1937, in the Osaka City Electricity Science Museum ). [ 13 ] When Germany was divided into East and West Germany after the war, the Zeiss firm was also split. Part remained in its traditional headquarters at Jena , in East Germany , and part migrated to West Germany . The designer of the first planetariums for Zeiss, Walther Bauersfeld , also migrated to West Germany with the other members of the Zeiss management team. There he remained on the Zeiss West management team until his death in 1959. The West German firm resumed making large planetariums in 1954, and the East German firm started making small planetariums a few years later. Meanwhile, the lack of planetarium manufacturers had led to several attempts at construction of unique models, such as one built by the California Academy of Sciences in Golden Gate Park , San Francisco , which operated 1952–2003. 
The Korkosz brothers built a large projector for the Boston Museum of Science , which was unique in being the first (and for a very long time only) planetarium to project the planet Uranus . Most planetariums ignore Uranus as being at best marginally visible to the naked eye. A great boost to the popularity of the planetarium worldwide was provided by the Space Race of the 1950s and 60s when fears that the United States might miss out on the opportunities of the new frontier in space stimulated a massive program to install over 1,200 planetariums in U.S. high schools. Armand Spitz recognized that there was a viable market for small inexpensive planetaria. His first model, the Spitz A, was designed to project stars from a dodecahedron , thus reducing machining expenses in creating a globe. [ 14 ] Planets were not mechanized, but could be shifted by hand. Several models followed with various upgraded capabilities, until the A3P, which projected well over a thousand stars, had motorized motions for latitude change, daily motion, and annual motion for Sun, Moon (including phases), and planets. This model was installed in hundreds of high schools, colleges, and even small museums from 1964 to the 1980s. Japan entered the planetarium manufacturing business in the 1960s, with Goto and Minolta both successfully marketing a number of different models. Goto was particularly successful when the Japanese Ministry of Education put one of their smallest models, the E-3 or E-5 (the numbers refer to the metric diameter of the dome) in every elementary school in Japan. Phillip Stern, as former lecturer at New York City 's Hayden Planetarium , had the idea of creating a small planetarium which could be programmed. His Apollo model was introduced in 1967 with a plastic program board, recorded lecture, and film strip. Unable to pay for this himself, Stern became the head of the planetarium division of Viewlex , a mid-size audio-visual firm on Long Island . About thirty canned programs were created for various grade levels and the public, while operators could create their own or run the planetarium live. Purchasers of the Apollo were given their choice of two canned shows, and could purchase more. A few hundred were sold, but in the late 1970s Viewlex went bankrupt for reasons unrelated to the planetarium business. During the 1970s, the OmniMax movie system (now known as IMAX Dome) was conceived to operate on planetarium screens. More recently, some planetariums have re-branded themselves as dome theaters , with broader offerings including wide-screen or "wraparound" films, fulldome video , and laser shows that combine music with laser-drawn patterns. Learning Technologies Inc. in Massachusetts offered the first easily portable planetarium in 1977. Philip Sadler designed this patented system which projected stars, constellation figures from many mythologies , celestial coordinate systems, and much else, from removable cylinders (Viewlex and others followed with their own portable versions). When Germany reunified in 1989, the two Zeiss firms did likewise, and expanded their offerings to cover many different size domes. In 1983, Evans & Sutherland installed the first digital planetarium projector displaying computer graphics ( Hansen planetarium , Salt Lake City, Utah)—the Digistar I projector used a vector graphics system to display starfields as well as line art . 
This gives the operator great flexibility in showing not only the modern night sky as visible from Earth , but as visible from points far distant in space and time. The newest generations of planetarium projectors, beginning with Digistar 3 , offer fulldome video technology. This allows for the projection of any image. [ citation needed ] Planetarium domes range in size from 3 to 35 m in diameter , accommodating from 1 to 500 people. They can be permanent or portable, depending on the application. The realism of the viewing experience in a planetarium depends significantly on the dynamic range of the image, i.e., the contrast between dark and light. This can be a challenge in any domed projection environment, because a bright image projected on one side of the dome will tend to reflect light across to the opposite side, "lifting" the black level there and so making the whole image look less realistic. Since traditional planetarium shows consisted mainly of small points of light (i.e., stars) on a black background, this was not a significant issue, but it became an issue as digital projection systems started to fill large portions of the dome with bright objects (e.g., large images of the sun in context). For this reason, modern planetarium domes are often not painted white but rather a mid grey colour, reducing reflection to perhaps 35-50%. This increases the perceived level of contrast. A major challenge in dome construction is to make seams as invisible as possible. Painting a dome after installation is a major task, and if done properly, the seams can be made almost to disappear. Traditionally, planetarium domes were mounted horizontally, matching the natural horizon of the real night sky. However, because that configuration requires highly inclined chairs for comfortable viewing "straight up", increasingly domes are being built tilted from the horizontal by between 5 and 30 degrees to provide greater comfort. Tilted domes tend to create a favoured "sweet spot" for optimum viewing, centrally about a third of the way up the dome from the lowest point. Tilted domes generally have seating arranged stadium-style in straight, tiered rows; horizontal domes usually have seats in circular rows, arranged in concentric (facing center) or epicentric (facing front) arrays. Planetaria occasionally include controls such as buttons or joysticks in the arm rests of seats to allow audience feedback that influences the show in real time . Often around the edge of the dome (the "cove") are: Traditionally, planetariums needed many incandescent lamps around the cove of the dome to help audience entry and exit, to simulate sunrise and sunset , and to provide working light for dome cleaning. More recently, solid-state LED lighting has become available that significantly decreases power consumption and reduces the maintenance requirement as lamps no longer have to be changed on a regular basis. The world's largest mechanical planetarium is located in Monico, Wisconsin. The Kovac Planetarium . It is 22 feet in diameter and weighs two tons. The globe is made of wood and is driven with a variable speed motor controller. This is the largest mechanical planetarium in the world, larger than the Atwood Globe in Chicago (15 feet in diameter) and one third the size of the Hayden. Some new planetariums now feature a glass floor , which allows spectators to stand near the center of a sphere surrounded by projected images in all directions, giving the impression of floating in outer space . 
For example, a small planetarium at AHHAA in Tartu , Estonia features such an installation, with special projectors for images below the feet of the audience, as well as above their heads. [ 16 ] Traditional planetarium projection apparatus use a hollow ball with a light inside, and a pinhole for each star, hence the name "star ball". With some of the brightest stars (e.g. Sirius , Canopus , Vega ), the hole must be so big to let enough light through that there must be a small lens in the hole to focus the light to a sharp point on the dome. In later and modern planetarium star balls, the individual bright stars often have individual projectors, shaped like small hand-held torches, with focusing lenses for individual bright stars. Contact breakers prevent the projectors from projecting below the "horizon". [ citation needed ] The star ball is usually mounted so it can rotate as a whole to simulate the Earth's daily rotation, and to change the simulated latitude on Earth. There is also usually a means of rotating to produce the effect of precession of the equinoxes . Often, one such ball is attached at its south ecliptic pole. In that case, the view cannot go so far south that any of the resulting blank area at the south is projected on the dome. Some star projectors have two balls at opposite ends of the projector like a dumbbell . In that case all stars can be shown and the view can go to either pole or anywhere between. But care must be taken that the projection fields of the two balls match where they meet or overlap. Smaller planetarium projectors include a set of fixed stars, Sun, Moon, and planets, and various nebulae . Larger projectors also include comets and a far greater selection of stars. Additional projectors can be added to show twilight around the outside of the screen (complete with city or country scenes) as well as the Milky Way . Others add coordinate lines and constellations , photographic slides, laser displays, and other images. Each planet is projected by a sharply focused spotlight that makes a spot of light on the dome. Planet projectors must have gearing to move their positioning and thereby simulate the planets' movements. These can be of these types:- Despite offering a good viewer experience, traditional star ball projectors suffer several inherent limitations. From a practical point of view, the low light levels require several minutes for the audience to "dark adapt" its eyesight. "Star ball" projection is limited in education terms by its inability to move beyond an Earth-bound view of the night sky. Finally, in most traditional projectors the various overlaid projection systems are incapable of proper occultation . This means that a planet image projected on top of a star field (for example) will still show the stars shining through the planet image, degrading the quality of the viewing experience. For related reasons, some planetariums show stars below the horizon projecting on the walls below the dome or on the floor, or (with a bright star or a planet) shining in the eyes of someone in the audience. However, the new breed of Optical-Mechanical projectors using fiber-optic technology to display the stars show a much more realistic view of the sky. An increasing number of planetariums are using digital technology to replace the entire system of interlinked projectors traditionally employed around a star ball to address some of their limitations. 
Digital planetarium manufacturers claim reduced maintenance costs and increased reliability from such systems compared with traditional "star balls" on the grounds that they employ few moving parts and do not generally require synchronisation of movement across the dome between several separate systems. Some planetariums mix both traditional opto-mechanical projection and digital technologies on the same dome. In a fully digital planetarium, the dome image is generated by a computer and then projected onto the dome using a variety of technologies including cathode-ray tube , LCD , DLP , or laser projectors. Sometimes a single projector mounted near the centre of the dome is employed with a fisheye lens to spread the light over the whole dome surface, while in other configurations several projectors around the horizon of the dome are arranged to blend together seamlessly. Digital projection systems all work by creating the image of the night sky as a large array of pixels . Generally speaking, the more pixels a system can display, the better the viewing experience. While the first generation of digital projectors were unable to generate enough pixels to match the image quality of the best traditional "star ball" projectors, high-end systems now offer a resolution that approaches the limit of human visual acuity . LCD projectors have fundamental limits on their ability to project true black as well as light, which has tended to limit their use in planetaria. LCOS and modified LCOS projectors have improved on LCD contrast ratios while also eliminating the "screen door" effect of small gaps between LCD pixels. "Dark chip" DLP projectors improve on the standard DLP design and can offer relatively inexpensive solution with bright images, but the black level requires physical baffling of the projectors. As the technology matures and reduces in price, laser projection looks promising for dome projection as it offers bright images, large dynamic range and a very wide color space . Worldwide, most planetariums provide shows to the general public. Traditionally, shows for these audiences with themes such as "What's in the sky tonight?", or shows which pick up on topical issues such as a religious festival (often the Christmas star ) linked to the night sky, have been popular. Live format is preferred by many venues as a live speaker or presenter can answer questions raised by the audience. [ citation needed ] Since the early 1990s, fully featured 3-D digital planetariums have added an extra degree of freedom to a presenter giving a show because they allow simulation of the view from any point in space, not only the Earth-bound view which we are most familiar with. This new virtual reality capability to travel through the universe provides important educational benefits because it vividly conveys that space has depth, helping audiences to leave behind the ancient misconception that the stars are stuck on the inside of a giant celestial sphere and instead to understand the true layout of the Solar System and beyond. For example, a planetarium can now 'fly' the audience towards one of the familiar constellations such as Orion , revealing that the stars which appear to make up a co-ordinated shape from an Earth-bound viewpoint are at vastly different distances from Earth and so not connected, except in human imagination and mythology . For especially visual or spatially aware people, this experience can be more educationally beneficial than other demonstrations.
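To make the single-projector, fisheye-lens arrangement mentioned above a little more concrete, the sketch below maps a sky direction (altitude and azimuth) to pixel coordinates under an idealized equidistant fisheye model, in which radius in the image is proportional to zenith angle. Real fulldome systems rely on calibrated warping meshes and projector blending; the model, frame size and names here are assumptions for illustration only:

```python
import math

def altaz_to_fisheye(alt_deg, az_deg, image_size=2048):
    """Map a sky direction to pixel coordinates in an equidistant fisheye frame.

    The zenith (alt = 90) lands at the image centre; the horizon (alt = 0)
    lands on the bounding circle. Azimuth is measured clockwise from the top
    of the frame. Idealized geometry only; real domes need calibration.
    """
    zenith_angle = math.radians(90.0 - alt_deg)          # 0 at zenith, pi/2 at horizon
    r = (zenith_angle / (math.pi / 2)) * (image_size / 2)
    az = math.radians(az_deg)
    cx = cy = image_size / 2
    return cx + r * math.sin(az), cy - r * math.cos(az)

print(altaz_to_fisheye(90, 0))    # zenith -> (1024.0, 1024.0), the centre
print(altaz_to_fisheye(0, 90))    # horizon, azimuth 90 -> right edge (2048.0, 1024.0)
print(altaz_to_fisheye(45, 180))  # halfway down, toward the bottom of the frame
```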
https://en.wikipedia.org/wiki/Planetarium
Planetarium software is application software that allows a user to simulate the celestial sphere at any time of day, especially at night, on a computer. Such applications can be as rudimentary as displaying a star chart or sky map for a specific time and location, or as complex as rendering photorealistic views of the sky. While some planetarium software is meant to be used exclusively on a personal computer, some applications can be used to interface with and control telescopes or planetarium projectors. Optional features may include inserting the orbital elements of comets and other newly discovered bodies for display.
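As a minimal sketch of the core calculation such software performs, the snippet below converts a star's equatorial coordinates (right ascension and declination) into the altitude and azimuth seen from a given site at a given time, using a standard low-precision sidereal-time approximation. The example star, site, and Julian date are arbitrary placeholders, and a real planetarium program would add refraction, proper motion, precession, and higher-precision timekeeping.

import math

def alt_az(ra_deg, dec_deg, lat_deg, lon_deg, jd_ut):
    """Rough equatorial-to-horizontal conversion (no refraction or precession)."""
    # Approximate Greenwich mean sidereal time in degrees (low-precision formula).
    gmst = (280.46061837 + 360.98564736629 * (jd_ut - 2451545.0)) % 360.0
    lst = (gmst + lon_deg) % 360.0                 # east longitudes positive
    ha = math.radians(lst - ra_deg)                # hour angle
    dec, lat = math.radians(dec_deg), math.radians(lat_deg)
    alt = math.asin(math.sin(dec) * math.sin(lat)
                    + math.cos(dec) * math.cos(lat) * math.cos(ha))
    az = math.atan2(-math.sin(ha),
                    math.tan(dec) * math.cos(lat) - math.sin(lat) * math.cos(ha))
    return math.degrees(alt), math.degrees(az) % 360.0

# Placeholder example: Sirius (RA ~101.3 deg, Dec ~-16.7 deg) seen from 40 N, 75 W
# at an arbitrary Julian date.
print(alt_az(101.287, -16.716, 40.0, -75.0, 2460676.5))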
https://en.wikipedia.org/wiki/Planetarium_software
The Planetary Habitability Laboratory (PHL) is a remote research laboratory intended to study the habitability of the Solar System and other stellar systems, specifically potentially habitable exoplanets. [ 1 ] The PHL is managed by the University of Puerto Rico at Arecibo with the collaboration of international scientists from organizations including the SETI Institute and NASA. [ 2 ] The laboratory is directed by astrobiologist Professor Abel Méndez. [ 3 ] PHL is especially known for its Habitable Exoplanets Catalog, one of the most comprehensive catalogs on exoplanetary habitability.
https://en.wikipedia.org/wiki/Planetary_Habitability_Laboratory
A planetary coordinate system (also referred to as planetographic , planetodetic , or planetocentric ) [ 1 ] [ 2 ] is a generalization of the geographic , geodetic , and the geocentric coordinate systems for planets other than Earth. Similar coordinate systems are defined for other solid celestial bodies , such as in the selenographic coordinates for the Moon . The coordinate systems for almost all of the solid bodies in the Solar System were established by Merton E. Davies of the Rand Corporation , including Mercury , [ 3 ] [ 4 ] Venus , [ 5 ] Mars , [ 6 ] the four Galilean moons of Jupiter , [ 7 ] and Triton , the largest moon of Neptune . [ 8 ] A planetary datum is a generalization of geodetic datums for other planetary bodies, such as the Mars datum ; it requires the specification of physical reference points or surfaces with fixed coordinates, such as a specific crater for the reference meridian or the best-fitting equigeopotential as zero-level surface. [ 9 ] The longitude systems of most of those bodies with observable rigid surfaces have been defined by references to a surface feature such as a crater . The north pole is that pole of rotation that lies on the north side of the invariable plane of the Solar System (near the ecliptic ). The location of the prime meridian as well as the position of the body's north pole on the celestial sphere may vary with time due to precession of the axis of rotation of the planet (or satellite). If the position angle of the body's prime meridian increases with time, the body has a direct (or prograde ) rotation; otherwise the rotation is said to be retrograde . In the absence of other information, the axis of rotation is assumed to be normal to the mean orbital plane ; Mercury and most of the satellites are in this category. For many of the satellites, it is assumed that the rotation rate is equal to the mean orbital period . In the case of the giant planets , since their surface features are constantly changing and moving at various rates, the rotation of their magnetic fields is used as a reference instead. In the case of the Sun , even this criterion fails (because its magnetosphere is very complex and does not really rotate in a steady fashion), and an agreed-upon value for the rotation of its equator is used instead. For planetographic longitude , west longitudes (i.e., longitudes measured positively to the west) are used when the rotation is prograde, and east longitudes (i.e., longitudes measured positively to the east) when the rotation is retrograde. In simpler terms, imagine a distant, non-orbiting observer viewing a planet as it rotates. Also suppose that this observer is within the plane of the planet's equator. A point on the Equator that passes directly in front of this observer later in time has a higher planetographic longitude than a point that did so earlier in time. However, planetocentric longitude is always measured positively to the east, regardless of which way the planet rotates. East is defined as the counter-clockwise direction around the planet, as seen from above its north pole, and the north pole is whichever pole more closely aligns with the Earth's north pole. Longitudes traditionally have been written using "E" or "W" instead of "+" or "−" to indicate this polarity. For example, −91°, 91°W, +269° and 269°E all mean the same thing. The modern standard for maps of Mars (since about 2002) is to use planetocentric coordinates. Guided by the works of historical astronomers, Merton E. 
Davies established the meridian of Mars at Airy-0 crater. [ 10 ] [ 11 ] For Mercury , the only other planet with a solid surface visible from Earth, a thermocentric coordinate is used: the prime meridian runs through the point on the equator where the planet is hottest (due to the planet's rotation and orbit, the Sun briefly retrogrades at noon at this point during perihelion , giving it more sunlight). By convention, this meridian is defined as exactly twenty degrees of longitude east of Hun Kal . [ 12 ] [ 13 ] [ 14 ] Tidally-locked bodies have a natural reference longitude passing through the point nearest to their parent body: 0° the center of the primary-facing hemisphere, 90° the center of the leading hemisphere, 180° the center of the anti-primary hemisphere, and 270° the center of the trailing hemisphere. [ 15 ] However, libration due to non-circular orbits or axial tilts causes this point to move around any fixed point on the celestial body like an analemma . Planetographic latitude and planetocentric latitude may be similarly defined. The zero latitude plane ( Equator ) can be defined as orthogonal to the mean axis of rotation ( poles of astronomical bodies ). The reference surfaces for some planets (such as Earth and Mars ) are ellipsoids of revolution for which the equatorial radius is larger than the polar radius, such that they are oblate spheroids . Vertical position can be expressed with respect to a given vertical datum , by means of physical quantities analogous to the topographical geocentric distance (compared to a constant nominal Earth radius or the varying geocentric radius of the reference ellipsoid surface) or altitude / elevation (above and below the geoid ). [ 16 ] The areoid (the geoid of Mars ) [ 17 ] has been measured using flight paths of satellite missions such as Mariner 9 and Viking . The main departures from the ellipsoid expected of an ideal fluid are from the Tharsis volcanic plateau, a continent-size region of elevated terrain, and its antipodes. [ 18 ] The selenoid (the geoid of the Moon ) has been measured gravimetrically by the GRAIL twin satellites. [ 19 ] Reference ellipsoids are also useful for defining geodetic coordinates and mapping other planetary bodies including planets, their satellites, asteroids and comet nuclei. Some well observed bodies such as the Moon and Mars now have quite precise reference ellipsoids. For rigid-surface nearly-spherical bodies, which includes all the rocky planets and many moons, ellipsoids are defined in terms of the axis of rotation and the mean surface height excluding any atmosphere. Mars is actually egg shaped , where its north and south polar radii differ by approximately 6 km (4 miles), however this difference is small enough that the average polar radius is used to define its ellipsoid. The Earth's Moon is effectively spherical, having almost no bulge at its equator. Where possible, a fixed observable surface feature is used when defining a reference meridian. For gaseous planets like Jupiter , an effective surface for an ellipsoid is chosen as the equal-pressure boundary of one bar . Since they have no permanent observable features, the choices of prime meridians are made according to mathematical rules. For the WGS84 ellipsoid to model Earth , the defining values are [ 20 ] from which one derives so that the difference of the major and minor semi-axes is 21.385 km (13 mi). This is only 0.335% of the major axis, so a representation of Earth on a computer screen would be sized as 300 pixels by 299 pixels. 
This is rather indistinguishable from a sphere shown as 300 pix by 300 pix. Thus illustrations typically greatly exaggerate the flattening to highlight the concept of any planet's oblateness. Other f values in the Solar System are 1 ⁄ 16 for Jupiter , 1 ⁄ 10 for Saturn , and 1 ⁄ 900 for the Moon . The flattening of the Sun is about 9 × 10 −6 . In 1687, Isaac Newton published the Principia in which he included a proof that a rotating self-gravitating fluid body in equilibrium takes the form of an oblate ellipsoid of revolution (a spheroid ). [ 21 ] The amount of flattening depends on the density and the balance of gravitational force and centrifugal force . Generally any celestial body that is rotating (and that is sufficiently massive to draw itself into spherical or near spherical shape) will have an equatorial bulge matching its rotation rate. With 11 808 km Saturn is the planet with the largest equatorial bulge in our Solar System. Equatorial bulges should not be confused with equatorial ridges . Equatorial ridges are a feature of at least four of Saturn's moons: the large moon Iapetus and the tiny moons Atlas , Pan , and Daphnis . These ridges closely follow the moons' equators. The ridges appear to be unique to the Saturnian system, but it is uncertain whether the occurrences are related or a coincidence. The first three were discovered by the Cassini probe in 2005; the Daphnean ridge was discovered in 2017. The ridge on Iapetus is nearly 20 km wide, 13 km high and 1300 km long. The ridge on Atlas is proportionally even more remarkable given the moon's much smaller size, giving it a disk-like shape. Images of Pan show a structure similar to that of Atlas, while the one on Daphnis is less pronounced. Small moons, asteroids, and comet nuclei frequently have irregular shapes. For some of these, such as Jupiter's Io , a scalene (triaxial) ellipsoid is a better fit than the oblate spheroid. For highly irregular bodies, the concept of a reference ellipsoid may have no useful value, so sometimes a spherical reference is used instead and points identified by planetocentric latitude and longitude. Even that can be problematic for non-convex bodies, such as Eros , in that latitude and longitude don't always uniquely identify a single surface location. Smaller bodies ( Io , Mimas , etc.) tend to be better approximated by triaxial ellipsoids ; however, triaxial ellipsoids would render many computations more complicated, especially those related to map projections . Many projections would lose their elegant and popular properties. For this reason spherical reference surfaces are frequently used in mapping programs.
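As a small worked illustration of the ellipsoid quantities discussed earlier in this article, the sketch below starts from the two WGS84 defining constants normally quoted for Earth (equatorial radius a = 6378137 m and inverse flattening 1/f = 298.257223563; these values are supplied here from standard references, since the article's own equation block did not survive extraction) and derives the semi-minor axis, the roughly 21.4 km axis difference, and the planetocentric latitude corresponding to a given planetographic latitude.

import math

# WGS84 defining constants for Earth (standard published values, stated here as
# an assumption because the article's original equations were lost).
a = 6378137.0                  # equatorial (semi-major) radius, metres
inv_f = 298.257223563          # inverse flattening, 1/f

f = 1.0 / inv_f                # flattening
b = a * (1.0 - f)              # polar (semi-minor) radius
print(f"b = {b:.4f} m; a - b = {(a - b) / 1000:.3f} km")   # ~6356752.3142 m; ~21.385 km
print(f"flattening as a fraction of a: {100 * f:.3f} %")   # ~0.335 %

def planetocentric_latitude(planetographic_lat_deg, flattening):
    """On an oblate spheroid, tan(phi_centric) = (1 - f)^2 * tan(phi_graphic)."""
    phi_g = math.radians(planetographic_lat_deg)
    return math.degrees(math.atan((1.0 - flattening) ** 2 * math.tan(phi_g)))

# At 45 degrees planetographic latitude, Earth's planetocentric latitude is
# smaller by about 0.19 degrees.
print(planetocentric_latitude(45.0, f))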
https://en.wikipedia.org/wiki/Planetary_coordinate_system
Planetary engineering is the development and application of technology for the purpose of influencing the environment of a planet . Planetary engineering encompasses a variety of methods such as terraforming , seeding , and geoengineering . Widely discussed in the scientific community, terraforming refers to the alteration of other planets to create a habitable environment for terrestrial life. Seeding refers to the introduction of life from Earth to habitable planets. Geoengineering refers to the engineering of a planet's climate, and has already been applied on Earth. Each of these methods are composed of varying approaches and possess differing levels of feasibility and ethical concern. Terraforming is the process of modifying the atmosphere , temperature , surface topography or ecology of a planet, moon, or other body in order to replicate the environment of Earth. A common object of discussion on potential terraforming is the planet Mars. To terraform Mars, humans would need to create a new atmosphere, due to the planet's high carbon dioxide concentration and low atmospheric pressure. This would be possible by introducing more greenhouse gases to below "freezing point from indigenous materials". [ 2 ] To terraform Venus, carbon dioxide would need to be converted to graphite since Venus receives twice as much sunlight as Earth. This process is only possible if the greenhouse effect is removed with the use of "high-altitude absorbing fine particles" or a sun shield, creating a more habitable Venus. [ 2 ] NASA has defined categories of habitability systems and technologies for terraforming to be feasible. [ 3 ] These topics include creating power-efficient systems for preserving and packaging  food for crews, preparing and cooking foods, dispensing water, and developing facilities for rest, trash and recycling, and areas for crew hygiene and rest. [ 3 ] A variety of planetary engineering challenges stand in the way of terraforming efforts. The atmospheric terraforming of Mars, for example, would require "significant quantities of gas" to be added to the Martian atmosphere. [ 4 ] This gas has been thought to be stored in solid and liquid form within Mars' polar ice caps and underground reservoirs. It is unlikely, however, that enough CO 2 for sufficient atmospheric change is present within Mars' polar deposits, and liquid CO 2 could only be present at warmer temperatures "deep within the crust". [ 4 ] Furthermore, sublimating the entire volume of Mars' polar caps would increase its current atmospheric pressure to 15 millibar, where an increase to around 1000 millibar would be required for habitability. [ 4 ] For reference, Earth's average sea-level pressure is 1013.25 mbar . First formally proposed by astrophysicist Carl Sagan, the terraforming of Venus has since been discussed through methods such as organic molecule-induced carbon conversion, sun reflection, increasing planetary spin, and various chemical means. [ 5 ] Due to the high presence of sulfuric acid and solar wind on Venus, which are harmful to organic environments, organic methods of carbon conversion have been found unfeasible. [ 5 ] Other methods, such as solar shading, hydrogen bombardment, and magnesium-calcium bombardment are theoretically sound but would require large-scale resources and space technologies not yet available to humans. [ 5 ] While successful terraforming would allow life to prosper on other planets, philosophers have debated whether this practice is morally sound. 
Certain ethics experts suggest that planets like Mars hold an intrinsic value independent of their utility to humanity and should therefore be free from human interference. [ 6 ] Also, some argue that through the steps that are necessary to make Mars habitable - such as fusion reactors, space-based solar-powered lasers, or spreading a thin layer of soot on Mars' polar ice caps - would deteriorate the current aesthetic value that Mars possesses. [ 7 ] This calls into question humanity's intrinsic ethical and moral values, as it raises the question of whether humanity is willing to eradicate the current ecosystem of another planet for their benefit. [ 8 ] Through this ethical framework, terraforming attempts on these planets could be seen to threaten their intrinsically valuable environments, rendering these efforts unethical. [ 6 ] Mars is the primary subject of discussion for seeding. Locations for seeding are chosen based on atmospheric temperature, air pressure, existence of harmful radiation, and availability of natural resources, such as water and other compounds essential to terrestrial life. [ 10 ] Natural or engineered microorganisms must be created or discovered that can withstand the harsh environments of Mars. The first organisms used must be able to survive exposure to ionizing radiation and the high concentration of CO 2 present in the Martian atmosphere. [ 10 ] Later organisms such as multicellular plants must be able to withstand the freezing temperatures, withstand high CO 2 levels, and produce significant amounts of O 2 . Microorganisms provide significant advantages over non-biological mechanisms. They are self-replicating, negating the needs to either transport or manufacture large machinery to the surface of Mars. They can also perform complicated chemical reactions with little maintenance to realize planet-scale terraforming. [ 11 ] Climate engineering is a form of planetary engineering which involves the process of deliberate and large-scale alteration of the Earth's climate system to combat climate change. [ 12 ] Examples of geoengineering are carbon dioxide removal (CDR), which removes carbon dioxide from the atmosphere, and solar radiation modification (SRM) to reflect solar energy to space. [ 12 ] [ 13 ] Carbon dioxide removal (CDR) has multiple practices, the simplest being reforestation , to more complex processes such as direct air capture . [ 12 ] [ 14 ] The latter is rather difficult to deploy on an industrial scale, for high costs and substantial energy usage would be some aspects to address. [ 12 ] Examples of SRM include stratospheric aerosol injection (SAI) and marine cloud brightening (MCB). [ 12 ] When a volcano erupts, small particles known as aerosols proliferate throughout the atmosphere, reflecting the sun's energy back into space. [ 12 ] [ 15 ] This results in a cooling effect, and humanity could conceivably inject these aerosols into the stratosphere, spurring large-scale cooling. [ 12 ] [ 15 ] One proposal for MCB involves spraying a vapor into low-laying sea clouds, creating more cloud condensation nuclei. [ 16 ] This would in theory result in the cloud becoming whiter, and reflecting light more efficiently. [ 16 ]
https://en.wikipedia.org/wiki/Planetary_engineering
The planetary equilibrium temperature is the theoretical temperature that a planet would have if it were in radiative equilibrium , typically under the assumption that it radiates as a black body being heated only by its parent star . In this model, the presence or absence of an atmosphere (and therefore any greenhouse effect ) is irrelevant, as the equilibrium temperature is calculated purely from a balance with incident stellar energy . Other authors use different names for this concept, such as equivalent blackbody temperature of a planet. [ 1 ] The effective radiation emission temperature is a related concept, [ 2 ] but focuses on the actual power radiated rather than on the power being received, and so may have a different value if the planet has an internal energy source or when the planet is not in radiative equilibrium. [ 3 ] [ 4 ] Planetary equilibrium temperature differs from the global mean temperature and surface air temperature , which are measured observationally by satellites or surface-based instruments , and may be warmer than the equilibrium temperature due to the greenhouse effect. [ 3 ] [ 4 ]

Consider a planet orbiting its host star. The star emits radiation isotropically , and some fraction of this radiation reaches the planet. The amount of radiation arriving at the planet is referred to as the incident solar radiation, $I_o$. The planet has an albedo that depends on the characteristics of its surface and atmosphere, and therefore only absorbs a fraction of the radiation. The planet absorbs the radiation that is not reflected according to the albedo, and heats up. One may assume that the planet radiates energy like a blackbody at some temperature according to the Stefan–Boltzmann law . Radiative equilibrium exists when the power supplied by the star is equal to the power emitted by the planet. The temperature at which this balance occurs is the planetary equilibrium temperature. [ 4 ] [ 5 ] [ 6 ]

The solar flux absorbed by the planet from the star is equal to the flux emitted by the planet: [ 4 ] [ 5 ] [ 6 ] $F_{\mathrm{abs}} = F_{\mathrm{emit}}$. Assuming a fraction of the incident sunlight is reflected according to the planet's Bond albedo , $A_B$: $(1 - A_B)\,F_{\mathrm{solar}} = F_{\mathrm{emit}}$, where $F_{\mathrm{solar}}$ represents the area- and time-averaged incident solar flux, which may be expressed as $F_{\mathrm{solar}} = I_o/4$. The factor of 1/4 in this formula comes from the fact that only a single hemisphere is lit at any moment in time (creating a factor of 1/2), and from integrating over angles of incident sunlight on the lit hemisphere (creating another factor of 1/2). [ 6 ] Assuming the planet radiates as a blackbody according to the Stefan–Boltzmann law at some equilibrium temperature $T_{\mathrm{eq}}$, a balance of the absorbed and outgoing fluxes produces $(1 - A_B)\left(\frac{I_o}{4}\right) = \sigma T_{\mathrm{eq}}^{4}$, where $\sigma$ is the Stefan–Boltzmann constant .

Rearranging the above equation to find the equilibrium temperature leads to $T_{\mathrm{eq}} = \left( \frac{I_o (1 - A_B)}{4\sigma} \right)^{1/4}$. For a planet of the Solar System this can be written in terms of the Sun's luminosity as $T_{\mathrm{eq}} = \left( \frac{L_o (1 - A_B)}{16\,\sigma\,\pi\,d^{2}} \right)^{1/4}$, where $L_o$ is the luminosity of the Sun ($3.828 \times 10^{26}$ W) and $d$ is the distance between the planet and the Sun, so that $T_{\mathrm{eq}} = 1.07652 \times 10^{8} \left( \frac{1 - A_B}{d^{2}} \right)^{1/4}$ (with $d$ in metres), or $T_{\mathrm{eq}} = 3404.255 \left( \frac{1 - A_B}{d^{2}} \right)^{1/4}$ (with $d$ in millions of kilometres).

For a planet around another star, $I_o$ (the incident stellar flux on the planet) is not a readily measurable quantity. To find the equilibrium temperature of such a planet, it may be useful to approximate the host star's radiation as a blackbody as well, such that $F_{\mathrm{star}} = \sigma T_{\mathrm{star}}^{4}$. The luminosity ($L$) of the star, which can be measured from observations of the star's apparent brightness , [ 7 ] can then be written as $L = 4\pi R_{\mathrm{star}}^{2}\,\sigma T_{\mathrm{star}}^{4}$, where the flux has been multiplied by the surface area of the star. To find the incident stellar flux on the planet, $I_x$, at some orbital distance from the star, $a$, one can divide by the surface area of a sphere with radius $a$: [ 8 ] $I_x = \frac{L}{4\pi a^{2}}$. Plugging this into the general equation for planetary equilibrium temperature gives $T_{\mathrm{eq}} = \left( \frac{L (1 - A_B)}{16\,\sigma\,\pi\,a^{2}} \right)^{1/4}$.

If the luminosity of the star is known from photometric observations, the other remaining variables that must be determined are the Bond albedo and orbital distance of the planet. Bond albedos of exoplanets can be constrained by flux measurements of transiting exoplanets , [ 9 ] and may in future be obtainable from direct imaging of exoplanets and a conversion from geometric albedo . [ 10 ] Orbital properties of the planet such as the orbital distance can be measured through radial velocity and transit period measurements. [ 11 ] [ 12 ] Alternatively, the planetary equilibrium temperature may be written in terms of the temperature and radius of the star: $T_{\mathrm{eq}} = T_{\mathrm{star}} \sqrt{\frac{R_{\mathrm{star}}}{2a}}\,\left(1 - A_B\right)^{1/4}$.

The equilibrium temperature is neither an upper nor lower bound on actual temperatures on a planet. There are several reasons why measured temperatures deviate from predicted equilibrium temperatures. In the greenhouse effect , long wave radiation emitted by a planet is absorbed by certain gases in the atmosphere, reducing longwave emissions to space. Planets with substantial greenhouse atmospheres emit more longwave radiation at the surface than what reaches space.
Consequently, such planets have surface temperatures higher than their effective radiation emission temperature. For example, Venus has an effective temperature of approximately 226 K (−47 °C; −53 °F), but a surface temperature of 740 K (467 °C; 872 °F). [ 13 ] [ 14 ] Similarly, Earth has an effective temperature of 255 K (−18 °C; −1 °F), [ 14 ] but a surface temperature of about 288 K (15 °C; 59 °F) [ 15 ] due to the greenhouse effect in our lower atmosphere. [ 5 ] [ 4 ] The surface temperatures of such planets are more accurately estimated by modeling thermal radiation transport through the atmosphere. [ 16 ] [ 17 ] On airless bodies, the lack of any significant greenhouse effect allows equilibrium temperatures to approach mean surface temperatures, as on Mars , [ 5 ] where the equilibrium temperature is 210 K (−63 °C; −82 °F) and the mean surface temperature of emission is 215 K (−58 °C; −73 °F). [ 6 ] There are large variations in surface temperature over space and time on airless or near-airless bodies like Mars, which has daily surface temperature variations of 50–60 K. [ 18 ] [ 19 ] Because of a relative lack of air to transport or retain heat, significant variations in temperature develop. Assuming the planet radiates as a blackbody (i.e. according to the Stefan-Boltzmann law), temperature variations propagate into emission variations, this time to the power of 4. This is significant because our understanding of planetary temperatures comes not from direct measurement of the temperatures, but from measurements of the fluxes. Consequently, in order to derive a meaningful mean surface temperature on an airless body (to compare with an equilibrium temperature), a global average surface emission flux is considered, and then an ' effective temperature of emission' that would produce such a flux is calculated. [ 6 ] [ 18 ] The same process would be necessary when considering the surface temperature of the Moon , which has an equilibrium temperature of 271 K (−2 °C; 28 °F), [ 20 ] but can have temperatures of 373 K (100 °C; 212 °F) in the daytime and 100 K (−173 °C; −280 °F) at night. [ 21 ] Again, these temperature variations result from poor heat transport and retention in the absence of an atmosphere. Orbiting bodies can also be heated by tidal heating , [ 22 ] geothermal energy which is driven by radioactive decay in the core of the planet, [ 23 ] or accretional heating. [ 24 ] These internal processes will cause the effective temperature (a blackbody temperature that produces the observed radiation from a planet) to be warmer than the equilibrium temperature (the blackbody temperature that one would expect from solar heating alone). [ 6 ] [ 4 ] For example, on Saturn , the effective temperature is approximately 95 K, compared to an equilibrium temperature of about 63 K. [ 25 ] [ 26 ] This corresponds to a ratio between power emitted and solar power received of ~2.4, indicating a significant internal energy source. [ 26 ] Jupiter and Neptune have ratios of power emitted to solar power received of 2.5 and 2.7, respectively. [ 27 ] Close correlation between the effective temperature and equilibrium temperature of Uranus can be taken as evidence that processes producing an internal flux are negligible on Uranus compared to the other giant planets. [ 27 ] Earth has insufficient geothermal heating to significantly affect its global temperature, with geothermal heating supplying only 0.03% of Earth's total energy budget. [ 28 ]
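A short numerical check of the formulas above, using commonly quoted values (a mean solar irradiance of about 1361 W/m² at 1 AU and an Earth Bond albedo of roughly 0.29; both are assumptions supplied here rather than figures from this article), reproduces the ~255 K effective temperature cited for Earth from either form of the equation.

import math

SIGMA = 5.670374419e-8          # Stefan–Boltzmann constant, W m^-2 K^-4

def t_eq_from_flux(i0, bond_albedo):
    """Equilibrium temperature from the incident stellar flux i0 (W/m^2)."""
    return ((1.0 - bond_albedo) * i0 / (4.0 * SIGMA)) ** 0.25

def t_eq_from_star(t_star, r_star, dist, bond_albedo):
    """Equilibrium temperature from stellar temperature, stellar radius and orbital distance."""
    return t_star * math.sqrt(r_star / (2.0 * dist)) * (1.0 - bond_albedo) ** 0.25

# Assumed inputs for Earth (typical textbook values, not taken from the article):
I0 = 1361.0                     # solar constant, W/m^2
A_B = 0.294                     # Earth's Bond albedo
T_SUN, R_SUN = 5772.0, 6.957e8  # K, metres
AU = 1.496e11                   # metres

print(t_eq_from_flux(I0, A_B))               # ~255 K
print(t_eq_from_star(T_SUN, R_SUN, AU, A_B)) # ~255 K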
https://en.wikipedia.org/wiki/Planetary_equilibrium_temperature
Planetary habitability in the Solar System is the study that searches the possible existence of past or present extraterrestrial life in those celestial bodies. As exoplanets are too far away and can only be studied by indirect means, the celestial bodies in the Solar System allow for a much more detailed study: direct telescope observation, space probes , rovers and even human spaceflight . Aside from Earth, no planets in the solar system are known to harbor life. Mars , Europa , and Titan are considered to have once had or currently have conditions permitting the existence of life. Multiple rovers have been sent to Mars, while Europa Clipper is planned to reach Europa in 2030, and the Dragonfly space probe is planned to launch in 2027. The vacuum of outer space is a harsh environment. Besides the vacuum itself, temperatures are extremely low and there is a high amount of radiation from the Sun. Multicellular life cannot endure such conditions. [ 1 ] Bacteria can not thrive in the vacuum either, but may be able to survive under special circumstances. An experiment by microbiologist Akihiko Yamagishi held at the International Space Station exposed a group of bacteria to the vacuum, completely unprotected, for three years. The Deinococcus radiodurans survived the exposure. In earlier experiments, it had survived radiation, vacuum, and low temperatures in lab-controlled experiments. The outer cells of the group had died, but their remains shielded the cells on the inside, which were able to survive. [ 2 ] Those studies give credence to the theory of panspermia , which proposes that life may be moved across planets within meteorites. Yamagishi even proposed the term massapanspermia for cells moving across the space in clumps instead of rocks. However, astrobiologist Natalie Grefenstette considers that unprotected cell clumps would have no protection during the ejection from one planet and the re-entry into another one. [ 2 ] According to NASA, Mercury is not a suitable planet for Earth-like life. It has a surface boundary exosphere instead of a layered atmosphere, extreme temperatures that range from 800 °F (430 °C) during the day to -290 °F (-180 °C) during the night, and high solar radiation. It is unlikely that any living beings can withstand those conditions. [ 3 ] It is unlikely to ever find remains of ancient life, either. If any type of life ever appeared on the planet, it would have suffered an extinction event in a very short time. It is also suspected that most of the planetary surface was stripped away by a large impact, which would have also removed any life on the planet. [ 4 ] The spacecraft MESSENGER found evidence of water ice on Mercury , within permanently shadowed craters not reached by sunlight. As a result of the thin atmosphere, temperatures within them stay cold and there is very little sublimation . There may be scientific support, based on studies reported in March 2020, for considering that parts of the planet Mercury may have hosted sub-surfaced volatiles . [ 5 ] [ 6 ] The geology of Mercury is considered to be shaped by impact craters and earthquakes caused by a large impact at the Caloris basin . The studies suggest that the required times would not be consistent and that it could be instead that sub-surface volatiles were heated and sublimated, causing the surface to fall apart. Those volatiles may have condensed at craters elsewhere on the planet, or lost to space by solar winds. It is not known which volatiles may have been part of this process. 
[ 7 ] The surface of Venus is completely inhospitable for life. As a result of a runaway greenhouse effect Venus has a temperature of 900 degrees Fahrenheit (475 degrees Celsius), hot enough to melt lead. It is the hottest planet in the Solar System, even more than Mercury, despite being farther away from the Sun. [ 8 ] Likewise, the atmosphere of Venus is almost completely carbon dioxide, and the atmospheric pressure is 90 times that of Earth. [ 8 ] There is no significant temperature change during the night, and the low axial tilt , only 3.39 degrees with respect to the Sun, makes temperatures quite uniform across the planet and without noticeable seasons . [ 9 ] Venus likely had liquid water on its surface for at least a few million years after its formation. [ 10 ] [ 11 ] The Venus Express detected that Venus loses oxygen and hydrogen to space, and that the escaping hydrogen doubles the oxygen. The source could be Venusian water, that the ultraviolet radiation from the Sun splits into its basic composition. There is also deuterium in the planet's atmosphere, a heavy type of hydrogen that is less capable of escaping the planet's gravity. However, the surface water may have been only atmospheric and not form any oceans. [ 10 ] Astrobiologist David Grinspoon considers that although there is no proof of Venus having oceans, it is likely that it had them, as a result of similar processes to those that took place on Earth. He considers that those oceans may have lasted for 600 million years, and were lost 4 billion years ago. [ 11 ] The growing scarcity of liquid water altered the carbon cycle , reducing carbon sequestration . With most carbon dioxide staying in the atmosphere for good, the greenhouse effect worsened even more. [ 12 ] Nevertheless, between the altitudes of 50 and 65 kilometers, the pressure and temperature are Earth-like, and it may accommodate thermoacidophilic extremophile microorganisms in the acidic upper layers of the Venusian atmosphere. [ 13 ] [ 14 ] [ 15 ] [ 16 ] According to this theory, life would have started in Venusian oceans when the planet was cooler, adapt to other environments as it did on Earth, and remain at the last habitable zone of the planet. [ 16 ] The putative detection of an absorption line of phosphine in Venus's atmosphere, with no known pathway for abiotic production, led to speculation in September 2020 that there could be extant life currently present in the atmosphere. [ 17 ] [ 18 ] Later research attributed the spectroscopic signal that was interpreted as phosphine to sulfur dioxide , [ 19 ] or found that in fact there was no absorption line. [ 20 ] [ 21 ] Earth is the only celestial body known for sure to have generated living beings, and thus the only current example of a habitable planet. At a distance of 1 AU from the Sun, it is within the circumstellar habitable zone of the Solar system, which means it can have oceans of water in a liquid state. [ 22 ] There also exist a great number of elements required by lifeforms, such as carbon, oxygen, nitrogen, hydrogen, and phosphorus. [ 23 ] The Sun provides energy for most ecosystems on Earth, processed by vegetals with photosynthesis , but there are also ecosystems in the deep areas of the oceans that never receive sunlight and thrive on geothermal heat instead. The atmosphere of Earth also plays an important role. The ozone layer protects the planet from the harmful radiations from the Sun, and free oxygen is abundant enough for the breathing needs of terrestrial life. 
[ 24 ] Earth's magnetosphere , generated by its active core , is also important for the long-term habitability of Earth, as it prevents the solar winds from stripping the atmosphere out of the planet. [ 25 ] The atmosphere is thick enough to generate atmospheric pressure at sea level that keeps water in a liquid state, but it is not strong enough to be harmful either. [ 23 ] There are further elements that benefited the presence of life, but it is not completely clear if life could have thrived or not without them. The planet is not tidally locked and the atmosphere allows the distribution of heat, so temperatures are largely uniform and without great swift changes. The bodies of water cover most of the world but still leave large landmasses and interact with rocks at the bottom. A nearby celestial body, the Moon, subjects the Earth to substantial but not catastrophic tidal forces. [ 23 ] Following a suggestion of Carl Sagan , the Galileo probe studied Earth from the distance, to study it in a way similar to the one we use to study other planets. The presence of life on Earth could be confirmed by the levels of oxygen and methane in the atmosphere, and the red edge was evidence of plants. It even detected a technosignature , strong radio waves that could not be caused by natural reasons. [ 26 ] Despite its proximity to Earth, the Moon is mostly inhospitable to life. No native lunar life has been found, including any signs of life in the samples of Moon rocks and soil. [ 27 ] In 2019, Israeli craft Beresheet carrying tardigrades crash landed on the Moon . [ 28 ] While their "chances of survival" were "extremely high", [ 29 ] it was the force of the crash – and not the Moon's environment – that likely killed them. [ 30 ] The atmosphere of the Moon is almost non-existent, there is no liquid water (although there is solid ice at some permanently shadowed craters ), and no protection from the radiation of the Sun. However, circumstances could have been different in the past. There are two possible time periods of habitability: right after its origin , and during a period of high volcanic activity. In the first case, it is debated how many volatiles would survive in the debris disk, but it is thought that some water could have been retained thanks to its difficulty to diffuse in a silicate-dominated vapor. In the second case, thanks to extreme outgassing from lunar magma the Moon could have an atmosphere of 10 millibars. [ 31 ] Although that's just 1% of the atmosphere of Earth, it is higher than on Mars and may be enough to allow liquid surface water, such as in the theorized Lunar magma ocean . [ 32 ] This theory is supported by studies of Lunar rocks and soil, which were more hydrated than expected. Studies of Lunar vulcanism also reveal water within the Moon, and that the Lunar mantle would have a composition of water similar to Earth's upper mantle . [ 31 ] This may be confirmed by studies on the crust of the Moon that would suggest an old exposition to magma water. [ 33 ] The early Moon may have also had its own magnetic field, deflecting solar winds. [ 34 ] Life on the Moon may have been the result of a local process of abiogenesis, but also from panspermia from Earth. 
[ 34 ] Dirk Schulze-Makuch, professor of planetary science and astrobiology at the University of London considers that those theories may be properly tested if a future expedition to the Moon seeks markers of life on lunar samples from the age of volcanic activity, and by testing the survival of microorganisms at simulated lunar environment that try to imitate that specific Lunar age. [ 34 ] Mars is the celestial body in the solar system with the most similarities to Earth. A Mars sol lasts almost the same as an Earth day, and its axial tilt gives it similar seasons. There is water on Mars , most of it frozen at the Martian polar ice caps , and some of it underground. However, there are many obstacles to its habitability. The surface temperature averages about -60 degrees Celsius (-80 degrees Fahrenheit). [ 35 ] There are no permanent bodies of liquid water on the surface. The atmosphere is thin, and more than 96% of it is toxic carbon dioxide . Its atmospheric pressure is below 1% that of Earth. Combined with its lack of a magnetosphere , Mars is open to harmful radiation from the Sun. Although no astronauts have set foot on Mars, the planet has been studied in great detail by rovers. So far, no native lifeforms have been found. [ 36 ] The origin of the potential biosignature of methane observed in the atmosphere of Mars is unexplained, although hypotheses not involving life have been proposed. [ 37 ] It is thought, however, that those conditions may have been different in the past. Mars could have had bodies of water, a thicker atmosphere and a working magnetosphere, and may have been habitable then. The rover Opportunity first discovered evidences of such a wet past, but later studies found that the territories studied by the rover were in contact with sulfuric acid, not water. [ 38 ] The Gale crater, on the other hand, has clay minerals that could have only been formed in water with a neutral PH. For this reason, NASA selected it for the landing of the Curiosity rover. [ 38 ] [ 39 ] The crater Jezero is suspected of being the location of an ancient lake. For this reason NASA sent the Perseverance rover to investigate. Although no actual life has been found, the rocks may still contain fossil traces of ancient life, if the lake had any. [ 36 ] It is also suggested that microscopic life may have escaped the worsening conditions of the surface by moving underground. An experiment simulated those conditions to check the reactions of lichen and found that it survived by finding refuge in rock cracks and soil gaps. [ 40 ] Although many geological studies suggest that Mars was habitable in the past, that does not necessarily mean that it was inhabited. Finding fossils of microscopic life of such distant times is an incredibly difficult task, even for Earth's earliest known life forms . Such fossils require a material capable to preserve cellular structures and survive degradational rock-forming and environmental processes. The knowledge of taphonomy for those cases is limited to the sparse fossils found so far, and are based on Earth's environment, which greatly differs from the Martian one. [ 41 ] Ceres , the only dwarf planet in the asteroid belt , has a thin water-vapor atmosphere. [ 42 ] [ 43 ] The vapor is likely the result of impacts of meteorites containing ice, but there is hardly an atmosphere besides said vapor. [ 44 ] Nevertheless, the presence of water had led to speculation that life may be possible there. 
[ 45 ] [ 46 ] [ 47 ] It is even conjectured that Ceres could be the source of life on Earth by panspermia, as its small size would allow fragments of it to escape its gravity more easily. [ 45 ] Although the dwarf planet might not have living things today, there could be signs it harbored life in the past. [ 48 ] The water in Ceres, however, is not liquid water on the surface. It comes frozen in meteorites and sublimates to vapor. The dwarf planet is out of the habitable zone, is too small to have sustained tectonic activity, and does not orbit a tidally disruptive body like the moons of the gas giants. [ 45 ] However, studies by the Dawn space probe confirmed that Ceres has liquid salt-enriched water underground. [ 49 ] Carl Sagan and others in the 1960s and 1970s computed conditions for hypothetical microorganisms living in the atmosphere of Jupiter . [ 50 ] The intense radiation and other conditions, however, do not appear to permit encapsulation and molecular biochemistry, so life there is thought unlikely. [ 51 ] In addition, as a gas giant Jupiter has no surface, so any potential microorganisms would have to be airborne. Although there are some layers of the atmosphere that may be habitable, Jovian climate is in constant turbulence and those microorganisms would eventually be sucked into the deeper parts of Jupiter. In those areas atmospheric pressure is 1,000 times that of Earth, and temperatures can reach 10,000 degrees. [ 52 ] However, it was discovered that the Great Red Spot contains water clouds. Astrophysicist Máté Ádámkovics said that "where there’s the potential for liquid water, the possibility of life cannot be completely ruled out. So, though it appears very unlikely, life on Jupiter is not beyond the range of our imaginations". [ 53 ] Callisto has a thin atmosphere and a subsurface ocean, and may be a candidate for hosting life. It is more distant to the planet than other moons, so the tidal forces are weaker, but also it receives less harmful radiation. [ 54 ] Europa may have a liquid ocean beneath its icy surface, which may be a habitable environment. This potential ocean was first noticed by the two Voyager spacecraft, and later backed by telescope studies from Earth. Current estimations consider that this ocean may contain twice the amount of water of all Earth's oceans combined, despite Europa's smaller size. The ice crust would be between 15 and 25 miles thick and may represent an obstacle to study this ocean, though it may be probed via possible eruption columns that reach outer space. [ 56 ] Life would need liquid water, a number of chemical elements, and a source of energy. Although Europa may have the first two elements, it is not confirmed if it has the three of them. A potential source of energy would be a hydrothermal vent , which has not been detected yet. [ 56 ] Solar light is not considered a viable energy source, as it is too weak in the Jupiter system and would also have to cross the thick ice surface. Other proposed energy sources, although still speculative, are the Magnetosphere of Jupiter and kinetic energy . [ 57 ] Unlike the oceans of Earth, the oceans of Europa would be under a permanent thick ice layer, which may make water aeration difficult. Richard Greenberg of the University of Arizona considers that the ice layer would not be a homogeneous block, but the ice would be rather in a cycle renewing itself at the top and burying the surface ice deeper, which would eventually drop the surface ice into the lower side in contact with the ocean. 
[ 58 ] This process would allow some air from the surface to eventually reach the ocean below. [ 59 ] Greenberg considers that the first surface oxygen to reach the oceans would have done so after a couple of billion years, allowing life to emerge and develop defenses against oxidation. [ 58 ] He also considers that, once the process started, the amount of oxygen would even allow the development of multicellular beings, and perhaps even sustain a population comparable to all the fishes of Earth. [ 58 ] On 11 December 2013, NASA reported the detection of " clay-like minerals " (specifically, phyllosilicates ), often associated with organic materials , on the icy crust of Europa. [ 60 ] The presence of the minerals may have been the result of a collision with an asteroid or comet , according to the scientists. [ 60 ] The Europa Clipper , which would assess the habitability of Europa, launched in 2024 and is set to reach the moon in 2030. [ 61 ] Europa's subsurface ocean is considered the best target for the discovery of life. [ 57 ] [ 61 ] Ganymede , the largest moon in the Solar system, is the only one that has a magnetic field of its own. The surface seems similar to Mercury and the Moon, and is likely as hostile to life as them. [ 23 ] It is suspected that it has an ocean below the surface, and that primitive life may be possible there. [ 62 ] This suspicion is caused because of the unusually high level of water vapor in the thin atmosphere of Ganymede. The moon likely has several layers of ice and liquid water, and finally a liquid layer in contact with the mantle. The core, the likely cause of Ganymede's magnetic field, would have a temperature near 1600 K. This particular environment is suspected to be likely to be habitable. [ 23 ] The moon is set to be the subject of investigation by the European Space Agency 's Jupiter Icy Moons Explorer , which was launched in 2023 and will reach the Jovian system in 2031. Of all the Galilean moons, Io is the closest to the planet. It is the moon with the highest volcanic activity in the Solar System, as a result of the tidal forces from the planet and its oval orbit around it. Even so, the surface is still cold: -143 Cº. The atmosphere is 200 times lighter than Earth's atmosphere, the proximity of Jupiter gives a lot of radiation, and it is completely devoid of water. However, it may have had water in the past, and perhaps lifeforms underground. [ 54 ] Similarly to Jupiter, Saturn is not likely to host life. It is a gas giant and the temperatures, pressures, and materials found in it are too dangerous for life. [ 63 ] The planet is hydrogen and helium for the most part, with trace amounts of ice water. Temperatures near the surface are near -150 C. The planet gets warmer on the inside, but in the depth where water may be liquid the atmospheric pressure is too high. [ 64 ] Enceladus , the sixth-largest moon of Saturn, has some of the conditions for life, including geothermal activity and water vapor, as well as possible under-ice oceans heated by tidal effects. [ 65 ] [ 66 ] The Cassini–Huygens probe detected carbon, hydrogen, nitrogen and oxygen—all key elements for supporting life—during its 2005 flyby through one of Enceladus's geysers spewing ice and gas. The temperature and density of the plumes indicate a warmer, watery source beneath the surface. Of the bodies on which life is possible, living organisms could most easily enter the other bodies of the Solar System from Enceladus. 
[ 67 ] Mimas , the seventh-largest moon of Saturn, is similar in size and orbit location to Enceladus. In 2024, based on orbital data from the Cassini–Huygens mission, Mimas was calculated to contain a large tidally heated subsurface ocean starting ~20–30 km below the heavily cratered but old and well-preserved surface, hinting at the potential for life. [ 68 ] Titan, the largest moon of Saturn , is the only known moon in the Solar System with a significant atmosphere. Data from the Cassini–Huygens mission refuted the hypothesis of a global hydrocarbon ocean, but later demonstrated the existence of liquid hydrocarbon lakes in the polar regions—the first stable bodies of surface liquid discovered outside Earth. [ 69 ] [ 70 ] [ 71 ] Further data from Cassini have strengthened evidence that Titan likely harbors a layer of liquid water under its ice shell. [ 72 ] Analysis of data from the mission has uncovered aspects of atmospheric chemistry near the surface that are consistent with—but do not prove—the hypothesis that organisms there , if present, could be consuming hydrogen, acetylene and ethane, and producing methane. [ 73 ] [ 74 ] [ 75 ] NASA's Dragonfly mission is slated to land on Titan in the mid-2030s with a VTOL-capable rotorcraft with a launch date set for 2027. The planet Uranus , an ice giant , is unlikely to be habitable. The local temperatures and pressures may be too extreme, and the materials too volatile. [ 76 ] The only spacecraft to visit, and thus observe, Uranus and its moons in detail is Voyager 2 in 1986. The five major moons of Uranus , however, may have been home to tidally heated subsurface oceans at some point in their histories, based on observations of Ariel 's and Miranda 's variegated geology, [ 77 ] [ 78 ] combined with computer models of the four largest moons, with Titania , the largest, deemed the most likely. [ 79 ] The planet Neptune , another ice giant explored by Voyager 2 , is also unlikely to be habitable. The local temperatures and pressures may be too extreme, and the materials too volatile. [ 80 ] The moon Triton , however, was thoroughly shown to have cryovolcanism on its surface, as well as deposits of water ice and relatively young and smooth geology for its age, raising the possibility of a subsurface ocean. [ 81 ] [ 82 ] [ 51 ] [ 83 ] [ 84 ] [ 85 ] The dwarf planet Pluto is too cold to sustain life on the surface. It has an average of -232 °C, and surface water only exists in a rocky state. The interior of Pluto may be warmer and perhaps contain a subsurface ocean. Also, the possibility of geothermal activity comes into play. That combined with the fact that Pluto has an eccentric orbit, making it sometimes closer to the sun, means that there is a slight chance that the dwarf planet could contain life. [ 86 ] The dwarf planet Makemake is not habitable, due to its extremely low temperatures. [ 87 ] The same goes for Haumea [ 88 ] and Eris . [ 89 ] Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Local Hole → Observable universe → Universe Each arrow ( → ) may be read as "within" or "part of".
https://en.wikipedia.org/wiki/Planetary_habitability_in_the_Solar_System
In astronomy , planetary mass is a measure of the mass of a planet -like astronomical object . Within the Solar System , planets are usually measured in the astronomical system of units , where the unit of mass is the solar mass ( M ☉ ), the mass of the Sun . In the study of extrasolar planets , the unit of measure is typically the mass of Jupiter ( M J ) for large gas giant planets, and the mass of Earth ( M E ) for smaller rocky terrestrial planets . The mass of a planet within the Solar System is an adjusted parameter in the preparation of ephemerides . There are three variations of how planetary mass can be calculated: The choice of solar mass , M ☉ , as the basic unit for planetary mass comes directly from the calculations used to determine planetary mass. In the most precise case, that of the Earth itself, the mass is known in terms of solar masses to twelve significant figures : the same mass, in terms of kilograms or other Earth-based units, is only known to five significant figures, which is less than a millionth as precise. [ 1 ] The difference comes from the way in which planetary masses are calculated. It is impossible to "weigh" a planet, and much less the Sun, against the sort of mass standards which are used in the laboratory. On the other hand, the orbits of the planets give a great range of observational data as to the relative positions of each body, and these positions can be compared to their relative masses using Newton's law of universal gravitation (with small corrections for General Relativity where necessary). To convert these relative masses to Earth-based units such as the kilogram, it is necessary to know the value of the Newtonian constant of gravitation , G . This constant is remarkably difficult to measure in practice, and its value is known to a relative precision of only 2.2 × 10 −5 . [ 2 ] The solar mass is quite a large unit on the scale of the Solar System: 1.9884(2) × 10 30 kg . [ 1 ] The largest planet, Jupiter , is 0.09% the mass of the Sun, while the Earth is about three millionths (0.0003%) of the mass of the Sun. When comparing the planets among themselves, it is often convenient to use the mass of the Earth ( M E or M E ) as a standard, particularly for the terrestrial planets . For the mass of gas giants , and also for most extrasolar planets and brown dwarfs , the mass of Jupiter ( M J ) is a convenient comparison. The mass of a planet has consequences for its structure by having a large mass, especially while it is in the hand of process of formation . A body with enough mass can overcome its compressive strength and achieve a rounded shape (roughly hydrostatic equilibrium ). Since 2006, these objects have been classified as dwarf planet if it orbits around the Sun (that is, if it is not the satellite of another planet). The threshold depends on a number of factors, such as composition, temperature, and the presence of tidal heating. The smallest body that is known to be rounded is Saturn's moon Mimas , at about 1 ⁄ 160000 the mass of Earth; on the other hand, bodies as large as the Kuiper belt object Salacia , at about 1 ⁄ 13000 the mass of Earth, may not have overcome their compressive strengths. Smaller bodies like asteroids are classified as " small Solar System bodies ". A dwarf planet, by definition, is not massive enough to have gravitationally cleared its neighbouring region of planetesimals . 
The mass needed to do so depends on location: Mars clears its orbit in its current location, but would not do so if it orbited in the Oort cloud . The smaller planets retain only silicates and metals, and are terrestrial planets like Earth or Mars . The interior structure of rocky planets is mass-dependent: for example, plate tectonics may require a minimum mass to generate sufficient temperatures and pressures for it to occur. [ 3 ] Geophysical definitions would also include the dwarf planets and moons in the outer Solar System, which are like terrestrial planets except that they are composed of ice and rock rather than rock and metal: the largest such bodies are Ganymede , Titan , Callisto , Triton , and Pluto . If the protoplanet grows by accretion to more than about twice the mass of Earth, its gravity becomes large enough to retain hydrogen in its atmosphere . In this case, it will grow into an ice giant or gas giant . As such, Earth and Venus are close to the maximum size a planet can usually grow to while still remaining rocky. [ 4 ] If the planet then begins migration , it may move well within its system's frost line , and become a hot Jupiter orbiting very close to its star, then gradually losing small amounts of mass as the star's radiation strips its atmosphere. The theoretical minimum mass a star can have, and still undergo hydrogen fusion at the core, is estimated to be about 75 M J , though fusion of deuterium can occur at masses as low as 13 Jupiters. [ 5 ] [ 6 ] [ 7 ] The DE405/LE405 ephemeris from the Jet Propulsion Laboratory [ 1 ] [ 8 ] is a widely used ephemeris dating from 1998 and covering the whole Solar System. As such, the planetary masses form a self-consistent set, which is not always the case for more recent data (see below). Where a planet has natural satellites, its mass is usually quoted for the whole system (planet + satellites), as it is the mass of the whole system which acts as a perturbation on the orbits of other planets. The distinction is very slight, as natural satellites are much smaller than their parent planets (as can be seen in the table above, where only the largest satellites are even listed). The Earth and the Moon form a case in point, partly because the Moon is unusually large (just over 1% of the mass of the Earth) in relation to its parent planet compared with other natural satellites. There are also very precise data available for the Earth–Moon system, particularly from the Lunar Laser Ranging experiment (LLR). The geocentric gravitational constant – the product of the mass of the Earth times the Newtonian constant of gravitation – can be measured to high precision from the orbits of the Moon and of artificial satellites. The ratio of the two masses can be determined from the slight wobble in the Earth's orbit caused by the gravitational attraction of the Moon. The construction of a full, high-precision Solar System ephemeris is an onerous task. [ 9 ] It is possible (and somewhat simpler) to construct partial ephemerides which only concern the planets (or dwarf planets, satellites, asteroids) of interest by "fixing" the motion of the other planets in the model. The two methods are not strictly equivalent, especially when it comes to assigning uncertainties to the results: however, the "best" estimates – at least in terms of quoted uncertainties in the result – for the masses of minor planets and asteroids usually come from partial ephemerides. 
Nevertheless, new complete ephemerides continue to be prepared, most notably the EPM2004 ephemeris from the Institute of Applied Astronomy of the Russian Academy of Sciences. EPM2004 is based on 317 014 separate observations between 1913 and 2003, more than seven times as many as DE405, and gave more precise masses for Ceres and five asteroids. [ 9 ] A new set of "current best estimates" for various astronomical constants [ 15 ] was approved by the 27th General Assembly of the International Astronomical Union (IAU) in August 2009. [ 16 ] The 2009 set of "current best estimates" was updated in 2012 by resolution B2 of the IAU XXVIII General Assembly. [ 24 ] Improved values were given for Mercury and Uranus (and also for the Pluto system and Vesta).
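As a rough numerical illustration of the points above – that masses expressed in kilograms inherit the uncertainty of G, while mass ratios such as Earth/Moon or Earth/Sun are known far more precisely – the following Python sketch converts well-determined gravitational parameters into kilograms. The numerical values and variable names are approximate placeholders chosen for illustration; they are not taken from any particular ephemeris.

G = 6.674e-11              # m^3 kg^-1 s^-2, Newtonian constant of gravitation (approx.)
dG_rel = 2.2e-5            # relative uncertainty of G quoted in the text

GM_sun = 1.32712e20        # m^3 s^-2, heliocentric gravitational constant (approx.)
GM_earth = 3.98600e14      # m^3 s^-2, geocentric gravitational constant (approx.)
earth_moon_ratio = 81.3    # Earth mass / Moon mass, from the lunar-induced wobble (approx.)

M_sun = GM_sun / G         # converting to kilograms brings in the uncertainty of G
M_earth = GM_earth / G
M_moon = M_earth / earth_moon_ratio

print(f"Sun   : {M_sun:.4e} kg  (relative error ~ {dG_rel:.1e}, dominated by G)")
print(f"Earth : {M_earth:.3e} kg")
print(f"Moon  : {M_moon:.3e} kg  (~{100 * M_moon / M_earth:.2f}% of Earth)")
print(f"Earth in solar masses: {GM_earth / GM_sun:.6e}  (no G needed)")

The last line illustrates why masses are quoted in solar masses: the ratio of two gravitational parameters is independent of G and is therefore far better determined than either mass expressed in kilograms.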
https://en.wikipedia.org/wiki/Planetary_mass
A planetary mnemonic is a phrase created to remember the planets and dwarf planets of the Solar System, with the order of words corresponding to the increasing sidereal periods of the bodies. One simple visual mnemonic is to hold out both hands side-by-side with thumbs pointing in the same direction (typically left hand facing palm down, and right hand palm up). The fingers of the hand with palm down represent the terrestrial planets, where the left pinkie represents Mercury and its thumb represents the asteroid belt, including Ceres. The other hand represents the giant planets, with its thumb representing trans-Neptunian objects, including Pluto. Before 2006, Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto were considered planets, and numerous mnemonics were devised for them. With the IAU's 2006 definition of planet, which reclassified Pluto as a dwarf planet alongside Ceres and Eris, these mnemonics became obsolete. Since Pluto was no longer considered a planet, mnemonics could no longer include the final "P". The first notable suggestion came from Kyle Sullivan of Lumberton, Mississippi, USA, whose mnemonic was published in the January 2007 issue of Astronomy magazine: "My Violent Evil Monster Just Scared Us Nuts". [ 3 ] In August 2006, for the eight planets recognized under the new definition, [ 4 ] Phyllis Lugger, professor of astronomy at Indiana University, suggested the following modification to the common mnemonic for the nine planets: "My Very Educated Mother Just Served Us Nachos". She proposed this mnemonic to Owen Gingerich, Chair of the International Astronomical Union (IAU) Planet Definition Committee, and published it in the American Astronomical Society Committee on the Status of Women in Astronomy Bulletin Board on August 25, 2006. [ 5 ] It also appeared in Indiana University's IU News Room Star Trak on August 30, 2006. [ 6 ] This mnemonic is used by the IAU on their website for the public. [ 7 ] Others, angry at the IAU's decision to "demote" Pluto, composed sarcastic mnemonics in protest. In 2007, the National Geographic Society sponsored a contest for a new mnemonic of MVEMCJSUNPE, incorporating the then-eleven known planets and dwarf planets, including Eris, Ceres, and the newly demoted Pluto. On February 22, 2008, "My Very Exciting Magic Carpet Just Sailed Under Nine Palace Elephants", coined by 10-year-old Maryn Smith of Great Falls, Montana, was announced as the winner. [ 11 ] The phrase was featured in the song 11 Planets by Grammy-nominated singer and songwriter Lisa Loeb and in the book 11 Planets: A New View of the Solar System by David Aguilar (ISBN 978-1426302367). [ 12 ] Since the National Geographic competition, two additional bodies have been designated as dwarf planets: Makemake and Haumea, on July 11 and September 17, 2008 respectively. A 2015 New York Times article suggested some mnemonics, including "My Very Educated Mother Cannot Just Serve Us Nine Pizzas—Hundreds May Eat!" [ 13 ] Longer mnemonics will be required in the future if more of the possible dwarf planets are recognized as such by the IAU. However, at some point enthusiasm for new mnemonics will wane as the number of dwarf planets exceeds the number that people will want to learn (it is estimated that there may be up to 200 dwarf planets).
[ 14 ]
https://en.wikipedia.org/wiki/Planetary_mnemonic
In astronomy, a planetary parade, also known as a planetary alignment or planetary procession, occurs when multiple planets in the Solar System appear close together in the night sky, visible at the same time from Earth. A planetary parade is not a true alignment in space, but rather an apparent alignment that results from the planets' orbital positions relative to the viewpoint of Earth-bound observers, with the planets lying in an arc across the sky. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Planetary parades of three, four or five planets are commonplace events, but larger planetary parades are less frequent. [ 6 ] Because the motions of the planets are predictable, the timing of planetary parades past and future can be easily calculated across long time periods. Two large planetary parades occurred in 2025, involving six and seven planets respectively. The first phase of this astronomical event began on 21 January and ended on 21 February 2025. [ 7 ] During this phase Venus, Mars, Jupiter, Uranus, Neptune, and Saturn remained visible in the night sky. [ 8 ] The second phase was predicted to occur on 28 February 2025, with seven planets, as Mercury joined the six planets visible during the first phase. [ 9 ] Following the 2025 parade, the next six-planet parade will not occur until 2040. [ 6 ] [ 10 ]
https://en.wikipedia.org/wiki/Planetary_parade
Planetary protection is a guiding principle in the design of an interplanetary mission, aiming to prevent biological contamination of both the target celestial body and the Earth in the case of sample-return missions. Planetary protection reflects both the unknown nature of the space environment and the desire of the scientific community to preserve the pristine nature of celestial bodies until they can be studied in detail. [ 2 ] [ 3 ] There are two types of interplanetary contamination. Forward contamination is the transfer of viable organisms from Earth to another celestial body. Back contamination is the transfer of potential extraterrestrial organisms back to the Earth's biosphere. The potential problem of lunar and planetary contamination was first raised at the International Astronautical Federation VIIth Congress in Rome in 1956. [ 4 ] In 1958 [ 5 ] the U.S. National Academy of Sciences (NAS) passed a resolution stating, “The National Academy of Sciences of the United States of America urges that scientists plan lunar and planetary studies with great care and deep concern so that initial operations do not compromise and make impossible forever after critical scientific experiments.” This led to the creation of the ad hoc Committee on Contamination by Extraterrestrial Exploration (CETEX), which met for a year and recommended that interplanetary spacecraft be sterilized, and stated, “The need for sterilization is only temporary. Mars and possibly Venus need to remain uncontaminated only until study by manned ships becomes possible”. [ 6 ] In 1959, responsibility for planetary protection was transferred to the newly formed Committee on Space Research (COSPAR). COSPAR in 1964 issued Resolution 26, affirming that: the search for extraterrestrial life is an important objective of space research, that the planet of Mars may offer the only feasible opportunity to conduct this search during the foreseeable future, that contamination of this planet would make such a search far more difficult and possibly even prevent for all time an unequivocal result, that all practical steps should be taken to ensure that Mars be not biologically contaminated until such time as this search has been satisfactorily carried out, and that cooperation in proper scheduling of experiments and use of adequate spacecraft sterilization techniques is required on the part of all deep space probe launching authorities to avoid such contamination. [ 7 ] In 1967, the US, USSR, and UK ratified the United Nations Outer Space Treaty. The legal basis for planetary protection lies in Article IX of this treaty: "Article IX: ... States Parties to the Treaty shall pursue studies of outer space, including the Moon and other celestial bodies, and conduct exploration of them so as to avoid their harmful contamination and also adverse changes in the environment of the Earth resulting from the introduction of extraterrestrial matter and, where necessary, shall adopt appropriate measures for this purpose..." [ 8 ] [ 9 ] This treaty has since been signed and ratified by 104 nation-states. Another 24 have signed but not ratified. All current space-faring nation-states, along with all current aspiring space-faring nation-states, have both signed and ratified the treaty.
[ 10 ] The Outer Space Treaty has consistent and widespread international support, and as a result of this, together with the fact that it is based on the 1963 declaration which was adopted by consensus in the UN General Assembly, it has taken on the status of customary international law. The provisions of the Outer Space Treaty are therefore binding on all states, even those which have neither signed nor ratified it. [ 11 ] For forward contamination, the phrase to be interpreted is "harmful contamination". Two legal reviews came to differing interpretations of this clause (both reviews were unofficial). However, the currently accepted interpretation is that “any contamination which would result in harm to a state’s experiments or programs is to be avoided”. NASA policy states explicitly that “the conduct of scientific investigations of possible extraterrestrial life forms, precursors, and remnants must not be jeopardized”. [ 12 ] The Committee on Space Research (COSPAR) meets every two years, in a gathering of 2000 to 3000 scientists, [ 13 ] and one of its tasks is to develop recommendations for avoiding interplanetary contamination. Its legal basis is Article IX of the Outer Space Treaty [ 14 ] (see history below for details). Its recommendations depend on the type of space mission and the celestial body explored, [ 15 ] and COSPAR categorizes missions into five groups (Categories I to V) on this basis. For Category IV missions, a certain level of biological burden is allowed for the mission. In general this is expressed as a 'probability of contamination', required to be less than one chance in 10,000 [ 19 ] [ 20 ] of forward contamination per mission, but in the case of Mars Category IV missions the requirement has been translated into a count of Bacillus spores per surface area, as an easy-to-use assay method. [ 16 ] [ 21 ] More extensive documentation is also required for Category IV. Other procedures required, depending on the mission, may include trajectory biasing, the use of clean rooms during spacecraft assembly and testing, bioload reduction, partial sterilization of the hardware having direct contact with the target body, a bioshield for that hardware, and, in rare cases, complete sterilization of the entire spacecraft. [ 16 ] For restricted Category V missions, the current recommendation [ 22 ] is that no uncontained samples should be returned unless sterilized. Since sterilization of the returned samples would destroy much of their science value, current proposals involve containment and quarantine procedures. For details, see Containment and quarantine below. Category V missions also have to fulfill the requirements of Category IV to protect the target body from forward contamination. A special region is a region classified by COSPAR where terrestrial organisms could readily propagate, or which is thought to have a high potential for the existence of Martian life forms. This is understood to apply to any region on Mars where liquid water occurs, or can occasionally occur, based on the current understanding of the requirements for life. If a hard landing risks biological contamination of a special region, then the whole lander system must be sterilized to COSPAR category IVc. Some targets are easily categorized. Others are assigned provisional categories by COSPAR, pending future discoveries and research. The 2009 COSPAR Workshop on Planetary Protection for Outer Planet Satellites and Small Solar System Bodies covered this in some detail. Most of the following assessments are from that report, with some later refinements.
This workshop also gave more precise definitions for some of the categories. [ 23 ] [ 24 ] Category I targets are those “not of direct interest for understanding the process of chemical evolution or the origin of life.” [ 25 ] Category II targets are those “…where there is only a remote chance that contamination carried by a spacecraft could jeopardize future exploration”. In this case “remote chance” is defined as “the absence of niches (places where terrestrial microorganisms could proliferate) and/or a very low likelihood of transfer to those places.” [ 23 ] [ 25 ] Provisionally, the workshop assigned Pluto, Charon and the other large Kuiper belt objects to Category II. However, it stated that more research is needed, because there is a remote possibility that the tidal interactions of Pluto and Charon could maintain some water reservoir below the surface. Similar considerations apply to the other larger KBOs. Triton is insufficiently well understood at present to say it is definitely devoid of liquid water. The only close-up observations to date are those of Voyager 2. In a detailed discussion of Titan, scientists concluded that there was no danger of contamination of its surface, except for the short-term addition of negligible amounts of organics, but that Titan could have a below-surface water reservoir that communicates with the surface, and if so, this could be contaminated. In the case of Ganymede, the question is, given that its surface shows pervasive signs of resurfacing, is there any communication with its subsurface ocean? They found no known mechanism by which this could happen, and the Galileo spacecraft found no evidence of cryovolcanism. Initially, they assigned it as Priority B minus, meaning that precursor missions are needed to assess its category before any surface missions. However, after further discussion they provisionally assigned it to Category II, so no precursor missions are required, depending on future research. If there is cryovolcanism on Ganymede or Titan, the undersurface reservoir is thought to be 50–150 km below the surface. They were unable to find a process that could transfer surface melted water back down through 50 km of ice to the subsurface sea. [ 28 ] This is why both Ganymede and Titan were assigned a reasonably firm provisional Category II, pending the results of future research. Icy bodies that show signs of recent resurfacing need further discussion and might need to be assigned to a new category depending on future research. This approach has been applied, for instance, to missions to Ceres. The planetary protection category was subject to review during the mission of the Ceres orbiter (Dawn), depending on the results found. [ 29 ] Category III and IV targets are those “…where there is a significant chance that contamination carried by a spacecraft could jeopardize future exploration.” Here “significant chance” is defined as “the presence of niches (places where terrestrial microorganisms could proliferate) and the likelihood of transfer to those places.” [ 23 ] [ 25 ] Unrestricted Category V covers “Earth-return missions from bodies deemed by scientific opinion to have no indigenous life forms.” [ 25 ] Restricted Category V covers "Earth-return missions from bodies deemed by scientific opinion to be of significant interest to the process of chemical evolution or the origin of life." [ 25 ] Provisional conclusions have also been reached for Category V sample-return targets. [ 25 ] The aim of the current regulations is to keep the number of microorganisms low enough so that the probability of contamination of Mars (and other targets) is acceptable. It is not an objective to make the probability of contamination zero.
The aim is to keep the probability of contamination below 1 chance in 10,000 per mission flown. [ 19 ] This figure is typically obtained by multiplying together the number of microorganisms on the spacecraft, the probability of growth on the target body, and a series of bioload reduction factors. In detail, the method used is the Coleman–Sagan equation, [ 30 ]

P_c = N_0 R P_S P_t P_R P_g,

where N_0 is the number of microorganisms initially on the spacecraft and the remaining factors represent bioload reduction and the probabilities of surviving the space environment, of being transferred and released into a habitable environment, and of growth there. The requirement is then

P_c < 10⁻⁴.

The threshold of 10⁻⁴ is a number chosen by Sagan et al., somewhat arbitrarily. Sagan and Coleman assumed that about 60 missions to the Mars surface would occur before the exobiology of Mars is thoroughly understood, 54 of them successful, along with 30 flybys or orbiters, and the number was chosen to ensure a probability of at least 99.9% of keeping the planet free from contamination over the duration of the exploration period. [ 20 ] The Coleman–Sagan equation has been criticised because the individual parameters are often not known to better than an order of magnitude or so. For example, the thickness of the surface ice of Europa is unknown, and may be thin in places, which can give rise to a high level of uncertainty in the equation. [ 31 ] [ 32 ] It has also been criticised because of the inherent assumption it makes of an end to the protection period and future human exploration. In the case of Europa, this would only protect it with reasonable probability for the duration of the period of exploration. [ 31 ] [ 32 ] Greenberg has suggested an alternative, to use the natural contamination standard: that our missions to Europa should not have a higher chance of contaminating it than the chance of contamination by meteorites from Earth. [ 33 ] [ 34 ] As long as the probability of people infecting other planets with terrestrial microbes is substantially smaller than the probability that such contamination happens naturally, exploration activities would, in our view, be doing no harm. We call this concept the natural contamination standard. Another approach for Europa is the use of binary decision trees, which is favoured by the Committee on Planetary Protection Standards for Icy Bodies in the Outer Solar System under the auspices of the Space Studies Board. [ 19 ] This goes through a series of seven steps, leading to a final decision on whether to go ahead with the mission or not. [ 35 ] Recommendation: Approaches to achieving planetary protection should not rely on the multiplication of bioload estimates and probabilities to calculate the likelihood of contaminating Solar System bodies with terrestrial organisms unless scientific data unequivocally define the values, statistical variation, and mutual independence of every factor used in the equation. Recommendation: Approaches to achieving planetary protection for missions to icy Solar System bodies should employ a series of binary decisions that consider one factor at a time to determine the appropriate level of planetary protection procedures to use.
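As a purely illustrative sketch of the Coleman–Sagan bookkeeping described above, the following Python fragment multiplies an assumed initial bioload by a chain of assumed reduction and survival factors and compares the product with the 10⁻⁴ threshold. Every numerical value and factor name here is an invented placeholder, not a figure used by any real mission.

# Illustrative Coleman-Sagan style estimate: the contamination probability is the
# product of the initial bioload and a chain of reduction/survival/growth factors.
# All values below are invented placeholders.
N0  = 3.0e5    # initial number of microorganisms on the spacecraft (placeholder)
R   = 1.0e-4   # bioload reduction from cleaning and sterilization (placeholder)
P_S = 1.0e-2   # probability of surviving the cruise environment (placeholder)
P_t = 1.0e-1   # probability of transfer to the target body (placeholder)
P_R = 1.0e-1   # probability of release into a habitable niche (placeholder)
P_g = 1.0e-3   # probability of growth once released (placeholder)

P_c = N0 * R * P_S * P_t * P_R * P_g
print(f"P_c = {P_c:.1e} -> {'meets' if P_c < 1e-4 else 'fails'} the 1e-4 requirement")

The criticism quoted above amounts to the observation that several of these factors are uncertain by an order of magnitude or more, so the product can be equally uncertain.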
In the case of restricted Category V missions, Earth would be protected through quarantine of the sample and astronauts in a yet-to-be-built Biosafety Level 4 facility. [ 36 ] In the case of a Mars sample return, missions would be designed so that no part of the capsule that encounters the Mars surface is exposed to the Earth environment. One way to do that is to enclose the sample container within a larger outer container from Earth, in the vacuum of space. The integrity of any seals is essential, and the system must also be monitored to check for the possibility of micro-meteorite damage during return to Earth. [ 37 ] [ 38 ] [ 39 ] [ 40 ] The recommendation of the ESF report is that [ 22 ] “No uncontained Mars materials, including space craft surfaces that have been exposed to the Mars environment should be returned to Earth unless sterilised" ... "For unsterilised samples returned to Earth, a programme of life detection and biohazard testing, or a proven sterilisation process, shall be undertaken as an absolute precondition for the controlled distribution of any portion of the sample.” No restricted Category V returns have been carried out. During the Apollo program, the sample returns were regulated through the Extra-Terrestrial Exposure Law. This was rescinded in 1991, so new regulations would need to be enacted. The Apollo-era quarantine procedures are of interest as the only attempt to date at returning to Earth a sample that, at the time, was thought to have a remote possibility of including extraterrestrial life. Samples and astronauts were quarantined in the Lunar Receiving Laboratory. [ 41 ] The methods used would be considered inadequate for containment by modern standards. [ 42 ] The Lunar Receiving Laboratory would also be judged a failure by its own design criteria, as it did not contain the lunar material: there were two containment failure points during the Apollo 11 return mission, at splashdown and at the facility itself. However, the Lunar Receiving Laboratory was built quickly, with only two years from start to finish, a time period now considered inadequate. Lessons learned from it can help with the design of any Mars sample return receiving facility. [ 43 ] Design criteria for a proposed Mars Sample Return Facility, and for the return mission, have been developed by the American National Research Council [ 44 ] and the European Space Foundation. [ 45 ] They concluded that it could be based on Biosafety Level 4 containment but with more stringent requirements to contain unknown microorganisms possibly as small as or smaller than the smallest known Earth microorganisms, the ultramicrobacteria. The ESF study also recommended that it should be designed to contain the smaller gene transfer agents if possible, as these could potentially transfer DNA from Martian microorganisms to terrestrial microorganisms if they have a shared evolutionary ancestry. It also needs to double as a clean room facility to protect the samples from terrestrial contamination that could confuse the sensitive life-detection tests that would be used on the samples. Before a sample return, new quarantine laws would be required. Environmental assessment would also be required, and various other domestic and international laws not present during the Apollo era would need to be negotiated. [ 46 ] For all spacecraft missions requiring decontamination, the starting point is clean room assembly in US federal standard class 100 cleanrooms. These are rooms with fewer than 100 particles of size 0.5 μm or larger per cubic foot. Engineers wear cleanroom suits with only their eyes exposed. Components are sterilized individually before assembly, as far as possible, and surfaces are cleaned frequently with alcohol wipes during assembly. Spores of Bacillus subtilis were chosen not only for the species' ability to readily generate spores, but also for its well-established use as a model species.
It is a useful tracker of UV irradiation effects because of its high resilience to a variety of extreme conditions. As such it is an important indicator species for forward contamination in the context of planetary protection. For Category IVa missions (Mars landers that do not search for Martian life), the aim is to reduce the bioburden to 300,000 bacterial spores on any surface from which the spores could get into the Martian environment. Any heat-tolerant components are heat-sterilized to 114 °C. Sensitive electronics, such as the core box of the rover including the computer, are sealed and vented through high-efficiency filters to keep any microbes inside. [ 47 ] [ 48 ] [ 49 ] For more sensitive missions such as Category IVc (to Mars special regions), a far higher level of sterilization is required. These levels need to be similar to those implemented on the Viking landers, which were sterilized for a surface which, at the time, was thought to be potentially hospitable to life, much as the special regions on Mars are regarded today. In microbiology, it is usually impossible to prove that no viable microorganisms are left, since many microorganisms are either not yet studied or not cultivable. Instead, sterilization is done using a series of tenfold reductions of the numbers of microorganisms present. After a sufficient number of tenfold reductions, the chance that any viable microorganisms remain is extremely low. The two Viking Mars landers were sterilized using dry heat sterilization. After preliminary cleaning to reduce the bioburden to levels similar to present-day Category IVa spacecraft, the Viking spacecraft were heat-treated for 30 hours at 112 °C, nominal 125 °C (five hours at 112 °C was considered enough to reduce the population tenfold even for enclosed parts of the spacecraft, so this was enough for a million-fold reduction of the originally low population). [ 50 ] Modern materials, however, are often not designed to handle such temperatures, especially since modern spacecraft often use "commercial off the shelf" components. Problems encountered include nanoscale features only a few atoms thick, plastic packaging, and conductive epoxy attachment methods. Also, many instrument sensors cannot be exposed to high temperature, and high temperature can interfere with critical alignments of instruments. [ 50 ] As a result, new methods are needed to sterilize a modern spacecraft to the higher categories, such as Category IVc for Mars, similar to Viking. [ 50 ] Several alternative methods are under evaluation or have already been approved. Some other methods are of interest because they could sterilize the spacecraft after arrival on the planet. The spore count is used as an indirect measure of the number of microorganisms present. Typically 99% of microorganisms by species will be non-spore-forming and able to survive in dormant states, and so the actual number of viable dormant microorganisms remaining on the sterilized spacecraft is expected to be many times the number of spore-forming microorganisms. One newly approved spore method is the "Rapid Spore Assay". This is based on commercial rapid assay systems, detects spores directly rather than only viable microorganisms, and gives results in 5 hours instead of 72.
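The tenfold-reduction arithmetic described above for the Viking bake can be sketched in a few lines; assuming, as stated, that five hours at 112 °C gives one tenfold (decimal) reduction, the expected surviving population after a given bake time follows directly. The starting population below is a placeholder.

# Sketch of dry-heat decimal-reduction arithmetic, using the Viking figure
# quoted above: roughly 5 hours at 112 degC per tenfold reduction.
D_HOURS = 5.0      # hours per tenfold (decimal) reduction, per the text
N0 = 3.0e5         # starting spore count (placeholder, cf. the Category IVa limit)

def survivors(hours_baked, n0=N0, d=D_HOURS):
    """Expected surviving population after baking for `hours_baked` hours."""
    return n0 * 10 ** (-hours_baked / d)

print(f"after 30 h: {survivors(30):.1e} expected survivors")  # 30 h = 6 decades, i.e. a million-fold reduction

Fractional expected survivors are read as a small probability that any viable organism remains, which is how the Viking bake achieved the million-fold reduction mentioned in the text.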
It has also long been recognized that spacecraft cleaning rooms harbour polyextremophiles as the only microbes able to survive in them. [ 53 ] [ 54 ] [ 55 ] [ 56 ] For example, in a recent study, microbes from swabs of the Curiosity rover were subjected to desiccation, UV exposure, cold and pH extremes. Nearly 11% of the 377 strains survived more than one of these severe conditions. [ 56 ] The genomes of resistant spore-producing Bacillus sp. have been studied, and genome-level traits potentially linked to the resistance have been reported. [ 57 ] [ 58 ] [ 59 ] [ 60 ] This does not mean that these microbes have contaminated Mars. This is just the first stage of the process of bioburden reduction. To contaminate Mars they also have to survive the low temperature, vacuum, UV and ionizing radiation during the months-long journey to Mars, and then have to encounter a habitat on Mars and start reproducing there. Whether this has happened or not is a matter of probability. The aim of planetary protection is to make this probability as low as possible. The currently accepted target is to reduce the probability of contamination per mission to less than 0.01%, though in the special case of Mars, scientists also rely on the hostile conditions on Mars to take the place of the final stage of heat-treatment decimal reduction used for Viking. With current technology, however, scientists cannot reduce the probability to zero. Two recent molecular methods have been approved [ 50 ] for assessment of microbial contamination on spacecraft surfaces. [ 48 ] [ 61 ] Preventing accidental impact is a further element of forward-contamination control. It particularly applies to orbital missions (Category III), as they are sterilized to a lower standard than missions to the surface. It is also relevant to landers, as an impact gives more opportunity for forward contamination, and the impact could be on an unplanned target, such as a special region on Mars. The requirement for an orbital mission is that it needs to remain in orbit for at least 20 years after arrival at Mars with a probability of at least 99%, and for 50 years with a probability of at least 95%. This requirement can be dropped if the mission is sterilized to the Viking sterilization standard. [ 62 ] In the Viking era (1970s), the requirement was given as a single figure: any orbital mission should have a probability of less than 0.003% of impact during the current exploratory phase of exploration of Mars. [ 63 ] For both landers and orbiters, the technique of trajectory biasing is used during approach to the target. The spacecraft trajectory is designed so that if communications are lost, it will miss the target. Despite the above measures, there has been one notable failure of impact prevention. The Mars Climate Orbiter, which was sterilized only to Category III, crashed on Mars in 1999 due to a mix-up of imperial and metric units. The Office of Planetary Protection stated that it is likely that it burnt up in the atmosphere, but that if it survived to the ground, it could cause forward contamination. [ 64 ] Mars Observer is another Category III mission with potential for planetary contamination. Communications were lost three days before its orbital insertion maneuver in 1993. It seems most likely it did not succeed in entering orbit around Mars and simply continued past on a heliocentric orbit. If it did succeed in following its automatic programming and attempted the manoeuvre, however, there is a chance it crashed on Mars.
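The orbital-lifetime rule quoted above lends itself to a trivial compliance check; the sketch below assumes the two probabilities have already been estimated from some orbit-decay analysis, and the input numbers are invented.

# Hedged sketch: checking the orbital-lifetime planetary protection requirement
# (>= 99% chance of staying in orbit 20 years, >= 95% chance for 50 years).
def meets_orbit_lifetime_requirement(p_20yr, p_50yr):
    """p_20yr, p_50yr: estimated probabilities of remaining in orbit for at
    least 20 and 50 years after arrival (assumed to come from an orbit-decay
    analysis; the thresholds are those quoted in the text)."""
    return p_20yr >= 0.99 and p_50yr >= 0.95

print(meets_orbit_lifetime_requirement(0.995, 0.97))  # True: requirement met
print(meets_orbit_lifetime_requirement(0.999, 0.90))  # False: Viking-level sterilization needed instead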
Three landers have had hard landings on Mars: the Schiaparelli EDM lander, the Mars Polar Lander, and Deep Space 2. These were all sterilized for surface missions but not for special regions (Viking pre-sterilization only). Mars Polar Lander and Deep Space 2 crashed into the polar regions, which are now treated as special regions because of the possibility of forming liquid brines. Alberto G. Fairén and Dirk Schulze-Makuch published an article in Nature recommending that planetary protection measures be scaled down. Their main reason was that the exchange of meteorites between Earth and Mars means that any life on Earth that could survive on Mars has already got there, and vice versa. [ 65 ] Robert Zubrin used similar arguments in favour of his view that the back contamination risk has no scientific validity. [ 66 ] [ 67 ] The meteorite argument was examined by the NRC in the context of back contamination. It is thought that all the Martian meteorites originate in relatively few impacts every few million years on Mars. The impactors would be kilometers in diameter and the craters they form on Mars tens of kilometers in diameter. Models of impacts on Mars are consistent with these findings. [ 68 ] [ 69 ] [ 70 ] Earth receives a steady stream of meteorites from Mars, but they come from relatively few original impactors, and transfer was more likely in the early Solar System. Also, some life forms viable on both Mars and Earth might be unable to survive transfer on a meteorite, and there is so far no direct evidence of any transfer of life from Mars to Earth in this way. The NRC concluded that though transfer is possible, the evidence from meteorite exchange does not eliminate the need for back contamination protection methods. [ 71 ] Impacts on Earth able to send microorganisms to Mars are also infrequent. Impactors of 10 km across or larger can send debris to Mars through the Earth's atmosphere, but these occur rarely and were more common in the early Solar System. In their 2013 paper "The Over Protection of Mars", Alberto Fairén and Dirk Schulze-Makuch suggested that we no longer need to protect Mars, essentially using Zubrin's meteorite transfer argument. [ 72 ] This was rebutted in a follow-up article in Nature, "Appropriate Protection of Mars", by the current and previous planetary protection officers Catharine Conley and John Rummel. [ 73 ] [ 74 ] The scientific consensus is that the potential for large-scale effects, either through pathogenesis or ecological disruption, is extremely small. [ 44 ] [ 75 ] [ 76 ] [ 77 ] [ 78 ] Nevertheless, returned samples from Mars will be treated as potentially biohazardous until scientists can determine that they are safe. The goal is to reduce the probability of release of a Mars particle to less than one in a million. [ 76 ] A COSPAR workshop in 2010 looked at issues related to protecting areas from non-biological contamination. [ 79 ] [ 80 ] It recommended that COSPAR expand its remit to include such issues. Among the recommendations of the workshop is Recommendation 3: COSPAR should add a separate and parallel policy to provide guidance on requirements/best practices for protection of non-living/nonlife-related aspects of Outer Space and celestial bodies. Some ideas proposed include protected special regions, or "Planetary Parks", [ 81 ] to keep regions of the Solar System pristine for future scientific investigation, and also for ethical reasons. Astrobiologist Christopher McKay has argued that until we have a better understanding of Mars, our explorations should be biologically reversible.
[ 82 ] [ 83 ] For instance, if all the microorganisms introduced to Mars so far remain dormant within the spacecraft, they could in principle be removed in the future, leaving Mars completely free of contamination from modern Earth lifeforms. In the 2010 workshop, one of the recommendations for future consideration was to extend the period of contamination prevention to the maximum viable lifetime of dormant microorganisms introduced to the planet: "Recommendation 4: COSPAR should consider that the appropriate protection of potential indigenous extraterrestrial life shall include avoiding the harmful contamination of any habitable environment —whether extant or foreseeable— within the maximum potential time of viability of any terrestrial organisms (including microbial spores) that may be introduced into that environment by human or robotic activity." [ 80 ] In the case of Europa, a similar idea has been suggested: that it is not enough to keep it free from contamination during our current exploration period. It might be that Europa is of sufficient scientific interest that the human race has a duty to keep it pristine for future generations to study as well. This was the majority view of the 2000 task force examining Europa, though there was a minority view within the same task force that such strong protection measures are not required. "One consequence of this view is that Europa must be protected from contamination for an open-ended period, until it can be demonstrated that no ocean exists or that no organisms are present. Thus, we need to be concerned that over a time scale on the order of 10 million to 100 million years (an approximate age for the surface of Europa), any contaminating material is likely to be carried into the deep ice crust or into the underlying ocean." [ 84 ] In July 2018, the National Academies of Sciences, Engineering, and Medicine issued a Review and Assessment of Planetary Protection Policy Development Processes. In part, the report urges NASA to create a broad strategic plan that covers both forward and back contamination. The report also expresses concern about private industry missions, for which there is no governmental regulatory authority. [ 85 ] [ 86 ] The proposal by the German physicist Claudius Gros that the technology of the Breakthrough Starshot project might be used to establish a biosphere of unicellular organisms on otherwise only transiently habitable exoplanets [ 87 ] has sparked a discussion [ 88 ] of the extent to which planetary protection should be extended to exoplanets. [ 89 ] [ 90 ] Gros argues that the extended timescales of interstellar missions imply that planetary and exoplanetary protection have different ethical groundings. [ 91 ] "This protocol was defined in concert with Viking, the first mission to face the most stringent planetary protection requirements; its implementation remains the gold standard today."
A policy review of the Outer Space Treaty concluded that, while Article IX "imposed international obligations on all state parties to protect and preserve the environmental integrity of outer space and celestial bodies such as Mars," there is no definition of what constitutes harmful contamination, nor does the treaty specify under what circumstances it would be necessary to "adopt appropriate measures" or which measures would in fact be "appropriate". An earlier legal review, however, argued that "if the assumption is made that the parties to the treaty were not merely being verbose" and "harmful contamination" is not simply redundant, "harmful" should be interpreted as "harmful to the interests of other states," and since "states have an interest in protecting their ongoing space programs," Article IX must mean that "any contamination which would result in harm to a state’s experiments or programs is to be avoided". Current NASA policy states that the goal of NASA’s forward contamination planetary protection policy is the protection of scientific investigations, declaring explicitly that "the conduct of scientific investigations of possible extraterrestrial life forms, precursors, and remnants must not be jeopardized". "The best that I hear now is that the techniques of isolation we used wouldn’t be adequate for a sample coming back from Mars, so somebody else has a big job on their hands."
https://en.wikipedia.org/wiki/Planetary_protection
Planetary science (or more rarely, planetology ) is the scientific study of planets (including Earth ), celestial bodies (such as moons , asteroids , comets ) and planetary systems (in particular those of the Solar System ) and the processes of their formation. It studies objects ranging in size from micrometeoroids to gas giants , with the aim of determining their composition, dynamics, formation, interrelations and history. It is a strongly interdisciplinary field, which originally grew from astronomy and Earth science , [ 1 ] and now incorporates many disciplines, including planetary geology , cosmochemistry , atmospheric science , physics , oceanography , hydrology , theoretical planetary science , glaciology , and exoplanetology . [ 1 ] Allied disciplines include space physics , when concerned with the effects of the Sun on the bodies of the Solar System, and astrobiology . There are interrelated observational and theoretical branches of planetary science. Observational research can involve combinations of space exploration , predominantly with robotic spacecraft missions using remote sensing , and comparative, experimental work in Earth-based laboratories . The theoretical component involves considerable computer simulation and mathematical modelling . Planetary scientists are generally located in the astronomy and physics or Earth sciences departments of universities or research centres, though there are several purely planetary science institutes worldwide. Generally, planetary scientists study one of the Earth sciences, astronomy, astrophysics , geophysics , or physics at the graduate level and concentrate their research in planetary science disciplines. There are several major conferences each year, and a wide range of peer reviewed journals . Some planetary scientists work at private research centres and often initiate partnership research tasks. The history of planetary science may be said to have begun with the Ancient Greek philosopher Democritus , who is reported by Hippolytus as saying The ordered worlds are boundless and differ in size, and that in some there is neither sun nor moon, but that in others, both are greater than with us, and yet with others more in number. And that the intervals between the ordered worlds are unequal, here more and there less, and that some increase, others flourish and others decay, and here they come into being and there they are eclipsed. But that they are destroyed by colliding with one another. And that some ordered worlds are bare of animals and plants and all water. [ 2 ] In more modern times, planetary science began in astronomy, from studies of the unresolved planets. In this sense, the original planetary astronomer would be Galileo , who discovered the four largest moons of Jupiter , the mountains on the Moon , and first observed the rings of Saturn , all objects of intense later study. Galileo's study of the lunar mountains in 1609 also began the study of extraterrestrial landscapes: his observation "that the Moon certainly does not possess a smooth and polished surface" suggested that it and other worlds might appear "just like the face of the Earth itself". [ 3 ] Advances in telescope construction and instrumental resolution gradually allowed increased identification of the atmospheric as well as surface details of the planets. 
The Moon was initially the most heavily studied body, due to its proximity to the Earth and the elaborate features visible on its surface, and technological improvements gradually produced more detailed lunar geological knowledge. In this scientific process, the main instruments were astronomical optical telescopes (and later radio telescopes) and finally robotic exploratory spacecraft, such as space probes. The Solar System has now been relatively well studied, and a good overall understanding of the formation and evolution of this planetary system exists. However, there are large numbers of unsolved questions, [ 4 ] and the rate of new discoveries is very high, partly due to the large number of interplanetary spacecraft currently exploring the Solar System. Planetary science encompasses observational and theoretical astronomy, geology (astrogeology), atmospheric science, and an emerging subspecialty concerning planetary oceans, called planetary oceanography. [ 5 ] Planetary astronomy is both an observational and a theoretical science. Observational researchers are predominantly concerned with the study of the small bodies of the Solar System: those that are observed by telescopes, both optical and radio, so that characteristics of these bodies such as shape, spin, surface materials and weathering are determined, and the history of their formation and evolution can be understood. Theoretical planetary astronomy is concerned with dynamics: the application of the principles of celestial mechanics to the Solar System and extrasolar planetary systems. Observing exoplanets and determining their physical properties, exoplanetology, is a major area of research besides Solar System studies. Every planet has its own branch. In planetary science, the term geology is used in its broadest sense, to mean the study of the surface and interior parts of planets and moons, from their core to their magnetosphere. The best-known research topics of planetary geology deal with the planetary bodies in the near vicinity of the Earth: the Moon, and the two neighboring planets, Venus and Mars. Of these, the Moon was studied first, using methods developed earlier on the Earth. Planetary geology focuses on celestial objects that exhibit a solid surface or have significant solid physical states as part of their structure. Planetary geology applies geology, geophysics and geochemistry to planetary bodies. [ 6 ] Geomorphology studies the features on planetary surfaces and reconstructs the history of their formation, inferring the physical processes that acted on the surface. Planetary geomorphology includes the study of several classes of surface features. The history of a planetary surface can be deciphered by mapping features from top to bottom according to their deposition sequence, as first determined on terrestrial strata by Nicolas Steno. For example, stratigraphic mapping prepared the Apollo astronauts for the field geology they would encounter on their lunar missions. Overlapping sequences were identified on images taken by the Lunar Orbiter program, and these were used to prepare a lunar stratigraphic column and geological map of the Moon. One of the main problems when generating hypotheses on the formation and evolution of objects in the Solar System is the lack of samples that can be analyzed in the laboratory, where a large suite of tools are available and the full body of knowledge derived from terrestrial geology can be brought to bear.
Direct samples from the Moon, asteroids and Mars are present on Earth, removed from their parent bodies and delivered as meteorites. Some of these have suffered contamination from the oxidising effect of Earth's atmosphere and the infiltration of the biosphere, but those meteorites collected in the last few decades from Antarctica are almost entirely pristine. The different types of meteorites that originate from the asteroid belt cover almost all parts of the structure of differentiated bodies: meteorites even exist that come from the core-mantle boundary (pallasites). The combination of geochemistry and observational astronomy has also made it possible to trace the HED meteorites back to a specific asteroid in the main belt, 4 Vesta. The comparatively few known Martian meteorites have provided insight into the geochemical composition of the Martian crust, although the unavoidable lack of information about their points of origin on the diverse Martian surface has meant that they do not provide more detailed constraints on theories of the evolution of the Martian lithosphere. [ 10 ] As of July 24, 2013, 65 Martian meteorites had been discovered on Earth. Many were found in either Antarctica or the Sahara Desert. During the Apollo program, 384 kilograms of lunar samples were collected and transported to the Earth, and three Soviet Luna robots also delivered regolith samples from the Moon. These samples provide the most comprehensive record of the composition of any Solar System body besides the Earth. The number of lunar meteorites has grown quickly in recent years; [ 11 ] as of April 2008, 54 meteorites had been officially classified as lunar. Eleven of these are from the US Antarctic meteorite collection, six are from the Japanese Antarctic meteorite collection, and the other 37 are from hot desert localities in Africa, Australia, and the Middle East. The total mass of recognized lunar meteorites is close to 50 kg. Space probes made it possible to collect data not only in the visible-light region but in other areas of the electromagnetic spectrum. The planets can be characterized by their force fields: gravity and their magnetic fields, which are studied through geophysics and space physics. Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the gravity fields of the planets to be mapped. For example, in the 1970s, the gravity field disturbances above lunar maria were measured through lunar orbiters, which led to the discovery of concentrations of mass, mascons, beneath the Imbrium, Serenitatis, Crisium, Nectaris and Humorum basins. If a planet's magnetic field is sufficiently strong, its interaction with the solar wind forms a magnetosphere around the planet. Early space probes discovered the gross dimensions of the terrestrial magnetic field, which extends about 10 Earth radii towards the Sun. The solar wind, a stream of charged particles, streams out and around the terrestrial magnetic field, and continues behind the magnetic tail, hundreds of Earth radii downstream. Inside the magnetosphere, there are relatively dense regions of solar wind particles, the Van Allen radiation belts. Planetary geophysics includes, but is not limited to, seismology and tectonophysics, geophysical fluid dynamics, mineral physics, geodynamics, mathematical geophysics, and geophysical surveying.
Planetary geodesy (also known as planetary geodetics) deals with the measurement and representation of the planets of the Solar System, their gravitational fields and geodynamic phenomena (polar motion in three-dimensional, time-varying space). The science of geodesy has elements of both astrophysics and planetary sciences. The shape of the Earth is to a large extent the result of its rotation, which causes its equatorial bulge, and of the competition between geologic processes such as the collision of plates and vulcanism, resisted by the Earth's gravity field. These principles can be applied to the solid surface of Earth (orogeny): few mountains are higher than 10 km (6 mi), and few deep-sea trenches are deeper than that, because a mountain as tall as, for example, 15 km (9 mi) would develop so much pressure at its base, due to gravity, that the rock there would become plastic, and the mountain would slump back to a height of roughly 10 km (6 mi) in a geologically insignificant time. Some or all of these geologic principles can be applied to other planets besides Earth. For instance on Mars, whose surface gravity is much less, the largest volcano, Olympus Mons, is 27 km (17 mi) high at its peak, a height that could not be maintained on Earth. The Earth geoid is essentially the figure of the Earth abstracted from its topographic features. Therefore, the Mars geoid (areoid) is essentially the figure of Mars abstracted from its topographic features. Surveying and mapping are two important fields of application of geodesy. An atmosphere is an important transitional zone between the solid planetary surface and the higher rarefied ionizing and radiation belts. Not all planets have atmospheres: their existence depends on the mass of the planet and the planet's distance from the Sun – if too distant, frozen atmospheres occur. Besides the four giant planets, three of the four terrestrial planets (Earth, Venus, and Mars) have significant atmospheres. Two moons have significant atmospheres: Saturn's moon Titan and Neptune's moon Triton. A tenuous atmosphere exists around Mercury. The effects of the rotation rate of a planet about its axis can be seen in atmospheric streams and currents. Seen from space, these features show as bands and eddies in the cloud system, and are particularly visible on Jupiter and Saturn. Exoplanetology studies exoplanets, the planets existing outside our Solar System. Until recently, the means of studying exoplanets have been extremely limited, but with the current rate of innovation in research technology, exoplanetology has become a rapidly developing subfield of astronomy. Planetary science frequently makes use of the method of comparison to give a greater understanding of the object of study. This can involve comparing the dense atmospheres of Earth and Saturn's moon Titan, the evolution of outer Solar System objects at different distances from the Sun, or the geomorphology of the surfaces of the terrestrial planets, to give only a few examples. The main comparison that can be made is to features on the Earth, as it is much more accessible and allows a much greater range of measurements to be made. Earth analog studies are particularly common in planetary geology, geomorphology, and also in atmospheric science. The use of terrestrial analogs was first described by Gilbert (1886). [ 8 ] Many institutions and universities worldwide host major groups working in planetary science.
Smaller workshops and conferences on particular fields occur worldwide throughout the year.
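The slump argument in the geodesy discussion above can be turned into a rough scaling: for comparable rock strength, the maximum supportable relief varies roughly inversely with surface gravity, since the basal pressure scales as density × gravity × height. The sketch below applies that proportionality only as an illustration; the gravity values are standard approximations and the 10 km terrestrial limit is the figure quoted in the text.

# Rough illustration: maximum mountain height scales roughly as 1/g for similar
# rock, because the pressure at the base (~ rho * g * h) must stay below the
# stress at which rock deforms plastically.  Numbers are approximate.
g_earth = 9.81         # m/s^2
g_mars = 3.71          # m/s^2
h_max_earth_km = 10.0  # rough terrestrial limit quoted in the text

h_max_mars_km = h_max_earth_km * g_earth / g_mars
print(f"Estimated limit on Mars: ~{h_max_mars_km:.0f} km")  # ~26 km; Olympus Mons is 27 km

The result is consistent with the observation above that Olympus Mons, at 27 km, reaches a height that could not be maintained under Earth's stronger gravity.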
https://en.wikipedia.org/wiki/Planetary_science
Planetary-surface construction is the construction of artificial habitats and other structures on planetary surfaces. Planetary surface construction can be divided into three phases or classes, coinciding with a phased schedule for habitation: [ 1 ] [ 2 ]
• Class I: Pre-integrated hard-shell modules ready to use immediately upon delivery.
• Class II: Prefabricated kit-of-parts that is surface-assembled after delivery.
• Class III: In-situ resource utilization (ISRU) derived structure with integrated Earth components. [ 3 ]
Class I structures are prepared and tested on Earth, and are designed to be fully self-contained habitats that can be delivered to the surface of other planets. In an initial mission to put human explorers on Mars, a Class I habitat would provide the bare minimum habitable facilities when continued support from Earth is not possible. The Class II structures call for a pre-manufactured kit-of-parts system that has flexible capacity for demountability and reuse. Class II structures can be used to expand the facilities established by the initial Class I habitat, and can allow for the assembly of additional structures either before the crew arrives or after their occupancy of the pre-integrated habitat. The purpose of Class III structures is to allow for the construction of additional facilities that would support a larger population, and to develop the capacity for the local production of building materials and structures without the need for resupply from Earth. To facilitate the development of the technology required to implement the three phases, Cohen and Kennedy (1997) stress the need to explore robust robotic system concepts that can be used to assist in the construction process, or perform the tasks autonomously. Among other things, they suggest a roadmap that stresses the need for adapting structural components for robotic assembly, and determining appropriate levels of modularity, assembly, and component packaging. The roadmap also sets the development of experimental construction systems in parallel with components as an important milestone.
https://en.wikipedia.org/wiki/Planetary_surface_construction
Planetary symbols are used in astrology and traditionally in astronomy to represent a classical planet (which includes the Sun and the Moon) or one of the modern planets. The classical symbols were also used in alchemy for the seven metals known to the ancients, which were associated with the planets, and in calendars for the seven days of the week associated with the seven planets. The original symbols date to Greco-Roman astronomy; their modern forms developed in the 16th century, and additional symbols were created later for newly discovered planets. Each of the seven classical planets has a symbol, an associated day of the week, and a most commonly associated planetary metal. The International Astronomical Union (IAU) discourages the use of these symbols in modern journal articles, and its style manual proposes one- and two-letter abbreviations for the names of the planets for cases where planetary symbols might be used, such as in the headings of tables. [ 1 ] The modern planets likewise each have a traditional symbol and an IAU abbreviation. The symbols of Venus and Mars are also used to represent female and male in biology, following a convention introduced by Carl Linnaeus in the 1750s. The origins of the planetary symbols can be found in the attributes given to classical deities. The Roman planisphere of Bianchini (2nd century, currently in the Louvre, inv. Ma 540) [ 2 ] shows the seven planets represented by portraits of the seven corresponding gods, each a bust with a halo and an iconic object or dress, as follows: Mercury has a caduceus and a winged cap; Venus has a necklace and a shining mirror; Mars has a war-helmet and a spear; Jupiter has a laurel crown and a staff; Saturn has a conical headdress and a scythe; the Sun has rays emanating from his head; and the Moon has a crescent atop her head. The written symbols for Mercury, Venus, Jupiter, and Saturn have been traced to forms found in late Greek papyri. [ 3 ] [ b ] Early forms are also found in medieval Byzantine codices which preserve horoscopes. [ 4 ] A diagram in the astronomical compendium by Johannes Kamateros (12th century) closely resembles the 11th-century forms, with the Sun represented by a circle with a single ray, Jupiter by the letter zeta (the initial of Zeus, Jupiter's counterpart in Greek mythology), Mars by a round shield in front of a diagonal spear, and the remaining classical planets by symbols resembling the modern ones, though without the crosses seen in modern versions of Mercury, Venus, Jupiter and Saturn. These crosses first appear in the late 15th or early 16th century. According to Maunder, the addition of crosses appears to be "an attempt to give a savour of Christianity to the symbols of the old pagan gods." [ 5 ] The modern forms of the classical planetary symbols are found in a woodcut of the seven planets in a Latin translation of Abu Ma'shar al-Balkhi's De Magnis Coniunctionibus printed at Venice in 1506, in which they are represented as the corresponding gods riding chariots. [ 6 ] Earth is not one of the classical planets, as "planets" by definition were "wandering stars" as seen from Earth's surface. Earth's status as a planet is a consequence of the adoption of heliocentrism in the 16th century. Nonetheless, there is a pre-heliocentric symbol for the world, now used as a planetary symbol for the Earth: a circle crossed by two lines, horizontal and vertical, representing the world divided by four rivers into the four quarters of the world (often translated as the four "corners" of the world).
A variant, now obsolete, had only the horizontal line: . [ 7 ] A medieval European symbol for the world – the globus cruciger , (the globe surmounted by a Christian cross ) – is also used as a planetary symbol; it resembles an inverted symbol for Venus. The planetary symbols for Earth are encoded in Unicode at U+1F728 🜨 ALCHEMICAL SYMBOL FOR VERDIGRIS and U+2641 ♁ EARTH . The crescent shape has been used to represent the Moon since antiquity. In classical antiquity, it is worn by lunar deities ( Selene/Luna , Artemis/Diana , Men , etc.) either on the head or behind the shoulders, with its horns pointing upward. The representation of the moon as a simple crescent with the horns pointing to the side (as a heraldic crescent increscent or crescent decrescent ) is attested from late Classical times. The same symbol can be used in a different context not for the Moon itself but for a lunar phase , as part of a sequence of four symbols for "new moon" (U+1F311 🌑︎), "waxing" (U+263D ☽︎), "full moon" (U+1F315 🌕︎) and "waning" (U+263E ☾︎). The symbol ☿ for Mercury is a caduceus (a staff intertwined with two serpents), a symbol associated with Mercury / Hermes throughout antiquity. Some time after the 11th century, a cross was added to the bottom of the staff to make it seem more Christian. [ 3 ] The ☿ symbol has also been used to indicate intersex , transgender , or non-binary gender . [ 8 ] A related usage is for the 'worker' or 'neuter' sex among social insects that is neither male nor (due to its lack of reproductive capacity) fully female, such as worker bees . [ 9 ] It was also once the designated symbol for hermaphroditic or 'perfect' flowers , [ 10 ] but botanists now use ⚥ for these. [ 11 ] Its Unicode codepoint is U+263F ☿ MERCURY . The Venus symbol , ♀, consists of a circle with a small cross below it. It has been interpreted as a depiction of the hand-mirror of the goddess, which may also explain Venus's association with the planetary metal copper, as mirrors in antiquity were made of polished copper, [ 12 ] [ d ] though this is not certain. [ 3 ] In the Greek Oxyrhynchus Papyri 235 , the symbols for Venus and Mercury did not have the cross on the bottom stem, [ 3 ] and Venus appears without the cross (⚲) in Johannes Kamateros (12th century). [ citation needed ] In botany and biology , the symbol for Venus is used to represent the female sex , alongside the symbol for Mars representing the male sex, [ 13 ] following a convention introduced by Linnaeus in the 1750s. [ 10 ] [ e ] Arising from the biological convention, the symbol also came to be used in sociological contexts to represent women or femininity . This gendered association of Venus and Mars has been used to pair them heteronormatively , describing women and men stereotypically as being so different that they can be understood as coming from different planets, an understanding popularized in 1992 by the book titled Men Are from Mars, Women Are from Venus . [ 14 ] [ 15 ] Unicode encodes the symbol as U+2640 ♀ FEMALE SIGN , in the Miscellaneous Symbols block. [ f ] The modern astronomical symbol for the Sun, the circumpunct ( U+2609 ☉ SUN ), was first used in the Renaissance . It possibly represents Apollo's golden shield with a boss ; it is unknown if it traces descent from the nearly identical Egyptian hieroglyph for the Sun. Bianchini's planisphere , produced in the 2nd century, shows a circlet with rays radiating from it. [ 5 ] [ 2 ] In late Classical times, the Sun is attested as a circle with a single ray. 
A diagram in Johannes Kamateros' 12th century Compendium of Astrology shows the same symbol. [ 18 ] This older symbol is encoded by Unicode as U+1F71A 🜚 ALCHEMICAL SYMBOL FOR GOLD in the Alchemical Symbols block. Both symbols have been used alchemically for gold, as have more elaborate symbols showing a disk with multiple rays or even a face. The Mars symbol , ♂, is a depiction of a circle with an arrow emerging from it, pointing at an angle to the upper right in Europe and to the upper left in India. [ 19 ] [ 20 ] It is also the old and obsolete symbol for iron in alchemy. In zoology and botany, it is used to represent the male sex (alongside the astrological symbol for Venus representing the female sex), [ 13 ] following a convention introduced by Linnaeus in the 1750s. [ 10 ] The symbol dates from at latest the 11th century, at which time it was an arrow across or through a circle, thought to represent the shield and spear of the god Mars; in the medieval form, for example in the 12th-century Compendium of Astrology by Johannes Kamateros, the spear is drawn across the shield. [ 18 ] The Greek Oxyrhynchus Papyri show a different symbol, [ 3 ] perhaps simply a spear. [ 2 ] Its Unicode codepoint is U+2642 ♂ MALE SIGN ( &male; ). The symbol for Jupiter , ♃, was originally a Greek zeta, Ζ , with a stroke indicating that it is an abbreviation (for Zeus , the Greek equivalent of Roman Jupiter). Its Unicode codepoint is U+2643 ♃ JUPITER . Salmasius and earlier attestations show that the symbol for Saturn, ♄, derives from the initial letters ( Kappa , rho ) of its ancient Greek name Κρόνος ( Kronos ), with a stroke to indicate an abbreviation . [ 10 ] By the time of Kamateros (12th century), the symbol had been reduced to a shape similar to a lower-case letter eta η, with the abbreviation stroke surviving (if at all) in the curl on the bottom-right end. Its Unicode codepoint is U+2644 ♄ SATURN . The symbols for Uranus were created shortly after its discovery in 1781. One symbol, ⛢, invented by J. G. Köhler and refined by Bode , was intended to represent the newly discovered metal platinum ; since platinum, commonly called white gold, was found by chemists mixed with iron, the symbol for platinum combines the alchemical symbols for iron , ♂, and gold , ☉. [ 21 ] [ 22 ] Gold and iron are the planetary metals for the Sun and Mars, and so share their symbols. Several orientations were suggested, but an upright arrow is now universal. Another symbol, , was suggested by Lalande in 1784. In a letter to Herschel , Lalande described it as "a globe surmounted by the first letter of your name". [ 23 ] The platinum symbol tends to be used by astronomers, and the monogram by astrologers. [ 24 ] For use in computer systems, the symbols are encoded U+26E2 ⛢ ASTRONOMICAL SYMBOL FOR URANUS and U+2645 ♅ URANUS . Several symbols were proposed for Neptune to accompany the suggested names for the planet. Claiming the right to name his discovery, Urbain Le Verrier originally proposed to name the planet for the Roman god Neptune [ 25 ] and the symbol of a trident , [ 26 ] while falsely stating that this had been officially approved by the French Bureau des Longitudes . [ 25 ] In October, he sought to name the planet Leverrier , after himself, and he had loyal support in this from the observatory director, François Arago , [ 27 ] who in turn proposed a new symbol for the planet, . 
[ 28 ] However, this suggestion met with resistance outside France, [ 27 ] and French almanacs quickly reintroduced the name Herschel for Uranus , after that planet's discoverer Sir William Herschel , and Leverrier for the new planet, [ 29 ] though it was used by anglophone institutions. [ 30 ] Professor James Pillans of the University of Edinburgh defended the name Janus for the new planet, and proposed a key for its symbol. [ 26 ] Meanwhile, Struve presented the name Neptune on December 29, 1846, to the Saint Petersburg Academy of Sciences . [ 31 ] In August 1847, the Bureau des Longitudes announced its decision to follow prevailing astronomical practice and adopt the choice of Neptune , with Arago refraining from participating in this decision. [ 32 ] The planetary symbol was Neptune's trident , with the handle stylized either as a crossed , following Mercury, Venus, Jupiter, Saturn, and the asteroids, or as an orb , following the symbols for Uranus, Earth, and Mars. [ 7 ] The crossed variant is the more common today. For use in computer systems, the symbols are encoded as U+2646 ♆ NEPTUNE and U+2BC9 ⯉ NEPTUNE FORM TWO . Pluto was almost universally considered a planet from its discovery in 1930 until its re-classification as a dwarf planet (planetoid) by the IAU in 2006. Planetary geologists [ 33 ] and astrologers continue to treat it as a planet. The original planetary symbol for Pluto was , a monogram of the letters P and L. Astrologers generally use a bident with an orb. NASA has used the bident symbol since Pluto's reclassification. These symbols are encoded as U+2647 ♇ PLUTO and U+2BD3 ⯓ PLUTO FORM TWO . In the 19th century, planetary symbols for the major asteroids were also in use, including 1 Ceres (a reaper's sickle , encoded U+26B3 ⚳ CERES ), 2 Pallas (a lance, U+26B4 ⚴ PALLAS ) and 3 Juno (a sceptre, encoded U+26B5 ⚵ JUNO ). Encke (1850) used symbols for 5 Astraea , 6 Hebe , 7 Iris , 8 Flora and 9 Metis in the Berliner Astronomisches Jahrbuch . [ 34 ] In the late 20th century, astrologers abbreviated the symbol for 4 Vesta (the sacred fire of Vesta , encoded U+26B6 ⚶ VESTA ), [ 35 ] and introduced new symbols for 5 Astraea ( , a stylised % sign, shift-5 on QWERTY keyboards for asteroid 5), 10 Hygiea encoded U+2BDA ⯚ HYGIEA ) [ 36 ] and for 2060 Chiron , discovered in 1977 (a key, U+26B7 ⚷ CHIRON ). [ 35 ] Chiron's symbol was adapted as additional centaurs were discovered; symbols for 5145 Pholus and 7066 Nessus have been encoded in Unicode. [ 36 ] The abbreviated Vesta symbol is now universal, and the astrological symbol for Pluto has been used astronomically for Pluto as a dwarf planet. [ 37 ] In the early 21st century, symbols for the trans-Neptunian dwarf planets have been given Unicode codepoints , particularly Eris (the hand of Eris , ⯰, but also ⯱), Sedna , Haumea , Makemake , Gonggong , Quaoar and Orcus which are in Unicode. All (except Eris, for which the hand of Eris is a traditional Discordian symbol) were devised by Denis Moskowitz, a software engineer in Massachusetts. [ 37 ] [ 38 ] Other symbols have also been invented by Moskowitz, for some smaller TNOs as well as many planetary moons. (Charon in particular coincidentally matches a symbol already existing in Unicode as an astrological Pluto.) However, these have not been broadly adopted. [ 37 ] [ 39 ] From 1845 to 1855, many symbols were created for newly discovered asteroids. But by 1851, the spate of discoveries had led to a general abandonment of these symbols in favour of numbering all asteroids instead. 
[ 41 ]
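The Unicode codepoints quoted above can be collected into a small lookup table. The following sketch is illustrative only: the dictionary name and layout are assumptions rather than part of any standard library, and it simply prints each planet's symbol from the codepoints given in the text.

```python
# Minimal sketch: the planetary symbols quoted above, looked up by their
# Unicode codepoints rather than pasted as literal glyphs. The dictionary
# name and layout are illustrative, not from any standard library.
PLANET_SYMBOLS = {
    "Sun":     0x2609,   # ☉ SUN
    "Mercury": 0x263F,   # ☿ MERCURY
    "Venus":   0x2640,   # ♀ FEMALE SIGN (also used for Venus)
    "Earth":   0x2641,   # ♁ EARTH
    "Mars":    0x2642,   # ♂ MALE SIGN (also used for Mars)
    "Jupiter": 0x2643,   # ♃ JUPITER
    "Saturn":  0x2644,   # ♄ SATURN
    "Uranus":  0x26E2,   # ⛢ ASTRONOMICAL SYMBOL FOR URANUS (astrologers often use ♅, U+2645)
    "Neptune": 0x2646,   # ♆ NEPTUNE (a second form exists at U+2BC9)
    "Pluto":   0x2647,   # ♇ PLUTO (dwarf planet; a second form exists at U+2BD3)
}

if __name__ == "__main__":
    for name, codepoint in PLANET_SYMBOLS.items():
        print(f"{name:8s} U+{codepoint:04X} {chr(codepoint)}")
```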
https://en.wikipedia.org/wiki/Planetary_symbols
A planetary system is a set of gravitationally bound non-stellar bodies in or out of orbit around a star or star system . Generally speaking, systems with one or more planets constitute a planetary system, although such systems may also consist of bodies such as dwarf planets , asteroids , natural satellites , meteoroids , comets , planetesimals [ 1 ] [ 2 ] and circumstellar disks . For example, the Sun together with the planetary system revolving around it, including Earth , form the Solar System . [ 3 ] [ 4 ] The term exoplanetary system is sometimes used in reference to other planetary systems. Planetary systems are, by convention, named for their host, or parent, star, as is the case with the Solar System, named for its host star, Sol. As of 17 April 2025, there are 5,943 confirmed exoplanets in 4,461 planetary systems, with 976 systems having more than one planet . [ 5 ] Debris disks are known to be common, while other objects are more difficult to observe. Of particular interest to astrobiology is the habitable zone of planetary systems, where planets could have surface liquid water and thus the capacity to support Earth-like life. Heliocentrism is the doctrine that the Sun is at the centre of the universe, as opposed to geocentrism (placing Earth at the centre of the universe). Some interpret Aryabhatta 's writings in Āryabhaṭīya as implicitly heliocentric, although this has also been rebutted. [ 6 ] The idea was first proposed in Western philosophy and Greek astronomy as early as the 3rd century BC by Aristarchus of Samos , [ 7 ] but received no support from most other ancient astronomers. De revolutionibus orbium coelestium by Nicolaus Copernicus , published in 1543, presented the first mathematically predictive heliocentric model of a planetary system. 17th-century successors Galileo Galilei , Johannes Kepler , and Sir Isaac Newton developed an understanding of physics which led to the gradual acceptance of the idea that the Earth moves around the Sun and that the planets are governed by the same physical laws that govern Earth. In the 16th century, the Italian philosopher Giordano Bruno , an early supporter of the Copernican theory that Earth and other planets orbit the Sun, put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets. He was burned at the stake for his ideas by the Roman Inquisition . [ 8 ] In the 18th century, the same possibility was mentioned by Sir Isaac Newton in the " General Scholium " that concludes his Principia . Making a comparison to the Sun's planets, he wrote "And if the fixed stars are the centres of similar systems, they will all be constructed according to a similar design and subject to the dominion of One ." [ 9 ] These ideas gained popularity through the 19th and 20th centuries despite a lack of supporting evidence. Long before their confirmation by astronomers, conjecture on the nature of planetary systems had been a focus of the search for extraterrestrial intelligence and has been a prevalent theme in fiction , particularly science fiction. The first confirmed detection of an exoplanet was in 1992, with the discovery of several terrestrial-mass planets orbiting the pulsar PSR B1257+12 . The first confirmed detection of exoplanets around a main-sequence star was made in 1995, when a giant planet, 51 Pegasi b , was found in a four-day orbit around the nearby G-type star 51 Pegasi .
The frequency of detections has increased since then, particularly through advancements in methods of detecting extrasolar planets and dedicated planet-finding programs such as the Kepler mission . Planetary systems come from protoplanetary disks that form around stars as part of the process of star formation . During formation of a system, much material is gravitationally-scattered into distant orbits, and some planets are ejected completely from the system, becoming rogue planets . Planets orbiting pulsars have been discovered. Pulsars are the remnants of the supernova explosions of high-mass stars, but a planetary system that existed before the supernova would likely be mostly destroyed. Planets would either evaporate, be pushed off of their orbits by the masses of gas from the exploding star, or the sudden loss of most of the mass of the central star would see them escape the gravitational hold of the star, or in some cases the supernova would kick the pulsar itself out of the system at high velocity so any planets that had survived the explosion would be left behind as free-floating objects. Planets found around pulsars may have formed as a result of pre-existing stellar companions that were almost entirely evaporated by the supernova blast, leaving behind planet-sized bodies. Alternatively, planets may form in an accretion disk of fallback matter surrounding a pulsar. [ 10 ] Fallback disks of matter that failed to escape orbit during a supernova may also form planets around black holes . [ 11 ] As stars evolve and turn into red giants , asymptotic giant branch stars, and planetary nebulae they engulf the inner planets, evaporating or partially evaporating them depending on how massive they are. [ 13 ] [ 14 ] As the star loses mass, planets that are not engulfed move further out from the star. If an evolved star is in a binary or multiple system, then the mass it loses can transfer to another star, forming new protoplanetary disks and second- and third-generation planets which may differ in composition from the original planets, which may also be affected by the mass transfer. The Solar System consists of an inner region of small rocky planets and outer region of large giant planets . However, other planetary systems can have quite different architectures. Studies suggest that architectures of planetary systems are dependent on the conditions of their initial formation. [ 15 ] Many systems with a hot Jupiter gas giant very close to the star have been found. Theories, such as planetary migration or scattering, have been proposed for the formation of large planets close to their parent stars. [ 16 ] At present, [ when? ] few systems have been found to be analogous to the Solar System with small terrestrial planets in the inner region, as well as a gas giant with a relatively circular orbit, which suggests that this configuration is uncommon. [ 17 ] More commonly, systems consisting of multiple Super-Earths have been detected. [ 18 ] [ 19 ] These super-Earths are usually very close to their star, with orbits smaller than that of Mercury . [ 20 ] Planetary system architectures may be partitioned into four classes based on how the mass of the planets is distributed around the host star : [ 21 ] [ 22 ] Multiplanetary systems tend to be in a "peas in a pod" configuration meaning they tend to have the following factors: [ 23 ] Most known exoplanets orbit stars roughly similar to the Sun : that is, main-sequence stars of spectral categories F, G, or K. 
One reason is that planet-search programs have tended to concentrate on such stars. In addition, statistical analyses indicate that lower-mass stars ( red dwarfs , of spectral category M) are less likely to have planets massive enough to be detected by the radial-velocity method . [ 24 ] [ 25 ] Nevertheless, several tens of planets around red dwarfs have been discovered by the Kepler space telescope by the transit method , which can detect smaller planets. After planets, circumstellar disks are one of the most commonly-observed properties of planetary systems, particularly of young stars. The Solar System possesses at least four major circumstellar disks (the asteroid belt , Kuiper belt , scattered disc , and Oort cloud ) and clearly-observable disks have been detected around nearby solar analogs including Epsilon Eridani and Tau Ceti . Based on observations of numerous similar disks, they are assumed to be quite common attributes of stars on the main sequence . Interplanetary dust clouds have been studied in the Solar System and analogs are believed to be present in other planetary systems. Exozodiacal dust, an exoplanetary analog of zodiacal dust , the 1–100 micrometre-sized grains of amorphous carbon and silicate dust that fill the plane of the Solar System [ 26 ] has been detected around the 51 Ophiuchi , Fomalhaut , [ 27 ] [ 28 ] Tau Ceti , [ 28 ] [ 29 ] and Vega systems. As of November 2014 [update] there are 5,253 known Solar System comets [ 30 ] and they are thought to be common components of planetary systems. The first exocomets were detected in 1987 [ 31 ] [ 32 ] around Beta Pictoris , a very young A-type main-sequence star . There are now a total of 11 stars around which the presence of exocomets have been observed or suspected. [ 33 ] [ 34 ] [ 35 ] [ 36 ] All discovered exocometary systems ( Beta Pictoris , HR 10 , [ 33 ] 51 Ophiuchi , HR 2174 , [ 34 ] 49 Ceti , 5 Vulpeculae , 2 Andromedae , HD 21620 , HD 42111 , HD 110411 , [ 35 ] [ 37 ] and more recently HD 172555 [ 36 ] ) are around very young A-type stars . Computer modelling of an impact in 2013 detected around the star NGC 2547-ID8 by the Spitzer Space Telescope , and confirmed by ground observations, suggests the involvement of large asteroids or protoplanets similar to the events believed to have led to the formation of terrestrial planets like the Earth. [ 38 ] Based on observations of the Solar System's large collection of natural satellites, they are believed common components of planetary systems; however, the existence of exomoons has not yet been confirmed. The star 1SWASP J140747.93-394542.6 , in the constellation Centaurus , is a strong candidate for a natural satellite. [ 39 ] Indications suggest that the confirmed extrasolar planet WASP-12b also has at least one satellite. [ 40 ] Unlike the Solar System, which has orbits that are nearly circular, many of the known planetary systems display much higher orbital eccentricity . [ 41 ] An example of such a system is 16 Cygni . The mutual inclination between two planets is the angle between their orbital planes . Many compact systems with multiple close-in planets interior to the equivalent orbit of Venus are expected to have very low mutual inclinations, so the system (at least the close-in part) would be even flatter than the Solar System. Captured planets could be captured into any arbitrary angle to the rest of the system. 
As of 2016 [update] there are only a few systems where mutual inclinations have actually been measured [ 42 ] One example is the Upsilon Andromedae system: the planets c and d have a mutual inclination of about 30 degrees. [ 43 ] [ 44 ] Planetary systems can be categorized according to their orbital dynamics as resonant, non-resonant-interacting, hierarchical, or some combination of these. In resonant systems the orbital periods of the planets are in integer ratios. The Kepler-223 system contains four planets in an 8:6:4:3 orbital resonance . [ 45 ] Giant planets are found in mean-motion resonances more often than smaller planets. [ 46 ] In interacting systems the planets' orbits are close enough together that they perturb the orbital parameters. The Solar System could be described as weakly interacting. In strongly interacting systems Kepler's laws do not hold. [ 47 ] In hierarchical systems the planets are arranged so that the system can be gravitationally considered as a nested system of two-bodies, e.g. in a star with a close-in hot Jupiter with another gas giant much further out, the star and hot Jupiter form a pair that appears as a single object to another planet that is far enough out. Other, as yet unobserved, orbital possibilities include: double planets ; various co-orbital planets such as quasi-satellites, trojans and exchange orbits; and interlocking orbits maintained by precessing orbital planes . [ 48 ] Free-floating planets in open clusters have similar velocities to the stars and so can be recaptured. They are typically captured into wide orbits between 100 and 10 5 AU. The capture efficiency decreases with increasing cluster size, and for a given cluster size it increases with the host/primary [ clarification needed ] mass. It is almost independent of the planetary mass. Single and multiple planets could be captured into arbitrary unaligned orbits, non-coplanar with each other or with the stellar host spin, or pre-existing planetary system. Some planet–host metallicity correlation may still exist due to the common origin of the stars from the same cluster. Planets would be unlikely to be captured around neutron stars because these are likely to be ejected from the cluster by a pulsar kick when they form. Planets could even be captured around other planets to form free-floating planet binaries. After the cluster has dispersed some of the captured planets with orbits larger than 10 6 AU would be slowly disrupted by the galactic tide and likely become free-floating again through encounters with other field stars or giant molecular clouds . [ 49 ] The habitable zone around a star is the region where the temperature range allows for liquid water to exist on a planet; that is, not too close to the star for the water to evaporate and not too far away from the star for the water to freeze. The heat produced by stars varies depending on the size and age of the star; this means the habitable zone will also vary accordingly. Also, the atmospheric conditions on the planet influence the planet's ability to retain heat so that the location of the habitable zone is also specific to each type of planet. Habitable zones have usually been defined in terms of surface temperature; however, over half of Earth's biomass is from subsurface microbes, [ 50 ] and temperature increases as depth underground increases, so the subsurface can be conducive for life when the surface is frozen; if this is considered, the habitable zone extends much further from the star. 
[ 51 ] Studies in 2013 indicate that an estimated 22±8% of Sun-like [ a ] stars have an Earth-sized [ b ] planet in the habitable [ c ] zone. [ 52 ] [ 53 ] The Venus zone is the region around a star where a terrestrial planet would have runaway greenhouse conditions like Venus , but not so near the star that the atmosphere completely escapes. As with the habitable zone, the location of the Venus zone depends on several factors, including the type of star and properties of the planets such as mass, rotation rate, and atmospheric clouds. Studies of the Kepler spacecraft data indicate that 32% of red dwarfs have potentially Venus-like planets based on planet size and distance from star, increasing to 45% for K-type and G-type stars. [ d ] Several candidates have been identified, but spectroscopic follow-up studies of their atmospheres are required to determine whether they are like Venus. [ 54 ] [ 55 ] The Milky Way is 100,000 light-years across, but 90% of planets with known distances are within about 2000 light years of Earth, as of July 2014. One method that can detect planets much further away is microlensing . The upcoming Nancy Grace Roman Space Telescope could use microlensing to measure the relative frequency of planets in the galactic bulge versus the galactic disk . [ 56 ] So far, the indications are that planets are more common in the disk than the bulge. [ 57 ] Estimates of the distance of microlensing events is difficult: the first planet considered with high probability of being in the bulge is MOA-2011-BLG-293Lb at a distance of 7.7 kiloparsecs (about 25,000 light years). [ 58 ] Population I , or metal-rich stars , are those young stars whose metallicity is highest. The high metallicity of population I stars makes them more likely to possess planetary systems than older populations, because planets form by the accretion of metals. [ citation needed ] The Sun is an example of a metal-rich star. These are common in the spiral arms of the Milky Way . [ citation needed ] Generally, the youngest stars, the extreme population I, are found farther in and intermediate population I stars are farther out, etc. The Sun is considered an intermediate population I star. Population I stars have regular elliptical orbits around the Galactic Center , with a low relative velocity . [ 59 ] Population II , or metal-poor stars , are those with relatively low metallicity which can have hundreds (e.g. BD +17° 3248 ) or thousands (e.g. Sneden's Star ) times less metallicity than the Sun. These objects formed during an earlier time of the universe. [ citation needed ] Intermediate population II stars are common in the bulge near the center of the Milky Way , [ citation needed ] whereas Population II stars found in the galactic halo are older and thus more metal-poor. [ citation needed ] Globular clusters also contain high numbers of population II stars. [ 60 ] In 2014, the first planets around a halo star were announced around Kapteyn's star , the nearest halo star to Earth, around 13 light years away. However, later research suggests that Kapteyn b is just an artefact of stellar activity and that Kapteyn c needs more study to be confirmed. [ 61 ] The metallicity of Kapteyn's star is estimated to be about 8 [ e ] times less than the Sun. [ 62 ] Different types of galaxies have different histories of star formation and hence planet formation . Planet formation is affected by the ages, metallicities, and orbits of stellar populations within a galaxy. 
Distribution of stellar populations within a galaxy varies between the different types of galaxies. [ 63 ] Stars in elliptical galaxies are much older than stars in spiral galaxies . Most elliptical galaxies contain mainly low-mass stars , with minimal star-formation activity. [ 64 ] The distribution of the different types of galaxies in the universe depends on their location within galaxy clusters , with elliptical galaxies found mostly close to their centers. [ 65 ]
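The resonant architectures described earlier, such as Kepler-223's 8:6:4:3 chain of orbital periods, can be illustrated with a short sketch that tests whether a list of periods sits close to a given small-integer chain. The function name, tolerance, and example periods below are assumptions for illustration; the periods are placeholders, not measured values.

```python
from math import isclose

def matches_resonance(periods, chain, rel_tol=0.02):
    """Check whether orbital periods (innermost first) are close to a
    small-integer chain such as (3, 4, 6, 8), i.e. the 8:6:4:3 resonance
    conventionally quoted outermost first for Kepler-223.
    """
    if len(periods) != len(chain):
        raise ValueError("need one integer per planet")
    # Normalise both sequences by their first entry and compare the ratios.
    base_p, base_c = periods[0], chain[0]
    return all(
        isclose(p / base_p, c / base_c, rel_tol=rel_tol)
        for p, c in zip(periods, chain)
    )

# Illustrative periods only (days), chosen to sit on a 3:4:6:8 chain;
# they are placeholders, not the measured Kepler-223 values.
example_periods = [7.4, 9.87, 14.8, 19.73]
print(matches_resonance(example_periods, (3, 4, 6, 8)))   # True
print(matches_resonance(example_periods, (1, 2, 3, 4)))   # False
```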
https://en.wikipedia.org/wiki/Planetary_system
In astronomy , planetary transits and occultations occur when a planet passes in front of another object , as seen by an observer. The occulted object may be a distant star , but in rare cases it may be another planet, in which case the event is called a mutual planetary occultation or mutual planetary transit , depending on the relative apparent diameters of the objects. [ 1 ] The word "transit" refers to cases where the nearer object appears smaller than the more distant object. Cases where the nearer object appears larger and completely hides the more distant object are known as occultations . Mutual occultations or transits of planets are extremely rare. The most recent event occurred on 3 January 1818, and the next will occur on 22 November 2065. Both involve the same two planets: Venus and Jupiter . An occultation of Mars by Venus on 13 October 1590 was observed by the German astronomer Michael Maestlin at Heidelberg . [ 2 ] [ 3 ] The 1737 event (see list below) was observed by John Bevis at Greenwich Observatory – it is the only detailed account of a mutual planetary occultation. A transit of Mars across Jupiter on 12 September 1170 was observed by the monk Gervase at Canterbury , [ 4 ] and by Chinese astronomers. [ 5 ] The next time a mutual planetary transit or occultation will happen (as seen from Earth) will be on 22 November 2065 at about 12:43 UTC , when Venus near superior conjunction (with an angular diameter of 10.6") will transit in front of Jupiter (with an angular diameter of 30.9"); however, this will take place only 8° west of the Sun, and will therefore not be visible to the unaided/unprotected eye. Before transiting Jupiter, Venus will occult Jupiter's moon Ganymede at around 11:24 UTC as seen from some southernmost parts of Earth. Parallax will cause actual observed times to vary by a few minutes, depending on the precise location of the observer. [ citation needed ] There are only 18 mutual planetary transits and occultations as seen from Earth between 1700 and 2200. There is a very long break of events between 1818 and 2065. [ 3 ] Twice during the orbital cycles of Jupiter and Saturn , the equatorial (and satellite) planes of those planets are aligned with Earth's orbital plane, resulting in a series of mutual occultations and eclipses between the moons of these giant planets. The terms eclipse , occultation , and transit are also used to describe these events. [ 1 ] A satellite of Jupiter (for example) may be eclipsed (i.e. made dimmer because it moves into Jupiter's shadow), occulted (i.e. hidden from view because Jupiter lies on our line of sight), or may transit (i.e. pass in front of) Jupiter's disk (see also Solar eclipses on Jupiter ). This table is another compilation of occultations and transits of bright stars and planets by solar planets. [ citation needed ] These events are not visible everywhere the occulting body and the occulted body are above the skyline. Some events are barely visible, because they take place in close proximity to the Sun.
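The transit/occultation distinction above reduces to a comparison of apparent diameters. The following minimal sketch (the function name and units are illustrative assumptions) applies it to the angular diameters quoted for the 22 November 2065 Venus–Jupiter event.

```python
def classify_event(nearer_diameter_arcsec, farther_diameter_arcsec):
    """Classify a mutual planetary event by apparent (angular) diameters.

    'transit'    : the nearer object appears smaller than the farther one.
    'occultation': the nearer object appears larger and can completely
                   hide the farther one.
    """
    if nearer_diameter_arcsec < farther_diameter_arcsec:
        return "transit"
    return "occultation"

# The 22 November 2065 event quoted above: Venus (10.6") passing in front
# of Jupiter (30.9") is therefore a transit of Venus across Jupiter's disk.
print(classify_event(10.6, 30.9))   # -> "transit"
```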
https://en.wikipedia.org/wiki/Planetary_transits_and_occultations
In analytic geometry , the intersection of two planes in three-dimensional space is a line . The line of intersection between two planes $\Pi_1 : \mathbf{n}_1 \cdot \mathbf{r} = h_1$ and $\Pi_2 : \mathbf{n}_2 \cdot \mathbf{r} = h_2$, where the normals $\mathbf{n}_i$ are normalized, is given by $\mathbf{r} = (c_1 \mathbf{n}_1 + c_2 \mathbf{n}_2) + \lambda\,(\mathbf{n}_1 \times \mathbf{n}_2),$ where $c_1 = \frac{h_1 - h_2(\mathbf{n}_1 \cdot \mathbf{n}_2)}{1 - (\mathbf{n}_1 \cdot \mathbf{n}_2)^2}, \qquad c_2 = \frac{h_2 - h_1(\mathbf{n}_1 \cdot \mathbf{n}_2)}{1 - (\mathbf{n}_1 \cdot \mathbf{n}_2)^2}.$ This is found by noticing that the line must be perpendicular to both plane normals, and so parallel to their cross product $\mathbf{n}_1 \times \mathbf{n}_2$ (this cross product is zero if and only if the planes are parallel, and are therefore non-intersecting or entirely coincident). The remainder of the expression is arrived at by finding an arbitrary point on the line. To do so, consider that any point in space may be written as $\mathbf{r} = c_1 \mathbf{n}_1 + c_2 \mathbf{n}_2 + \lambda\,(\mathbf{n}_1 \times \mathbf{n}_2)$, since $\{\mathbf{n}_1, \mathbf{n}_2, \mathbf{n}_1 \times \mathbf{n}_2\}$ is a basis . We wish to find a point which is on both planes (i.e. on their intersection), so insert this equation into each of the equations of the planes to get two simultaneous equations which can be solved for $c_1$ and $c_2$. If we further assume that $\mathbf{n}_1$ and $\mathbf{n}_2$ are orthonormal , then the closest point on the line of intersection to the origin is $\mathbf{r}_0 = h_1 \mathbf{n}_1 + h_2 \mathbf{n}_2$. If that is not the case, then a more complex procedure must be used. [ 1 ] Given two intersecting planes described by $\Pi_1 : a_1 x + b_1 y + c_1 z + d_1 = 0$ and $\Pi_2 : a_2 x + b_2 y + c_2 z + d_2 = 0$, the dihedral angle between them is defined to be the angle $\alpha$ between their normal directions: $\cos\alpha = \frac{\mathbf{n}_1 \cdot \mathbf{n}_2}{|\mathbf{n}_1|\,|\mathbf{n}_2|} = \frac{a_1 a_2 + b_1 b_2 + c_1 c_2}{\sqrt{a_1^2 + b_1^2 + c_1^2}\,\sqrt{a_2^2 + b_2^2 + c_2^2}}.$
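A minimal numerical sketch of the construction just described: the direction of the line comes from the cross product of the unit normals, and a point on the line from the closed-form coefficients $c_1$ and $c_2$ above. The function names are illustrative, and unit normals are assumed.

```python
import numpy as np

def plane_plane_intersection(n1, h1, n2, h2):
    """Return a point r0 and direction d of the line n1·r = h1, n2·r = h2.

    Assumes n1 and n2 are unit normals; raises if the planes are parallel.
    """
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    if np.allclose(direction, 0.0):
        raise ValueError("planes are parallel (or coincident)")
    k = np.dot(n1, n2)
    c1 = (h1 - h2 * k) / (1.0 - k * k)
    c2 = (h2 - h1 * k) / (1.0 - k * k)
    r0 = c1 * n1 + c2 * n2          # a point lying on both planes
    return r0, direction

def dihedral_angle(n1, n2):
    """Angle between two planes, i.e. between their (not necessarily unit) normals."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    cos_a = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

# Example: the planes z = 1 and y = 2 meet in the line {(t, 2, 1)}.
r0, d = plane_plane_intersection([0, 0, 1], 1.0, [0, 1, 0], 2.0)
print(r0, d)                                               # [0. 2. 1.] [-1.  0.  0.]
print(np.degrees(dihedral_angle([0, 0, 1], [0, 1, 0])))    # 90.0
```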
https://en.wikipedia.org/wiki/Plane–plane_intersection
A planimeter , also known as a platometer , is a measuring instrument used to determine the area of an arbitrary two-dimensional shape. There are several kinds of planimeters, but all operate in a similar way. The precise way in which they are constructed varies, with the main types of mechanical planimeter being polar, linear, and Prytz or "hatchet" planimeters. The Swiss mathematician Jakob Amsler-Laffon built the first modern planimeter in 1854, the concept having been pioneered by Johann Martin Hermann in 1818. [ 1 ] Many developments followed Amsler's famous planimeter, including electronic versions. The Amsler (polar) type consists of a two-bar linkage. At the end of one link is a pointer, used to trace around the boundary of the shape to be measured. The other end of the linkage pivots freely on a weight that keeps it from moving. Near the junction of the two links is a measuring wheel of calibrated diameter, with a scale to show fine rotation, and worm gearing for an auxiliary turns counter scale. As the area outline is traced, this wheel rolls on the surface of the drawing. The operator sets the wheel, turns the counter to zero, and then traces the pointer around the perimeter of the shape. When the tracing is complete, the scales at the measuring wheel show the shape's area. When the planimeter's measuring wheel moves perpendicular to its axis, it rolls, and this movement is recorded. When the measuring wheel moves parallel to its axis, the wheel skids without rolling, so this movement is ignored. That means the planimeter measures the distance that its measuring wheel travels, projected perpendicularly to the measuring wheel's axis of rotation. The area of the shape is proportional to the number of turns through which the measuring wheel rotates. The polar planimeter is restricted by design to measuring areas within limits determined by its size and geometry. However, the linear type has no restriction in one dimension, because it can roll. Its wheels must not slip, because the movement must be constrained to a straight line. Developments of the planimeter can establish the position of the first moment of area ( center of mass ), and even the second moment of area . The images show the principles of a linear and a polar planimeter. The pointer M at one end of the planimeter follows the contour C of the surface S to be measured. For the linear planimeter the movement of the "elbow" E is restricted to the y -axis. For the polar planimeter the "elbow" is connected to an arm with its other endpoint O at a fixed position. Connected to the arm ME is the measuring wheel with its axis of rotation parallel to ME. A movement of the arm ME can be decomposed into a movement perpendicular to ME, causing the wheel to rotate, and a movement parallel to ME, causing the wheel to skid, with no contribution to its reading. The working of the linear planimeter may be explained by measuring the area of a rectangle ABCD (see image). Moving with the pointer from A to B the arm EM moves through the yellow parallelogram, with area equal to PQ×EM. This area is also equal to the area of the parallelogram A"ABB". The measuring wheel measures the distance PQ (perpendicular to EM). Moving from C to D the arm EM moves through the green parallelogram, with area equal to the area of the rectangle D"DCC". The measuring wheel now moves in the opposite direction, subtracting this reading from the former. 
The movements along BC and DA are the same but opposite, so they cancel each other with no net effect on the reading of the wheel. The net result is the measuring of the difference of the yellow and green areas, which is the area of ABCD. The operation of a linear planimeter can be justified by applying Green's theorem , though the design of the major varieties predates the theorem's proof. Apply it to the components of the vector field $\mathbf{N}$, given by: $\mathbf{N}(x, y) = (b - y,\; x),$ where $b$ is the $y$-coordinate of the elbow E. This vector field is perpendicular to the measuring arm EM: $\overrightarrow{EM} \cdot \mathbf{N} = x\,(b - y) + (y - b)\,x = 0,$ and has a constant size, equal to the length $m$ of the measuring arm: $\|\mathbf{N}\| = \sqrt{(b - y)^2 + x^2} = m.$ Then: $\oint_C \mathbf{N} \cdot d\mathbf{r} = \iint_S \left( \frac{\partial N_y}{\partial x} - \frac{\partial N_x}{\partial y} \right) dx\, dy = \iint_S dx\, dy = A,$ because: $\frac{\partial N_y}{\partial x} - \frac{\partial N_x}{\partial y} = 1 - \left( \frac{\partial b}{\partial y} - 1 \right) = 1,$ since the arm constraint $x^2 + (y - b)^2 = m^2$ gives $\partial b / \partial y = 1$. The left hand side of the above equation, which is equal to the area $A$ enclosed by the contour, is proportional to the distance measured by the measuring wheel, with proportionality factor $m$, the length of the measuring arm. The justification for the above derivation lies in noting that the linear planimeter only records movement perpendicular to its measuring arm, or when $\mathbf{N} \cdot d\mathbf{r} \neq 0$: the wheel's roll accumulates exactly the component of the pointer's motion along $\mathbf{N}/m$, the unit vector perpendicular to the arm. The connection with Green's theorem can be understood in terms of integration in polar coordinates : in polar coordinates, area is computed by the integral $\int_{\theta} \tfrac{1}{2}\,(r(\theta))^2\, d\theta,$ where the form being integrated is quadratic in $r$, meaning that the rate at which area changes with respect to change in angle varies quadratically with the radius. For a parametric equation in polar coordinates, where both $r$ and $\theta$ vary as a function of time, this becomes $\int_{t} \tfrac{1}{2}\,(r(t))^2\, d(\theta(t)) = \int_{t} \tfrac{1}{2}\,(r(t))^2\, \dot{\theta}(t)\, dt.$ For a polar planimeter the total rotation of the wheel is proportional to $\int_{t} r(t)\, \dot{\theta}(t)\, dt,$ as the rotation is proportional to the distance traveled, which at any point in time is proportional to radius and to change in angle, as in the circumference of a circle ($\int r\, d\theta = 2\pi r$). This last integrand $r(t)\,\dot{\theta}(t)$ can be recognized as the derivative of the earlier integrand $\tfrac{1}{2}(r(t))^2\,\dot{\theta}(t)$ (with respect to $r$), and shows that a polar planimeter computes the area integral in terms of the derivative , which is reflected in Green's theorem, which equates a line integral of a function on a (1-dimensional) contour to the (2-dimensional) integral of the derivative.
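The argument above can be checked numerically: trace a closed contour with the pointer, keep the elbow on the y-axis so the arm length stays m, and accumulate only the component of the pointer's motion perpendicular to the arm (the wheel's roll); multiplying the total roll by m should recover the enclosed area. This is a sketch of the derivation as presented, with the discretization, the branch choice for the elbow, and the function names as assumptions.

```python
import numpy as np

def linear_planimeter_area(contour, m):
    """Simulate a linear planimeter tracing a closed contour (N x 2 array).

    The elbow E is kept on the y-axis at (0, b) with b = y - sqrt(m^2 - x^2),
    so the arm EM has constant length m. The wheel accumulates only the
    component of the pointer's motion perpendicular to the arm; the enclosed
    area is m times the total roll.
    """
    pts = np.asarray(contour, float)
    roll = 0.0
    for p, q in zip(pts, np.roll(pts, -1, axis=0)):
        x, y = (p + q) / 2.0                  # midpoint of this small step
        b = y - np.sqrt(m * m - x * x)        # elbow position on the y-axis
        n = np.array([b - y, x]) / m          # unit vector perpendicular to EM
        roll += np.dot(n, q - p)              # wheel only records this component
    return m * roll

# Trace the unit circle centred at (2, 0); its area should come out near pi.
t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
circle = np.column_stack([2.0 + np.cos(t), np.sin(t)])
print(linear_planimeter_area(circle, m=5.0))   # ≈ 3.1416
```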
https://en.wikipedia.org/wiki/Planimeter
Planimetrics is the study of plane measurements, including angles , distances , and areas . Planimetric measurements are traditionally made with a planimeter or dot planimeter . This analog technology is increasingly being replaced by image measurement software tools such as ImageJ , Adobe Acrobat , Google Earth Pro , Gimp , Photoshop and KLONK Image Measurement, which can perform this kind of work on digitized images. Planimetric elements in geography are those features that are independent of elevation, such as roads, building footprints, and rivers and lakes. They are represented on two-dimensional maps as they are seen from the air, or in aerial photography. [ 1 ] These features are often digitized from orthorectified aerial photography into data layers that can be used in analysis and cartographic outputs. [ 2 ] A planimetric map is one that does not include relief data. [ 3 ]
https://en.wikipedia.org/wiki/Planimetrics
In astronomy , a planisphere ( / ˈ p l eɪ . n ɪ ˌ s f ɪər , ˈ p l æ n . ɪ -/ ) is a star chart analog computing instrument in the form of two adjustable disks that rotate on a common pivot. It can be adjusted to display the visible stars for any time and date. It is an instrument to assist in learning how to recognize stars and constellations . The astrolabe , an instrument that has its origins in Hellenistic astronomy , is a predecessor of the modern planisphere. The term planisphere contrasts with armillary sphere , where the celestial sphere is represented by a three-dimensional framework of rings. A planisphere consists of a circular star chart attached at its center to an opaque circular overlay that has a clear window or hole so that only a portion of the sky map will be visible in the window or hole area at any given time. The chart and overlay are mounted so that they are free to rotate about a common axis. The star chart contains the brightest stars , constellations and (possibly) deep-sky objects visible from a particular latitude on Earth. The night sky that one sees from the Earth depends on whether the observer is in the northern or southern hemispheres and the latitude. A planisphere window is designed for a particular latitude and will be accurate enough for a certain band either side of that. Planisphere makers will usually offer them in a number of versions for different latitudes. Planispheres only show the stars visible from the observer's latitude ; stars below the horizon are not included. A complete twenty-four-hour time cycle is marked on the rim of the overlay. A full twelve months of calendar dates are marked on the rim of the starchart. The window is marked to show the direction of the eastern and western horizons. The disk and overlay are adjusted so that the observer's local time of day on the overlay corresponds to that day's date on the star chart disc. The portion of the star chart visible in the window then represents (with a distortion because it is a flat surface representing a spherical volume) the distribution of stars in the sky at that moment for the planisphere's designed location. Users hold the planisphere above their head with the eastern and western horizons correctly aligned to match the chart to actual star positions. The word planisphere (Latin planisphaerium ) was originally used in the second century by Claudius Ptolemy to describe the representation of a spherical Earth by a map drawn in the plane. This usage continued into the Renaissance: for example Gerardus Mercator described his 1569 world map as a planisphere. In this article the word describes the representation of the star-filled celestial sphere on a flat disc. The first star chart to have the name "planisphere" was made in 1624 by Jacob Bartsch . Bartsch was the son-in-law of Johannes Kepler , discoverer of Kepler's laws of planetary motion . Since the planisphere shows the celestial sphere in a printed flat, there is always considerable distortion. Planispheres, like all charts, are made using a certain projection method. For planispheres there are two major methods in use, leaving the choice with the designer. One such method is the polar azimuthal equidistant projection . Using this projection the sky is charted centered on one of the celestial poles (polar), while circles of equal declination (for instance 60°, 30°, 0° (the celestial equator), −30°, and −60°) lie equidistant from each other and from the poles (equidistant). 
The shapes of the constellations are proportionally correct in a straight line from the centre outwards, but at right angles to this direction (parallel to the declination circles) there is considerable distortion. That distortion will be worse as the distance to the pole gets greater. If we study the famous constellation of Orion in this projection and compare this to the real Orion, we can clearly see this distortion. One notable planisphere using azimuthal equidistant projection addresses this issue by printing a northern view on one side and the southern view on the other, thus reducing the distance charted from the center outward. The stereographic projection solves this problem while introducing another. Using this projection the distances between the declination circles are enlarged in such a way that the shapes of the constellations remain correct. Naturally in this projection the constellations on the edge become too large in comparison to constellations near the celestial pole: Orion will be twice as high as it should be. (This is the same effect that makes Greenland so huge in Mercator maps .) Another disadvantage is that, with more space for constellations near the edge of the planisphere, the space for the constellations around the celestial pole in question will be less than they deserve. For observers at moderate latitudes, who can see the sky near the celestial pole of their hemisphere better than that nearer the horizon, this may be a good reason to prefer a planisphere made with the polar azimuthal equidistant projection method. The upper disc contains a "horizon", that defines the visible part of the sky at any given moment, which is naturally half of the total starry sky. That horizon line is most of the time also distorted, for the same reason the constellations are distorted. The horizon line on a stereographic projection is a perfect circle. The horizon line on other projections is a kind of "collapsed" oval. The horizon is designed for a particular latitude and thus determines the area for which a planisphere is meant. Some more expensive planispheres have several upper discs that can be exchanged, or have an upper disc with more horizon-lines, for different latitudes. When a planisphere is used in a latitude zone other than the zone for which it was designed, the user will either see stars that are not in the planisphere, or the planisphere will show stars that are not visible in that latitude zone's sky. To study the starry sky thoroughly it may be necessary to buy a planisphere particularly for the area in question. However, most of the time the part of the sky near the horizon will not show many stars, due to hills, woods, buildings or just because of the thickness of the atmosphere we look through. The lower 5° above the horizon in particular hardly shows any stars (let alone objects) except under the very best conditions. Therefore, a planisphere can fairly accurately be used from +5° to −5° of the design latitude. For example, a planisphere for 40° north can be used between 35° and 45° north. Accurate planispheres represent the celestial coordinates : right ascension and declination . The changing positions of planets, asteroids or comets in terms of these coordinates can be looked up in annual astronomical guides, and these enable planisphere users to find them in the sky. Some planispheres use a separate pointer for the declination, using the same pivot point as the upper disc. 
Some planispheres have a declination feature printed on the upper disc, along the line connecting north and south on the horizon. Right ascension is represented on the edge, where the dates with which to set the planisphere are also found.
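The two projections discussed above differ only in how declination is mapped to radial distance from the celestial pole at the centre of the disc. The sketch below uses the standard formulas for a chart centred on the north celestial pole: radius proportional to (90° − δ) for the polar azimuthal equidistant projection, and to tan((90° − δ)/2) for the stereographic projection. The scale factor and function names are illustrative assumptions, since the text describes the projections qualitatively rather than giving formulas.

```python
import math

def radius_equidistant(declination_deg, scale=1.0):
    """Polar azimuthal equidistant: radius grows linearly with the angular
    distance from the north celestial pole (90 deg minus declination)."""
    return scale * (90.0 - declination_deg)

def radius_stereographic(declination_deg, scale=1.0):
    """Stereographic: radius grows with tan of half the polar distance, which
    preserves constellation shapes but enlarges them toward the chart's edge."""
    return scale * math.tan(math.radians(90.0 - declination_deg) / 2.0)

# Spacing of the declination circles mentioned above (60, 30, 0, -30, -60 deg):
for dec in (60, 30, 0, -30, -60):
    print(dec, round(radius_equidistant(dec), 1), round(radius_stereographic(dec), 3))
# The equidistant radii step evenly (30, 60, 90, 120, 150), while the
# stereographic radii (≈0.268, 0.577, 1.0, 1.732, 3.732) stretch toward the rim.
```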
https://en.wikipedia.org/wiki/Planisphere
A planktivore is an aquatic organism that feeds on planktonic food, including zooplankton and phytoplankton . [ 1 ] [ 2 ] Planktivorous organisms encompass a range of some of the planet's smallest to largest multicellular animals in both the present day and in the past billion years; basking sharks and copepods are just two examples of giant and microscopic organisms that feed upon plankton. [ 3 ] Planktivory can be an important mechanism of top-down control that contributes to trophic cascades in aquatic and marine systems. [ 4 ] [ 5 ] There is a tremendous diversity of feeding strategies and behaviors that planktivores utilize to capture prey. [ 6 ] [ 4 ] [ 7 ] Some planktivores utilize tides and currents to migrate between estuaries and coastal waters; [ 8 ] other aquatic planktivores reside in lakes or reservoirs where diverse assemblages of plankton are present, or migrate vertically in the water column searching for prey. [ 9 ] [ 5 ] [ 10 ] [ 11 ] Planktivore populations can impact the abundance and community composition of planktonic species through their predation pressure, [ 12 ] and planktivore migrations facilitate nutrient transport between benthic and pelagic habitats. [ 13 ] Planktivores are an important link in marine and freshwater systems that connect primary producers to the rest of the food chain. As climate change causes negative effects throughout the global oceans, planktivores are often directly impacted through changes to food webs and prey availability. [ 14 ] Additionally, harmful algal blooms (HABs) can negatively impact many planktivores and can transfer harmful toxins from the phytoplankton, to the planktivores, and along up the food chain. [ 15 ] As an important source of revenue for humans through tourism and commercial uses in fisheries, many conservation efforts are going on globally to protect these diverse animals known as planktivores. [ 16 ] [ 17 ] [ 7 ] [ 18 ] Plankton are defined as any type of organism that is unable to swim actively against currents and are thus transported by the physical forcing of tides and currents in the ocean. [ 19 ] Phytoplankton form the lowest trophic level of marine food webs and thus capture light energy and materials to provide food and energy for hundreds of thousands of types of planktivores. [ 20 ] Because they require light and abundant nutrients, phytoplankton are typically found in surface waters where light rays can penetrate water. [ 19 ] Nutrients that sustain phytoplankton include nitrate, phosphate, silicate, calcium, and micronutrients like iron; however, not all phytoplankton require all these identified nutrients and thus differences in nutrient availability impact phytoplankton species composition . [ 21 ] [ 20 ] This class of microscopic, photosynthetic organisms includes diatoms , coccolithophores , protists , cyanobacteria , dinoflagellates , and other microscopic algae . [ 20 ] Phytoplankton conduct photosynthesis via pigments in their cells; phytoplankton can use chlorophyll as well as other accessory photosynthetic pigments like fucoxanthin , chlorophyll c , alloxanthin , and carotenoids , depending on species. [ 22 ] [ 19 ] Due to their environmental requirements for light and nutrients, phytoplankton are most commonly found near continental margins, the equator, high-latitudes, and nutrient-rich areas. [ 20 ] They also form the foundation of the biological pump , which transports carbon to depth in the ocean. 
Zooplankton ("zoo" meaning "animal" [ 23 ] ) are generally consumers of other organisms for food. [ 24 ] Zooplankton may consume either phytoplankton or other zooplankton, making them the smallest class of planktivores. [ 18 ] They are common to most marine pelagic environments and act as an important step in the food chain to transfer energy up from primary producers to the rest of the marine food web. [ 25 ] Some zooplankton remain planktonic for their entire lives, while others eventually grow large enough to swim against currents. For instance, fish are born as planktonic larvae but once they grow large enough to swim, they are no longer considered plankton. [ 26 ] Many taxonomic groups (e.g. fishes, krill, corals, etc.) are zooplankton at some point in their lives. [ 26 ] For example, oysters begin as planktonic larvae; during this stage when they are considered zooplankton, they consume phytoplankton. Once they mature to adulthood, oysters continue to consume phytoplankton. [ 27 ] The spiny water flea is another example of a planktivorous invertebrate. [ 28 ] Some of the largest communities of zooplankton exist in high latitude systems like the eastern Bering Sea; pockets of dense zooplankton abundance also exist in the California Current and the Gulf of Mexico . [ 25 ] Zooplankton are, in turn, common prey items for planktivores; they respond to environmental change very rapidly due to their relatively short life spans, and so scientists can track their dynamics to understand what might be occurring in the larger marine food web and environment. [ 25 ] The relative ratios of certain zooplankton in the larger zooplankton community can also indicate an environmental change (e.g., eutrophication ) that may be significant. [ 29 ] For instance, an increase in rotifer abundance in the Great Lakes has been correlated with abnormally high levels of nutrients (eutrophication). [ 30 ] Many fishes are planktivorous during all or part of their life cycles, and these planktivorous fish are important to human industry and as prey for other organisms in the environment like seabirds and piscivorous fishes. [ 31 ] Planktivores comprise a large component of tropical ecosystems; in the Indo-Australian Archipelago , one study identified 350 planktivorous fish species in one studied grid cell and found that 27% of all fish species in this region were planktivorous. [ 32 ] This global study found that coral reef habitats globally have a disproportionate amount of planktivorous fishes. [ 32 ] In other habitats, examples of planktivorous fishes include many types of salmon like the pink salmon , sandeels , sardines , and silvery lightfish. [ 31 ] [ 33 ] [ 34 ] In ancient systems (read more below), the Titanichthys was an early massive vertebrate pelagic planktivore, with a lifestyle similar to that of the modern basking , whale , and megamouth sharks , all of whom are also planktivores. [ 3 ] Sea birds can also be planktivores; least auklets , crested auklets , storm petrels , ancient auklets, phalaropes , and many penguins are all examples of avian planktivores. [ 16 ] [ 34 ] Planktivorous seabirds can be indicators of ecosystem status because their dynamics often reflect processes affecting many trophic levels, like the consequences of climate change. [ 35 ] Blue whales and bowhead whales as well as some seals like the crabeater seal ( Lobodon carcinophagus ) are also planktivorous. 
[ 17 ] [ 36 ] Blue whales were recently found to consume a vast amount more plankton than was previously understood, representing an important element of the ocean biogeochemical cycle. [ 17 ] As previously mentioned, some plankton communities are well-studied and respond to environmental change very rapidly; understanding unusual plankton dynamics can elucidate potential consequences to planktivorous species and the larger marine food chain. [ 37 ] [ 29 ] One well-studied planktivore species is the gizzard shad ( Dorosoma cepedianum ) which has a voracious appetite for various forms of plankton across its life cycle. [ 38 ] [ 31 ] Planktivores can be either obligate planktivores, meaning they can only feed on plankton, or facultative planktivores, which take plankton when available but eat other types of food as well. In the case of the gizzard shad, they are obligate planktivores when larvae and juveniles, in part due to their very small mouth size; larval gizzard shad are most successful when small zooplankton are present in adequate quantities within their habitat. [ 12 ] As they grow, gizzard shad become omnivores, consuming phytoplankton, zooplankton, and larger pieces of nutritious detritus . Adult gizzard shad consume large volumes of zooplankton until it becomes scarce, then start consuming organic debris instead. Larval fishes and blueback herring are other well-studied examples of obligate planktivores, whereas fishes like the ocean sunfish can alternate between plankton and other food sources (i.e., are facultative planktivores). Facultative planktivores tend to be more opportunistic and live in ecosystems with many types of food sources. [ 7 ] Obligate planktivores have fewer options for prey choices; they are typically restricted to marine pelagic ecosystems that have a dominant plankton presence, such as highly productive upwelling regions. [ 7 ] Planktivores, whether obligate or facultative, obtain food in multiple ways. Particulate feeders eat planktonic items selectively, by identifying plankton and pursuing them in the water column. [ 7 ] Filter feeders process large volumes of water internally via different mechanisms, explained below, and strain food items out en masse or remove food particles from water as it passes by. "Tow-net" filter feeders swim rapidly with mouths open to filter the water, whereas "pumping" filter feeders suck in water via pumping actions. The charismatic flamingo is a pumping filter feeder, using its muscular tongue to pump water along specialized grooves in its bill and pump water back out once plankton have been retrieved. [ 39 ] In a different filter feeding process, stationary animals, like corals, use their tentacles to grab plankton particles out of the water column and transfer the particles into their mouth. [ 40 ] There are numerous interesting adaptations to remove plankton from the water column. The phalaropes use surface tension feeding to transport particles of prey to their mouth to be swallowed. These birds capture individual particles of plankton held in a droplet of water, suspended in their beaks. They then use a sequence of actions that begin with a quick opening of their beak to increase the surface area of the water droplet encasing prey. The action of stretching out the water droplet ultimately pushes the water and prey to the back of the throat where it can be consumed. [ 37 ] These birds also spin around at the water surface, creating their own eddies that draw prey up closer to their beaks. 
[ 37 ] Some species actively hunt plankton: in certain habitats such as the deep open ocean, as mentioned above, the planktivorous basking shark ( Cetorhinus maximus ) track the movements of their prey closely up and down the water column. [ 11 ] The megamouth shark ( Megachasma pelagios ), another planktivorous species, adopts a similar feeding strategy that mirrors the movement in the water column of their planktonic prey. [ 41 ] Similar to active hunting, some zooplankton, like copepods, are ambush hunters meaning they wait in the water column for prey to come within range and then rapidly attack and consume. [ 42 ] Some fishes change their feeding strategy throughout their lives; the Atlantic menhaden ( Brevoortia tyrannus ) is an obligate filter feeder in early life stages, but matures into a particulate feeder. [ 7 ] Some fishes, like the northern anchovy ( Engraulis mordax ) can merely modify their feeding behavior depending on the prey or environmental conditions. [ 7 ] Some fishes also school together when feeding to help improve contact rates of plankton and simultaneously prevent themselves from predation. [ 7 ] Some fishes have gill rakes, an internal filtration structure that assists fishes with capturing plankton prey. [ 6 ] The amount of gill rakes can indicate planktivory as well as the typical size of plankton consumed, showing a correlation between gill rake structure and the consumed plankton type. [ 6 ] Plankton have highly variable chemical compositions, which impacts their nutritional quality as a food source. [ 43 ] Scientists are still understanding how nutritional quality varies with the type of plankton; for example diatom nutritional quality is a controversial topic. [ 43 ] The ratios of phosphorus and nitrogen to carbon within a given plankton determine its nutritional quality. More carbon in an organism relative to these two elements decreases the plankton's nutritional value. [ 43 ] Additionally, plankton with higher amounts of polyunsaturated fatty acids are typically more energy dense. [ 43 ] [ 44 ] The nutritional value of plankton does sometimes depend on the nutritional needs of the planktivorous species. For fishes, the nutritional value of plankton is dependent on docosahexaenoic acid , long-chain polyunsaturated fatty acids, arachidonic acid , and eicosapentaenoic acid with higher concentrations of those chemicals leading to higher nutritional value. [ 44 ] However, lipids in plankton prey are not the only required chemical for larval fish; Malzahn et al. [ 45 ] found that other nutrients, like phosphorus, were necessary before growth improvements due to lipid concentrations can be realized. Additionally, it has been shown experimentally that the nutritional value of prey is more important than prey abundance for larval fishes. [ 45 ] With climate change , plankton may decrease in nutritional quality. Lau et al. [ 44 ] discovered that warming conditions and inorganic nutrient depletion in lakes as a result of climate change decreased the nutritional value of plankton communities. Planktivory is a common feeding strategy among some of our planet's largest organisms in both the present and the past. [ 3 ] Massive Mesozoic organisms like pachycormids have recently been identified as planktivores; [ 3 ] some individuals of this group reached lengths upwards of 9 feet. 
[ 3 ] Scientists also recently discovered the fossilized remains of another ancient organism, which they named the "false megamouth" ( Pseudomegachasma ) shark, and which was likely a filter-feeding planktivore during the Cretaceous period. [ 46 ] This new discovery illuminated planktivory as an example of convergent evolution, whereby distinct lineages evolved to fulfill similar dietary niches. [ 46 ] In other words, the false megamouth and its planktivory evolved separate from the ancestors of present-day shark planktivores like the megamouth shark, whale shark, and basking shark, all mentioned above. [ 46 ] The Arctic supports productive ecosystems that include many types of planktivorous species. Planktivorous pink salmon are common in the Arctic and the Bering Strait and have been suggested to exert significant control on structuring the phytoplankton and zooplankton dynamics in the subarctic North Pacific. [ 36 ] Shifts in prey type have also been observed: in northern Arctic regions, salmon are typically piscivorous (consuming other fish) while in the southern Arctic and Bering Strait they are planktivorous. [ 36 ] Capelin , Mallotus villosus , are also distributed across much of the Arctic and can exert significant control on zooplankton populations as a result of their planktivorous diet. [ 41 ] Capelin have also been seen to exhibit cannibalism on their eggs when other types of preferred plankton sources become less available; alternatively, this behavior may be because increased spawning leads to more eggs in the environment for consumption. [ 41 ] Arctic cod are also important zooplankton consumers and appear to follow aggregations of zooplankton around the region. [ 36 ] Planktivorous birds like the fork-tailed storm-petrel and many types of auklets are also very common in the Arctic. [ 36 ] Little auks are the most common Arctic planktivore species; as they reproduce on land, their planktivory creates an important link between marine and terrestrial nutrient reserves. [ 47 ] This link is formed as little auks consume plankton with marine-derived nutrients at sea, then deposit nutrient-rich waste products on land during their reproductive process. [ 47 ] In freshwater lake systems, planktivory can be an important forcer of trophic cascades which can ultimately affect phytoplankton production. [ 5 ] Fishes, in these systems, can promote phytoplankton productivity by preying on the zooplankton that control phytoplankton abundances. [ 5 ] This is an example of top-down trophic control, where higher trophic organisms like fishes impose control on the abundance of lower trophic organisms, like phytoplankton. [ 48 ] Such control on primary production via planktivorous organisms can be important in the functioning of mid-western United States lake systems. [ 5 ] Fishes are often the most impactful zooplankton predators, as seen in Newfoundland where three-spine stickleback ( Gasterosteus aculeatus ) predate heavily upon zooplankton. [ 45 ] In temperate lakes, the cyprinid and centrarchid fish families are commonly represented among the planktivore community. [ 45 ] Planktivores can exert significant competition pressure on organisms in certain lake systems; for instance, in an Idaho lake the introduced planktivorous invertebrate shrimp Mysis relicta competes with the native landlocked planktivorous salmon kokanees . 
[ 5 ] Because of the salmon's importance in trophic cycling, the loss of fishes in temperate lake systems could lead to widespread ecological consequences; in this example, such a loss could lead to unchecked predation on plankton by Mysis relicta . [ 5 ] Planktivory can also be important in man-made reservoirs. In contrast to deeper and colder natural lakes, reservoirs are warmer, shallower, heavily modified, human-made systems with different ecosystem dynamics. [ 12 ] Gizzard shad, the previously mentioned obligate planktivore, is frequently the most common fish in many reservoir systems. [ 12 ] In certain sub-Arctic habitats like deep waters, the planktivorous basking shark tracks the movements of its prey closely up and down the water column. [ 11 ] Other species like the megamouth shark adopt a similar feeding strategy that mirrors the movement of their plankton prey in the water column. [ 49 ] In sub-Arctic lakes, certain morphs of the whitefish ( Coregonus lavaretus ) are planktivorous; the pelagic whitefish feeds primarily on zooplankton and as such has more gill rakers for enhanced feeding than other, non-planktivorous morphs of the same species. [ 50 ] The primary limiting nutrient can shift between nitrogen and phosphorus as a consequence of changes in the structure of the food web, thereby limiting primary and secondary production in aquatic ecosystems. [ 14 ] [ 51 ] The bioavailability of such nutrients drives variation in the biomass and productivity of planktonic species. [ 51 ] Due to variance in the N:P excretion of planktivorous fish species, consumer-driven nutrient cycling results in changes in nutrient availability. [ 12 ] [ 14 ] By feeding on zooplankton, planktivorous fish can increase the rate of nutrient recycling by releasing phosphorus from their prey. [ 14 ] [ 52 ] Planktivorous fish may release cyanobacteria from nutrient limitation by increasing the concentration of bioavailable phosphorus through excretion. [ 52 ] The presence of planktivorous fish can also disturb sediments, increasing the amount of nutrients that are bioavailable to phytoplankton and further supporting phytoplankton nutrient demands. [ 52 ] Planktivory can play an important role in the growth, abundance, and community composition of planktonic species via top-down trophic control. For example, the competitive superiority of large zooplankton over smaller species in lake systems leads to large-body dominance in the absence of planktivorous fish as a result of increased food availability and grazing efficiency. [ 53 ] Alternatively, the presence of planktivorous fish results in a decrease in the zooplankton population through predation and shifts the community composition towards smaller zooplankton by limiting food availability and influencing size-selective predation (see the " predation " page for more information regarding size-selective predation). [ 54 ] [ 53 ] Predation by planktivorous fish reduces grazing by zooplankton and subsequently increases phytoplankton primary production and biomass. [ 54 ] When the population and growth rate of zooplankton are limited in this way, obligate zooplanktivores are less likely to migrate to the area due to the lack of available food. For example, the presence of gizzard shad in reservoirs has been observed to strongly influence the recruitment of other planktivores. [ 12 ] Variations in fish recruitment and mortality rates from nutrient limitation have also been noted in lake ecosystems. 
[ 55 ] Piscivory can have similar top-down effects on planktonic species by influencing the community composition of planktivores. The population of planktivorous fish can also be influenced through predation by piscivorous species such as marine mammals and aquatic birds. For example, planktivorous minnows in Lake Gatun experienced a rapid population decline after the introduction of peacock bass ( Cichla ocellaris ). [ 53 ] However, a reduced population of planktivorous fish species results in a population increase of another class of planktivores – zooplankton. In lake ecosystems, some fish have been observed to behave first as zooplanktivores and then as piscivores, affecting cascading trophic interactions. [ 55 ] Planktivory pressure from zooplankton in marine communities (top-down control, as previously mentioned) has a large influence on phytoplankton productivity. [ 4 ] Zooplankton can control phytoplankton seasonal dynamics as they exert the largest grazing pressure on phytoplankton; they also may modify their grazing strategies depending on environmental conditions, leading to seasonal change. [ 4 ] For instance, copepods can switch between ambushing prey and using water flow to capture prey depending on external conditions and prey abundance. [ 4 ] The planktivorous pressure zooplankton exert could explain the diversity of phytoplankton despite many phytoplankton occupying similar ecological niches (see the " paradox of the plankton " page for more information regarding this ecological conundrum). [ 4 ] [ 56 ] One notable example of trophic control is the ability of planktivores to affect the species distribution of larval crabs in estuaries and coastal waters. Crab larvae, which are also planktivores, are hatched inside estuaries, but some species then begin their migration out to waters along the coast where there are not as many predators. These crab larvae then utilize the tides to return to the estuaries when they become benthic organisms and are no longer planktivores. [ 8 ] Planktivores tend to live their early lives within estuaries, and these juvenile fish inhabit such regions throughout the warmer months of the year. The risk to plankton within estuaries varies through the year, reaching its highest from August to October and its lowest from December to April, which is consistent with the theory that planktivory is highest in the summer months in this system. The risk of planktivory is strongly correlated with the number of planktivores within this system. [ 8 ] Consumers can regulate primary production in an ecosystem by altering ratios of nutrients via different rates of recycling. [ 55 ] Nutrient transport is greatly influenced by planktivorous fish, which recycle and transport nutrients between benthic and pelagic habitats. [ 13 ] Nutrients released by benthic-feeding fishes can increase the total nutrient content of pelagic waters, as transported nutrients are fundamentally different from those that are recycled. [ 12 ] Additionally, planktivorous fish can have a significant effect on nutrient transport as well as total nutrient concentration by disturbing sediments through bioturbation . Increased nutrient cycling from near-sediment bioturbation by filter-feeding planktivores can increase phytoplankton populations via nutrient enrichment. [ 12 ] [ 13 ] [ 57 ] Salmon accumulate marine nutrients as they mature in ocean environments, which they then transport back to their stream of origin when they spawn. 
As the salmon decompose, the freshwater streams become enriched with nutrients which contribute to the development of the ecosystem. [ 58 ] The physical transport of nutrients and plankton can greatly affect the community composition and food web structure within oceanic ecosystems. In nearshore regions, planktivores and piscivores have been shown to be highly sensitive to changes in ocean currents, while zooplankton populations are unable to withstand elevated levels of predation pressure. [ 59 ] In some marine systems, planktivory can be an important factor controlling the duration and extent of phytoplankton blooms. [ 60 ] Changes in phytoplankton communities and growth rates can modify the amount of grazing pressure present; grazing pressure can also be dampened by physical factors in the water column. [ 60 ] The scientist Michael Behrenfeld proposed that the deepening of the mixed layer in the ocean, a vertical region near the surface made physically and chemically homogenous by active mixing, leads to a decrease in grazing interactions among planktivores and plankton because planktivores and plankton become more spatially distant from one another. [ 60 ] This spatial distance thereby lowers grazing rates by planktivores and facilitates phytoplankton blooms; both the physical changes and changes to grazing pressure have a significant influence on where and when phytoplankton blooms occur. [ 60 ] The shallowing of the mixed layer due to physical processes within the water column conversely intensifies planktivore feeding. [ 60 ] Harmful algal blooms occur when there is a bloom of toxin-producing phytoplankton. Planktivores such as fish and filter feeders that are present are highly likely to consume these phytoplankton, because phytoplankton make up the majority of their diet or the diet of their prey. Since these planktivores near the bottom of the food chain consume harmful toxins, those toxins then move up the food web when predators consume these fish. [ 61 ] The increasing concentration of some toxins through trophic levels described here is called bioaccumulation , and this can lead to a range of impacts from non-lethal changes in behavior to major die-offs of large marine animals. There are monitoring programs in place for shellfish due to human health concerns and the ease of sampling in oysters. Some fish, like the Atlantic herring ( Clupea harengus ) and other Clupeidae , feed directly on phytoplankton, while other fish feed on zooplankton that consume the harmful algae. [ 62 ] Domoic acid is a toxin carried by a type of diatom called Pseudo-nitzschia . [ 63 ] Pseudo-nitzschia was the main organism responsible for a large HAB that took place along the west coast of the US in 2015 and had a large impact on the Dungeness crab fishery that year. [ 64 ] When harmful algal blooms occur, planktivorous fish can act as vectors for poisonous substances like domoic acid. These planktivorous fish are eaten by larger fish and birds, and the subsequent ingestion of toxins can then harm those species. [ 15 ] Animals that consume planktivorous fish during a harmful algal bloom can suffer miscarriages, seizures, and vomiting, and can sometimes die. [ 63 ] Additionally, marine mammal mortality is occasionally attributed to harmful algal blooms, according to NOAA. [ 65 ] Krill are another example of a planktivore that may exhibit high levels of domoic acid in their system; these large plankton are then consumed by humpback and blue whales. 
Since krill can carry such high levels of domoic acid when blooms are present, that concentration is rapidly transferred to the whales, which then accumulate high concentrations of domoic acid in their systems as well. [ 66 ] There is no evidence proving that this domoic acid has had a negative impact on the whales, but if the concentration of domoic acid is great enough, they could be impacted similarly to other marine mammals. [ 66 ] Climate change is a worldwide phenomenon that affects everything from the largest planktivores, such as whales, to the smallest plankton. Climate change affects weather patterns, creates seasonal anomalies, alters sea surface temperature and ocean currents, can affect nutrient availability for phytoplankton, and may even spur HABs in some systems. The Arctic has been hit hard: shorter winters and hotter summers are reducing permafrost and rapidly melting ice caps, causing lower salinity levels. [ 67 ] The coupling of higher ocean CO 2 levels, higher temperatures, and lower salinity is causing changes in phytoplankton communities and diatom diversity. [ 49 ] Under higher temperature and lower salinity in combination, Thalassiosira spp. plankton were replaced by solitary Cylindrotheca closterium or Pseudo-nitzschia spp., a common HAB-causing phytoplankton. [ 49 ] Community changes such as this one have large-scale effects through trophic levels. A shift in the primary producer communities can cause shifts in consumer communities, as the new food may provide different dietary benefits. As there is less permanent ice in the Arctic and less summer ice, some planktivore species are already moving north into these new open waters. Atlantic cod and orcas have been documented in these new territories, while planktivores such as Arctic cod are losing their habitat and feeding grounds under and around the sea ice. [ 68 ] Similarly, Arctic birds such as the least and crested auklets rely on zooplankton that live under the disappearing sea ice, and these birds have experienced dramatic effects on reproductive fitness and nutritional stress as the amount of zooplankton available in the Bering Sea basin decreases. [ 69 ] In another prime example of shifting food webs, Moore et al. (2018) have found a shift from a benthic-dominated to a more pelagic-dominated ecosystem feeding structure. [ 70 ] With longer open-water periods due to a loss of sea ice, the Chukchi Sea has seen such a shift in the past three decades. [ 70 ] The increase in air temperature and loss of sea ice have coupled to promote an increase in pelagic fishes and a decrease in benthic biomass. [ 70 ] This change has encouraged a shift toward planktivorous seabirds instead of piscivorous seabirds. [ 71 ] Pollock are planktivorous fish that rely on copepods as their primary diet as juveniles. According to the Oscillating Control Hypothesis, early ice retreat caused by a warming climate creates a later bloom of copepods and aphids (a plankton species). The later bloom produces fewer large, lipid-rich copepods and results in smaller, less nutrient-rich copepods. The older pollock then face winter starvation, causing carnivory on young pollock (<1 yr old) and reducing population numbers and fitness. [ 72 ] Similar to the Arctic, sea ice in the Antarctic is melting rapidly and permanent ice is dwindling (Zachary Lab Cite). This ice melt creates changes in freshwater input and ocean stratification , consequently affecting nutrient delivery to primary producers. 
[ 73 ] As sea ice recedes, there is less of the valuable surface area on the underside of the ice where algae grow. This lack of algae leaves krill (a partially planktonic species) with less food available, consequently affecting the fitness of Antarctic consumers such as krill, squid, pollock, and other carnivorous zooplankton. The subarctic has seen similar ecosystem changes, especially in well-studied places such as Alaska. The warmer waters have helped increase zooplankton communities and have been creating a shift in ecosystem dynamics (Green 2017). There has been a large shift from piscivorous seabirds such as Pacific loons and black-legged kittiwakes to planktivorous seabirds such as ancient auklets and short-tailed shearwaters . [ 74 ] Marine planktivores such as the charismatic humpback , fin , and minke whales have been benefiting from the increase in zooplankton such as krill. [ 75 ] As these large whales spend more time migrating into these northern waters, they are taking up resources previously only used by Arctic planktivores, creating potential shifts in food availability and thus food webs. Tropical and equatorial marine regions are mainly characterized by coral reef communities or vast open oceans. Coral reefs are among the ecosystems most susceptible to climate change, in particular to warming oceans and acidification. Ocean acidification, driven by rising CO 2 levels in the ocean, has significant effects on zooplankton communities. Smith et al. (2016) [ 76 ] discovered that increased levels of CO 2 reduce zooplankton biomass but not zooplankton quality in tropical ecosystems , as increased CO 2 had no negative effects on fatty acid compositions. [ 77 ] This means that planktivores are not receiving less nutritious zooplankton, but are experiencing lower availability of zooplankton than is needed for survival. [ 77 ] Some of the most important planktivores in the tropics are corals themselves. Although spending a portion of their life cycle as planktonic organisms themselves, established corals are sedentary organisms that can use their tentacles to capture plankton from the surrounding environment to help supplement energy produced by the photosynthetic zooxanthellae . Climate change has had significant impacts on coral reefs: warming causes coral bleaching and increases in infectious diseases; sea-level rise causes more sedimentation that then smothers corals; stronger and more frequent storms cause breakage and structural destruction; increased land runoff brings more nutrients into these systems, causing algal blooms that cloud the water and therefore diminish light availability for photosynthesis; altered ocean currents change the dispersal of larvae and the availability of planktonic food; and, lastly, changes in ocean pH decrease structural integrity and growth rates. [ 78 ] There is also a plethora of planktivorous fish throughout the tropics that play important ecological roles within marine systems. Similar to corals, planktivorous reef fish are directly affected by these changing systems, and these negative effects then disrupt food webs throughout the oceans. [ 77 ] As plankton communities shift in species composition and availability, primary consumers have a harder time meeting energy budgets. This lack of food availability can influence reproduction and overall primary consumer populations, creating food shortages for higher trophic consumers. 
The global fisheries industry is a multi-billion-dollar international industry that provides food and livelihoods to billions of people around the globe. Some of the most important fisheries include salmon, pollock, mackerel, char, cod, halibut, and trout. In 2021, the take-home profits before bonuses that actually went into fishermen's pockets from the Alaskan salmon, cod, flounder, and groundfish fishing season came to $248 million. Planktivorous fish alone support a large and important economic industry. In 2017, Alaska pollock was the United States' largest commercial fishery by volume, with 3.4 billion pounds caught at a total value of $413 million. [ 79 ] Besides fishing, planktivorous marine animals drive the tourism economy as well. Tourists travel across the world for whale watching , to see charismatic megafauna such as humpback whales in Hawaii, minke whales in Alaska, grey whales in Oregon, and whale sharks in South America. Manta rays also drive dive and snorkel tourism, bringing in over $73 million annually in direct revenue across 23 countries around the world. [ 80 ] The main participating countries in manta ray tourism include Japan, Indonesia, the Maldives, Mozambique, Thailand, Australia, Mexico, the United States, the Federated States of Micronesia, and Palau. [ 80 ]
https://en.wikipedia.org/wiki/Planktivore
Planktology is the study of plankton , various small drifting plants, animals and microorganisms that inhabit bodies of water . Planktology topics include primary production , energy flow and the carbon cycle . Plankton drive the " biological pump ", a process by which the ocean ecosystem transports carbon from the surface euphotic zone to the ocean's depths. Such processes are vital to carbon dioxide sinks , one of several possibilities for countering global warming . Modern planktology also includes the behavioral aspects of drifting organisms, employing modern in situ imaging devices. Some planktology projects allow the public to participate online , such as the Long-term Ecosystem Observatory . There are a very large number of plankton species, often closely related or similar-looking, which makes classification a challenge for scientists. Their habitat also adds challenges to their study. [ 1 ]
https://en.wikipedia.org/wiki/Planktology
A plankton net is equipment used for collecting samples of plankton in standing bodies of water. It consists of a towing line and bridles, nylon mesh net, and a cod end. Plankton nets are considered one of the oldest, simplest and least expensive methods of sampling plankton. [ 1 ] The plankton net can be used for both vertical and horizontal sampling. [ 1 ] It allows researchers to analyse plankton both quantitatively (cell density, cell colony or biomass) and qualitatively (e.g. Chlorophyll-a as a primary production of phytoplankton) in water samples from the environment. One common method for collecting a plankton sample is to tow the net horizontally using a low-speed boat. Before collecting the plankton, the net should be rinsed with the sample water. The user should ensure that the cod end is completely closed by turning the valve into a vertical position. Then the plankton net is then lowered horizontal to the water surface at the side of the slowly moving boat . Sampling is done for 1.5 minutes. [ 4 ] After this time, the plankton sample is collected in a sample bottle by opening the cod end above it by turning the valve horizontally. When the sample is collected it can be analyzed using a microscope to identify the type of zooplankton or phytoplankton , or a cell count can be undertaken to determine the plankton cell density of the water source. John Vaughan Thompson developed a plankton net during his return voyage from Mauritius, which reached the UK in 1816. Impressed by marine bioluminescence in small crustacea he later named Sapphirina , he felt "under great obligations to this beautiful little animal, which by its splendid appearance in the water induced me to commence the use of a muslin hoop-net, which when it failed to procure me a specimen, brought up such a profusion of other marine animals altogether invisible while in the sea, as to induce a continued use of it on every favourable opportunity." He published his research in a series of six memoirs from 1828 to 1834. [ 5 ] The second recorded use of a plankton net was by Charles Darwin on 10 January 1832, during the Beagle survey voyage . His diary included a sketch of the net, which appears to have been based on a trawl net described by John Coldstream in a letter to Darwin. It is possible that Thompson's idea had earlier been drawn to Darwin's attention by Robert Edmond Grant in Edinburgh. Darwin describes this "contrivance" as "a bag four feet deep, made of bunting, & attached to [a] semicircular bow this by lines is kept upright, & dragged behind the vessel". The next day he remarked that "The number of animals that the net collects is very great & fully explains the manner so many animals of a large size live so far from land. — Many of these creatures so low in the scale of nature are most exquisite in their forms & rich colours. — It creates a feeling of wonder that so much beauty should be apparently created for such little purpose." [ 6 ]
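The quantitative analysis mentioned above reduces to simple arithmetic: the volume of water filtered during a horizontal tow is approximately the mouth area of the net multiplied by the distance towed, and cell density is the number of cells counted divided by that volume. The following Python sketch is a minimal illustration of that calculation; the net radius, tow speed, tow duration and cell count are hypothetical values chosen only for demonstration, and real surveys typically correct for filtration efficiency using a flowmeter.

import math

def filtered_volume_m3(net_radius_m, tow_speed_m_s, tow_duration_s, efficiency=1.0):
    # Volume filtered = mouth area * tow distance * filtration efficiency (assumed 100% here).
    mouth_area = math.pi * net_radius_m ** 2        # m^2
    tow_distance = tow_speed_m_s * tow_duration_s   # m
    return mouth_area * tow_distance * efficiency   # m^3

def cell_density_per_litre(cells_counted, volume_m3):
    # Cell density = total cells counted / volume filtered (1 m^3 = 1000 L).
    return cells_counted / (volume_m3 * 1000.0)

# Hypothetical example: 0.25 m mouth radius, 1 m/s boat speed, 90 s tow (1.5 minutes),
# and 120,000 cells counted in the whole sample.
volume = filtered_volume_m3(0.25, 1.0, 90)
print(round(volume, 2), "m^3 filtered")
print(round(cell_density_per_litre(120000, volume), 2), "cells per litre")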
https://en.wikipedia.org/wiki/Plankton_net
Planning is the process of thinking regarding the activities required to achieve a desired goal . Planning is based on foresight, the fundamental capacity for mental time travel . Some researchers regard the evolution of forethought - the capacity to think ahead - as a prime mover in human evolution . [ 1 ] Planning is a fundamental property of intelligent behavior. [ citation needed ] It involves the use of logic and imagination to visualize not only a desired result, but the steps necessary to achieve that result. An important aspect of planning is its relationship to forecasting . Forecasting aims to predict what the future will look like, while planning imagines what the future could look like. Planning according to established principles - most notably since the early-20th century [ 2 ] - forms a core part of many professional occupations, particularly in fields such as management and business . Once people have developed a plan, they can measure and assess progress , efficiency and effectiveness . As circumstances change, plans may need to be modified or even abandoned. In light of the popularity of the concept of planning, some adherents of the idea advocate planning for unplannable eventualities. [ 3 ] [ 4 ] Planning has been modeled in terms of intentions : deciding what tasks one might wish to do; tenacity : continuing towards a goal in the face of difficulty; and flexibility : adapting one's approach during implementation. [ 5 ] : 89 An implementation intention is a specification of the conditions under which a behavior that an individual believes to be correlated with a goal will take place, such as at a particular time or in a particular place. Implementation intentions are distinguished from goal intentions, which specify an outcome such as running a marathon. [ 5 ] : 89 Planning is one of the executive functions of the brain, encompassing the neurological processes involved in the formulation, evaluation and selection of a sequence of thoughts and actions to achieve a desired goal. Various studies utilizing a combination of neuropsychological , neuropharmacological and functional neuroimaging approaches have suggested there is a positive relationship between impaired planning ability and damage to the frontal lobe . A specific area within the mid-dorsolateral frontal cortex located in the frontal lobe has been implicated as playing an intrinsic role in both cognitive planning and associated executive traits such as working memory . Disruption of the neural pathways between this area of the frontal cortex and the basal ganglia , specifically the striatum (corticostriatal pathway), via various mechanisms such as traumatic brain injury or the effects of neurodegenerative diseases , may disrupt the processes required for normal planning function. [ 6 ] Individuals who were born at very low birth weight (<1500 grams) or extremely low birth weight are at greater risk for various cognitive deficits, including impaired planning ability. [ 7 ] [ 8 ] Another region activated in the planning process is the default mode network , which contributes to remembering the past and imagining the future. [ 9 ] This network is a distributed set of regions that involves the association cortex and paralimbic regions but spares the sensory and motor cortex; this makes it possible for the planning process to be disrupted by active tasks that use sensory and motor regions. [ 10 ] [ 11 ] There are a variety of neuropsychological tests which can be used to measure variation in planning ability between subjects and controls. 
Test participants with damage to the right anterior, and left or right posterior areas of the frontal lobes, showed no impairment. The results implicating the left anterior frontal lobes involvement in solving the Tower of London were supported in concomitant neuroimaging studies which also showed a reduction in regional cerebral blood flow to the left pre-frontal lobe. For the number of moves, a significant negative correlation was observed for the left prefrontal area: i.e. subjects that took more time planning their moves showed greater activation in the left prefrontal area. [ 14 ] Patrick Montana and Bruce Charnov outline a three-step result-oriented process for planning: [ 15 ] In organizations, planning can become a management process, concerned with defining goals for a future direction and determining on the missions and resources to achieve those targets. To meet the goals, managers may develop plans such as a business plan or a marketing plan . Planning always has a purpose. The purpose may involve the achievement of certain goals or targets: efficient use of resources, reducing risk, expanding the organization and its assets, etc. Public policies include laws, rules, decisions, and decrees. Public policy can be defined as efforts to tackle social issues via policymaking. [ 16 ] A policy is crafted with a specific goal in mind in order to address a societal problem that has been prioritized by the government. [ 17 ] Public policy planning includes environmental , land use , regional , urban and spatial planning . In many countries, the operation of a town and country planning system is often referred to as "planning" and the professionals which operate the system are known as " planners ". It is a conscious as well as sub-conscious activity. It is "an anticipatory decision making process" that helps in coping with complexities. It is deciding future course of action from amongst alternatives. It is a process that involves making and evaluating each set of interrelated decisions . It is selection of missions, objectives and "translation of knowledge into action." A planned performance brings better results compared to an unplanned one. A manager's job is planning, monitoring and controlling. Planning and goal setting are important traits of an organization. It is done at all levels of the organization. Planning includes the plan, the thought process, action, and implementation. Planning gives more power over the future. Planning is deciding in advance what to do, how to do it, when to do it, and who should do it. This bridges the gap from where the organization is to where it wants to be. The planning function involves establishing goals and arranging them in logical order. An organization that plans well achieves goals faster than one that does not plan before implementation. Planning is not just a professional activity: it is a feature of everyday life, whether for career advancement, organizing an event or even just getting through a busy day. Opportunism can supplement or replace planning. [ 18 ] [ 19 ]
https://en.wikipedia.org/wiki/Planning
Plano-convex ingots are lumps of metal with a flat or slightly concave top and a convex base. They are sometimes, misleadingly, referred to as bun ingots which imply the opposite concavity. [ 1 ] They are most often made of copper , although other materials such as copper alloy, lead and tin are used. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] The first examples known were from the Near East during the 3rd and 2nd Millennia BC. [ 7 ] By the end of the Bronze Age they were found throughout Europe [ 8 ] [ 9 ] and in Western and South Asia . [ 10 ] [ 11 ] [ 12 ] Similar ingot forms continued in use during later Roman and Medieval periods. Traditionally bun ingots were seen as a primary product of smelting , forming at the base of a furnace beneath a layer of less dense slag. However, experimental reconstruction of copper smelting showed that regular plano-convex ingots are difficult to form within the smelting furnace, producing only small ingots or copper prills that need to be remelted. [ 13 ] [ 14 ] High purity copper bun ingots found in Late Bronze Age Britain and the Mediterranean seem to have undergone a secondary refining procedure. The metallographic structure and high iron compositions of some plano-convex ingots suggest that they are the product of primary smelting. [ 15 ] Tylecote suggested that Roman plano-convex copper ingots may have been formed by tapping both slag and copper in one step into a mould or pit outside the furnace. [ 16 ] A similar process was described by Agricola in book IX of his De Re Metallica [ 17 ] and has been replicated experimentally. [ 18 ] Although all bun ingots share the same basic morphology, the details of their form and the texture of their convex base is dependent on the mould in which they cooled. Bun ingots made in purpose-dug depressions in sand can be highly variable in form even on the same site, [ 19 ] whereas ingots cast in reusable moulds will form sets of identical “mould siblings”. [ 20 ] The composition of the metal and its cooling conditions affect structure. As the ingot cools gases are released giving the upper surface a “blistered” texture and if cooling takes place outside of the furnace, the outer surface often becomes oxidised. Casting in a warm mould or reheating furnace gives the ingot an even columnar structure running in the direction of cooling, whereas ingots cast in a cold mould have a distinctive two stage cooling structure with an outer chilled layer reflecting the rapid cooling of the bottom when it came into contact with the mould. A slightly concave upper surface can be produced if the top of the ingot cools more slowly than the bottom. By the Late Bronze Age , the copper bun ingot, either in a simple form or with a hole in its center, had become the main form of copper ingot, replacing the earlier ‘bar ingot’ or rippenbarre . Weights of complete examples average ~4 kg, but examples of up to about 7 kg are known. Many early finds of British LBA bun ingots were unstratified but recently bun-shaped ingots and ingot fragments have been found in hoards alongside bronze artifacts and scrap metal. [ 21 ] Several offshore finds of probable LBA date suggest that copper bun ingots may have been traded by sea during this period. The copper is of high purity, although earlier examples are sometimes composed of arsenical copper. Tylecote suggested that they are not primary smelting products and instead were refined and recast. 
[ 22 ] The macrostructure of a half section example from Gillan, Cornwall shows a columnar structure that probably indicates slow cooling in a reheating furnace or a warm mold, rather than from pouring into a cold mold. [ 23 ] A second major group of British bun ingots date to the Roman period and are found mostly in the copper-rich highland areas of Wales and in Scotland . They are heavier than the LBA examples, with weights ranging between 12 and 22 kg. [ 24 ] Some have stamps clearly dating them to the Roman period [ 24 ] including an example that reads SOCIO ROMAE NATSOL . The term "socio" suggests that the ingots were cast by a private company rather than by the state. [ 25 ] Fraser Hunter reassessed the context of the Scottish examples and some of the unstamped Welsh examples and argues that they could in fact date to the Iron Age or at least reflect native rather than Roman copper working. [ 26 ] Although ingots of any sort are not common in the British Iron Age, planoconvex or bun-shaped ingots exist, e.g. a tin ingot discovered within the Iron Age hillfort at Chun Castle , Cornwall. [ 4 ] The Roman Bun Ingots are less pure than the earlier LBA examples and Tylecote suggests that they may be a direct product of smelting. [ 27 ] Theoretically such an ingot could be formed in the base of the furnace. However, this is problematic in the case of the stamped examples as this would require the furnace to be dismantled or else have a short shaft to allow access for stamping. [ 28 ] [ 29 ] As a solution the furnace could have been tapped into a mould at the completion of smelting. It is possible that both methods were used as several of the ingots seem to have had additional metal poured onto the top in order to allow stamping. [ 30 ]
https://en.wikipedia.org/wiki/Plano-convex_ingot
Plant-based digital data storage is a futuristic concept that proposes storing digital data in plants and seeds. [ 1 ] [ 2 ] The first practical implementation showed the possibility of using plants as storage media for digital data. New approaches for data archiving are required due to the constant increase in digital data production and the lack of a high-capacity, low-maintenance storage medium. In that work, a basic computer program written in the Python programming language was encoded into Nicotiana benthamiana with the help of two biotechnologists. The researchers first encoded a “Hello World” computer program into a DNA sequence, synthesized it, and cloned this coded DNA into a plasmid vector that was then used for transformation into Nicotiana benthamiana plants. The encoded program was reconstructed from the resulting seedlings with 100% accuracy, displaying “Hello World” on the computer screen. Their approach demonstrated that artificially encoded data can be stored and multiplied within plants without affecting their vigor and fertility. It also goes a step beyond storing data in a naked DNA molecule. The encoded information is inherited by progeny and faithfully reproducible, while the reduced metabolism of seeds provides additional protection for encoded DNA archives. That was the world's first practical implementation of a multicellular, eukaryotic organism used to store digital data. It goes beyond plant genome manipulations for biotechnological research and plant breeding. It takes advantage of multicellular organisms to propagate the encoded information in daughter cells. The host organism is able to grow and multiply with the embedded information, and every cell of the organism contains a copy of the encoded information; therefore, it avoids the costs of synthetic production of multiple copies of the same encoded information. Moreover, in contrast to naked DNA, which can be affected by unfavorable environmental conditions such as excessive temperature or desiccation/re-hydration cycles, DNA stored in a seed is protected against alterations and degradation over time without the need for any active maintenance. Insertion of short computer programs into plants could also serve to provide a detailed description of a given variety, since the need for such labeling has already been expressed. [ 3 ] As for manipulating and storing archives, the approach offers a new way of accessing, browsing and reading information. One gram of DNA could store exabytes of data, making it an enormously capacious storage medium. DNA protected within a seed of a living plant could be easy to access once hand-held readers become a reality.
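The encoding step described above, turning a short computer program into a DNA sequence before synthesis and cloning, can be illustrated with a toy mapping of two bits per nucleotide. The Python sketch below is a minimal illustration under that assumption and is not the scheme used in the study; practical DNA data storage encodings add error correction and avoid problematic sequences such as long runs of the same base.

# Toy 2-bits-per-base mapping (illustrative only; not the encoding used in the study).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def text_to_dna(text):
    # Convert each byte of the text to 8 bits, then map every 2 bits to one nucleotide.
    bits = "".join(format(byte, "08b") for byte in text.encode("utf-8"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(dna):
    # Reverse the mapping: nucleotides back to bits, bits back to bytes, bytes back to text.
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

program = 'print("Hello World")'
sequence = text_to_dna(program)
assert dna_to_text(sequence) == program   # round trip recovers the original program
print(sequence)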
https://en.wikipedia.org/wiki/Plant-based_digital_data_storage
A Plant Information Management System (PIMS) collects and integrates information about a production process from different sources. [ 1 ]
https://en.wikipedia.org/wiki/Plant_Information_Management_System
Plant Resources of Tropical Africa , known by its acronym PROTA , is a retired NGO and interdisciplinary documentation programme active between 2000 and 2013. PROTA produced a large database and various publications about Africa's useful plants. PROTA was concerned with increasing accessibility to traditional knowledge and scientific information about many types of African plants including: dyes & tannins , fibers , medicinal plants , stimulants , tropical timbers , vegetables , tubers (carbohydrates), oil seeds , ornamental plants , forage plants , and cereals . PROTA supported the sustainable use of these useful plants to preserve culture , reduce poverty and hunger, and respond to climate change . To this end, PROTA's overall goal was to synthesize diverse published information for approximately 8,000 plants used in tropical Africa, and then make it widely accessible through an online database and various book publications. In other words, PROTA was dedicated to making the useful plant biodiversity of tropical Africa better-known and respected. [ 1 ] [ 2 ] PROTA's database and various publications are considered unique in their epistemological approach because they were compiled as much from obscure publications as from peer-reviewed and popular literature, gathered throughout Africa and Europe. [ 3 ] In this way PROTA publications include Africa-centered references and perspectives, which is a major focus of the broader discipline of African studies . PROTA was also an international NGO registered in Nairobi, Kenya, that used information from its publications to structure a number of community projects involving over 800 farmers in Benin, Botswana, Burkina Faso, Kenya, and Madagascar. [ 1 ] [ 4 ] PROTA also pursued a number of other goals. [ 5 ] [ 6 ] PROTA retired in 2013 while facing large operational costs after its funding expired. At the point of its retirement, about 50% of PROTA's encyclopedia series was complete. During its operation, PROTA received funds from the European Union's Directorate-General for International Partnerships , [ 7 ] Netherlands Ministry of Foreign Affairs , Netherlands Ministry of Agriculture, Netherlands Organization for Scientific Research , Wageningen University , [ 8 ] COFRA Foundation , [ 9 ] International Tropical Timber Organization , [ 10 ] and the Bill & Melinda Gates Foundation . Since the programme's retirement, there have been ongoing efforts to fundraise and preserve PROTA's various publications and online database. [ 11 ] As of 2022, the PROTA database Prota4U is still online in an archive-like capacity at Wageningen University with articles written in English and French. Information in the PROTA database can also be accessed at the website Pl@ntUse , though in a different format. As of 2019, Prota4U had about 1,500 daily visitors and 500,000 unique visitors each year. [ 11 ] All of PROTA's encyclopedia volumes have been digitized and are available for free as Open access publications from the Wageningen University library . It is uncertain how much of the PROTA Recommends Series has been digitized. The programme operated through an international network of institutional partners and collaborators of the PROTA Foundation. PROTA had representatives in 20 African countries and dual headquarters in Wageningen, Netherlands and Nairobi , Kenya. PROTA also had regional offices with institutional partners in Burkina Faso, France, Gabon, Ghana, Madagascar, Malawi, Uganda, and the United Kingdom. 
[ 12 ] In Wageningen, PROTA also partnered with the EU-funded Technical Centre for Agricultural and Rural Cooperation (CTA) and the now-retired Agromisa Foundation to help distribute its various publications. Agromisa and PROTA were considered suitable partners because they were both committed to bridging the gap between scientific knowledge and traditional knowledge and were open access publishers of books with practical information about sustainable agriculture for small farmers in Africa. [ 13 ] The PROTA Handbook Series is a large illustrated encyclopedia series of utility plant species found in Tropical Africa . PROTA's retirement in 2013 made it unfeasible to complete the encyclopedia series; therefore, only 9 volumes were ever published. In 2002, the series was projected to contain 16 volumes with entries for 7,000–8,000 species. It was estimated that the series would include 2,500 botanical line drawings and 2,500 species distribution maps in about 11,000 pages. [ 14 ] The existing PROTA encyclopedia volumes have been described metaphorically in the Kew Bulletin as a treasure trove of information. [ 15 ] The Food and Agriculture Organization and Biodiversity International described PROTA 2: Vegetables as a detailed collection of ethnobotanical knowledge. [ 16 ] Some PROTA encyclopedias have received more than 376 citations. [ 17 ] PROTA Encyclopedia editors included individuals such as G.J. Grubben, who had led projects commissioned by the United Nations International Board for Plant Genetic Resources ; and Ameenah Gurib-Fakim , a biodiversity scientist who later became the President of Mauritius . [ 18 ] Though organized by species according to conventional botanical nomenclature , PROTA encyclopedias also include vernacular names in major African languages such as Swahili where information was available. [ 12 ] PROTA continued to distribute its encyclopedias after the organization's retirement. As of 2019, more than 30,000 PROTA encyclopedias had been printed in English and French and were distributed widely with the help of the Technical Centre for Agricultural and Rural Cooperation (CTA) and the now-retired Agromisa Foundation . Several PROTA encyclopedias are also available at the International Union for Conservation of Nature (IUCN) Headquarters' Library in Switzerland. [ 19 ] Species articles in the PROTA encyclopedia series were written by hundreds of authors from around the world and in Africa, and cover a wide range of information. [ 20 ] [ 12 ] [ 21 ] Currently, all published PROTA encyclopedia volumes have been digitized and are available as Open access publications from the Wageningen University library . [ 22 ] Several encyclopedias in the series were planned but not started at the time of PROTA's retirement in 2013. The PROTA 4U Database was conceived to improve access to information in PROTA's printed publications. The PROTA web database PROTA4U is a combination of PROTA’s highly standardized expert-validated review articles (PROTAbase) and yet-to-be-validated ‘starter kits’ for all other useful plants. These ‘starter kits’ are pre-filled with basic information from PROTA’s databases SPECIESLIST (important synonyms, uses, basic sources of information) and AFRIREFS (‘grey’ literature). Furthermore, the records contain the results of a meta-analysis from a large collection of agricultural and botanical databases , conducted successfully in cooperation with the ICON Group International . 
[ 25 ] The websites, which allowed their databases to be harvested, are properly acknowledged in the ‘starter kits’. Some believe that the 2010–2012 world food price crisis and 2011 East Africa drought led to widespread interest in supporting research for intensive farming of popular food crops instead of traditional, diversified local plant resources which were the focus of PROTA. [ 26 ] During this time, responses to these large crises in the international finance and philanthropy communities may have shifted interest away from ethnobotanical research programs like PROTA. This raises questions about the role of traditional, diversified local plant resources in the study of food security , economic development , biodiversity conservation , and the preservation of cultural heritage and traditional knowledge .
https://en.wikipedia.org/wiki/Plant_Resources_of_Tropical_Africa
Plant anatomy or phytotomy is the general term for the study of the internal structure of plants . Originally, it included plant morphology , the description of the physical form and external structure of plants, but since the mid-20th century, plant anatomy has been considered a separate field referring only to internal plant structure. [ 1 ] [ 2 ] Plant anatomy is now frequently investigated at the cellular level , and often involves the sectioning of tissues and microscopy . [ 3 ] Some studies of plant anatomy use a systems approach, organized on the basis of the plant's activities, such as nutrient transport, flowering, pollination, embryogenesis or seed development. [ 4 ] Others are more classically [ 5 ] divided into the following structural categories: About 300 BC, Theophrastus wrote a number of plant treatises, only two of which survive, Enquiry into Plants ( Περὶ φυτῶν ἱστορία ), and On the Causes of Plants ( Περὶ φυτῶν αἰτιῶν ). He developed concepts of plant morphology and classification, which did not withstand the scientific scrutiny of the Renaissance . A Swiss physician and botanist, Gaspard Bauhin , introduced binomial nomenclature into plant taxonomy . He published Pinax theatri botanici in 1596, which was the first to use this convention for naming of species. His criteria for classification included natural relationships, or 'affinities', which in many cases were structural. It was in the late 1600s that plant anatomy became refined into a modern science. Italian doctor and microscopist, Marcello Malpighi , was one of the two founders of plant anatomy. In 1671, he published his Anatomia Plantarum , the first major advance in plant physiogamy since Aristotle . The other founder was the British doctor Nehemiah Grew . He published An Idea of a Philosophical History of Plants in 1672 and The Anatomy of Plants in 1682. Grew is credited with the recognition of plant cells, although he called them 'vesicles' and 'bladders'. He correctly identified and described the sexual organs of plants (flowers) and their parts. [ 6 ] In the eighteenth century, Carl Linnaeus established taxonomy based on structure, and his early work was with plant anatomy. While the exact structural level which is to be considered to be scientifically valid for comparison and differentiation has changed with the growth of knowledge, the basic principles were established by Linnaeus. He published his master work, Species Plantarum in 1753. In 1802, French botanist Charles-François Brisseau de Mirbel , published Traité d'anatomie et de physiologie végétale ( Treatise on Plant Anatomy and Physiology ) establishing the beginnings of the science of plant cytology . In 1812, Johann Jacob Paul Moldenhawer published Beyträge zur Anatomie der Pflanzen , describing microscopic studies of plant tissues. In 1813, a Swiss botanist, Augustin Pyrame de Candolle , published Théorie élémentaire de la botanique , in which he argued that plant anatomy, not physiology, ought to be the sole basis for plant classification. Using a scientific basis, he established structural criteria for defining and separating plant genera. In 1830, Franz Meyen published Phytotomie , the first comprehensive review of plant anatomy. In 1838, German botanist Matthias Jakob Schleiden , published Contributions to Phytogenesis , stating, "the lower plants all consist of one cell, while the higher plants are composed of (many) individual cells" thus confirming and continuing Mirbel's work. 
A German-Polish botanist, Eduard Strasburger , described the mitotic process in plant cells and further demonstrated that new cell nuclei can only arise from the division of other pre-existing nuclei. His Studien über Protoplasma was published in 1876. Gottlieb Haberlandt , a German botanist, studied plant physiology and classified plant tissue based upon function. On this basis, in 1884, he published Physiologische Pflanzenanatomie ( Physiological Plant Anatomy ), in which he described twelve types of tissue systems (absorptive, mechanical, photosynthetic, etc.). British paleobotanists Dukinfield Henry Scott and William Crawford Williamson described the structures of fossilized plants at the end of the nineteenth century. Scott's Studies in Fossil Botany was published in 1900. Following Charles Darwin 's Origin of Species , a Canadian botanist, Edward Charles Jeffrey , who was studying the comparative anatomy and phylogeny of different vascular plant groups, applied the theory to plants, using the form and structure of plants to establish a number of evolutionary lines. He published his The Anatomy of Woody Plants in 1917. The growth of comparative plant anatomy was spearheaded by British botanist Agnes Arber . She published Water Plants: A Study of Aquatic Angiosperms in 1920, Monocotyledons: A Morphological Study in 1925, and The Gramineae: A Study of Cereal, Bamboo and Grass in 1934. [ 7 ] Following World War II, Katherine Esau published Plant Anatomy (1953), which became the definitive textbook on plant structure in North American universities and elsewhere; it was still in print as of 2006. [ 8 ] She followed up with her Anatomy of Seed Plants in 1960.
https://en.wikipedia.org/wiki/Plant_anatomy
Plant arithmetic is a form of plant intelligence whereby plants appear to perform arithmetic operations – a form of number sense in plants. Such plants include the Venus flytrap and Arabidopsis thaliana. The Venus flytrap can count to two and to five in order to trap and then digest its prey. [ 1 ] [ 2 ] The Venus flytrap is a carnivorous plant that catches its prey with a trapping structure formed by the terminal portion of each of the plant's leaves, which is triggered by tiny hairs on their inner surfaces. A Venus flytrap's reactions are driven by electrical and mechanical (movement-related) changes. [ 3 ] [ 4 ] [ 5 ] When an insect or spider crawling along the leaves contacts a hair, the trap prepares to close, snapping shut only if a second contact occurs within approximately twenty seconds of the first strike. This requirement for redundant triggering serves as a safeguard against wasting energy by trapping objects with no nutritional value, and the plant begins digestion only after a total of five stimuli, ensuring that it has caught live prey worth consuming. Between its open state and digestion, a Venus flytrap passes through two further states, closed and locked, which differ in the configuration of the trap. [ 3 ] [ 4 ] [ 5 ] A closed trap occurs when the two lobes close or catch prey. [ 3 ] [ 4 ] [ 5 ] A locked trap occurs when the cilia further trap the prey. [ 3 ] [ 4 ] The trap can exert a force of about four newtons . [ 4 ] In addition, the cilia can further hinder a creature's ability to escape. [ 3 ] [ 4 ] The mechanism is so highly specialized that it can distinguish between living prey and non-prey stimuli, such as falling raindrops; [ 6 ] two trigger hairs must be touched in succession within 20 seconds of each other, or one hair touched twice in rapid succession, [ 6 ] whereupon the lobes of the trap will snap shut, typically in about one-tenth of a second. [ 7 ] The number of days that the trap remains closed depends on whether or not the plant has caught prey. [ 3 ] Furthermore, the size of the prey can affect the number of days needed for digestion. [ 3 ] If a creature is too small, the Venus flytrap can release it by beginning to reopen (becoming semi-open). [ 3 ] [ 4 ] The transition from closed to open takes two days and occurs after the plant has finished digesting or has determined that it has not caught anything worthwhile. [ 3 ] [ 4 ] One day is needed to become semi-open, which gives the lobes a concave appearance, and a second day is needed for the trap to become fully open, giving a convex appearance. [ 3 ] [ 4 ] The angle of a Venus flytrap's lobes when they are open can be affected by the plant's water status. [ 5 ] Arabidopsis thaliana in effect performs division to control starch use at night. [ 8 ] Most plants accumulate starch by day, then metabolize it at a fixed rate during the night. However, if the onset of darkness is unusually early, Arabidopsis thaliana reduces its rate of starch use by an amount that effectively requires dividing the remaining starch by the time left until dawn. [ 9 ] However, there are alternative explanations, [ 10 ] such as feedback control by sensing the amount of soluble sugars left. [ 11 ] As of 2015, open questions remain. [ 12 ]
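The two behaviours described above are simple enough to restate as arithmetic. The following Python sketch is purely illustrative – the class, the 20-second window, the five-stimulus threshold and the starch figures are restatements of the counts given above, while the names and example values are invented for the demonstration and are not a biological model.

from dataclasses import dataclass, field

@dataclass
class FlytrapCounter:
    """Toy restatement of the trigger-hair counting described above (illustrative only)."""
    window: float = 20.0                      # seconds within which two touches must occur
    touches: list = field(default_factory=list)

    def touch(self, t: float) -> str:
        """Record a trigger-hair stimulation at time t (seconds) and report the trap state."""
        self.touches.append(t)
        recent = [x for x in self.touches if t - x <= self.window]
        if len(self.touches) >= 5:
            return "digesting"                # a fifth stimulus: digestion begins
        if len(recent) >= 2:
            return "closed"                   # second touch inside the window: trap snaps shut
        return "open"

def starch_use_rate(starch_remaining: float, hours_until_dawn: float) -> float:
    """The division the plant is thought to approximate: remaining starch divided by
    the time left until dawn gives a near-constant overnight consumption rate."""
    return starch_remaining / hours_until_dawn

trap = FlytrapCounter()
print(trap.touch(0.0))                        # 'open'   – a single touch only arms the trap
print(trap.touch(5.0))                        # 'closed' – second touch within 20 s snaps it shut

# Example: an unusually early dusk leaves 80 units of starch with 12 h of darkness ahead.
print(starch_use_rate(80.0, 12.0))            # ~6.7 units per hour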
https://en.wikipedia.org/wiki/Plant_arithmetic
A clan badge , sometimes called a plant badge , is a badge or emblem , usually a sprig of a specific plant, that is used to identify a member of a particular Scottish clan . [ 1 ] They are usually worn affixed to the bonnet [ 2 ] behind the Scottish crest badge , [ 3 ] or pinned at the shoulder of a lady's tartan sash. According to popular lore, clan badges were used by Scottish clans as a means of identification in battle. An authentic example of plants being used in this way (though not by a clan) is the sprigs of oats used by troops under the command of Montrose during the sack of Aberdeen . Similar items are known to have been used by military forces in Scotland, such as paper or the "White Cockade " (a bunch of white ribbon) of the Jacobites . [ 4 ] Despite popular lore, many clan badges attributed to Scottish clans would be completely impractical for use as a means of identification. Many would be unsuitable even for a modern clan gathering, let alone a raging clan battle. Also, a number of the plants (and flowers) attributed as clan badges are only available during certain times of year. Even though it is maintained that clan badges were used long before the Scottish crest badges used today, according to a former Lord Lyon King of Arms the oldest symbols used at gatherings were heraldic flags such as the banner, standard and pinsel. [ 5 ] There is much confusion as to why some clans have been attributed more than one clan badge. Several 19th-century writers variously attributed plants to clans, many times contradicting each other. It has been claimed by one writer that if a clan gained new lands it may also have acquired that district's "badge" and used it along with its own clan badge. It is clear, however, that there are several large groups of clans which share badges and also share a historical connection. The Clan Donald group (clans Macdonald , Macdonald of Clanranald , Macdonell of Glengarry , MacDonald of Keppoch ) and clans/ septs which have been associated with Clan Donald (like certain MacIntyres and the Macqueens of Skye) all have common heath attributed as their badge. Another large group is the Clan Chattan group (clans Mackintosh , Macpherson , Macgillivray , Macqueen , Macbain , Farquharson , Davidson ), which have been attributed red whortleberry (sometimes called cranberry in Scotland), or bearberry , or boxwood . The leaves of these three plants are very similar, and at least one writer has claimed that whatever plant happened to be available was used. One group of clans, the Siol Alpin group, is said to have claimed, or is thought to share, a common descent. The Siol Alpin clans (clans Grant , Gregor , MacAulay , Macfie , Macnab , Mackinnon , Macquarrie ) are all attributed the clan badge of pine (Scots fir). In some cases, clan badges are derived from the heraldry of clan chiefs. For example, the Farquharsons have pine attributed as one of their clan badges (pine also appears on the uniforms of the Invercauld Highlanders). Pine was actually used in the Invercauld Arms as a mark of cadency of the basic Shaw-Mackintosh Arms. [ 5 ]
https://en.wikipedia.org/wiki/Plant_badge
Plant breeders' rights ( PBR ), also known as plant variety rights ( PVR ), are rights granted in certain places to the breeder of a new variety of plant that give the breeder exclusive control over the propagating material (including seed , cuttings, divisions, tissue culture) and harvested material ( cut flowers , fruit, foliage) of a new variety for a number of years. The system of plant breeders' rights is considered a sui generis form of intellectual property rights. [ 1 ] With these rights, the breeder can choose to become the exclusive marketer of the variety, or to license the variety to others. In order to qualify for these exclusive rights, a variety must be new, distinct, uniform, and stable. [ 2 ] The breeder must also give the variety an acceptable "denomination", which becomes its generic name and must be used by anyone who markets the variety. Typically, plant variety rights are granted by national offices after examination. Seed is submitted to the plant variety office, which grows it for one or more seasons to check that it is distinct, stable, and uniform. If these tests are passed, exclusive rights are granted for a specified period (typically 20/25 years, or 25/30 years for trees and vines). Renewal fees (often annual) are required to maintain the rights. Breeders can bring suit to enforce their rights and can recover damages. Plant breeders' rights contain exemptions that are not recognized under other legal doctrines such as patent law. Commonly, there is an exemption for farm-saved seed. Farmers may store this production in their own bins for their own use as seed, but this does not necessarily extend to "brown-bag sales" (i.e. resale of farm-saved seed to neighbors in the local area). [ 3 ] Further sales for propagation purposes are not allowed without the written approval of the breeder. There is also a breeders' exemption (a research exemption in the 1991 Act) that allows breeders to use protected varieties as sources of initial variation to create new varieties of plants (1978 Act), [ 4 ] or for other experimental purposes (1991 Act). [ 5 ] There is also a provision for compulsory licensing to assure public access to protected varieties if the national interest requires it and the breeder is unable to meet the demand. There is tension over the relationship between patent rights and plant breeders' rights. There has been litigation in Australia, the United States, and Canada over the overlap between such rights. [ 6 ] Each of these cases was decided on the principle that patents and plant breeders' rights are overlapping and not mutually exclusive. Thus, the exemptions from infringement of plant breeders' rights, such as the saved-seed exemption, do not create corresponding exemptions from infringement of the patents covering the same plants. Likewise, acts that infringe the plant breeders' rights, such as exportation of the variety, would not necessarily infringe a patent on the variety, which only allows the patent owner to prohibit making, using, or selling (first sale, but not resale) the patented invention. In 1957, negotiations took place in France concerning the protection of new varieties. This led to the creation of the Union Internationale pour la Protection des Obtentions Végétales (UPOV) and the adoption of the first text of the International Convention for the Protection of New Varieties of Plants (UPOV Convention) in 1961.
The purpose of the Convention was to ensure that the member states party to the Convention acknowledge the achievements of breeders of new plant varieties by making available to them an exclusive property right, on the basis of a set of uniform and clearly defined principles. The Convention was revised in Geneva in 1972, 1978 and 1991. Both the 1978 and the 1991 Acts set out a minimum scope of protection and offer member states the possibility of taking national circumstances into account in their legislation. Under the 1978 Act, the minimum scope of the plant breeder's right requires that the holder's prior authorization is necessary for the production for purposes of commercial marketing, the offering for sale and the marketing of propagating material of the protected variety. The 1991 Act contains more detailed provisions defining the acts concerning propagating material in relation to which the holder's authorization is required. The breeder's authorization is also required in relation to any of the specified acts done with harvested material of the variety, unless the breeder has had reasonable opportunity to exercise their right in relation to the propagating material. Under that provision, for example, a flower breeder who protects their variety in the Netherlands could block importation of cut flowers of that variety into the Netherlands from Egypt, which does not grant plant breeders' rights, because the breeder had no opportunity to exercise any rights in Egypt. Member countries also have the option to require the breeder's authorization with respect to the specified acts as applied to products directly obtained from the harvested material (such as flour or oil from grain, or juice from fruit), unless the breeder has had reasonable opportunity to exercise their right in relation to the harvested material. The UPOV Convention also establishes a multilateral system of national treatment , under which citizens of any member state are treated as citizens of all member states for the purpose of obtaining plant breeders' rights. It also sets up a multilateral priority filing system, under which an application for protection filed in one member state establishes a filing date for applications filed in all other member states within one year of that original filing date. This allows a breeder to file in any one member country within the one-year period required to preserve the novelty of their variety, and the novelty of the variety will still be recognized when the filing is done in other member countries within one year of the original filing date. However, if the applicant does not wish to make use of priority filing, he or she has four years in which to apply in all other member states (excepting the United States) for all species except tree and vine species, in which case he or she has six years to make the application. More information can be obtained in Article 10(1)(b) of Council Regulation (EC) No. 2100/94 of 27 July 1994. The trigger that starts the four- or six-year period is not actually the date on which the first filing is made but the date on which the variety was first commercialized. The UPOV Convention is not self-executing . Each member state must adopt legislation consistent with the requirements of the convention and submit that legislation to the UPOV Secretariat for review and approval by the UPOV Council, which consists of all the UPOV member states acting in committee.
In compliance with these treaty obligations, the United Kingdom enacted the Plant Variety and Seeds Act 1964 . Similar legislation was passed in the Netherlands, Denmark, Germany, and New Zealand. In 1970 the United States followed the lead of seventeen Western European nations and passed the Plant Variety Protection Act 1970 (US). This legislation provided protection to developers of novel, sexually reproduced plants. However, the United States originally acceded to the UPOV Convention on the basis of the Plant Patent Act and did not bring the PVP Act into compliance with UPOV requirements until 1984, when the Commissioner of Plant Variety Protection promulgated rules to do so. Since the 1980s, the US Patent Office has granted patents on plants, including plant varieties; this provides a second way of protecting plant varieties in the United States. Australia passed the Plant Variety Protection Act 1987 (Cth) and the Plant Breeders Rights Act 1994 (Cth). Australian patent law also permits the patenting of plant varieties. In total, 65 countries have signed the UPOV Convention and adopted plant breeders' rights legislation consistent with the requirements of the convention. The WTO 's Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs) requires member states to provide protection for plant varieties either by patents or by an effective sui generis (stand-alone) system, or a combination of the two. Most countries meet this requirement through UPOV Convention-compliant legislation. India has adopted a plant breeders' rights law that has been rejected by the UPOV Council as not meeting the requirements of the treaty. The most recent, 1991 UPOV convention established several restrictions upon international plant breeders' rights. While the current text of the convention recognizes novel varieties of plants as intellectual property , laws were formed concerning the preservation of seeds for future plantation, such that the need to buy seeds to use in subsequent planting seasons would be significantly reduced, and even potentially eliminated altogether. [ 7 ] [ 8 ] In addition, the 1991 convention also concerns the method of instigating plant breeding by implementing pre-existing and patented plant species as contributors of vital genetic information in the creation of what would legally be regarded as a new variety of plant. [ 7 ] Constituent countries of the World Trade Organization are required to acknowledge the creation of new varieties of plants, and to uphold these creations within full recognition of intellectual property rights laws. A formalized piece of legislation exemplifying the manner in which such intellectual property rights can be conferred is the 1991 UPOV convention, which confers such rights upon an individual breeder. [ 7 ] This document further identifies a breeder as one who has found or created a plant variety, one who possesses legal authority for the contractual production of a new plant variety, or one who has inherited legal rights to this form of intellectual property as it was derived under either of the two aforementioned conditions. As a result of debate over the protection of hybrid plants as new varieties, the legal measure of double protection, as expressed within the current iteration of the UPOV, can be taken.
[ 7 ] [ 9 ] Double protection mediates the overlap between plant breeders' rights and patents that exists within the purview of intellectual property rights law, by enabling the protections of both to be conferred upon a particular plant variety. [ 7 ] Plant breeders' rights (sometimes referred to as breeders' privilege ) are contentious, in particular when analyzed in balance with other relevant international legal instruments, such as the Convention on Biological Diversity (and its Nagoya Protocol ) or the International Treaty on Plant Genetic Resources for Food and Agriculture (Plant Treaty). The UPOV is often criticized on this basis. There have been contrary opinions expressed by both lawyers and scientists assessing the general necessity for the protection of bred plant varieties as a form of intellectual property. [ 8 ] [ 10 ] Currently, intellectual property rights protect ideas that can be demonstrated as being novel and undiscovered at the time of their legal claim as intellectual property. [ 10 ] This definition of novelty, however, has been flexible throughout the history of intellectual property law, both internationally and within the United States . [ 11 ] [ 12 ] Expectations of future changes to the legal protection of plant-related forms of intellectual property differ from the legal requirements that applied to the first plant patent. [ 13 ] [ 14 ] Proponents of these laws recognize an overarching need for the financial support of research and development. Agricultural research and development, for example, has been specified as a particularly demanding endeavor with respect to immediate concerns for the ability to sustainably feed an increasing global population. [ 9 ] [ 15 ] On the contrary, some believe that a more diverse approach than the imposition of intellectual property rights laws upon new plant varieties is required. [ 9 ] [ 8 ] This counterargument asserts that complex social, cultural, and economic factors affect the nature of intellectual property and its protection. A specific concern within this argument is with the means by which seeds are accessed within different local and international regions. [ 9 ] Recognizing that this process is highly changeable and can vary greatly over time, supporters of this argument contend that this diversity must be reflected within intellectual property rights laws in order for them to serve as an effective protection of plant breeders' rights. [ 9 ] As a result of this conflict concerning authority over seeds, new legislation has been implemented in the United States. [ 8 ] The Open Source Seed Initiative (OSSI) is a national initiative in the United States, and the first of its kind to model its approach to plant breeders' rights on the mechanisms used by open-source software. [ 8 ] Subsequent discourse on this approach has arisen, as concerns with the use of open-source technology within a legal framework have developed. Some perceive OSSI as having significantly limited plant breeders' ability to access intellectual property rights for new plant varieties. [ 8 ] This has resulted in claims that funding for research and development in this sector will also decline. [ 8 ] Seed sovereignty can be defined as the right "to breed and exchange diverse open-sourced seeds".
[ 16 ] Generally, it comes from the belief that communities should have control over their own seed stock, as a means to increase agricultural biodiversity, resilience, and food security. This idea is closely connected to issues of intellectual property rights, particularly those related to the patenting of plant genetics, due to the importance of seed saving in seed sovereignty. [ 17 ] Activists argue that farmers and individuals should have legal protection for the practice of saving seed as a means of maintaining traditional plant varieties. [ 18 ] Seed sovereignty activists also argue that seed saving should be protected on the grounds of environmentalism and food security. [ 19 ] Some activists argue that seed sovereignty is important because of the cultural value of certain seeds and plant varieties, especially among indigenous communities. [ 20 ] Seed sovereignty has strong ties to the food justice and food sovereignty movements, due to its focus on increasing food security for all communities.
https://en.wikipedia.org/wiki/Plant_breeders'_rights
Plant breeding is the science of changing the traits of plants in order to produce desired characteristics. [ 1 ] It is used to improve the quality of plant products for use by humans and animals. [ 2 ] The goals of plant breeding are to produce crop varieties that boast unique and superior traits for a variety of applications. The most frequently addressed agricultural traits are those related to biotic and abiotic stress tolerance, grain or biomass yield, end-use quality characteristics such as taste or the concentrations of specific biological molecules (proteins, sugars, lipids, vitamins, fibers), and ease of processing (harvesting, milling, baking, malting, blending, etc.). [ 3 ] Plant breeding can be performed using many different techniques, ranging from the selection of the most desirable plants for propagation, to methods that make use of knowledge of genetics and chromosomes, to more complex molecular techniques. A plant's genes determine the qualitative and quantitative traits it will have. Plant breeders strive to create a specific outcome of plants and potentially new plant varieties, [ 2 ] and in the course of doing so, narrow down the genetic diversity of that variety to a specific few biotypes. [ 4 ] It is practiced worldwide by individuals such as gardeners and farmers, and by professional plant breeders employed by organizations such as government institutions, universities, crop-specific industry associations or research centers. International development agencies believe that breeding new crops is important for ensuring food security by developing new varieties that are higher yielding, disease resistant, drought tolerant or regionally adapted to different environments and growing conditions. [ 5 ] A recent study shows that without plant breeding, Europe would have produced 20% fewer arable crops over the last 20 years, consuming an additional 21.6 million hectares (53 million acres) of land and emitting 4 billion tonnes (3.9 × 10⁹ long tons; 4.4 × 10⁹ short tons) of carbon. [ 6 ] [ 7 ] Wheat species created for Morocco are currently being crossed with plants to create new varieties for northern France. Soybeans, which were previously grown predominantly in the south of France , are now grown in southern Germany. [ 6 ] [ 8 ] Plant breeding started with sedentary agriculture, and particularly with the domestication of the first agricultural plants, a practice which is estimated to date back 9,000 to 11,000 years. [ 9 ] Initially, early farmers simply selected food plants with particular desirable characteristics and employed these as progenitors for subsequent generations, resulting in an accumulation of valuable traits over time. Grafting technology had been practiced in China before 2000 BCE. [ 10 ] By 500 BCE grafting was well established and practiced. [ 11 ] Gregor Mendel (1822–84) is considered the "father of genetics ". His experiments with plant hybridization led to his establishing laws of inheritance . Genetics stimulated research to improve crop production through plant breeding. Selective breeding played a crucial role in the Green Revolution of the 20th century. Modern plant breeding is applied genetics, but its scientific basis is broader, covering molecular biology , cytology , systematics , physiology , pathology , entomology , chemistry , and statistics ( biometrics ). It has also developed its own technology.
One major technique of plant breeding is selection , the process of selectively propagating plants with desirable characteristics and eliminating or "culling" those with less desirable characteristics. [ 12 ] Another technique is the deliberate interbreeding (crossing) of closely or distantly related individuals to produce new crop varieties or lines with desirable properties. Plants are crossbred to introduce traits / genes from one variety or line into a new genetic background. For example, a mildew -resistant pea may be crossed with a high-yielding but susceptible pea, the goal of the cross being to introduce mildew resistance without losing the high-yield characteristics. Progeny from the cross would then be crossed with the high-yielding parent to ensure that the progeny were most like the high-yielding parent ( backcrossing ). The progeny from that cross would then be tested for yield (selection, as described above) and mildew resistance, and high-yielding resistant plants would be further developed. Plants may also be crossed with themselves to produce inbred varieties for breeding. Pollinators may be excluded through the use of pollination bags . Classical breeding relies largely on homologous recombination between chromosomes to generate genetic diversity . The classical plant breeder may also make use of a number of in vitro techniques such as protoplast fusion , embryo rescue or mutagenesis (see below) to generate diversity and produce hybrid plants that would not exist in nature . Breeders have sought to incorporate a wide range of traits into crop plants in this way. Gartons Agricultural Plant Breeders in England was established in 1880 by John Garton, who was one of the first to commercialize new varieties of agricultural crops created through cross-pollination; the firm became a public company in 1898. [ 13 ] The firm's first introduction was the Abundance Oat , an oat variety. [ 14 ] [ 15 ] It is one of the first agricultural grain varieties bred from a controlled cross, introduced to commerce in 1892. [ 14 ] [ 15 ] In the early 20th century, plant breeders realized that Gregor Mendel 's findings on the non-random nature of inheritance could be applied to seedling populations produced through deliberate pollinations to predict the frequencies of different types. Wheat hybrids were bred to increase the crop production of Italy during the so-called " Battle for Grain " (1925–1940). Heterosis was explained by George Harrison Shull . It describes the tendency of the progeny of a specific cross to outperform both parents. The recognition of the usefulness of heterosis for plant breeding has led to the development of inbred lines that reveal a heterotic yield advantage when they are crossed. Maize was the first species in which heterosis was widely used to produce hybrids. Statistical methods were also developed to analyze gene action and distinguish heritable variation from variation caused by environment. In 1933 another important breeding technique, cytoplasmic male sterility (CMS), developed in maize, was described by Marcus Morton Rhoades . CMS is a maternally inherited trait that makes the plant produce sterile pollen . This enables the production of hybrids without the need for labor-intensive detasseling . These early breeding techniques resulted in large yield increases in the United States in the early 20th century. Similar yield increases were not produced elsewhere until after World War II , when the Green Revolution increased crop production in the developing world in the 1960s.
Following World War II a number of techniques were developed that allowed plant breeders to hybridize distantly related species and artificially induce genetic diversity. When distantly related species are crossed, plant breeders make use of a number of plant tissue culture techniques to produce progeny from otherwise fruitless matings. Interspecific and intergeneric hybrids are produced from a cross of related species or genera that do not normally sexually reproduce with each other. These crosses are referred to as wide crosses . For example, the cereal triticale is a wheat and rye hybrid. The cells in the plants derived from the first generation created from the cross contained an uneven number of chromosomes and as a result were sterile. The cell division inhibitor colchicine was used to double the number of chromosomes in the cell and thus allow the production of a fertile line. Failure to produce a hybrid may be due to pre- or post- fertilization incompatibility. If fertilization is possible between two species or genera, the hybrid embryo may abort before maturation. If this does occur, the embryo resulting from an interspecific or intergeneric cross can sometimes be rescued and cultured to produce a whole plant. Such a method is referred to as embryo rescue . This technique has been used to produce New Rice for Africa , an interspecific cross of Asian rice Oryza sativa and African rice O. glaberrima . Hybrids may also be produced by a technique called protoplast fusion. In this case protoplasts are fused, usually in an electric field. Viable recombinants can be regenerated in culture. Chemical mutagens like ethyl methanesulfonate (EMS) and dimethyl sulfate (DMS), radiation , and transposons are used for mutagenesis . Mutagenesis is the generation of mutants. The breeder hopes for desirable traits to be bred with other cultivars – a process known as mutation breeding . Classical plant breeders also generate genetic diversity within a species by exploiting a process called somaclonal variation , which occurs in plants produced from tissue culture, particularly plants derived from callus . Induced polyploidy , and the addition or removal of chromosomes using a technique called chromosome engineering , may also be used. When a desirable trait has been bred into a species, a number of crosses to the favored parent are made to make the new plant as similar to the favored parent as possible. Returning to the example of the mildew-resistant pea being crossed with a high-yielding but susceptible pea: to make the mildew-resistant progeny of the cross most like the high-yielding parent, the progeny will be crossed back to that parent for several generations (see backcrossing ). This process removes most of the genetic contribution of the mildew-resistant parent. Classical breeding is therefore a cyclical process. With classical breeding techniques, the breeder does not know exactly what genes have been introduced to the new cultivars. Some scientists therefore argue that plants produced by classical breeding methods should undergo the same safety testing regime as genetically modified plants. There have been instances where plants bred using classical techniques have been unsuitable for human consumption; for example, the poison solanine was unintentionally increased to unacceptable levels in certain varieties of potato through plant breeding. New potato varieties are often screened for solanine levels before reaching the marketplace.
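The effect of repeated backcrossing described above can be quantified with a standard expectation: ignoring selection and linkage drag, each backcross halves the donor parent's remaining share of the genome, so after n backcrosses the recurrent (high-yielding) parent is expected to contribute 1 − (1/2)^(n+1) of the genome. The short Python sketch below simply tabulates that textbook formula; it is an idealized illustration, not a description of any particular breeding programme.

def recurrent_parent_fraction(n_backcrosses: int) -> float:
    """Expected share of the recurrent parent's genome after the initial cross
    followed by n backcrosses, assuming no selection and independent loci."""
    return 1 - 0.5 ** (n_backcrosses + 1)

for n in range(0, 7):
    print(f"after {n} backcross(es): {recurrent_parent_fraction(n):.1%}")
# n = 0 (the F1 itself) gives 50%; six backcrosses already exceed 99%.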
Even with the very latest in biotech -assisted conventional breeding, incorporation of a trait takes an average of seven generations for clonally propagated crops, nine for self-fertilising crops, and seventeen for cross-pollinating crops. [ 16 ] [ 17 ] Modern plant breeding may use techniques of molecular biology to select, or in the case of genetic modification, to insert, desirable traits into plants. Application of biotechnology or molecular biology is also known as molecular breeding . Sometimes many different genes can influence a desirable trait in plant breeding. The use of tools such as molecular markers or DNA fingerprinting can map thousands of genes. This allows plant breeders to screen large populations of plants for those that possess the trait of interest. The screening is based on the presence or absence of a certain gene as determined by laboratory procedures, rather than on the visual identification of the expressed trait in the plant. The purpose of marker-assisted selection, or plant genome analysis, is to identify the location and function ( phenotype ) of various genes within the genome. If all of the genes are identified, the result is a genome sequence . Plant genomes vary in size and in the genes they contain, but many genes are shared between species. If a gene's location and function is identified in one plant species, a very similar gene can likely also be found in a similar location in the genome of a related species. [ 18 ] Homozygous plants with desirable traits can be produced from heterozygous starting plants, if a haploid cell with the alleles for those traits can be produced, and then used to make a doubled haploid . The doubled haploid will be homozygous for the desired traits. Furthermore, two different homozygous plants created in that way can be used to produce a generation of F1 hybrid plants which have the advantages of heterozygosity and a greater range of possible traits. Thus, an individual heterozygous plant chosen for its desirable characteristics can be converted into a heterozygous variety (F1 hybrid) without the necessity of vegetative reproduction, but as the result of the cross of two homozygous/doubled haploid lines derived from the originally selected plant. [ 19 ] This shortcut has been dubbed 'reverse breeding'. [ 20 ] Plant tissue culturing can produce haploid or doubled haploid plant lines and generations. This narrows the genetic diversity drawn from that plant species in order to select for desirable traits that will increase the fitness of the individuals. Using this method decreases the need for breeding multiple generations of plants to get a generation that is homogeneous for the desired traits, thereby saving much time over the natural version of the same process. There are many plant tissue culturing techniques that can be used to achieve haploid plants, but microspore culturing is currently the most promising for producing the largest numbers of them. [ 18 ]
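As a toy illustration of the marker-based screening just described, the Python sketch below keeps only those seedlings that carry a favourable allele at a marker linked to the trait of interest, i.e. selection on presence or absence of the marker rather than on the visible phenotype. The marker name ("M_mildew_R"), the plant identifiers and the genotypes are hypothetical example data, not taken from any real programme.

# Hypothetical genotyping results: each plant is scored at one marker locus.
population = {
    "plant_01": {"M_mildew_R": "AA"},   # homozygous for the resistance-linked allele
    "plant_02": {"M_mildew_R": "Aa"},   # heterozygous carrier
    "plant_03": {"M_mildew_R": "aa"},   # non-carrier
    "plant_04": {"M_mildew_R": "Aa"},
}

def carries_allele(genotypes: dict, marker: str, allele: str = "A") -> bool:
    """Return True if the plant carries at least one copy of the favourable allele."""
    return allele in genotypes.get(marker, "")

selected = [name for name, g in population.items() if carries_allele(g, "M_mildew_R")]
print(selected)   # ['plant_01', 'plant_02', 'plant_04'] advance to the next cycle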
Sometimes genetic modification can produce a plant with the desired trait or traits faster than classical breeding because the majority of the plant's genome is not altered. To genetically modify a plant, a genetic construct must be designed so that the gene to be added or removed will be expressed by the plant. To do this, the gene or genes of interest must be introduced into the plant, together with a promoter to drive transcription and a termination sequence to stop transcription of the new gene. A marker for the selection of transformed plants is also included. In the laboratory , antibiotic resistance is a commonly used marker: plants that have been successfully transformed will grow on media containing antibiotics; plants that have not been transformed will die. In some instances markers for selection are removed by backcrossing with the parent plant prior to commercial release. The construct can be inserted in the plant genome by genetic recombination using the bacteria Agrobacterium tumefaciens or A. rhizogenes , or by direct methods like the gene gun or microinjection . Using plant viruses to insert genetic constructs into plants is also a possibility, but the technique is limited by the host range of the virus. For example, Cauliflower mosaic virus (CaMV) only infects cauliflower and related species. Another limitation of viral vectors is that the virus is not usually passed on to the progeny, so every plant has to be inoculated. The majority of commercially released transgenic plants are currently limited to plants that have introduced resistance to insect pests and herbicides . Insect resistance is achieved through incorporation of a gene from Bacillus thuringiensis (Bt) that encodes a protein that is toxic to some insects. For example, when the cotton bollworm , a common cotton pest, feeds on Bt cotton , it ingests the toxin and dies. Herbicides usually work by binding to certain plant enzymes and inhibiting their action. [ 21 ] The enzymes that the herbicide inhibits are known as the herbicide's " target site ". Herbicide resistance can be engineered into crops by expressing a version of the target site protein that is not inhibited by the herbicide. This is the method used to produce glyphosate -resistant (" Roundup Ready ") crop plants. Genetic modification can further increase yields by increasing stress tolerance to a given environment. Stresses such as temperature variation are signalled to the plant via a cascade of signalling molecules which activate a transcription factor to regulate gene expression . Overexpression of particular genes involved in cold acclimation has been shown to produce more resistance to freezing, which is one common cause of yield loss. [ 22 ] Genetic modification of plants that can produce pharmaceuticals (and industrial chemicals), sometimes called pharming , is a rather radical new area of plant breeding. [ 23 ] The debate surrounding genetically modified food during the 1990s peaked in 1999 in terms of media coverage and risk perception, [ 24 ] and continues today – for example, " Germany has thrown its weight behind a growing European mutiny over genetically modified crops by banning the planting of a widely grown pest-resistant corn variety. " [ 25 ] The debate encompasses the ecological impact of genetically modified plants , the safety of genetically modified food and concepts used for safety evaluation like substantial equivalence . Such concerns are not new to plant breeding.
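The components listed above (promoter, gene of interest, terminator, selectable marker) can be pictured as a simple data structure. The Python sketch below is illustrative only: it shows the parts an expression cassette is described as needing, using common textbook element names (CaMV 35S, cry1Ac, NOS terminator, nptII) purely as placeholders, and is in no way a laboratory protocol.

from dataclasses import dataclass

@dataclass
class Cassette:
    promoter: str            # drives transcription of the gene of interest
    gene_of_interest: str
    terminator: str          # stops transcription
    selectable_marker: str   # e.g. an antibiotic-resistance gene used to pick transformants

    def is_complete(self) -> bool:
        """A construct needs every element before it is worth transforming into plants."""
        return all([self.promoter, self.gene_of_interest,
                    self.terminator, self.selectable_marker])

construct = Cassette("CaMV 35S", "cry1Ac", "NOS terminator", "nptII")
print(construct.is_complete())   # True – all four parts are present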
Most countries have regulatory processes in place to help ensure that new crop varieties entering the marketplace are both safe and meet farmers' needs. Examples include variety registration, seed schemes, regulatory authorizations for GM plants, etc. Industrial breeding of plants has unintentionally altered how agricultural cultivars associate with their microbiome. [ 26 ] In maize, for example, breeding has altered the nitrogen-cycling taxa recruited to the rhizosphere, with more modern lines recruiting fewer nitrogen-fixing taxa and more nitrifiers and denitrifiers . [ 27 ] Microbiomes of breeding lines showed that hybrid plants share much of their bacterial community with their parents, such as Cucurbita seeds and apple shoot endophytes. [ 28 ] [ 29 ] [ 30 ] In addition, the proportional contribution of the microbiome from parents to offspring corresponds to the amount of genetic material contributed by each parent during breeding and domestication. [ 30 ] As of 2020, machine learning – and especially deep machine learning – has become more commonly used in phenotyping . Computer vision using ML has made great strides and is now being applied to leaf phenotyping and other phenotyping jobs typically performed by human eyes. Pound et al. 2017 and Singh et al. 2016 are especially salient examples of early successful application and demonstration of the general usability of the process across multiple target plant species. These methods will work even better with large, publicly available open data sets . [ 31 ] Speed breeding was introduced by Watson et al. 2018. Classical (human-performed) phenotyping during speed breeding is also possible, using a procedure developed by Richard et al. 2015. As of 2020, it is highly anticipated that speed breeding and automated phenotyping will, combined, produce greatly improved outcomes – see § Phenotyping and artificial intelligence above. [ 31 ] Next-generation sequencing (NGS) platforms have substantially reduced the time and cost required for sequencing and facilitated SNP discovery in model and non-model plants. This in turn has led to the employment of large-scale SNP markers in genomic selection approaches, which aim at predicting the genomic breeding values (GEBVs) of genotypes in a given population. This method can increase the selection accuracy and decrease the time of each breeding cycle. It has been used in different crops such as maize, wheat, etc. [ 32 ] [ 33 ] Participatory plant breeding (PPB) is when farmers are involved in a crop improvement programme with opportunities to make decisions and contribute to the research process at different stages. [ 34 ] [ 35 ] [ 36 ] Participatory approaches to crop improvement can also be applied when plant biotechnologies are being used for crop improvement. [ 37 ] Local agricultural systems and genetic diversity are strengthened by participatory programs, and outcomes are enhanced by farmers' knowledge of the quality required and evaluation of the target environment. [ 38 ] A 2019 review of participatory plant breeding indicated that it had not gained widespread acceptance despite its record of successfully developing varieties with improved diversity and nutritional quality, as well as a greater likelihood of these improved varieties being adopted by farmers. This review also found participatory plant breeding to have a better cost/benefit ratio than non-participatory approaches, and suggested incorporating participatory plant breeding with evolutionary plant breeding.
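Returning to the genomic selection approach mentioned above, its core idea – fit an effect for every SNP marker on a phenotyped training population, then predict breeding values for unphenotyped candidates – can be sketched with ordinary ridge regression, which is similar in spirit to rrBLUP-style models. The Python example below uses only NumPy and entirely synthetic marker and phenotype data; the sizes, shrinkage value and results are invented for illustration and do not represent any real crop or dataset.

import numpy as np

rng = np.random.default_rng(0)
n_train, n_candidates, n_snps = 200, 50, 1000

# Synthetic training data: 0/1/2 allele counts and phenotypes with additive SNP effects.
X_train = rng.integers(0, 3, size=(n_train, n_snps)).astype(float)
true_effects = rng.normal(0, 0.05, size=n_snps)
y_train = X_train @ true_effects + rng.normal(0, 1.0, size=n_train)

lam = 100.0                                                      # ridge shrinkage parameter
XtX = X_train.T @ X_train + lam * np.eye(n_snps)
beta_hat = np.linalg.solve(XtX, X_train.T @ y_train)             # estimated SNP effects

# Predict GEBVs for new, unphenotyped candidate genotypes and rank them.
X_candidates = rng.integers(0, 3, size=(n_candidates, n_snps)).astype(float)
gebv = X_candidates @ beta_hat
best = np.argsort(gebv)[::-1][:5]
print("Top candidates by predicted breeding value:", best)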
[ 39 ] Evolutionary plant breeding describes practices which use mass populations with diverse genotypes grown under competitive natural selection. Survival in common crop cultivation environments is the predominant method of selection, rather than direct selection by growers and breeders. Individual plants that are favored under prevailing growing conditions, such as environment and inputs, contribute more seed to the next generation than less-adapted individuals. [ 40 ] Evolutionary plant breeding has been successfully used by the Nepal National Gene Bank to preserve landrace diversity within Jumli Marshi rice while reducing its susceptibility to blast disease. These practices have also been used in Nepal with bean landraces. [ 41 ] In 1929, Harlan and Martini proposed a method of plant breeding with heterogeneous populations by pooling an equal number of F2 seeds obtained from 378 crosses among 28 geographically diverse barley cultivars. In 1938, Harlan and Martini demonstrated evolution by natural selection in mixed dynamic populations, as a few varieties that became dominant in some locations almost disappeared in others; poorly adapted varieties disappeared everywhere. [ 42 ] Evolutionary breeding populations have been used to establish self-regulating plant–pathogen systems. Examples include barley, where breeders were able to improve resistance to Rynchosporium secalis scald over 45 generations. [ 43 ] An evolutionary breeding project grew F5 hybrid bulk soybean populations on soil infested by the soybean cyst nematode and was able to increase the proportion of resistant plants from 5% to 40%. At the International Center for Agricultural Research in the Dry Areas (ICARDA), evolutionary plant breeding is combined with participatory plant breeding in order to allow farmers to choose which varieties suit their needs in their local environment. [ 43 ] An influential 1956 effort by Coit A. Suneson to codify this approach coined the term evolutionary plant breeding and concluded that 15 generations of natural selection are desirable to produce results that are competitive with conventional breeding. [ 44 ] Evolutionary breeding allows working with much larger plant population sizes than conventional breeding. [ 42 ] It has also been used in tandem with conventional practices in order to develop both heterogeneous and homogeneous crop lines for low-input agricultural systems that have unpredictable stress conditions. [ 45 ] Evolutionary plant breeding has been delineated into four stages. [ 40 ] Issues facing plant breeding in the future include the lack of arable land, increasingly harsh cropping conditions and the need to maintain food security, which involves being able to provide the world population with sufficient nutrition. Crops need to be able to mature in multiple environments to allow worldwide access, which involves solving problems including drought tolerance. It has been suggested that global solutions are achievable through the process of plant breeding, with its ability to select specific genes allowing crops to perform at a level which yields the desired results. [ 46 ] One issue facing agriculture is the loss of landraces and other local varieties which have diversity that may contain useful genes for climate adaptation in the future. [ 43 ] Conventional breeding intentionally limits phenotypic plasticity within genotypes and limits variability between genotypes. [ 45 ] Uniformity does not allow crops to adapt to climate change and other biotic stresses and abiotic stresses .
[ 43 ] Plant breeders' rights are an important and controversial issue. Production of new varieties is dominated by commercial plant breeders, who seek to protect their work and collect royalties through national and international agreements based in intellectual property rights . The range of related issues is complex. In the simplest terms, critics of the increasingly restrictive regulations argue that, through a combination of technical and economic pressures, commercial breeders are reducing biodiversity and significantly constraining individuals (such as farmers) from developing and trading seed on a regional level. [ 47 ] Efforts to strengthen breeders' rights, for example by lengthening periods of variety protection, are ongoing. Intellectual property legislation for plants often uses definitions that typically include genetic uniformity and unchanging appearance over generations. These legal definitions of stability contrast with traditional agronomic usage, which considers stability in terms of how consistent the yield or quality of a crop remains across locations and over time. [ 40 ] As of 2020, regulations in Nepal only allow uniform varieties to be registered or released. Evolutionary plant populations and many landraces are polymorphic and do not meet these standards. [ 41 ] Uniform and genetically stable cultivars can be inadequate for dealing with environmental fluctuations and novel stress factors. [ 40 ] Plant breeders have focused on identifying crops which will perform under these conditions; one way to achieve this is to find strains of the crop that are resistant to drought and to conditions of low nitrogen. It is evident from this that plant breeding is vital for future agriculture to survive, as it enables farmers to produce stress-resistant crops, hence improving food security. [ 48 ] In countries that experience harsh winters, such as Iceland , Germany and countries further east in Europe, plant breeders are involved in breeding for tolerance to frost, continuous snow-cover, frost-drought (desiccation from wind and solar radiation under frost) and high moisture levels in soil in winter. [ 49 ] Breeding is not a quick process, which is especially important when breeding to ameliorate a disease. The average time from human recognition of a new fungal disease threat to the release of a resistant crop for that pathogen is at least twelve years. [ 17 ] [ 50 ] When new plant breeds or cultivars are bred, they must be maintained and propagated. Some plants are propagated by asexual means while others are propagated by seeds. Seed-propagated cultivars require specific control over seed source and production procedures to maintain the integrity of the results of plant breeding. Isolation is necessary to prevent cross-contamination with related plants or the mixing of seeds after harvesting. Isolation is normally accomplished by planting distance, but in certain crops, plants are enclosed in greenhouses or cages (most commonly used when producing F1 hybrids). Modern plant breeding, whether classical or through genetic engineering, comes with issues of concern, particularly with regard to food crops. The question of whether breeding can have a negative effect on nutritional value is central in this respect. Although relatively little direct research in this area has been done, there are scientific indications that, by favoring certain aspects of a plant's development, other aspects may be retarded.
A study published in the Journal of the American College of Nutrition in 2004, entitled Changes in USDA Food Composition Data for 43 Garden Crops, 1950 to 1999 , compared nutritional analyses of vegetables done in 1950 and in 1999, and found substantial decreases in six of 13 nutrients measured, including declines of 6% in protein and 38% in riboflavin . Reductions in calcium , phosphorus , iron and ascorbic acid were also found. The study, conducted at the Biochemical Institute, University of Texas at Austin , concluded in summary: "We suggest that any real declines are generally most easily explained by changes in cultivated varieties between 1950 and 1999, in which there may be trade-offs between yield and nutrient content." [ 51 ] Plant breeding can contribute to global food security as it is a cost-effective tool for increasing the nutritional value of forage and crops. Improvements in the nutritional value of forage crops from the use of analytical chemistry and rumen fermentation technology have been recorded since 1960; this science and technology gave breeders the ability to screen thousands of samples within a small amount of time, meaning breeders could identify a high-performing hybrid more quickly. The genetic improvement was mainly in in vitro dry matter digestibility (IVDMD), which increased by 0.7–2.5%; a 1% increase in IVDMD has been associated with a 3.2% increase in daily gains in beef cattle ( Bos taurus ). This improvement indicates plant breeding is an essential tool in gearing future agriculture to perform at a more advanced level. [ 52 ] With an increasing population, the production of food needs to increase with it. It is estimated that a 70% increase in food production is needed by 2050 in order to meet the Declaration of the World Summit on Food Security. But with the degradation of agricultural land, simply planting more crops is no longer a viable option. New varieties of plants can in some cases be developed through plant breeding that generate an increase of yield without relying on an increase in land area. An example of this can be seen in Asia, where food production per capita has increased twofold. This has been achieved through not only the use of fertilisers, but through the use of better crops that have been specifically designed for the area. [ 53 ] [ 54 ] Some critics of organic agriculture claim it is too low-yielding to be a viable alternative to conventional agriculture, yet that poor performance may be in part the result of growing poorly adapted varieties. [ 55 ] [ 56 ] It is estimated that over 95% of organic agriculture is based on conventionally adapted varieties, even though the production environments found in organic vs. conventional farming systems are vastly different due to their distinctive management practices. [ 56 ] Most notably, organic farmers have fewer inputs available than conventional growers to control their production environments. Breeding varieties specifically adapted to the unique conditions of organic agriculture is critical for this sector to realize its full potential. This requires selection for traits suited to organic systems. [ 56 ] Currently, few breeding programs are directed at organic agriculture, and until recently those that did address this sector have generally relied on indirect selection (i.e. selection in conventional environments for traits considered important for organic agriculture).
However, because the difference between organic and conventional environments is large, a given genotype may perform very differently in each environment due to an interaction between genes and the environment (see gene–environment interaction ). If this interaction is severe enough, an important trait required for the organic environment may not be revealed in the conventional environment, which can result in the selection of poorly adapted individuals. [ 55 ] To ensure the most adapted varieties are identified, advocates of organic breeding now promote the use of direct selection (i.e. selection in the target environment) for many agronomic traits. There are many classical and modern breeding techniques that can be utilized for crop improvement in organic agriculture despite the ban on genetically modified organisms . For instance, controlled crosses between individuals allow desirable genetic variation to be recombined and transferred to seed progeny via natural processes. Marker-assisted selection can also be employed as a diagnostic tool to facilitate selection of progeny that possess the desired trait(s), greatly speeding up the breeding process. [ 57 ] This technique has proven particularly useful for the introgression of resistance genes into new backgrounds, as well as the efficient selection of many resistance genes pyramided into a single individual. Molecular markers are not currently available for many important traits, especially complex ones controlled by many genes.
https://en.wikipedia.org/wiki/Plant_breeding
Plants are exposed to many stress factors such as disease, temperature changes, herbivory and injury. [ 1 ] Therefore, in order to respond to or prepare for these conditions, they need to develop systems that support their survival in the moment and/or in the future. Plant communication encompasses communication using volatile organic compounds, electrical signaling, and common mycorrhizal networks between plants and a host of other organisms such as soil microbes , [ 2 ] other plants [ 3 ] (of the same or other species), animals, [ 4 ] insects, [ 5 ] and fungi. [ 6 ] Plants communicate through a host of volatile organic compounds (VOCs) that can be separated into four broad categories, each the product of distinct chemical pathways: fatty acid derivatives, phenylpropanoids / benzenoids , amino acid derivatives, and terpenoids . [ 7 ] Due to physical and chemical constraints, most VOCs are of low molecular mass (< 300 Da), are hydrophobic , and have high vapor pressures. [ 8 ] The responses of organisms to plant-emitted VOCs vary from attracting the predator of a specific herbivore, thereby reducing mechanical damage inflicted on the plant, [ 5 ] to the induction of chemical defenses in a neighboring plant before it is attacked. [ 9 ] In addition, the set of VOCs emitted varies from plant to plant; for example, the Venus flytrap can emit VOCs that specifically target and attract starved prey. [ 10 ] While these VOCs typically lead to increased resistance to herbivory in neighboring plants, there is no clear benefit to the emitting plant in helping nearby plants. As such, whether neighboring plants have evolved the capability to "eavesdrop" or whether there is an unknown tradeoff occurring is subject to much scientific debate. [ 11 ] As related to the aspect of meaning-making, the field is also identified as phytosemiotics . [ 12 ] In Runyon et al. 2006, the researchers demonstrate how the parasitic plant Cuscuta pentagona (field dodder) uses VOCs to interact with various hosts and determine their locations. Dodder seedlings show directed growth toward tomato plants ( Lycopersicon esculentum ) and, specifically, toward tomato plant volatile organic compounds. This was tested by growing a dodder seedling in a contained environment connected to two different chambers. One chamber contained tomato VOCs while the other held artificial tomato plants. After 4 days of growth, the dodder seedling showed significant growth in the direction of the chamber with tomato VOCs. Their experiments also showed that the dodder seedlings could distinguish between wheat ( Triticum aestivum ) VOCs and tomato plant volatiles: when the two chambers were each filled with one of the two different VOC blends, dodder seedlings grew towards the tomato volatiles, as one of the wheat VOCs is repellent. These findings show evidence that volatile organic compounds determine ecological interactions between plant species and provide statistically significant evidence that the dodder weed can distinguish between different plant species by sensing their VOCs. [ 13 ] Tomato plant-to-plant communication is further examined in Zebelo et al. 2012, which studies tomato plant response to herbivory. Upon herbivory by Spodoptera littoralis , tomato plants emit VOCs that are released into the atmosphere and induce responses in neighboring tomato plants. When the herbivory-induced VOCs bind to receptors on other nearby tomato plants, responses occur within seconds.
The neighboring plants experience a rapid depolarization in cell potential and an increase in cytosolic calcium. Plant receptors are most commonly found on plasma membranes as well as within the cytosol, endoplasmic reticulum, nucleus, and other cellular compartments. VOCs that bind to plant receptors often induce signal amplification by the action of secondary messengers, including the calcium influx seen in response to neighboring herbivory. These emitted volatiles were measured by GC-MS, and the most notable were 2-hexenal and 3-hexenyl acetate. It was found that depolarization increased with increasing green leaf volatile concentrations. These results indicate that tomato plants communicate with one another via airborne volatile cues, and when these VOCs are perceived by receptor plants, responses such as depolarization and calcium influx occur within seconds. [ 14 ] Terpenoids facilitate communication between plants and insects, mammals, fungi, microorganisms, and other plants. [ 16 ] Terpenoids may act as both attractants and repellents for various insects. For example, pine shoot beetles ( Tomicus piniperda ) are attracted to certain monoterpenes ( (±)-α-pinene , (+)- 3-carene and terpinolene) produced by Scots pines ( Pinus sylvestris ), while being repelled by others (such as verbenone ). [ 17 ] Terpenoids are a large family of biological molecules with over 22,000 compounds. [ 18 ] Terpenoids are similar to terpenes in their carbon skeleton but, unlike terpenes, contain functional groups . The structure of terpenoids is described by the biogenetic isoprene rule, which states that terpenoids can be thought of as being made of isoprenoid subunits, arranged either regularly or irregularly. [ 19 ] The biosynthesis of terpenoids occurs via the methylerythritol phosphate (MEP) and mevalonic acid (MVA) pathways, [ 7 ] both of which include isopentenyl diphosphate (IPP) and dimethylallyl diphosphate (DMAPP) as key components. [ 20 ] The MEP pathway produces hemiterpenes , monoterpenes , diterpenes , and volatile carotenoid derivatives, while the MVA pathway produces sesquiterpenes . [ 7 ] Many researchers have shown that plants have the ability to use electrical signaling to communicate from leaves to stem to roots. Starting in the late 1800s, scientists such as Charles Darwin examined ferns and Venus flytraps because they showed excitation patterns similar to animal nerves. [ 21 ] However, the mechanisms behind this electrical signaling are not well known and are a current topic of ongoing research. [ 22 ] A plant may produce electrical signaling in response to wounding, temperature extremes, high salt conditions, drought conditions, and various other stimuli. [ 22 ] [ 23 ] There are two types of electrical signals that a plant uses. The first is the action potential and the second is the variation potential . Similar to action potentials in animals, action potentials in plants are characterized as "all or nothing." [ 24 ] The understood mechanism by which plant action potentials are initiated is as follows: [ 25 ] [ 26 ] [ 24 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] Plant resting membrane potentials range from -80 to -200 mV. [ 26 ] [ 25 ] High H+-ATPase activity corresponds with hyperpolarization (up to -200 mV), making it harder to depolarize and fire an action potential. [ 25 ] [ 24 ] [ 27 ] [ 31 ] This is why it is essential for calcium ions to inactivate H+-ATPase activity so that depolarization can be reached.
[ 24 ] [ 27 ] When the voltage-gated chloride channels are activated and full depolarization occurs, calcium ions are afterwards pumped out of the cell (via a calcium-ATPase) so that H+-ATPase activity resumes and the cell can repolarize. [ 24 ] [ 27 ] Calcium's interaction with the H+-ATPase is through a kinase . [ 27 ] Therefore, calcium's influx causes the activation of a kinase that phosphorylates and deactivates the H+-ATPase so that the cell can depolarize. [ 27 ] It is unclear whether all of the heightened intracellular calcium ion concentration is solely due to calcium channel activation. It is possible that the transitory activation of calcium channels causes an influx of calcium ions into the cell, which triggers the release of intracellular calcium stores and subsequently causes depolarization (through the inactivation of H+-ATPase and activation of voltage-gated chloride channels). [ 27 ] [ 28 ] [ 29 ] [ 30 ] Variation potentials have proven hard to study, and their mechanism is less well known than that of action potentials. [ 32 ] Variation potentials are slower than action potentials, are not considered "all or nothing," and can themselves trigger several action potentials. [ 26 ] [ 32 ] [ 31 ] [ 33 ] The current understanding is that upon wounding or other stressful events, a plant's turgor pressure changes, which releases a hydraulic wave throughout the plant that is transmitted through the xylem . [ 26 ] [ 34 ] This hydraulic wave may activate pressure-gated channels due to the sudden change in pressure. [ 35 ] Their ionic mechanism is very different from that of action potentials and is thought to involve the inactivation of the P-type H+-ATPase . [ 26 ] [ 36 ] Long-distance electrical signaling in plants is characterized by electrical signaling that occurs over distances greater than the span of a single cell. [ 37 ] In 1873, Sir John Burdon-Sanderson described action potentials and their long-distance propagation throughout plants. [ 33 ] Action potentials in plants are carried through a plant's vascular network (particularly the phloem ), [ 38 ] a network of tissues that connects all of the various plant organs, transporting signaling molecules throughout the plant. [ 37 ] Increasing the frequency of action potentials causes the phloem to become increasingly cross-linked . [ 39 ] In the phloem, the propagation of action potentials is dictated by the fluxes of chloride, potassium, and calcium ions, but the exact mechanism for propagation is not well understood. [ 40 ] Alternatively, the transport of action potentials over short, local distances is distributed throughout the plant via plasmodesmatal connections between cells. [ 38 ] When a plant responds to stimuli, sometimes the response time is nearly instantaneous, which is much faster than chemical signals can travel. Current research suggests that electrical signaling may be responsible. [ 41 ] [ 42 ] [ 43 ] [ 44 ] In particular, the response of a plant to a wound is triphasic. [ 42 ] Phase 1 is an immediate, large increase in expression of target genes. [ 42 ] Phase 2 is a period of dormancy. [ 42 ] Phase 3 is a weakened and delayed upregulation of the same target genes as phase 1. [ 42 ] In phase 1, the speed of upregulation is nearly instantaneous, which has led researchers to theorize that the initial response from a plant occurs through action potentials and variation potentials, as opposed to chemical or hormonal signaling, which is most likely responsible for the phase 3 response.
[ 42 ] [ 43 ] [ 44 ] Upon stressful events, there is variation in a plant's response. That is to say, a plant does not always respond with an action potential or variation potential . [ 42 ] However, when a plant does generate either an action potential or variation potential, one of the direct effects can be an upregulation of a certain gene's expression. [ 43 ] In particular, protease inhibitors and calmodulin exhibit rapid upregulation of gene expression. [ 43 ] Additionally, ethylene shows rapid upregulation in the fruit of a plant, as does jasmonate in leaves neighboring a wound. [ 45 ] [ 46 ] Aside from gene expression, action potentials and variation potentials can also result in stomatal and leaf movement. [ 47 ] [ 48 ] In summary, electrical signaling in plants is a powerful tool of communication and controls a plant's response to dangerous stimuli (like herbivory ), helping to maintain homeostasis . Pisum sativum (garden pea) plants communicate stress cues via their roots, allowing neighboring unstressed plants to anticipate an abiotic stressor. Pea plants are commonly grown in temperate regions throughout the world. [ 49 ] This root-mediated signaling allows plants to anticipate abiotic stresses such as drought. In 2011, Falik et al. tested the ability of unstressed pea plants to sense and respond to stress cues by inducing osmotic stress on a neighboring plant. [ 50 ] Falik et al. subjected the roots of one plant to mannitol in order to inflict osmotic stress and drought-like conditions. Five unstressed plants neighbored the stressed plant on each side. On one side, the unstressed plants shared their root system with their neighbors to allow for root communication. On the other side, the unstressed plants did not share root systems with their neighbors. [ 50 ] Falik et al. found that unstressed plants demonstrated the ability to sense and respond to stress cues emitted from the roots of the osmotically stressed plant. Furthermore, the unstressed plants were able to send additional stress cues to other neighboring unstressed plants in order to relay the signal. A cascade effect of stomatal closure was observed in neighboring unstressed plants that shared their rooting system but was not observed in the unstressed plants that did not share their rooting system. [ 50 ] Therefore, neighboring plants demonstrate the ability to sense, integrate, and respond to stress cues transmitted through roots. Although Falik et al. did not identify the chemical responsible for conveying the stress cues, research conducted in 2016 by Delory et al. suggests several possibilities. They found that plant roots synthesize and release a wide array of organic compounds, including solutes and volatiles (e.g. terpenes). [ 51 ] They cited additional research demonstrating that root-emitted molecules have the potential to induce physiological responses in neighboring plants either directly or indirectly by modifying the soil chemistry. [ 51 ] Moreover, Kegge et al. demonstrated that plants perceive the presence of neighbors through changes in water/nutrient availability, root exudates, and soil microorganisms. [ 52 ] Although the underlying mechanism behind stress cues emitted by roots remains largely unknown, Falik et al. suggested that the plant hormone abscisic acid (ABA) may be responsible for integrating the observed phenotypic response (stomatal closure).
[ 50 ] Further research is needed to identify a well-defined mechanism and the potential adaptive implications of priming neighbors in preparation for forthcoming abiotic stresses; however, a literature review by Robbins et al. published in 2014 characterized the root endodermis as a signaling control center in response to abiotic environmental stresses, including drought. [ 53 ] They found that the plant hormone ABA regulates the root endodermal response under certain environmental conditions. In 2016, Rowe et al. experimentally validated this claim by showing that ABA regulated root growth under osmotic stress conditions. [ 54 ] Additionally, changes in cytosolic calcium concentrations act as signals to close stomata in response to drought stress cues. Therefore, fluxes of solutes, volatiles, hormones, and ions are likely involved in the integration of the response to stress cues emitted by roots. Another form of plant communication occurs through root networks. [ 55 ] Through roots, plants can share many different resources, including carbon, nitrogen, and other nutrients. This transfer of below-ground carbon is examined in Philip et al. 2011. The goals of this paper were to test whether carbon transfer was bi-directional, whether one species had a net gain in carbon, and whether more carbon was transferred through the soil pathway or the common mycorrhizal network (CMN). CMNs occur when fungal mycelia link the roots of plants together. [ 56 ] The researchers followed seedlings of paper birch and Douglas-fir in a greenhouse for 8 months, where hyphal linkages that crossed their roots were either severed or left intact. The experiment measured amounts of labeled carbon exchanged between seedlings. It was discovered that there was indeed a bi-directional sharing of carbon between the two tree species, with the Douglas-fir receiving a slight net gain in carbon. The carbon was also transferred through both the soil and CMN pathways, as transfer occurred when the CMN linkages were interrupted, but much more transfer occurred when the CMNs were left unbroken. This experiment showed that, through fungal mycelial linkage of the roots of two plants, plants are able to communicate with one another and transfer nutrients as well as other resources through below-ground root networks. [ 56 ] Further studies go on to argue that this underground "tree talk" is crucial in the adaptation of forest ecosystems. Studies of plant genotypes have shown that mycorrhizal fungal traits are heritable and play a role in plant behavior. These relationships with fungal networks can be mutualistic, commensal, or even parasitic. It has been shown that plants can rapidly change behavior such as root growth, shoot growth, photosynthetic rate, and defense mechanisms in response to mycorrhizal colonization. [ 57 ] Through root systems and common mycorrhizal networks, plants are able to communicate with one another below ground and alter behaviors or even share nutrients depending on different environmental cues. Studies have shown that plants can respond to airborne sounds at audible frequencies [ 58 ] and that they also produce airborne sounds in the ultrasonic range, presumably audible to multiple organisms including bats, mice, moths and other insects. [ 59 ]
https://en.wikipedia.org/wiki/Plant_communication
A plant community is a collection or association [ 1 ] [ page needed ] of plant species within a designated geographical unit, which forms a relatively uniform patch, distinguishable from neighboring patches of different vegetation types . The components of each plant community are influenced by soil type , topography , climate and human disturbance. In many cases there are several soil types present within a given plant community. [ 2 ] [ page needed ] This is because the soil type within an area is influenced by two factors, the rate at which water infiltrates or exits (via evapotranspiration ) the soil, as well as the rate at which organic matter (any carbon-based compound within the environment, such as decaying plant matter) enters or decays from the soil. [ 3 ] Plant communities are studied substantially by ecologists, due to providing information on the effects of dispersal, tolerance to environmental conditions, and response to disturbance of a variety of plant species, information valuable to the comprehension of various plant community dynamics. [ 4 ] Plant communities having a stable composition after a relatively long period free of disturbance represent the potential natural vegetation, or “climax” plant community [ 5 ] and are often called "Plant Associations." A Plant Association can be conceptual, and gives an indication of the direction of succession. The USDA Forest Service collects field data, performs spatial statistics, and maps potential plant associations to assist in planting and restoration efforts. [ 6 ] The US Bureau of Land Management also establishes plant communities using "Ecological Sites," which are roughly equivalent to plant associations. [ 7 ] A plant community can be described floristically (the species of flowers or flora the plant community contains) [ 8 ] and/or phytophysiognomically (the physical structure or appearance of the plant community). For example, a forest (a community of trees) includes the overstory , or upper tree layer of the canopy , as well as the understory , a layer consisting of trees and shrubs located beneath the canopy but above the forest floor. The understory can be further subdivided into the shrub layer , composed of vegetation and trees between a height of approximately one to five meters, the herbaceous layer , composed of vascular plants at a height of one meter or less, [ 9 ] and sometimes also the moss layer , a layer of non-vascular bryophytes typically present at ground level (approximately 0.15 meters in height or less). [ 10 ] In some cases of complex forests there is also a well-defined lower tree layer. A plant community is similar in concept to a vegetation type , with the former having more of an emphasis on the ecological association of species within it, and the latter on overall appearance by which it is readily recognized by a layperson. [ citation needed ] A plant community can be rare even if none of the major species defining it are rare. [ 1 ] : 115 This is because it is the association of species and relationship to their environment that may be rare. [ 1 ] : 115 An example is the sycamore alluvial woodland in California dominated by the California sycamore Platanus racemosa . [ 1 ] : 115 The community is rare, being localized to a small area of California and existing nowhere else, yet the California sycamore is not a rare tree in California. [ 1 ] : 115 An example is a grassland on the northern Caucasus steppes , where common grass species found are Festuca sulcata and Poa bulbosa . 
The most common species defining this grassland phytocoenosis is Carex shreberi . Other representative forbs occurring in these steppe grasslands are Artemisia austriaca and Polygonum aviculare . [ 11 ] [ page needed ] Other examples of different plant communities include the forests located on the granite peaks of the Huangshan Mountains in Eastern China. [ 12 ] The deciduous broad-leaved forest, present from a height of 1,100 metres, is populated by trees such as Pinus hwangshanesis , also known as the Huangshan pine. The Huangshan mountain also possesses an evergreen broad-leaved forest community, home to a variety of shrubs and small trees. [ 13 ] Some examples of species present in the evergreen broad-leaved forest community include Castanopsis eyrei , Eurya nitidia , Rhododendron ovatum , Pinus massoniana , as well as Loropetalum chinense . [ 14 ] An example of a three tiered plant community is in central Westland in the South Island, New Zealand. These forests are the most extensive continuous reaches of podocarp /broadleaf forests in that country. The canopy includes Prumnopitys ferruginea , rimu and mountain totara . The mid-story includes tree ferns such as Cyathea smithii and Dicksonia squarrosa , whilst the lowest tier and epiphytic associates include Asplenium polyodon , Tmesipteris tannensis , Astelia solandri and Lomaria discolor . [ 15 ]
https://en.wikipedia.org/wiki/Plant_community
The abundances of plant species are often measured by plant cover , which is the relative area covered by different plant species in a small plot. Plant cover is not biased by the size and distributions of individuals, and is an important and often measured characteristic of the composition of plant communities . [ 1 ] [ 2 ] Plant cover data may be used to classify the studied plant community into a vegetation type , to test different ecological hypotheses on plant abundance, and in gradient studies, where the effects of different environmental gradients on the abundance of specific plant species are studied. [ 3 ] The most common way to measure plant cover in herbaceous plant communities is to make a visual assessment of the relative area covered by the different species in a small plot (see quadrat ). The visually assessed cover of a plant species is then recorded as a continuous variable between 0 and 1, or divided into interval classes as an ordinal variable. [ 4 ] An alternative methodology, called the pin-point method (or point-intercept method), has also been widely employed. In a pin-point analysis, a frame with a fixed grid pattern is placed randomly above the vegetation, and a thin pin is inserted vertically through one of the grid points into the vegetation. The different species touched by the pin are recorded at each insertion. The cover of plant species k in a plot, c_k, is then assumed to be proportional to the number of "hits" by the pin and is estimated as c_k = y_k / n, where y_k is the number of pins that hit species k out of a total of n pins. Since a single pin in multi-species plant communities will often hit more than a single species, the sum of the plant cover of the different species may be larger than unity when estimated by the pin-point method. The sum of the estimated plant cover is expected to increase with the number of plant species in a plot and with increasing 3-dimensional structuring of the plants in the community. Plant cover data obtained by the pin-point method may be modelled by a generalised binomial distribution (or Pólya–Eggenberger distribution ). [ 5 ]
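To make the pin-point estimator above concrete, here is a minimal sketch in Python. The species names and pin records are hypothetical, invented only to illustrate the calculation; the per-species estimate follows c_k = y_k / n as described above, and the example also shows how the estimates can sum to more than one when single pins touch several species.

from collections import Counter

# Hypothetical pin-point records: for each pin insertion, the set of species
# touched by that pin (a single pin may touch several species, or none).
pin_records = [
    {"Festuca sulcata", "Poa bulbosa"},
    {"Festuca sulcata"},
    set(),                       # bare ground: the pin touched no plants
    {"Poa bulbosa"},
    {"Festuca sulcata", "Carex"},
]

n = len(pin_records)             # total number of pins, n

# y_k: the number of pins that hit species k at least once
hits = Counter()
for touched in pin_records:
    hits.update(touched)

# c_k = y_k / n, the pin-point estimate of cover for each species
cover = {species: y / n for species, y in hits.items()}

print(cover)                     # per-species cover estimates
print(sum(cover.values()))       # may exceed 1 in multi-layered vegetation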
https://en.wikipedia.org/wiki/Plant_cover
Plant density is the number of individual plants present per unit of ground area. It is most easily interpreted in the case of monospecific stands, where all plants belong to the same species and have germinated at the same time. However, it can also indicate the number of individual plants found at a given location. In nature, plant densities can be especially high when seeds present in a seed bank germinate after winter, or in a forest understory after a tree fall opens a gap in the canopy. Due to competition for light, nutrients and water, individual plants will not be able to take up all resources that are required for optimal growth. This indicates that plant density depends not only on the space available to grow but also on the amount of resources available. Especially in the case of light, smaller plants will take up fewer resources than bigger plants, even less than would be expected on the basis of their size differences. [ 1 ] This disproportionate resource capture is called 'asymmetric competition' and will cause some subordinate plants to die off, in a process that has been named 'self-thinning'. The remaining plants perform better as fewer plants will now compete for resources. Increasing plant density also affects the structure of the plant as well as its developmental patterns. [ 2 ] A key factor in agronomy and forestry is plant population density, which provides an experimental approach for better understanding plant-plant competition. [ 3 ] Many of the processes related to plant density can conveniently be studied in monocultures of even-aged individuals that are sown or planted at the same time. These can be referred to as 'monostands' and are often studied in the context of agricultural, horticultural or silvicultural questions. However, they are also highly relevant in ecology . [ 4 ] In general, the total above-ground biomass of a monostand increases with increasing density, up to the point where the biomass saturates. This is what has been dubbed 'constant final yield', [ 5 ] and refers to the total plant biomass per unit ground area. Seed production per ground area is not constant, but often declines with density after total biomass per ground area has reached its maximum value. [ 6 ] Experiments with herbaceous plants have been carried out with extremely high densities (up to 80,000 plants per square meter). At such high densities, these plants will start to compete soon after germination, and eventually a large number of those individuals (up to 95%) will die. In agriculture, farmers avoid these very high densities as they do not contribute to seed yield. The optimal densities vary based on desired plant size, location and a variety of environmental factors, and range from 30,000 to 90,000 plants per hectare for maize [ 7 ] to 40 to 872 plants per square meter (400,000 - 8,720,000 per hectare) for winter wheat . [ 8 ] In forestry , normal densities are less than 0.1 plants per square meter. Not only does the biomass per square meter increase with density, but so does the Leaf Area Index (LAI, leaf area per ground area). The higher the Leaf Area Index, the higher the fraction of intercepted sunlight will be, but the gain in light interception and photosynthesis will not match the increase in LAI, which is the reason that total biomass per ground area saturates at high plant densities.
Contrary to the total biomass per unit ground area, which increases with density until reaching saturation, the average biomass of individual plants in a monostand strongly declines with plant density, such that for every doubling in density individual plants will become ~30-40% smaller. [ 9 ] Plants in higher density stands invest relatively more of their biomass in stems (higher Stem Mass Fraction ), and less in leaves and roots. Apart from their weight, plants will change their phenotype in many other ways and at different integration levels: [ 9 ] Individual plants in dense stands have fewer leaves, and the leaves are often smaller and narrower. Leaves of high-density plants are thinner (higher SLA – leaf area per unit mass), especially lower in the vegetation, with a similar concentration of nitrogen per unit mass, but a lower nitrogen content per area. Average plant height or vegetation height often remains remarkably similar, but a very consistent difference is that the stems of high-density plants have a much smaller diameter. They also have fewer side shoots ( tillers ) in the case of grasses, or branches in the case of herbs and trees. Plants grown at high density also produce fewer roots per plant, although the length and overall density of individual roots remain roughly the same; this is still expected to limit the plant's future growth. In dense stands, there is a strong gradient of light from top to bottom. Lower leaves in high-density stands will therefore have a lower photosynthetic rate and a lower transpiration rate than similar leaves of plants in open stands. There are indications that even the well-illuminated top leaves may have a lower photosynthetic capacity in densely-grown plants. Because densely-grown plants are smaller, they will also produce fewer seeds per individual. Seed production as a fraction of total plant biomass ( harvest index ) is also lower, as is the weight of each individual seed .
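As a rough numerical illustration of the two patterns described above (total stand biomass saturating toward a constant final yield, while per-plant biomass declines roughly 30-40% per doubling of density), the following Python sketch uses a simple saturating yield-density curve. The functional form and the parameter values are hypothetical, chosen only to reproduce the qualitative behaviour, not taken from the cited studies.

# Hypothetical yield-density model (illustrative only).
# Stand biomass per m^2 saturates toward W_MAX ("constant final yield"),
# so the mean biomass per plant, stand biomass / density, declines with density.

W_MAX = 800.0   # asymptotic stand biomass in g per m^2 (assumed value)
K = 50.0        # density (plants per m^2) at which half of W_MAX is reached (assumed)

def stand_biomass(density):
    """Total above-ground biomass per m^2; increases with density and saturates."""
    return W_MAX * density / (K + density)

def per_plant_biomass(density):
    """Mean biomass of an individual plant at a given density."""
    return stand_biomass(density) / density

for d in (50, 100, 200, 400, 800):
    print(f"{d:>4} plants/m^2: stand {stand_biomass(d):6.1f} g/m^2, "
          f"individual {per_plant_biomass(d):5.2f} g")

# Near the half-saturation density, each doubling of density reduces per-plant
# biomass by roughly a third; at much higher densities the reduction approaches 50%.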
https://en.wikipedia.org/wiki/Plant_density
Important structures in plant development are buds , shoots , roots , leaves , and flowers ; plants produce these tissues and structures throughout their life from meristems [ 1 ] located at the tips of organs, or between mature tissues. Thus, a living plant always has embryonic tissues. By contrast, an animal embryo will very early produce all of the body parts that it will ever have in its life. When the animal is born (or hatches from its egg), it has all its body parts and from that point will only grow larger and more mature. However, both plants and animals pass through a phylotypic stage that evolved independently [ 2 ] and that causes a developmental constraint limiting morphological diversification. [ 3 ] [ 4 ] [ 5 ] [ 6 ] According to plant physiologist A. Carl Leopold , the properties of organization seen in a plant are emergent properties which are more than the sum of the individual parts. "The assembly of these tissues and functions into an integrated multicellular organism yields not only the characteristics of the separate parts and processes but also quite a new set of characteristics which would not have been predictable on the basis of examination of the separate parts." [ 7 ] A vascular plant begins from a single celled zygote , formed by fertilisation of an egg cell by a sperm cell. From that point, it begins to divide to form a plant embryo through the process of embryogenesis . As this happens, the resulting cells will organize so that one end becomes the first root while the other end forms the tip of the shoot. In seed plants, the embryo will develop one or more "seed leaves" ( cotyledons ). By the end of embryogenesis, the young plant will have all the parts necessary to begin in its life. [ citation needed ] Once the embryo germinates from its seed or parent plant, it begins to produce additional organs (leaves, stems, and roots) through the process of organogenesis . New roots grow from root meristems located at the tip of the root, and new stems and leaves grow from shoot meristems located at the tip of the shoot. [ 8 ] Branching occurs when small clumps of cells left behind by the meristem, and which have not yet undergone cellular differentiation to form a specialized tissue, begin to grow as the tip of a new root or shoot. Growth from any such meristem at the tip of a root or shoot is termed primary growth and results in the lengthening of that root or shoot. Secondary growth results in widening of a root or shoot from divisions of cells in a cambium . [ 9 ] Direct organogenesis is a method of plant tissue culture in which organs like roots and shoots develop directly from meristematic or non-meristematic cells, bypassing the callus formation stage. This process takes place through the activation of shoot and root apical meristems or axillary buds, influenced by internal or externally applied plant growth regulators. As a result, specific cell types differentiate to form plant structures that can grow into whole plants. This technique is commonly used for propagating various plant species, including vegetables, fruits, woody plants, and medicinal plants. Shoot tips and nodal segments are typically used as explants in this process. In some cases, adventitious structures arise from somatic tissues under specific conditions, allowing for the regeneration of shoots or roots in areas where they would not naturally develop. 
This approach is particularly effective in herbaceous species, and while adventitious regeneration can lead to a higher rate of shoot formation, axillary shoot proliferation remains the most widely used method in micropropagation due to its efficiency and practicality. The general sequence of organ development in this process follows the pattern: Primary Explant → Meristemoid → Organ Primordium. Indirect organogenesis is a developmental process in which plant cells undergo dedifferentiation, allowing them to revert from their specialized state and transition into a new developmental pathway. This process is characterized by an intermediate callus stage, where cells lose their original identity and become morphologically adaptable, serving as the foundation for organ formation. The progression of indirect organogenesis involves several key phases, beginning with dedifferentiation, which enables the cells to attain competence, followed by an induction stage that leads to a fully determined state. Once determination is achieved, the cells undergo morphological changes, ultimately giving rise to functional shoots or roots. This process follows a structured developmental sequence: Primary Explant → Callus → Meristemoid → Organ Primordium, ensuring the organized formation of plant organs. The ability to regenerate plants successfully depends on selecting the right explant, which varies among species and plant varieties. In direct organogenesis, explants sourced from meristematic tissues, such as shoot tips, lateral buds, leaves, petioles, roots, and floral structures, are often preferred due to their ability to rapidly develop into new organs. These tissues have high survival rates, fast growth, and strong regenerative potential in vitro. Meristems, shoot tips, axillary buds, immature leaves, and embryos are particularly effective in promoting regeneration across a wide range of plant species. Additionally, mature plant parts, including leaves, stems, roots, petioles, and flower segments, can also serve as viable explants for organ formation under suitable conditions. Plant regeneration occurs through the formation of callus, an undifferentiated mass of cells that later gives rise to new organs. Callus formation can be induced from various explants, such as cotyledons, hypocotyls, stems, leaves, shoot apices, roots, inflorescences, and floral structures, when cultured under controlled conditions. Generally, explants containing actively dividing cells are more effective for callus initiation, as they have a higher capacity for cellular reprogramming. Immature tissues tend to be more adaptable for regeneration compared to mature tissues due to their increased developmental plasticity. The size and shape of the explant also influence the success of culture establishment, as larger or more structurally favorable explants may enhance the chances of survival and growth. Callus development is primarily triggered by wounding and the presence of plant hormones, which may be naturally present in the tissue or supplemented in the growth medium to stimulate cellular activity and organ formation. Culture media compositions vary significantly in their mineral elements and vitamin content to accommodate diverse plant species requirements. Murashige and Skoog (MS) medium is distinguished by its high nitrogen content in ammonium form, a characteristic not found in other formulations. Sucrose typically serves as the primary carbohydrate source across various media types. 
The interaction between auxins and cytokinins in regulating organogenesis is well established, though responses vary by species. Some plants, such as tobacco, can spontaneously form shoot buds without exogenous growth regulators, while others like Scurrula pulverulenta , Lactuca sativa , and Brassica juncea strictly require hormonal supplementation. In B. juncea cotyledon cultures, benzylaminopurine (BAP) alone induces shoot formation from petiole tissue, similar to radiata pine, where cytokinin alone suffices for shoot induction. Research indicates that endogenous hormone concentrations, rather than exogenous application levels, ultimately determine organogenic differentiation. Among the various cytokinins (2iP, BAP, thidiazuron, kinetin, and zeatin) used for shoot induction, BAP has demonstrated superior efficacy and widespread application. Auxins similarly influence organogenic pathways, with 2,4-D commonly used for callus induction in cereals, though organogenesis typically requires transfer to media containing IAA or NAA or lacking 2,4-D entirely. The auxin-to-cytokinin ratio largely determines which organs develop. Gibberellic acid (GA3) contributes to cell elongation and meristemoid formation, while unconventional compounds like tri-iodobenzoic acid (TIBA), abscisic acid (ABA), kanamycin, and auxin inhibitors have proven effective for recalcitrant species. Natural additives like ginseng powder can enhance regeneration frequency in certain cultures. Since ethylene typically suppresses shoot differentiation, inhibitors of ethylene synthesis such as aminoethoxyvinylglycine (AVG) and silver nitrate (AgNO3) are often employed to promote organogenesis, with documented success in wheat, tobacco, and sunflower cultures. Agar is not an essential component of the culture medium, but the quality and quantity of agar are important factors that may play a role in organogenesis. Commercially available agar may contain impurities. With a high concentration of agar, the nutrient medium becomes hard and does not allow the diffusion of nutrients to the growing tissue. This influences the organogenesis process by producing adventitious roots, unwanted callus at the base, or senescence of the foliage. The pH is another important factor that may affect the organogenesis route. The pH of the culture medium is adjusted to between 5.6 and 5.8 before sterilization. Medium pH facilitates or inhibits nutrient availability in the medium; for example, ammonium uptake in vitro occurs at a stable pH of 5.5 (Thorpe et al., 2008). The timing of explant collection significantly impacts regenerative capacity in tissue culture systems, with seasonal variations playing a crucial role in organ formation success. This phenomenon is clearly demonstrated in Lilium speciosum , where bulb scales exhibit differential regenerative responses based on collection season. Explants harvested during spring and autumn periods readily form bulblets in vitro, while those collected during summer or winter months fail to produce bulblets despite identical culture conditions. Similar seasonal dependency is observed in Chlorophytum borivillianum , a medicinally valuable species that shows markedly enhanced in vitro tuber formation during monsoon seasons compared to other times of year. This seasonal variation in morphogenic potential likely reflects differences in the physiological state of the source plant, including endogenous hormone levels, carbohydrate reserves, and metabolic activity that fluctuate throughout the annual growth cycle.
Oxygen has a key role in tissue culture, which influences the organ formation. In some cultures, shoot bud formation takes place when the gradient of available oxygen inside the culture vessel is reduced, while induction of roots requires a high oxygen gradient. Light conditions, including both intensity and spectral quality, function as significant morphogenic signals in plant tissue culture systems. Spectral composition research has revealed distinct wavelength-dependent responses, with blue light generally promoting shoot organogenesis while red light wavelengths typically favor root induction. Sequential photoperiod exposure—blue light followed by red light—has been documented to effectively stimulate specific organogenetic pathways in certain species. The regulatory effect of different wavelengths demonstrates how light quality can selectively control morphogenic outcomes. Artificial fluorescent lighting produces variable responses depending on the species, promoting root formation in some cultures while inhibiting it in others. Some species exhibit specialized light requirements, as observed in Pisum sativum (garden pea), where shoot bud initiation occurs optimally in darkness before exposure to light stimulates further development. For most tissue culture applications, standard lighting protocols typically recommend illumination of approximately 2,000-3,000 lux intensity with a 16-hour photoperiod. However, certain species demonstrate exceptional light intensity requirements, exemplified by Nicotiana tabacum (tobacco) callus cultures, which require substantially higher light intensities of 10,000-15,000 lux to induce shoot bud formation or somatic embryogenesis. Temperature serves as a critical environmental factor in plant tissue culture systems, with optimal incubation temperatures varying significantly among species based on their natural habitat requirements. While 25°C represents the standard incubation temperature suitable for many plant species in vitro, species-specific temperature adaptations should be considered to maximize organogenic potential. Geophytic species from temperate regions typically require lower temperature regimes than the standard protocol. Notable examples include bulbous plants such as Galanthus (snowdrop) which exhibits optimal growth at approximately 15°C, while certain cultivars of Narcissus (daffodil) and Allium (ornamental onion) demonstrate enhanced regeneration efficiency at around 18°C. Conversely, species of tropical origin generally require elevated temperatures for optimal growth and organogenesis in culture. Date palm cultures thrive at 27°C, while Monstera deliciosa (Swiss cheese plant) exhibits peak regenerative performance at 30°C. These temperature requirements reflect evolutionary adaptations to the plants' native environmental conditions. Variation in chromosome number, that is, aneuploidy, polyploidy, etc., in plant cell culture has been well documented in the past. Chromosome instability of the cells results in gradual decline of morphogenetic potentiality of the callus tissue. Therefore, to maintain organogenic potential of the callus tissue and the chromosome stability, it is suggested that the time and frequency of subculture should be regularly followed. Age of culture is often the key to successful organogenesis. A young culture/freshly subcultured material may produce organs more frequently than the aged ones. The probable reason for this is the reduction or loss of the organogenic potential in old cultures. 
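The light and temperature figures given above are essentially protocol settings, so they can be collected into a simple lookup table. The sketch below does this in Python; the dataclass, the fallback behaviour, and the photoperiod and lux values reused for the species-specific entries are illustrative assumptions, while the temperatures and the standard and tobacco lux ranges are the ones named in the text.

from dataclasses import dataclass

@dataclass
class CultureConditions:
    temperature_c: float          # incubation temperature in degrees Celsius
    light_lux: tuple              # (min, max) illumination in lux
    photoperiod_h: int            # hours of light per day

# Standard protocol described above: ~25 °C, 2,000-3,000 lux, 16 h photoperiod.
DEFAULT = CultureConditions(temperature_c=25, light_lux=(2000, 3000), photoperiod_h=16)

# Species-specific deviations mentioned in the text; the photoperiod and lux values
# for most of these entries simply reuse the default and are assumptions.
SPECIES_OVERRIDES = {
    "Galanthus (snowdrop)":       CultureConditions(15, (2000, 3000), 16),
    "Narcissus (daffodil)":       CultureConditions(18, (2000, 3000), 16),
    "Allium (ornamental onion)":  CultureConditions(18, (2000, 3000), 16),
    "Date palm":                  CultureConditions(27, (2000, 3000), 16),
    "Monstera deliciosa":         CultureConditions(30, (2000, 3000), 16),
    "Nicotiana tabacum callus":   CultureConditions(25, (10000, 15000), 16),
}

def conditions_for(species: str) -> CultureConditions:
    """Return the recorded conditions for a species, falling back to the default."""
    return SPECIES_OVERRIDES.get(species, DEFAULT)

print(conditions_for("Galanthus (snowdrop)"))
print(conditions_for("Unlisted species"))    # falls back to DEFAULT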
However, in some plants, the regeneration capacity may be retained for many years or even indefinitely. The ability of cells to undergo organogenesis largely depends on the application of plant growth regulators (PGRs), which influence the developmental direction of the tissue. The balance between auxins and cytokinins plays a critical role in determining whether shoots or roots will form. A lower auxin-to-cytokinin ratio favors shoot regeneration, whereas a higher auxin concentration promotes root formation. For example, in Medicago sativa (alfalfa) cultures, an elevated level of kinetin combined with a low concentration of 2,4-D (a synthetic auxin) leads to shoot development, whereas increasing 2,4-D while reducing kinetin concentration encourages root formation. However, successful organogenesis is not solely dependent on PGR treatment. The physical size of the callus or developing tissue must reach a certain threshold to support proper organ formation, highlighting the importance of intercellular signaling in coordinating developmental processes. The induction phase in organogenesis represents the transition period between a tissue achieving competence and becoming fully determined to initiate primordia formation. During this stage, an integrated genetic pathway directs the developmental process before morphological differentiation occurs. Research suggests that certain chemical and physical factors can interfere with genetically programmed developmental pathways, altering morphogenic outcomes. In the case of Convolvulus arvensis , these external influences were found to inhibit shoot formation, leading instead to callus development. The conclusion of the induction phase is marked by a cell or group of cells committing to either shoot or root formation. This determination is tested by transferring the tissue from a growth regulator-supplemented medium to a basal medium containing essential minerals, vitamins, and a carbon source but no plant growth regulators. At this stage, the tissue completes the induction process and becomes fully committed to its developmental fate. A key concept in this process is canalization, which refers to the ability of a developmental pathway to consistently produce a standard phenotype despite potential genetic or environmental variations. If explants are removed from a shoot-inducing medium before full canalization occurs, shoot formation is significantly reduced, and root development becomes the dominant outcome. This phenomenon highlights the morphogenic plasticity of plant tissues in vitro, demonstrating their ability to adjust to external conditions and developmental cues. During this phase, the process of morphological differentiation begins, leading to the formation and development of the nascent organ. The initiation of organogenesis is characterized by a distinct shift in polarity, followed by the establishment of radial symmetry and subsequent growth along the newly defined axis, ultimately forming the structural bulge that marks organ initiation. The sequential development of organogenesis can be observed in species such as Pinus oocarpa Schiede , where shoot buds are regenerated directly from cotyledons through direct organogenesis. However, the specific developmental patterns may vary across different plant species grown in vitro.
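The auxin-to-cytokinin rule of thumb described above can be expressed as a small decision function. This is a sketch only: the cut-off ratios below are hypothetical placeholders rather than values from the text, since real protocols are species- and explant-specific and determined empirically.

# Illustrative rule of thumb: a low auxin-to-cytokinin ratio biases cultures
# toward shoot regeneration, a high ratio toward root formation.
# The numeric thresholds are assumed for illustration, not sourced values.

def predicted_organogenesis(auxin_mg_per_l: float, cytokinin_mg_per_l: float) -> str:
    if cytokinin_mg_per_l == 0:
        return "root formation favored (auxin only)"
    ratio = auxin_mg_per_l / cytokinin_mg_per_l
    if ratio < 0.5:      # hypothetical cut-off: cytokinin-dominated medium
        return "shoot regeneration favored"
    if ratio > 2.0:      # hypothetical cut-off: auxin-dominated medium
        return "root formation favored"
    return "intermediate ratio: callus proliferation often maintained"

# Mirrors the alfalfa description: high kinetin with low 2,4-D -> shoots,
# high 2,4-D with low kinetin -> roots (the concentrations here are made up).
print(predicted_organogenesis(auxin_mg_per_l=0.1, cytokinin_mg_per_l=1.0))
print(predicted_organogenesis(auxin_mg_per_l=1.0, cytokinin_mg_per_l=0.1))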
The progression of organ formation includes distinct morphological changes, beginning with alterations in surface texture, the emergence of meristemoids, and the expansion of the meristematic region either vertically or horizontally. This is followed by the protrusion of the meristematic region beyond the epidermal layer, the formation of a structured meristem with visible leaf primordia, and eventually, the full development of an adventitious bud. A notable characteristic of in vitro organogenic cultures is the simultaneous formation of multiple meristemoids on a single explant, with varying degrees of differentiation. Within the same explant, buds may exist in different developmental stages, ranging from early initiation to fully developed structures. Once the elongated shoots surpass a length of 1 cm, they are transferred to either in vitro or ex vitro rooting substrates, allowing for the completion of plantlet regeneration and the establishment of a fully formed plant. In the process of direct organogenesis, axillary shoots are generated directly from pre-existing meristems located at the shoot tips and nodes, offering a high rate of multiplication. One of the key advantages of this method is the low likelihood of mutations occurring in the organized shoot meristems, ensuring that the resulting plants maintain genetic consistency. This technique is particularly valuable for the production and conservation of economically and environmentally significant plants, as it allows for the efficient generation of multiple shoots from a single explant, maintaining uniformity across the propagated plants. Furthermore, all plants produced via direct organogenesis are true-to-type, meaning they are genetic clones of the original plant. However, there are some limitations to organogenesis. Somaclonal variation, which can result in unwanted genetic diversity, is a potential issue, particularly in the indirect organogenesis process. Additionally, this technique may not be suitable for recalcitrant plant species, which are those that do not respond well to in vitro culture or regeneration protocols. These limitations highlight the need for ongoing research and optimization of methods for different plant species to overcome these challenges in plant propagation and conservation. In addition to growth by cell division, a plant may grow through cell elongation . This occurs when individual cells or groups of cells grow longer. Not all plant cells grow to the same length. When cells on one side of a stem grow longer and faster than cells on the other side, the stem bends to the side of the slower growing cells as a result. This directional growth can occur via a plant's response to a particular stimulus, such as light ( phototropism ), gravity ( gravitropism ), water, ( hydrotropism ), and physical contact ( thigmotropism ). [ citation needed ] Plant growth and development are mediated by specific plant hormones and plant growth regulators (PGRs) (Ross et al. 1983). [ 10 ] Endogenous hormone levels are influenced by plant age, cold hardiness, dormancy, and other metabolic conditions; photoperiod, drought, temperature, and other external environmental conditions; and exogenous sources of PGRs, e.g., externally applied and of rhizospheric origin. Plants exhibit natural variation in their form and structure. While all organisms vary from individual to individual, plants exhibit an additional type of variation. 
Within a single individual, parts are repeated which may differ in form and structure from other similar parts. This variation is most easily seen in the leaves of a plant, though other organs such as stems and flowers may show similar variation. There are three primary causes of this variation: positional effects, environmental effects, and juvenility. [ citation needed ] There is variation among the parts of a mature plant resulting from the relative position where the organ is produced. For example, along a new branch the leaves may vary in a consistent pattern along the branch. The form of leaves produced near the base of the branch differs from leaves produced at the tip of the plant, and this difference is consistent from branch to branch on a given plant and in a given species. The way in which new structures mature as they are produced may be affected by the point in the plants life when they begin to develop, as well as by the environment to which the structures are exposed. Temperature has a multiplicity of effects on plants depending on a variety of factors, including the size and condition of the plant and the temperature and duration of exposure. The smaller and more succulent the plant, the greater the susceptibility to damage or death from temperatures that are too high or too low. Temperature affects the rate of biochemical and physiological processes, rates generally (within limits) increasing with temperature. Juvenility or heteroblasty is when the organs and tissues produced by a young plant, such as a seedling , are often different from those that are produced by the same plant when it is older. For example, young trees will produce longer, leaner branches that grow upwards more than the branches they will produce as a fully grown tree. In addition, leaves produced during early growth tend to be larger, thinner, and more irregular than leaves on the adult plant. Specimens of juvenile plants may look so completely different from adult plants of the same species that egg-laying insects do not recognize the plant as food for their young. The transition from early to late growth forms is sometimes called vegetative phase change . [ 11 ] Plant structures, including, roots, buds, and shoots, that develop in unusual locations are called adventitious . Adventitious roots and buds usually develop near the existing vascular tissues so that they can connect to the xylem and phloem . However, the exact location varies greatly. In young stems, adventitious roots often form from parenchyma between the vascular bundles . In stems with secondary growth, adventitious roots often originate in phloem parenchyma near the vascular cambium . In stem cuttings, adventitious roots sometimes also originate in the callus cells that form at the cut surface. Leaf cuttings of the Crassula form adventitious roots in the epidermis. [ 12 ] Adventitious buds develop from places other than a shoot apical meristem , which occurs at the tip of a stem, or on a shoot node , at the leaf axil, the bud being left there during primary growth. They may develop on roots or leaves, or on shoots as a new growth. Shoot apical meristems produce one or more axillary or lateral buds at each node. When stems produce considerable secondary growth , the axillary buds may be destroyed. Adventitious buds may then develop on stems with secondary growth. [ citation needed ] Adventitious buds are often formed after the stem is wounded or pruned . The adventitious buds help to replace lost branches. 
Adventitious buds and shoots also may develop on mature tree trunks when a shaded trunk is exposed to bright sunlight because surrounding trees are cut down. Redwood ( Sequoia sempervirens ) trees often develop many adventitious buds on their lower trunks. If the main trunk dies, a new one often sprouts from one of the adventitious buds. Small pieces of redwood trunk are sold as souvenirs termed redwood burls. They are placed in a pan of water, and the adventitious buds sprout to form shoots. [ citation needed ] Some plants normally develop adventitious buds on their roots, which can extend quite a distance from the plant. Shoots that develop from adventitious buds on roots are termed suckers . They are a type of natural vegetative reproduction in many species , e.g. many grasses, quaking aspen and Canada thistle . The Pando quaking aspen grew from one trunk to 47,000 trunks via adventitious bud formation on a single root system. [ citation needed ] Some leaves develop adventitious buds, which then form adventitious roots, as part of vegetative reproduction ; e.g. piggyback plant ( Tolmiea menziesii ) and mother-of-thousands ( Kalanchoe daigremontiana ). The adventitious plantlets then drop off the parent plant and develop as separate clones of the parent. [ citation needed ] Coppicing is the practice of cutting tree stems to the ground to promote rapid growth of adventitious shoots. It is traditionally used to produce poles, fence material or firewood. It is also practiced for biomass crops grown for fuel, such as poplar or willow. Adventitious rooting may be a stress-avoidance acclimation for some species, driven by such inputs as hypoxia [ 13 ] or nutrient deficiency. Another ecologically important function of adventitious rooting is the vegetative reproduction of tree species such as Salix and Sequoia in riparian settings. [ 14 ] The ability of plant stems to form adventitious roots is utilised in commercial propagation by cuttings . Understanding of the physiological mechanisms behind adventitious rooting has allowed some progress to be made in improving the rooting of cuttings by the application of synthetic auxins as rooting powders and by the use of selective basal wounding. [ 15 ] Further progress can be made in future years by applying research into other regulatory mechanisms to commercial propagation and by the comparative analysis of molecular and ecophysiological control of adventitious rooting in 'hard to root' vs. 'easy to root' species. [ citation needed ] Adventitious roots and buds are very important when people propagate plants via cuttings, layering , tissue culture . Plant hormones , termed auxins , are often applied to stem, shoot or leaf cuttings to promote adventitious root formation, e.g., African violet and sedum leaves and shoots of poinsettia and coleus . Propagation via root cuttings requires adventitious bud formation, e.g., in horseradish and apple . In layering, adventitious roots are formed on aerial stems before the stem section is removed to make a new plant. Large houseplants are often propagated by air layering . Adventitious roots and buds must develop in tissue culture propagation of plants. The genetics behind leaf shape development in Arabidopsis thaliana has been broken down into three stages: The initiation of the leaf primordium , the establishment of dorsiventrality , and the development of a marginal meristem . 
The leaf primordium is initiated by the suppression of the genes and proteins of the class I KNOX family (such as SHOOT APICAL MERISTEMLESS ). These class I KNOX proteins directly suppress gibberellin biosynthesis in the leaf primordium. Many genetic factors were found to be involved in the suppression of these genes in leaf primordia (such as ASYMMETRIC LEAVES1, BLADE-ON-PETIOLE1 , SAWTOOTH1 , etc.). Thus, with this suppression, the levels of gibberellin increase and the leaf primordium initiates growth. [ citation needed ] Flower development is the process by which angiosperms produce a pattern of gene expression in meristems that leads to the appearance of an organ oriented towards sexual reproduction , the flower. There are three physiological developments that must occur in order for this to take place: firstly, the plant must pass from sexual immaturity into a sexually mature state (i.e. a transition towards flowering); secondly, the transformation of the apical meristem 's function from a vegetative meristem into a floral meristem or inflorescence ; and finally the growth of the flower's individual organs. The latter phase has been modelled using the ABC model , which describes the biological basis of the process from the perspective of molecular and developmental genetics. [ citation needed ] An external stimulus is required in order to trigger the differentiation of the meristem into a flower meristem. This stimulus will activate mitotic cell division in the meristem, particularly on its sides where new primordia are formed. This same stimulus will also cause the meristem to follow a developmental pattern that will lead to the growth of floral meristems as opposed to vegetative meristems. The main difference between these two types of meristem, apart from the obvious difference in the organ produced, is the verticillate (or whorled) phyllotaxis , that is, the absence of stem elongation among the successive whorls or verticils of the primordium. These verticils follow an acropetal development, giving rise to sepals , petals , stamens and carpels . Another difference from vegetative axillary meristems is that the floral meristem is "determined", which means that, once differentiated, its cells will no longer divide . [ 16 ] The identity of the organs present in the four floral verticils is a consequence of the interaction of at least three types of gene products , each with distinct functions. According to the ABC model, functions A and C are required in order to determine the identity of the verticils of the perianth and the reproductive verticils, respectively. These functions are exclusive, and the absence of one of them means that the other will determine the identity of all the floral verticils. The B function allows the differentiation of petals from sepals in the secondary verticil, as well as the differentiation of the stamen from the carpel in the tertiary verticil. Plants use floral form, flowers, and scent to attract different insects for pollination . Certain compounds within the emitted scent appeal to particular pollinators . In Petunia hybrida , volatile benzenoids are produced to give off the floral smell. While components of the benzenoid biosynthetic pathway are known, the enzymes within the pathway, and the subsequent regulation of those enzymes, are yet to be discovered. [ 17 ] To determine pathway regulation, P. hybrida Mitchell flowers were used in a petal-specific microarray to compare the flowers that were just about to produce the scent to the P.
hybrida cultivar W138 flowers that produce few volatile benzenoids. cDNAs of genes from both plants were sequenced. The results demonstrated that there is a transcription factor upregulated in the Mitchell flowers, but not in the W138 flowers lacking the floral aroma. This gene was named ODORANT1 (ODO1). To determine expression of ODO1 throughout the day, RNA gel blot analysis was done. The gel showed that ODO1 transcript levels began increasing between 1300 and 1600 h, peaked at 2200 h and were lowest at 1000 h. These ODO1 transcript levels directly correspond to the timeline of volatile benzenoid emission. Additionally, the gel supported the previous finding that W138 non-fragrant flowers have only one-tenth the ODO1 transcript levels of the Mitchell flowers. Thus, the amount of ODO1 made corresponds to the amount of volatile benzenoid emitted, indicating that ODO1 regulates benzenoid biosynthesis. [ 17 ]

Additional genes contributing to the biosynthesis of major scent compounds are OOMT1 and OOMT2. OOMT1 and OOMT2 encode orcinol O-methyltransferases (OOMTs), which catalyze the last two steps of the pathway that produces the scent compound 3,5-dimethoxytoluene (DMT). DMT is produced by many different roses, yet some rose varieties, like Rosa gallica and the Damask rose Rosa damascena, do not emit DMT. It has been suggested that these varieties do not make DMT because they do not have the OOMT genes. However, following an immunolocalization experiment, OOMT was found in the petal epidermis. To study this further, rose petals were subjected to ultracentrifugation. Supernatants and pellets were inspected by western blot. Detection of OOMT protein in the supernatant and the pellet at 150,000 g allowed researchers to conclude that OOMT protein is tightly associated with petal epidermis membranes. Such experiments determined that OOMT genes do exist within Rosa gallica and Damask rose Rosa damascena varieties, but the OOMT genes are not expressed in the flower tissues where DMT is made. [ 18 ]
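As an aside, the combinatorial logic of the ABC model described earlier (function A alone specifies sepals, A plus B petals, B plus C stamens, and C alone carpels, with A and C mutually exclusive) can be captured in a few lines of code. The following is a minimal illustrative sketch in Python, not a model of any real genotype; the data structures and the "indeterminate" fallback are purely conceptual.

```python
# Minimal sketch of the ABC model's combinatorial logic (conceptual toy only).
# Whorls are numbered 1-4; in the wild type they form sepals, petals, stamens
# and carpels. A and C are mutually antagonistic: if one is absent, the other
# occupies all four whorls, as described in the text.

WILD_TYPE_DOMAINS = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4}}

def organ_identity(functions_present):
    """Return the organ formed in each whorl given the active ABC functions."""
    domains = {f: set(w) for f, w in WILD_TYPE_DOMAINS.items() if f in functions_present}
    if "A" not in domains and "C" in domains:
        domains["C"] = {1, 2, 3, 4}   # loss of A: C spreads into whorls 1-2
    if "C" not in domains and "A" in domains:
        domains["A"] = {1, 2, 3, 4}   # loss of C: A spreads into whorls 3-4
    organs = {}
    for whorl in (1, 2, 3, 4):
        active = frozenset(f for f, ws in domains.items() if whorl in ws)
        organs[whorl] = {
            frozenset("A"): "sepal",
            frozenset(["A", "B"]): "petal",
            frozenset(["B", "C"]): "stamen",
            frozenset("C"): "carpel",
        }.get(active, "indeterminate")
    return organs

print(organ_identity({"A", "B", "C"}))  # wild type: sepal, petal, stamen, carpel
print(organ_identity({"B", "C"}))       # A lost: carpel, stamen, stamen, carpel
```

Running the sketch with function A removed reproduces the behaviour described above: C spreads into all four whorls, so the perianth whorls take on reproductive organ identities.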
https://en.wikipedia.org/wiki/Plant_development
Plant disease resistance protects plants from pathogens in two ways: by pre-formed structures and chemicals, and by infection-induced responses of the immune system. Relative to a susceptible plant, disease resistance is the reduction of pathogen growth on or in the plant (and hence a reduction of disease), while the term disease tolerance describes plants that exhibit little disease damage despite substantial pathogen levels. Disease outcome is determined by the three-way interaction of the pathogen, the plant, and the environmental conditions (an interaction known as the disease triangle). Defense-activating compounds can move cell-to-cell and systemically through the plant's vascular system. However, plants do not have circulating immune cells, so most cell types exhibit a broad suite of antimicrobial defenses. Although obvious qualitative differences in disease resistance can be observed when multiple specimens are compared (allowing classification as "resistant" or "susceptible" after infection by the same pathogen strain at similar inoculum levels in similar environments), a gradation of quantitative differences in disease resistance is more typically observed between plant strains or genotypes. Plants consistently resist certain pathogens but succumb to others; resistance is usually specific to certain pathogen species or pathogen strains.

Plant disease resistance is crucial to the reliable production of food, and it provides significant reductions in agricultural use of land, water, fuel, and other inputs. Plants in both natural and cultivated populations carry inherent disease resistance, but this has not always protected them. The Great Famine of Ireland in the 1840s followed an epidemic of potato late blight, caused by the oomycete Phytophthora infestans. The world's first mass-cultivated banana cultivar Gros Michel was lost in the 1920s to Panama disease caused by the fungus Fusarium oxysporum. The current wheat stem rust, leaf rust, and yellow stripe rust epidemics spreading from East Africa into the Indian subcontinent are caused by rust fungi Puccinia graminis and P. striiformis. Other epidemics include chestnut blight, as well as recurrent severe plant diseases such as rice blast, soybean cyst nematode, and citrus canker. [ 1 ] [ 2 ]

Plant pathogens can spread rapidly over great distances, vectored by water, wind, insects, and humans. Across large regions and many crop species, it is estimated that diseases typically reduce plant yields by 10% every year in more developed nations or agricultural systems, but yield loss to diseases often exceeds 20% in less developed settings. [ 1 ] However, disease control is reasonably successful for most crops. Disease control is achieved by use of plants that have been bred for good resistance to many diseases, and by plant cultivation approaches such as crop rotation, pathogen-free seed, appropriate planting date and plant density, control of field moisture, and pesticide use.

The plant immune system carries two interconnected tiers of receptors, one most frequently sensing molecules outside the cell and the other most frequently sensing molecules inside the cell. Both systems sense the intruder and respond by activating antimicrobial defenses in the infected cell and neighboring cells. In some cases, defense-activating signals spread to the rest of the plant or even to neighboring plants. The two systems detect different types of pathogen molecules and rely on different classes of plant receptor proteins. [ 5 ] [ 6 ]
The first tier is primarily governed by pattern recognition receptors (PRRs) that are activated by recognition of evolutionarily conserved pathogen or microbial–associated molecular patterns (PAMPs or MAMPs). Activation of PRRs leads to intracellular signaling, transcriptional reprogramming, and biosynthesis of a complex output response that limits colonization. The system is known as PAMP-triggered immunity or as pattern-triggered immunity (PTI). [ 7 ] [ 6 ] [ 8 ]

The second tier, primarily governed by R gene products, is often termed effector-triggered immunity (ETI). ETI is typically activated by the presence of specific pathogen "effectors" and then triggers strong antimicrobial responses (see R gene section below). In addition to PTI and ETI, plant defenses can be activated by the sensing of damage-associated molecular patterns (DAMPs), such as portions of the plant cell wall released during pathogenic infection. [ 9 ] Responses activated by PTI and ETI receptors include ion channel gating, oxidative burst, cellular redox changes, and protein kinase cascades that either directly activate cellular changes (such as cell wall reinforcement or antimicrobial production) or activate changes in gene expression that then elevate other defensive responses.

Plant immune systems show some mechanistic similarities with the immune systems of insects and mammals, but also exhibit many plant-specific characteristics. [ 10 ] The two above-described tiers are central to plant immunity but do not fully describe plant immune systems. In addition, many specific examples of apparent PTI or ETI violate common PTI/ETI definitions, suggesting a need for broadened definitions and/or paradigms. [ 11 ] The term quantitative resistance (discussed below) refers to plant disease resistance that is controlled by multiple genes and multiple molecular mechanisms that each have small effects on the overall resistance trait. Quantitative resistance is often contrasted to ETI resistance mediated by single major-effect R genes.

PAMPs, conserved molecules found across multiple pathogen genera, are referred to as MAMPs by many researchers. The defenses induced by MAMP perception are sufficient to repel most pathogens. However, pathogen effector proteins (see below) are adapted to suppress basal defenses such as PTI. Many receptors for MAMPs (and DAMPs) have been discovered. MAMPs and DAMPs are often detected by transmembrane receptor-kinases that carry LRR or LysM extracellular domains. [ 5 ]

Effector-triggered immunity (ETI) is activated by the presence of pathogen effectors. The ETI response is reliant on R genes, and is activated by specific pathogen strains. Plant ETI often causes an apoptotic hypersensitive response. Plants have evolved R genes (resistance genes) whose products mediate resistance to specific virus, bacteria, oomycete, fungus, nematode or insect strains. R gene products are proteins that allow recognition of specific pathogen effectors, either through direct binding or by recognition of the effector's alteration of a host protein. [ 6 ] Many R genes encode NB-LRR proteins (proteins with nucleotide-binding and leucine-rich repeat domains, also known as NLR proteins or STAND proteins, among other names). Most plant immune systems carry a repertoire of 100–600 different R gene homologs, and individual R genes have been demonstrated to mediate resistance to specific virus, bacteria, oomycete, fungus, nematode or insect strains.
R gene products control a broad set of disease resistance responses whose induction is often sufficient to stop further pathogen growth/spread. Studied R genes usually confer specificity for particular strains of a pathogen species (those that express the recognized effector). As first noted by Harold Flor in his mid-20th century formulation of the gene-for-gene relationship , a plant R gene has specificity for a pathogen avirulence gene (Avr gene). Avirulence genes are now known to encode effectors. The pathogen Avr gene must have matched specificity with the R gene for that R gene to confer resistance, suggesting a receptor/ ligand interaction for Avr and R genes. [ 10 ] Alternatively, an effector can modify its host cellular target (or a molecular decoy of that target), and the R gene product (NLR protein) activates defenses when it detects the modified form of the host target or decoy. [ 6 ] [ 12 ] Effectors are central to the pathogenic or symbiotic potential of microbes and microscopic plant-colonizing animals such as nematodes. [ 13 ] [ 14 ] [ 15 ] Effectors typically are proteins that are delivered outside the microbe and into the host cell. These colonist-derived effectors manipulate the host's cell physiology and development. As such, effectors offer examples of co-evolution (example: a fungal protein that functions outside of the fungus but inside of plant cells has evolved to take on plant-specific functions). Pathogen host range is determined, among other things, by the presence of appropriate effectors that allow colonization of a particular host. [ 5 ] Pathogen-derived effectors are a powerful tool to identify plant functions that play key roles in disease and in disease resistance. Apparently most effectors function to manipulate host physiology to allow disease to occur. Well-studied bacterial plant pathogens typically express a few dozen effectors, often delivered into the host by a Type III secretion apparatus. [ 13 ] Fungal, oomycete and nematode plant pathogens apparently express a few hundred effectors. [ 14 ] [ 15 ] So-called "core" effectors are defined operationally by their wide distribution across the population of a particular pathogen and their substantial contribution to pathogen virulence. Genomics can be used to identify core effectors, which can then be used to discover new R gene alleles , which can be used in plant breeding for disease resistance. Plant sRNA pathways are understood to be important components of pathogen-associated molecular pattern (PAMP)-triggered immunity (PTI) and effector-triggered immunity (ETI). [ 16 ] [ 17 ] Bacteria‐induced microRNAs (miRNAs) in Arabidopsis have been shown to influence hormonal signalling including auxin, abscisic acid (ABA), jasmonic acid (JA) and salicylic acid (SA). [ 18 ] [ 19 ] Advances in genome‐wide studies revealed a massive adaptation of host miRNA expression patterns after infection by fungal pathogens Fusarium virguliforme , [ 20 ] Erysiphe graminis , [ 21 ] Verticillium dahliae , [ 22 ] and Cronartium quercuum , [ 23 ] and the oomycete Phytophthora sojae . [ 24 ] Changes to sRNA expression in response to fungal pathogens indicate that gene silencing may be involved in this defense pathway. However, there is also evidence that the antifungal defense response to Colletotrichum spp. infection in maize is not entirely regulated by specific miRNA induction, but may instead act to fine-tune the balance between genetic and metabolic components upon infection. 
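The gene-for-gene relationship described above boils down to a matching rule: effector-triggered defense fires only when at least one plant R gene product recognizes a corresponding pathogen effector (Avr product), directly or via a modified host target. The sketch below is a minimal, hypothetical Python rendering of that rule; the R gene and effector names are invented for illustration and do not correspond to real genes.

```python
# Toy illustration of Flor's gene-for-gene relationship (not real gene data).
# ETI is triggered if the plant carries an R gene matching any effector that
# the pathogen strain delivers; otherwise the effectors go unrecognized.

# Hypothetical recognition table: R gene -> the effector (Avr product) it detects.
R_GENE_SPECIFICITY = {
    "R1": "AvrA",
    "R2": "AvrB",
}

def interaction(plant_r_genes, pathogen_effectors):
    """Classify the outcome of one plant genotype meeting one pathogen strain."""
    recognized = {
        eff
        for r in plant_r_genes
        for eff in pathogen_effectors
        if R_GENE_SPECIFICITY.get(r) == eff
    }
    return "resistant (ETI triggered)" if recognized else "susceptible (effectors unrecognized)"

print(interaction({"R1"}, {"AvrA", "AvrC"}))  # matching R/Avr pair -> resistant
print(interaction({"R1"}, {"AvrB", "AvrC"}))  # no matching pair -> susceptible
```

The same toy makes the flip side visible: a strain that loses or alters its one recognized effector drops out of the matching set and regains virulence, which is the evolutionary pressure behind stacking several R genes, discussed later.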
[ citation needed ] Transport of sRNAs during infection is likely facilitated by extracellular vesicles (EVs) and multivesicular bodies (MVBs). [ 25 ] The composition of RNA in plant EVs has not been fully evaluated, but it is likely that they are, in part, responsible for trafficking RNA. Plants can transport viral RNAs, mRNAs, miRNAs and small interfering RNAs (siRNAs) systemically through the phloem. [ 26 ] This process is thought to occur through the plasmodesmata and involves RNA-binding proteins that assist RNA localization in mesophyll cells. Although these proteins have been identified in the phloem together with mRNA, there is no definitive evidence that they mediate long-distance transport of RNAs. [ 27 ] EVs may therefore contribute to an alternate pathway of RNA loading into the phloem, or could possibly transport RNA through the apoplast. [ 28 ] There is also evidence that plant EVs can allow for interspecies transfer of sRNAs by RNA interference mechanisms such as Host-Induced Gene Silencing (HIGS). [ 29 ] [ 30 ] The transport of RNA between plants and fungi seems to be bidirectional, as sRNAs from the fungal pathogen Botrytis cinerea have been shown to target host defense genes in Arabidopsis and tomato. [ 31 ]

In a small number of cases, plant genes are effective against an entire pathogen species, even though that species is pathogenic on other genotypes of that host species. Examples include barley MLO against powdery mildew, wheat Lr34 against leaf rust and wheat Yr36 against wheat stripe rust. An array of mechanisms for this type of resistance may exist depending on the particular gene and plant-pathogen combination. Other reasons for effective plant immunity can include a lack of coadaptation (the pathogen and/or plant lack multiple mechanisms needed for colonization and growth within that host species), or a particularly effective suite of pre-formed defenses. [ citation needed ]

Plant defense signaling is activated by the pathogen-detecting receptors described in an earlier section. [ 5 ] The activated receptors frequently elicit reactive oxygen species and nitric oxide production, calcium, potassium and proton ion fluxes, altered levels of salicylic acid and other hormones, and activation of MAP kinases and other specific protein kinases. [ 10 ] These events in turn typically lead to the modification of proteins that control gene transcription, and the activation of defense-associated gene expression. [ 8 ] Numerous genes and/or proteins as well as other molecules have been identified that mediate plant defense signal transduction. [ 32 ] [ 33 ] Cytoskeleton and vesicle trafficking dynamics help to orient plant defense responses toward the point of pathogen attack.

Plant immune system activity is regulated in part by signaling hormones such as salicylic acid, jasmonic acid, and ethylene, [ 34 ] [ 35 ] and there can be substantial cross-talk among these pathways. [ 34 ] As with many signal transduction pathways, plant gene expression during immune responses can be regulated by degradation. This often occurs when hormone binding to hormone receptors stimulates ubiquitin-associated degradation of repressor proteins that block expression of certain genes. The net result is hormone-activated gene expression. [ 36 ] Ubiquitination plays a central role in cell signaling that regulates processes including protein degradation and immunological response. [ 37 ]
Although one of the main functions of ubiquitin is to target proteins for destruction, it is also useful in signaling pathways, hormone release, apoptosis and translocation of materials throughout the cell. Ubiquitination is a component of several immune responses. Without ubiquitin's proper functioning, the invasion of pathogens and other harmful molecules would increase dramatically due to weakened immune defenses. [ 37 ]

The E3 ubiquitin ligase enzyme is a main component that provides specificity in protein degradation pathways, including immune signaling pathways. [ 36 ] The E3 enzyme components can be grouped by which domains they contain and include several types. [ 38 ] These include the RING and U-box single-subunit ligases, the HECT ligases, and the CRLs (cullin–RING ligases). [ 39 ] [ 40 ] Plant signaling pathways, including immune responses, are controlled by several feedback pathways, which often include negative feedback; they can be regulated by de-ubiquitination enzymes, degradation of transcription factors, and degradation of negative regulators of transcription. [ 36 ] [ 41 ]

Differences in plant disease resistance are often incremental or quantitative rather than qualitative. The term quantitative resistance (QR) refers to plant disease resistance that is controlled by multiple genes and multiple molecular mechanisms that each have small or minor effects on the overall resistance trait. [ 42 ] QR is important in plant breeding because the resulting resistance is often more durable (effective for more years), and more likely to be effective against most or all strains of a particular pathogen. QR is typically effective against one pathogen species or a group of closely related species, rather than being broadly effective against multiple pathogens. [ 42 ] QR is often obtained through plant breeding without knowledge of the causal genetic loci or molecular mechanisms. QR is likely to depend on many of the plant immune system components discussed in this article, on traits that are unique to certain plant-pathogen pairings (such as sensitivity to certain pathogen effectors), and on general plant traits such as leaf surface characteristics or root system or plant canopy architecture. The term QR is synonymous with minor gene resistance. [ 43 ]

Adult plant resistance (APR) is a specialist term referring to quantitative resistance that is not effective in the seedling stage but is effective throughout many remaining plant growth stages. [ 43 ] [ 44 ] [ 42 ] The difference between adult plant resistance and seedling resistance is especially important in annual crops. [ 45 ] Seedling resistance is resistance which begins in the seedling stage of plant development and continues throughout the plant's lifetime. When used by specialists, the term does not refer to resistance that is only active during the seedling stage. "Seedling resistance" is meant to be synonymous with major gene resistance or all stage resistance (ASR), and is used as a contrast to "adult plant resistance". [ 43 ] Seedling resistance is often mediated by single R genes, but not all R genes encode seedling resistance.

Plant breeders emphasize selection and development of disease-resistant plant lines. Plant diseases can also be partially controlled by use of pesticides and by cultivation practices such as crop rotation, tillage, planting density, disease-free seeds and cleaning of equipment, but plant varieties with inherent (genetically determined) disease resistance are generally preferred. [ 2 ]
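To make the contrast drawn above between quantitative resistance and single major-gene resistance concrete, the toy calculation below treats each minor QR locus as shaving a small fraction off disease severity, while a major R gene either nearly eliminates disease (against a matching strain) or does nothing (against a non-matching strain). All effect sizes are invented for illustration only; real loci differ widely and do not combine this neatly.

```python
# Toy contrast between qualitative (major R gene) and quantitative resistance.
# All numbers are invented placeholders, not measured effect sizes.

def remaining_disease_major_r(pathogen_matches_r_gene):
    # A matching R gene typically gives near-complete resistance; a
    # non-matching (or resistance-breaking) strain is unaffected by it.
    return 0.05 if pathogen_matches_r_gene else 1.0

def remaining_disease_quantitative(locus_effects):
    # Each small-effect locus reduces disease severity by a few percent;
    # the combined effect is partial but applies regardless of strain.
    severity = 1.0
    for effect in locus_effects:
        severity *= (1.0 - effect)
    return severity

qr_loci = [0.08, 0.05, 0.10, 0.04, 0.07]  # five hypothetical minor loci
print(remaining_disease_major_r(True))                     # ~0.05: strong but strain-specific
print(remaining_disease_major_r(False))                    # 1.0: no protection against other strains
print(round(remaining_disease_quantitative(qr_loci), 2))   # ~0.70: partial but broad
```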
Breeding for disease resistance began when plants were first domesticated. Breeding efforts continue because pathogen populations are under selection pressure and evolve increased virulence, pathogens move (or are moved) to new areas, changing cultivation practices or climate favor some pathogens and can reduce resistance efficacy, and plant breeding for other traits can disrupt prior resistance. [ 46 ] A plant line with acceptable resistance against one pathogen may lack resistance against others. Breeding for resistance typically proceeds in several steps, from identifying sources of resistance to incorporating and validating them in well-adapted varieties.

Resistance is termed durable if it continues to be effective over multiple years of widespread use as pathogen populations evolve. "Vertical resistance" is specific to certain races or strains of a pathogen species, is often controlled by single R genes and can be less durable. Horizontal or broad-spectrum resistance against an entire pathogen species is often only incompletely effective, but more durable, and is often controlled by many genes that segregate in breeding populations. [ 2 ] Durability of resistance is important even when future improved varieties are expected to be on the way: the average time from human recognition of a new fungal disease threat to the release of a resistant crop for that pathogen is at least twelve years. [ 47 ] [ 48 ]

Crops such as potato, apple, banana, and sugarcane are often propagated by vegetative reproduction to preserve highly desirable plant varieties, because for these species, outcrossing seriously disrupts the preferred traits. See also asexual propagation. Vegetatively propagated crops may be among the best targets for resistance improvement by the biotechnology method of plant transformation to manage genes that affect disease resistance. [ 1 ]

Scientific breeding for disease resistance originated with Sir Rowland Biffen, who identified a single recessive gene for resistance to wheat yellow rust. Nearly every crop was then bred to include disease resistance (R) genes, many by introgression from compatible wild relatives. [ 1 ]

The term GM ("genetically modified") is often used as a synonym of transgenic to refer to plants modified using recombinant DNA technologies. Plants with transgenic/GM resistance against insect pests have been extremely successful as commercial products, especially in maize and cotton, and are planted annually on over 20 million hectares in over 20 countries worldwide [ 49 ] (see also genetically modified crops). Transgenic plant disease resistance against microbial pathogens was first demonstrated in 1986. Expression of viral coat protein gene sequences conferred virus resistance via small RNAs. This proved to be a widely applicable mechanism for inhibiting viral replication. [ 50 ] Combining coat protein genes from three different viruses, scientists developed squash hybrids with field-validated, multiviral resistance. Similar levels of resistance to this variety of viruses had not been achieved by conventional breeding.

A similar strategy was deployed to combat papaya ringspot virus, which by 1994 threatened to destroy Hawaii's papaya industry. Field trials demonstrated excellent efficacy and high fruit quality. By 1998 the first transgenic virus-resistant papaya was approved for sale. Disease resistance has been durable for over 15 years. Transgenic papaya accounts for ~85% of Hawaiian production. The fruit is approved for sale in the U.S., Canada, and Japan.
Potato lines expressing viral replicase sequences that confer resistance to potato leafroll virus were sold under the trade names NewLeaf Y and NewLeaf Plus, and were widely accepted in commercial production in 1999–2001, until McDonald's Corp. decided not to purchase GM potatoes and Monsanto decided to close their NatureMark potato business. [ 51 ] NewLeaf Y and NewLeaf Plus potatoes carried two GM traits, as they also expressed Bt-mediated resistance to Colorado potato beetle. No other crop with engineered disease resistance against microbial pathogens had reached the market by 2013, although more than a dozen were in some state of development and testing.

Research aimed at engineered resistance follows multiple strategies. One is to transfer useful PRRs into species that lack them. Identification of functional PRRs and their transfer to a recipient species that lacks an orthologous receptor could provide a general route to broadened PRR repertoires. For example, the Arabidopsis PRR EF-Tu receptor (EFR) recognizes the bacterial translation elongation factor EF-Tu. Research performed at the Sainsbury Laboratory demonstrated that deployment of EFR into either Nicotiana benthamiana or Solanum lycopersicum (tomato), which cannot recognize EF-Tu, conferred resistance to a wide range of bacterial pathogens. EFR expression in tomato was especially effective against the widespread and devastating soil bacterium Ralstonia solanacearum. [ 52 ] Similarly, the tomato PRR Verticillium 1 (Ve1) gene can be transferred from tomato to Arabidopsis, where it confers resistance to race 1 Verticillium isolates. [ 1 ]

The second strategy attempts to deploy multiple NLR genes simultaneously, a breeding strategy known as stacking. Cultivars generated by either DNA-assisted molecular breeding or gene transfer will likely display more durable resistance, because pathogens would have to mutate multiple effector genes. DNA sequencing allows researchers to functionally "mine" NLR genes from multiple species/strains. [ 1 ]

The avrBs2 effector gene comes from Xanthomonas perforans, the causal agent of bacterial spot disease of pepper and tomato. The first "effector-rationalized" search for a potentially durable R gene followed the finding that avrBs2 is found in most disease-causing Xanthomonas species and is required for pathogen fitness. The Bs2 NLR gene from the wild pepper, Capsicum chacoense, was moved into tomato, where it inhibited pathogen growth. Field trials demonstrated robust resistance without bactericidal chemicals. However, rare strains of Xanthomonas overcame Bs2-mediated resistance in pepper by acquisition of avrBs2 mutations that avoid recognition but retain virulence. Stacking R genes that each recognize a different core effector could delay or prevent adaptation. [ 1 ]

More than 50 loci in wheat strains confer disease resistance against wheat stem, leaf and yellow stripe rust pathogens. The Stem rust 35 (Sr35) NLR gene, cloned from a diploid relative of cultivated wheat, Triticum monococcum, provides resistance to wheat rust isolate Ug99. Similarly, Sr33, from the wheat relative Aegilops tauschii, encodes a wheat ortholog to barley Mla powdery mildew–resistance genes. Both genes are unusual in wheat and its relatives. Combined with the Sr2 gene that acts additively with at least Sr33, they could provide durable disease resistance to Ug99 and its derivatives. [ 1 ]
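The durability argument for stacking sketched above, that a pathogen must escape several recognition events at once, can be illustrated with a deliberately simplistic calculation. It assumes escaping each stacked R gene is an independent event with the same made-up probability, which real pathogen populations will not obey exactly, but it conveys why gene combinations are expected to hold up longer than single genes.

```python
# Back-of-the-envelope illustration of why stacked R genes may be more durable.
# p_escape_one is a hypothetical probability that a pathogen lineage escapes
# recognition by one R gene (e.g. by mutating the matching effector while
# keeping virulence). Escaping n independently acting stacked R genes then
# requires all n escapes, with probability roughly p_escape_one ** n.

p_escape_one = 1e-4  # invented value, for illustration only

for n_stacked in (1, 2, 3):
    print(n_stacked, "stacked R gene(s): escape probability ~", p_escape_one ** n_stacked)
```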
Another class of plant disease resistance genes opens a "trap door" that quickly kills invaded cells, stopping pathogen proliferation. Xanthomonas and Ralstonia transcription activator–like (TAL) effectors are DNA-binding proteins that activate host gene expression to enhance pathogen virulence. Both the rice and pepper lineages independently evolved TAL-effector binding sites that instead act as an executioner, inducing hypersensitive host cell death when up-regulated. Xa27 from rice, and Bs3 and Bs4c from pepper, are such "executor" (or "executioner") genes that encode non-homologous plant proteins of unknown function. Executor genes are expressed only in the presence of a specific TAL effector. [ 1 ]

Engineered executor genes were demonstrated by successfully redesigning the pepper Bs3 promoter to contain two additional binding sites for TAL effectors from disparate pathogen strains. Subsequently, an engineered executor gene was deployed in rice by adding five TAL effector binding sites to the Xa27 promoter. The synthetic Xa27 construct conferred resistance against Xanthomonas bacterial blight and bacterial leaf streak species. [ 1 ]

Most plant pathogens reprogram host gene expression patterns to directly benefit the pathogen. Reprogrammed genes required for pathogen survival and proliferation can be thought of as "disease-susceptibility genes." Recessive resistance genes are disease-susceptibility candidates. For example, a mutation disabled an Arabidopsis gene encoding pectate lyase (involved in cell wall degradation), conferring resistance to the powdery mildew pathogen Golovinomyces cichoracearum. Similarly, the barley MLO gene and spontaneously mutated pea and tomato MLO orthologs also confer powdery mildew resistance. [ 1 ]

Lr34 is a gene that provides partial resistance to leaf and yellow rusts and powdery mildew in wheat. Lr34 encodes an adenosine triphosphate (ATP)–binding cassette (ABC) transporter. The dominant allele that provides disease resistance was recently found in cultivated wheat (not in wild strains) and, like MLO, provides broad-spectrum resistance in barley. [ 1 ]

Natural alleles of the host translation initiation factors eIF4E and eIF4G are also recessive viral-resistance genes. Some have been deployed to control potyviruses in barley, rice, tomato, pepper, pea, lettuce, and melon. The discovery prompted a successful mutant screen for chemically induced eIF4E alleles in tomato. [ 1 ]

Natural promoter variation can lead to the evolution of recessive disease-resistance alleles. For example, the recessive resistance gene xa13 in rice is an allele of Os-8N3. Os-8N3 is transcriptionally activated by Xanthomonas oryzae pv. oryzae strains that express the TAL effector PthXo1. The xa13 gene has a mutated effector-binding element in its promoter that eliminates PthXo1 binding and renders these lines resistant to strains that rely on PthXo1. This finding also demonstrated that Os-8N3 is required for susceptibility. [ 1 ]

Xa13/Os-8N3 is required for pollen development, showing that such mutant alleles can be problematic should the disease-susceptibility phenotype alter function in other processes. However, mutations in the Os11N3 (OsSWEET14) TAL effector–binding element were made by fusing TAL effectors to nucleases (TALENs). Genome-edited rice plants with altered Os11N3 binding sites remained resistant to Xanthomonas oryzae pv. oryzae while retaining normal developmental function. [ 1 ]
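Both the executor genes and the TAL-effector-dependent susceptibility genes described above hinge on the same simple trigger: a TAL effector binds an effector-binding element (EBE) in a host promoter and switches the downstream gene on. The sketch below is a minimal Python encoding of that trap-door logic for an executor gene; the EBE sequences and effector names are invented and are not based on the real Bs3, Xa27 or Os11N3 promoters. The same logic, read in reverse, explains the susceptibility-gene cases: deleting or mutating the EBE (as in xa13 or the edited Os11N3 lines) prevents induction and so confers resistance.

```python
# Toy model of an "executor" resistance gene behind an engineered promoter.
# All EBE sequences and TAL effector names below are hypothetical placeholders.

# Promoter engineered to carry EBEs recognized by TAL effectors from two strains.
ENGINEERED_PROMOTER_EBES = {"TATAAACCCT", "TATACCCACT"}

# Which EBE each hypothetical pathogen TAL effector binds.
TAL_EFFECTOR_TARGETS = {
    "TalStrainA": "TATAAACCCT",
    "TalStrainB": "TATACCCACT",
    "TalStrainC": "GGGTTTAAAC",  # no matching EBE in the engineered promoter
}

def infection_outcome(delivered_effectors):
    """Executor gene fires (hypersensitive cell death) if any delivered TAL
    effector binds an EBE present in the engineered promoter."""
    triggered = any(
        TAL_EFFECTOR_TARGETS.get(eff) in ENGINEERED_PROMOTER_EBES
        for eff in delivered_effectors
    )
    return "cell death, infection contained" if triggered else "no trigger, infection proceeds"

print(infection_outcome({"TalStrainA"}))  # matches an EBE -> executor fires
print(infection_outcome({"TalStrainC"}))  # no matching EBE -> pathogen escapes
```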
RNA silencing-based resistance is a powerful tool for engineering resistant crops. The advantage of RNAi as a novel gene therapy against fungal, viral, and bacterial infection in plants lies in the fact that it regulates gene expression via messenger RNA degradation, translation repression and chromatin remodelling through small non-coding RNAs. Mechanistically, the silencing processes are guided by processing products of the double-stranded RNA (dsRNA) trigger, which are known as small interfering RNAs and microRNAs. [ 53 ]

Temperature significantly affects plant resistance to viruses. For example, tobacco plants carrying the N gene are resistant to tobacco mosaic virus (TMV) but become systemically infected at temperatures above 28°C. Similarly, Capsicum chinense plants carrying the Tsw gene can become systemically infected with Tomato spotted wilt virus (TSWV) at 32°C. In the case of Beet necrotic yellow vein virus (BNYVV), plants expressing the BvGLYR1 gene showed higher virus accumulation at 22°C compared to 30°C, indicating that temperature influences the effectiveness of this gene in virus resistance. [ 54 ]

Among the thousands of species of plant pathogenic microorganisms, only a small minority have the capacity to infect a broad range of plant species. Most pathogens instead exhibit a high degree of host-specificity. Non-host plant species are often said to express non-host resistance. The term host resistance is used when a pathogen species can be pathogenic on the host species but certain strains of that plant species resist certain strains of the pathogen species. The causes of host resistance and non-host resistance can overlap. Pathogen host range is determined, among other things, by the presence of appropriate effectors that allow colonization of a particular host. [ 5 ] Pathogen host range can change quite suddenly if, for example, the pathogen's capacity to synthesize a host-specific toxin or effector is gained by gene shuffling/mutation, or by horizontal gene transfer. [ 55 ] [ 56 ]

Native plant populations are often characterized by substantial genotype diversity and dispersed growth (in a mixture with many other plant species). They have also undergone plant-pathogen coevolution. Hence, as long as novel pathogens are not introduced/do not evolve, such populations generally exhibit only a low incidence of severe disease epidemics. [ 57 ] Monocrop agricultural systems provide an ideal environment for pathogen evolution, because they offer a high density of target specimens with similar/identical genotypes. [ 57 ] The rise in mobility stemming from modern transportation systems provides pathogens with access to more potential targets. [ 57 ] Climate change can alter the viable geographic range of pathogen species and cause some diseases to become a problem in areas where the disease was previously less important. [ 57 ]

These factors make modern agriculture more prone to disease epidemics. Common solutions include constant breeding for disease resistance, use of pesticides, use of border inspections and plant import restrictions, maintenance of significant genetic diversity within the crop gene pool (see crop diversity), and constant surveillance to accelerate initiation of appropriate responses. Some pathogen species have much greater capacity to overcome plant disease resistance than others, often because of their ability to evolve rapidly and to disperse broadly. [ 57 ]
Chestnut blight was first noticed in 1904 on American chestnut trees growing in what is now the Bronx Zoo. For years afterwards, both the identity of the pathogen and the appropriate approach to its control were debated. The earliest attempts to control the disease relied on chemical or physical measures: fungicides were applied, infected limbs were cut from trees, and infected trees were removed entirely from inhabited areas to prevent them from infecting others. All of these strategies proved unsuccessful. Quarantine measures were also put into place, aided by the passage of the Plant Quarantine Act, yet chestnut blight remained a severe problem as it moved rapidly through the dense chestnut forests. In 1914, the idea of inducing blight resistance in the trees through various means, including breeding, began to be considered. [ 58 ]
https://en.wikipedia.org/wiki/Plant_disease_resistance