Pancritical rationalism (literally "criticism of all things", from pan- , "all"; also known as PCR ), also called comprehensively critical rationalism ( CCR ), is a development of critical rationalism and panrationalism originated by William Warren Bartley in his book The Retreat to Commitment . PCR attempts to work around the problem of ultimate commitment or infinite regress by decoupling criticism and justification. A pancritical rationalist holds all positions open to criticism, including PCR itself. Such a position in principle never resorts to appeal to authority for justification of stances, since all authorities are held to be intrinsically fallible .
https://en.wikipedia.org/wiki/Pancritical_rationalism
The Pancyprian Union of Chemists (PUC; Greek : Παγκύπρια Ένωση Επιστημόνων Χημικών (ΠΕΕΧ) ) is the chemical society for Cypriot chemists. It comprises a board of nine members which is elected every two years by the General Assembly of the PUC. [ 1 ] The PUC was founded in 1960, coinciding with the period during which the Republic of Cyprus was established. The PUC currently has over 550 members. It also publishes the Greek-language magazine "Chemica Nea" (Chemical News). This magazine is distributed quarterly to PUC members plus approximately one hundred additional subscribers throughout Cyprus and Greece . [ 2 ] [ 3 ] The society has various aims centred on promoting chemistry education and research within Cyprus and on representing Cypriot chemists internationally. The PUC protects and regulates the Cypriot chemical profession through the Chemists' Registration Council. [ 1 ] It also promotes information exchange between chemists across the country. The PUC participates in an advisory capacity on several committees within the public sector, and is affiliated with international bodies including the International Union of Pure and Applied Chemistry (IUPAC) and the European Association for Chemical and Molecular Sciences (EuCheMS). [ 1 ] Through this, the Union supplies opinion and advice on a range of public-interest topics such as education in the chemical sciences and the environment. [ 1 ] In education, the PUC works to improve the standard of teaching in the chemical sciences and advises on the chemistry curriculum taught within schools. [ 1 ] Together with the Union of Chemistry Teachers in Secondary Education, the PUC organises the local Chemical Olympiads within Cyprus and facilitates educational and outreach events such as local and international seminars and conferences which are open to the public. [ 4 ]
https://en.wikipedia.org/wiki/Pancyprian_Union_of_Chemists
The Pandya theorem is a good illustration of the richness of information forthcoming from a judicious use of subtle symmetry principles connecting vastly different sectors of nuclear systems. It is a tool for calculations regarding both particles and holes. The Pandya theorem provides a theoretical framework for connecting the energy levels in jj coupling of a nucleon -nucleon and nucleon-hole system. It is also referred to in the literature as the Pandya transformation or Pandya relation. It provides a very useful tool for extending shell model calculations across shells, for systems involving both particles and holes. The Pandya transformation, which involves angular momentum re-coupling coefficients ( Racah coefficients ), can be used to deduce one-particle one-hole (ph) matrix elements. By assuming the wave function to be "pure" (no configuration mixing), the Pandya transformation can be used to set an upper bound on the contributions of 3-body forces to the energies of nuclear states . It was first published in 1956: S.P. Pandya, "Nucleon-Hole Interaction in jj Coupling", Phys. Rev. 103, 956 (1956), received 9 May 1956, whose abstract reads: "A theorem connecting the energy levels in jj coupling of a nucleon-nucleon and nucleon-hole system is derived, and applied in particular to Cl38 and K40." Since it is by no means obvious how to extract "pairing correlations" from realistic shell-model calculations, the Pandya transform is applied in such cases. The "pairing Hamiltonian" is an integral part of the residual shell-model interaction. The shell-model Hamiltonian is usually written in the p-p representation, but it can also be transformed to the p-h representation by means of the Pandya transformation. This means that the high- J interaction between pairs can translate into the low- J interaction in the p-h channel. It is only in mean-field theory that the division into "particle-hole" and "particle-particle" channels appears naturally. Some features of the Pandya transformation are as follows: the Pandya theorem establishes a relation between particle-particle and particle-hole spectra. Here one considers the energy levels of two nucleons, with one in orbit j and another in orbit j', and relates them to the energy levels of a nucleon in orbit j and a nucleon hole in orbit j'. Assuming pure j-j coupling and two-body interaction, Pandya (1956) derived such a relation (a standard form is sketched below). It was successfully tested in the spectra of Cl38 and K40; Figure 3 of the original paper shows the results, where the discrepancy between the calculated and observed spectra is less than 25 keV. [ 1 ]
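The relation itself does not survive in the text above. As a sketch, one common textbook statement of the Pandya transformation is given below; the notation is an assumption on my part: E_{j1 j2}(J') denotes the particle-particle interaction energies for orbits j1, j2 coupled to angular momentum J', E_{j1 j2^{-1}}(J) the corresponding particle-hole energies, and W a Racah coefficient.

% Particle-hole energies expressed through particle-particle energies,
% via an angular-momentum recoupling (Racah) coefficient.
\begin{equation}
E_{j_1 j_2^{-1}}(J) \;=\; -\sum_{J'} (2J'+1)\, W(j_1\, j_2\, j_1\, j_2;\, J\, J')\; E_{j_1 j_2}(J')
\end{equation}

The overall minus sign is what turns an attractive high-J pairing interaction in the particle-particle channel into a low-J interaction in the particle-hole channel, as described in the paragraph above.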
https://en.wikipedia.org/wiki/Pandya_theorem
In architecture , a paned window is a window that is divided into panes of glass , usually rectangular pieces of glass that are joined to create the glazed element of the window. Window panes are often separated from other panes (or "lights") by lead strips, or by glazing bars, moulded wooden strips known as muntins in the US. [ 1 ] Paned windows originally existed because of the difficulty of making large flat sheets of glass using traditional glassblowing techniques, which typically did not produce flat sheets larger than 8 inches square. [ 2 ] Modern glass manufacturing processes such as float glass make window panes unnecessary, but paned windows are still used as an architectural feature for aesthetic reasons.
https://en.wikipedia.org/wiki/Paned_window_(architecture)
A panel PC , also known as a panel-mounted computer, touch panel PC, or industrial panel PC, is a combined industrial PC and computer monitor , so that the entire computer can be mounted in any manner available for mounting a display alone. It eliminates the need for a separate space for the computer. A panel PC is typically ruggedized for use in industrial or high-traffic settings, and industrial panel PCs are built for applications demanding higher dependability. Panel PCs offer a variety of mounting options including panel mounting, VESA mount ( Flat Display Mounting Interface ), rackmount, or DIN rail mount. Panel PCs often come with mounting brackets or flanges for direct installation onto a panel, or a cutout in an enclosure such as an electrical enclosure ; such enclosures include electrical cabinets, control panels, and machinery cabinets. Cooling is a consideration when mounting a panel PC in an electrical enclosure or rack. [ 1 ] [ 2 ] Panel PCs are commonly used in industrial automation, manufacturing, process control, and machinery control applications, and may include a range of computer ports and connectivity options such as serial port , EtherNet/IP , CAN bus , and Modbus . A panel PC typically has a touchscreen (touch panel PC) which enables users to interact with the computer directly on the display, eliminating the need for separate input devices such as a keyboard or a computer mouse . Panel PCs come in various display sizes, ranging from as small as 6 inches up to 24 inches. Heavy-duty panel PC models have front panels sealed to be waterproof according to IP67 standards, and include models which are explosion-proof for installation in hazardous environments. [ 3 ]
https://en.wikipedia.org/wiki/Panel_PC
Panel edge staining is a naturally occurring problem that affects anodized aluminium and stainless steel panelling and façades . It is semi-permanent staining that dulls the panel or façade's surface (in particular the edges of the panelling), reducing the natural lustre and shine produced by the anodizing processes used on the aluminium. Panel edge staining may also appear on powder-coated aluminium, painted aluminium, stainless steel and titanium surfaces. Panel edge staining is the by-product of the build-up of dirt and pollution. It is especially noticeable on buildings using metallic façades in Asia and in regions close to the equator (such as Florida or South East Asia ), as higher rates of air pollution , [ 1 ] high levels of humidity and consistent rainfall encourage panel edge staining to develop. The unique top-to-bottom stain pattern of panel edge staining is caused when the build-up of dirt and pollution is washed from the higher panels to the lower panels of a surface by natural precipitation . [ 2 ]
https://en.wikipedia.org/wiki/Panel_edge_staining
Pangenesis was Charles Darwin 's hypothetical mechanism for heredity , in which he proposed that each part of the body continually emitted its own type of small organic particles called gemmules that aggregated in the gonads , contributing heritable information to the gametes . [ 1 ] He presented this 'provisional hypothesis' in his 1868 work The Variation of Animals and Plants Under Domestication , intending it to fill what he perceived as a major gap in evolutionary theory at the time. The etymology of the word comes from the Greek words pan (a prefix meaning "whole", "encompassing") and genesis ("birth") or genos ("origin"). Pangenesis mirrored ideas originally formulated by Hippocrates and other pre-Darwinian scientists, but used new concepts such as cell theory , explaining cell development as beginning with gemmules, which were specified to be necessary for the occurrence of new growths in an organism, both in initial development and in regeneration. [ 2 ] It also accounted for regeneration and the Lamarckian concept of the inheritance of acquired characteristics, as a body part altered by the environment would produce altered gemmules. This made pangenesis popular among the neo-Lamarckian school of evolutionary thought. [ 3 ] The hypothesis was made effectively obsolete after the 1900 rediscovery among biologists of Gregor Mendel 's theory of the particulate nature of inheritance . Pangenesis was similar to ideas put forth by Hippocrates , Democritus and other pre-Darwinian scientists in proposing that the whole of parental organisms participate in heredity (thus the prefix pan ). [ 4 ] Darwin wrote that Hippocrates' pangenesis was "almost identical with mine—merely a change of terms—and an application of them to classes of facts necessarily unknown to the old philosopher." [ 5 ] The historian of science Conway Zirkle wrote that: The hypothesis of pangenesis is as old as the belief in the inheritance of acquired characters. It was endorsed by Hippocrates , Democritus , Galen , Clement of Alexandria , Lactantius , St. Isidore of Seville , Bartholomeus Anglicus , St. Albert the Great , St. Thomas of Aquinas , Peter of Crescentius , Paracelsus , Jerome Cardan , Levinus Lemnius , Venette , John Ray , Buffon , Bonnet , Maupertuis , von Haller and Herbert Spencer . [ 4 ] Zirkle demonstrated that the idea of inheritance of acquired characteristics had become fully accepted by the 16th century and remained immensely popular through to the time of Lamarck's work, at which point it began to draw more criticism due to lack of hard evidence. [ 4 ] He also stated that pangenesis was the only scientific explanation ever offered for this concept, developing from Hippocrates' belief that "the semen was derived from the whole body." [ 4 ] In the 13th century, pangenesis was commonly accepted on the principle that semen was a refined version of food unused by the body, which by the 15th and 16th centuries had translated into widespread use of pangenetic principles in medical literature, especially in gynecology. [ 4 ] Later pre-Darwinian applications of the idea included hypotheses about the origin of racial differentiation. [ 4 ] A theory put forth by Pierre Louis Maupertuis in 1745 called for particles from both parents governing the attributes of the child, although some historians have called his remarks on the subject cursory and vague.
[ 6 ] [ 7 ] In 1749, the French naturalist Georges-Louis Leclerc, Comte de Buffon developed a hypothetical system of heredity much like Darwin's pangenesis, wherein 'organic molecules' were transferred to offspring during reproduction and stored in the body during development. [ 7 ] [ 8 ] Commenting on Buffon's views, Darwin stated, "If Buffon had assumed that his organic molecules had been formed by each separate unit throughout the body, his view and mine would have been very closely similar." [ 4 ] In 1801, Erasmus Darwin advocated a hypothesis of pangenesis in the third edition of his book Zoonomia . [ 9 ] In 1809, Jean-Baptiste Lamarck in his Philosophie Zoologique put forth evidence for the idea that characteristics acquired during the lifetime of an organism, from either environmental or behavioural effects, may be passed on to the offspring. Charles Darwin first had significant contact with Lamarckism during his time at the University of Edinburgh Medical School in the late 1820s, both through Robert Edmond Grant , whom he assisted in research, and through the journals of his grandfather Erasmus. [ 10 ] Darwin's first known writings on the topic of Lamarckian ideas as they related to inheritance are found in a notebook he opened in 1837, also entitled Zoonomia . [ 11 ] Historian Jonathan Hodge states that the theory of pangenesis itself first appeared in Darwin's notebooks in 1841. [ 12 ] In 1861, the Irish physician Henry Freke developed a variant of pangenesis in his book Origin of Species by Means of Organic Affinity . [ 13 ] Freke proposed that all life was developed from microscopic organic agents which he named granules , which existed as 'distinct species of organizing matter' and would develop into different biological structures. [ 14 ] Four years before the publication of Variation , in his 1864 book Principles of Biology , Herbert Spencer proposed a theory of "physiological units" similar to Darwin's gemmules, which likewise were said to be related to specific body parts and responsible for the transmission of characteristics of those body parts to offspring. [ 5 ] He supported the Lamarckian idea of transmission of acquired characteristics. Darwin had debated for an extended period whether to publish a theory of heredity, owing to its highly speculative nature. He decided to include pangenesis in Variation after sending a 30-page manuscript to his close friend and supporter Thomas Huxley in May 1865; it was met with significant criticism from Huxley that made Darwin even more hesitant. [ 15 ] However, Huxley eventually advised Darwin to publish, writing: "Somebody rummaging among your papers half a century hence will find Pangenesis & say 'See this wonderful anticipation of our modern Theories—and that stupid ass, Huxley, prevented his publishing them'" [ 16 ] Darwin's initial version of pangenesis appeared in the first edition of Variation in 1868, and was later reworked for the publication of a second edition in 1875. Darwin's pangenesis theory attempted to explain the process of sexual reproduction , inheritance of traits, and complex developmental phenomena such as cellular regeneration in a unified mechanistic structure. [ 15 ] [ 17 ] Yongsheng Liu wrote that in modern terms, pangenesis deals with issues of "dominance inheritance, graft hybridization , reversion , xenia, telegony , the inheritance of acquired characters, regeneration and many groups of facts pertaining to variation, inheritance and development."
[ 18 ] Mechanistically, Darwin proposed pangenesis to occur through the transfer of organic particles which he named 'gemmules.' Gemmules, which he also sometimes referred to as plastitudes, [ 19 ] pangenes, granules, [ 20 ] or germs, were supposed to be shed by the organs of the body and carried in the bloodstream to the reproductive organs, where they accumulated in the germ cells or gametes. [ 21 ] Their accumulation was thought to occur by some sort of a 'mutual affinity.' [ 15 ] Each gemmule was said to be specifically related to a certain body part; as described, they did not contain information about the entire organism. [ 20 ] The different types were assumed to be dispersed through the whole body, and capable of self-replication given 'proper nutriment'. When passed on to offspring via the reproductive process, gemmules were thought to be responsible for developing into each part of an organism and expressing characteristics inherited from both parents. [ 20 ] Darwin thought this to occur in a literal sense: he explained cell proliferation as progressing by gemmules binding to more developed cells of their same character and maturing. In this sense, the uniqueness of each individual would be due to their unique mixture of their parents' gemmules, and therefore characters. [ 20 ] Similarity to one parent over the other could be explained by a quantitative superiority of one parent's gemmules. [ 18 ] Yongsheng Liu points out that Darwin knew of cells' ability to multiply by self-division, so it is unclear how Darwin supposed the two proliferation mechanisms to relate to each other. [ 18 ] He did clarify in a later statement that he had always supposed gemmules to bind to and proliferate from developing cells only, not mature ones. [ 22 ] In a letter to J. D. Hooker in 1870, Darwin hypothesized that gemmules might be able to survive and multiply outside of the body. [ 23 ] Some gemmules were thought to remain dormant for generations, whereas others were routinely expressed by all offspring. Every child was built up from selective expression of the mixture of the parents' and grandparents' gemmules coming from either side. Darwin likened this to gardening: a flowerbed could be sprinkled with seeds "most of which soon germinate, some lie for a period dormant, whilst others perish." [ 24 ] He did not claim gemmules were in the blood, although his theory was often interpreted in this way. Responding to Fleeming Jenkin 's review of On the Origin of Species , he argued that pangenesis would permit the preservation of some favourable variations in a population so that they would not die out through blending. [ 25 ] Darwin thought that environmental effects that caused altered characteristics would lead to altered gemmules for the affected body part. The altered gemmules would then have a chance of being transferred to offspring, since they were assumed to be produced throughout an organism's life. [ 2 ] Thus, pangenesis theory allowed for the Lamarckian idea of transmission of characteristics acquired through use and disuse. Accidental gemmule development in incorrect parts of the body could explain deformations and the 'monstrosities' Darwin cited in Variation .
[ 2 ] Hugo de Vries characterized his own version of pangenesis theory in his 1889 book Intracellular Pangenesis with two propositions, of which he accepted only the first: that distinct hereditary characters are borne by distinct material particles within the cell. He rejected the second, Darwin's proposal that such particles are transported from the body's cells to the germ cells. The historian of science Janet Browne points out that while Spencer and Carl von Nägeli also put forth ideas for systems of inheritance involving gemmules, their version of gemmules differed from Darwin's in that it contained "a complete microscopic blueprint for an entire creature." [ 27 ] Spencer published his theory of "physiological units" four years prior to Darwin's publication of Variation . [ 4 ] Browne adds that Darwin believed specifically in gemmules from each body part because they might explain how environmental effects could be passed on as characteristics to offspring. [ 27 ] Interpretations and applications of pangenesis continued to appear frequently in medical literature up until Weismann's experiments and his subsequent publication on germ-plasm theory in 1892. [ 4 ] For instance, an address by Huxley spurred substantial work by Dr. James Ross in linking ideas found in Darwin's pangenesis to the germ theory of disease . [ 28 ] Ross cites the work of both Darwin and Spencer as key to his application of pangenetic theory. [ 28 ] Darwin's half-cousin Francis Galton conducted wide-ranging inquiries into heredity which led him to refute Charles Darwin's hypothetical theory of pangenesis. In consultation with Darwin, he set out to see if gemmules were transported in the blood. In a long series of experiments from 1869 to 1871, he transfused blood between dissimilar breeds of rabbits, and examined the features of their offspring. He found no evidence of characters transmitted in the transfused blood. [ 29 ] Galton was troubled, because he had begun the work in good faith, intending to prove Darwin right, and had praised pangenesis in Hereditary Genius in 1869. Cautiously, he criticized his cousin's theory, qualifying his remarks by saying that Darwin's gemmules, which he called "pangenes", might be temporary inhabitants of the blood that his experiments had failed to pick up. [ 30 ] Darwin challenged the validity of Galton's experiment, giving his reasons in an article published in Nature where he wrote: [ 31 ] "Now, in the chapter on Pangenesis in my Variation of Animals and Plants under Domestication, I have not said one word about the blood, or about any fluid proper to any circulating system. It is, indeed, obvious that the presence of gemmules in the blood can form no necessary part of my hypothesis; for I refer in illustration of it to the lowest animals, such as the Protozoa, which do not possess blood or any vessels; and I refer to plants in which the fluid, when present in the vessels, cannot be considered as true blood." He goes on to admit: "Nevertheless, when I first heard of Mr. Galton's experiments, I did not sufficiently reflect on the subject, and saw not the difficulty of believing in the presence of gemmules in the blood." [ 31 ] After the circulation of Galton's results, the perception of pangenesis quickly changed to severe skepticism, if not outright disbelief.
[ 18 ] August Weismann 's idea, set out in his 1892 book Das Keimplasma: eine Theorie der Vererbung (The Germ Plasm: a Theory of Inheritance), [ 32 ] was that the hereditary material, which he called the germ plasm , and the rest of the body (the soma ) had a one-way relationship: the germ-plasm formed the body, but the body did not influence the germ-plasm, except indirectly in its participation in a population subject to natural selection. This distinction is commonly referred to as the Weismann Barrier . If correct, this made Darwin's pangenesis wrong and Lamarckian inheritance impossible. His experiment on mice, cutting off their tails and showing that their offspring had normal tails across multiple generations, was proposed as a proof of the non-existence of Lamarckian inheritance, although Peter Gauthier has argued that Weismann's experiment showed only that injury did not affect the germ plasm and neglected to test the effect of Lamarckian use and disuse. [ 33 ] Weismann argued strongly and dogmatically for Darwinism and against neo-Lamarckism, polarising opinions among other scientists. [ 3 ] This increased anti-Darwinian feeling, contributing to its eclipse . [ 34 ] [ 35 ] Darwin's pangenesis theory was widely criticised, in part for its Lamarckian premise that parents could pass on traits acquired in their lifetime. [ 36 ] Conversely, the neo-Lamarckians of the time seized upon pangenesis as evidence to support their case. [ 3 ] The Italian botanist Federico Delpino's objection that gemmules' ability to self-divide is contrary to their supposedly innate nature gained considerable traction; however, Darwin was dismissive of this criticism, remarking that the particulate agents of smallpox and scarlet fever seem to have such characteristics. [ 22 ] Lamarckism fell from favour after August Weismann 's research in the 1880s indicated that changes from use (such as lifting weights to increase muscle mass) and disuse (such as being lazy and becoming weak) were not heritable. [ 37 ] [ 38 ] However, some scientists continued to voice their support in spite of Galton's and Weismann's results: notably, in 1900 Karl Pearson wrote that pangenesis "is no more disproved by the statement that 'gemmules have not been found in the blood,' than the atomic theory is disproved by the fact that no atoms have been found in the air." [ 39 ] Finally, the rediscovery of Mendel's laws of inheritance in 1900 led to pangenesis being fully set aside. [ 40 ] Julian Huxley observed that the later discovery of chromosomes and the research of T. H. Morgan also made pangenesis untenable. [ 41 ] Some of Darwin's pangenesis principles do relate to heritable aspects of phenotypic plasticity , although the status of gemmules as a distinct class of organic particles has been firmly rejected. However, starting in the 1950s, several research groups revisiting Galton's experiments reported that heritable characteristics could arise in rabbits and chickens following DNA injection or blood transfusion. [ 42 ] This type of research originated in the Soviet Union in the late 1940s in the work of Sopikov and others, and was later corroborated by researchers in Switzerland as it was being further developed by the Soviet scientists. [ 43 ] [ 18 ] Notably, this work was supported in the USSR in part due to its conformity with the ideas of Trofim Lysenko , who espoused a version of neo-Lamarckism as part of Lysenkoism .
[ 43 ] Further research on this heritability of acquired characteristics developed, in part, into the modern field of epigenetics . Darwin himself had noted that "the existence of free gemmules is a gratuitous assumption"; by some accounts in modern interpretation, gemmules may be considered a prescient mix of DNA, RNA, proteins, prions, and other mobile elements that are heritable in a non-Mendelian manner at the molecular level. [ 15 ] [ 44 ] [ 45 ] Liu points out that Darwin's ideas about gemmules replicating outside of the body anticipate in vitro gene replication as used, for instance, in PCR . [ 18 ]
https://en.wikipedia.org/wiki/Pangenesis
In the fields of molecular biology and genetics , a pan-genome ( pangenome or supragenome ) is the entire set of genes from all strains within a clade . More generally, it is the union of all the genomes of a clade. [ 2 ] [ 3 ] [ 4 ] [ 5 ] The pan-genome can be broken down into a "core pangenome" that contains genes present in all individuals, a "shell pangenome" that contains genes present in two or more strains, and a "cloud pangenome" that contains genes found only in a single strain. [ 3 ] [ 4 ] [ 6 ] Some authors also refer to the cloud genome as the "accessory genome", containing 'dispensable' genes present in a subset of the strains and strain-specific genes. [ 2 ] [ 3 ] [ 4 ] Note that the use of the term 'dispensable' has been questioned, at least in plant genomes, as accessory genes play "an important role in genome evolution and in the complex interplay between the genome and the environment". [ 5 ] The field of study of pangenomes is called pangenomics. [ 2 ] The genetic repertoire of a bacterial species is much larger than the gene content of an individual strain. [ 7 ] Some species have open (or extensive) pangenomes, while others have closed pangenomes. [ 2 ] For species with a closed pan-genome, very few genes are added per sequenced genome (after sequencing many strains), and the size of the full pangenome can be theoretically predicted. Species with an open pangenome have enough genes added per additional sequenced genome that predicting the size of the full pangenome is impossible. [ 4 ] Population size and niche versatility have been suggested as the most influential factors in determining pan-genome size. [ 2 ] Pangenomes were originally constructed for species of bacteria and archaea , but more recently eukaryotic pan-genomes have been developed, particularly for plant species. Plant studies have shown that pan-genome dynamics are linked to transposable elements. [ 8 ] [ 9 ] [ 10 ] [ 11 ] The significance of the pan-genome arises in an evolutionary context, especially with relevance to metagenomics , [ 12 ] but it is also used in a broader genomics context. [ 13 ] An open access book reviewing the pangenome concept and its implications, edited by Tettelin and Medini, was published in the spring of 2020. [ 14 ] The term 'pangenome' was defined with its current meaning by Tettelin et al. in 2005; [ 2 ] 'pan' derives from the Greek word παν , meaning 'whole' or 'everything', while genome is a commonly used term describing an organism's complete genetic material. Tettelin et al. applied the term specifically to bacteria , whose pangenome "includes a core genome containing genes present in all strains and a dispensable genome composed of genes absent from one or more strains and genes that are unique to each strain." [ 2 ] The core pangenome is the part of the pangenome shared by every genome in the tested set. Some authors divide the core pangenome into a hard core, those families of homologous genes that have at least one copy of the family shared by every genome (100% of genomes), and a soft core or extended core, [ 15 ] those families distributed above a certain threshold (e.g. 90%). In a study involving the pangenomes of Bacillus cereus and Staphylococcus aureus , some strains of which were isolated from the International Space Station, the thresholds used for segmenting the pangenomes were as follows: "cloud", "shell", and "core" corresponding to gene families present in <10%, 10–95%, and >95% of the genomes, respectively. [ 16 ]
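To make the threshold partition above concrete, here is a minimal sketch in Python. It is not taken from any of the cited tools, and the gene-family names and presence/absence data are hypothetical; it simply applies the <10% / 10–95% / >95% cutoffs quoted above.

import itertools  # not required here, kept minimal on purpose

def classify_families(presence, n_genomes):
    """Partition gene families by prevalence across n_genomes.

    presence: dict mapping a gene-family name to the set of genome
    identifiers in which the family was detected.
    """
    partition = {"cloud": [], "shell": [], "core": []}
    for family, found_in in presence.items():
        prevalence = len(found_in) / n_genomes
        if prevalence < 0.10:
            partition["cloud"].append(family)   # rare or strain-specific
        elif prevalence <= 0.95:
            partition["shell"].append(family)   # moderately conserved
        else:
            partition["core"].append(family)    # (nearly) universal
    return partition

# Hypothetical presence/absence data for 20 genomes named g1..g20.
presence = {
    "dnaA": {f"g{i}" for i in range(1, 21)},   # 100% -> core
    "trpF": {f"g{i}" for i in range(1, 11)},   # 50%  -> shell
    "tox1": {"g20"},                           # 5%   -> cloud
}
print(classify_families(presence, n_genomes=20))

Note that with only a handful of genomes the <10% cloud bin can be empty, since even a singleton may exceed the cutoff; thresholds like these are intended for large genome sets.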
The core genome size and its proportion of the pangenome depend on several factors, but especially on the phylogenetic similarity of the considered genomes. For example, the core of two identical genomes would also be the complete pangenome. The core of a genus will always be smaller than the core genome of a species. Genes that belong to the core genome are often related to housekeeping functions and the primary metabolism of the lineage; nevertheless, the core genome can also contain some genes that differentiate the species from other species of the genus, i.e. genes that may be related to pathogenicity or niche adaptation. [ 17 ] The shell pangenome is the part of the pangenome shared by the majority of the genomes in a pangenome. [ 18 ] There is no universally accepted threshold to define the shell genome; some authors consider a gene family part of the shell pangenome if it is shared by more than 50% of the genomes in the pangenome. [ 19 ] A family can become part of the shell through several evolutionary dynamics, for example by gene loss in a lineage where it was previously part of the core genome, as is the case for enzymes of the tryptophan operon in Actinomyces , [ 20 ] or by gain and fixation of a gene family that was previously part of the dispensable genome, as is the case for the trpF gene in several Corynebacterium species. [ 21 ] The cloud genome consists of those gene families shared by a minimal subset of the genomes in the pangenome; [ 22 ] it includes singletons, genes present in only one of the genomes. It is also known as the peripheral genome or accessory genome. Gene families in this category are often related to ecological adaptation. [ citation needed ] The pan-genome can be somewhat arbitrarily classified as open or closed based on the alpha value of Heaps' law , N = kn^(−α) . [ 23 ] [ 15 ] Usually, pangenome software can calculate the Heaps' law parameters that best describe the behavior of the data (a fitting sketch is given below). An open pangenome occurs when the number of new gene families in one taxonomic lineage keeps increasing without appearing to approach an asymptote, regardless of how many new genomes are added to the pangenome. Escherichia coli is an example of a species with an open pangenome. An individual E. coli genome contains some 4,000–5,000 genes, while the pangenome estimated for this species from approximately 2,000 sequenced genomes comprises about 89,000 different gene families. [ 24 ] The pangenome of the domain Bacteria is also considered to be open. A closed pangenome occurs in a lineage when only a few gene families are added as new genomes are incorporated into the pangenome analysis, and the total number of gene families in the pangenome appears asymptotic to one number. Parasites and species that are specialists in some ecological niche are believed to tend toward closed pangenomes. Staphylococcus lugdunensis is an example of a commensal bacterium with a closed pan-genome. [ 25 ] The original pangenome concept was developed by Tettelin et al. [ 2 ] when they analyzed the genomes of eight isolates of Streptococcus agalactiae , where they described a core genome shared by all isolates, accounting for approximately 80% of any single genome, plus a dispensable genome consisting of partially shared and strain-specific genes. Extrapolation suggested that the gene reservoir in the S. agalactiae pan-genome is vast and that new unique genes would continue to be identified even after sequencing hundreds of genomes. [ 2 ]
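As an illustration of the open/closed classification referred to above, the Heaps' law exponent can be estimated from the number of new gene families contributed by each additional genome. The sketch below is a minimal log-log least-squares fit in Python; the gene counts are hypothetical, and published tools fit more carefully over many random genome orderings. By the usual convention, alpha ≤ 1 indicates an open pangenome and alpha > 1 a closed one.

import math

def fit_heaps_alpha(new_genes_per_genome):
    """Least-squares fit of N = k * n**(-alpha) in log-log space.

    new_genes_per_genome: entry i is the number of new gene families
    contributed by the (i+2)-th sequenced genome (the first genome is
    all 'new' by definition, so the series starts at n = 2).
    Returns (k, alpha).
    """
    xs = [math.log(i + 2) for i in range(len(new_genes_per_genome))]
    ys = [math.log(v) for v in new_genes_per_genome]
    m = len(xs)
    mean_x = sum(xs) / m
    mean_y = sum(ys) / m
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    k = math.exp(mean_y - slope * mean_x)
    return k, -slope  # N = k * n**(-alpha)  =>  fitted slope is -alpha

# Hypothetical counts of new gene families from genomes 2..9.
new_genes = [700, 520, 430, 380, 340, 310, 290, 270]
k, alpha = fit_heaps_alpha(new_genes)
print(f"alpha = {alpha:.2f} ->", "open" if alpha <= 1 else "closed", "pangenome")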
The pangenome comprises the entirety of the genes discovered in the sequenced genomes of a given microbial species, and it can change when new genomes are sequenced and incorporated into the analysis. [ citation needed ] The pangenome of a genomic lineage accounts for the intra-lineage gene content variability. The pangenome evolves through gene duplication, gene gain-and-loss dynamics, and the interaction of the genome with mobile elements, all shaped by selection and drift. [ 26 ] Some studies suggest that prokaryote pangenomes are the result of adaptive, not neutral, evolution, conferring on species the ability to migrate to new niches. [ 27 ] The supergenome can be thought of as the real pangenome size if all genomes from a species were sequenced. [ 28 ] It is defined as all genes accessible for being gained by a certain species. It cannot be calculated directly, but its size can be estimated from the pangenome size calculated from the available genome data. Estimating the size of the cloud genome can be difficult because of its dependence on the occurrence of rare genes and genomes. In 2011, genomic fluidity was proposed as a measure to categorize the gene-level similarity among groups of sequenced isolates (a sketch of this measure is given below). [ 29 ] In some lineages the supergenome appears infinite , [ 30 ] as is the case for the Bacteria domain. [ 31 ] 'Metapangenome' has been defined as the outcome of analyzing pangenomes in conjunction with the environment, where the abundance and prevalence of gene clusters and genomes are recovered through shotgun metagenomes. [ 32 ] The combination of metagenomes with pangenomes, also referred to as "metapangenomics", reveals the population-level results of habitat-specific filtering of the pangenomic gene pool. [ 33 ] Other authors consider that metapangenomics expands the concept of the pangenome by incorporating gene sequences obtained from uncultivated microorganisms by a metagenomics approach. A metapangenome comprises both sequences from metagenome-assembled genomes ( MAGs ) and genomes obtained from cultivated microorganisms. [ 34 ] Metapangenomics has been applied to assess the diversity of a community, microbial niche adaptation, microbial evolution, functional activities, and interaction networks of the community. [ 35 ] The Anvi'o platform developed a workflow that integrates analysis and visualization of metapangenomes by generating pangenomes and studying them in conjunction with metagenomes. [ 32 ] In 2018, 87% of the available whole genome sequences were bacterial, fueling researchers' interest in calculating prokaryote pangenomes at different taxonomic levels. [ 22 ] A 2015 analysis of the pangenome of 44 strains of Streptococcus pneumoniae showed few new genes discovered with each new genome sequenced. In fact, the predicted number of new genes dropped to zero when the number of genomes exceeded 50 (note, however, that this is not a pattern found in all species). This would mean that S. pneumoniae has a 'closed pangenome'. [ 37 ] The main source of new genes in S. pneumoniae was Streptococcus mitis , from which genes were transferred horizontally . The pan-genome size of S. pneumoniae increased logarithmically with the number of strains and linearly with the number of polymorphic sites of the sampled genomes, suggesting that acquired genes accumulate proportionately to the age of clones. [ 36 ]
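For reference, the genomic fluidity measure mentioned above averages, over all pairs of genomes, the fraction of gene families unique to either member of the pair (0 for identical repertoires, approaching 1 for disjoint ones). Below is a minimal Python sketch of this pairwise definition; the gene-family sets are hypothetical, and the published measure operates on gene families inferred by clustering rather than on gene names.

from itertools import combinations

def genomic_fluidity(genomes):
    """genomes: list of sets of gene-family identifiers, one set per genome."""
    ratios = []
    for a, b in combinations(genomes, 2):
        unique = len(a - b) + len(b - a)   # families unique to either genome
        total = len(a) + len(b)            # total families in the pair
        ratios.append(unique / total)
    return sum(ratios) / len(ratios)       # mean over all genome pairs

# Hypothetical three-genome example.
g1 = {"dnaA", "gyrB", "trpF", "abc1"}
g2 = {"dnaA", "gyrB", "trpF", "xyz9"}
g3 = {"dnaA", "gyrB", "tox1", "tox2"}
print(f"fluidity = {genomic_fluidity([g1, g2, g3]):.2f}")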
Another example of a prokaryote pan-genome is that of Prochlorococcus , whose core genome set is much smaller than the pangenome, which is drawn on differently by the different ecotypes of Prochlorococcus . [ 38 ] Open pan-genomes have been observed in environmental isolates such as Alcaligenes sp. [ 39 ] and Serratia sp., [ 40 ] which show a sympatric lifestyle. Nevertheless, an open pangenome is not exclusive to free-living microorganisms: a 2015 study on Prevotella bacteria isolated from humans compared the gene repertoires of its species derived from different body sites of the human host, and also reported an open pan-genome showing a vastly diverse gene pool. [ 41 ] Pangenome studies also exist for Archaea : the Halobacteria pangenome comprises 300 core gene families, with variable components of 998 soft-core, 11,784 shell, and 36,531 cloud families. [ 42 ] Eukaryote organisms such as fungi , animals and plants have also shown evidence of pangenomes. In four fungal species whose pangenomes have been studied, between 80 and 90% of gene models were found to be core genes. The remaining accessory genes were mainly involved in pathogenesis and antimicrobial resistance. [ 43 ] In animals, the human pangenome is being studied. In 2010, a study estimated that a complete human pan-genome would contain ~19–40 megabases of novel sequence not present in the extant reference human genome . [ 44 ] The Human Pangenome Reference Consortium has the goal of capturing human genome diversity. In 2023, a draft human pangenome reference was published. [ 45 ] It is based on 47 diploid genomes from persons of varied ethnicity. [ 45 ] Plans are underway for an improved reference capturing still more diversity from a still wider sample. [ 45 ] Among plants, there are examples of pangenome studies in model species, both diploid [ 9 ] and polyploid, [ 10 ] and in a growing list of crops. [ 46 ] [ 47 ] Pangenomes have shown promise as a tool in plant breeding by accounting for structural variants and SNPs in non-reference genomes, which helps to address the problem of missing heritability that persists in genome-wide association studies . [ 48 ] An emerging plant-based concept is that of the pan-NLRome, the repertoire of nucleotide-binding leucine-rich repeat (NLR) proteins, intracellular immune receptors that recognize pathogen proteins and confer disease resistance. [ 49 ] Viruses do not necessarily have genes extensively shared across clades, as is the case for 16S in bacteria , and therefore the core genome of viruses as a whole is empty. Nevertheless, several studies have calculated the pangenome of some viral lineages. The core genome from six species of pandoraviruses comprises 352 gene families, only 4.7% of the pangenome, indicating an open pangenome. [ 50 ] With the number of sequenced genomes continuously growing, "simply scaling up established bioinformatics pipelines will not be sufficient for leveraging the full potential of such rich genomic data sets". [ 51 ] Pan-genome graphs are an emerging data-structure technique designed to represent pangenomes and to efficiently map reads to them. They have been reviewed by Eizenga et al. [ 52 ] As interest in pangenomes has increased, several software tools have been developed to help analyze this kind of data. To start a pangenomic analysis, the first step is the homogenization of genome annotation: the same software should be used to annotate all genomes used, such as GeneMark [ 53 ] or RAST. [ 54 ]
In 2015, a group reviewed the different kinds of analyses and tools available to a researcher. [ 55 ] Seven kinds of software have been developed to analyze pangenomes: those dedicated to clustering homologous genes; identifying SNPs ; plotting pangenomic profiles; building phylogenetic relationships of orthologous genes/families across strains/isolates; function-based searching; annotation and/or curation; and visualization. [ 55 ] The two most cited software tools for pangenomic analysis at the end of 2014 [ 55 ] were Panseq [ 56 ] and the pan-genomes analysis pipeline (PGAP). [ 57 ] Other options include BPGA – A Pan-Genome Analysis Pipeline for prokaryotic genomes, [ 58 ] GET_HOMOLOGUES, [ 59 ] Roary, [ 60 ] and PanDelos. [ 61 ] In 2015, a review focused on prokaryote pangenomes [ 62 ] and another on plant pan-genomes [ 63 ] were published. Among the first software packages designed for plant pangenomes were PanTools [ 64 ] and GET_HOMOLOGUES-EST. [ 11 ] [ 59 ] In 2018, panX was released, an interactive web tool that allows inspection of the evolutionary history of gene families. [ 65 ] panX can display an alignment of genomes, a phylogenetic tree, a mapping of mutations, and inferences about gain and loss of the family on the core-genome phylogeny. In 2019, OrthoVenn 2.0 [ 66 ] allowed comparative visualization of families of homologous genes in Venn diagrams of up to 12 genomes. In 2023, BRIDGEcereal was developed to survey and graph indel-based haplotypes from a pan-genome through a gene model ID. [ 67 ] In 2020, Anvi'o [ 1 ] became available as a multiomics platform that supports pangenomic and metapangenomic analyses as well as visualization workflows. In Anvi'o, genomes are displayed in concentric circles in which each radius represents a gene family, allowing comparison of more than 100 genomes in its interactive visualization. In 2020, a computational comparison of tools for extracting gene-based pangenomic contents (such as GET_HOMOLOGUES, PanDelos, Roary, and others) was released. [ 68 ] Tools were compared from a methodological perspective, analyzing the causes that lead a given methodology to outperform other tools. The analysis was performed by taking into account different bacterial populations, synthetically generated by varying evolutionary parameters. Results showed that the performance of each tool depends on the composition of the input genomes. Also in 2020, several tools introduced graphical representations of pangenomes showing the contiguity of genes (PPanGGOLiN, [ 46 ] Panaroo [ 65 ] ). Other software tools for pangenomics include Prodigal, Prokka, PanVis, PanTools, Pangenome Graph Builder (PGGB), PanX, Pagoo, and pgr-tk. [ 69 ]
https://en.wikipedia.org/wiki/Pangenomics
PANGU (Planet and Asteroid Natural scene Generation Utility) is a computer graphics utility whose development was funded by ESA and carried out by the University of Dundee . [ 1 ] It generates scenes of planets, moons, asteroids, spacecraft and rovers. The main purpose of the tool is to test and validate navigation techniques based on the processing of images coming from on-board sensors, such as a camera or imaging LIDAR on a planetary lander. [ 2 ]
https://en.wikipedia.org/wiki/Pangu_utility
Panguite is a titanium oxide mineral first discovered as an inclusion within the Allende meteorite , and first described in 2012. [ 4 ] [ 5 ] The hitherto unknown meteorite mineral was named for the ancient Chinese god Pan Gu , the creator of the world through the separation of yin (earth) from yang (sky). [ 4 ] The mineral's chemical formula is (Ti 4+ ,Sc,Al,Mg,Zr,Ca) 1.8 O 3 ; the elements found in it are titanium , scandium , aluminium , magnesium , zirconium , calcium , and oxygen . Samples from the meteorite include some which are zirconium-rich. The mineral was found in conjunction with the already identified mineral davisite , within an olivine aggregate . [ 6 ] Panguite is in a class of refractory minerals that formed under the high temperatures and extremely varied pressures present in the early Solar System , up to 4.5 billion years ago. This makes panguite one of the oldest minerals in the Solar System. Zirconium is a key element in determining conditions prior to and during the Solar System's formation. Chi Ma, director of the Analytical Facility of the Geological and Planetary Sciences division at the California Institute of Technology , was the lead author of its first peer-reviewed article, published in American Mineralogist . [ 3 ] Since 2007, Ma has been leading a nanomineralogy investigation of primitive meteorites, including the well-studied Allende meteorite. The mineral was first described in a paper submitted to the 42nd annual Lunar and Planetary Science Conference in 2011. [ 7 ]
https://en.wikipedia.org/wiki/Panguite
The Panguna mine is a large copper mine located in Bougainville , Papua New Guinea . Panguna represents one of the largest copper reserves in Papua New Guinea and in the world, having an estimated reserve of one billion tonnes of copper ore and twelve million ounces of gold . [ 1 ] [ 2 ] The mine has been closed, with all production ceased, since 1989. The discovery of vast copper ore deposits in Bougainville 's Crown Prince Range led to the establishment of the copper mine in 1969 by Bougainville Copper Ltd (BCL), a subsidiary of the Australian company Conzinc Rio Tinto of Australia . The mine began production in 1972, with the support of the Papua New Guinea National Government as a 20% shareholder. In contrast, the Bougainvilleans received a 0.5–1.25% share of the total profit. The site was at the time the world's largest open-pit copper/gold mine, generating 12% of PNG's GDP [ 3 ] and over 45% of the nation's export revenue. [ 4 ] Profits derived from the mine helped fund Papua New Guinea's independence from Australia in 1975. [ 5 ] Mining at Panguna included the direct discharge of tailings into tributaries of the Jaba River . [ 6 ] The mine caused devastating environmental issues on the island: the company was responsible for poisoning the entire length of the Jaba River, causing birth defects as well as the extinction of the flying fox on the island. Bougainville Copper had set up a system of racial segregation on the island, with one set of facilities for white workers and one set for locals. This prompted an uprising in 1988, led by Francis Ona , a Panguna landowner and commander of the Bougainville Revolutionary Army (BRA). The outcome of the uprising was the Bougainville conflict between the BRA, who sought secession from PNG, and the Papua New Guinea Defence Force . The ten-year conflict resulted in over 20,000 deaths, the eventual closure of the mine on 15 May 1989, and the complete withdrawal of BCL personnel by 24 March 1990. As of May 2025, the mine remains closed. [ 7 ] In June 2016, Rio Tinto relinquished its role by divesting its interests in the mine to national and local governments. [ 8 ] In 2020, the Human Rights Law Centre lodged a complaint with the Australian government regarding adverse environmental and human rights impacts of the mine. [ 9 ] The environmental impacts of the mine continue to this day. Many people have had to relocate to higher ground to avoid contaminated drinking water. [ 9 ] Heavy metals such as copper, zinc, and mercury are found in the surrounding rivers. [ 10 ] Rio Tinto has refused to fund remediation works, stating that it fully complied with the relevant laws during mining operations. [ 8 ] The autonomous government of Bougainville wants to reopen the mine as an independent funding source. [ 10 ] Estimates place the cost of reopening the mine at $5 to $6 billion. [ 10 ]
https://en.wikipedia.org/wiki/Panguna_mine
Panning law , or panning rule , is a recording and mixing principle that states that any signal of equal amplitude and phase that is played in both channels of a stereo system will increase in loudness by up to 6.02 dB SPL , provided there is perfect response in the loudspeaker system and perfect acoustics in the room. [ 1 ] Often, the acoustic summing of a room and system is inferior to the ideal, so the specific relative level will increase from −3 dB to 0 dB as the mono signal is panned from center to hard left or right. The purpose of a pan law is that when one directs signals left or right with the pan pot , the perceived loudness stays the same. [ 2 ] However, both the direction of attenuation throughout the panoramic sweep and the amount by which the signal is attenuated vary according to the panning rule. For example, Yamaha digital consoles employ a typical (compromise) 3 dB panning rule where the signal is at full level when the pan position is centered and becomes progressively louder (up to +3 dB) as it is panned to the right or left. The 3 dB panning rule is a commonly applied compromise suited to the mediocre acoustic summing capabilities of most control rooms. However, the console manufacturer SSL used to employ a 4.5 dB panning rule, because it was believed that their expensive consoles would normally be used in tuned rooms whose acoustic summing capabilities were closer to the ideal. Many consoles that have only one panning rule employ one such that a signal panned hard left or right is at full level and becomes progressively lower in level as the pan is directed to the center.
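As an illustration of how a pan law shapes channel gains, here is a minimal Python sketch of a constant-power pan, which places the centered signal about 3 dB below a hard pan. It is not tied to any particular console's implementation; the mapping of the pan position onto a quarter circle is the standard textbook construction.

import math

def constant_power_pan(pan):
    """pan in [-1.0, 1.0]: -1 = hard left, 0 = center, +1 = hard right.

    Maps the pan position onto a quarter circle so that
    left_gain**2 + right_gain**2 == 1 everywhere (constant power).
    """
    theta = (pan + 1.0) * math.pi / 4.0   # 0 .. pi/2
    return math.cos(theta), math.sin(theta)

def gain_db(g):
    """Channel gain in dB relative to full scale."""
    return 20 * math.log10(g) if g > 0 else float("-inf")

for pan in (-1.0, 0.0, 1.0):
    left, right = constant_power_pan(pan)
    print(f"pan={pan:+.1f}  L={gain_db(left):7.2f} dB  R={gain_db(right):7.2f} dB")

# At pan=0 each channel sits at 20*log10(cos(pi/4)) ~= -3.01 dB,
# matching the common "-3 dB center" panning rule described above.

A 4.5 dB or 6 dB rule can be sketched the same way by raising the cosine/sine terms to a different power; the deeper the center dip, the more the law assumes the room's acoustic summing approaches the ideal.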
https://en.wikipedia.org/wiki/Panning_law
The Pannonian Biogeographic Region is a biogeographic region, as defined by the European Environment Agency . It covers the lowlands of the Pannonian Basin , centered on Hungary. The Pannonian Region is a large alluvial basin surrounded by the Carpathian Mountains to the north and east, the Alps to the west and the Dinaric Alps to the south. The basin was once the bed of an inland sea. It is flat, and is crossed from north to south by the Danube and Tisza rivers. The region contains all of Hungary, and around the periphery contains parts of Slovakia, the Czech Republic, Romania, Serbia, Croatia, Bosnia and Herzegovina and Ukraine. [ 1 ] The region is sheltered by the mountains, but has complex weather caused by the interaction of wet winds from the west, drier winds from the south and cooler winds from the Carpathians and Alps, which sometimes results in severe storms. The basin was once largely forested, with many marshes and shallow lakes, but has long been cleared and drained to make way for grasslands and cultivation. It contains inland sand dunes, sand steppes, loess grasslands and maple-oak loess forests, [ 2 ] and significantly overlaps with the Pannonian mixed forests ecoregion .
https://en.wikipedia.org/wiki/Pannonian_Biogeographic_Region
Pannus is an abnormal layer of fibrovascular tissue or granulation tissue . Common sites for pannus formation include over the cornea, over a joint surface (as seen in rheumatoid arthritis ), or on a prosthetic heart valve . [ 1 ] Pannus may grow in a tumor-like fashion, as in joints where it may erode articular cartilage and bone. In common usage, the term pannus is often used to refer to a panniculus (a hanging flap of tissue). The term " pannus " is derived from the Latin for "tablecloth". Chronic inflammation and exuberant proliferation of the synovium lead to the formation of pannus and destruction of cartilage, bone, tendons, ligaments, and blood vessels. [ 2 ] Pannus tissue is composed of aggressive macrophage-like and fibroblast-like mesenchymal cells and other inflammatory cells that release collagenolytic enzymes. [ 3 ] In people suffering from rheumatoid arthritis , pannus tissue eventually forms in the joint affected by the disease, causing bony erosion and cartilage loss via release of IL-1, prostaglandins, and substance P by macrophages. In ophthalmology , pannus refers to the growth of blood vessels into the peripheral cornea . In normal individuals, the cornea is avascular . Chronic local hypoxia (such as that occurring with overuse of contact lenses ) or inflammation may lead to peripheral corneal vascularization, or pannus. Pannus may also develop in diseases of the corneal stem cells , such as aniridia . It is often resolved by peritomy .
https://en.wikipedia.org/wiki/Pannus
The Panorama of the City of New York is an urban model of New York City that is a centerpiece of the Queens Museum . It was originally created for the 1964 New York World's Fair . In June 1961, the New York City Board of Estimate awarded a contract to the architectural model makers Raymond Lester Associates for the construction of a scale model of New York City within the City Building. [ 1 ] [ 2 ] City officials planned to install suspended cars to allow visitors to see the model during the 1964 New York World's Fair . [ 1 ] The Panorama was built by a team of 100 people working for Lester Associates in West Nyack, New York , in the three years before the opening of the 1964 World's Fair. [ 3 ] Commissioned by World's Fair Corporation president Robert Moses as a celebration of the City's municipal infrastructure, this 9,335-square-foot (867.2 m 2 ) model includes every single building constructed before 1992 in all five boroughs, at a scale of 1 inch = 100 feet (1:1200). [ 3 ] The model was constructed in 273 sections [ 4 ] [ 5 ] of 4 by 10 feet (1.2 m × 3.0 m) Formica boards and polyurethane foam , [ 3 ] originally depicting 835,000 individual structures. [ 6 ] The section showing the Far Rockaway neighborhood was never installed, due to space limitations. [ 3 ] The original Panorama included about 25,000 Plexiglass models of major buildings, 100,000 handmade models of less substantial structures, and 50,000 models of churches. For other structures such as tenements and brownstones, Lester Associates created 50,000 copies of each type of structure. In total, Lester Associates manufactured about two to three million buildings, including duplicates. [ 6 ] Displayed alongside the modern city, the 1964 exhibition also included a 1:300 diorama of a "Castello Model" based on the 17th-century Castello Plan , borrowed from the Museum of the City of New York . [ 7 ] [ 8 ] The Panorama was one of the most successful attractions at the 1964 Fair, with "millions" of people paying 10 cents each for a 9-minute simulated helicopter ride around the City, [ 3 ] a dark ride narrated by Lowell Thomas to a text written by Harvey Yale Gross . It was one of three colossal representations of geography at the fair, alongside the Unisphere and the New York State Pavilion . [ 8 ] Visitors could also look at the model from a balcony and, for another 10 cents, could peer at specific neighborhoods using binoculars. [ 6 ] The panorama was also intended to serve as a standing urban planning tool after the fair, in keeping with Moses' vision. In this way it anticipated the technology of a 3D city model , though in practice it was of limited utility. It did, however, play a role in the defeat of Donald Trump's 1980s Television City proposal, as a model put on the panorama by activists demonstrated the relative size of the development. [ 8 ] Additionally, the opening of the Panorama was set to coincide with the 300th anniversary of the English takeover of New Amsterdam , which occurred in 1664, and highlight the city's growth over that period. [ 9 ] After the Fair closed, the Panorama remained open to the public, and Lester's team updated the map in 1967, 1968, and 1969. [ 3 ] After another update in 1974, very few changes were made until 1992, when Lester Associates was again hired to update the model to coincide with the re-opening of the museum after a two-year total renovation of the building by Rafael Viñoly . The model makers changed over 60,000 structures to bring it up to date at that time.
[ 3 ] There are now 895,000 structures in total, [ 4 ] [ 10 ] including buildings made of plastic or wood. [ 4 ] There are also bridges made of brass. [ 4 ] [ 6 ] The mechanical "helicopter" vehicles for conveying exhibition visitors were showing signs of wear, and were removed before the 1994 reopening. [ 3 ] The current installation by Viñoly features accessible ramps and an elevated walkway which surround the Panorama , allowing viewers to proceed at their own pace, or to linger for as long a look as they desire. Because of space constraints, portions of the walkway are cantilevered over the outer edges of the map, but a glass floor still allows views of the model below. [ 3 ] As in the original installation, tiny scale-model airplanes take off and land at the model of LaGuardia Airport , mechanically guided by long wires. [ 3 ] [ 11 ] In March 2009, the museum announced the intention to update the Panorama on an ongoing basis. To raise funds and draw public attention, the museum will allow individuals and developers to have accurate scale models of buildings newer than the 1992 update created and added, in exchange for a donation of at least $50. More detailed models of smaller apartment buildings and private homes, now represented by generic models, can also be added. [ 3 ] As of 2025, the original Twin Towers of the World Trade Center are still on the map, even though new buildings have been built on the actual site; the museum has chosen to let the destroyed structures remain until construction is complete, rather than representing the ongoing construction. The first new building to be added under the new program was the new Citi Field stadium of the New York Mets ; the model of the old Shea Stadium was to be displayed elsewhere in the museum. [ 12 ] The New York City Panorama was featured in two 2011 fictional works, the movie New Year's Eve directed by Garry Marshall and the book Wonderstruck by Brian Selznick , and also in the subsequent Wonderstruck film; a revamped lighting system was installed in 2017 as part of a sponsorship promoting the film. [ 13 ] Photographer Spencer Lowell took images of the model for the 2016 art series New York, New York, New York ; these were acquired by the museum, and versions were sold at art fairs. The model was also featured in the 2021 documentary series Pretend It's a City . [ 14 ] Every year, the Queens Museum hosts the "Panorama Challenge", a trivia contest run by The City Reliquary ; the inaugural contest was held in 2007. [ 15 ] Contestants use the Panorama to identify various New York City landmarks. In recent years, the panorama has often functioned as installation art , providing context for temporary site-specific works taking the form of model buildings, or otherwise displayed in the panorama's gallery. A scale model of the 1964 New York World's Fair site, showing all the buildings and pavilions of the time, is located in a separate area devoted to World's Fair exhibits. It is built to the same scale as the Panorama by Lester and Associates, and was one of an original seven travelling models. A larger model of the Fair site, at a scale of 1 inch to 32 feet, was the one exhibited there in 1964. [ 8 ]
https://en.wikipedia.org/wiki/Panorama_of_the_City_of_New_York
Panrationalism (or comprehensive rationalism ) [ 1 ] holds two premises true: that a rationalist accepts any position that can be justified by appeal to a rational criterion or authority, and that a rationalist accepts only positions that can be so justified. The first problem that needs to be dealt with is: what is the rational criterion or authority to which they appeal? Here the panrationalists diverge into two groups: intellectualists, who appeal to the intellect or reason, and empiricists, who appeal to sense experience. Descartes is considered the founder of rationalism and gave the illustration cogito ergo sum as the paradigm to demonstrate what he believed. The problem of both these appeals is that neither the intellect nor sense experience can justify itself as a criterion: any such justification is either circular or leads to an infinite regress. In his The Critique of Pure Reason Kant sought to reconcile both appeals. This article about epistemology is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Panrationalism
Panspermia (from Ancient Greek πᾶν (pan) ' all ' and σπέρμα (sperma) ' seed ' ) is the hypothesis that life exists throughout the universe , distributed by space dust , [ 1 ] meteoroids , [ 2 ] asteroids , comets , [ 3 ] and planetoids , [ 4 ] as well as, potentially, by spacecraft carrying unintended contamination by microorganisms ; [ 5 ] [ 6 ] [ 7 ] the deliberate transport of life by spacecraft is known as directed panspermia . The theory argues that life did not originate on Earth, but instead evolved somewhere else and seeded life as we know it. Panspermia comes in many forms, such as radiopanspermia, lithopanspermia, and directed panspermia . Regardless of its form, the theories generally propose that microbes able to survive in outer space (such as certain types of bacteria or plant spores [ 8 ] ) can become trapped in debris ejected into space after collisions between planets and small Solar System bodies that harbor life. [ 9 ] This debris containing the lifeforms is then transported by meteors between bodies in a solar system, or even across solar systems within a galaxy. In this way, panspermia studies concentrate not on how life began but on methods that may distribute it within the Universe. [ 10 ] [ 11 ] [ 12 ] This point is often used as a criticism of the theory. Panspermia is a fringe theory with little support amongst mainstream scientists. [ 13 ] Critics argue that it does not answer the question of the origin of life but merely places it on another celestial body. It is further criticized because it cannot be tested experimentally. Historically, disputes over the merit of this theory centered on whether life is ubiquitous or emergent throughout the Universe. [ 14 ] The theory maintains support today, with some work being done to develop mathematical treatments of how life might migrate naturally throughout the Universe. [ 15 ] [ 16 ] Its long history lends itself to extensive speculation, and hoaxes have arisen from meteoritic events. In contrast, pseudo-panspermia is the well-supported hypothesis that many of the small organic molecules used for life originated in space, and were distributed to planetary surfaces. Panspermia has a long history, dating back to the 5th century BCE and the natural philosopher Anaxagoras . [ 17 ] Classicists came to agree that Anaxagoras maintained the Universe (or Cosmos) was full of life, and that life on Earth started from the fall of these extra-terrestrial seeds. [ 18 ] Panspermia as it is known today, however, is not identical to this original theory. The name, as applied to this theory, was first coined in 1908 by Svante Arrhenius , a Swedish scientist. [ 14 ] [ 19 ] Prior to this, since around the 1860s, many prominent scientists had become interested in the theory. More recent advocates include Sir Fred Hoyle and Chandra Wickramasinghe . [ 20 ] [ 21 ] In the 1860s, there were three scientific developments that began to bring the focus of the scientific community to the problem of the origin of life. [ 14 ] Firstly, the Kant-Laplace nebular theory of solar system and planetary formation was gaining favor, and implied that when the Earth first formed, the surface conditions would have been inhospitable to life as we know it. This meant that life could not have evolved in parallel with the Earth, and must have evolved at a later date, without biological precursors. Secondly, Charles Darwin 's famous theory of evolution implied some elusive origin, because in order for something to evolve, it must start somewhere.
In his Origin of Species , Darwin was unable or unwilling to touch on this issue. [ 22 ] Third and finally, Louis Pasteur and John Tyndall experimentally disproved the (now superseded) theory of spontaneous generation , which had suggested that life was constantly evolving from non-living matter and did not have a common ancestor, as suggested by Darwin's theory of evolution. Altogether, these three developments presented the wider scientific community with a seemingly paradoxical situation regarding the origin of life: life must have evolved from non-biological precursors after the Earth was formed, and yet spontaneous generation as a theory had been experimentally disproved. It was from here that the study of the origin of life branched. Those who accepted Pasteur's rejection of spontaneous generation began to develop the theory that under (unknown) conditions on a primitive Earth, life must have gradually evolved from organic material. This theory became known as abiogenesis , and is the currently accepted one. On the other side were those scientists of the time who rejected Pasteur's results and instead supported the idea that life on Earth came from existing life. This necessarily requires that life has always existed somewhere on some planet, and that it has a mechanism of transferring between planets. Thus, the modern treatment of panspermia began in earnest. Lord Kelvin , in a presentation to The British Association for the Advancement of Science in 1871, proposed the idea that, just as seeds can be carried through the air by winds, life could be brought to Earth by the infall of a life-bearing meteorite. [ 14 ] He further proposed that life can only come from life, and that this principle is invariant under philosophical uniformitarianism , similar to how matter can neither be created nor destroyed . [ 23 ] This argument was heavily criticized because of its boldness, and additionally due to technical objections from the wider community. In particular, Johann Zöllner of Germany argued against Kelvin by saying that organisms carried to Earth in meteorites would not survive the descent through the atmosphere, due to frictional heating. [ 14 ] [ 24 ] The arguments went back and forth until Svante Arrhenius gave the theory its modern treatment and designation. Arrhenius argued against abiogenesis on the basis that it had no experimental foundation at the time, and believed that life had always existed somewhere in the Universe. [ 19 ] He focused his efforts on developing the mechanism(s) by which this pervasive life might be transferred through the Universe. Around this time, it had recently been discovered that solar radiation can exert pressure, and thus force, on matter. Arrhenius concluded that it was therefore possible for very small organisms, such as bacterial spores, to be moved through space by this radiation pressure . [ 19 ] At this point, panspermia as a theory had a potentially viable transport mechanism, as well as a vehicle for carrying life from planet to planet. The theory still faced criticism, mostly due to doubts about how long spores would actually survive the conditions of their transport from one planet, through space, to another. [ 25 ] Despite all the emphasis placed on trying to establish the scientific legitimacy of the theory, it still lacked testability, a serious problem it has yet to overcome.
Support for the theory persisted, however, with Fred Hoyle and Chandra Wickramasinghe offering two reasons why an extra-terrestrial origin of life might be preferred. First, the conditions required for the origin of life may have been more favorable somewhere other than Earth; second, life on Earth exhibits properties that are not accounted for by assuming an endogenic origin. [ 14 ] [ 20 ] Hoyle studied spectra of interstellar dust, and came to the conclusion that space contained large amounts of organics, which he suggested were the building blocks of more complex chemical structures. [ 26 ] Critically, Hoyle argued that this chemical evolution was unlikely to have taken place on a prebiotic Earth, and that instead the most likely candidate is a comet. [ 14 ] Furthermore, Hoyle and Wickramasinghe concluded that the evolution of life requires a large increase in genetic information and diversity, which might have resulted from the influx of viral material from space via comets. [ 20 ] Hoyle reported (in a lecture at Oxford on January 16, 1978) a pattern of coincidence between the arrival of major epidemics and close encounters with comets, which led him to suggest [ 27 ] that the epidemics were a direct result of material raining down from these comets. [ 14 ] This claim in particular garnered criticism from biologists. Since the 1970s, a new era of planetary exploration has meant that data could be used to test panspermia and potentially transform it from conjecture into a testable theory. Though it has yet to be tested, panspermia is still explored today in some mathematical treatments, [ 28 ] [ 16 ] [ 15 ] and as its long history suggests, the appeal of the theory has stood the test of time. Panspermia requires, broadly, that life exist somewhere in the first place, that it survive ejection into space and the journey through it, and that it survive delivery to a new planetary surface. The creation and distribution of organic molecules from space is now uncontroversial; it is known as pseudo-panspermia . The jump from organic materials to life originating from space, however, is hypothetical and currently untestable. Bacterial spores and plant seeds are two commonly proposed vessels for panspermia. According to the theory, they could be encased in a meteorite and transported to another planet from their origin, subsequently descend through the atmosphere, and populate the surface with life (see lithopanspermia below). This naturally requires that the spores and seeds have formed somewhere else, maybe even in space in the case of bacteria. Understanding of planetary formation theory and meteorites has led to the idea that some rocky bodies originating from undifferentiated parent bodies could generate local conditions conducive to life. [ 15 ] Hypothetically, internal heating from radiogenic isotopes could melt ice to provide water as well as energy. In fact, some meteorites show signs of aqueous alteration, which may indicate that this process has taken place. [ 15 ] Given the large numbers of such bodies found within the Solar System, an argument can be made that each provides a potential site for life to develop. A collision occurring in the asteroid belt could alter the orbit of one such site, and eventually deliver it to Earth. Plant seeds are an alternative transport vessel. Some plants produce seeds that are resistant to the conditions of space, [ 8 ] and have been shown to lie dormant in extreme cold and vacuum and to resist short-wavelength UV radiation. [ 8 ] They are not typically proposed to have originated in space, but on another planet.
Theoretically, even if a plant is partially damaged during its travel in space, the pieces could still seed life in a sterile environment. [ 8 ] Sterility of the environment is relevant because it is unclear whether the novel plant could out-compete existing life forms. This idea is based on previous evidence showing that cellular reconstruction can occur from cytoplasm released from damaged algae. [ 8 ] Furthermore, plant cells contain obligate endosymbionts , which could be released into a new environment. Though both plant seeds and bacterial spores have been proposed as potentially viable vehicles, their ability not only to survive in space for the required time, but also to survive atmospheric entry, is debated. Space probes may be a viable transport mechanism for interplanetary cross-pollination within the Solar System. Space agencies have implemented planetary protection procedures to reduce the risk of planetary contamination, [ 29 ] [ 30 ] but microorganisms such as Tersicoccus phoenicis may be resistant to spacecraft assembly cleaning . [ 5 ] [ 6 ] Panspermia is generally subdivided into two classes: either transfer occurs between planets of the same system (interplanetary) or between stellar systems (interstellar). Further classifications are based on different proposed transport mechanisms, as follows. In 1903, Svante Arrhenius proposed radiopanspermia, the theory that singular microscopic forms of life can be propagated through space, driven by the radiation pressure from stars. [ 31 ] Radiation pressure is the mechanism by which light can exert a force on matter. Arrhenius argued that particles below a critical size of 1.5 μm would be propelled at high speed by the radiation pressure of a star. [ 19 ] However, because its effectiveness decreases with increasing particle size, this mechanism holds only for very tiny particles, such as single bacterial spores . The main criticism of radiopanspermia came from Iosif Shklovsky and Carl Sagan , who cited evidence for the lethal action of space radiation ( UV and X-rays ) in the cosmos. [ 32 ] If enough of these microorganisms are ejected into space, some may rain down on a planet in a new star system after 10⁶ years wandering interstellar space. [ citation needed ] There would be enormous death rates of the organisms due to radiation and the generally hostile conditions of space, but nonetheless this theory is considered potentially viable by some. [ citation needed ] Data gathered by the orbital experiments ERA , BIOPAN , EXOSTACK and EXPOSE showed that isolated spores, including those of B. subtilis , were rapidly killed if exposed to the full space environment for merely a few seconds, but if shielded against solar UV , the spores were capable of surviving in space for up to six years while embedded in clay or meteorite powder (artificial meteorites). [ 33 ] Spores would therefore need to be heavily protected against UV radiation: exposure of unprotected DNA to solar UV and cosmic ionizing radiation would break it up into its constituent bases. [ 34 ] Rocks at least 1 meter in diameter are required to effectively shield resistant microorganisms, such as bacterial spores, against galactic cosmic radiation . [ 35 ] Additionally, exposing DNA to the ultrahigh vacuum of space alone is sufficient to cause DNA damage , so the transport of unprotected DNA or RNA during interplanetary flights powered solely by light pressure is extremely unlikely. [ 36 ]
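Arrhenius's size threshold can be illustrated with a back-of-the-envelope calculation. The sketch below is not from the cited sources, and the particle density is an assumed value; it compares radiation pressure with stellar gravity for a small absorbing sphere. Because both forces fall off with the square of distance, their ratio depends only on the particle's size and density:

```python
# Rough, illustrative estimate of the particle size below which stellar
# radiation pressure beats solar gravity. RHO is an assumed spore-like density.
import math

L_SUN = 3.828e26   # solar luminosity, W
M_SUN = 1.989e30   # solar mass, kg
G     = 6.674e-11  # gravitational constant
C     = 2.998e8    # speed of light, m/s
RHO   = 1000.0     # assumed particle density, kg/m^3

def beta(radius_m):
    """Radiation-pressure force / gravitational force for a perfectly
    absorbing sphere near the Sun; the distance cancels out of the ratio."""
    return 3 * L_SUN / (16 * math.pi * G * M_SUN * C * RHO * radius_m)

for r in (0.1e-6, 0.5e-6, 1.5e-6):
    print(f"r = {r*1e6:.1f} um: beta = {beta(r):.2f}")
# beta exceeds 1 only for sub-micrometre particles, consistent with the
# argument that only very tiny spores could be pushed outward by starlight.
```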
The feasibility of other means of transport for the more massive shielded spores into the outer Solar System—for example, through gravitational capture by comets—is unknown. There is little evidence in full support of the radiopanspermia hypothesis. Interest in the next transport mechanism, lithopanspermia, arose largely with the growth of planetary science, the discovery of exoplanets , and the sudden availability of data. [ 18 ] Lithopanspermia is the proposed transfer of organisms in rocks from one planet to another via planetary objects such as comets or asteroids ; it remains speculative. A variant would be for organisms to travel between star systems on nomadic exoplanets or exomoons. [ 37 ] Although there is no concrete evidence that lithopanspermia has occurred in the Solar System, the various stages have become amenable to experimental testing. [ 38 ] Lithopanspermia, described by the mechanism above, can be either interplanetary or interstellar. It is possible to quantify panspermia models and treat them as viable mathematical theories. For example, a recent study of planets of the Trappist-1 planetary system presents a model for estimating the probability of interplanetary panspermia, similar to past studies of Earth-Mars panspermia. [ 16 ] This study found that lithopanspermia is 'orders of magnitude more likely to occur' [ 16 ] in the Trappist-1 system than in the Earth-to-Mars scenario. According to the analysis, the increase in the probability of lithopanspermia is linked to an increased probability of abiogenesis amongst the Trappist-1 planets. In a way, these modern treatments attempt to keep panspermia as a contributing factor to abiogenesis, as opposed to a theory that directly opposes it. In line with this, it is suggested that if biosignatures could be detected on two (or more) adjacent planets, that would provide evidence that panspermia is a potentially required mechanism for abiogenesis. As yet, no such discovery has been made. Lithopanspermia has also been hypothesized to operate between stellar systems. One mathematical analysis, estimating the total number of rocky or icy objects that could potentially be captured by planetary systems within the Milky Way , has concluded that lithopanspermia is not necessarily bound to a single stellar system. [ 28 ] This requires not only that these objects carry life in the first place, but also that the life survives the journey. Thus intragalactic lithopanspermia is heavily dependent on the survival lifetime of organisms, as well as on the velocity of the transporter. Again, there is no evidence that such a process has occurred, or can occur. The complex nature of the requirements for lithopanspermia, as well as evidence against the longevity of bacteria under these conditions, [ 25 ] makes lithopanspermia a difficult theory to support. That being said, impact events did occur often in the early Solar System and still occur today, such as within the asteroid belt. [ 46 ] First proposed in 1972 by Nobel prize winner Francis Crick along with Leslie Orgel , directed panspermia is the theory that life was deliberately brought to Earth by a higher intelligent being from another planet. [ 47 ] In light of evidence at the time suggesting it was unlikely that an organism could have been delivered to Earth via radiopanspermia or lithopanspermia, Crick and Orgel proposed this as an alternative theory, though it is worth noting that Orgel was less serious about the claim.
[ 48 ] They acknowledged that the scientific evidence was lacking, but discussed what kinds of evidence would be needed to support the theory. In a similar vein, Thomas Gold suggested that life on Earth might have originated accidentally from a pile of 'Cosmic Garbage' dumped on Earth long ago by extraterrestrial beings. [ 49 ] These theories are often considered closer to science fiction; however, Crick and Orgel used the principle of cosmic reversibility to argue for directed panspermia. This principle rests on the observation that if our species is capable of infecting a sterile planet, then nothing in principle prevents another technological society from having done the same to Earth in the past. [ 47 ] They concluded that it would be possible to deliberately infect another planet in the foreseeable future. As far as evidence goes, Crick and Orgel argued that, given the universality of the genetic code, an infective theory for life is viable. [ 47 ] Directed panspermia could, in theory, be demonstrated by finding that a distinctive 'signature' message had been deliberately implanted into either the genome or the genetic code of the first microorganisms by our hypothetical progenitor, some 4 billion years ago. [ 50 ] However, there is no known mechanism that could prevent mutation and natural selection from removing such a message over long periods of time. [ 51 ] In 1972, both abiogenesis and panspermia were seen as viable theories by different experts. [ 18 ] Given this, Crick and Orgel argued that the experimental evidence required to validate one theory over the other was lacking. [ 47 ] That being said, evidence strongly in favor of abiogenesis over panspermia exists today [ citation needed ] , whereas evidence for panspermia, particularly directed panspermia, is decidedly lacking. Pseudo-panspermia is the well-supported hypothesis that many of the small organic molecules used for life originated in space, and were distributed to planetary surfaces. Life then emerged on Earth , and perhaps on other planets , by the processes of abiogenesis . [ 52 ] [ 53 ] Evidence for pseudo-panspermia includes the discovery of organic compounds such as sugars, amino acids , and nucleobases in meteorites and other extraterrestrial bodies, [ 54 ] [ 55 ] [ 56 ] [ 57 ] [ 58 ] and the formation of similar compounds in the laboratory under outer-space conditions. [ 59 ] [ 60 ] [ 61 ] [ 62 ] A prebiotic polyester system has been explored as an example. [ 63 ] [ 64 ] On May 14, 1864, twenty fragments of a meteorite fell over the French town of Orgueil. A separate fragment of the Orgueil meteorite (kept in a sealed glass jar since its discovery) was found in 1965 to have a seed capsule embedded in it, while the original glassy layer on the outside remained undisturbed. Despite great initial excitement, the seed was found to be that of a European Juncaceae or rush plant that had been glued into the fragment and camouflaged using coal dust . [ 8 ] The outer "fusion layer" was in fact glue. While the perpetrator of this hoax is unknown, it is thought that they sought to influence the 19th-century debate on spontaneous generation —rather than panspermia—by demonstrating the transformation of inorganic to biological matter. [ 65 ] In 2017, the Pan-STARRS telescope in Hawaii detected a reddish object with significant, periodic fluctuations in albedo , strongly suggestive of a slender, rotating object.
Analysis of its orbit provided evidence that it was an interstellar object, originating from outside our Solar System and accelerating away from the Sun in the absence of the visible outgassing that usually explains such acceleration in comets. [ 66 ] Astronomer Avi Loeb argues that there is no satisfying natural explanation for this acceleration, and proposes that Oumuamua may be a solar sail , which would be partial evidence for the feasibility of directed panspermia. [ 67 ] This claim has been considered unlikely by other authors. [ 68 ]
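The survival-lifetime constraint on interstellar lithopanspermia mentioned above can be made concrete with a toy travel-time estimate. The numbers below are purely illustrative assumptions, apart from the unit conversions; the six-year figure is the shielded-spore survival reported by the orbital experiments cited earlier:

```python
# Purely illustrative: interstellar travel time at an assumed ejection speed,
# compared with an assumed microbial survival lifetime.
PARSEC_KM = 3.086e13
SECONDS_PER_YEAR = 3.156e7

def travel_time_years(distance_pc, speed_km_s):
    return distance_pc * PARSEC_KM / speed_km_s / SECONDS_PER_YEAR

# ~1.3 pc (roughly the distance to the nearest star) at a 5 km/s ejection speed:
print(f"{travel_time_years(1.3, 5.0):.2e} years")
# ~2.5e5 years, vastly longer than the ~6-year shielded-spore survival
# demonstrated in orbit, which is why survival lifetime dominates the argument.
```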
https://en.wikipedia.org/wiki/Panspermia
In mathematics, a pantachy or pantachie (from the Greek word πανταχη meaning everywhere) is a maximal totally ordered subset of a partially ordered set , especially a set of equivalence classes of sequences of real numbers. The term was introduced by du Bois-Reymond ( 1879 , 1882 ) to mean a dense subset of an ordered set, and he also introduced "infinitary pantachies" to mean the ordered set of equivalence classes of real functions ordered by domination, but as Felix Hausdorff pointed out, this is not a totally ordered set. [ 1 ] Hausdorff (1907) redefined a pantachy to be a maximal totally ordered subset of this set. This set theory -related article is a stub . You can help Wikipedia by expanding it .
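For illustration, one common formalization of the domination order on sequences is eventual domination; conventions vary between du Bois-Reymond and Hausdorff, so this is an assumed, representative choice:

```latex
% Eventual domination: f is eventually dominated by g
f \leq^{*} g \iff \exists N \in \mathbb{N} \ \forall n \geq N : \ f(n) \leq g(n)
```

Pairs of sequences that alternately overtake each other are incomparable under this relation, which is exactly why the full ordered set fails to be total and why Hausdorff passed to maximal totally ordered subsets.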
https://en.wikipedia.org/wiki/Pantachy
Pantevenvirales is an order of viruses . [ 1 ] Pantevenvirales contains the following families: [ 2 ] This virus -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Pantevenvirales
A pantropical ("all tropics") distribution is one which covers tropical regions of both the Eastern and Western hemispheres. [ 1 ] Examples of species include caecilians , modern sirenians and the plant genera Acacia and Bacopa . [ 2 ] Neotropical is a zoogeographic term that covers a large part of the Americas , roughly from Mexico and the Caribbean southwards (including cold regions in southernmost South America). Palaeotropical likewise refers to geographical occurrence: for a distribution to be palaeotropical, a taxon must occur in tropical regions of the Old World . According to Takhtajan (1978), the following families have a pantropical distribution: Annonaceae , Hernandiaceae , Lauraceae , Piperaceae , Urticaceae , Dilleniaceae , Tetrameristaceae , Passifloraceae , Bombacaceae , Euphorbiaceae , Rhizophoraceae , Myrtaceae , Anacardiaceae , Sapindaceae , Malpighiaceae , Proteaceae , Bignoniaceae , Orchidaceae and Arecaceae . [ 3 ] [ 4 ] This biology article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Pantropical
Paola Caselli (born 1966) is an Italian astronomer and astrochemist known for her research on molecular clouds , star formation and planet formation , and the astrochemistry behind the materials found within the Solar System . [ 1 ] She is the director of the Max Planck Institute for Extraterrestrial Physics near Munich in Germany. She also holds an honorary professorship at Ludwig Maximilian University of Munich . [ 2 ] [ 3 ] Caselli was born on 26 July 1966 in Follonica , Italy, [ 2 ] and as a teenager was inspired to work in space science and molecular clouds by a teacher who gave her Fred Hoyle 's 1957 science fiction novel The Black Cloud to read. [ 1 ] She earned a laurea in astronomy and physics in 1990, from the University of Bologna , and completed her Ph.D. there in 1994. [ 2 ] [ 3 ] After postdoctoral research at the Center for Astrophysics | Harvard & Smithsonian , she became a researcher at the Arcetri Observatory in Florence , Italy, in 1996, and remained there until 2005. For the next two years, she was a visiting scholar at the University of California, Berkeley and at Harvard University . [ 3 ] In 2007 she became a professor of astronomy at the University of Leeds , and in 2011 became head of astrophysics at Leeds. She joined the Max Planck Institute for Extraterrestrial Physics as its director in 2014. [ 2 ] While continuing at the Max Planck Institute, she has also held temporary positions as Hasselblad Guest Professor at the Onsala Space Observatory in Sweden and as Blaauw Professor at the University of Groningen in the Netherlands. [ 3 ]
https://en.wikipedia.org/wiki/Paola_Caselli
The Paolo Farinella Prize is named after Paolo Farinella . The prize recognizes significant contributions in the fields of planetary sciences, space geodesy, fundamental physics, science popularization, security in space, weapons control, and disarmament. [ 1 ] Recipients must be under the age of 47 (the age at which Farinella died) to qualify for the prize. This science awards article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Paolo_Farinella_Prize
Paolo Ruffini (22 September 1765 – 10 May 1822) was an Italian mathematician and philosopher . By 1788 he had earned university degrees in philosophy, medicine/surgery and mathematics. His works include developments in algebra: an incomplete proof (now part of the Abel–Ruffini theorem ) that general algebraic equations of degree five or higher cannot be solved by radicals, and Ruffini's rule, a method for the rapid division of a polynomial by a linear binomial (illustrated below). He also wrote on probability and the quadrature of the circle . He was a professor of mathematics at the University of Modena and a medical doctor [ 5 ] whose scientific work [ 6 ] included studies of typhus . In 1799 Ruffini made a major advance toward group theory , developing Joseph-Louis Lagrange 's work on permutation theory ("Réflexions sur la théorie algébrique des équations", 1770–1771). Lagrange's work was largely ignored until Ruffini established strong connections between permutations and the solvability of algebraic equations. Ruffini was the first to assert, controversially, the unsolvability by radicals of algebraic equations higher than quartics , which angered many members of the community, such as Gian Francesco Malfatti (1731–1807). Work in that area was later carried on by Abel and Galois , who succeeded in such a proof. [ 7 ]
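As an illustration of Ruffini's rule mentioned above, here is a minimal sketch; the function name and the example polynomial are our own, chosen for demonstration:

```python
def ruffini_divide(coeffs, r):
    """Divide a polynomial by (x - r) using Ruffini's rule (synthetic division).

    coeffs: coefficients from highest to lowest degree,
            e.g. [1, -6, 11, -6] for x^3 - 6x^2 + 11x - 6.
    Returns (quotient_coeffs, remainder).
    """
    acc = [coeffs[0]]              # leading coefficient carries straight down
    for a in coeffs[1:]:
        acc.append(a + r * acc[-1])  # bring down, multiply by r, add
    return acc[:-1], acc[-1]

# (x^3 - 6x^2 + 11x - 6) / (x - 1) = x^2 - 5x + 6, remainder 0
print(ruffini_divide([1, -6, 11, -6], 1))  # ([1, -5, 6], 0)
```

A zero remainder, as here, shows that r is a root of the polynomial, which is how the rule doubles as a quick root test.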
https://en.wikipedia.org/wiki/Paolo_Ruffini
Papanicolaou stain (also Papanicolaou's stain and Pap stain ) is a multichromatic (multicolored) cytological staining technique developed by George Papanicolaou in 1942. [ 1 ] [ 2 ] [ 3 ] The Papanicolaou stain is one of the most widely used stains in cytology , [ 1 ] where it is used to aid pathologists in making a diagnosis. Although most notable for its use in the detection of cervical cancer in the Pap test or Pap smear, it is also used to stain non- gynecological specimen preparations from a variety of bodily secretions and from small needle biopsies of organs and tissues. [ 4 ] [ 5 ] Papanicolaou published three formulations of this stain, in 1942, 1954, and 1960. [ 2 ] Pap staining is used to differentiate cells in smear preparations (in which samples are spread or smeared onto a glass microscope slide) [ 6 ] from various bodily secretions and needle biopsies ; the specimens may include gynecological smears ( Pap smears ), sputum , brushings, washings, urine , cerebrospinal fluid , [ 4 ] abdominal fluid, pleural fluid , synovial fluid , seminal fluid , [ 7 ] fine needle aspirations , tumor touch samples, and other materials containing loose cells. [ 8 ] [ 4 ] [ 9 ] The Pap stain is not fully standardized and comes in several formulations, differing in the exact dyes used, their ratios, and the timing of the process. [ 2 ] [ 1 ] Pap staining is usually associated with cytopathology , in which loose cells are examined, but the stain has also been modified and used on tissue slices. [ 9 ] Pap staining is used in the Pap smear (or Pap test) and is a reliable technique in cervical cancer screening in gynecology . [ 10 ] The classic form of the Papanicolaou stain involves five stains in three solutions. [ 2 ] [ 11 ] [ 12 ] The counterstains are dissolved in 95% ethyl alcohol, which prevents cells from over-staining; over-staining would obscure nuclear detail and cell outlines, especially when cells overlap on the slide. [ 3 ] [ 2 ] Phosphotungstic acid is added to adjust the pH of the counterstains and helps to optimize the color intensity. [ 2 ] The EA counterstain contains Bismarck brown and phosphotungstic acid, which in combination precipitate out of solution, reducing the useful life of the mixture. [ 2 ] The stain should result in cells that are fairly transparent, so that even thicker specimens with overlapping cells can be interpreted. [ 2 ] Cell nuclei should be crisp, blue to black in color, [ 12 ] [ 13 ] and the chromatin patterns of the nucleus should be well defined. Cell cytoplasm stains blue-green, and keratin stains orange. [ 13 ] [ 5 ] Eosin Y stains the superficial epithelial squamous cells , nucleoli , cilia , and red blood cells . [ 2 ] Light Green SF yellowish confers a blue staining on the cytoplasm of active cells such as columnar cells, parabasal squamous cells, and intermediate squamous cells. [ 14 ] Superficial cells are orange to pink, and intermediate and parabasal cells are turquoise green to blue. [ 12 ] The ultrafast Papanicolaou stain is an alternative for fine needle aspiration samples, developed to achieve comparable visual clarity in a significantly shorter time. The process differs in the rehydration of the air-dried smear with saline , the use of 4% formaldehyde in 65% ethanol as the fixative , and the use of Richard-Allan Hematoxylin -2 and Cyto-Stain , resulting in a 90-second process yielding transparent polychromatic stains. [ 15 ]
https://en.wikipedia.org/wiki/Papanicolaou_stain
Paparazzi is an open-source autopilot system oriented toward inexpensive autonomous aircraft. [ 1 ] Low cost and availability enable hobbyist use in small remotely piloted aircraft . [ 2 ] The project began in 2003, [ 1 ] and is being further developed and used at École nationale de l'aviation civile (ENAC), [ 3 ] a French civil aeronautics academy. Several vendors currently produce Paparazzi autopilots and accessories. An autopilot allows a remotely piloted aircraft to be flown out of sight. [ 1 ] All hardware and software is open source and freely available to anyone under GNU licensing. Open-source autopilots provide flexible software: users can easily modify the autopilot based on their own special requirements, such as forest-fire evaluation. [ 4 ] [ 5 ] Paparazzi collaborators share ideas and information using the same MediaWiki software that is used by Wikipedia . [ 6 ] Paparazzi accepts commands and sensor data, and adjusts flight controls accordingly. For example, a command might be to climb at a certain rate, and Paparazzi will adjust power and/or control surfaces to match it; a minimal sketch of such a control loop is given below. As of 2010, Paparazzi did not have a good speed-hold or speed-changing function, because the controller did not take airspeed sensor readings into account. [ 5 ] Delft University of Technology released its Lisa/S chip project, which is based on Paparazzi, in 2013. [ 7 ] Paparazzi supports multiple hardware designs, including STM32 and LPC2100 series microcontrollers . A number of CAD files have been released. Paparazzi provides for a minimum set of flight sensors: [ 8 ] The open-source software suite "contains everything" needed to let an "airborne system fly reliably". [ 9 ] This article about aircraft components is a stub . You can help Wikipedia by expanding it .
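The climb-rate example above can be sketched as a simple proportional control loop. This is illustrative only; the gains, names, and cruise-throttle value are hypothetical and not taken from the Paparazzi source tree:

```python
# Illustrative only: a minimal proportional controller for climb-rate hold,
# sketching the kind of adjustment an autopilot makes. All values are made up.
def throttle_command(target_climb_rate, measured_climb_rate,
                     cruise_throttle=0.6, gain=0.05):
    """Map climb-rate error (m/s) to a throttle setting clamped to [0, 1]."""
    error = target_climb_rate - measured_climb_rate
    return min(1.0, max(0.0, cruise_throttle + gain * error))

print(throttle_command(2.0, 0.5))  # climbing too slowly -> throttle 0.675
```

A real autopilot would add integral and derivative terms, coordinate throttle with pitch, and run the loop at a fixed rate from sensor interrupts; the sketch only shows the proportional core.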
https://en.wikipedia.org/wiki/Paparazzi_Project
Papaver somniferum × Papaver bracteatum , also known as Sagan's poppy, is a hybrid between the opium poppy and the Iranian poppy . This hybrid is a true poppy, diploid with 18 chromosomes, and exhibits strongly reduced fitness relative to its parents, possibly due to unpaired chromosomes, since the Iranian and opium poppies do not have the same chromosome number. The clearest example of its reduced fitness is semilethal dwarfism , with about 53.4% of specimens grown exhibiting dwarfism. While this hybrid does not possess the cold hardiness of the Iranian poppy ( Papaver bracteatum ), it is notably more cold-tolerant than Papaver somniferum and, in a greenhouse or protected setting, could be grown as a perennial . Another notable feature of this hybrid is that it contains a higher concentration of morphinan alkaloids (including morphine ) than any known cultivar of Papaver somniferum or Papaver bracteatum . This is despite one of its parents ( Papaver bracteatum ) producing negligible concentrations of morphine, and is believed to be due to greater expression of one of the rate-limiting enzymes of morphine synthesis . This Papaveraceae article is a stub . You can help Wikipedia by expanding it . This article about medicinal chemistry is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Papaver_somniferum_×_bracteatum
The Paper, Allied-Industrial, Chemical and Energy Workers International Union (PACE) was an international union that represented workers in the United States and Canada . PACE was founded on January 4, 1999, by the merger of the United Paperworkers' International Union with the Oil, Chemical and Atomic Workers International Union . Like all labor unions, PACE fought for rights, wage raises, and improved working conditions for workers in fields such as paper, oil, chemicals, nuclear materials, pharmaceuticals, automobile parts, motorcycles, tissues, toys, cement, and corn sugar. On January 11, 2005, the union announced a merger with the United Steel Workers of America . The new union, with 860,000 active members in the United States and Canada, was the largest industrial labor union in North America. The union is known as the United Steel, Paper and Forestry, Rubber, Manufacturing, Energy, Allied-Industrial and Service Workers International Union, abbreviated as the "United Steelworkers" or by the acronym USW. Throughout its existence, PACE was led by president Boyd Young . [ 1 ] This article related to a North American labor union or trade union is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Paper,_Allied-Industrial,_Chemical_and_Energy_Workers_International_Union
A paper-ruling machine is a device for ruling paper . In 1770, John Tetlow was awarded a patent for a "machine for ruling paper for music and other purposes." [ 1 ] William Orville Hickok invented an "improved ruling machine" in the mid-19th century. [ 2 ] As the device is designed for drawing lines on paper, it can produce tables and ruled paper . The functionality of the machine is based on pens manufactured especially for the device. The pens have multiple tips side by side, and water-based ink is led into them along threads. It is possible to program stop-lines on the equipment by mounting pens on shafts equipped with cams that lower and raise them at predetermined points. [ 3 ] The spread of computerized accounting between the 1960s and 1980s significantly decreased the demand for accounting tables and ruled paper. Nowadays, their demand is primarily filled by using offset printing . [ 3 ]
https://en.wikipedia.org/wiki/Paper-ruling_machine
Paper chemicals designate a group of chemicals that are used in paper manufacturing , or that modify the properties of paper . These chemicals can be used to alter the paper in many ways, including changing its color and brightness , or increasing its strength and resistance to water. [ 1 ] The chemicals can be classified on the basis of their use in the process. Chemicals are used not only to impart properties to the paper but also to manage the water cycles of the process, condition fabrics, clean equipment, and serve several other applications. Chemical pulping involves dissolving lignin in order to extract the cellulose from the wood fiber. The different processes of chemical pulping include the Kraft process , which uses caustic soda and sodium sulfide and is the most common; the sulfite process , which uses sulfurous acid; neutral sulfite semichemical pulping, treated as a third process separate from sulfite; and soda pulping, the least ecologically hazardous, which uses sodium hydroxide or anthraquinone . [ 2 ] Caustic soda is added to increase the pH in the pulping process of fibers. The higher pH of the paper-fiber solution causes the fibers to smoothen and swell, which is important for the grinding process of the fibers. In the production of white paper, the wood pulp is bleached to remove any color from the trace amounts of lignin that were not extracted in the chemical pulping process. There are three predominant methods of bleaching: Most paper types must have some water resistance to maintain a specific writing quality and printability. Until 1980, the typical way of adding this resistance was to use a rosin in combination with alum . When the paper industry started using chalk instead of china clay as filler, the paper chemistry had to switch to a neutral process. In several places AKD ( alkyl ketene dimer ) and ASA ( alkenyl succinic anhydride ) are used. The latest development is to use surface size, [ 5 ] which is applied using a size press. The advantage of surface sizing is that it does not interfere with the backend water chemistry. Wet-strength additives ensure that paper retains its strength when it gets wet. This is especially important in tissue paper . Chemicals typically used for this purpose include epichlorohydrin , melamine , urea formaldehyde and polyimines . These substances polymerize in the paper and result in the construction of a strengthening network. To enhance the paper's strength, cationic starch is added to the wet pulp in the manufacturing process. Starch has a chemical structure similar to that of the cellulose fibre of the pulp, and the surfaces of both the starch and the fibre are negatively charged. By adding cationic (positively charged) starch, the fibre can bind with the starch, which also increases the interconnections between the fibres. The positively charged portion of the starch is usually formed by quaternary ammonium cations . Quaternary salts that are used include 2,3-epoxypropyl trimethylammonium chloride (EPTAC, also known as Glytac, Quab or GMAC) and (3-chloro-2-hydroxypropyl) trimethylammonium chloride (CHPTAC, also known as Quat 188, Quab 188 or Reagens). Dry-strength additives, or dry-strengthening agents, are chemicals that improve paper strength under normal, dry conditions. These improve the paper's compression strength , bursting strength , tensile breaking strength , and delamination resistance . Typical chemicals used include cationic starch and polyacrylamide (PAM) derivatives.
These substances work by binding the fibers, often with the aid of aluminum ions, in the paper sheet. Binders promote the binding of pigment particles between themselves and to the coating layer of the paper. [ 6 ] Binders are spherical particles less than 1 μm in diameter. Common binders are styrene maleic anhydride copolymer or styrene-acrylate copolymer. [ 7 ] The surface chemical composition is differentiated by the adsorption of acrylic acid or an anionic surfactant , both of which are used for stabilization of the dispersion in water. [ 8 ] Co-binders, or thickeners, are generally water-soluble polymers that influence the viscosity, water retention, sizing , and gloss of the coating color. Some common examples are carboxymethyl cellulose (CMC), cationic and anionic hydroxyethyl cellulose (EHEC), modified starch , and dextrin . Styrene-butadiene latex, styrene-acrylic, dextrin , and oxidized starch are used in coatings to bind the filler to the paper. Co-binders are natural products, such as starch and CMC ( carboxymethyl cellulose ), that are used along with the synthetic binders, like styrene-acrylic or styrene-butadiene, to reduce the cost of the synthetic binder and to improve the water retention and rheology of the coating. Mineral fillers are used to lower the consumption of more expensive binder material or to improve some properties of the paper. [ 9 ] China clay , calcium carbonate, titanium dioxide , and talc are common mineral fillers used in paper production. [ citation needed ] A retention agent is added to bind fillers to the paper. Fillers, such as calcium carbonate , usually have a weak surface charge. The retention agent is a polymer with a high density of cationic (positively charged) groups. An additional function of a retention agent is to accelerate dewatering in the wire section of the paper machine . Polyethyleneimine and polyacrylamide are examples of chemicals used in this process. [ citation needed ] Pigments that absorb in the yellow and red parts of the visible spectrum can be added. As such a dye absorbs light, the brightness of the paper will decrease, unlike the effect of an optical brightening agent. To increase whiteness, a combination of pigments and an optical brightening agent is often used. The most commonly used pigments are blue and violet dyes. [ citation needed ] Optical brighteners are used to make paper appear whiter. Optical brightening agents use fluorescence to absorb invisible radiation from the ultraviolet part of the light spectrum and re-emit the radiation as light in the visible blue range. The optical brightening agent thus generates blue light that is added to the reflected light. The additional blue light offsets the yellowish tinge that would otherwise exist in the reflected light, and thus increases the brightness of the material (when the illumination includes ultraviolet radiation). [ 10 ]
https://en.wikipedia.org/wiki/Paper_chemicals
Paper chromatography is an analytical method used to separate colored chemicals or substances. It can also be used for colorless chemicals that can be located by a stain or other visualisation method after separation. [ 1 ] It is now primarily used as a teaching tool, having been replaced in the laboratory by other chromatography methods such as thin-layer chromatography (TLC). This analytical method has three components: a mobile phase, a stationary phase and a support medium (the paper). The mobile phase is generally a non-polar organic solvent in which the sample is dissolved. The stationary phase consists of (polar) water molecules that were incorporated into the paper when it was manufactured. The mobile phase travels up the stationary phase by capillary action , carrying the sample with it. The difference between TLC and paper chromatography is that the stationary phase in TLC is a layer of adsorbent (usually silica gel or aluminium oxide ), while the stationary phase in paper chromatography is less absorbent paper. A paper chromatography variant, two-dimensional chromatography , involves using two solvents and rotating the paper 90° in between. This is useful for separating complex mixtures of compounds having similar polarity, for example, amino acids . The retention factor (Rƒ) may be defined as the ratio of the distance travelled by the solute to the distance travelled by the solvent. It is used in chromatography to quantify the amount of retardation of a sample in a stationary phase relative to a mobile phase. [ 2 ] Rƒ values are usually expressed as a fraction of two decimal places. For example, if a compound travels 9.9 cm and the solvent front travels 12.7 cm, the Rƒ value = (9.9/12.7) = 0.779 or 0.78. The Rƒ value depends on the temperature and the solvent used in the experiment, so the same mixture of compounds can give different Rƒ values in different solvents. A solvent in chromatography is the liquid the paper is placed in, and the solute is the ink being separated. Paper chromatography is one method for testing the purity of compounds and identifying substances. It is a useful technique because it is relatively quick and requires only small quantities of material. Separations in paper chromatography involve the principle of partition. In paper chromatography, substances are distributed between a stationary phase and a mobile phase. The stationary phase is the water trapped between the cellulose fibers of the paper. The mobile phase is a developing solution that travels up the stationary phase, carrying the samples with it. Components of the sample will separate according to how strongly they adsorb onto the stationary phase versus how readily they dissolve in the mobile phase. When a colored chemical sample is placed on a filter paper, the colors separate from the sample by placing one end of the paper in a solvent . The solvent diffuses up the paper, dissolving the various molecules in the sample according to the polarities of the molecules and the solvent. If the sample contains more than one color, it must contain more than one kind of molecule. Because of the different chemical structures of each kind of molecule, the chances are very high that each molecule will have at least a slightly different polarity, giving each molecule a different solubility in the solvent. The unequal solubility causes the various color molecules to leave solution at different places as the solvent continues to move up the paper.
The more soluble a molecule is, the higher it will migrate up the paper. If a chemical is very non-polar, it will not dissolve at all in a very polar solvent. The same holds for a very polar chemical and a very non-polar solvent. When using water (a very polar substance) as a solvent, the more polar the color, the higher it will rise on the paper. [ citation needed ] [ original research? ] In descending chromatography, development of the chromatogram is done by allowing the solvent to travel down the paper: the mobile phase is placed in a solvent holder at the top, the spot is kept at the top, and the solvent flows down the paper from above. In ascending chromatography, the solvent travels up the chromatographic paper, and the sample and solvent move upward. Both descending and ascending paper chromatography are used for the separation of organic and inorganic substances. Ascending-descending chromatography is a hybrid of the two techniques: the upper part of an ascending chromatogram can be folded over a rod, allowing the paper to become descending after crossing the rod. In circular (radial) chromatography, a circular filter paper is taken and the sample is deposited at the center of the paper. After drying the spot, the filter paper is tied horizontally on a Petri dish containing solvent, so that the wick of the paper dips into the solvent. The solvent rises through the wick and the components are separated into concentric rings. In two-dimensional chromatography, a square or rectangular paper is used. Here the sample is applied to one of the corners, and development is performed at a right angle to the direction of the first run. The discovery of paper chromatography in 1943 by Martin and Synge provided, for the first time, the means of surveying the constituents of plants and of separating and identifying them. [ 3 ] Erwin Chargaff, in Weintraub's history of the man, credits the 1944 article by Consden, Gordon and Martin. [ 4 ] [ 5 ] There was an explosion of activity in this field after 1945. [ 3 ]
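As a worked example of the retention factor defined above, here is a minimal sketch; the function and variable names are our own:

```python
def retention_factor(solute_distance_cm, solvent_front_cm):
    """R_f = distance travelled by the solute / distance travelled by the solvent."""
    return solute_distance_cm / solvent_front_cm

# The article's example: the compound moves 9.9 cm, the solvent front 12.7 cm.
rf = retention_factor(9.9, 12.7)
print(round(rf, 2))  # 0.78
```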
https://en.wikipedia.org/wiki/Paper_chromatography
Paper engineering is a branch of engineering that deals with the usage of physical sciences (e.g. chemistry and physics) and life sciences (e.g. biology and biochemistry ) in conjunction with mathematics, as applied to the conversion of raw materials into useful paper products and co-products. [ 1 ] The field applies various principles of process engineering and unit operations to the manufacture of paper , chemicals , energy and related materials. The following timeline shows some of the key steps in the development of the science of chemical and bioprocess engineering: [ 2 ] From a heritage perspective, the field encompasses the design and analysis of a wide variety of thermal, chemical and biochemical unit operations employed in the manufacture of pulp and paper. It addresses the preparation of raw materials from trees or other natural resources via a pulping process; the chemical and mechanical pretreatment of these recovered biopolymer fibers (principally, although not solely, cellulose -based) in a fluid suspension; the high-speed forming and initial dewatering of a non-woven web; the development of bulk sheet properties via control of energy and mass transfer operations; and the post-treatment of the sheet with coating, calendering, and other chemical and mechanical processes. [ 1 ] Today, the field of paper and chemical engineering is applied to the manufacture of a wide variety of products. Within the scope of forestry, biology, chemical science and the (bio)chemical industry, it manufactures organic and agrochemicals (fertilizers, insecticides, herbicides), oleochemicals , fragrances and flavors, food, feed , pharmaceuticals , nutraceuticals , chemicals , polymers and power from biological materials. The resulting products of paper engineering, including paper, cardboard, and various paper derivatives, are widely used in everyday life. In addition to being a subset of chemical engineering, the field of paper engineering is closely linked to forest management , product recycling , and the mass production of paper-based media. In the process of mechanical pulping, "grinding" and "refining" are the two main methods used to create the pulp. Grinding is the method of pressing logs and chips against a turning stone to produce fibers. Refiner pulping treats wood chips with chemicals or heat and then crushes the chips between two disks, one or both of which rotate. There are four main types of refiner pulping: refiner mechanical pulping, thermo-mechanical pulping, chemi-mechanical pulping, and chemithermomechanical pulping. [ 3 ] Further descriptions of each process are given in the cited reference. [ 4 ] Mechanical pulping, compared to chemical pulping, is relatively inexpensive and has a high pulp yield (85–95%). However, the paper created is generally weak, since it retains the lignin. [ 1 ] The process of chemical pulping is used to chemically dissolve the lignin found in the cell walls of the material undergoing the process. After the cellulose fibers are separated from the lignin, a pulp is created which can then be treated to create durable paper, boxes, and corrugated cardboard. Chemical pulping is characterized by two main methods: sulfate ( Kraft process ) pulping and sulfite pulping, and these two methods have different benefits. Sulfate pulping can be performed on a wide range of tree varieties and results in a strong type of paper. Conversely, sulfite pulping results in a higher volume of pulp which is easier to bleach and process.
However, sulfate pulping is more widely used, since its product is more durable and the chemicals used in the process can be recovered, resulting in minimal environmental pollution. [ 5 ] The pulp is then processed through an apparatus which renders it as a mesh of fibers. This fiber network is pressed to remove most of the water, and the paper is subsequently dried to remove the residual moisture. After the above processes have been completed, the resulting paper is coated with a minuscule amount of china clay or calcium carbonate to modify the surface, and the paper is then re-sized depending on its intended purpose. Generally, the material to be recycled first undergoes mechanical or chemical pulping to render it in pulp form. The resulting pulp is then processed in the same way as normal pulp; however, original fiber is sometimes added to enhance the quality and appearance of the product. Today, the field of paper and bioprocess engineering is a diverse one, covering areas from biotechnology and nanotechnology to electricity generation . It is generally offered as a specialization within chemical engineering: [ 6 ]
https://en.wikipedia.org/wiki/Paper_engineering
A paper machine (or paper-making machine ) is an industrial machine which is used in the pulp and paper industry to create paper in large quantities at high speed. Modern paper-making machines are based on the principles of the Fourdrinier machine, which uses a moving woven mesh to create a continuous paper web by filtering out the fibres held in a paper stock, producing a continuously moving wet mat of fibre. This is dried in the machine to produce a strong paper web. The basic process is an industrialised version of the historical process of hand paper-making, which could not satisfy the demands of developing modern society for large quantities of a printing and writing substrate. The first modern paper machine was invented by Louis-Nicolas Robert in France in 1799, and an improved version was patented in Britain by Henry and Sealy Fourdrinier in 1806. The same process is used to produce paperboard on a paperboard machine. Paper machines usually have at least five distinct operational sections: the forming section, the press section, the dryer section, the calender section, and the reel. There can also be a coating section to modify the surface characteristics with coatings such as kaolin clay , alternatively known as china clay. This section can be on-line or off-line as well. Before the invention of continuous paper making, paper was made in individual sheets by stirring a container of pulp slurry and either pouring it into a fabric sieve called a sheet mould or dipping and lifting the sheet mould from the vat. While still on the fabric in the sheet mould, the wet paper was pressed to remove excess water. The sheet was then lifted off to be hung over a rope or wooden rod to air dry. In 1799, Louis-Nicolas Robert of Essonnes , France, was granted a patent for a continuous paper making machine. At the time, Robert was working for Saint-Léger Didot , with whom he quarreled over the ownership of the invention. Didot believed that England was a better place to develop the machine, but due to the turbulence of the French Revolution he could not go there himself, so he sent his brother-in-law, John Gamble, an Englishman living in Paris. Through a chain of acquaintances, Gamble was introduced to the brothers Sealy and Henry Fourdrinier , stationers of London, who agreed to finance the project. Gamble was granted British patent 2487 on October 20, 1801. The Fourdrinier machine used a specially woven fabric mesh conveyor belt (known as a wire, as it was once woven from bronze) in the forming section, where a slurry of fibre (usually wood or other vegetable fibres) is drained to create a continuous paper web. The original Fourdrinier forming section used a horizontal drainage area, referred to as the drainage table . With the help of Bryan Donkin , a skilled and ingenious mechanic, an improved version of the Robert original was installed at Frogmore Paper Mill , Apsley, Hertfordshire , in 1803, followed by another in 1804. A third machine was installed at the Fourdriniers' own mill at Two Waters. The Fourdriniers also bought a mill at St Neots , intending to install two machines there, and the process and machines continued to develop. Close to Frogmore Mill in Apsley, John Dickinson designed and built an alternative machine type, the cylinder mould machine, in 1809. Thomas Gilpin is most often credited with creating the first U.S. cylinder-type papermaking machine at Brandywine Creek , Delaware in 1817. This machine was a cylinder mould machine. The Fourdrinier machine was not introduced into the USA until 1827.
[ 1 ] Records show that Charles Kinsey of Paterson, NJ had already patented a continuous-process papermaking machine in 1807. Kinsey's machine was built locally by Daniel Sawn, and by 1809 the Kinsey machine was successfully making paper at the Essex Mill in Paterson. Financial stress and potential opportunities created by the Embargo of 1807 eventually persuaded Kinsey and his backers to change the mill's focus from paper to cotton, and Kinsey's early papermaking successes were soon overlooked and forgotten. [ 2 ] [ 3 ] Gilpin's 1817 patent was similar to Kinsey's, as was the John Ames patent of 1822. The Ames patent was challenged by his competitors, who asserted that Kinsey was the original inventor and that Ames had been stealing other people's ideas, their evidence being the employment of Daniel Sawn to work on his machine. [ 2 ] The method of continuous production demonstrated by the paper machine influenced the development of continuous rolling of iron and later steel, and of other continuous production processes. [ 4 ] The plant fibres used for pulp are composed mostly of cellulose and hemicellulose, which have a tendency to form molecular linkages between fibres in the presence of water. After the water evaporates, the fibres remain bonded. It is not necessary to add additional binders for most paper grades, although both wet- and dry-strength additives may be added. Rags of cotton and linen were the major source of pulp for paper before wood pulp. Today almost all pulp is of wood fibre. Cotton fibre is used in speciality grades, usually in printing paper for such things as resumes and currency. Sources of rags often appear as waste from other manufacturing, such as denim fragments or glove cuts. Fibres from clothing come from the cotton boll. The fibres can range from 3 to 7 cm in length as they exist in the cotton field. Bleach and other chemicals remove the colour from the fabric in a process of cooking, usually with steam. The cloth fragments are mechanically abraded into fibres, and the fibres are shortened to a length appropriate for manufacturing paper by a cutting process. Rags and water are dumped into a trough forming a closed loop. A cylinder with cutting edges, or knives, and a knife bed is part of the loop. The spinning cylinder pushes the contents of the trough around repeatedly. As it lowers slowly over a period of hours, it breaks the rags up into fibres and cuts the fibres to the desired length. The cutting process terminates when the mix has passed the cylinder enough times at the programmed final clearance of the knives and bed. Another source of cotton fibre is the cotton ginning process. The seeds remain, surrounded by short fibres known as linters for their short length and resemblance to lint. Linters are too short for successful use in fabric. Linters removed from the cotton seeds are available as first and second cuts. The first cuts are longer. The two major classifications of pulp are chemical and mechanical . Chemical pulps formerly used a sulphite process , but the kraft process is now predominant. Kraft pulp has superior strength to sulphite and mechanical pulps, and the kraft process's spent pulping chemicals are easier to recover and regenerate. Both chemical pulps and mechanical pulps may be bleached to a high brightness. Chemical pulping dissolves the lignin that bonds fibres to one another, and that binds the outer fibrils composing individual fibres to the fibre core.
Lignin, like most other substances that can separate fibres from one another, acts as a debonding agent, lowering strength. Strength also depends on maintaining long cellulose molecule chains. The kraft process, due to the alkali and sulphur compounds used, tends to minimize attack on the cellulose and the non-crystalline hemicellulose , which promotes bonding, while dissolving the lignin. Acidic pulping processes shorten the cellulose chains. Kraft pulp makes superior linerboard and excellent printing and writing papers. Groundwood, the main ingredient used in newsprint and a principal component of magazine papers (coated publications), is literally ground wood produced by a grinder. Therefore, it contains a lot of lignin, which lowers its strength. The grinding produces very short fibres that drain slowly. Thermomechanical pulp (TMP) is a variation of groundwood where fibres are separated mechanically while at high enough temperatures to soften the lignin. Between chemical and mechanical pulps there are semi-chemical pulps that use a mild chemical treatment followed by refining. Semi-chemical pulp is often used for corrugating medium. Bales of recycled paper (normally old corrugated containers) for unbleached (brown) packaging grades may be simply pulped, screened and cleaned. Recycling to make white papers is usually done in a deinking plant, which employs screening, cleaning, washing, bleaching and flotation. Deinked pulp is used in printing and writing papers and in tissue , napkins and paper towels . It is often blended with virgin pulp. At integrated pulp and paper mills, pulp is usually stored in high-density towers before being pumped to stock preparation. Non-integrated mills use either dry pulp or wet lap (pressed) pulp, usually received in bales. The pulp bales are slushed in a [re]pulper. Stock preparation is the area where pulp is usually refined, blended to the appropriate proportion of hardwood , softwood or recycled fibre, and diluted to as uniform and constant a consistency as possible. The pH is controlled, and various additives, such as whitening agents, size and wet-strength or dry-strength agents, are added if necessary. Additional fillers such as clay , calcium carbonate and titanium dioxide increase opacity so printing on the reverse side of a sheet will not distract from content on the obverse side. Fillers also improve printing quality. [ 5 ] Pulp is pumped through a sequence of tanks that are commonly called chests , which may be either round or, more commonly, rectangular. Historically these were made of special ceramic-tile-faced reinforced concrete, but mild and stainless steels are also used. Because fibre and fillers are denser than water and tend to settle out quickly, and because fibres are attracted to one another and form clumps called flocs, low-consistency pulp slurries are kept agitated in these chests by propeller-like agitators near the pump suction at the chest bottom. In the following process, different types of pulp, if used, are normally treated in separate but similar process lines until combined at a blend chest: From high-density storage or from the slusher/pulper the pulp is pumped to a low-density storage chest (tank). From there it is typically diluted to about 4% consistency before being pumped to an unrefined stock chest. From the unrefined stock chest, stock is again pumped, with consistency control, through a refiner.
Refining is an operation whereby the pulp slurry passes between a pair of discs, one of which is stationary and the other rotating at speeds of typically 1,000 or 1,200 RPM for 50 and 60 Hz AC, respectively. The discs have raised bars on their faces and pass each other with narrow clearance. This action unravels the outer layer of the fibres, causing the fibrils of the fibres to partially detach and bloom outward, increasing the surface area to promote bonding. Refining thus increases tensile strength. For example, tissue paper is relatively unrefined whereas packaging paper is more highly refined. Refined stock from the refiner then goes to a refined stock chest, or blend chest, if used as such. Hardwood fibres are typically 1 mm long and smaller in diameter than softwood fibres, whose typical length is 4 mm. Refining can cause the softwood fibre tube to collapse, resulting in undesirable properties in the sheet. From the refined stock, or blend chest, stock is again consistency controlled as it is being pumped to a machine chest. It may be refined or additives may be added en route to the machine chest. The machine chest is basically a consistency-levelling chest providing about 15 minutes of retention. This is enough retention time to allow any variations in consistency entering the chest to be levelled out by the action of the basis weight valve receiving feedback from the on-line basis weight measuring scanner. (Note: Many paper machines mistakenly control consistency coming out of the machine chest, interfering with basis weight control.) [ notes 1 ] There are four main sections on a paper machine. The forming section makes the pulp into the basis for sheets along the wire. The press section removes much of the remaining water via a system of nips formed by rolls pressing against each other, aided by press felts that support the sheet and absorb the pressed water. The dryer section of the paper machine, as its name suggests, dries the paper by way of a series of internally steam -heated cylinders that evaporate the moisture. Calenders are used to make the paper surface extra smooth and glossy. In practice calender rolls are normally placed vertically in a stack . From the machine chest, stock is pumped to a head tank, commonly called a stuff box , whose purpose is to maintain a constant head (pressure) on the fibre slurry or stock as it feeds the basis weight valve. The stuff box also provides a means of allowing air bubbles to escape. The consistency of the pulp slurry at the stuff box is in the 3% range. Flow from the stuff box is by gravity and is controlled by the basis weight valve on its way to the fan pump suction, where it is injected into the main flow of water to the fan pump. The main flow of water pumped by the fan pump is from a whitewater chest or tank that collects all the water drained from the forming section of the paper machine. Before the fibre stream from the stuff box is introduced, the whitewater is very low in fibre content. The whitewater is constantly recirculated by the fan pump through the headbox and recollected from the wire pit and various other tanks and chests that receive drainage from the forming wire and vacuum-assisted drainage from suction boxes and wet fibre web handling rolls. On the way to the headbox the pulp slurry may pass through centrifugal cleaners, which remove heavy contaminants like sand, and screens, which break up fibre clumps and remove oversized debris.
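The repeated dilutions described above follow a simple fibre mass balance. As a minimal sketch (with illustrative numbers, not figures from any particular mill), the dilution water needed for one such step can be computed as follows:

```python
# Stock dilution mass balance: fibre is conserved, so
#   stock_flow * c_in = (stock_flow + water) * c_out.

def dilution_water(stock_flow_kg_s: float, c_in: float, c_out: float) -> float:
    """Dilution water (kg/s) needed to drop consistency from c_in to c_out."""
    if not 0 < c_out < c_in:
        raise ValueError("target consistency must be positive and below the input")
    return stock_flow_kg_s * (c_in / c_out - 1.0)

# Example: 100 kg/s of 4% stock diluted toward the ~3% range quoted for the stuff box.
print(f"{dilution_water(100.0, 0.04, 0.03):.1f} kg/s of dilution water")  # ~33.3
```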
The fan pump ultimately feeds the headbox, whether or not any centrifugal cleaners or screens are present. [ 6 ] [ 7 ] [ 8 ] The purpose of the headbox is to create turbulence in order to keep the fibres from clumping together and to uniformly distribute the slurry across the width of the wire. Wood fibres have a tendency to attract one another, forming clumps, an effect called flocculation. Flocculation is lessened by lowering consistency and/or by agitating the slurry; however, de-flocculation becomes very difficult at much above 0.5% consistency. Minimizing the degree of flocculation when forming is important to the physical properties of paper . [ 7 ] [ 8 ] The consistency in the headbox is typically under 0.4% for most paper grades, with longer fibres requiring lower consistency than short fibres. Higher consistency causes more fibres to be oriented in the z direction, while lower consistency promotes fibre orientation in the x-y direction. Higher consistency promotes higher caliper (thickness) and stiffness; lower consistency promotes higher tensile and some other strength properties and also improves formation (uniformity). [ 7 ] [ 8 ] Many sheet properties continue to improve down to below 0.1% consistency; however, this is an impractical amount of water to handle. (Most paper machines run a higher headbox consistency than optimum because they have been sped up over time without replacing the fan pump and headbox. There is also an economic trade-off, with high pumping costs for lower consistency.) The stock slurry, often called white water at this point, exits the headbox through a rectangular opening of adjustable height called the slice , the white water stream being called the jet. On high-speed machines the jet is pressurized so as to land gently on the moving fabric loop or wire at a speed typically within plus or minus 3% of the wire speed, the differences being called rush and drag respectively. Excessive rush or drag causes more orientation of fibres in the machine direction and gives differing physical properties in the machine and cross directions; however, this phenomenon is not completely avoidable on Fourdrinier machines. [ 7 ] [ 8 ] On lower-speed machines, at around 700 feet per minute, gravity and the height of the stock in the headbox create sufficient pressure to form the jet through the opening of the slice. The height of the stock is the head, which gives the headbox its name. The speed of the jet compared to the speed of the wire is known as the jet-to-wire ratio . When the jet-to-wire ratio is less than unity, the fibres in the stock become drawn out in the machine direction. On slower machines, where sufficient liquid remains in the stock before draining out, the wire can be driven back and forth with a process known as shake . This provides some measure of randomizing the direction of the fibres and gives the sheet more uniform strength in both the machine and cross-machine directions. On fast machines, the stock does not remain on the wire in liquid form long enough and the long fibres line up with the machine direction. When the jet-to-wire ratio exceeds unity, the fibres tend to pile up in lumps. [ 7 ] [ 8 ] The resulting variation in paper density provides the antique or parchment paper look. Two large rolls typically form the ends of the drainage section, which is called the drainage table . The breast roll is located under the flow box, the jet being aimed to land on it at about the top centre. At the other end of the drainage table is the suction ( couch ) roll.
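Before following the sheet onto the couch roll, the jet-to-wire relationship just described can be made concrete. The sketch below assumes a simple gravity-fed headbox obeying Torricelli's law, $v = \sqrt{2gh}$; the wire speed and head values are illustrative only:

```python
import math

G = 9.81  # m/s^2

def jet_speed(head_m: float) -> float:
    """Jet speed from the slice of a gravity-fed headbox (Torricelli's law)."""
    return math.sqrt(2 * G * head_m)

def rush_drag_percent(jet_mps: float, wire_mps: float) -> float:
    """Positive values are rush (jet faster than wire); negative values are drag."""
    return 100.0 * (jet_mps - wire_mps) / wire_mps

wire = 700 * 0.3048 / 60                 # 700 ft/min expressed in m/s (~3.56 m/s)
head_unity = wire**2 / (2 * G)           # head giving a jet-to-wire ratio of one
print(f"head for unity jet-to-wire: {head_unity:.2f} m")
print(f"rush/drag at 0.70 m of head: {rush_drag_percent(jet_speed(0.70), wire):+.1f} %")
```

Running the head slightly above or below the unity value produces the small rush or drag percentages mentioned above.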
The couch roll is a hollow shell, drilled with many thousands of precisely spaced holes of about 4 to 5 mm diameter. The hollow shell roll rotates over a stationary suction box, normally placed at the top centre or rotated just down machine. Vacuum is pulled on the suction box, which draws water from the web into the suction box. From the suction roll the sheet feeds into the press section. [ 7 ] [ 8 ] Down machine from the suction roll, and at a lower elevation, is the wire turning roll . This roll is driven and pulls the wire around the loop. The wire turning roll has a considerable angle of wrap in order to grip the wire. [ 7 ] Supporting the wire in the drainage table area are a number of drainage elements. In addition to supporting the wire and promoting drainage, the elements de-flocculate the sheet. On low-speed machines these table elements are primarily table rolls . As speed increases, the suction developed in the nip of a table roll increases, and at high enough speed the wire snaps back after leaving the vacuum area, causing stock to jump off the wire and disrupting formation. To prevent this, drainage foils are used. The foils are typically sloped between zero and two or three degrees and give a more gentle action. Where rolls and foils are used, rolls are used near the headbox and foils further down machine. [ 7 ] [ 8 ] Ultrasonic foils can also be used, creating millions of pressure pulses from imploding cavitation bubbles which keep the fibres apart, giving them a more uniform distribution. Approaching the dry line on the table are low-vacuum boxes that are drained by a barometric leg under gravity pressure. After the dry line come the suction boxes with applied vacuum. Suction boxes extend up to the couch roll. At the couch the sheet consistency should be about 25%. [ 7 ] [ 8 ] The forming section type is usually based on the grade of paper or paperboard being produced; however, many older machines use a less-than-optimum design. Older machines can be upgraded to include more appropriate forming sections. A second headbox may be added to a conventional Fourdrinier to put a different fibre blend on top of a base layer. A secondary headbox is normally located at a point where the base sheet is completely drained. This is not considered a separate ply because the water action does a good job of intermixing the fibres of the top and bottom layers. Secondary headboxes are common on linerboard . A modification to the basic Fourdrinier table by adding a second wire on top of the drainage table is known as a top wire former . The bottom and top wires converge and some drainage is up through the top wire. A top wire improves formation and also gives more drainage, which is useful for machines that have been sped up. The twin wire machine or gap former uses two vertical wires in the forming section, thereby increasing the de-watering rate of the fibre slurry while also giving uniform two-sidedness. [ 9 ] There are also machines with entire Fourdrinier sections mounted above a traditional Fourdrinier. This allows making multi-layer paper with special characteristics. These are called top Fourdriniers and they make multi-ply paper or paperboard . Commonly this is used for making a top layer of bleached fibre to go over an unbleached layer. Another type of forming section is the cylinder mould machine , invented by John Dickinson in 1809, originally as a competitor to the Fourdrinier machine. [ 10 ] [ 11 ]
This machine uses a mesh-covered rotating cylinder partially immersed in a tank of fibre slurry in the wet end to form a paper web, giving a more random distribution of the cellulose fibres. Cylinder machines can form a sheet at higher consistency, which gives a more three-dimensional fibre orientation than lower consistencies, resulting in higher caliper (thickness) and more stiffness in the machine direction (MD). High MD stiffness is useful in food packaging like cereal boxes and other containers such as dry laundry detergent boxes. Tissue machines typically form the paper web between a wire and a special fabric (felt) as they wrap around a forming roll. The web is pressed from the felt directly onto a large-diameter dryer called a Yankee . The paper sticks to the Yankee dryer and is peeled off with a scraping blade called a doctor . Tissue machines operate at speeds of up to 2000 m/min. The second section of the paper machine is the press section, which removes much of the remaining water via a system of nips formed by rolls pressing against each other, aided by press felts that support the sheet and absorb the pressed water. The paper web consistency leaving the press section can be above 40%. [ 12 ] Pressing is the second most efficient method of de-watering the sheet (behind free drainage in the forming section), as only mechanical action is required. The number of press rolls, their arrangement and the arrangement and type of felts used are influenced by the grades of paper being produced and the desired operational characteristics of the machine. [ 13 ] Press felts historically were made from wool. However, today they are nearly 100% synthetic. They are made up of a polyamide woven fabric with thick batt applied in a specific design to maximise water absorption. Presses can be single or double felted. A single-felted press has a felt on one side and a smooth roll on the other. A double-felted press has both sides of the sheet in contact with a press felt. Single-felted nips are useful when mated against a smooth roll (usually in the top position), which adds two-sidedness, making the top side appear smoother than the bottom. Double-felted nips impart roughness on both sides of the sheet. Double-felted presses are desirable for the first press section of heavy paperboard. Simple press rolls can be rolls with a grooved or blind-drilled surface. More advanced press rolls are suction rolls. These are rolls with a perforated shell and cover. The shell, made of a metal such as bronze or stainless steel, is covered with rubber or a synthetic material. Both shell and cover are drilled throughout the surface. A stationary suction box is fitted in the core of the suction roll to support the shell being pressed. End face mechanical seals are used for the interface between the inside surface of the shell and the suction box. Smooth rolls are typically made of granite. [ 14 ] The granite rolls can be up to 30 feet (9.1 m) long and 6 feet (1.8 m) in diameter. [ 15 ] Conventional roll presses are configured with one press roll in a fixed position, with a mating roll loaded against this fixed roll. The felts run through the nips of the press rolls and continue around a felt run, normally consisting of several felt rolls. During the dwell time in the nip, the moisture from the sheet is transferred to the press felt.
When the press felt exits the nip and continues around, a vacuum box known as an Uhle box applies vacuum (normally −60 kPa) to the press felt to remove the moisture, so that when the felt returns to the nip on the next cycle it does not add moisture to the sheet. Some grades of paper use suction pick-up rolls that use vacuum to transfer the sheet from the couch to a lead-in felt on the first press or between press sections. Pickup roll presses normally have a vacuum box with two vacuum zones (low vacuum and high vacuum). These rolls have a large number of drilled holes in the cover to allow the vacuum to pass from the stationary vacuum box through the rotating roll covering. The low-vacuum zone picks up the sheet and transfers it, while the high-vacuum zone attempts to remove moisture. Unfortunately, at high enough speed centrifugal force flings out vacuumed water, making this less effective for dewatering. Pickup presses also have standard felt runs with Uhle boxes. However, pickup press design is quite different, as air movement is important for the pickup and dewatering facets of its role. Crown controlled rolls (also known as CC rolls) are usually the mating roll in a press arrangement. They have hydraulic cylinders in the press rolls that ensure that the roll does not bow. The cylinders connect to a shoe or multiple shoes to keep the crown on the roll flat, to counteract the natural "bend" in the roll shape due to applying load to the edges. Extended nip presses (or ENPs) are a relatively modern alternative to conventional roll presses. The top roll is usually a standard roll, while the bottom roll is actually a large CC roll with an extended shoe curved to the shape of the top roll, surrounded by a rotating rubber belt rather than a standard roll cover. The goal of the ENP is to extend the dwell time of the sheet between the two rolls, thereby maximising the de-watering. Compared to a standard roll press that achieves up to 35% solids after pressing, an ENP brings this up to 45% and higher, delivering significant steam savings or speed increases. ENPs densify the sheet, thus increasing tensile strength and some other physical properties. The dryer section of the paper machine, as its name suggests, dries the paper by way of a series of internally steam -heated cylinders that evaporate the moisture. Steam pressures may range up to 160 psig. Steam enters the end of the dryer head (cylinder cap) through a steam joint, and condensate exits through a siphon that goes from the internal shell to a centre pipe. From the centre pipe the condensate exits through a joint on the dryer head. Wide machines require multiple siphons. In faster machines, centrifugal force holds the condensate layer still against the shell, and turbulence-generating bars are typically used to agitate the condensate layer and improve heat transfer. [ 12 ] The sheet is usually held against the dryers by long felt loops on the top and bottom of each dryer section. The felts greatly improve heat transfer. Dryer felts are made of coarse thread and have a very open weave that is almost see-through. It is common to have the first bottom dryer section unfelted, to dump broke onto the basement floor during sheet breaks or when threading the sheet. Paper dryers are typically arranged in groups called sections so that they can be run at progressively slightly slower speeds to compensate for sheet shrinkage as the paper dries.
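The steam savings attributed above to extended nip presses can be estimated with a rough mass balance per kilogram of fibre. The sketch below assumes the roughly 6% final moisture quoted later for the calendered web and an approximate latent heat of vaporization for water; all figures are illustrative:

```python
LATENT_HEAT_KJ_PER_KG = 2260.0  # approximate heat of vaporization of water

def water_per_kg_fibre(solids_fraction: float) -> float:
    """kg of water accompanying 1 kg of dry fibre at a given solids fraction."""
    return 1.0 / solids_fraction - 1.0

def dryer_evaporation(press_solids: float, final_solids: float = 0.94) -> float:
    """kg of water the dryer section must evaporate per kg of fibre."""
    return water_per_kg_fibre(press_solids) - water_per_kg_fibre(final_solids)

for s in (0.35, 0.45):  # conventional press vs. extended nip press exit solids
    evap = dryer_evaporation(s)
    print(f"press exit {s:.0%}: evaporate {evap:.2f} kg water/kg fibre "
          f"(~{evap * LATENT_HEAT_KJ_PER_KG:.0f} kJ/kg fibre)")
```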
Some grades of paper may also stretch as they run through the machine, requiring increasing speed between sections. The gaps between sections are called draws . The drying sections are usually enclosed to conserve heat. Heated air is usually supplied to the pockets where the sheet breaks contact with the dryers. This increases the rate of drying. The pocket ventilating tubes have slots along their entire length that face into the pocket. The dryer hoods are usually exhausted by a series of roof-mounted exhaust fans along the dryer section. Additional sizing agents, including resins , glue , or starch , can be added to the web to alter its characteristics. Sizing improves the paper's water resistance, decreases its ability to fuzz, reduces abrasiveness, and improves its printing properties and surface bond strength. These may be applied at the wet end (internal sizing), at the dry end (surface sizing), or both. At the dry end, sizing is usually applied with a size press . The size press may be a roll applicator (flooded nip) or a nozzle applicator. It is usually placed before the last dryer section. Some paper machines also make use of a coater to apply a coating of fillers such as calcium carbonate or china clay, usually suspended in a binder of cooked starch and styrene-butadiene latex. Coating produces a very smooth, bright surface with the highest printing qualities. A calender consists of two or more rolls, where pressure is applied to the passing paper. Calenders are used to make the paper surface extra smooth and glossy. Calendering also gives the paper a more uniform thickness. The pressure applied to the web by the rollers determines the finish of the paper. After calendering, the web has a moisture content of about 6% (depending on the furnish). The paper is wound onto metal spools using a large cylinder called a reel drum . Constant nip pressure is maintained between the reel drum and the spool, allowing the resulting friction to spin the spool. Paper runs over the top of the reel drum and is wound onto the spool to create a master roll . To be able to keep the paper machine running continuously, the reel must be able to quickly switch from winding a finished roll to an empty spool without stopping the flow of paper. To accomplish this, each reel section will have two or more spools rotating through the process. Using an overhead crane, empty spools are loaded onto two primary arms above the reel drum. When the master roll reaches its maximum diameter, the arms lower the new spool into contact with the reel drum, and a machine behind the drum runs a tape along the moving sheet of paper, swiftly tearing it and attaching the incoming paper onto the new spool. The spool is then lowered onto the secondary arms , which steadily guide the spool away from the reel drum as the diameter of paper on the spool increases. The roll hardness should be checked and adjusted as needed to ensure that it is within the acceptable range for the product. Reels of paper wound up at the end of the drying process are the full trimmed width, minus shrinkage from drying, of the web leaving the wire. In the winder section, reels of paper are slit into smaller rolls of a width and roll diameter range specified by a customer order. To accomplish this, the reel is placed on an unwind stand and the distances between the slitters (sharp cutting wheels) are adjusted to the specified widths for the orders.
The winder is run until the desired roll diameter is reached, and the rolls are labeled according to size and order before being sent to shipping or the warehouse. A reel usually has sufficient diameter to make two or more sets of rolls.
broke : waste paper made during the papermaking process, either during a sheet break or as trimmings. It is gathered up and put in a repulper for recycling back into the process.
consistency : the percent dry fibre in a pulp slurry.
couch : from the French coucher , to lie down. Following the couch roll the sheet is lifted off the wire and transferred into the press section.
dandy roll : a mesh-covered hollow roll that rides on top of the Fourdrinier. It breaks up fibre clumps to improve the sheet formation and can also be used to make an imprint, as with laid paper . See also watermark .
fan pump : the large pump that circulates white water from the white water chest to the headbox. The pump is a special low-pulse design that minimizes vane pulses, which would cause uneven basis weight of paper in the machine direction, a defect known as barring . The flow from the fan pump may go through screens and cleaners, if used. On large paper machines fan pumps may be rated in tens of thousands of gallons per minute.
felt : a loop of fabric or synthetic material that goes between press rolls and serves as a place to receive the pressed-out water. Felts also support the wet paper web and guide it through the press section. Felts are also used in the dryer section to keep the sheet in close contact with the dryers and increase heat transfer.
filler : a finely divided substance added to paper in the forming process. Fillers improve print quality, brightness and opacity. The most common fillers are clay and calcium carbonate. Titanium dioxide is a filler but also improves brightness and opacity. Calcium carbonate filler is commonly used in alkaline papermaking, while kaolin clay is prevalent in acidic papermaking. Alkaline paper has superior ageing properties.
formation : the degree of uniformity of fibre distribution in finished paper, which is easily seen by holding paper up to the light.
headbox : the pressure chamber where turbulence is applied to break up fibre clumps in the slurry. The main job of the headbox is to distribute the fibre slurry uniformly across the wire.
nip : the contact area where two opposing rolls meet, such as in a press or calender.
pH : the degree of acidity or alkalinity of a solution. Alkaline paper has a very long life. Acid paper deteriorates over time, which has caused libraries to either take conservation measures or replace many older books.
size : a chemical or starch applied to paper to retard the rate of water penetration. Sizing prevents bleeding of ink during printing, improving the sharpness of printing.
slice : the adjustable rectangular orifice, usually at the bottom of the headbox, through which the whitewater jet discharges onto the wire. The slice opening and water pressure together determine the amount and velocity of whitewater flow through the slice. The slice usually has some form of adjustment mechanism to even out the paper weight profile across the machine (CD profile), although a newer method is to inject water into the whitewater across the headbox slice area, thereby using localized consistency to control the CD weight profile.
stock : a pulp slurry that has been processed in the stock preparation area with the necessary additives, refining and pH adjustment, ready for making paper.
web : the continuous flow of un-dried fibre from the couch roll down the paper machine.
white water : filtrate from the drainage table. The white water from the table is usually stored in a white water chest, from which it is pumped by the fan pump to the headbox.
wire : the woven mesh fabric loop that is used for draining the pulp slurry from the headbox. Until the 1970s bronze wires were used, but they are now woven from coarse monofilament synthetics similar to fishing line but very stiff.
Stainless steels are used extensively in the pulp and paper industry [ 16 ] for two primary reasons: to avoid iron contamination of the product, and for their corrosion resistance to the various chemicals used in the papermaking process. Type 316 stainless steel is a common material used in paper machines.
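As a worked example of the reel and winder geometry described above, the length of paper wound on a roll follows from the annular cross-section of the wound paper divided by the sheet caliper. The dimensions below are hypothetical:

```python
import math

def roll_length_m(outer_diam_m: float, core_diam_m: float, caliper_m: float) -> float:
    """Approximate sheet length on a roll, ignoring winding-tension effects."""
    annulus_area = math.pi * (outer_diam_m**2 - core_diam_m**2) / 4.0
    return annulus_area / caliper_m

# e.g. a 1.5 m diameter roll on a 0.10 m core, of 100-micrometre-caliper paper:
print(f"{roll_length_m(1.5, 0.10, 100e-6) / 1000:.1f} km of paper")  # ~17.6 km
```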
https://en.wikipedia.org/wiki/Paper_machine
Paper models , also called card models or papercraft , are models constructed mainly from sheets of heavy paper , paperboard , card stock , or foam. This may be considered a broad category that contains origami and card modeling. Origami is the process of making a paper model by folding a single piece of paper without using glue or cutting, while the variation kirigami does use cutting. Card modeling is making scale models from sheets of cardstock on which the parts are printed, usually in full color. These pieces are cut out, scored, folded, and glued together. Papercraft is the art of combining these model types to build complex creations such as wearable suits of armor, life-size characters, and accurate weapon models. Sometimes the model pieces can be punched out. More frequently the printed parts must be cut out. Edges may be scored to aid folding. The parts are usually glued together with polyvinyl acetate glue ("white glue", "PVA"). In this kind of modeling, the sections are usually pre-painted, so there is no need to paint the model after completion. Some enthusiasts may enhance the model by painting and detailing. Due to the nature of the paper medium, the model may be sealed with varnish or filled with spray foam to last longer. Some enthusiasts also make durable life-sized props by building the papercraft, covering it with resin, and painting it. Some also print on photo paper and laminate it with heat, preventing the colors of the printed side from wearing, beyond the improved realistic effect on certain kinds of models (ships, cars, buses, trains, etc.). Paper crafts can also be used as references for making props with other materials. The first paper models appeared in Europe in the 17th century, with the earliest commercial models appearing in French toy catalogs in 1800. [ 1 ] Printed card became common in magazines in the early part of the 20th century. The popularity of card modeling boomed during World War II, when paper was one of the few items whose use and production were not heavily regulated. [ 2 ] Micromodels , designed and published in England from 1941, were very popular, with 100 different models, including architecture , ships, and aircraft. [ 3 ] But as plastic model kits became more commonly available, interest in paper models decreased. The Robert Freidus Collection, held at the V&A Museum of Childhood, has over 14,000 card models exclusively in the category of architectural paper models. [ 4 ] Since paper model patterns can be easily printed and assembled, the Internet has become a popular means of exchanging them. Commercial corporations have recently begun using downloadable paper models for their marketing (examples are Yamaha and Canon ). The availability of numerous models on the Internet at little or no cost, which can then be downloaded and printed on inexpensive inkjet printers, has caused its popularity to increase again worldwide. Home printing also allows models to be scaled up or down easily (for example, in order to make two models from different authors, in different scales, match each other in size), although the paper weight might need to be adjusted in the same ratio. Inexpensive kits are available from dedicated publishers (mostly based in Eastern Europe ; examples include Halinski, JSC Models, and Maly Modelarz), portions of whose catalogs date back to 1950. Experienced hobbyists often scratchbuild models, either by drawing parts by hand or by using software such as Adobe Illustrator and Inkscape .
A historical example of highly specialized software is Designer Castles for the BBC Micro and Acorn Archimedes platforms, which was developed as a tool for the creation of card model castles. [ 5 ] CAD and CG software, such as Rhino 3D , 3DS Max , Blender , and specialist software, like Pepakura Designer from Tama Software , Dunreeb Cutout or Ultimate Papercraft 3D, may be employed to convert 3D computer models into two-dimensional printable templates for assembly. The use of 3D models greatly assists in the construction of paper models, with video game models being the most prevalent source. The video game or other source in question must first be loaded onto a computer. Various methods of extracting the model exist, including using a model viewer and exporting the model into a workable file type, or capturing the model from the emulation directly. The methods of capturing the model are often unique to the subject and the tools available. The readability of file formats, including proprietary ones, can determine whether a model viewer and exporter is available outside of the developer. Using other tools that capture rendered 3D models and textures is often the only way to obtain them. In this case, the designer may have to arrange the textures and the wireframe model in a 3D program, such as SketchUp , 3DS MAX , Metasequoia , or Blender , before exporting it to a papercraft-creating program, such as Dunreeb Cutout or Pepakura Designer by Tama Software . From there the model is typically refined to give a proper layout and construction tabs, which affect the overall appearance of, and the difficulty in constructing, the model. Because people can create their own patterns, paper models are limited only by their designers' imaginations and ability to manipulate paper into forms. Vehicles of all forms, from cars and cargo trucks to space shuttles, are a frequent subject of paper models, some using photo-realistic textures from their real-life counterparts for extremely fine details. Architecture models range from very simple, crude forms to very detailed models with thousands of pieces to assemble. The most prevalent designs are from video games, due to their popularity and the ease of producing paper models from them. On the Web, enthusiasts can find hundreds of models from different designers across a wide range of subjects. The models include very difficult and ambitious paper projects, such as life-sized and complex creations. Architectural paper models are popular with model railway enthusiasts. Various models are used in tabletop gaming, primarily wargaming . Scale paper models allow for easy production of armies and buildings for use in gaming, and they can be scaled up or down readily or produced as desired. Whether they be three-dimensional models or two-dimensional icons, players are able to personalize and modify the models to bear unique unit designations and insignias for gaming.
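The core geometric operation behind unfolding software of the kind mentioned above is laying each face of a 3D mesh flat while preserving its edge lengths. The sketch below shows that step for a single triangle; real unfolders must additionally choose cut edges, pack parts onto pages, and add glue tabs, all of which is omitted here:

```python
import math

def unfold_triangle(p0, p1, p2):
    """Return 2D coordinates of a 3D triangle with all edge lengths preserved."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def norm(v): return math.sqrt(sum(x * x for x in v))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    e01, e02 = sub(p1, p0), sub(p2, p0)
    a = norm(e01)                  # place p0 at the origin, p1 on the x-axis
    x = dot(e01, e02) / a          # projection of p2 onto the p0->p1 edge
    y = math.sqrt(max(norm(e02)**2 - x * x, 0.0))
    return (0.0, 0.0), (a, 0.0), (x, y)

# Example: flatten one face of a 3D mesh into printable 2D coordinates.
flat = unfold_triangle((0, 0, 0), (1, 0, 1), (0, 1, 1))
print(flat)  # edge lengths of the 2D triangle match the 3D original
```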
https://en.wikipedia.org/wiki/Paper_model
The Papkovich–Neuber solution is a technique for generating analytic solutions to the Newtonian incompressible Stokes equations , though it was originally developed to solve the equations of linear elasticity . It can be shown that any Stokes flow with body force $\mathbf{f} = 0$ can be written in the form: $\mathbf{u} = \nabla (\mathbf{x} \cdot \boldsymbol{\Phi} + \chi) - 2\boldsymbol{\Phi}$, $p = 2\mu \nabla \cdot \boldsymbol{\Phi}$, where $\mathbf{u}$ is the velocity, $p$ the pressure, $\mu$ the dynamic viscosity, $\boldsymbol{\Phi}$ is a harmonic vector potential and $\chi$ is a harmonic scalar potential. The properties and ease of construction of harmonic functions make the Papkovich–Neuber solution a powerful technique for solving the Stokes equations in a variety of domains. This fluid dynamics –related article is a stub . You can help Wikipedia by expanding it .
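The representation can be checked symbolically. The sketch below, assuming the form $\mathbf{u} = \nabla(\mathbf{x}\cdot\boldsymbol{\Phi} + \chi) - 2\boldsymbol{\Phi}$, $p = 2\mu\nabla\cdot\boldsymbol{\Phi}$ given above, verifies for one particular pair of harmonic potentials that the velocity field is divergence-free and satisfies the Stokes momentum balance $\mu\nabla^2\mathbf{u} = \nabla p$:

```python
import sympy as sp

x, y, z, mu = sp.symbols('x y z mu')
X = sp.Matrix([x, y, z])

Phi = sp.Matrix([x*y, y*z, x**2 - z**2])   # each component is harmonic
chi = x**2 - y**2                          # harmonic scalar potential

def grad(f): return sp.Matrix([sp.diff(f, s) for s in (x, y, z)])
def div(F):  return sum(sp.diff(F[i], s) for i, s in enumerate((x, y, z)))
def lap(f):  return sum(sp.diff(f, s, 2) for s in (x, y, z))

u = grad(X.dot(Phi) + chi) - 2*Phi         # Papkovich-Neuber velocity
p = 2*mu*div(Phi)                          # associated pressure field

print(sp.simplify(div(u)))                         # 0: incompressibility
print(sp.simplify(mu*u.applyfunc(lap) - grad(p)))  # zero vector: momentum balance
```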
https://en.wikipedia.org/wiki/Papkovich–Neuber_solution
See text Papovaviricetes is a class of viruses . [ 1 ] The class shares the name of an abolished family, Papovaviridae , which was split in 1999 into the two families Papillomaviridae and Polyomaviridae . [ 2 ] The class was established in 2019 and takes its name from the former family. [ 3 ] The following orders are recognized: This virus -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Papovaviricetes
Pappus's area theorem describes the relationship between the areas of three parallelograms attached to three sides of an arbitrary triangle . The theorem, which can also be thought of as a generalization of the Pythagorean theorem , is named after the Greek mathematician Pappus of Alexandria (4th century AD), who discovered it. Given an arbitrary triangle with two arbitrary parallelograms attached to two of its sides, the theorem tells how to construct a parallelogram over the third side, such that the area of the third parallelogram equals the sum of the areas of the other two parallelograms. Let ABC be the arbitrary triangle and ABDE and ACFG the two arbitrary parallelograms attached to the triangle sides AB and AC. The extended parallelogram sides DE and FG intersect at H. The line segment AH now "becomes" the side of the third parallelogram BCML attached to the triangle side BC, i.e., one constructs line segments BL and CM over BC, such that BL and CM are parallel and equal in length to AH. The following identity then holds for the areas (denoted by A) of the parallelograms: A(BCML) = A(ABDE) + A(ACFG). The theorem generalizes the Pythagorean theorem twofold. Firstly, it works for arbitrary triangles rather than only for right-angled ones, and secondly, it uses parallelograms rather than squares. For squares on two sides of an arbitrary triangle it yields a parallelogram of equal area over the third side, and if the two sides are the legs of a right angle the parallelogram over the third side will be a square as well. For a right-angled triangle, two parallelograms attached to the legs of the right angle yield a rectangle of equal area on the third side, and again, if the two parallelograms are squares then the rectangle on the third side will be a square as well. Due to having the same base length and height, the parallelograms ABDE and ABUH have the same area; the same argument applies to the parallelograms ACFG and ACVH, and to ABUH and BLQR, ACVH and RCMQ. This already yields the desired result, as we have: A(BCML) = A(BLQR) + A(RCMQ) = A(ABUH) + A(ACVH) = A(ABDE) + A(ACFG).
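The construction lends itself to a direct numerical check. The sketch below picks an arbitrary triangle and arbitrary parallelogram vectors (w1 and w2 are the free sides of the parallelograms attached to AB and AC), finds H as the intersection of the extended sides, and compares areas via 2D cross products; all coordinates are arbitrary choices:

```python
import numpy as np

def cross2(u, v):
    """Signed area of the parallelogram spanned by 2D vectors u and v."""
    return u[0]*v[1] - u[1]*v[0]

A, B, C = np.array([0., 0.]), np.array([4., 0.]), np.array([1., 3.])
w1 = np.array([0.5, -2.0])   # ABDE: D = B + w1, E = A + w1
w2 = np.array([-2.0, 1.0])   # ACFG: F = C + w2, G = A + w2

# H = intersection of line DE (through A + w1, direction AB) with
#     line FG (through A + w2, direction AC): solve A+w1+s(B-A) = A+w2+t(C-A)
s, t = np.linalg.solve(np.column_stack([B - A, -(C - A)]), w2 - w1)
H = A + w1 + s * (B - A)

area_ABDE = abs(cross2(B - A, w1))
area_ACFG = abs(cross2(C - A, w2))
area_BCML = abs(cross2(C - B, H - A))   # third parallelogram, sides BC and AH

print(area_ABDE, area_ACFG, area_BCML)  # 8.0 7.0 15.0, and 8 + 7 = 15
```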
https://en.wikipedia.org/wiki/Pappus's_area_theorem
In mathematics, Pappus's centroid theorem (also known as the Guldinus theorem , Pappus–Guldinus theorem or Pappus's theorem ) is either of two related theorems dealing with the surface areas and volumes of surfaces and solids of revolution. The theorems are attributed to Pappus of Alexandria [ a ] and Paul Guldin . [ b ] Pappus's statement of this theorem appears in print for the first time in 1659, but it was known before, by Kepler in 1615 and by Guldin in 1640. [ 4 ] The first theorem states that the surface area $A$ of a surface of revolution generated by rotating a plane curve $C$ about an axis external to $C$ and on the same plane is equal to the product of the arc length $s$ of $C$ and the distance $d$ traveled by the geometric centroid of $C$: $A = sd$. For example, the surface area of the torus with minor radius $r$ and major radius $R$ is $A = (2\pi r)(2\pi R) = 4\pi^2 Rr$. A curve given by the positive function $f(x)$ is bounded by two points given by $a \ge 0$ and $b \ge a$. If $dL$ is an infinitesimal line element tangent to the curve, the length of the curve is given by: $L = \int_a^b dL = \int_a^b \sqrt{dx^2 + dy^2} = \int_a^b \sqrt{1 + \left(\tfrac{dy}{dx}\right)^2}\, dx$. The $y$ component of the centroid of this curve is: $\bar{y} = \frac{1}{L} \int_a^b y\, dL = \frac{1}{L} \int_a^b y \sqrt{1 + \left(\tfrac{dy}{dx}\right)^2}\, dx$. The area of the surface generated by rotating the curve around the x-axis is given by: $A = 2\pi \int_a^b y\, dL = 2\pi \int_a^b y \sqrt{1 + \left(\tfrac{dy}{dx}\right)^2}\, dx$. Using the last two equations to eliminate the integral we have: $A = 2\pi \bar{y} L$. The second theorem states that the volume $V$ of a solid of revolution generated by rotating a plane figure $F$ about an external axis is equal to the product of the area $A$ of $F$ and the distance $d$ traveled by the geometric centroid of $F$. (The centroid of $F$ is usually different from the centroid of its boundary curve $C$.) That is: $V = Ad$. For example, the volume of the torus with minor radius $r$ and major radius $R$ is $V = (\pi r^2)(2\pi R) = 2\pi^2 R r^2$. This special case was derived by Johannes Kepler using infinitesimals. [ c ] The area bounded by the two functions $y = f(x)$, $y \ge 0$, and $y = g(x)$, $f(x) \ge g(x)$, and bounded by the two lines $x = a \ge 0$ and $x = b \ge a$, is given by: $A = \int_a^b dA = \int_a^b [f(x) - g(x)]\, dx$. The $x$ component of the centroid of this area is given by: $\bar{x} = \frac{1}{A} \int_a^b x\, [f(x) - g(x)]\, dx$. If this area is rotated about the y-axis, the volume generated can be calculated using the shell method.
It is given by: $V = 2\pi \int_a^b x\, [f(x) - g(x)]\, dx$. Using the last two equations to eliminate the integral we have: $V = 2\pi \bar{x} A$. Let $A$ be the area of $F$, $W$ the solid of revolution of $F$, and $V$ the volume of $W$. Suppose $F$ starts in the $xz$-plane and rotates around the $z$-axis. The distance of the centroid of $F$ from the $z$-axis is its $x$-coordinate $R = \frac{\int_F x\, dA}{A}$, and the theorem states that $V = Ad = A \cdot 2\pi R = 2\pi \int_F x\, dA$. To show this, let $F$ be in the $xz$-plane, parametrized by $\mathbf{\Phi}(u,v) = (x(u,v), 0, z(u,v))$ for $(u,v) \in F^*$, a parameter region. Since $\mathbf{\Phi}$ is essentially a mapping from $\mathbb{R}^2$ to $\mathbb{R}^2$, the area of $F$ is given by the change of variables formula: $A = \int_F dA = \iint_{F^*} \left| \frac{\partial(x,z)}{\partial(u,v)} \right| du\, dv = \iint_{F^*} \left| \frac{\partial x}{\partial u} \frac{\partial z}{\partial v} - \frac{\partial x}{\partial v} \frac{\partial z}{\partial u} \right| du\, dv$, where $\left| \frac{\partial(x,z)}{\partial(u,v)} \right|$ is the determinant of the Jacobian matrix of the change of variables. The solid $W$ has the toroidal parametrization $\mathbf{\Phi}(u,v,\theta) = (x(u,v)\cos\theta,\, x(u,v)\sin\theta,\, z(u,v))$ for $(u,v,\theta)$ in the parameter region $W^* = F^* \times [0, 2\pi]$; and its volume is $V = \int_W dV = \iiint_{W^*} \left| \frac{\partial(x,y,z)}{\partial(u,v,\theta)} \right| du\, dv\, d\theta$. Expanding, $$\left| \frac{\partial(x,y,z)}{\partial(u,v,\theta)} \right| = \left| \det \begin{bmatrix} \frac{\partial x}{\partial u} \cos\theta & \frac{\partial x}{\partial v} \cos\theta & -x \sin\theta \\ \frac{\partial x}{\partial u} \sin\theta & \frac{\partial x}{\partial v} \sin\theta & x \cos\theta \\ \frac{\partial z}{\partial u} & \frac{\partial z}{\partial v} & 0 \end{bmatrix} \right| = \left| -\frac{\partial z}{\partial v} \frac{\partial x}{\partial u}\, x + \frac{\partial z}{\partial u} \frac{\partial x}{\partial v}\, x \right| = \left| -x\, \frac{\partial(x,z)}{\partial(u,v)} \right| = x \left| \frac{\partial(x,z)}{\partial(u,v)} \right|.$$ The last equality holds because the axis of rotation must be external to $F$, meaning $x \ge 0$. Now, $$V = \iiint_{W^*} \left| \frac{\partial(x,y,z)}{\partial(u,v,\theta)} \right| du\, dv\, d\theta = \int_0^{2\pi} \iint_{F^*} x(u,v) \left| \frac{\partial(x,z)}{\partial(u,v)} \right| du\, dv\, d\theta = 2\pi \iint_{F^*} x(u,v) \left| \frac{\partial(x,z)}{\partial(u,v)} \right| du\, dv = 2\pi \int_F x\, dA$$ by change of variables. The theorems can be generalized for arbitrary curves and shapes, under appropriate conditions. Goodman & Goodman [ 6 ] generalize the second theorem as follows. If the figure $F$ moves through space so that it remains perpendicular to the curve $L$ traced by the centroid of $F$, then it sweeps out a solid of volume $V = Ad$, where $A$ is the area of $F$ and $d$ is the length of $L$. (This assumes the solid does not intersect itself.) In particular, $F$ may rotate about its centroid during the motion. However, the corresponding generalization of the first theorem is only true if the curve $L$ traced by the centroid lies in a plane perpendicular to the plane of $C$. In general, one can generate an $n$-dimensional solid by rotating an $(n-p)$-dimensional solid $F$ around a $p$-dimensional sphere. This is called an $n$-solid of revolution of species $p$. Let the $p$-th centroid of $F$ be defined by $R = \frac{\iint_F x^p\, dA}{A}$. Then Pappus' theorems generalize to: [ 7 ] Volume of $n$-solid of revolution of species $p$ = (Volume of generating $(n-p)$-solid) $\times$ (Surface area of $p$-sphere traced by the $p$-th centroid of the generating solid), and Surface area of $n$-solid of revolution of species $p$ = (Surface area of generating $(n-p)$-solid) $\times$ (Surface area of $p$-sphere traced by the $p$-th centroid of the generating solid). The original theorems are the case with $n = 3$, $p = 1$. They who look at these things are hardly exalted, as were the ancients and all who wrote the finer things.
When I see everyone occupied with the rudiments of mathematics and of the material for inquiries that nature sets before us, I am ashamed; I for one have proved things that are much more valuable and offer much application. In order not to end my discourse declaiming this with empty hands, I will give this for the benefit of the readers: The ratio of solids of complete revolution is compounded of (that) of the revolved figures and (that) of the straight lines similarly drawn to the axes from the centers of gravity in them; that of (solids of) incomplete (revolution) from (that) of the revolved figures and (that) of the arcs that the centers of gravity in them describe, where the (ratio) of these arcs is, of course, (compounded) of (that) of the (lines) drawn and (that) of the angles of revolution that their extremities contain, if these (lines) are also at (right angles) to the axes. These propositions, which are practically a single one, contain many theorems of all kinds, for curves and surfaces and solids, all at once and by one proof, things not yet and things already demonstrated, such as those in the twelfth book of the First Elements .
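Returning to the modern statement, the torus example given for the second theorem can be verified symbolically by computing the shell-method integral $V = 2\pi\int_F x\, dA$ directly and comparing it with the Pappus product $A \cdot 2\pi R$:

```python
import sympy as sp

# Rotate a disk of radius r centred a distance R from the axis (R > r)
# and compare the direct integral with the Pappus product.

r, R = sp.symbols('r R', positive=True)
u, v = sp.symbols('u v', nonnegative=True)   # polar chart on the disk

x = R + u*sp.cos(v)                          # distance from the rotation axis
jac = u                                      # Jacobian of the polar chart

V_direct = 2*sp.pi*sp.integrate(sp.integrate(x*jac, (v, 0, 2*sp.pi)), (u, 0, r))
V_pappus = (sp.pi*r**2) * (2*sp.pi*R)        # A * d, with d = 2*pi*R

print(sp.simplify(V_direct))                 # 2*pi**2*R*r**2
print(sp.simplify(V_direct - V_pappus))      # 0: the two agree
```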
https://en.wikipedia.org/wiki/Pappus's_centroid_theorem
In mathematics, Pappus's hexagon theorem (attributed to Pappus of Alexandria ) states that, given one set of collinear points $A, B, C$ and another set of collinear points $a, b, c$, the intersection points $X, Y, Z$ of the line pairs $Ab$ and $aB$, $Ac$ and $aC$, $Bc$ and $bC$ are collinear, lying on the Pappus line. It holds in a projective plane over any field, but fails for projective planes over any noncommutative division ring . [ 1 ] Projective planes in which the "theorem" is valid are called pappian planes . If one considers a pappian plane containing a hexagon as just described but with sides $Ab$ and $aB$ parallel and also sides $Bc$ and $bC$ parallel (so that the Pappus line $u$ is the line at infinity ), one gets the affine version of Pappus's theorem shown in the second diagram. If the Pappus line $u$ and the lines $g, h$ have a point in common, one gets the so-called little version of Pappus's theorem. [ 2 ] The dual of this incidence theorem states that given one set of concurrent lines $A, B, C$, and another set of concurrent lines $a, b, c$, then the lines $x, y, z$ defined by pairs of points resulting from pairs of intersections $A \cap b$ and $a \cap B$, $A \cap c$ and $a \cap C$, $B \cap c$ and $b \cap C$ are concurrent. ( Concurrent means that the lines pass through one point.) Pappus's theorem is a special case of Pascal's theorem for a conic, the limiting case when the conic degenerates into 2 straight lines. Pascal's theorem is in turn a special case of the Cayley–Bacharach theorem . The Pappus configuration is the configuration of 9 lines and 9 points that occurs in Pappus's theorem, with each line meeting 3 of the points and each point meeting 3 lines. In general, the Pappus line does not pass through the point of intersection of $ABC$ and $abc$. [ 3 ] This configuration is self-dual . Since, in particular, the lines $Bc, bC, XY$ have the properties of the lines $x, y, z$ of the dual theorem, and collinearity of $X, Y, Z$ is equivalent to concurrence of $Bc, bC, XY$, the dual theorem is therefore just the same as the theorem itself. The Levi graph of the Pappus configuration is the Pappus graph , a bipartite distance-regular graph with 18 vertices and 27 edges. If the affine form of the statement can be proven, then the projective form of Pappus's theorem is proven, as the extension of a pappian plane to a projective plane is unique. Because of the parallelity in an affine plane one has to distinguish two cases: $g \not\parallel h$ and $g \parallel h$. The key to a simple proof is the possibility of introducing a "suitable" coordinate system: Case 1: The lines $g, h$ intersect at point $S = g \cap h$. In this case coordinates are introduced such that $S = (0,0)$, $A = (0,1)$, $c = (1,0)$ (see diagram). $B, C$ have the coordinates $B = (0, \gamma)$, $C = (0, \delta)$, $\gamma, \delta \notin \{0, 1\}$. From the parallelity of the lines $Bc, Cb$ one gets $b = (\tfrac{\delta}{\gamma}, 0)$, and the parallelity of the lines $Ab, Ba$ yields $a = (\delta, 0)$.
Hence line $Ca$ has slope $-1$ and is parallel to line $Ac$. Case 2: $g \parallel h$ (little theorem). In this case the coordinates are chosen such that $c = (0,0)$, $b = (1,0)$, $A = (0,1)$, $B = (\gamma, 1)$, $\gamma \ne 0$. From the parallelity of $Ab \parallel Ba$ and $cB \parallel bC$ one gets $C = (\gamma + 1, 1)$ and $a = (\gamma + 1, 0)$, respectively, and lastly the parallelity $Ac \parallel Ca$. Choose homogeneous coordinates with $C = (1,0,0)$, $c = (0,1,0)$, $X = (0,0,1)$, $A = (1,1,1)$. On the lines $AC, Ac, AX$, given by $x_2 = x_3$, $x_1 = x_3$, $x_2 = x_1$, take the points $B, Y, b$ to be $B = (p, 1, 1)$, $Y = (1, q, 1)$, $b = (1, 1, r)$ for some $p, q, r$. The three lines $XB, CY, cb$ are $x_1 = x_2 p$, $x_2 = x_3 q$, $x_3 = x_1 r$, so they pass through the same point $a$ if and only if $rqp = 1$. The condition for the three lines $Cb, cB$ and $XY$ with equations $x_2 = x_1 q$, $x_1 = x_3 p$, $x_3 = x_2 r$ to pass through the same point $Z$ is $rpq = 1$. So this last set of three lines is concurrent if all the other eight sets are, because multiplication is commutative, so $pq = qp$. Equivalently, $X, Y, Z$ are collinear. The proof above also shows that for Pappus's theorem to hold for a projective space over a division ring, it is both sufficient and necessary that the division ring is a (commutative) field. German mathematician Gerhard Hessenberg proved that Pappus's theorem implies Desargues's theorem . [ 4 ] [ 5 ] In general, Pappus's theorem holds for some projective plane if and only if it is a projective plane over a commutative field. The projective planes in which Pappus's theorem does not hold are Desarguesian projective planes over noncommutative division rings, and non-Desarguesian planes . The proof is invalid if $C, c, X$ happen to be collinear. In that case an alternative proof can be provided, for example, using a different projective reference. Because of the principle of duality for projective planes, the dual theorem of Pappus is true: If 6 lines $A, b, C, a, B, c$ are chosen alternately from two pencils with centers $G, H$, the lines are concurrent; that means they have a point $U$ in common. The left diagram shows the projective version, the right one an affine version, where the points $G, H$ are points at infinity. If point $U$ is on the line $GH$ then one gets the "dual little theorem" of Pappus's theorem. If in the affine version of the dual "little theorem" point $U$ is a point at infinity too, one gets Thomsen's theorem , a statement on 6 points on the sides of a triangle (see diagram). The Thomsen figure plays an essential role in coordinatising an axiomatically defined projective plane. [ 6 ] The proof of the closure of Thomsen's figure is covered by the proof for the "little theorem", given above.
But there exists a simple direct proof, too: Because the statement of Thomsen's theorem (the closure of the figure) uses only the terms connect, intersect and parallel , the statement is affinely invariant, and one can introduce coordinates such that $P = (0,0)$, $Q = (1,0)$, $R = (0,1)$ (see right diagram). The starting point of the sequence of chords is $(0, \lambda)$. One easily verifies the coordinates of the points given in the diagram, which shows: the last point coincides with the first point. In addition to the above characterizations of Pappus's theorem and its dual, the following are equivalent statements: In its earliest known form, Pappus's theorem is Propositions 138, 139, 141, and 143 of Book VII of Pappus's Collection . [ 10 ] These are Lemmas XII, XIII, XV, and XVII in the part of Book VII consisting of lemmas to the first of the three books of Euclid 's Porisms. The lemmas are proved in terms of what today is known as the cross ratio of four collinear points. Three earlier lemmas are used. The first of these, Lemma III, has the diagram below (which uses Pappus's lettering, with G for Γ, D for Δ, J for Θ, and L for Λ). Here three concurrent straight lines, AB, AG, and AD, are crossed by two lines, JB and JE, which concur at J. Also KL is drawn parallel to AZ. Then These proportions might be written today as equations: [ 11 ] The last compound ratio (namely JD : GD & BG : JB) is what is known today as the cross ratio of the collinear points J, G, D, and B in that order; it is denoted today by (J, G; D, B). So we have shown that this is independent of the choice of the particular straight line JD that crosses the three straight lines that concur at A. In particular, it does not matter on which side of A the straight line JE falls. In particular, the situation may be as in the next diagram, which is the diagram for Lemma X. Just as before, we have (J, G; D, B) = (J, Z; H, E). Pappus does not explicitly prove this; but Lemma X is a converse, namely that if these two cross ratios are the same, and the straight lines BE and DH cross at A, then the points G, A, and Z must be collinear. What we showed originally can be written as (J, ∞; K, L) = (J, G; D, B), with ∞ taking the place of the (nonexistent) intersection of JK and AG. Pappus shows this, in effect, in Lemma XI, whose diagram, however, has different lettering: What Pappus shows is DE.ZH : EZ.HD :: GB : BE, which we may write as The diagram for Lemma XII is: The diagram for Lemma XIII is the same, but BA and DG, extended, meet at N. In any case, considering straight lines through G as cut by the three straight lines through A (and accepting that equations of cross ratios remain valid after permutation of the entries), we have by Lemma III or XI Considering straight lines through D as cut by the three straight lines through B, we have Thus (E, H; J, G) = (E, K; D, L), so by Lemma X, the points H, M, and K are collinear. That is, the points of intersection of the pairs of opposite sides of the hexagon ADEGBZ are collinear. Lemmas XV and XVII are that, if the point M is determined as the intersection of HK and BG, then the points A, M, and D are collinear. That is, the points of intersection of the pairs of opposite sides of the hexagon BEKHZG are collinear.
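The projective statement is easy to illustrate numerically using homogeneous coordinates, in which both the line through two points and the intersection of two lines are cross products; the point coordinates below are arbitrary choices:

```python
import numpy as np

def h(p):                      # affine point -> homogeneous coordinates
    return np.array([p[0], p[1], 1.0])

def line(P, Q):                # line through two points (cross product)
    return np.cross(P, Q)

def meet(l, m):                # intersection point of two lines
    return np.cross(l, m)

A, B, C = h((0, 0)), h((2, 0)), h((5, 0))   # collinear, on the line y = 0
a, b, c = h((0, 1)), h((3, 1)), h((4, 1))   # collinear, on the line y = 1

X = meet(line(A, b), line(a, B))
Y = meet(line(A, c), line(a, C))
Z = meet(line(B, c), line(b, C))

# X, Y, Z are collinear exactly when this 3x3 determinant vanishes.
print(np.linalg.det(np.vstack([X, Y, Z])))  # ~0 up to floating-point error
```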
https://en.wikipedia.org/wiki/Pappus's_hexagon_theorem
Papuasia is a Level 2 botanical region defined in the World Geographical Scheme for Recording Plant Distributions (WGSRPD). [ 1 ] It lies in the southwestern Pacific Ocean , in the Melanesia ecoregion of Oceania and Tropical Asia . It comprises the following geographic and political entities:
https://en.wikipedia.org/wiki/Papuasia
The ParMRC system is a mechanism for sorting DNA plasmids to opposite ends of a bacterial cell during cell division. It has three components: ParM, an actin-like protein that forms a long filament to push two plasmids apart; ParR, which binds the plasmid to ParM and nucleates the ParM filament; and parC, a DNA sequence on the plasmid that anchors ParR to it. At the poles the plasmids are segregated and can replicate without interference from the chromosomal DNA. [ 1 ] Many plasmids are present at only low copy numbers and have therefore evolved active segregation to avoid plasmid loss during cell division. [ 2 ] This segregation is carried out with remarkable efficiency by a small number of components, three to be exact. [ 3 ] The three components, the parC DNA site and the two proteins ParR and ParM, together constitute the ParMRC system, a type II plasmid partitioning system. [ 3 ] The mechanism by which the plasmids are segregated from the chromosomal DNA is correspondingly simple. The first component, ParM, is an actin-like protein. The second, ParR, is a DNA-binding adaptor protein. The last component, parC, is a centromere-like region on the plasmid. [ 4 ] The process uses all three of these components and has evolved to work extremely efficiently. In the cell, growing ParM filaments search for plasmids; on finding ParR bound to parC on a pair of plasmids, the filament attaches and pushes the plasmids to opposite poles of the cell in order to segregate them. [ 5 ] Because of its efficiency, this strategy of using a filament-forming actin-like protein (ParM) to move DNA to opposite sides of the cell has been adopted by several bacteria as their main plasmid segregation system. These findings, together with improvements in technology such as higher-resolution light microscopy, should soon allow scientists to track individual molecules in cells and reveal even more about the ParMRC system. [ 4 ]
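As a rough illustration of the pushing mechanism just described, the toy sketch below treats the cell as a one-dimensional interval and grows a filament symmetrically from mid-cell, carrying one plasmid at each tip until the poles are reached. It is a deliberately simplified cartoon: the cell length, the elongation step, and the perfectly symmetric insertional growth are invented illustrative assumptions, not measured ParM kinetics.

```python
# Toy 1-D cartoon of ParMRC-style segregation: a filament nucleated at
# mid-cell elongates at both ParR/parC-capped tips, pushing the two
# attached plasmids toward opposite poles. All numbers are invented.
CELL_HALF_LENGTH = 1.0   # cell spans [-1.0, +1.0] in arbitrary units
STEP = 0.05              # filament elongation per tip per time step

def segregate(cell_half=CELL_HALF_LENGTH, step=STEP):
    left = right = 0.0          # both plasmids start at mid-cell
    steps = 0
    while right < cell_half:    # grow until the tips reach the poles
        left -= step            # insertion at each tip pushes the
        right += step           # plasmid pair apart symmetrically
        steps += 1
    return steps, left, right

steps, left, right = segregate()
print(f"poles reached after {steps} steps: plasmids at {left:.2f} and {right:.2f}")
```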
https://en.wikipedia.org/wiki/ParMRC_system
para -Dimethylaminobenzaldehyde is an organic compound containing amine and aldehyde moieties which is used in Ehrlich's reagent and Kovac's reagent to test for indoles. The carbonyl group typically reacts with the electron-rich 2-position of the indole but may also react at the C-3 or N-1 positions. [ 2 ] It may also be used for the determination of hydrazine. para -Dimethylaminobenzaldehyde is the main ingredient in Ehrlich's reagent. It acts as a strong electrophile which reacts with the electron-rich α-carbon (2-position) of indole rings to form a blue-colored adduct. [ 3 ] It can be used to detect the presence of indole alkaloids. Not all indole alkaloids give a colored adduct, as steric hindrance may prevent the reaction from proceeding. [ citation needed ] Ehrlich's reagent is also used as a stain in thin layer chromatography and as a reagent to detect urobilinogen in fresh, cool urine. If a urine sample is left to oxidize in air so that the urobilinogen forms urobilin, the reagent will not detect it. By adding a few drops of reagent to 3 mL of urine in a test tube, one can see a change of color to dark pink or red. The degree of color change is proportional to the amount of urobilinogen in the urine sample. p -Dimethylaminobenzaldehyde reacts with hydrazine to form the azine p -dimethylaminobenzalazine, which has a distinct yellow color. It is therefore used for the spectrophotometric determination of hydrazine in aqueous solutions at 457 nm. [ 4 ] Isaac Asimov, in a 1963 humorous essay entitled "You, too, can speak Gaelic", [ 5 ] reprinted in the anthology Adding a Dimension among others, traces the etymology of each component of the chemical name "para-di-methyl-amino-benz-alde-hyde" (e.g. the syllable "-benz-" ultimately derives from the Arabic lubān jāwī (لبان جاوي, "frankincense from Java")). Asimov points out that the name can be pronounced to the tune of the familiar jig "The Irish Washerwoman", and relates an anecdote in which a receptionist of Irish descent, hearing him singing the syllables thus, mistook them for the original Gaelic words to the jig. This essay inspired Jack Carroll's 1963 filk song "The Chemist's Drinking Song" (NESFA Hymnal Vol. 2, 2nd ed., p. 65), set to the tune of that jig, which begins "Paradimethylaminobenzaldehyde, / Sodium citrate, ammonium cyanide, / ..." [ 6 ]
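The spectrophotometric determination mentioned above amounts to reading a concentration off a linear (Beer–Lambert) calibration at 457 nm. The snippet below sketches that arithmetic; the calibration concentrations and absorbances are invented illustrative numbers, not published values for the hydrazine method.

```python
# Fit a straight calibration line A = slope*c + intercept through
# standards, then invert a sample absorbance at 457 nm to a
# concentration. All numeric values are invented for illustration.
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])             # standards (made-up units)
absorbance = np.array([0.01, 0.11, 0.21, 0.40, 0.82])  # A at 457 nm (made up)

slope, intercept = np.polyfit(conc, absorbance, 1)

def to_concentration(a_sample):
    """Invert the calibration line for a measured absorbance."""
    return (a_sample - intercept) / slope

print(f"calibration: A = {slope:.3f}*c + {intercept:.3f}")
print(f"sample with A = 0.30 -> c = {to_concentration(0.30):.2f}")
```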
https://en.wikipedia.org/wiki/Para-Dimethylaminobenzaldehyde
ParaHoxozoa (or Parahoxozoa ) is a clade of animals that consists of Bilateria, Placozoa, and Cnidaria. [ 1 ] The relationship of Parahoxozoa relative to the two other animal lineages Ctenophora and Porifera is debated. Some phylogenomic studies have presented evidence supporting Ctenophora as the sister to Parahoxozoa and Porifera as the sister group to the rest of animals (e.g. [ 2 ] [ 3 ] [ 4 ] ). Other studies have presented evidence supporting Porifera as the sister to Parahoxozoa and Ctenophora as the sister group to the rest of animals (e.g. [ 5 ] [ 6 ] [ 7 ] ), finding that nervous systems either evolved independently in ctenophores and parahoxozoans, [ 8 ] or were secondarily lost in poriferans. [ 9 ] If ctenophores are taken to have diverged first, Eumetazoa is sometimes used as a synonym for ParaHoxozoa. [ 10 ] The cladogram, which is congruent with the vast majority of these phylogenomic studies, conveys this uncertainty with a polytomy: with Choanoflagellata as the outgroup, Ctenophora, Porifera, and ParaHoxozoa (comprising Placozoa, Cnidaria, and Bilateria) are left unresolved at the base of the animals. "ParaHox" genes are usually referred to in CamelCase and the original paper that named the clade used "ParaHoxozoa"; the single-initial-capital format "Parahoxozoa" has also come to be used in the literature. [ 11 ] Parahoxozoa was defined by the presence of several gene (sub)classes (HNF, CUT, PROS, ZF, CERS, K50, S50-PRD), as well as the Hox/ParaHox-ANTP class from which the name of this clade originated. It was later proposed [ 12 ] [ 13 ] and contested [ 14 ] that a gene of the same class (ANTP) as the Hox/ParaHox, the NK gene and the Cdx ParaHox gene, is also present in Porifera, the sponges. Regardless of whether a ParaHox gene is ever definitively identified, Parahoxozoa, as originally defined, is monophyletic and therefore continues to be used as such. [ 15 ] The original bilaterian is hypothesized to be a bottom-dwelling worm with a single body opening. [ 16 ] A through-gut may already have developed with the Ctenophora. [ 17 ] The through-gut may have developed from the corners of a single opening with lips fusing. For example, Acoela resemble the planula larvae of some Cnidaria, which exhibit some bilaterian symmetry. They are vermiform, just as the cnidarian Buddenbrockia is. [ 18 ] [ 19 ] [ 20 ] Placozoans have been noted to resemble planula. [ 21 ] Usually, "Planulozoa" is a Cnidaria–Bilateria clade that excludes Placozoa. [ 11 ] Otherwise, when including all three lineages, it is synonymous with Parahoxozoa. [ 22 ] Triploblasty may have developed before the Cnidaria–Bilateria radiation. [ 23 ]
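The unresolved topology just described can be written down compactly as a nested structure. Below is a minimal, standard-library-only sketch that renders it as an ASCII tree; the tuple encoding (including leaving the branching inside ParaHoxozoa unresolved) is one convenient representation of the polytomy in the text, not data taken from the cited studies.

```python
# ASCII rendering of the polytomy described above. Internal nodes are
# tuples; a tuple with three or more children is an unresolved polytomy.
TREE = ("Choanoflagellata",                       # outgroup
        ("Ctenophora",                            # three-way polytomy at
         "Porifera",                              # the base of the animals
         ("Placozoa", "Cnidaria", "Bilateria")))  # ParaHoxozoa

def draw(node, indent=""):
    if isinstance(node, str):        # leaf taxon
        print(indent + "- " + node)
    else:                            # internal node
        print(indent + "+")
        for child in node:
            draw(child, indent + "   ")

draw(TREE)
```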
https://en.wikipedia.org/wiki/ParaHoxozoa
ParaSurf is a molecular modelling system that uses semi-empirical molecular orbital programs to construct molecular surfaces and to calculate local properties and descriptors used by pharmaceutical companies for drug design. [ 1 ] ParaSurf is supplied to and used by many pharmaceutical companies and biotechs, including Boehringer Ingelheim, F. Hoffmann-La Roche and Sanofi-Aventis. [ 2 ] This article about molecular modelling software is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/ParaSurf
Parabiosis is a laboratory technique used in physiological research, derived from the Greek for "living beside." The technique involves the surgical joining of two living organisms in such a way that they develop a single, shared physiological system. Through this approach, researchers can study the exchange of blood, hormones, and other substances between the two organisms, allowing for the examination of a wide range of physiological phenomena and interactions. Parabiosis has been employed in various fields of study, including stem cell research, endocrinology, aging research, and immunology. Heterochronic parabiosis involves parabiosis of animals of different ages; this allows researchers to study how circulating blood-borne factors influence aging and tissue regeneration. The method has led to insights into stem cell function, neurogenesis, regeneration, and aging. In contrast, isochronic parabiosis joins two animals of the same age.

Parabiosis combines two living organisms which are joined surgically and develop a single, shared physiological system. [ 1 ] [ 2 ] Researchers can thereby demonstrate that a circulating factor produced in one animal affects the second animal via the exchange of blood and plasma. Parabiotic experiments were pioneered by Paul Bert in the mid-1800s. He postulated that surgically connected animals could share a circulatory system. Bert was awarded the Prize of Experimental Physiology of the French Academy of Science in 1866 for his discoveries. [ 3 ] One limitation of the technique is that outbred rats cannot be used, because blood from a genetically dissimilar partner can cause intoxication and thus a significant loss of pairs. [ 4 ]

Many of the parabiotic experiments since 1950 involve research regarding metabolism. One of these experiments was published in 1959 by G. R. Hervey in the Journal of Physiology. This experiment supported the theory that damage to the hypothalamus, particularly the ventromedial hypothalamus, leads to obesity caused by the overconsumption of food. The study's rats were littermates from a colony that had been closed for multiple years, and the two rats in each pair had no more than a 3% difference in weight. Rats were paired at four weeks old, and unpaired rats were used as controls. The rats were conjoined in three ways. In early experiments, the peritoneal cavities were opened and connected between the two rats. In later experiments, to avoid the risk of tangling the two rats' intestines together, smaller cuts were made. After further refinement of the experimental procedure, the abdominal cavities were not opened, and the rats were conjoined at the hip bone with minimal cutting. To prove that the two animals were sharing blood, researchers injected dye into one rat's veins, and the pigment would show up in the conjoined rat. In each pair, the rat with the surgical lesion became obese and exhibited hyperphagia: its weight rose rapidly for a few months and then reached a plateau, as a direct result of the procedure. The rat with the impaired hypothalamus ate voraciously, while the paired rat's appetite decreased; the paired rat became obviously thin throughout the experiment, even rejecting food when it was offered. [ 5 ] [ 6 ] These results implied a circulating satiety factor. Many hormones and metabolites were ruled out as the satiety factor that caused one rat to starve in the experiments; the adipose-derived hormone leptin seemed a viable candidate, and later studies identified it as this satiety factor.
Starting in 1977, Ruth B.S. Harris, a graduate student under Hervey, repeated the earlier parabiosis studies in rats and mice. Following the discovery of leptin, she analyzed leptin concentrations in the mice of the parabiotic experiments. After injecting leptin into the obese mouse of each pair, she found that leptin circulated between the conjoined animals, though its circulation took some time to reach equilibrium. The injections produced almost immediate weight loss in the parabiotic pairs, owing to increased inhibition of food intake; approximately 50–70% of fat was lost in pairs. The obese mouse lost only fat, whereas the lean mouse lost both muscle mass and fat. Harris concluded that leptin levels are increased in obese animals, but that other factors could also affect them. Leptin was also determined to decrease fat storage in both obese and thin animals. [ 4 ]

Early parabiotic experiments also included cancer research. One study, published in 1966 by Friedell, examined the effects of X-ray radiation on ovarian tumors. To study the tumors, two adult female rats were conjoined. The left rat was shielded, and the right rat was exposed to high levels of radiation. The rats were given a controlled amount of food and water. 149 of 328 pairs showed possible ovarian tumors in the irradiated animals, but not in their partners. This result matched previous studies of single rats. [ 7 ]

Chronic diseases of age are studied by conjoining an older animal with a younger animal. Known as heterochronic parabiosis, this process has been used in studies to investigate the age-related and disease-related changes in the composition of the blood, especially the plasma proteome. [ 8 ] The approach could be used to research cardiovascular disease, diabetes, osteoarthritis, and Alzheimer's disease.

As animals age, their oligodendrocytes become less efficient, resulting in decreased myelination and negative effects on the central nervous system (CNS). Julia Ruckh and fellow researchers have used parabiosis to study remyelination from adult stem cells, asking whether conjoining young mice with older mice could reverse or delay this process. The two mice were conjoined, and demyelination was induced by injection into the older mice. The experiment determined that factors from the younger mice reversed CNS demyelination in older mice by revitalizing the oligodendrocytes. The monocytes from the younger mice also enhanced the older mice's ability to clear myelin debris, because young monocytes can clear lipids from myelin sheaths more effectively than older monocytes. The conjoining of the two animals reversed the effects of age on the myelinating cells, while the ability of the young mouse's cells was unaffected. Enhanced immunity from the younger mouse also promoted the general health of the older mouse in each pair. The results of this experiment could lead to therapies for people with demyelinating diseases such as multiple sclerosis. [ 9 ] [ 10 ] [ 3 ]
Studies using heterochronic parabiosis have shown that exposure of old mice to young blood can reverse some age-related impairments in multiple tissues, including the brain, liver, heart, and skeletal muscle. Conversely, young mice exposed to old blood often show signs of accelerated aging.

The term is also applicable to spontaneously occurring conditions such as conjoined twins. [ 13 ] Another natural example is the obligate parasitic reproduction of anglerfish of the family Ceratiidae, in which the circulatory systems of the males and females unite completely. Without the attachment of males to females, the endocrine functions cannot mature; the individuals fail to develop properly and die young without reproducing. [ 14 ] Plants growing closely together, with roots or stems in intimate contact, sometimes form natural grafts. In parasitic plants such as mistletoe and dodder, the haustoria unite the circulatory systems of the host and the parasite so intimately that parasitic twiners such as Cassytha may act as vectors carrying disease organisms from one host plant to another. [ 15 ]

Ant colonies can share their nests with essentially unrelated species of ants, and even with non-ants. The cohabiting colonies do not obviously share anything beyond the nest's upkeep, even segregating their brood, so these were very surprising observations; most ants are radically intolerant of intruders, usually including even intruders of their own species. In the early 20th century Auguste-Henri Forel coined the term "parabiosis" for such associations, and it was adopted by the likes of William Morton Wheeler. [ 16 ] [ 17 ] Furthermore, there is evidence for a partitioning of work between the two species in the nest. [ 18 ] Early reports that parabiotic ant colonies forage and feed together peacefully have also been qualified by observations of ants of one species in such an association aggressively displacing members of the other species from artificially provided food, while also profiting by following their recruitment trails to new food sources. [ 17 ] Both species may benefit from shared nest defence and maintenance even when there is neither direct cooperation nor interaction between the two associated populations in a nest. [ 19 ]

Parabiosis derives most directly from Neo-Latin, [ 13 ] but the Latin in turn derives from two classical Greek roots. The first is παρά (para), meaning "beside" or "next to"; in modern etymology, this root appears in various senses, such as "close to", "outside of", and "different". The second classical Greek root, from which the Latin derives, is βίος (bios), meaning "life."
https://en.wikipedia.org/wiki/Parabiosis
In algebra, a parabolic Lie algebra $\mathfrak{p}$ is a subalgebra of a semisimple Lie algebra $\mathfrak{g}$ satisfying one of the following two conditions: $\mathfrak{p}$ contains a maximal solvable subalgebra (a Borel subalgebra) of $\mathfrak{g}$, or the orthogonal complement of $\mathfrak{p}$ with respect to the Killing form of $\mathfrak{g}$ is the nilradical of $\mathfrak{p}$. These conditions are equivalent over an algebraically closed field of characteristic zero, such as the complex numbers. If the field $\mathbb{F}$ is not algebraically closed, then the first condition is replaced by the assumption that $\mathfrak{p} \otimes_{\mathbb{F}} \overline{\mathbb{F}}$ contains a Borel subalgebra of $\mathfrak{g} \otimes_{\mathbb{F}} \overline{\mathbb{F}}$, where $\overline{\mathbb{F}}$ is the algebraic closure of $\mathbb{F}$.

For the general linear Lie algebra $\mathfrak{g} = \mathfrak{gl}_n(\mathbb{F})$, a parabolic subalgebra is the stabilizer of a partial flag of $\mathbb{F}^n$, i.e. a sequence of nested linear subspaces. For a complete flag, the stabilizer gives a Borel subalgebra. For a single linear subspace $\mathbb{F}^k \subset \mathbb{F}^n$, one gets a maximal parabolic subalgebra $\mathfrak{p}$, and the space of possible choices is the Grassmannian $\mathrm{Gr}(k,n)$. In general, for a complex simple Lie algebra $\mathfrak{g}$, parabolic subalgebras are in bijection with subsets of simple roots, i.e. subsets of the nodes of the Dynkin diagram. This algebra-related article is a stub. You can help Wikipedia by expanding it.
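To make the $\mathfrak{gl}_n$ example concrete: the stabilizer of a partial flag consists of the block upper-triangular matrices for the corresponding block sizes, and one can check numerically that this set is closed under the commutator. Below is a minimal sketch; the block sizes $(1,2)$ inside $\mathfrak{gl}_3$, i.e. the flag $0 \subset \mathbb{F}^1 \subset \mathbb{F}^3$, are an arbitrary illustrative choice, and NumPy is assumed.

```python
import numpy as np

def random_parabolic(blocks, rng):
    """Random element of the block upper-triangular subalgebra of gl_n
    stabilizing the partial flag with the given block sizes."""
    n = sum(blocks)
    m = rng.standard_normal((n, n))
    row = 0
    for size in blocks:
        m[row + size:, row:row + size] = 0.0  # zero the strictly lower blocks
        row += size
    return m

rng = np.random.default_rng(0)
blocks = (1, 2)                  # flag 0 ⊂ F^1 ⊂ F^3
x = random_parabolic(blocks, rng)
y = random_parabolic(blocks, rng)
bracket = x @ y - y @ x          # the Lie bracket of gl_n

# The strictly lower block of [x, y] vanishes as well, so the flag
# stabilizer is closed under the bracket, as a subalgebra must be.
print(np.allclose(bracket[1:, :1], 0.0))  # True
```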
https://en.wikipedia.org/wiki/Parabolic_Lie_algebra
In differential geometry and the study of Lie groups, a parabolic geometry is a homogeneous space G/P which is the quotient of a semisimple Lie group G by a parabolic subgroup P. More generally, the curved analogs of a parabolic geometry in this sense are also called parabolic geometries: any geometry that is modeled on such a space by means of a Cartan connection.

The projective space $P^n$ is an example. It is the homogeneous space $\mathrm{PGL}(n+1)/H$ where $H$ is the isotropy group of a line. In this geometrical space, the notion of a straight line is meaningful, but there is no preferred ("affine") parameter along the lines. The curved analog of projective space is a manifold in which the notion of a geodesic makes sense, but for which there are no preferred parametrizations on those geodesics. A projective connection is the relevant Cartan connection that gives a means for describing a projective geometry by gluing copies of the projective space to the tangent spaces of the base manifold. Broadly speaking, projective geometry refers to the study of manifolds with this kind of connection.

Another example is the conformal sphere. Topologically, it is the $n$-sphere, but there is no notion of length defined on it, just of angle between curves. Equivalently, this geometry is described as an equivalence class of Riemannian metrics on the sphere (called a conformal class). The group of transformations that preserve angles on the sphere is the Lorentz group $O(n+1,1)$, and so $S^n = O(n+1,1)/P$. Conformal geometry is, more broadly, the study of manifolds with a conformal equivalence class of Riemannian metrics, i.e., manifolds modeled on the conformal sphere. Here the associated Cartan connection is the conformal connection. Other examples include:
https://en.wikipedia.org/wiki/Parabolic_geometry_(differential_geometry)
Paracatenula is a genus of millimeter-sized free-living marine gutless catenulid flatworms. [ Ref 1 ] Paracatenula spp. are found worldwide in warm temperate to tropical subtidal sediments. They are part of the interstitial meiofauna of sandy sediments. Adult Paracatenula lack a mouth and a gut and are associated with intracellular symbiotic alphaproteobacteria of the genus Candidatus Riegeria. [ Ref 2 ] [ Ref 3 ] The symbionts are housed in bacteriocytes in a specialized organ, the trophosome (Greek τροφος trophos "food"). Ca. Riegeria can make up half of the worms' biomass. [ Ref 3 ] [ Ref 4 ] The beneficial symbiosis with the carbon dioxide-fixing and sulfur-oxidizing endosymbionts allows the marine flatworm to live in nutrient-poor environments. The symbionts not only provide the nutrition but also maintain the primary energy reserves of the symbiosis. [ Ref 5 ]

Five species of Paracatenula have been described: P. erato, P. kalliope, P. polyhymnia, P. urania and P. galateia, named after muses and nymphs of Greek mythology. [ Ref 1 ] [ Ref 6 ] Several more species have been morphologically and molecularly identified, but are not formally described. [ Ref 3 ] The best-studied species are P. galateia from the Belize barrier reef and a yet undescribed species, P. sp. santandrea, from the Italian island of Elba. [ Ref 5 ] Paracatenula are globally distributed in warm temperate to tropical regions and have been collected from Belize (Caribbean Sea), Egypt (Red Sea), Australia (Pacific Ocean) and Italy (Mediterranean Sea). They occur at the oxic–anoxic interface of subtidal sands and have been found at water depths of up to 40 m. [ Ref 3 ]

Paracatenula can reach a length of up to 15 mm and a width of 0.4 mm. Several larger species of Paracatenula, such as P. galateia, are flattened like a leaf, while all smaller species are round. All Paracatenula species examined so far were found to harbor bacterial symbionts in specialized symbiont-housing cells that form the nutritive organ, the trophosome. [ Ref 3 ] [ Ref 7 ] The frontal part of the worms, the rostrum, is transparent and bacteria-free and houses the brain, while the trophosome region appears white due to light-refracting inclusions in the bacterial symbionts. [ Ref 1 ] [ Ref 3 ] Some species of Paracatenula, such as P. galateia, possess a statocyst with a single statolith. [ Ref 6 ] Although Paracatenula produce sperm and eggs whose morphology can be very informative for differentiating between species, sexual reproduction has not been observed. [ Ref 5 ] Instead, the worms reproduce by asexual fission or fragmentation, a process called paratomy. Paracatenula worms have high regenerative capabilities and can regenerate a lost head, including the brain, within 10–14 days. [ Ref 8 ] [ Ref 9 ] The bacteriocytes of dividing worms are split during the fission process, and the population of symbiotic bacteria is distributed to the two daughter individuals. [ Ref 8 ]

Paracatenula host their symbionts within bacteriocytes in the trophosome. These bacteria, named Ca. Riegeria, belong to the lineage of Alphaproteobacteria, forming a monophyletic group within the order Rhodospirillales [ Ref 3 ] and the family Rhodospirillaceae. [ Ref 5 ] The co-speciation between host and bacteria suggests a strict vertical transmission of the bacteria, in which the endosymbionts are directly transferred from parents to their offspring. [ Ref 3 ] [ Ref 8 ] [ Ref 10 ] The symbiosis is shown to be beneficial for both partners. [ Ref 3 ] [ Ref 4 ] [ Ref 5 ]
The lack of both a gut lumen and a mouth indicates that the host derives most of its nutrition from its symbionts, which have the potential for carbon dioxide fixation and sulfur oxidation. [ Ref 2 ] [ Ref 3 ] [ Ref 5 ] In return, the host provides its symbionts with a stable supply of electron donors such as sulfide, as well as oxygen, in a dynamic and heterogeneous environment. [ Ref 2 ] [ Ref 3 ] [ Ref 5 ] Furthermore, symbionts living intracellularly in the worms are protected from predation as well as from competition for nutrients with other bacteria. [ Ref 3 ] Despite having a reduced genome with roughly 1400 genes, Ca. Riegeria symbionts have maintained a broad physiological repertoire, which stands in contrast to all other reduced symbionts vertically transmitted for hundreds of millions of years. Ca. R. santandreae symbionts fix carbon dioxide, store carbon in multiple storage compounds and produce all necessary building blocks for cellular life, including sugars, nucleotides, amino acids, vitamins and co-factors. [ Ref 5 ] Paracatenula lack a mouth and gut and are nutritionally dependent on their symbionts. In all other chemosynthetic symbioses, the host acquires its nutrition by digesting symbionts. In contrast, in Paracatenula the symbionts provision their host by secreting outer-membrane vesicles (OMVs), and symbiont digestion is rare. [ Ref 5 ] With their massive storage capabilities and this elegant way of providing nutrition via OMVs, the symbionts have been suggested to form a "bacterial liver" and a peculiar "battery" in the integrated Paracatenula symbiosis. [ Ref 5 ] [ Ref 11 ] [ Ref 12 ]
https://en.wikipedia.org/wiki/Paracatenula
Paracellular transport refers to the transfer of substances across an epithelium by passing through the intercellular space between the cells. [ 1 ] It is in contrast to transcellular transport, where the substances travel through the cell, passing through both the apical membrane and basolateral membrane. [ 2 ] [ 3 ] The distinction has particular significance in renal physiology and intestinal physiology. Transcellular transport often involves energy expenditure, whereas paracellular transport is unmediated and passive down a concentration gradient, [ 4 ] or occurs by osmosis (for water) and solvent drag for solutes. [ 5 ] Paracellular transport also has the benefit that absorption rate is matched to load, because it has no transporters that can be saturated.

In most mammals, intestinal absorption of nutrients is thought to be dominated by transcellular transport; e.g., glucose is primarily absorbed via the SGLT1 transporter and other glucose transporters. Paracellular absorption therefore plays only a minor role in glucose absorption, [ 6 ] although there is evidence that paracellular pathways become more available when nutrients are present in the intestinal lumen. [ 7 ] In contrast, small flying vertebrates (small birds and bats) rely on the paracellular pathway for the majority of glucose absorption in the intestine. [ 8 ] [ 9 ] This has been hypothesized to compensate for an evolutionary pressure to reduce mass in flying animals, which resulted in a reduction in intestine size and faster transit time of food through the gut. [ 10 ] [ 11 ] Capillaries of the blood–brain barrier have only transcellular transport, in contrast with normal capillaries, which have both transcellular and paracellular transport.

The paracellular pathway of transport is also important for the absorption of drugs in the gastrointestinal tract. The paracellular pathway allows the permeation of hydrophilic molecules that are not able to permeate through the lipid membrane by the transcellular pathway of absorption. This is particularly important for hydrophilic pharmaceuticals, which may not have affinity for membrane-bound transporters, and therefore may be excluded from the transcellular pathway. The vast majority of drug molecules are transported through the transcellular pathway, and the few which rely on the paracellular pathway typically have a much lower bioavailability; for instance, levothyroxine has an oral bioavailability of 40 to 80%, and desmopressin of 0.16%.

Some claudins form tight junction-associated pores that allow paracellular ion transport. [ 12 ] The tight junctions have a net negative charge and are believed to preferentially transport positively charged molecules. Tight junctions in the intestinal epithelium are also known to be size-selective, such that large molecules (with molecular radii greater than about 4.5 Å) are excluded. [ 13 ] [ 14 ] [ 15 ] Larger molecules may also pass the intestinal epithelium via the paracellular pathway, although at a much slower rate; the mechanism of this transport via a "leak" pathway is unknown but may include transient breaks in the epithelial barrier. Paracellular transport can be enhanced through the displacement of zona occludens proteins from the junctional complex by the use of permeation enhancers. Such enhancers include medium-chain fatty acids (e.g. capric acid), chitosans, zona occludens toxin, etc. [ citation needed ]
https://en.wikipedia.org/wiki/Paracellular_transport
Since early in the history of flight, non-human animals have been dropped from heights with the benefit of parachutes. Early on, animals were used as test subjects for parachutes and as entertainment. Following the development of the balloon, dogs, cats, fowl, and sheep were dropped from heights. During the 18th- and 19th-century ballooning craze known as balloonomania, many aeronauts included parachuting animals such as monkeys in their demonstrations. Later, animals were parachuted from airplanes, as test subjects, for amusement, and as a means of transporting working animals. During World War II, the many dogs parachuted from planes came to be known as "paradogs". Animal test subjects included a bear parachuted at supersonic speeds. Bat bombs, devised by the U.S. military, were designed to parachute a canister containing thousands of bomb-laden bats over Japan. Parachutes have also been used to transport animals, including mules and sheepdogs. In 1948, beaver drops in the United States parachuted beavers that were considered nuisances to remote locations. Many animals were sent into space as test subjects and would return to Earth in capsules with parachutes.

The development of the parachute in the 18th century followed the invention of the balloon. Some of the earliest tests of parachutes involved dogs, cats, and domesticated fowl. [ 1 ] In a 19 September 1783 demonstration in Versailles observed by Marie Antoinette and Louis XVI, a duck, a rooster, and a sheep were carried by a Montgolfier brothers balloon for eight minutes. [ 2 ] In the early 1780s, Louis-Sébastien Lenormand parachuted a cat and a dog from the top of Babotte Tower in Montpellier, France. In 1784, the Marquis de Brantes parachuted a sheep from the roof of the Palais des Papes in Avignon. [ 3 ] Soon after, Joseph Montgolfier dropped animals from towers to test parachute-like devices. [ 4 ] During the balloon craze known as balloonomania in the late 18th and 19th centuries, balloonists, known then as aeronauts, began experimenting with parachuting animals. The aeronaut Jean-Pierre Blanchard parachuted dozens of animals from balloons during his career. On 3 June 1785, he made a successful test of a parachute using a dog. Blanchard later dropped a cat and more dogs by parachute. His attempt to drop a sheep with a parachute was unsuccessful. Later that year, a Mr Durry in Ireland repeated the feat, with a dog "suspended over the side of the gondola, wearing nothing but a parachute, and dropped". [ 5 ] Blanchard took a 12 lb (5.4 kg) cat up in his balloon. He placed the cat in a net connected by a long cord to a parachute and then slowly lowered the parachute from the gondola until it opened up. The cat descended to the ground. Blanchard also dropped a dog attached to a large chute, twice, over Lille. The second time was witnessed by Prince de Robecq and the "dog received no hurt" according to an article in Gentleman's Magazine. In 1788, Blanchard made a demonstration for Frederick the Great, placing a bird and a cat in a basket that was attached to the parachute. The animals lived. [ 5 ]

The Cat, who was very unquiet in the state of slavery, appeared to have forgot its voracious nature, having spared the companion of its voyage; and the Bird appeared to have grown bold by the circumstance, and kept perched on the back of its enemy. [ 5 ]

On 5 June 1793, Blanchard parachuted a dog, a cat, and a squirrel in Philadelphia. The animals were placed in a basket that was tethered to his balloon.
A slow-burning fuse was set that released the basket while the balloon was mid-air, dropping the animals to the ground near Bush Hill. Blanchard repeated the demonstration on 17 and 21 June. [ 6 ] A writer for the City Gazette in South Carolina claimed that Blanchard had thrown over 60 animals "from the height of the clouds" that had parachuted to safety. [ 5 ] Blanchard's wife Sophie also parachuted dogs from her balloon. [ 7 ] In 1789, Blanchard demonstrated a parachuting dog for the Polish king Stanisław August Poniatowski. [ 5 ] In April 1835, a dog was parachuted from the roof of a theatre in Cincinnati, Ohio. [ 8 ] That same year, balloonist Charles Green parachuted Jacopo, a monkey from the Surrey Zoological Gardens, from his balloon as he passed over Walworth. Jacopo returned to the air with Margaret Graham two years later. [ 9 ] Balloon-parachute acts were popular in the mid-to-late 19th century and sometimes included animals. In 1886, the aeronaut Emil Leandro Melville in San Francisco repeatedly parachuted a small arboreal monkey from his balloon. Meanwhile, Maud DeHaven and Richard P. Hill had parachuting dogs. [ 1 ] The exploits of parachuting balloonist Thomas Scott Baldwin were replicated in 1889 by a rhesus macaque known as "the Monkey Baldwin" at English music halls. [ 1 ] Twice a day the monkey would parachute from the roof of the Royal Aquarium in Westminster. His handler Mademoiselle Eichlerette reported training three "Monkey Baldwins" and toured India and the United States for six years with her act. [ 10 ] In 1893, aeronaut Jennie Leland had a parachuting dog named "Rollo" as part of her act. [ 11 ] The American aeronaut Hazel Keyes had a monkey named "Miss Jennie Yan-Yan" who was one of the most famous parachuting monkeys. [ 1 ] Keyes toured the U.S. west coast in the 1890s with Jennie Yan-Yan, who had her own miniature parachute. While Keyes suffered injuries during several of her exhibitions, Jennie Yan-Yan was seemingly never harmed. During an exhibition in Austin, Texas, Keyes and Jennie were suspended 1,000 feet over Lake Austin. While Keyes failed to reach the powerhouse of the Austin Dam, Jennie jumped from her shoulder with a miniature parachute and descended to the waters below. [ 12 ] William Kalt, who wrote a book about Keyes, said: "I don't mean to laugh, but so many of those news reports are similar, and they almost always end by mentioning that the monkey was not harmed. She would have these horrific accidents, where everyone nearly died, yet she kept getting right back up in that balloon!" [ 13 ] A trained bonnet macaque named Mrs. Murphy made at least 150 parachute jumps during her tour of Europe and the United States in 1899 and 1900. She was purchased by her handler in India when she was two years old. [ 14 ] She would hold her hands together and "pray" prior to her ascent in the balloon and then parachute solo from heights of around 1,000 ft (300 m). [ 8 ]

Parachuting animals continued to draw crowds in the 20th century. The parachuting monkey Bimbo made a series of balloon ascensions around Montana in 1906. On August 16, at the Columbia Gardens amusement park, he fell mid-way through his parachute descent before a crowd of thousands. He had apparently gnawed through the ropes tying him to the parachute, fell around 1,000 ft (300 m), and was "crushed to a shapeless pulp on the roof of the pavilion". [ 15 ] In 1912, a chimpanzee named Topsy performed around the United States in a balloon-parachute act. [ 16 ]
Air shows with stunt flyers also featured parachuting animals. Harold "Daredevil" Lockwood, for one, had a parachuting dog in 1928. [ 17 ] Despite growing concerns for animal welfare in parachute acts, few performances were stopped. Nonetheless, the use of animals in daredevil acts became increasingly rare in the 20th century. [ 1 ] In 1929, two planned parachute drops of monkeys at Roosevelt Field in Long Island were cancelled. The first, by Charles de Bevere and his monkey "Jumpy", was stopped by clubwomen from Garden City. [ 18 ] A second drop, by parachutist Saul Debever and his monkey, was halted under threat of prosecution by the Society for the Prevention of Cruelty to Animals. [ 19 ]

Animals have long been used in the military as working animals, mascots, and test subjects. As airplane and parachute technology advanced in the 20th century, there was an increasing incidence of parachuting animals, particularly dogs. [ 20 ] Parachuting dogs, sometimes referred to as "paradogs", have been frequently employed by militaries. [ 21 ] In the early 1920s, a dog named Jeff made multiple successful jumps with the Colorado Air National Guard. On his final jump in August 1924, Jeff's chute did not open. [ 21 ] Later, in 1935, an article in Popular Science featured a successful Soviet experimental parachute system for dogs, a locked coop that sprang open when it hit the ground. [ 21 ] During World War II, the British 13th Parachute Battalion recruited dogs. The dogs served as mascots but were also trained to detect mines and serve as guard dogs. The Collie-German Shepherd mix breed dog Bing parachuted into Normandy on D-Day, though he had to be thrown out of the plane. He landed in a tree [ 22 ] but survived and later parachuted into western Germany in March 1945 as part of Operation Varsity. Bing was awarded a Dickin Medal. [ 23 ] Two other German Shepherds with the battalion, Ranee and Monty, also served as paradogs. [ 24 ] Smoky, a famous Yorkshire terrier in World War II, was parachuted from trees at heights of 30 ft (9.1 m). She was parachuted as a stunt by her handler Bill Wynne and won the Best Mascot of the Southwest Pacific Area award. [ 25 ] Rob, a Collie, was alleged to have made over 20 parachute descents during the North African campaign of World War II, serving with the SAS, and was awarded a Dickin Medal. In 2006, his jumps were revealed as a possible hoax perpetrated by members of his regiment to prevent the dog from returning to his original owners. [ 26 ] Some paradogs were killed in action: a Doberman with the 463rd Parachute Field Artillery and a German Shepherd named Jaint de Montmorency with the 506th Parachute Infantry Regiment, both of the US, died after dropping into France in 1944. [ 27 ] The U.S. Army experimented with parachuting Siberian Huskies, along with water and K-rations, to stranded soldiers. The dogs were taken up in a transport plane and pushed out of the side door, sometimes two dogs per parachute, on a static line which would open the chutes after they cleared the door. [ 28 ] In the 1950s, during Operation Deep Freeze, a series of United States missions to Antarctica, working dogs were intended to be parachuted in. [ 29 ] Most modern parachuting dogs have specially designed harnesses and make tandem jumps with their handlers.

During World War II, the United States military developed an experimental weapon known as a bat bomb.
The device consisted of a bomb-shaped casing with over a thousand compartments, each containing a hibernating Mexican free-tailed bat with a small, timed incendiary bomb attached. Dropped from a bomber at dawn, the casings would deploy a parachute mid-flight and open to release the bats, which would then disperse and roost in eaves and attics in a 20–40-mile radius (32–64 km). The incendiaries, which were set on timers, would then ignite and start fires in inaccessible places in the largely wood and paper constructions of the Japanese cities that were the weapon's intended target. [ 30 ] [ 31 ] Attempts were made by the U.S. Army in 1942 to parachute mules. In one attempt, a dozen mules were taken up in an airplane. Six of them could not be pushed out of the airplane, while another six were dropped in slings attached to parachutes. Unfortunately, the jerk from the opening of the chutes severed the mules' mesenteric arteries , killing them. [ 32 ] During the Burma Campaign , mules were flown in to the Chindits , long-range penetration forces on the ground. Lt. Col. K. I. Barlow suggested parachuting the animals and arranged trial drops at the Air Transport Development Centre in Chaklala , Punjab. An elderly, sedated mule was placed at the centre of nylon bladder pontoons that were fastened to a platform. Two triple clusters of 28 ft (8.5 m) statichutes were used and the mule was dropped from 600 ft (180 m), landing at 15 ft/s (4.6 m/s). After the successful test, mules were transported via the Chabua 44th Air Depot from Assam, India. [ 33 ] It was hard to know who kicked harder. The mules slipped and slid on an aluminum floor, protesting every shove toward the portside drop. The Americans dropped their mules from a height of 2000 to 3000 feet. Three large 'chutes per mule were not enough to guarantee a happy landing. Those that didn't make it became fresh rations. [ 33 ] Later in 1945, the British successfully devised crates with airbags for 900 lb (410 kg) mules that were dropped by parachute from C-47s . The U.S. Army experimented again with parachuting a mule in 1946. A sedated mule was strapped to a padded pallet and dropped from a C-47 by a static line at a height of 25,000 ft (7,600 m). [ 32 ] Jacksonia, a monkey captured from the island of Luzon , made two jumps by parachute in Japan during World War II with a sergeant from the 11th Airborne Division . [ 34 ] Boudgie, an African parachuting monkey, received a North Africa campaign ribbon and was credited with saving the life of her handler four times. [ 35 ] During the Vietnam War, the 173rd Airborne Brigade had a "parachuting primate", Pfc. Bufford L. Monkey, who joined troops on parachute jumps. [ 36 ] The Asian black bear Rocky was born in 1953 and purchased from a Kumamoto zoo to serve as a mascot for the U.S. 187th Airborne Regimental Combat Team during the Korean War . She completed five parachute jumps, earning her parachutist badge . After sustaining injuries during an artillery attack, she was awarded a Purple Heart . [ 37 ] [ 38 ] During the Vietnam War , supply drops made to isolated outposts could include livestock such as chickens, ducks, pigs, and cows. Except for the cows, the animals were placed in bamboo wicker baskets and dropped by parachute from heights of 250–300 ft (76–91 m). [ 39 ] Specialized harnesses and other equipment for parachuting animals have developed over time. Tandem jumps have become the predominant method. Other equipment such as goggles for dogs ( doggles ) have been designed. [ 40 ] The U.S. 
Special Operations Command's innovation cell hosted a competition to design oxygen masks for dogs in 2017. [ 41 ] Humans have also taken dogs and other animals skydiving. [ 42 ] Mike Forsythe and his dog Cara set a record for the highest tandem dog-human parachute deployment in 2011, making their descent from 30,100 ft (9,200 m). [ 21 ]

Alongside advancements in human flight and parachute technology, animals have served as test subjects. Initial tests of parachutes were often conducted with animals. Later, animals were parachuted from airplanes and rockets. During World War II, Major, a 145 lb (66 kg) St. Bernard, was fitted with a custom oxygen mask before being dropped from a plane at 26,000 ft (7,900 m). Witnesses to the test, which was to determine the impact of high altitudes on parachute straps, reported seeing Major dogpaddling during his descent. [ 43 ] The rhesus macaque Albert I was launched in a V-2 rocket on 18 June 1948. The respiratory apparatus and the parachute system both failed, [ 44 ] and Albert likely died of breathing problems; he would in any case have died on impact, since the capsule's parachute failed to open. Another rhesus macaque, Albert II, became the first mammal in space on 14 June 1949, but plummeted to his death after a parachute failure. [ 45 ] [ 46 ] At Edwards Air Force Base in 1962, bears were used for a series of escape capsule ejection tests of the Convair B-58 Hustler. The first supersonic ejection test occurred on 21 March 1962 at a speed of Mach 1.3 at 35,000 ft (11,000 m), and the bear survived the nearly eight-minute parachute descent. A bear was ejected again from a height of 45,000 ft (14,000 m) on 6 April. After examination, it was determined that the bear had suffered minor hemorrhaging of the neck muscles from whiplash and two pelvic bone fractures. On 8 June, a chimpanzee served as the test subject for the escape capsule and parachuted to the ground unharmed. Bears were sent up for subsequent tests, with one ejected on 27 July reported to have suffered "internal injuries of some severity". [ 47 ] In July 1962, four hamsters, two rhesus monkeys, and several flower beetles were sent on a 39-hour, 1,920-mile (3,090 km) high-altitude balloon trip from Goose Bay, Labrador, as part of an experiment by NASA's Ames Research Center to test the effects of radiation. Following the parachute descent of the capsules, it was discovered that the monkeys and hamsters had died due to a life support system failure. [ 48 ] During the development of spaceflight, many animals were sent into space as test subjects and would return to Earth in capsules with parachutes. [ 49 ]

In 1948, the Idaho Department of Fish and Game devised a program to relocate beavers from northwestern Idaho to the Chamberlain Basin in central Idaho. The beaver drop program was started to address residents' complaints about property damage and involved flying 76 beavers by airplane and parachuting them down to the ground. Parachuting the beavers proved to be more cost-effective than alternative methods of relocation and also decreased beaver mortality rates. [ 50 ] An older beaver named "Geronimo" was a test subject for the drop boxes, repeatedly parachuting to the ground. The Idaho Fish and Game Department produced a 14-minute film about the relocation, and the program was written up in an April 1950 article in the Journal of Wildlife Management titled "Transplanting Beavers by Airplane and Parachute". [ 51 ] [ 52 ]
In 1949, shepherds in Carbon County, Utah, had a shortage of sheepdogs to protect their flocks, many of the dogs having been poisoned by coyote bait. Due to snow, the marooned flocks were inaccessible by land, so the Civil Air Patrol arranged for a "doglift" in which sheepdogs were parachuted in. [ 53 ] Parachutes were provided by the state Aeronautics Commission, and a special harness for the paradogs was designed by E. L. Davis. [ 54 ] [ 55 ] The United Kingdom's Royal Air Force delivered cats, equipment and supplies to remote regions of the then-British colony of Sarawak (today part of Malaysia), on the island of Borneo, in 1960. [ 56 ] The cats were flown out of Singapore and delivered in crates dropped by parachutes as part of a broader program of supplying cats to combat an infestation of rats. [ 56 ] The operation, known as Operation Cat Drop, was reported as a success at the time. [ 57 ] [ 58 ] Newspaper reports published soon after the operation reference only 23 cats being used. However, some later accounts of the event claim as many as 14,000 cats were used. [ 59 ] An additional source references a "recruitment" drive for 30 cats a few days before Operation Cat Drop. [ 60 ] In 2016, the South African defense contractor Paramount Group established the Anti-Poaching and Canine Training Academy. Belgian Malinois and German Shepherds trained at the facility are parachuted from helicopters to assist in tracking elephant poachers. [ 61 ] The Utah Division of Wildlife Resources has dropped fish from aircraft to re-stock high-altitude lakes since at least 1956. This is done to repopulate the lakes with fish for recreational anglers, since the fish do not naturally reproduce in them. In 2021 the agency stated that it dropped as many as 35,000 fish during each flight, with a 95 percent survival rate. Flights are conducted each summer. [ 62 ] The Division of Wildlife Resources states that restocking the lakes by air is cheaper than transporting the fish overland and less stressful for the animals. [ 63 ]

Parachuting animals have been depicted in fiction numerous times. The 1945 Japanese film Momotaro: Sacred Sailors included a monkey, dog, and bear cub who become paratroopers. The 1995 film Operation Dumbo Drop concerns the delivery of an elephant by parachute during the Vietnam War. In the late 1990s, the artist Banksy produced a series of Parachuting Rat stencil artworks in Melbourne, depicting rats descending in parachutes. [ 64 ] In the 1949 British film Passport to Pimlico, pigs are parachuted to the people of Pimlico. [ 65 ]
https://en.wikipedia.org/wiki/Parachuting_animals
Paraconsistent mathematics , sometimes called inconsistent mathematics , represents an attempt to develop the classical infrastructure of mathematics (e.g. analysis ) based on a foundation of paraconsistent logic instead of classical logic . A number of reformulations of analysis can be developed, for example functions which both do and do not have a given value simultaneously. Chris Mortensen claims (see references): This mathematical logic -related article is a stub . You can help Wikipedia by expanding it .
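A concrete entry point to such a foundation is Priest's three-valued Logic of Paradox (LP), one standard paraconsistent logic, in which a sentence may be both true and false and yet a contradiction does not entail everything. The sketch below is illustrative only; the text does not say which paraconsistent logic the reformulations above are based on.

```python
# Truth tables for the Logic of Paradox (LP). Values: T (true only),
# B (both true and false), F (false only). T and B are "designated",
# i.e. count as satisfied.
ORDER = {"F": 0, "B": 1, "T": 2}

def neg(a):        # negation swaps T and F and fixes B
    return {"T": "F", "B": "B", "F": "T"}[a]

def conj(a, b):    # conjunction is the minimum in the order F < B < T
    return min(a, b, key=ORDER.get)

def designated(a):
    return a in ("T", "B")

# Explosion fails: a contradiction with value B is designated, yet an
# unrelated sentence q with value F is not, so (a and not-a) |= q fails.
a, q = "B", "F"
premise = conj(a, neg(a))
print(designated(premise), designated(q))  # True False
```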
https://en.wikipedia.org/wiki/Paraconsistent_mathematics
Cell signaling can be divided into three major categories: autocrine regulation, endocrine regulation, and paracrine regulation. Autocrine signaling occurs when regulator molecules are secreted by a cell and received by receptor molecules on the same cell. In endocrine signaling, regulator molecules are released by endocrine glands into the bloodstream to produce activity in distant cells. Lastly, in paracrine signaling, paracrine regulators are released by a cell to act on a neighboring cell within the same tissue. [ 1 ] Paracrine regulation is vital to many cellular processes. Examples of paracrine signaling include the regulation of insulin secretion, the regulation of blood flow, and the regulation of epidermal homeostasis.

Insulin is secreted by beta cells within the pancreatic islets of Langerhans and regulates the movement of glucose from the bloodstream into the cells for metabolism. [ 2 ] When blood glucose levels are high, for example right after a meal, the beta cells of the pancreas are stimulated to release insulin. Insulin is a hormone that stimulates cells throughout the body to take up glucose to metabolize, therefore decreasing blood glucose levels. Secretion occurs when ATP levels rise due to the increase in glucose metabolism, closing ATP-sensitive potassium channels on the beta cells. The subsequent depolarization of the cell opens voltage-gated calcium channels, leading to an influx of Ca2+ into the cell, which is required for the release of insulin. [ 3 ] The secretion of insulin by these beta cells is regulated by the paracrine activity of alpha and delta cells also located within the pancreatic islets, and by the autocrine activity of neighboring beta cells. [ 3 ] Alpha cells in the pancreatic islet release glucagon, a hormone that regulates blood glucose levels antagonistically to insulin by stimulating the breakdown of glycogen stores to increase glucose concentrations in the bloodstream. [ 4 ] However, glucagon can also activate receptors on pancreatic beta cells to increase insulin secretion. This occurs only in slightly hyperglycemic conditions, because such conditions produce the depolarization (closure of potassium channels and opening of calcium channels) that is necessary for the release of insulin, as previously discussed. Alpha cells exhibit a few other paracrine functions that stimulate the secretion of insulin by pancreatic beta cells, including the release of GLP-1 and corticotropin-releasing hormone (CRH). [ 3 ] Pancreatic delta cells also participate in the paracrine regulation of insulin and glucagon secretion by releasing somatostatin, or growth hormone-inhibiting hormone (GHIH). Somatostatin inhibits both the release of glucagon by alpha cells and the release of insulin by beta cells. [ 4 ]

Another cellular process that is regulated by paracrine signaling is blood flow. Vasoconstriction and vasodilation are the respective constriction and dilation of blood vessels throughout the body to precisely control the flow of blood. This occurs myogenically by smooth muscle cells surrounding the vessels, metabolically through changes in oxygen and carbon dioxide concentrations, and through local paracrine signaling. [ 5 ] [ 6 ] The paracrine mechanism of controlling blood flow relies on the release of signaling molecules from the bloodstream and the immune system. Platelets in the bloodstream release thromboxane A2, thrombin, and serotonin.
When the endothelium of a blood vessel is not intact, these molecules diffuse to the vascular smooth muscle tissue, where they stimulate contraction and therefore vasoconstriction, leading to a decrease in blood flow to that area. When the endothelium is intact, the serotonin and thrombin released by the platelets, as well as ADP, stimulate the endothelial cells to produce nitric oxide and prostacyclin. These signal molecules then stimulate the relaxation of the vascular smooth muscle, causing vasodilation and an increase in blood flow. [ 6 ] Mast cells, a type of white blood cell, also contribute to the paracrine regulation of blood flow by releasing histamine. During an immune response, histamine released by mast cells stimulates the endothelial cells to produce nitric oxide and prostacyclin. Again, this signals the relaxation of the vascular smooth muscle tissue, causing vasodilation and an increase in blood flow. [ 6 ] [ 7 ]

Epidermal homeostasis is maintained by the replacement of skin cells during tissue turnover and injury, as well as by the prevention of excess skin cell development. [ 8 ] It depends on the proliferation and differentiation of keratinocytes in the epidermis, which is regulated by paracrine signaling. Without proper regulation, skin conditions such as psoriasis and a lack of wound repair may occur. [ 9 ] In the dermis, the tissue layer below the epidermis, fibroblast cells are located. Fibroblast cells contribute to the formation and maintenance of connective tissue in the body. [ 10 ] These fibroblast cells release many regulator molecules that act on epidermal keratinocytes, two of which are keratinocyte growth factor (KGF) and granulocyte-macrophage colony-stimulating factor (GM-CSF). KGF and GM-CSF both stimulate the regeneration of keratinocytes in the epidermis and are both regulated by the keratinocyte-derived factor IL-1. IL-1 is a cytokine that is released by keratinocytes under stress conditions, such as injury or UV radiation. When IL-1 is released, it stimulates the release of KGF and GM-CSF by the fibroblasts, thus inducing regeneration of keratinocytes. [ 10 ]

Psoriasis is a condition that occurs when epidermal homeostasis is not properly controlled, and an excess of keratinocyte proliferation causes patches of thick skin lesions. EGFR is a receptor tyrosine kinase (RTK) involved in psoriasis. EGFR and its many ligands are overproduced or hyperactive in psoriasis patients, leading to hyper-proliferation of keratinocytes in the epidermis. Two methods have been found to work relatively effectively for the treatment of psoriasis. PD-169540 is a drug that antagonizes the EGFR RTK and has been shown to decrease the symptoms of psoriasis. Additionally, cetuximab, an anti-EGFR antibody commonly used in cancer chemotherapy, has been found to reduce the symptoms of psoriasis in certain cases, as it inhibits the EGFR RTK. [ 11 ]

Paracrine regulation plays a vital role in many cellular processes throughout the human body, including, among many others, the regulation of insulin secretion, blood flow, and epidermal homeostasis. These processes are crucial in maintaining the function of the human body. The body's signaling machinery as a whole, including paracrine, autocrine, and endocrine modes of regulation, is a complex system that is responsible for the overall homeostasis of the body.
Disruptions in this system cause a wide range of diseases and conditions that can be detrimental.
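The beta-cell trigger chain described above (glucose uptake raises ATP, ATP closes the ATP-sensitive potassium channels, the membrane depolarizes, voltage-gated calcium channels open, and Ca 2+ influx permits insulin release) can be read as a short chain of switches. The following Python sketch is a deliberately minimal toy model of that chain; the 7.0 mM threshold and all names in it are illustrative assumptions, not physiological constants or an established model.

```python
# Toy model of the beta-cell glucose-sensing chain described above:
# glucose -> ATP rises -> ATP-sensitive K+ channels close ->
# membrane depolarizes -> voltage-gated Ca2+ channels open -> insulin release.

def beta_cell_releases_insulin(blood_glucose_mM: float) -> bool:
    """Return True if the (simplified) beta cell releases insulin."""
    atp_rises = blood_glucose_mM > 7.0       # illustrative threshold, not a measured value
    k_atp_channels_open = not atp_rises      # high ATP closes the K_ATP channels
    depolarized = not k_atp_channels_open    # closed K+ channels depolarize the membrane
    ca_influx = depolarized                  # depolarization opens voltage-gated Ca2+ channels
    return ca_influx                         # Ca2+ influx is required for insulin release

for glucose in (4.0, 9.0):
    released = beta_cell_releases_insulin(glucose)
    print(f"{glucose} mM glucose -> {'insulin released' if released else 'no release'}")
```

Paracrine inputs such as glucagon or somatostatin would, in this picture, simply bias the switches up or down rather than change the chain itself.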
https://en.wikipedia.org/wiki/Paracrine_regulator
In cellular biology , paracrine signaling is a form of cell signaling , a type of cellular communication in which a cell produces a signal to induce changes in nearby cells, altering the behaviour of those cells. Signaling molecules known as paracrine factors diffuse over a relatively short distance (local action), as opposed to cell signaling by endocrine factors , hormones which travel considerably longer distances via the circulatory system ; juxtacrine interactions ; and autocrine signaling . Cells that produce paracrine factors secrete them into the immediate extracellular environment. Factors then travel to nearby cells, in which the gradient of factor received determines the outcome. However, the exact distance that paracrine factors can travel is not certain. Although paracrine signaling elicits a diverse array of responses in the induced cells, most paracrine factors utilize a relatively streamlined set of receptors and pathways. In fact, different organs in the body - even between different species - are known to utilize similar sets of paracrine factors in development. [ 1 ] The highly conserved receptors and pathways can be organized into four major families based on similar structures: the fibroblast growth factor (FGF) family, the Hedgehog family, the Wnt family, and the TGF-β superfamily . Binding of a paracrine factor to its respective receptor initiates signal transduction cascades, eliciting different responses. In order for paracrine factors to successfully induce a response in the receiving cell, that cell must have the appropriate receptors available on the cell membrane to receive the signals, a state known as being competent . Additionally, the responding cell must have the ability to be mechanistically induced. Although the FGF family of paracrine factors has a broad range of functions, major findings support the idea that they primarily stimulate proliferation and differentiation. [ 2 ] [ 3 ] To fulfill many diverse functions, FGFs can be alternatively spliced or even use different initiation codons to create hundreds of different FGF isoforms . [ 4 ] One of the most important functions of the FGF receptors (FGFRs) is in limb development. This signaling involves nine different alternatively spliced isoforms of the receptor. [ 5 ] Fgf 8 and Fgf 10 are two of the critical players in limb development. In forelimb initiation and limb growth in mice, axial (lengthwise) cues from the intermediate mesoderm produce Tbx 5, which subsequently signals to the same mesoderm to produce Fgf 10. Fgf 10 then signals to the ectoderm to begin production of Fgf 8, which also stimulates the production of Fgf 10. Deletion of Fgf 10 results in limbless mice. [ 6 ] Additionally, paracrine signaling of Fgf is essential in the developing eye of chicks. The fgf 8 mRNA becomes localized in what differentiates into the neural retina of the optic cup . These cells are in contact with the outer ectoderm cells, which will eventually become the lens. [ 4 ] Paracrine signaling through fibroblast growth factors and their respective receptors utilizes the receptor tyrosine kinase pathway. This signaling pathway has been studied extensively, using Drosophila eyes and human cancers. [ 7 ] Binding of FGF to FGFR phosphorylates the idle kinase and activates the RTK pathway. This pathway begins at the cell membrane surface, where a ligand binds to its specific receptor.
Ligands that bind to RTKs include fibroblast growth factors , epidermal growth factors, platelet-derived growth factors, and stem cell factor . [ 7 ] Ligand binding dimerizes the transmembrane receptor with another RTK, causing the autophosphorylation and subsequent conformational change of the homodimerized receptor. This conformational change activates the dormant kinase of each RTK on the tyrosine residue. Because the receptor spans the membrane from the extracellular environment, through the lipid bilayer , and into the cytoplasm , the binding of the ligand to the receptor also causes the trans-phosphorylation of the cytoplasmic domain of the receptor. [ 8 ] An adaptor protein (such as Grb2) recognizes the phosphorylated tyrosine on the receptor. This protein functions as a bridge which connects the RTK to an intermediate protein (such as the guanine nucleotide release protein SOS), starting the intracellular signaling cascade. In turn, the intermediate protein stimulates the exchange of GDP for GTP on Ras, converting it to its activated GTP-bound form. GAP eventually returns Ras to its inactive state. Activation of Ras has the potential to initiate three signaling pathways downstream of Ras: the Ras→Raf→MAP kinase pathway, the PI3 kinase pathway, and the Ral pathway. Each pathway leads to the activation of transcription factors which enter the nucleus to alter gene expression. [ 9 ] Paracrine signaling of growth factors between nearby cells has been shown to exacerbate carcinogenesis . In fact, mutant forms of a single RTK may play a causal role in very different types of cancer. The Kit proto-oncogene encodes a tyrosine kinase receptor whose ligand is a paracrine protein called stem cell factor (SCF), which is important in hematopoiesis (the formation of blood cells). [ 10 ] The Kit receptor and related tyrosine kinase receptors are normally autoinhibited in the absence of ligand, which effectively suppresses receptor firing. Mutant forms of the Kit receptor, which fire constitutively in a ligand-independent fashion, are found in a diverse array of cancerous malignancies. [ 11 ] Research on thyroid cancer has elucidated the theory that paracrine signaling may aid in creating tumor microenvironments. Chemokine transcription is upregulated when Ras is in the GTP-bound state. The chemokines are then released from the cell, free to bind to another nearby cell. Paracrine signaling between neighboring cells creates this positive feedback loop. Thus, the constitutive transcription of upregulated proteins forms ideal environments for tumors to arise. [ citation needed ] Effectively, multiple bindings of ligands to the RTK receptors overstimulate the Ras-Raf-MAPK pathway, which enhances the mitogenic and invasive capacity of cells. [ 12 ] In addition to the RTK pathway, fibroblast growth factors can also activate the JAK-STAT signaling pathway . Instead of carrying covalently associated tyrosine kinase domains, JAK-STAT receptors form noncovalent complexes with tyrosine kinases of the JAK ( Janus kinase ) class. These receptors bind erythropoietin (important for erythropoiesis ), thrombopoietin (important for platelet formation), and interferons (important for mediating immune cell function). [ 13 ] After dimerization of the cytokine receptors following ligand binding, the JAKs transphosphorylate each other. The resulting phosphotyrosines attract STAT proteins. The STAT proteins dimerize and enter the nucleus to act as transcription factors to alter gene expression. [ 13 ] In particular, the STATs transcribe genes that aid in cell proliferation and survival – such as myc.
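The RTK relay just described is essentially a sequence of state changes: ligand binding dimerizes the receptor, trans-phosphorylation switches its kinase on, and an adaptor-coupled exchange step flips Ras from its GDP-bound to its GTP-bound state, which then fans out into the three downstream pathways. The sketch below is a minimal Python rendering of that sequence; the class and method names are illustrative assumptions, not a real signaling API.

```python
# Minimal state-machine sketch of the RTK -> Ras relay described above.

class RTK:
    def __init__(self):
        self.dimerized = False
        self.phosphorylated = False

    def bind_ligand(self):
        self.dimerized = True           # ligand binding pairs two receptor monomers
        self.phosphorylated = True      # the pair trans-phosphorylates its tyrosines

class Ras:
    def __init__(self):
        self.nucleotide = "GDP"         # inactive resting state

    def activate(self, receptor: RTK):
        if receptor.phosphorylated:     # adaptor + exchange-protein step, collapsed to one check
            self.nucleotide = "GTP"

    def inactivate(self):
        self.nucleotide = "GDP"         # GAP returns Ras to the inactive state

    def downstream_pathways(self):
        if self.nucleotide == "GTP":
            return ["Ras->Raf->MAP kinase", "PI3 kinase", "Ral"]
        return []

receptor, ras = RTK(), Ras()
receptor.bind_ligand()
ras.activate(receptor)
print(ras.downstream_pathways())   # ['Ras->Raf->MAP kinase', 'PI3 kinase', 'Ral']
```

A constitutively firing mutant receptor, of the kind discussed above for Kit, corresponds to `phosphorylated` being stuck at `True` regardless of ligand.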
[ 14 ] The JAK-STAT signaling pathway is instrumental in the development of limbs, specifically in its ability to regulate bone growth through paracrine signaling of cytokines. However, mutations in this pathway have been implicated in severe forms of dwarfism: thanatophoric dysplasia (lethal) and achondroplastic dwarfism (viable). [ 16 ] These result from a mutation in the gene for fibroblast growth factor receptor 3 ( FGFR3 ), causing a premature and constitutive activation of the Stat1 transcription factor. Chondrocyte cell division in the growth plates of the rib and limb bones is prematurely terminated, resulting in lethal dwarfism: the inability of the rib cage to expand prevents the newborn's breathing. [ 17 ] Research on paracrine signaling through the JAK-STAT pathway revealed its potential in activating invasive behavior of ovarian epithelial cells . This epithelial-to-mesenchymal transition is highly evident in metastasis . [ 18 ] Paracrine signaling through the JAK-STAT pathway is necessary in the transition from stationary epithelial cells to mobile mesenchymal cells, which are capable of invading surrounding tissue. Only the JAK-STAT pathway has been found to induce this migratory behavior. [ 19 ] The Hedgehog protein family is involved in the induction of cell types and the creation of tissue boundaries and patterning, and its members are found in all bilaterian organisms. Hedgehog proteins were first discovered and studied in Drosophila . They produce key signals for the establishment of the limb and body plan of fruit flies as well as the homeostasis of adult tissues, and are involved in late embryogenesis and metamorphosis . At least three Drosophila hedgehog homologs have been found in vertebrates: sonic hedgehog, desert hedgehog, and Indian hedgehog. Sonic hedgehog ( SHH ) has various roles in vertebrate development, mediating signaling and regulating the organization of the central nervous system, limb, and somite polarity . Desert hedgehog ( DHH ) is expressed in the Sertoli cells involved in spermatogenesis . Indian hedgehog ( IHH ) is expressed in the gut and cartilage, and is important in postnatal bone growth. [ 20 ] [ 21 ] [ 22 ] Members of the Hedgehog protein family act by binding to a transmembrane " Patched " receptor, which is bound to the " Smoothened " protein, by which the Hedgehog signal can be transduced . In the absence of Hedgehog, the Patched receptor inhibits Smoothened action. Inhibition of Smoothened causes the Cubitus interruptus (Ci), Fused, and Cos protein complex attached to microtubules to remain intact. In this conformation, the Ci protein is cleaved so that a portion of the protein is allowed to enter the nucleus and act as a transcriptional repressor . In the presence of Hedgehog, Patched no longer inhibits Smoothened. The active Smoothened protein is then able to inhibit PKA and Slimb, so that the Ci protein is not cleaved. This intact Ci protein can enter the nucleus, associate with the CBP protein, and act as a transcriptional activator , inducing the expression of Hedgehog-response genes. [ 22 ] [ 23 ] [ 24 ] The Hedgehog signaling pathway is critical in proper tissue patterning and orientation during the normal development of most animals. Hedgehog proteins induce cell proliferation in certain cells and differentiation in others. Aberrant activation of the Hedgehog pathway has been implicated in several types of cancers , basal-cell carcinoma in particular.
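As described above, the Hedgehog pathway behaves like a two-state switch on the Ci protein: without Hedgehog, Patched inhibits Smoothened and Ci is cleaved into a repressor; with Hedgehog, Smoothened is released, Ci stays intact, and it activates Hedgehog-response genes. A minimal truth-table sketch in Python, with illustrative names only:

```python
# Two-state sketch of the Hedgehog switch on Ci described above.

def ci_output(hedgehog_present: bool) -> str:
    patched_inhibits_smoothened = not hedgehog_present
    if patched_inhibits_smoothened:
        # Inactive Smoothened leaves PKA/Slimb free, so Ci is cleaved.
        return "cleaved Ci fragment -> nuclear repressor of Hedgehog-response genes"
    # Active Smoothened inhibits PKA and Slimb, so Ci escapes cleavage.
    return "intact Ci with CBP -> transcriptional activator of Hedgehog-response genes"

print(ci_output(hedgehog_present=False))
print(ci_output(hedgehog_present=True))
```

The ligand-independent cancers discussed below correspond to this switch being forced into its active branch by mutation rather than by Hedgehog itself.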
This uncontrolled activation of the Hedgehog proteins can be caused by mutations in the signaling pathway, which would be ligand-independent, or by a mutation that causes overexpression of the Hedgehog protein, which would be ligand-dependent. In addition, therapy-induced Hedgehog pathway activation has been shown to be necessary for progression of prostate cancer tumors after androgen deprivation therapy . [ 25 ] This connection between the Hedgehog signaling pathway and human cancers may make therapeutic intervention possible as a treatment for such cancers. The Hedgehog signaling pathway is also involved in the normal regulation of stem-cell populations and is required for the normal growth and regeneration of damaged organs. This may provide another possible route for tumorigenesis via the Hedgehog pathway. [ 26 ] [ 27 ] [ 28 ] The Wnt protein family includes a large number of cysteine -rich glycoproteins . The Wnt proteins activate signal transduction cascades via three different pathways: the canonical Wnt pathway , the noncanonical planar cell polarity (PCP) pathway , and the noncanonical Wnt/Ca 2+ pathway. Wnt proteins appear to control a wide range of developmental processes and have been seen as necessary for the control of spindle orientation, cell polarity, cadherin -mediated adhesion, and early development of embryos in many different organisms. Current research has indicated that deregulation of Wnt signaling plays a role in tumor formation, because at the cellular level, Wnt proteins often regulate cell proliferation , cell morphology, cell motility , and cell fate. [ 29 ] In the canonical pathway , a Wnt protein binds to a transmembrane receptor of the Frizzled family of proteins. The binding of Wnt to a Frizzled protein activates the Dishevelled protein. In its active state, the Dishevelled protein inhibits the activity of the glycogen synthase kinase 3 ( GSK3 ) enzyme. Normally, active GSK3 prevents the dissociation of β-catenin from the APC protein, which results in β-catenin degradation. Inhibition of GSK3 thus allows β-catenin to dissociate from APC, accumulate, and travel to the nucleus. In the nucleus, β-catenin associates with the Lef/Tcf transcription factor , which is already bound to DNA as a repressor, inhibiting the transcription of the genes it binds. The binding of β-catenin converts Lef/Tcf into a transcriptional activator, switching on the transcription of Wnt-responsive genes. [ 30 ] [ 31 ] [ 32 ] The noncanonical Wnt pathways provide a signal transduction route for Wnt that does not involve β-catenin . In the noncanonical pathways, Wnt affects the actin and microtubular cytoskeleton as well as gene transcription . The noncanonical PCP pathway regulates cell morphology , division , and movement . Once again, Wnt binds to and activates Frizzled, so that Frizzled activates a Dishevelled protein that is tethered to the plasma membrane through a Prickle protein and the transmembrane Stbm protein. The active Dishevelled activates the RhoA GTPase through the Dishevelled-associated activator of morphogenesis 1 (Daam1) and the Rac protein . Active RhoA is able to induce cytoskeletal changes by activating Rho-associated kinase (ROCK) and to affect gene transcription directly. Active Rac can directly induce cytoskeletal changes and affect gene transcription through activation of JNK. [ 30 ] [ 31 ] [ 32 ] The noncanonical Wnt/Ca 2+ pathway regulates intracellular calcium levels. Again, Wnt binds to and activates Frizzled.
In this case, however, activated Frizzled causes a coupled G-protein to activate phospholipase C (PLC), which interacts with and splits PIP 2 into DAG and IP 3 . IP 3 can then bind to a receptor on the endoplasmic reticulum to release intracellular calcium stores, inducing calcium-dependent gene expression. [ 30 ] [ 31 ] [ 32 ] The Wnt signaling pathways are critical in cell-cell signaling during normal development and embryogenesis and are required for the maintenance of adult tissue; it is therefore not difficult to understand why disruption of Wnt signaling pathways can promote human degenerative disease and cancer . The Wnt signaling pathways are complex, involving many different elements, and therefore offer many targets for misregulation. Mutations that cause constitutive activation of the Wnt signaling pathway lead to tumor formation and cancer. Aberrant activation of the Wnt pathway can lead to increased cell proliferation. Current research is focused on the action of the Wnt signaling pathway in regulating the choice of stem cells to proliferate and self-renew. This action of Wnt signaling in the possible control and maintenance of stem cells may provide a treatment for cancers exhibiting aberrant Wnt signaling. [ 33 ] [ 34 ] [ 35 ] " TGF " (Transforming Growth Factor) is a family of 33 members encoding dimeric , secreted polypeptides that regulate development. [ 36 ] Many developmental processes are under its control, including gastrulation, axis symmetry of the body, organ morphogenesis, and tissue homeostasis in adults. [ 37 ] All TGF-β ligands bind to either type I or type II receptors, creating heterotetrameric complexes. [ 38 ] The TGF-β pathway regulates many cellular processes in the developing embryo and in adult organisms, including cell growth , differentiation , apoptosis , and homeostasis . There are five kinds of type II receptor and seven kinds of type I receptor in humans and other mammals. These receptors are known as "dual-specificity kinases" because their cytoplasmic kinase domain has weak tyrosine kinase activity but strong serine / threonine kinase activity. [ 39 ] When a TGF-β superfamily ligand binds to the type II receptor, it recruits a type I receptor and activates it by phosphorylating the serine or threonine residues of its "GS" box. [ 40 ] This forms an activation complex that can then phosphorylate SMAD proteins. There are three classes of SMADs: the receptor-regulated SMADs (R-SMADs, which include SMAD2 and SMAD3 as well as SMAD1 and SMAD5), the single common-mediator SMAD (co-SMAD, SMAD4 ), and the inhibitory SMADs (I-SMADs, SMAD6 and SMAD7). [ 41 ] [ 42 ] [ 43 ] The TGF-β superfamily activates members of the SMAD family, which function as transcription factors. Specifically, the type I receptor, activated by the type II receptor, phosphorylates R-SMADs, which then bind to the co-SMAD, SMAD4 . The R-SMAD/co-SMAD complex associates with importin and enters the nucleus, where it acts as a transcription factor and either up-regulates or down-regulates the expression of a target gene. Specific TGF-β ligands result in the activation of either the SMAD2/3 or the SMAD1/5 R-SMADs . For instance, when activin , Nodal , or a TGF-β ligand binds to the receptors, the phosphorylated receptor complex can activate SMAD2 and SMAD3 through phosphorylation. However, when a BMP ligand binds to the receptors, the phosphorylated receptor complex activates SMAD1 and SMAD5 . The Smad2/3 or Smad1/5 complexes then form a complex with SMAD4 and become transcription factors . Though there are many R-SMADs involved in the pathway, there is only one co-SMAD, SMAD4 .
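The ligand-to-SMAD routing described above is compact enough to tabulate: TGF-β, activin, and Nodal receptor complexes phosphorylate SMAD2/3, BMP receptor complexes phosphorylate SMAD1/5, and either set then pairs with the single co-SMAD, SMAD4, before entering the nucleus. A small Python sketch of that routing, for illustration only:

```python
# Routing table for the R-SMAD selection described above.

R_SMADS_BY_LIGAND = {
    "TGF-beta": ["SMAD2", "SMAD3"],
    "activin":  ["SMAD2", "SMAD3"],
    "Nodal":    ["SMAD2", "SMAD3"],
    "BMP":      ["SMAD1", "SMAD5"],
}

def nuclear_complex(ligand: str) -> list[str]:
    r_smads = R_SMADS_BY_LIGAND[ligand]   # phosphorylated by the activated type I receptor
    return r_smads + ["SMAD4"]            # complex with the sole co-SMAD enters the nucleus

print(nuclear_complex("BMP"))     # ['SMAD1', 'SMAD5', 'SMAD4']
print(nuclear_complex("Nodal"))   # ['SMAD2', 'SMAD3', 'SMAD4']
```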
[ 44 ] Non-Smad signaling proteins contribute to the responses of the TGF-β pathway in three ways. First, non-Smad signaling pathways phosphorylate the Smads. Second, Smads directly signal to other pathways by communicating directly with other signaling proteins, such as kinases. Finally, the TGF-β receptors directly phosphorylate non-Smad proteins. [ 45 ] This family includes TGF-β1 , TGF-β2 , TGF-β3 , and TGF-β5. They are involved in the positive and negative regulation of cell division , the formation of the extracellular matrix between cells, apoptosis , and embryogenesis . They bind to the TGF-β type II receptor (TGFBRII). TGF-β1 stimulates the synthesis of collagen and fibronectin and inhibits the degradation of the extracellular matrix . Ultimately, it increases the production of extracellular matrix by epithelial cells . [ 38 ] TGF-β proteins regulate epithelia by controlling where and when they branch to form kidney, lung, and salivary gland ducts. [ 38 ] Members of the BMP family were originally found to induce bone formation , as their name suggests. However, BMPs are highly multifunctional and can also regulate apoptosis , cell migration , cell division , and differentiation . They also specify the anterior/posterior axis, induce growth, and regulate homeostasis . [ 36 ] The BMPs bind to the bone morphogenetic protein receptor type II (BMPR2). Some of the proteins of the BMP family are BMP4 and BMP7 . BMP4 promotes bone formation, causes cell death, or signals the formation of epidermis , depending on the tissue it is acting on. BMP7 is crucial for kidney development, sperm synthesis, and neural tube polarization. Both BMP4 and BMP7 regulate mature ligand stability and processing, including the degradation of ligands in lysosomes. [ 36 ] BMPs act by diffusing from the cells that create them. [ 46 ] Growth factors and clotting factors are paracrine signaling agents. The local action of growth factor signaling plays an especially important role in the development of tissues. Also, retinoic acid , the active form of vitamin A , functions in a paracrine fashion to regulate gene expression during embryonic development in higher animals. [ 48 ] In insects, Allatostatin controls growth through paracrine action on the corpora allata. [ citation needed ] In mature organisms, paracrine signaling is involved in responses to allergens , tissue repair, the formation of scar tissue , and blood clotting . [ citation needed ] Histamine is a paracrine regulator that is released by immune cells in the bronchial tree. Histamine causes the smooth muscle cells of the bronchi to constrict, narrowing the airways. [ 49 ]
https://en.wikipedia.org/wiki/Paracrine_signaling
In materials science , paracrystalline materials are defined as having short- and medium-range ordering in their lattice (similar to the liquid crystal phases) but lacking crystal -like long-range ordering in at least one direction. [ 1 ] The words "paracrystallinity" and "paracrystal" were coined by the late Friedrich Rinne in the year 1933. [ 2 ] Their German equivalents, e.g. "Parakristall", appeared in print one year earlier. [ 3 ] A general theory of paracrystals has been formulated in a basic textbook, [ 4 ] and then further developed/refined by various authors. Rolf Hosemann 's definition of an ideal paracrystal is: "The electron density distribution of any material is equivalent to that of a paracrystal when there is for every building block one ideal point so that the distance statistics to other ideal points are identical for all of these points. The electron configuration of each building block around its ideal point is statistically independent of its counterpart in neighboring building blocks. A building block corresponds then to the material content of a cell of this "blurred" space lattice, which is to be considered a paracrystal." [ 5 ] Ordering is the regularity with which atoms appear in a predictable lattice, as measured from one point. In a highly ordered, perfectly crystalline material, or single crystal , the location of every atom in the structure can be described exactly, measuring out from a single origin. Conversely, in a disordered structure such as a liquid or amorphous solid , the location of the nearest and, perhaps, second-nearest neighbors can be described from an origin (with some degree of uncertainty), and the ability to predict locations decreases rapidly farther out. The distance at which atom locations can be predicted is referred to as the correlation length ξ. A paracrystalline material exhibits a correlation length somewhere between that of the fully amorphous and the fully crystalline states. The primary, most accessible sources of crystallinity information are X-ray diffraction and cryo-electron microscopy , [ 6 ] although other techniques may be needed to observe the complex structure of paracrystalline materials, such as fluctuation electron microscopy [ 7 ] in combination with density of states modeling [ 8 ] of electronic and vibrational states. Scanning transmission electron microscopy can provide real-space and reciprocal-space characterization of paracrystallinity in nanoscale materials, such as quantum dot solids. [ 9 ] The scattering of X-rays, neutrons and electrons on paracrystals is quantitatively described by the theories of the ideal [ 10 ] and real [ 11 ] paracrystal. Numerical differences in analyses of diffraction experiments on the basis of either of these two theories of paracrystallinity can often be neglected. [ 12 ] Just like ideal crystals, ideal paracrystals extend theoretically to infinity. Real paracrystals, on the other hand, follow the empirical α*-law, [ 13 ] which restricts their size. That size is also inversely proportional to the components of the tensor of the paracrystalline distortion. Larger solid state aggregates are then composed of micro-paracrystals. [ 14 ] The paracrystal model has been useful, for example, in describing the state of partially amorphous semiconductor materials after deposition. It has also been successfully applied to synthetic polymers, liquid crystals, biopolymers, quantum dot solids, and biomembranes. [ 15 ]
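The α*-law mentioned above makes the inverse relation between domain size and distortion concrete. In its commonly quoted form (stated here as an assumption, for illustration), √N · g ≈ α*, where N is the number of lattice planes in a paracrystalline domain, g is the relative lattice distortion (e.g. g = 0.05 for a 5% spacing fluctuation), and α* is an empirical constant of roughly 0.1–0.2. A short numerical sketch:

```python
# Numerical sketch of the empirical alpha*-law, assuming sqrt(N) * g ~= alpha*.

ALPHA_STAR = 0.15  # assumed mid-range value of the empirical constant (~0.1-0.2)

def max_lattice_planes(g: float) -> float:
    """Largest N compatible with sqrt(N) * g = ALPHA_STAR."""
    return (ALPHA_STAR / g) ** 2

for g in (0.01, 0.05, 0.10):
    print(f"g = {g:.2f} -> N ~ {max_lattice_planes(g):.0f} lattice planes")
# g = 0.01 -> N ~ 225; g = 0.05 -> N ~ 9; g = 0.10 -> N ~ 2
```

Doubling the distortion thus quarters the number of lattice planes a real paracrystal can sustain, which is one way to read the size restriction stated above.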
https://en.wikipedia.org/wiki/Paracrystallinity
Paracytophagy (from Ancient Greek para 'nearby', kytos 'cell', and -phagy 'eating') is the cellular process whereby a cell engulfs a protrusion which extends from a neighboring cell. This protrusion may contain material which is actively transferred between the cells. The process of paracytophagy [ 1 ] was first described as a crucial step during cell-to-cell spread of the intracellular bacterial pathogen Listeria monocytogenes , and is also commonly observed in Shigella flexneri . Paracytophagy allows these intracellular pathogens to spread directly from cell to cell, thus escaping immune detection and destruction. Studies of this process have contributed significantly to our understanding of the role of the actin cytoskeleton in eukaryotic cells. Actin is one of the main cytoskeletal proteins in eukaryotic cells. The polymerization of actin filaments is responsible for the formation of pseudopods , filopodia and lamellipodia during cell motility . Cells actively build actin microfilaments that push the cell membrane in the direction of advance. [ 2 ] Nucleation factors enhance actin polymerization and contribute to the formation of the trimeric polymerization nucleus, the structure required to initiate actin filament polymerization in a stable and efficient way. Nucleation factors such as WASP (Wiskott-Aldrich syndrome protein) help to form the seven-protein Arp2/3 nucleation complex , which resembles two actin monomers and therefore allows for easier formation of the polymerization nucleus. Arp2/3 is able to cap the trailing ("minus") end of the actin filament, allowing for faster polymerization at the "plus" end. It can also bind to the side of existing filaments to promote filament branching. [ 3 ] Certain intracellular pathogens such as the bacterial species Listeria monocytogenes and Shigella flexneri can manipulate host cell actin polymerization to move through the cytosol and spread to neighboring cells (see below). Studies of these bacteria, especially of the Listeria actin assembly-inducing protein (ActA), have resulted in further understanding of the actions of WASP. ActA is a nucleation-promoting factor that mimics WASP. Its expression is polarized to the posterior end of the bacterium, allowing Arp2/3-mediated actin nucleation there. This pushes the bacterium in the anterior direction, leaving a trailing "comet tail" of actin. In the case of Shigella , which also moves using an actin comet tail, the bacterial factor IcsA recruits host cell WASPs in order to promote actin nucleation. [ 2 ] [ 3 ] Cells can exchange material through various mechanisms, such as by secreting proteins , releasing extracellular vesicles such as exosomes or microvesicles , or more directly engulfing pieces of adjacent cells. In one example, filopodia -like protrusions, or tunneling nanotubes , directed toward neighboring cells in a culture of rat PC12 cells have been shown to facilitate transport of organelles through transient membrane fusion. [ 4 ] In another example, during bone marrow homing, cells of the surrounding bone engulf pieces of bone marrow hematopoietic cells. These osteoblasts make contact with hematopoietic stem-progenitor cells through membrane nanotubes, and pieces of the donor cells are transferred over time to various endocytic compartments of the target osteoblasts. [ 5 ] A distinct process known as trogocytosis , the exchange of lipid rafts or membrane patches between immune cells, can facilitate responses to foreign stimuli.
[ 6 ] Moreover, exosomes have been shown to deliver not only antigens for cross-presentation , [ 7 ] but also MHCII and co-stimulatory molecules for lymphocyte T activation. [ 8 ] In non-immune cells, it has been demonstrated that mitochondria can be exchanged intercellularly to rescue metabolically non-viable cells lacking mitochondria. [ 9 ] Mitochondrial transfer has also been observed in cancer cells. [ 10 ] Argosomes are derived from basolateral epithelial membranes and allow communication between adjacent cells. They were first described in Drosophila melanogaster , where they act as a vehicle for the spread of molecules through the epithelia of imaginal discs. [ 11 ] Melanosomes are also transferred by filopodia from melanocytes to keratinocytes. This transfer involves a classic filopodial forming pathway, with Cdc42 and WASP as key factors. [ 12 ] Argosomes , melanosomes , and other examples of epithelial transfer have been compared with the process of paracytophagy, all of which can be viewed as special cases of intercellular material transfer between epithelial cells. [ 4 ] The two main examples of paracytophagy are the modes of cell-cell transmission of Listeria monocytogenes and Shigella flexneri . In the case of Listeria , the process was first described in detail using electron microscopy [ 13 ] and video microscopy. [ 1 ] The following is a description of the process of cell-cell transmission of Listeria monocytogenes , primarily based on Robbins et al . (1999): [ 1 ] In an already infected "donor" cell, the Listeria bacterium expresses ActA , which results in formation of the actin comet tail and movement of the bacterium throughout the cytoplasm . When the bacterium encounters the donor cell membrane , it will either ricochet off it or adhere to it and begin to push outwards, distending the membrane and forming a protrusion of 3-18 μm. The close interaction between the bacterium and the host cell membrane is thought to depend on Ezrin , a member of the ERM family of membrane-associated proteins . Ezrin attaches the actin-propelled bacterium to the plasma membrane by crosslinking the actin comet tail to the membrane, and maintains this interaction throughout the protrusion process. [ 14 ] As the normal site of infection is the gut columnar epithelium , cells are packed closely together and a cell protrusion from one cell will easily push into a neighboring "target" cell without rupturing the target cell membrane or the donor protrusion membrane. At this point, the bacterium at the tip of the protrusion will begin to undergo "fitful movement" caused by continuing polymerization of actin at its rear. After 7–15 minutes, the donor cell membrane pinches off and fitful movement ceases for 15–25 minutes due to depletion of ATP. Subsequently, the target membrane pinches off (taking 30–150 seconds) and the secondary vacuole containing the bacterium forms inside the target cell cytoplasm. Within 5 minutes, the target cell becomes infected when the secondary vacuole begins to acidify and the inner (donor cell-derived) membrane breaks down through the action of bacterial phospholipases (PI-PLC and PC-PLC). Shortly thereafter, the outer membrane breaks down as a result of the actions of the bacterial protein listeriolysin O [ 15 ] which punctures the vacuolar membrane. A cloud of residual donor cell-derived actin persists around the bacterium for up to 30 minutes. 
The bacterial metalloprotease Mpl cleaves ActA in a pH-dependent fashion while the bacterium is still within the acidified secondary vacuole, but new ActA transcription is not required, as pre-existing ActA mRNA can be used to translate new ActA protein. The bacterium regains motility and the infection proceeds. The most severe symptoms of listeriosis result from involvement of the central nervous system (CNS). These severe and often fatal symptoms include meningitis , rhombencephalitis , and encephalitis . These forms of disease are a direct result of Listeria pathogenicity mechanisms at the cellular level. [ 16 ] Listerial infection involving the CNS can occur via three known routes: through the blood, through intracellular delivery, or through neuronal intracellular spread. Paracytophagous cell-to-cell spread offers Listeria access to the CNS by the latter two mechanisms. [ 17 ] In peripheral tissues, Listeria can invade cells such as monocytes and dendritic cells from infected endothelial cells via the paracytophagous mode of invasion. Using these phagocytic cells as vectors, Listeria travels throughout the nerves and reaches tissues usually inaccessible to other bacterial pathogens. Similar to the mechanism seen in HIV , infected leukocytes in the blood cross the blood-brain barrier and transport Listeria into the CNS. Once in the CNS, cell-to-cell spreading causes associated damage leading to brain encephalitis and bacterial meningitis. Listeria uses phagocytic leukocytes as a " Trojan horse " [ 18 ] to gain access to a greater range of target cells. In one study, mice treated with gentamicin via infusion pump displayed CNS and brain involvement during infection with Listeria , indicating that the population of bacteria responsible for severe pathogenesis resided within cells and was protected from the circulating antibiotic . [ 19 ] [ 20 ] Macrophages infected with Listeria pass the infection on to neurons more easily through paracytophagy than through extracellular invasion by free bacteria. [ 21 ] The mechanism that specifically targets these infected cells to the CNS is not currently known. This Trojan-horse function is also observed and thought to be important in the early stages of infection, where gut-to-lymph-node spread is mediated by infected dendritic cells. [ 22 ] A second mechanism of reaching the brain tissue is intra-axonal transport. In this mechanism, Listeria travels along the nerves to the brain, resulting in encephalitis or transverse myelitis. [ 23 ] In rats, the dorsal root ganglia can be infected directly by Listeria , and the bacteria can move in both retrograde and anterograde directions through the nerve cells. [ 24 ] The specific mechanisms involved in brain disease are not yet known, but paracytophagy is thought to play some role. Bacteria have not been shown to infect neuronal cells directly in an efficient manner, and the previously described macrophage hand-off is thought to be necessary for this mode of spread. [ 21 ] [ 25 ] The process of paracytophagy is considered distinct from similar but unrelated processes such as phagocytosis and trogocytosis .
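The timings scattered through the description above can be consolidated into a single timeline of the Listeria spread cycle. The sketch below only restates durations given in the text (after Robbins et al., 1999); stages with no stated duration are marked as such.

```python
# Consolidated timeline of Listeria cell-to-cell spread, durations from the text above.

SPREAD_TIMELINE = [
    ("actin comet-tail motility through donor cytoplasm",       None),
    ("protrusion pushes 3-18 um into the target cell",          None),
    ("fitful movement at the protrusion tip",                   "7-15 min"),
    ("donor membrane pinches off; ATP-depleted pause",          "15-25 min"),
    ("target membrane pinches off",                             "30-150 s"),
    ("secondary vacuole acidifies; inner membrane broken down", "~5 min"),
    ("residual actin cloud persists around the bacterium",      "up to 30 min"),
]

for stage, duration in SPREAD_TIMELINE:
    print(f"{stage:55s} {duration or 'no stated duration'}")
```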
https://en.wikipedia.org/wiki/Paracytophagy
A paradigm shift is a fundamental change in the basic concepts and experimental practices of a scientific discipline . It is a concept in the philosophy of science that was introduced and brought into the common lexicon by the American physicist and philosopher Thomas Kuhn . Even though Kuhn restricted the use of the term to the natural sciences , the concept of a paradigm shift has also been used in numerous non-scientific contexts to describe a profound change in a fundamental model or perception of events. Kuhn presented his notion of a paradigm shift in his influential book The Structure of Scientific Revolutions (1962). Kuhn contrasts paradigm shifts, which characterize a Scientific Revolution , with the activity of normal science , which he describes as scientific work done within a prevailing framework or paradigm . Paradigm shifts arise when the dominant paradigm under which normal science operates is rendered incompatible with new phenomena, facilitating the adoption of a new theory or paradigm. [ 1 ] As one commentator summarizes: Kuhn acknowledges having used the term "paradigm" in two different meanings. In the first one, "paradigm" designates what the members of a certain scientific community have in common, that is to say, the whole of techniques, patents and values shared by the members of the community. In the second sense, the paradigm is a single element of a whole, say for instance Newton's Principia, which, acting as a common model or an example... stands for the explicit rules and thus defines a coherent tradition of investigation. Thus the question is for Kuhn to investigate by means of the paradigm what makes possible the constitution of what he calls "normal science". That is to say, the science which can decide if a certain problem will be considered scientific or not. Normal science does not mean at all a science guided by a coherent system of rules, on the contrary, the rules can be derived from the paradigms, but the paradigms can guide the investigation also in the absence of rules. This is precisely the second meaning of the term "paradigm", which Kuhn considered the most new and profound, though it is in truth the oldest. [ 2 ] The nature of scientific revolutions has been studied by modern philosophy since Immanuel Kant used the phrase in the preface to the second edition of his Critique of Pure Reason (1787). Kant used the phrase "revolution of the way of thinking" ( Revolution der Denkart ) to refer to Greek mathematics and Newtonian physics . In the 20th century, new developments in the basic concepts of mathematics , physics , and biology revitalized interest in the question among scholars. In his 1962 book The Structure of Scientific Revolutions , Kuhn explains that the development of paradigm shifts in science proceeds in four stages. A common misinterpretation of paradigms is the belief that the discovery of paradigm shifts and the dynamic nature of science (with its many opportunities for subjective judgments by scientists) are a case for relativism : [ 10 ] the view that all kinds of belief systems are equal. Kuhn vehemently denies this interpretation [ 11 ] and states that when a scientific paradigm is replaced by a new one, albeit through a complex social process, the new one is always better , not just different.
These claims of relativism are, however, tied to another claim that Kuhn does at least somewhat endorse: that the language and theories of different paradigms cannot be translated into one another or rationally evaluated against one another—that they are incommensurable . This gave rise to much talk of different peoples and cultures having radically different worldviews or conceptual schemes—so different that, whether or not one was better, they could not be understood by one another. Donald Davidson famously argued against this idea of conceptual relativism, claiming that the notion that any languages or theories could be incommensurable with one another was itself incoherent. If this is correct, Kuhn's claims must be taken in a weaker sense than they often are. [ 12 ] Furthermore, the hold of the Kuhnian analysis on social science has long been tenuous, given the wide application of multi-paradigmatic approaches in order to understand complex human behaviour. [ 13 ] Paradigm shifts tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, physics seemed to be a discipline filling in the last few details of a largely worked-out system. In The Structure of Scientific Revolutions , Kuhn wrote, "Successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science" (p. 12). Kuhn's idea was itself revolutionary in its time, as it caused a major change in the way that academics talk about science. Thus, it could be argued that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognise such a paradigm shift. In the social sciences, people can still use earlier ideas to discuss the history of science. Philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it. [ 14 ] A number of "classical cases" of Kuhnian paradigm shifts in science have been identified. In Kuhn's view, the existence of a single reigning paradigm is characteristic of the natural sciences, while philosophy and much of social science were characterized by a "tradition of claims, counterclaims, and debates over fundamentals." [ 26 ] Others have applied Kuhn's concept of paradigm shift to the social sciences. More recently, paradigm shifts have also become recognisable in the applied sciences. The term "paradigm shift" has found uses in other contexts as well, representing the notion of a major change in a certain thought pattern—a radical change in personal beliefs, complex systems or organizations, replacing the former way of thinking or organizing with a radically different way of thinking or organizing. In a 2015 retrospective on Kuhn, [ 39 ] the philosopher Martin Cohen describes the notion of the paradigm shift as a kind of intellectual virus – spreading from hard science to social science and on to the arts and even everyday political rhetoric today. Cohen claims that Kuhn had only a very hazy idea of what it might mean and, in line with the Austrian philosopher of science Paul Feyerabend , accuses Kuhn of retreating from the more radical implications of his theory: that scientific facts are never really more than opinions whose popularity is transitory and far from conclusive.
Cohen says scientific knowledge is less certain than it is usually portrayed, and that science and knowledge generally are not the 'very sensible and reassuringly solid sort of affair' that Kuhn describes, in which progress involves periodic paradigm shifts in which many of the old certainties are abandoned in order to open up new approaches to understanding that scientists would never have considered valid before. He argues that information cascades can distort rational, scientific debate. He has focused on health issues, including the example of highly mediatised ' pandemic ' alarms , and why they have eventually turned out to be little more than scares. [ 40 ]
https://en.wikipedia.org/wiki/Paradigm_shift
Paradise Lost is an epic poem in blank verse by the English poet John Milton (1608–1674). The poem concerns the biblical story of the fall of man : the temptation of Adam and Eve by the fallen angel Satan and their expulsion from the Garden of Eden . The first version, published in 1667, consists of ten books with over ten thousand lines of verse . A second edition followed in 1674, arranged into twelve books (in the manner of Virgil 's Aeneid ) with minor revisions throughout. [ 1 ] [ 2 ] It is considered to be Milton's masterpiece , and it helped solidify his reputation as one of the greatest English poets of all time. [ 3 ] At the heart of Paradise Lost are the themes of free will and the moral consequences of disobedience. Milton seeks to "justify the ways of God to men," addressing questions of predestination , human agency , and the nature of good and evil . The poem begins in medias res , with Satan and his fallen angels cast into Hell after their failed rebellion against God . Milton's Satan, portrayed with both grandeur and tragic ambition, is one of the most complex and debated characters in literary history, particularly for his perceived heroism by some readers. The poem's portrayal of Adam and Eve emphasizes their humanity, exploring their innocence before the Fall of Man and their subsequent awareness of sin. Through their story, Milton reflects on the complexities of human relationships , the tension between individual freedom and obedience to divine law, and the possibility of redemption . Despite their transgression, the poem ends on a note of hope, as Adam and Eve leave Paradise with the promise of salvation through Christ . Milton's epic has been praised for its linguistic richness, theological depth, and philosophical ambition. However, it has also sparked controversy, particularly for its portrayal of Satan, whom some readers interpret as a heroic or sympathetic figure. Paradise Lost continues to inspire scholars, writers, and artists, remaining a cornerstone of literary and theological discourse. The poem follows the epic tradition of starting in medias res ( lit. ' in the midst of things ' ), the background story being recounted later. Milton's story has two narrative arcs , one about Satan ( Lucifer ) and the other about Adam and Eve . It begins after Satan and the other fallen angels have been defeated and banished to Hell , or, as it is also called in the poem, Tartarus . In Pandæmonium , the capital city of Hell, Satan employs his rhetorical skill to organise his followers; he is aided by Mammon and Beelzebub ; Belial , Chemosh , and Moloch are also present. At the end of the debate, Satan volunteers to corrupt the newly created Earth and God's new and most favoured creation, Mankind. He braves the dangers of the Abyss alone, in a manner reminiscent of Odysseus or Aeneas . After an arduous traversal of the Chaos outside Hell, he enters God's new material World, and later the Garden of Eden . At several points in the poem, an Angelic War over Heaven is recounted from different perspectives. Satan's rebellion follows the epic convention of large-scale warfare. The battles between the faithful angels and Satan's forces take place over three days. At the final battle, the Son of God single-handedly defeats the entire legion of angelic rebels and banishes them from Heaven. Following this purge, God creates the World , culminating in his creation of Adam and Eve. 
While God gave Adam and Eve total freedom and power to rule over all creation, he gave them one explicit command: not to eat from the tree of the knowledge of good and evil on penalty of death. It is less often related that God was afraid that they would eat the fruit of the tree of life , and live forever. Adam and Eve are presented as having a romantic and sexual relationship while still being without sin . They have passions and distinct personalities. Satan, disguised in the form of a serpent, successfully tempts Eve to eat from the Tree by preying on her vanity and tricking her with rhetoric . Adam, learning that Eve has sinned, knowingly commits the same sin. He declares to Eve that since she was made from his flesh, they are bound to one another – if she dies, he must also die. In this manner, Milton portrays Adam as a heroic figure, but also as a greater sinner than Eve, as he is aware that what he is doing is wrong. After eating the fruit, Adam and Eve experience lust for the first time, which renders their next sexual encounter with one another unpleasant. At first, Adam is convinced that Eve was right in thinking that eating the fruit would be beneficial. However, they soon fall asleep and have terrible nightmares, and after they awake, they experience guilt and shame for the first time. Realising that they have committed a terrible act against God, they engage in mutual recrimination. Meanwhile, Satan returns triumphantly to Hell, amid the praise of his fellow fallen angels. He tells them about how their scheme worked and Mankind has fallen, giving them complete dominion over Paradise. As he finishes his speech, however, the fallen angels around him become hideous snakes, and soon enough, Satan himself turns into a snake, deprived of limbs and unable to talk. Thus, they share the same punishment, as they shared the same guilt. Eve appeals to Adam for reconciliation of their actions. Her encouragement enables them to approach God, and plead for forgiveness. In a vision shown to him by the Archangel Michael , Adam witnesses everything that will happen to Mankind until the Great Flood . Adam is very upset by this vision of the future, so Michael also tells him about Mankind's potential redemption from original sin through Jesus Christ (whom Michael calls "King Messiah "). Adam and Eve are cast out of Eden, and Michael says that Adam may find "a paradise within thee, happier far". Adam and Eve now have a more distant relationship with God, who is omnipresent but invisible (unlike the tangible Father in the Garden of Eden). It is uncertain when Milton composed Paradise Lost . [ 4 ] John Aubrey (1626–1697), Milton's contemporary and biographer, says that it was written between 1658 and 1663. [ 5 ] However, parts of the poem had likely been in development since Milton was young. [ 5 ] Having gone blind in 1652, Milton wrote Paradise Lost entirely through dictation with the help of amanuenses and friends. He was often ill, suffering from gout , and suffering emotionally after the early death of his second wife, Katherine Woodcock, in 1658, and their infant daughter. [ 6 ] The image of Milton dictating the poem to his daughters became a popular subject for paintings, especially in the Romantic period. [ 7 ] The Milton scholar John Leonard also notes that Milton "did not at first plan to write a biblical epic". 
[ 5 ] Since epics were typically written about heroic kings and queens (and with pagan gods), Milton originally envisioned his epic to be based on a legendary Saxon or British king, like the legend of King Arthur . [ 8 ] [ 9 ] Leonard speculates that the English Civil War interrupted Milton's earliest attempts to start his "epic [poem] that would encompass all space and time". [ 5 ] In the 1667 version of Paradise Lost , the poem was divided into ten books. However, in the 1674 edition, the text was reorganized into twelve books. [ 10 ] In later printings, "Arguments" (brief summaries) were inserted at the beginning of each book. [ 11 ] Milton's previous work had been printed by Matthew Simmons, who was favoured by radical writers. However, Simmons died in 1654, and the business was then run by Mary Simmons . Milton had not published work with the Simmons printing business for twenty years. Mary was increasingly relying on her son Samuel to help her manage the business, and the first book that Samuel Simmons registered for publication in his name was Paradise Lost in 1667. [ 12 ] Key to the ambitions of Paradise Lost as a poem is the creation of a new kind of epic , one suitable for English, Christian morality rather than polytheistic Greek or Roman antiquity. This intention is indicated from the very beginning of the poem, when Milton uses the classical epic poetic device of an invocation for poetic inspiration. Rather than invoking the classical muses , however, Milton addresses the Christian God as his "Heav'nly Muse" (1.1). Other classical epic conventions include an in medias res opening, a journey in the underworld, large-scale battles, and an elevated poetic style. In particular, the poem often uses Homeric similes . Milton repurposes these epic conventions to create a new biblical epic, promoting a different kind of hero. Classical epic heroes like Achilles , Odysseus , and Aeneas were presented in the Iliad , Odyssey , and Aeneid as heroes for their military strength and guile, which might go hand in hand with wrath, pride, or lust. Milton attributes these traits instead to Satan, and depicts the Son as heroic for his love, mercy, humility, and self-sacrifice. The poem itself therefore presents the value system of classical heroism as one which has been superseded by Christian virtue. [ 4 ] The poem is written in blank verse , meaning the lines are metrically regular iambic pentameter but do not rhyme . Milton used the flexibility of blank verse to support a high level of syntactic complexity. Although Milton was not the first to use blank verse, his use of it was very influential, and he became known for the style. Blank verse was little used in the non-dramatic poetry of the 17th century until Paradise Lost . Milton also wrote Paradise Regained (1671) and parts of Samson Agonistes (1671) in blank verse. Miltonic blank verse became the standard for those attempting to write English epics for centuries following the publication of Paradise Lost and his later poetry. [ 13 ] When Miltonic verse became popular, Samuel Johnson mocked Milton for inspiring bad imitators in blank verse. [ 14 ] Alexander Pope 's final, incomplete work was intended to be written in the form, [ 15 ] and John Keats , who complained that he relied too heavily on Milton, [ 16 ] adopted various aspects of his poetry. Milton used a number of acrostics in the poem.
In Book 9, a verse describing the serpent which tempted Eve to eat the forbidden fruit in the Garden of Eden spells out "SATAN" (9.510), while elsewhere in the same book, Milton spells out "FFAALL" and "FALL" (9.333). Respectively, these probably represent the double fall of humanity embodied in Adam and Eve, as well as Satan's fall from Heaven. [ 17 ] Satan , formerly called Lucifer , is the first major character introduced in the poem. He is a tragic figure who famously declares: "Better to reign in Hell than serve in Heaven" (1.263). Following his vain rebellion against God he is cast out from Heaven and condemned to Hell. The rebellion stems from Satan's pride and envy (5.660ff.). Opinions on the character are often sharply divided. Milton presents Satan as the origin of all evil, but some readers interpret Milton's Satan as a nuanced or sympathetic character. Romanticist critics in particular, among them William Blake , Lord Byron , Percy Bysshe Shelley , and William Hazlitt , are known for interpreting Satan as a hero of Paradise Lost . This has led other critics, such as C. S. Lewis and Charles Williams , both of whom were devout Christians, to argue against reading Satan as a sympathetic, heroic figure. [ 18 ] [ 19 ] Despite Blake thinking that Milton intended for Satan to have a heroic role in the poem, Blake himself described Satan as the "state of error", and as beyond salvation. [ 20 ] John Carey argues that this conflict cannot be solved, because the character of Satan exists in more modes and greater depth than the other characters of Paradise Lost : in this way, Milton has created an ambivalent character, and any "pro-Satan" or "anti-Satan" argument is by its nature discarding half the evidence. Satan's ambivalence, Carey says, is "a precondition of the poem's success – a major factor in the attention it has aroused". [ 21 ] C. S. Lewis argues in his A Preface to Paradise Lost that it is important to remember what society was like when Milton wrote the poem. In particular, during that time period, there were certain "stock responses" to elements that Milton would have expected every reader to have. As examples, Lewis lists "love is sweet, death bitter, virtue lovely, and children or gardens delightful." According to Lewis, Milton would have expected readers to not view Satan as a hero at all. Lewis argues readers far in the future romanticizing Milton's intentions is not accurate. [ 22 ] Comparative religion scholar R. J. Zwi Werblowsky argues in his Lucifer and Prometheus that Milton's Satan is a disproportionately appealing character because of attributes he shares with the Greek Titan Prometheus . It has been called "most illuminating" for its historical and typological perspective on Milton's Satan as embodying both positive and negative values. [ 23 ] The book has also been significant in pointing out the essential ambiguity of Prometheus and his dual Christ -like/Satanic nature as developed in the Christian tradition. [ 24 ] Adam is the first human created by God. Adam requests a companion from God: Of fellowship I speak Such as I seek, fit to participate All rational delight, wherein the brute Cannot be human consort. (8.389–392) God approves his request then creates Eve. God appoints Adam and Eve to rule over all the creatures of the world and to reside in the Garden of Eden. Adam is more gregarious than Eve and yearns for her company. He is completely infatuated with her. Raphael advises him to "take heed lest Passion sway / Thy Judgment" (5.635–636). 
But Adam's great love for Eve contributes to his disobedience to God. Unlike the biblical Adam, before Milton's Adam leaves Paradise he is given a glimpse of the future of mankind by the Archangel Michael, which includes stories from the Old and New Testaments . Eve is the second human created by God. God takes one of Adam's ribs and shapes it into Eve. Whether Eve is actually inferior to Adam is a vexed point. She is often unwilling to be submissive. Eve may be the more intelligent of the two. When she first met Adam she turned away, more interested in herself. She had been looking at her reflection in a lake before being led invisibly to Adam. Recounting this to Adam she confesses that she found him less enticing than her reflection (4.477–480). Eve delivers an autobiography in Book 4. [ 25 ] In Book 9, Milton stages a domestic drama between Adam and Eve, which results in Eve convincing Adam to separate for a time to work in different parts of the Garden. This allows Satan to deceive her while she is alone. To tempt her to eat the forbidden fruit, Satan tells a story about how he ate it, using the language of Renaissance love poetry. He overcomes her reason; she eats the fruit. [ 25 ] The Son of God is the spirit who will become incarnate as Jesus Christ , though he is never named explicitly because he has not yet entered human form. Milton believed in a subordinationist doctrine of Christology that regarded the Son as secondary to the Father and as God's "great Vice-regent" (5.609). Milton's God in Paradise Lost refers to the Son as "My word, my wisdom, and effectual might" (3.170). The poem is not explicitly anti-trinitarian , but it is consistent with Milton's convictions. The Son is the ultimate hero of the epic and is infinitely powerful—he single-handedly defeats Satan and his followers and drives them into Hell. After their fall, the Son of God tells Adam and Eve about God's judgment. Before their fall the Father foretells their "Treason" (3.207) and that Man with his whole posteritie must dye, Dye hee or Justice must; unless for him Som other able, and as willing, pay The rigid satisfaction, death for death. (3.210–212) The Father then asks whether there "Dwels in all Heaven charitie so deare?" (3.216) and the Son volunteers himself. In the final book a vision of Salvation through the Son is revealed to Adam by Michael. The name Jesus of Nazareth, and the details of Jesus' story are not depicted in the poem, [ 26 ] though they are alluded to. Michael explains that "Joshua, whom the Gentiles Jesus call", prefigures the Son of God, "his name and office bearing" to "quell / The adversarie Serpent, and bring back [...] long wander[e]d man / Safe to eternal Paradise of rest". [ 27 ] God the Father is the creator of Heaven, Hell, the world, of everyone and everything there is, through the agency of His Son. Milton presents God as all-powerful and all-knowing, as an infinitely great being who cannot be overthrown by even the great army of angels Satan incites against him. Milton portrays God as often conversing about his plans and his motives for his actions with the Son of God. The poem shows God creating the world in the way Milton believed it was done, that is, God created Heaven, Earth, Hell, and all the creatures that inhabit these separate planes from part of Himself, not out of nothing. [ 28 ] Thus, according to Milton, the ultimate authority of God over all things that happen derives from his being the "author" of all creation. 
Satan tries to justify his rebellion by denying this aspect of God and claiming self-creation, but he admits to himself that the truth is otherwise, and that God "deserved no such return / From me, whom He created what I was". [ 29 ] [ 30 ] Raphael is an archangel who is sent by God to Eden in order to strengthen Adam and Eve against Satan. He tells a heroic tale about the War in Heaven that takes up most of Book 6 of Paradise Lost. Ultimately, the story told by Raphael, in which Satan is portrayed as bold and decisive, does not prepare Adam and Eve to counter Satan's subtle temptations, and may even have caused the Fall in the first place. [ 31 ] Michael is an archangel who is preeminent in military prowess. He leads in battle and uses a sword which was "giv'n him temperd so, that neither keen / Nor solid might resist that edge" (6.322–323). God sends Michael to Eden, charging him: from the Paradise of God Without remorse drive out the sinful Pair From hallowd ground th' unholie, and denounce To them and to thir Progenie from thence Perpetual banishment. [...] If patiently thy bidding they obey, Dismiss them not disconsolate; reveale To Adam what shall come in future dayes, As I shall thee enlighten, intermix My Cov'nant in the womans seed renewd; So send them forth, though sorrowing, yet in peace. (11.103–117) He is also charged with establishing a guard for Paradise. When Adam sees him coming, he describes him to Eve as not terrible, That I should fear, nor sociably mild, As Raphael, that I should much confide, But solemn and sublime, whom not to offend, With reverence I must meet, and thou retire. (11.233–237) Milton first presented Adam and Eve in Book IV with impartiality. The relationship between Adam and Eve is one of "mutual dependence, not a relation of domination or hierarchy". While the author placed Adam above Eve in his intellectual knowledge and, in turn, his relation to God, he granted Eve the benefit of knowledge through experience. Hermine Van Nuis clarifies that, although the roles specified for male and female are stringent, Adam and Eve unreservedly accept their designated roles. [ 32 ] Rather than viewing these roles as forced upon them, each uses their assignment as an asset in their relationship with each other. These distinctions can be interpreted as Milton's view on the importance of mutuality between husband and wife. When examining the relationship between Adam and Eve, some critics apply either an Adam-centered or Eve-centered view of hierarchy and importance to God. David Mikics argues, by contrast, that these positions "overstate the independence of the characters' stances, and therefore miss the way in which Adam and Eve are entwined with each other". [ 33 ] Milton's narrative depicts a relationship where the husband and wife (here, Adam and Eve) depend on each other and, through each other's differences, thrive. [ 33 ] Still, there are several instances where Adam communicates directly with God while Eve must go through Adam to God; thus, some have described Adam as her guide. [ 34 ] [ page needed ] Although Milton does not directly mention divorce, critics posit theories on Milton's view of divorce based upon their inferences from the poem and from his tracts on divorce written earlier in his life. Other works by Milton suggest he viewed marriage as an entity separate from the church. Discussing Paradise Lost, Biberman entertains the idea that "marriage is a contract made by both the man and the woman". 
[ 35 ] These ideas imply Milton may have thought that both man and woman should have equal access to marriage and to divorce. Milton's 17th-century contemporaries by and large criticised his ideas and considered him a radical, mostly because of his republican political views and heterodox theological opinions. One of Milton's most controversial arguments centred on his concept of what is idolatrous, a subject which is deeply embedded in Paradise Lost. Milton's first criticism of idolatry focused on the construction of temples and other buildings to serve as places of worship. In Book XI of Paradise Lost, Adam tries to atone for his sins by offering to build altars to worship God. In response, the angel Michael explains that Adam does not need to build physical objects to experience the presence of God. [ 36 ] Joseph Lyle points to this example, explaining: "When Milton objects to architecture, it is not a quality inherent in buildings themselves he finds offensive, but rather their tendency to act as convenient loci to which idolatry, over time, will inevitably adhere." [ 37 ] Even if the idea is pure in nature, Milton thought it would unavoidably lead to idolatry simply because of the nature of humans. That is, instead of directing their thoughts towards God, humans will turn to erected objects and falsely invest their faith there. While Adam attempts to build an altar to God, critics note Eve is similarly guilty of idolatry, but in a different manner. Harding believes Eve's narcissism and obsession with herself constitute idolatry. [ 38 ] Specifically, Harding claims that "under the serpent's influence, Eve's idolatry and self-deification foreshadow the errors into which her 'Sons' will stray". [ 38 ] Much like Adam, Eve falsely places her faith in herself, the Tree of Knowledge, and to some extent the Serpent, none of which compares to the ideal nature of God. Milton made his views on idolatry more explicit with the creation of Pandæmonium and his allusion to Solomon's temple. In the beginning of Paradise Lost and throughout the poem, there are several references to the rise and eventual fall of Solomon's temple. Critics elucidate that "Solomon's temple provides an explicit demonstration of how an artefact moves from its genesis in devotional practice to an idolatrous end." [ 39 ] This example, out of the many presented, distinctly conveys Milton's views on the dangers of idolatry. Even if one builds a structure in the name of God, the best of intentions can become immoral in idolatry. The majority of the similarities between Pandæmonium and famous real buildings revolve around structural likeness, but as Lyle explains, they play a greater role. By linking Saint Peter's Basilica and the Pantheon to Pandæmonium, an ideally false structure, the two famous buildings take on a false meaning. [ 40 ] This comparison best represents Milton's Protestant views, as it rejects both the purely Catholic perspective and the Pagan perspective. In addition to rejecting Catholicism, Milton revolted against the idea of a monarch ruling by divine right. He saw the practice as idolatrous. Barbara Lewalski concludes that the theme of idolatry in Paradise Lost "is an exaggerated version of the idolatry Milton had long associated with the Stuart ideology of divine kingship". [ 41 ] In the opinion of Milton, any object, human or non-human, that receives the special attention befitting God is considered idolatrous. 
Although Satan's army inevitably loses the war against God, Satan achieves a position of power and begins his reign in Hell with his band of loyal followers, composed of fallen angels, described as a "third of heaven". Echoing Milton's republican sentiments in favour of overthrowing the King of England for both better representation and parliamentary power, Satan argues that his shared rebellion with the fallen angels is an effort to "explain the hypocrisy of God", [ citation needed ] and that, in doing so, they will be treated with the respect and acknowledgement that they deserve. As Wayne Rebhorn argues, "Satan insists that he and his fellow revolutionaries held their places by right", a conviction that even leads him to claim that they were self-created and self-sustained; thus Satan's position in the rebellion is much like that of his own real-world creator. [ 42 ] Milton scholar John Leonard interpreted the "impious war" between Heaven and Hell as civil war: [ 43 ] [ page needed ] Paradise Lost is, among other things, a poem about civil war. Satan raises "impious war in Heav'n" (i 43) by leading a third of the angels in revolt against God. The term "impious war" implies that civil war is impious. But Milton applauded the English people for having the courage to depose and execute King Charles I. In his poem, however, he takes the side of "Heav'n's awful Monarch" (iv 960). Critics have long wrestled with the question of why an antimonarchist and defender of regicide should have chosen a subject that obliged him to defend monarchical authority. The editors at the Poetry Foundation argue that Milton's criticism of the English monarchy was directed specifically at the Stuart monarchy and not at the monarchical system of government in general. [ 3 ] In a similar vein, C. S. Lewis argued that there was no contradiction in Milton's position in the poem, since "Milton believed that God was his 'natural superior' and that Charles Stuart was not." [ 43 ] [ page needed ] The critic William Empson claimed the poem was morally ambiguous, with Milton's complex characterization of Satan playing a large part in that claim. [ 43 ] [ page needed ] For context, the second volume of Empson's authorized biography was titled William Empson: Against the Christians; in it, his biographer describes "Empson's visceral loathing of Christianity." [ 44 ] Empson spent much of his career attacking Christianity, demonizing it as "wickedness" and claiming that Milton's God was "sickeningly bad." [ 45 ] For example, Empson portrays Milton's God as akin to a "Stalinist" tyrant "who enslaves His human creations to serve His own narcissism." From there, Empson offers ostensible praise that is really an attack, saying that "Milton deserves credit for making God wicked, since the God of Christianity is 'a wicked God'." John Leonard states that "Empson never denies that Satan's plan is wicked. What he does deny is that God is innocent of its wickedness: 'Milton steadily drives home that the inmost counsel of God was the Fortunate Fall of man; however wicked Satan's plan may be, it is God's plan too [since God in Paradise Lost is depicted as being both omniscient and omnipotent].'" [ 43 ] [ page needed ] Leonard notes that this interpretation was challenged by Dennis Danielson in his book Milton's Good God (1982). [ 43 ] [ page needed ] Alexandra Kapelos-Peters explains: "as Danielson logically asserts, foreknowledge is not commensurate with culpability. 
Although God knew that Adam and Eve would eat the forbidden fruit of knowledge, He neither commanded them to do so, nor influenced their decision." Moreover, God gives humans free will to choose to do good or evil, while a tyrant would do the very opposite and deny free will by controlling his subjects' actions like a puppet-master. She says Danielson and Milton "demonstrate one crucial point: the presence of sin in the world is attributable to human agency and free will. Danielson argues that free will is crucial, because without it humanity would have only been serving necessity, and not participating in a free love act with the divine." [ 46 ] She notes that in Paradise Lost, God says: "They trespass, Authors to themselves in all, Both what they judge and what they choose; for so I formd them free, and free they must remain." Kapelos-Peters adds: "Milton demonstrates that far from being a tyrannical lord, God and the Son function as a collaborative team that desire nothing but the return of man to his pre-fallen state. Furthermore, God is not even able to dominate in this aspect because human agency and free-will are not abandoned. Not only will the Son sacrifice himself pre-emptively in Book 3 for the not-yet-occurred Fall of Man, but Man himself will have a role in his own salvation. To successfully navigate atonement, humanity will have to admit and repent of their former disobedience." C. S. Lewis also rebutted the approach of critics like Empson, who superimpose their own agenda-driven interpretations onto the poem long after it was written. Lewis wrote: "The first qualification for judging any piece of workmanship from a corkscrew to a cathedral is to know what it is – what it was intended to do and how it is meant to be used." [ 47 ] Lewis said the poem was a genuine Christian morality tale. [ 43 ] [ page needed ] In his book A Preface to Paradise Lost, Lewis discusses the theological similarities between Paradise Lost and the theology of St. Augustine, and says that "The Fall is simply and solely Disobedience – doing what you have been told not to do: and it results from Pride – from being too big for your boots, forgetting your place, thinking that you are God." [ 19 ] The writer and critic Samuel Johnson wrote that Paradise Lost shows off Milton's "peculiar power to astonish" and that Milton "seems to have been well acquainted with his own genius, and to know what it was that Nature had bestowed upon him more bountifully than upon others: the power of displaying the vast, illuminating the splendid, enforcing the awful, darkening the gloomy, and aggravating the dreadful". [ 48 ] William Blake famously wrote in The Marriage of Heaven and Hell: "The reason Milton wrote in fetters when he wrote of Angels & God, and at liberty when of Devils & Hell, is because he was a true Poet and of the Devil's party without knowing it." [ 49 ] This quotation succinctly represents the way in which some 18th- and 19th-century English Romantic poets viewed Milton. Tobias Gregory wrote that Milton was "the most theologically learned among early modern epic poets. He was, moreover, a theologian of great independence of mind, and one who developed his talents within a society where the problem of divine justice was debated with particular intensity." [ 50 ] Gregory says that Milton establishes divine action and his divine characters in a manner superior to that of other Renaissance epic poets, including Ludovico Ariosto and Torquato Tasso. 
[ 51 ] In Paradise Lost Milton also ignores the traditional epic format of a plot based on a mortal conflict between opposing armies, with deities watching over and occasionally interfering with the action. Instead, both divinity and humanity are involved in a conflict that, while momentarily ending in tragedy, offers a future salvation. [ 51 ] In both Paradise Lost and Paradise Regained, Milton incorporates aspects of Lucan's epic model, the epic from the view of the defeated. Although he does not accept the model completely within Paradise Regained, he incorporates the "anti-Virgilian, anti-imperial epic tradition of Lucan". [ 52 ] Milton goes further than Lucan in this belief, and "Paradise Lost and Paradise Regained carry further, too, the movement toward and valorization of romance that Lucan's tradition had begun, to the point where Milton's poems effectively create their own new genre". [ 53 ] The Catholic Church reacted by banning the poem and placing it on the Index Librorum Prohibitorum. [ a ] The first illustrations to accompany the text of Paradise Lost were added to the fourth edition of 1688, with one engraving prefacing each book; up to eight of the twelve were by Sir John Baptist Medina, one was by Bernard Lens II, and perhaps up to four (including those for Books I and XII, perhaps the most memorable) were by another hand. [ 54 ] The engraver was Michael Burghers (given as 'Burgesse' in some sources [ 55 ] ). By 1730, the same images had been re-engraved on a smaller scale by Paul Fourdrinier. Some of the most notable illustrators of Paradise Lost include William Blake, Gustave Doré, and Henry Fuseli; the epic's illustrators also include John Martin, Edward Francis Burney, Richard Westall, Francis Hayman, and many others. Outside of book illustrations, the epic has also inspired other visual works by well-known painters like Salvador Dalí, who executed a set of ten colour engravings in 1974. [ 56 ] Milton's achievement in writing Paradise Lost while blind (he dictated to helpers) inspired loosely biographical paintings by both Fuseli [ 57 ] and Eugène Delacroix. [ 58 ]
https://en.wikipedia.org/wiki/Paradise_Lost
ParadisEO is an object-oriented framework dedicated to the flexible design of metaheuristics. It uses EO, a template-based, ANSI-C++ compliant computation library. [ 1 ] ParadisEO is portable across both Windows and Unix-like platforms (Unix, Linux, Mac OS X, etc.). ParadisEO is distributed under the CeCILL license and can be used in several environments. This software article is a stub. You can help Wikipedia by expanding it.
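To give a flavour of what a metaheuristic is, the sketch below implements simple hill climbing on the toy "OneMax" problem. It is a generic illustration, in Python, of the kind of algorithm that frameworks like ParadisEO let you compose from reusable components (a solution representation, a move operator, an evaluation function, a stopping rule); it does not use ParadisEO's actual C++ API, and all names in it are hypothetical.

```python
import random

# Generic metaheuristic sketch: simple hill climbing on the toy "OneMax"
# problem (maximize the number of 1-bits in a bit string). Illustrative
# Python only; not ParadisEO's C++ API.

def evaluate(bits):
    return sum(bits)  # objective: count of 1-bits

def neighbor(bits):
    flipped = bits[:]                     # move operator: flip one random bit
    i = random.randrange(len(flipped))
    flipped[i] ^= 1
    return flipped

def hill_climb(n_bits=32, max_steps=10000):
    current = [random.randint(0, 1) for _ in range(n_bits)]
    for _ in range(max_steps):
        candidate = neighbor(current)
        if evaluate(candidate) >= evaluate(current):
            current = candidate           # accept non-worsening moves
    return current

best = hill_climb()
print(evaluate(best), "of 32 bits set")
```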
https://en.wikipedia.org/wiki/Paradiseo
Paradox Engineering SA is a Swiss technology company that designs and markets solutions and services enabling smart cities and Industry 4.0 applications. The company's mission is to offer technologies to unlock the value of data. Its solutions are ready for the Internet of things, and enable cities and companies to collect, transport, store and deliver any kind of data residing in industrial plants or urban objects, transforming information into actionable intelligence to feed business decisions. The technologies provided by the company are based on the IPv6/6LoWPAN open standard protocols, and are fully interoperable with other systems or applications. It was established in 2005, with headquarters in Novazzano, Switzerland. In July 2015 the Japanese group Minebea Co. Ltd., the world's leading comprehensive manufacturer of high-precision components, acquired the full capital and assets of Paradox Engineering SA. [ 1 ] The acquisition was aimed at accelerating the success of the Group in the Internet of Things and smart markets. Paradox Engineering was founded in 2005 in Novazzano, Ticino Canton, Switzerland. The company began as a telecommunication company, serving the niche market of industrial data transportation. It at first developed a one-stop-shop business model, providing virtual networks to connect customers' industrial operation sites and enable remote and condition monitoring programs. In 2010 it began to design and engineer pioneering technologies to implement interoperable and highly scalable IPv6/6LoWPAN network infrastructures for industrial or urban applications. In 2011 it entered the Smart Metering, Smart Grid and Smart City markets with the introduction of a modular solution for urban architectures (PE.AMI). [ 2 ] Thanks to this product, the company was recognized with the Living Labs Global Award 2012 [ 3 ] for presenting a wireless sensor network solution to meet the needs of the San Francisco Public Utilities Commission and start a pilot project supporting the management of streetlights, EV charging stations, electric meters and traffic signals in the city. [ 4 ] In May 2013 the company launched PE.STONE, an OEM solution for developers and companies willing to build their own Internet of Things and smart applications. In November 2013, the company announced the launch of a new vertical solution for parking management, supporting utilities and municipalities in reducing traffic congestion and offering improved mobility services to citizens. [ 5 ] During 2013 the company unveiled two successful Smart City projects in Chiasso [ 6 ] and Bellinzona, [ 7 ] Switzerland. On December 2, 2013 Minebea Co. Ltd joined the company as a shareholder to strengthen its presence in the Smart City/Smart Grid, smart building and industrial sensor network markets. [ 8 ] The company continued to grow in the Smart City market. In May 2015 Tinynode SA entered Paradox Engineering's ecosystem. [ 9 ] As Tinynode specialized in wireless vehicle detection systems, the two companies aimed to position themselves as a unique enabler of any kind of smart environment through compelling solutions for the Internet of Things. Since July 2015 Paradox Engineering has been part of the MinebeaMitsumi Group. [ 10 ] Leveraging Paradox Engineering as its IoT Excellence Center, the Group is accelerating the development of solutions for a fully networked IoT society, including applications for the automotive, medical, consumer, industrial and Smart Cities markets. In 2020 the company launched a Smart Waste application. 
Latest developments relate to blockchain and cybersecurity services for Smart Cities.
https://en.wikipedia.org/wiki/Paradox_Engineering
In aquatic biology, the paradox of the plankton describes the situation in which a limited range of resources supports an unexpectedly wide range of plankton species, apparently flouting the competitive exclusion principle, which holds that when two species compete for the same resource, one will be driven to extinction. The paradox of the plankton results from the clash between the observed diversity of plankton and the competitive exclusion principle, [ 1 ] also known as Gause's law, [ 2 ] which states that, when two species compete for the same resource, ultimately only one will persist and the other will be driven to extinction. Coexistence between two such species is impossible because the dominant one will inevitably deplete the shared resources, thus decimating the inferior population. [ 3 ] Phytoplankton life is diverse at all phylogenetic levels despite the limited range of resources (e.g. light, nitrate, phosphate, silicic acid, iron) for which the species compete amongst themselves. The paradox of the plankton was originally described in 1961 by G. Evelyn Hutchinson, who proposed that the paradox could be resolved by factors such as vertical gradients of light or turbulence, symbiosis or commensalism, differential predation, or constantly changing environmental conditions. [ 4 ] Later studies found that the paradox can be resolved by factors such as: zooplankton grazing pressure; [ 5 ] chaotic fluid motion; [ 6 ] size-selective grazing; [ 7 ] spatio-temporal heterogeneity; [ 8 ] bacterial mediation; [ 9 ] or environmental fluctuations. [ 10 ] In general, researchers suggest that ecological and environmental factors continually interact such that the planktonic habitat never reaches an equilibrium for which a single species is favoured. [ 11 ] While it was long assumed that turbulence disrupts plankton patches at spatial scales less than a few metres, researchers using small-scale analysis of plankton distribution found that plankton distributions exhibited patches of aggregation (on the order of 10 cm) with lifetimes long enough (more than 10 minutes) to enable plankton grazing, competition, and infection. [ 12 ] One potential resolution of the paradox is the control of plankton populations by marine lytic viruses. Marine viruses play an important role in bacterial and plankton ecology. They are a significant component of biogeochemical cycling [ 13 ] and horizontal gene transfer in both bacterial and plankton communities. Viruses are the most abundant biological entities in the ocean, and have the capacity to deplete host populations very rapidly. Marine viruses infect specific host species, and therefore an abundance of a virus can quickly and effectively alter the structure of the phytoplankton and bacterial communities. Via the lytic cycle, a virus encounters a host and reproduces until the cell bursts, releasing viruses. Viruses can also enter a lysogenic cycle, in which the virus integrates its DNA into the host genome. When a phytoplankton species enters a bloom period, cell concentration increases and many viral targets suddenly become available. [ 14 ] One explanation of the paradox of the plankton is the boom-and-bust dynamic, also called the "kill the winner" hypothesis. In a phytoplankton bloom, an individual species multiplies rapidly in ideal conditions, which increases its cell concentration in an area, outcompeting other phytoplankton. 
This "boom" in host cells creates an opportunity for rapid infection by viruses, leading to a "bust" in which the phytoplankton population rapidly diminishes. This creates a large gap in the local phytoplankton ecology and allows other species to fill in and continue growing. Such population control by viruses creates temporal and spatial diversity in phytoplankton communities. Long term control results, as the virus prevents the formerly dominant species from booming during future bloom events. [ 15 ]
https://en.wikipedia.org/wiki/Paradox_of_the_plankton
The paradox of tolerance is a philosophical concept suggesting that if a society extends tolerance to those who are intolerant, it risks enabling the eventual dominance of intolerance, thereby undermining the very principle of tolerance. This paradox was articulated by philosopher Karl Popper in The Open Society and Its Enemies (1945), [ 1 ] where he argued that a truly tolerant society must retain the right to deny tolerance to those who promote intolerance. Popper posited that if intolerant ideologies are allowed unchecked expression, they could exploit open society values to erode or destroy tolerance itself through authoritarian or oppressive practices. The paradox has been widely discussed within ethics and political philosophy, with varying views on how tolerant societies should respond to intolerant forces. John Rawls, for instance, argued that a just society should generally tolerate the intolerant, reserving actions of self-preservation only for cases in which intolerance poses a concrete threat to liberty and stability. Other thinkers, such as Michael Walzer, have examined how minority groups, which may hold intolerant beliefs, are nevertheless beneficiaries of tolerance within pluralistic societies. This paradox raises complex issues about the limits of freedom, especially concerning free speech and the protection of liberal democratic values. It has implications for contemporary debates on managing hate speech, political extremism, and social policies aimed at fostering inclusivity without compromising the integrity of democratic tolerance. One of the earliest formulations of the "paradox of tolerance" is given in the notes of Karl Popper's The Open Society and Its Enemies in 1945. Popper raises the paradox in the chapter notes regarding "The Principle of Leadership", connecting the paradox to his refutation of Plato's defense of "benevolent despotism". In the main text, Popper addresses Plato's similar "paradox of freedom": Plato points out the contradiction inherent in unchecked freedom, as it implies the freedom to act to limit the freedom of others. Plato argues that true democracy inevitably leads to tyranny, and suggests that the rule of an enlightened "philosopher-king" (cf. Noocracy) is preferable to the tyranny of majority rule. [ 2 ] Popper rejects Plato's argument, in part because he argues that there are no readily available "enlightened philosopher-kings" prepared to adopt this role, and advocates for the institutions of liberal democracies as an alternative. In the corresponding chapter notes, Popper defines the paradox of tolerance and makes a similar argument. Of both tolerance and freedom, Popper argues for the necessity of limiting unchecked freedom and intolerance in order to prevent despotic rule rather than to embrace it. [ 1 ] There are earlier examples of the discourse on tolerance and its limits. In 1801, Thomas Jefferson addressed the notion of a tolerant society in his first inaugural speech as President of the United States. Concerning those who might destabilize the United States and its unity, Jefferson stated: "let them stand undisturbed as monuments of the safety with which error of opinion may be tolerated where reason is left free to combat it." [ 3 ] Political theorist Gaetano Mosca is also known to have remarked, long before Popper: "[i]f tolerance is taken to the point where it tolerates the destruction of those same principles that made tolerance possible in the first place, it becomes intolerable." 
[ citation needed ] Philosopher John Rawls concludes differently in his 1971 A Theory of Justice, stating that a just society must tolerate the intolerant, for otherwise, the society would then itself be intolerant, and thus unjust. However, Rawls qualifies this assertion, conceding that under extraordinary circumstances, when constitutional safeguards do not suffice to ensure the security of the tolerant and the institutions of liberty, a tolerant society has a reasonable right of self-preservation and may act against intolerance that would limit the liberty of others under a just constitution. Rawls emphasizes that the liberties of the intolerant should be constrained only insofar as they demonstrably affect the liberties of others: "While an intolerant sect does not itself have title to complain of intolerance, its freedom should be restricted only when the tolerant sincerely and with reason believe that their own security and that of the institutions of liberty are in danger." [ 4 ] [ 5 ] In On Toleration (1997), Michael Walzer asked, "Should we tolerate the intolerant?" He claims that most minority religious groups who are the beneficiaries of tolerance are themselves intolerant, at least in some respects. In a tolerant regime, such (intolerant) people may learn to tolerate, or at least to behave "as if they possessed this virtue". [ 6 ] Preston King describes tolerance as occurring when one objects to but voluntarily endures certain acts, ideas, organisations and identities. [ 7 ] This involves two components: an objection to the item in question, and a voluntary decision to endure it nonetheless. Deciding whether to tolerate an item involves a balancing of reasons, for example when we weigh the reasons for rejecting an idea we find problematic against the benefit of accepting it in the name of social harmony, and it is in this balancing of reasons that the paradox of tolerance arises. [ 8 ] Most formulations of tolerance assert that tolerance is a reciprocal act, and that the intolerant need not be tolerated. This necessitates drawing a limit between the tolerant and the intolerant in every implementation of tolerance, which suggests that any act of tolerance requires an act of intolerance. [ 9 ] Philosopher Rainer Forst resolves the contradiction in philosophical terms by outlining tolerance as a social norm and distinguishing between two notions of "intolerance": the denial of tolerance as a social norm, and the rejection of this denial. [ 8 ] Other solutions to the paradox of tolerance frame it in more practical terms, a solution favored by philosophers such as Karl Popper. Popper underlines the importance of rational argument, drawing attention to the fact that many intolerant philosophies reject rational argument and thus prevent calls for tolerance from being received on equal terms: [ 1 ] Less well known [than other paradoxes] is the paradox of tolerance: Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them. In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be most unwise. 
But we should claim the right to suppress them if necessary even by force; for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols. We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant. We should claim that any movement preaching intolerance places itself outside the law and we should consider incitement to intolerance and persecution as criminal, in the same way as we should consider incitement to murder, or to kidnapping, or to the revival of the slave trade, as criminal. Popper also draws attention to the fact that intolerance is often asserted through the use of violence, a point reiterated by philosophers such as John Rawls. In A Theory of Justice, Rawls asserts that a society must tolerate the intolerant in order to be a just society, but qualifies this assertion by stating that exceptional circumstances may call for society to exercise its right to self-preservation against acts of intolerance that threaten the liberty and security of the tolerant. [ 4 ] Such formulations address the inherent moral contradiction that arises from the assumption that the moral virtue of tolerance is at odds with the toleration of moral wrongs, a contradiction which can be resolved by grounding toleration within limits defined by a higher moral order. [ 8 ] Another solution is to place tolerance in the context of social contract theory: to wit, tolerance should not be considered a virtue or moral principle, but rather an unspoken agreement within society to tolerate one another's differences as long as no harm to others arises from them. In this formulation, an intolerant person violates the contract and is therefore no longer protected by it against the rest of society. [ 10 ] Approaches in a defensive democracy which ban intolerant or extremist behavior are often ineffective against groups that maintain a legal façade and so do not meet the legal criteria for a ban. [ 11 ] The paradox of tolerance is meaningful in the discussion of what, if any, boundaries are to be set on freedom of speech. In The Boundaries of Liberty and Tolerance: The Struggle Against Kahanism in Israel (1994), Raphael Cohen-Almagor asserts that to afford freedom of speech to those who would use it to eliminate the very principle upon which that freedom relies is paradoxical. [ 12 ] Michel Rosenfeld, in the Harvard Law Review in 1987, stated: "it seems contradictory to extend freedom of speech to extremists who ... if successful, ruthlessly suppress the speech of those with whom they disagree." [ 13 ] Rosenfeld contrasts the approach to hate speech between Western European democracies and the United States, pointing out that among Western European nations, extremely intolerant or fringe political materials (e.g., Holocaust denial) are characterized as inherently socially disruptive and are subject to legal constraints on their circulation as such, [ 14 ] while US courts have ruled that such materials are protected by the principle of freedom of speech and press in the First Amendment to the US Constitution, and cannot be restricted except when incitement to violence or other illegal activities is made explicit. 
[ 15 ] Criticism of violent intolerance as a response to intolerant speech is characteristic of discourse ethics as developed by Jürgen Habermas [ 16 ] and Karl-Otto Apel. [ 17 ] A relationship between intolerance and homophily, a preference for interacting with those with similar traits, appears when a tolerant person's relationship with an intolerant member of an in-group is strained by the tolerant person's relationship with a member of an out-group that is the subject of this intolerance. An intolerant person would disapprove of this person's positive relationship with a member of the out-group. If this view is generally supported by the social norms of the in-group, a tolerant person risks being ostracized because of their toleration. If they succumb to social pressure, they may be rewarded for adopting an intolerant attitude. [ 18 ] This dilemma has been considered by Fernando Aguiar and Antonio Parravano in "Tolerating the Intolerant: Homophily, Intolerance, and Segregation in Social Balanced Networks" (2013), [ 18 ] modeling a community of individuals whose relationships are governed by a modified form of the Heider balance theory (a minimal sketch of the balance rule appears at the end of this article). [ 19 ] [ 20 ] In the same work in which Popper elucidates the paradox of tolerance, [ 1 ] he brings up two closely related concepts, the "paradox of democracy" and the "paradox of freedom". In the paradox of democracy, he points out the possibility that a democratic majority could vote for a tyrant to rule, thus ending democracy. In the "paradox of freedom", he instead points out that unlimited freedom would "make the bully free to enslave the meek", thus reducing freedom. [ 21 ] In their 2022 book, Paradox of Democracy, Zac Gershberg and Sean Illing argue that the accessibility of communications media potentiates the paradox of democracy. They write "the essential democratic freedom — freedom of expression — is both ingrained in and potentially harmful to democracy." They draw from historical examples such as isegoria (equal access to the civic discourse) in ancient Athens and the development of book publishing in Europe. [ 22 ]
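As a minimal sketch of the Heider balance rule that such network models build on (illustrative only, and not Aguiar and Parravano's actual model): a triad of signed relationships is balanced exactly when the product of its three signs is positive.

```python
def balanced(s_ab, s_bc, s_ac):
    """A signed triad (+1 friendly, -1 hostile) is balanced iff the
    product of its three edge signs is positive (Heider's rule)."""
    return s_ab * s_bc * s_ac > 0

# The tolerant person's dilemma from the text: a positive in-group tie,
# a positive tie to the out-group member, and the in-group member's
# hostility toward that outsider leave the triad unbalanced, so one of
# the three relationships is under pressure to change sign.
print(balanced(+1, +1, -1))  # False: the strained configuration
print(balanced(+1, -1, -1))  # True: dropping the out-group tie restores balance
```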
https://en.wikipedia.org/wiki/Paradox_of_tolerance
The paradox of voting, also called Downs' paradox, is that for a rational and egoistic voter (Homo economicus), the costs of voting will normally exceed the expected benefits. Because the chance of exercising the pivotal vote is minuscule compared to any realistic estimate of the private individual benefits of the different possible outcomes, the expected benefits of voting are less than the costs. Responses to the paradox have included the view that voters vote to express their preference for a candidate rather than to affect the outcome of the election, that voters exercise some degree of altruism, or that the paradox ignores the collateral benefits associated with voting besides the resulting electoral outcome. The issue was noted by Nicolas de Condorcet in 1793 when he stated, "In single-stage elections, where there are a great many voters, each voter's influence is very small. It is therefore possible that the citizens will not be sufficiently interested [to vote]" and "... we know that this interest [which voters have in an election] must decrease with each individual's [i.e. voter's] influence on the election and as the number of voters increases." [ 1 ] In 1821, Hegel made a similar observation in his Elements of the Philosophy of Right: "As for popular suffrage, it may be further remarked that especially in large states it leads inevitably to electoral indifference, since the casting of a single vote is of no significance where there is a multitude of electors." [ 2 ] [ 3 ] The mathematician Charles L. Dodgson, better known as Lewis Carroll, published the paper "A Method of Taking Votes on More than Two Issues" in 1876. [ 4 ] This problem in modern public choice theory was analysed by Anthony Downs in 1957. [ 5 ] In a rational voter model, the expected utility of voting can be described as U = pB − C, where B is the benefit of a pivotal vote, p is the probability of casting a pivotal vote, and C is the cost of voting. [ 6 ] The rational voter model's predictions of how turnout depends on the total number of voters, the competitiveness of elections, underdog status and the cost of voting were confirmed in a 2007 laboratory study, [ 7 ] which also found a stochastic element in the dependence of turnout on expected utility. [ 7 ] A 2020 study found that the anticipated competitiveness of elections, as gauged from polls, causally increased voter turnout. [ 8 ] In the 2007 laboratory study, bounded rationality with a quantal response equilibrium was found to fit the observations better than a Nash equilibrium; [ 7 ] the logit quantal response equilibrium predicted a 17% voter turnout for elections with a large number of voters, without additional altruistic or civic-duty terms. [ 7 ] The altruism theory of voting assumes that voters are rational but not fully egoistic. In this view voters have some degree of altruism toward voters of the same party. [ 9 ] The altruistic utility increases with the number of voters of the same party, which can explain the rationality of voting despite only a small chance of individually affecting the outcome. [ 10 ] Voter turnout was found to increase with civic duty. [ 6 ] Civic duty can be represented in the rational voter model as an additional benefit of voting independent of casting a pivotal vote. [ 6 ] Voting and engaging in political discourse may increase the voter's political knowledge and community awareness, both of which may contribute to a general sense of civic duty. 
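As a back-of-the-envelope illustration of this calculus (every number below is a hypothetical assumption chosen only to show the orders of magnitude, with D standing for the civic-duty benefit just described):

```python
# Downs' calculus U = p*B - C, plus an optional civic-duty term D.
# Every value here is a hypothetical assumption for illustration.

p = 1e-7       # probability of casting the pivotal vote in a large electorate
B = 10_000     # private benefit (say, in dollars) if the preferred side wins
C = 20         # cost of voting: time, travel, information gathering
D = 25         # civic-duty benefit, independent of being pivotal

U_egoistic = p * B - C
U_with_duty = p * B - C + D
print(f"expected utility, egoistic voter: {U_egoistic:+.3f}")   # about -20.0
print(f"with a civic-duty term:           {U_with_duty:+.3f}")  # about +5.0
```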
"I Voted" stickers and slogans such as "If you don’t vote, you can’t complain!" are connected to civic duty and citizenship models. [ 11 ] Geoffrey Brennan and Loren Lomasky suggest that voters derive "expressive" benefits from supporting particular candidates – analogous to cheering on a sports team – rather than voting in hopes of achieving the political outcomes they prefer. This implies that the rational behavior of voters includes the instrumental as opposed to only the intrinsic value they derive from their vote. [ 12 ] [ 13 ] The magnitudes of electoral wins and losses are very closely watched by politicians, their aides, pundits and voters, because they indicate the strength of support for candidates, and tend to be viewed as an inherently more accurate measure of such than mere opinion polls (which have to rely on imperfect sampling). [ citation needed ]
https://en.wikipedia.org/wiki/Paradox_of_voting
Paradoxes of the Infinite (German title: Paradoxien des Unendlichen) is a mathematical work by Bernard Bolzano on the theory of sets. It was published by a friend and student, František Přihonský, in 1851, three years after Bolzano's death. The work contained many interesting results in set theory. Bolzano expanded on the theme of Galileo's paradox, giving further examples of one-to-one correspondences between the elements of an infinite set and the elements of its proper subsets. In the work he also explained the term Menge, rendered in English as "set", which he had coined and used in several works since the 1830s. This article about a mathematical publication is a stub. You can help Wikipedia by expanding it. This mathematical logic-related article is a stub. You can help Wikipedia by expanding it.
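As a small illustration of the kind of correspondence Bolzano generalized, the sketch below pairs the natural numbers with the perfect squares (a proper subset of them) via n ↦ n², which is Galileo's original example:

```python
# Galileo's paradox in miniature: n -> n*n pairs every natural number with a
# perfect square, putting an infinite set in one-to-one correspondence with a
# proper subset of itself; this is the theme Bolzano generalized in this work.
naturals = range(10)                    # a finite prefix, for display only
pairs = [(n, n * n) for n in naturals]
print(pairs)                            # each n matched with its square
```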
https://en.wikipedia.org/wiki/Paradoxes_of_the_Infinite
Paraffin oxidation is a historical industrial process for the production of synthetic fatty acids. [ 1 ] The fatty acids are further processed to consumer products such as soaps and fats as well as to lubricating greases for technical applications. Coal slack wax, a saturated, high molecular weight hydrocarbon mixture and by-product of the Fischer–Tropsch process, was used as raw material. Side products were a wide range of carboxylic acids and oxidation products such as alcohols, aldehydes, esters, or ketones. The oxidation of paraffins was carried out in the liquid phase by molecular oxygen, e.g. by aerating with oxygen or atmospheric air, in the presence of catalysts such as permanganates, e.g. 0.1%–0.3% potassium permanganate, at temperatures in the range of about 100 to 120 °C and under atmospheric pressure. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] The process was commercially important from the mid-1930s on and was carried out on a large industrial scale until the first years after the Second World War. Paraffin oxidation enabled, for the first time, the large-scale production of synthetic butter from coal by chemical means, which was at that time seen as a sensation. [ 7 ] Because of the high availability of inexpensive natural fats and the competition from petroleum-based fatty alcohols, the process lost its importance in the early 1950s. The process consisted of three main steps: oxidation, reconditioning of the oxidation mixture to crude fatty acids, and eventually their separation by fractional distillation into fatty acid fractions. [ 8 ] The chemical industry processed the fatty acid fractions further into finished products such as soaps, detergents, plasticizers and synthetic fat. The paraffin oxidation was almost exclusively run in a discontinuous batch mode. Fractions were selected based on the intended purposes of each of the desired products. [ 9 ] The first explanation of the oxidation mechanism was given by the peroxide theory, developed by Alexei Nikolaevich Bach and Carl Engler, also known as the Engler–Bach theory. According to their theory, as the first step a secondary hydroperoxide is formed. The assumption that this hydroperoxide is then decomposed by a radical pathway was confirmed by later studies by Eric Rideal. The function of the metal catalyst is to increase the speed of both the formation and the decomposition of the hydroperoxide. This produces, among other things, an alkyl radical, which reacts with oxygen to form a peroxy radical. The peroxy radical then abstracts a hydrogen atom from another paraffin molecule, forming a new alkyl radical and a hydroperoxide. The reaction thus follows a radical chain scheme (sketched below): [ 11 ] first a hydroperoxide is formed, which in the main reaction degrades into water and a ketone; in a side reaction, secondary alcohols are formed.
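A sketch of that radical chain, reconstructed from the standard autoxidation mechanism the text describes (R–H denotes a paraffin molecule; the exact scheme in the cited source may differ):

```latex
% Radical-chain (autoxidation) scheme for paraffin oxidation; R-H is a paraffin.
\begin{align*}
\mathrm{R{-}H} &\longrightarrow \mathrm{R^{\bullet}} + \mathrm{H^{\bullet}}
    && \text{initiation (catalyst-assisted)} \\
\mathrm{R^{\bullet}} + \mathrm{O_2} &\longrightarrow \mathrm{ROO^{\bullet}}
    && \text{propagation} \\
\mathrm{ROO^{\bullet}} + \mathrm{R{-}H} &\longrightarrow \mathrm{ROOH} + \mathrm{R^{\bullet}}
    && \text{propagation} \\
\mathrm{R_2CH{-}OOH} &\longrightarrow \mathrm{R_2C{=}O} + \mathrm{H_2O}
    && \text{main reaction: ketone and water}
\end{align*}
```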
https://en.wikipedia.org/wiki/Paraffin_oxidation
Paraffin wax (or petroleum wax) is a soft colorless solid derived from petroleum, coal, or oil shale that consists of a mixture of hydrocarbon molecules containing between 20 and 40 carbon atoms. It is solid at room temperature and begins to melt above approximately 37 °C (99 °F), [ 2 ] and its boiling point is above 370 °C (698 °F). [ 2 ] Common applications for paraffin wax include lubrication, electrical insulation, and candles; [ 3 ] dyed paraffin wax can be made into crayons. Un-dyed, unscented paraffin candles are odorless and bluish-white. Paraffin wax was first created by Carl Reichenbach in Germany in 1830 and marked a major advancement in candlemaking technology, as it burned more cleanly and reliably than tallow candles and was cheaper to produce. [ 4 ] In chemistry, paraffin is used synonymously with alkane, indicating hydrocarbons with the general formula CₙH₂ₙ₊₂. The name is derived from Latin parum ("very little") + affinis, meaning "lacking affinity" or "lacking reactivity", referring to paraffin's unreactive nature. [ 5 ] Paraffin wax is mostly found as a white, odorless, flavourless, waxy solid, with a typical melting point between about 46 and 68 °C (115 and 154 °F), [ 6 ] and a density of around 900 kg/m³. [ 7 ] It is insoluble in water, but soluble in ether, benzene, and certain esters. Paraffin is unaffected by most common chemical reagents but burns readily. [ 8 ] Its heat of combustion is 42 MJ/kg. [ 9 ] Paraffin wax is an excellent electrical insulator, with a resistivity of between 10¹³ and 10¹⁷ ohm-metres. [ 10 ] This is better than nearly all other materials except some plastics (notably PTFE). It is an effective neutron moderator and was used in James Chadwick's 1932 experiments to identify the neutron. [ 11 ] [ 12 ] Paraffin wax is an excellent material for storing heat, with a specific heat capacity of 2.14–2.9 J·g⁻¹·K⁻¹ (joules per gram per kelvin) and a heat of fusion of 200–220 J·g⁻¹. [ 13 ] Paraffin wax phase-change cooling coupled with retractable radiators was used to cool the electronics of the Lunar Roving Vehicle during the crewed missions to the Moon in the early 1970s. [ 14 ] Wax expands considerably when it melts, and so is used in wax element thermostats for industrial, domestic and, particularly, automobile purposes. [ 15 ] [ 16 ] If pure paraffin wax is melted in a half-open glass vessel, heated to near its flash point and then suddenly cooled, its vapors may autoignite as a result of the pressure reached by the boiling liquid. [ 17 ] Paraffin wax was first created in 1830 by German chemist Karl von Reichenbach when he attempted to develop a method to efficiently separate and refine waxy substances naturally occurring in petroleum. Paraffin represented a major advance in the candle-making industry because it burned cleanly and was cheaper to manufacture than other candle fuels such as beeswax and tallow. Paraffin wax initially suffered from a low melting point. This was remedied by adding stearic acid. The production of paraffin wax enjoyed a boom in the early 20th century due to the growth of the oil and meatpacking industries, which created paraffin and stearic acid as byproducts. [ 4 ] The feedstock for paraffin is slack wax, which is a mixture of oil and wax, a byproduct from the refining of lubricating oil. The first step in making paraffin wax is to remove the oil (de-oiling or de-waxing) from the slack wax. The oil is separated by crystallization. 
Most commonly, the slack wax is heated, mixed with one or more solvents such as a ketone, and then cooled. As it cools, wax crystallizes out of the solution, leaving only oil. This mixture is filtered into two streams: solid (wax plus some solvent) and liquid (oil and solvent). After the solvent is recovered by distillation, the resulting products are called "product wax" (or "press wax") and "foots oil". The lower the percentage of oil in the wax, the more refined it is considered to be (semi-refined versus fully refined). [ 18 ] The product wax may be further processed to remove colors and odors. The wax may finally be blended to give certain desired properties such as melt point and penetration. Paraffin wax is sold in either liquid or solid form. [ 19 ] [ 20 ] [ 21 ] In industrial applications, it is often useful to modify the crystal properties of the paraffin wax, typically by adding branching to the existing carbon backbone chain. The modification is usually done with additives, such as EVA copolymers, microcrystalline wax, or forms of polyethylene. The added branching results in a modified paraffin with a higher viscosity, smaller crystalline structure, and modified functional properties. Pure paraffin wax is rarely used for carving original models for casting metal and other materials in the lost wax process, as it is relatively brittle at room temperature and presents the risks of chipping and breakage when worked. Soft and pliable waxes, like beeswax, may be preferred for such sculpture, but "investment casting waxes," often paraffin-based, are expressly formulated for the purpose. In a histology or pathology laboratory, paraffin wax is used to impregnate tissue prior to sectioning thin samples. Water is removed from the tissue through ascending strengths of alcohol (75% to absolute), and then the alcohol is cleared in an organic solvent such as xylene. The tissue is then placed in paraffin wax for several hours, then set in a mold with wax to cool and solidify. Sections are then cut on a microtome. People can be exposed to paraffin in the workplace by inhalation, skin contact, and eye contact. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) for paraffin wax fume exposure of 2 mg/m³ over an 8-hour workday. [ 29 ]
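To put the thermal-storage figures quoted above in perspective, here is a rough worked estimate; the temperature window and the mid-range property values used are illustrative assumptions:

```python
# Rough energy-storage estimate for paraffin as a phase-change material,
# using mid-range values from the properties quoted above; the temperature
# window (25 C -> 65 C, melting near 55 C) is an illustrative assumption.

mass = 1.0          # kg
c_solid = 2.5e3     # J/(kg*K), within the quoted 2.14-2.9 J/(g*K) range
c_liquid = 2.5e3    # J/(kg*K), same assumption for simplicity
h_fusion = 210e3    # J/kg, within the quoted 200-220 J/g range

sensible_solid = mass * c_solid * (55 - 25)    # heat the solid to melting
latent = mass * h_fusion                       # melt it
sensible_liquid = mass * c_liquid * (65 - 55)  # heat the liquid a bit more

total = sensible_solid + latent + sensible_liquid
print(f"{total/1e3:.0f} kJ stored per kg")  # about 310 kJ, mostly latent heat
```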
https://en.wikipedia.org/wiki/Paraffin_wax
A parafoil is a nonrigid (textile) airfoil with an aerodynamic cell structure which is inflated by the wind. Ram-air inflation forces the parafoil into a classic wing cross-section. Parafoils are most commonly constructed out of ripstop nylon. The device was developed in 1964 by Domina Jalbert (1904–1991). Jalbert had a history of designing kites and was involved in the development of hybrid balloon-kite aerial platforms for carrying scientific instruments. He envisaged the parafoil being used to suspend an aerial platform or to recover space equipment. A patent was granted in 1966. [ 1 ] Deployment shock prevented the parafoil's immediate acceptance as a parachute. It was not until a small drag canopy (known as a "slider") was added on the riser lines to slow their spread that the parafoil became a suitable parachute. Compared to a simple round canopy, a parafoil parachute has greater steerability, will glide further, and allows greater control of the rate of descent; mechanically, the parachute format is a glider of the free-flight kite type, and these characteristics spawned its use in paragliding. [ 2 ] The airflow into the parafoil comes more from below than the flight path might suggest, so the frontmost lines tow against the airflow. When gliding, the angle of attack is lowered and the airflow meets the parafoil head-on, which makes it difficult to achieve an optimum gliding angle without the parafoil deflating. In 2019 Jalbert was posthumously awarded the Fédération Aéronautique Internationale (FAI) Gold Parachuting Medal for inventing the parafoil. [ 3 ] Parafoils see wide use in a variety of windsports such as kite flying, powered parachutes, paragliding, kitesurfing, speed flying, wingsuit flying and skydiving. [ 2 ] [ 4 ] [ 5 ] [ 6 ] The world's largest kite is a parafoil variant. [ 7 ] Today, SpaceX uses steerable parafoils to recover the fairings of its Falcon 9 rocket.
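To make the glide comparison concrete: in steady gliding flight the glide angle depends only on the lift-to-drag ratio, tan(theta) = 1/(L/D). The L/D values below are illustrative assumptions, not measured figures:

```python
import math

# Glide angle from lift-to-drag ratio: tan(theta) = 1 / (L/D).
# The L/D values are illustrative assumptions; round canopies glide
# very poorly, while ram-air parafoils achieve useful glide ratios.

for name, lift_to_drag in [("round canopy", 0.1), ("parafoil", 3.0)]:
    glide_angle = math.degrees(math.atan(1.0 / lift_to_drag))
    forward_per_metre_down = lift_to_drag  # metres forward per metre of descent
    print(f"{name}: glide angle {glide_angle:.1f} deg, "
          f"{forward_per_metre_down:.1f} m forward per m of descent")
```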
https://en.wikipedia.org/wiki/Parafoil
Paraheliotropism refers to the phenomenon in which plants orient their leaves parallel to incoming rays of light, usually as a means of minimizing excess light absorption. Excess light absorption can cause a variety of physiological problems for plants, including overheating, dehydration, loss of turgor, photoinhibition, photo-oxidation, and photorespiration, so paraheliotropism can be viewed as an advantageous behavior in high-light environments. [ 1 ] Not all plants exhibit this behavior, but it has developed in multiple lineages (e.g., both Styrax camporum and Phaseolus vulgaris exhibit paraheliotropic movement). [ 2 ] [ 3 ] While all mechanistic aspects of this behavior have yet to be elucidated (e.g., evidence indicates differential gene expression is involved, but the specifics have yet to be determined), many of the physiological aspects of paraheliotropic movement, at least in Phaseolus vulgaris (the common bean), are well understood. [ 4 ] In this plant, daily leaf movements are influenced by two main factors: an endogenous circadian oscillator and light-induced signals. [ 5 ] Physically, the movement is carried out by turgor-dependent changes in the volume of cortical parenchyma cells (called motor cells) in a turgor-sensitive part of the plant called the pulvinus, located at the juncture of the leaf base and the petiole. [ 6 ] [ 7 ] The cumulative effect of volume changes in these motor cells manifests itself on the tissue/organ level as a swelling or shrinking of one or both sides of the pulvinus, which results in the reorientation of the adjacent leaf. [ 6 ] [ 7 ] Potassium and chloride have been shown to be the major osmolytes involved in the process, and plasma membrane-located proton pumps and ion transporters have been shown to play a critical role in creating osmotic potential. [ 8 ] [ 9 ] The hormones IAA and ABA are also involved in the process and play antagonistic roles, with IAA inducing pulvinar swelling and ABA inducing pulvinar shrinking. [ 4 ] Blue light has also been shown to induce rapid pulvinar shrinking. [ 10 ] Plants require light to perform photosynthesis, but receiving too much light can be just as damaging for a plant as receiving too little. [ 1 ] An excess of light leads to three main overarching physiological problems: a surplus of photochemical energy leads to the creation of reactive oxygen species, which are extremely damaging to numerous cellular structures; the temperature of the plant's cells becomes so high that proteins denature and/or enzyme kinetics are negatively impacted; and transpiration increases, resulting in losses of turgor and photochemical efficiency. [ 1 ] [ 11 ] Paraheliotropic movement can help a plant avoid these problems by limiting the amount of light that is actually absorbed; when leaves are positioned parallel to incoming light, they intercept just a small fraction of the photons that they would intercept if they were positioned perpendicular to the incoming light. [ 1 ] So in essence, paraheliotropic plants avoid the physiological consequences of excess light by simply avoiding light. In 2003, Bielenberg et al. used two Phaseolus species, a quantum sensor, a light meter, a thermocouple meter, and an inclinometer to quantitatively demonstrate the effectiveness of this approach: leaves that displayed paraheliotropic behavior experienced lower photon flux densities (light intensity), lower temperatures, and higher water-use efficiency. [ 11 ]
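The geometric effect described above follows Lambert's cosine law: intercepted flux scales with the cosine of the angle between the leaf's surface normal and the incoming beam. The incident flux value below is an illustrative assumption:

```python
import math

# Why paraheliotropic orientation works: intercepted photon flux scales with
# the cosine of the angle between the leaf's normal and the incoming beam
# (Lambert's cosine law). The incident flux value is illustrative.

incident_ppfd = 2000.0  # micromol photons per m^2 per s, full midday sun

for angle_deg in (0, 30, 60, 85):
    # 0 deg = leaf face-on to the beam; near 90 deg = leaf parallel to the
    # beam, the paraheliotropic posture that minimizes interception.
    intercepted = incident_ppfd * math.cos(math.radians(angle_deg))
    print(f"normal-to-beam angle {angle_deg:2d} deg -> "
          f"{intercepted:7.1f} umol m-2 s-1 intercepted")
```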
https://en.wikipedia.org/wiki/Paraheliotropism
The Paraho process is an above ground retorting technology for shale oil extraction. The name "Paraho" is derived from the Portuguese words "para homem", meaning "for mankind". [ 1 ] The Paraho process was invented by John B. Jones, Jr., later president of the Paraho Development Corporation, and developed by Development Engineering, Inc., in the late 1960s. [ 1 ] [ 2 ] Its design was based on a gas combustion retort developed by the United States Bureau of Mines and the earlier Nevada–Texas–Utah Retort. In the late 1940s, these retorts were tested at the Oil Shale Experiment Station at Anvil Points in Rifle, Colorado. [ 1 ] In 1971, Standard Oil of Ohio started to cooperate with John B. Jones, providing financial support for obtaining an oil shale lease at Anvil Points. In May 1972, the lease was approved. [ 2 ] Before the tract at Anvil Points was leased, a test of the Paraho Direct process for limestone calcination in cement kilns was carried out. [ 1 ] The consortium for developing the Anvil Points lease, the Paraho Development Corporation, was formed in 1973. [ 3 ] In addition to Standard Oil of Ohio, other participants of the consortium were Atlantic Richfield, Carter Oil, Chevron Research, Cleveland-Cliffs Iron, Gulf Oil, Kerr-McKee, Marathon Oil, Arthur G. McKee, Mobil Research, Phillips Petroleum Company, Shell Development, Southern California Edison, Standard Oil Company (Indiana), Sun Oil, Texaco, and the Webb-Chambers-Gary-McLoraine Group. [ 2 ] Shale oil retorting started in 1974, when two retorts, a pilot plant and a semiworks unit, were put into operation. [ 3 ] The semiworks unit achieved a maximum throughput capacity of 290 tons (263 tonnes) of raw oil shale per day. [ 3 ] In March 1976, the Paraho Development Corporation tested a modification of its technology, the Paraho Indirect process. [ 2 ] The Anvil Points lease was closed in 1978. [ 1 ] In 1976–1978, under contracts with the United States Navy, Paraho technology was used for the production of 100,000 barrels of crude shale oil, which was tested for use as military transportation fuel. [ 4 ] [ 5 ] [ 6 ] The Gary Western Refinery in Fruita, Colorado, refined the Paraho shale oil for the production of gasoline, jet fuels, diesel fuel marine, and heavy fuel oil. [ 7 ] Paraho JP-4 aviation fuel was tested by the United States Air Force in a T-39 jet aircraft flight, which took place between Wright-Patterson Air Force Base (Dayton, Ohio) and Carswell Air Force Base (Fort Worth, Texas). In addition, the Paraho heavy fuel oil was used to fuel a Cleveland-Cliffs Iron ore carrier during a seven-day cruise on the Great Lakes. [ 2 ] On 13 June 1980, the Department of Energy awarded a $4.4 million contract (with participants providing an additional $3.7 million) for an 18-month study to construct a modular demonstration shale oil plant, processing 18,000 tons of oil shale per day and producing 10,000 barrels of oil per day, on a lease 40 miles southeast of Vernal, Utah. [ 8 ] The demonstration module was never built. In 1982, Paraho's semiworks plant was torn down when the Anvil Points station was decommissioned, but the pilot plant was moved to an adjacent plot of private land. In 1987, Paraho reorganized as New Paraho and began production of the SOMAT asphalt additive, used in test strips in five states. In 1991, New Paraho reported successful tests of the SOMAT shale oil asphalt additive. 
On 28 June 2000, Shale Technologies purchased the Paraho Development Corporation and became owner of the proprietary information relating to the Paraho oil shale retorting technologies. [ 9 ] On 14 August 2008, Queensland Energy Resources announced that it would use the Paraho Indirect technology for its Stuart Oil Shale Project . [ 10 ] The Paraho process can be operated in two different heating modes, direct and indirect. [ 5 ] The Paraho Direct process evolved from gas combustion retort technology and is classified as an internal combustion method. [ 1 ] [ 11 ] [ 12 ] Accordingly, the Paraho Direct retort is a vertical shaft retort similar to the Kiviter and Fushun retorts, used in Estonia and China, respectively. [ 13 ] However, compared to the earlier gas combustion retorts, the Paraho retort's raw oil shale feeding mechanism, gas distributor, and discharge grate have different designs. In the Paraho Direct process, the crushed and screened raw oil shale is fed into the top of the retort through a rotating distributor. The oil shale descends the retort as a moving bed. [ 1 ] [ 14 ] The oil shale is heated by the rising combustion gases from the lower part of the retort, and the kerogen in the shale decomposes at about 500 °C (932 °F) into oil vapour, shale oil gas, and spent shale . Heat for pyrolysis comes from the combustion of char in the spent shale. The combustion takes place where air is injected at two levels in the middle of the retort below the pyrolysis section, raising the temperature of the shale and the gas to between 700 °C (1,292 °F) and 800 °C (1,472 °F). [ 14 ] Collecting tubes at the top of the retort carry shale oil mist, evolved gases, and combustion gases into the product separation unit, where oil, water, and dust are separated from the gases. For combined removal of liquid droplets and particulates, a wet electrostatic precipitator is used. [ 1 ] Cleaned gases from the precipitator are compressed in a compressor. Part of the gas from the compressor is recycled to the bottom of the retort to cool the combusted shale (shale ash) and carry the recovered heat back up the retort. Cooled shale ash exits the retort through the discharge grate at the bottom of the retort. After processing, the shale ash is disposed of. [ 1 ] The liquid oil is separated from produced water and may be further refined into high-quality products. The mixture of evolved gases and combustion gases is available for use as a low-quality fuel gas for drying or power generation. The Paraho Indirect is classified as an externally generated hot gas technology . [ 12 ] The Paraho Indirect retort configuration is similar to the Paraho Direct except that a part of the gas from the compressor is heated to between 600 °C (1,112 °F) and 800 °C (1,472 °F) in a separate furnace and injected into the retort instead of air. [ 5 ] No combustion occurs in the Paraho Indirect retort itself. [ 1 ] As a result, the fuel gas from the Paraho Indirect is not diluted with combustion gases, and the char remains on the disposed spent shale. The main advantage of the Paraho process is simplicity in process and design; it has few moving parts and therefore low construction and operating costs compared with more sophisticated technologies. The Paraho retort also consumes no water, which is especially important for oil shale extraction in areas with water scarcity . [ 2 ] A disadvantage common to both the Paraho Direct and Paraho Indirect is that neither is able to process oil shale particles smaller than about 12 millimetres (0.5 in).
These fines may account for 10 to 30 per cent of the crushed feed.
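The practical difference between the two heating modes described above can be captured in a short summary structure. This is only an illustrative sketch of the description in this article; the field names are invented for illustration, and the temperature figures simply echo the text, not engineering data.

from dataclasses import dataclass

@dataclass
class ParahoMode:
    name: str
    heat_source: str        # where pyrolysis heat comes from
    injected_gas: str       # what enters below the pyrolysis zone
    fuel_gas_diluted: bool  # is the product gas mixed with combustion gas?

DIRECT = ParahoMode(
    "Paraho Direct",
    "in-retort combustion of char in the spent shale (700-800 C zone)",
    "air, injected at two levels",
    True,   # evolved gases leave mixed with combustion gases
)
INDIRECT = ParahoMode(
    "Paraho Indirect",
    "recycle gas heated to 600-800 C in an external furnace",
    "externally heated recycle gas (no air)",
    False,  # no in-retort combustion, so the fuel gas stays undiluted
)

# Both modes share the same feed limitation: fines below ~12 mm are rejected.
MIN_FEED_MM = 12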
https://en.wikipedia.org/wiki/Paraho_process
Parakaryon myojinensis , also known as the Myojin parakaryote , is a highly unusual species of single-celled organism known only from a single specimen, described in 2012. It has features of both prokaryotes and eukaryotes but is apparently distinct from either group, making it unique among organisms discovered thus far. [ 1 ] It is the sole species in the genus Parakaryon . The generic name Parakaryon comes from Greek παρά ( pará , "beside", "beyond", "near") and κάρυον ( káryon , "nut", "kernel", "nucleus"), and reflects its distinction from eukaryotes and prokaryotes. The specific name myojinensis reflects the locality where the only sample was collected: from the bristle of a scale worm collected from hydrothermal vents at Myōjin Knoll (明神海丘, 32°06.2′N 139°52.1′E), [ 2 ] about 1,240 metres (4,070 ft) deep in the Pacific Ocean, near Aogashima island, southeast of the Japanese archipelago . The authors explain the full binomial as "next to (eu)karyote from Myojin". [ 1 ] Parakaryon myojinensis has some structural features unique to eukaryotes, some features unique to prokaryotes, and some features different from both. [ 1 ] [ 3 ] Yamaguchi et al . proposed in their 2012 paper [ 1 ] three reasons why the specimen they named P. myojinensis was not simply a result of parasitic or predatory bacteria living within another prokaryote host, a phenomenon they acknowledged is known from several examples. In 2016, Yamaguchi et al . detailed the discovery of helical bacteria on polychaetes collected from the same location, which they named "Myojin spiral bacteria". [ 4 ] In 2020, Yamaguchi and two others published a new short paper on their studies of the microbiota of polychaetes from Myojin Knoll. The authors stated, "Among them, we often observed bacteria that contained intracellular bacteria on ultrathin sections." They studied one such specimen and concluded that the "host" bacterium was dead and its cell wall broken. The smaller bacteria could have been feeding on the larger bacterium, but they also suggest, "The association of the bacteria with dead bacteria could also have been artificially caused by the centrifugation steps used for the preparation of specimens for electron microscopy." In this paper, all five mentions of P. myojinensis treated it as a valid taxon, with no implication that it is an artifact. [ 5 ] It is not clear whether P. myojinensis can or should be classified as a eukaryote or a prokaryote, the two categories to which all other cellular life belongs. Adding to the difficulties of classification, only one instance of this organism has been discovered to date, so scientists have been unable to study it further. Its discoverers suggested that additional specimens would be needed for culturing and DNA sequencing to place the organism in a phylogenetic context. [ 1 ] British evolutionary biochemist Nick Lane hypothesized in a 2015 book that the existence of P. myojinensis could be the first known example of symbiogenesis outside eukaryotes, which could offer clues to the requirements for the development of complex life in general. [ 3 ]
https://en.wikipedia.org/wiki/Parakaryon
In spherical astronomy , the parallactic angle is the angle between the great circle through a celestial object and the zenith , and the hour circle of the object. [ 1 ] It is usually denoted q . In the triangle zenith—object—celestial pole, the parallactic angle is the position angle of the zenith at the celestial object. Despite its name, this angle is unrelated to parallax . The parallactic angle is 0° or 180° when the object crosses the meridian . For ground-based observatories, the Earth's atmosphere acts like a prism which disperses light of different wavelengths, such that a star generates a small spectrum along the direction that points to the zenith. Given an astronomical image with a coordinate system with a known direction to the celestial pole , the parallactic angle therefore represents the direction of that prismatic effect relative to that reference direction. Knowledge of that angle is needed to align atmospheric dispersion correctors with the beam axis of the telescope. [ 2 ] [ 3 ] Depending on the type of mount of the telescope , this angle may also affect the orientation of the celestial object's disk as seen in a telescope. With an equatorial mount , the cardinal points of the celestial object's disk are aligned with the vertical and horizontal direction of the view in the telescope. With an altazimuth mount , those directions are rotated by the amount of the parallactic angle. [ 4 ] The cardinal points referred to here are the points on the limb located such that a line from the center of the disk through them will point to one of the celestial poles or 90° away from them; these are not the cardinal points defined by the object's axis of rotation. The orientation of the disk of the Moon, as related to the horizon , changes throughout its diurnal motion, and the parallactic angle changes equivalently. [ 5 ] This is also the case with other celestial objects. In an ephemeris , the position angle of the midpoint of the bright limb of the Moon or planets, and the position angles of their north poles, may be tabulated. If this angle is measured from the north point on the limb, it can be converted to an angle measured from the zenith point (the vertex) as seen by an observer by subtracting the parallactic angle. [ 5 ] The position angle of the bright limb is directly related to that of the subsolar point . The vector algebra needed to derive the standard formula is equivalent to the long derivation of the compass course. The sign of the angle is basically kept, north over east in both cases, but as astronomers look at stars from the inside of the celestial sphere, the definition uses the convention that q is the angle in an image that turns the direction to the NCP counterclockwise into the direction of the zenith. In the equatorial system of right ascension, α , and declination, δ , the star is at the unit vector s = (cos α cos δ, sin α cos δ, sin δ). The North Celestial Pole is at (0, 0, 1). In this same coordinate system the zenith is found by inserting altitude a = π/2 , cos a = 0 , into the transformation formulas to get the direction (cos l cos φ, sin l cos φ, sin φ), where φ is the observer's geographic latitude, and l the local sidereal time. This also describes a rotating, right-handed, observer coordinate frame, with X-axis aligned to the south, where the local meridian intersects the horizon, Y-axis toward the eastern horizon, and Z-axis toward the zenith. This is the coordinate frame in which altitude and azimuth are measured. For the star, at some moment, l , with expected altitude, a , define its zenith distance as z = π/2 − a .
Its hour angle, h = l − α , measures the elapsed sidereal time interval since the star crossed the local meridian, and is negative if the star is east of the meridian and its crossing is pending. The normalized cross product of the star's direction s and the zenith's unit vector ẑ is the rotation axis that turns the star into the direction of the zenith: ω = (s × ẑ)/|s × ẑ|. Finally, ω × s is the third axis of the tilted coordinate system and the direction into which the star is moved on the great circle towards the zenith. The plane tangential to the celestial sphere at the star is spanned by the unit vector to the north, N = (−cos α sin δ, −sin α sin δ, cos δ), and the unit vector to the east, E = (−sin α, cos α, 0). These are orthogonal: N · E = 0. The parallactic angle q is the angle of the initial section of the great circle at s , east of north: [ 6 ] sin q = cos φ sin h / sin z and cos q = (sin φ cos δ − cos φ sin δ cos h) / sin z. (The first of these is the sine formula of spherical trigonometry . [ 7 ] ) The values of sin z and of cos φ are positive, so using atan2 functions one may divide both expressions through these without losing signs; eventually q = atan2(sin h, tan φ cos δ − sin δ cos h) yields the angle in the full range −π ≤ q ≤ π. The advantage of this expression is that it does not depend on the various offset conventions of azimuth, A ; the uncontroversial offset of the hour angle, h , takes care of this. For a sidereal target, by definition a target where δ and α are not time-dependent, the angle changes with a period of a sidereal day T s . Let dots denote time derivatives; then the hour angle changes as [ 8 ] ḣ = 2π/T s , and the time derivative of the tan q expression is [ 9 ] d(tan q)/dt = ḣ (tan φ cos δ cos h − sin δ) / (tan φ cos δ − sin δ cos h)². The value derived above always refers to the north celestial pole as the origin of coordinates, even if it is not visible (i.e., if the telescope is south of the equator ). Some authors introduce more complicated formulas with variable signs to derive similar angles for telescopes south of the equator that use the south celestial pole as the reference. [ 10 ]
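The atan2 form above is easy to check numerically. A minimal sketch; the site latitude and target coordinates are arbitrary example values, not taken from any catalogue:

import math

def parallactic_angle(h, delta, phi):
    # Parallactic angle q (radians) from hour angle h, declination delta,
    # and observer latitude phi (all radians), via
    # tan q = sin h / (tan phi * cos delta - sin delta * cos h).
    # atan2 preserves the sign, giving q in the full range -pi..pi.
    return math.atan2(math.sin(h),
                      math.tan(phi) * math.cos(delta) - math.sin(delta) * math.cos(h))

# Example: a star at declination +20 deg, one hour east of the meridian
# (h = -15 deg), seen from latitude +50 deg.
q = parallactic_angle(math.radians(-15), math.radians(20), math.radians(50))
print(f"q = {math.degrees(q):.2f} deg")  # negative east of the meridian; 0 on the meridian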
https://en.wikipedia.org/wiki/Parallactic_angle
The parallactic instrument of Kapteyn is a measuring instrument created by the Dutch astronomer Jacobus Kapteyn around 1886. Using this instrument, Kapteyn analyzed over 1,700 glass plate photos of stars seen from the southern hemisphere . [ 1 ] This research contributed to the Cape Photographic Durchmusterung , a star catalogue containing 454,875 entries. Together with the measurements of stars seen from the northern hemisphere (the Bonner Durchmusterung ), the measurements of Kapteyn formed a complete star catalogue with a scope and accuracy that was impressive for its time. [ 1 ] The instrument is currently located in the collection of the University Museum of Groningen . [ 2 ] Since Kapteyn lacked an observatory of his own in Groningen , he used a homemade instrument for the analysis of glass plate photos of stars taken by his colleague David Gill in Cape Town . Kapteyn built the instrument with several parts from other (measuring) instruments. [ 1 ] Although Kapteyn called it a 'parallactic instrument', the instrument is not related to the parallax effect . The name may come from the chassis of the instrument, which originally came from an instrument with a 'parallactic mount' . [ 3 ] Three researchers were needed to perform measurements with the instrument, each with their own task. To use the instrument, the researcher must look through the ocular (part J) and aim the lens (H) at a glass plate photo (see drawing). The distance between the center point of the instrument and the plate to be measured must equal the focal length of the telescope that was used to take the photos (in the case of Gill's photos, 54 inches (140 cm)). By rotating the right axis (B) the researcher can aim the lens at a star of interest. The researcher can read the position of the star on the wheel (D) below the right axis (B). Similarly, parts A and C can be used to determine the right ascension. Part L is no longer on the instrument. Using this smaller telescope the researcher could correctly position the instrument in relation to the glass plate photo. [ 1 ] For each position on the sky, Kapteyn used two photos (each made on a different night). He placed these photos in sequence (with approximately 1 millimeter of space in between), with one being slightly displaced. This allowed him to easily distinguish stars from dust particles on the glass plate. [ 1 ] Kapteyn and his staff members analyzed the first photo (aimed at the south celestial pole) on October 28, 1886, and the final photo (aimed at 85° declination) on June 9, 1887. They used the instrument in a laboratory of Dirk Huizinga, a professor of physiology who made two of his rooms available to them. [ 1 ] Kapteyn and his staff members analyzed the glass plate photos in duplicate and darkened the room to get a better view of details in the photos. [ 5 ] Kapteyn and his staff performed some repeat measurements in 1892, 1896 and 1897. Kapteyn and Gill published their work in three volumes that together formed the Cape Photographic Durchmusterung: declination zones −18° to −37° (1896), −38° to −52° (1897) and −53° to −89° (1900). [ 1 ] Working with the instrument had a significant impact on the health and private life of Kapteyn. Kapteyn often felt pain in his eyes and stomach and became easily agitated due to the intense labor.
After completing one of the last measurements, Kapteyn wrote to Gill: "...- and the truth is that I find my patience nearly exhausted", referring to the analysis for the Cape Photographic Durchmusterung. Additionally, Kapteyn wrote about working on the Durchmusterung: "There is a sort of fate that which makes me do my life long just what I want to do least of all." [ 1 ] The British astronomer Arthur Stanley Eddington claimed that prisoners were part of the staff that worked with Kapteyn's instrument. However, this claim is deemed implausible, since prisoners performed only relatively simple tasks in this time period, and because the matter was never brought up in any correspondence with Kapteyn. [ 1 ] The publication of the measurements performed with the instrument marked a major breakthrough for Kapteyn in the field of astronomy. In 1901 Kapteyn was the first Dutchman to receive the Gold Medal of the British Royal Astronomical Society . Kapteyn had been a member of this organisation since 1892. Furthermore, working with the instrument may have inspired Kapteyn's theories about the shape of the Milky Way . [ citation needed ] Kapteyn first discussed these theories in 1891 during a rectorial speech. [ 6 ] The American astronomer Simon Newcomb praised Kapteyn and his work: "This work [the Cape Photographic Durchmusterung] of Kapteyn offers a remarkable example of the spirit which animates the born investigator of the heavens." [ 7 ] Jacob Halm remarked that the results of the Cape Photographic Durchmusterung had an accuracy comparable to that of the results for the northern hemisphere. [ 8 ] The astronomer Henry Sawerthal, who visited Kapteyn's laboratory in 1889, described the results as "...sufficient in the present instance to give results more accurate than those of the Northern Durchmusterung, a remark which not only applies to positions, but to magnitude (also)." [ 9 ] The German astronomer Max Wolf had such admiration for the instrument that he built his own 'improved' version of it. [ 10 ] [ 11 ]
https://en.wikipedia.org/wiki/Parallactic_instrument_of_Kapteyn
Parallax is a displacement or difference in the apparent position of an object viewed along two different lines of sight, and is measured by the angle or half-angle of inclination between those two lines. [ 1 ] [ 2 ] Due to foreshortening , nearby objects show a larger parallax than farther objects, so parallax can be used to determine distances. To measure large distances, such as the distance of a planet or a star from Earth , astronomers use the principle of parallax. Here, the term parallax is the semi-angle of inclination between two sight-lines to the star, as observed when Earth is on opposite sides of the Sun in its orbit. [ a ] These distances form the lowest rung of what is called "the cosmic distance ladder ", the first in a succession of methods by which astronomers determine the distances to celestial objects, serving as a basis for other distance measurements in astronomy forming the higher rungs of the ladder. Because parallax is weak if the triangle formed by an object under observation and two observation points has an angle much greater than 90°, the use of parallax for distance measurements is usually restricted to objects that are directly "faced" by the baseline (the line between the two observation points) of the formed triangle. Parallax also affects optical instruments such as rifle scopes, binoculars , microscopes , and twin-lens reflex cameras that view objects from slightly different angles. Many animals, along with humans, have two eyes with overlapping visual fields that use parallax to gain depth perception ; this process is known as stereopsis . In computer vision the effect is used for computer stereo vision , and there is a device called a parallax rangefinder that uses it to find the range, and in some variations also the altitude, to a target. A simple everyday example of parallax can be seen in the dashboards of motor vehicles that use a needle-style mechanical speedometer . When viewed from directly in front, the speed may show exactly 60, but when viewed from the passenger seat, the needle may appear to show a slightly different speed, due to the angle of viewing combined with the displacement of the needle from the plane of the numerical dial. Because the eyes of humans and other animals are in different positions on the head, they present different views simultaneously. This is the basis of stereopsis , the process by which the brain exploits the parallax due to the different views from the two eyes to gain depth perception and estimate distances to objects. [ 3 ] Some animals also use motion parallax , in which the animal (or just its head) moves to gain different viewpoints. For example, pigeons (whose eyes do not have overlapping fields of view and thus cannot use stereopsis) bob their heads up and down to see depth. [ 4 ] Motion parallax is also exploited in wiggle stereoscopy , computer graphics that provide depth cues through viewpoint-shifting animation rather than through binocular vision. Parallax arises due to a change in viewpoint occurring due to the motion of the observer, of the observed, or both. What is essential is relative motion. By observing parallax, measuring angles , and using geometry , one can determine distance . Distance measurement by parallax is a special case of the principle of triangulation , which states that, if one side length and two angles of a triangle are known, the remaining side lengths and angle can be determined (i.e., the triangle is fully determined).
Thus, the careful measurement of the length of one baseline and two angles at the baseline edges can fix the scale of an entire triangulation network. In astronomy, the triangle is extremely long and narrow, and by measuring both its shortest side length (the motion of the observer) and the small top angle (always less than 1 arcsecond , [ 5 ] leaving the other two close to 90 degrees), the length of the long sides (in practice considered to be equal) can be determined. The distance d from the Sun to a star (measured in parsecs ) is the reciprocal of the parallax p (measured in arcseconds ): d(pc) = 1/p(arcsec). For example, the distance from the Sun to Proxima Centauri is 1/0.7687 = 1.3009 parsecs (4.243 ly), and a celestial object twice as distant as this star would have half the parallax, 0.38435 arcsec. [ 6 ] On Earth, a coincidence rangefinder or parallax rangefinder can be used to find the distance to a target. In surveying , the problem of resection explores angular measurements from a known baseline for determining an unknown point's coordinates.
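The reciprocal relation d = 1/p is easy to check numerically; a minimal sketch (the Proxima Centauri parallax is the value quoted above):

def parallax_to_distance_pc(p_arcsec):
    # Distance in parsecs from an annual parallax in arcseconds: d = 1/p.
    return 1.0 / p_arcsec

PC_IN_LY = 3.2616  # light-years per parsec

d = parallax_to_distance_pc(0.7687)           # Proxima Centauri
print(f"{d:.4f} pc = {d * PC_IN_LY:.3f} ly")  # ~1.3009 pc = ~4.243 ly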
The most important fundamental distance measurements in astronomy come from trigonometric parallax, as applied in the stellar parallax method . As the Earth orbits the Sun, the position of a nearby star will appear to shift slightly against the more distant background. This shift is the apex angle in an isosceles triangle , with 2 AU (the distance between the extreme positions of Earth's orbit around the Sun) forming the base of the triangle and the distance to the star being the long, equal-length legs (because of the very long distance from the Earth's orbit to the observed star). The amount of shift is quite small, even for the nearest stars, measuring 1 arcsecond for an object at 1 parsec's distance (3.26 light-years ), and thereafter decreasing in angular amount as the distance increases. Astronomers usually express distances in units of parsecs (parallax arcseconds); light-years are used in popular media. Because parallax becomes smaller for a greater stellar distance, useful distances can be measured only for stars which are near enough to have a parallax larger than a few times the precision of the measurement. In the 1990s, for example, the Hipparcos mission obtained parallaxes for over a hundred thousand stars with a precision of about a milliarcsecond , [ 7 ] providing useful distances for stars out to a few hundred parsecs. The Hubble Space Telescope 's Wide Field Camera 3 has the potential to provide a precision of 20 to 40 microarcseconds, enabling reliable distance measurements up to 5,000 parsecs (16,000 ly) for small numbers of stars. [ 8 ] [ 9 ] The Gaia space mission provided similarly accurate distances to most stars brighter than 15th magnitude. [ 10 ] Distances can be measured within 10% as far as the Galactic Center , about 30,000 light years away. Stars have a velocity relative to the Sun that causes proper motion (transverse across the sky) and radial velocity (motion toward or away from the Sun). The former is determined by plotting the changing position of the stars over many years, while the latter comes from measuring the Doppler shift of the star's spectrum caused by motion along the line of sight. For a group of stars with the same spectral class and a similar magnitude range, a mean parallax can be derived from statistical analysis of the proper motions relative to their radial velocities. This statistical parallax method is useful for measuring the distances of bright stars beyond 50 parsecs and of giant variable stars , including Cepheids and the RR Lyrae variables . [ 11 ] The motion of the Sun through space provides a longer baseline for the parallax triangle, which increases the accuracy of parallax measurements; this is known as secular parallax . For stars in the Milky Way disk, this corresponds to a mean baseline of 4 AU per year, while for halo stars the baseline is 40 AU per year. After several decades, the baseline can be orders of magnitude greater than the Earth–Sun baseline used for traditional parallax. However, secular parallax introduces a higher level of uncertainty because the relative velocity of observed stars is an additional unknown. When applied to samples of multiple stars, the uncertainty can be reduced; the uncertainty is inversely proportional to the square root of the sample size. [ 14 ] Moving cluster parallax is a technique where the motions of individual stars in a nearby star cluster can be used to find the distance to the cluster. Only open clusters are near enough for this technique to be useful. In particular, the distance obtained for the Hyades has historically been an important step in the distance ladder. Other individual objects can have fundamental distance estimates made for them under special circumstances. If the expansion of a gas cloud, like a supernova remnant or planetary nebula , can be observed over time, then an expansion parallax distance to that cloud can be estimated. However, those measurements suffer from uncertainties in the object's deviation from sphericity. Binary stars which are both visual and spectroscopic binaries can also have their distance estimated by similar means, and do not suffer from the above geometric uncertainty. The characteristic common to these methods is that a measurement of angular motion is combined with a measurement of the absolute velocity (usually obtained via the Doppler effect ). The distance estimate comes from computing how far away the object must be to make its observed absolute velocity appear with the observed angular motion. Measurements made by viewing the position of some markers relative to something to be measured are subject to an error caused by parallax if the markers are some distance away from the object under measurement and not viewed from the correct position or angle. An example is reading the position of a pointer against a scale in an instrument such as an analog multimeter . The same effect alters the speed read on a car's speedometer by a driver in front of it and a passenger off to the side, values read from a graticule not in actual contact with the display on an oscilloscope , etc. To help the user avoid this problem, the scale is sometimes printed above a narrow strip of mirror , and the user's eye is positioned so that the pointer obscures its reflection, guaranteeing that the user's line of sight is perpendicular to the mirror and therefore to the scale. When viewed through a stereo viewer, an aerial picture pair offers a pronounced stereo effect of landscape and buildings. High buildings appear to "keel over" in the direction away from the center of the photograph. Measurements of this parallax are used to deduce the height of the buildings, provided that flying height and baseline distances are known. This is a key component of the process of photogrammetry .
Parallax error can be seen when taking photos with many types of cameras, such as twin-lens reflex cameras and those including viewfinders (such as rangefinder cameras ). In such cameras, the eye sees the subject through different optics (the viewfinder, or a second lens) than the one through which the photo is taken. As the viewfinder is often found above the lens of the camera, photos with parallax error are often slightly lower than intended, the classic example being the image of a person with their head cropped off. This problem is addressed in single-lens reflex cameras , in which the viewfinder sees through the same lens through which the photo is taken (with the aid of a movable mirror), thus avoiding parallax error. Parallax is also an issue in image stitching , such as for panoramas. Parallax affects sighting devices of ranged weapons in many ways. On sights fitted on small arms and bows , etc., the perpendicular distance between the sight and the weapon's launch axis (e.g. the bore axis of a gun)—generally referred to as " sight height "—can induce significant aiming errors when shooting at close range, particularly when shooting at small targets. [ 16 ] This parallax error is compensated for (when needed) via calculations that also take in other variables such as bullet drop , windage , and the distance at which the target is expected to be. [ 17 ] Sight height can be used to advantage when "sighting in" rifles for field use. A typical hunting rifle (.222 with telescopic sights) sighted in at 75 m will still be useful from 50 to 200 m (55 to 219 yd) without needing further adjustment. [ citation needed ] In some reticled optical instruments such as telescopes , microscopes or in telescopic sights ("scopes") used on small arms and theodolites , parallax can create problems when a reticle (or its image) is not coincident with the image plane of a target. This is because when the reticle and the target are not at the same focus, their optically corresponding distances projected through the eyepiece are also different, and the user's eye will register the difference in parallax between the reticle and the target image (whenever eye position changes) as a relative lateral displacement of one on top of the other. The term parallax shift refers to the resultant apparent "floating" movements of the reticle over the target image when the user moves his/her head/eye laterally (up/down or left/right) behind the sight. [ 18 ] Some firearm scopes are equipped with a parallax compensation mechanism, which consists of a movable optical element that enables the optical system to shift the focus of the target image at varying distances into the same optical plane as the reticle (or vice versa). Many low-tier telescopic sights may have no parallax compensation because in practice they can still perform very acceptably without eliminating parallax shift. In this case, the scope is often set fixed at a designated parallax-free distance that best suits their intended usage. Typical standard factory parallax-free distances for hunting scopes are 100 yd (or 90 m) to make them suited for hunting shots that rarely exceed 300 yd/m. Some competition and military-style scopes without parallax compensation may be adjusted to be parallax free at ranges up to 300 yd/m to make them better suited for aiming at longer ranges. [ citation needed ]
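As a rough illustration of the sight-height effect described above: once the sight line and the bore line are made to cross at the zeroing distance, the residual offset between point of aim and point of impact scales linearly with range by similar triangles. This is a simplified sketch that ignores bullet drop entirely; the numbers are illustrative, not ballistic data.

def aim_offset_cm(sight_height_cm, zero_range_m, target_range_m):
    # Vertical offset (cm) between point of aim and point of impact due to
    # sight height alone, ignoring bullet drop: the bore line and sight line
    # cross at zero_range_m, so the offset shrinks linearly to zero there
    # (positive = impact below the point of aim).
    return sight_height_cm * (1.0 - target_range_m / zero_range_m)

# A sight mounted 5 cm above the bore, zeroed at 75 m:
for r in (5, 25, 50, 75, 150):
    print(f"{r:3d} m: {aim_offset_cm(5.0, 75.0, r):+.1f} cm")
# At very close range the shot lands nearly the full sight height low.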
Scopes for guns with shorter practical ranges, such as airguns , rimfire rifles , shotguns , and muzzleloaders , will have parallax settings for shorter distances, commonly 50 m (55 yd) for rimfire scopes and 100 m (110 yd) for shotguns and muzzleloaders. [ citation needed ] Airgun scopes are very often found with adjustable parallax, usually in the form of an adjustable objective (or "AO" for short) design, and may adjust down to as near as 3 metres (3.3 yd). [ citation needed ] A non-magnifying reflector or "reflex" sight eliminates parallax for distant objects by using a collimating optic to image the reticle at infinity. For objects that are not infinitely far away, eye movement perpendicular to the device will cause parallax movement between the target and the reticle image that is proportional to how far the viewer's eye is off center in the cylindrical column of light created by the collimating optics. [ 19 ] [ 20 ] Firearm sights, such as some red dot sights , try to correct for this by not imaging the reticle at infinity, but instead at a designated target distance. [ 19 ] Spherical aberration in a reflector sight can also cause the reticle's image to move with changes in eye position. Some reflector sights with optical systems that compensate for off-axis spherical aberration are marketed as "parallax free". [ 21 ] [ 22 ] [ 23 ] Because of the positioning of field or naval artillery , each gun has a slightly different perspective of the target relative to the location of the fire-control system . When aiming guns at the target, the fire-control system must compensate for parallax to ensure that fire from each gun converges on the target. Several of Mark Renn 's sculptural works play with parallax, appearing abstract until viewed from a specific angle. One such sculpture is The Darwin Gate in Shrewsbury , England, which from a certain angle appears to form a dome, according to Historic England , in "the form of a Saxon helmet with a Norman window... inspired by features of St Mary's Church which was attended by Charles Darwin as a boy". [ 24 ] In a philosophic/geometric sense: an apparent change in the direction of an object, caused by a change in observational position that provides a new line of sight. The apparent displacement, or difference of position, of an object, as seen from two different stations, or points of view. In contemporary writing, parallax can also be the same story, or a similar story from approximately the same timeline, from one book, told from a different perspective in another book. The word and concept feature prominently in James Joyce 's 1922 novel, Ulysses . Orson Scott Card also used the term when referring to Ender's Shadow as compared to Ender's Game . The metaphor is invoked by Slovenian philosopher Slavoj Žižek in his 2006 book The Parallax View , borrowing the concept of "parallax view" from the Japanese philosopher and literary critic Kojin Karatani . Žižek notes: The philosophical twist to be added (to parallax), of course, is that the observed distance is not simply "subjective", since the same object that exists "out there" is seen from two different stances or points of view. It is rather that, as Hegel would have put it, subject and object are inherently "mediated" so that an " epistemological " shift in the subject's point of view always reflects an " ontological " shift in the object itself.
Or—to put it in Lacanese—the subject's gaze is always already inscribed into the perceived object itself, in the guise of its "blind spot", that which is "in the object more than the object itself", the point from which the object itself returns the gaze. "Sure the picture is in my eye, but I am also in the picture"... [ 25 ]
https://en.wikipedia.org/wiki/Parallax
The most important fundamental distance measurements in astronomy come from trigonometric parallax , as applied in the stellar parallax method . As the Earth orbits the Sun, the position of a nearby star will appear to shift slightly against the more distant background. This shift is the apex angle in an isosceles triangle , with 2 AU (the distance between the extreme positions of Earth's orbit around the Sun) forming the base of the triangle and the distance to the star being the long, equal-length legs (because of the very long distance from the Earth's orbit to the observed star). The amount of shift is quite small, even for the nearest stars, measuring 1 arcsecond for an object at 1 parsec's distance (3.26 light-years ), and thereafter decreasing in angular amount as the distance increases. Astronomers usually express distances in units of parsecs (parallax arcseconds); light-years are used in popular media. Because parallax becomes smaller for a greater stellar distance, useful distances can be measured only for stars which are near enough to have a parallax larger than a few times the precision of the measurement. In the 1990s, for example, the Hipparcos mission obtained parallaxes for over a hundred thousand stars with a precision of about a milliarcsecond , [ 1 ] providing useful distances for stars out to a few hundred parsecs. The Hubble Space Telescope 's Wide Field Camera 3 has the potential to provide a precision of 20 to 40 microarcseconds, enabling reliable distance measurements up to 5,000 parsecs (16,000 ly) for small numbers of stars. [ 2 ] [ 3 ] The Gaia space mission provided similarly accurate distances to most stars brighter than 15th magnitude. [ 4 ] Distances can be measured within 10% as far as the Galactic Center , about 30,000 light years away. Stars have a velocity relative to the Sun that causes proper motion (transverse across the sky) and radial velocity (motion toward or away from the Sun). The former is determined by plotting the changing position of the stars over many years, while the latter comes from measuring the Doppler shift of the star's spectrum caused by motion along the line of sight. For a group of stars with the same spectral class and a similar magnitude range, a mean parallax can be derived from statistical analysis of the proper motions relative to their radial velocities. This statistical parallax method is useful for measuring the distances of bright stars beyond 50 parsecs and of giant variable stars , including Cepheids and the RR Lyrae variables . [ 5 ] The motion of the Sun through space provides a longer baseline for the parallax triangle, which increases the accuracy of parallax measurements; this is known as secular parallax . For stars in the Milky Way disk, this corresponds to a mean baseline of 4 AU per year, while for halo stars the baseline is 40 AU per year. After several decades, the baseline can be orders of magnitude greater than the Earth–Sun baseline used for traditional parallax. However, secular parallax introduces a higher level of uncertainty because the relative velocity of observed stars is an additional unknown. When applied to samples of multiple stars, the uncertainty can be reduced; the uncertainty is inversely proportional to the square root of the sample size. [ 8 ] Moving cluster parallax is a technique where the motions of individual stars in a nearby star cluster can be used to find the distance to the cluster. Only open clusters are near enough for this technique to be useful.
In particular, the distance obtained for the Hyades has historically been an important step in the distance ladder. Other individual objects can have fundamental distance estimates made for them under special circumstances. If the expansion of a gas cloud, like a supernova remnant or planetary nebula , can be observed over time, then an expansion parallax distance to that cloud can be estimated. However, those measurements suffer from uncertainties in the object's deviation from sphericity. Binary stars which are both visual and spectroscopic binaries can also have their distance estimated by similar means, and do not suffer from the above geometric uncertainty. The characteristic common to these methods is that a measurement of angular motion is combined with a measurement of the absolute velocity (usually obtained via the Doppler effect ). The distance estimate comes from computing how far away the object must be to make its observed absolute velocity appear with the observed angular motion. Expansion parallaxes in particular can give fundamental distance estimates for objects that are very far, because supernova ejecta have large expansion velocities and large sizes (compared to stars). Further, they can be observed with radio interferometers , which can measure very small angular motions. These combine to provide fundamental distance estimates to supernovae in other galaxies. [ 9 ] Though valuable, such cases are quite rare, so they serve as important consistency checks on the distance ladder rather than workhorse steps by themselves. The parsec (symbol: pc) is a unit of length used to measure the large distances to astronomical objects outside the Solar System , approximately equal to 3.26 light-years or 206,265 astronomical units (AU), i.e. 30.9 trillion kilometres (19.2 trillion miles ). [ a ] The parsec unit is obtained by the use of parallax and trigonometry , and is defined as the distance at which 1 AU subtends an angle of one arcsecond [ 10 ] (1/3600 of a degree ). The nearest star, Proxima Centauri , is about 1.3 parsecs (4.2 light-years) from the Sun : from that distance, the gap between the Earth and the Sun spans slightly less than one arcsecond. [ 11 ] Most stars visible to the naked eye are within a few hundred parsecs of the Sun, with the most distant at a few thousand parsecs, and the Andromeda Galaxy at over 700,000 parsecs. [ 12 ] The word parsec is a shortened form of a distance corresponding to a parallax of one second , coined by the British astronomer Herbert Hall Turner in 1913. [ 13 ] The unit was introduced to simplify the calculation of astronomical distances from raw observational data. Partly for this reason, it is the unit preferred in astronomy and astrophysics , though in popular science texts and common usage the light-year remains prominent. Although parsecs are used for the shorter distances within the Milky Way , multiples of parsecs are required for the larger scales in the universe, including kiloparsecs (kpc) for the more distant objects within and around the Milky Way, megaparsecs (Mpc) for mid-distance galaxies, and gigaparsecs (Gpc) for many quasars and the most distant galaxies.
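The definition above fixes the parsec's length, and a few lines verify the quoted equivalents. A minimal sketch using the IAU value of the astronomical unit:

import math

AU_KM = 149_597_870.7            # astronomical unit in km (IAU 2012 definition)
arcsec = math.radians(1 / 3600)  # one arcsecond, in radians

pc_km = AU_KM / math.tan(arcsec)  # distance at which 1 AU subtends 1 arcsec
print(f"1 pc = {pc_km:.3e} km")               # ~3.086e13 km (30.9 trillion km)
print(f"1 pc = {pc_km / AU_KM:.0f} AU")       # ~206,265 AU
print(f"1 pc = {pc_km / 9.4607e12:.2f} ly")   # ~3.26 light-years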
Stellar parallax created by the relative motion between the Earth and a star can be seen, in the Copernican model, as arising from the orbit of the Earth around the Sun: the star only appears to move relative to more distant objects in the sky. In a geostatic model, the movement of the star would have to be taken as real, with the star oscillating across the sky with respect to the background stars. Stellar parallax is most often measured using annual parallax , defined as the difference in position of a star as seen from the Earth and Sun, i.e. the angle subtended at a star by the mean radius of the Earth's orbit around the Sun. The parsec (3.26 light-years ) is defined as the distance for which the annual parallax is 1 arcsecond . Annual parallax is normally measured by observing the position of a star at different times of the year as the Earth moves through its orbit. Measurement of annual parallax was the first reliable way to determine the distances to the closest stars. The first successful measurements of stellar parallax were made by Friedrich Bessel in 1838 for the star 61 Cygni using a heliometer . [ 16 ] Stellar parallax remains the standard for calibrating other measurement methods. Accurate calculations of distance based on stellar parallax require a measurement of the distance from the Earth to the Sun, now based on radar reflection off the surfaces of planets. [ 17 ] The angles involved in these calculations are very small and thus difficult to measure. The nearest star to the Sun (and thus the star with the largest parallax), Proxima Centauri , has a parallax of 0.7687 ± 0.0003 arcsec. [ 18 ] This angle is approximately that subtended by an object 2 centimeters in diameter located 5.3 kilometers away. The fact that stellar parallax was so small that it was unobservable at the time was used as the main scientific argument against heliocentrism during the early modern age. It is clear from Euclid 's geometry that the effect would be undetectable if the stars were far enough away, but for various reasons the gigantic distances involved seemed entirely implausible: it was one of Tycho 's principal objections to Copernican heliocentrism that, for it to be compatible with the lack of observable stellar parallax, there would have to be an enormous and unlikely void between the orbit of Saturn (then the most distant known planet) and the eighth sphere (the fixed stars). [ 20 ] In 1989, the satellite Hipparcos was launched primarily for obtaining improved parallaxes and proper motions for over 100,000 nearby stars, increasing the reach of the method tenfold. Even so, Hipparcos was only able to measure parallax angles for stars up to about 1,600 light-years away, a little more than one percent of the diameter of the Milky Way Galaxy . The European Space Agency 's Gaia mission , launched in December 2013, can measure parallax angles to an accuracy of 10 microarcseconds , thus mapping nearby stars (and potentially planets) up to a distance of tens of thousands of light-years from Earth. [ 21 ] [ 22 ] In April 2014, NASA astronomers reported that the Hubble Space Telescope , by using spatial scanning , can precisely measure distances up to 10,000 light-years away, a ten-fold improvement over earlier measurements. [ 19 ] Diurnal parallax is a parallax that varies with the rotation of the Earth or with a difference of location on the Earth. The Moon and, to a smaller extent, the terrestrial planets or asteroids seen from different viewing positions on the Earth (at one given moment) can appear differently placed against the background of fixed stars.
[ 23 ] [ 24 ] Diurnal parallax was used by John Flamsteed in 1672 to measure the distance to Mars at its opposition and, through that, to estimate the astronomical unit and the size of the Solar System . [ 25 ] Lunar parallax (often short for lunar horizontal parallax or lunar equatorial horizontal parallax ) is a special case of (diurnal) parallax: the Moon, being the nearest celestial body, has by far the largest maximum parallax of any celestial body, at times exceeding 1 degree. [ 26 ] The diagram for stellar parallax can illustrate lunar parallax as well if the diagram is taken to be scaled right down and slightly modified. Instead of 'near star', read 'Moon', and instead of taking the circle at the bottom of the diagram to represent the size of the Earth's orbit around the Sun, take it to be the size of the Earth's globe, and a circle around the Earth's surface. Then, the lunar (horizontal) parallax amounts to the difference in angular position, relative to the background of distant stars, of the Moon as seen from two different viewing positions on the Earth. One of the viewing positions is the place from which the Moon can be seen directly overhead at a given moment. That is, viewed along the vertical line in the diagram. The other viewing position is a place from which the Moon can be seen on the horizon at the same moment. That is, viewed along one of the diagonal lines, from an Earth-surface position corresponding roughly to one of the blue dots on the modified diagram. The lunar (horizontal) parallax can alternatively be defined as the angle subtended at the distance of the Moon by the radius of the Earth [ 27 ] [ 28 ] —equal to angle p in the diagram when scaled down and modified as mentioned above. The lunar horizontal parallax at any time depends on the linear distance of the Moon from the Earth. The Earth–Moon linear distance varies continuously as the Moon follows its perturbed and approximately elliptical orbit around the Earth. The range of the variation in linear distance is from about 56 to 63.7 Earth radii, corresponding to a horizontal parallax of about a degree of arc, but ranging from about 61.4' to about 54'. [ 26 ] The Astronomical Almanac and similar publications tabulate the lunar horizontal parallax and/or the linear distance of the Moon from the Earth on a periodic (e.g., daily) basis for the convenience of astronomers (and of celestial navigators), and the study of how this coordinate varies with time forms part of lunar theory . Parallax can also be used to determine the distance to the Moon . One way to determine the lunar parallax from one location is by using a lunar eclipse. A full shadow of the Earth on the Moon has an apparent radius of curvature equal to the difference between the apparent radii of the Earth and the Sun as seen from the Moon. This radius can be seen to be equal to 0.75 degrees, from which (with the solar apparent radius of 0.25 degrees) we get an Earth apparent radius of 1 degree. This yields for the Earth–Moon distance 60.27 Earth radii or 384,399 kilometres (238,854 mi). This procedure was first used by Aristarchus of Samos [ 29 ] and Hipparchus , and later found its way into the work of Ptolemy . [ 30 ] The diagram at the right shows how daily lunar parallax arises on the geocentric and geostatic planetary model, in which the Earth is at the center of the planetary system and does not rotate. It also illustrates the important point that parallax need not be caused by any motion of the observer, contrary to some definitions of parallax that say it is, but may arise purely from motion of the observed.
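The alternative definition above converts directly into a distance: the Earth's radius subtends the horizontal parallax p at the Moon, so d = R / sin(p). A minimal numerical sketch, with parallax values taken from the range quoted above:

import math

EARTH_RADIUS_KM = 6378.1  # equatorial radius

def moon_distance_km(parallax_arcmin):
    # Earth-Moon distance from the lunar horizontal parallax p:
    # the Earth's radius subtends p at the Moon, so d = R / sin(p).
    p = math.radians(parallax_arcmin / 60.0)
    return EARTH_RADIUS_KM / math.sin(p)

print(f"{moon_distance_km(57.0):,.0f} km")  # near the mean, ~385,000 km
print(f"{moon_distance_km(61.4):,.0f} km")  # largest quoted parallax, ~357,000 km
print(f"{moon_distance_km(54.0):,.0f} km")  # smallest quoted parallax, ~406,000 km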
Another method is to take two pictures of the Moon at the same time from two locations on Earth and compare the positions of the Moon relative to the stars. Using the orientation of the Earth, those two position measurements, and the distance between the two locations on the Earth, the distance to the Moon can be triangulated. This is the method referred to by Jules Verne in his 1865 novel From the Earth to the Moon : Until then, many people had no idea how one could calculate the distance separating the Moon from the Earth. The circumstance was exploited to teach them that this distance was obtained by measuring the parallax of the Moon. If the word parallax appeared to amaze them, they were told that it was the angle subtended by two straight lines running from both ends of the Earth's radius to the Moon. If they had doubts about the perfection of this method, they were immediately shown that not only did this mean distance amount to a whole two hundred thirty-four thousand three hundred and forty-seven miles (94,330 leagues) but also that the astronomers were not in error by more than seventy miles (≈ 30 leagues). After Copernicus proposed his heliocentric system , with the Earth in revolution around the Sun, it was possible to build a model of the whole Solar System without scale. To ascertain the scale, it is necessary only to measure one distance within the Solar System, e.g., the mean distance from the Earth to the Sun (now called an astronomical unit , or AU). When found by triangulation , this is referred to as the solar parallax , the difference in position of the Sun as seen from the Earth's center and a point one Earth radius away, i.e., the angle subtended at the Sun by the Earth's mean radius. Knowing the solar parallax and the mean Earth radius allows one to calculate the AU, the first, small step on the long road of establishing the size and expansion age [ 31 ] of the visible Universe. A primitive way to determine the distance to the Sun in terms of the distance to the Moon was already proposed by Aristarchus of Samos in his book On the Sizes and Distances of the Sun and Moon . He noted that the Sun, Moon, and Earth form a right triangle (with the right angle at the Moon) at the moment of first or last quarter moon . He then estimated that the Moon–Earth–Sun angle was 87°. Using correct geometry but inaccurate observational data, Aristarchus concluded that the Sun was slightly less than 20 times farther away than the Moon. The true value of this angle is close to 89° 50', and the Sun is actually about 390 times farther away. [ 29 ] Aristarchus pointed out that the Moon and Sun have nearly equal apparent angular sizes , and therefore their diameters must be in proportion to their distances from Earth. He thus concluded that the Sun was around 20 times larger than the Moon. This conclusion, although incorrect, follows logically from his incorrect data. It suggests that the Sun is larger than the Earth, which could be taken to support the heliocentric model. [ 32 ] Although Aristarchus' results were incorrect due to observational errors, they were based on correct geometric principles of parallax, and became the basis for estimates of the size of the Solar System for almost 2000 years, until the transit of Venus was correctly observed in 1761 and 1769. [ 29 ]
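Aristarchus's construction reduces to one line of trigonometry: with the right angle at the Moon, cos(Moon–Earth–Sun angle) = (Earth–Moon distance) / (Earth–Sun distance). A minimal sketch reproducing the numbers above:

import math

def sun_to_moon_distance_ratio(moon_earth_sun_deg):
    # Right triangle with the right angle at the Moon at quarter phase:
    # cos(angle at Earth) = (Earth-Moon)/(Earth-Sun), so the ratio of the
    # two distances is 1 / cos(angle).
    return 1.0 / math.cos(math.radians(moon_earth_sun_deg))

print(f"{sun_to_moon_distance_ratio(87.0):.1f}")  # Aristarchus's data: ~19.1, "slightly less than 20"
print(f"{149.6e6 / 384_400:.0f}")                 # modern mean-distance ratio: ~389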
This method of measuring the solar parallax from transits of Venus was proposed by Edmond Halley in 1716, although he did not live to see the results. The use of Venus transits was less successful than had been hoped due to the black drop effect , but the resulting estimate, 153 million kilometers, is just 2% above the currently accepted value, 149.6 million kilometers. Much later, the Solar System was "scaled" using the parallax of asteroids , some of which, such as Eros , pass much closer to Earth than Venus. In a favorable opposition, Eros can approach the Earth to within 22 million kilometers. [ 33 ] During the opposition of 1900–1901, a worldwide program was launched to make parallax measurements of Eros to determine the solar parallax [ 34 ] (or distance to the Sun), with the results published in 1910 by Arthur Hinks of Cambridge [ 35 ] and Charles D. Perrine of the Lick Observatory , University of California . [ 36 ] Perrine published progress reports in 1906 [ 37 ] and 1908. [ 38 ] He took 965 photographs with the Crossley Reflector and selected 525 for measurement. [ 39 ] A similar program was then carried out, during a closer approach, in 1930–1931 by Harold Spencer Jones . [ 40 ] The value of the astronomical unit (roughly the Earth–Sun distance) obtained by this program was considered definitive until 1968, when radar and dynamical parallax methods started producing more precise measurements. Radar reflections, both off Venus (1958) and off asteroids such as Icarus , have also been used for solar parallax determination. Today, the use of spacecraft telemetry links has solved this old problem. The currently accepted value of the solar parallax is 8.794 143 arcseconds. [ 41 ] The open stellar cluster Hyades in Taurus extends over such a large part of the sky, 20 degrees, that the proper motions as derived from astrometry appear to converge with some precision to a perspective point north of Orion. Combining the observed apparent (angular) proper motion in seconds of arc with the observed true (absolute) receding motion, as witnessed by the Doppler redshift of the stellar spectral lines, allows estimation of the distance to the cluster (151 light-years) and its member stars in much the same way as using annual parallax. [ 42 ] Dynamical parallax has sometimes also been used to determine the distance to a supernova when the optical wavefront of the outburst is seen to propagate through the surrounding dust clouds at an apparent angular velocity, while its true propagation velocity is known to be the speed of light . [ 43 ] From enhanced relativistic positioning systems, a spatio-temporal parallax generalizing the usual notion of parallax in space only has been developed. Event fields in spacetime can then be deduced directly, without intermediate models of light bending by massive bodies such as those used in the PPN formalism , for instance. [ 44 ] Two related techniques can determine the mean distances of stars by modelling the motions of stars. Both are referred to as statistical parallaxes, or individually called secular parallaxes and classical statistical parallaxes. The motion of the Sun through space provides a longer baseline that will increase the accuracy of parallax measurements, known as secular parallax . For stars in the Milky Way disk, this corresponds to a mean baseline of 4 AU per year. For halo stars the baseline is 40 AU per year. After several decades, the baseline can be orders of magnitude greater than the Earth–Sun baseline used for traditional parallax.
Secular parallax introduces a higher level of uncertainty, because the relative velocity of other stars is an additional unknown. When applied to samples of multiple stars, the uncertainty can be reduced; the precision is inversely proportional to the square root of the sample size. [ 8 ] The mean parallaxes and distances of a large group of stars can be estimated from their radial velocities and proper motions . This is known as a classical statistical parallax . The motions of the stars are modelled to statistically reproduce the velocity dispersion based on their distance. [ 8 ] [ 45 ] In astronomy, the term "parallax" has also come to mean a method of estimating distances that does not necessarily rely on a true parallax.
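The expansion-parallax idea mentioned earlier in this article also fits in a few lines of code: if a shell's true expansion speed is known from Doppler measurements and its angular growth rate is measured, the distance follows as d = v / ω. A minimal sketch with illustrative numbers (not data for any real object), assuming spherical expansion:

import math

ARCSEC_TO_RAD = math.radians(1.0 / 3600.0)
KM_PER_PC = 3.0857e13
SECONDS_PER_YEAR = 3.156e7

def expansion_parallax_pc(v_exp_km_s, growth_arcsec_per_yr):
    # The transverse expansion speed equals distance times angular rate,
    # so d = v / omega (spherical expansion assumed).
    omega_rad_per_s = growth_arcsec_per_yr * ARCSEC_TO_RAD / SECONDS_PER_YEAR
    return v_exp_km_s / omega_rad_per_s / KM_PER_PC

# A shell expanding at 1,000 km/s whose radius grows by 0.1 arcsec per year:
print(f"{expansion_parallax_pc(1000.0, 0.1):,.0f} pc")  # ~2,100 pc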
https://en.wikipedia.org/wiki/Parallax_in_astronomy
In geometry , parallel lines are coplanar infinite straight lines that do not intersect at any point. Parallel planes are planes in the same three-dimensional space that never meet. Parallel curves are curves that do not touch each other or intersect and keep a fixed minimum distance. In three-dimensional Euclidean space, a line and a plane that do not share a point are also said to be parallel. However, two noncoplanar lines are called skew lines . Line segments and Euclidean vectors are parallel if they have the same direction or opposite direction (not necessarily the same length). [ 1 ] Parallel lines are the subject of Euclid 's parallel postulate . [ 2 ] Parallelism is primarily a property of affine geometries and Euclidean geometry is a special instance of this type of geometry. In some other geometries, such as hyperbolic geometry , lines can have analogous properties that are referred to as parallelism. The parallel symbol is ∥ . [ 3 ] [ 4 ] For example, AB ∥ CD indicates that line AB is parallel to line CD . In the Unicode character set, the "parallel" and "not parallel" signs have codepoints U+2225 (∥) and U+2226 (∦), respectively. In addition, U+22D5 (⋕) represents the relation "equal and parallel to". [ 5 ] Given parallel straight lines l and m in Euclidean space , the following properties are equivalent: (1) every point on line m is located at exactly the same (minimum) distance from line l (equidistant lines); (2) line m is in the same plane as line l but does not intersect l ; (3) when lines m and l are both intersected by a third straight line (a transversal ) in the same plane, the corresponding angles of intersection with the transversal are congruent. Since these are equivalent properties, any one of them could be taken as the definition of parallel lines in Euclidean space, but the first and third properties involve measurement, and so, are "more complicated" than the second. Thus, the second property is the one usually chosen as the defining property of parallel lines in Euclidean geometry. [ 6 ] The other properties are then consequences of Euclid's Parallel Postulate . The definition of parallel lines as a pair of straight lines in a plane which do not meet appears as Definition 23 in Book I of Euclid's Elements . [ 7 ] Alternative definitions were discussed by other Greeks, often as part of an attempt to prove the parallel postulate . Proclus attributes a definition of parallel lines as equidistant lines to Posidonius and quotes Geminus in a similar vein. Simplicius also mentions Posidonius' definition as well as its modification by the philosopher Aganis. [ 7 ] At the end of the nineteenth century, in England, Euclid's Elements was still the standard textbook in secondary schools. The traditional treatment of geometry was being pressured to change by the new developments in projective geometry and non-Euclidean geometry , so several new textbooks for the teaching of geometry were written at this time. A major difference between these reform texts, both between themselves and between them and Euclid, is the treatment of parallel lines. [ 8 ] These reform texts were not without their critics and one of them, Charles Dodgson (a.k.a. Lewis Carroll ), wrote a play, Euclid and His Modern Rivals , in which these texts are lambasted. [ 9 ] One of the early reform textbooks was James Maurice Wilson's Elementary Geometry of 1868. [ 10 ] Wilson based his definition of parallel lines on the primitive notion of direction . According to Wilhelm Killing [ 11 ] the idea may be traced back to Leibniz . [ 12 ] Wilson, without defining direction since it is a primitive, uses the term in other definitions such as his sixth definition, "Two straight lines that meet one another have different directions, and the difference of their directions is the angle between them."
Wilson (1868 , p. 2) In definition 15 he introduces parallel lines in this way; "Straight lines which have the same direction , but are not parts of the same straight line, are called parallel lines ." Wilson (1868 , p. 12) Augustus De Morgan reviewed this text and declared it a failure, primarily on the basis of this definition and the way Wilson used it to prove things about parallel lines. Dodgson also devotes a large section of his play (Act II, Scene VI § 1) to denouncing Wilson's treatment of parallels. Wilson edited this concept out of the third and higher editions of his text. [ 13 ] Other properties, proposed by other reformers, used as replacements for the definition of parallel lines, did not fare much better. The main difficulty, as pointed out by Dodgson, was that to use them in this way required additional axioms to be added to the system. The equidistant line definition of Posidonius, expounded by Francis Cuthbertson in his 1874 text Euclidean Geometry suffers from the problem that the points that are found at a fixed given distance on one side of a straight line must be shown to form a straight line. This cannot be proved and must be assumed to be true. [ 14 ] The corresponding angles formed by a transversal property, used by W. D. Cooley in his 1860 text, The Elements of Geometry, simplified and explained requires a proof of the fact that if one transversal meets a pair of lines in congruent corresponding angles then all transversals must do so. Again, a new axiom is needed to justify this statement. The three properties above lead to three different methods of construction [ 15 ] of parallel lines. Because parallel lines in a Euclidean plane are equidistant there is a unique distance between the two parallel lines. Given the equations of two non-vertical, non-horizontal parallel lines, y = mx + b₁ and y = mx + b₂, the distance between the two lines can be found by locating two points (one on each line) that lie on a common perpendicular to the parallel lines and calculating the distance between them. Since the lines have slope m , a common perpendicular would have slope −1/ m and we can take the line with equation y = − x / m as a common perpendicular. Solve the linear systems { y = mx + b₁, y = − x / m } and { y = mx + b₂, y = − x / m } to get the coordinates of the points. The solutions to the linear systems are the points ( − b₁ m /( m ² + 1), b₁/( m ² + 1) ) and ( − b₂ m /( m ² + 1), b₂/( m ² + 1) ). These formulas still give the correct point coordinates even if the parallel lines are horizontal (i.e., m = 0). The distance between the points is d = √( ( m ( b₁ − b₂)/( m ² + 1) )² + ( ( b₂ − b₁)/( m ² + 1) )² ), which reduces to d = | b₂ − b₁ | / √( m ² + 1). When the lines are given by the general form of the equation of a line (horizontal and vertical lines are included), ax + by + c₁ = 0 and ax + by + c₂ = 0, their distance can be expressed as d = | c₂ − c₁ | / √( a ² + b ²). Two lines in the same three-dimensional space that do not intersect need not be parallel. Only if they are in a common plane are they called parallel; otherwise they are called skew lines . Two distinct lines l and m in three-dimensional space are parallel if and only if the distance from a point P on line m to the nearest point on line l is independent of the location of P on line m . This never holds for skew lines. A line m and a plane q in three-dimensional space, the line not lying in that plane, are parallel if and only if they do not intersect. Equivalently, they are parallel if and only if the distance from a point P on line m to the nearest point in plane q is independent of the location of P on line m . Similar to the fact that parallel lines must be located in the same plane, parallel planes must be situated in the same three-dimensional space and contain no point in common.
Two distinct planes q and r are parallel if and only if the distance from a point P in plane q to the nearest point in plane r is independent of the location of P in plane q . This will never hold if the two planes are not in the same three-dimensional space. In non-Euclidean geometry , the concept of a straight line is replaced by the more general concept of a geodesic , a curve which is locally straight with respect to the metric (definition of distance) on a Riemannian manifold , a surface (or higher-dimensional space) which may itself be curved. In general relativity , particles not under the influence of external forces follow geodesics in spacetime , a four-dimensional manifold with 3 spatial dimensions and 1 time dimension. [ 16 ] In non-Euclidean geometry ( elliptic or hyperbolic geometry ) the three Euclidean properties mentioned above are not equivalent and only the second one (Line m is in the same plane as line l but does not intersect l) is useful in non-Euclidean geometries, since it involves no measurements. In general geometry the three properties above give three different types of curves, equidistant curves , parallel geodesics and geodesics sharing a common perpendicular , respectively. While in Euclidean geometry two geodesics can either intersect or be parallel, in hyperbolic geometry, there are three possibilities. Two geodesics belonging to the same plane can either be: intersecting , if they cross in a common point of the plane; parallel ( limiting parallel ), if they do not intersect in the plane but converge to a common ideal point at infinity; or ultra parallel , if they share no common point, not even at infinity. In the literature ultra parallel geodesics are often called non-intersecting . Geodesics intersecting at infinity are called limiting parallel . As in the illustration, through a point a not on line l there are two limiting parallel lines, one for each direction ideal point of line l. They separate the lines intersecting line l and those that are ultra parallel to line l . Ultra parallel lines have a single common perpendicular ( ultraparallel theorem ), and diverge on both sides of this common perpendicular. In spherical geometry , all geodesics are great circles . Great circles divide the sphere into two equal hemispheres and all great circles intersect each other. Thus, there are no parallel geodesics to a given geodesic, as all geodesics intersect. Equidistant curves on the sphere are called parallels of latitude analogous to the latitude lines on a globe. Parallels of latitude can be generated by the intersection of the sphere with a plane parallel to a plane through the center of the sphere. If l, m, n are three distinct lines, then l ∥ m ∧ m ∥ n ⟹ l ∥ n . In this case, parallelism is a transitive relation . However, in case l = n , the superimposed lines are not considered parallel in Euclidean geometry. The binary relation between parallel lines is evidently a symmetric relation . According to Euclid's tenets, parallelism is not a reflexive relation and thus fails to be an equivalence relation . Nevertheless, in affine geometry a pencil of parallel lines is taken as an equivalence class in the set of lines where parallelism is an equivalence relation. [ 18 ] [ 19 ] [ 20 ] To this end, Emil Artin (1957) adopted a definition of parallelism where two lines are parallel if they have all or none of their points in common. [ 21 ] Then a line is parallel to itself so that the reflexive and transitive properties belong to this type of parallelism, creating an equivalence relation on the set of lines. In the study of incidence geometry , this variant of parallelism is used in the affine plane .
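The distance formulas derived above translate directly into code. A minimal sketch for lines given in the general form ax + by + c = 0, assuming both lines share the same a and b coefficients; the function name is illustrative:

```cpp
#include <cmath>
#include <cstdio>

// Distance between parallel lines a*x + b*y + c1 = 0 and a*x + b*y + c2 = 0,
// using d = |c2 - c1| / sqrt(a^2 + b^2) as derived above.
double parallel_line_distance(double a, double b, double c1, double c2) {
    return std::fabs(c2 - c1) / std::hypot(a, b);
}

int main() {
    // y = 2x + 1 and y = 2x + 6, rewritten as -2x + y - 1 = 0 and -2x + y - 6 = 0.
    std::printf("%f\n", parallel_line_distance(-2.0, 1.0, -1.0, -6.0)); // ~2.236
    return 0;
}
```

The same result follows from the slope form, |b₂ − b₁| / √(m² + 1) = 5/√5 ≈ 2.236.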
https://en.wikipedia.org/wiki/Parallel_(geometry)
The parallel operator ‖ (pronounced "parallel", [ 1 ] following the parallel lines notation from geometry ; [ 2 ] [ 3 ] also known as reduced sum , parallel sum or parallel addition ) is a binary operation which is used as a shorthand in electrical engineering , [ 4 ] [ 5 ] [ 6 ] [ nb 1 ] but is also used in kinetics , fluid mechanics and financial mathematics . [ 7 ] [ 8 ] The name parallel comes from the use of the operator computing the combined resistance of resistors in parallel . The parallel operator represents the reciprocal value of a sum of reciprocal values (sometimes also referred to as the "reciprocal formula" or " harmonic sum") and is defined by: [ 9 ] [ 6 ] [ 10 ] [ 11 ] a ∥ b := 1/(1/ a + 1/ b ) = ab /( a + b ), where a , b , and a ∥ b are elements of the extended complex numbers C̄ = C ∪ { ∞ }. [ 12 ] [ 13 ] The operator gives half of the harmonic mean of two numbers a and b . [ 7 ] [ 8 ] As a special case, for any number a ∈ C̄: a ∥ a = a /2. Further, for all distinct positive real numbers a and b , (1/2) min( a , b ) < | a ∥ b | < min( a , b ), with | a ∥ b | representing the absolute value of a ∥ b , and min( x , y ) meaning the minimum (least element) among x and y . The concept has been extended from a scalar operation to matrices [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] and further generalized . [ 19 ] The operator was originally introduced as reduced sum by Sundaram Seshu in 1956, [ 20 ] [ 21 ] [ 14 ] studied as operator ∗ by Kent E. Erickson in 1959, [ 22 ] [ 23 ] [ 14 ] and popularized by Richard James Duffin and William Niles Anderson, Jr. as the parallel addition or parallel sum operator, written as a colon ( : ), in mathematics and network theory since 1966. [ 15 ] [ 16 ] [ 1 ] While some authors continue to use this symbol up to the present, [ 7 ] [ 8 ] others have chosen different symbols; for example, Sujit Kumar Mitra used ∙ as a symbol in 1970. [ 14 ] In applied electronics , a ∥ sign became more common as the operator's symbol around 1974. [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ nb 1 ] [ nb 2 ] This was often written as a doubled vertical line (||) available in most character sets (sometimes italicized as // [ 29 ] [ 30 ] ), but now can be represented using the Unicode character U+2225 ( ∥ ) for "parallel to". In LaTeX and related markup languages, the macros \| and \parallel are often used (and rarely \smallparallel is used) to denote the operator's symbol. Let C̃ represent the extended complex plane excluding zero, C̃ := C ∪ { ∞ } ∖ { 0 }, and φ the bijective function from C to C̃ such that φ ( z ) = 1/ z . One has the identities φ ( a + b ) = φ ( a ) ∥ φ ( b ) and φ ( a ⋅ b ) = φ ( a ) ⋅ φ ( b ). This implies immediately that C̃ is a field where the parallel operator takes the place of the addition, and that this field is isomorphic to C .
The following properties may be obtained by translating through φ the corresponding properties of the complex numbers. As for any field, ( C̃ , ∥ , ⋅ ) satisfies a variety of basic identities. It is commutative under parallel and multiplication: a ∥ b = b ∥ a and ab = ba . It is associative under parallel and multiplication: [ 12 ] [ 7 ] [ 8 ] ( a ∥ b ) ∥ c = a ∥ ( b ∥ c ) and ( ab ) c = a ( bc ). Both operations have an identity element ; for parallel the identity is ∞ while for multiplication the identity is 1: a ∥ ∞ = a and a ⋅ 1 = a . Every element a of C̃ has an inverse under parallel, equal to − a , the additive inverse under addition: a ∥ (− a ) = ∞. (But 0 has no inverse under parallel.) The identity element ∞ is its own inverse, ∞ ∥ ∞ = ∞. Every element a ≠ ∞ of C̃ has a multiplicative inverse a ⁻¹ = 1/ a : a ⋅ a ⁻¹ = 1. Multiplication is distributive over parallel: [ 1 ] [ 7 ] [ 8 ] k ( a ∥ b ) = ka ∥ kb . Repeated parallel is equivalent to division, a ∥ a ∥ ⋯ ∥ a ( n terms) = a / n . Or, multiplying both sides by n , n ( a ∥ a ∥ ⋯ ∥ a ) = a . Unlike for repeated addition , this does not commute: a / n ≠ n / a in general. Using the distributive property twice, the product of two parallel binomials can be expanded as ( a ∥ b )( c ∥ d ) = ac ∥ ad ∥ bc ∥ bd . The square of a binomial is ( a ∥ b )² = a ² ∥ ( ab /2) ∥ b ². The cube of a binomial is ( a ∥ b )³ = a ³ ∥ ( a ² b /3) ∥ ( ab ²/3) ∥ b ³. In general, the n th power of a binomial can be expanded using binomial coefficients which are the reciprocal of those under addition, resulting in an analog of the binomial formula: ( a ∥ b )ⁿ is the parallel sum of the terms a ⁿ⁻ᵏ b ᵏ / C( n , k ) for k = 0, …, n , where C( n , k ) is the ordinary binomial coefficient. As with a polynomial under addition, a parallel polynomial with coefficients a k in C̃ (with a ₀ ≠ ∞) can be factored into a product of monomials: a ₀ x ⁿ ∥ a ₁ x ⁿ⁻¹ ∥ ⋯ ∥ a ₙ = a ₀ ( x ∥ (− r ₁)) ⋯ ( x ∥ (− r ₙ)) for some roots r k (possibly repeated) in C̃ . Analogous to polynomials under addition, the polynomial equation a ₀ x ⁿ ∥ a ₁ x ⁿ⁻¹ ∥ ⋯ ∥ a ₙ = ∞ implies that x = r k for some k . A linear equation ax ∥ b = c can be easily solved via the parallel inverse: x = ( c ∥ (− b ))/ a . To solve a parallel quadratic equation , complete the square to obtain an analog of the quadratic formula. The extended complex numbers including zero, C̄ := C ∪ { ∞ }, is no longer a field under parallel and multiplication, because 0 has no inverse under parallel. (This is analogous to the way ( C̄ , +, ⋅ ) is not a field because ∞ has no additive inverse.) For every non-zero a , a ∥ 0 = 0. The quantity 0 ∥ (−0) = 0 ∥ 0 can either be left undefined (see indeterminate form ) or defined to equal 0. In the absence of parentheses, the parallel operator is defined as taking precedence over addition or subtraction, similar to multiplication. [ 1 ] [ 31 ] [ 9 ] [ 10 ] There are applications of the parallel operator in mechanics, electronics, optics, and the study of periodicity: Given masses m and M , the reduced mass μ = mM /( m + M ) = m ∥ M is frequently applied in mechanics. For instance, when the masses orbit each other, the moment of inertia is their reduced mass times the square of the distance between them.
In electrical engineering , the parallel operator can be used to calculate the total impedance of various serial and parallel electrical circuits. [ nb 2 ] There is a duality between the usual (series) sum and the parallel sum. [ 7 ] [ 8 ] For instance, the total resistance of resistors connected in parallel is the reciprocal of the sum of the reciprocals of the individual resistors . Likewise for the total capacitance of serial capacitors . [ nb 2 ] The coalesced density function f coalesced ( x ) of n independent probability density functions f 1 ( x ), f 2 ( x ), …, f n ( x ) is equal to the reciprocal of the sum of the reciprocal densities. [ 32 ] In geometric optics , the parallel operator appears in the thin lens approximation to the lens maker's equation : the focal length is the parallel sum of the object and image distances, f = S 1 ∥ S 2 . The time between conjunctions of two orbiting bodies is called the synodic period . If the period of the slower body is T 2 , and the period of the faster is T 1 , then the synodic period is T syn = 1/(1/ T 1 − 1/ T 2 ) = T 1 ∥ (− T 2 ). Suggested already by Kent E. Erickson as a subroutine in digital computers in 1959, [ 22 ] the parallel operator is implemented as a keyboard operator on the Reverse Polish Notation (RPN) scientific calculators WP 34S since 2008 [ 33 ] [ 34 ] [ 35 ] as well as on the WP 34C [ 36 ] and WP 43S since 2015, [ 37 ] [ 38 ] allowing even cascaded problems to be solved with few keystrokes, like 270 ↵ Enter 180 ∥ 120 ∥ . Given a field F there are two embeddings of F into the projective line P( F ): z → [ z : 1] and z → [1 : z ]. These embeddings overlap except for [0:1] and [1:0]. The parallel operator relates the addition operation between the embeddings. In fact, the homographies on the projective line are represented by 2 × 2 matrices M(2, F ), and the field operations (+ and ×) are extended to homographies. Each embedding has its addition a + b represented by matrix multiplications in M(2, F ). The two matrix products show that there are two subgroups of M(2, F ) isomorphic to ( F ,+), the additive group of F . Depending on which embedding is used, one operation is +, the other is ∥ .
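To make the operator concrete, here is a minimal sketch of the scalar case, reproducing the cascaded calculator example quoted above (270 ∥ 180 ∥ 120); the helper name par is illustrative and the a + b = 0 case is left unhandled:

```cpp
#include <cstdio>

// Parallel operator a ∥ b = ab / (a + b), the reciprocal of the sum of
// reciprocals, as defined above. (No handling of the a + b == 0 case.)
double par(double a, double b) {
    return (a * b) / (a + b);
}

int main() {
    // The cascaded resistor example from the RPN-calculator passage.
    double r = par(par(270.0, 180.0), 120.0);
    std::printf("270 || 180 || 120 = %.4f ohms\n", r); // ~56.8421

    // Half the harmonic mean: a ∥ a = a / 2.
    std::printf("100 || 100 = %.1f\n", par(100.0, 100.0)); // 50.0
    return 0;
}
```

Since the operator is associative, the grouping of the cascaded calls does not affect the result.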
https://en.wikipedia.org/wiki/Parallel_(operator)
Parallel I/O , in the context of a computer , means the performance of multiple input/output operations at the same time, for instance simultaneous output to storage devices and display devices. [ 1 ] It is a fundamental feature of operating systems . [ 2 ] One particular instance is parallel writing of data to disk; when file data is spread across multiple disks, for example in a RAID array, one can store multiple parts of the data at the same time, thereby achieving higher write speeds than with a single device. [ 3 ] [ 4 ] Other approaches to parallel data access include the Parallel Virtual File System , Lustre , GFS, etc. These are used for scientific computing rather than for databases. Support is broken up into multiple layers, including a high-level I/O library, a middleware layer and a parallel file system. [ 5 ] The parallel file system manages the single view, maintains the logical space and provides access to data files. [ 6 ] A single file may be striped across one or more object storage targets, which increases the bandwidth available when accessing the file as well as the available disk space. [ 7 ] The caches are larger in parallel I/O and shared through distributed memory systems. [ 8 ] [ 9 ] [ 10 ] [ 11 ] Companies have been running parallel I/O on their servers to achieve results with regard to price and performance. Parallel processing is especially critical for scientific calculations where applications are not only CPU bound but also I/O bound. [ 12 ]
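As a toy illustration of the striping idea described above, the following sketch splits a buffer into chunks and writes each chunk from its own thread; the stripe file names stand in for separate devices and are purely hypothetical:

```cpp
#include <cstddef>
#include <fstream>
#include <string>
#include <thread>
#include <vector>

// Split the buffer into `stripes` chunks and write each chunk concurrently,
// mimicking how striping file data across devices allows parallel writes.
void striped_write(const std::vector<char>& buf, int stripes) {
    std::vector<std::thread> workers;
    std::size_t chunk = buf.size() / stripes;
    for (int i = 0; i < stripes; ++i) {
        workers.emplace_back([&buf, chunk, i, stripes] {
            std::size_t begin = i * chunk;
            std::size_t end = (i == stripes - 1) ? buf.size() : begin + chunk;
            // "stripe0.dat", "stripe1.dat", ... stand in for separate disks.
            std::ofstream out("stripe" + std::to_string(i) + ".dat",
                              std::ios::binary);
            out.write(buf.data() + begin,
                      static_cast<std::streamsize>(end - begin));
        });
    }
    for (auto& w : workers) w.join();  // wait until every stripe is written
}
```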
https://en.wikipedia.org/wiki/Parallel_I/O
The Parallel Patterns Library is a Microsoft library designed for use by native C++ developers that provides features for multicore programming . [ 1 ] It was first bundled with Visual Studio 2010 . It resembles the C++ Standard Library in style and works well with lambdas, the C++11 language feature also introduced with Visual Studio 2010 . For example, a sequential for loop can be made into a parallel loop by replacing the for with a parallel_for , as in the sketch below. This still requires the developer to know that the loop is parallelizable, but all the other work is done by the library. MSDN [ 2 ] describes the Parallel Patterns Library as an "imperative programming model that promotes scalability and ease-of-use for developing concurrent applications." It uses the Concurrency Runtime for scheduling and resource management and provides generic, type-safe algorithms and containers for use in parallel applications.
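A minimal example of the transformation described above, using concurrency::parallel_for from <ppl.h>; the loop body is illustrative:

```cpp
#include <ppl.h>   // Parallel Patterns Library (Visual C++)
#include <vector>

int main() {
    std::vector<int> v(1000, 0);

    // Sequential loop:
    for (int i = 0; i < 1000; ++i) {
        v[i] = i * i;
    }

    // The same loop with the PPL: the for becomes parallel_for and the body
    // becomes a lambda; iterations may now run on multiple cores.
    concurrency::parallel_for(0, 1000, [&v](int i) {
        v[i] = i * i;
    });
    return 0;
}
```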
https://en.wikipedia.org/wiki/Parallel_Patterns_Library
Parallel adoption is a method for transitioning from a previous ( IT ) system to a target (IT) system in an organization. In order to reduce risk, the old and new system run simultaneously for some period of time, after which, if the criteria for the new system are met, the old system is disabled. The process requires careful planning and control and a significant investment in labor hours. This entry focuses on the generic process of parallel adoption. Real-world examples are used where necessary to provide a more meaningful interpretation of the process. Additionally, a process-data model is utilized to visualize the process, aiming to offer a comprehensive overview of all the steps involved in parallel adoption. However, the emphasis will be on the unique characteristics of parallel adoption. Some common characteristics, particularly those related to defining an implementation strategy, that apply to all four generic types of adoption are described in Adoption (software implementation) . Besides parallel adoption, three other generic kinds of adoption can be identified. The choice for a specific adoption method depends on the organizational characteristics; more insight on this topic will be provided below. The three other adoption methods are big bang adoption, phased adoption and pilot conversion. In some cases, parallel conversion may not be a suitable strategy. For example, if the new system has significant schema changes that result in data elements not being populated correctly, it can lead to data inaccuracies or corruption. Additionally, if the system relies on commercial off-the-shelf technology (COTS) and the vendor's documentation specifies that multiple applications cannot share the same database, parallel conversion may not be feasible. For instance, products like Oracle 's Siebel may have such restrictions. Similarly, other COTS products may have limitations when it comes to patches or major upgrades that require unique license keys, potentially causing issues with database changes and system functionality. There seem to be few conventions regarding the process of parallel adoption. Several sources (e.g.: Turban, 2002, Eason, 1988, Rooijmans, 2003, Brown, 1999) do not use a single process-description name. The term parallel adoption is denoted in these sources, although consistently within each source, as: parallel conversion, parallel running, shadow-running, parallel cutover and parallel implementation. This appears to be the case because a generic description of the process does not need a distinct classification. There are quite a few standard implementation methods in which different adoption techniques are described, but often in a practical context: a real-world case scenario or a more comprehensive set of implementation techniques like Regatta: adoption method , SIM and PRINCE2 . In general, parallel adoption can best be seen as a Systems Engineering method of implementation of a new system. In principle, the parallel adoption method is distinct from the decision to change a system in an organization and can be seen as one possible means to achieve that goal. However, there are quite a few factors that are taken into account in determining the best implementation strategy. Moreover, a successful implementation can depend to a large extent on the adoption method. (Lee, 2004) The parallel adoption process cannot be represented without paying attention to the steps before the actual conversion, namely the construction of a conversion scenario and the identification and testing of all the requirements .
Therefore, the process is explained by going through all the identified processes in figure 1, while briefly addressing the common activities that are necessary for any of the identified conversion strategies. Figure 1 gives an overview of the parallel adoption process. The left side depicts the flow of activities that contribute to the process. Activities that run simultaneously are preceded by a thick black line. When the parallel running of activities is over, the activities are joined again in a similar black line. When there is no arrow from an activity to another, this indicates that they are aggregates of a bigger activity above. The activities are divided into four main phases. The main phases are subdivided into other activities that will be described briefly in tables 1-1 to 1-4. The right side of the model describes the data involved in the processes. Some of these concepts, depicted as a pair of overlapping open rectangles, can be subdivided into more than one concept. A pair of overlapping closed rectangles indicates a closed concept, which means that it can be subdivided into more concepts, but this is not of further interest for the parallel adoption process. The diamond-shaped figure indicates that the concept linked to it serves as an aggregate concept and that this concept consists of the other concepts. Finally, the open arrow represents a super class-subclass relation. The concept linked with the arrow is the super class of the concepts that are linked to it. This syntax in figure 1 is according to Unified Modeling Language ( UML ) standards. The concepts in figure 1 are defined in table 2, with more context for the sub-activities given underneath the tables. For example, a system is defined there as the principal functioning entities comprising the product, e.g. hardware and software, or as an organized and disciplined approach to accomplish a task, e.g. a failure reporting system (ISO 9000). The parallel adoption is preceded by determining the implementation strategy, which is not unique to parallel adoption, but can be seen as part of the change management process that an organization enters (Lee, 2004). Some factors involved in determining an implementation strategy regarding adoption methods are described more thoroughly in Adoption (software implementation) . The reason for an organization to choose parallel adoption in favour of a pilot conversion, big bang or phased adoption is often a trade-off between costs and risk (Andersson, Hanson, 2003). Parallel adoption is the most expensive adoption method (Chng, Vathanopas, 2002, Microsoft, 2004, Anderson et al., 2003), because it demands from the organization that two systems run in parallel for a certain period. Running two systems simultaneously means that an investment in human resources has to be made, not only for the conversion process itself, but also in training personnel to handle the new system; the (extra) personnel need good preparation for a stressful period of parallel running in which procedures cross each other (Rooijmans, 2003, Eason, 1988). Effort should be placed on data consistency and preventing data corruption between the two systems (Chng et al. 2002, Yusuf, 2004). When it is necessary for the new system to be implemented following a big bang approach, the risk of failure is high (Lee, 2004).
When the organization depends heavily on the old (legacy) system being changed, the trade-off between the extra costs of a less risky parallel approach and that risk should be decided in favour of the extra costs (Lee, 2004). Despite this, we see that ERP adoption follows a big bang approach in most cases (Microsoft, 2004, Yusuf, 2004). This means that an organization should think clearly about its implementation strategy and integrate this decision in its Risk management or Change management analysis. To prepare the organization properly, a requirements analysis of both IT requirements and organizational requirements is necessary. More information on requirements analysis and change management can be found elsewhere. For parallel adoption, the most important IT requirement (if applicable) is attention to running the two systems simultaneously. In the conversion phase there is a timeslot where the old system is the leading system. In order to transfer the data from the old system to the new system during the catch-up period, there must be a transition module available (Microsoft, 2004). Other implementation methods do not directly have this requirement. More information about IT requirements can be found in Software Engineering . Besides the IT requirements, the organizational requirements involve Human Resource Management issues like the training of personnel , dealing with a perhaps changing organizational structure , the organic organisation or Mechanistic organisation characteristics of the organization (Daft, 1998) and, most importantly, top management support (Brown, Vessey, 1999). Brown et al. (1999) identify two distinct roles top management can initiate: the so-called sponsor and champion roles. A parallel adoption process is very stressful and requires well-prepared employees who can deal with mistakes that are being made, without conservatively falling back on the old system (Eason, 1988). It is very important to have a detailed plan for introducing the new system into the organization (Lee, 2004, Eason, 1988). The most important thing about time planning for a parallel conversion is not to rush things and not to be afraid of possible delays in the actual conversion phase (Lee, 2004). It can also be very beneficial to work with clearly defined milestones (Rooijmans, 2003), similar to the PRINCE2 method. More information on time planning can be found in Planning and Strategic planning . The requirements evaluation involves redefining the implementation script. The IT and (if possible) organizational requirements that were defined should be tested. Some tests can be run in which the organizational responsibilities can be evaluated (Rooijmans, 2003), as well as the IT requirements. Here it is again important to have top-management support and involvement (Eason, 1988). If they do not make resources available to evaluate, the implementation can be unsuccessful as a direct consequence. After this evaluation the implementation script is redefined into a more explicit conversion scenario. The conversion scenario thus consists of a blueprint for the organizational change in all its aspects. However, there are two topics that have not yet received the attention they deserve within the parallel adoption scope. The actual conversion phase is now in place. During this process, the organization is in a stressful period (Eason, 1988, Rooijmans, 2003). The two systems run in parallel according to the conversion scenario and the new system is being monitored closely.
When the criteria for the new system are met, the old system ceases to be the leading system and the new system takes over. The catch-ups that are part of the workaround strategy are the backups of the old system and provide the means for reliability engineering and data recovery . There are two ways to make catch-ups: automatic catch-ups and catch-ups by hand (Rooijmans, 2003). If applicable, a remote backup service can be deployed as well. There are several lessons that can be learned from case studies. The Nevada DMV system case, described by Lee (2004), shows that the implementation of a new process can also have political implications. When the system that will be changed affects the general public, and it is not only an internal system that is being changed, there are additional pressures that influence the organization. In this case, concepts such as company image and reputation can drastically change if customers are faced with more delays in, for example, communication or ordering goods. It is suggested that if the system is politically sensitive, more attention should be paid to the conversion method, and parallel adoption is preferably opted for, since there is less risk involved. A series of actual case scenarios implementing a new portfolio system, performed by a business-consultancy firm (Venture, 2004), show some interesting lessons learned from the field. They seem to fit well with the issues mentioned above for a generic parallel adoption process, based on a combination of scientific work. There are also at least two difficulties with parallel conversion that may make its use impractical in the 21st century, though it was a staple of industry practice when inputs consisted of decks of punched cards or reels of tape. These are: 1. It is impractical to expect end users, be they customers, production line workers or nearly anyone else, to enter every transaction twice via different interfaces. 2. Timing differences between two multi-user interactive systems can properly produce different results even when both systems are operating correctly, are internally consistent, and could be used successfully by themselves. As a result, parallel conversion is restricted to a few specific situations today, such as accounting systems where absolute verifiability of results is mandatory, where users are all internal to the organization and understand this requirement, and where the order of activities cannot be allowed to affect the output. In practice, the pilot and phased conversion methods are more relevant today.
https://en.wikipedia.org/wiki/Parallel_adoption
In medicinal chemistry , parallel artificial membrane permeability assay ( PAMPA ) is a method which determines the permeability of substances from a donor compartment, through a lipid -infused artificial membrane, into an acceptor compartment. [ 1 ] A multi-well microtitre plate is used for the donor, and a membrane/acceptor compartment is placed on top; the whole assembly is commonly referred to as a "sandwich". At the beginning of the test, the drug is added to the donor compartment, and the acceptor compartment is drug-free. After an incubation period, which may include stirring, the sandwich is separated and the amount of drug is measured in each compartment. Mass balance allows calculation of the drug that remains in the membrane. To date, PAMPA models have been developed that exhibit a high degree of correlation with permeation across a variety of barriers, including Caco-2 cultures, [ 2 ] [ 3 ] the gastrointestinal tract , [ 4 ] the blood–brain barrier [ 5 ] and skin. The donor and/or acceptor compartments may contain solubilizing agents, or additives that bind the drugs as they permeate. To improve the in vitro - in vivo correlation and performance of the PAMPA method, the lipid, pH and chemical composition of the system are often designed with biomimetic considerations in mind. Although active transport is not modeled by the artificial PAMPA membrane, up to 95% of known drugs are absorbed by passive transport . [ 6 ] Some experts support a lower figure, so the amount is open to some interpretation. Microtiter plates with 96 wells can be used for the assay, which increases the speed and lowers the per-sample cost. Since the first publication by Kansy and coworkers, [ 7 ] several companies have developed their own versions of the assay. Early models incorporated iso-pH conditions in the compartments separated by a simple lipid membrane; subsequently, commercial products were introduced which incorporated more sophisticated lipid membranes. [ 8 ] The commercial products helped ensure that medicinal chemists across different corporate labs within a worldwide organization used the same standardized methodology and reagents and obtained equivalent system performance, as demonstrated with a set of test compounds. This has proved very useful as various operational activities have been outsourced to other countries.
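The mass-balance step described above is simple arithmetic; a minimal sketch with purely illustrative numbers:

```cpp
#include <cstdio>

// Mass balance as described above: whatever drug is in neither compartment
// after incubation is taken to be retained in the artificial membrane.
// Amounts are in arbitrary units; all values are illustrative.
int main() {
    double initial_donor  = 100.0;  // drug added to donor at t = 0
    double donor_final    = 55.0;   // measured in donor after incubation
    double acceptor_final = 30.0;   // measured in acceptor after incubation

    double membrane_retention = initial_donor - donor_final - acceptor_final;
    std::printf("Retained in membrane: %.1f units (%.0f%%)\n",
                membrane_retention,
                100.0 * membrane_retention / initial_donor);  // 15 units, 15%
    return 0;
}
```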
https://en.wikipedia.org/wiki/Parallel_artificial_membrane_permeability_assay
The parallel axis theorem , also known as Huygens–Steiner theorem , or just as Steiner's theorem , [ 1 ] named after Christiaan Huygens and Jakob Steiner , can be used to determine the moment of inertia or the second moment of area of a rigid body about any axis, given the body's moment of inertia about a parallel axis through the object's center of gravity and the perpendicular distance between the axes. Suppose a body of mass m is rotated about an axis z passing through the body's center of mass . The body has a moment of inertia I cm with respect to this axis. The parallel axis theorem states that if the body is made to rotate instead about a new axis z′ , which is parallel to the first axis and displaced from it by a distance d , then the moment of inertia I with respect to the new axis is related to I cm by I = I cm + md ². Explicitly, d is the perpendicular distance between the axes z and z′ . The parallel axis theorem can be applied with the stretch rule and perpendicular axis theorem to find moments of inertia for a variety of shapes. We may assume, without loss of generality, that in a Cartesian coordinate system the perpendicular distance between the axes lies along the x -axis and that the center of mass lies at the origin. The moment of inertia relative to the z -axis is then I cm = ∫ ( x ² + y ²) dm . The moment of inertia relative to the axis z′ , which is at a distance D from the center of mass along the x -axis, is I = ∫ (( x − D )² + y ²) dm . Expanding the brackets yields I = ∫ ( x ² + y ²) dm + D ² ∫ dm − 2 D ∫ x dm . The first term is I cm and the second term becomes MD ². The integral in the final term is a multiple of the x-coordinate of the center of mass – which is zero since the center of mass lies at the origin. So, the equation becomes: I = I cm + MD ². The parallel axis theorem can be generalized to calculations involving the inertia tensor . [ 2 ] Let I ij denote the inertia tensor of a body as calculated at the center of mass. Then the inertia tensor J ij as calculated relative to a new point is J ij = I ij + m ( | R |² δ ij − R i R j ), where R = R 1 x̂ + R 2 ŷ + R 3 ẑ is the displacement vector from the center of mass to the new point, and δ ij is the Kronecker delta . For diagonal elements (when i = j ), displacements perpendicular to the axis of rotation result in the above simplified version of the parallel axis theorem. The generalized version of the parallel axis theorem can be expressed in the form of coordinate-free notation as J = I + m [( R ⋅ R ) E 3 − R ⊗ R ], where E 3 is the 3 × 3 identity matrix and ⊗ is the outer product . Further generalization of the parallel axis theorem gives the inertia tensor about any set of orthogonal axes parallel to the reference set of axes x, y and z, associated with the reference inertia tensor, whether or not they pass through the center of mass. [ 2 ] In this generalization, the inertia tensor can be moved from being reckoned about any reference point R ref to some final reference point R F via a relational matrix M , where C is the vector from the initial reference point to the object's center of mass and R is the vector from the initial reference point to the final reference point ( R F = R ref + R ).
The parallel axes rule also applies to the second moment of area (area moment of inertia) for a plane region D : I z = I x + Ar ², where I z is the area moment of inertia of D relative to the parallel axis, I x is the area moment of inertia of D relative to its centroid , A is the area of the plane region D , and r is the distance from the new axis z to the centroid of the plane region D . The centroid of D coincides with the centre of gravity of a physical plate with the same shape that has uniform density. The mass properties of a rigid body that is constrained to move parallel to a plane are defined by its center of mass R = ( x , y ) in this plane, and its polar moment of inertia I R around an axis through R that is perpendicular to the plane. The parallel axis theorem provides a convenient relationship between the moment of inertia I S around an arbitrary point S and the moment of inertia I R about the center of mass R . Recall that the center of mass R has the property ∫ V ρ ( r )( r − R ) dV = 0, where r is integrated over the volume V of the body. The polar moment of inertia of a body undergoing planar movement can be computed relative to any reference point S as I S = ∫ V ρ ( r )( r − S ) ⋅ ( r − S ) dV , where S is constant and r is integrated over the volume V . In order to obtain the moment of inertia I S in terms of the moment of inertia I R , introduce the vector d from S to the center of mass R , so that r − S = ( r − R ) + d and I S = ∫ V ρ ( r )(( r − R ) + d ) ⋅ (( r − R ) + d ) dV . The first term is the moment of inertia I R , the second term is zero by definition of the center of mass, and the last term is the total mass of the body times the square magnitude of the vector d . Thus, I S = I R + Md ², which is known as the parallel axis theorem. [ 3 ] The inertia matrix of a rigid system of particles depends on the choice of the reference point. [ 4 ] There is a useful relationship between the inertia matrix relative to the center of mass R and the inertia matrix relative to another point S . This relationship is called the parallel axis theorem. Consider the inertia matrix [I S ] obtained for a rigid system of particles measured relative to a reference point S , given by [I S ] = −Σ i m i [ r i − S ][ r i − S ], where r i defines the position of particle P i , i = 1, ..., n . Recall that [ r i − S ] is the skew-symmetric matrix that performs the cross product, [ r i − S ] y = ( r i − S ) × y , for an arbitrary vector y . Let R be the center of mass of the rigid system, then r i − S = ( r i − R ) + d , where d is the vector from the reference point S to the center of mass R . Use this equation to compute the inertia matrix, [I S ] = −Σ i m i ([ r i − R ] + [ d ])([ r i − R ] + [ d ]). Expand this equation to obtain [I S ] = −Σ i m i [ r i − R ][ r i − R ] − (Σ i m i [ r i − R ])[ d ] − [ d ](Σ i m i [ r i − R ]) − M [ d ][ d ]. The first term is the inertia matrix [ I R ] relative to the center of mass. The second and third terms are zero by definition of the center of mass R , Σ i m i ( r i − R ) = 0. And the last term is the total mass of the system multiplied by the square of the skew-symmetric matrix [ d ] constructed from d . The result is the parallel axis theorem, [I S ] = [I R ] − M [ d ]², where d is the vector from the reference point S to the center of mass R . [ 4 ] In order to compare formulations of the parallel axis theorem using skew-symmetric matrices and the tensor formulation, the following identities are useful. Let [ R ] be the skew symmetric matrix associated with the position vector R = ( x , y , z ); then the product in the inertia matrix becomes −[ R ][ R ] = ( R ⋅ R )[ E 3 ] − [ R R T ]. This product can be computed using the matrix formed by the outer product [ R R T ] using the identity [ R ][ R ] = [ R R T ] − ( R ⋅ R )[ E 3 ], where [ E 3 ] is the 3 × 3 identity matrix. Also notice that R ⋅ R = tr[ R R T ], where tr denotes the sum of the diagonal elements of the outer product matrix, known as its trace.
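A short worked example of the theorem in code, using the standard result that a thin uniform rod has I cm = mL²/12 about its center (the rod example is an illustration, not taken from this article):

```cpp
#include <cstdio>

// Parallel axis theorem: I = I_cm + m d^2.
double parallel_axis(double i_cm, double mass, double d) {
    return i_cm + mass * d * d;
}

int main() {
    // Thin uniform rod, mass m, length L: I_cm = m L^2 / 12 about its center.
    double m = 2.0, L = 3.0;
    double i_cm = m * L * L / 12.0;

    // Shift the rotation axis to one end of the rod: d = L / 2.
    double i_end = parallel_axis(i_cm, m, L / 2.0);
    std::printf("I_cm = %.3f, I_end = %.3f\n", i_cm, i_end);
    // I_end equals m L^2 / 3, the familiar end-axis result (here 6.000).
    return 0;
}
```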
https://en.wikipedia.org/wiki/Parallel_axis_theorem
Parallel compression , also known as New York compression , is a dynamic range compression technique used in sound recording and mixing . Parallel compression, a form of upward compression , is achieved by mixing an unprocessed ("dry") or lightly compressed signal with a heavily compressed version of the same signal. Rather than lowering the highest peaks for the purpose of dynamic range reduction, it decreases the dynamic range by raising up the softest sounds, adding audible detail. [ 1 ] It is most often used on stereo percussion buses in recording and mixdown, on electric bass , and on vocals in recording mixes and live concert mixes. [ 2 ] The internal circuitry of Dolby A noise reduction, introduced in 1965, contained parallel buses with compression on one of them, the two mixed in a flexible ratio. [ 2 ] In October 1977, an article by Mike Beville was published in Studio Sound magazine describing the technique as applied to classical recordings. [ 3 ] Many citations of this article claim that Beville called it "side-chain" compression, most likely due to a misquoting of a citation of the article in Roey Izhaki's book, Mixing Audio: Concepts, Practices and Tools . [ 2 ] However, Beville used the term "side-chain" to describe the internal electronics and signal flow of compressors, not to describe a technique for using compressors. His discussion of parallel compression technique occurs in a separate section at the end of the article where he outlines how to place a limiter-compressor "in parallel with the direct signal" to obtain effective compression at low input levels. As Izhaki mentions in his book, others have referred to the technique as "side-chain" compression, which has created confusion with the side-chain compression technique, which uses an external "key" or "side chain" signal to determine compression on a target signal. Beville's article, entitled "Compressors and Limiters," was reprinted in the same magazine in June 1988. [ 4 ] A follow-up article by Richard Hulse in the April 1996 Studio Sound included application tips and a description of implementing the technique in a digital audio workstation. [ 5 ] Bob Katz coined the term "parallel compression", [ 2 ] and has described it as an implementation of "upward compression", the increase in audibility of softer passages. [ 4 ] Studio engineers in New York City became known for reliance on the technique, and it picked up the name "New York compression". [ 2 ] The human ear is sensitive to loud sounds being suddenly reduced in volume, but less so to soft sounds being increased in volume—parallel compression takes advantage of this difference. [ 2 ] [ 4 ] Unlike normal limiting and downward compression, fast transients in music are retained in parallel compression, preserving the "feel" and immediacy of a live performance. Because the method is less audible to the human ear, the compressor can be set aggressively, with high ratios for strong effect. [ 2 ] In an audio mix using an analog mixing console and analog compressors, parallel compression is achieved by sending a monophonic or stereo signal in two or more directions and then summing the multiple pathways, mixing them together by ear to achieve the desired effect. One pathway is straight to the summing mixer, while other pathways go through mono or stereo compressors, set aggressively for high-ratio gain reduction. The compressed signals are brought back to the summing mixer and blended in with the straight signal.
[ 2 ] If digital components are being used, latency must be taken into account. If the normal analog method is used with a digital compressor, the signals traveling through the parallel pathways will arrive at the summing mixer at slightly different times, creating unpleasant comb-filtering and phasing effects. The digital compressor pathway takes a little more time to process the sound—on the order of 0.3 to 3 milliseconds longer. Instead, the two pathways must both have the same number of processing stages: the "straight" pathway is assigned a compression stage which is not given an aggressively high ratio. In this case, the two signals both go through compression stages, and both pathways are delayed the same amount of time, but one is set to do no dynamic range compression, or to do very little, and the other is set for high amounts of gain reduction. [ 6 ] The method can be used artistically to "fatten" or "beef up" a mix, by careful setting of attack and release times on the compressor. [ 4 ] These settings may be adjusted further until the compressor causes the signal to "pump" or "breathe" in tempo with the song, adding its own character to the sound. Unusually extreme implementations have been achieved by studio mix engineers such as New York-based Michael Brauer , who uses five parallel compressors, adjusted individually for timbral and tonal variations, mixed and blended to taste, to achieve his target sound on vocals for the Rolling Stones , Aerosmith , Bob Dylan , KT Tunstall and Coldplay . [ 7 ] Mix engineer Anthony "Rollmottle" Puglisi uses parallel compression applied conservatively across the entire mix, especially in dance-oriented electronic music: "it gives a track that extra oomph and power (not just make it louder—there's a difference) through quieter portions of the jam without resorting to one of those horrific 'maximizer' plugins that squeeze the dynamics right out of your song." [ 8 ] While parallel compression is widely utilized in electronic dance music, "side-chain" compression is the technique popularly used to give a synth lead or other melodic element the pulsating quality ubiquitous in the genre. One or more tracks may be side-chained to the kick, thereby compressing them only when the beat occurs.
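A toy sketch of the blend described above: a heavily compressed copy of a block of samples is summed with the dry signal. A static per-sample gain stands in for a real attack/release envelope, and the threshold, ratio and mix parameters are illustrative assumptions:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Toy parallel compression on a block of samples: a heavily compressed copy
// (high ratio, low threshold) is mixed with the untouched dry signal, so
// loud peaks keep their transients while quiet material is lifted.
std::vector<float> parallel_compress(const std::vector<float>& dry,
                                     float threshold, float ratio, float mix) {
    std::vector<float> out(dry.size());
    for (std::size_t n = 0; n < dry.size(); ++n) {
        float level = std::fabs(dry[n]);
        float wet = dry[n];
        if (level > threshold) {
            // Static gain reduction above threshold, e.g. ratio = 10 for 10:1.
            float compressed = threshold + (level - threshold) / ratio;
            wet = dry[n] * (compressed / level);
        }
        out[n] = dry[n] + mix * wet;  // sum of dry path and compressed path
    }
    return out;
}
```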
https://en.wikipedia.org/wiki/Parallel_compression
Parallel evolution is the similar development of a trait in distinct species that are not closely related, but share a similar original trait in response to similar evolutionary pressure. [ 1 ] [ 2 ] Given a trait that occurs in each of two lineages descended from a specified ancestor, it is possible in theory to define parallel and convergent evolutionary trends strictly, and distinguish them clearly from one another. [ 2 ] However, the criteria for defining convergent as opposed to parallel evolution are unclear in practice, so that arbitrary diagnosis is common. When two species share a trait, evolution is defined as parallel if the ancestors are known to have shared that similarity; if not, it is defined as convergent. However, the stated conditions are a matter of degree; all organisms share common ancestors. Scientists differ on whether the distinction is useful. [ 3 ] [ 4 ] A number of examples of parallel evolution are provided by the two main branches of the mammals , the placentals and marsupials , which have followed independent evolutionary pathways following the break-up of land-masses such as Gondwanaland roughly 100 million years ago. In South America , marsupials and placentals shared the ecosystem (before the Great American Interchange ); in Australia , marsupials prevailed; and in the Old World and North America the placentals won out. However, in all these localities mammals were small and filled only limited places in the ecosystem until the mass extinction of dinosaurs sixty-five million years ago. At this time, mammals on all three landmasses began to take on a much wider variety of forms and roles. While some forms were unique to each environment, surprisingly similar animals have often emerged in two or three of the separated continents. Examples of these include the placental sabre-toothed cats ( Machairodontinae ) and the South American marsupial sabre-tooth ( Thylacosmilus ); the Tasmanian wolf and the European wolf ; likewise marsupial and placental moles , flying squirrels , and (arguably) mice . [ citation needed ] Hummingbirds and sunbirds, two nectarivorous bird lineages in the New and Old Worlds, have evolved in parallel a suite of specialized behavioral and anatomical traits. These traits (bill shape, digestive enzymes, and flight ) allow the birds to optimally fit the flower-feeding-and-pollination ecological niche they occupy, which is in turn shaped by the birds' suites of parallel traits. Thus, a parallel coevolved behavioral syndrome within the birds creates an emergent guild of highly specialized birds and highly adapted plants, each exploiting the other's involvement in the flowers' pollination in the Old World and New World alike. [ 5 ] The bill shape of nectarivores, being long and needle-like, allows them to reach down a flower's pistil/stamen and get at the nectar within. Nectarivores may also use their specialized bills to engage in nectar robbing , a practice seen in both hummingbirds and sunbirds in which the bird gets nectar by making a hole in the base of the flower's corolla tube instead of inserting its bill through the tube as is standard, thus "robbing" the flower of nectar since the flower is not pollinated in return. [ 6 ] Nectarivores and ornithophilous flowers often exist in mutualistic guild relationships facilitated by the bird's bill shape, food source, and digestive ability acting in concert with the flower's tube shape and adaptation to pollination by hovering or perching birds.
The birds eat nectar using their long, thin bills and, in so doing, collect pollen on their bills; this pollen is then transferred to the next flower they feed on. This mutualism coevolved in parallel between the Old World and New World birds and their respective flowers. [ 7 ] Moreover, the digestive enzyme activity in nectarivores matching the nectar composition in their respective flowers appears to have coevolved in parallel between plants and pollinators across continents, as the nectarivorous lineages independently evolved the ability to digest the nectar specific to their flowers, resulting in distinct guilds. [ 7 ] [ 8 ] The capacity of nectarivores to digest sucrose is far greater than that of other avian taxa . This difference is due to analogously high concentrations of sucrase-isomaltase , an enzyme that hydrolyzes sucrose. Sucrase activity per unit intestinal surface area appears to be higher in nectarivores than in other birds, meaning these nectarivorous avians can digest more sucrose more rapidly than other taxa. [ 8 ] Moreover, the Adaptive Modulation Hypothesis does not apply to nectarivores and sugar-digesting enzymes, meaning that two lineages of nectarivores should not necessarily both have high sucrase-isomaltase concentrations even though they both eat nectar. Thus, parallel acquisition of analogous sucrose digestive capability is a reasonable conclusion because there is no apparent cause for the two lineages to share this high enzyme concentration. [ 9 ]
https://en.wikipedia.org/wiki/Parallel_evolution
In engineering , a parallel force system is a type of force system wherein all forces are oriented along one axis. An example of this type of system is a seesaw: the children apply the two forces at the ends, and the fulcrum in the middle gives the counter-force to maintain the seesaw in a neutral position. Another example is the set of major vertical forces on an airplane in flight.
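A minimal sketch of the seesaw as a parallel force system: equilibrium requires the fulcrum's counter-force to cancel the net force and the moments about the fulcrum to cancel. The numbers are illustrative:

```cpp
#include <cstdio>

// Seesaw as a parallel force system: two downward forces at the ends and an
// upward counter-force at the fulcrum. Equilibrium requires the net force
// and the net moment about the fulcrum to both vanish.
int main() {
    double f1 = 300.0, d1 = 2.0;   // child 1: force (N), distance from fulcrum (m)
    double f2 = 200.0, d2 = 3.0;   // child 2 on the other side

    double fulcrum_reaction = f1 + f2;        // counter-force, from sum F = 0
    double net_moment = f1 * d1 - f2 * d2;    // moments about the fulcrum

    std::printf("Fulcrum reaction: %.0f N\n", fulcrum_reaction);
    std::printf("Net moment: %.0f N*m (%s)\n", net_moment,
                net_moment == 0.0 ? "balanced" : "unbalanced");
    return 0;
}
```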
https://en.wikipedia.org/wiki/Parallel_force_system
In geometry , parallel lines are coplanar infinite straight lines that do not intersect at any point. Parallel planes are planes in the same three-dimensional space that never meet. Parallel curves are curves that do not touch each other or intersect and keep a fixed minimum distance. In three-dimensional Euclidean space, a line and a plane that do not share a point are also said to be parallel. However, two noncoplanar lines are called skew lines . Line segments and Euclidean vectors are parallel if they have the same direction or opposite direction (not necessarily the same length). [ 1 ] Parallel lines are the subject of Euclid 's parallel postulate . [ 2 ] Parallelism is primarily a property of affine geometries and Euclidean geometry is a special instance of this type of geometry. In some other geometries, such as hyperbolic geometry , lines can have analogous properties that are referred to as parallelism. The parallel symbol is ∥ {\displaystyle \parallel } . [ 3 ] [ 4 ] For example, A B ∥ C D {\displaystyle AB\parallel CD} indicates that line AB is parallel to line CD . In the Unicode character set, the "parallel" and "not parallel" signs have codepoints U+2225 (∥) and U+2226 (∦), respectively. In addition, U+22D5 (⋕) represents the relation "equal and parallel to". [ 5 ] Given parallel straight lines l and m in Euclidean space , the following properties are equivalent: Since these are equivalent properties, any one of them could be taken as the definition of parallel lines in Euclidean space, but the first and third properties involve measurement, and so, are "more complicated" than the second. Thus, the second property is the one usually chosen as the defining property of parallel lines in Euclidean geometry. [ 6 ] The other properties are then consequences of Euclid's Parallel Postulate . The definition of parallel lines as a pair of straight lines in a plane which do not meet appears as Definition 23 in Book I of Euclid's Elements . [ 7 ] Alternative definitions were discussed by other Greeks, often as part of an attempt to prove the parallel postulate . Proclus attributes a definition of parallel lines as equidistant lines to Posidonius and quotes Geminus in a similar vein. Simplicius also mentions Posidonius' definition as well as its modification by the philosopher Aganis. [ 7 ] At the end of the nineteenth century, in England, Euclid's Elements was still the standard textbook in secondary schools. The traditional treatment of geometry was being pressured to change by the new developments in projective geometry and non-Euclidean geometry , so several new textbooks for the teaching of geometry were written at this time. A major difference between these reform texts, both between themselves and between them and Euclid, is the treatment of parallel lines. [ 8 ] These reform texts were not without their critics and one of them, Charles Dodgson (a.k.a. Lewis Carroll ), wrote a play, Euclid and His Modern Rivals , in which these texts are lambasted. [ 9 ] One of the early reform textbooks was James Maurice Wilson's Elementary Geometry of 1868. [ 10 ] Wilson based his definition of parallel lines on the primitive notion of direction . According to Wilhelm Killing [ 11 ] the idea may be traced back to Leibniz . [ 12 ] Wilson, without defining direction since it is a primitive, uses the term in other definitions such as his sixth definition, "Two straight lines that meet one another have different directions, and the difference of their directions is the angle between them." 
Wilson (1868 , p. 2) In definition 15 he introduces parallel lines in this way; "Straight lines which have the same direction , but are not parts of the same straight line, are called parallel lines ." Wilson (1868 , p. 12) Augustus De Morgan reviewed this text and declared it a failure, primarily on the basis of this definition and the way Wilson used it to prove things about parallel lines. Dodgson also devotes a large section of his play (Act II, Scene VI § 1) to denouncing Wilson's treatment of parallels. Wilson edited this concept out of the third and higher editions of his text. [ 13 ] Other properties, proposed by other reformers, used as replacements for the definition of parallel lines, did not fare much better. The main difficulty, as pointed out by Dodgson, was that to use them in this way required additional axioms to be added to the system. The equidistant line definition of Posidonius, expounded by Francis Cuthbertson in his 1874 text Euclidean Geometry suffers from the problem that the points that are found at a fixed given distance on one side of a straight line must be shown to form a straight line. This cannot be proved and must be assumed to be true. [ 14 ] The corresponding angles formed by a transversal property, used by W. D. Cooley in his 1860 text, The Elements of Geometry, simplified and explained requires a proof of the fact that if one transversal meets a pair of lines in congruent corresponding angles then all transversals must do so. Again, a new axiom is needed to justify this statement. The three properties above lead to three different methods of construction [ 15 ] of parallel lines. Because parallel lines in a Euclidean plane are equidistant there is a unique distance between the two parallel lines. Given the equations of two non-vertical, non-horizontal parallel lines, y = m x + b 1 {\displaystyle y=mx+b_{1}} and y = m x + b 2 , {\displaystyle y=mx+b_{2},} the distance between the two lines can be found by locating two points (one on each line) that lie on a common perpendicular to the parallel lines and calculating the distance between them. Since the lines have slope m , a common perpendicular would have slope −1/ m and we can take the line with equation y = − x / m as a common perpendicular. Solve the linear systems { y = m x + b 1 , y = − x / m } {\displaystyle {\begin{cases}y=mx+b_{1}\\y=-x/m\end{cases}}} and { y = m x + b 2 , y = − x / m } {\displaystyle {\begin{cases}y=mx+b_{2}\\y=-x/m\end{cases}}} to get the coordinates of the points. The solutions to the linear systems are the points ( x 1 , y 1 ) = ( − b 1 m / ( m 2 + 1 ) , b 1 / ( m 2 + 1 ) ) {\displaystyle \left(x_{1},y_{1}\right)=\left({\frac {-b_{1}m}{m^{2}+1}},{\frac {b_{1}}{m^{2}+1}}\right)} and ( x 2 , y 2 ) = ( − b 2 m / ( m 2 + 1 ) , b 2 / ( m 2 + 1 ) ) . {\displaystyle \left(x_{2},y_{2}\right)=\left({\frac {-b_{2}m}{m^{2}+1}},{\frac {b_{2}}{m^{2}+1}}\right).} These formulas still give the correct point coordinates even if the parallel lines are horizontal (i.e., m = 0). The distance between the points is d = ( x 2 − x 1 ) 2 + ( y 2 − y 1 ) 2 , {\displaystyle d={\sqrt {\left(x_{2}-x_{1}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}}},} which reduces to d = | b 2 − b 1 | / m 2 + 1 . {\displaystyle d={\frac {|b_{2}-b_{1}|}{\sqrt {m^{2}+1}}}.} When the lines are given by the general form of the equation of a line (horizontal and vertical lines are included): a x + b y + c 1 = 0 {\displaystyle ax+by+c_{1}=0} and a x + b y + c 2 = 0 , {\displaystyle ax+by+c_{2}=0,} their distance can be expressed as d = | c 2 − c 1 | / a 2 + b 2 . {\displaystyle d={\frac {|c_{2}-c_{1}|}{\sqrt {a^{2}+b^{2}}}}.} Two lines in the same three-dimensional space that do not intersect need not be parallel. Only if they are in a common plane are they called parallel; otherwise they are called skew lines . Two distinct lines l and m in three-dimensional space are parallel if and only if the distance from a point P on line m to the nearest point on line l is independent of the location of P on line m . This never holds for skew lines. A line m and a plane q in three-dimensional space, the line not lying in that plane, are parallel if and only if they do not intersect. Equivalently, they are parallel if and only if the distance from a point P on line m to the nearest point in plane q is independent of the location of P on line m .
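The distance formulas above are easy to check numerically. A minimal sketch in Python (the slope, intercepts and coefficients below are arbitrary illustrative values, not part of the formulas themselves):

import math

def distance_slope_intercept(m, b1, b2):
    # Distance between the parallel lines y = m*x + b1 and y = m*x + b2,
    # using d = |b2 - b1| / sqrt(m^2 + 1).
    return abs(b2 - b1) / math.sqrt(m**2 + 1)

def distance_general_form(a, b, c1, c2):
    # Distance between a*x + b*y + c1 = 0 and a*x + b*y + c2 = 0,
    # using d = |c2 - c1| / sqrt(a^2 + b^2); vertical lines are allowed here.
    return abs(c2 - c1) / math.hypot(a, b)

# y = 2x + 1 and y = 2x + 6 can also be written 2x - y + 1 = 0 and
# 2x - y + 6 = 0, so the two formulas must agree:
print(distance_slope_intercept(2, 1, 6))   # 2.2360..., i.e. sqrt(5)
print(distance_general_form(2, -1, 1, 6))  # same value

Both calls return the square root of 5, as expected from d = |6 − 1|/√(2² + 1).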
Similar to the fact that parallel lines must be located in the same plane, parallel planes must be situated in the same three-dimensional space and contain no point in common. Two distinct planes q and r are parallel if and only if the distance from a point P in plane q to the nearest point in plane r is independent of the location of P in plane q . This will never hold if the two planes are not in the same three-dimensional space. In non-Euclidean geometry , the concept of a straight line is replaced by the more general concept of a geodesic , a curve which is locally straight with respect to the metric (definition of distance) on a Riemannian manifold , a surface (or higher-dimensional space) which may itself be curved. In general relativity , particles not under the influence of external forces follow geodesics in spacetime , a four-dimensional manifold with 3 spatial dimensions and 1 time dimension. [ 16 ] In non-Euclidean geometry ( elliptic or hyperbolic geometry ) the three Euclidean properties mentioned above are not equivalent and only the second one (Line m is in the same plane as line l but does not intersect l) is useful in non-Euclidean geometries, since it involves no measurements. In general geometry the three properties above give three different types of curves, equidistant curves , parallel geodesics and geodesics sharing a common perpendicular , respectively. While in Euclidean geometry two geodesics can either intersect or be parallel, in hyperbolic geometry, there are three possibilities. Two geodesics belonging to the same plane can either be: 1. intersecting , if they intersect at a common point in the plane; 2. parallel , if they do not intersect in the plane, but converge to a common limit point at infinity (an ideal point ); or 3. ultra parallel , if they do not converge to a common point at infinity. In the literature ultra parallel geodesics are often called non-intersecting . Geodesics intersecting at infinity are called limiting parallel . As in the illustration, through a point a not on line l there are two limiting parallel lines, one for each direction ideal point of line l. They separate the lines intersecting line l and those that are ultra parallel to line l . Ultra parallel lines have a single common perpendicular ( ultraparallel theorem ), and diverge on both sides of this common perpendicular. In spherical geometry , all geodesics are great circles . Great circles divide the sphere into two equal hemispheres and all great circles intersect each other. Thus, there are no parallel geodesics to a given geodesic, as all geodesics intersect. Equidistant curves on the sphere are called parallels of latitude , analogous to the latitude lines on a globe. Parallels of latitude can be generated by the intersection of the sphere with a plane parallel to a plane through the center of the sphere. If l, m, n are three distinct lines, then l ∥ m ∧ m ∥ n ⟹ l ∥ n . {\displaystyle l\parallel m\ \land \ m\parallel n\ \implies \ l\parallel n.} In this case, parallelism is a transitive relation . However, in case l = n , the superimposed lines are not considered parallel in Euclidean geometry. The binary relation between parallel lines is evidently a symmetric relation . According to Euclid's tenets, parallelism is not a reflexive relation and thus fails to be an equivalence relation . Nevertheless, in affine geometry a pencil of parallel lines is taken as an equivalence class in the set of lines where parallelism is an equivalence relation. [ 18 ] [ 19 ] [ 20 ] To this end, Emil Artin (1957) adopted a definition of parallelism where two lines are parallel if they have all or none of their points in common. [ 21 ] Then a line is parallel to itself so that the reflexive and transitive properties belong to this type of parallelism, creating an equivalence relation on the set of lines. In the study of incidence geometry , this variant of parallelism is used in the affine plane .
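Artin's all-or-none definition can be tested directly on lines given by a point and a direction vector: two lines are parallel in this sense exactly when their direction vectors are linearly dependent, which makes every line parallel to itself. A minimal sketch in Python (the pair-of-tuples representation is an illustrative choice; with floating-point directions the exact zero test would be replaced by a tolerance):

def parallel(line1, line2):
    # A line is modeled as (point, direction) in the plane. Parallel in
    # Artin's sense iff the 2x2 determinant of the directions vanishes.
    (_, (ux, uy)), (_, (vx, vy)) = line1, line2
    return ux * vy - uy * vx == 0

l = ((0, 0), (1, 2))
m = ((0, 5), (2, 4))    # same direction, different point
n = ((1, 1), (-3, -6))  # opposite direction, still parallel

assert parallel(l, l)                     # reflexive
assert parallel(l, m) and parallel(m, l)  # symmetric
assert parallel(m, n) and parallel(l, n)  # transitive in this instance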
https://en.wikipedia.org/wiki/Parallel_line
In geometry , a parallelepiped is a three-dimensional figure formed by six parallelograms (the term rhomboid is also sometimes used with this meaning). By analogy, it relates to a parallelogram just as a cube relates to a square . [ a ] Three equivalent definitions of parallelepiped are 1. a polyhedron with six faces ( hexahedron ), each of which is a parallelogram; 2. a hexahedron with three pairs of parallel faces; and 3. a prism of which the base is a parallelogram. The rectangular cuboid (six rectangular faces), cube (six square faces), and the rhombohedron (six rhombus faces) are all special cases of parallelepiped. "Parallelepiped" is now usually pronounced / ˌ p ær ə ˌ l ɛ l ɪ ˈ p ɪ p ɪ d / or / ˌ p ær ə ˌ l ɛ l ɪ ˈ p aɪ p ɪ d / ; [ 1 ] traditionally it was / ˌ p ær ə l ɛ l ˈ ɛ p ɪ p ɛ d / PARR -ə-lel- EP -ih-ped [ 2 ] because of its etymology in Greek παραλληλεπίπεδον parallelepipedon (with short -i-), a body "having parallel planes ". Parallelepipeds are a subclass of the prismatoids . Any of the three pairs of parallel faces can be viewed as the base planes of the prism. A parallelepiped has three sets of four parallel edges; the edges within each set are of equal length. Parallelepipeds result from linear transformations of a cube (for the non-degenerate cases: the bijective linear transformations). Since each face has point symmetry , a parallelepiped is a zonohedron . Also the whole parallelepiped has point symmetry C i (see also triclinic ). Each face is, seen from the outside, the mirror image of the opposite face. The faces are in general chiral , but the parallelepiped is not. A space-filling tessellation is possible with congruent copies of any parallelepiped. A parallelepiped is a prism with a parallelogram as base. Hence the volume V {\displaystyle V} of a parallelepiped is the product of the base area B {\displaystyle B} and the height h {\displaystyle h} (see diagram). With B = | a | | b | sin ⁡ γ = | a × b | {\displaystyle B=\left|\mathbf {a} \right|\left|\mathbf {b} \right|\sin \gamma =\left|\mathbf {a} \times \mathbf {b} \right|} (where γ {\displaystyle \gamma } is the angle between the vectors a {\displaystyle \mathbf {a} } and b {\displaystyle \mathbf {b} } ) and h = | c | | cos ⁡ θ | {\displaystyle h=\left|\mathbf {c} \right|\left|\cos \theta \right|} (where θ {\displaystyle \theta } is the angle between the vector c {\displaystyle \mathbf {c} } and the normal to the base), one gets: V = B ⋅ h = ( | a | | b | sin ⁡ γ ) ⋅ | c | | cos ⁡ θ | = | a × b | | c | | cos ⁡ θ | = | ( a × b ) ⋅ c | . {\displaystyle V=B\cdot h=\left(\left|\mathbf {a} \right|\left|\mathbf {b} \right|\sin \gamma \right)\cdot \left|\mathbf {c} \right|\left|\cos \theta \right|=\left|\mathbf {a} \times \mathbf {b} \right|\left|\mathbf {c} \right|\left|\cos \theta \right|=\left|\left(\mathbf {a} \times \mathbf {b} \right)\cdot \mathbf {c} \right|.} The mixed product of three vectors is called the triple product . It can be described by a determinant . Hence for a = ( a 1 , a 2 , a 3 ) T , b = ( b 1 , b 2 , b 3 ) T , c = ( c 1 , c 2 , c 3 ) T , {\displaystyle \mathbf {a} =(a_{1},a_{2},a_{3})^{\mathsf {T}},~\mathbf {b} =(b_{1},b_{2},b_{3})^{\mathsf {T}},~\mathbf {c} =(c_{1},c_{2},c_{3})^{\mathsf {T}},} the volume is: ( V1 ) V = | det [ a 1 b 1 c 1 a 2 b 2 c 2 a 3 b 3 c 3 ] | . {\displaystyle V=\left|\det {\begin{bmatrix}a_{1}&b_{1}&c_{1}\\a_{2}&b_{2}&c_{2}\\a_{3}&b_{3}&c_{3}\end{bmatrix}}\right|.} Another way to prove ( V1 ) is to use the scalar component in the direction of a × b {\displaystyle \mathbf {a} \times \mathbf {b} } of vector c {\displaystyle \mathbf {c} } : V = | a × b | | scal a × b ⁡ c | = | a × b | | ( a × b ) ⋅ c | | a × b | = | ( a × b ) ⋅ c | . {\displaystyle {\begin{aligned}V=\left|\mathbf {a} \times \mathbf {b} \right|\left|\operatorname {scal} _{\mathbf {a} \times \mathbf {b} }\mathbf {c} \right|=\left|\mathbf {a} \times \mathbf {b} \right|{\frac {\left|\left(\mathbf {a} \times \mathbf {b} \right)\cdot \mathbf {c} \right|}{\left|\mathbf {a} \times \mathbf {b} \right|}}=\left|\left(\mathbf {a} \times \mathbf {b} \right)\cdot \mathbf {c} \right|.\end{aligned}}} The result follows.
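The triple-product and determinant forms of the volume agree, as a short numerical check confirms. A minimal sketch with NumPy (the three edge vectors are arbitrary examples):

import numpy as np

a = np.array([2.0, 0.0, 0.0])
b = np.array([1.0, 3.0, 0.0])
c = np.array([1.0, 1.0, 4.0])

# V = |(a x b) . c|, the absolute value of the scalar triple product
v_triple = abs(np.dot(np.cross(a, b), c))

# V = |det [a b c]|, with the edge vectors as the columns of a 3x3 matrix
v_det = abs(np.linalg.det(np.column_stack((a, b, c))))

print(v_triple, v_det)  # both print 24.0

The same value also arises as sqrt(det(M^T M)), the Gram-determinant form used for the higher-dimensional parallelotopes discussed below.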
An alternative representation of the volume uses geometric properties (angles and edge lengths) only: ( V2 ) V = a b c 1 + 2 cos ⁡ ( α ) cos ⁡ ( β ) cos ⁡ ( γ ) − cos 2 ⁡ ( α ) − cos 2 ⁡ ( β ) − cos 2 ⁡ ( γ ) , {\displaystyle V=abc{\sqrt {1+2\cos(\alpha )\cos(\beta )\cos(\gamma )-\cos ^{2}(\alpha )-\cos ^{2}(\beta )-\cos ^{2}(\gamma )}},} where α = ∠ ( b , c ) {\displaystyle \alpha =\angle (\mathbf {b} ,\mathbf {c} )} , β = ∠ ( a , c ) {\displaystyle \beta =\angle (\mathbf {a} ,\mathbf {c} )} , γ = ∠ ( a , b ) {\displaystyle \gamma =\angle (\mathbf {a} ,\mathbf {b} )} , and a , b , c {\displaystyle a,b,c} are the edge lengths. The proof of ( V2 ) uses properties of a determinant and the geometric interpretation of the dot product : Let M {\displaystyle M} be the 3×3-matrix, whose columns are the vectors a , b , c {\displaystyle \mathbf {a} ,\mathbf {b} ,\mathbf {c} } (see above). Then the following is true: {\displaystyle {\begin{aligned}V^{2}&=\left(\det M\right)^{2}=\det M\det M=\det M^{\mathsf {T}}\det M=\det(M^{\mathsf {T}}M)\\&=\det {\begin{bmatrix}\mathbf {a} \cdot \mathbf {a} &\mathbf {a} \cdot \mathbf {b} &\mathbf {a} \cdot \mathbf {c} \\\mathbf {b} \cdot \mathbf {a} &\mathbf {b} \cdot \mathbf {b} &\mathbf {b} \cdot \mathbf {c} \\\mathbf {c} \cdot \mathbf {a} &\mathbf {c} \cdot \mathbf {b} &\mathbf {c} \cdot \mathbf {c} \end{bmatrix}}\\&=\ a^{2}\left(b^{2}c^{2}-b^{2}c^{2}\cos ^{2}(\alpha )\right)\\&\quad -ab\cos(\gamma )\left(ab\cos(\gamma )c^{2}-ac\cos(\beta )\;bc\cos(\alpha )\right)\\&\quad +ac\cos(\beta )\left(ab\cos(\gamma )bc\cos(\alpha )-ac\cos(\beta )b^{2}\right)\\&=\ a^{2}b^{2}c^{2}-a^{2}b^{2}c^{2}\cos ^{2}(\alpha )\\&\quad -a^{2}b^{2}c^{2}\cos ^{2}(\gamma )+a^{2}b^{2}c^{2}\cos(\alpha )\cos(\beta )\cos(\gamma )\\&\quad +a^{2}b^{2}c^{2}\cos(\alpha )\cos(\beta )\cos(\gamma )-a^{2}b^{2}c^{2}\cos ^{2}(\beta )\\&=\ a^{2}b^{2}c^{2}\left(1-\cos ^{2}(\alpha )-\cos ^{2}(\gamma )+\cos(\alpha )\cos(\beta )\cos(\gamma )+\cos(\alpha )\cos(\beta )\cos(\gamma )-\cos ^{2}(\beta )\right)\\&=\ a^{2}b^{2}c^{2}\;\left(1+2\cos(\alpha )\cos(\beta )\cos(\gamma )-\cos ^{2}(\alpha )-\cos ^{2}(\beta )-\cos ^{2}(\gamma )\right).\end{aligned}}} (The last steps use a ⋅ a = a 2 {\displaystyle \mathbf {a} \cdot \mathbf {a} =a^{2}} , ..., a ⋅ b = a b cos ⁡ γ {\displaystyle \mathbf {a} \cdot \mathbf {b} =ab\cos \gamma } , a ⋅ c = a c cos ⁡ β {\displaystyle \mathbf {a} \cdot \mathbf {c} =ac\cos \beta } , b ⋅ c = b c cos ⁡ α {\displaystyle \mathbf {b} \cdot \mathbf {c} =bc\cos \alpha } , ...) The volume of any tetrahedron that shares three converging edges of a parallelepiped is equal to one sixth of the volume of that parallelepiped (see proof ). The surface area of a parallelepiped is the sum of the areas of the bounding parallelograms: A = 2 ⋅ ( | a × b | + | a × c | + | b × c | ) = 2 ( a b sin ⁡ γ + b c sin ⁡ α + c a sin ⁡ β ) .
{\displaystyle {\begin{aligned}A&=2\cdot \left(|\mathbf {a} \times \mathbf {b} |+|\mathbf {a} \times \mathbf {c} |+|\mathbf {b} \times \mathbf {c} |\right)\\&=2\left(ab\sin \gamma +bc\sin \alpha +ca\sin \beta \right).\end{aligned}}} (For labeling: see previous section.) A perfect parallelepiped is a parallelepiped with integer-length edges, face diagonals, and space diagonals . In 2009, dozens of perfect parallelepipeds were shown to exist, [ 3 ] answering an open question of Richard Guy . One example has edges 271, 106, and 103, minor face diagonals 101, 266, and 255, major face diagonals 183, 312, and 323, and space diagonals 374, 300, 278, and 272. Some perfect parallelepipeds having two rectangular faces are known. But it is not known whether there exist any with all faces rectangular; such a case would be called a perfect cuboid . Coxeter called the generalization of a parallelepiped in higher dimensions a parallelotope . In modern literature, the term parallelepiped is often used in higher (or arbitrary finite) dimensions as well. [ 4 ] Specifically in n -dimensional space it is called n -dimensional parallelotope, or simply n -parallelotope (or n -parallelepiped). Thus a parallelogram is a 2-parallelotope and a parallelepiped is a 3-parallelotope. The diagonals of an n -parallelotope intersect at one point and are bisected by this point. Inversion in this point leaves the n -parallelotope unchanged. See also Fixed points of isometry groups in Euclidean space . The edges radiating from one vertex of a k -parallelotope form a k -frame ( v 1 , … , v n ) {\displaystyle (v_{1},\ldots ,v_{n})} of the vector space, and the parallelotope can be recovered from these vectors, by taking linear combinations of the vectors, with weights between 0 and 1. The n -volume of an n -parallelotope embedded in R m {\displaystyle \mathbb {R} ^{m}} where m ≥ n {\displaystyle m\geq n} can be computed by means of the Gram determinant . Alternatively, the volume is the norm of the exterior product of the vectors: V = ‖ v 1 ∧ ⋯ ∧ v n ‖ . {\displaystyle V=\left\|v_{1}\wedge \cdots \wedge v_{n}\right\|.} If m = n , this amounts to the absolute value of the determinant of the matrix formed by the components of the n vectors. A formula to compute the volume of an n -parallelotope P in R n {\displaystyle \mathbb {R} ^{n}} , whose n + 1 vertices are V 0 , V 1 , … , V n {\displaystyle V_{0},V_{1},\ldots ,V_{n}} , is V o l ( P ) = | det ( [ V 0 1 ] T , [ V 1 1 ] T , … , [ V n 1 ] T ) | , {\displaystyle \mathrm {Vol} (P)=\left|\det \left(\left[V_{0}\ 1\right]^{\mathsf {T}},\left[V_{1}\ 1\right]^{\mathsf {T}},\ldots ,\left[V_{n}\ 1\right]^{\mathsf {T}}\right)\right|,} where [ V i 1 ] {\displaystyle [V_{i}\ 1]} is the row vector formed by the concatenation of the components of V i {\displaystyle V_{i}} and 1. Similarly, any n - simplex that shares n converging edges of a parallelotope has a volume equal to 1/ n ! of the volume of that parallelotope. The term parallelepiped stems from Ancient Greek παραλληλεπίπεδον ( parallēlepípedon , "body with parallel plane surfaces"), from parallēl ("parallel") + epípedon ("plane surface"), from epí- ("on") + pedon ("ground"). Thus the faces of a parallelepiped are planar, with opposite faces being parallel. [ 5 ] [ 6 ] In English, the term parallelipipedon is attested in a 1570 translation of Euclid's Elements by Henry Billingsley . The spelling parallelepipedum is used in the 1644 edition of Pierre Hérigone 's Cursus mathematicus .
In 1663, the present-day parallelepiped is attested in Walter Charleton's Chorea gigantum . [ 5 ] Charles Hutton's Dictionary (1795) gives parallelopiped and parallelopipedon , showing the influence of the combining form parallelo- , as if the second element were pipedon rather than epipedon . Noah Webster (1806) includes the spelling parallelopiped . The 1989 edition of the Oxford English Dictionary describes parallelopiped (and parallelipiped ) explicitly as incorrect forms, but these are listed without comment in the 2004 edition, and only pronunciations with the emphasis on the fifth syllable pi ( /paɪ/ ) are given.
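As a computational footnote to the n-parallelotope volume formulas above, the Gram-determinant form sqrt(det(M^T M)) works even when the parallelotope is embedded in a higher-dimensional space, where the plain determinant is not defined. A minimal sketch with NumPy (the edge vectors are illustrative):

import numpy as np

def parallelotope_volume(*edges):
    # n-volume of the n-parallelotope spanned by the edge vectors in R^m
    # (m >= n), via V = sqrt(det(M^T M)) with the edges as columns of M.
    M = np.column_stack(edges)
    return np.sqrt(np.linalg.det(M.T @ M))

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 3.0, 4.0])
w = np.array([0.0, 0.0, 2.0])

print(parallelotope_volume(u, v))     # 5.0: area of a parallelogram in R^3
print(parallelotope_volume(u, v, w))  # 6.0: matches |det [u v w]| when m = n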
https://en.wikipedia.org/wiki/Parallelepiped
Paramagnetism is a form of magnetism whereby some materials are weakly attracted by an externally applied magnetic field , and form internal, induced magnetic fields in the direction of the applied magnetic field. In contrast with this behavior, diamagnetic materials are repelled by magnetic fields and form induced magnetic fields in the direction opposite to that of the applied magnetic field. [ 1 ] Paramagnetic materials include most chemical elements and some compounds ; [ 2 ] they have a relative magnetic permeability slightly greater than 1 (i.e., a small positive magnetic susceptibility ) and hence are attracted to magnetic fields. The magnetic moment induced by the applied field is linear in the field strength and rather weak. It typically requires a sensitive analytical balance to detect the effect and modern measurements on paramagnetic materials are often conducted with a SQUID magnetometer . Paramagnetism is due to the presence of unpaired electrons in the material, so most atoms with incompletely filled atomic orbitals are paramagnetic, although exceptions such as copper exist. Due to their spin , unpaired electrons have a magnetic dipole moment and act like tiny magnets. An external magnetic field causes the electrons' spins to align parallel to the field, causing a net attraction. Paramagnetic materials include aluminium , oxygen , titanium , and iron oxide (FeO). Therefore, a simple rule of thumb is used in chemistry to determine whether a particle (atom, ion, or molecule) is paramagnetic or diamagnetic: [ 3 ] if all electrons in the particle are paired, then the substance made of this particle is diamagnetic; if it has unpaired electrons, then the substance is paramagnetic. Unlike ferromagnets , paramagnets do not retain any magnetization in the absence of an externally applied magnetic field because thermal motion randomizes the spin orientations. (Some paramagnetic materials retain spin disorder even at absolute zero , meaning they are paramagnetic in the ground state , i.e. in the absence of thermal motion.) Thus the total magnetization drops to zero when the applied field is removed. Even in the presence of the field there is only a small induced magnetization because only a small fraction of the spins will be oriented by the field. This fraction is proportional to the field strength and this explains the linear dependency. The attraction experienced by ferromagnetic materials is non-linear and much stronger, so that it is easily observed, for instance, in the attraction between a refrigerator magnet and the iron of the refrigerator itself. Constituent atoms or molecules of paramagnetic materials have permanent magnetic moments ( dipoles ), even in the absence of an applied field. The permanent moment generally is due to the spin of unpaired electrons in atomic or molecular electron orbitals (see Magnetic moment ). In pure paramagnetism, the dipoles do not interact with one another and are randomly oriented in the absence of an external field due to thermal agitation, resulting in zero net magnetic moment. When a magnetic field is applied, the dipoles will tend to align with the applied field, resulting in a net magnetic moment in the direction of the applied field. In the classical description, this alignment can be understood to occur due to a torque being provided on the magnetic moments by an applied field, which tries to align the dipoles parallel to the applied field. 
However, the true origins of the alignment can only be understood via the quantum-mechanical properties of spin and angular momentum . [ 4 ] If there is sufficient energy exchange between neighbouring dipoles, they will interact, and may spontaneously align or anti-align and form magnetic domains, resulting in ferromagnetism (permanent magnets) or antiferromagnetism , respectively. Paramagnetic behavior can also be observed in ferromagnetic materials that are above their Curie temperature , and in antiferromagnets above their Néel temperature . At these temperatures, the available thermal energy simply overcomes the interaction energy between the spins. In general, paramagnetic effects are quite small: the magnetic susceptibility is of the order of 10 −3 to 10 −5 for most paramagnets, but may be as high as 10 −1 for synthetic paramagnets such as ferrofluids . [ 5 ] In conductive materials, the electrons are delocalized , that is, they travel through the solid more or less as free electrons . Conductivity can be understood in a band structure picture as arising from the incomplete filling of energy bands. In an ordinary nonmagnetic conductor the conduction band is identical for both spin-up and spin-down electrons. When a magnetic field is applied, the conduction band splits apart into a spin-up and a spin-down band due to the difference in magnetic potential energy for spin-up and spin-down electrons. Since the Fermi level must be identical for both bands, this means that there will be a small surplus of the type of spin in the band that moved downwards. This effect is a weak form of paramagnetism known as Pauli paramagnetism . The effect always competes with a diamagnetic response of opposite sign due to all the core electrons of the atoms. Stronger forms of magnetism usually require localized rather than itinerant electrons. However, in some cases a band structure can result in which there are two delocalized sub-bands with states of opposite spins that have different energies. If one subband is preferentially filled over the other, one can have itinerant ferromagnetic order. This situation usually only occurs in relatively narrow (d-)bands, which are poorly delocalized. Generally, strong delocalization in a solid due to large overlap with neighboring wave functions means that there will be a large Fermi velocity ; this means that the number of electrons in a band is less sensitive to shifts in that band's energy, implying a weak magnetism. This is why s- and p-type metals are typically either Pauli-paramagnetic or, as in the case of gold, even diamagnetic. In the latter case the diamagnetic contribution from the closed shell inner electrons simply wins over the weak paramagnetic term of the almost free electrons. Stronger magnetic effects are typically only observed when d or f electrons are involved. Particularly the latter are usually strongly localized. Moreover, the size of the magnetic moment on a lanthanide atom can be quite large as it can carry up to 7 unpaired electrons in the case of gadolinium (III) (hence its use in MRI ). The high magnetic moments associated with lanthanides are one reason why superstrong magnets are typically based on elements like neodymium or samarium . The above picture is a generalization as it pertains to materials with an extended lattice rather than a molecular structure. Molecular structure can also lead to localization of electrons.
Although there are usually energetic reasons why a molecular structure results such that it does not exhibit partly filled orbitals (i.e. unpaired spins), some non-closed shell moieties do occur in nature. Molecular oxygen is a good example. Even in the frozen solid it contains di-radical molecules resulting in paramagnetic behavior. The unpaired spins reside in orbitals derived from oxygen p wave functions, but the overlap is limited to the one neighbor in the O 2 molecules. The distances to other oxygen atoms in the lattice remain too large to lead to delocalization and the magnetic moments remain unpaired. The Bohr–Van Leeuwen theorem proves that there cannot be any diamagnetism or paramagnetism in a purely classical system. The paramagnetic response has then two possible quantum origins, either coming from permanent magnetic moments of the ions or from the spatial motion of the conduction electrons inside the material. Both descriptions are given below. For low levels of magnetization, the magnetization of paramagnets follows what is known as Curie's law , at least approximately. This law indicates that the susceptibility, χ {\displaystyle \chi } , of paramagnetic materials is inversely proportional to their temperature, i.e. that materials become more magnetic at lower temperatures. The mathematical expression is: M = χ H = C T H {\displaystyle \mathbf {M} =\chi \mathbf {H} ={\frac {C}{T}}\mathbf {H} } where: M {\displaystyle \mathbf {M} } is the resulting magnetization, measured in amperes per meter (A/m); χ {\displaystyle \chi } is the volume magnetic susceptibility ( dimensionless ); H {\displaystyle \mathbf {H} } is the auxiliary magnetic field (A/m); T is absolute temperature , measured in kelvins (K); and C is a material-specific Curie constant (K). Curie's law is valid under the commonly encountered conditions of low magnetization ( μ B H ≲ k B T ), but does not apply in the high-field/low-temperature regime where saturation of magnetization occurs ( μ B H ≳ k B T ) and magnetic dipoles are all aligned with the applied field. When the dipoles are aligned, increasing the external field will not increase the total magnetization since there can be no further alignment. For a paramagnetic ion with noninteracting magnetic moments with angular momentum J , the Curie constant is related to the individual ions' magnetic moments, C = n 3 k B μ e f f 2 where μ e f f = g J μ B J ( J + 1 ) . {\displaystyle C={\frac {n}{3k_{\mathrm {B} }}}\mu _{\mathrm {eff} }^{2}{\text{ where }}\mu _{\mathrm {eff} }=g_{J}\mu _{\mathrm {B} }{\sqrt {J(J+1)}}.} where n is the number of atoms per unit volume. The parameter μ eff is interpreted as the effective magnetic moment per paramagnetic ion. If one uses a classical treatment with molecular magnetic moments represented as discrete magnetic dipoles, μ , a Curie Law expression of the same form will emerge with μ appearing in place of μ eff . Curie's Law can be derived by considering a substance with noninteracting magnetic moments with angular momentum J . If orbital contributions to the magnetic moment are negligible (a common case), then in what follows J = S . If we apply a magnetic field along what we choose to call the z -axis, each paramagnetic center will experience Zeeman splitting of its energy levels, each with a z -component labeled by M J (or just M S for the spin-only magnetic case). Applying semiclassical Boltzmann statistics , the magnetization of such a substance is n m ¯ = n ∑ M J = − J J μ M J e − E M J / k B T ∑ M J = − J J e − E M J / k B T = n ∑ M J = − J J M J g J μ B e M J g J μ B H / k B T ∑ M J = − J J e M J g J μ B H / k B T .
{\displaystyle n{\bar {m}}={\frac {n\sum \limits _{M_{J}=-J}^{J}{\mu _{M_{J}}e^{{-E_{M_{J}}}/{k_{\mathrm {B} }T}\;}}}{\sum \limits _{M_{J}=-J}^{J}{e^{{-E_{M_{J}}}/{k_{\mathrm {B} }T}\;}}}}={\frac {n\sum \limits _{M_{J}=-J}^{J}{M_{J}g_{J}\mu _{\mathrm {B} }e^{{M_{J}g_{J}\mu _{\mathrm {B} }H}/{k_{\mathrm {B} }T}\;}}}{\sum \limits _{M_{J}=-J}^{J}{e^{{M_{J}g_{J}\mu _{\mathrm {B} }H}/{k_{\mathrm {B} }T}\;}}}}.} Here μ M J {\displaystyle \mu _{M_{J}}} is the z -component of the magnetic moment for each Zeeman level, so μ M J = M J g J μ B {\displaystyle \mu _{M_{J}}=M_{J}g_{J}\mu _{\mathrm {B} }} ; μ B {\displaystyle \mu _{\mathrm {B} }} is called the Bohr magneton and g J is the Landé g-factor , which reduces to the free-electron g-factor, g S , when J = S . (In this treatment, we assume that the x - and y -components of the magnetization, averaged over all molecules, cancel out because the field applied along the z -axis leaves them randomly oriented.) The energy of each Zeeman level is E M J = − M J g J μ B H {\displaystyle E_{M_{J}}=-M_{J}g_{J}\mu _{\mathrm {B} }H} . For temperatures over a few K , M J g J μ B H / k B T ≪ 1 {\displaystyle M_{J}g_{J}\mu _{\mathrm {B} }H/k_{\mathrm {B} }T\ll 1} , and we can apply the approximation e M J g J μ B H / k B T ≃ 1 + M J g J μ B H / k B T {\displaystyle e^{M_{J}g_{J}\mu _{\mathrm {B} }H/k_{\mathrm {B} }T\;}\simeq 1+M_{J}g_{J}\mu _{\mathrm {B} }H/k_{\mathrm {B} }T\;} : m ¯ = ∑ M J = − J J M J g J μ B e M J g J μ B H / k B T ∑ M J = − J J e M J g J μ B H / k B T ≃ g J μ B ∑ M J = − J J M J ( 1 + M J g J μ B H / k B T ) ∑ M J = − J J ( 1 + M J g J μ B H / k B T ) = g J 2 μ B 2 H k B T ∑ − J J M J 2 ∑ M J = − J J ( 1 ) , {\displaystyle {\bar {m}}={\frac {\sum \limits _{M_{J}=-J}^{J}{M_{J}g_{J}\mu _{\mathrm {B} }e^{M_{J}g_{J}\mu _{\mathrm {B} }H/k_{\mathrm {B} }T\;}}}{\sum \limits _{M_{J}=-J}^{J}e^{M_{J}g_{J}\mu _{\mathrm {B} }H/k_{\mathrm {B} }T\;}}}\simeq g_{J}\mu _{\mathrm {B} }{\frac {\sum \limits _{M_{J}=-J}^{J}M_{J}\left(1+M_{J}g_{J}\mu _{\mathrm {B} }H/k_{\mathrm {B} }T\;\right)}{\sum \limits _{M_{J}=-J}^{J}\left(1+M_{J}g_{J}\mu _{\mathrm {B} }H/k_{\mathrm {B} }T\;\right)}}={\frac {g_{J}^{2}\mu _{\mathrm {B} }^{2}H}{k_{\mathrm {B} }T}}{\frac {\sum \limits _{-J}^{J}M_{J}^{2}}{\sum \limits _{M_{J}=-J}^{J}{(1)}}},} which yields: m ¯ = g J 2 μ B 2 H 3 k B T J ( J + 1 ) . {\displaystyle {\bar {m}}={\frac {g_{J}^{2}\mu _{\mathrm {B} }^{2}H}{3k_{\mathrm {B} }T}}J(J+1).} The bulk magnetization is then M = n m ¯ = n 3 k B T [ g J 2 J ( J + 1 ) μ B 2 ] H , {\displaystyle M=n{\bar {m}}={\frac {n}{3k_{\mathrm {B} }T}}\left[g_{J}^{2}J(J+1)\mu _{\mathrm {B} }^{2}\right]H,} and the susceptibility is given by χ = ∂ M m ∂ H = n 3 k B T μ e f f 2 ; and μ e f f = g J J ( J + 1 ) μ B . {\displaystyle \chi ={\frac {\partial M_{\rm {m}}}{\partial H}}={\frac {n}{3k_{\rm {B}}T}}\mu _{\mathrm {eff} }^{2}{\text{ ; and }}\mu _{\mathrm {eff} }=g_{J}{\sqrt {J(J+1)}}\mu _{\mathrm {B} }.} When orbital angular momentum contributions to the magnetic moment are small, as occurs for most organic radicals or for octahedral transition metal complexes with d 3 or high-spin d 5 configurations, the effective magnetic moment takes the form (with g-factor g e = 2.0023... ≈ 2), μ e f f ≃ 2 S ( S + 1 ) μ B = N u ( N u + 2 ) μ B , {\displaystyle \mu _{\mathrm {eff} }\simeq 2{\sqrt {S(S+1)}}\mu _{\mathrm {B} }={\sqrt {N_{\rm {u}}(N_{\rm {u}}+2)}}\mu _{\mathrm {B} },} where N u is the number of unpaired electrons . In other transition metal complexes this yields a useful, if somewhat cruder, estimate.
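The small-field approximation in the derivation above can be checked against the exact Boltzmann average. A minimal sketch in Python for the spin-only case J = S = 1/2 (the field and temperature are illustrative values; H is used loosely for the field entering μB·H, as in the formulas above):

import numpy as np

g = 2.0          # Landé g-factor (free-electron value, approximately)
mu_B = 9.274e-24 # Bohr magneton, J/T
k_B = 1.381e-23  # Boltzmann constant, J/K

def exact_moment(H, T, J=0.5):
    # Boltzmann-weighted average of mu_z = M_J * g * mu_B over the Zeeman
    # levels M_J = -J, ..., J with energies E = -M_J * g * mu_B * H.
    M_J = np.arange(-J, J + 1)
    mu_z = M_J * g * mu_B
    w = np.exp(mu_z * H / (k_B * T))
    return np.sum(mu_z * w) / np.sum(w)

def curie_moment(H, T, J=0.5):
    # Small-field result derived above: m = g^2 mu_B^2 H J(J+1) / (3 k_B T)
    return g**2 * mu_B**2 * H * J * (J + 1) / (3 * k_B * T)

H, T = 1.0, 300.0          # mu_B * H << k_B * T in this regime
print(exact_moment(H, T))  # ~2.076e-26 J/T
print(curie_moment(H, T))  # ~2.076e-26 J/T, in close agreement

At low temperature or high field the two values separate; that is the saturation regime in which Curie's law no longer applies.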
When the Curie constant is null, second order effects that couple the ground state with the excited states can also lead to a paramagnetic susceptibility independent of the temperature, known as Van Vleck susceptibility . For some alkali metals and noble metals, conduction electrons are weakly interacting and delocalized in space forming a Fermi gas . For these materials one contribution to the magnetic response comes from the interaction between the electron spins and the magnetic field known as Pauli paramagnetism. For a small magnetic field H {\displaystyle \mathbf {H} } , the additional energy per electron from the interaction between an electron spin and the magnetic field is given by: Δ E = − μ 0 μ e ⋅ H = ± μ 0 μ B H , {\displaystyle \Delta E=-\mu _{0}{\boldsymbol {\mu }}_{e}\cdot \mathbf {H} =\pm \mu _{0}\mu _{\rm {B}}H,} where μ 0 {\displaystyle \mu _{0}} is the vacuum permeability , μ e {\displaystyle {\boldsymbol {\mu }}_{e}} is the electron magnetic moment , μ B {\displaystyle \mu _{\rm {B}}} is the Bohr magneton , ℏ {\displaystyle \hbar } is the reduced Planck constant, and the g-factor cancels with the spin S = ± ℏ / 2 {\displaystyle \mathbf {S} =\pm \hbar /2} . The ± {\displaystyle \pm } indicates that the sign is positive (negative) when the electron spin component in the direction of H {\displaystyle \mathbf {H} } is parallel (antiparallel) to the magnetic field. For low temperatures with respect to the Fermi temperature T F {\displaystyle T_{\rm {F}}} (around 10 4 kelvins for metals), the number density of electrons n ↑ {\displaystyle n_{\uparrow }} ( n ↓ {\displaystyle n_{\downarrow }} ) pointing parallel (antiparallel) to the magnetic field can be written as: n ↑ / ↓ = n e 2 ± μ 0 μ B H 2 g ( E F ) , {\displaystyle n_{\uparrow /\downarrow }={\frac {n_{e}}{2}}\pm {\frac {\mu _{0}\mu _{\rm {B}}H}{2}}g(E_{\mathrm {F} }),} with n e {\displaystyle n_{e}} the total free-electrons density and g ( E F ) {\displaystyle g(E_{\mathrm {F} })} the electronic density of states (number of states per energy per volume) at the Fermi energy E F {\displaystyle E_{\mathrm {F} }} . In this approximation the magnetization is given as the magnetic moment of one electron times the difference in densities: M = μ B ( n ↑ − n ↓ ) = μ 0 μ B 2 g ( E F ) H , {\displaystyle M=\mu _{\rm {B}}(n_{\uparrow }-n_{\downarrow })=\mu _{0}\mu _{\rm {B}}^{2}g(E_{\mathrm {F} })H,} which yields a positive paramagnetic susceptibility independent of temperature: χ P = μ 0 μ B 2 g ( E F ) . {\displaystyle \chi _{\rm {P}}=\mu _{0}\mu _{\rm {B}}^{2}g(E_{\mathrm {F} }).} The Pauli paramagnetic susceptibility is a macroscopic effect and has to be contrasted with Landau diamagnetic susceptibility which is equal to minus one third of Pauli's and also comes from delocalized electrons. The Pauli susceptibility comes from the spin interaction with the magnetic field while the Landau susceptibility comes from the spatial motion of the electrons and it is independent of the spin. In doped semiconductors the ratio between Landau's and Pauli's susceptibilities changes as the effective mass of the charge carriers m ∗ {\displaystyle m^{*}} can differ from the electron mass m e {\displaystyle m_{e}} . The magnetic response calculated for a gas of electrons is not the full picture as the magnetic susceptibility coming from the ions has to be included. Additionally, these formulas may break down for confined systems that differ from the bulk, like quantum dots , or for high fields, as demonstrated in the De Haas-Van Alphen effect . Pauli paramagnetism is named after the physicist Wolfgang Pauli . Before Pauli's theory, the lack of a strong Curie paramagnetism in metals was an open problem as the leading Drude model could not account for this contribution without the use of quantum statistics . Pauli paramagnetism and Landau diamagnetism are essentially applications of the spin and the free electron model ; the first is due to the intrinsic spin of electrons, the second to their orbital motion. [ 7 ] [ 8 ]
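For a free-electron gas the density of states at the Fermi energy is g(E_F) = 3n_e/(2E_F), so the Pauli result above gives a quick order-of-magnitude estimate. A minimal sketch in Python (the electron density is an approximate, illustrative value for a sodium-like metal):

import math

mu_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A
mu_B = 9.274e-24       # Bohr magneton, J/T
hbar = 1.055e-34       # reduced Planck constant, J*s
m_e = 9.109e-31        # electron mass, kg

n_e = 2.65e28          # conduction-electron density, m^-3 (sodium, roughly)

# Free-electron Fermi energy and density of states at E_F:
E_F = hbar**2 / (2 * m_e) * (3 * math.pi**2 * n_e) ** (2 / 3)
g_EF = 3 * n_e / (2 * E_F)  # states per joule per cubic meter

chi = mu_0 * mu_B**2 * g_EF
print(chi)  # ~8e-6 (dimensionless, SI): weak and temperature-independent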
Materials that are called "paramagnets" are most often those that exhibit, at least over an appreciable temperature range, magnetic susceptibilities that adhere to the Curie or Curie–Weiss laws. In principle any system that contains atoms, ions, or molecules with unpaired spins can be called a paramagnet, but the interactions between them need to be carefully considered. The narrowest definition would be: a system with unpaired spins that do not interact with each other. In this narrowest sense, the only pure paramagnet is a dilute gas of monatomic hydrogen atoms. Each atom has one non-interacting unpaired electron. A gas of lithium atoms already possesses two paired core electrons that produce a diamagnetic response of opposite sign. Strictly speaking, Li is therefore a mixed system, although admittedly the diamagnetic component is weak and often neglected. In the case of heavier elements the diamagnetic contribution becomes more important and in the case of metallic gold it dominates the properties. The element hydrogen is virtually never called 'paramagnetic' because the monatomic gas is stable only at extremely high temperature; H atoms combine to form molecular H 2 and in so doing, the magnetic moments are lost ( quenched ), because the spins pair. Hydrogen is therefore diamagnetic and the same holds true for many other elements. Although the electronic configurations of the individual atoms (and ions) of most elements contain unpaired spins, they are not necessarily paramagnetic, because at ambient temperature quenching is very much the rule rather than the exception. The quenching tendency is weakest for f-electrons because f (especially 4 f ) orbitals are radially contracted and they overlap only weakly with orbitals on adjacent atoms. Consequently, the lanthanide elements with incompletely filled 4f-orbitals are paramagnetic or magnetically ordered. [ 9 ] Thus, condensed phase paramagnets are only possible if the interactions of the spins that lead either to quenching or to ordering are kept at bay by structural isolation of the magnetic centers. There are two classes of materials for which this holds: molecular materials in which the paramagnetic center is structurally isolated from other centers (coordination complexes of d- or f-metals, for example), and dilute systems in which the paramagnetic ions or centers are dispersed in a diamagnetic host lattice. As stated above, many materials that contain d- or f-elements do retain unquenched spins. Salts of such elements often show paramagnetic behavior but at low enough temperatures the magnetic moments may order. It is not uncommon to call such materials 'paramagnets', when referring to their paramagnetic behavior above their Curie or Néel-points, particularly if such temperatures are very low or have never been properly measured. Even for iron it is not uncommon to say that iron becomes a paramagnet above its relatively high Curie-point. In that case the Curie-point is seen as a phase transition between a ferromagnet and a 'paramagnet'. The word paramagnet now merely refers to the linear response of the system to an applied field, the temperature dependence of which requires an amended version of Curie's law, known as the Curie–Weiss law : M = C T − θ H . {\displaystyle \mathbf {M} ={\frac {C}{T-\theta }}\mathbf {H} .} This amended law includes a term θ that describes the exchange interaction that is present albeit overcome by thermal motion. The sign of θ depends on whether ferro- or antiferromagnetic interactions dominate and it is seldom exactly zero, except in the dilute, isolated cases mentioned above.
Obviously, the paramagnetic Curie–Weiss description above T N or T C is a rather different interpretation of the word "paramagnet" as it does not imply the absence of interactions, but rather that the magnetic structure is random in the absence of an external field at these sufficiently high temperatures. Even if θ is close to zero this does not mean that there are no interactions, just that the aligning ferro- and the anti-aligning antiferromagnetic ones cancel. An additional complication is that the interactions are often different in different directions of the crystalline lattice ( anisotropy ), leading to complicated magnetic structures once ordered. Randomness of the structure also applies to the many metals that show a net paramagnetic response over a broad temperature range. However, they do not follow a Curie-type law as a function of temperature; often they are more or less temperature independent. This type of behavior is of an itinerant nature and better called Pauli-paramagnetism, but it is not unusual to see, for example, the metal aluminium called a "paramagnet", even though interactions are strong enough to give this element very good electrical conductivity. Some materials show induced magnetic behavior that follows a Curie type law but with exceptionally large values for the Curie constants. These materials are known as superparamagnets . They are characterized by a strong ferromagnetic or ferrimagnetic type of coupling into domains of a limited size that behave independently from one another. The bulk properties of such a system resemble those of a paramagnet, but on a microscopic level they are ordered. The materials do show an ordering temperature above which the behavior reverts to ordinary paramagnetism (with interaction). Ferrofluids are a good example, but the phenomenon can also occur inside solids, e.g., when dilute paramagnetic centers are introduced in a strong itinerant medium of ferromagnetic coupling such as when Fe is substituted in TlCu 2 Se 2 or the alloy AuFe. Such systems contain ferromagnetically coupled clusters that freeze out at lower temperatures. They are also called mictomagnets .
https://en.wikipedia.org/wiki/Paramagnetism
Paramasivam Natarajan (17 September 1940 – 18 March 2016) was an Indian photochemist , the INSA Senior Scientist at the National Centre for Ultrafast Process of the University of Madras [ 1 ] and the director of Central Salt and Marine Chemicals Research Institute (CSMCRI) of the Council of Scientific and Industrial Research. [ 2 ] He was known for his research on photochemistry of co-ordination compounds and macromolecular dye coatings for stabilization of electrodes. [ 3 ] He was an elected fellow of the Indian National Science Academy , [ 4 ] International Union of Pure and Applied Chemistry (IUPAC) and the Indian Academy of Sciences . [ 5 ] The Council of Scientific and Industrial Research , the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology , one of the highest Indian science awards, in 1984, for his contributions to chemical sciences. [ 6 ] Born on 17 September 1940 in the south Indian state of Tamil Nadu , Paramasivam Natarajan graduated in chemistry from the University of Madras in 1959. He started his career as a lecturer at the Government Arts College of the Madras University in 1959 but later moved to the NGM College, Pollachi in 1963. The next year, he joined the Banaras Hindu University (BHU) as a CSIR Junior Research fellow and, during the tenure of the fellowship, obtained his master's degree in 1963. [ 7 ] After continuing at BHU for a year more, he became a lecturer at the Jawaharlal Institute of Postgraduate Medical Education and Research (JIPMER) where he stayed till 1970. Later he went to the US as a teaching assistant at the University of Southern California , simultaneously pursuing his doctoral studies under the guidance of John F. Endicott. He secured a PhD in 1971 and did his postdoctoral studies under his PhD guide, Endicott, at Wayne State University as the latter had moved to the Michigan-based university by that time. [ 7 ] Natarajan returned to India in 1974 and joined his alma mater, Madras University, as a reader of the department of physical chemistry. In 1977, he became a professor in charge of the Post Graduate Centre of the university in Tiruchirappalli . In 1982, he returned to the university headquarters in Chennai as the head of the department of inorganic chemistry. [ 8 ] In 1991, he was deputed by the university as the director of Central Salt and Marine Chemicals Research Institute (CSMCRI), a post he held till 1996. [ 9 ] After the completion of the assignment at CSMCRI, he resumed his duties at the university and became a senior professor in 1998. At the time of his superannuation in 2001, he held the post of an INSA Senior Scientist at the National Centre for Ultrafast Process of the university and served as a member of the university syndicate. [ 7 ] Natarajan was married to Sivabagyam and the couple had two daughters, Shiva Sukanthi and Shakthi. He died on 18 March 2016, at the age of 75, survived by his wife, children and their families. [ 10 ] Focusing his research on photochemistry, Natarajan studied various areas of the discipline such as polymer dynamics using fluorescence, flash photolysis studies using picosecond and femtosecond lasers and solar energy conversion . [ 10 ] He demonstrated that macromolecular dye coatings of electrodes used in photoelectrochemical cells returned high current density . [ 11 ] This led to his subsequent studies of solar energy conversion using chemically modified electrodes.
He published his research in peer-reviewed journals including Nature , Journal of the American Chemical Society , Journal of Physical Chemistry A , Inorganic Chemistry and Chemical Communications for a total of 107 articles. [ 12 ] [ note 1 ] He was granted patents for four of his findings. [ 13 ] He mentored over 30 doctoral scholars and was associated with a number of journals as their editorial board member. He also was on a number of government committees including those of the Department of Science and Technology and the Council of Scientific and Industrial Research [ 10 ] and delivered several featured talks and orations. [ 14 ] The Council of Scientific and Industrial Research awarded Natarajan the Shanti Swarup Bhatnagar Prize , one of the highest Indian science awards, in 1984. [ 15 ] He received the Best Teacher Award from the Government of Tamil Nadu the same year. [ 10 ] In 1999, he was awarded the Acharya P.C. Ray Memorial Award by the Indian Chemical Society . He held the National Lectureship and the National Fellowship of the University Grants Commission of India during 1986–87 and 1989–91 respectively. Sir M. Visweshwaraiah Chair of the University of Mysore (1999), Pandit Jawaharlal Nehru Chair of the University of Hyderabad (2004), Raja Ramanna Fellowship of the Department of Science and Technology (2006–09) and the senior scientist professorship of the Indian National Science Academy (at the time of his death) were some of the other notable positions he held. He was an elected fellow of the Indian Academy of Sciences, Indian National Science Academy, International Union of Pure and Applied Chemistry, Tamil Nadu Academy of Sciences, Society of Bio-Sciences and Gujarat Academy of Sciences and a member of Sigma Xi : The Scientific Research Society. [ 7 ]
https://en.wikipedia.org/wiki/Paramasivam_Natarajan
A parameter (from Ancient Greek παρά ( pará ) ' beside, subsidiary ' and μέτρον ( métron ) ' measure ' ), generally, is any characteristic that can help in defining or classifying a particular system (meaning an event, project, object, situation, etc.). That is, a parameter is an element of a system that is useful, or critical, when identifying the system, or when evaluating its performance, status, condition, etc. Parameter has more specific meanings within various disciplines, including mathematics , computer programming , engineering , statistics , logic , linguistics , and electronic musical composition. In addition to its technical uses, there are also extended uses, especially in non-scientific contexts, where it is used to mean defining characteristics or boundaries, as in the phrases 'test parameters' or 'game play parameters'. [ citation needed ] When a system is modeled by equations, the values that describe the system are called parameters . For example, in mechanics , the masses, the dimensions and shapes (for solid bodies), the densities and the viscosities (for fluids), appear as parameters in the equations modeling movements. There are often several choices for the parameters, and choosing a convenient set of parameters is called parametrization . For example, if one were considering the movement of an object on the surface of a sphere much larger than the object (e.g. the Earth), there are two commonly used parametrizations of its position: angular coordinates (like latitude/longitude), which neatly describe large movements along circles on the sphere, and directional distance from a known point (e.g. "10km NNW of Toronto" or equivalently "8km due North, and then 6km due West, from Toronto" ), which are often simpler for movement confined to a (relatively) small area, like within a particular country or region. Such parametrizations are also relevant to the modelization of geographic areas (i.e. map drawing ). Mathematical functions have one or more arguments that are designated in the definition by variables . A function definition can also contain parameters, but unlike variables, parameters are not listed among the arguments that the function takes. When parameters are present, the definition actually defines a whole family of functions, one for every valid set of values of the parameters. For instance, one could define a general quadratic function by declaring f ( x ) = a x 2 + b x + c . {\displaystyle f(x)=ax^{2}+bx+c.} Here, the variable x designates the function's argument, but a , b , and c are parameters (in this instance, also called coefficients ) that determine which particular quadratic function is being considered. A parameter could be incorporated into the function name to indicate its dependence on the parameter. For instance, one may define the base- b logarithm by the formula log b ⁡ ( x ) = ln ⁡ ( x ) ln ⁡ ( b ) , {\displaystyle \log _{b}(x)={\frac {\ln(x)}{\ln(b)}},} where b is a parameter that indicates which logarithmic function is being used. It is not an argument of the function, and will, for instance, be a constant when considering the derivative log b ′ ⁡ ( x ) = ( x ln ⁡ ( b ) ) − 1 {\displaystyle \textstyle \log _{b}'(x)=(x\ln(b))^{-1}} . In some informal situations it is a matter of convention (or historical accident) whether some or all of the symbols in a function definition are called parameters. However, changing the status of symbols between parameter and variable changes the function as a mathematical object.
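In code the same distinction shows up naturally: fixing the parameters of a family produces one concrete function of the remaining argument. A minimal Python sketch (the helper name make_quadratic is an illustrative choice):

def make_quadratic(a, b, c):
    # a, b and c are parameters: fixing them selects one member of the family.
    def f(x):
        # x is the argument (the variable) of the selected function.
        return a * x**2 + b * x + c
    return f

f1 = make_quadratic(1, 0, 0)   # f1(x) = x^2
f2 = make_quadratic(2, -3, 1)  # f2(x) = 2x^2 - 3x + 1
print(f1(3), f2(3))            # 9 10

This is essentially currying: a function of (a, b, c, x) is turned into a family of functions of x indexed by (a, b, c).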
For instance, the notation for the falling factorial power , n k _ = n ( n − 1 ) ( n − 2 ) ⋯ ( n − k + 1 ) , {\displaystyle n^{\underline {k}}=n(n-1)(n-2)\cdots (n-k+1),} defines a polynomial function of n (when k is considered a parameter), but is not a polynomial function of k (when n is considered a parameter). Indeed, in the latter case, it is only defined for non-negative integer arguments. More formal presentations of such situations typically start out with a function of several variables (including all those that might sometimes be called "parameters"), such as f ( x ; a , b , c ) = a x 2 + b x + c , {\displaystyle f(x;a,b,c)=ax^{2}+bx+c,} as the most fundamental object being considered, then defining functions with fewer variables from the main one by means of currying . Sometimes it is useful to consider all functions with certain parameters as parametric family , i.e. as an indexed family of functions. Examples from probability theory are given further below . W.M. Woods ... a mathematician ... writes ... "... a variable is one of the many things a parameter is not." ... The dependent variable, the speed of the car, depends on the independent variable, the position of the gas pedal. [Kilpatrick quoting Woods] "Now ... the engineers ... change the lever arms of the linkage ... the speed of the car ... will still depend on the pedal position ... but in a ... different manner . You have changed a parameter" In the context of a mathematical model , such as a probability distribution , Bard drew the same distinction between the variables of a model and its parameters. In analytic geometry , a curve can be described as the image of a function whose argument, typically called the parameter , lies in a real interval . For example, the unit circle can be specified in the following two ways: implicitly, as the set of points ( x , y ) satisfying x 2 + y 2 = 1 , {\displaystyle x^{2}+y^{2}=1,} or parametrically, as the image of ( cos ⁡ t , sin ⁡ t ) {\displaystyle (\cos t,\sin t)} with parameter t ∈ [ 0 , 2 π ) . {\displaystyle t\in [0,2\pi ).} As a parametric equation this can be written ( x , y ) = ( cos ⁡ t , sin ⁡ t ) . {\displaystyle (x,y)=(\cos t,\sin t).} The parameter t in this equation would elsewhere in mathematics be called the independent variable . In mathematical analysis , integrals dependent on a parameter are often considered. These are of the form F ( t ) = ∫ x 0 x 1 f ( x ; t ) d x . {\displaystyle F(t)=\int _{x_{0}}^{x_{1}}f(x;t)\,dx.} In this formula, t is the argument of the function F , and on the right-hand side it is the parameter on which the integral depends. When evaluating the integral, t is held constant, and so it is considered to be a parameter. If we are interested in the value of F for different values of t , we then consider t to be a variable. The quantity x is a dummy variable or variable of integration (confusingly, also sometimes called a parameter of integration ). In statistics and econometrics , the probability framework above still holds, but attention shifts to estimating the parameters of a distribution based on observed data, or testing hypotheses about them. In frequentist estimation parameters are considered "fixed but unknown", whereas in Bayesian estimation they are treated as random variables, and their uncertainty is described as a distribution. [ citation needed ] [ 2 ] In estimation theory of statistics, "statistic" or estimator refers to samples, whereas "parameter" or estimand refers to populations, where the samples are taken from. A statistic is a numerical characteristic of a sample that can be used as an estimate of the corresponding parameter, the numerical characteristic of the population from which the sample was drawn. For example, the sample mean (estimator), denoted X ¯ {\displaystyle {\overline {X}}} , can be used as an estimate of the mean parameter (estimand), denoted μ , of the population from which the sample was drawn.
Similarly, the sample variance (estimator), denoted S 2 , can be used to estimate the variance parameter (estimand), denoted σ 2 , of the population from which the sample was drawn. (Note that the sample standard deviation ( S ) is not an unbiased estimate of the population standard deviation ( σ ): see Unbiased estimation of standard deviation .) It is possible to make statistical inferences without assuming a particular parametric family of probability distributions . In that case, one speaks of non-parametric statistics as opposed to the parametric statistics just described. For example, a test based on Spearman's rank correlation coefficient would be called non-parametric since the statistic is computed from the rank-order of the data disregarding their actual values (and thus regardless of the distribution they were sampled from), whereas those based on the Pearson product-moment correlation coefficient are parametric tests since it is computed directly from the data values and thus estimates the parameter known as the population correlation . In probability theory , one may describe the distribution of a random variable as belonging to a family of probability distributions , distinguished from each other by the values of a finite number of parameters . For example, one talks about "a Poisson distribution with mean value λ". The function defining the distribution (the probability mass function ) is: f ( k ; λ ) = e − λ λ k k ! . {\displaystyle f(k;\lambda )={\frac {e^{-\lambda }\lambda ^{k}}{k!}}.} This example nicely illustrates the distinction between constants, parameters, and variables. e is Euler's number , a fundamental mathematical constant . The parameter λ is the mean number of observations of some phenomenon in question, a property characteristic of the system. k is a variable, in this case the number of occurrences of the phenomenon actually observed from a particular sample. If we want to know the probability of observing k 1 occurrences, we plug it into the function to get f ( k 1 ; λ ) {\displaystyle f(k_{1};\lambda )} . Without altering the system, we can take multiple samples, which will have a range of values of k , but the system is always characterized by the same λ. For instance, suppose we have a radioactive sample that emits, on average, five particles every ten minutes. We take measurements of how many particles the sample emits over ten-minute periods. The measurements exhibit different values of k , and if the sample behaves according to Poisson statistics, then each value of k will come up in a proportion given by the probability mass function above. From measurement to measurement, however, λ remains constant at 5. If we do not alter the system, then the parameter λ is unchanged from measurement to measurement; if, on the other hand, we modulate the system by replacing the sample with a more radioactive one, then the parameter λ would increase. Another common distribution is the normal distribution , which has as parameters the mean μ and the variance σ². In the above examples, the distributions of the random variables are completely specified by the type of distribution, i.e. Poisson or normal, and the parameter values, i.e. mean and variance. In such a case, we have a parameterized distribution. It is possible to use the sequence of moments (mean, mean square, ...) or cumulants (mean, variance, ...) as parameters for a probability distribution: see Statistical parameter . In computer programming , two notions of parameter are commonly used, and are referred to as parameters and arguments —or more formally as a formal parameter and an actual parameter .
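The formal/actual distinction is visible directly in source code; a minimal Python sketch (the function name and values are illustrative, mirroring the worked example that follows):

def f(x):      # x is the formal parameter of the definition
    return x + 2

result = f(3)  # 3 is the actual parameter (argument) supplied at the call
print(result)  # 5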
For example, in the definition of a function such as f ( x ) = x + 2 , {\displaystyle f(x)=x+2,} x is the formal parameter (the parameter ) of the defined function. When the function is evaluated for a given value, as in f ( 3 ) = 3 + 2 = 5 , {\displaystyle f(3)=3+2=5,} 3 is the actual parameter (the argument ) for evaluation by the defined function; it is a given value (actual value) that is substituted for the formal parameter of the defined function. (In casual usage the terms parameter and argument might inadvertently be interchanged, and thereby used incorrectly.) These concepts are discussed in a more precise way in functional programming and its foundational disciplines, lambda calculus and combinatory logic . Terminology varies between languages; some computer languages such as C define parameter and argument as given here, while Eiffel uses an alternative convention . In artificial intelligence , a model describes the probability that something will occur. Parameters in a model are the weight of the various probabilities. Tiernan Ray, in an article on GPT-3, described parameters this way: A parameter is a calculation in a neural network that applies a great or lesser weighting to some aspect of the data, to give that aspect greater or lesser prominence in the overall calculation of the data. It is these weights that give shape to the data, and give the neural network a learned perspective on the data. [ 3 ] In engineering (especially involving data acquisition) the term parameter sometimes loosely refers to an individual measured item. This usage is not consistent, as sometimes the term channel refers to an individual measured item, with parameter referring to the setup information about that channel. "Speaking generally, properties are those physical quantities which directly describe the physical attributes of the system; parameters are those combinations of the properties which suffice to determine the response of the system. Properties can have all sorts of dimensions, depending upon the system being considered; parameters are dimensionless, or have the dimension of time or its reciprocal." [ 4 ] The term can also be used in engineering contexts, however, as it is typically used in the physical sciences. In environmental science and particularly in chemistry and microbiology , a parameter is used to describe a discrete chemical or microbiological entity that can be assigned a value: commonly a concentration, but may also be a logical entity (present or absent), a statistical result such as a 95 percentile value or in some cases a subjective value. Within linguistics, the word "parameter" is almost exclusively used to denote a binary switch in a Universal Grammar within a Principles and Parameters framework. In logic , the parameters passed to (or operated on by) an open predicate are called parameters by some authors (e.g., Prawitz 's Natural Deduction ; [ 5 ] Paulson 's Designing a theorem prover ). Parameters locally defined within the predicate are called variables . This extra distinction pays off when defining substitution (without this distinction special provision must be made to avoid variable capture). Others (maybe most) just call parameters passed to (or operated on by) an open predicate variables , and when defining substitution have to distinguish between free variables and bound variables . In music theory, a parameter denotes an element which may be manipulated (composed), separately from the other elements.
The term is used particularly for pitch , loudness , duration , and timbre , though theorists or composers have sometimes considered other musical aspects as parameters. The term is particularly used in serial music , where each parameter may follow some specified series. Paul Lansky and George Perle criticized the extension of the word "parameter" to this sense, since it is not closely related to its mathematical sense, [ 6 ] but it remains common. The term is also common in music production, as the functions of audio processing units (such as the attack, release, ratio, threshold, and other variables on a compressor) are defined by parameters specific to the type of unit (compressor, equalizer, delay, etc.).
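Drawing together the statistical and programming senses above, the following Python sketch is illustrative only (the helper name poisson_pmf is invented for this example, not taken from any library): λ is a fixed parameter of a Poisson model while k varies from sample to sample, and x is the formal parameter of a function that receives 3 as its argument.

import math

def poisson_pmf(k, lam):
    # lam is the parameter characterizing the system (the mean rate);
    # k is the variable: the count actually observed in one sample.
    return lam ** k * math.exp(-lam) / math.factorial(k)

# The system stays characterized by lam = 5 while k varies from sample to sample.
for k in range(4):
    print(k, round(poisson_pmf(k, 5), 4))

def f(x):          # x is the formal parameter of f
    return x + 2

print(f(3))        # 3 is the actual parameter (the argument); prints 5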
https://en.wikipedia.org/wiki/Parameter
The parameter space is the space of all possible parameter values that define a particular mathematical model . It is also sometimes called weight space , and is often a subset of finite-dimensional Euclidean space . In statistics , parameter spaces are particularly useful for describing parametric families of probability distributions . They also form the background for parameter estimation . In the case of extremum estimators for parametric models , a certain objective function is maximized or minimized over the parameter space. [ 1 ] Theorems of existence and consistency of such estimators require some assumptions about the topology of the parameter space. For instance, compactness of the parameter space, together with continuity of the objective function, suffices for the existence of an extremum estimator. [ 1 ] Sometimes, parameters are analyzed to view how they affect their statistical model. In that context, they can be viewed as inputs of a function , in which case the technical term for the parameter space is domain of a function . The ranges of values of the parameters may form the axes of a plot , and particular outcomes of the model may be plotted against these axes to illustrate how different regions of the parameter space produce different types of behavior in the model. Parameter space contributed to the liberation of geometry from the confines of three-dimensional space . For instance, the parameter space of spheres in three dimensions has four dimensions—three for the sphere center and another for the radius. According to Dirk Struik , it was the book Neue Geometrie des Raumes (1849) by Julius Plücker that showed that geometry need not be founded on points alone; lines can serve equally well as the basic elements of space. The requirement for higher dimensions is illustrated by Plücker's line geometry : the lines of three-dimensional space form a four-parameter family, and in Plücker's six homogeneous line coordinates this family is cut out by a single quadratic relation. Thus the Klein quadric describes the parameters of lines in space.
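Returning to the statistical use of parameter spaces described above, the following Python sketch (hypothetical code assuming NumPy; the data and grid bounds are invented for illustration) carries out extremum estimation by maximizing a normal log-likelihood over a compact grid in the two-dimensional (μ, σ) parameter space:

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=500)   # sample from the "true" model

mus = np.linspace(0.0, 4.0, 81)       # one axis of the parameter space
sigmas = np.linspace(0.5, 3.0, 51)    # the other axis; the grid is compact

def log_likelihood(mu, sigma):
    # Normal log-likelihood up to an additive constant.
    return np.sum(-0.5 * ((data - mu) / sigma) ** 2 - np.log(sigma))

scores = np.array([[log_likelihood(m, s) for s in sigmas] for m in mus])
i, j = np.unravel_index(np.argmax(scores), scores.shape)
print("extremum estimate:", mus[i], sigmas[j])    # close to (2.0, 1.5)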
https://en.wikipedia.org/wiki/Parameter_space
In calculus , a parametric derivative is a derivative of a dependent variable with respect to another dependent variable that is taken when both variables depend on an independent third variable, usually thought of as "time" (that is, when the dependent variables are x and y and are given by parametric equations in t ). Let x ( t ) and y ( t ) be the coordinates of the points of the curve expressed as functions of a variable t : y = y ( t ) , x = x ( t ) . {\displaystyle y=y(t),\quad x=x(t).} The first derivative implied by these parametric equations is d y d x = d y / d t d x / d t = y ˙ ( t ) x ˙ ( t ) , {\displaystyle {\frac {dy}{dx}}={\frac {dy/dt}{dx/dt}}={\frac {{\dot {y}}(t)}{{\dot {x}}(t)}},} where the notation x ˙ ( t ) {\displaystyle {\dot {x}}(t)} denotes the derivative of x with respect to t . This can be derived using the chain rule for derivatives: d y d t = d y d x ⋅ d x d t {\displaystyle {\frac {dy}{dt}}={\frac {dy}{dx}}\cdot {\frac {dx}{dt}}} and dividing both sides by d x d t {\textstyle {\frac {dx}{dt}}} to give the equation above. In general all of these derivatives — dy / dt , dx / dt , and dy / dx — are themselves functions of t and so can be written more explicitly as, for example, d y d x ( t ) {\displaystyle {\frac {dy}{dx}}(t)} . The second derivative implied by a parametric equation is given by d 2 y d x 2 = d d x ( d y d x ) = d d t ( d y d x ) ⋅ d t d x = d d t ( y ˙ x ˙ ) 1 x ˙ = x ˙ y ¨ − y ˙ x ¨ x ˙ 3 {\displaystyle {\begin{aligned}{\frac {d^{2}y}{dx^{2}}}&={\frac {d}{dx}}\left({\frac {dy}{dx}}\right)\\[1ex]&={\frac {d}{dt}}\left({\frac {dy}{dx}}\right)\cdot {\frac {dt}{dx}}\\[1ex]&={\frac {d}{dt}}\left({\frac {\dot {y}}{\dot {x}}}\right){\frac {1}{\dot {x}}}\\[1ex]&={\frac {{\dot {x}}{\ddot {y}}-{\dot {y}}{\ddot {x}}}{{\dot {x}}^{3}}}\end{aligned}}} by making use of the quotient rule for derivatives. The latter result is useful in the computation of curvature . For example, consider the set of functions where: x ( t ) = 4 t 2 , y ( t ) = 3 t . {\displaystyle {\begin{aligned}x(t)&=4t^{2},&y(t)&=3t.\end{aligned}}} Differentiating both functions with respect to t leads to the functions d x d t = 8 t , d y d t = 3. {\displaystyle {\begin{aligned}{\frac {dx}{dt}}&=8t,&{\frac {dy}{dt}}&=3.\end{aligned}}} Substituting these into the formula for the parametric derivative, we obtain d y d x = y ˙ x ˙ = 3 8 t , {\displaystyle {\frac {dy}{dx}}={\frac {\dot {y}}{\dot {x}}}={\frac {3}{8t}},} where x ˙ {\displaystyle {\dot {x}}} and y ˙ {\displaystyle {\dot {y}}} are understood to be functions of t .
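The worked example can be checked mechanically. The following sketch, assuming the SymPy library is available, computes both parametric derivatives for x(t) = 4t² and y(t) = 3t:

import sympy as sp

t = sp.symbols('t')
x = 4 * t**2
y = 3 * t

xd, yd = sp.diff(x, t), sp.diff(y, t)        # 8*t and 3
dydx = sp.simplify(yd / xd)
print(dydx)                                   # 3/(8*t), as derived above

# Second derivative via the quotient-rule formula (x'y'' - y'x'') / x'^3.
xdd, ydd = sp.diff(x, t, 2), sp.diff(y, t, 2)
d2ydx2 = sp.simplify((xd * ydd - yd * xdd) / xd**3)
print(d2ydx2)                                 # -3/(64*t**3)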
https://en.wikipedia.org/wiki/Parametric_derivative
Parametric design is a design method in which features, such as building elements and engineering components, are shaped based on algorithmic processes rather than direct manipulation. In this approach, parameters and rules establish the relationship between design intent and design response. [ 1 ] [ 2 ] [ 3 ] The term parametric refers to the input parameters that are fed into the algorithms. [ 1 ] While the term now typically refers to the use of computer algorithms in design, early precedents can be found in the work of architects such as Antoni Gaudí . Gaudí used a mechanical model for architectural design (see analogical model ) by attaching weights to a system of strings to determine shapes for building features like arches. [ 3 ] Parametric modeling can be classified into two main categories: Propagation-based systems, where algorithms generate final shapes that are not predetermined based on initial parametric inputs. Constraint systems, in which final constraints are set, and algorithms are used to define fundamental aspects (such as structures or material usage) that satisfy these constraints. [ 4 ] Form-finding processes are often implemented through propagation-based systems. These processes optimize certain design objectives against a set of design constraints, allowing the final form of the designed object to be "found" based on these constraints. [ 4 ] Parametric tools enable reflection of both the associative logic and the geometry of the form generated by the parametric software. The design interface provides a visual screen to support visualization of the algorithmic structure of the parametric schema and to support parametric modification. [ 5 ] The principle of parametric design can be defined as mathematical design, in which the relationships between design elements are expressed as parameters that can be reformulated to generate complex geometries; these geometries are based on the elements' parameters, and by changing the parameters, new shapes are created simultaneously. [ 6 ] In parametric design software, designers and engineers are free to add and adjust the parameters that affect the design results, for example materials, dimensions, user requirements, and user body data. In the parametric design process, the designer can reveal the versions of the project and the final product, without going back to the beginning, by establishing the parameters and the relationships between the variables after creating the first model. [ 7 ] In the parametric design process, any change of parameters, such as an edit or further development, is automatically and immediately updated in the model, which acts like a "shortcut" to the final model. [ 8 ] The word parameter derives from the Greek para (besides, before or instead of) + metron (measure). If we look at the Greek origin of the word, it becomes clear that the word means a term that stands in for or determines another measure. [ 9 ] In parametric CAD software, the term parameter usually signifies a variable term in equations that determine other values. A parameter, as opposed to a constant, is characterized by having a range of possible values. One of the most seductive powers of a parametric system is the ability to explore many design variations by modifying the value of a few controlling parameters. [ 10 ] One of the earliest instances of parametric design was the upside-down model of churches by Antoni Gaudi . 
In his design for the Church of Colònia Güell , he created a model of strings weighted down with birdshot to create complex vaulted ceilings and arches. By adjusting the position of the weights or the length of the strings, he could alter the shape of each arch and observe the impact on the connected arches. He placed a mirror at the bottom of the model to see how it would appear when built right-side-up. Gaudí's analog method incorporated the main features of a computational parametric model (input parameters, equation, output): The string length, birdshot weight, and anchor point location function as independent input parameters. The vertex locations of the points on the strings serve as the model's outcomes. The outcomes are derived using explicit functions, in this case, gravity or Newton's law of motion. By modifying individual parameters of these models, Gaudí could generate different versions of his model while ensuring the resulting structure would stand in pure compression. Instead of manually calculating the results of parametric equations, he could automatically derive the shape of the catenary curves through the force of gravity acting on the strings. [ 11 ] German architect Frei Otto also experimented with non-digital parametric processes, using soap bubbles to find optimal shapes of tensegrity structures such as in the Munich Olympic Stadium , designed for the 1972 Summer Olympics in Munich . [ 12 ] Nature has often served as inspiration for architects and designers. [ 12 ] Computer technology has provided designers and architects with the tools to analyze and simulate the complexity observed in nature and apply it to structural building shapes and urban organizational patterns. In the 1980s, architects and designers began using computers running software developed for the aerospace and moving picture industries to "animate form". [ 13 ] One of the first architects and theorists to use computers to generate architecture was Greg Lynn . His blob and fold architecture are early examples of computer-generated architecture. The new Terminal 3 of Shenzhen Bao'an International Airport , completed in 2013, was designed by Italian architect Massimiliano Fuksas with parametric design support from engineering firm Knippers Helbig . It serves as an example of the use of parametric design and production technologies in a large-scale building. [ 14 ] In the general architectural design, all design aspects and their dimensions can be considered as parameters, such as location, orientation, shape, solar radiation and so on. [ 15 ] The iterative process is an approach to continuously improving a concept, design, or product. Creators produce a prototype, test it, tweak it, and repeat the cycle with the goal of getting closer to the solution. [ 16 ] In the case of parametric architecture, iteration can, in principle, create variation at every pass through the same set of instructions. Examples may include varying the size and shape of a floor plate as one builds a skyscraper, or changing the angle of a modular cladding system as it is tiled over an undulating surface. In addition to producing variation, iteration can be a powerful tool for both optimization and minimizing the time needed to achieve that optimization. Using a fluid parametric system, which can give immediate feedback, a designer can generate solutions and test them rapidly by iterating through many possibilities, each created with a different set of parameters. 
[ 17 ] Parametric urbanism focuses on the study and prediction of settlement patterns. Architect Frei Otto identifies occupying and connecting as the two fundamental processes involved in all urbanization. [ 18 ] Parametric processes can help optimize pedestrian or vehicle circulation, block and façade orientations, and instantly compare the different performances of multiple urban design options. [ 19 ] Parametric design techniques enable architects and urban designers to better address and respond to diverse urban contexts, environmental challenges, and social issues. By integrating data and analysis into the design process, parametric urbanism allows for more informed and adaptive solutions to urban design challenges, ultimately leading to more resilient and sustainable urban environments. With the development of technology and the improvement of people's quality of life, there are more and more factors that affect the final result of interior and furniture design. Space, form, color, line, light, pattern, and texture are all influencing elements. [ 20 ] The parametric design method brings industrial designers more design possibilities. The parametric design method gives furniture designers opportunities to challenge more complex furniture structures and create more complex shapes. When dealing with ergonomic problems, parametric design methods can help designers create real-use digital scenarios and provide more comfortable design concepts. Using design tables in the furniture industry to implement parametric design is useful when a large order needs to be fulfilled with different sizes of the same model of furniture, as it reduces work time and the possibility of error. [ 21 ] Grasshopper is a visual programming environment built on top of Rhino. It allows you to create visual scripts or definitions that describe a design through a series of relationships between operations, geometries, and other data. Like any programming environment, Grasshopper allows you to create algorithms, or sets of instructions, for telling a computer what to do. In traditional, text-based programming, these instructions are written using text that follows strict formatting rules and has a specific vocabulary for describing computer operations. With visual programming, the instructions are described in a visual interface using a set of nodes, or components, that describe operations, and a set of lines, or wires, that create connections between them. [ 24 ]
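The same associative logic can be sketched in ordinary text-based code. The following Python fragment (the function catenary_arch and its parameters are invented for this example, not taken from any CAD system) plays the role of Gaudí's hanging strings described earlier: span and height are the input parameters, the catenary produced by gravity is the fixed rule, and changing a parameter regenerates the geometry:

import math

def catenary_arch(span, height, n=21):
    # An inverted catenary y(x) = a*(cosh(s/a) - cosh(x/a)) over [-s, s],
    # where s = span/2. Solve a*(cosh(s/a) - 1) = height for a by bisection
    # (the sag is a decreasing function of a).
    s = span / 2.0
    lo, hi = 1e-6, 1e6
    for _ in range(200):
        a = 0.5 * (lo + hi)
        sag = a * (math.cosh(s / a) - 1.0)
        lo, hi = (a, hi) if sag > height else (lo, a)
    xs = [-s + span * i / (n - 1) for i in range(n)]
    return [(x, a * (math.cosh(s / a) - math.cosh(x / a))) for x in xs]

# Changing the controlling parameters regenerates the arch immediately.
for span, height in [(10.0, 3.0), (10.0, 5.0), (14.0, 5.0)]:
    apex = max(y for _, y in catenary_arch(span, height))
    print(span, height, round(apex, 3))   # apex matches the height parameter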
https://en.wikipedia.org/wiki/Parametric_design
In mathematics and its applications, a parametric family or a parameterized family is a family of objects (a set of related objects) whose differences depend only on the chosen values for a set of parameters . [ 1 ] Common examples are parametrized (families of) functions , probability distributions , curves, shapes, etc. [ citation needed ] For example, the probability density function f X of a random variable X may depend on a parameter θ . In that case, the function may be denoted f X ( ⋅ ; θ ) {\displaystyle f_{X}(\cdot \,;\theta )} to indicate the dependence on the parameter θ . θ is not a formal argument of the function as it is considered to be fixed. However, each different value of the parameter gives a different probability density function. Then the parametric family of densities is the set of functions { f X ( ⋅ ; θ ) ∣ θ ∈ Θ } {\displaystyle \{f_{X}(\cdot \,;\theta )\mid \theta \in \Theta \}} , where Θ denotes the parameter space , the set of all possible values that the parameter θ can take. As an example, the normal distribution is a family of similarly-shaped distributions parametrized by their mean and their variance . [ 2 ] [ 3 ] In decision theory , two-moment decision models can be applied when the decision-maker is faced with random variables drawn from a location-scale family of probability distributions. [ citation needed ] In economics , the Cobb–Douglas production function is a family of production functions parametrized by the elasticities of output with respect to the various factors of production . [ citation needed ] In algebra , the quadratic equation , for example, is actually a family of equations parametrized by the coefficients of the variable and of its square and by the constant term . [ citation needed ]
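In code, a parametric family is naturally represented as a single functional form evaluated at the variable together with the parameter. A brief Python sketch (illustrative only) of the normal family indexed by θ = (μ, σ) drawn from the parameter space Θ:

import math

def normal_pdf(x, theta):
    # One member of the family is picked out by theta = (mu, sigma).
    mu, sigma = theta
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# A finite slice of the parameter space Theta: same functional form,
# different parameter values, hence different density functions.
Theta = [(0.0, 1.0), (0.0, 2.0), (1.5, 1.0)]
for theta in Theta:
    print(theta, round(normal_pdf(0.0, theta), 4))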
https://en.wikipedia.org/wiki/Parametric_family
A parametric oscillator is a driven harmonic oscillator in which the oscillations are driven by varying some parameters of the system at some frequencies, typically different from the natural frequency of the oscillator. A simple example of a parametric oscillator is a child pumping a playground swing by periodically standing and squatting to increase the size of the swing's oscillations. [ 1 ] [ 2 ] [ 3 ] The child's motions vary the moment of inertia of the swing as a pendulum . The "pump" motions of the child must be at twice the frequency of the swing's oscillations. Examples of parameters that may be varied are the oscillator's resonance frequency ω {\displaystyle \omega } and damping β {\displaystyle \beta } . Parametric oscillators are used in several areas of physics. The classical varactor parametric oscillator consists of a semiconductor varactor diode connected to a resonant circuit or cavity resonator . It is driven by varying the diode's capacitance by applying a varying bias voltage . The circuit that varies the diode's capacitance is called the "pump" or "driver". In microwave electronics, waveguide / YAG -based parametric oscillators operate in the same fashion. Another important example is the optical parametric oscillator , which converts an input laser light wave into two output waves of lower frequency ( ω s , ω i {\displaystyle \omega _{s},\omega _{i}} ). When operated at pump levels below oscillation, the parametric oscillator can amplify a signal, forming a parametric amplifier ( paramp ). Varactor parametric amplifiers were developed as low-noise amplifiers in the radio and microwave frequency range. The advantage of a parametric amplifier is that it has much lower noise than an amplifier based on a gain device like a transistor or vacuum tube . This is because in the parametric amplifier a reactance is varied instead of a (noise-producing) resistance . They are used in very low noise radio receivers in radio telescopes and spacecraft communication antennas. [ 4 ] Parametric resonance occurs in a mechanical system when a system is parametrically excited and oscillates at one of its resonant frequencies. Parametric excitation differs from forcing since the action appears as a time varying modification on a system parameter. Parametric oscillations were first noticed in mechanics. Michael Faraday (1831) was the first to notice oscillations of one frequency being excited by forces of double the frequency, in the crispations (ruffled surface waves) observed in a wine glass excited to "sing". [ 5 ] Franz Melde (1860) generated parametric oscillations in a string by employing a tuning fork to periodically vary the tension at twice the resonance frequency of the string. [ 6 ] Parametric oscillation was first treated as a general phenomenon by Rayleigh (1883,1887). [ 7 ] [ 8 ] [ 9 ] One of the first to apply the concept to electric circuits was George Francis FitzGerald , who in 1892 tried to excite oscillations in an LC circuit by pumping it with a varying inductance provided by a dynamo. [ 10 ] [ 11 ] Parametric amplifiers ( paramps ) were first used in 1913-1915 for radio telephony from Berlin to Vienna and Moscow, and were predicted to have a useful future ( Ernst Alexanderson , 1916). [ 12 ] These early parametric amplifiers used the nonlinearity of an iron-core inductor , so they could only function at low frequencies. 
In 1948 Aldert van der Ziel pointed out a major advantage of the parametric amplifier: because it used a variable reactance instead of a resistance for amplification it had inherently low noise. [ 13 ] A parametric amplifier used as the front end of a radio receiver could amplify a weak signal while introducing very little noise. In 1952 Harrison Rowe at Bell Labs extended some 1934 mathematical work on pumped oscillations by Jack Manley and published the modern mathematical theory of parametric oscillations, the Manley-Rowe relations . [ 13 ] The varactor diode invented in 1956 had a nonlinear capacitance that was usable into microwave frequencies. The varactor parametric amplifier was developed by Marion Hines in 1956 at Western Electric . [ 13 ] At the time it was invented microwaves were just being exploited, and the varactor amplifier was the first semiconductor amplifier at microwave frequencies. [ 13 ] It was applied to low noise radio receivers in many areas, and has been widely used in radio telescopes , satellite ground stations , and long-range radar . It is the main type of parametric amplifier used today. Since that time parametric amplifiers have been built with other nonlinear active devices such as Josephson junctions . The technique has been extended to optical frequencies in optical parametric oscillators and amplifiers which use nonlinear crystals as the active element. A parametric oscillator is a harmonic oscillator whose physical properties vary with time. The equation of such an oscillator is d 2 x d t 2 + β ( t ) d x d t + ω 2 ( t ) x = 0. {\displaystyle {\frac {d^{2}x}{dt^{2}}}+\beta (t){\frac {dx}{dt}}+\omega ^{2}(t)x=0.} This equation is linear in x ( t ) {\displaystyle x(t)} . By assumption, the parameters ω 2 {\displaystyle \omega ^{2}} and β {\displaystyle \beta } depend only on time and do not depend on the state of the oscillator. In general, β ( t ) {\displaystyle \beta (t)} and/or ω 2 ( t ) {\displaystyle \omega ^{2}(t)} are assumed to vary periodically, with the same period T {\displaystyle T} . If the parameters vary at roughly twice the natural frequency of the oscillator (defined below), the oscillator phase-locks to the parametric variation and absorbs energy at a rate proportional to the energy it already has. Without a compensating energy-loss mechanism provided by β {\displaystyle \beta } , the oscillation amplitude grows exponentially. (This phenomenon is called parametric excitation , parametric resonance or parametric pumping .) However, if the initial amplitude is zero, it will remain so; this distinguishes it from the non-parametric resonance of driven simple harmonic oscillators , in which the amplitude grows linearly in time regardless of the initial state. A familiar experience of both parametric and driven oscillation is playing on a swing. [ 1 ] [ 2 ] [ 3 ] Rocking back and forth pumps the swing as a driven harmonic oscillator , but once moving, the swing can also be parametrically driven by alternately standing and squatting at key points in the swing arc. This changes the moment of inertia of the swing and hence its resonance frequency, and children can quickly reach large amplitudes provided that they have some amplitude to start with (e.g., get a push). Standing and squatting at rest, however, leads nowhere. 
We begin by making a change of variable q ( t ) = e D ( t ) x ( t ) , {\displaystyle q(t)=e^{D(t)}x(t),} where D ( t ) {\displaystyle D(t)} is the time integral of the damping coefficient D ( t ) = 1 2 ∫ 0 t β ( τ ) d τ . {\displaystyle D(t)={\tfrac {1}{2}}\int _{0}^{t}\beta (\tau )\,d\tau .} This change of variable eliminates the damping term in the differential equation, reducing it to d 2 q d t 2 + Ω 2 ( t ) q = 0 , {\displaystyle {\frac {d^{2}q}{dt^{2}}}+\Omega ^{2}(t)q=0,} where the transformed frequency is defined as Ω 2 ( t ) = ω 2 ( t ) − 1 4 β 2 ( t ) − 1 2 d β d t . {\displaystyle \Omega ^{2}(t)=\omega ^{2}(t)-{\tfrac {1}{4}}\beta ^{2}(t)-{\tfrac {1}{2}}{\frac {d\beta }{dt}}.} In general, the variations in damping and frequency are relatively small perturbations β ( t ) = ω 0 [ b + g ( t ) ] , ω 2 ( t ) = ω 0 2 [ 1 + h ( t ) ] , {\displaystyle \beta (t)=\omega _{0}[b+g(t)],\qquad \omega ^{2}(t)=\omega _{0}^{2}[1+h(t)],} where ω 0 {\displaystyle \omega _{0}} and b {\displaystyle b} are constants, namely, the time-averaged oscillator frequency and damping, respectively. The transformed frequency can then be written in a similar way as Ω 2 ( t ) = ω n 2 [ 1 + f ( t ) ] , {\displaystyle \Omega ^{2}(t)=\omega _{n}^{2}[1+f(t)],} where ω n = ω 0 1 − b 2 / 4 {\displaystyle \omega _{n}=\omega _{0}{\sqrt {1-b^{2}/4}}} is the natural frequency of the damped harmonic oscillator and, to first order in the perturbations, f ( t ) = ω 0 2 ω n 2 [ h ( t ) − b 2 g ( t ) − 1 2 ω 0 d g d t ] . {\displaystyle f(t)={\frac {\omega _{0}^{2}}{\omega _{n}^{2}}}\left[h(t)-{\frac {b}{2}}g(t)-{\frac {1}{2\omega _{0}}}{\frac {dg}{dt}}\right].} Thus, our transformed equation can be written as d 2 q d t 2 + ω n 2 [ 1 + f ( t ) ] q = 0. {\displaystyle {\frac {d^{2}q}{dt^{2}}}+\omega _{n}^{2}[1+f(t)]q=0.} The independent variations g ( t ) {\displaystyle g(t)} and h ( t ) {\displaystyle h(t)} in the oscillator damping and resonance frequency, respectively, can be combined into a single pumping function f ( t ) {\displaystyle f(t)} . The converse conclusion is that any form of parametric excitation can be accomplished by varying either the resonance frequency or the damping, or both. Let us assume that f ( t ) {\displaystyle \ f(t)\ } is sinusoidal with a frequency approximately twice the natural frequency of the oscillator: f ( t ) = f 0 sin ⁡ 2 ω p t , {\displaystyle f(t)=f_{0}\sin 2\omega _{p}t,} where the pumping frequency ω p ≈ ω n {\displaystyle \ \omega _{p}\approx \omega _{n}\ } but need not equal ω n {\displaystyle \ \omega _{n}\ } exactly. Using the method of variation of parameters , the solution q ( t ) {\displaystyle \ q(t)\ } to our transformed equation may be written as q ( t ) = A ( t ) cos ⁡ ω p t + B ( t ) sin ⁡ ω p t , {\displaystyle q(t)=A(t)\cos \omega _{p}t+B(t)\sin \omega _{p}t,} where the rapidly varying components, cos ⁡ ( ω p t ) {\displaystyle \ \cos(\omega _{p}t)\ } and sin ⁡ ( ω p t ) , {\displaystyle \ \sin(\omega _{p}t)\ ,} have been factored out to isolate the slowly varying amplitudes A ( t ) {\displaystyle \ A(t)\ } and B ( t ) . {\displaystyle \ B(t)~.} We proceed by substituting this solution into the differential equation and considering that both the coefficients in front of cos ⁡ ( ω p t ) {\displaystyle \ \cos(\omega _{p}t)\ } and sin ⁡ ( ω p t ) {\displaystyle \ \sin(\omega _{p}t)\ } must be zero to satisfy the differential equation identically. We also omit the second derivatives of A ( t ) {\displaystyle \ A(t)\ } and B ( t ) {\displaystyle \ B(t)\ } on the grounds that A ( t ) {\displaystyle \ A(t)\ } and B ( t ) {\displaystyle \ B(t)\ } are slowly varying, as well as omit sinusoidal terms not near the natural frequency, ω n , {\displaystyle \ \omega _{n}\ ,} as they do not contribute significantly to resonance. The result is the following pair of coupled differential equations: d A d t = f 0 ω n 2 4 ω p A + ω n 2 − ω p 2 2 ω p B , d B d t = ω p 2 − ω n 2 2 ω p A − f 0 ω n 2 4 ω p B . {\displaystyle {\frac {dA}{dt}}={\frac {f_{0}\omega _{n}^{2}}{4\omega _{p}}}A+{\frac {\omega _{n}^{2}-\omega _{p}^{2}}{2\omega _{p}}}B,\qquad {\frac {dB}{dt}}={\frac {\omega _{p}^{2}-\omega _{n}^{2}}{2\omega _{p}}}A-{\frac {f_{0}\omega _{n}^{2}}{4\omega _{p}}}B.} This system of linear differential equations with constant coefficients can be decoupled and solved by eigenvalue / eigenvector methods. This yields the solution ( A ( t ) , B ( t ) ) = c 1 e λ 1 t V 1 → + c 2 e λ 2 t V 2 → , {\displaystyle (A(t),B(t))=c_{1}e^{\lambda _{1}t}{\vec {V_{1}}}+c_{2}e^{\lambda _{2}t}{\vec {V_{2}}},} where λ 1 {\displaystyle \ \lambda _{1}\ } and λ 2 {\displaystyle \ \lambda _{2}\ } are the eigenvalues of the coefficient matrix, V 1 → {\displaystyle \ {\vec {V_{1}}}\ } and V 2 → {\displaystyle \ {\vec {V_{2}}}\ } are the corresponding eigenvectors, and c 1 {\displaystyle \ c_{1}\ } and c 2 {\displaystyle \ c_{2}\ } are arbitrary constants. 
The eigenvalues are given by λ ± = ± ( f 0 ω n 2 4 ω p ) 2 − ( ω p 2 − ω n 2 2 ω p ) 2 . {\displaystyle \lambda _{\pm }=\pm {\sqrt {\left({\frac {f_{0}\omega _{n}^{2}}{4\omega _{p}}}\right)^{2}-\left({\frac {\omega _{p}^{2}-\omega _{n}^{2}}{2\omega _{p}}}\right)^{2}}}~.} If we write the difference between ω p {\displaystyle \ \omega _{p}\ } and ω n {\displaystyle \ \omega _{n}\ } as Δ ω = ω p − ω n , {\displaystyle \ \Delta \omega =\omega _{p}-\omega _{n}\ ,} and replace ω p {\displaystyle \ \omega _{p}\ } with ω n {\displaystyle \omega _{n}} everywhere where the difference is not important, we get λ ± = ± ( 1 4 f 0 ω n ) 2 − ( Δ ω ) 2 . {\displaystyle \lambda _{\pm }=\pm {\sqrt {{\Bigl (}{\tfrac {1}{4}}f_{0}\ \omega _{n}{\Bigr )}^{2}-{\Bigl (}\Delta \omega {\Bigr )}^{2}\;}}~.} If | Δ ω | < 1 4 f 0 ω n , {\displaystyle \ {\bigl |}\Delta \omega {\bigr |}<{\tfrac {1}{4}}f_{0}\ \omega _{n}\ ,} then the eigenvalues are real and exactly one is positive, which leads to exponential growth for A ( t ) {\displaystyle \ A(t)\ } and B ( t ) . {\displaystyle \ B(t)~.} This is the condition for parametric resonance, with the growth rate for q ( t ) {\displaystyle \ q(t)\ } given by the positive eigenvalue λ 1 = + ( 1 4 f 0 ω n ) 2 − ( Δ ω ) 2 . {\displaystyle \ \lambda _{1}=+{\sqrt {{\Bigl (}{\tfrac {1}{4}}f_{0}\ \omega _{n}{\Bigr )}^{2}-{\Bigl (}\Delta \omega {\Bigr )}^{2}\;}}~.} Note, however, that this growth rate corresponds to the amplitude of the transformed variable q ( t ) , {\displaystyle \ q(t)\ ,} whereas the amplitude of the original, untransformed variable x ( t ) = q ( t ) e − D ( t ) {\displaystyle \ x(t)=q(t)\ e^{-D(t)}\ } can either grow or decay depending on whether λ 1 t − D ( t ) {\displaystyle \ \lambda _{1}t-D(\ t\ )\ } is an increasing or decreasing function of time, t . {\displaystyle \ t~.} The above derivation may seem like a mathematical sleight-of-hand, so it may be helpful to give an intuitive derivation. The q {\displaystyle q} equation may be written in the form d 2 q d t 2 + ω n 2 q = − ω n 2 f ( t ) q , {\displaystyle {\frac {d^{2}q}{dt^{2}}}+\omega _{n}^{2}q=-\omega _{n}^{2}f(t)q,} which represents a simple harmonic oscillator (or, alternatively, a bandpass filter ) being driven by a signal − ω n 2 f ( t ) q {\displaystyle -\omega _{n}^{2}f(t)q} that is proportional to its response q ( t ) {\displaystyle q(t)} . Assume that q ( t ) = A cos ⁡ ( ω p t ) {\displaystyle q(t)=A\cos(\omega _{p}t)} already has an oscillation at frequency ω p {\displaystyle \omega _{p}} and that the pumping f ( t ) = f 0 sin ⁡ ( 2 ω p t ) {\displaystyle f(t)=f_{0}\sin(2\omega _{p}t)} has double the frequency and a small amplitude f 0 ≪ 1 {\displaystyle f_{0}\ll 1} . Applying a trigonometric identity for products of sinusoids, their product q ( t ) f ( t ) {\displaystyle q(t)f(t)} produces two driving signals, one at frequency ω p {\displaystyle \omega _{p}} and the other at frequency 3 ω p {\displaystyle 3\omega _{p}} . Being off-resonance, the 3 ω p {\displaystyle 3\omega _{p}} signal is attenuated and can be neglected initially. By contrast, the ω p {\displaystyle \omega _{p}} signal is on resonance, serves to amplify q ( t ) {\displaystyle q(t)} , and is proportional to the amplitude A {\displaystyle A} . Hence, the amplitude of q ( t ) {\displaystyle q(t)} grows exponentially unless it is initially zero. Expressed in Fourier space, the multiplication f ( t ) q ( t ) {\displaystyle f(t)q(t)} is a convolution of their Fourier transforms F ~ ( ω ) {\displaystyle {\tilde {F}}(\omega )} and Q ~ ( ω ) {\displaystyle {\tilde {Q}}(\omega )} . The positive feedback arises because the + 2 ω p {\displaystyle +2\omega _{p}} component of f ( t ) {\displaystyle f(t)} converts the − ω p {\displaystyle -\omega _{p}} component of q ( t ) {\displaystyle q(t)} into a driving signal at + ω p {\displaystyle +\omega _{p}} , and vice versa (reverse the signs). This explains why the pumping frequency must be near 2 ω n {\displaystyle 2\omega _{n}} , twice the natural frequency of the oscillator. 
Pumping at a grossly different frequency would not couple (i.e., provide mutual positive feedback) between the − ω p {\displaystyle -\omega _{p}} and + ω p {\displaystyle +\omega _{p}} components of q ( t ) {\displaystyle q(t)} . Parametric resonance is the phenomenon in which mechanical perturbation of a system parameter at certain frequencies (and the associated harmonics ) excites oscillation. This effect is different from regular resonance because it exhibits the instability phenomenon. It occurs in a mechanical system when the system is parametrically excited and oscillates at one of its resonant frequencies; as noted above, parametric excitation differs from forcing since the action appears as a time-varying modification of a system parameter. The classical example of parametric resonance is that of the vertically forced pendulum. Parametric resonance takes place when the external excitation frequency equals twice the natural frequency of the system divided by a positive integer n {\displaystyle n} . For a parametric excitation with small amplitude h {\displaystyle h} in the absence of friction, the bandwidth of the resonance is to leading order O ( | h | n ) {\displaystyle {\mathcal {O}}(|h|^{n})} . [ 14 ] The effect of friction is to introduce a finite threshold for the amplitude of parametric excitation to result in an instability. [ 15 ] For small amplitudes and by linearising, the stability of the periodic solution is given by Mathieu's equation : u ¨ + ( a + B cos ⁡ t ) u = 0 , {\displaystyle {\ddot {u}}+(a+B\cos t)u=0,} where u {\displaystyle u} is some perturbation from the periodic solution. Here the B cos ⁡ ( t ) {\displaystyle B\ \cos(t)} term acts as an ‘energy’ source and is said to parametrically excite the system. The Mathieu equation describes the response of many other physical systems to a sinusoidal parametric excitation, such as an LC circuit where the capacitor plates move sinusoidally. Autoparametric resonance happens in a system with two coupled oscillators, such that the vibrations of one act as parametric resonance on the second. The zero point of the second oscillator becomes unstable, and thus it starts oscillating. [ 16 ] [ 17 ] A parametric amplifier is implemented as a mixer . The mixer's gain shows up in the output as amplifier gain. The input weak signal is mixed with a strong local oscillator signal, and the resultant strong output is used in the ensuing receiver stages. Parametric amplifiers also operate by changing a parameter of the amplifier. Intuitively, this can be understood as follows, for a variable capacitor-based amplifier. Charge Q {\displaystyle Q} in a capacitor obeys Q = C × V {\displaystyle Q=C\times V} , therefore the voltage across it is V = Q C {\displaystyle V={\frac {Q}{C}}} . Knowing the above, if a capacitor is charged until its voltage equals the sampled voltage of an incoming weak signal, and if the capacitor's capacitance is then reduced (say, by manually moving the plates further apart), then the voltage across the capacitor will increase. In this way, the voltage of the weak signal is amplified. If the capacitor is a varicap diode , then "moving the plates" can be done simply by applying time-varying DC voltage to the varicap diode. This driving voltage usually comes from another oscillator—sometimes called a "pump". The resulting output signal contains frequencies that are the sum and difference of the input signal (f1) and the pump signal (f2): (f1 + f2) and (f1 − f2). 
A practical parametric oscillator needs the following connections: one for the "common" or " ground ", one to feed the pump, one to retrieve the output, and maybe a fourth one for biasing. A parametric amplifier needs a fifth port to input the signal being amplified. Since a varactor diode has only two connections, it can only be a part of an LC network with four eigenvectors with nodes at the connections. This can be implemented as a transimpedance amplifier , a traveling-wave amplifier or by means of a circulator . The parametric oscillator equation can be extended by adding an external driving force E ( t ) {\displaystyle E(t)} : d 2 x d t 2 + β ( t ) d x d t + ω 2 ( t ) x = E ( t ) . {\displaystyle {\frac {d^{2}x}{dt^{2}}}+\beta (t){\frac {dx}{dt}}+\omega ^{2}(t)x=E(t).} We assume that the damping D {\displaystyle D} is sufficiently strong that, in the absence of the driving force E {\displaystyle E} , the amplitude of the parametric oscillations does not diverge, i.e., that α t < D {\displaystyle \alpha t<D} . In this situation, the parametric pumping acts to lower the effective damping in the system. For illustration, let the damping be constant β ( t ) = ω 0 b {\displaystyle \beta (t)=\omega _{0}b} and assume that the external driving force is at the mean resonance frequency ω 0 {\displaystyle \omega _{0}} , i.e., E ( t ) = E 0 sin ⁡ ω 0 t {\displaystyle E(t)=E_{0}\sin \omega _{0}t} . With a sinusoidal pumping of strength h 0 {\displaystyle h_{0}} , the equation becomes a driven oscillator with reduced effective damping, and as h 0 {\displaystyle h_{0}} approaches the threshold 2 b {\displaystyle 2b} , the amplitude of the approximate solution diverges. When h 0 ≥ 2 b {\displaystyle h_{0}\geq 2b} , the system enters parametric resonance and the amplitude begins to grow exponentially, even in the absence of a driving force E ( t ) {\displaystyle E(t)} . If the parameters of any second-order linear differential equation are varied periodically, Floquet analysis shows that the solutions must vary either sinusoidally or exponentially. The q {\displaystyle q} equation above with periodically varying f ( t ) {\displaystyle f(t)} is an example of a Hill equation . If f ( t ) {\displaystyle f(t)} is a simple sinusoid, the equation is called a Mathieu equation .
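The exponential growth described above is easy to reproduce numerically. The following Python sketch (assuming NumPy and SciPy; the parameter values are invented for illustration) integrates an undamped oscillator whose resonance frequency is modulated by a pump, and compares pumping at twice the natural frequency with pumping at an unrelated frequency:

import numpy as np
from scipy.integrate import solve_ivp

omega0, h0 = 1.0, 0.2          # natural frequency and pump depth

def rhs(t, y, pump_freq):
    x, v = y
    # Parametric pumping: the resonance frequency itself is modulated.
    omega2 = omega0**2 * (1.0 + h0 * np.sin(pump_freq * t))
    return [v, -omega2 * x]

for pump_freq in (2.0 * omega0, 1.3 * omega0):
    sol = solve_ivp(rhs, (0.0, 200.0), [1e-3, 0.0], args=(pump_freq,),
                    max_step=0.05)
    print(pump_freq, np.max(np.abs(sol.y[0])))
# Pumping near 2*omega0 grows the tiny initial amplitude by orders of
# magnitude (rate ~ h0*omega0/4); pumping at 1.3*omega0 does not.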
https://en.wikipedia.org/wiki/Parametric_oscillator
Parametric stereo (abbreviated as PS ) [ 1 ] is an audio compression algorithm used as an audio coding format for digital audio . It is considered an Audio Object Type of MPEG-4 Part 3 (MPEG-4 Audio) that serves to enhance the coding efficiency of low bandwidth stereo audio media. Parametric Stereo digitally codes a stereo audio signal by storing the audio as monaural alongside a small amount of extra information. This extra information (defined as "parametric overhead") describes how the monaural signal will behave across both stereo channels, which allows for the signal to exist in true stereo upon playback. Advanced Audio Coding Low Complexity (AAC LC) combined with Spectral Band Replication (SBR) and Parametric Stereo (PS) was defined as HE-AAC v2 . A HE-AAC v1 decoder will only give a mono output when decoding a HE-AAC v2 bitstream. Parametric Stereo performs sparse coding in the spatial domain, somewhat similar to what SBR does in the frequency domain. An AAC HE v2 bitstream is obtained by downmixing the stereo audio to mono at the encoder along with 2–3 kbit/s of side info (the Parametric Stereo information) in order to describe the spatial intensity stereo generation and ambience regeneration at the decoder. By having the Parametric Stereo side info along with the mono audio stream, the decoder (player) can regenerate a faithful spatial approximation of the original stereo panorama at very low bitrates. Because only one audio channel is transmitted, along with the parametric side info, a 24 kbit/s coded audio signal with Parametric Stereo will be substantially improved in quality relative to discrete stereo audio signals encoded with conventional means. The additional bitrate spent on the single mono channel (combined with some PS side info) will substantially improve the perceived quality of the audio compared to a standard stereo stream at a similar bitrate. However, this technique is only useful at the lowest bitrates (approximately 16–48 kbit/s, and down to 14.4 kbit/s in xHE-AAC as used in DRM ) to give a good stereo impression, so while it can improve perceived quality at very low bitrates, it generally does not achieve transparency , since simulating the stereo dynamics of the audio with this technique is limited and generally deteriorates perceived quality regardless of the bitrate. Parametric Stereo was developed out of the need to further enhance the coding efficiency of audio in low-bandwidth stereo media. It has gone through various iterations and improvements; however, it was first standardized as an algorithm when included in the feature set of MPEG-4 Audio . [ 1 ] Parametric Stereo was originally developed in Stockholm, Sweden by the companies Philips and Coding Technologies , and was first unveiled in Naples, Italy, in 2004 during the 7th International Conference on Digital Audio Effects (DAFx'04) . [ 2 ] The implementation in MPEG-4 is based on specifying the relative amount, delay, and correlation (coherence) of left and right channels by each frequency band in the mixed mono audio. Special handling is given to transient signals, as the approach would otherwise cause unacceptable delays. Compared to intensity stereo coding, which does not record delay or correlation, PS can provide more ambience. [ 2 ] Modifications to PS continue to be proposed. MPEG Surround uses a technique related to PS.
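The guiding idea — transmit one mono channel plus a few spatial parameters per frequency band — can be sketched in a few lines. The following Python fragment is a deliberately crude stand-in, not the MPEG-4 PS algorithm (which also codes delay and coherence); all names are invented for the example:

import numpy as np

def ps_encode(left, right, bands=8):
    # Downmix to mono plus per-band level ratios as the "parametric overhead".
    mono = 0.5 * (left + right)
    L, R = np.fft.rfft(left), np.fft.rfft(right)
    edges = np.linspace(0, len(L), bands + 1, dtype=int)
    ratios = [float(np.sum(np.abs(L[a:b])) / (np.sum(np.abs(R[a:b])) + 1e-12))
              for a, b in zip(edges[:-1], edges[1:])]
    return mono, ratios            # one channel + a handful of numbers

def ps_decode(mono, ratios):
    # Re-split the mono spectrum into left/right using the band ratios.
    M = np.fft.rfft(mono)
    edges = np.linspace(0, len(M), len(ratios) + 1, dtype=int)
    Lh, Rh = M.copy(), M.copy()
    for (a, b), r in zip(zip(edges[:-1], edges[1:]), ratios):
        gl = 2.0 * r / (1.0 + r)   # if left = r * right per band, this is exact
        Lh[a:b] *= gl
        Rh[a:b] *= 2.0 - gl
    n = len(mono)
    return np.fft.irfft(Lh, n=n), np.fft.irfft(Rh, n=n)

t = np.linspace(0, 1, 4096, endpoint=False)
left = np.sin(2 * np.pi * 440 * t)
right = 0.5 * np.sin(2 * np.pi * 440 * t)       # same signal, panned
mono, side_info = ps_encode(left, right)
l2, r2 = ps_decode(mono, side_info)
print(np.max(np.abs(l2 - left)), np.max(np.abs(r2 - right)))   # both tiny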
https://en.wikipedia.org/wiki/Parametric_stereo
A parametric surface is a surface in the Euclidean space R 3 {\displaystyle \mathbb {R} ^{3}} which is defined by a parametric equation with two parameters r : R 2 → R 3 {\displaystyle \mathbf {r} :\mathbb {R} ^{2}\to \mathbb {R} ^{3}} . Parametric representation is a very general way to specify a surface; implicit representation is another. Surfaces that occur in two of the main theorems of vector calculus , Stokes' theorem and the divergence theorem , are frequently given in a parametric form. The curvature and arc length of curves on the surface, surface area , differential geometric invariants such as the first and second fundamental forms, Gaussian , mean , and principal curvatures can all be computed from a given parametrization. The same surface admits many different parametrizations. For example, the coordinate z -plane can be parametrized as r ( u , v ) = ( a u + b v , c u + d v , 0 ) {\displaystyle \mathbf {r} (u,v)=(au+bv,cu+dv,0)} for any constants a , b , c , d such that ad − bc ≠ 0 , i.e. the matrix [ a b c d ] {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}} is invertible . The local shape of a parametric surface can be analyzed by considering the Taylor expansion of the function that parametrizes it. The arc length of a curve on the surface and the surface area can be found using integration . Let the parametric surface be given by the equation r = r ( u , v ) , {\displaystyle \mathbf {r} =\mathbf {r} (u,v),} where r {\displaystyle \mathbf {r} } is a vector-valued function of the parameters ( u , v ) and the parameters vary within a certain domain D in the parametric uv -plane. The first partial derivatives with respect to the parameters are usually denoted r u := ∂ r ∂ u {\textstyle \mathbf {r} _{u}:={\frac {\partial \mathbf {r} }{\partial u}}} and r v , {\displaystyle \mathbf {r} _{v},} and similarly for the higher derivatives, r u u , r u v , r v v . {\displaystyle \mathbf {r} _{uu},\mathbf {r} _{uv},\mathbf {r} _{vv}.} In vector calculus , the parameters are frequently denoted ( s , t ) and the partial derivatives are written out using the ∂ -notation: ∂ r ∂ s , ∂ r ∂ t , ∂ 2 r ∂ s 2 , ∂ 2 r ∂ s ∂ t , ∂ 2 r ∂ t 2 . {\displaystyle {\frac {\partial \mathbf {r} }{\partial s}},{\frac {\partial \mathbf {r} }{\partial t}},{\frac {\partial ^{2}\mathbf {r} }{\partial s^{2}}},{\frac {\partial ^{2}\mathbf {r} }{\partial s\partial t}},{\frac {\partial ^{2}\mathbf {r} }{\partial t^{2}}}.} The parametrization is regular for the given values of the parameters if the vectors r u , r v {\displaystyle \mathbf {r} _{u},\mathbf {r} _{v}} are linearly independent. The tangent plane at a regular point is the affine plane in R 3 spanned by these vectors and passing through the point r ( u , v ) on the surface determined by the parameters. Any tangent vector can be uniquely decomposed into a linear combination of r u {\displaystyle \mathbf {r} _{u}} and r v . {\displaystyle \mathbf {r} _{v}.} The cross product of these vectors is a normal vector to the tangent plane . Dividing this vector by its length yields a unit normal vector to the parametrized surface at a regular point: n ^ = r u × r v | r u × r v | . {\displaystyle {\hat {\mathbf {n} }}={\frac {\mathbf {r} _{u}\times \mathbf {r} _{v}}{\left|\mathbf {r} _{u}\times \mathbf {r} _{v}\right|}}.} In general, there are two choices of the unit normal vector to a surface at a given point, but for a regular parametrized surface, the preceding formula consistently picks one of them, and thus determines an orientation of the surface. 
Some of the differential-geometric invariants of a surface in R 3 are defined by the surface itself and are independent of the orientation, while others change the sign if the orientation is reversed. The surface area can be calculated by integrating the length of the normal vector r u × r v {\displaystyle \mathbf {r} _{u}\times \mathbf {r} _{v}} to the surface over the appropriate region D in the parametric uv plane: A ( D ) = ∬ D | r u × r v | d u d v . {\displaystyle A(D)=\iint _{D}\left|\mathbf {r} _{u}\times \mathbf {r} _{v}\right|du\,dv.} Although this formula provides a closed expression for the surface area, for all but very special surfaces this results in a complicated double integral , which is typically evaluated using a computer algebra system or approximated numerically. Fortunately, many common surfaces form exceptions, and their areas are explicitly known. This is true for a circular cylinder , sphere , cone , torus , and a few other surfaces of revolution . This can also be expressed as a surface integral over the scalar field 1: ∫ S 1 d S . {\displaystyle \int _{S}1\,dS.} The first fundamental form is a quadratic form I = E d u 2 + 2 F d u d v + G d v 2 {\displaystyle \mathrm {I} =E\,du^{2}+2\,F\,du\,dv+G\,dv^{2}} on the tangent plane to the surface which is used to calculate distances and angles. For a parametrized surface r = r ( u , v ) , {\displaystyle \mathbf {r} =\mathbf {r} (u,v),} its coefficients can be computed as follows: E = r u ⋅ r u , F = r u ⋅ r v , G = r v ⋅ r v . {\displaystyle E=\mathbf {r} _{u}\cdot \mathbf {r} _{u},\quad F=\mathbf {r} _{u}\cdot \mathbf {r} _{v},\quad G=\mathbf {r} _{v}\cdot \mathbf {r} _{v}.} Arc length of parametrized curves on the surface S , the angle between curves on S , and the surface area all admit expressions in terms of the first fundamental form. If ( u ( t ), v ( t )) , a ≤ t ≤ b represents a parametrized curve on this surface then its arc length can be calculated as the integral: ∫ a b E u ′ ( t ) 2 + 2 F u ′ ( t ) v ′ ( t ) + G v ′ ( t ) 2 d t . {\displaystyle \int _{a}^{b}{\sqrt {E\,u'(t)^{2}+2F\,u'(t)v'(t)+G\,v'(t)^{2}}}\,dt.} The first fundamental form may be viewed as a family of positive definite symmetric bilinear forms on the tangent plane at each point of the surface depending smoothly on the point. This perspective helps one calculate the angle between two curves on S intersecting at a given point. This angle is equal to the angle between the tangent vectors to the curves. The first fundamental form evaluated on this pair of vectors is their dot product , and the angle can be found from the standard formula cos ⁡ θ = a ⋅ b | a | | b | {\displaystyle \cos \theta ={\frac {\mathbf {a} \cdot \mathbf {b} }{\left|\mathbf {a} \right|\left|\mathbf {b} \right|}}} expressing the cosine of the angle via the dot product. Surface area can be expressed in terms of the first fundamental form as follows: A ( D ) = ∬ D E G − F 2 d u d v . {\displaystyle A(D)=\iint _{D}{\sqrt {EG-F^{2}}}\,du\,dv.} By Lagrange's identity , the expression under the square root is precisely | r u × r v | 2 {\displaystyle \left|\mathbf {r} _{u}\times \mathbf {r} _{v}\right|^{2}} , and so it is strictly positive at the regular points. The second fundamental form I I = L d u 2 + 2 M d u d v + N d v 2 {\displaystyle \mathrm {I\!I} =L\,du^{2}+2M\,du\,dv+N\,dv^{2}} is a quadratic form on the tangent plane to the surface that, together with the first fundamental form, determines the curvatures of curves on the surface. 
In the special case when ( u , v ) = ( x , y ) and the tangent plane to the surface at the given point is horizontal, the second fundamental form is essentially the quadratic part of the Taylor expansion of z as a function of x and y . For a general parametric surface, the definition is more complicated, but the second fundamental form depends only on the partial derivatives of order one and two. Its coefficients are defined to be the projections of the second partial derivatives of r {\displaystyle \mathbf {r} } onto the unit normal vector n ^ {\displaystyle {\hat {\mathbf {n} }}} defined by the parametrization: L = r u u ⋅ n ^ , M = r u v ⋅ n ^ , N = r v v ⋅ n ^ . {\displaystyle L=\mathbf {r} _{uu}\cdot {\hat {\mathbf {n} }},\quad M=\mathbf {r} _{uv}\cdot {\hat {\mathbf {n} }},\quad N=\mathbf {r} _{vv}\cdot {\hat {\mathbf {n} }}.} Like the first fundamental form, the second fundamental form may be viewed as a family of symmetric bilinear forms on the tangent plane at each point of the surface depending smoothly on the point. The first and second fundamental forms of a surface determine its important differential-geometric invariants : the Gaussian curvature , the mean curvature , and the principal curvatures . The principal curvatures are the invariants of the pair consisting of the second and first fundamental forms. They are the roots κ 1 , κ 2 of the quadratic equation det ( I I − κ I ) = 0 , det [ L − κ E M − κ F M − κ F N − κ G ] = 0. {\displaystyle \det(\mathrm {I\!I} -\kappa \mathrm {I} )=0,\quad \det {\begin{bmatrix}L-\kappa E&M-\kappa F\\M-\kappa F&N-\kappa G\end{bmatrix}}=0.} The Gaussian curvature K = κ 1 κ 2 and the mean curvature H = ( κ 1 + κ 2 )/2 can be computed as follows: K = L N − M 2 E G − F 2 , H = E N − 2 F M + G L 2 ( E G − F 2 ) . {\displaystyle K={\frac {LN-M^{2}}{EG-F^{2}}},\quad H={\frac {EN-2FM+GL}{2(EG-F^{2})}}.} Up to a sign, these quantities are independent of the parametrization used, and hence form important tools for analysing the geometry of the surface. More precisely, the principal curvatures and the mean curvature change the sign if the orientation of the surface is reversed, and the Gaussian curvature is entirely independent of the parametrization. The sign of the Gaussian curvature at a point determines the shape of the surface near that point: for K > 0 the surface is locally convex and the point is called elliptic , while for K < 0 the surface is saddle shaped and the point is called hyperbolic . The points at which the Gaussian curvature is zero are called parabolic . In general, parabolic points form a curve on the surface called the parabolic line . The first fundamental form is positive definite , hence its determinant EG − F 2 is positive everywhere. Therefore, the sign of K coincides with the sign of LN − M 2 , the determinant of the second fundamental. The coefficients of the first fundamental form presented above may be organized in a symmetric matrix : F 1 = [ E F F G ] . {\displaystyle F_{1}={\begin{bmatrix}E&F\\F&G\end{bmatrix}}.} And the same for the coefficients of the second fundamental form , also presented above: F 2 = [ L M M N ] . {\displaystyle F_{2}={\begin{bmatrix}L&M\\M&N\end{bmatrix}}.} Defining now matrix A = F 1 − 1 F 2 {\displaystyle A=F_{1}^{-1}F_{2}} , the principal curvatures κ 1 and κ 2 are the eigenvalues of A . 
[ 1 ] Now, if v 1 = ( v 11 , v 12 ) is the eigenvector of A corresponding to principal curvature κ 1 , the unit vector in the direction of t 1 = v 11 r u + v 12 r v {\displaystyle \mathbf {t} _{1}=v_{11}\mathbf {r} _{u}+v_{12}\mathbf {r} _{v}} is called the principal vector corresponding to the principal curvature κ 1 . Accordingly, if v 2 = ( v 21 , v 22 ) is the eigenvector of A corresponding to principal curvature κ 2 , the unit vector in the direction of t 2 = v 21 r u + v 22 r v {\displaystyle \mathbf {t} _{2}=v_{21}\mathbf {r} _{u}+v_{22}\mathbf {r} _{v}} is called the principal vector corresponding to the principal curvature κ 2 .
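These formulas can be verified symbolically for a concrete parametrization. The following sketch (assuming the SymPy library) computes the fundamental forms and curvatures of a sphere of radius R = 2; the expected results are K = 1/R² and, with the outward normal chosen by this parametrization, H = −1/R:

import sympy as sp

u, v = sp.symbols('u v')
R = 2
r = sp.Matrix([R*sp.sin(u)*sp.cos(v), R*sp.sin(u)*sp.sin(v), R*sp.cos(u)])

ru, rv = r.diff(u), r.diff(v)
n = ru.cross(rv)
n_hat = n / sp.sqrt(n.dot(n))

# Coefficients of the first and second fundamental forms.
E, F, G = ru.dot(ru), ru.dot(rv), rv.dot(rv)
L = r.diff(u, 2).dot(n_hat)
M = r.diff(u).diff(v).dot(n_hat)
N = r.diff(v, 2).dot(n_hat)

K = (L*N - M**2) / (E*G - F**2)
H = (E*N - 2*F*M + G*L) / (2*(E*G - F**2))

# Evaluate at a sample point; the values are constant over the sphere.
point = {u: sp.pi/3, v: sp.pi/7}
print(sp.simplify(K.subs(point)), sp.simplify(H.subs(point)))   # 1/4, -1/2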
https://en.wikipedia.org/wiki/Parametric_surface
Parametric thinking is the practice of engaging in a thinking process that links and relates variables and outputs calculated actions in order to generate solutions to problems, rather than simply seeking them. [ 1 ] It has its origins in the design fields of urban design , architectural design , interior design , industrial and furniture design . The process is associated with parametricism , a style within contemporary avant-garde architecture, promoted as a successor to post-modern architecture and modern architecture. Parametric thinking emerged as a theory-driven design education, project and delivery movement in the early 2010s, with its earliest practitioners harnessing and adapting the then-new parametric design software and other advanced computational processes that had been introduced within architecture and related design fields. In the architecture education context in 2011, professors of the schools of architecture David Karle and Brian Kelly of the University of Nebraska published advocacy for a paradigm shift in design education. Karle and Kelly also highlighted that "digital tools are currently being used in design schools across the country. This paradigm in both education and practice of architecture is continually changing the profession, from the way in which design is conceived, represented, documented, and fabricated. Parametric design can be defined as a series of questions to establish the variables of a design and a computational definition that can be used to facilitate a variety of solutions. Parametric thinking is a way of relating tangible and intangible systems into a design proposal removed from digital tool specificity and establishes relationships between properties within a system. It asks architects to start with the design parameters and not preconceived or predetermined design solutions." [ 2 ] Parametric thinking in the design-process context of best practice, as defined by designer/technologist Chris Swartout of M Moser Associates, is "a pedagogic approach that combines design and generative solution delivery through reliance on a multidisciplinary team's knowledge expertise at the outset of the project. This leads to increased productivity, achieving the desired outcome that balances all the interests of design, schedule, cost, aesthetics and functionality, while not predetermining a solution. This approach enhances design output and reduces abortive work." [ 1 ] Farshid Moussavi , professor in practice at Harvard University Graduate School of Design , has argued that parametric thinking at some level has always existed in architecture as a discipline because "great architecture has been aware of its societal role, and has consequently been informed by multivalent parameters." [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Parametric_thinking
In mathematics , and more specifically in geometry , parametrization (or parameterization ; also parameterisation , parametrisation ) is the process of finding parametric equations of a curve , a surface , or, more generally, a manifold or a variety , defined by an implicit equation . The inverse process is called implicitization . [ 1 ] "To parameterize" by itself means "to express in terms of parameters ". [ 2 ] Parametrization is a mathematical process consisting of expressing the state of a system , process or model as a function of some independent quantities called parameters . The state of the system is generally determined by a finite set of coordinates , and the parametrization thus consists of one function of several real variables for each coordinate. The number of parameters is the number of degrees of freedom of the system. For example, the position of a point that moves on a curve in three-dimensional space is determined by the time needed to reach the point when starting from a fixed origin. If x , y , z are the coordinates of the point, the movement is thus described by a parametric equation [ 1 ] x = f ( t ) , y = g ( t ) , z = h ( t ) , {\displaystyle x=f(t),\quad y=g(t),\quad z=h(t),} where t is the parameter and denotes the time. Such a parametric equation completely determines the curve, without the need of any interpretation of t as time, and is thus called a parametric equation of the curve (this is sometimes abbreviated by saying that one has a parametric curve ). One similarly gets the parametric equation of a surface by considering functions of two parameters t and u . Parametrizations are not generally unique . Ordinary three-dimensional space can be parametrized (or "coordinatized") equally efficiently with Cartesian coordinates ( x , y , z ), cylindrical polar coordinates ( ρ , φ , z ), spherical coordinates ( r , φ, θ) or other coordinate systems . Similarly, the color space of human trichromatic color vision can be parametrized in terms of the three colors red, green and blue, RGB , or with cyan, magenta, yellow and black, CMYK . Generally, the minimum number of parameters required to describe a model or geometric object is equal to its dimension , and the scope of the parameters—within their allowed ranges—is the parameter space . Though a good set of parameters permits identification of every point in the object space, it may be that, for a given parametrization, different parameter values can refer to the same point. Such mappings are surjective but not injective . An example is the pair of cylindrical polar coordinates (ρ, φ, z ) and (ρ, φ + 2π, z ). As indicated above, there is arbitrariness in the choice of parameters of a given model, geometric object, etc. Often, one wishes to determine intrinsic properties of an object that do not depend on this arbitrariness, which are therefore independent of any particular choice of parameters. This is particularly the case in physics, wherein parametrization invariance (or 'reparametrization invariance') is a guiding principle in the search for physically acceptable theories (particularly in general relativity ). For example, whilst the location of a fixed point on some curved line may be given by a set of numbers whose values depend on how the curve is parametrized, the length (appropriately defined) of the curve between two such fixed points will be independent of the particular choice of parametrization (in this case: the method by which an arbitrary point on the line is uniquely indexed). The length of the curve is therefore a parametrization-invariant quantity. 
In such cases parametrization is a mathematical tool employed to extract a result whose value does not depend on, or make reference to, the details of the parametrization. More generally, parametrization invariance of a physical theory implies that either the dimensionality or the volume of the parameter space is larger than is necessary to describe the physics (the quantities of physical significance) in question. Though the theory of general relativity can be expressed without reference to a coordinate system, calculations of physical (i.e. observable) quantities such as the curvature of spacetime invariably involve the introduction of a particular coordinate system in order to refer to spacetime points involved in the calculation. In the context of general relativity then, the choice of coordinate system may be regarded as a method of 'parametrizing' the spacetime, and the insensitivity of the result of a calculation of a physically-significant quantity to that choice can be regarded as an example of parametrization invariance. As another example, physical theories whose observable quantities depend only on the relative distances (the ratio of distances) between pairs of objects are said to be scale invariant . In such theories any reference in the course of a calculation to an absolute distance would imply the introduction of a parameter to which the theory is invariant.
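A small numerical check of this invariance, in Python with NumPy (the sampling density is arbitrary): two different parametrizations of the same quarter circle yield the same length.

import numpy as np

def polyline_length(xs, ys):
    # Approximate arc length by summing chord lengths.
    return float(np.sum(np.hypot(np.diff(xs), np.diff(ys))))

n = 20001
t = np.linspace(0.0, np.pi / 2, n)            # uniform angle parameter
x1, y1 = np.cos(t), np.sin(t)

s = np.linspace(0.0, 1.0, n)
u = (np.pi / 2) * s**2                        # reparametrization t = (pi/2)*s^2
x2, y2 = np.cos(u), np.sin(u)

print(polyline_length(x1, y1))                # ~1.5708 (= pi/2)
print(polyline_length(x2, y2))                # same length, different labels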
https://en.wikipedia.org/wiki/Parametrization_(geometry)
Paramu Mafongoya is a Zimbabwean professor at the University of KwaZulu-Natal (UKZN) in South Africa , where he specialises in agriculture, earth and environmental sciences. He serves as the South African Research Chair (SARChI) in Agronomy and Rural Development at UKZN. He is affiliated with the African Academy of Sciences (AAS) and the Zimbabwe Academy of Sciences (ZAS). His work in agricultural research, development, education, and integrated natural resources management extends over three decades. He has authored more than 290 publications, including 190 articles in peer-reviewed journals, 49 chapters in peer-reviewed books, and 2 books. [ 1 ] His research areas include agronomy, climate science, soil science, and agroforestry.

Born on 23 October 1961 in Zimbabwe, Mafongoya completed his BSc (Hons) in Agriculture at the University of Zimbabwe in 1984. He then studied in the United Kingdom, earning his MSc in Applied Plant Sciences and his MSc in Agricultural Development from Wye College , University of London, in 1988 and 1990, respectively. He later earned his PhD in Agroforestry from the University of Florida in the United States in 1995. [ 1 ] [ 2 ]

After earning his PhD, Mafongoya worked as a senior lecturer and head of the Department of Soil Science and Agricultural Engineering at the University of Zimbabwe from 1995 to 1999. He then joined the International Centre for Research in Agroforestry (ICRAF) as a principal scientist and regional coordinator for Southern Africa from 1999 to 2007. He also held positions at the Food and Agriculture Organization (FAO) of the United Nations, the International Atomic Energy Agency (IAEA), and the International Fund for Agricultural Development (IFAD). [ 1 ] [ 3 ]

In 2007, Mafongoya joined UKZN as a professor of agriculture, earth and environmental sciences. Since 2015, he has served as the SARChI chair in agronomy and rural development. He leads a research group that focuses on tropical resources, ecology, environment and climate, crop-livestock integration, and sustainable agriculture. He has mentored over 100 postgraduate students and postdoctoral fellows. He has collaborated with various national and international institutions and networks, including the AAS, the ZAS, the InterAcademy Partnership, the Network of African Science Academies, and the African Union. [ 1 ] [ 4 ]

Mafongoya has received several recognitions for his contributions to science and society. He was named a Fellow of the Zimbabwe Academy of Sciences in 2013 and of the African Academy of Sciences in 2018. [ 4 ] He served as the vice-president of the Zimbabwe Academy of Sciences from 2017 to 2019. He was the president of the Soil Science Society of South Africa from 2015 to 2017, and its vice-president from 2013 to 2015. He became a member of the Academy of Science of South Africa in 2012, The World Academy of Sciences in 2010, the International Union of Soil Sciences in 2008, the Soil Science Society of America in 2007, and the American Society of Agronomy in 2007. [ 1 ] [ 4 ]
https://en.wikipedia.org/wiki/Paramu_Mafongoya
Paramural bodies are membranous or vesicular structures located between the cell walls and cell membranes of plant and fungal cells. [ 1 ] [ 2 ] When these are continuous with the cell wall, they are termed lomasomes , while they are referred to as plasmalemmasomes if associated with the plasmalemma . [ 3 ] [ 4 ] While their function has not yet been studied in great detail, it has been speculated that due to the morphological similarity of paramural bodies to the exosomes produced by mammalian cells, they may perform similar functions such as membrane vesicle trafficking between cells. [ 5 ] Current evidence suggests that, like exosomes, paramural bodies are derived from multivesicular bodies . [ 5 ] This cell biology article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Paramural_body
In epigenetics , a paramutation is an interaction between two alleles at a single locus , whereby one allele induces a heritable change in the other allele. [ 1 ] The change may be in the pattern of DNA methylation or histone modifications. [ 2 ] The allele inducing the change is said to be paramutagenic, while the allele that has been epigenetically altered is termed paramutable. [ 1 ] A paramutable allele may have altered levels of gene expression , which may continue in offspring which inherit that allele, even though the paramutagenic allele may no longer be present. [ 1 ] Through proper breeding, paramutation can result in siblings that have the same genetic sequence, but with drastically different phenotypes . [ 3 ] Though studied primarily in maize , paramutation has been described in a number of other systems, including animal systems like Drosophila melanogaster and mice. [ 1 ] [ 4 ] Despite its broad distribution, examples of this phenomenon are scarce and its mechanism is not fully understood.

The first description of what would come to be called paramutation was given by William Bateson and Caroline Pellew in 1915, when they described "rogue" peas that always passed their "rogue" phenotype onto their progeny. [ 5 ] However, the first formal description of paramutation was given by R.A. Brink at the University of Wisconsin–Madison in the 1950s, who did his work in maize ( Zea mays ). [ 5 ] Brink noticed that specific weakly expressed alleles of the red1 (r1) locus in maize, which encodes a transcription factor that confers red pigment to corn kernels , can heritably change specific strongly expressed alleles to a weaker expression state. [ 1 ] The weaker expression state adopted by the changed allele is heritable and can, in turn, change the expression state of other active alleles in a process termed secondary paramutation. [ 1 ] Brink showed that the influence of the paramutagenic allele could persist for many generations. [ 1 ]

The alleles that cause heritable changes in the alleles they come into contact with are called paramutagenic, and the alleles modified by them are paramutable. Alleles that do not take part in this interaction are called neutral. When a paramutagenic and a paramutable allele are present together in an organism, the paramutable allele is converted to a paramutagenic state, and it retains this paramutagenicity in subsequent generations. No change in DNA sequence accompanies this transformation; instead, epigenetic modifications (e.g. DNA methylation) differentiate the paramutagenic from paramutable alleles. In most cases, it is the paramutable allele that is highly transcribed and the paramutagenic allele that undergoes little to no transcription.

The first described and most extensively researched example is the b1 locus in maize. The gene at this locus, when actively transcribed, codes for a transcription factor that promotes anthocyanin production, resulting in purple pigmentation. One allele at this locus, referred to as B’, is capable of causing methylation at the other allele, B-I. This methylation results in reduced transcription and, as a result, decreased anthocyanin production. These alleles do not differ in DNA sequence, but they do differ in their degree of DNA methylation . As with other examples of paramutation, this change of the B-I allele to the B’ allele is stable and heritable. Other, similar examples of paramutation exist at other maize loci, as well as in other plants such as the model system Arabidopsis thaliana and transgenic petunias . [ 6 ] [ 7 ] [ 8 ]
Paramutation has also been documented in animals such as fruit flies , C. elegans , and mice . [ 1 ] [ 4 ] [ 9 ]

Though the specific mechanisms by which paramutation acts vary from organism to organism, all well-documented cases point towards epigenetic modification and RNA silencing as the underlying mechanism of paramutation. [ 1 ] In the case of the b1 locus in maize, DNA methylation of a region of tandem repeats near the coding region of the gene is characteristic of the paramutagenic B’ allele, and when the paramutable B-I allele becomes paramutagenic, it too takes on the same DNA methylation pattern. [ 10 ] In order for this methylation to be successfully transferred, a number of genes coding for RNA-dependent RNA polymerases and other components of RNA-silencing pathways are required, suggesting that paramutation is mediated via endogenous RNA-silencing pathways. [ 1 ] The transcription of short interfering RNAs from the tandem repeat regions corroborates this. In animal systems such as Drosophila , piRNAs have also been implicated in mediating paramutation. In that case, paramutation can occur in the absence of any possible pairing between the paramutagenic and the paramutated loci and can be stable over more than 50 generations. [ 4 ]

In addition to the characteristic changes in DNA methylation state, changes in histone modification patterns in the methylated DNA regions, and/or the requirement of histone-modifying proteins to mediate paramutation, have also been noted in multiple systems. [ 2 ] [ 9 ] It has been suggested that these histone modifications play a role in maintaining the paramutated state. [ 2 ] The previously mentioned tandem repeat region in the b1 locus is also typical of other loci showing paramutation or paramutation-like phenomena. [ 5 ] However, it has been noted that it is not possible to explain all occurrences and features of paramutation with what is known about RNAi-mediated transcriptional silencing, suggesting that other pathways and/or mechanisms are also at play. [ 7 ]

It has been speculated that in any particular population, relatively few genes would show observable paramutation, since the high penetrance of paramutagenic alleles (like B’ at the b1 locus in maize) would drive either the paramutagenic or paramutable allele to fixation . [ 3 ] Paramutation at other loci with paramutagenic alleles of lower penetrance may persist, however, which may need to be taken into account by plant breeders . [ 3 ] Since there are examples of paramutation, or paramutation-like phenomena, in animals such as fruit flies and mice, it has been suggested that paramutation may explain the occurrence of some human diseases that exhibit non-Mendelian inheritance patterns . [ 11 ]
https://en.wikipedia.org/wiki/Paramutation
In combinatorial game theory , the paranoid algorithm is an algorithm that improves on the alpha-beta pruning capabilities of the max n algorithm by making the player p chosen to maximize the score " paranoid " about the other players: p assumes that they are cooperating to minimize p 's score. This reduces any n -player game to a two-player game in which the opposing player's score is the sum of the other players' scores, returning the game to a zero-sum game and making it analyzable via the optimization techniques usually paired with the minimax theorem . [ 1 ] Because of those optimizations, it performs notably faster than the max n algorithm. [ 2 ] This mathematical analysis –related article is a stub . You can help Wikipedia by expanding it . This game theory article is a stub . You can help Wikipedia by expanding it .
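A minimal sketch of the reduction may be helpful (illustrative only; the tree encoding, function names, and toy values are assumptions and do not come from the cited sources). The root player p maximizes its own score, while every other player is folded into a single coalition that minimizes p's score, so standard two-player alpha-beta cutoffs apply unchanged:

```python
# A minimal paranoid-search sketch with alpha-beta pruning.
# Internal nodes are (player_to_move, [children]); leaves are tuples of
# per-player scores. All players other than p act as one coalition that
# minimizes p's score, reducing the n-player game to a two-player search.

def paranoid(node, p, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node[1], list):          # internal node
        player, children = node
        if player == p:                    # maximizing node for p
            value = float("-inf")
            for child in children:
                value = max(value, paranoid(child, p, alpha, beta))
                alpha = max(alpha, value)
                if alpha >= beta:          # beta cutoff
                    break
            return value
        value = float("inf")               # coalition node: minimize p's score
        for child in children:
            value = min(value, paranoid(child, p, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:              # alpha cutoff
                break
        return value
    return node[p]                         # leaf: p's component of the score tuple

# Toy 3-player tree: player 0 moves first, then player 1; leaves hold
# one score per player.
tree = (0, [
    (1, [(3, 1, 2), (5, 0, 1)]),
    (1, [(4, 2, 0), (1, 3, 3)]),
])
print(paranoid(tree, p=0))  # 3: the coalition steers toward p's worst outcome
```

Because the coalition folds all opponents into a single minimizer, the value returned is a pessimistic bound on what p can guarantee in the original n-player game; real opponents are rarely perfectly coordinated against p, which is the trade-off accepted in exchange for the faster, prunable two-player search.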
https://en.wikipedia.org/wiki/Paranoid_algorithm