Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms.[1] Examples include the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, competition among desert plants for water, and the effects of herds of grazing animals upon the composition of grasslands. A global overview of the Earth's major vegetation types is provided by O.W. Archibold.[2] He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy-forming trees. One feature that defines plants is photosynthesis, the series of chemical reactions by which plants use light energy, carbon dioxide and water to produce glucose and oxygen, which is vital for plant life.[3] One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of Earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. A long-term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in the Earth's history, like the first movement of life onto land, are likely tied to this sequence of events.[1][4] One of the early classic books on plant ecology was written by J.E.
Weaver and F.E. Clements.[5] It talks broadly about plant communities, and particularly the importance of forces like competition and processes like succession. The term ecology itself was coined by German biologist Ernst Haeckel.[6] Plant ecology can also be divided by levels of organization, including plant ecophysiology, plant population ecology, community ecology, ecosystem ecology, landscape ecology and biosphere ecology.[1][7] The study of plants differs from that of animals in two important respects. First, most plants are rooted in the soil, which makes it difficult to observe and measure nutrient uptake and species interactions. Second, plants often reproduce vegetatively, that is asexually, in a way that makes it difficult to distinguish individual plants. Indeed, the very concept of an individual is doubtful, since even a tree may be regarded as a large collection of linked meristems.[8] Hence, plant ecology and animal ecology have different styles of approach to problems that involve processes like reproduction, dispersal and mutualism. Some plant ecologists have placed considerable emphasis upon trying to treat plant populations as if they were animal populations, focusing on population ecology.[9] Many other ecologists believe that while it is useful to draw upon population ecology to solve certain scientific problems, plants demand that ecologists work with multiple perspectives, appropriate to the problem, the scale and the situation.[1] Plant ecology has its origin in the application of plant physiology to the questions raised by plant geographers.[10][11]: 13–16  Carl Ludwig Willdenow was one of the first to note that similar climates produced similar types of vegetation, even when they were located in different parts of the world. Willdenow's student, Alexander von Humboldt, used physiognomy to describe vegetation types and observed that the distribution of vegetation types was based on environmental factors.
Later plant geographers who built upon Humboldt's work included Joakim Frederik Schouw, A.P. de Candolle, August Grisebach and Anton Kerner von Marilaun. Schouw's work, published in 1822, linked plant distributions to environmental factors (especially temperature) and established the practice of naming plant associations by adding the suffix -etum to the name of the dominant species. Working from herbarium collections, De Candolle searched for general rules of plant distribution and settled on using temperature as well.[11]: 14–16  Grisebach's two-volume work, Die Vegetation der Erde nach Ihrer Klimatischen Anordnung, published in 1872, saw plant geography reach its "ultimate form" as a descriptive field.[10]: 29  Starting in the 1870s, Swiss botanist Simon Schwendener, together with his students and colleagues, established the link between plant morphology and physiological adaptations, laying the groundwork for the first ecology textbooks, Eugenius Warming's Plantesamfund (published in 1895) and Andreas Schimper's 1898 Pflanzengeographie auf Physiologischer Grundlage.[10] Warming successfully incorporated plant morphology, physiology, taxonomy and biogeography into plant geography to create the field of plant ecology. Although more morphological than physiological, Schimper's book has been considered the beginning of plant physiological ecology.[11]: 17–18  Plant ecology was initially built around static ideas of plant distribution; incorporating the concept of succession added an element of change through time to the field. Henry Chandler Cowles' studies of plant succession on the Lake Michigan sand dunes (published in 1899) and Frederic Clements' 1916 monograph on the subject established it as a key element of plant ecology.[10] Plant ecology developed within the wider discipline of ecology over the twentieth century. Inspired by Warming's Plantesamfund, Arthur Tansley set out to map British plant communities.
In 1904 he teamed up with William Gardner Smith and others involved in vegetation mapping to establish the Central Committee for the Survey and Study of British Vegetation, later shortened to the British Vegetation Committee. In 1913, the British Vegetation Committee organised the British Ecological Society (BES), the first professional society of ecologists.[12] This was followed in 1917 by the establishment of the Ecological Society of America (ESA); plant ecologists formed the largest subgroup among the inaugural members of the ESA.[10]: 41  Cowles' students played an important role in the development of the field of plant ecology during the first half of the twentieth century, among them William S. Cooper, E. Lucy Braun and Edgar Transeau.[11]: 23  Plant distribution is governed by a combination of historical factors, ecophysiology and biotic interactions. The set of species that can be present at a given site is limited by historical contingency. To be present, a species must either have evolved in an area or dispersed there (either naturally or through human agency), and must not have gone locally extinct. The set of species present locally is further limited to those that possess the physiological adaptations to survive the environmental conditions that exist.[13] This group is further shaped through interactions with other species.[14]: 2–3  Plant communities are broadly grouped into biomes based on the form of the dominant plant species.[13] For example, grasslands are dominated by grasses, while forests are dominated by trees. Biomes are determined by regional climates, mostly temperature and precipitation, and follow general latitudinal trends.[13] Within biomes, there may be many ecological communities, which are impacted not only by climate but also by a variety of smaller-scale features, including soils, hydrology, and disturbance regime.
[13] Biomes also change with elevation, with high-elevation communities often resembling those found at higher latitudes.[13] Plants, like most life forms, require relatively few basic elements: carbon, hydrogen, oxygen, nitrogen, phosphorus and sulphur; hence they are known as CHNOPS life forms. Lesser elements, frequently termed micronutrients, such as magnesium and sodium, are also needed. When plants grow in close proximity, they may deplete supplies of these elements and have a negative impact upon neighbours.[15] Competition for resources varies from completely symmetric (all individuals receive the same amount of resources, irrespective of their size) to perfectly size-symmetric (all individuals exploit the same amount of resource per unit biomass) to absolutely size-asymmetric (the largest individuals exploit all the available resource). The degree of size asymmetry has major effects on the structure and diversity of ecological communities. In many cases (perhaps most) the negative effects upon neighbours arise from size-asymmetric competition for light. In other cases, there may be competition below ground for water, nitrogen, or phosphorus. To detect and measure competition, experiments are necessary; these experiments require removing neighbours and measuring responses in the remaining plants.[16] Many such studies are required before useful generalizations can be drawn. Overall, it appears that light is the most important resource for which plants compete, and the increase in plant height over evolutionary time likely reflects selection for taller plants to better intercept light. Many plant communities are therefore organized into hierarchies based upon their relative competitive abilities for light.[16] In some systems, particularly infertile or arid systems, below-ground competition may be more significant.
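The symmetry spectrum described above can be illustrated with a toy simulation (not from the source; the asymmetry exponent theta, the fixed resource pool, and all parameter values are assumptions chosen only for illustration). Each individual's share of a fixed resource pool is weighted by its size raised to theta, so theta = 0 gives complete symmetry, theta = 1 gives perfect size symmetry, and large theta approximates absolute size asymmetry:

```python
import numpy as np

def allocate(sizes, theta):
    """Split a fixed resource pool among individuals.

    theta = 0  -> completely symmetric (equal shares, regardless of size)
    theta = 1  -> perfectly size-symmetric (shares proportional to biomass)
    theta >> 1 -> strongly size-asymmetric (largest plants take nearly all)
    """
    weights = sizes ** theta
    return weights / weights.sum()

def grow(sizes, theta, steps=50, pool=1.0, rate=0.5):
    """Grow a cohort for a number of steps; each step, individuals add
    biomass in proportion to the resource share they capture."""
    sizes = sizes.copy()
    for _ in range(steps):
        sizes += rate * pool * allocate(sizes, theta)
    return sizes

rng = np.random.default_rng(0)
start = rng.uniform(0.5, 1.5, size=100)   # hypothetical initial cohort

for theta in (0.0, 1.0, 8.0):
    final = grow(start, theta)
    cv = final.std() / final.mean()       # coefficient of variation of size
    print(f"theta={theta:>4}: size inequality (CV) = {cv:.2f}")
```

Running this shows size inequality rising with the asymmetry exponent, matching the claim that size-asymmetric competition (typically for light) shapes community structure more strongly than symmetric competition below ground.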
[17] Along natural gradients of soil fertility, it is likely that the ratio of above-ground to below-ground competition changes, with more above-ground competition in the more fertile soils.[18][19] Plants that are relatively weak competitors may escape in time (by surviving as buried seeds) or in space (by dispersing to a new location away from strong competitors). In principle, it is possible to examine competition at the level of the limiting resources if a detailed knowledge of the physiological processes of the competing plants is available. However, in most terrestrial ecological studies, there is only limited information on the uptake and dynamics of the resources that limit the growth of different plant species, and, instead, competition is inferred from observed negative effects of neighbouring plants without knowing precisely which resources the plants were competing for. In certain situations, plants may compete for a single growth-limiting resource, perhaps for light in agricultural systems with sufficient water and nutrients, or in dense stands of marsh vegetation, but in many natural ecosystems plants may be colimited by several resources, e.g. light, phosphorus and nitrogen, at the same time.[20] Therefore, many details remain to be uncovered, particularly the kinds of competition that arise in natural plant communities, the specific resource(s), the relative importance of different resources, and the role of other factors like stress or disturbance in regulating the importance of competition.[1][21] Mutualism is defined as an interaction "between two species or individuals that is beneficial to both". Probably the most widespread example in plants is the mutually beneficial relationship between plants and fungi, known as mycorrhizae. The plant is assisted with nutrient uptake (mainly phosphate), while the fungus receives carbohydrates.
[22] Some of the earliest known fossil plants even have fossil mycorrhizae on their rhizomes.[1] The flowering plants are a group that has evolved by using two major mutualisms. First, flowers are pollinated by insects. This relationship seems to have its origins in beetles feeding on primitive flowers, eating pollen and also acting (unwittingly) as pollinators. Second, fruits are eaten by animals, which then disperse the seeds. Thus, the flowering plants actually have three major types of mutualism, since most higher plants also have mycorrhizae.[1] Plants may also have beneficial effects upon one another, but this is less common. Examples might include "nurse plants" whose shade allows young cacti to establish. Most examples of mutualism, however, are largely beneficial to only one of the partners, and may not really be true mutualism. The term used for these more one-sided relationships, which are mostly beneficial to one participant, is facilitation. Facilitation among neighboring plants may act by reducing the negative impacts of a stressful environment.[23] In general, facilitation is more likely to occur in physically stressful environments than in favorable environments, where competition may be the most important interaction among species.[24] Commensalism is similar to facilitation, in that one plant is mostly exploiting another. A familiar example is the epiphytes which grow on the branches of tropical trees, or even the mosses which grow on trees in deciduous forests. It is important to keep track of the benefits received by each species to determine the appropriate term. Although people are often fascinated by unusual examples, it is important to remember that in plants, the main mutualisms are mycorrhizae, pollination, and seed dispersal.[1] Parasitism in biology refers to an interaction between different species, where the parasite (one species) benefits at the expense of the host (the other species).
Parasites generally depend on another organism (their host) for survival, which usually includes both habitat and nutrient requirements at the very minimum.[25] Parasitic plants attach themselves to host plants via haustoria connected to the xylem and/or phloem.[26] Many parasitic plants are generalists and are able to attack multiple hosts at the same time, greatly affecting community structure. Host species' growth, reproduction, and metabolism are affected by the parasite due to the nutrients, water, and carbon being taken by the parasite.[26] Parasites are also able to alter competitive interactions among hosts and indirectly affect competition in the community.[27] Commensalism refers to the biological interaction between two species in which one benefits while the other remains unaffected. The species that benefits is referred to as the commensal, while the species that is unaffected is referred to as the host. For example, organisms that live attached to plants, known as epiphytes, are commensals. Algae that grow on the backs of turtles or sloths are considered commensals, too. Their survival rate is higher when they are attached to their host; however, they neither harm nor benefit the host.[28] Nearly 10% of all vascular plant species around the world are epiphytes, and most of them are found in tropical forests. They therefore make up a large fraction of total plant biodiversity: 10% of all species worldwide, and 25% of all vascular plant species in tropical countries.[29] However, commensals have the capability to transform into parasites over time, which results in a decrease in success or an overall population decline of the host.[28] An important ecological function of plants is that they produce organic compounds for herbivores[30] at the bottom of the food web. A large number of plant traits, from thorns to chemical defenses, can be related to the intensity of herbivory.
Large herbivores can also have many effects on vegetation. These include removing selected species, creating gaps for regeneration of new individuals, recycling nutrients, and dispersing seeds. Certain ecosystem types, such as grasslands, may be dominated by the effects of large herbivores, although fire is an equally important factor in this biome. In a few cases, herbivores are capable of removing nearly all the vegetation at a site (for example, geese in the Hudson Bay Lowlands of Canada, and nutria in the marshes of Louisiana[31]), but normally herbivores have a more selective impact, particularly when large predators control the abundance of herbivores. The usual method of studying the effects of herbivores is to build exclosures, where they cannot feed, and compare the plant communities in the exclosures to those outside over many years. Often such long-term experiments show that herbivores have a significant effect upon the species that make up the plant community.[1] The ecological success of a plant species in a specific environment may be quantified by its abundance, and depending on the life form of the plant, different measures of abundance may be relevant, e.g. density, biomass, or plant cover. The change in the abundance of a plant species may be due to both abiotic factors,[32] e.g. climate change, and biotic factors, e.g. herbivory or interspecific competition. Whether a plant species is present in a local area depends on the processes of colonisation and local extinction. The probability of colonisation decreases with distance to neighboring habitats where the species is present, and increases with plant abundance and fecundity in neighboring habitats and with the dispersal distance of the species. The probability of local extinction decreases with abundance (of both living plants and seeds in the soil seed bank). Reproduction in plants occurs in several ways, one of which is parthenogenesis.
Parthenogenesis is defined as "a form of asexual reproduction in which genetically identical offspring (clones) are produced".[33] Another form of reproduction is cross-fertilization, which is defined as "fertilization in which the egg and sperm are produced by different individuals"; in plants this occurs in the ovule. Once an ovule is fertilized, it becomes what is known as a seed. A seed normally contains the nutritive tissue, known as the endosperm, and the embryo. A seedling is a young plant that has recently gone through germination.[34] Another form of reproduction is self-fertilization,[35] in which both the sperm and the egg are produced by the same individual; such a plant is termed self-compatible.[36]
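The colonisation–extinction dynamics described earlier (colonisation probability falling with distance and rising with source abundance, extinction probability falling with local abundance) can be sketched as a simple incidence-function-style model. This is a toy illustration, not taken from the source: the exponential dispersal kernel and all parameter values (alpha, e) are assumptions.

```python
import math

def colonisation_prob(distance, source_abundance, alpha=0.5):
    """Probability that an empty patch is colonised in one time step.

    Declines with distance to the nearest occupied habitat and rises
    with abundance there; the exponential distance kernel and the
    scaling constant alpha are illustrative assumptions.
    """
    return 1 - math.exp(-alpha * source_abundance * math.exp(-distance))

def extinction_prob(abundance, e=0.3):
    """Probability of local extinction; declines with local abundance
    (living plants plus the soil seed bank)."""
    return e / (1 + abundance)

# Hypothetical layout: one occupied source patch and three empty
# patches at increasing distances (arbitrary units).
source_abundance = 5.0
for distance in (1, 2, 3):
    p = colonisation_prob(distance, source_abundance)
    print(f"distance {distance}: colonisation probability {p:.2f}")
```

The printed probabilities fall monotonically with distance, mirroring the verbal model in the text; a fuller metapopulation simulation would iterate these two probabilities over many patches and time steps.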
https://en.wikipedia.org/wiki/Plant_ecology
Evolutionary developmental biology (evo-devo) is the study of developmental programs and patterns from an evolutionary perspective.[1] It seeks to understand the various influences shaping the form and nature of life on the planet. Evo-devo arose as a separate branch of science rather recently; an early sign of this occurred in 1999.[2] Most of the synthesis in evo-devo has been in the field of animal evolution, one reason being the presence of model systems like Drosophila melanogaster, C. elegans, zebrafish and Xenopus laevis. However, since 1980, a wealth of information on plant morphology, coupled with modern molecular techniques, has helped shed light on the conserved and unique developmental patterns in the plant kingdom as well.[3][4] The origin of the term "morphology" is generally attributed to Johann Wolfgang von Goethe (1749–1832). He was of the opinion that there is an underlying fundamental organisation (Bauplan) in the diversity of flowering plants. In his book The Metamorphosis of Plants, he proposed that the Bauplan enabled us to predict the forms of plants that had not yet been discovered.[5] Goethe was the first to make the perceptive suggestion that flowers consist of modified leaves. He also entertained different complementary interpretations.[6][7] In the intervening centuries, several basic foundations of our current understanding of plant morphology were laid down. Nehemiah Grew, Marcello Malpighi, Robert Hooke, Antonie van Leeuwenhoek and Wilhelm von Nageli were just some of the people who helped build knowledge of plant morphology at various levels of organisation. It was the taxonomical classification of Carl Linnaeus in the eighteenth century, though, that generated a firm base for this knowledge to stand on and expand.[8] The introduction of the concept of Darwinism into contemporary scientific discourse also had an effect on thinking about plant forms and their evolution.
Wilhelm Hofmeister, one of the most brilliant botanists of his time, was the one to diverge from the idealist way of pursuing botany. Over the course of his life, he brought an interdisciplinary outlook into botanical thinking. He came up with biophysical explanations of phenomena like phototaxis and geotaxis, and also discovered the alternation of generations in the plant life cycle.[5] The past century witnessed rapid progress in the study of plant anatomy. The focus shifted from the population level to more reductionist levels. While the first half of the century saw expansion in developmental knowledge at the tissue and organ levels, in the latter half, especially since the 1990s, there has also been a strong impetus toward gaining molecular information. Edward Charles Jeffrey was one of the early evo-devo researchers of the 20th century. He performed comparative analyses of the vasculatures of living and fossil gymnosperms and came to the conclusion that the storage parenchyma has been derived from tracheids.[9] His research[10] focussed primarily on plant anatomy in the context of phylogeny. This tradition of evolutionary analyses of plant architectures was further advanced by Katherine Esau, best known for her book The Plant Anatomy. Her work focussed on the origin and development of various tissues in different plants. Working with Vernon Cheadle,[11] she also explained the evolutionary specialization of the phloem tissue with respect to its function. In 1959 Walter Zimmermann published a revised edition of Die Phylogenie der Pflanzen.[12] This very comprehensive work, which has not been translated into English, has no equal in the literature. It presents plant evolution as the evolution of plant development (hologeny). In this sense it is plant evolutionary developmental biology (plant evo-devo). According to Zimmermann, diversity in plant evolution occurs through various developmental processes.
Three very basic processes are heterochrony (changes in the timing of developmental processes), heterotopy (changes in the relative positioning of processes), and heteromorphy (changes in form processes).[13] In the meantime, by the beginning of the latter half of the 1900s, Arabidopsis thaliana had begun to be used in some developmental studies. The first collection of Arabidopsis thaliana mutants was made around 1945.[14] However, it formally became established as a model organism only in 1998.[15] The recent spurt of information on various plant-related processes has largely been a result of the revolution in molecular biology. Powerful techniques like mutagenesis and complementation were made possible in Arabidopsis thaliana via the generation of T-DNA-containing mutant lines, recombinant plasmids, techniques like transposon tagging, etc. The availability of complete physical and genetic maps,[16] RNAi vectors, and rapid transformation protocols are some of the technologies that have significantly altered the scope of the field.[15] Recently, there has also been a massive increase in the genome and EST sequences[17] of various non-model species, which, coupled with the bioinformatics tools existing today, generates opportunities in the field of plant evo-devo research. Gérard Cusset provided a detailed in-depth analysis of the history of plant morphology, including plant development and evolution, from its beginnings to the end of the 20th century.[18] Rolf Sattler discussed fundamental principles of plant morphology[19][20] and plant evo-devo.[21][22][23] Rolf Rutishauser surveyed the past and future of plant evo-devo with regard to continuum and process morphology.[24] The most recent innovation is Articulation Morphology.[25] Articulation Morphology, grounded in the open growth of plants, focuses on ramification as the key principle of plant morphology. This principle entails articulation: the formation of articles.
Unlike organs, which are defined in terms of a morphological theory such as the root-stem-leaf model, articles, which have been almost completely overlooked, are directly observable; that is, they are factual and objective. As such, articulation morphology based on articles offers a common foundation for all morphologists, regardless of their theoretical preferences. While plant morphology has so far been centered around homology, articulation morphology shifts the focus to the transformation of ramification and articulation during development and evolution. This shift has far-reaching consequences for plant morphology and plant evo-devo. The most important model systems in plant development have been Arabidopsis and maize. Maize has traditionally been the favorite of plant geneticists, while extensive resources in almost every area of plant physiology and development are available for Arabidopsis thaliana. Apart from these, rice, Antirrhinum majus, Brassica, and tomato are also being used in a variety of studies. The genomes of Arabidopsis thaliana and rice have been completely sequenced, while the others are in process.[26] It must be emphasized here that the information from these "model" organisms forms the basis of our developmental knowledge. While Brassica has been used primarily because of its convenient location in the phylogenetic tree in the mustard family, Antirrhinum majus is a convenient system for studying leaf architecture. Rice has traditionally been used for studying responses to hormones like abscisic acid and gibberellin, as well as responses to stress. However, recently, not just the domesticated rice strain but also the wild strains have been studied for their underlying genetic architectures.[27] Some people have objected to extending the results of model organisms to the plant world. One argument is that the effect of gene knockouts in lab conditions wouldn't truly reflect even the same plant's response in the natural world.
Also, these supposedly crucial genes might not be responsible for the evolutionary origin of that character. For these reasons, a comparative study of plant traits has been proposed as the way forward.[28] In the past few years, researchers have indeed begun looking at non-model, "non-conventional" organisms using modern genetic tools. One example of this is the Floral Genome Project, which aims to study the evolution of the current patterns in the genetic architecture of the flower through comparative genetic analyses, with a focus on EST sequences.[29] Like the FGP, there are several such ongoing projects that aim to identify conserved and diverse patterns in the evolution of plant shape. Expressed sequence tag (EST) sequences of quite a few non-model plants, like sugarcane, apple, barley, cycas and coffee, to name a few, are freely available online.[30] The Cycad Genomics Project,[31] for example, aims to understand the differences in structure and function of genes between gymnosperms and angiosperms through sampling in the order Cycadales. In the process, it intends to make available information for the study of the evolution of seeds, cones and life-cycle patterns. Presently the most important sequenced genomes from an evo-devo point of view include those of A. thaliana (a flowering plant), poplar (a woody plant), Physcomitrella patens (a bryophyte), maize (extensive genetic information), and Chlamydomonas reinhardtii (a green alga). The impact of such a vast amount of information on understanding common underlying developmental mechanisms can easily be realised. Apart from EST and genome sequences, several other tools like PCR, the yeast two-hybrid system, microarrays, RNA interference, SAGE, QTL mapping, etc. permit the rapid study of plant developmental patterns.
Recently, cross-species hybridization has begun to be employed on microarray chips to study the conservation and divergence in mRNA expression patterns between closely related species.[32] Techniques for analyzing this kind of data have also progressed over the past decade. We now have better models for molecular evolution, more refined analysis algorithms and better computing power as a result of advances in computer science. Evidence suggests that an algal scum formed on the land 1,200 million years ago, but it was not until the Ordovician period, around 500 million years ago, that land plants appeared. These began to diversify in the late Silurian period, around 420 million years ago, and the fruits of their diversification are displayed in remarkable detail in an early Devonian fossil assemblage known as the Rhynie chert. This chert preserved early plants in cellular detail, petrified in volcanic springs. By the middle of the Devonian period most of the features recognised in plants today were present, including roots and leaves. By the late Devonian, plants had reached a degree of sophistication that allowed them to form forests of tall trees. Evolutionary innovation continued after the Devonian period. Most plant groups were relatively unscathed by the Permo-Triassic extinction event, although the structures of communities changed. This may have set the scene for the evolution of flowering plants in the Triassic (~200 million years ago), which exploded in the Cretaceous and Tertiary. The latest major group of plants to evolve were the grasses, which became important in the mid-Tertiary, from around 40 million years ago. The grasses, as well as many other groups, evolved new mechanisms of metabolism to survive the low CO2 and warm, dry conditions of the tropics over the last 10 million years.
Although animals and plants evolved their body plans independently, both express a developmental constraint during mid-embryogenesis that limits their morphological diversification.[33][34][35][36][37] Meristem architectures differ between angiosperms, gymnosperms and pteridophytes. The gymnosperm vegetative meristem lacks organization into distinct tunica and corpus layers; it possesses large cells called central mother cells. In angiosperms, the outermost layer of cells divides anticlinally to generate new cells, while in gymnosperms, the plane of division in the meristem differs for different cells. However, the apical cells do contain organelles like large vacuoles and starch grains, like angiosperm meristematic cells. Pteridophytes, like ferns, on the other hand, do not possess a multicellular apical meristem. They possess a tetrahedral apical cell, which goes on to form the plant body. Any somatic mutation in this cell can lead to hereditary transmission of that mutation.[38] The earliest meristem-like organization is seen in an algal organism from the group Charales that has a single dividing cell at the tip, much like the pteridophytes, yet simpler. One can thus see a clear pattern in the evolution of meristematic tissue, from pteridophytes to angiosperms: pteridophytes, with a single meristematic cell; gymnosperms, with a multicellular but less defined organization; and finally angiosperms, with the highest degree of organization. Transcription factors and transcriptional regulatory networks play key roles in plant development and stress responses, as well as in their evolution. During the colonization of land by plants, many novel transcription factor families emerged and were preferentially wired into the networks of multicellular development, reproduction, and organ development, contributing to the more complex morphogenesis of land plants.[39] Leaves are the primary photosynthetic organs of a plant.
Based on their structure, leaves are classified into two types: microphylls, which lack complex venation patterns, and megaphylls, which are large and have complex venation. It has been proposed that these structures arose independently.[40] Megaphylls, according to the telome theory, evolved from plants that showed a three-dimensional branching architecture, through three transformations: planation, which involved formation of a planar architecture; webbing, or formation of outgrowths between the planar branches; and fusion, where these webbed outgrowths fused to form a proper leaf lamina. Studies have revealed that these three steps happened multiple times in the evolution of today's leaves.[41] Contrary to the telome theory, developmental studies of compound leaves have shown that, unlike simple leaves, compound leaves branch in three dimensions.[42][43] Consequently, they appear partially homologous with shoots, as postulated by Agnes Arber in her partial-shoot theory of the leaf.[44] They appear to be part of a continuum between morphological categories, especially those of leaf and shoot.[45][46] Molecular genetics confirmed these conclusions (see below). It has been proposed that before the evolution of leaves, plants had the photosynthetic apparatus on their stems. Today's megaphyll leaves probably became commonplace some 360 mya, about 40 my after the simple leafless plants had colonized the land in the early Devonian period. This spread has been linked to the fall in atmospheric carbon dioxide concentrations in the late Paleozoic era, associated with a rise in the density of stomata on the leaf surface. This would have allowed for better transpiration rates and gas exchange. Large leaves with fewer stomata would have heated up in the sun's rays, but an increased stomatal density allowed for a better-cooled leaf, thus making its spread feasible.
[ 47 ] [ 48 ] Various physical and physiological factors, such as light intensity, humidity , temperature and wind speed, are thought to have influenced the evolution of leaf shape and size. Tall trees rarely have large leaves, owing to the obstruction large leaves generate for winds; this obstruction can eventually lead to the tearing of leaves. Similarly, trees that grow in temperate or taiga regions have pointed leaves, presumably to prevent nucleation of ice onto the leaf surface and reduce water loss due to transpiration. Herbivory , not only by large mammals but also by small insects, has been implicated as a driving force in leaf evolution, an example being plants of the genus Aciphylla , which are commonly found in New Zealand . The now-extinct moas (birds) fed upon these plants, and the spines on the leaves probably discouraged the moas from feeding on them. Other members of Aciphylla that did not co-exist with the moas were spineless. [ 49 ] At the genetic level, developmental studies have shown that repression of the KNOX genes is required for initiation of the leaf primordium . This is brought about by ARP genes, which encode transcription factors . Genes of this type have been found in many plants studied to date, and the mechanism, i.e. repression of KNOX genes in leaf primordia, seems to be quite conserved. Expression of KNOX genes in leaves produces complex leaves. It is speculated that the ARP function arose quite early in vascular plant evolution, because members of the primitive group lycophytes also have a functionally similar gene. [ 50 ] Other players with a conserved role in defining leaf primordia are the phytohormones auxin , gibberellin and cytokinin . One feature of a plant is its phyllotaxy . The arrangement of leaves on the plant body is such that the plant can maximally harvest light under the given constraints, and hence one might expect the trait to be genetically robust . However, it may not be so. 
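The light-harvesting rationale behind phyllotaxy can be illustrated numerically: a divergence angle near the golden angle (~137.5°) spreads successive leaves around the stem far more evenly than a simple rational fraction of the circle, which stacks leaves into columns that shade one another. The sketch below is illustrative only; the functions and the overlap proxy are our own, not taken from the literature.

```python
def phyllotaxis_angles(n_leaves, divergence_deg=137.5):
    """Azimuthal angle (degrees, mod 360) of each successive leaf
    for a spiral phyllotaxy with the given divergence angle."""
    return [(i * divergence_deg) % 360.0 for i in range(n_leaves)]

def min_angular_gap(angles):
    """Smallest angular separation between any two leaves — a crude
    proxy for how evenly the leaves are spread around the stem
    (0.0 means two leaves sit directly above one another)."""
    s = sorted(angles)
    gaps = [(s[(i + 1) % len(s)] - s[i]) % 360.0 for i in range(len(s))]
    return min(gaps)

# A 90° divergence stacks every fourth leaf in the same column (gap 0),
# while the golden angle keeps all leaves angularly separated.
print(min_angular_gap(phyllotaxis_angles(20, 90.0)))   # 0.0
print(min_angular_gap(phyllotaxis_angles(20, 137.5)))
```

The golden-angle case never revisits an existing column because 137.5° is an irrational fraction of the full circle, which is the usual explanation for its prevalence in spiral phyllotaxy.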
In maize , a mutation in a single gene called abphyl ( abnormal phyllotaxy ) was enough to change the phyllotaxy of the leaves, implying that mutational tweaking of a single locus on the genome can sometimes be enough to generate diversity. The abphyl gene was later shown to encode a cytokinin response regulator protein. [ 51 ] Once the leaf primordial cells are established from the SAM cells, the new axes for leaf growth are defined, one important (and more intensively studied) among them being the abaxial-adaxial (lower-upper surface) axis. The genes involved in defining this and the other axes seem to be more or less conserved among higher plants. Proteins of the HD-ZIPIII family have been implicated in defining adaxial identity. These proteins deviate some cells in the leaf primordium from the default abaxial state, and make them adaxial . It is believed that in early plants with leaves, the leaves had just one type of surface - the abaxial one, which is the underside of today's leaves. The definition of the adaxial identity occurred some 200 million years after the abaxial identity was established. [ 28 ] One can thus imagine the early leaves as an intermediate stage in the evolution of today's leaves: just arisen from spiny stem-like outgrowths of their leafless ancestors, covered with stomata all over, and not optimized as much for light harvesting . How the infinite variety of plant leaves is generated is a subject of intense research, and some common themes have emerged. One of the most significant is the involvement of KNOX genes in generating compound leaves , as in tomato (see above) . But this again is not universal; for example, pea uses a different mechanism to the same end. [ 52 ] [ 53 ] Mutations in genes affecting leaf curvature can also change leaf form, by changing the leaf from flat to a crinkly shape, [ 54 ] like that of cabbage leaves. There also exist different morphogen gradients in a developing leaf which define the leaf's axes. 
Changes in these morphogen gradients may also affect the leaf form. Another very important class of regulators of leaf development are the microRNAs , whose role in this process has just begun to be documented. The coming years should see a rapid development in comparative studies on leaf development, with many EST sequences involved in the process coming online. Molecular genetics has also shed light on the relation between radial symmetry (characteristic of stems) and dorsiventral symmetry (typical of leaves). James (2009) stated that "it is now widely accepted that... radiality [characteristic of most shoots] and dorsiventrality [characteristic of leaves] are but extremes of a continuous spectrum. In fact, it is simply the timing of the KNOX gene expression!" [ 55 ] In fact there is evidence for this continuum already at the beginning of land plant evolution. [ 56 ] Furthermore, studies in molecular genetics confirmed that compound leaves are intermediate between simple leaves and shoots, that is, they are partially homologous with simple leaves and shoots, since "it is now generally accepted that compound leaves express both leaf and shoot properties". [ 57 ] This conclusion was reached by several authors on purely morphological grounds. [ 42 ] [ 43 ] Flower-like structures first appear in the fossil record some ~130 mya, in the Cretaceous period. [ 58 ] The flowering plants have long been assumed to have evolved from within the gymnosperms ; according to the traditional morphological view, they are closely allied to the Gnetales . However, recent molecular evidence is at odds with this hypothesis, [ 59 ] [ 60 ] and further suggests that Gnetales are more closely related to some gymnosperm groups than to angiosperms, [ 61 ] and that gymnosperms form a clade distinct from the angiosperms. 
[ 59 ] [ 60 ] [ 61 ] Molecular clock analysis predicts the divergence of flowering plants (anthophytes) and gymnosperms to ~300 mya. [ 62 ] [Two alternative cladograms of seed plant relationships among cycads, Ginkgo, conifers, Bennettitales, Gnetales and angiosperms.] The main function of a flower is reproduction , which, before the evolution of the flower and angiosperms , was the job of microsporophylls and megasporophylls. A flower can be considered a powerful evolutionary innovation , because its presence allowed the plant world to access new means and mechanisms for reproduction. It seems that, at the level of the organ, the leaf may be the ancestor of the flower, or at least of some floral organs. Mutating certain crucial genes involved in flower development results in a cluster of leaf-like structures. Thus, sometime in history, the developmental program leading to the formation of a leaf must have been altered to generate a flower. There probably also exists an overall robust framework within which the floral diversity has been generated. An example is a gene called LEAFY (LFY) , which is involved in flower development in Arabidopsis thaliana . Homologs of this gene are found in angiosperms as diverse as tomato , snapdragon , pea and maize , and even in gymnosperms . Expression of Arabidopsis thaliana LFY in distant plants like poplar and citrus also results in flower production in these plants. The LFY gene regulates the expression of some genes belonging to the MADS-box family. These genes, in turn, act as direct controllers of flower development. The members of the MADS-box family of transcription factors play a very important and evolutionarily conserved role in flower development. According to the ABC model of flower development , three zones - A, B and C - are generated within the developing flower primordium, by the action of some transcription factors that are members of the MADS-box family. 
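The combinatorial logic of the ABC model lends itself to a compact sketch. The mapping below encodes the textbook assignments (A alone gives sepals; A+B, petals; B+C, stamens; C alone, carpels); the function and dictionary names are illustrative, not from any published implementation.

```python
# Sketch of the combinatorial logic of the ABC model of flower development:
# the combination of A-, B- and C-class MADS-box activities in a whorl
# determines which floral organ develops there.
ABC_ORGAN = {
    frozenset({"A"}): "sepal",
    frozenset({"A", "B"}): "petal",
    frozenset({"B", "C"}): "stamen",
    frozenset({"C"}): "carpel",
}

def organ_identity(active_classes):
    """Map the set of active gene classes in a whorl to the predicted organ."""
    return ABC_ORGAN.get(frozenset(active_classes), "undetermined")

# The four whorls of a typical eudicot flower, outermost to innermost:
for whorl in ({"A"}, {"A", "B"}, {"B", "C"}, {"C"}):
    print(sorted(whorl), "->", organ_identity(whorl))
```

This also illustrates why mutating a single class transforms organs wholesale: knocking out C, for instance, leaves only A-containing combinations, so the model predicts flowers of sepals and petals only.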
Among these, the functions of the B and C domain genes have been more evolutionarily conserved than those of the A domain genes. Many of these genes arose through gene duplications of ancestral members of the family, and quite a few of them show redundant functions. The evolution of the MADS-box family has been extensively studied. These genes are present even in pteridophytes , but their spread and diversity are many times greater in angiosperms . [ 64 ] There appears to be a clear pattern in how this family has evolved. Consider the evolution of the C-region gene AGAMOUS ( AG ). It is expressed in today's flowers in the stamens and the carpel , which are reproductive organs. Its ancestor in gymnosperms has the same expression pattern: there, it is expressed in the strobili , organs that produce pollen or ovules. [ 65 ] Similarly, the ancestors of the B-genes (AP3 and PI) are expressed only in the male organs in gymnosperms . Their descendants in the modern angiosperms are likewise expressed only in the stamens , the male reproductive organ. Thus, then-existing components were used by plants in a novel manner to generate the first flower. This is a recurring pattern in evolution . How is the enormous diversity in the shape, color and size of flowers established? There is enormous variation in the developmental program in different plants. For example, monocots possess structures like lodicules and palea, which were believed to be analogous to the dicot petals and carpels respectively. It turns out that this is true, and the variation is due to slight changes in the MADS-box genes and their expression patterns in the monocots. Another example is that of the toad-flax, Linaria vulgaris , which has two kinds of flower symmetries: radial and bilateral . These symmetries are due to changes in copy number, timing, and location of expression of CYCLOIDEA, which is related to TCP1 in Arabidopsis. 
[ 58 ] [ 66 ] Arabidopsis thaliana has a gene called AGAMOUS that plays an important role in defining how many petals, sepals and other organs are generated. Mutations in this gene cause the floral meristem to acquire an indeterminate fate, so that floral organs keep being produced. Flowers such as roses , carnations and morning glory , for example, have very dense floral organs, and have long been selected by horticulturists for an increased number of petals . Researchers have found that the morphology of these flowers is due to strong mutations in the AGAMOUS homolog in these plants, which lead them to produce a large number of petals and sepals. [ 67 ] Several studies on diverse plants like petunia , tomato , impatiens and maize have suggested that the enormous diversity of flowers is a result of small changes in the genes controlling their development. [ 68 ] Some of these changes also cause changes in the expression patterns of the developmental genes, resulting in different phenotypes . The Floral Genome Project examined EST data from various tissues of many flowering plants. The researchers confirmed that the ABC model of flower development is not conserved across all angiosperms . Sometimes expression domains change, as in the case of many monocots , and also in some basal angiosperms like Amborella . Different models of flower development, like the fading boundaries model or the overlapping-boundaries model , which propose non-rigid domains of expression, may explain these architectures. [ 69 ] There is a possibility that from the basal to the modern angiosperms, the domains of floral architecture have become more and more fixed through evolution. Another floral feature that has been a subject of natural selection is flowering time. Some plants flower early in their life cycle, while others require a period of vernalization before flowering. 
This decision is based on factors like temperature , light intensity , presence of pollinators and other environmental signals. In Arabidopsis thaliana it is known that genes like CONSTANS (CO) , FRIGIDA , FLOWERING LOCUS C ( FLC ) and FLOWERING LOCUS T (FT) integrate the environmental signals and initiate the flower development pathway. Allelic variation at these loci has been associated with flowering time variation between plants. For example, Arabidopsis thaliana ecotypes that grow in cold temperate regions require prolonged vernalization before they flower, while tropical varieties and common lab strains do not. Much of this variation is due to mutations in the FLC and FRIGIDA genes that render them non-functional. [ 70 ] Many genes in the flowering time pathway are conserved across all plants studied to date. However, this does not mean that the mechanism of action is similarly conserved. For example, the monocot rice accelerates its flowering in short-day conditions, while Arabidopsis thaliana , a eudicot, responds to long-day conditions. In both plants the proteins CO and FT are present, but in Arabidopsis thaliana CO enhances FT production, while in rice the CO homolog represses FT production, resulting in completely opposite downstream effects. [ 71 ] There are many theories that propose how flowers evolved. Some of them are described below. The anthophyte theory was based on the observation that the gymnosperm family Gnetaceae has a flower-like ovule . It has partially developed vessels as found in the angiosperms , and the megasporangium is covered by three envelopes, like the ovary structure of angiosperm flowers. However, many other lines of evidence show that gnetophytes are not related to angiosperms. [ 63 ] The mostly male theory has a more genetic basis. Proponents of this theory point out that gymnosperms have two very similar copies of the gene LFY , while angiosperms have only one. 
Molecular clock analysis has shown that the other LFY paralog was lost in angiosperms around the same time as flower fossils became abundant, suggesting that this event might have led to floral evolution. [ 72 ] According to this theory, loss of one of the LFY paralogs led to flowers that were more male, with the ovules being expressed ectopically. These ovules initially performed the function of attracting pollinators , but sometime later may have been integrated into the core flower. In 1878 Charles Darwin published the book “The Effects of Cross and Self-Fertilization in the Vegetable Kingdom” [ 73 ] and in the initial paragraph of chapter XII noted, "The first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross-fertilisation is beneficial and self-fertilisation often injurious, at least with the plants on which I experimented." Flowers likely emerged in plant evolution as an adaptation to facilitate cross- fertilisation ( outcrossing ), a process that allows the masking of recessive deleterious mutations in the genome of progeny. This masking effect is referred to as genetic complementation . [ 74 ] This beneficial effect of cross-fertilisation on progeny is also considered to be the basis of hybrid vigor , or heterosis . Once flowers became established in a lineage with the adaptive function of promoting cross-fertilisation, subsequent switching to inbreeding usually becomes disadvantageous, in large part because it allows expression of the previously masked deleterious recessive mutations, i.e. inbreeding depression . Also, meiosis , the process in flowering plants by which seed progeny are produced, provides a direct mechanism for repairing germ-line DNA through genetic recombination. 
[ 75 ] Thus, in flowering plants, the two fundamental aspects of sexual reproduction are cross-fertilization (outcrossing) and meiosis, and these appear to be maintained, respectively, by the advantages of genetic complementation and of recombinational repair of germline DNA. [ 74 ] Plant secondary metabolites are low-molecular-weight compounds, sometimes with complex structures, that have no essential role in primary metabolism . They function in processes such as anti-herbivory, pollinator attraction, communication between plants, allelopathy , maintenance of symbiotic associations with soil flora and enhancement of the rate of fertilization [ how? ] . Secondary metabolites have great structural and functional diversity, and many thousands of enzymes may be involved in their synthesis, coded for by as much as 15–25% of the genome. [ 76 ] Many plant secondary metabolites, such as the colour and flavor components of saffron and the chemotherapeutic drug taxol , are of culinary and medical significance to humans and are therefore of commercial importance. In plants they seem to have diversified through mechanisms such as gene duplications, evolution of novel genes and the development of novel biosynthetic pathways. Studies have shown that diversity in some of these compounds may be positively selected for. [ citation needed ] Cyanogenic glycosides have been proposed to have evolved multiple times in different plant lineages, and there are several other instances of convergent evolution . For example, the enzymes for synthesis of limonene – a terpene – are more similar between angiosperms and gymnosperms than to the other terpene synthesis enzymes of their own lineages. This suggests independent evolution of the limonene biosynthetic pathway in these two lineages. [ 77 ] While environmental factors are significantly responsible for evolutionary change, they act merely as agents for natural selection . Some of the changes develop through interactions with pathogens . 
Change is inherently brought about via phenomena at the genetic level: mutations , chromosomal rearrangements and epigenetic changes. While the general types of mutations hold true across the living world, in plants some other mechanisms have been implicated as highly significant. Polyploidy is a very common feature in plants. It is believed that at least half of all plants are or have been polyploid. Polyploidy leads to genome doubling, thus generating functional redundancy in most genes. The duplicated genes may attain new functions, either through changes in expression pattern or changes in activity. Polyploidy and gene duplication are believed to be among the most powerful forces in the evolution of plant form. It is not known, though, why genome doubling is such a frequent process in plants. One possible reason is the production of large amounts of secondary metabolites in plant cells; some of these might interfere with the normal process of chromosomal segregation , leading to polyploidy . In recent times, plants have been shown to possess significant microRNA families, which are conserved across many plant lineages. In comparison to animals , while the number of plant miRNA families is smaller, the size of each family is much larger. The miRNA genes are also much more spread out in the genome than those in animals, where they are found clustered. It has been proposed that these miRNA families have expanded by duplications of chromosomal regions. [ 78 ] Many miRNA genes involved in the regulation of plant development have been found to be quite conserved between the plants studied. Domestication of plants such as maize , rice , barley and wheat has also been a significant driving force in their evolution. Some studies [ clarification needed ] have looked at the origins of the maize plant and found that maize is a domesticated derivative of a wild plant from Mexico called teosinte . 
Teosinte belongs to the genus Zea , just like maize, but bears very small inflorescences , 5–10 hard cobs and a highly branched and spread-out stem. Crosses between a particular teosinte variety and maize yield fertile offspring that are intermediate in phenotype between maize and teosinte. QTL analysis has also revealed some loci that, when mutated in maize, yield a teosinte-like stem or teosinte-like cobs. Molecular clock analysis of these genes estimates their origins to some 9,000 years ago, in good accordance with other records of maize domestication. It is believed that a small group of farmers must have selected a maize-like natural mutant of teosinte some 9,000 years ago in Mexico and subjected it to continuous selection to yield the maize plant as known today. [ 79 ] Another case is that of cauliflower . The edible cauliflower is a domesticated version of the wild plant Brassica oleracea , which does not possess the dense, undifferentiated inflorescence, called the curd, that cauliflower possesses. Cauliflower possesses a single mutation in a gene called CAL , controlling meristem differentiation into inflorescence. This causes the cells of the floral meristem to gain an undifferentiated identity; instead of growing into a flower , they grow into a lump of undifferentiated cells. [ 80 ] This mutation has been selected through domestication at least since the time of the Greek empire.
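Molecular clock dating of the kind mentioned above rests on a simple calculation: divide the genetic distance between two lineages by twice an assumed substitution rate (twice, because substitutions accumulate independently along both diverging branches). The sketch below is a back-of-the-envelope illustration; the rate and counts are invented for the example, not measured values for maize or teosinte.

```python
# Back-of-the-envelope molecular clock dating. The rate and counts used
# in the example are illustrative assumptions, not data for any real gene.

def divergence_time_years(differences, sites, rate_per_site_per_year):
    """Estimate the time since two lineages diverged.

    differences: substitutions observed between the two aligned sequences
    sites: length of the alignment
    rate_per_site_per_year: assumed per-site substitution rate

    Divided by 2 * rate because changes accumulate on both branches.
    """
    p_distance = differences / sites
    return p_distance / (2.0 * rate_per_site_per_year)

# e.g. 2 differences over 10,000 aligned sites, at an assumed rate of
# 1e-8 substitutions per site per year, dates the split to roughly
# 10,000 years ago:
print(divergence_time_years(2, 10_000, 1e-8))
```

Real analyses calibrate the rate against fossils and correct the raw p-distance for multiple substitutions at the same site, but the proportionality between distance and time is the core of the method.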
https://en.wikipedia.org/wiki/Plant_evolutionary_developmental_biology
Plant expressed vaccine (project GreenVax) [ 1 ] refers to vaccines produced in plant-based expression systems. In 2005 DARPA 's Accelerated Manufacture of Pharmaceuticals (AMP) program was created in response to emerging and novel biological threats. [ 2 ] In 2009 DARPA offered a government contract for non-GMO plant-based systems expressing recombinant proteins, after the 2009 H1N1 swine flu pandemic highlighted the national need for rapid and agile vaccine manufacturing capabilities. [ 3 ] Texas A&M University and a Texas company (GreenVax LLC, later renamed Caliber Biotherapeutics LLC and ultimately acquired by iBio, Inc.) were awarded a $40 million U.S. Department of Defense grant to develop a plant-expressed vaccine made from tobacco. [ 4 ] While egg-based vaccines typically take more than six months to develop after a virus is isolated, the new process will take only four to six weeks. [ 4 ] The vice chancellor for research at the A&M System declared that if the project works it will be one of the largest and most capable vaccine facilities in the world. [ 4 ] However, a major problem is public acceptance of this technology, and many of the companies are seeking FDA approval. [ 5 ] The plant-based vaccine production method works by isolating a specific antigen protein, one that triggers a human immune response, from the targeted virus. The gene for the protein is transferred to bacteria, which are then used to “infect” plant cells. The plants then start producing the exact protein that will be used for vaccinations. [ 6 ] Other uses of plant-expressed vaccines include the successful creation of edible bananas that protect against the Norwalk virus . [ 7 ]
https://en.wikipedia.org/wiki/Plant_expressed_vaccine
Plant floor communications refers to the control and data communications typically found in automation environments, on a manufacturing plant floor or in a process plant. The difference between manufacturing and process is typically the type of control involved: discrete control or continuous control (also known as process control ). Many plants use a hybrid of both discrete and continuous control. The underlying commonality is that the automation systems are often an integration of multi-vendor products into one system. Each vendor's product typically offers communication capability for programming, maintaining and collecting data from that product. A properly orchestrated plant floor environment will likely include a variety of communications : some for machine-to-machine (M2M) communications, to facilitate efficient primary control over the process, and some for machine-to-enterprise (M2E) communications, to facilitate connectivity with business systems that provide overall reporting, scheduling and inventory management functions. Automation controllers typically offer communication modules that enable them to support a variety of industrial protocols and facilitate machine-to-machine communications. These modules are often specially designed for a particular protocol. A new class of module, the universal gateway , is becoming more prevalent, as it offers the ability for an automation controller to communicate over one or more protocols simultaneously and can be reconfigured for additional protocols without a module change. Few automation controllers offer direct connectivity to business systems such as MES and ERP systems. Overall integration of automation controllers with business systems is typically configured by system integrators , who bring their unique knowledge of process, equipment and vendor solutions. 
Integration is typically managed through one of four mechanisms:
Direct integration – business systems include connectivity (communications to plant floor equipment) as part of their product offering. This requires the business system developers to offer specific support for the variety of plant floor equipment they want to interface with. Business system vendors must be expert in their own products and in connectivity to other vendors' products, often those offered by competitors.
Relational database (RDB) integration – business systems connect to plant floor data sources through a relational database staging table. Plant floor systems deposit the necessary information into a relational database, and the business system removes and uses the information from the RDB table. The benefit of RDB staging is that business system vendors do not need to get involved in the complexities of plant floor equipment integration; connectivity becomes the responsibility of the system integrator .
EATM ( Enterprise Appliance Transaction Modules ) – these devices can communicate directly with plant floor equipment and transact data with the business system in the methods best supported by the business system. Again, this can be through a staging table, web services, or system-specific business system APIs . The benefit of an EATM is that it offers a complete, off-the-shelf solution, minimizing long-term costs and customization.
Custom integrated solutions – many system integrators offer custom-crafted solutions, created on a per-instance basis to meet site and system requirements. There are a wide variety of communications drivers available for plant floor equipment, and there are separate products that can log data to relational database tables. Standards exist within the industry to support interoperability between software products, the most widely known being OPC, managed by the OPC Foundation . 
Custom integrated solutions typically run on workstation- or server-class computers. These systems tend to have the highest initial integration cost, and can have a higher long-term cost in terms of maintenance and reliability. Long-term costs can be minimized through careful system testing and thorough documentation.
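The relational-database staging pattern described above can be sketched in a few lines, here with an in-memory SQLite database standing in for the staging RDB. The table schema, tag names and function names are illustrative assumptions, not a standard; real deployments would use a shared database server and handle concurrency and failures.

```python
# Minimal sketch of the RDB staging-table pattern: the plant floor side
# deposits process values into a staging table; the business system side
# later drains (reads and deletes) the staged rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE staging (
    id INTEGER PRIMARY KEY,
    tag TEXT, value REAL, ts TEXT)""")

def plant_floor_deposit(tag, value, ts):
    """Plant-floor side: write one process value into the staging table."""
    conn.execute("INSERT INTO staging (tag, value, ts) VALUES (?, ?, ?)",
                 (tag, value, ts))
    conn.commit()

def business_system_drain():
    """Business-system side: remove and return all staged rows."""
    rows = conn.execute("SELECT tag, value, ts FROM staging").fetchall()
    conn.execute("DELETE FROM staging")
    conn.commit()
    return rows

plant_floor_deposit("line1.temp", 72.5, "2024-01-01T00:00:00")
print(business_system_drain())  # the staged row, consumed
print(business_system_drain())  # empty list: nothing left to drain
```

The design choice that makes this pattern attractive is visible here: neither side needs to know anything about the other beyond the agreed table schema, which is exactly why connectivity can be delegated to the system integrator.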
https://en.wikipedia.org/wiki/Plant_floor_communication
Plant genetic resources describe the variability within plants that comes from human and natural selection over millennia. Their intrinsic value mainly concerns agricultural crops ( crop biodiversity ). According to the 1983 revised International Undertaking on Plant Genetic Resources for Food and Agriculture of the Food and Agriculture Organization (FAO), plant genetic resources are defined as the entire generative and vegetative reproductive material of species with economic and/or social value, especially for the agriculture of the present and the future, with special emphasis on nutritional plants . [ 1 ] In the State of the World's Plant Genetic Resources for Food and Agriculture (1998), the FAO defined Plant Genetic Resources for Food and Agriculture (PGRFA) as the diversity of genetic material contained in traditional varieties and modern cultivars, as well as crop wild relatives and other wild plant species, that can be used now or in the future for food and agriculture . [ 2 ] The first use of plant genetic resources dates to more than 10,000 years ago, when farmers selected from the genetic variation they found in wild plants to develop their crops. As human populations moved to different climates and ecosystems, taking the crops with them, the crops adapted to the new environments, developing, for example, genetic traits providing tolerance to conditions such as drought, waterlogging, frost and extreme heat. These traits - and the plasticity inherent in having wide genetic variability - are important properties of plant genetic resources. [ citation needed ] In recent centuries, although humans had been prolific in collecting exotic flora from all corners of the globe to fill their gardens, it was not until the early 20th century that the widespread and organized collection of plant genetic resources for agricultural use began in earnest. 
Russian geneticist Nikolai Vavilov , considered by some as the father of plant genetic resources, realized the value of genetic variability for breeding and collected thousands of seeds during his extensive travels to establish one of the first gene banks . [ 3 ] Vavilov inspired the American Jack Harlan to collect seeds from across the globe for the United States Department of Agriculture (USDA). [ 4 ] David Fairchild , another botanist at the USDA, successfully introduced many important crops (e.g. cherries, soybeans, pistachios) into the United States. [ 5 ] It was not until 1967 that the term genetic resources was coined, by Otto Frankel and Erna Bennett at the historic International Conference on Crop Plant Exploration and Conservation, organized by the FAO and the International Biological Program (IBP). [ 6 ] [ 7 ] "The effective utilization of genetic resources requires that they are adequately classified and evaluated" was a key message from the conference. [ 8 ] Plant genetic resource conservation has become increasingly important as more plants have become threatened or rare. At the same time, an exploding world population and rapid climate change have led humans to seek new resilient and nutritious crops. Plant conservation strategies generally combine elements of conservation on farm (as part of the crop production cycle, where the crop continues to evolve and support farmer needs), ex situ (for example in gene banks or field collections, as seed or tissue samples) and in situ (where plants grow in the wild or in protected areas). Most in situ conservation concerns crop wild relatives , an important source of genetic variation for crop breeding programs. [ 9 ] Plant genetic resources that are conserved by any of these methods are often referred to as germplasm , a shorthand term meaning "any genetic material". 
The term originates from germ plasm , August Weismann 's theory that heritable information is transmitted only by germ cells, which has been superseded by modern insights on inheritance, including epigenetics and non-nuclear DNA . After the Second World War , efforts to conserve plant genetic resources came mainly from breeders' organizations in the USA and Europe, which led to crop-specific collections primarily located in developed countries (e.g. IRRI , CIMMYT ). In the 1960s and 1970s, more focus was put on the collection and conservation of plant genetic resources in the face of genetic erosion, by organizations such as the Rockefeller Foundation and the European Association for Research on Plant Breeding (EUCARPIA). [ 8 ] A key event in the conservation of plant genetic resources was the establishment of the International Board for Plant Genetic Resources (IBPGR) (now Bioversity International ) in 1974, whose mandate was to promote and assist in the worldwide effort to collect and conserve the plant germplasm needed for future research and production . IBPGR mobilized scientists to create a global network of gene banks, marking international recognition of the importance of plant genetic resources. [ 8 ] In 2002, the Global Crop Diversity Trust was established by Bioversity International on behalf of the CGIAR and the FAO through a Crop Diversity Endowment Fund. The goal of the Trust is to provide a secure and sustainable source of funding for the world's most important ex situ crop collections. In response to the growing awareness of the global value of and threat to biological diversity, the United Nations drafted the 1992 Convention on Biological Diversity (CBD), [ 10 ] the first global multilateral treaty focused on the conservation and sustainable use of biodiversity . 
Article 15 of the CBD specified that countries have national sovereignty over their genetic resources, but that there should be facilitated access and benefit sharing (ABS) under mutually agreed terms and with prior informed consent. Going further to protect national sovereignty over plant genetic resources, an instrumental piece of legislation, The International Treaty on Plant Genetic Resources for Food and Agriculture (ITPGRFA), was adopted by the FAO in November 2001 and came into force in 2004. [ 11 ] The ITPGRFA established several mechanisms under the Multilateral System, which grants free access and equitable use of 64 of the world’s most important crops ( Annex 1 crops ) for some uses (research, breeding and training for food and agriculture). The treaty prevents the recipients of genetic resources from claiming intellectual property rights over those resources in the form in which they received them, and ensures that access to genetic resources is consistent with international and national laws. This is facilitated by the Standard Material Transfer Agreement , a mandatory contract between providers and recipients for the exchange of germplasm under the Multilateral System. The Governing Body of the treaty, through FAO as the Third Party Beneficiary, has an interest in the agreements. [ 11 ] The Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization is a supplementary agreement to the Convention on Biological Diversity that was adopted in 2010 and entered into force in 2014. It provides greater legal transparency to policies governing fair and equitable sharing of benefits arising from the utilization of genetic resources. [ 12 ] Due to the high value and complexity of plant genetic resources and the number of parties involved globally, some issues have arisen over their conservation and use.
Much of the material for breeding programs was collected from the Southern hemisphere and sent to gene banks in the Northern hemisphere, a concern that led to more emphasis on the national sovereignty of plant genetic resources and instigated policies that addressed the imbalance. [ 13 ] The increased use of plant genetic information for research, for example to find genes of interest for drought tolerance, has led to controversy on whether and to what extent the genetic data (separate from the organism) are subject to the international ABS regulations described above. [ 13 ] Forest genetic resources represent a specific case of plant genetic resources.
https://en.wikipedia.org/wiki/Plant_genetic_resources
A plant genome assembly represents the complete genomic sequence of a plant species, assembled into chromosomes and organellar genomes from DNA (deoxyribonucleic acid) fragments obtained with different types of sequencing technology. Plant genomes vary in structure and complexity, from small genomes such as those of green algae (15 Mbp) [ 1 ] to very large and complex genomes that typically have much higher ploidy , higher rates of heterozygosity and more repetitive elements than species from other kingdoms. [ 2 ] One of the most complex plant genome assemblies available is that of loblolly pine (22 Gbp). [ 3 ] Because of this complexity, plant genomes cannot be assembled into chromosomes using only the short reads provided by next-generation sequencing (NGS) technologies, [ 4 ] [ 5 ] and therefore most plant genome assemblies based on NGS alone are highly fragmented, contain large numbers of contigs, and leave genome regions unfinished. Highly repetitive sequences, often longer than 10 kbp, are the main challenge in plants. [ 6 ] [ 7 ] Most chromosomal sequence is produced by the activity of mobile genetic elements (MGEs) in plant genomes. [ 8 ] MGEs are divided into two classes: class I, or retrotransposons , and class II, or DNA transposons . In plants, long terminal repeat (LTR) retrotransposons are predominant and constitute from 15% [ 9 ] to 90% of the genome. [ 10 ] Polyploidy is another challenge in assembling a plant genome; an estimated ≈80% of plants are polyploids. [ 11 ] The first complete plant genome assembly , that of Arabidopsis thaliana , was finished in 2000, [ 12 ] the third multicellular eukaryotic genome published, after C. elegans [ 13 ] and D. melanogaster . [ 14 ] Arabidopsis, unlike many other plants (e.g. Malus ), has convenient traits, such as a small nuclear genome (135 Mbp) and a short generation time (8 weeks from seed to seed).
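The repeat problem described above can be made concrete with a toy de Bruijn graph, the k-mer graph structure underlying most short-read assemblers. This is an illustrative sketch, not any published pipeline; the example genome and the value of k are invented. Whenever a repeated region is longer than the k-mers, the graph branches where the two repeat copies converge, and the assembler cannot extend a single unambiguous contig through it:

```python
# Toy illustration: why repeats longer than the read/k-mer length fragment
# short-read assemblies. Build a de Bruijn graph of k-mers; a repeat creates
# nodes with multiple successors, i.e. ambiguous extension points.
from collections import defaultdict

def de_bruijn(genome: str, k: int) -> dict:
    """Map each (k-1)-mer prefix to the set of (k-1)-mer suffixes following it."""
    graph = defaultdict(set)
    for i in range(len(genome) - k + 1):
        kmer = genome[i:i + k]
        graph[kmer[:-1]].add(kmer[1:])
    return graph

# "GATTACA" occurs twice, flanked by unique sequence; with k = 4 (shorter than
# the 7 bp repeat) the graph branches inside the repeat.
genome = "AAACCC" + "GATTACA" + "TTTGGG" + "GATTACA" + "CCCAAA"
graph = de_bruijn(genome, k=4)
branch_points = {node: succs for node, succs in graph.items() if len(succs) > 1}
print(f"{len(branch_points)} ambiguous node(s):", branch_points)
```

In a real assembler the contigs end at exactly these ambiguous nodes, which is why NGS-only plant assemblies quoted above remain highly fragmented.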
The genome comprises five chromosomes and is approximately 4% the size of the human genome. The genome was sequenced and annotated by the Arabidopsis Genome Initiative (AGI). The initiative to sequence the genome of rice ( Oryza sativa ) [ 15 ] began in September 1997, when scientists from many nations agreed to an international collaboration to sequence the rice genome, forming the International Rice Genome Sequencing Project (IRGSP). At an estimated size between 400 and 430 Mb, approximately four times larger than that of A. thaliana , rice has the smallest of the major cereal crop genomes. [ 15 ] Between 2000 and 2008, a total of 10 plant genomes were published, while in 2012 alone 13 plant genomes were published. Since then the number has increased steadily, and more than 400 plant genomes are now available in the NCBI genome database, of which 72 were re-annotated [NCBI]. EnsemblPlants [ 16 ] is part of the EnsemblGenomes database and contains resources for a limited number of sequenced plant species (45 as of Oct. 2017). It mainly provides genome sequences, gene models, functional annotations and polymorphic loci. For some plant species, additional information is provided, including population structure, individual genotypes, linkage, and phenotype data. Gramene [ 17 ] is an online database resource for plant comparative genomics and pathway analysis based on Ensembl technology. Plant Genome DataBase Japan [ 18 ] (PGDBj) is a website that integrates information related to the genomes of model and crop plants from several databases. It has three main components: an ortholog DB, a DNA marker and linkage map DB, and a plant resource DB, in which multiple plant resources accumulated by different institutes are integrated. The aim is "to provide a platform, enabling comparative searches of different resources" (pgdbj.jp).
PlantsDB [ 19 ] is a resource for analysing and storing genetic and genomic information from various plants, and offers tools to query these data and to perform comparative analyses with the help of in-house tools. PLAZA [ 20 ] [ 21 ] is another online resource for comparative genomics that integrates plant sequence data and comparative genomic methods, and performs evolutionary analysis within the green plant lineage ( Viridiplantae ). The Arabidopsis Information Resource (TAIR) [ 22 ] maintains a web database of the "model higher plant Arabidopsis thaliana ". In general, for sequencing and assembling large and complex genomes like those of plants, different strategies are used, depending on the technologies available when the project started. Clone-by-clone sequencing strategies are based on the construction of a map for each chromosome before sequencing, and rely on libraries made from large-insert clones. The most common type of large-insert clone is the bacterial artificial chromosome (BAC). In this approach, the genome is first split into smaller pieces whose locations are recorded. The pieces of DNA are then inserted into BAC clones, which are multiplied by inserting them into rapidly growing bacterial cells. These pieces are further fragmented into overlapping smaller pieces that are placed into a vector and then sequenced. The small pieces are then assembled into contigs by overlapping them. Next, using the map from the first step, the contigs are assembled back into chromosomes. The first complete plant genome assembly (also the first plant genome published) that used this technique was Arabidopsis thaliana , in 2000. [ 12 ] Different large-insert libraries, such as BACs, P1 artificial chromosomes (PACs), yeast artificial chromosomes (YACs) and transformation-competent artificial chromosomes (TACs), were combined to assemble the genome.
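The "assemble into contigs by overlapping" step above can be sketched as a greedy overlap-layout procedure. This is a minimal illustration, not the AGI pipeline: the example reads are invented, and real assemblers tolerate sequencing errors rather than requiring exact overlaps as here. The idea is simply to merge, at each step, the pair of pieces with the longest exact suffix/prefix overlap:

```python
# Minimal sketch of overlap-based contig assembly: repeatedly merge the pair
# of sequences with the longest exact suffix/prefix overlap.
def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of `a` matching a prefix of `b`."""
    best = 0
    for l in range(min_len, min(len(a), len(b)) + 1):
        if a.endswith(b[:l]):
            best = l
    return best

def greedy_assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        # Find the pair with the longest overlap and merge it into one piece.
        olen, a, b = max(((overlap(x, y), x, y)
                          for x in reads for y in reads if x is not y),
                         key=lambda t: t[0])
        if olen == 0:
            break  # no overlaps left; remaining reads form separate contigs
        reads.remove(a); reads.remove(b)
        reads.append(a + b[olen:])   # merge, keeping the overlap once
    return reads

pieces = ["GATTACAT", "ACATTTGG", "TTGGCCAA"]
print(greedy_assemble(pieces))  # → ['GATTACATTTGGCCAA']
```

When no pair of reads overlaps (for example, across an unsequenced repeat), the procedure stops with several contigs instead of one, which is the fragmentation the chromosome map is then used to resolve.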
Physical maps were constructed from restriction-fragment fingerprints of the clones, by comparing the fingerprint patterns and by hybridization or polymerase chain reaction (PCR). The physical maps were integrated with genetic maps to identify contig positions and orientations. End sequences from 47,788 BAC clones were used to extend contigs from anchored BACs and to select a minimum tiling path. A total of 1,569 clones in the minimum tiling path were selected and sequenced. Direct PCR products were used to close remaining gaps, and YACs allowed the characterization of telomere sequences. The sequenced regions covered 115.4 Mb of the predicted 125 Mb genome and contained a total of 25,498 protein-coding genes. To sequence and assemble the genome of Oryza sativa ( japonica ), [ 15 ] the same strategy was used: a total of 3,401 mapped clones in a minimum tiling path were selected from the physical map and assembled. One of the most important crops in the world, maize ( Zea mays ), was the last plant genome project based primarily on a Sanger BAC-by-BAC strategy. [ 23 ] The genome of maize, at 2.3 Gb and 10 chromosomes, [ 23 ] is significantly larger than those of rice and Arabidopsis. [ 23 ] To assemble the maize genome, a set of 16,848 minimally overlapping BAC clones derived from a combination of physical and genetic maps was selected and sequenced. The maize assembly was additionally supported by external data, obtained from cDNA and from sequences of libraries of methyl-filtered DNA (libraries that exploit the fact that bases in genic sequences tend to be less heavily methylated than those in non-genic regions) and high-C0t techniques.
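A minimum tiling path, mentioned above for both Arabidopsis and rice, is the smallest set of clones whose mapped positions cover the whole region. As an illustrative sketch only (the clone names and coordinates are invented, and the published projects worked from fingerprint-based physical maps rather than this simplified interval model), it can be computed with the classic greedy interval-cover algorithm: repeatedly pick the clone that starts within the region covered so far and reaches furthest:

```python
# Sketch: choose a minimum tiling path, i.e. the fewest clones whose mapped
# intervals cover the whole region, via greedy interval cover.
def minimum_tiling_path(clone_list, region_end):
    """clone_list: list of (name, start, end) tuples on the physical map."""
    clone_list = sorted(clone_list, key=lambda c: c[1])
    path, covered, i = [], 0, 0
    while covered < region_end:
        best = None
        # Among clones starting at or before the covered end, take the one
        # extending furthest to the right.
        while i < len(clone_list) and clone_list[i][1] <= covered:
            if best is None or clone_list[i][2] > best[2]:
                best = clone_list[i]
            i += 1
        if best is None or best[2] <= covered:
            raise ValueError(f"gap in clone coverage at position {covered}")
        path.append(best[0])
        covered = best[2]
    return path

clones = [("B1", 0, 120), ("B2", 50, 200), ("B3", 90, 160), ("B4", 180, 300)]
print(minimum_tiling_path(clones, 300))  # → ['B1', 'B2', 'B4']
```

Clone B3 is redundant here: it is entirely covered by B1 and B2, so sequencing it would waste effort, which is exactly what the tiling path is meant to avoid.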
The Sanger clone-by-clone strategy has the advantage of working in small units, which reduces the complexity and the computational requirements and minimizes problems associated with the misassembly of highly repetitive DNA; it is therefore an attractive solution for assembling plant genomes and other complex eukaryotic genomes. The main disadvantages of this method are the costs and the resources required: the cost of the first plant genome assemblies was estimated at between 70 million dollars [ 24 ] and 200 million dollars per assembly. [ 25 ] In whole-genome shotgun (WGS) sequencing there is no predetermined order for the fragments that are sequenced. The DNA is randomly sheared, and the cloned fragments are sequenced and assembled using computational methods. This approach reduced the cost and time associated with map construction, relying instead on computational resources. A considerable number of important plant genomes, such as grapevine ( Vitis vinifera ), [ 26 ] papaya ( Carica papaya ) [ 27 ] and cottonwood ( Populus trichocarpa ), [ 28 ] were sequenced and assembled with the Sanger WGS strategy. The draft genome of grapevine [ 26 ] was the fourth genome published for a flowering plant and the first from a fruit crop. The sequences were obtained from different types of libraries: plasmids, fosmids and BACs. All the data were generated by paired-end sequencing of cloned inserts using Sanger technology on ABI3730xl sequencers. To assemble the reads, Arachne (2002), [ 29 ] software designed to analyze reads obtained from both ends of plasmid clones, was used. In total 6.2 million paired-end tag reads were produced. The software produced 20,784 contigs that were combined into 3,830 supercontigs with an N50 value of 64 kb. The supercontigs had a total size of 498 Mb. The supercontigs were anchored along the genome first by joining them together using paired BAC end sequences.
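The N50 value quoted above is the standard summary of assembly contiguity: sort the contig (or supercontig) lengths in decreasing order and take the length at which the running total first reaches half the assembly size. The definition is generic; the numbers below are invented for illustration:

```python
# N50: the length L such that contigs of length >= L together contain at
# least half of the total assembled sequence.
def n50(lengths):
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

print(n50([100, 80, 60, 40, 20]))  # total 300; 100 + 80 = 180 >= 150 → 80
```

A higher N50 means fewer, longer contigs, which is why later sections report long-read assemblies as improvements in contig size rather than in total assembled length.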
The resulting ultracontigs and the remaining supercontigs were then aligned along the genetic map of the genome. Later improvements of this strategy enabled the sequencing of Brachypodium distachyon , [ 30 ] Sorghum bicolor [ 31 ] and soybean . [ 32 ] Due to its relatively low cost in comparison to previous methods, most recent plant genomes were sequenced and assembled using data from NGS (next-generation sequencing) technology. In general the NGS data are used in combination with Sanger sequencing or with long reads obtained from third-generation sequencing . The genome of the cucumber ( Cucumis sativus ) [ 33 ] was one of the plant genomes that combined NGS Illumina reads with Sanger sequences. High-quality bases amounting to 72.2-fold genome coverage were generated, of which Sanger sequencing provided 3.9-fold coverage and Illumina GA reads provided 68.3-fold coverage. Two assemblies were produced, one from each sequencing technology, and the resulting contigs were compared, giving an assembled genome with a total length of 243.5 Mb. This is about 30% smaller than the genome size estimated by flow cytometry of isolated nuclei stained with propidium iodide (367 Mb). A genetic map was constructed to anchor the assembled genome, and 72.8% of the assembled sequences were successfully anchored onto the seven chromosomes. Another plant genome that combined NGS with Sanger sequencing was that of Theobroma cacao (2010), [ 34 ] an economically important tropical fruit tree crop and the primary source of cocoa. The genome was sequenced by a consortium, the International Cocoa Genome Sequencing consortium (ICGS), which produced a total of 17.6 million 454 single-end reads, 8.8 million 454 paired-end reads, 398.0 million Illumina paired-end reads and about 88,000 Sanger BAC reads.
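The fold-coverage figures quoted for cucumber follow from a simple definition: coverage is the total number of sequenced bases divided by the genome size, so per-technology coverages add (3.9× Sanger + 68.3× Illumina = 72.2× combined). The read count and read length below are hypothetical, chosen only to illustrate the arithmetic against the 367 Mb flow-cytometry estimate from the text:

```python
# Average fold coverage = total sequenced bases / genome size.
def fold_coverage(n_reads: int, read_len: int, genome_size: int) -> float:
    return n_reads * read_len / genome_size

# Hypothetical numbers: 500 million 50-bp short reads over a 367 Mb genome.
cov = fold_coverage(500_000_000, 50, 367_000_000)
print(f"{cov:.1f}x")                 # → 68.1x

# Coverages from independent technologies simply add:
print(round(3.9 + 68.3, 1))          # → 72.2
```

Note that coverage is an average: repetitive and hard-to-clone regions receive less than the nominal coverage, which is one reason deep short-read coverage alone did not close the cucumber assembly.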
First, an assembly with 25,912 contigs and 4,792 scaffolds was produced from the Roche/454 and Sanger raw data using the genome assembly software Newbler. This had a total length of 326.9 Mb, which represents 76% of the estimated genome size. The Illumina reads were used to complement the 454 assembly, by aligning the short reads to the cocoa genome assembly using the SOAP software. A similar strategy combining NGS reads and Sanger sequencing was used for other important plant species, such as the first published apple genome ( Malus domestica ), [ 35 ] cotton ( Gossypium raimondii ), [ 36 ] the draft genome of sweet orange ( Citrus sinensis ) [ 37 ] and the domesticated tomato ( Solanum lycopersicum ) genome. [ 38 ] With the emergence of third-generation sequencing (TGS), some of the limitations of previous methods of sequencing and assembling plant genomes have started to be addressed. This technology is characterized by the parallel sequencing of single molecules of DNA, which yields sequences up to 54 kbp in length ( PacBio RS II). [ 39 ] In general, long reads from TGS have relatively high error rates (≈10% on average) [ 40 ] and therefore repeated sequencing of the same DNA fragments is required. The price of this technology is still quite high, and it is therefore generally used in combination with short reads from NGS. One of the first plant genomes to use long reads from TGS (Pacific Biosciences) in combination with short reads from NGS was the genome of spinach , [ 41 ] with a genome size estimated at 989 Mb. For this, a 60× coverage of the genome was generated, with 20% of the reads larger than 20 kb. The data were assembled using PacBio's hierarchical genome assembly process (HGAP), [ 42 ] and the long-read assembly showed a 63-fold improvement in contig size over an Illumina-only assembly. Another recently published plant genome that used long reads in combination with short reads is the improved assembly of the apple genome.
[ 43 ] In this project a hybrid approach was used, combining data from different sequencing technologies: PacBio RS II, Illumina paired-end (PE) reads and Illumina mate-pair (MP) reads. As a first step, an assembly from the Illumina paired-end reads was performed using the well-known de novo assembly software SOAPdenovo. [ 44 ] Then, using the hybrid assembly pipeline DBG2OLC, [ 45 ] the contigs obtained in the first step were combined with the long reads from PacBio. The assembly was then polished with the Illumina paired-end reads by mapping them to the contigs using BWA-MEM. [ 46 ] By mapping the mate-pair reads onto the corrected contigs, the assembly was scaffolded. BioNano [ 47 ] optical maps with a total length of 649.7 Mb were then used in the hybrid assembly pipeline together with the scaffolds obtained from the previous step. The resulting scaffolds were anchored to a genetic map constructed from 15,417 single-nucleotide polymorphism (SNP) markers. To better characterize the number and diversity of the genes identified, RNA sequencing (RNA-seq) data were used. The resulting genome has a size of 643.2 Mb, closer to the estimated genome size than the previously published assembly, [ 35 ] and a smaller number of protein-coding genes. The use of long reads in plant genome assemblies has become more popular, as it reduces the number of scaffolds and increases the quality of the genome by improving the assembly and coverage in regions that are not clearly resolved by NGS-only assemblies.
https://en.wikipedia.org/wiki/Plant_genome_assembly
Plant health includes the protection of plants, as well as scientific and regulatory frameworks for controlling plant pests or pathogens. [ 1 ] Plant health is concerned with:
https://en.wikipedia.org/wiki/Plant_health
Plant intelligence is a field of plant biology which aims to understand how plants process the information they obtain from their environment . [ 2 ] [ 3 ] Plant intelligence has been defined as "any type of intentional and flexible behavior that is beneficial and enables the organism to achieve its goal". [ 4 ] Plant neurobiology is a subfield of plant intelligence research that claims plants possess abilities associated with cognition including anticipation, decision making, learning and memory . [ 5 ] [ 6 ] [ 7 ] [ 8 ] Terminology used in plant neurobiology is rejected by the majority of plant scientists as misleading, as plants do not possess consciousness or neurons. [ 9 ] [ 10 ] [ 11 ] [ 12 ] In 1811, James Perchard Tupper authored An Essay on the Probability of Sensation in Vegetables which argued that plants possess a low form of sensation. [ 13 ] [ 14 ] He has been cited as an early botanist "attracted to the notion that the ability of plants to feel pain or pleasure demonstrated the universal beneficence of a Creator". [ 15 ] The notion that plants are capable of feeling emotions was first recorded in 1848, when Gustav Fechner , an experimental psychologist , suggested that plants are capable of emotions and that one could promote healthy growth with talk, attention, attitude, and affection. [ 16 ] Federico Delpino wrote about plant intelligence in 1867. [ 17 ] The idea of cognition in plants was explored by Charles Darwin in 1880 in the book The Power of Movement in Plants , co-authored with his son Francis. Using a neurological metaphor, he described the sensitivity of plant roots in proposing that the tip of roots acts like the brain of some lower animals. This involves reacting to sensation in order to determine their next movement. [ 18 ] [ 19 ] Darwin's "root-brain hypothesis" influenced those in the field of plant neurobiology many years later. 
[ 19 ] John Ellor Taylor in his 1884 book The Sagacity and Morality of Plants argued that plants are conscious agents. [ 20 ] Jagadish Chandra Bose invented various devices and instruments to measure electrical responses in plants. [ 21 ] [ 22 ] According to biologist Patrick Geddes "In his investigations on response in general Bose had found that even ordinary plants and their different organs were sensitive— exhibiting, under mechanical or other stimuli, an electric response, indicative of excitation." [ 23 ] One visitor to his laboratory, the vegetarian playwright George Bernard Shaw , was intensely disturbed upon witnessing a demonstration in which a cabbage had "convulsions" as it boiled to death. [ 24 ] Jagadish Chandra Bose is considered an important forerunner of plant neurobiology by proponents of plant cognition. [ 25 ] [ 26 ] [ 1 ] Bose was the author of The Nervous Mechanism of Plants , published in 1926. Karl F. Kellerman, Associate Chief of the Bureau of Plant Industry, United States Department of Agriculture criticized Bose's interpretation of the results from his experiments, stating that he failed to prove the conclusions from his reports that plants feel pain. Kellerman commented that "Sir Jagadar passed an electric current through plants, and his instruments recorded a break in the current. Such variations in resistance to electric current are found even when passing a current through dead matter". [ 27 ] In 1900, ornithologist Thomas G. Gentry authored Intelligence in Plants and Animals which argued that plants have consciousness. Historian Ed Folsom described it as "an exhaustive investigation of how such animals as bees, ants, worms and buzzards, as well as all kinds of plants, display intelligence and thus have souls". [ 28 ] Captain Arthur Smith in the early 1900s authored the first article on "plant consciousness". [ 29 ] [ 30 ] In 1905, Rev. Charles Fletcher Argyll Saxby authored a pamphlet, Do Plants Think? 
Some speculations concerning a neurology and psychology of plants . [ 31 ] Maurice Maeterlinck wrote about the intelligence of flowers in 1907. [ 32 ] Royal Dixon in his 1914 book, The Human Side of Plants argued that plants are sentient and have minds and souls. [ 33 ] In the 1960s Cleve Backster , an interrogation specialist with the CIA, conducted research that led him to believe that plants can feel and respond to emotions and intents from other organisms including humans. Backster's interest in the subject began in February 1966 when he tried to measure the rate at which water rises from a philodendron 's root into its leaves. Because a polygraph or "lie detector" can measure electrical resistance, which would alter when the plant was watered, he attached a polygraph to one of the plant's leaves. Backster stated that, to his immense surprise, "the tracing began to show a pattern typical of the response you get when you subject a human to emotional stimulation of short duration". [ 34 ] His ideas about primary perception (plants responding to emotions and intents) became known as the "Backster effect". [ 35 ] [ 36 ] In 1975, K. A. Horowitz, D. C. Lewis and E. L. Gasteiger published an article in Science giving their results when repeating one of Backster's effects – plant response to the killing of brine shrimp in boiling water. [ 37 ] The researchers grounded the plants to reduce electrical interference and rinsed them to remove dust particles. As a control, three of five pipettes contained brine shrimp while the remaining two only had water; the pipettes were delivered to the boiling water at random. This investigation used a total of 60 brine shrimp deliveries to boiling water while Backster's had used 13. Positive correlations did not occur at a rate great enough to be considered statistically significant. [ 37 ] Other controlled experiments that attempted to replicate Backster's findings also produced negative results. 
[ 38 ] [ 39 ] [ 40 ] [ 41 ] Botanist Arthur Galston and physiologist Clifford L. Slayman who investigated Backster's claims wrote: There is no objective scientific evidence for the existence of such complex behaviour in plants. The recent spate of popular literature on "plant consciousness" appears to have been triggered by "experiments" with a lie detector, subsequently reported and embellished in a book called The Secret Life of Plants . Unfortunately, when scientists in the discipline of plant physiology attempted to repeat the experiments, using either identical or improved equipment, the results were uniformly negative. Further investigation has shown that the original observations probably arose from defective measuring procedures. [ 38 ] John M. Kmetz noted that the Backster effect was based on observations of only seven plants which nobody including Backster was able to replicate. [ 35 ] The television show MythBusters also performed experiments (season 4, episode 18, 2006) to test the concept. The tests involved connecting plants to a polygraph galvanometer and employing actual and imagined harm upon the plants or upon others in the plants' vicinity. The galvanometer showed a reaction about one third of the time. The experimenters, who were in the room with the plant, posited that the vibrations of their actions or the room itself could have affected the polygraph. After isolating the plant, the polygraph showed a response slightly less than one third of the time. Later experiments with an EEG failed to detect anything. The show concluded that the results were not repeatable, and that the theory was not true. [ 42 ] Backster's research was cited in the pseudoscientific book The Secret Life of Plants in 1973. [ 37 ] [ 43 ] Whilst the book captured public attention it severely damaged the credibility of the field of plant intelligence. Philosopher Yogi H. 
Hendlin noted that the book's "combination of haphazard, panpsychist metaphysical speculations and unmethodical citizen science stigmatised legitimate progressive plant research, alongside the era’s new-age pseudoscience, tarring the discipline’s serious inquiry". [ 44 ] In 1973, Dorothy Retallack authored The Sound of Music and Plants . [ 45 ] In the book, Retallack records experiments she conducted at Colorado Women's College on applying different music to plants. She stated that the plants died in response to acid rock but flourished in response to classical music and jazz . [ 46 ] The experiments were described as pseudoscientific, as they were poorly designed and did not control for other factors such as humidity, light or water. [ 47 ] Colorado Women's College was embarrassed by the experiments. [ 46 ] Anthony Trewavas is credited with reintroducing the idea of plant intelligence in the early 2000s. [ 32 ] [ 48 ] [ 49 ] In 2003, Trewavas led a study of how roots interact with one another and of their signal transduction methods. He drew similarities between water-stress signals in plants affecting developmental changes and signal transduction in neural networks causing responses in muscle. [ 48 ] In particular, when plants are under water stress, there are abscisic-acid-dependent and -independent effects on development. [ 50 ] This raises further possibilities of plant decision-making based on environmental stresses. The integration of multiple chemical interactions shows evidence of the complexity of these root systems. [ 51 ] In 2012, Paco Calvo Garzón and Fred Keijzer speculated that plants exhibited structures equivalent to (1) action potentials, (2) neurotransmitters and (3) synapses . Also, they stated that a large part of plant activity takes place underground, and that the notion of a 'root brain' was first mooted by Charles Darwin in 1880. Free movement was not necessarily a criterion of cognition, they held.
The authors gave five conditions of minimal cognition in living beings, and concluded that 'plants are cognitive in a minimal, embodied sense that also applies to many animals and even bacteria.' [ 52 ] In 2017, biologists from the University of Birmingham announced that they had found a "decision-making center" in the root tip of dormant Arabidopsis seeds. [ 53 ] In 2014, Anthony Trewavas released a book called Plant Behavior and Intelligence that highlighted plant cognition through colonial-organization skills reflecting insect swarm behaviors. [ 54 ] This organizational skill reflects the plant's ability to interact with its surroundings to improve its survivability, and its ability to identify exterior factors. Evidence of plants' minimal cognition of spatial awareness can be seen in their root allocation relative to neighboring plants. [ 52 ] The organization of these roots has been found to originate from the root tip. [ 55 ] On the other hand, Peter A. Crisp and his colleagues proposed a novel view on plant memory in their review: plant memory could be advantageous under recurring and predictable stress, but resetting or forgetting a brief period of stress may be more beneficial, allowing plants to resume growth as soon as desirable conditions return. [ 56 ] Affifi (2018) proposed an empirical approach to examining the ways plants coordinate goal-based behaviour with environmental contingencies as a way of understanding plant learning. [ 57 ] According to this author, associative learning will only demonstrate intelligence if it is seen as part of teleologically integrated activity; otherwise, it can be reduced to mechanistic explanation. In 2017, Yokawa et al. found that, when exposed to anesthetics, a number of plants lost both their autonomous and touch-induced movements.
Venus flytraps no longer generated electrical signals, and their traps remained open when trigger hairs were touched; growing pea tendrils stopped their autonomous movements and were immobilized in a curled shape. [ 58 ] Raja et al. (2020) found that potted French bean plants, when planted 30 centimetres from a garden cane, would adjust their growth patterns to enable themselves to use the cane as a support in the future. Raja later stated that "If the movement of plants is controlled and affected by objects in their vicinity, then we are talking about more complex behaviours (rather than simple reactions)". Raja proposed that researchers should look for corresponding cognitive signatures. [ 59 ] [ 60 ] A minority of researchers within the field of plant neurobiology argue that plants are conscious organisms. [ 61 ] [ 62 ] [ 63 ] Peter Wohlleben argued for plant sentience in his 2016 book The Hidden Life of Trees . [ 64 ] The book was widely criticized by biologists and forest scientists for using strongly anthropomorphic and teleological language, such as describing trees as having friendships and registering fear, love and pain. [ 64 ] It has been described as containing a "conglomeration of half-truths, biased judgements, and wishful thinking". [ 64 ] František Baluška argues for a model called the Cellular Basis of Consciousness (CBC), which proposes that all cells are conscious. [ 61 ] The model has been criticized as being based only on speculation and lacking empirical evidence for its claim that cells have consciousness. [ 65 ] [ 66 ] Modern research on plant cognition is conducted by researchers associated with the Society for Plant Neurobiology, which was established in 2005. [ 8 ] Due to criticisms from botanists and complaints from early members that affiliation with the Society was negatively impacting their careers, the Society was renamed the Society of Plant Signaling and Behavior (SPSB) in 2009.
[ 8 ] [ 67 ] Research on plant intelligence is also conducted by the International Laboratory of Plant Neurobiology headed by Stefano Mancuso . It has been described as "the world's only laboratory dedicated to plant intelligence". [ 68 ] The idea of plant cognition is a source of controversy and is rejected by the majority of plant scientists. [ 9 ] [ 10 ] [ 11 ] [ 69 ] Plant neurobiology has been criticized for misleading the public with false terminology. [ 10 ] [ 70 ] There is no scientific evidence that plants possess consciousness or are sentient. [ 9 ] [ 10 ] [ 11 ] [ 71 ] Amedeo Alpi and 35 other scientists published an article in 2007 titled "Plant Neurobiology: No Brain, No Gain?" in Trends in Plant Science . [ 9 ] In this article, they argue that since there is no evidence for the presence of neurons in plants, the idea of plant neurobiology and cognition is unfounded and needs to be redefined. [ 9 ] They commented that "plant neurobiology does not add to our understanding of plant physiology, plant cell biology or signaling". [ 9 ] In response to this article, Francisco Calvo Garzón published an article in Plant Signaling and Behavior . [ 7 ] He states that, while plants do not have neurons as animals do, they do possess an information-processing system composed of cells, and he argues that this system can be used as a basis for discussing the cognitive abilities of plants.
https://en.wikipedia.org/wiki/Plant_intelligence
Plant life-form schemes constitute a way of classifying plants alternatively to the ordinary species-genus-family scientific classification. In colloquial speech, plants may be classified as trees, shrubs, herbs (forbs and graminoids), etc. The scientific use of life-form schemes emphasizes plant function in the ecosystem and the fact that the same function or "adaptedness" to the environment may be achieved in a number of ways; i.e., plant species that are closely related phylogenetically may have widely different life-forms. For example, Adoxa moschatellina and Sambucus nigra are from the same family, but the former is a small herbaceous plant and the latter is a shrub or tree. Conversely, unrelated species may share a life-form through convergent evolution. While taxonomic classification is concerned with the production of natural classifications ("natural" understood either philosophically, as in pre-evolutionary thinking, or phylogenetically, as non-polyphyletic), plant life-form classifications use criteria other than naturalness, such as morphology, physiology and ecology. Life-form and growth-form are essentially synonymous concepts, despite attempts to restrict the meaning of growth-form to types differing in shoot architecture. [ 1 ] Most life-form schemes are concerned with vascular plants only. Plant construction types may be used in a broader sense to encompass planktophytes, benthophytes (mainly algae) and terrestrial plants. [ 2 ] A popular life-form scheme is the Raunkiær system. One of the earliest attempts to classify the life-forms of plants and animals was made by Aristotle, whose writings are lost. His pupil, Theophrastus, in Historia Plantarum (c. 350 BC), was the first to formally recognize plant habits: trees, shrubs and herbs.
[ 3 ] Some earlier authors (e.g., Humboldt, 1806) did classify species according to physiognomy, [ 4 ] [ 5 ] [ 6 ] but were explicit about the entities being merely practical classes without any relation to plant function. A marked exception was A. P. de Candolle's (1818) attempt to construct a natural system of botanical classification. [ 7 ] His system was based on the height of the lignified stem and on plant longevity. Eugenius Warming, in his account, is explicit about his Candollean legacy. [ 8 ] [ 9 ] Warming's first attempt at life-form classification was his work Om Skudbygning, Overvintring og Foryngelse (translated title "On shoot architecture, perennation and rejuvenation") (1884). The classification was based on his meticulous observations while raising wild plants from seed in the Copenhagen Botanical Garden. Fourteen informal groups were recognized, based on the longevity of the plant, power of vegetative propagation, duration of tillers, hypogeous or epigeous type of shoots, mode of wintering, and degree and mode of branching of rhizomes. The term life-form was first coined by Warming ("livsform") in his 1895 book Plantesamfund, [ 8 ] but was translated to "growthform" in the 1909 English version Oecology of Plants. Warming developed his life-form scheme further in his "On the life forms in the vegetable kingdom". [ 10 ] He presented a hierarchic scheme, first dividing plants into heterotrophic and autotrophic, the latter group then into aquatic and terrestrial, the land plants into muscoid, lichenoid, lianoid and all other autonomous land plants, which again were divided into monocarpic and polycarpic. This system was incorporated into the English version of his 1895 book Oecology of Plants. [ 9 ] Warming continued working on plant life-forms and intended to develop his system further.
However, due to old age and illness, he was only able to publish a draft of his last system. [ 11 ] Following Warming's line of emphasizing functional characters, Oscar Drude devised a life-form scheme in his Die Systematische und Geographische Anordnung der Phanerogamen (1887). This was, however, a hybrid between physiognomic and functional classification schemes, as it recognized monocots and dicots as groups. Drude later modified his scheme in Deutschlands Pflanzengeographie (1896), and this scheme was adopted by the influential American plant ecologists Frederic Clements and Roscoe Pound. [ 12 ] Christen C. Raunkiær's classification (1904) recognized life-forms (first called "biological types") on the basis of plant adaptation to survive the unfavorable season, be it cold or dry, that is, on the position of buds with respect to the soil surface. [ 13 ] In subsequent works, he showed the correspondence between gross climate and the relative abundance of his life-forms. [ 14 ] [ 15 ] [ 16 ] G.E. Du Rietz (1931) reviewed the previous life-form schemes and strongly criticized attempts to include "epharmonic" characters, i.e., those that can change in response to the environment (see phenotypic plasticity). [ 1 ] He tabulated six parallel ways of life-form classification: [ 17 ] Later authors have combined these or other types of unidimensional life-form schemes into more complex schemes, in which life-forms are defined as combinations of states of several characters. Examples are the schemes proposed by Pierre Dansereau [ 18 ] and Stephan Halloy. [ 19 ] These schemes approach the concept of plant functional type, which has recently replaced life-form in a narrow sense. Some relevant schemes follow.
Based on plant habit: [ 20 ]
Humboldt described 19 (originally 16) Hauptformen, named mostly after some characteristic genus or family: [ 20 ]
Based upon the duration of life and the height of the ligneous stem: [ 21 ]
Based on the place of the plant's growth-point (bud) during seasons with adverse conditions (cold seasons, dry seasons):
Vegetation-forms: [ 23 ]
Main life-forms ("Grundformen") system: [ 25 ]
Growth-form system:
Main groups of plant life forms: [ 28 ]
Other morphological, ecological, physiological or economic categorizations of plants include classifications:
According to the general appearance (habit):
According to leaf hardness, size and orientation in relation to sunlight:
According to the habitat:
According to the water content of the environment:
According to latitude (in vegetation classification):
According to climate (in vegetation classification):
According to altitude (in vegetation classification):
According to the loss of leaves (in vegetation classification):
According to the luminosity of the environment: [ citation needed ]
According to the mode of nutrition:
According to soil factors:
According to the capacity to avoid dehydration:
According to short-term fluctuations in water balance:
According to the range of drought/humidity tolerance:
According to longevity:
According to the type of photosynthesis:
According to origin: [ 30 ] [ 31 ]
According to biogeographic distribution:
According to invasiveness:
According to establishment time in an ecological succession:
According to human cultivation:
According to importance to humans (see ethnobotany):
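The Raunkiær system mentioned above classifies plants by the position of their perennating buds relative to the soil surface. As a rough illustration only, the standard classes can be sketched as a simple height-based rule; the class names are Raunkiær's, but the exact thresholds and the handling of special cases vary between treatments, so this is a simplified sketch rather than a definitive implementation:

```python
def raunkiaer_life_form(bud_height_cm):
    """Classify a plant by the position of its perennating (resting) buds
    relative to the soil surface (simplified Raunkiaer system).

    bud_height_cm: height of the resting buds above the ground surface;
    negative values mean buds below the soil surface.
    """
    if bud_height_cm < 0:
        return "cryptophyte"      # buds buried (e.g. bulbs, rhizomes)
    if bud_height_cm == 0:
        return "hemicryptophyte"  # buds at the soil surface
    if bud_height_cm <= 25:
        return "chamaephyte"      # buds close to the ground
    return "phanerophyte"         # buds well above ground (trees, shrubs)

# Therophytes, which survive the unfavorable season only as seeds,
# fall outside a purely height-based rule like this one.
print(raunkiaer_life_form(-5))   # cryptophyte
print(raunkiaer_life_form(300))  # phanerophyte
```

The 25 cm boundary between chamaephytes and phanerophytes follows common textbook usage, but different treatments draw it differently.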
https://en.wikipedia.org/wiki/Plant_life-form
Plant litter (also leaf litter, tree litter, soil litter, litterfall or duff) is dead plant material (such as leaves, bark, needles, twigs, and cladodes) that has fallen to the ground. This detritus or dead organic material and its constituent nutrients are added to the top layer of soil, commonly known as the litter layer or O horizon ("O" for "organic"). Litter is an important factor in ecosystem dynamics, as it is indicative of ecological productivity and may be useful in predicting regional nutrient cycling and soil fertility. [ 1 ] Litterfall is characterized as fresh, undecomposed, and easily recognizable (by species and type) plant debris. This can include leaves, cones, needles, twigs, bark, seeds/nuts, logs, and reproductive organs (e.g. the stamens of flowering plants). Items larger than 2 cm in diameter are referred to as coarse litter, while anything smaller is referred to as fine litter or simply litter. The type of litterfall is most directly affected by ecosystem type. For example, leaf tissues account for about 70 percent of litterfall in forests, but woody litter tends to increase with forest age. [ 2 ] In grasslands, there is very little aboveground perennial tissue, so annual litterfall is very low and quite nearly equal to the net primary production. [ 3 ] In soil science, soil litter is classified in three layers, which form on the surface of the O horizon: the L, F, and H layers. [ 4 ] The litter layer is quite variable in its thickness, decomposition rate and nutrient content and is affected in part by seasonality, plant species, climate, soil fertility, elevation, and latitude, [ 1 ] as well as the water retention of the soil.
The most extreme variability of litterfall is seen as a function of seasonality; each individual species of plant has seasonal losses of certain parts of its body, which can be determined by the collection and classification of plant litterfall throughout the year, and which in turn affect the thickness of the litter layer. In tropical environments, the largest amount of debris falls in the latter part of the dry season and early in the wet season. [ 5 ] As a result of this seasonal variability, the decomposition rate for any given area will also be variable. Latitude also has a strong effect on litterfall rates and thickness: litterfall declines with increasing latitude. In tropical rainforests, there is a thin litter layer due to rapid decomposition, [ 7 ] while in boreal forests, the rate of decomposition is slower, leading to the accumulation of a thick litter layer, also known as a mor. [ 3 ] Net primary production works inversely to this trend, suggesting that the accumulation of organic matter is mainly a result of decomposition rate. Surface detritus facilitates the capture and infiltration of rainwater into lower soil layers, and protects soil from excessive drying and warming. [ 8 ] Soil litter protects soil aggregates from raindrop impact, preventing the release of clay and silt particles that can plug soil pores. [ 9 ] The release of clay and silt particles reduces the capacity of soil to absorb water and increases cross-surface flow, accelerating soil erosion. In addition, soil litter reduces wind erosion by preventing soil from losing moisture and by providing cover that prevents soil transport. Organic matter accumulation also helps protect soils from wildfire damage. Soil litter can be completely removed, depending on the intensity, severity and season of wildfires. [ 10 ] Regions with frequent wildfires have reduced vegetation density and reduced soil litter accumulation.
Climate also influences the depth of plant litter. Humid tropical and sub-tropical climates typically have reduced organic matter layers and horizons due to year-round decomposition and high vegetation density and growth. In temperate and cold climates, litter tends to accumulate and decomposes more slowly due to the shorter growing season, as decomposers work faster in environments with stable temperatures. [ citation needed ] Net primary production and litterfall are intimately connected. In every terrestrial ecosystem, the largest fraction of all net primary production is lost to herbivores and litterfall. [ citation needed ] Due to their interconnectedness, global patterns of litterfall are similar to global patterns of net primary productivity. [ 3 ] Plant litter, which can be made up of fallen leaves, twigs, seeds, flowers, and other woody debris, makes up a large portion of the above-ground net primary production of all terrestrial ecosystems. Fungi play a large role in cycling the nutrients from plant litter back into the ecosystem. [ 11 ] Litter provides habitat for a variety of organisms. Certain plants are specially adapted for germinating and thriving in the litter layers. [ 12 ] For example, bluebell (Hyacinthoides non-scripta) shoots puncture the layer to emerge in spring. Some plants with rhizomes, such as common wood sorrel (Oxalis acetosella), do well in this habitat. [ 7 ] Many organisms that live on the forest floor are decomposers, such as fungi. Organisms whose diet consists of plant detritus, such as earthworms, are termed detritivores. The community of decomposers in the litter layer also includes bacteria, amoebae, nematodes, rotifers, tardigrades, springtails, cryptostigmata, potworms, insect larvae, mollusks, oribatid mites, woodlice, and millipedes. [ 7 ] Even some species of microcrustaceans, especially copepods (for instance Bryocyclops spp., Graeteriella spp.
, Olmeccyclops hondo, Moraria spp., Bryocamptus spp., Atheyella spp.) [ 13 ] live in moist leaf litter habitats and play an important role as predators and decomposers. [ 14 ] The consumption of litterfall by decomposers results in the breakdown of simple carbon compounds into carbon dioxide (CO2) and water (H2O), and releases inorganic ions (like nitrogen and phosphorus) into the soil, where the surrounding plants can then reabsorb the nutrients that were shed as litterfall. In this way, litterfall becomes an important part of the nutrient cycle that sustains forest environments. As litter decomposes, nutrients are released into the environment. The portion of the litter that is not readily decomposable is known as humus. Litter aids in soil moisture retention by cooling the ground surface and holding moisture in decaying organic matter. The flora and fauna working to decompose soil litter also aid in soil respiration. A litter layer of decomposing biomass provides a continuous energy source for macro- and micro-organisms. [ 15 ] [ 8 ] Numerous reptiles, amphibians, birds, and even some mammals rely on litter for shelter and forage. Amphibians such as salamanders and caecilians inhabit the damp microclimate underneath fallen leaves for part or all of their life cycle, which makes them difficult to observe. A BBC film crew captured footage of a female caecilian with young for the first time in a documentary that aired in 2008. [ 16 ] Some species of birds, such as the ovenbird of eastern North America, require leaf litter both for foraging and as material for nests. [ 17 ] Sometimes litterfall even provides energy to much larger mammals, such as in boreal forests where lichen litterfall is one of the main constituents of wintering deer and elk diets. [ 18 ] During leaf senescence, a portion of the plant's nutrients are reabsorbed from the leaves.
The nutrient concentrations in litterfall differ from the nutrient concentrations in the mature foliage because of the reabsorption of constituents during leaf senescence. [ 3 ] Plants that grow in areas with low nutrient availability tend to produce litter with low nutrient concentrations, as a larger proportion of the available nutrients is reabsorbed. After senescence, the nutrient-enriched leaves become litterfall and settle on the soil below. Litterfall is the dominant pathway for nutrient return to the soil, especially for nitrogen (N) and phosphorus (P). The accumulation of these nutrients in the top layer of soil is known as soil immobilization. Once the litterfall has settled, decomposition of the litter layer, accomplished through the leaching of nutrients by rainfall and throughfall and by the efforts of detritivores, releases the breakdown products into the soil below and therefore contributes to the cation exchange capacity of the soil. This holds especially true for highly weathered tropical soils. [ 20 ] The decomposition rate is tied to the type of litterfall present. [ 12 ] Leaching is the process by which cations such as iron (Fe) and aluminum (Al), as well as organic matter, are removed from the litterfall and transported downward into the soil below. This process is known as podzolization and is particularly intense in boreal and cool temperate forests mainly constituted by coniferous pines, whose litterfall is rich in phenolic compounds and fulvic acid. [ 3 ] By the process of biological decomposition by microfauna, bacteria, and fungi, CO2 and H2O, nutrient elements, and a decomposition-resistant organic substance called humus are released. Humus composes the bulk of organic matter in the lower soil profile. [ 3 ] The decline of nutrient ratios is also a function of the decomposition of litterfall (i.e., as litterfall decomposes, more nutrients enter the soil below and the litter will have a lower nutrient ratio).
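The nutrient reabsorption during senescence described above is commonly quantified as resorption efficiency: the fraction of a nutrient withdrawn from green leaves before they fall. A minimal sketch using this standard definition; the concentrations in the example are illustrative assumptions, not values from the source:

```python
def resorption_efficiency(green_conc, senesced_conc):
    """Fraction of a nutrient withdrawn from leaves before abscission:
    (green-leaf concentration - senesced-leaf concentration) / green-leaf
    concentration. Concentrations may be in any consistent unit,
    e.g. mg of nutrient per g of dry leaf mass.
    """
    return (green_conc - senesced_conc) / green_conc

# Illustrative (hypothetical) values: green-leaf N = 20 mg/g,
# litterfall N = 8 mg/g, i.e. 60% of leaf N resorbed before leaf fall.
print(resorption_efficiency(20.0, 8.0))  # 0.6
```

This simple form ignores mass loss of the leaf during senescence, which more careful resorption studies correct for.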
Litterfall containing high nutrient concentrations will decompose more rapidly, with decomposition levelling off as those nutrients decrease. [ 21 ] Knowing this, ecologists have been able to use nutrient concentrations measured by remote sensing as an index of the potential rate of decomposition for any given area. [ 22 ] Globally, data from various forest ecosystems show an inverse relationship between the decline in nutrient ratios and the apparent nutrient availability of the forest. [ 3 ] Once nutrients have re-entered the soil, plants can then reabsorb them through their roots. Therefore, nutrient reabsorption during senescence presents an opportunity for a plant's future net primary production. A relationship between nutrient stores can also be defined as: Non-terrestrial litterfall follows a very different path. Litter is produced both inland by terrestrial plants and moved to the coast by fluvial processes, and by mangrove ecosystems. [ 23 ] Robertson and Daniel (1989) found that litter is then removed from the coast by the tide, crabs and microbes, and noted that which of those three is most significant depends on the tidal regime. Nordhaus et al. (2011) found that crabs forage for leaves at low tide and that, where their detritivory is the predominant disposal route, they can take 80% of leaf material. Bakkar et al. (2017) studied the chemical contribution of the resulting crab defecation. They found that crabs pass a noticeable amount of undegraded lignins to both the sediments and the water, and that the exact carbonaceous contribution of each plant species can be traced from the plant, through the crab, to its sediment or water deposition in this way. Crabs are usually the only significant macrofauna in this process; however, Raw et al. (2017) found that Terebralia palustris competes with crabs unusually vigorously in southeast Asia.
[ 24 ] The main objectives of litterfall sampling and analysis are to quantify litterfall production and chemical composition over time, in order to assess the variation in litterfall quantities, and hence its role in nutrient cycling, across an environmental gradient of climate (moisture and temperature) and soil conditions. [ 25 ] Ecologists employ a simple approach to the collection of litterfall, most of which centers around one piece of equipment, known as a litterbag. A litterbag is simply any type of container that can be set out in a given area for a specified amount of time to collect the plant litter that falls from the canopy above. Litterbags are generally set in random locations within a given area, marked with GPS or local coordinates, and then monitored on a specific time interval. Once the samples have been collected, they are usually classified by type, size and species (if possible) and recorded on a spreadsheet. [ 27 ] When measuring bulk litterfall for an area, ecologists weigh the dry contents of the litterbag. By this method litterfall flux can be defined as: The litterbag may also be used to study decomposition of the litter layer. By confining fresh litter in mesh bags and placing them on the ground, an ecologist can monitor and collect the decay measurements of that litter. [ 7 ] An exponential decay pattern has been produced by this type of experiment: X/X_o = e^(-k), where X_o is the initial amount of leaf litter and k is a constant fraction of detrital mass. [ 3 ] The mass-balance approach is also utilized in these experiments and suggests that the decomposition for a given amount of time should equal the input of litterfall for that same amount of time. Different mesh sizes are needed in the litterbags to study different groups of edaphic fauna. [ 29 ] In some regions of glaciated North America, earthworms have been introduced where they are not native.
Non-native earthworms have led to environmental changes by accelerating the rate of decomposition of litter. These changes are being studied, but may have negative impacts on some inhabitants such as salamanders. [ 30 ] Leaf litter accumulation depends on factors like wind, decomposition rate and species composition of the forest. The quantity, depth and humidity of leaf litter varies in different habitats. The leaf litter found in primary forests is more abundant, deeper and holds more humidity than in secondary forests. This condition also allows for a more stable leaf litter quantity throughout the year. [ 31 ] This thin, delicate layer of organic material can be easily affected by humans. For instance, forest litter raking as a replacement for straw in husbandry is an old non-timber practice in forest management that has been widespread in Europe since the seventeenth century. [ 32 ] [ 33 ] In 1853, an estimated 50 Tg of dry litter per year was raked in European forests, when the practice reached its peak. [ 34 ] This human disturbance, if not combined with other degradation factors, could promote podzolisation; if managed properly (for example, by burying litter removed after its use in animal husbandry), even the repeated removal of forest biomass may not have negative effects on pedogenesis . [ 35 ]
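The exponential litterbag decay pattern described above, X/X_o = e^(-k) per unit time, can be written in its time-explicit form X(t) = X_o·e^(-kt) and evaluated numerically. A minimal sketch; the decay constant used is an illustrative assumption, not a value from the source:

```python
import math

def litter_remaining(initial_mass_g, k_per_year, years):
    """Mass of litter remaining after `years`, under the
    single-exponential decay model X(t) = X_o * exp(-k * t)."""
    return initial_mass_g * math.exp(-k_per_year * years)

# Illustrative run: 100 g of confined litter with a hypothetical
# decay constant k = 0.5 per year.
for t in range(4):
    print(t, round(litter_remaining(100.0, 0.5, t), 1))
```

Fitting k to the dry masses recovered from litterbags over successive collection intervals is the usual way this model is applied in practice.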
https://en.wikipedia.org/wiki/Plant_litter
The plant microbiome , also known as the phytomicrobiome , plays roles in plant health and productivity and has received significant attention in recent years. [ 1 ] [ 2 ] The microbiome has been defined as "a characteristic microbial community occupying a reasonably well-defined habitat which has distinct physio-chemical properties. The term thus not only refers to the microorganisms involved but also encompasses their theatre of activity". [ 3 ] [ 4 ] Plants live in association with diverse microbial consortia . These microbes, referred to as the plant's microbiota , live both inside (the endosphere) and outside (the episphere) of plant tissues , and play important roles in the ecology and physiology of plants. [ 5 ] "The core plant microbiome is thought to comprise keystone microbial taxa that are important for plant fitness and established through evolutionary mechanisms of selection and enrichment of microbial taxa containing essential functions genes for the fitness of the plant holobiont." [ 6 ] Plant microbiomes are shaped by both factors related to the plant itself, such as genotype, organ, species and health status, as well as factors related to the plant's environment, such as management, land use and climate. [ 7 ] The health status of a plant has been reported in some studies to be reflected by or linked to its microbiome. [ 8 ] [ 1 ] [ 9 ] [ 2 ] The study of the association of plants with microorganisms precedes that of the animal and human microbiomes, notably the roles of microbes in nitrogen and phosphorus uptake. The most notable examples are plant root - arbuscular mycorrhizal (AM) and legume-rhizobial symbioses , both of which greatly influence the ability of roots to uptake various nutrients from the soil. Some of these microbes cannot survive in the absence of the plant host ( obligate symbionts include viruses and some bacteria and fungi), which provides space, oxygen, proteins, and carbohydrates to the microorganisms. 
The association of AM fungi with plants has been known since 1842, and over 80% of land plants are found associated with them. [ 11 ] It is thought AM fungi helped in the domestication of plants. [ 5 ] Traditionally, plant-microbe interaction studies have been confined to culturable microbes. The numerous microbes that could not be cultured have remained uninvestigated, so knowledge of their roles is largely unknown. [ 5 ] The possibility of unraveling the types and outcomes of these plant-microbe interactions has generated considerable interest among ecologists, evolutionary biologists, plant biologists, and agronomists. [ 8 ] [ 12 ] [ 1 ] Recent developments in multiomics and the establishment of large collections of microorganisms have dramatically increased knowledge of plant microbiome composition and diversity. The sequencing of marker genes of entire microbial communities, referred to as metagenomics, sheds light on the phylogenetic diversity of the microbiomes of plants. It also adds to the knowledge of the major biotic and abiotic factors responsible for shaping plant microbiome community assemblages. [ 12 ] [ 5 ] The composition of the microbial communities associated with different plant species is correlated with the phylogenetic distance between the plant species; that is, closely related plant species tend to have more similar microbial communities than distantly related species. [ 13 ] The composition of these microbiomes is dynamic and can be modulated by the environment and by climatic conditions. [ 14 ] The focus of plant microbiome studies has been directed at model plants, such as Arabidopsis thaliana, as well as economically important crop species including barley (Hordeum vulgare), corn (Zea mays), rice (Oryza sativa), soybean (Glycine max) and wheat (Triticum aestivum), whereas less attention has been given to fruit crops and tree species.
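Comparisons of community composition between plant species, such as the correlation with phylogenetic distance noted above, typically rest on a pairwise dissimilarity measure computed from marker-gene abundance tables. A minimal sketch of one widely used measure, Bray-Curtis dissimilarity; the OTU counts are hypothetical:

```python
def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two communities given as lists
    of abundances for the same ordered set of taxa (0 = identical
    composition, 1 = no shared taxa)."""
    numerator = sum(abs(x - y) for x, y in zip(a, b))
    denominator = sum(a) + sum(b)
    return numerator / denominator

# Hypothetical OTU count vectors for two plant hosts
host1 = [30, 10, 0, 5]
host2 = [25, 0, 15, 5]
print(round(bray_curtis(host1, host2), 3))  # 0.333
```

In practice such dissimilarities are computed for all host pairs and then compared against the hosts' phylogenetic distances, e.g. with a Mantel-style correlation test.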
[ 15 ] [ 2 ] Cyanobacteria are an example of a microorganism which widely interacts in a symbiotic manner with land plants . [ 16 ] [ 17 ] [ 18 ] [ 19 ] Cyanobacteria can enter the plant through the stomata and colonise the intercellular space, forming loops and intracellular coils. [ 20 ] Anabaena spp. colonize the roots of wheat and cotton plants. [ 21 ] [ 22 ] [ 23 ] Calothrix sp. has also been found on the root system of wheat. [ 22 ] [ 23 ] Monocots , such as wheat and rice, have been colonised by Nostoc spp., [ 24 ] [ 25 ] [ 26 ] [ 27 ] In 1991, Ganther and others isolated diverse heterocystous nitrogen-fixing cyanobacteria, including Nostoc , Anabaena and Cylindrospermum , from plant root and soil. Assessment of wheat seedling roots revealed two types of association patterns: loose colonization of root hair by Anabaena and tight colonization of the root surface within a restricted zone by Nostoc . [ 24 ] [ 28 ] The rhizosphere comprises the 1–10 mm zone of soil immediately surrounding the roots that is under the influence of the plant through its deposition of root exudates , mucilage and dead plant cells. [ 31 ] A diverse array of organisms specialize in living in the rhizosphere, including bacteria , fungi , oomycetes , nematodes , algae , protozoa , viruses , and archaea . [ 32 ] "Experimental evidence underlines the importance of the root microbiome in plant health and it is becoming increasingly clear that the plant is able to control the composition of its microbiome. It stands to reason that those plants that manage their microbiome in a way that is beneficial to their reproductive success will be favored during evolutionary selection. It appears that such selective pressure has brought about many specific interactions between plants and microbes, and evidence is accumulating that plants call for microbial help in time of need." 
Mycorrhizal fungi are abundant members of the rhizosphere community, have been found in over 200,000 plant species, and are estimated to associate with over 80% of all plants. [ 34 ] Mycorrhizae-root associations play profound roles in land ecosystems by regulating nutrient and carbon cycles. Mycorrhizae are integral to plant health because they provide up to 80% of plants' nitrogen and phosphorus requirements. In return, the fungi obtain carbohydrates and lipids from host plants. [ 35 ] Recent studies of arbuscular mycorrhizal fungi using sequencing technologies show greater between-species and within-species diversity than previously known. [ 36 ] [ 5 ] The most frequently studied beneficial rhizosphere organisms are mycorrhizae, rhizobium bacteria, plant-growth-promoting rhizobacteria (PGPR), and biocontrol microbes. It has been projected that one gram of soil could contain more than one million distinct bacterial genomes, [ 37 ] and over 50,000 OTUs (operational taxonomic units) have been found within the potato rhizosphere. [ 38 ] Among the prokaryotes in the rhizosphere, the most frequent bacteria belong to the Acidobacteriota, Pseudomonadota, Planctomycetota, Actinomycetota, Bacteroidota, and Bacillota. [ 39 ] [ 40 ] In some studies, no significant differences were reported in microbial community composition between bulk soil (soil not attached to the plant root) and rhizosphere soil. [ 41 ] [ 42 ] Certain bacterial groups (e.g. Actinomycetota, Xanthomonadaceae) are less abundant in the rhizosphere than in nearby bulk soil. [ 39 ] [ 5 ] Some microorganisms, such as endophytes, penetrate and occupy the plant's internal tissues, forming the endospheric microbiome. Arbuscular mycorrhizal and other endophytic fungi are the dominant colonizers of the endosphere. [ 43 ] Bacteria, and to some degree archaea, are important members of endosphere communities.
Some of these endophytic microbes interact with their host and provide obvious benefits to plants. [ 39 ] [ 44 ] [ 45 ] Unlike the rhizosphere and the rhizoplane, the endospheres harbor highly specific microbial communities. The root endophytic community can be very distinct from that of the adjacent soil community. In general, diversity of the endophytic community is lower than the diversity of the microbial community outside the plant. [ 42 ] The identity and diversity of the endophytic microbiome of above-and below-ground tissues may also differ within the plant. [ 46 ] [ 43 ] [ 5 ] The aerial surface of a plant (stem, leaf, flower, fruit) is called the phyllosphere and is considered comparatively nutrient poor when compared to the rhizosphere and endosphere. The environment in the phyllosphere is more dynamic than the rhizosphere and endosphere environments. Microbial colonizers are subjected to diurnal and seasonal fluctuations of heat, moisture, and radiation. In addition, these environmental elements affect plant physiology (such as photosynthesis, respiration, water uptake etc.) and indirectly influence microbiome composition. [ 5 ] Rain and wind also cause temporal variation to the phyllosphere microbiome. [ 48 ] Interactions between plants and their associated microorganisms in many of these microbiomes can play pivotal roles in host plant health, function, and evolution. [ 49 ] The leaf surface, or phyllosphere , harbours a microbiome comprising diverse communities of bacteria, fungi, algae, archaea, and viruses. [ 50 ] [ 51 ] Interactions between the host plant and phyllosphere bacteria have the potential to drive various aspects of host plant physiology. [ 52 ] [ 53 ] [ 54 ] However, as of 2020 knowledge of these bacterial associations in the phyllosphere remains relatively modest, and there is a need to advance fundamental knowledge of phyllosphere microbiome dynamics. 
[ 55 ] [ 56 ] Overall, there remains high species richness in phyllosphere communities. Fungal communities are highly variable in the phyllosphere of temperate regions and are more diverse there than in tropical regions. [ 57 ] There can be up to 10^7 microbes per square centimetre on the leaf surfaces of plants, and the bacterial population of the phyllosphere on a global scale is estimated at 10^26 cells. [ 58 ] The population size of the fungal phyllosphere is likely to be smaller. [ 59 ] Phyllosphere microbes from different plants appear to be somewhat similar at higher taxonomic levels, but at lower taxonomic levels significant differences remain. This indicates that microorganisms may need finely tuned metabolic adjustments to survive in the phyllosphere environment. [ 57 ] Pseudomonadota seem to be the dominant colonizers, with Bacteroidota and Actinomycetota also predominant in phyllospheres. [ 60 ] Although there are similarities between the rhizosphere and soil microbial communities, very little similarity has been found between phyllosphere communities and microorganisms floating in open air (aeroplankton). [ 43 ] [ 5 ] The assembly of the phyllosphere microbiome, which can be strictly defined as the epiphytic bacterial communities on the leaf surface, can be shaped by the microbial communities present in the surrounding environment (i.e., stochastic colonisation) and by the host plant (i.e., biotic selection). [ 50 ] [ 58 ] [ 56 ] However, although the leaf surface is generally considered a discrete microbial habitat, [ 61 ] [ 62 ] there is no consensus on the dominant driver of community assembly across phyllosphere microbiomes. For example, host-specific bacterial communities have been reported in the phyllosphere of co-occurring plant species, suggesting a dominant role of host selection.
[ 62 ] [ 43 ] [ 63 ] [ 56 ] Conversely, microbiomes of the surrounding environment have also been reported to be the primary determinant of phyllosphere community composition. [ 61 ] [ 64 ] [ 57 ] [ 65 ] As a result, the processes that drive phyllosphere community assembly are not well understood but unlikely to be universal across plant species. However, the existing evidence does indicate that phyllosphere microbiomes exhibiting host-specific associations are more likely to interact with the host than those primarily recruited from the surrounding environment. [ 52 ] [ 66 ] [ 67 ] [ 68 ] [ 56 ] The search for a core microbiome in host-associated microbial communities is a useful first step in trying to understand the interactions that may be occurring between a host and its microbiome. [ 69 ] [ 70 ] The prevailing core microbiome concept is built on the notion that the persistence of a taxon across the spatiotemporal boundaries of an ecological niche is directly reflective of its functional importance within the niche it occupies; it therefore provides a framework for identifying functionally critical microorganisms that consistently associate with a host species. [ 69 ] [ 71 ] [ 41 ] [ 56 ] Divergent definitions of "core microbiome" have arisen across the scientific literature, with researchers variably identifying "core taxa" as those persistent across distinct host microhabitats [ 72 ] [ 73 ] and even different species. [ 63 ] [ 66 ] Given the functional divergence of microorganisms across different host species [ 63 ] and microhabitats, [ 74 ] defining core taxa sensu stricto as those persistent across broad geographic distances within tissue- and species-specific host microbiomes represents the most biologically and ecologically appropriate application of this conceptual framework. 
[ 75 ] [ 56 ] Tissue- and species-specific core microbiomes across host populations separated by broad geographical distances have not been widely reported for the phyllosphere using the stringent definition established by Ruinen. [ 53 ] [ 56 ] The flowering tea tree commonly known as mānuka is indigenous to New Zealand. [ 76 ] Mānuka honey , produced from the nectar of mānuka flowers, is known for its non-peroxide antibacterial properties. [ 77 ] [ 78 ] Microorganisms have been studied in the mānuka rhizosphere and endosphere. [ 79 ] [ 80 ] [ 81 ] Earlier studies primarily focussed on fungi, and a 2016 study provided the first investigation of endophytic bacterial communities from three geographically and environmentally distinct mānuka populations using fingerprinting techniques and revealed tissue-specific core endomicrobiomes. [ 82 ] [ 56 ] A 2020 study identified a habitat-specific and relatively abundant core microbiome in the mānuka phyllosphere, which was persistent across all samples. In contrast, non-core phyllosphere microorganisms exhibited significant variation across individual host trees and populations that was strongly driven by environmental and spatial factors. The results demonstrated the existence of a dominant and ubiquitous core microbiome in the phyllosphere of mānuka. [ 56 ] Plant seeds can serve as natural vectors for vertical transmission of their beneficial endophytes, such as those that confer disease resistance. A 2021 research paper explained, "It makes sense that their most important symbionts would be vertically transmitted through seed rather than gambling that all of the correct soil-dwelling microbes might be available at the germination site." [ 83 ] The new paradigm regarding mutualistic fungi and bacterial transmission via the seeds of host plants has been fostered largely by research pertaining to plants of agricultural value. 
[ 83 ] [ 84 ] Rice seeds were found to harbour high microbial diversity, with the greatest diversity inhabiting the embryo rather than the pericarp . [ 85 ] Fungi of the genus Fusarium transmitted via seeds were found to be dominant members of the microbiome within the stems of maize . [ 83 ] This facet of the plant microbiome came to be known as the seed microbiome. [ 86 ] Forestry researchers have also begun to identify members of the seed microbiome pertaining to valuable tree species. Vertical transmission of fungal and bacterial mutualists was confirmed in 2021 for the acorns of oak trees. [ 46 ] [ 87 ] If the research on oaks turns out to apply to other tree species, it will be understood that the above-soil portions of a plant (the phyllosphere ) obtain nearly all of their beneficial fungi from those carried in the seed. [ 46 ] In contrast, the roots (the rhizosphere ) acquire only a small fraction of their mutualists from the seed. Most arrive via the surrounding soil, and this includes their vital associations with arbuscular mycorrhizal fungi . [ 84 ] Microbial species consistently found in plant seeds are known as the "core microbiome." [ 83 ] [ 88 ] Benefits to the host plant include assistance in the production of antimicrobial compounds, detoxification, nutrient uptake, and growth promotion. [ 46 ] Discerning the functions of symbiotic microbes in seeds is shifting the agricultural paradigm away from seed breeding and preparation that traditionally sought to minimize the presence of fungal and bacterial propagules. It is now routinely presumed that a microbe found within a seed is mutualistic. Such partners may contribute to "seed dormancy and germination, environmental adaptation, resistance and tolerance against diseases, and growth promotion." [ 88 ] Application of the new understanding of beneficial microbes inhabiting seeds has been suggested for use beyond agriculture and for biodiversity conservation. 
[ 84 ] A citizen group advocating for northward assisted migration of an endangered tree in the USA has pointed to the seed microbiome paradigm shift as a reason for the official institutions to lift their ban on seed transfer beyond the ex situ conservation plantings in northern Georgia. [ 89 ] Since the colonization of land by ancestral plant lineages 450 million years ago, plants and their associated microbes have been interacting with each other, forming an assemblage of species that is often referred to as a holobiont . Selective pressure acting on holobiont components has likely shaped plant-associated microbial communities and selected for host-adapted microorganisms that impact plant fitness. However, the high microbial densities detected on plant tissues, together with the fast generation time of microbes and their more ancient origin compared to their host, suggest that microbe-microbe interactions are also important selective forces sculpting complex microbial assemblages in the phyllosphere , rhizosphere , and plant endosphere compartments. [ 33 ]
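As a rough sanity check, the phyllosphere population figures quoted earlier (up to about 10⁷ microbial cells per square centimetre of leaf surface, and roughly 10²⁶ bacterial cells globally) are mutually consistent if the global leaf surface area is on the order of 10⁹ km². That leaf-area figure is an illustrative assumption, not a value from the text:

```python
# Back-of-envelope check of the global phyllosphere estimate.
# The global leaf surface area (~1e9 km^2) is an assumed
# order-of-magnitude figure, not a number from the article.
cells_per_cm2 = 1e7      # up to ~10^7 microbes per cm^2 of leaf surface
leaf_area_km2 = 1e9      # assumed global leaf surface area
cm2_per_km2 = 1e10       # 1 km^2 = (1e5 cm)^2 = 1e10 cm^2

total_cells = cells_per_cm2 * leaf_area_km2 * cm2_per_km2
print(f"{total_cells:.0e}")  # on the order of 1e26
```

The product lands on ~10²⁶ cells, matching the cited global estimate to within the order-of-magnitude precision such figures carry.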
https://en.wikipedia.org/wiki/Plant_microbiome
Phytomorphology is the study of the physical form and external structure of plants. [ 1 ] This is usually considered distinct from plant anatomy , [ 1 ] which is the study of the internal structure of plants, especially at the microscopic level. [ 2 ] Plant morphology is useful in the visual identification of plants. Recent studies in molecular biology have started to investigate the molecular processes involved in determining the conservation and diversification of plant morphologies. In these studies, transcriptome conservation patterns were found to mark crucial ontogenetic transitions during the plant life cycle, which may result in evolutionary constraints limiting diversification. [ 3 ] Plant morphology "represents a study of the development, form, and structure of plants, and, by implication, an attempt to interpret these on the basis of similarity of plan and origin". [ 4 ] There are four major areas of investigation in plant morphology, and each overlaps with another field of the biological sciences. [ citation needed ] First of all, morphology is comparative , meaning that the morphologist examines structures in many different plants of the same or different species, then draws comparisons and formulates ideas about similarities. When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous . For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. This aspect of plant morphology overlaps with the study of plant evolution and paleobotany . 
[ citation needed ] Secondly, plant morphology observes both the vegetative ( somatic ) structures of plants, as well as the reproductive structures. The study of the vegetative structures of vascular plants includes the shoot system, composed of stems and leaves, as well as the root system. The reproductive structures are more varied, and are usually specific to a particular group of plants, such as flowers and seeds, fern sori , and moss capsules. The detailed study of reproductive structures in plants led to the discovery of the alternation of generations found in all plants and most algae. This area of plant morphology overlaps with the study of biodiversity and plant systematics . [ citation needed ] Thirdly, plant morphology studies plant structure at a range of scales. At the smallest scales are ultrastructure , the general structural features of cells visible only with the aid of an electron microscope , and cytology , the study of cells using optical microscopy . At this scale, plant morphology overlaps with plant anatomy as a field of study. At the largest scale is the study of plant growth habit , the overall architecture of a plant. The pattern of branching in a tree will vary from species to species, as will the appearance of a plant as a tree, herb, or grass. [ citation needed ] Fourthly, plant morphology examines the pattern of development , the process by which structures originate and mature as a plant grows. While animals produce all the body parts they will ever have from early in their life, plants constantly produce new tissues and structures throughout their life. A living plant always has embryonic tissues. The way in which new structures mature as they are produced may be affected by the point in the plant's life when they begin to develop, as well as by the environment to which the structures are exposed. A morphologist studies this process, the causes, and its result. 
This area of plant morphology overlaps with plant physiology and ecology . A plant morphologist makes comparisons between structures in many different plants of the same or different species. Making such comparisons between similar structures in different plants tackles the question of why the structures are similar. It is quite likely that similar underlying causes of genetics, physiology, or response to the environment have led to this similarity in appearance. The result of scientific investigation into these causes can lead to one of two insights into the underlying biology: [ citation needed ] Understanding which characteristics and structures belong to each type is an important part of understanding plant evolution. The evolutionary biologist relies on the plant morphologist to interpret structures, and in turn provides phylogenies of plant relationships that may lead to new morphological insights. [ citation needed ] When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous . For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. [ citation needed ] When structures in different species are believed to exist and develop as a result of common adaptive responses to environmental pressure, those structures are termed convergent . For example, the fronds of Bryopsis plumosa and stems of Asparagus setaceus both have the same feathery branching appearance, even though one is an alga and one is a flowering plant. The similarity in overall structure occurs independently as a result of convergence. 
The growth form of many cacti and species of Euphorbia is very similar, even though they belong to widely distant families. The similarity results from common solutions to the problem of surviving in a hot, dry environment. [ citation needed ] Plant morphology treats both the vegetative structures of plants, as well as the reproductive structures. The vegetative ( somatic ) structures of vascular plants include two major organ systems: (1) a shoot system , composed of stems and leaves, and (2) a root system . These two systems are common to nearly all vascular plants, and provide a unifying theme for the study of plant morphology. By contrast, the reproductive structures are varied, and are usually specific to a particular group of plants. Structures such as flowers and fruits are only found in the angiosperms ; sori are only found in ferns; and seed cones are only found in conifers and other gymnosperms . Reproductive characters are therefore regarded as more useful for the classification of plants than vegetative characters. Plant biologists use morphological characters of plants which can be compared, measured, counted and described to assess the differences or similarities in plant taxa, and use these characters for plant identification, classification and descriptions. When characters are used in descriptions or for identification they are called diagnostic or key characters, which can be either qualitative or quantitative. Both kinds of characters can be very useful for the identification of plants. The detailed study of reproductive structures in plants led to the discovery of the alternation of generations, found in all plants and most algae, by the German botanist Wilhelm Hofmeister . This discovery is one of the most important made in all of plant morphology, since it provides a common basis for understanding the life cycle of all plants. 
[ citation needed ] The primary function of pigments in plants is photosynthesis, which uses the green pigment chlorophyll along with several red and yellow pigments, including the carotenoids, that help to capture as much light energy as possible. Pigments are also an important factor in attracting insects to flowers to encourage pollination. [ citation needed ] Plant pigments include a variety of different kinds of molecule, including porphyrins , carotenoids , anthocyanins and betalains . All biological pigments selectively absorb certain wavelengths of light while reflecting others. The light that is absorbed may be used by the plant to power chemical reactions, while the reflected wavelengths of light determine the color the pigment will appear to the eye. [ citation needed ] Plant development is the process by which structures originate and mature as a plant grows. It is a subject studied in plant anatomy and plant physiology as well as plant morphology. [ citation needed ] The process of development in plants is fundamentally different from that seen in vertebrate animals. When an animal embryo begins to develop, it will very early produce all of the body parts that it will ever have in its life. When the animal is born (or hatches from its egg), it has all its body parts and from that point will only grow larger and more mature. By contrast, plants constantly produce new tissues and structures throughout their life from meristems [ 5 ] located at the tips of organs, or between mature tissues. Thus, a living plant always has embryonic tissues. The properties of organisation seen in a plant are emergent properties which are more than the sum of the individual parts. 
"The assembly of these tissues and functions into an integrated multicellular organism yields not only the characteristics of the separate parts and processes but also quite a new set of characteristics which would not have been predictable on the basis of examination of the separate parts." [ 6 ] In other words, knowing everything about the molecules in a plant is not enough to predict the characteristics of the cells; and knowing all the properties of the cells will not predict all the properties of a plant's structure. [ citation needed ] A vascular plant begins from a single-celled zygote , formed by fertilisation of an egg cell by a sperm cell. From that point, it begins to divide to form a plant embryo through the process of embryogenesis . As this happens, the resulting cells will organise so that one end becomes the first root, while the other end forms the tip of the shoot. In seed plants, the embryo will develop one or more "seed leaves" ( cotyledons ). By the end of embryogenesis, the young plant will have all the parts necessary to begin its life. [ citation needed ] Once the embryo germinates from its seed or parent plant, it begins to produce additional organs (leaves, stems, and roots) through the process of organogenesis . New roots grow from root meristems located at the tip of the root, and new stems and leaves grow from shoot meristems located at the tip of the shoot. [ 7 ] Branching occurs when small clumps of cells left behind by the meristem, and which have not yet undergone cellular differentiation to form a specialised tissue, begin to grow as the tip of a new root or shoot. Growth from any such meristem at the tip of a root or shoot is termed primary growth and results in the lengthening of that root or shoot. Secondary growth results in widening of a root or shoot from divisions of cells in a cambium . [ 8 ] In addition to growth by cell division, a plant may grow through cell elongation . 
This occurs when individual cells or groups of cells grow longer. Not all plant cells will grow to the same length. When cells on one side of a stem grow longer and faster than cells on the other side, the stem will bend to the side of the slower growing cells as a result. This directional growth can occur via a plant's response to a particular stimulus, such as light ( phototropism ), gravity ( gravitropism ), water ( hydrotropism ), and physical contact ( thigmotropism ). [ citation needed ] Plant growth and development are mediated by specific plant hormones and plant growth regulators (PGRs) (Ross et al. 1983). [ 9 ] Endogenous hormone levels are influenced by plant age, cold hardiness, dormancy, and other metabolic conditions; photoperiod, drought, temperature, and other external environmental conditions; and exogenous sources of PGRs, e.g., externally applied and of rhizospheric origin. [ citation needed ] Plants exhibit natural variation in their form and structure. While all organisms vary from individual to individual, plants exhibit an additional type of variation. Within a single individual, parts are repeated which may differ in form and structure from other similar parts. This variation is most easily seen in the leaves of a plant, though other organs such as stems and flowers may show similar variation. There are three primary causes of this variation: positional effects, environmental effects, and juvenility. [ citation needed ] Transcription factors and transcriptional regulatory networks play key roles in plant morphogenesis and their evolution. During the colonization of land by plants, many novel transcription factor families emerged and were preferentially wired into the networks of multicellular development, reproduction, and organ development, contributing to the more complex morphogenesis of land plants. [ 10 ] Although plants produce numerous copies of the same organ during their lives, not all copies of a particular organ will be identical. 
There is variation among the parts of a mature plant resulting from the relative position where the organ is produced. For example, along a new branch the leaves may vary in a consistent pattern along the branch. The form of leaves produced near the base of the branch will differ from leaves produced at the tip of the plant, and this difference is consistent from branch to branch on a given plant and in a given species. This difference persists after the leaves at both ends of the branch have matured, and is not the result of some leaves being younger than others. [ citation needed ] The way in which new structures mature as they are produced may be affected by the point in the plant's life when they begin to develop, as well as by the environment to which the structures are exposed. This can be seen in aquatic plants. [ citation needed ] Temperature has a multiplicity of effects on plants depending on a variety of factors, including the size and condition of the plant and the temperature and duration of exposure. The smaller and more succulent the plant, the greater the susceptibility to damage or death from temperatures that are too high or too low. Temperature affects the rate of biochemical and physiological processes, rates generally (within limits) increasing with temperature. However, the Van't Hoff relationship for monomolecular reactions (which states that the velocity of a reaction is doubled or trebled by a temperature increase of 10 °C) does not strictly hold for biological processes, especially at low and high temperatures. [ citation needed ] When water freezes in plants, the consequences for the plant depend very much on whether the freezing occurs intracellularly (within cells) or outside cells in intercellular (extracellular) spaces. [ 11 ] Intracellular freezing usually kills the cell regardless of the hardiness of the plant and its tissues. 
[ 12 ] Intracellular freezing seldom occurs in nature, but moderate rates of decrease in temperature, e.g., 1 °C to 6 °C/hour, cause intercellular ice to form, and this "extraorgan ice" [ 13 ] may or may not be lethal, depending on the hardiness of the tissue. At freezing temperatures, water in the intercellular spaces of plant tissues freezes first, though the water may remain unfrozen until temperatures fall below -7 °C. [ 11 ] After the initial formation of ice intercellularly, the cells shrink as water is lost to the segregated ice. The cells undergo freeze-drying, the dehydration being the basic cause of freezing injury. [ citation needed ] The rate of cooling has been shown to influence the frost resistance of tissues, [ 14 ] but the actual rate of freezing will depend not only on the cooling rate, but also on the degree of supercooling and the properties of the tissue. [ 15 ] Sakai (1979a) [ 14 ] demonstrated ice segregation in shoot primordia of Alaskan white and black spruces when cooled slowly to -30 °C to -40 °C. These freeze-dehydrated buds survived immersion in liquid nitrogen when slowly rewarmed. Floral primordia responded similarly. Extraorgan freezing in the primordia accounts for the ability of the hardiest of the boreal conifers to survive winters in regions where air temperatures often fall to -50 °C or lower. [ 13 ] The hardiness of the winter buds of such conifers is enhanced by the smallness of the buds, by the evolution of faster translocation of water, and an ability to tolerate intensive freeze dehydration. In boreal species of Picea and Pinus , the frost resistance of 1-year-old seedlings is on a par with that of mature plants, [ 16 ] given similar states of dormancy. The organs and tissues produced by a young plant, such as a seedling, are often different from those that are produced by the same plant when it is older. This phenomenon is known as juvenility or heteroblasty . 
For example, young trees will produce longer, leaner branches that grow upwards more than the branches they will produce as a fully grown tree. In addition, leaves produced during early growth tend to be larger, thinner, and more irregular than leaves on the adult plant. Specimens of juvenile plants may look so completely different from adult plants of the same species that egg-laying insects do not recognise the plant as food for their young. Differences in rootability and flowering can be seen in the same mature tree. Juvenile cuttings taken from the base of a tree will form roots much more readily than cuttings originating from the mid to upper crown. Flowering close to the base of a tree is absent or less profuse than flowering in the higher branches, especially when a young tree first reaches flowering age. [ 17 ] The transition from early to late growth forms is referred to as ' vegetative phase change ', but there is some disagreement about terminology. [ 18 ] Rolf Sattler has revised fundamental concepts of comparative morphology such as the concept of homology. He emphasised that homology should also include partial homology and quantitative homology. [ 19 ] [ 20 ] This leads to a continuum morphology that demonstrates a continuum between the morphological categories of root, shoot, stem (caulome), leaf (phyllome), and hair (trichome). How intermediates between the categories are best described has been discussed by Bruce K. Kirchoff et al. [ 21 ] A recent study conducted by the Salk Institute extracted coordinates corresponding to each plant's base and leaves in 3D space. When plants on the graph were placed according to their actual nutrient travel distances and total branch lengths, the plants fell almost perfectly on the Pareto curve. "This means the way plants grow their architectures also optimises a very common network design tradeoff. 
Based on the environment and the species, the plant is selecting different ways to make tradeoffs for those particular environmental conditions." [ 22 ] Honoring Agnes Arber, author of the partial-shoot theory of the leaf, Rutishauser and Isler called the continuum approach Fuzzy Arberian Morphology (FAM). "Fuzzy" refers to fuzzy logic , "Arberian" to Agnes Arber . Rutishauser and Isler emphasised that this approach is not only supported by many morphological data but also by evidence from molecular genetics. [ 23 ] More recent evidence from molecular genetics provides further support for continuum morphology. James (2009) concluded that "it is now widely accepted that... radiality [characteristic of most stems] and dorsiventrality [characteristic of leaves] are but extremes of a continuous spectrum. In fact, it is simply the timing of the KNOX gene expression!" [ 24 ] Eckardt and Baum (2010) concluded that "it is now generally accepted that compound leaves express both leaf and shoot properties." [ 25 ] Process morphology describes and analyses the dynamic continuum of plant form. According to this approach, structures do not have process(es), they are process(es). [ 26 ] [ 27 ] [ 28 ] Thus, the structure/process dichotomy is overcome by "an enlargement of our concept of 'structure' so as to include and recognise that in the living organism it is not merely a question of spatial structure with an 'activity' as something over or against it, but that the concrete organism is a spatio-temporal structure and that this spatio-temporal structure is the activity itself". [ 29 ] For Jeune, Barabé and Lacroix, classical morphology (that is, mainstream morphology, based on a qualitative homology concept implying mutually exclusive categories) and continuum morphology are sub-classes of the more encompassing process morphology (dynamic morphology). 
[ 30 ] Classical morphology, continuum morphology, and process morphology are highly relevant to plant evolution, especially the field of plant evolutionary developmental biology (plant evo-devo) that tries to integrate plant morphology and plant molecular genetics. [ 31 ] In a detailed case study on unusual morphologies, Rutishauser (2016) illustrated and discussed various topics of plant evo-devo such as the fuzziness (continuity) of morphological concepts, the lack of a one-to-one correspondence between structural categories and gene expression, the notion of morphospace, the adaptive value of bauplan features versus patio ludens, physiological adaptations, hopeful monsters and saltational evolution, the significance and limits of developmental robustness, etc. [ 32 ] Rutishauser (2020) discussed the past and future of plant evo-devo. [ 33 ] Our conception of the gynoecium and the search for a fossil ancestor of Angiosperms change fundamentally from the perspective of evo-devo. [ 34 ] Whether we like it or not, morphological research is influenced by philosophical assumptions such as either/or logic, fuzzy logic, structure/process dualism or its transcendence. And empirical findings may influence the philosophical assumptions. Thus there are interactions between philosophy and empirical findings. These interactions are the subject of what has been referred to as the philosophy of plant morphology. [ 35 ] One important and unique event in plant morphology of the 21st century was the publication of Kaplan's Principles of Plant Morphology by Donald R. Kaplan, edited by Chelsea D. Specht (2020). [ 36 ] It is a well illustrated volume of 1305 pages in a very large format that presents a wealth of morphological data. Unfortunately, all of these data are interpreted only in terms of classical morphology and the qualitative homology concept, disregarding modern conceptual innovations. 
[ 37 ] Including continuum and process morphology as well as molecular genetics would provide an enlarged scope. [ 38 ] An even more important event was the publication of a book by Classen-Bockhoff: Die Pflanze: Morphologie, Entwicklung und Evolution von Vielfalt. [ 39 ] Like Kaplan's book, this book is very comprehensive (over a thousand pages) and beautifully illustrated (she worked with two illustrators), but unlike Kaplan's book, her book presents major conceptual innovations. Although, for the vegetative region, she accepts the categories of classical morphology, contrary to Kaplan, she recognizes that not all structures can be pressed into these categories. For flowers, she abandoned the classical framework altogether. Instead of interpreting the flower as a modified short shoot (as posited by classical morphology), she proposed that flowers are sporangia-bearing units, so that stamens and carpels are sporangiophores, which are considered 'de novo' structures not necessarily homologous with vegetative leaves. Rolf Sattler proposed an Articulation Morphology. [ 40 ] It is based on the open growth of plants, which occurs through ramification that leads to articulation, the formation of articles between successive ramifications or after a single ramification. Thus, the plant is seen as an articulated whole, consisting of articles. In articulation morphology, the central and most basic concept is no longer morphological homology but transformation: transformation of ramification and articulation. This changes the most basic questions we ask. Instead of asking questions about morphological homology, we ask how ramification and articulation have changed during development and evolution. For this reason, the new approach of articulation morphology may be considered a new paradigm of plant morphology. It fundamentally changes our way of thinking about morphology and morphological investigation.
https://en.wikipedia.org/wiki/Plant_morphology
Plant nutrition is the study of the chemical elements and compounds necessary for plant growth and reproduction, plant metabolism, and their external supply. An element is essential if, in its absence, the plant is unable to complete a normal life cycle, or if the element is part of some essential plant constituent or metabolite . This is in accordance with Justus von Liebig's law of the minimum . [ 1 ] The essential plant nutrients comprise seventeen different elements: carbon , oxygen and hydrogen are absorbed from the air, whereas other nutrients, including nitrogen , are typically obtained from the soil (exceptions include some parasitic or carnivorous plants ). Plants must obtain the following mineral nutrients from their growing medium: [ 2 ] These elements remain in the soil as salts , so plants absorb them as ions . The macronutrients are taken up in larger quantities; hydrogen, oxygen, nitrogen and carbon contribute over 95% of a plant's entire biomass on a dry matter weight basis. Micronutrients are present in plant tissue in quantities measured in parts per million, ranging from 0.1 [ 3 ] to 200 ppm, or less than 0.02% dry weight. [ 4 ] Most soil conditions across the world can provide plants adapted to that climate and soil with sufficient nutrition for a complete life cycle, without the addition of nutrients as fertilizer . However, if the soil is cropped it is necessary to artificially modify soil fertility through the addition of fertilizer to promote vigorous growth and increase or sustain yield. This is done because, even with adequate water and light, nutrient deficiency can limit growth and crop yield. Carbon , hydrogen and oxygen are the basic nutrients plants receive from air and water. Justus von Liebig proved in 1840 that plants needed nitrogen , potassium and phosphorus . Liebig's law of the minimum states that a plant's growth is limited by nutrient deficiency. 
[ 5 ] Plant cultivation in media other than soil was used by Arnon and Stout in 1939 to show that molybdenum was essential to tomato growth. [ citation needed ] Plants take up essential elements from the soil through their roots and from the air through their leaves. Nutrient uptake from the soil is achieved by cation exchange , wherein root hairs pump hydrogen ions (H+) into the soil through proton pumps . These hydrogen ions displace cations attached to negatively charged soil particles so that the cations become available for uptake by the root. In the leaves, stomata open to take in carbon dioxide and expel oxygen. The carbon dioxide molecules are used as the carbon source in photosynthesis . The root , especially the root hair, is the essential organ for the uptake of nutrients. The structure and architecture of the root can alter the rate of nutrient uptake. Nutrient ions are transported to the center of the root, the stele , in order for the nutrients to reach the conducting tissues, xylem and phloem. [ 6 ] The Casparian strip , a cell wall band outside the stele but within the root, prevents passive flow of water and nutrients, helping to regulate the uptake of nutrients and water. Xylem moves water and mineral ions within the plant, while phloem accounts for organic molecule transport. Water potential plays a key role in a plant's nutrient uptake. If the water potential is more negative in the plant than in the surrounding soil, water moves down the water-potential gradient from the soil into the plant, carrying dissolved nutrients with it. There are three fundamental ways plants take up nutrients through the root: simple diffusion, facilitated diffusion, and active transport. Nutrients can be moved within plants to where they are most needed. For example, a plant will try to supply more nutrients to its younger leaves than to its older ones. When nutrients are mobile in the plant, symptoms of any deficiency become apparent first on the older leaves. However, not all nutrients are equally mobile.
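The role of water potential can be made concrete with the van 't Hoff relation for the solute potential of a dilute solution, psi_s = -iCRT; water (carrying dissolved nutrients) moves toward the more negative total water potential. A hedged sketch with hypothetical concentrations, none of which come from the article:

```python
# van 't Hoff relation: psi_s = -i * C * R * T (solute potential, MPa).
R = 0.00831  # MPa * L / (mol * K), the gas constant in convenient units

def solute_potential(i: float, molarity: float, temp_k: float) -> float:
    """Solute potential in MPa for a dilute solution."""
    return -i * molarity * R * temp_k

# Hypothetical values: root cell sap more concentrated than soil water.
psi_soil = solute_potential(i=1.0, molarity=0.01, temp_k=298.0)  # ~ -0.025 MPa
psi_root = solute_potential(i=1.0, molarity=0.10, temp_k=298.0)  # ~ -0.248 MPa

# Water flows toward the more negative potential: here, soil -> root.
direction = "soil -> root" if psi_root < psi_soil else "root -> soil"
print(round(psi_soil, 3), round(psi_root, 3), direction)
```

Because the root sap is more concentrated, its potential is more negative, so water and dissolved nutrients move inward, as the text describes.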
Nitrogen, phosphorus, and potassium are mobile nutrients, while the others have varying degrees of mobility. When a less-mobile nutrient is deficient, the younger leaves suffer because the nutrient does not move up to them but stays in the older leaves. This phenomenon is helpful in determining which nutrients a plant may be lacking. Many plants engage in symbiosis with microorganisms; two important types of these relationships are nitrogen fixation and mycorrhizal association. The Earth's atmosphere contains over 78 percent nitrogen. Plants called legumes, including the agricultural crops alfalfa and soybeans, widely grown by farmers, harbour nitrogen-fixing bacteria that can convert atmospheric nitrogen into nitrogen the plant can use. Plants not classified as legumes, such as wheat, corn and rice, rely on nitrogen compounds present in the soil to support their growth. These can be supplied by mineralization of soil organic matter or added plant residues, nitrogen-fixing bacteria, animal waste, the breaking of triple-bonded N2 molecules by lightning strikes, or the application of fertilizers . At least 17 elements are known to be essential nutrients for plants. In relatively large amounts, the soil supplies nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur; these are often called the macronutrients . In relatively small amounts, the soil supplies iron, manganese, boron, molybdenum, copper, zinc, chlorine, and cobalt, the so-called micronutrients . Nutrients must be available not only in sufficient amounts but also in appropriate ratios. Plant nutrition is a difficult subject to understand completely, partly because of the variation between different plants and even between different species or individuals of a given clone. Elements present at low levels may cause deficiency symptoms, while levels that are too high can be toxic. Furthermore, deficiency of one element may present as symptoms of toxicity from another element, and vice versa.
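The diagnostic rule described above (mobile nutrients show deficiency symptoms in old leaves first, immobile ones in young growth first) can be sketched as a simple lookup. The classification below is limited to nutrients whose mobility the text states, and is illustrative only:

```python
# Mobile nutrients are withdrawn from old leaves to feed new growth, so
# old leaves show deficiency first; immobile nutrients stay where they
# were deposited, so symptoms appear first in the youngest tissues.
MOBILE = {"N", "P", "K", "Mg"}      # mobile within the plant (per the text)
IMMOBILE = {"Ca", "B", "S", "Fe"}   # little or no phloem mobility (per the text)

def first_symptom_location(nutrient: str) -> str:
    if nutrient in MOBILE:
        return "older leaves"
    if nutrient in IMMOBILE:
        return "younger leaves / new growth"
    return "varies"

print(first_symptom_location("N"))   # older leaves
print(first_symptom_location("Ca"))  # younger leaves / new growth
```

Later passages in the article (on magnesium, calcium, boron, and sulfur mobility) follow the same rule.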
An abundance of one nutrient may cause a deficiency of another nutrient. For example, K+ uptake can be influenced by the amount of NH4+ available. [ 6 ] Nitrogen is plentiful in the Earth's atmosphere, and a number of commercially important agricultural plants engage in nitrogen fixation (conversion of atmospheric nitrogen to a biologically useful form). However, plants mostly receive their nitrogen through the soil, where it has already been converted into a biologically useful form. This matters because atmospheric N2 cannot be taken up directly by the plant, and converting it into usable forms requires a great deal of energy. Nitrogen-fixing plants include soybeans, edible beans and peas, as well as clovers and alfalfa used primarily for feeding livestock. Plants such as the commercially important corn, wheat, oats, barley and rice require nitrogen compounds to be present in the soil in which they grow. Carbon and oxygen are absorbed from the air, while other nutrients are absorbed from the soil. Green plants ordinarily obtain their carbohydrate supply from the carbon dioxide in the air by the process of photosynthesis . Each of these nutrients is used for a different essential function. [ 7 ] The basic nutrients are derived from air and water. [ 8 ] Carbon forms the backbone of most plant biomolecules , including proteins, starches and cellulose . Carbon is fixed through photosynthesis ; this converts carbon dioxide from the air into carbohydrates which are used to store and transport energy within the plant. Hydrogen is necessary for building sugars and other components of the plant. It is obtained almost entirely from water. Hydrogen ions are imperative for a proton gradient to help drive the electron transport chain in photosynthesis and for respiration. [ 6 ] Oxygen is a component of many organic and inorganic molecules within the plant, and is acquired in many forms.
These include: O2 and CO2 (mainly from the air via leaves) and H2O, NO3−, H2PO4− and SO42− (mainly from the soil water via roots). Plants produce oxygen gas (O2) along with glucose during photosynthesis, but then require O2 to undergo aerobic cellular respiration and break down this glucose to produce ATP . Nitrogen is a major constituent of several of the most important plant substances. For example, nitrogen compounds comprise 40% to 50% of the dry matter of protoplasm , and it is a constituent of amino acids , the building blocks of proteins . [ 9 ] It is also an essential constituent of chlorophyll . [ 10 ] In many agricultural settings, nitrogen is the limiting nutrient for rapid growth. Like nitrogen, phosphorus is involved with many vital plant processes. Within a plant, it is present mainly as a structural component of the nucleic acids , deoxyribonucleic acid (DNA) and ribonucleic acid (RNA), and as a constituent of phospholipids , which are important in membrane development and function. It is present in both organic and inorganic forms, both of which are readily translocated within the plant. All energy transfers in the cell are critically dependent on phosphorus. As in all living things, phosphorus is part of adenosine triphosphate (ATP), which is of immediate use in all processes that require energy within the cells. Phosphorus can also be used to modify the activity of various enzymes by phosphorylation , and is used for cell signaling . Phosphorus is concentrated at the most actively growing points of a plant and stored within seeds in anticipation of their germination. Unlike other major elements, potassium does not enter into the composition of any of the important plant constituents involved in metabolism, [ 9 ] but it does occur in all parts of plants in substantial amounts. It is essential for enzyme activity, including that of enzymes involved in primary metabolism.
It plays a role in turgor regulation, affecting the functioning of the stomata and cell volume growth. [ 11 ] It seems to be of particular importance in leaves and at growing points. Potassium is outstanding among the nutrient elements for its mobility and solubility within plant tissues. Processes involving potassium include the formation of carbohydrates and proteins , the regulation of internal plant moisture, action as a catalyst and condensing agent of complex substances, acceleration of enzyme action, and contribution to photosynthesis , especially under low light intensity. Potassium regulates the opening and closing of the stomata by a potassium ion pump. Since stomata are important in water regulation, potassium regulates water loss from the leaves and increases drought tolerance. Potassium serves as an activator of enzymes used in photosynthesis and respiration. [ 6 ] Potassium is used to build cellulose and aids in photosynthesis through the formation of a chlorophyll precursor. The potassium ion (K+) is highly mobile and can aid in balancing the anion (negative) charges within the plant. A relationship between potassium nutrition and cold resistance has been found in several tree species, including two species of spruce. [ 12 ] Potassium helps in fruit coloration and shape, and also increases its brix . Hence, quality fruits are produced in potassium-rich soils. Research has linked K+ transport with auxin homeostasis, cell signaling, cell expansion, membrane trafficking and phloem transport. [ 11 ] Sulfur is a structural component of some amino acids (including cysteine and methionine ) and vitamins, and is essential for chloroplast growth and function; it is found in the iron-sulfur complexes of the electron transport chains in photosynthesis. It is needed for N2 fixation by legumes, and for the conversion of nitrate into amino acids and then into protein.
[ 13 ] Calcium in plants occurs chiefly in the leaves , with lower concentrations in seeds, fruits, and roots. A major function is as a constituent of cell walls. When coupled with certain acidic compounds of the jelly-like pectins of the middle lamella, calcium forms an insoluble salt. It is also intimately involved in meristems , and is particularly important in root development, with roles in cell division, cell elongation, and the detoxification of hydrogen ions. Other functions attributed to calcium are: the neutralization of organic acids; inhibition of some potassium-activated ions; and a role in nitrogen absorption. A notable feature of calcium-deficient plants is a defective root system. [ 14 ] Roots are usually affected before above-ground parts. [ 15 ] Blossom end rot is also a result of inadequate calcium. [ 16 ] Calcium regulates transport of other nutrients into the plant and is also involved in the activation of certain plant enzymes. Calcium deficiency results in stunting. This nutrient is involved in photosynthesis and plant structure. [ 16 ] [ 17 ] It is needed as a balancing cation for anions in the vacuole and as an intracellular messenger in the cytosol . [ 18 ] The outstanding role of magnesium in plant nutrition is as a constituent of the chlorophyll molecule. As a carrier, it is also involved in numerous enzyme reactions as an effective activator, in which it is closely associated with energy-supplying phosphorus compounds. Plants are generally able to accumulate sufficient amounts of most trace elements. Some plants are sensitive indicators of the chemical environment in which they grow (Dunn 1991), [ 19 ] and some plants have barrier mechanisms that exclude or limit the uptake of a particular element or ion species; for example, alder twigs commonly accumulate molybdenum but not arsenic, whereas the reverse is true of spruce bark (Dunn 1991).
[ 19 ] Otherwise, a plant can integrate the geochemical signature of the soil mass permeated by its root system together with the contained groundwaters. Sampling is facilitated by the tendency of many elements to accumulate in tissues at the plant's extremities. Some micronutrients can be applied as seed coatings. Iron is necessary for photosynthesis and is present as an enzyme cofactor in plants. Iron deficiency can result in interveinal chlorosis and necrosis . Iron is not a structural part of chlorophyll but is essential for its synthesis. Copper deficiency can be responsible for promoting an iron deficiency. [ 20 ] Iron is also involved in electron transport within the plant. As with other biological processes, the main useful form of iron is iron(II), owing to its higher solubility at neutral pH. However, plants are also capable of using iron(III) via citric acid, using the photo-reduction of ferric citrate . [ 21 ] In the field, as with many other transition metals, iron fertilizer is supplied as a chelate . [ 22 ] Molybdenum is a cofactor to enzymes important in building amino acids and is involved in nitrogen metabolism. Molybdenum is part of the nitrate reductase enzyme (needed for the reduction of nitrate) and the nitrogenase enzyme (required for biological nitrogen fixation ). [ 10 ] Reduced productivity as a result of molybdenum deficiency is usually associated with the reduced activity of one or more of these enzymes. Boron has many functions in a plant: [ 23 ] it affects flowering and fruiting, pollen germination, cell division, and active salt absorption. The metabolism of amino acids and proteins, carbohydrates, calcium, and water is strongly affected by boron. Many of these functions may stem from boron's role in moving highly polar sugars through cell membranes by reducing their polarity, and hence the energy needed to pass the sugar through. If sugar cannot reach the fastest-growing parts rapidly enough, those parts die.
Copper is important for photosynthesis. Symptoms of copper deficiency include chlorosis. Copper is involved in many enzyme processes, is necessary for proper photosynthesis, is involved in the manufacture of lignin (cell walls), and is involved in grain production. It is difficult for plants to obtain under some soil conditions. Manganese is necessary for photosynthesis, [ 17 ] including the building of chloroplasts . Manganese deficiency may result in coloration abnormalities, such as discolored spots on the foliage . Sodium is involved in the regeneration of phosphoenolpyruvate in CAM and C4 plants. Sodium can potentially replace potassium's regulation of stomatal opening and closing. [ 6 ] Zinc is required by a large number of enzymes and plays an essential role in DNA transcription . A typical symptom of zinc deficiency is the stunted growth of leaves, commonly known as "little leaf", caused by the oxidative degradation of the growth hormone auxin . In vascular plants , nickel is absorbed in the form of the Ni2+ ion. Nickel is essential for activation of urease , an enzyme involved with nitrogen metabolism that is required to process urea. Without nickel, toxic levels of urea accumulate, leading to the formation of necrotic lesions. In non-vascular plants , nickel activates several enzymes involved in a variety of processes, and can substitute for zinc and iron as a cofactor in some enzymes. [ 24 ] Chlorine , as compounded chloride, is necessary for osmosis and ionic balance ; it also plays a role in photosynthesis . Cobalt has proven to be beneficial to at least some plants, although it does not appear to be essential for most species. [ 25 ] It has, however, been shown to be essential for nitrogen fixation by the nitrogen-fixing bacteria associated with legumes and other plants. [ 25 ] Silicon is not considered an essential element for plant growth and development. It is abundant in the environment, and hence available if needed.
It is found in the structures of plants and improves the health of plants. [ 26 ] In plants, silicon has been shown in experiments to strengthen cell walls and improve plant strength, health, and productivity. [ 27 ] There have been studies showing evidence of silicon improving drought and frost resistance , decreasing lodging potential and boosting the plant's natural pest- and disease-fighting systems. [ 28 ] Silicon has also been shown to improve plant vigor and physiology by improving root mass and density, and increasing above-ground plant biomass and crop yields . [ 27 ] Silicon is currently under consideration by the Association of American Plant Food Control Officials (AAPFCO) for elevation to the status of a "plant beneficial substance". [ 29 ] [ 30 ] Vanadium may be required by some plants, but at very low concentrations. It may also substitute for molybdenum . Selenium is probably not essential for flowering plants, but it can be beneficial; it can stimulate plant growth, improve tolerance of oxidative stress, and increase resistance to pathogens and herbivory. [ 31 ] Nitrogen is transported via the xylem from the roots to the leaf canopy as nitrate ions, or in an organic form, such as amino acids or amides. Nitrogen can also be transported in the phloem sap as amides, amino acids and ureides; it is therefore mobile within the plant, and the older leaves exhibit chlorosis and necrosis earlier than the younger leaves. [ 6 ] [ 10 ] Because phosphorus is a mobile nutrient, older leaves will show the first signs of deficiency. Magnesium is very mobile in plants, and, like potassium, when deficient is translocated from older to younger tissues, so that signs of deficiency appear first on the oldest tissues and then spread progressively to younger tissues. Because calcium is phloem-immobile, calcium deficiency can be seen in new growth. When developing tissues are forced to rely on the xylem , calcium is supplied only by the transpiration stream.
Boron is not relocatable in the plant via the phloem . It must be supplied to the growing parts via the xylem . Foliar sprays affect only those parts sprayed, which may be insufficient for the fastest-growing parts, and the effect is only temporary. [ citation needed ] In plants, sulfur cannot be mobilized from older leaves for new growth, so deficiency symptoms are seen in the youngest tissues first. [ 32 ] Symptoms of deficiency include yellowing of leaves and stunted growth. [ 33 ] The effect of a nutrient deficiency can vary from a subtle depression of growth rate to obvious stunting, deformity, discoloration, distress, and even death. Visual symptoms distinctive enough to be useful in identifying a deficiency are rare. Most deficiencies are multiple and moderate. However, while a deficiency is seldom that of a single nutrient, nitrogen is commonly the nutrient in shortest supply. Chlorosis of foliage is not always due to mineral nutrient deficiency. Solarization can produce superficially similar effects, though mineral deficiency tends to cause premature defoliation, whereas solarization does not, nor does solarization depress nitrogen concentration. [ 34 ] Nitrogen deficiency most often results in stunted, slow growth and chlorosis. Nitrogen-deficient plants will also exhibit a purple appearance on the stems, petioles and undersides of leaves from an accumulation of anthocyanin pigments. [ 6 ] Phosphorus deficiency can produce symptoms similar to those of nitrogen deficiency, [ 35 ] characterized by an intense green coloration or reddening in leaves due to lack of chlorophyll. Under severe phosphorus deficiency, the leaves may deteriorate and show signs of dying. Occasionally the leaves may appear purple from an accumulation of anthocyanin .
As noted by Russel: [ 14 ] "Phosphate deficiency differs from nitrogen deficiency in being extremely difficult to diagnose, and crops can be suffering from extreme starvation without there being any obvious signs that lack of phosphate is the cause". Russell's observation applies to at least some coniferous seedlings, but Benzian [ 36 ] found that although response to phosphorus in very acid forest tree nurseries in England was consistently high, no species (including Sitka spruce) showed any visible symptom of deficiency other than a slight lack of lustre. Phosphorus levels have to be exceedingly low before visible symptoms appear in such seedlings. In sand culture at 0 ppm phosphorus, white spruce seedlings were very small and tinted deep purple; at 0.62 ppm, only the smallest seedlings were deep purple; at 6.2 ppm, the seedlings were of good size and color. [ 37 ] [ 38 ] The root system is less effective without a continuous supply of calcium to newly developing cells. Even short term disruptions in calcium supply can disrupt biological functions and root function. [ 39 ] A common symptom of calcium deficiency in leaves is the curling of the leaf towards the veins or center of the leaf. Many times this can also have a blackened appearance. [ 40 ] The tips of the leaves may appear burned and cracking may occur in some calcium deficient crops if they experience a sudden increase in humidity. [ 18 ] Calcium deficiency may arise in tissues that are fed by the phloem , causing blossom end rot in watermelons, peppers and tomatoes, empty peanut pods and bitter pits in apples. In enclosed tissues, calcium deficiency can cause celery black heart and "brown heart" in greens like escarole . [ 41 ] Researchers found that partial deficiencies of K or P did not change the fatty acid composition of phosphatidyl choline in Brassica napus L. plants. 
Calcium deficiency did, on the other hand, lead to a marked decline in polyunsaturated compounds, which would be expected to compromise the integrity of the plant membrane , affecting properties such as its permeability, which is needed for the ion-uptake activity of root membranes. [ 42 ] Potassium deficiency may cause necrosis or interveinal chlorosis . Deficiency may result in higher risk of pathogens, wilting, chlorosis, brown spotting, and higher chances of damage from frost and heat. When potassium is moderately deficient, the effects first appear in the older tissues, and from there progress towards the growing points. Acute deficiency severely affects growing points, and die-back commonly occurs. Symptoms of potassium deficiency in white spruce include: browning and death of needles (chlorosis); reduced growth in height and diameter; impaired retention of needles; and reduced needle length. [ 43 ] Molybdenum deficiency is usually found on older growth. Iron, manganese and copper deficiencies affect new growth, causing green or yellow veins; zinc deficiency can affect both old and new leaves; and boron deficiency will be seen on terminal buds. A plant with zinc deficiency may have leaves stacked on top of each other due to reduced internodal expansion. [ 44 ] Zinc is the most widely deficient micronutrient for industrial crop cultivation, followed by boron. Acidifying N fertilizers create micro-sites around the granule that keep micronutrient cations soluble for longer in alkaline soils, but high concentrations of P or C may negate these effects. Boron deficiencies affecting seed yields and pollen fertility are common in laterite soils . [ 45 ] Boron is essential for the proper forming and strengthening of cell walls. Lack of boron results in short, thick cells producing stunted fruiting bodies and roots. Deficiency results in the death of the terminal growing points and stunted growth. [ citation needed ] Inadequate amounts of boron affect many agricultural crops, legume forage crops most strongly.
[ citation needed ] Boron deficiencies can be detected by analysis of plant material to apply a correction before the obvious symptoms appear, after which it is too late to prevent crop loss. Strawberries deficient in boron will produce lumpy fruit; apricots will not blossom or, if they do, will not fruit or will drop their fruit depending on the level of boron deficit. Broadcast of boron supplements is effective and long term; a foliar spray is immediate but must be repeated. [ citation needed ] Boron concentration in soil water solution higher than one ppm is toxic to most plants. Toxic concentrations within plants are 10 to 50 ppm for small grains and 200 ppm in boron-tolerant crops such as sugar beets, rutabaga, cucumbers, and conifers. Toxic soil conditions are generally limited to arid regions or can be caused by underground borax deposits in contact with water or volcanic gases dissolved in percolating water. [ citation needed ] There is an abundant supply of nitrogen in the Earth's atmosphere: N2 gas comprises nearly 79% of air. However, N2 is unavailable for use by most organisms because there is a triple bond between the two nitrogen atoms in the molecule, making it almost inert. In order for nitrogen to be used for growth it must be "fixed" (combined) in the form of ammonium (NH4+) or nitrate (NO3−) ions. The weathering of rocks releases these ions so slowly that it has a negligible effect on the availability of fixed nitrogen. Therefore, nitrogen is often the limiting factor for growth and biomass production in all environments where there is a suitable climate and availability of water to support life. Microorganisms have a central role in almost all aspects of nitrogen availability, and therefore for life support on earth.
Some bacteria can convert N2 into ammonia by the process termed nitrogen fixation ; these bacteria are either free-living or form symbiotic associations with plants or other organisms (e.g., termites, protozoa), while other bacteria bring about transformations of ammonia to nitrate , and of nitrate to N2 or other nitrogen gases. Many bacteria and fungi degrade organic matter, releasing fixed nitrogen for reuse by other organisms. All these processes contribute to the nitrogen cycle . Nitrogen enters the plant largely through the roots , where a "pool" of soluble nitrogen accumulates. Its composition within a species varies widely depending on several factors, including day length, time of day, night temperature, nutrient deficiencies, and nutrient imbalance. Short day length promotes asparagine formation, whereas glutamine is produced under long-day regimes. Darkness favors protein breakdown accompanied by high asparagine accumulation. Night temperature modifies the effects of night length, and soluble nitrogen tends to accumulate owing to retarded synthesis and breakdown of proteins. Low night temperature conserves glutamine ; high night temperature increases accumulation of asparagine because of breakdown. Deficiency of K accentuates differences between long- and short-day plants. When N and P are deficient, the pool of soluble nitrogen is much smaller than in well-nourished plants, since uptake of nitrate and further reduction and conversion of N to organic forms is restricted more than is protein synthesis. Deficiencies of Ca, K, and S affect the conversion of organic N to protein more than uptake and reduction. The size of the pool of soluble N is no guide per se to growth rate, but the size of the pool in relation to total N might be a useful ratio in this regard. Nitrogen availability in the rooting medium also affects the size and structure of tracheids formed in the long lateral roots of white spruce (Krasowski and Owens 1999).
[ 46 ] Phosphorus is most commonly found in the soil in the form of polyprotic phosphoric acid (H3PO4), but is taken up most readily in the form of H2PO4−. Phosphorus is available to plants in limited quantities in most soils because it is released very slowly from insoluble phosphates and is rapidly fixed once again. Under most environmental conditions it is the element that limits growth because of this constraint and because of its high demand by plants and microorganisms. Plants can increase phosphorus uptake through a mutualism with mycorrhiza. [ 6 ] On some soils , the phosphorus nutrition of some conifers , including the spruces, depends on the ability of mycorrhizae to take up soil phosphorus and make it available to the tree, phosphorus hitherto unobtainable to the non-mycorrhizal root. Seedling white spruce, greenhouse-grown in sand testing negative for phosphorus, were very small and purple for many months until spontaneous mycorrhizal inoculation, the effect of which was manifested by a greening of foliage and the development of vigorous shoot growth. When soil potassium levels are high, plants take up more potassium than needed for healthy growth; the term luxury consumption has been applied to this. Potassium intake increases with root temperature and depresses calcium uptake. [ 47 ] The calcium-to-boron ratio must be maintained in a narrow range for normal plant growth. Lack of boron causes failure of calcium metabolism, which produces hollow heart in beets and peanuts. [ citation needed ] Calcium and magnesium inhibit the uptake of trace metals. Copper and zinc mutually reduce the uptake of each other. Zinc also affects iron levels in plants. These interactions depend on species and growing conditions. For example, in clover, lettuce and red beet plants nearing toxic levels of zinc, copper and nickel, each of these three elements increased the toxicity of the others.
In barley a positive interaction was observed between copper and zinc, while in French beans a positive interaction occurred between nickel and zinc. Other researchers have studied the synergistic and antagonistic effects of soil conditions on lead, zinc, cadmium and copper in radish plants to develop predictive indicators for uptake, such as soil pH . [ 48 ] Calcium absorption is increased by water-soluble phosphate fertilizers, whereas potassium and potash fertilizers decrease the uptake of phosphorus, magnesium and calcium. For these reasons, imbalanced application of potassium fertilizers can markedly decrease crop yields. [ 39 ] Boron is available to plants over a range of pH, from 5.0 to 7.5. Boron is absorbed by plants in the form of the anion BO33−. It is available to plants in moderately soluble mineral forms of Ca, Mg and Na borates and in the highly soluble form of organic compounds. It is mobile in the soil and hence prone to leaching. Leaching removes substantial amounts of boron in sandy soil, but little in fine silt or clay soil. Boron's fixation to those minerals at high pH can render boron unavailable, while low pH frees the fixed boron, leaving it prone to leaching in wet climates. It precipitates with other minerals in the form of borax , in which form it was first used over 400 years ago as a soil supplement. Decomposition of organic material causes boron to be deposited in the topmost soil layer. When soil dries, the availability of boron to plants can drop precipitously, as the plants cannot draw nutrients from the desiccated layer. Hence, boron deficiency diseases appear in dry weather. [ citation needed ] Most of the nitrogen taken up by plants is from the soil in the form of NO3−, although in acid environments such as boreal forests , where nitrification is less likely to occur, ammonium (NH4+) is more likely to be the dominant source of nitrogen.
[ 49 ] Amino acids and proteins can only be built from NH4+, so NO3− must be reduced. Fe and Mn become oxidized and are highly unavailable in acidic soils. [ citation needed ] The nutrient status (mineral nutrient and trace element composition, also called the ionome or nutrient profile) of plants is commonly assessed by tissue elemental analysis. Interpretation of the results of such studies, however, has been controversial. [ 50 ] During recent decades the nearly two-century-old "law of minimum" or "Liebig's law" (which states that plant growth is controlled not by the total amount of resources available, but by the scarcest resource) has been replaced by several mathematical approaches that use different models in order to take the interactions between the individual nutrients into account. [ citation needed ] Later developments in this field were based on the fact that the nutrient elements (and compounds) do not act independently of each other, [ 50 ] [ 51 ] because there may be direct chemical interactions between them, or they may influence each other's uptake, translocation, and biological action via a number of mechanisms, [ 50 ] as exemplified [ how? ] for the case of ammonia. [ 52 ] Boron is highly soluble in the form of borax or boric acid and is too easily leached from soil, making these forms unsuitable for use as a fertilizer. Calcium borate is less soluble and can be made from sodium tetraborate . Boron is often applied to fields as a contaminant in other soil amendments, but this is not generally adequate to make up the rate of loss by cropping. The rates of application of borate to produce an adequate alfalfa crop range from 15 pounds per acre for a sandy-silt, acidic soil of low organic matter, to 60 pounds per acre for a soil with high organic matter, high cation exchange capacity and high pH. Application rates should be limited to a few pounds per acre in a test plot to determine whether boron is needed generally.
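The borate application guidance above can be sketched as a simple rate-selection rule. The two endpoint rates (15 and 60 lb/acre) come from the text; the soil descriptors and the scoring scheme are illustrative assumptions, not an agronomic recommendation:

```python
# Hedged sketch: pick a borate rate for alfalfa between the endpoints
# quoted in the text. Light, acidic, low-organic-matter soils need the
# low rate; heavy, alkaline, high-organic-matter soils the high rate.
def borate_rate_lb_per_acre(organic_matter: str, soil_ph: str) -> float:
    """Return an illustrative borate rate (lb/acre) for an alfalfa crop.

    organic_matter: "low" or "high"; soil_ph: "acidic" or "alkaline".
    Each "heavy soil" indicator moves the rate toward the 60 lb/acre end.
    """
    score = (organic_matter == "high") + (soil_ph == "alkaline")
    return {0: 15.0, 1: 37.5, 2: 60.0}[score]

print(borate_rate_lb_per_acre("low", "acidic"))     # 15.0
print(borate_rate_lb_per_acre("high", "alkaline"))  # 60.0
```

As the text notes, a small test-plot application (a few pounds per acre) should precede any field-wide rate.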
Otherwise, testing for boron levels in plant material is required to determine remedies. Excess boron can be removed by irrigation, assisted by application of elemental sulfur to lower the pH and increase boron solubility. Foliar sprays are used on fruit crop trees in soils of high alkalinity. [ citation needed ] Selenium is, however, an essential mineral element for animal (including human) nutrition, and selenium deficiencies are known to occur when food or animal feed is grown on selenium-deficient soils. The use of inorganic selenium fertilizers can increase selenium concentrations in edible crops and animal diets, thereby improving animal health. [ 31 ] It is useful to apply a high-phosphorus fertilizer, such as bone meal, to perennials to help with successful root formation. [ 6 ] Hydroponics is a method for growing plants in a water-nutrient solution without using nutrient-rich soil or substrates. It allows researchers and home gardeners to grow their plants in a controlled environment. The most common artificial nutrient solution is the Hoagland solution , developed by D. R. Hoagland and W. C. Snyder in 1933. The solution (known as the A-Z solution ) consists of all the essential macro- and micronutrients in the correct proportions necessary for most plant growth. [ 6 ] An aerator is used to prevent an anoxic event or hypoxia. Hypoxia can affect the nutrient uptake of a plant because, without oxygen present, respiration becomes inhibited within the root cells. The nutrient film technique is a hydroponic technique in which the roots are not fully submerged. Incomplete submergence allows for adequate aeration of the roots, while a thin "film" of nutrient-rich water is pumped through the system to provide nutrients and water to the plant.
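Liebig's "law of the minimum" mentioned earlier — that growth is limited by the scarcest resource relative to need, not by the total supply — can be sketched numerically. The nutrient names and quantities below are illustrative assumptions, not values from the article:

```python
# Illustrative sketch of Liebig's "law of the minimum": relative growth is
# set by the nutrient in shortest supply relative to the plant's requirement,
# not by the total amount of resources available.
# All names and numbers here are hypothetical examples.

def liebig_growth(supply, requirement):
    """Relative growth (0..1) = the smallest supply/requirement ratio."""
    return min(supply[nutrient] / requirement[nutrient]
               for nutrient in requirement)

requirement = {"N": 100.0, "P": 20.0, "K": 60.0}  # arbitrary units
supply = {"N": 150.0, "P": 30.0, "K": 30.0}       # potassium is the scarcest

print(liebig_growth(supply, requirement))  # 0.5: growth is limited by K alone
```

Adding more nitrogen in this example leaves growth unchanged, which is the point of the classical law; the more recent mathematical approaches mentioned above relax exactly this assumption that nutrients act independently.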
https://en.wikipedia.org/wiki/Plant_nutrition
Plant ontology (PO) is a collection of ontologies developed by the Plant Ontology Consortium. These ontologies describe anatomical structures and growth and developmental stages across Viridiplantae . [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] The PO is intended for multiple applications, including genetics , genomics , phenomics , development , taxonomy and systematics , semantic applications and education. [ 1 ] [ 8 ]
https://en.wikipedia.org/wiki/Plant_ontology
Plant physiology is a subdiscipline of botany concerned with the functioning, or physiology , of plants . [ 1 ] Plant physiologists study fundamental processes of plants, such as photosynthesis , respiration , plant nutrition , plant hormone functions, tropisms , nastic movements , photoperiodism , photomorphogenesis , circadian rhythms , environmental stress physiology, seed germination , dormancy and stomata function and transpiration . Plant physiology interacts with the fields of plant morphology (structure of plants), plant ecology (interactions with the environment), phytochemistry ( biochemistry of plants), cell biology , genetics, biophysics and molecular biology . The field of plant physiology includes the study of all the internal activities of plants—those chemical and physical processes associated with life as they occur in plants. This includes study at many levels of scale of size and time. At the smallest scale are molecular interactions of photosynthesis and internal diffusion of water, minerals, and nutrients. At the largest scale are the processes of plant development , seasonality , dormancy , and reproductive control. Major subdisciplines of plant physiology include phytochemistry (the study of the biochemistry of plants) and phytopathology (the study of disease in plants). The scope of plant physiology as a discipline may be divided into several major areas of research. First, the study of phytochemistry (plant chemistry) is included within the domain of plant physiology. To function and survive, plants produce a wide array of chemical compounds not found in other organisms. Photosynthesis requires a large array of pigments , enzymes , and other compounds to function. Because they cannot move, plants must also defend themselves chemically from herbivores , pathogens and competition from other plants. They do this by producing toxins and foul-tasting or smelling chemicals. 
Other compounds defend plants against disease, permit survival during drought, and prepare plants for dormancy, while other compounds are used to attract pollinators or herbivores to spread ripe seeds. Secondly, plant physiology includes the study of biological and chemical processes of individual plant cells . Plant cells have a number of features that distinguish them from the cells of animals , and which lead to major differences in the way that plant life behaves and responds compared with animal life. For example, plant cells have a cell wall which maintains the shape of the cell. Plant cells also contain chlorophyll , a chemical compound that interacts with light in a way that enables plants to manufacture their own nutrients rather than consuming other living things as animals do. Thirdly, plant physiology deals with interactions between cells, tissues , and organs within a plant. Different cells and tissues are physically and chemically specialized to perform different functions. Roots and rhizoids function to anchor the plant and acquire minerals in the soil. Leaves catch light in order to manufacture nutrients. For both of these organs to remain living, minerals that the roots acquire must be transported to the leaves, and the nutrients manufactured in the leaves must be transported to the roots. Plants have developed a number of ways to achieve this transport, such as vascular tissue , and the functioning of the various modes of transport is studied by plant physiologists. Fourthly, plant physiologists study the ways that plants control or regulate internal functions. Like animals, plants produce chemicals called hormones which are produced in one part of the plant to signal cells in another part of the plant to respond. Many flowering plants bloom at the appropriate time because of light-sensitive compounds that respond to the length of the night, a phenomenon known as photoperiodism . 
The ripening of fruit and loss of leaves in the winter are controlled in part by the production of the gas ethylene by the plant. Finally, plant physiology includes the study of plant response to environmental conditions and their variation, a field known as environmental physiology . Stress from water loss, changes in air chemistry, or crowding by other plants can lead to changes in the way a plant functions. These changes may be affected by genetic, chemical, and physical factors. The chemical elements of which plants are constructed—principally carbon , oxygen , hydrogen , nitrogen , phosphorus , sulfur , etc.—are the same as for all other life forms: animals, fungi, bacteria and even viruses . Only the details of their individual molecular structures vary. Despite this underlying similarity, plants produce a vast array of chemical compounds with unique properties which they use to cope with their environment. Pigments are used by plants to absorb or detect light, and are extracted by humans for use in dyes . Other plant products may be used for the manufacture of commercially important rubber or biofuel . Perhaps the most celebrated compounds from plants are those with pharmacological activity, such as salicylic acid from which aspirin is made, morphine , and digoxin . Drug companies spend billions of dollars each year researching plant compounds for potential medicinal benefits. Plants require some nutrients , such as carbon and nitrogen , in large quantities to survive. Some nutrients are termed macronutrients , where the prefix macro- (large) refers to the quantity needed, not the size of the nutrient particles themselves. Other nutrients, called micronutrients , are required only in trace amounts for plants to remain healthy. Such micronutrients are usually absorbed as ions dissolved in water taken from the soil, though carnivorous plants acquire some of their micronutrients from captured prey. 
The following tables list element nutrients essential to plants. Uses within plants are generalized. Among the most important molecules for plant function are the pigments . Plant pigments include a variety of different kinds of molecules, including porphyrins , carotenoids , and anthocyanins . All biological pigments selectively absorb certain wavelengths of light while reflecting others. The light that is absorbed may be used by the plant to power chemical reactions , while the reflected wavelengths of light determine the color the pigment appears to the eye. Chlorophyll is the primary pigment in plants; it is a porphyrin that absorbs red and blue wavelengths of light while reflecting green . It is the presence and relative abundance of chlorophyll that gives plants their green color. All land plants and green algae possess two forms of this pigment: chlorophyll a and chlorophyll b . Kelps , diatoms , and other photosynthetic heterokonts contain chlorophyll c instead of b , while red algae possess only chlorophyll a . All chlorophylls serve as the primary means plants use to intercept light to fuel photosynthesis . Carotenoids are red, orange, or yellow tetraterpenoids . They function as accessory pigments in plants, helping to fuel photosynthesis by gathering wavelengths of light not readily absorbed by chlorophyll. The most familiar carotenoids are carotene (an orange pigment found in carrots ), lutein (a yellow pigment found in fruits and vegetables), and lycopene (the red pigment responsible for the color of tomatoes ). Carotenoids have been shown to act as antioxidants and to promote healthy eyesight in humans. Anthocyanins (literally "flower blue") are water-soluble flavonoid pigments that appear red to blue, according to pH . They occur in all tissues of higher plants, providing color in leaves , stems , roots , flowers , and fruits , though not always in sufficient quantities to be noticeable. 
Anthocyanins are most visible in the petals of flowers, where they may make up as much as 30% of the dry weight of the tissue. [ 2 ] They are also responsible for the purple color seen on the underside of tropical shade plants such as Tradescantia zebrina . In these plants, the anthocyanin catches light that has passed through the leaf and reflects it back towards regions bearing chlorophyll, in order to maximize the use of available light. Betalains are red or yellow pigments. Like anthocyanins they are water-soluble, but unlike anthocyanins they are indole -derived compounds synthesized from tyrosine . This class of pigments is found only in the Caryophyllales (including cactus and amaranth ), and never co-occurs with anthocyanins in the same plant. Betalains are responsible for the deep red color of beets , and are used commercially as food-coloring agents. Plant physiologists are uncertain of the function that betalains have in the plants which possess them, but there is some preliminary evidence that they may have fungicidal properties. [ 3 ] Plants produce hormones and other growth regulators which act to signal a physiological response in their tissues. They also produce compounds such as phytochrome that are sensitive to light and which serve to trigger growth or development in response to environmental signals. Plant hormones , also known as plant growth regulators (PGRs) or phytohormones, are chemicals that regulate a plant's growth. According to a standard animal definition, hormones are signal molecules produced at specific locations, occurring in very low concentrations, that cause altered processes in target cells at other locations. Unlike animals, plants lack specific hormone-producing tissues or organs. Plant hormones are often not transported to other parts of the plant, and production is not limited to specific locations. Plant hormones are chemicals that in small amounts promote and influence the growth , development and differentiation of cells and tissues. 
Hormones are vital to plant growth, affecting processes in plants from flowering to seed development, dormancy , and germination . They regulate which tissues grow upwards and which grow downwards, leaf formation and stem growth, fruit development and ripening, as well as leaf abscission and even plant death. The most important plant hormones are abscisic acid (ABA), auxins , ethylene , gibberellins , and cytokinins , though there are many other substances that serve to regulate plant physiology. While most people know that light is important for photosynthesis in plants, few realize that plant sensitivity to light plays a role in the control of plant structural development ( morphogenesis ). The use of light to control structural development is called photomorphogenesis , and is dependent upon the presence of specialized photoreceptors , which are chemical pigments capable of absorbing specific wavelengths of light. Plants use four kinds of photoreceptors: [ 1 ] phytochrome , cryptochrome , a UV-B photoreceptor, and protochlorophyllide a . The first two of these, phytochrome and cryptochrome, are photoreceptor proteins , complex molecular structures formed by joining a protein with a light-sensitive pigment. Cryptochrome is also known as the UV-A photoreceptor, because it absorbs ultraviolet light in the long-wave "A" region. The UV-B receptor is one or more compounds not yet identified with certainty, though some evidence suggests carotene or riboflavin as candidates. [ 4 ] Protochlorophyllide a , as its name suggests, is a chemical precursor of chlorophyll . The most studied of the photoreceptors in plants is phytochrome . It is sensitive to light in the red and far-red region of the visible spectrum . Many flowering plants use it to regulate the time of flowering based on the length of day and night ( photoperiodism ) and to set circadian rhythms. 
It also regulates other responses including the germination of seeds, elongation of seedlings, the size, shape and number of leaves, the synthesis of chlorophyll, and the straightening of the epicotyl or hypocotyl hook of dicot seedlings. Many flowering plants use the pigment phytochrome to sense seasonal changes in day length, which they take as signals to flower. This sensitivity to day length is termed photoperiodism . Broadly speaking, flowering plants can be classified as long day plants, short day plants, or day neutral plants, depending on their particular response to changes in day length. Long day plants require a certain minimum length of daylight to start flowering, so these plants flower in the spring or summer. Conversely, short day plants flower when the length of daylight falls below a certain critical level. Day neutral plants do not initiate flowering based on photoperiodism, though some may use temperature sensitivity ( vernalization ) instead. Although a short day plant cannot flower during the long days of summer, it is not actually the period of light exposure that limits flowering. Rather, a short day plant requires a minimal length of uninterrupted darkness in each 24-hour period (a short daylength) before floral development can begin. It has been determined experimentally that a short day plant (long night) does not flower if a flash of phytochrome activating light is used on the plant during the night. Plants make use of the phytochrome system to sense day length or photoperiod. This fact is utilized by florists and greenhouse gardeners to control and even induce flowering out of season, such as the poinsettia ( Euphorbia pulcherrima ). Paradoxically, the subdiscipline of environmental physiology is on the one hand a recent field of study in plant ecology and on the other hand one of the oldest. 
[ 1 ] Environmental physiology is the preferred name of the subdiscipline among plant physiologists, but it goes by a number of other names in the applied sciences. It is roughly synonymous with ecophysiology , crop ecology, horticulture and agronomy . The particular name applied to the subdiscipline is specific to the viewpoint and goals of research. Whatever name is applied, it deals with the ways in which plants respond to their environment and so overlaps with the field of ecology . Environmental physiologists examine plant response to physical factors such as radiation (including light and ultraviolet radiation), temperature , fire , and wind . Of particular importance are water relations (which can be measured with the pressure bomb ) and the stress of drought or inundation , exchange of gases with the atmosphere , as well as the cycling of nutrients such as nitrogen and carbon . Environmental physiologists also examine plant response to biological factors. This includes not only negative interactions, such as competition , herbivory , disease and parasitism , but also positive interactions, such as mutualism and pollination . While plants, as living beings, can perceive and communicate physical stimuli and damage, they do not feel pain as members of the animal kingdom do, simply because they lack any pain receptors , nerves , or a brain , [ 6 ] and, by extension, consciousness . [ 7 ] Many plants are known to perceive and respond to mechanical stimuli at a cellular level, and some plants, such as the Venus flytrap or touch-me-not , are known for their "obvious sensory abilities". [ 6 ] Nevertheless, the plant kingdom as a whole does not feel pain, notwithstanding plants' abilities to respond to sunlight, gravity, wind, and external stimuli such as insect bites, since they lack any nervous system. 
The primary reason for this is that, unlike the members of the animal kingdom, whose evolutionary successes and failures are shaped by suffering, the evolution of plants is simply shaped by life and death. [ 6 ] Plants may respond both to directional and non-directional stimuli . A response to a directional stimulus, such as gravity or sunlight , is called a tropism. A response to a non-directional stimulus, such as temperature or humidity , is a nastic movement. Tropisms in plants are the result of differential cell growth, in which the cells on one side of the plant elongate more than those on the other side, causing the part to bend toward the side with less growth. Among the common tropisms seen in plants is phototropism , the bending of the plant toward a source of light. Phototropism allows the plant to maximize light exposure in plants which require additional light for photosynthesis, or to minimize it in plants subjected to intense light and heat. Geotropism allows the roots of a plant to determine the direction of gravity and grow downwards. Tropisms generally result from an interaction between the environment and the production of one or more plant hormones. Nastic movements result from differential cell growth (e.g., epinasty and hyponasty), or from changes in turgor pressure within plant tissues (e.g., nyctinasty ), which may occur rapidly. A familiar example is thigmonasty (response to touch) in the Venus flytrap , a carnivorous plant . The traps consist of modified leaf blades which bear sensitive trigger hairs. When the hairs are touched by an insect or other animal, the leaf folds shut. This mechanism allows the plant to trap and digest small insects for additional nutrients. Although the trap is rapidly shut by changes in internal cell pressures, the leaf must grow slowly to reset for a second opportunity to trap insects. 
[ 8 ] Economically, one of the most important areas of research in environmental physiology is that of phytopathology , the study of diseases in plants and the manner in which plants resist or cope with infection. Plants are susceptible to the same kinds of disease organisms as animals, including viruses , bacteria , and fungi , as well as physical invasion by insects and roundworms . Because the biology of plants differs from that of animals, their symptoms and responses are quite different. In some cases, a plant can simply shed infected leaves or flowers to prevent the spread of disease, in a process called abscission. Most animals do not have this option as a means of controlling disease. Plant disease organisms themselves also differ from those causing disease in animals because plants cannot usually spread infection through casual physical contact. Plant pathogens tend to spread via spores or are carried by animal vectors . One of the most important advances in the control of plant disease was the discovery of Bordeaux mixture in the nineteenth century. The mixture is the first known fungicide and is a combination of copper sulfate and lime . Application of the mixture served to inhibit the growth of downy mildew that threatened to seriously damage the French wine industry. [ 9 ] Francis Bacon published one of the first plant physiology experiments in 1627 in the book Sylva Sylvarum . Bacon grew several terrestrial plants, including a rose, in water and concluded that soil was only needed to keep the plant upright. Jan Baptist van Helmont published what is considered the first quantitative experiment in plant physiology in 1648. He grew a willow tree for five years in a pot containing 200 pounds of oven-dry soil. The soil lost just two ounces of dry weight, and van Helmont concluded that plants get all their weight from water, not soil. In 1699, John Woodward published experiments on the growth of spearmint in different sources of water. 
He found that plants grew much better in water with soil added than in distilled water. Stephen Hales is considered the Father of Plant Physiology for the many experiments in his 1727 book, Vegetable Staticks ; [ 10 ] though it was Julius von Sachs who unified the pieces of plant physiology and put them together as a discipline. His Lehrbuch der Botanik was the plant physiology bible of its time. [ 11 ] Researchers discovered in the 1800s that plants absorb essential mineral nutrients as inorganic ions in water. In natural conditions, soil acts as a mineral nutrient reservoir, but the soil itself is not essential to plant growth. When the mineral nutrients in the soil are dissolved in water, plant roots absorb nutrients readily, and soil is no longer required for the plant to thrive. This observation is the basis for hydroponics , the growing of plants in a water solution rather than soil, which has become a standard technique in biological research, teaching lab exercises, crop production and as a hobby. In horticulture and agriculture along with food science , plant physiology is an important topic relating to fruits , vegetables , and other consumable parts of plants. Topics studied include: climatic requirements, fruit drop, nutrition, ripening , and fruit set. The production of food crops also hinges on the study of plant physiology, covering such topics as optimal planting and harvesting times and post-harvest storage of plant products for human consumption and the production of secondary products like drugs and cosmetics. Crop physiology steps back and looks at a field of plants as a whole, rather than looking at each plant individually. Crop physiology looks at how plants respond to each other and how to maximize results like food production through determining things like optimal planting density .
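The photoperiodic control of flowering described earlier — in which a short-day plant flowers only after a sufficiently long uninterrupted dark period, and a night broken by a flash of phytochrome-activating light prevents flowering — can be sketched as a simple rule. The critical night length used here is a hypothetical figure, not a value from the article:

```python
# Illustrative sketch of photoperiodism in a short-day (long-night) plant:
# flowering depends on the longest *uninterrupted* dark period, not on the
# total hours of darkness. The critical night length is a hypothetical value.

def short_day_plant_flowers(dark_periods_hours, critical_night_hours=11.0):
    """True if any single uninterrupted dark period reaches the critical length."""
    return any(period >= critical_night_hours for period in dark_periods_hours)

# 13 hours of continuous darkness: the critical night length is met.
print(short_day_plant_flowers([13.0]))       # True

# The same 13 hours split by a brief flash of light into two shorter periods:
# no single dark period is long enough, so the plant does not flower,
# matching the night-interruption experiments described above.
print(short_day_plant_flowers([6.5, 6.5]))   # False
```

This is why florists can keep a poinsettia from flowering simply by interrupting the night with light, without changing the total hours of darkness it receives.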
https://en.wikipedia.org/wiki/Plant_physiology
A plant press is a set of equipment used by botanists to flatten and dry field samples so that they can be easily stored. A professional plant press is made to the standard maximum size for biological specimens to be filed in a particular herbarium . A flower press is a similar device of no standard size that is used to make flat dried flowers for pressed flower craft . Specimens prepared in a plant press are later glued to archival -quality card stock with their labels, and are filed in a herbarium . Labels are made with archival ink (or pencil) and paper, and attached with archival-quality glue. A modern plant press consists of two strong outer boards with straps that can be tightened around them to exert pressure. Between the boards, fresh plant samples are placed, carefully labelled, between layers of paper. Further layers of absorbent paper and corrugated cardboard are usually added to help to dry the samples as quickly as possible, which prevents decay and improves colour retention. Layers of a sponge material can be used in order to prevent squashing parts of the specimens, such as fruit. Older plant presses and some modern flower presses have screws to supply the pressure, which often limits the thickness of the stack of samples that can be put into one press. Luca Ghini (1490–1556), an Italian physician and botanist, created the first recorded herbarium, and is considered the first person to have used drying under pressure to prepare a plant collection. [ 1 ] William Withering , an English botanist, geologist, chemist and physician, wrote popular books on British botany , and by describing the screw-down plant press (and the vasculum ) he brought it to the attention of amateur naturalists in Britain around 1771. [ 2 ]
https://en.wikipedia.org/wiki/Plant_press
Plant rights are rights to which certain plants may be entitled. Such issues are often raised in connection with discussions about human rights , animal rights , biocentrism , or sentientism . Samuel Butler 's Erewhon contains a chapter, "The Views of an Erewhonian Philosopher Concerning the Rights of Vegetables". [ 1 ] On the question of whether animal rights can be extended to plants, animal rights philosopher Tom Regan argues that animals acquire rights due to being aware, what he calls "subjects-of-a-life". He argues that this does not apply to plants, and that even if plants did have rights, abstaining from eating meat would still be moral due to the use of plants to rear animals. [ 2 ] According to philosopher Michael Marder , the idea that plants should have rights derives from "plant subjectivity", which is distinct from human personhood. [ 3 ] Paul W. Taylor holds that all life has inherent worth and argues for respect for plants, but does not assign them rights. [ 4 ] Christopher D. Stone, the son of investigative journalist I. F. Stone , proposed in a 1972 paper titled "Should Trees Have Standing?" that, if corporations are assigned rights, so should natural objects such as trees. Citing the broadening of rights of blacks, Jews, women, and fetuses as examples, Stone explains that, throughout history, societies have been conferring rights to new "entities" which, at the time, people thought to be "unthinkable". [ 5 ] [ 6 ] Whilst not appealing directly to "rights", Matthew Hall has argued that plants should be included within the realm of human moral consideration. His Plants as Persons: A Philosophical Botany discusses the moral background of plants in western philosophy and contrasts this with other traditions, including indigenous cultures, which recognise plants as persons—active, intelligent beings that are appropriate recipients of respect and care. 
[ 7 ] Hall backs up his call for the ethical consideration of plants with arguments based on plant neurobiology , which says that plants are autonomous, perceptive organisms capable of complex, adaptive behaviours, including recognizing self/non-self. In the study of plant physiology , plants are understood to have mechanisms by which they recognize environmental changes. This definition of plant perception differs from the notion that plants are capable of feeling emotions, an idea also called plant perception . The latter concept, along with plant intelligence , can be traced to 1848, when Gustav Theodor Fechner , a German experimental psychologist , suggested that plants are capable of emotions , and that one could promote healthy growth with talk, attention, and affection. [ 8 ] While plants, as living beings, can perceive and communicate physical stimuli and damage , they are widely regarded as incapable of feeling pain due to the absence of pain receptors , nerves , and a brain , [ 9 ] and, by extension, consciousness . [ 10 ] Many plants are known to perceive and respond to mechanical stimuli at a cellular level, and some plants such as the venus flytrap or touch-me-not , are known for their "obvious sensory abilities". [ 9 ] Despite the scientific uncertainty about what causes consciousness, notably highlighted by the hard problem of consciousness , [ 11 ] it is generally considered that none of the members of the plant kingdom can feel pain since they lack any nervous system , notwithstanding their ability to respond to sunlight, gravity, wind, and any external stimuli such as insect bites. [ 9 ] The Swiss Federal Ethics Committee on Non-Human Biotechnology analyzed scientific data on plants, and concluded in 2009 that plants are entitled to a certain amount of "dignity", but "dignity of plants is not an absolute value." [ 12 ] The single-issue Party for Plants entered candidates in the 2010 parliamentary election in the Netherlands . 
[ 13 ] It focuses on topics such as climate, biodiversity and sustainability in general. Such concerns have been criticized as evidence that modern culture is "causing us to lose the ability to think critically and distinguish serious from frivolous ethical concerns". [ 14 ] In his dissent to the 1972 Sierra Club v. Morton decision by the United States Supreme Court , Justice William O. Douglas wrote about whether plants might have legal standing : Inanimate objects are sometimes parties in litigation. A ship has a legal personality, a fiction found useful for maritime purposes... So it should be as respects valleys, alpine meadows, rivers, lakes, estuaries, beaches, ridges, groves of trees, swampland, or even air that feels the destructive pressures of modern technology and modern life... The voice of the inanimate object, therefore, should not be stilled. The Swiss Constitution contains a provision requiring "account to be taken of the dignity of creation when handling animals, plants and other organisms", and the Swiss government has conducted ethical studies pertaining to how the dignity of plants is to be protected. [ 15 ] In 2012, a river in New Zealand, including the plants and other organisms contained within its boundaries, was legally declared a person with standing (via guardians) to bring legal actions to protect its interests. [ 16 ] a. ^ Quote: "The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates." [ 17 ]
https://en.wikipedia.org/wiki/Plant_rights
Plant senescence is the process of aging in plants. Plants have both stress-induced and age-related developmental aging. [ 1 ] Chlorophyll degradation during leaf senescence reveals the carotenoids, such as the xanthophylls, and the anthocyanins, which are the cause of autumn leaf color in deciduous trees. Leaf senescence has the important function of recycling nutrients, mostly nitrogen, to growing and storage organs of the plant. Unlike animals, plants continually form new organs, and older organs undergo a highly regulated senescence program to maximize nutrient export. Programmed senescence seems to be heavily influenced by plant hormones . The hormones abscisic acid , ethylene , jasmonic acid and salicylic acid are accepted by most scientists as promoters of senescence, but at least one source lists gibberellins , brassinosteroids and strigolactone as also being involved. [ 2 ] Cytokinins help to maintain plant cells, and expression of cytokinin biosynthesis genes late in development prevents leaf senescence. [ 3 ] A withdrawal of cytokinin, or an inability of the cell to perceive it, may cause the cell to undergo apoptosis or senescence. [ 4 ] In addition, mutants that cannot perceive ethylene show delayed senescence. Genome-wide comparison of mRNAs expressed during dark-induced senescence versus those expressed during age-related developmental senescence demonstrates that jasmonic acid and ethylene are more important for dark-induced (stress-related) senescence, while salicylic acid is more important for developmental senescence. [ 5 ] Some plants have evolved into annuals which die off at the end of each season and leave seeds for the next, whereas closely related plants in the same family have evolved to live as perennials . This may be a programmed "strategy" [ clarification needed ] for the plants. The benefit of an annual strategy may be genetic diversity, as no single set of genes continues year after year; instead, a new mix is produced each year. 
Secondly, being annual may allow the plants a better survival strategy, since the plant can put most of its accumulated energy and resources into seed production rather than saving some for the plant to overwinter, which would limit seed production. [ citation needed ] Conversely, the perennial strategy may sometimes be the more effective survival strategy, because the plant has a head start every spring with growing points, roots, and stored energy that have survived through the winter. In trees, for example, the structure can be built on year after year, so that the tree and root structure become larger, stronger, and capable of producing more fruit and seed than the year before, out-competing other plants for light, water, nutrients, and space. This strategy can fail when environmental conditions change rapidly: if a certain pest quickly takes advantage and kills all of the nearly identical perennials , there is a far smaller chance that a random mutation will slow the pest than among the more genetically diverse annuals . [ citation needed ] There is a speculative hypothesis on how and why a plant induces part of itself to die off. [ 2 ] The theory holds that leaves and roots are routinely pruned off during the growing season, whether the plant is annual or perennial. This pruning is applied mainly to mature leaves and roots, for one of two reasons: either the pruned leaves and roots are no longer efficient enough at acquiring nutrients, or energy and resources are needed in another part of the plant because that part is faltering in its resource acquisition. This is an oversimplification, in that it is arguable that some shoot and root cells serve functions other than nutrient acquisition. In these cases, whether they are pruned or not would be "calculated" by the plant using some other criteria.
It is also arguable that, for example, mature nutrient-acquiring shoot cells would have to acquire more than enough nutrients to support both themselves and their share of the shoot and root cells that do not acquire sugar and gases, whether those cells are structural, reproductive, immature, or simply roots. The reason a plant does not impose efficiency demands on immature cells is that most immature cells are part of so-called dormant buds. These are kept small and non-dividing until the plant needs them, and are found in buds, for instance at the base of every lateral stem. There is little theory on how plants induce themselves to senesce, although it is reasonably widely accepted that some of it is done hormonally. Botanists generally concentrate on ethylene and abscisic acid as culprits in senescence, but neglect gibberellin and brassinosteroid, which inhibit root growth if not causing actual root pruning. This is perhaps because roots are below the ground and thus harder to study. Seed germination performance is a major determinant of crop yield . Deterioration of seed quality with age is associated with accumulation of DNA damage . [ 6 ] In dry, aging rye seeds, DNA damage accumulates as embryos lose viability. [ 7 ] Dry seeds of Vicia faba accumulate DNA damage with time in storage, and undergo DNA repair upon germination . [ 8 ] In Arabidopsis , a DNA ligase is employed in repair of DNA single- and double-strand breaks during seed germination, and this ligase is an important determinant of seed longevity. [ 9 ] In eukaryotes , the cellular repair response to DNA damage is orchestrated, in part, by the DNA damage checkpoint kinase ATM . ATM has a major role in controlling germination of aged seeds by integrating progression through germination with the repair response to DNA damage accumulated during the dry quiescent state. [ 10 ]
https://en.wikipedia.org/wiki/Plant_senescence
A plant soul is the religious philosophical concept that plants contain souls . Religions that recognize the existence of plant souls include Jainism and Manichaeism . Jains believe that plants have souls ( jīva ) that experience only one sense, which is touch. The Ācārāṅga Sūtra states that "plants ... and the rest of creation (experience) individually pleasure or displeasure, pain, great terror, and unhappiness" (1.1.6). Another excerpt from the Ācārāṅga Sūtra (1.1.5) reads: [ 1 ] As the nature of men is to be born and to grow old, so is the nature of plants to be born and to grow old; as men have reason, so plants have reason; as men fall sick when cut, so plants fall sick when cut; as men need food, so plants need food; as men will decay, so plants will decay; as men are not eternal, so plants are not eternal; as men take increment, so plants take increment; as men are changing, so plants are changing. He who injures plants does not comprehend and renounce the sinful acts; he who does not injure plants, comprehends and renounces the sinful acts. Knowing them, a wise man should not act sinfully towards plants, nor cause others to act so, nor allow others to act so. He who knows these causes of sin relating to plants, is called a reward-knowing sage. (Note that the pronouns "this" and "that" in Hermann Jacobi 's original 1884 translation have been substituted with "men" and "plants".) The Cologne Mani Codex contains stories showing that Manichaeans believed in the existence of sentient plant souls. [ 2 ] In Augustine of Hippo 's Confessions (4.10), Augustine wrote that while he was a Manichaean, he believed that "a fig-tree wept when it was plucked, and the tree, its mother, shed milky tears". [ 3 ] Fynes (1996) argues that Jain ideas about the existence of plant souls were transmitted from Western Kshatrapa territories to Mesopotamia and then integrated into Manichaean beliefs. [ 2 ]
https://en.wikipedia.org/wiki/Plant_soul
Plant strategies include mechanisms and responses plants use to reproduce, defend, survive, and compete on the landscape. The term "plant strategy" has existed in the literature since at least 1965; [ 1 ] however, multiple definitions exist. Strategies have been classified as adaptive strategies (through a change in the genotype ), [ 1 ] [ 2 ] reproductive strategies, [ 3 ] resource allocation strategies, [ 4 ] [ 5 ] [ 6 ] ecological strategies, [ 7 ] and functional trait based strategies, [ 6 ] [ 8 ] to name a few. While numerous strategies exist, one underlying theme is constant: plants must make trade-offs when responding to their environment . These trade-offs and responses lay the groundwork for classifying the strategies that emerge. The concept of plant strategies started gaining attention in the 1960s and 1970s. At this time, strategies were often associated with genotypic changes, such that plants could respond to their environment by changing their "genotypic programme" (i.e., strategy). [ 1 ] [ 2 ] Around this same time, the r/K selection theory was introduced, which classifies plants by life history strategies, particularly reproductive strategies. [ 3 ] [ 9 ] In general, plants alter their reproductive strategies (i.e., number of offspring ) and their growth rate to respond to their ecological niche . [ 3 ] [ 9 ] The theory is still popular in the 21st century and frequently taught in science curricula. However, plant strategies gained particular prominence in 1977 with the introduction of Grime 's C-S-R Triangle , [ 4 ] which categorizes plants according to how they respond under varying levels of stress and competition . According to Grime , plants develop strategies that demonstrate resource trade-offs between growth, reproduction, and maintenance.
[ 4 ] The association between genotypic change and strategies was also still present in Grime's theories, as he noted that the "genotypes of the majority of plants appear to represent compromises between the conflicting selection pressures" that generally classify plants into three strategy types. [ 4 ] The C-S-R Triangle remained the dominant plant strategy framework for several decades. However, in the early 1980s David Tilman introduced the R* theory, which focused on resource partitioning as a strategy for dealing with competition. [ 10 ] More recently, additional strategies have been introduced. In 1998, the L-H-S (leaf-height-seed) strategy scheme was introduced as an alternative to Grime's C-S-R scheme. [ 7 ] The L-H-S strategy uses specific leaf area, canopy height, and seed mass to classify plant strategies, noting that these traits can be measured and compared between species, which cannot easily be done with Grime's abstract categories. [ 7 ] The goal of the L-H-S scheme was to develop an international network that could provide quantifiable comparisons between plant strategies. This started a movement towards incorporating functional traits in plant strategies, and understanding how plant functional traits and environmental factors are related. [ 6 ] [ 8 ] While Grime's C-S-R Triangle is still frequently referenced in plant ecology , new strategies are being introduced and gaining momentum in the 21st century. J. P. Grime identified two factor gradients, broadly categorized as disturbance and stress , which limit plant biomass. Stresses include factors such as the availability of water, nutrients, and light, along with growth-inhibiting influences like temperature and toxins. Conversely, disturbance encompasses herbivory , pathogens , anthropogenic interactions, fire , wind , etc. Emerging from high and low combinations of stress and disturbance are three life strategies commonly used to categorize plants based on environment: (1) C-competitors, (2) S-stress tolerators, and (3) R-ruderals.
[ 4 ] There is no viable strategy for plants in high stress and high disturbance environments; therefore, categorization for this habitat type is absent. [ 4 ] Each life strategy varies in trade-offs of resource allocation to seed production, leaf morphology , leaf longevity , relative growth rate , and other factors, which can be summarized as allocation to (1) growth, (2) reproduction , and (3) maintenance. Competitors are primarily composed of species with high relative growth rate, short leaf life, relatively low seed production, and high allocation to leaf construction. They persist in high nutrient, low disturbance environments, and "rapidly monopolize resource capture by the spatially-dynamic foraging of roots and shoots." [ 5 ] Stress-tolerators, found in high stress, low disturbance habitats, allocate resources to maintenance and defenses, such as anti-herbivory. These species are often evergreen with small, long-lived leaves or needles, slow resource turnover, and low plasticity and relative growth rate. Due to high stress conditions, vegetative growth and reproduction are reduced. Ruderals, inhabiting low stress, high disturbance regimes, allocate resources mainly to seed reproduction and are often annuals or short-lived perennials . Common characteristics of ruderal species include high relative growth rate, short-lived leaves, and short-statured plants with minimal lateral expansion. G. David Tilman developed the R* rule in support of resource competition theory . Theoretically, a plant species growing in monoculture and utilizing a single limiting resource will deplete the resource until reaching an equilibrium level where growth and losses are balanced. [ 10 ] The concentration of the resource at this equilibrium is termed R*; it is the minimum concentration at which the plant is able to persist in the environment. [ 11 ] Population growth occurs at resource concentrations greater than R*.
Conversely, population decline occurs at concentrations lower than R*. [ 12 ] If two species are competing for the same limiting resource , the superior competitor will have the lower R* value for that resource. This will eventually lead to the displacement of the inferior competitor, regardless of initial plant densities . [ 12 ] The displacement rate depends on the magnitude of the difference in R*: [ 13 ] greater differences lead to faster exclusion. Plant species differ in R* values due to differences in plant morphology and physiology . The realized R* level also depends on physical factors that vary by habitat, such as temperature, pH , and humidity . [ 12 ] In 1998, Mark Westoby proposed a plant ecology strategy scheme (PESS) to explain species distributions based on traits. [ 7 ] The model incorporates a three-axis trade-off among specific leaf area (SLA), canopy height at maturity, and seed mass. SLA is defined as the leaf area per unit dry mass of mature leaves developed in the fullest natural light of the species. [ 7 ] These traits were selected because of their trade-off functionality: resource allocation to one trait is only possible by diverting resources from the others. As in Grime's C-S-R triangle, each gradient represents a different strategic response to the environment; variation in disturbance adaptation is represented by canopy height and seed mass (Grime's R-axis), whereas SLA reflects variation in growth in response to stress (Grime's C-S axis). [ 4 ] [ 7 ] Unlike Grime's model, the L-H-S strategy avoids the assumption that high disturbance, high stress environments lack viable plant strategies. However, Westoby's model is at a disadvantage when predicting potential variation in plant strategies, since its axes comprise single variables, compared to Grime's multivariable axes.
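Tilman's R* rule, described above, can be illustrated with a minimal numerical sketch. The simulation below is a hedged illustration, not Tilman's own formulation: the Monod growth form, the chemostat-style resource supply, and all parameter values are assumptions chosen only to show the qualitative outcome, in which the species with the lower R* displaces its competitor regardless of initial densities.

```python
import numpy as np

# Minimal sketch of the R* rule: two species compete for one limiting
# resource in a chemostat-like setting. Growth follows Monod kinetics;
# all parameter values are illustrative, not taken from Tilman's work.

MU = np.array([1.0, 1.0])   # maximum per-capita growth rates
K = np.array([1.0, 2.0])    # half-saturation constants (species 2 needs more resource)
M = np.array([0.2, 0.2])    # per-capita loss (mortality) rates
Q = 0.05                    # resource consumed per unit of growth
S, A = 10.0, 0.5            # resource supply level and turnover rate

# Analytic R*: growth balances loss when mu*R/(K+R) = m  =>  R* = m*K/(mu - m)
R_STAR = M * K / (MU - M)   # species 1: 0.25, species 2: 0.5 -> species 1 is superior

def simulate(steps=100_000, dt=0.01):
    N = np.array([0.1, 1.0])  # species 1 starts rarer, yet should win
    R = S
    for _ in range(steps):
        growth = MU * R / (K + R)
        N = np.maximum(N + (growth - M) * N * dt, 0.0)
        R = max(R + (A * (S - R) - Q * np.sum(growth * N)) * dt, 0.0)
    return N, R

N, R = simulate()
# Species 1 displaces species 2, and the ambient resource concentration
# settles near species 1's R* (about 0.25), as the theory predicts.
```

Despite starting at one-tenth the density of its competitor, the lower-R* species draws the resource down below the level at which the other species can persist, reproducing the displacement outcome described in the text.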
The r/K selection model, first introduced by MacArthur and Wilson (1967), [ 3 ] has been commonly applied to both plants and animals to describe reproductive strategies. Representing opposing extremes of a continuum, r-species commit all energy into maximizing seed production with minimal input to individual propagules, whereas K-species allocate energy into a few, highly fit individuals; this is a spectrum of quantity versus quality. [ 14 ] The model assumes that perfect r-species function in competition-free environments with no density effects, and perfect K-species under maximum competition and density saturation. [ 14 ] Most species are categorized as intermediates between the two extremes. The term "plant strategies" has many definitions, and includes several different mechanisms for responding to one's environment. While different strategies focus on different plant characteristics, all strategies have an overarching theme: plants must make trade-offs between where and how to allocate resources . Whether the allocation is to growth, reproduction, or maintenance, plants respond to their environment by employing strategies that allow them to persist, survive, and reproduce. Plants may have multiple strategies to survive at different life stages and may therefore be subject to multiple trade-offs throughout their life cycle. [ 15 ]
https://en.wikipedia.org/wiki/Plant_strategies
Tolerance is the ability of plants to mitigate the negative fitness effects caused by herbivory . It is one of the two general plant defense strategies against herbivores , the other being resistance , which is the ability of plants to prevent damage (Strauss and Agrawal 1999). Plant defense strategies play important roles in the survival of plants, as they are fed upon by many different types of herbivores, especially insects, which may impose negative fitness effects (Strauss and Zangerl 2002). Damage can occur in almost any part of the plant, including the roots, stems, leaves, flowers and seeds (Strauss and Zangerl 2002). In response to herbivory, plants have evolved a wide variety of defense mechanisms, and although relatively less studied than resistance strategies, tolerance traits play a major role in plant defense (Strauss and Zangerl 2002, Rosenthal and Kotanen 1994). Traits that confer tolerance are controlled genetically and therefore are heritable traits under selection (Strauss and Agrawal 1999). Many factors intrinsic to the plants, such as growth rate, storage capacity, photosynthetic rates and nutrient allocation and uptake , can affect the extent to which plants can tolerate damage (Rosenthal and Kotanen 1994). Extrinsic factors such as soil nutrition, carbon dioxide levels, light levels, water availability and competition also have an effect on tolerance (Rosenthal and Kotanen 1994). Studies of tolerance to herbivory have historically been the focus of agricultural scientists (Painter 1958; Bardner and Fletcher 1974), and tolerance was initially classified as a form of resistance (Painter 1958). Agricultural studies on tolerance, however, are mainly concerned with the compensatory effect on the plants' yield and not their fitness , since it is of economic interest to reduce crop losses due to herbivory by pests (Trumble 1993; Bardner and Fletcher 1974).
One surprising discovery made about plant tolerance was that plants can overcompensate for the damage caused by herbivory, prompting controversy over whether herbivores and plants can actually form a mutualistic relationship (Belsky 1986). It was soon recognized that many factors involved in plant tolerance, such as photosynthetic rates and nutrient allocation , were also traits intrinsic to plant growth, and so resource availability may play an important role (Hilbert et al. 1981; Maschinski and Whitham 1989). The growth rate model proposed by Hilbert et al. (1981) predicts that plants have higher tolerance in environments that do not allow them to grow at maximum capacity, while the compensatory continuum hypothesis of Maschinski and Whitham (1989) predicts higher tolerance in resource-rich environments. Although the latter received wider acceptance, the limiting resource model was proposed 20 years later to explain the lack of agreement between empirical data and the existing models (Wise and Abrahamson 2007). Currently, the limiting resource model is able to explain much more of the empirical data on plant tolerance than either of the previous models (Wise and Abrahamson 2008a). Only recently has the assumption that tolerance and resistance must be negatively associated been rejected (Nunez-Farfan et al. 2007). The classical assumption that tolerance traits confer no negative fitness consequences on herbivores has also been questioned (Stinchcombe 2002). Further studies using techniques in quantitative genetics have also provided evidence that tolerance to herbivory is heritable (Fornoni 2011). Studies of plant tolerance have only recently received increased attention, unlike resistance traits, which have been much more heavily studied (Fornoni 2011). Many aspects of plant tolerance, such as its geographic variation, its macroevolutionary implications and its coevolutionary effects on herbivores, are still relatively unknown (Fornoni 2011).
Plants utilize many mechanisms to recover fitness after damage. Such traits include increased photosynthetic activity, compensatory growth, phenological changes, utilization of stored reserves, reallocation of resources, increased nutrient uptake, and plant architecture (Rosenthal and Kotanen 1994; Strauss and Agrawal 1999; Tiffin 2000). An increase in photosynthetic rate in undamaged tissues is commonly cited as a mechanism by which plants achieve tolerance (Trumble et al. 1993; Strauss and Agrawal 1999). This is possible because leaves often function below their maximum capacity (Trumble et al. 1993). Several different pathways may lead to increases in photosynthesis, including higher levels of the Rubisco enzyme and delays in leaf senescence (Stowe et al. 2000). However, detecting an increase in photosynthetic rate does not in itself mean plants are tolerant to damage: the resources gained from these mechanisms can be used to increase resistance instead of tolerance, such as for the production of secondary compounds in the plant (Tiffin 2000). Also, whether the increase in photosynthetic rate is able to compensate for the damage is still not well studied (Trumble et al. 1993; Stowe et al. 2000). Biomass regrowth following herbivory is often reported as an indicator of tolerance, and plant response after apical meristem damage (AMD) is one of the most heavily studied mechanisms of tolerance (Tiffin 2000; Suwa and Maherali 2008; Wise and Abrahamson 2008). Meristems are sites of rapid cell division and so have higher nutrient content than most other tissues in the plant . Damage to the apical meristems of a plant may release it from apical dominance , activating the growth of axillary meristems and increasing branching (Trumble et al. 1993; Wise and Abrahamson 2008). Studies have found branching after AMD to undercompensate, fully compensate and overcompensate for the damage received (Marquis 1996, Haukioja and Koricheva 2000, Wise and Abrahamson 2008).
The variation in the extent of growth following herbivory may depend on the number and distribution of meristems, the pattern in which they are activated, and the number of new meristems (Stowe et al. 2000). The wide occurrence of overcompensation after AMD has also raised the controversial idea that there may be a mutualistic relationship between plants and their herbivores (Belsky 1986; Agrawal 2000; Edwards 2009). As discussed further below, herbivores may actually be mutualists of plants such as Ipomopsis aggregata , which overcompensates for herbivory (Edwards 2009). Although there are many examples showing biomass regrowth following herbivory, its usefulness as a predictor of fitness has been criticized, since the resources used for regrowth may translate into fewer resources allocated to reproduction (Suwa and Maherali 2008). Studies have shown that herbivory can cause delays in plant growth, flowering and fruit production (Tiffin 2000). How plants respond to these phenological delays is likely a tolerance mechanism that depends highly on their life history and other ecological factors, such as the abundance of pollinators at different times during the season (Tiffin 2000). If the growing season is short, plants that are able to shorten the delay in seed production caused by herbivory are more tolerant than those that cannot (Tiffin 2000). These faster-recovering plants will be selectively favored, as they will pass on more of their offspring to the next generation. In longer growing seasons , however, there may be enough time for most plants to produce seeds before the season ends regardless of damage. In this case, plants that can shorten the phenological delay are not any more tolerant than those that cannot, as all plants can reproduce before the season ends (Tiffin 2000). Resource allocation following herbivory is commonly studied in agricultural systems (Trumble et al. 1993).
Resources are most often allocated to reproductive structures after damage, as shown by Irwin et al. (2008), in which Polemonium viscosum and Ipomopsis aggregata increased flower production after nectar larceny. When these reproductive structures are not present, resources are allocated to other tissues, such as leaves and shoots, as seen in juvenile Plantago lanceolata (Trumble et al. 1993; Barton 2008). Utilizing stored reserves may be an important tolerance mechanism for plants that have abundant time to collect and store resources, such as perennial plants (Tiffin 2000; Erb et al. 2009). Resources are often stored in leaves and in specialized storage organs such as tubers and roots , and studies have shown evidence that these resources are allocated for regrowth following herbivory (Trumble et al. 1993; Tiffin 2000; Erb et al. 2009). However, the importance of this mechanism to tolerance is not well studied, and it is unknown how much it contributes, since stored reserves mostly consist of carbon resources, whereas tissue damage causes a loss of carbon , nitrogen and other nutrients (Tiffin 2000). Tolerance through plant architecture relies on constitutive mechanisms, such as morphology , present at the time of damage, unlike the induced mechanisms mentioned above. Plant architecture includes root-to-shoot ratios, stem number, stem rigidity and plant vasculature (Marquis 1996, Tiffin 2000). A high root-to-shoot ratio will allow plants to better absorb nutrients following herbivory, and rigid stems will prevent collapse after sustaining damage, increasing plant tolerance (Tiffin 2000). Since plants have a meristematic construction, the way resources are restricted among different regions of the plant, referred to as sectoriality, will affect the ability to transfer resources from undamaged areas to damaged areas (Marquis 1996).
Although plant vasculature may play an important role in tolerance, it is not well studied due to the difficulty of identifying the flow of resources (Marquis 1996). Increasing a plant's vasculature would seem advantageous, since it increases the flow of resources to all sites of damage, but it may also increase susceptibility to herbivores such as phloem suckers (Marquis 1996, Stowe et al. 2000). Tolerance is operationally defined as the slope of the regression of fitness against level of damage (Stinchcombe 2002). Since an individual plant can only sustain one level of damage, it is necessary to measure fitness using a group of related individuals, preferably full-sibs or clones to minimize other factors that may influence tolerance, after they sustain different levels of damage (Stinchcombe 2002). Tolerance is often presented as a reaction norm , where slopes greater than, equal to and less than zero reflect overcompensation, full compensation and undercompensation, respectively (Strauss and Agrawal 1999). Both fitness and herbivory can be measured or analyzed on an absolute (additive) scale or a relative (multiplicative) scale (Wise and Carr 2008b). The absolute scale may refer to the number of fruits produced or the total area of leaf eaten, while the relative scale may refer to the proportion of fruits damaged or the proportion of leaves eaten. Wise and Carr (2008b) suggested that it is best to keep the measure of fitness and the measure of damage on the same scale when analyzing tolerance, since having them on different scales may result in misleading outcomes. Even if the data were measured on different scales, data on the absolute scale can be log-transformed to be more similar to data on a relative (multiplicative) scale (Wise and Carr 2008b). A majority of studies use simulated or manipulated herbivory, such as clipping leaves or herbivore exclusions, due to the difficulty of controlling damage levels under natural conditions (Tiffin and Inouye 2000).
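The regression-slope definition of tolerance described above can be made concrete with a small numerical sketch. The data below are invented for illustration: the eight values stand in for clones of a single genotype assigned to different damage levels, and the classification threshold is a placeholder for the statistical test a real study would use.

```python
import numpy as np

# Tolerance as the slope of fitness regressed on damage level.
# Hypothetical data: eight clones of one genotype, two per damage level.

damage = np.array([0.0, 0.0, 0.25, 0.25, 0.5, 0.5, 0.75, 0.75])  # proportion of leaf area removed
fitness = np.array([100.0, 96.0, 90.0, 92.0, 80.0, 84.0, 70.0, 74.0])  # e.g. seeds produced

# Ordinary least-squares fit of a line: the slope is the tolerance estimate.
slope, intercept = np.polyfit(damage, fitness, 1)

def classify(slope, band=1.0):
    # In practice one would test whether the slope differs significantly
    # from zero; a fixed band here stands in for that test.
    if slope > band:
        return "overcompensation"
    if slope < -band:
        return "undercompensation"
    return "full compensation"

# Fitness declines with damage here, so the slope is negative:
# this genotype undercompensates for herbivory.
```

Note that if fitness and damage were recorded on different scales (absolute counts versus proportions), the absolute-scale variable could be log-transformed before fitting, in line with the Wise and Carr (2008b) recommendation discussed above.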
The advantage of using natural herbivory is that plants will experience the pattern of damage for which selection has favored tolerance, but there may be biases resulting from unmeasured environmental variables that affect both plants and herbivores. Using simulated herbivory allows for the control of environmental variables, but replicating natural herbivory is difficult, causing plants to respond differently to imposed and natural herbivory (Tiffin and Inouye 2000). Growing plants in the controlled environment of a greenhouse may also affect their response, as it is still a novel environment to the plants. Even if the plots are grown in natural settings, the methods of excluding or including herbivores, such as using cages or pesticides , may also affect plant tolerance (Tiffin and Inouye 2000). Lastly, models have predicted that manipulated herbivory may actually result in less precise estimates of tolerance than natural herbivory (Tiffin and Inouye 2000). Many studies have shown that using different measurements of fitness may give varying outcomes for tolerance (Strauss and Agrawal 1999; Suwa and Maherali 2008; Banta et al. 2010). Banta et al. (2010) found that their measure of tolerance differed depending on whether fruit production or total viable seed production was used to reflect fitness in Arabidopsis thaliana . Careful consideration must be given to choosing traits that are linked to fitness as closely as possible when measuring tolerance. It is classically assumed that there is a negative correlation between the levels of tolerance and resistance in plants (Stowe et al. 2000; Nunez-Farfan et al. 2007). For this trade-off to exist, tolerance and resistance must be redundant defense strategies with similar costs to the plant (Nunez-Farfan et al. 2007). If this is the case, then plants that are able to tolerate damage will suffer little decrease in fitness, and so resistance would not be selectively favored.
For highly resistant plants, allocating resources to tolerance would not be selectively favored, as the plant receives minimal damage in the first place. There is now increasing evidence that many plants allocate resources to both types of defense strategies (Nunez-Farfan et al. 2007). There is also evidence that there may not be a trade-off between tolerance and resistance at all and that they may evolve independently (Leimu and Koricheva 2006; Nunez-Farfan et al. 2007; Muola et al. 2010). Models have shown that intermediate levels of resistance and tolerance are evolutionarily stable as long as the benefits of having both traits are more than additive (Nunez-Farfan et al. 2007). Tolerance and resistance may not be redundant strategies, since tolerance could be necessary against damage from large mammalian herbivores or from specialist herbivores able to circumvent the resistance traits of the plant (Nunez-Farfan et al. 2007; Muola et al. 2010). Also, since traits that confer tolerance are usually basic characteristics of plants, selection on growth rather than on herbivory may also affect tolerance (Rosenthal and Kotanen 1994). It has been suggested that the trade-off between resistance and tolerance may change throughout the development of the plant. It is often assumed that seedlings and juveniles are less tolerant of herbivory, since they have not yet developed the structures required for resource acquisition and so will rely more on traits that confer resistance (Boege et al. 2007; Barton 2008, Barton and Koricheva 2010; Tucker and Avila-Sakar 2010). Although many studies find lower tolerance in seedlings , this is not always the case, as seen in juveniles of Plantago lanceolata , which can fully compensate for 50% defoliation (Barton 2008). There is also the added complexity of shifts in herbivore communities as the plant develops, which may favor tolerance or resistance at different life stages (Barton and Koricheva 2010).
The response of plants to herbivory is often plastic and varies according to the conditions being experienced (Wise and Abrahamson 2005). The major resources that affect plant growth, and also tolerance, are water , light , carbon dioxide and soil nutrients . Water and light levels are generally assumed to be positively associated with tolerance (Strauss and Agrawal 1999). However, there are exceptions, such as evidence of decreased tolerance in Madia sativa with increased water availability (Wise and Abrahamson 2007, Gonzales et al. 2008). Many studies have found elevated CO 2 levels to decrease tolerance in plants (Lau and Tiffin 2009). Increased nutrient levels are also commonly found to be negatively associated with tolerance (Wise and Abrahamson 2007). There are currently three prominent models that predict how resource levels may alter a plant 's tolerance to herbivory. The growth rate model (GRM) proposes that the growth rate of the plant at the time of damage is important in determining its response (Hilbert et al. 1981). Plants growing in stressful conditions, such as low resource levels or high competition , are growing below their maximum growth rate and so may have a higher capacity for regrowth after receiving damage (Hilbert et al. 1981). In contrast, plants in relatively benign conditions are growing near their maximum growth rate and are less able to recover from damage (Hilbert et al. 1981). The compensatory continuum hypothesis (CCH) suggests that there is a continuum of responses to herbivory (Maschinski and Whitham 1989). It predicts that plants growing in less stressful environmental conditions, such as high resource levels or low competition , are better able to tolerate herbivory, since they have abundant resources with which to replace lost tissues and recover from the damage. Plants growing in stressful environments are then predicted to have lower tolerance (Maschinski and Whitham 1989).
The limiting resource model (LRM), proposed more recently, takes into account the resource that limits plant fitness, the resource affected by herbivory, and how the acquisition of resources is affected by herbivory (Wise and Abrahamson 2005). Unlike the GRM and CCH, it is able to incorporate the type of damage received, since different modes of herbivory may cause different resources to be affected. The LRM encompasses every possible outcome of tolerance (i.e. equal tolerance in both environments, higher tolerance in low-stress environments, and lower tolerance in low-stress environments) and allows multiple pathways to reach these outcomes. Currently, the LRM seems to be the most useful in predicting the effects that varying resource levels may have on tolerance (Wise and Abrahamson 2007). Meta-analyses by Hawkes and Sullivan (2001) and Wise and Abrahamson (2007, 2008a) found that the CCH and GRM were insufficient for predicting the diversity of plant tolerance to herbivory. Banta et al. (2010), however, suggested that the LRM should be represented as a set of seven models, instead of one, since each individual part of the LRM requires different assumptions. It is classically assumed that tolerance traits do not impose selection on herbivore fitness (Strauss and Agrawal 1999). This is in contrast to traits that confer resistance, which are likely to affect herbivore fitness and lead to a co-evolutionary arms race (Stinchcombe 2002; Espinosa and Fornoni 2006). However, there are possible mechanisms by which tolerance may affect herbivore fitness. One mechanism requires a genetic association between loci that confer resistance and tolerance, either through tight linkage or pleiotropy (Stinchcombe 2002). Selection for either trait will then also affect the other. If there is a positive correlation between the two traits, then selection for increased tolerance will also increase resistance in the plants. 
If there is a negative correlation between the two traits, then selection for increased tolerance will decrease resistance. How common this association is, however, remains uncertain, as many studies find no correlation between tolerance and resistance while others find significant correlations between them (Leimu and Koricheva 2006; Nunez-Farfan et al. 2007; Muola et al. 2010). If the traits that allow for tolerance affect the quality, quantity or availability of plant tissue, tolerance may also impose selection on herbivores. Consider a case where tolerance is achieved through activation of dormant meristems in the plant. These new plant tissues may be of lower quality than what was previously eaten by herbivores. Herbivores that have higher rates of consumption, or that can use this new resource more efficiently, may be selectively favored over those that cannot (Stinchcombe 2002). Espinosa and Fornoni (2006) directly investigated whether tolerance may impose selection on herbivores. As suggested by Stinchcombe (2002), they used plants that had similar resistance but differed in tolerance, to more easily differentiate the effects of each trait. As expected, they found evidence that resistance in plants affected herbivore fitness, but they were unable to find any effects of tolerance on herbivore fitness. A model by Restif and Koella (2003) found that plant tolerance can directly impose selection on pathogens. Assuming that investment in tolerance reduces plant fecundity, infection by pathogens will decrease the number of uninfected hosts. There may then be selection for decreased virulence in the pathogens, so that their plant host survives long enough to produce enough offspring for future pathogens to infect (Restif and Koella 2003). However, this may have only limited application to herbivores. 
Herbivory can have large effects on the succession and diversity of plant communities (Anderson and Briske 1995; Stowe et al. 2000; Pejman et al. 2009). Plant defense strategies are thus important in determining the temporal and spatial variation of plant species, as they may change the competitive abilities of plants following herbivory (Anderson and Briske 1995; Stowe et al. 2000). Past studies have suggested that plant resistance plays the major role in species diversity within communities, but tolerance may also be an important factor (Stowe et al. 2000; Pejman et al. 2009). Herbivory may allow less competitive but tolerant plants to survive in communities dominated by highly competitive but intolerant plant species, thereby increasing diversity (Mariotte et al. 2013). Pejman et al. (2009) found support for this idea in an experimental study on grassland species: in low-resource environments, highly competitive (dominant) plant species had lower tolerance than the less competitive (subordinate) species. They also found that the addition of fertilizers offset the negative effects of herbivory on dominant plants. It has also been suggested that the observation of species that occur late in ecological succession (late-seral) being replaced by species that occur in the middle of succession (mid-seral) after high herbivory is due to differences in tolerance between them (Anderson and Briske 1995; Olff and Ritchie 1998). However, tolerance between these two groups of species does not always differ, and other factors, such as selective herbivory on late-seral species, may contribute to these observations (Anderson and Briske 1995). The large number of studies indicating overcompensation in plants following herbivory, especially after apical meristem damage, has led some authors to suggest that there may be mutualistic relationships between plants and herbivores (Belsky 1986; Agrawal 2000; Edwards 2009). 
If herbivores provide some benefit to the plant despite causing damage, the plant may evolve tolerance to minimize the damage imposed by the herbivore and shift the relationship more towards mutualism (Edwards 2009). Such benefits include release from apical dominance, induction of resistance traits that temporally separate herbivores, information about future attacks, and pollination (Agrawal 2000). One of the best examples occurs in Ipomopsis aggregata, where there is increased seed production and seed siring in damaged plants compared to undamaged plants (Figure 4; Edwards 2009). The probability of attack after the first bout of herbivory is low in the environment inhabited by I. aggregata. Due to the predictability of attacks, these plants have evolved to overcompensate for the damage and produce the majority of their seeds after the initial bout of herbivory (Edwards 2009). Another example involves endophytic fungi, such as Neotyphodium, which parasitize plants and produce spores that destroy host inflorescences (Edwards 2009). The fungi also produce alkaloids which protect the plant from herbivores, and so the plant may have evolved tolerance to flower damage to acquire this benefit (Edwards 2009). Tolerance may also be involved in the mutualism between the myrmecophyte Cordia nodosa and its ant symbiont Allomerus octoarticulatus (Edwards and Yu 2008). The plant provides the ant with shelter and food bodies in return for protection against herbivory, but the ants also sterilize the plant by removing flower buds. C. nodosa is able to compensate for this by reallocating resources to produce flowers on branches not occupied by castrating ants (Edwards and Yu 2008). A similar type of mutualism involves plants and mycorrhizal fungi (Bennett and Bever 2007). Mycorrhizal fungi inhabit plant roots and increase nutrient uptake for the plant in exchange for food resources. 
These fungi are also able to alter the tolerance of plants to herbivory and may cause undercompensation, full compensation or overcompensation, depending on the species of fungi involved (Bennett and Bever 2007). Modern agriculture has focused on using genetically modified crops that possess toxic compounds to reduce damage by pests (Nunez-Farfan et al. 2007). However, the effectiveness of resistance traits may decrease as herbivores evolve counter-adaptations to the toxic compound, especially since most farmers are reluctant to assign a proportion of their land to susceptible crops (Nunez-Farfan et al. 2007). Another method to increase crop yield is to use lines that are tolerant to herbivory and can compensate, or even overcompensate, for the damage inflicted (Nunez-Farfan et al. 2007; Poveda et al. 2010). Alterations in resource allocation due to herbivory have been studied heavily in agricultural systems (Trumble et al. 1993). Domestication of plants by selection for higher yield has undoubtedly also caused changes in various plant growth traits, such as decreased resource allocation to non-yield tissues (Welter and Steggall 1993). Alterations in growth traits are likely to affect plant tolerance, since the underlying mechanisms overlap; the observation that domesticated tomato plants have lower tolerance to folivory than their wild progenitors supports this as well (Welter and Steggall 1993). Most agricultural studies, however, focus on comparing tolerance between damaged and undamaged crops rather than between crops and their wild counterparts. Many have found that crops such as cucumbers, cabbages and cauliflowers can fully compensate, and even overcompensate, for the damage received (Trumble et al. 1993). A study by Poveda et al. (2010) also found evidence of overcompensation in potato plants in response to tuber damage by the potato tuber moth, Phthorimaea operculella. 
Unlike previous examples, the potato plant does not reallocate resources, but actually increases overall productivity to increase mass of tubers and aboveground tissues (Poveda et al. 2010).
https://en.wikipedia.org/wiki/Plant_tolerance_to_herbivory
Plant transformation vectors are plasmids that have been specifically designed to facilitate the generation of transgenic plants. The most commonly used plant transformation vectors are T-DNA binary vectors, which are often replicated in both E. coli, a common lab bacterium, and Agrobacterium tumefaciens, a plant-virulent bacterium used to insert the recombinant DNA into plants. Plant transformation vectors contain three key elements. A custom DNA plasmid sequence can be created and replicated in various ways, but generally all methods share the following processes. Plant transformation using plasmids begins with the propagation of the binary vector in E. coli. When the bacterial culture reaches the appropriate density, the binary vector is isolated and purified, and a foreign gene can then be introduced. The engineered binary vector, including the foreign gene, is re-introduced into E. coli for amplification. The engineered binary vector is isolated from E. coli and introduced into Agrobacteria containing a modified (relatively small) Ti plasmid. These engineered Agrobacteria can be used to infect plant cells, whereby the T-DNA, which contains the foreign gene, becomes integrated into the plant cell genome. In each infected cell, the T-DNA integrates at a different site in the genome. An entire plant can be regenerated from a single transformed cell, resulting in an organism with the transformed DNA integrated identically across all cells. A selector gene can be used to distinguish successfully genetically modified cells from unmodified ones. The selector gene is integrated into the plasmid along with the desired target gene, providing the cells with resistance to an antibiotic such as kanamycin, ampicillin, spectinomycin or tetracycline. The cells, along with any other organisms growing within the culture, can then be treated with the antibiotic, allowing only the modified cells to survive. 
The antibiotic gene is not usually transferred to the plant cell but instead remains within the bacterial cell. Plasmids replicate to produce many plasmid molecules in each host bacterial cell. The number of copies of each plasmid in a bacterial cell is determined by the replication origin , which is the position within the plasmid molecule where DNA replication is initiated. Most binary vectors have a higher number of plasmid copies when they replicate in E. coli ; however, the plasmid copy-number is usually lower when the plasmid is resident within Agrobacterium tumefaciens . Plasmids can also be replicated using the polymerase chain reaction (PCR). T-DNA contains two types of genes: the oncogenic genes , encoding for enzymes involved in the synthesis of auxins and cytokinins and responsible for tumor formation, and the genes encoding for the synthesis of opines . These compounds, produced by the condensation between amino acids and sugars, are synthesized and excreted by the crown gall cells, and they are consumed by A. tumefaciens as carbon and nitrogen sources. The genes involved in opine catabolism , T-DNA transfer from the bacterium to the plant cell and bacterium-bacterium plasmid conjugative transfer are located outside the T-DNA. [ 3 ] [ 4 ] The T-DNA fragment is flanked by 25-bp direct repeats, which act as a cis-element signal for the transfer apparatus. The process of T-DNA transfer is mediated by the cooperative action of proteins encoded by genes determined in the Ti plasmid virulence region (vir genes) and in the bacterial chromosome. The Ti plasmid also contains the genes for opine catabolism produced by the crown gall cells and regions for conjugative transfer and for its own integrity and stability. The 30 kb virulence (vir) region is a regulon organized in six operons essential for the T-DNA transfer (virA, virB, virD, and virG) or for the increasing of transfer efficiency (virC and virE). 
[ 3 ] [ 4 ] [ 5 ] Several chromosomally determined genetic elements have been shown to play functional roles in the attachment of A. tumefaciens to the plant cell and in bacterial colonization. The loci chvA and chvB are involved in the synthesis and excretion of β-1,2 glucan, [ 6 ] and the chvE locus is required for the sugar enhancement of vir gene induction and for bacterial chemotaxis. [ 7 ] [ 8 ] [ 9 ] The cel locus is responsible for the synthesis of cellulose fibrils. [ 10 ] The pscA (exoC) locus is involved in the synthesis of both cyclic glucan and acid succinoglycan. [ 11 ] [ 9 ] The att locus is involved in the production of cell surface proteins. [ 12 ]
https://en.wikipedia.org/wiki/Plant_transformation_vector
Plant use of endophytic fungi in defense occurs when endophytic fungi, which live symbiotically with the majority of plants by entering their cells, are utilized as an indirect defense against herbivores. [ 1 ] [ 2 ] In exchange for carbohydrate energy resources, the fungus provides benefits to the plant which can include increased water or nutrient uptake and protection from phytophagous insects, birds or mammals. [ 3 ] Once associated, the fungi alter the nutrient content of the plant and enhance or begin production of secondary metabolites. [ 4 ] The change in chemical composition acts to deter herbivory by insects, grazing by ungulates and/or oviposition by adult insects. [ 5 ] Endophyte-mediated defense can also be effective against pathogens and non-herbivory damage. [ 6 ] This differs from other forms of indirect defense in that the fungi live within the plant cells and directly alter the plant's physiology. In contrast, other biotic defenders, such as predators or parasites of the herbivores consuming a plant, are normally attracted by volatile organic compounds (known as semiochemicals) released following damage, or by food rewards and shelter produced by the plant. [ 7 ] These defenders vary in the time spent with the plant: from long enough to oviposit to remaining there for numerous generations, as in the ant-acacia mutualism. [ 8 ] Endophytic fungi, by contrast, tend to live with the plant over its entire life. The endophytic fungi grow in the intercellular spaces of the plant, parallel to the leaves and stems, as elongated and thinly dispersed branched hyphae. [ 9 ] The fungal hyphae penetrate the host plant's embryo and grow into the seeds, infecting the new plants that grow from them, a process known as vertical transmission. [ 9 ] The fungal endophytes are a diverse group of organisms forming associations almost ubiquitously throughout the plant kingdom. 
The endophytes which provide indirect defense against herbivores may have come from a number of origins, including mutualistic root endophyte associations and the evolution of entomopathogenic fungi into plant-associated endophytes. [ 10 ] The endomycorrhizae, which live in plant roots, are made up of five groups: arbuscular, arbutoid, ericoid, monotropoid, and orchid mycorrhizae. The majority of species are from the phylum Glomeromycota, with the ericoid species coming from the Ascomycota, while the arbutoid, monotropoid and orchid mycorrhizae are classified as Basidiomycota. [ 11 ] The entomopathogenic view has gained support from observations of increased fungal growth in response to induced plant defenses [ 12 ] and colonization of plant tissues. [ 13 ] Examples of host specialists are numerous – especially in temperate environments – with multiple specialist fungi frequently infecting one plant individual simultaneously. [ 14 ] [ 15 ] These specialists demonstrate high levels of specificity for their host species and may form physiologically adapted host-races on closely related congeners. [ 16 ] Piriformospora indica, an endophytic fungus of the order Sebacinales, is capable of colonising roots and forming symbiotic relationships with a wide range of plants. P. indica has also been shown to increase both crop yield and plant defence against root pathogens in a variety of crops (barley, tomato, maize, etc.). [ 17 ] [ 18 ] However, there are also many examples of generalist fungi which may occur on different hosts at different frequencies (e.g. Acremonium endophytes from five subgenera of Festuca [ 19 ] ) and as part of a variety of fungal assemblages. [ 20 ] [ 21 ] They may even spread to novel, introduced plant species. [ 22 ] Endophytic mutualists associate with species representative of every growth form and life history strategy in the grasses and many other groups of plants. 
[ 23 ] The effects of associating with multiple strains or species of fungus at once can vary, but in general one type of fungus provides the majority of the benefit to the plant. [ 24 ] [ 25 ] Some chemical defenses once thought to be produced by the plant have since been shown to be synthesized by endophytic fungi. The chemical basis of insect resistance in endophyte-plant defense mutualisms has been most extensively studied in perennial ryegrass, where three major classes of secondary metabolites are found: indole diterpenes, ergot alkaloids and peramine. [ 26 ] [ 27 ] [ 28 ] Related compounds are found across the range of endophytic fungal associations with plants. The terpenes and alkaloids are inducible defenses which act similarly to defensive compounds produced by plants and are highly toxic to a wide variety of phytophagous insects as well as to mammalian herbivores. [ 29 ] [ 30 ] [ 31 ] [ 32 ] Peramine occurs widely in endophyte-associated grasses and may also act as a signal to invertebrate herbivores of the presence of more dangerous defensive chemicals. [ 33 ] Terpenoids and ketones have been linked to protection from specialist and generalist herbivores (both insect and vertebrate) across the higher plants. [ 34 ] [ 35 ] Generalist herbivores are more likely than specialists to be negatively affected by the defense chemicals that endophytes produce because they have, on average, less resistance to these specific, qualitative defenses. [ 36 ] Among the chewing insects, infection by mycorrhizae can actually benefit specialist feeders even as it negatively affects generalists. [ 37 ] The overall pattern of effects on insect herbivores seems to support this, with generalist mesophyll feeders experiencing negative effects of host infection, although phloem feeders appear to be little affected by fungal defenses. 
[ 38 ] Secondary metabolites may also affect the behaviour of natural enemies of herbivorous species in a multi-trophic defense/predation association. [ 7 ] For instance, terpenoid production attracts natural enemies of herbivores to damaged plants. [ 39 ] These enemies can reduce numbers of invertebrate herbivores substantially and may not be attracted in the absence of endophytic symbionts. [ 40 ] Multi-trophic interactions can have cascading consequences for the entire plant community, with the potential to vary widely depending on the combination of fungal species infecting a given plant and the abiotic conditions. [ 41 ] [ 42 ] [ 43 ] Given the inherently nutrient-exchange-based economy of the plant-endophyte association, it is not surprising that infection by fungi directly alters the chemical composition of plants, with corresponding impacts on their herbivores. Endophytes frequently increase apoplastic carbohydrate concentration, altering the C:N ratio of leaves and making them a less efficient source of protein. [ 44 ] This effect can be compounded when the fungus also uses plant nitrogen to form N-based secondary metabolites such as alkaloids. For example, the thistle gall fly ( Urophora cardui ) experiences reduced performance on plants infected with endophytic fungi due to the decreased N-content and the plant's reduced ability to produce large quantities of high-quality gall tissue. [ 45 ] Additionally, increased availability of limiting nutrients to plants improves overall performance and health, potentially increasing the ability of infected plants to defend themselves. [ 46 ] Studies of fungal infection consistently reveal that plants with endophytes are less likely to suffer substantial damage, and that herbivores feeding on infected plants are less productive. 
[ 47 ] [ 48 ] There are multiple modes through which endophytic fungi reduce insect herbivore damage, including avoidance (deterrence), reduced feeding, reduced development rate, reduced growth and/or population growth, reduced survival, and reduced oviposition. [ 49 ] Vertebrate herbivores such as birds, [ 50 ] rabbits [ 51 ] and deer [ 52 ] show the same patterns of avoidance and reduced performance. Even below-ground herbivores such as nematodes and root-feeding insects are reduced by endophyte infection. [ 53 ] [ 54 ] [ 55 ] [ 56 ] The strongest evidence for anti-herbivore benefits of fungal endophytes comes from studies in which herbivore populations were extirpated when allowed to feed only on infected plants. Examples of such local extinction have been documented in crickets, [ 57 ] larval armyworms and flour beetles. [ 58 ] Yet chemical defenses produced by fungal endophytes are not universally effective, and numerous insect herbivores are unaffected by a given compound at one or more life history stages; [ 59 ] larval stages are often more susceptible to toxins than adults. [ 60 ] [ 61 ] Even endophytes which purportedly provide some defense benefit to their hosts, such as the Neotyphodium partner of many grass species in the alpine tundra, do not always lead to avoidance or ill effects on herbivores, due to spatial variation in levels of consumption. [ 62 ] Not all endophytic symbioses confer protection from herbivores – only some species associations act as defense mutualisms. [ 63 ] The difference between a mutualistic endophyte and a pathogenic one can be indistinct and dependent on interactions with other species or on environmental conditions. Some endophytic fungi can counteract the negative impacts of pathogenic fungi in plants such as Siberian ryegrass ( Elymus sibiricus ) by increasing seed germination, coleoptile and radicle length, and seedling weight. 
[ 64 ] Some fungi which are pathogens in the absence of herbivores may become beneficial under high levels of insect damage, such as species which kill plant cells in order to make nutrients available for their own growth, thereby altering nutritional content of leaves and making them a less desirable foodstuff. [ 44 ] Some endomycorrhizae may provide defense benefits but at the cost of lost reproductive potential by rendering grasses partially sterile with their own fungal reproductive structures taking precedence. [ 65 ] This is not unusual among fungi, as non-endophytic plant pathogens have similar conditionally beneficial effects on defense. [ 66 ] Some species of endophyte may be beneficial for the plants in other ways (e.g. nutrient and water uptake) but will provide less benefit as a plant receives more damage and not produce defensive chemicals in response. [ 67 ] [ 68 ] The effect of one fungus on the plant can be altered when multiple strains of fungi are infecting a given individual in combination. [ 69 ] Some endomycorrhizae may actually promote herbivore damage by making plants more susceptible to it. [ 70 ] For example, some oak fungal endophytes are positively correlated with the levels of damage from leaf miners ( Cameraria spp. ), although negatively correlated with number of larvae present due to a reduction of oviposition on infected plants, which partially mitigates the higher damage rate. [ 71 ] [ 72 ] This continuum between mutualism and pathogenicity of endophytic fungi has major implications for plant fitness depending on the species of partners available in a given environment; mutualist status is conditional in a way similar to pollination and can shift from one to the other just as frequently. [ 73 ] [ 74 ] Fungal endophytes which provide defensive services to their host plants may exert selective pressures favouring association through enhanced fitness relative to uninfected hosts. [ 75 ] The fungus Neotyphodium spp. 
infects grasses and increases fitness under conditions with high levels of interspecific competition . [ 76 ] It does this through a combination of benefits including anti-herbivore defenses and growth promoting factors. The customary assumption that plant growth promotion is the main way fungal mutualists improve fitness under attack from herbivores is changing; alteration of plant chemical composition and induced resistance are now recognized as factors of great importance in improving competitive ability and fecundity. [ 77 ] Plants undefended by chemical or physical means at certain points in their life histories have higher survival rates when infected with beneficial endophytic fungi. [ 78 ] The general trend of plants infected with mutualistic fungi outperforming uninfected plants under moderate to high herbivory exerts selection for higher levels of fungal association as herbivory levels increase. [ 79 ] Unsurprisingly, low to moderate levels of herbivore damage also increases the levels of infection by beneficial endophytic fungi. [ 38 ] [ 80 ] In some cases the symbiosis between fungus and plant reaches a point of inseparability; fungal material is transmitted vertically from the maternal parent plant to seeds, forming a near-obligate mutualism. [ 81 ] [ 82 ] Having a mutualistic relationship with endophytic fungi can promote seed production and seed germination rates in some plant species, such as perennial ryegrass ( Lolium perenne ) and tall fescue ( Festuca arundinacea ). [ 83 ] The fungi can also benefit the growth of the seedlings as it can enhance seedling growth rate, tiller number and height, and overall biomass. [ 64 ] Because seeds are an important aspect of both fecundity and competitive ability for plants, high germination rates and seedling survival increase lifetime fitness. 
[ 5 ] When fitness of plant and fungus become tightly intertwined, it is in the best interest of the endophyte to act in a manner beneficial to the plant, pushing it further toward the mutualism end of the continuum. Such effects of seed defense can also occur in dense stands of conspecifics through horizontal transmission of beneficial fungi. [ 84 ] Mechanisms of microbial association defense, protecting the seeds rather than the already established plants, can have such drastic impacts on seed survival that they have been recognized to be an important aspect of the larger 'seed defence theory'. [ 85 ] The range of associated plants and fungi may be altered as climate changes , and not necessarily in a synchronous fashion. Plants may lose or gain endophytes, with as yet unknown impacts on defense and fitness, although generalist species may provide indirect defense in new habitats more often than not. [ 86 ] Above-ground and below-ground associations can be mutual drivers of diversity, so altering the interactions between plants and their fungi may also have drastic effects on the community at large, including herbivores. [ 86 ] [ 87 ] Changes in distribution may bring plants into competition with previously established local species, making the fungal community – and particularly the pathogenic role of fungus – important in determining outcomes of competition with non-native invasive species . [ 4 ] [ 88 ] As carbon dioxide levels rise, the amplified photosynthesis will increase the pool of carbohydrates available to endophytic partners, potentially altering the strength of associations. [ 89 ] Infected C 3 plants show greater relative growth rate under high CO 2 conditions compared to uninfected plants, and it is possible that the fungi drive this pattern of increased carbohydrate production. [ 90 ] Levels of herbivory may also increase as temperature and carbon dioxide concentrations rise. 
[ 91 ] However, should plants remain associated with their current symbiotic fungi, evidence suggests that the degree of defense afforded them should not be altered. Although the amount of damage caused by herbivores frequently increases under elevated levels of atmospheric CO 2 , the proportion of damage remains constant when host plants are infected by their fungal endophytes. [ 92 ] The change in carbon-nitrogen ratio will also have important consequences for herbivores. As carbohydrate levels increase within plants, relative nitrogen content will fall, having the dual effects of reducing nutritional benefit per unit biomass and also lowering concentrations of nitrogen-based defenses such as alkaloids. [ 93 ] The effects of endophytic fungi on the chemical composition of plants have been known by humans for centuries in the form of poisoning and disease as well as medicinal uses. Especially noted were impacts on agricultural products and livestock. [ 94 ] [ 95 ] Recognition and study of the mutualism did not begin in earnest until the 1980s when early studies on the impacts of alkaloids on animal herbivory confirmed their importance as agents of deterrence. [ 44 ] Biologists began to characterize the diversity of endophytic mutualists through primitive techniques such as isozyme analysis and measuring the effects of infection on herbivores. [ 16 ] [ 19 ] [ 21 ] Basic descriptive accounts of these previously neglected species of fungus became a major goal for mycologists , and a lot of research focus shifted to associates of the grass family ( Poaceae ) in particular, because of the large number of species which represent economically important commodities to humans. [ 5 ] [ 28 ] [ 96 ] [ 97 ] In addition to continuing descriptive studies of the effects of infection by defense mutualist endophytes, there has been a sharp increase in the number of studies which delve further into the ecology of plant-fungus associations and especially their multi-trophic impacts. 
[ 40 ] [ 41 ] [ 98 ] The processes by which endophytic fungi alter plant physiology and volatile chemical levels are virtually unknown, and the limited current results show a lack of consistency under differing environmental conditions, especially differing levels of herbivory. [ 99 ] Studies comparing the relative impacts of mutualistic endophytes on inducible defenses and tolerance show that infection plays a central role in determining both responses to herbivore damage. [ 100 ] On the whole, the molecular mechanisms behind endophyte-mediated plant defense have been an increasing focus of research over the past ten years. [ 101 ] [ 102 ] Since the beginning of the biotechnology revolution, much research has also focused on using genetically modified endophytes to improve plant yields and defensive properties. [ 93 ] The genetic basis of the response to herbivory is being explored in tall fescue, where it appears that production of jasmonic acid may play a role in downregulation of the host plant's chemical defense pathways when a fungal endophyte is present. [ 103 ] In some cases, fungi that are closely associated with their hosts have transferred genes for secondary metabolite production to the host genome, which could help to explain multiple origins of chemical defenses within the phylogeny of various groups of plants. [ 104 ] [ 105 ] This represents an important line of inquiry to pursue, especially with regard to understanding the chemical pathways that can be utilized in biotechnological applications. [ 106 ] The secondary chemicals produced by endophytic fungi when associated with their host plants can be very harmful to mammals, including livestock and humans, causing more than 600 million dollars in losses due to dead livestock every year. [ 107 ] For example, the ergot alkaloids produced by Claviceps spp. have been dangerous contaminants of rye crops for centuries. 
[ 97 ] When not lethal, defense chemicals produced by fungal endophytes may lead to lower productivity in cows and other livestock feeding on infected forage . [ 108 ] Reduced nutritional quality of infected plant tissue also lowers the performance of farm animals, compounding the effect of reduced feed uptake when provided with infected plant matter. [ 48 ] [ 109 ] Reduced frequency of pregnancy and birth has also been reported in cattle and horses fed with infected forage. [ 93 ] Endophytic fungi can even cause severe toxicity in grazing livestock, which is often referred to as fescue toxicosis. [ 110 ] Cattle that graze on tall fescue ( Festuca arundinacea ) develop symptoms such as fescue foot, fat necrosis and summer slump, a general malady of fescue toxicosis. [ 110 ] Fungal, plant and herbivore population sizes can follow a cyclical predator-prey pattern. Infection rates of endophytic fungi in plants tend to increase with rising grazing pressure. [ 111 ] If endophytic fungi become highly prevalent in grazer food sources, they can even lead to population crashes in grazing animals. [ 111 ] Consequently, the dairy and meat-production industries must endure substantial economic losses. [ 107 ] Fungal resistance to herbivores represents an environmentally sustainable alternative to pesticides that has experienced reasonable success in agricultural applications. [ 112 ] The organic farming industry has embraced mycorrhizal symbionts as one tool for improving yields and protecting plants from damage. [ 46 ] [ 106 ] Infected crops of soybean , [ 113 ] ribwort plantain, [ 114 ] cabbage, banana, [ 115 ] coffee bean plant [ 10 ] and tomato [ 116 ] all show markedly lower rates of herbivore damage compared to uninfected plants. Endophytic fungi show great promise as a means of indirect biocontrol in large-scale agricultural applications.
[ 49 ] [ 117 ] The potential for biotechnology to improve crop populations through inoculation with modified fungal strains could reduce toxicity to livestock and improve yields of human-consumed foods. [ 93 ] The endophyte, either with detrimental genes removed or beneficial new genes added, is used as a surrogate host to transform the crop genetically. An endophyte of ryegrass has been genetically transformed in this way and used successfully to deter herbivores. [ 118 ] Understanding how to mediate top-down effects on crop populations caused by the enemies of herbivores, as well as bottom-up effects of chemical composition in infected plants, has important consequences for the management of agricultural industries. [ 119 ] Endophytes for agricultural use must be selected carefully, with consideration paid to the specific impacts of infection on all species of pests and their predators or parasites, which may vary on a geographic scale. [ 106 ] The union of ecological and molecular techniques to increase yield without sacrificing the health of the local or global environment is a growing area of research. Many secondary metabolites from endophyte-plant interactions have also been isolated and used in raw or derived forms to produce a variety of drugs treating many conditions. The toxic properties of ergot alkaloids also make them useful in the treatment of headaches and during childbirth, by inducing contractions and stemming hemorrhages. [ 120 ] Drugs used to treat Parkinson's disease have been created from isolates of ergot toxins, although health risks may accompany their use. [ 121 ] Ergotamine has also been used to synthesize lysergic acid diethylamide because of its chemical similarity to lysergic acid .
[ 122 ] The largely chemical basis of endophytic fungi's defensive properties makes them a promising group of organisms in which to search for new antibiotic compounds: other fungi have in the past yielded such useful drugs as penicillin and streptomycin , and plants use their antibiotic qualities as a defense against pathogens. [ 123 ]
https://en.wikipedia.org/wiki/Plant_use_of_endophytic_fungi_in_defense
Plant viruses are viruses that have the potential to affect plants . Like all other viruses, plant viruses are obligate intracellular parasites that do not have the molecular machinery to replicate without a host . Plant viruses can be pathogenic to vascular plants ("higher plants") . Many plant viruses are rod-shaped , with protein discs forming a tube surrounding the viral genome ; isometric particles are another common structure. They rarely have an envelope . The great majority have an RNA genome, which is usually small and single stranded (ss), but some viruses have double-stranded (ds) RNA, ssDNA or dsDNA genomes. Although plant viruses are not as well understood as their animal counterparts, one plant virus has become very recognizable: tobacco mosaic virus (TMV), the first virus to be discovered. This and other viruses cause an estimated US$60 billion loss in crop yields worldwide each year. Plant viruses are grouped into 73 genera and 49 families . However, these figures relate only to cultivated plants, which represent only a tiny fraction of the total number of plant species. Viruses in wild plants have not been well-studied, but the interactions between wild plants and their viruses often do not appear to cause disease in the host plants. [ 1 ] To transmit from one plant to another and from one plant cell to another, plant viruses must use strategies that are usually different from animal viruses . Most plants do not move, and so plant-to-plant transmission usually involves vectors (such as insects). Plant cells are surrounded by solid cell walls , therefore transport through plasmodesmata is the preferred path for virions to move between plant cells. Plants have specialized mechanisms for transporting mRNAs through plasmodesmata, and these mechanisms are thought to be used by RNA viruses to spread from one cell to another. [ 2 ] Plant defenses against viral infection include, among other measures, the use of siRNA in response to dsRNA . 
[ 3 ] Most plant viruses encode a protein to suppress this response. [ 4 ] Plants also reduce transport through plasmodesmata in response to injury. [ 2 ] The discovery of plant viruses causing disease is often credited to A. Mayer (1886), who, working in the Netherlands, demonstrated that sap obtained from tobacco leaves showing mosaic disease produced the same mosaic symptoms when injected into healthy plants. The infectivity of the sap was destroyed, however, when it was boiled. Mayer thought the causal agent was a bacterium, but even after inoculation with large numbers of bacteria he failed to reproduce the mosaic symptoms. In 1898, Martinus Beijerinck, a professor of microbiology at the Technical University in Delft, the Netherlands, put forth the concept that viruses were small, and determined that the "mosaic disease" remained infectious when passed through a Chamberland filter-candle . This was in contrast to bacteria , which were retained by the filter. Beijerinck referred to the infectious filtrate as a " contagium vivum fluidum ", the origin of the modern term "virus". After the initial discovery of the 'viral concept' there was a need to classify other known viral diseases based on their mode of transmission, even though microscopic observation proved fruitless. In 1939 Holmes published a classification list of 129 plant viruses. This was expanded, and by 1999 there were 977 officially recognized, and some provisional, plant virus species. The purification (crystallization) of TMV was first performed by Wendell Stanley , who published his findings in 1935, although he did not determine that the RNA was the infectious material. He nevertheless received the Nobel Prize in Chemistry in 1946. In the 1950s, discoveries by two laboratories simultaneously showed that the purified RNA of TMV was itself infectious, reinforcing the argument that the RNA carries the genetic information to code for the production of new infectious particles.
More recently virus research has been focused on understanding the genetics and molecular biology of plant virus genomes , with a particular interest in determining how the virus can replicate, move and infect plants. Understanding the virus genetics and protein functions has been used to explore the potential for commercial use by biotechnology companies. In particular, viral-derived sequences have been used to provide an understanding of novel forms of resistance . The recent boom in technology allowing humans to manipulate plant viruses may provide new strategies for production of value-added proteins in plants. Viruses are so small that they can only be observed under an electron microscope . The structure of a virus is given by its coat of proteins , which surround the viral genome . Assembly of viral particles takes place spontaneously . Over 50% of known plant viruses are rod-shaped ( flexuous or rigid). The length of the particle is normally dependent on the genome, but it is usually between 300 and 500 nm with a diameter of 15–20 nm. Protein subunits can be placed around the circumference of a circle to form a disc. In the presence of the viral genome, the discs are stacked, then a tube is created with room for the nucleic acid genome in the middle. [ 5 ] The second most common structure amongst plant viruses is the isometric particle. These particles are 25–50 nm in diameter. In cases when there is only a single coat protein, the basic structure consists of 60T subunits, where T is an integer . Some viruses may have 2 coat proteins that associate to form an icosahedral shaped particle. There are three genera of Geminiviridae that consist of particles that are like two isometric particles stuck together. A small number of plant viruses have, in addition to their coat proteins, a lipid envelope . This is derived from the plant cell membrane as the virus particle buds off from the cell .
Viruses can be spread by direct transfer of sap by contact of a wounded plant with a healthy one. Such contact may occur during agricultural practices, as by damage caused by tools or hands, or naturally, as by an animal feeding on the plant. Generally TMV, potato viruses and cucumber mosaic viruses are transmitted via sap. Plant viruses need to be transmitted by a vector , most often insects such as leafhoppers . One class of viruses, the Rhabdoviridae , has been proposed to actually be insect viruses that have evolved to replicate in plants. The chosen insect vector of a plant virus will often be the determining factor in that virus's host range: it can only infect plants that the insect vector feeds upon. This was shown in part when the Old World whitefly reached the United States, where it transferred many plant viruses into new hosts. Depending on the way they are transmitted, plant viruses are classified as non-persistent, semi-persistent and persistent. In non-persistent transmission, viruses become attached to the distal tip of the insect's stylet and are inoculated into the next plant it feeds on. [ 6 ] Semi-persistent viral transmission involves the virus entering the foregut of the insect. Those viruses that manage to pass through the gut into the haemolymph and then to the salivary glands are known as persistent. There are two sub-classes of persistent viruses: propagative and circulative. Propagative viruses are able to replicate in both the plant and the insect (and may have originally been insect viruses), whereas circulative viruses cannot. Circulative viruses are protected inside aphids by the chaperone protein symbionin , produced by bacterial symbionts . Many plant viruses encode within their genome polypeptides with domains essential for transmission by insects. In non-persistent and semi-persistent viruses, these domains are in the coat protein and another protein known as the helper component.
A bridging hypothesis has been proposed to explain how these proteins aid in insect-mediated viral transmission. The helper component binds to a specific domain of the coat protein, and then to the insect mouthparts, creating a bridge. In persistent propagative viruses, such as tomato spotted wilt virus (TSWV), there is often a lipid coat surrounding the proteins that is not seen in other classes of plant viruses. In the case of TSWV, 2 viral proteins are expressed in this lipid envelope. It has been proposed that the viruses bind via these proteins and are then taken into the insect cell by receptor-mediated endocytosis . Soil-borne nematodes have been shown to transmit viruses. They acquire and transmit them by feeding on infected roots . Viruses can be transmitted both non-persistently and persistently, but there is no evidence of viruses being able to replicate in nematodes. The virions attach to the stylet (feeding organ) or to the gut when they feed on an infected plant and can then detach during later feeding to infect other plants. Nematodes transmit viruses such as tobacco ringspot virus and tobacco rattle virus . [ 7 ] A number of virus genera are transmitted, both persistently and non-persistently, by soil-borne zoosporic protozoa . These protozoa are not phytopathogenic themselves, but parasitic . Transmission of the virus takes place when they become associated with the plant roots. Examples include Polymyxa graminis , which has been shown to transmit plant viral diseases in cereal crops, [ 8 ] and Polymyxa betae , which transmits Beet necrotic yellow vein virus . Plasmodiophorids also create wounds in the plant's root through which other viruses can enter. Plant virus transmission from generation to generation occurs in about 20% of plant viruses. When viruses are transmitted by seeds, the seed is infected in the generative cells and the virus is maintained in the germ cells and sometimes, but less often, in the seed coat.
[ 9 ] When the growth and development of plants is delayed because of situations like unfavorable weather, there is an increase in the amount of virus infections in seeds. There does not seem to be a correlation between the location of the seed on the plant and its chances of being infected. Little is known about the mechanisms involved in the transmission of plant viruses via seeds, although it is known that it is environmentally influenced and that seed transmission occurs because of a direct invasion of the embryo via the ovule or by an indirect route with an attack on the embryo mediated by infected gametes. These processes can occur concurrently or separately depending on the host plant. It is unknown how the virus is able to directly invade the embryo and cross the boundary between the parental and progeny generations in the ovule. Many plant species can be infected through seeds, including but not limited to members of the families Leguminosae , Solanaceae , Compositae , Rosaceae , Cucurbitaceae and Gramineae . Bean common mosaic virus is transmitted through seeds. There is tenuous evidence that a virus common to peppers, the Pepper Mild Mottle Virus (PMMoV), may have moved on to infect humans. [ 10 ] This is a rare and unlikely event as, to enter a cell and replicate, a virus must "bind to a receptor on its surface, and a plant virus would be highly unlikely to recognize a receptor on a human cell. One possibility is that the virus does not infect human cells directly. Instead, the naked viral RNA may alter the function of the cells through a mechanism similar to RNA interference , in which the presence of certain RNA sequences can turn genes on and off," according to virologist Robert Garry. [ 11 ] The intracellular life of plant viruses in hosts is still understudied, especially the earliest stages of infection .
[ 12 ] Many membranous structures which viruses induce plant cells to produce are motile, often being used to traffic new virions within the producing cell and into its neighbors. [ 12 ] Viruses also induce various changes to plants' own intracellular membranes . [ 12 ] The work of Perera et al. (2012) on mosquito virus infection, along with various other studies using yeast models of plant viruses, finds this to be due to changes in the homeostasis of the lipids that compose intracellular membranes, including increased synthesis . [ 12 ] These comparable lipid alterations inform expectations and research directions for the less well understood area of plant viruses. [ 12 ] 75% of plant viruses have genomes that consist of single-stranded RNA (ssRNA). 65% of plant viruses have +ssRNA, meaning that they are in the same sense orientation as messenger RNA , but 10% have -ssRNA, meaning they must be converted to +ssRNA before they can be translated. 5% are double-stranded RNA and so can be immediately translated like +ssRNA viruses. 3% require a reverse transcriptase enzyme to convert between RNA and DNA. 17% of plant viruses are ssDNA and very few are dsDNA; in contrast, a quarter of animal viruses are dsDNA and three-quarters of bacteriophages are dsDNA. [ 14 ] Viruses use the plant ribosomes to produce the 4-10 proteins encoded by their genome. However, since many of the proteins are encoded on a single strand (that is, the genome is polycistronic ), the ribosome will either produce only one protein, as it will terminate translation at the first stop codon , or a polyprotein will be produced. Plant viruses have had to evolve special techniques to allow the production of viral proteins by plant cells . For translation to occur, eukaryotic mRNAs require a 5' cap structure, which means that viral mRNAs must also have one. This normally consists of 7MeGpppN, where N is normally adenine or guanine .
The viruses encode a protein, normally a replicase , with a methyltransferase activity to allow this. Some viruses are cap-snatchers. During this process, a 7m G-capped host mRNA is recruited by the viral transcriptase complex and subsequently cleaved by a virally encoded endonuclease. The resulting capped leader RNA is used to prime transcription on the viral genome. [ 15 ] However, some plant viruses do not use a cap, yet translate efficiently due to cap-independent translation enhancers present in the 5' and 3' untranslated regions of viral mRNA. [ 16 ] Some viruses (e.g. tobacco mosaic virus (TMV)) have RNA sequences that contain a "leaky" stop codon. In TMV, 95% of the time the host ribosome will terminate the synthesis of the polypeptide at this codon, but the rest of the time it continues past it. This means that 5% of the proteins produced are larger than and different from the others normally produced, which is a form of translational regulation . In TMV, this extra sequence of polypeptide is an RNA polymerase that replicates its genome. Some viruses use the production of subgenomic RNAs to ensure the translation of all proteins within their genomes. In this process, the first protein encoded on the genome, which is also the first to be translated, is a replicase . This protein acts on the rest of the genome, producing negative-strand sub-genomic RNAs, then acts upon these to form positive-strand sub-genomic RNAs that are essentially mRNAs ready for translation. Some viral families, such as the Bromoviridae , instead opt to have multipartite genomes: genomes split between multiple viral particles. For infection to occur, the plant must be infected with all particles across the genome. For instance, Brome mosaic virus has a genome split between 3 viral particles, and all 3 particles with the different RNAs are required for infection to take place. Polyprotein processing is adopted by 45% of plant viruses, such as the Potyviridae and Tymoviridae .
[ 13 ] The ribosome translates a single protein from the viral genome. Within the polyprotein is an enzyme (or enzymes) with proteinase function that is able to cleave the polyprotein into the various single proteins or just cleave away the protease, which can then cleave other polypeptides producing the mature proteins. Besides involvement in the infection process, viral replicase is a directly necessary part of the packaging of RNA viruses' genetic material . This was expected due to replicase involvement already being confirmed in various other viruses. [ 17 ] The genome of Beet necrotic yellow vein virus (BNYVV) consists of five RNAs, each encapsidated into rod-shaped virus particles. RNA 1, which is 6746 nucleotides long, encodes a single open reading frame (ORF) that produces the 237 kDa protein P237. This protein is cleaved into P150 and P66 by a papain-like proteinase. RNA 2, 4612 nucleotides long, encodes six proteins, including movement proteins (P42, P13, P15), a coat protein (P21), and a regulatory protein (P14). RNA 3, 1775 nucleotides long, encodes P25, which is involved in symptom expression. RNA 4, 1431 nucleotides long, encodes P31, crucial for vector transmission. RNA 5, found in certain isolates, encodes P26 and is associated with more severe symptoms. [ 18 ] Plant viruses can be used to engineer viral vectors , tools commonly used by molecular biologists to deliver genetic material into plant cells ; they are also sources of biomaterials and nanotechnology devices. [ 19 ] [ 20 ] Knowledge of plant viruses and their components has been instrumental for the development of modern plant biotechnology. The use of plant viruses to enhance the beauty of ornamental plants can be considered the first recorded application of plant viruses. Tulip breaking virus is famous for its dramatic effects on the color of the tulip perianth , an effect highly sought after during the 17th-century Dutch " tulip mania ." 
Tobacco mosaic virus (TMV) and cauliflower mosaic virus (CaMV) are frequently used in plant molecular biology. Of special interest is the CaMV 35S promoter , which is a very strong promoter most frequently used in plant transformations . Viral vectors based on tobacco mosaic virus include those of the magnICON® and TRBO plant expression technologies. [ 20 ] Building on the market approvals and sales of recombinant virus-based biopharmaceuticals for veterinary and human medicine, the use of engineered plant viruses has been proposed to enhance crop performance and promote sustainable production. [ 21 ] Representative applications of plant viruses are listed below.
https://en.wikipedia.org/wiki/Plant_virus
Plantae Delavayanae: Plants from China collected in Yunnan by Father Delavay is a book by Adrien René Franchet and Père Jean Marie Delavay , with Franchet describing and establishing the taxonomy for flora found by Delavay in Yunnan . Père Jean Marie Delavay was a missionary sent to China for the Missions Etrangères de Paris (Foreign Missions of Paris) on an extended assignment in Yunnan. [ 1 ] While in France in 1881, he met Père Armand David , a natural history collector and fellow missionary, and was persuaded to take up David's role of collecting plant specimens in China for the Paris Museum of Natural History . [ 1 ] His meticulous methodology led to a prolific collection of plants, which included 200,000 specimens of 4,000 distinct species of flora. [ 2 ] As Delavay did not have extensive training in botany, he would collect specimens with even the most minor of differences, which led to the discovery of 1,500 new species of plants within his collections. [ 2 ] His work was only slowed when he contracted the bubonic plague in 1888, from which he only partially recovered. [ 1 ] Much of Delavay's collections that were sent to the Paris Museum of Natural History were processed by Adrien René Franchet. Franchet was a trained botanist focused on the authorship of taxonomy for the plant specimens arriving at the museum. [ 3 ] Franchet primarily worked on the taxonomy of the collections from French missionaries in China and Japan, including Delavay, David, Paul Guillaume Farges , and Jean-André Soulié . [ 3 ] Franchet published much of his taxonomy work in academic journals, including "Les Primula du Yun-nan" for the Bulletin de la Société botanique de France in 1885. [ 3 ] From 1889 to 1890, Franchet published Plantae Delavayanae. Plantes de Chine recueillies au Yun-nan par l'abbé Delavay . [ 1 ] "Plantae Delavayanae: Plants from China collected in Yunnan by Father Delavay" is a book focused on the taxonomy of Père Jean Marie Delavay's flora collection.
The text is written in Latin. [ 4 ] The book consists of 240 pages of text and 45 plates of illustrations. [ 5 ] The original copy consisted of three fascicles, with pages 1-80 and plates 1-15 released in 1889; pages 81-160 and plates 16-30 later released in 1889; and pages 161-240 and plates 31-45 released in 1890. [ 4 ] The book provided considerable credibility to Delavay's work in the field of botany. [ 2 ] The International Plant Names Index acknowledges that 142 plant names were originally published in the "Pl. Delavay". [ 6 ]
https://en.wikipedia.org/wiki/Plantae_Delavayanae
In the field of computational biology , a planted motif search (PMS), also known as a ( l, d )-motif search (LDMS), is a method for identifying conserved motifs within a set of nucleic acid or peptide sequences . PMS is known to be NP-complete . The time complexities of most of the planted motif search algorithms depend exponentially on the alphabet size and l . The PMS problem was first introduced by Keich and Pevzner. [ 1 ] The problem of identifying meaningful patterns (e.g., motifs) in biological data has been studied extensively, since such patterns play a vital role in understanding gene function and human disease , and may serve as therapeutic drug targets . The search problem may be summarized as follows: the input is n strings (s 1 , s 2 , ... , s n ) of length m each, over an alphabet Σ, together with two integers l and d. Find all strings x such that |x| = l and every input string contains at least one variant of x at a Hamming distance of at most d. Each such x is referred to as an (l, d) motif. For example, if the input strings are GCGCGAT, CACGTGA, and CGGTGCC; l = 3 and d = 1, then GGT is a motif of interest. Note that the first input string has GAT as a substring , the second input string has CGT as a substring, and the third input string has GGT as a substring. GAT is a variant of GGT that is within a Hamming distance of 1 from GGT, etc. The variants of a motif that occur in the input strings are called instances of the motif. For example, GAT is an instance of the motif GGT that occurs in the first input string. Zero or more ( l , d ) motifs are contained in any given set of input strings. Many of the known algorithms for PMS consider DNA strings, for which Σ = {G, C, T, A}. There exist algorithms that deal with protein strings as well. The PMS problem is also known as the ( l , d )-motif search (LDMS) problem. The following mathematical notation is often used to describe PMS algorithms.
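The worked example above can be verified with a minimal brute-force sketch in Python (the helper names `hamming` and `is_motif` are our own, not from the cited literature):

```python
def hamming(a, b):
    """Hamming distance between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def is_motif(x, strings, d):
    """True if every input string contains an l-mer within
    Hamming distance d of the candidate x."""
    l = len(x)
    return all(
        any(hamming(x, s[i:i + l]) <= d for i in range(len(s) - l + 1))
        for s in strings
    )

strings = ["GCGCGAT", "CACGTGA", "CGGTGCC"]
print(is_motif("GGT", strings, 1))  # True: instances GAT, CGT, GGT
print(is_motif("AAA", strings, 0))  # False: AAA never occurs exactly
```

Enumerating all |Σ|^l candidate l-mers and applying `is_motif` to each already solves the problem exactly, but in time exponential in l, which is what the specialized algorithms below improve upon.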
Assume that S = { s 1 , s 2 , s 3 , ..., s n } is the given set of input strings from an alphabet Σ. An l -mer of any string is nothing but a substring of the string of length l . Let d H (a, b) stand for the Hamming distance between any two l -mers a and b . Let a be an l -mer and s be an input string. Then, let d H (a, s) stand for the minimum Hamming distance between a and any l -mer b of s . If a is any l -mer and S is a set of input strings then let d H (a, S) stand for max sєS d H (a, s) . Let u be any l -mer. Then, the d -neighborhood of u , (denoted as B d (u) ), is nothing but the set of all the l -mers v such that d H (u, v) ≤ d . In other words, B d (u)={v: d H (u, v)≤d} . Refer to any such l -mer v as a d -neighbor of u . B d (x, y) is used to denote the common d -neighborhood of x and y , where x and y are two l -mers. B d (x, y) is nothing but the set of all l -mers that are within a distance of d from both x and y . Similarly, B d (x, y, z) , etc. can be defined. The scientific literature describes numerous algorithms for solving the PMS problem. These algorithms can be classified into two major types. Those algorithms that may not return the optimal answer(s) are referred to as approximation algorithms (or heuristic algorithms) and those that always return the optimal answer(s) are called exact algorithms. Examples of approximation (or heuristic) algorithms include Random Projection, [ 2 ] PatternBranching, [ 3 ] MULTIPROFILER, [ 1 ] CONSENSUS, [ 4 ] and ProfileBranching. [ 3 ] These algorithms have been experimentally demonstrated to perform well. The algorithm [ 2 ] is based on random projections. Let the motif M of interest be an l -mer and C be the collection of all the l -mers from all the n input strings. The algorithm projects these l -mers along k randomly chosen positions (for some appropriate value of k ). The projection of each l -mer may be thought of as an integer. 
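The d-neighborhood B_d(u) defined above can be generated explicitly. The sketch below (our own helper, assuming a DNA alphabet) enumerates every l-mer within Hamming distance d of u:

```python
from itertools import combinations, product

def d_neighborhood(u, d, alphabet="ACGT"):
    """B_d(u): the set of all l-mers v with d_H(u, v) <= d."""
    neighbors = set()
    for k in range(d + 1):
        # choose k positions to mutate, then try every letter there
        for positions in combinations(range(len(u)), k):
            for letters in product(alphabet, repeat=k):
                v = list(u)
                for pos, ch in zip(positions, letters):
                    v[pos] = ch
                neighbors.add("".join(v))
    return neighbors

print(len(d_neighborhood("GGT", 1)))  # 10: GGT itself plus 3 positions x 3 substitutions
```

For DNA, |B_d(u)| is the sum over k ≤ d of C(l, k)·3^k; for l = 3 and d = 1 this is 1 + 9 = 10, matching the output above.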
The projected values (which are k -mers) are grouped according to their integer values. In other words, hash all the l -mers, using the projected k -mer of each l -mer as its hash value. All the l -mers that have the same hash value fall into the same hash bucket. Since the instances of any ( l , d ) motif are similar to each other, many of these instances will fall into the same bucket. Note that the Hamming distance between any two instances of an ( l , d ) motif is no more than 2 d . The key idea of this algorithm is to examine those buckets that have a large number of l -mers in them. For each such bucket, an expectation maximization (EM) algorithm is used to check if an ( l , d ) motif can be found using the l -mers in the bucket. PatternBranching [ 3 ] is a local search algorithm . If u is any l -mer, then there are C( l , d )·3^ d l -mers that are d -neighbors of u for DNA strings, where C( l , d ) is the binomial coefficient. This algorithm starts from each l -mer u in the input, searches the neighbors of u , scores them appropriately and outputs the best scoring neighbor. Many exact algorithms are known for solving the PMS problem as well. Examples include the ones in (Martinez 1983), [ 5 ] (Brazma, et al. 1998), [ 6 ] (Galas, et al. 1985), [ 7 ] (Sinha, et al. 2000), [ 8 ] (Staden 1989), [ 9 ] (Tompa 1999), [ 10 ] (Helden, et al. 1998), [ 11 ] (Rajasekaran, et al.), [ 12 ] (Davila and Rajasekaran 2006), [ 13 ] (Davila, Balla, and Rajasekaran 2006), [ 14 ] Voting [ 15 ] and RISOTTO. [ 16 ] The WINNOWER algorithm [ 17 ] is a heuristic algorithm and it works as follows. If A and B are two instances of the same motif in two different input strings, then the Hamming distance between A and B is at most 2 d . It can be shown that the expected Hamming distance between A and B is 2 d − 4 d ^2/(3 l ). WINNOWER constructs a collection C of all possible l -mers in the input. A graph G(V,E) is constructed in which each l -mer of C will be a node.
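The bucketing step of Random Projection can be sketched as follows. This is a simplified illustration only: the published algorithm repeats many independent random projections and refines heavy buckets with EM, and all names here are our own:

```python
import random
from collections import defaultdict

def project_and_bucket(strings, l, k, seed=0):
    """Hash every l-mer in the input by the letters at k randomly
    chosen positions; instances of a planted motif, being similar
    to one another, tend to collide in the same bucket."""
    rng = random.Random(seed)
    positions = sorted(rng.sample(range(l), k))
    buckets = defaultdict(list)
    for s in strings:
        for i in range(len(s) - l + 1):
            lmer = s[i:i + l]
            key = "".join(lmer[p] for p in positions)
            buckets[key].append(lmer)
    return buckets

buckets = project_and_bucket(["GCGCGAT", "CACGTGA", "CGGTGCC"], l=3, k=2)
heavy = max(buckets.values(), key=len)  # a candidate bucket to refine with EM
```

Choosing k larger than d makes it likely that a projection avoids all mutated positions of some motif instances, so those instances share a bucket key.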
Two nodes u and v in G are connected by an edge if and only if the Hamming distance between u and v is at most 2 d and they come from two different input strings. If M is an ( l , d ) motif and if M 1 , M 2 , ..., and M n are instances of M in the input strings, then, clearly, these instances will form a clique in G . The WINNOWER algorithm has two phases. In the first phase, it identifies large cliques in G . In the second phase each such clique is examined to see if a motif can be extracted from it. Since the CLIQUE problem is intractable , WINNOWER uses a heuristic to solve CLIQUE. It iteratively constructs cliques of larger and larger sizes. If N = mn , then the run time of the algorithm is O(N^(2 d +1)). This algorithm runs in a reasonable amount of time in practice, especially for small values of d . Another algorithm, SP-STAR, [ 17 ] is faster than WINNOWER and uses less memory. The WINNOWER algorithm treats all the edges of G equally, without distinguishing between edges based on similarities. SP-STAR scores the l -mers of C as well as the edges of G appropriately and hence eliminates more edges than WINNOWER per iteration. (Bailey and Elkan, 1994) [ 18 ] employs expectation maximization algorithms while Gibbs sampling is used by (Lawrence et al., 1993). [ 19 ] MULTIPROFILER [ 1 ] and MEME [ 20 ] are also known PMS algorithms. In the last decade a series of algorithms with PMS as a prefix has been developed in the lab of Rajasekaran . Some of these algorithms are described below. PMS0 [ 12 ] works as follows. Let s 1 , s 2 , ..., s n be a given set of input strings, each of length m . Let C be the collection of l -mers in s 1 . Let C′ = ∪ u∈C B d ( u ). For each element v of C′ check whether it is a valid ( l , d )-motif. Given an l -mer v , this check can be made in O( mnl ) time.
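The graph-construction step of WINNOWER can be sketched directly from the description above (illustrative only; the node and edge representations are our own choices):

```python
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def winnower_graph(strings, l, d):
    """Nodes are (string index, l-mer) pairs; an edge joins two l-mers
    from different input strings at Hamming distance <= 2d. Instances
    of an (l, d) motif then form a clique, one node per input string."""
    nodes = [(si, s[i:i + l])
             for si, s in enumerate(strings)
             for i in range(len(s) - l + 1)]
    edges = {frozenset((u, v)) for u, v in combinations(nodes, 2)
             if u[0] != v[0] and hamming(u[1], v[1]) <= 2 * d}
    return nodes, edges

nodes, edges = winnower_graph(["GCGCGAT", "CACGTGA", "CGGTGCC"], 3, 1)
# the motif instances GAT, CGT and GGT are pairwise connected (a 3-clique)
```

The clique-finding and motif-extraction phases then operate on this graph; only the construction step is shown here.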
Thus the run time of PMS0, assuming an alphabet of size 4, is O(m²nl·C(l, d)·3^d). The algorithm PMS1 [12] is based on radix sorting. Let the motif M of interest be of length l. If M occurs in every input string, then any substring of M also occurs in every input string. Here occurrence means occurrence within a Hamming distance of d. It follows that there are at least l − k + 1 strings, each of length k (for k ≤ l), such that each of these occurs in every input string. Let Q be a collection of k-mers in M. Note that, in every input string si, there will be at least one position ij such that a k-mer of Q occurs starting from ij. Another k-mer of Q occurs starting from ij + 1, and so on, with the last k-mer occurring at ij + l − k. An l-mer can be obtained by combining these k-mers that occur starting from each such ij. PMS2 [12] works as follows. In the first phase, find all the (k, d) motifs present in all the input strings (for some appropriate value of k < l). In the second phase, look for (l − k + 1) of these (k, d) motifs that occur starting from successive positions in each of the input strings. From every such collection of (l − k + 1) (k, d)-motifs, an l-mer can be generated (if possible). Each such l-mer is a candidate (l, d)-motif. For each candidate motif, check whether it is an (l, d)-motif in O(mnl) time; the l-mer is returned as output if it is. A third algorithm [12] enables one to handle large values of d. Let d′ = d/2. Let M be the motif to be found, with |M| = l = 2l′ for some integer l′. Let M1 refer to the first half of M and M2 to the second half. Let s = a_1 a_2 ... a_m be one of the input strings. M occurs in every input string. Let the occurrence of M (within a Hamming distance of d) in s start at position i. Let s′ = a_i a_{i+1} ... a_{i+l′−1} and s′′ = a_{i+l′} ... a_{i+l−1}.
It is clear that either the Hamming distance between M1 and s′ is at most d′, or the Hamming distance between M2 and s′′ is at most d′. Thus either M1 or M2 occurs in every input string at a Hamming distance of at most d′. As a result, in at least n′ strings (where n′ = n/2), either M1 or M2 occurs with a Hamming distance of at most d′. The algorithm first obtains all the (l′, d′)-motifs that occur in at least n/2 of the input strings. It then uses these motifs and the above observations to identify all the (l, d)-motifs present in the input strings. The algorithm PMSprune [21] introduces a tree structure for the motif candidates and uses a branch-and-bound strategy to reduce the search space. Let S = {s1, s2, ..., sn} be a given set of input strings. PMSprune follows the same strategy as PMS0: for every l-mer y in s1, it generates the set of neighbors of y and, for each of them, checks whether it is a motif. PMS4 [22] is a technique that can be used to speed up any algorithm for the PMS problem. Many of the above algorithms have two phases: in the first phase a set of candidate motifs is generated, and in the second phase each candidate motif is checked for validity as an (l, d)-motif, which takes O(mnl) time per candidate. PMS4 employs a similar two-phase strategy. PMS5 [23] is an extension of PMS0. If S = {s1, s2, ..., sn} is a set of strings (not necessarily of the same length), let M_l^d(S) denote the set of (l, d)-motifs present in S. Let S′ = {s2, s3, ..., sn}. PMS5 computes the (l, d)-motifs of S as ∪_{L∈s1} M_l^d(L, S′), where L ranges over the l-mers of s1. One of the key steps in the algorithm is a subroutine to compute the common d-neighborhood of three l-mers.
Let x, y, z be any three l-mers. To compute B_d(x, y, z), PMS5 represents B_d(x) as a tree T_d(x). Each node in this tree represents an l-mer in B_d(x). The root of T_d(x) stands for the l-mer x. T_d(x) has a depth of d. Nodes of T_d(x) are traversed in a depth-first manner (a node and the l-mer it represents may be used interchangeably). While the tree is traversed, any node t is output if t is in B_d(y) ⋂ B_d(z). When any node t is visited, the algorithm checks whether there is a descendant t′ of t such that t′ is in B_d(y) ⋂ B_d(z); the subtree rooted at t is pruned if there is no such descendant. In PMS5, the problem of checking whether t has any descendant in B_d(y) ⋂ B_d(z) is formulated as an integer linear program (ILP) on ten variables. This ILP is solved in O(1) time. Solving the ILP instances is done as a preprocessing step and the results are stored in a lookup table. Algorithm PMS6 [24] is an extension of PMS5 that improves the preprocessing step and uses efficient hashing techniques to store the lookup tables; as a result, it is typically faster than PMS5. Given a set S = {s1, s2, ..., sn} of strings, and integers l, d, and q, an (l, d, q)-motif is defined to be a string M of length l that occurs in at least q of the n input strings within a Hamming distance of d. The qPMS (Quorum Planted Motif Search) problem is to find all the (l, d, q)-motifs present in the input strings. The qPMS problem captures the nature of motifs more precisely than the PMS problem does because, in practice, some motifs may not have motif instances in all of the input strings.
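The quorum condition that defines an (l, d, q)-motif translates directly into code. The following sketch (with hypothetical helper names, not taken from the cited papers) checks the validity of a candidate; with q equal to the number of input strings it reduces to the ordinary PMS validity check used by the algorithms above.

```python
def hamming(a, b):
    """Hamming distance between two equal-length strings."""
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def occurs_within(motif, s, d):
    """True iff motif occurs somewhere in s within Hamming distance d."""
    l = len(motif)
    return any(hamming(motif, s[i:i + l]) <= d for i in range(len(s) - l + 1))

def is_ldq_motif(M, strings, d, q):
    """M is an (l, d, q)-motif iff it occurs, within Hamming distance d,
    in at least q of the input strings."""
    return sum(occurs_within(M, s, d) for s in strings) >= q
```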
Any algorithm for solving the qPMS problem (when q ≠ n) is typically named with a prefix of q. qPMSPrune is one of the first algorithms to address this version of the PMS problem. [21] qPMSPrune exploits the following fact: if M is any (l, d, q)-motif of the input strings s1, s2, ..., sn, then there exists an i (with 1 ≤ i ≤ n − q + 1) and an l-mer x ∈ si such that M is in B_d(x) and M is an (l, d, q−1)-motif of the input strings excluding si. The algorithm processes every si, 1 ≤ i ≤ n. While processing si, it considers every l-mer x of si. When considering x, it constructs B_d(x) and identifies the elements of B_d(x) that are (l, d, q−1)-motifs (with respect to the input strings other than si). B_d(x) is represented as a tree with x as the root, and this tree is traversed in a depth-first manner. The algorithm does not traverse the entire tree: some of the subtrees are pruned using effective pruning conditions. In particular, a subtree is pruned if it can be inferred that none of the nodes in it carries a motif of interest. Algorithm qPMS7 [25] is an extension of qPMSPrune. Specifically, it is based on the following observation: if M is any (l, d, q)-motif of the input strings s1, s2, ..., sn, then there exist 1 ≤ i ≠ j ≤ n, an l-mer x ∈ si, and an l-mer y ∈ sj such that M is in B_d(x) ⋂ B_d(y) and M is an (l, d, q−2)-motif of the input strings excluding si and sj. The algorithm considers every possible pair (i, j), 1 ≤ i, j ≤ n and i ≠ j. For any pair (i, j), every possible pair of l-mers (x, y) is considered (where x is from si and y is from sj). While considering any x and y, the algorithm identifies all the elements of B_d(x) ⋂ B_d(y) that are (l, d, q−2)-motifs (with respect to the input strings other than si and sj).
An acyclic graph is used to represent and explore B_d(x) ⋂ B_d(y); call this graph G_d(x, y). G_d(x, y) is traversed in a depth-first manner. As in qPMSPrune, qPMS7 employs pruning conditions to prune subgraphs of G_d(x, y). RISOTTO [16] employs a suffix tree to identify the (l, d)-motifs. It is somewhat similar to PMS0: for every l-mer in s1, it generates the d-neighborhood, and for every l-mer in this neighborhood it walks through a suffix tree to check whether that l-mer is an (l, d)-motif. Voting [15] is similar to PMS1; instead of using radix sorting, it uses hashing to compute the Li's and their intersections. PMS algorithms are typically tested on random benchmark data generated as follows: twenty strings, each of length 600, are generated randomly from the alphabet of interest. The motif M is also generated randomly and planted in each of the input strings within a Hamming distance of d; the motif instances are likewise generated randomly. Certain instances of the (l, d)-motif problem have been identified as challenging. For a given value of l, the instance (l, d) is called challenging if d is the smallest integer for which the expected number of (l, d)-motifs that occur by random chance (in addition to the planted one) is one or more. For example, the following instances are challenging: (9, 2), (11, 3), (13, 4), (15, 5), (17, 6), (19, 7), etc. The performance of PMS algorithms is customarily reported only on challenging instances. Following is a table of running-time comparisons of different PMS algorithms on the challenging instances of DNA sequences for the special case q = n; this table is taken from the qPMS7 paper. [25] The algorithms compared are qPMSPrune, [21] qPMSPruneI, [25] Pampa, [26] Voting, [15] RISOTTO, [16] PMS5, [23] PMS6, [24] and qPMS7. [25] In the following table, the alphabet is Σ = {A, C, G, T}, n = 20, m = 600, and q = n = 20.
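The definition of a challenging instance can be checked with a short calculation. Under the usual independence approximation (an assumption, not a method from the cited papers), a fixed l-mer matches a given position of a random DNA string with probability p = Σ_{i=0}^{d} C(l, i)·3^i / 4^l, so the expected number of spurious (l, d)-motifs over all 4^l candidates can be estimated as sketched below, using the benchmark parameters n = 20 and m = 600 stated above.

```python
from math import comb

def expected_random_motifs(l, d, n=20, m=600):
    """Expected number of l-mers over {A,C,G,T} that occur by chance,
    within Hamming distance d, in all n random strings of length m
    (standard independence approximation)."""
    p = sum(comb(l, i) * 3**i for i in range(d + 1)) / 4**l
    q = 1 - (1 - p) ** (m - l + 1)  # P(candidate occurs somewhere in one string)
    return 4**l * q**n

def smallest_challenging_d(l, n=20, m=600):
    """Smallest d for which at least one spurious (l, d)-motif is expected,
    i.e. the d that makes the instance (l, d) challenging."""
    d = 0
    while expected_random_motifs(l, d, n, m) < 1:
        d += 1
    return d
```

This estimate reproduces the challenging instances listed above: smallest_challenging_d(9) gives 2 and smallest_challenging_d(11) gives 3.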
https://en.wikipedia.org/wiki/Planted_motif_search
A plantibody is an antibody produced by plants that have been genetically engineered with animal DNA encoding a specific human antibody known to neutralize a particular pathogen or toxin. The transgenic plants produce antibodies similar to their human counterparts, and following purification, plantibodies can be administered therapeutically to acutely ill patients or prophylactically to at-risk individuals (such as healthcare workers). The term plantibody was trademarked by the company Biolex. Passive immunization is a medical strategy long employed to provide temporary protection against pathogens. Early implementations involved recovering ostensibly cell-free plasma from the blood of human survivors or from non-human animals deliberately exposed to a specific pathogen or toxin. These approaches resulted in crude purifications of plasma-soluble proteins, including antibodies. Antibodies (also known as immunoglobulins) are complex proteins produced by vertebrates [1] that recognize antigens (molecular patterns) on pathogens and some dangerous compounds, alerting the adaptive immune system that there are pathogens within the body. [2] A plantibody is produced by insertion of genes encoding antibodies into a transgenic plant. The plantibodies are then modified by intrinsic plant mechanisms (N-glycosylation). [3] Plantibodies are purified from plant tissues by mechanical disruption and denaturation/removal of intrinsic plant proteins by treatment at high temperature and low pH, as antibodies tend to be stable under these conditions. Antibodies can be further purified away from other acid- and temperature-stable proteins by capture on commercially produced protein A resins. Production of antibodies in whole transgenic plants, such as species in the genus Nicotiana, is cheaper and safer than in cultured animal cells. [4] Transgenic plants offer an attractive method for large-scale production of antibodies for immunotherapy.
[5] Antibodies produced in plants have many advantages that are beneficial to humans, plants, and the economy. They can be purified cheaply and in large quantities. The many seeds of plants allow for ample storage, and they carry no risk of transmitting diseases to humans because the antibodies are produced without the need for the antigen or infectious microorganisms. Plants could be engineered to produce antibodies which fight off their own plant diseases and pests, for example nematodes, eliminating the need for toxic pesticides. Antibodies generated by plants are cheaper, easier to manage, and safer to use than those obtained from animals. [6] The applications are increasing because recombinant DNA (rDNA) is very useful for creating the desired proteins once introduced into a plant. Recombinant DNA is artificial DNA created by combining two or more sequences that would not normally occur together. In this way, DNA introduced into a plant becomes recombinant DNA that can be manipulated. The favorable properties of plants are likely to make plant systems a useful alternative for small, medium and large scale production throughout the development of new antibody-based pharmaceuticals. [7] The main reason plants are being used to produce antibodies is for the treatment of illnesses such as immune disorders, cancer, and inflammatory diseases, since plantibodies also carry no risk of spreading diseases to humans. [5] In the past two decades, research has shown that plant-derived antibodies have become easier to produce. [8] Plantibodies are close to passing clinical trials and becoming approved commercially for several reasons. [citation needed] Plants are more economical than most other systems for creating antibodies, and the technology for harvesting and maintaining them is already present. Plants also reduce the chance of coming in contact with pathogens, making their antibodies safer to use.
Plantibodies can be made at an affordable cost and manufactured more easily due to the availability and relatively easy manipulation of genetic information in crops such as potatoes, soybean, alfalfa, rice, wheat and tobacco. Commercial use is not yet legalized [where?], but clinical trials are underway to implement the use of plantibodies for humans as injections. So far, companies have started conducting human tests of pharmaceutical products, creating plantibodies that include: [9] By being able to genetically alter plants to create specific antibodies, it is easier to produce antibodies that will fight diseases not only in plants but in humans as well. For that reason, plantibody applications are likely to move further into the medicinal field.
https://en.wikipedia.org/wiki/Plantibody
A plantlet is a young or small plant, produced on the leaf margins or the aerial stems of another plant. [1] Many plants, such as spider plants, naturally create stolons with plantlets on the ends as a form of asexual reproduction. Vegetative propagules or clippings of mature plants may form plantlets; an example is mother of thousands. Many plants reproduce by throwing out long shoots or runners that can grow into new plants. Mother of thousands appears to have lost the ability to reproduce sexually and make seeds, but transferred at least part of the embryo-making process to the leaves to make plantlets. [dubious – discuss] [citation needed]
https://en.wikipedia.org/wiki/Plantlet
The growth of plants in outer space has elicited much scientific interest. [1] In the late 20th and early 21st century, plants were often taken into space in low Earth orbit to be grown in a weightless but pressurized controlled environment, sometimes called a space garden. [1] In the context of human spaceflight, they can be consumed as food and provide a refreshing atmosphere. [2] Plants can metabolize carbon dioxide in the air to produce valuable oxygen, and can help control cabin humidity. [3] Growing plants in space may also provide a psychological benefit to human spaceflight crews. [3] Usually the plants were part of studies or technical development to further develop space gardens or to conduct science experiments. [1] To date, plants taken into space have been mostly of scientific interest, with only limited contributions to the functionality of the spacecraft; the Apollo Moon tree project, however, was a more or less forestry-inspired mission, and the trees are part of a country's bicentennial celebration. The first challenge in growing plants in space is how to get plants to grow without gravity. [4] This runs into difficulties regarding the effects of gravity on root development, soil integration and watering without gravity, providing appropriate types of lighting, and other challenges. In particular, the nutrient supply to roots, the nutrient biogeochemical cycles, and the microbiological interactions in soil-based substrates are particularly complex, but have been shown to make space farming possible in hypo- and micro-gravity. [5] [6] NASA plans to grow plants in space to help feed astronauts and to provide psychological benefits for long-term space flight. [7] In 2017, aboard the ISS, the fifth crop of Chinese cabbage (Brassica rapa) grown in one plant growth device included an allotment for crew consumption, while the rest was saved for study.
[8] An early discussion of plants in space was the trees on the brick moon space station in the 1869 short story "The Brick Moon". [9] In the 2010s there was an increased desire for long-term space missions, which led to a desire for space-based plant production as food for astronauts. [10] An example of this is vegetable production on the International Space Station in Earth orbit. [10] By the year 2010, 20 plant growth experiments had been conducted aboard the International Space Station. [1] Several experiments have focused on how plant growth and distribution in micro-gravity space conditions compare with Earth conditions. This enables scientists to explore whether certain plant growth patterns are innate or environmentally driven. For instance, Allan H. Brown tested seedling movements aboard the Space Shuttle Columbia in 1983. Sunflower seedling movements were recorded while in orbit, and the seedlings still exhibited rotational growth and circumnutation despite the lack of gravity, showing that these behaviors are instinctual. [11] Other experiments have found that plants have the ability to exhibit gravitropism even in low-gravity conditions. For instance, the ESA's European Modular Cultivation System [12] enables experimentation with plant growth; acting as a miniature greenhouse, it allows scientists aboard the International Space Station to investigate how plants react in variable-gravity conditions. The Gravi-1 experiment (2008) utilized the EMCS to study lentil seedling growth and amyloplast movement along calcium-dependent pathways. [13] The results of this experiment found that the plants were able to sense the direction of gravity even at very low levels.
[14] A later experiment with the EMCS placed 768 lentil seedlings in a centrifuge to simulate various gravitational changes; this experiment, Gravi-2 (2014), showed that plants change calcium signalling towards root growth when grown at several gravity levels. [15] Many experiments take a more generalized approach, observing overall plant growth patterns as opposed to one specific growth behavior. One such experiment from the Canadian Space Agency, for example, found that white spruce seedlings grew differently in the microgravity space environment compared with Earth-bound seedlings; [16] the space seedlings exhibited enhanced growth from the shoots and needles, and also had randomized amyloplast distribution compared with the Earth-bound control group. [17] Food production is key to making space exploration feasible. Currently, the cost of sending food to the International Space Station (ISS) is estimated at US$20,000–40,000 per kg, with each crew member receiving ~1.8 kg of food (plus packaging) per day. Re-stocking a lunar-orbiting space station or a Mars habitat with food from Earth will be significantly more costly. The first trips to Mars are expected to be three-year round trips, and it has been estimated that a four-person crew would need 10,000–11,000 kg of food. [18] The first organisms in space were "specially developed strains of seeds" launched to 134 km (83 mi) on 9 July 1946 on a U.S.-launched V-2 rocket. These samples were not recovered. The first seeds launched into space and successfully recovered were maize seeds launched on 30 July 1946. Rye and cotton soon followed. These early suborbital biological experiments were handled by Harvard University and the Naval Research Laboratory and were concerned with radiation exposure on living tissue. [19] On September 22, 1966, Kosmos 110 launched with two dogs and moistened seeds.
Several of those seeds germinated, the first spaceflown seeds to do so, resulting in lettuce, cabbage and some beans that had greater yields than their controls on Earth. [20] In 1971, 500 tree seeds (loblolly pine, sycamore, sweetgum, redwood, and Douglas fir) were flown around the Moon on Apollo 14. These Moon trees were planted and grown alongside controls back on Earth, where no changes were detected. In 1982, the crew of the Soviet Salyut 7 space station conducted an experiment, prepared by Lithuanian scientists (Alfonsas Merkys and others), and grew some Arabidopsis using the Fiton-3 experimental micro-greenhouse apparatus; these became the first plants to flower and produce seeds in space. [22] [23] A Skylab experiment studied the effects of gravity and light on rice plants. [24] [25] The SVET-2 Space Greenhouse successfully achieved seed-to-seed plant growth in 1997 aboard space station Mir. [3] Bion 5 carried Daucus carota and Bion 7 carried maize (corn). Plant research continued on the International Space Station. The Biomass Production System was used during ISS Expedition 4. The Vegetable Production System (Veggie) was later used aboard the ISS. [26] Plants tested in Veggie before going into space included lettuce, Swiss chard, radishes, Chinese cabbage and peas. [27] Red romaine lettuce grown in space on Expedition 40 was harvested when mature, frozen and tested back on Earth. Expedition 44 members became the first American astronauts to eat plants grown in space on 10 August 2015, when their crop of red romaine was harvested. [28] Since 2003, Russian cosmonauts have been eating half of their crop while the other half goes towards further research. [29] In 2012, a sunflower bloomed aboard the ISS under the care of NASA astronaut Donald Pettit. [30] In January 2016, US astronauts announced that a zinnia had blossomed aboard the ISS.
[31] In 2017, the Advanced Plant Habitat (APH), a nearly self-sustaining plant growth system, was designed for the ISS in low Earth orbit. [32] The system is installed in parallel with another plant growth system aboard the station, VEGGIE; a major difference is that APH is designed to need less upkeep by humans. [32] APH is supported by the Plant Habitat Avionics Real-Time Manager. [32] Some plants that were to be tested in APH include dwarf wheat and Arabidopsis. [32] In December 2017, hundreds of seeds were delivered to the ISS for growth in the VEGGIE system. [33] APH is an important advancement in the understanding of plant growth in space, and therefore in the future of space exploration in general. [34] In 2018, the Veggie-3 experiment at the ISS was tested with plant pillows and root mats. [35] One of the goals is to grow food for crew consumption. [35] Crops tested at this time include cabbage, lettuce, and mizuna. [35] In 2018, the PONDS system for nutrient delivery in microgravity was tested. [36] In December 2018, the German Aerospace Center launched the EuCROPIS satellite into low Earth orbit. This mission carried two greenhouses intended to grow tomatoes under the simulated gravity of first the Moon and then Mars (6 months each), using by-products of human presence in space as a source of nutrients. When scientists activated the experiment, they found that the greenhouses were functional but the irrigation system was not; therefore the dormant seeds could not be used. [37] The Seedling Growth series of experiments, studying the mechanisms of tropisms and the cell cycle, was performed on the ISS between 2013 and 2017. [38] [39] These experiments used the model plant Arabidopsis thaliana and were a collaboration between NASA (John Z. Kiss as PI) and ESA (F. Javier Medina as PI).
[39] [40] On 30 November 2020, astronauts aboard the ISS collected the first harvest of radishes grown on the station. A total of 20 plants were collected and prepared for transportation back to Earth. There are currently plans to repeat the experiment and grow a second batch. [41] The Chang'e 4 lunar lander, in January 2019, carried a 3 kg (6.6 lb) sealed "biosphere" containing many seeds and insect eggs to test whether plants and insects could hatch and grow together in synergy. [42] The experiment included seeds of potatoes, tomatoes, and Arabidopsis thaliana (a flowering plant), as well as silkworm eggs. On January 15, 2019, it was reported that cotton seeds had grown in the biosphere, making cotton the first plant grown on the Moon. [43] [44] Environmental systems were in place to keep the container hospitable and Earth-like, except for the low lunar gravity. [45] It was hoped that if the eggs hatched, the larvae would produce carbon dioxide, while the germinated plants would release oxygen through photosynthesis, and that together the plants and silkworms could establish a simple synergy within the container. A miniature camera was to photograph any growth. The biological experiment was designed by 28 Chinese universities. [46] [47] In 2023 it was reported that the original 100-day experiment had been scaled back to 9 days; the insects did not hatch and the potatoes did not sprout. [48] The cotton survived for two days before succumbing to temperature changes. [49] Lunar soil has also been shown [verification needed] to support plant growth, in tests at a laboratory at the University of Florida. [50] These experiments showed that while Arabidopsis thaliana can germinate and grow in lunar soil, there are challenges to the plants' ability to thrive, as many were slow to develop. Plants that did germinate showed morphological and transcriptomic indications of stress.
[51] Many plants have been grown in space, and numerous experiments have involved them. The Vegetable Production System (Veggie) began operation in May 2014 aboard the ISS. [79]
https://en.wikipedia.org/wiki/Plants_in_space
A plantsman is an enthusiastic and knowledgeable gardener (amateur or professional), nurseryman or nurserywoman. [ 1 ] "Plantsman" can refer to a male or female person, though the terms plantswoman , [ 2 ] or even plantsperson , [ 3 ] are sometimes used. The word is sometimes said to be synonymous with " botanist " or " horticulturist ", but that would indicate a professional involvement, whereas "plantsman" reflects an attitude to (and perhaps even an obsession with) plants. A horticulturist may be a plantsman, but a plantsman is not necessarily a horticulturist. In the first edition (June 1979) of The Plantsman (a specialist magazine , published by the Royal Horticultural Society from 1994 until June 2019, when it was announced that the title would be changed to The Plant Review ), [ 4 ] Sandra Raphael (then a senior editor in the Dictionary Department of the Oxford University Press ) contributed a short article on the history and meaning of the word. Her first example came from an issue of the Gardeners' Chronicle of 1881, when it seemed to mean "A nurseryman, a florist" (in the early sense of "florist" as a grower and breeder of flowers, rather than the more recent meaning of someone who sells or arranges them). She added that a modern definition should point out that "plantsman" In her article, Raphael also quotes botanist David McClintock (writing in the Botanical Society of the British Isles ' BSBI News , December 1976) on how to distinguish a botanist from a plantsman, beginning with the simple definition: John Tradescant the elder ( ca 1570s–1638) and his son, John Tradescant the younger (1608–1662), must head the list of historic plantsmen. Charles de l'Ecluse , better known as Carolus Clusius (1526–1609), and Carl Linnaeus (1707–1778) are other examples. 
These early botanists, who certainly grew (and sometimes had also collected) many of the plants they described, can therefore be described as plantsmen (though such a term did not exist in their lifetimes). By contrast, adventurous plant-hunters such as David Douglas (1799–1834), who dedicated (and lost) his life to searching out and collecting plants from the wild, were seldom gardeners and rarely grew the plants they had collected, so perhaps do not count as plantsmen, despite their great knowledge and dedication. [ 6 ] Augustine Henry (1857–1930) was a pioneering plant-collector in Western China in the late 19th century who became a professor of forestry in later life. [ 7 ] On the other hand, EH Wilson (1876–1930), also famed for his work in China (to the extent that he was known as Ernest "Chinese" Wilson), began as a gardener and, after working at the Royal Botanic Gardens, Kew , became a plant collector, first for James Veitch & Sons (nurserymen) and later for the Arnold Arboretum . [ 8 ] Irish nurseryman William Baylor Hartland (1836–1912) specialised in daffodils in the late 19th century from his nursery in Cork. He was also an authority on apples. [ 9 ] Because of their in-depth knowledge, specialist plant-breeders may be considered as plantsmen in their own fields (though the term is often taken to imply a more encyclopaedic interest in a wide range of plants). Influential garden writers such as William Robinson (1838–1935) [ 10 ] [ 11 ] and garden-designer Gertrude Jekyll (1843–1932) [ 12 ] disseminated their knowledge of plants through their writing, as did a later generation of plant-lovers including Margery Fish (1892–1969) [ 13 ] and Vita Sackville-West (1892–1962), whose garden at Sissinghurst Castle , created with her husband Harold Nicolson , is now owned by the National Trust and one of the most popular in Britain. 
[ 14 ] Reginald Farrer (1880–1920) was a notable plant-hunter and influential writer in the more specialised area of alpine plants and rock gardening . [ 15 ] [ 16 ] Notable modern British plantsmen include Roy Lancaster , [ 17 ] [ 18 ] the late Christopher Lloyd of Great Dixter (1921–2006) [ 19 ] and the late Beth Chatto (1923–2018). [ 20 ] American nurserymen and plant-collectors who qualify for the title include plant-breeder Dan Heims of Terra Nova Nurseries (who styles himself a "hortiholic"), [ 21 ] Dan Hinkley , co-founder of Heronswood (now an independent author, lecturer and horticultural consultant), [ 22 ] and Tony Avent , owner of the renowned Plant Delights Nursery . [ 23 ] European candidates include the late Princess Greta Sturdza of Le Vasterival, near Dieppe ; [ 24 ] the late Robert and Jelena de Belder, the principal creators of Arboretum Kalmthout , Belgium; [ 25 ] and influential Dutch garden designer Piet Oudolf , who has pioneered the use of " prairie -style" planting [ 26 ] with bold drifts of perennials and grasses at gardens such as Scampston Hall , North Yorkshire and the RHS Garden, Wisley , Surrey in the UK and at Enköping in Sweden . [ 27 ] Oudolf is designing a Garden of Remembrance for the victims of 9/11 in Battery Park (New York) . Landscape architect Louis Benech of France is also a famous plantsman. [ 28 ]
https://en.wikipedia.org/wiki/Plantsman
Plant–animal interactions are important pathways for the transfer of energy within ecosystems, where both advantageous and unfavorable interactions support ecosystem health. [ 1 ] [ 2 ] Plant–animal interactions take on important ecological functions and manifest in a variety of favorable and unfavorable associations, for example predation, frugivory and herbivory, parasitism, and mutualism. [ 3 ] Without mutualistic relationships, some plants may not be able to complete their life cycles, and the animals that depend on them may starve from resource deficiency. [ 4 ] The earliest vascular plants appeared about 425 million years ago, during the Silurian period of the Paleozoic era. Nearly every feeding method an animal might employ to consume plants had already developed by the time the first herbivorous insects began consuming ferns during the Carboniferous period. [ 5 ] In the earliest known antagonistic relationships with plants, insects consumed plant pollen and spores. [ 6 ] Insects are known to have consumed nectar and pollinated flowers since about 300 million years ago. In the Mesozoic , between 200 and 150 million years ago, insects' feeding patterns started to diversify. [ 7 ] The evolution of plant defenses that reduce cost and increase resistance to herbivores is a crucial component of the Optimal Defense Hypothesis . To deal with the plant's adaptations, animals likewise evolved counter-adaptations. [ 8 ] Over the history of their shared evolution, plants and animals have significantly diverged, in large part because of productive co-evolutionary processes that emerged from antagonistic interactions. [ 9 ] [ 10 ] Mutualistic interactions between plants and insects have developed and disintegrated over the course of the evolution of angiosperms.
[ 11 ] Defoliation or root removal caused by herbivory can control or reduce the overall phytomass, but it can also promote species diversity and influence plant dispersion, which in turn affects ecological stability. [ 12 ] [ 13 ] In mutualistic relationships between pollinators and plants, the former receives food from the latter and in exchange acts as a plant propagation agent and a gene-transfer vector. [ 14 ] Studies examining the feeding behaviors of pollinators and their interactive role in maintaining ecosystems have clarified the intricate web of species-specificity, habitat choice, and coevolution between plants and their pollinators. [ 15 ] True mutualisms also promote development and provide pathogen protection. [ 16 ] Plant growth and development are aided by mutualistic interactions between animals and plants, such as those involving nematodes and insects . [ 17 ] Predation is a biological interaction where one organism, the predator, kills and eats another organism, its prey. There are carnivorous plants as well as herbivores and carnivores that consume plants and animals, respectively. Because the soil in which they grow is extremely poor in nutrients, carnivorous plants capture insects to obtain the extra nitrogen they need; they nevertheless continue to receive their energy from the sun by photosynthesis. [ 18 ] Parasitism is a close relationship between species, where one organism, the parasite, lives on or inside another organism, the host, causing it some harm, and is adapted structurally to this way of life. Sap-sucking insects such as aphids are commonly described as plant parasites. [ 19 ] Commensalism describes a situation in which one organism gains and the other is neither harmed nor benefited. [ 20 ] For instance, epiphytes on tree trunks in rain forests are aided by the trees, which provide a surface for their growth.
Unless the epiphytes' weight becomes so great that tree branches break, the epiphytes do not seem to have any effect on the trees. [ 21 ] Mutualism arises when both species gain from their interaction. The relationship between pollinators and plants is a classic illustration. In this instance, the animal pollinator (bee, butterfly, beetle, hummingbird, etc.) receives nourishment (usually nectar or pollen) in exchange for carrying the plants' pollen from flower to flower. Another common mutualism is seed dispersal, an alliance between the plant and the animal that disperses its seeds. Numerous animals consume the tasty fruit that encases the seeds, which are subsequently deposited in a new spot some distance from the parent plant, frequently with feces that also serve as a small amount of fertilizer. Interactions of this nature between species occur in every ecosystem. [ 22 ] [ 23 ] [ 24 ]
https://en.wikipedia.org/wiki/Plant–animal_interaction
Plant–fungus horizontal gene transfer is the movement of genetic material between individuals in the plant and fungus kingdoms . Horizontal gene transfer is universal in fungi , viruses , bacteria , and other eukaryotes . [ 1 ] Horizontal gene transfer research often focuses on prokaryotes because of the abundant sequence data from diverse lineages, and because it is assumed not to play a significant role in eukaryotes. [ 2 ] Most plant–fungus horizontal gene transfer events are ancient and rare, but they may have provided important gene functions leading to wider substrate use and habitat spread for plants and fungi. [ 3 ] Since these events are rare and ancient, they have been difficult to detect and remain relatively unknown. [ 4 ] Plant–fungus interactions could play a part in a multi-horizontal gene transfer pathway among many other organisms. [ 5 ] Fungus–plant-mediated horizontal gene transfer can occur via phagotrophic mechanisms (mediated by phagotrophic eukaryotes) and nonphagotrophic mechanisms. Nonphagotrophic mechanisms have been seen in the transmission of transposable elements , plastid -derived endosymbiotic gene transfer, prokaryote-derived gene transfer, Agrobacterium tumefaciens -mediated DNA transfer, cross-species hybridization events, and gene transfer between mitochondrial genes. [ 3 ] Horizontal gene transfer could bypass eukaryotic barrier features like linear chromatin -based chromosomes , intron – exon gene structures, and the nuclear envelope . [ 6 ] Horizontal gene transfer occurs between microorganisms sharing overlapping ecological niches and associations like parasitism or symbiosis . Ecological association can facilitate horizontal gene transfer in plants and fungi and is an unstudied factor in shared evolutionary histories. Most horizontal gene transfers from fungi into plants predate the rise of land plants.
A greater genomic inventory of gene family and taxon sampling has been identified as a desirable prerequisite for identifying further plant–fungus events. [ 4 ] Evidence for gene transfer between fungi and plants is discovered indirectly, in unusual features of genetic elements. These features include: inconsistency between phylogenies across genetic elements, high DNA or amino acid similarity between phylogenetically distant organisms, irregular distribution of genetic elements in a variety of species, similar genes shared among species within a specific habitat or geography independent of their phylogenetic relationship, and gene characteristics inconsistent with the resident genome such as high guanine and cytosine content, codon usage, and introns. [ 4 ] Alternative hypotheses and explanations for such findings include erroneous species phylogenies, inappropriate comparison of paralogous sequences, sporadic retention of shared ancestral characteristics, uneven rates of character change in other lineages, and introgressive hybridization . [ 4 ] The "complexity hypothesis" is a different approach to understanding why informational genes are transferred less successfully than operational genes. It has been proposed that informational genes are part of larger, more conglomerate systems, while operational genes are less complex, allowing them to be horizontally transferred at higher frequencies. The hypothesis incorporates the "continual hypothesis", which states that horizontal gene transfer is constantly occurring in operational genes. [ 7 ] Plant–fungus horizontal gene transfer could take place during plant infection. There are many possible vectors, such as plant–fungus–insect interactions. The ability of fungi to infect other organisms provides this possible pathway. [ 5 ] A fungus–plant pathway has been demonstrated in rice ( Oryza sativa ) through ancestral lineages.
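One of the genomic signatures listed above, GC content inconsistent with the resident genome, can be screened for with a short script. This is an illustrative sketch only, not an established pipeline: the gene set, the genome-wide GC value, and the 10% tolerance are all hypothetical choices, and real HGT analyses combine such screens with codon-usage statistics and phylogenetic tests.

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def flag_gc_outliers(genes, genome_gc, tolerance=0.10):
    """Flag genes whose GC fraction deviates from the genome-wide mean
    by more than `tolerance`.  A deliberately crude first-pass screen
    for horizontal-transfer candidates.

    genes     -- dict mapping gene id to DNA sequence (hypothetical input)
    genome_gc -- genome-wide GC fraction, e.g. 0.45
    """
    return {gid: gc_content(s) for gid, s in genes.items()
            if abs(gc_content(s) - genome_gc) > tolerance}
```

Against a genome averaging 45% GC, for instance, a gene at 65% GC would be flagged for closer phylogenetic inspection, while genes near the genome-wide mean would pass through.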
A phylogeny was constructed from 1689 identified genes and all homologs available from the rice genome (3177 gene families). Fourteen candidate plant–fungus horizontal gene transfer events were identified, nine of which showed infrequent patterns of transfer between plants and fungi. From the phylogenetic analysis, horizontal gene transfer events could have contributed to the L-fucose permease sugar transporter, zinc binding alcohol dehydrogenase , membrane transporter , phospholipase / carboxylesterase , iucA / iucC family protein in siderophore biosynthesis, DUF239 domain protein, phosphate-response 1 family protein, a hypothetical protein similar to a zinc finger (C2H2-type) protein, and another conserved hypothetical protein. [ 3 ] Some plants may have obtained the shikimate pathway from symbiotic fungi. Plant shikimate pathway enzymes share similarities with prokaryote homologs and could have ancestry from a plastid progenitor genome. It is possible that the shikimate pathway and the pentafunctional arom have their ancient origins in eukaryotes or were conveyed by eukaryote–eukaryote horizontal gene transfer. The evolutionary history of the pathway could have been influenced by a prokaryote-to-eukaryote gene transfer event. Ascomycete fungi along with zygomycetes , basidiomycetes , apicomplexa , ciliates , and oomycetes retained elements of an ancestral pathway given through the bikont/unikont eukaryote root. [ 8 ] Fungi and bacteria could have contributed to the phenylpropanoid pathway in ancestral land plants for the synthesis of flavonoids and lignin through horizontal gene transfer. Phenylalanine ammonia lyase (PAL) is known to be present in fungi, such as Basidiomycota yeast like Rhodotorula and Ascomycota such as Aspergillus and Neurospora . These fungi participate in the catabolism of phenylalanine for carbon and nitrogen. PAL in some plants and fungi also has tyrosine ammonia lyase (TAL) activity, producing p-coumaric acid en route to p-coumaroyl-CoA .
PAL likely emerged from bacteria in an antimicrobial role. Horizontal gene transfer took place between a pre- Dikarya divergent fungal lineage and a Nostocales or soil-sediment bacterium through symbiosis. The fungal PAL was then transferred to an ancestor of a land plant by an ancient arbuscular mycorrhizal symbiosis and later developed into the phenylpropanoid pathway during land plant colonization. PAL enzymes in early bacteria and fungi could have contributed to protection against ultraviolet radiation , acted as a light-capturing pigment, or assisted in antimicrobial defense. [ 9 ] Sterigmatocystin gene transfer has been observed between Podospora anserina and Aspergillus . Horizontal gene transfer in Aspergillus and Podospora contributed to fungal metabolic diversity in secondary metabolism. Aspergillus nidulans produces sterigmatocystin – a precursor to aflatoxins . Aspergillus was found to have horizontally transferred genes to Podospora anserina . Podospora and Aspergillus show high conservation and microsynteny in their sterigmatocystin/aflatoxin clusters, along with intergenic regions containing 14 binding sites for AflR , a transcription factor for the activation of sterigmatocystin/aflatoxin biosynthetic genes. The Aspergillus -to- Podospora event represents a large metabolic gene transfer which could have contributed to fungal metabolic diversity. Transposable elements and other mobile genetic elements like plasmids and viruses could allow for chromosomal rearrangement and integration of foreign genetic material. Horizontal gene transfer could have significantly contributed to fungal genome remodeling and metabolic diversity. [ 10 ] In Stagonospora and Pyrenophora , as well as in Fusarium and Alternaria , horizontal gene transfer provides a powerful mechanism for fungi to acquire pathogenic capabilities to infect a new host plant. Horizontal gene transfer and interspecific hybridization between pathogenic species allow for hybrid offspring with an expanded host range.
This can cause disease outbreaks on new crops when an encoded protein is able to cause pathogenicity. [ 11 ] The interspecific transfer of virulence factors in fungal pathogens has been shown between Stagonospora nodorum and Pyrenophora tritici-repentis , where a host-selective toxin from S. nodorum conferred virulence to P. tritici-repentis on wheat. [ 12 ] In Fusarium , a nonpathogenic strain was experimentally converted into a pathogen, and the transfer of large genome portions could have contributed to pathogen adaptation. Fusarium graminearum , Fusarium verticillioides , and Fusarium oxysporum are maize and tomato pathogens that produce fumonisin mycotoxins that contaminate grain. These examples highlight the apparently polyphyletic origins of host specialization and the emergence of new pathogenic lineages from distinct genetic backgrounds. [ 13 ] The ability to transfer genetic material could increase disease in susceptible plant populations.
https://en.wikipedia.org/wiki/Plant–fungus_horizontal_gene_transfer
Plant–soil feedback is a process where plants alter the biotic and abiotic qualities of soil they grow in, which then alters the ability of plants to grow in that soil in the future. [ 1 ] [ 2 ] Negative plant–soil feedback occurs when plants are less able to grow in soil that was previously occupied by a member of the same species, and positive plant–soil feedback occurs when plants are more able to grow in soil that was previously occupied by a member of the same species. [ 2 ] Although it was originally assumed that negative plant–soil feedback was caused by plants depleting the soil of nutrients , recent work has suggested that a major cause of plant–soil feedback is a buildup of soil-borne pathogens . [ 1 ] Mutualism and allelopathy are also thought to cause plant–soil feedback. Studies have shown that, on average, plant–soil feedback tends to be negative; [ 3 ] however, there have been many notable exceptions, such as many invasive species. [ 4 ] Negative plant–soil feedback is thought to be an important factor in helping plants to coexist . If a plant is over-abundant, then soil pathogens and other negative factors will become common, hurting its growth. [ 2 ] Similarly, if a plant becomes overly rare, then so too will its soil pathogens and other negative factors, helping its growth. [ 2 ] This negative feedback will help populations to stay in the community. Negative plant–soil feedback has been called a particular case of the Janzen–Connell hypothesis . [ 5 ] Plant–soil feedback is best measured using Bever's interaction coefficient, I s . [ 2 ] This value quantifies how much each plant's growth is limited by its own soil community compared to how much it limits others. It is for two-species comparisons. To measure this quantity, one must measure the growth of two plants, both in soil conditioned by members of their own species ( G x (home) for species x ), and in soil conditioned by members of the other species ( G x (away) for plant species x ). 
Then, the interaction coefficient is calculated as I s = G A (home) − G A (away) + G B (home) − G B (away). If I s is negative, it means that both species grow worse in their own soil than in their competitor's soil, and therefore plant–soil feedback helps these species to coexist. [ 2 ]
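Given the home and away growth measurements described above, the coefficient can be computed directly. A minimal sketch, using made-up biomass values purely for illustration:

```python
def interaction_coefficient(ga_home, ga_away, gb_home, gb_away):
    """Bever's two-species interaction coefficient I_s.

    ga_home -- growth of species A in soil conditioned by A (its "home" soil)
    ga_away -- growth of species A in soil conditioned by species B
    gb_home -- growth of species B in soil conditioned by B
    gb_away -- growth of species B in soil conditioned by species A

    I_s < 0 means both species do relatively worse at home -- the negative
    feedback that favours coexistence.
    """
    return (ga_home - ga_away) + (gb_home - gb_away)

# Illustrative (made-up) measurements: species A grows to 2 g at home
# vs 5 g away; species B grows to 3 g at home vs 4 g away.
# I_s = (2 - 5) + (3 - 4) = -4, i.e. negative feedback.
```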
https://en.wikipedia.org/wiki/Plant–soil_feedback
Plaque hybridization is a technique used in molecular biology for the identification of recombinant phages . [ 1 ] The procedure can also be used for the detection of differentially represented repetitive DNA. The technique (similar to colony hybridization ) involves hybridizing isolated phage DNA to a labeled probe for the gene of study. This is followed by autoradiography to detect the position of the label. [ 2 ] Plaque hybridization has some advantages over colony hybridization because the DNA binds to a smaller and more sharply defined area of the filter. [ 3 ]
https://en.wikipedia.org/wiki/Plaque_hybridization
The plaque reduction neutralization test is used to quantify the titer of neutralizing antibody for a virus . [ 1 ] [ 2 ] The serum sample or solution of antibody to be tested is diluted and mixed with a viral suspension, which is incubated to allow the antibody to react with the virus. This mixture is poured over a confluent monolayer of host cells, and the surface of the cell layer is covered in a layer of agar or carboxymethyl cellulose to prevent the virus from spreading indiscriminately. The concentration of plaque forming units can be estimated from the number of plaques (regions of infected cells) formed after a few days. Depending on the virus, the plaque forming units are measured by microscopic observation, fluorescent antibodies or specific dyes that react with infected cells. [ 3 ] The serum concentration required to reduce the number of plaques by 50% compared with the serum-free virus control gives a measure of how much antibody is present and how effective it is. This measurement is denoted as the PRNT 50 value. Currently it is considered to be the "gold standard" for detecting and measuring antibodies that can neutralise the viruses that cause many diseases. [ 4 ] [ 5 ] It has a higher sensitivity than other tests like hemagglutination and many commercial enzyme immunoassays without compromising specificity . Moreover, it is more specific than other serological methods for the diagnosis of some arboviruses . However, the test is relatively cumbersome and time-intensive (taking a few days) relative to EIA kits that give quick results (usually several minutes to a few hours). [ citation needed ] An issue with this assay that has recently been identified is that the neutralization ability of the antibodies depends on the virion maturation state and the cell type used in the assay.
[ 6 ] Therefore, if the wrong cell line is used for the assay, the antibodies may appear to have neutralization ability when they actually do not, or, conversely, they may appear ineffective when they actually possess neutralization ability. [ citation needed ]
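The PRNT 50 endpoint described above can be estimated from a dilution series by finding the dilution at which plaque counts fall to half of the serum-free control. The sketch below assumes a simple log-linear interpolation between the two bracketing dilutions; laboratories use a variety of curve-fitting methods, so this is illustrative rather than a standard protocol.

```python
import math

def prnt50(control, dilutions, counts):
    """Estimate the PRNT50 endpoint: the reciprocal serum dilution at
    which plaque counts fall to 50% of the serum-free virus control.

    control   -- plaque count in the virus-only control well
    dilutions -- reciprocal serum dilutions in increasing order (e.g. 20, 40, 80)
    counts    -- plaque counts observed at each dilution

    Interpolates on a log10 dilution scale between the two dilutions
    that bracket the 50% level; returns None if the level is never bracketed.
    """
    target = 0.5 * control
    pairs = list(zip(dilutions, counts))
    for (d1, c1), (d2, c2) in zip(pairs, pairs[1:]):
        if c1 <= target <= c2:  # counts rise as the serum is diluted out
            if c1 == c2:
                return float(d1)
            frac = (target - c1) / (c2 - c1)
            logd = math.log10(d1) + frac * (math.log10(d2) - math.log10(d1))
            return 10 ** logd
    return None
```

For example, with a control of 100 plaques and counts of 10, 30, 50 and 90 at reciprocal dilutions 20, 40, 80 and 160, the 50% reduction level is reached at a titer of 80.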
https://en.wikipedia.org/wiki/Plaque_reduction_neutralization_test
Plascore Incorporated manufactures honeycomb core, cleanrooms, and composite panels marketed under the brand Plascore. Honeycomb is used in aerospace, marine, military, safety, transportation, and other applications. [ 1 ] When honeycomb is sandwiched between two surfaces, it effectively creates a distance between the two surfaces, more or less like an I-beam. The resulting composite structure exhibits a high strength-to-weight ratio and high shear strength. Shear strength is a material's ability to resist structural failure under shear loading. Plascore honeycomb is designed to increase shear strength while adding minimal additional weight. [ 2 ] Plascore is a global organization, with a 185,000 sq. ft. headquarters and three additional manufacturing facilities in Zeeland, Michigan . Those three facilities are 50,000, 40,000 and 80,000 sq. ft. in size. The company also has an 85,000 sq. ft. manufacturing plant in Waldlaubersheim , Germany, and sales offices throughout the world. Plascore is AS/EN/JISQ9100, ISO 14001:2004, and ISO 9001:2008 certified, and the company's PP Honeycomb has received a Lloyds Register Certificate of Approval as a Core Material. Plascore is also AS9100 certified, [ 3 ] an industry standard required by the majority of major aircraft manufacturers that provides verification of consistent aerospace product quality. Plascore was founded in 1977 as a manufacturer of honeycomb core sold to various value-added manufacturers. By the 1980s, Plascore’s product development and manufacturing capabilities led the company into value-added markets, including cleanroom walls, ceiling and door systems for the semiconductor and pharmaceutical sectors; panels for transportation products and building materials; and energy absorbers for various markets. In 2002, Plascore received an award from CERN for its role in the design and manufacture of large honeycomb panels used in detection chambers.
[ 4 ] In August 2009, Plascore received tax abatements from the city of Zeeland for new equipment and machinery installed at its facilities in Zeeland. [ 5 ] With the increase in fuel costs and the growing need for lighter structure products, honeycomb gained increasing popularity in the marine industry, where it allows boat designers to add strength to hull manufacturing and also save weight, for example by using thin layers of stone laminated to honeycomb for luxury yacht countertops rather than solid stone. [ 6 ] In 2012, Plascore constructed a new facility for increased Nomex manufacturing capabilities.
https://en.wikipedia.org/wiki/Plascore_Incorporated
Plasma-enhanced chemical vapor deposition ( PECVD ) is a chemical vapor deposition process used to deposit thin films from a gas state ( vapor ) to a solid state on a substrate . Chemical reactions are involved in the process, which occur after creation of a plasma of the reacting gases. The plasma is generally created by radio-frequency (RF) alternating current (AC) or direct current (DC) discharge between two electrodes , the space between which is filled with the reacting gases. A plasma is any gas in which a significant percentage of the atoms or molecules are ionized. Fractional ionization in plasmas used for deposition and related materials processing varies from about 10 −4 in typical capacitive discharges to as high as 5–10% in high-density inductive plasmas. Processing plasmas are typically operated at pressures of a few millitorr to a few torr , although arc discharges and inductive plasmas can be ignited at atmospheric pressure. Plasmas with low fractional ionization are of great interest for materials processing because electrons are so light, compared to atoms and molecules, that energy exchange between the electrons and neutral gas is very inefficient. Therefore, the electrons can be maintained at very high equivalent temperatures – tens of thousands of kelvins, equivalent to several electronvolts average energy – while the neutral atoms remain at the ambient temperature. These energetic electrons can induce many processes that would otherwise be very improbable at low temperatures, such as dissociation of precursor molecules and the creation of large quantities of free radicals. The second benefit of deposition within a discharge arises from the fact that electrons are more mobile than ions. As a consequence, the plasma is normally more positive than any object it is in contact with, as otherwise a large flux of electrons would flow from the plasma to the object.
The voltage difference between the plasma and the objects in contact with it normally occurs across a thin sheath region. Ionized atoms or molecules that diffuse to the edge of the sheath region feel an electrostatic force and are accelerated towards the neighboring surface. Thus, all surfaces exposed to the plasma receive energetic ion bombardment. The potential across the sheath surrounding an electrically isolated object (the floating potential) is typically only 10–20 V, but much higher sheath potentials are achievable by adjustments in reactor geometry and configuration. Thus, films can be exposed to energetic ion bombardment during deposition. This bombardment can lead to increases in the density of the film and help remove contaminants, improving the film's electrical and mechanical properties. When a high-density plasma is used, the ion density can be high enough that significant sputtering of the deposited film occurs; this sputtering can be employed to help planarize the film and fill trenches or holes. A simple DC discharge can be readily created at a few torr between two conductive electrodes, and may be suitable for deposition of conductive materials. However, insulating films will quickly extinguish this discharge as they are deposited. It is more common to excite a capacitive discharge by applying an AC or RF signal between an electrode and the conductive walls of a reactor chamber, or between two cylindrical conductive electrodes facing one another. The latter configuration is known as a parallel plate reactor. Frequencies of a few tens of hertz to a few thousand hertz will produce time-varying plasmas that are repeatedly initiated and extinguished; frequencies of tens of kilohertz to tens of megahertz result in reasonably time-independent discharges. Excitation frequencies in the low-frequency (LF) range, usually around 100 kHz, require several hundred volts to sustain the discharge. These large voltages lead to high-energy ion bombardment of surfaces.
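The 10–20 V floating potential quoted above can be estimated from the standard collisionless-sheath result for a plasma with Maxwellian electrons, V_f ≈ (T_e/2) ln(m_i / 2π m_e), with the electron temperature T_e expressed in electronvolts. A quick sketch (the 3 eV argon example is illustrative, not tied to any particular reactor):

```python
import math

M_E = 9.109e-31   # electron mass, kg
AMU = 1.661e-27   # atomic mass unit, kg

def floating_potential(te_ev, ion_mass_amu):
    """Approximate floating potential (volts) of an electrically
    isolated surface, from the textbook collisionless-sheath estimate

        V_f ~ (T_e / 2) * ln( m_i / (2 * pi * m_e) )

    with T_e in electronvolts.  An order-of-magnitude approximation,
    not a device-specific prediction.
    """
    m_i = ion_mass_amu * AMU
    return 0.5 * te_ev * math.log(m_i / (2.0 * math.pi * M_E))

# A 3 eV argon discharge (ion mass 40 amu) gives roughly 14 V,
# consistent with the 10-20 V range typical of processing plasmas.
```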
High-frequency plasmas are often excited at the standard 13.56 MHz frequency widely available for industrial use; at high frequencies, the displacement current from sheath movement and scattering from the sheath assist in ionization, and thus lower voltages are sufficient to achieve higher plasma densities. Thus one can adjust the chemistry and ion bombardment in the deposition by changing the frequency of excitation, or by using a mixture of low- and high-frequency signals in a dual-frequency reactor. Excitation power of tens to hundreds of watts is typical for an electrode with a diameter of 200 to 300 mm. Capacitive plasmas are usually very lightly ionized, resulting in limited dissociation of precursors and low deposition rates. Much denser plasmas can be created using inductive discharges, in which an inductive coil excited with a high-frequency signal induces an electric field within the discharge, accelerating electrons in the plasma itself rather than just at the sheath edge. Electron cyclotron resonance reactors and helicon wave antennas have also been used to create high-density discharges. Excitation powers of 10 kW or more are often used in modern reactors. High density plasmas can also be generated by a DC discharge in an electron-rich environment, obtained by thermionic emission from heated filaments. The voltages required by the arc discharge are of the order of a few tens of volts , resulting in low energy ions. The high density, low energy plasma is exploited for the epitaxial deposition at high rates in low-energy plasma-enhanced chemical vapor deposition reactors. Working at Standard Telecommunication Laboratories (STL), Harlow, Essex, R C G Swann discovered that RF discharge promoted the deposition of silicon compounds onto the quartz glass vessel wall. [ 1 ] Several internal STL publications were followed in 1964 by French, [ 2 ] British [ 3 ] and US [ 4 ] patent applications. 
An article was published in the August 1965 volume of Solid State Electronics. [ 5 ] The technique represented a breakthrough in the deposition of thin films of amorphous silicon, silicon nitride, and silicon dioxide at temperatures significantly lower than those required by pyrolytic chemistry. Plasma deposition is often used in semiconductor manufacturing to deposit films conformally (covering sidewalls) and onto wafers containing metal layers or other temperature-sensitive structures. PECVD also yields some of the fastest deposition rates while maintaining film quality (such as roughness, defects/voids), as compared with sputter deposition and thermal/electron-beam evaporation, often at the expense of uniformity. Silicon dioxide can be deposited using a combination of silicon precursor gases like dichlorosilane or silane and oxygen precursors, such as oxygen and nitrous oxide , typically at pressures from a few millitorr to a few torr. Plasma-deposited silicon nitride , formed from silane and ammonia or nitrogen , is also widely used, although it is important to note that it is not possible to deposit a pure nitride in this fashion. Plasma nitrides always contain a large amount of hydrogen , which can be bonded to silicon (Si-H) or nitrogen (Si-NH); [ 6 ] this hydrogen has an important influence on IR and UV absorption, [ 7 ] stability, mechanical stress, and electrical conductivity. [ 8 ] This is often used as a surface and bulk passivating layer for commercial multicrystalline silicon photovoltaic cells. [ 9 ] Silicon dioxide can also be deposited from a tetraethylorthosilicate (TEOS) silicon precursor in an oxygen or oxygen-argon plasma. These films can be contaminated with significant carbon and hydrogen as silanol , and can be unstable in air [ citation needed ] .
Pressures of a few torr and small electrode spacings, and/or dual frequency deposition, are helpful to achieve high deposition rates with good film stability. High-density plasma deposition of silicon dioxide from silane and oxygen/argon has been widely used to create a nearly hydrogen-free film with good conformality over complex surfaces, the latter resulting from intense ion bombardment and consequent sputtering of the deposited molecules from vertical onto horizontal surfaces [ citation needed ] .
https://en.wikipedia.org/wiki/Plasma-enhanced_chemical_vapor_deposition
In nuclear fusion power research, the plasma-facing material ( PFM ) is any material used to construct the plasma-facing components ( PFC ), those components exposed to the plasma within which nuclear fusion occurs, and particularly the material used for lining the first wall or divertor region of the reactor vessel . Plasma-facing materials for fusion reactor designs must support the overall steps of energy generation, and they have to operate over the lifetime of a fusion reactor vessel while withstanding harsh environmental conditions. Currently, fusion reactor research focuses on improving efficiency and reliability in heat generation and capture and on raising the rate of transfer. Generating electricity from heat is beyond the scope of current research, because efficient heat-transfer cycles already exist, such as heating water to operate steam turbines that drive electrical generators. Current reactor designs are fueled by deuterium–tritium (D-T) fusion reactions, which produce high-energy neutrons that can damage the first wall; [ 1 ] however, high-energy neutrons (14.1 MeV) are needed for blanket and tritium breeder operation . Tritium is not a naturally abundant isotope due to its short half-life; therefore, for a D-T fusion reactor it will need to be bred by the nuclear reaction of lithium (Li), boron (B), or beryllium (Be) isotopes with high-energy neutrons that collide within the first wall. [ 2 ] Most magnetic confinement fusion devices (MCFD) consist of several key components in their technical designs. The core fusion plasma must not actually touch the first wall. ITER and many other current and projected fusion experiments, particularly those of the tokamak and stellarator designs, use intense magnetic fields in an attempt to achieve this, although plasma instability problems remain.
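The 14.1 MeV neutron energy quoted above follows from momentum conservation: for a D-T reaction with reactants essentially at rest, the 17.6 MeV energy release is split between the neutron and the alpha particle in inverse proportion to their masses. A quick nonrelativistic check, using integer mass numbers as an approximation:

```python
def dt_neutron_energy(q_mev=17.6, m_n=1.0, m_alpha=4.0):
    """Neutron kinetic energy (MeV) from D + T -> He-4 + n.

    For reactants at rest, conservation of momentum gives each product
    a share of the energy release Q inversely proportional to its mass;
    integer mass numbers keep this a rough estimate.
    """
    return q_mev * m_alpha / (m_alpha + m_n)

# The neutron carries ~14.1 MeV; the remaining ~3.5 MeV stays with the
# alpha particle and heats the plasma.
```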
Even with stable plasma confinement, however, the first wall material would be exposed to a neutron flux higher than in any current nuclear power reactor , which leads to key problems in selecting the material and imposes further requirements on the lining. Some critical plasma-facing components, in particular the divertor , are typically protected by a different material than that used for the major area of the first wall. [ 3 ] Several materials are currently in use or under consideration, including multi-layer tiles combining more than one of them; examples follow. Graphite was used for the first wall material of the Joint European Torus (JET) at its startup (1983), in Tokamak à configuration variable (1992) and in National Spherical Torus Experiment (NSTX, first plasma 1999). [ 11 ] Beryllium was used to reline JET in 2009 in anticipation of its proposed use in ITER . [ 12 ] Tungsten is used for the divertor in JET, and will be used for the divertor in ITER. [ 12 ] [ 13 ] It is also used for the first wall in ASDEX Upgrade . [ 14 ] Graphite tiles plasma sprayed with tungsten were used for the ASDEX Upgrade divertor. [ 15 ] Studies of tungsten in the divertor have been conducted at the DIII-D facility. [ 16 ] These experiments utilized two rings of tungsten isotopes embedded in the lower divertor to characterize tungsten erosion during operation. Molybdenum is used for the first wall material in Alcator C-Mod (1991). Liquid lithium (LL) was used to coat the PFC of the Tokamak Fusion Test Reactor in the Lithium Tokamak Experiment (TFTR, 1996). [ 8 ] Development of satisfactory plasma-facing materials is one of the key problems still to be solved by current programs. [ 17 ] [ 18 ] Plasma-facing materials can be measured for performance in terms of several criteria. [ 9 ] The International Fusion Materials Irradiation Facility (IFMIF) will particularly address this. Materials developed using IFMIF will be used in DEMO , the proposed successor to ITER.
French Nobel laureate in physics Pierre-Gilles de Gennes said of nuclear fusion, "We say that we will put the sun into a box. The idea is pretty. The problem is, we don't know how to make the box." [ 19 ] Solid plasma-facing materials are known to be susceptible to damage under large heat loads and high neutron flux. If damaged, these solids can contaminate the plasma and decrease plasma confinement stability. In addition, radiation can leak through defects in the solids and contaminate outer vessel components. [ 1 ] Liquid metal plasma-facing components that enclose the plasma have been proposed to address challenges in the PFC. In particular, liquid lithium (LL) has been confirmed to have various properties that are attractive for fusion reactor performance. [ 1 ] Tungsten is widely recognized as the preferred material for plasma-facing components in next-generation fusion devices, largely due to its unique combination of properties and potential for enhancement. Its low erosion rates make it particularly suitable for the high-stress environment of fusion reactors, where it can withstand the intense conditions without degrading rapidly. Additionally, tungsten's low tritium retention through co-deposition and implantation is crucial in fusion contexts, helping to minimize the accumulation of this radioactive isotope. [ 20 ] [ 21 ] [ 22 ] Another key advantage of tungsten is its high thermal conductivity, essential for managing the extreme heat generated in fusion processes. This property ensures efficient heat dissipation, reducing the risk of damage to the reactor's internal components. Furthermore, the potential for developing radiation-hardened alloys of tungsten presents an opportunity to enhance its durability and performance under the intense radiation conditions typical in fusion reactors. Despite these benefits, tungsten is not without its drawbacks. 
One notable issue is its tendency to contribute to high core radiation, a significant challenge in maintaining the plasma performance in fusion reactors. Nevertheless, tungsten has been selected as the plasma-facing material for the ITER project's first-generation divertor, and it is likely to be used for the reactor's first wall as well. Understanding the behavior of tungsten in fusion environments, including its sourcing, migration, and transport in the scrape-off-layer (SOL), as well as its potential for core contamination, is a complex task. Significant research is ongoing to develop a mature and validated understanding of these dynamics, particularly for predicting the behavior of high-Z (high atomic number) materials like tungsten in next-step tokamak devices. To address tungsten's intrinsic brittleness, which limits its operational window, a composite material known as W-fibre enhanced W-composite (Wf/W) has been developed. This material incorporates extrinsic toughening mechanisms to significantly increase toughness, as demonstrated in small Wf/W samples. In the context of future fusion power plants, tungsten stands out for its resilience against erosion, the highest melting point among metals, and relatively benign behavior under neutron irradiation. However, its ductile to brittle transition temperature (DBTT) is a concern, especially as it increases under neutron exposure. To overcome this brittleness, several strategies are being explored, including the use of nanocrystalline materials, tungsten alloying, and W-composite materials. Particularly notable are the tungsten laminates and fiber-reinforced composites, which leverage tungsten's exceptional mechanical properties. When combined with copper's high thermal conductivity, these composites offer improved thermomechanical properties, extending beyond the operational range of traditional materials like CuCrZr. 
For applications requiring even higher temperature resilience, tungsten-fibre reinforced tungsten-composites (Wf/W) have been developed, incorporating mechanisms to enhance toughness, thereby broadening the potential applications of tungsten in fusion technology. Lithium (Li) is an alkali metal with a low Z (atomic number). Li has a low first ionization energy of ~5.4 eV and is highly chemically reactive with ion species found in the plasma of fusion reactor cores. In particular, Li readily forms stable lithium compounds with hydrogen isotopes, oxygen, carbon, and other impurities found in D-T plasma. [ 1 ] The fusion reaction of D-T produces charged and neutral particles in the plasma. The charged particles remain magnetically confined to the plasma. The neutral particles are not magnetically confined and will move toward the boundary between the hotter plasma and the colder PFC. Upon reaching the first wall, both neutral particles and charged particles that escaped the plasma become cold neutral particles in gaseous form. An outer edge of cold neutral gas is then “recycled”, or mixed, with the hotter plasma. A temperature gradient between the cold neutral gas and the hot plasma is believed to be the principal cause of anomalous electron and ion transport from the magnetically confined plasma. As recycling decreases, the temperature gradient decreases and plasma confinement stability increases. With better conditions for fusion in the plasma, the reactor performance increases. [ 23 ] The initial use of lithium in the 1990s was motivated by a need for a low-recycling PFC. In 1996, ~0.02 grams of lithium coating was added to the PFC of TFTR, resulting in a twofold improvement in fusion power output and fusion plasma confinement. On the first wall, lithium reacted with neutral particles to produce stable lithium compounds, resulting in low recycling of cold neutral gas. In addition, lithium contamination in the plasma tended to be well below 1%.
[ 1 ] Since 1996, these results have been confirmed by a large number of magnetic confinement fusion devices (MCFD) that have also used lithium in their PFC, for example: [ 1 ] The primary energy generation in fusion reactor designs is from the absorption of high-energy neutrons. Results from these MCFD highlight additional benefits of liquid lithium coatings for reliable energy generation, including: [ 1 ] [ 23 ] [ 8 ] Newer developments in liquid lithium are currently being tested, for example: [ 9 ] [ 10 ] Silicon carbide (SiC), a low-Z refractory ceramic material, has emerged as a promising candidate for structural materials in magnetic fusion energy devices. While the remarkable properties of SiC once attracted attention for fusion experiments, past technological limitations hindered its wider use. However, the evolving capabilities of SiC fiber composites (SiCf/SiC) in Gen-IV fission reactors have renewed interest in SiC as a fusion material. [ 24 ] Modern versions of SiCf/SiC combine many desirable attributes found in carbon fiber composites, such as thermo-mechanical strength and high melting point. These versions also present unique benefits: they exhibit minimal degradation of properties when exposed to high levels of neutron damage. SiC has demonstrated a tritium diffusivity lower than that observed in other structural materials, a property that can be further optimized by applying a thin layer of monolithic SiC on a SiC/SiCf substrate. [ 25 ] [ 26 ] However, high helium production in SiC during neutron irradiation leads to swelling, particularly at intermediate and high temperatures (>1000°C), which may impact its structural integrity. Additionally, SiC’s most common fabrication method, chemical vapor infiltration (CVI), results in approximately 10% porosity, making it permeable to gases and reducing both its thermal conductivity and mechanical stress limit. 
Tritium retention in silicon carbide plasma-facing components is about 1.5-2 times higher than in graphite, leading to reduced fuel efficiency and increased safety risks in fusion reactors. SiC traps more tritium, limiting its availability for fusion and increasing the potential for hazardous buildup, which complicates tritium management. [ 27 ] [ 28 ] Displacement damage, particle deposition, redeposition, and fuel accumulation on the SiC divertor surface lead to significant microstructural changes, resulting in enhanced sputtering erosion compared to the original crystalline material. The chemical and physical sputtering of SiC is still significant and contributes to the key issue of increasing tritium inventory through co-deposition over time and with particle fluence. For those reasons, carbon-based materials have been ruled out in ITER , DEMO , and other devices. [ 29 ] Siliconization, as a wall conditioning method, has been demonstrated to reduce oxygen impurities and enhance plasma performance. [ 30 ] [ 31 ] Current research efforts focus on understanding SiC behavior under conditions relevant to reactors, providing valuable insights into its potential role in future fusion technology. Silicon-rich films on divertor PFCs were recently developed using Si pellet injections in high confinement mode scenarios in DIII-D , prompting further research into refining the technique for broader fusion applications. [ 32 ]
https://en.wikipedia.org/wiki/Plasma-facing_material
Plasma-immersion ion implantation (PIII) [ 1 ] or pulsed-plasma doping (pulsed PIII) is a surface modification technique in which accelerated ions are extracted from a plasma by applying a high-voltage pulsed DC or pure DC power supply and targeted into a suitable substrate or electrode with a semiconductor wafer placed over it, so as to implant it with suitable dopants . The electrode is a cathode for an electropositive plasma , while it is an anode for an electronegative plasma . Plasma can be generated in a suitably designed vacuum chamber with the help of various plasma sources, such as the electron cyclotron resonance plasma source (which yields plasma with the highest ion density and lowest contamination level), the helicon plasma source, the capacitively coupled plasma source, the inductively coupled plasma source, DC glow discharge and metal vapor arc (for metallic species). The vacuum chamber can be of two types, diode and triode, [ 2 ] depending upon whether the power supply is applied to the substrate, as in the former case, or to a perforated grid, as in the latter. In a conventional immersion type of PIII system, also called the diode-type configuration, [ 2 ] the wafer is kept at a negative potential, since it is the positively charged ions of the electropositive plasma that are extracted and implanted. The wafer sample to be treated is placed on a sample holder in a vacuum chamber. The sample holder is connected to a high-voltage power supply and is electrically insulated from the chamber wall. By means of pumping and gas feed systems, an atmosphere of a working gas at a suitable pressure is created. [ 3 ] When the substrate is biased to a negative voltage (a few kV), the resultant electric field drives electrons away from the substrate on the time scale of the inverse electron plasma frequency ω_e^−1 (~10^−9 s). Thus an ion matrix Debye sheath [ 2 ] [ 4 ] which is depleted of electrons forms around it.
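The nanosecond electron response and microsecond ion response quoted here can be checked from the plasma-frequency formulas ω_p = sqrt(n e² / ε₀ m). The density and ion species below are illustrative assumptions, not values from the text:

```python
# Order-of-magnitude check of sheath-formation timescales in PIII:
# electrons respond on ~1/omega_pe, ions on ~1/omega_pi.
# The density (1e14 m^-3) and ion species (N2+) are assumptions
# chosen for illustration.
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
E = 1.602e-19         # elementary charge, C
M_E = 9.109e-31       # electron mass, kg
AMU = 1.661e-27       # atomic mass unit, kg

n = 1e14              # plasma density, m^-3 (assumed)
m_ion = 28 * AMU      # molecular nitrogen ion (assumed species)

omega_pe = math.sqrt(n * E**2 / (EPS0 * M_E))    # electron plasma frequency
omega_pi = math.sqrt(n * E**2 / (EPS0 * m_ion))  # ion plasma frequency

print(f"1/omega_pe ~ {1/omega_pe:.1e} s")   # nanosecond scale: electrons
print(f"1/omega_pi ~ {1/omega_pi:.1e} s")   # sub-microsecond scale: ions
```

The two timescales differ by sqrt(m_i/m_e), a factor of a few hundred for nitrogen, which is why the electrons are expelled essentially instantly while the ion matrix sheath evolves over microseconds.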
The negatively biased substrate will accelerate the ions within a time scale of the inverse ion plasma frequency ω_i^−1 (~10^−6 s). This ion movement lowers the ion density in the bulk, which causes the sheath-plasma boundary to expand in order to sustain the applied potential drop, in the process exposing more ions. The plasma sheath expands until either a steady-state condition is reached, called the Child-Langmuir law limit, or the high voltage is switched off, as in the case of pulsed DC biasing. Pulsed biasing is preferred over DC biasing because it causes less damage during the pulse-on time and allows neutralization of unwanted charges accumulated on the wafer during the afterglow period (i.e. after the pulse has ended). In the case of pulsed biasing, the T_ON time of the pulse is generally kept at 20-40 μs, while T_OFF is kept at 0.5-2 ms, i.e. a duty cycle of 1-8%. The power supply used is in the range of 500 V to hundreds of kV and the pressure in the range of 1-100 mTorr . [ 4 ] This is the basic principle of the operation of immersion-type PIII. In the case of a triode-type configuration, a suitable perforated grid is placed between the substrate and the plasma and a pulsed DC bias is applied to this grid. Here the same theory applies as previously discussed, with the difference that the ions extracted through the grid holes bombard the substrate, thus causing implantation. In this sense a triode-type PIII implanter is a crude version of ion implantation, because it does not contain a plethora of components like ion beam steering , beam focusing, additional grid accelerators, etc. C.R. Viswanathan, "Plasma induced damage," Microelectronic Engineering , Vol. 49, No. 1-2, November 1999, pp. 65–81.
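The sheath expansion described above can be sized with standard sheath theory: the initial ion-matrix sheath is s₀ = λ_D · sqrt(2V₀/T_e), and the steady-state Child-Langmuir sheath is s_CL = (√2/3) · λ_D · (2V₀/T_e)^(3/4). The temperature, density, and bias below are illustrative assumptions, not values from the text:

```python
# Sketch of the sheath sizes in pulsed PIII, assuming a Maxwellian
# plasma.  T_e, n, and V0 are assumed illustrative values; the
# formulas are the standard matrix-sheath and Child-Langmuir
# high-voltage sheath expressions.
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
E = 1.602e-19      # elementary charge, C

T_e = 2.0          # electron temperature, eV (assumed)
n = 1e15           # plasma density, m^-3 (assumed)
V0 = 10e3          # applied pulse bias, V (assumed)

lambda_d = math.sqrt(EPS0 * T_e / (n * E))          # Debye length, m
s_matrix = lambda_d * math.sqrt(2 * V0 / T_e)       # initial ion-matrix sheath
s_child = (math.sqrt(2) / 3) * lambda_d * (2 * V0 / T_e) ** 0.75  # steady state

print(f"Debye length     : {lambda_d*1e3:.2f} mm")
print(f"matrix sheath    : {s_matrix*1e2:.1f} cm")
print(f"Child-law sheath : {s_child*1e2:.1f} cm")
```

For these assumed numbers the sheath grows from a few centimeters to over ten centimeters, which is why high-voltage PIII chambers must leave substantial clearance around the wafer holder.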
https://en.wikipedia.org/wiki/Plasma-immersion_ion_implantation
Plasma Science Society of India [ 1 ] was founded in 1978 at the Institute for Plasma Research , Ahmedabad, India, for the benefit of the fusion community working on plasma . [ 2 ] The society promotes theoretical and experimental fusion research and is associated with India Science, Technology and Innovation. [ 3 ] Devices associated with this research community include SST-1 , the SINP Tokamak , the Aditya Tokamak , and SST-2 (DEMO), aimed at generating electricity. The society has over 950 life members along with a number of annual members.
https://en.wikipedia.org/wiki/Plasma_Science_Society_of_India
A plasma antenna is a type of radio antenna currently in development in which plasma is used instead of the metal elements of a traditional antenna. [ 1 ] A plasma antenna can be used for both transmission and reception . [ 2 ] Although plasma antennas have only become practical in recent years, the idea is not new; a patent for an antenna using the concept was granted to J. Hettinger in 1919. [ 3 ] Early practical examples of the technology used discharge tubes to contain the plasma and are referred to as ionized gas plasma antennas. Ionized gas plasma antennas can be turned on and off and are good for stealth and resistance to electronic warfare and cyber attacks. Ionized gas plasma antennas can be nested such that the higher frequency plasma antennas are placed inside lower frequency plasma antennas. Higher frequency ionized gas plasma antenna arrays can transmit and receive through lower frequency ionized gas plasma antenna arrays. This means that the ionized gas plasma antennas can be co-located and ionized gas plasma antenna arrays can be stacked. Ionized gas plasma antennas can eliminate or reduce co-site interference. Smart ionized gas plasma antennas use plasma physics to shape and steer the antenna beams without the need of phased arrays. Satellite signals can be steered or focused in the reflective or refractive modes using banks of plasma tubes making unique ionized gas satellite plasma antennas. The thermal noise of ionized gas plasma antennas is less than in the corresponding metal antennas at the higher frequencies. [ 1 ] Solid state plasma antennas (also known as plasma silicon antennas) with steerable directional functionality that can be manufactured using standard silicon chip fabrication techniques are now also in development.
[ 4 ] Plasma silicon antennas are candidates for use in WiGig (the planned enhancement to Wi-Fi ), and have other potential applications, for example in reducing the cost of vehicle-mounted radar collision avoidance systems . [ 4 ] In an ionized gas plasma antenna, a gas is ionized to create a plasma. Unlike gases , plasmas have very high electrical conductivity, so it is possible for radio-frequency signals to travel through them, allowing them to act as a driven element (such as a dipole antenna ) to radiate radio waves, or to receive them. Alternatively the plasma can be used as a reflector or a lens to guide and focus radio waves from another source. [ 5 ] Solid-state antennas differ in that the plasma is created from electrons generated by activating thousands of diodes on a silicon chip. [ 4 ] Plasma antennas possess a number of advantages over metal antennas, including:
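An ionized-gas element conducts like a metal only for signal frequencies well below the plasma frequency, f_pe ≈ 8980·sqrt(n_e) Hz with n_e in cm⁻³. A quick sketch inverting that relation to estimate the electron density a discharge tube would need; the 2.4 GHz example frequency is an assumption for illustration:

```python
# Minimum electron density for a plasma element at a given operating
# frequency, from the standard relation f_pe ~ 8980 * sqrt(n_e) Hz
# (n_e in cm^-3).  In practice the density must comfortably exceed
# this value for the element to behave as a good conductor.

def min_density_cm3(signal_hz: float) -> float:
    """Electron density (cm^-3) whose plasma frequency equals signal_hz."""
    return (signal_hz / 8980.0) ** 2

# Assumed example: a 2.4 GHz (Wi-Fi band) element.
n_wifi = min_density_cm3(2.4e9)
print(f"n_e for a 2.4 GHz element: {n_wifi:.1e} cm^-3")
```

The quadratic dependence explains why higher-frequency antennas can be nested inside lower-frequency ones, as described above: a low-density (low-frequency) tube is effectively transparent to signals above its own plasma frequency.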
https://en.wikipedia.org/wiki/Plasma_antenna
In semiconductor manufacturing, plasma ashing is the process of removing the photoresist (light-sensitive coating) from an etched wafer. Using a plasma source, a monatomic (single-atom) substance known as a reactive species is generated. Oxygen and fluorine are the most common reactive species. Another gas mixture used is N2/H2, where the H2 portion is about 2%. The reactive species combines with the photoresist to form ash, which is removed with a vacuum pump . [ 1 ] Typically, monatomic oxygen plasma is created by exposing oxygen gas (O 2 ) at a low pressure to high-power radio waves, which ionise it . This process is done under vacuum in order to create a plasma. As the plasma is formed, many free radicals and also oxygen ions are created. These ions could damage the wafer due to the electric field build-up between the plasma and the wafer surface. Newer, smaller circuitry is increasingly susceptible to these charged particles that can get implanted into the surface. Originally, plasma was generated in the process chamber, but as the need to get rid of the ions has increased, many machines now use a downstream plasma configuration, where plasma is formed remotely and the desired particles are channeled to the wafer. This allows electrically charged particles time to recombine before they reach the wafer surface, and prevents damage to the wafer surface. Two forms of plasma ashing are typically performed on wafers. High-temperature ashing, or stripping, is performed to remove as much photoresist as possible, while the "descum" process is used to remove residual photoresist in trenches. The main difference between the two processes is the temperature the wafer is exposed to while in an ashing chamber. Typical issues arise when the photoresist has previously undergone an implant step, leaving heavy metals embedded in it, or has experienced high temperatures, making it resistant to oxidation.
Monatomic oxygen is electrically neutral and, although it does recombine during the channeling, it does so at a slower rate than the positively or negatively charged free radicals, which attract one another. This means that when all of the free radicals have recombined, there is still a portion of the active species available for the process. Because a large portion of the active species is lost to recombination, process times may take longer. To some extent, these longer process times can be mitigated by increasing the temperature of the reaction area. This also affects the interpretation of optical emission traces: a decline in emission is normally taken to mean the process is over, but certain spectral lines can instead rise in illuminance as the available reactants are consumed, reflecting the ionic species present.
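The endpoint logic implied by the emission traces above can be sketched as a small detector: declare endpoint when the monitored line falls below a fixed fraction of its running peak. The trace values and the 20% threshold here are synthetic illustrations, not parameters of any real tool:

```python
# Toy endpoint detector for an optical emission trace: the signal
# rises while the resist is being consumed, then declines; we flag
# the endpoint when it first drops below a fraction of the peak
# observed so far.  Trace and threshold are invented for the sketch.

def find_endpoint(trace, fraction=0.2):
    """Index where the signal first drops below fraction * peak-so-far."""
    peak = float("-inf")
    for i, value in enumerate(trace):
        peak = max(peak, value)
        if peak > 0 and value < fraction * peak:
            return i
    return None  # endpoint not reached within the trace

# Synthetic trace: rise, plateau, then decline as the resist is ashed.
trace = [1, 5, 9, 10, 10, 9, 8, 5, 2, 1, 0.5]
print("endpoint at sample", find_endpoint(trace))  # -> endpoint at sample 9
```

A real tool would additionally smooth the trace and, as the paragraph notes, might watch for *rising* lines from consumed reactants rather than only a falling resist-product line.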
https://en.wikipedia.org/wiki/Plasma_ashing
Plasma cells , also called plasma B cells or effector B cells , are white blood cells that originate in the lymphoid organs as B cells [ 1 ] [ 2 ] and secrete large quantities of proteins called antibodies in response to being presented specific substances called antigens . These antibodies are transported from the plasma cells by the blood plasma and the lymphatic system to the site of the target antigen (foreign substance), where they initiate its neutralization or destruction. B cells differentiate into plasma cells that produce antibody molecules closely modeled after the receptors of the precursor B cell. [ 3 ] Plasma cells are large lymphocytes with abundant cytoplasm and a characteristic appearance on light microscopy . They have basophilic cytoplasm and an eccentric nucleus with heterochromatin in a characteristic cartwheel or clock face arrangement. Their cytoplasm also contains a pale zone that on electron microscopy contains an extensive Golgi apparatus and centrioles . Abundant rough endoplasmic reticulum combined with a well-developed Golgi apparatus makes plasma cells well-suited for secreting immunoglobulins. [ 4 ] Other organelles in a plasma cell include ribosomes, lysosomes, mitochondria, and the plasma membrane. Terminally differentiated plasma cells express relatively few surface antigens, and do not express common pan-B cell markers, such as CD19 and CD20 . Instead, plasma cells are identified through flow cytometry by their additional expression of CD138 , CD78 , and the Interleukin-6 receptor . In humans, CD27 is a good marker for plasma cells; naïve B cells are CD27−, memory B cells are CD27+ and plasma cells are CD27++. [ 5 ] The surface antigen CD138 (syndecan-1) is expressed at high levels. [ 6 ] Another important surface antigen is CD319 (SLAMF7). This antigen is expressed at high levels on normal human plasma cells. It is also expressed on malignant plasma cells in multiple myeloma.
Compared with CD138, which disappears rapidly ex vivo, the expression of CD319 is considerably more stable. [ 7 ] After leaving the bone marrow, the B cell acts as an antigen-presenting cell (APC) and internalizes offending antigens, which are taken up by the B cell through receptor-mediated endocytosis and processed. Pieces of the antigen (which are now known as antigenic peptides ) are loaded onto MHC II molecules, and presented on its extracellular surface to CD4+ T cells (sometimes called T helper cells ). These T cells bind to the MHC II-antigen molecule and cause activation of the B cell. This is a type of safeguard to the system, similar to a two-factor authentication method. First, the B cells must encounter a foreign antigen and are then required to be activated by T helper cells before they differentiate into specific cells. [ 8 ] Upon stimulation by a T cell, which usually occurs in germinal centers of secondary lymphoid organs such as the spleen and lymph nodes , the activated B cell begins to differentiate into more specialized cells. Germinal center B cells may differentiate into memory B cells or plasma cells. Most of these B cells will become plasmablasts (or "immature plasma cells"), and eventually plasma cells, and begin producing large volumes of antibodies. Some B cells will undergo a process known as affinity maturation . [ 9 ] This process favors, by selection for the ability to bind antigen with higher affinity, the activation and growth of B cell clones able to secrete antibodies of higher affinity for the antigen. [ 10 ] The most immature blood cell that is considered of plasma cell lineage is the plasmablast. [ 11 ] Plasmablasts secrete more antibodies than B cells, but less than plasma cells. [ 12 ] They divide rapidly and are still capable of internalizing antigens and presenting them to T cells. 
[ 12 ] A cell may stay in this state for several days, and then either die or irrevocably differentiate into a mature, fully differentiated plasma cell. [ 12 ] Differentiation of mature B cells into plasma cells is dependent upon the transcription factors Blimp-1 / PRDM1 , BCL6 , and IRF4 . [ 10 ] Unlike their precursors, plasma cells cannot switch antibody classes , cannot act as antigen-presenting cells because they no longer display MHC-II, and do not take up antigen because they no longer display significant quantities of immunoglobulin on the cell surface. [ 12 ] However, continued exposure to antigen through those low levels of immunoglobulin is important, as it partly determines the cell's lifespan. [ 12 ] The lifespan, class of antibodies produced, and the location that the plasma cell moves to also depends on signals, such as cytokines , received from the T cell during differentiation. [ 13 ] Differentiation through a T cell-independent antigen stimulation (stimulation of a B cell that does not require the involvement of a T cell) can happen anywhere in the body [ 9 ] and results in short-lived cells that secrete IgM antibodies. [ 13 ] The T cell-dependent processes are subdivided into primary and secondary responses: a primary response (meaning that the T cell is present at the time of initial contact by the B cell with the antigen) produces short-lived cells that remain in the extramedullary regions of lymph nodes; a secondary response produces longer-lived cells that produce IgG and IgA, and frequently travel to the bone marrow. [ 13 ] For example, plasma cells will likely secrete IgG3 antibodies if they matured in the presence of the cytokine interferon-gamma . Since B cell maturation also involves somatic hypermutation (a process completed before differentiation into a plasma cell), these antibodies frequently have a very high affinity for their antigen. 
Plasma cells can only produce a single kind of antibody in a single class of immunoglobulin. In other words, every B cell is specific to a single antigen, but each cell can produce several thousand matching antibodies per second. [ 14 ] This prolific production of antibodies is an integral part of the humoral immune response . Current findings suggest that after the process of affinity maturation in germinal centers, plasma cells develop into one of two types of cells: short-lived plasma cells (SLPC) or long-lived plasma cells (LLPC). LLPC mainly reside in the bone marrow for a long period of time and secrete antibodies, thus providing long-term protection. LLPC can maintain antibody production for decades or even for the lifetime of an individual, [ 15 ] [ 16 ] and, unlike B cells, LLPC do not need antigen restimulation to generate antibodies. The human LLPC population can be identified as CD19− CD38hi CD138+ cells. [ 17 ] The long-term survival of LLPC is dependent on a specific environment in the bone marrow, the plasma cell survival niche. [ 18 ] Removal of an LLPC from its survival niche results in its rapid death. A survival niche can only support a limited number of LLPC, thus the niche's environment must protect its LLPC but be able to accept new arrivals. [ 19 ] [ 20 ] The plasma cell survival niche is defined by a combination of cellular and molecular factors and, though it has yet to be properly defined, molecules such as IL-5 , IL-6 , TNF-α , stromal cell-derived factor-1α and signalling via CD44 have been shown to play a role in the survival of LLPC. [ 21 ] LLPC can also be found, to a lesser degree, in gut-associated lymphoid tissue (GALT), where they produce IgA antibodies and contribute to mucosal immunity.
Recent findings suggest that plasma cells in the gut do not necessarily need to be generated de novo from activated B cells; long-lived PC are also present there, suggesting the existence of a similar survival niche. [ 22 ] Tissue-specific niches that allow for the survival of LLPC have also been described in nasal-associated lymphoid tissue (NALT), human tonsillar lymphoid tissues and human mucosa or mucosa-associated lymphoid tissues (MALT). [ 23 ] [ 24 ] [ 25 ] [ 26 ] Originally it was thought that the continuous production of antibodies is a result of constant replenishment of short-lived plasma cells by memory B cell re-stimulation. Recent findings, however, show that some PC are truly long-lived. The absence of antigens and the depletion of B cells do not appear to have an effect on the production of high-affinity antibodies by the LLPC. Prolonged depletion of B cells (with anti-CD20 monoclonal antibody treatment that affects B cells but not PC) also did not affect antibody titres. [ 27 ] [ 28 ] [ 29 ] LLPC secrete high levels of IgG independently of B cells. LLPC in bone marrow are the main source of circulating IgG in humans. [ 30 ] Even though IgA production is traditionally associated with mucosal sites, some plasma cells in bone marrow also produce IgA. [ 31 ] LLPC in bone marrow have been observed producing IgM . [ 32 ] Plasmacytoma , multiple myeloma , Waldenström macroglobulinemia , heavy chain disease , and plasma cell leukemia are cancers of the plasma cells. [ 33 ] Multiple myeloma is frequently identified because malignant plasma cells continue producing an antibody, which can be detected as a paraprotein . Monoclonal gammopathy of undetermined significance (MGUS) is a plasma cell dyscrasia characterized by the secretion of a myeloma protein into the blood and may lead to multiple myeloma. [ 34 ] Common variable immunodeficiency is thought to be due to a problem in the differentiation from lymphocytes to plasma cells.
The result is a low serum antibody level and a risk of infections. Primary amyloidosis (AL) is caused by the deposition of excess immunoglobulin light chains, which are secreted by plasma cells.
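The marker phenotypes given in this article (naïve B cells CD27−, memory B cells CD27+, plasma cells CD27++; long-lived plasma cells CD19− CD38hi CD138+) amount to a small gating rule. A toy illustration of that logic follows; the numeric CD27 thresholds are invented for the sketch and this is not a real flow-cytometry pipeline:

```python
# Toy gating logic for B-lineage subsets using a few of the surface
# markers named in the article.  The CD27 "levels" (0 / 1 / 2 for
# negative / positive / bright) are invented thresholds for this
# illustration only.

def classify_b_lineage(cd27: int, cd19: bool, cd38_hi: bool, cd138: bool) -> str:
    """Classify a cell from a subset of the markers named in the article."""
    if not cd19 and cd38_hi and cd138:
        return "long-lived plasma cell"   # CD19- CD38hi CD138+
    if cd27 >= 2:                         # "CD27++": bright expression
        return "plasma cell"
    if cd27 >= 1:                         # "CD27+"
        return "memory B cell"
    return "naive B cell"                 # "CD27-"

print(classify_b_lineage(cd27=0, cd19=True, cd38_hi=False, cd138=False))
print(classify_b_lineage(cd27=2, cd19=True, cd38_hi=False, cd138=False))
print(classify_b_lineage(cd27=0, cd19=False, cd38_hi=True, cd138=True))
```

Real gating uses continuous fluorescence intensities and compensation, but the branch order above mirrors the article's identification scheme: the LLPC phenotype is checked first because it lacks the pan-B markers the other gates rely on.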
https://en.wikipedia.org/wiki/Plasma_cell
Plasma deep drilling technology is one of several drilling technologies that may be able to replace conventional, contact-based rotary systems. These new technologies include plasma deep drilling, water jet , hydrothermal spallation and laser drilling. Companies pursuing the plasma-drilling method include GA Drilling , headquartered in Bratislava, Slovakia. High-energy plasma is a technology that targets deep drilling applications. It addresses issues related to drilling in water environments or boreholes with varying diameters. An electric arc is a breakdown of a gas that produces a plasma discharge, resulting from a current flowing through normally nonconductive media such as air or another gas. An arc discharge is characterized by a lower voltage than a glow discharge , and relies on thermionic emission of electrons from the electrodes supporting the arc. The electric arc is influenced by factors such as the gas flow, inner and outer magnetic fields, and the construction elements of the chamber that confines the arc. The development of plasma torches to be used as a source of thermal plasma demands a deep understanding of the discharge chamber processes.
https://en.wikipedia.org/wiki/Plasma_deep_drilling_technology
Plasma diagnostics are a pool of methods, instruments, and experimental techniques used to measure properties of a plasma , such as plasma components' density , distribution function over energy ( temperature ), and their spatial profiles and dynamics, which enable plasma parameters to be derived. A ball-pen probe is a novel technique used to measure directly the plasma potential in magnetized plasmas. The probe was invented by Jiří Adámek at the Institute of Plasma Physics AS CR in 2004. [ 1 ] The ball-pen probe balances the electron saturation current to the same magnitude as that of the ion saturation current. In this case, its floating potential becomes identical to the plasma potential. This goal is attained by a ceramic shield, which screens off an adjustable part of the electron current from the probe collector due to the much smaller gyro-radius of the electrons. The electron temperature is proportional to the difference between the ball-pen probe potential (the plasma potential) and the Langmuir probe potential (the floating potential). Thus, the electron temperature can be obtained directly with high temporal resolution without an additional power supply . The conventional Faraday cup is applied for measurements of ion (or electron) flows from plasma boundaries and for mass spectrometry . Measurements with electric probes, called Langmuir probes , are the oldest and most often used procedures for low-temperature plasmas. The method was developed by Irving Langmuir and his co-workers in the 1920s, and has since been further developed in order to extend its applicability to more general conditions than those presumed by Langmuir. Langmuir probe measurements are based on the estimation of current versus voltage characteristics of a circuit consisting of two metallic electrodes that are both immersed in the plasma under study. Two cases are of interest: (a) The surface areas of the two electrodes differ by several orders of magnitude. This is known as the single-probe method.
(b) The surface areas are very small in comparison with the dimensions of the vessel containing the plasma and approximately equal to each other. This is the double-probe method. Conventional Langmuir probe theory assumes collisionless movement of charge carriers in the space-charge sheath around the probe. Further, it is assumed that the sheath boundary is well-defined and that beyond this boundary the plasma is completely undisturbed by the presence of the probe. This means that the electric field caused by the difference between the potential of the probe and the plasma potential at the place where the probe is located is limited to the volume inside the probe sheath boundary. The general theoretical description of a Langmuir probe measurement requires the simultaneous solution of the Poisson equation , the collision-free Boltzmann equation or Vlasov equation , and the continuity equation, with regard to the boundary condition at the probe surface and requiring that, at large distances from the probe, the solution approaches that expected in an undisturbed plasma. If the magnetic field in the plasma is not stationary, either because the plasma as a whole is transient or because the fields are periodic (radio-frequency heating), the rate of change of the magnetic field with time (Ḃ, read "B-dot") can be measured locally with a loop or coil of wire. Such coils exploit Faraday's law , whereby a changing magnetic field induces an electric field. [ 2 ] The induced voltage can be measured and recorded with common instruments. Also, by Ampère's law , the magnetic field is proportional to the currents that produce it, so the measured magnetic field gives information about the currents flowing in the plasma. Both currents and magnetic fields are important in understanding fundamental plasma physics. An energy analyzer is a probe used to measure the energy distribution of the particles in a plasma. 
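In practice, the single-probe Langmuir method described above is often reduced to extracting the electron temperature from the exponential (electron-retardation) region of the measured I-V characteristic, where the collected electron current varies as exp(eV / k_B T_e). The sketch below illustrates this fit on synthetic data; all numbers are made up for illustration.

```python
import numpy as np

def electron_temperature_ev(voltage, current):
    """Estimate T_e (in eV) from the exponential electron-retardation
    region of a single Langmuir probe I-V trace: there
    I_e is proportional to exp(eV / k_B T_e), so the slope of ln(I)
    versus V (in volts) equals 1/T_e when T_e is expressed in eV."""
    slope, _intercept = np.polyfit(voltage, np.log(current), 1)
    return 1.0 / slope

# Synthetic retardation-region data for a 3 eV plasma (illustrative numbers)
V = np.linspace(-10.0, -2.0, 50)   # probe bias below plasma potential, volts
I = 1e-3 * np.exp(V / 3.0)         # electron current, amperes
print(round(electron_temperature_ev(V, I), 2))  # → 3.0
```

Real traces also contain the ion saturation current, which must be subtracted before taking the logarithm; this sketch omits that step.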
The charged particles are typically separated by their velocities using electric and/or magnetic fields in the energy analyzer, and then discriminated by allowing only particles within the selected energy range to reach the detector. Energy analyzers that use an electric field as the discriminator are also known as retarding field analyzers. [ 3 ] [ 4 ] Such an analyzer usually consists of a set of grids biased at different potentials to set up an electric field that repels particles with less than the desired energy away from the detector. Analyzers with a cylindrical or conical face-field [ 5 ] can be more effective in such measurements. In contrast, energy analyzers that employ a magnetic field as the discriminator are very similar to mass spectrometers . Particles travel through a magnetic field in the probe and require a specific velocity in order to reach the detector. These were first developed in the 1960s [ 6 ] and are typically built to measure ions. (The size of the device is on the order of the particle's gyroradius, because the discriminator intercepts the path of the gyrating particle.) The energy of neutral particles can also be measured by an energy analyzer, but they first have to be ionized by an electron-impact ionizer. Proton radiography uses a proton beam from a single source to interact with the magnetic field and/or the electric field in the plasma, and the intensity profile of the beam is measured on a screen after the interaction. The magnetic and electric fields in the plasma deflect the beam's trajectory, and the deflection causes modulation in the intensity profile. From the intensity profile, one can measure the integrated magnetic field and/or electric field. Nonlinear effects like the I-V characteristic of the boundary sheath are utilized for Langmuir probe measurements, but they are usually neglected in the modelling of RF discharges due to their very inconvenient mathematical treatment. 
Self-excited electron plasma resonance spectroscopy (SEERS) utilizes exactly these nonlinear effects and known resonance effects in RF discharges. The nonlinear elements, in particular the sheaths, produce harmonics in the discharge current and excite the plasma and the sheath at their series resonance, characterized by the so-called geometric resonance frequency. SEERS provides the spatially and reciprocally averaged electron plasma density and the effective electron collision rate. The electron collision rate reflects stochastic (pressure) heating and ohmic heating of the electrons. The model for the plasma bulk is based on a 2D fluid model (zeroth- and first-order moments of the Boltzmann equation) and the full set of Maxwell's equations, leading to the Helmholtz equation for the magnetic field. The sheath model is based additionally on the Poisson equation . Passive spectroscopic methods simply observe the radiation emitted by the plasma. The light can be collected by diagnostics such as the filterscope, which is used in various tokamak devices. [ 7 ] If the plasma (or one ionic component of the plasma) is flowing in the direction of the line of sight to the observer, emission lines will be seen at a different frequency due to the Doppler effect . The thermal motion of ions shifts emission lines up or down, depending on whether an ion is moving toward or away from the observer. The magnitude of the shift is proportional to the velocity along the line of sight. The net effect is a characteristic broadening of spectral lines, known as Doppler broadening , from which the ion temperature can be determined. [ 8 ] The splitting of some emission lines due to the Stark effect can be used to determine the local electric field. Irrespective of the presence of macroscopic electric fields, any single atom is affected by microscopic electric fields due to the neighboring charged plasma particles. 
This results in the Stark broadening of spectral lines, which can be used to determine the plasma density. [ 9 ] The brightness of spectral lines emitted by atoms in a plasma depends on the plasma temperature and density. If a sufficiently complete collisional-radiative model is used, the temperature (and, to a lesser degree, density) of plasmas can often be inferred by taking ratios of the emission intensities of various atomic spectral lines. [ 10 ] [ 11 ] The presence of a magnetic field splits the atomic energy levels due to the Zeeman effect . This leads to broadening or splitting of spectral lines. Analyzing these lines can, therefore, yield the magnetic field strength in the plasma. Active spectroscopic methods stimulate the plasma atoms in some way and observe the result (emission of radiation, absorption of the stimulating light, or others). By shining a laser through the plasma, with its wavelength tuned to a certain transition of one of the species present, the absorption profile of that transition can be obtained. This profile provides not only the plasma parameters that could be obtained from the emission profile, but also the line-integrated number density of the absorbing species. In another technique, a beam of neutral atoms is fired into a plasma. Some atoms are excited by collisions within the plasma and emit radiation. This can be used to probe density fluctuations in a turbulent plasma. In extremely high-temperature plasmas, such as those found in magnetic fusion experiments, light elements become fully ionized and do not emit line radiation. However, when a beam of neutral atoms is fired into the plasma, a process known as charge exchange occurs. During charge exchange, electrons from the neutral beam atoms are transferred to the highly energetic plasma ions, leading to the formation of hydrogenic ions. 
These newly formed ions promptly emit line radiation, which is subsequently analyzed to obtain information about the plasma, including ion density, temperature, and velocity. One example of this is the Fast-Ion Deuterium-Alpha (FIDA) method employed in tokamaks. [ 12 ] [ 13 ] In this technique, charge exchange occurs between the neutral beam atoms and the fast deuterium ions present in the plasma. This method exploits the substantial Doppler shift exhibited by Balmer-alpha light emitted by the energetic atoms in order to determine the density of the fast ions. [ 14 ] Laser-induced fluorescence (LIF) is a spectroscopic technique employed for the investigation of plasma properties by observing the fluorescence emitted when the plasma is stimulated by laser radiation. This method allows for the measurement of plasma parameters such as ion flow, ion temperature, magnetic field strength, and plasma density. [ 15 ] Typically, tunable dye lasers are utilized to carry out these measurements. The pioneering application of LIF in plasma physics occurred in 1975, when researchers used it to measure the ion velocity distribution function in an argon plasma. [ 16 ] Various LIF techniques have since been developed, including the one-photon LIF technique and two-photon absorption laser-induced fluorescence (TALIF). [ 17 ] TALIF is a modification of the laser-induced fluorescence technique. In this approach, the upper energy level is excited through the absorption of two photons, and the subsequent fluorescence resulting from the radiative decay of the excited level is observed. TALIF is capable of providing precise measurements of absolute ground-state atomic densities, such as those of hydrogen, oxygen, and nitrogen. However, achieving such precision necessitates appropriate calibration methods, which can be accomplished through titration or, in a more modern approach, by comparison with a noble gas. 
[ 18 ] TALIF also offers insight into the temperature of species within the plasma, apart from atomic densities. However, this requires the use of lasers with a high spectral resolution to distinguish the Gaussian contribution of temperature broadening from the natural broadening of the two-photon excitation profile and the spectral broadening of the laser itself. Photodetachment combines Langmuir probe measurements with an incident laser beam. The incident laser beam is optimised spatially, spectrally, and in pulse energy to detach an electron bound to a negative ion. Langmuir probe measurements are conducted to measure the electron density in two situations, one without the incident laser and one with it. The increase in the electron density with the incident laser gives the negative ion density. If an atom is moving in a magnetic field, the Lorentz force will act in opposite directions on the nucleus and the electrons, just as an electric field does. In the frame of reference of the atom, there is an electric field, even if there is none in the laboratory frame. Consequently, certain lines will be split by the Stark effect . With an appropriate choice of beam species, velocity, and geometry, this effect can be used to determine the magnetic field in the plasma. The optical diagnostics above measure line radiation from atoms. Alternatively, the effects of free charges on electromagnetic radiation can be used as a diagnostic. In magnetized plasmas, electrons will gyrate around magnetic field lines and emit cyclotron radiation . The frequency of the emission is given by the cyclotron resonance condition. In a sufficiently thick and dense plasma, the intensity of the emission will follow Planck's law and depend only on the electron temperature. The Faraday effect will rotate the plane of polarization of a beam passing through a plasma with a magnetic field in the direction of the beam. 
This effect can be used as a diagnostic of the magnetic field, although the information is mixed with the density profile and is usually an integral value only. If a plasma is placed in one arm of an interferometer , the phase shift will be proportional to the plasma density integrated along the path. Scattering of laser light from the electrons in a plasma is known as Thomson scattering . The electron temperature can be determined very reliably from the Doppler broadening of the laser line. The electron density can be determined from the intensity of the scattered light, but a careful absolute calibration is required. Although Thomson scattering is dominated by scattering from electrons, since the electrons interact with the ions, in some circumstances information on the ion temperature can also be extracted. Fusion plasmas using D-T fuel produce 3.5 MeV alpha particles and 14.1 MeV neutrons. By measuring the neutron flux, plasma properties such as ion temperature and fusion power can be determined.
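The interferometric measurement described above admits a simple inversion: for probing frequencies far above the plasma frequency, the phase shift is Δφ = r_e λ ∫ n_e dl, with r_e the classical electron radius and λ the probe wavelength. A minimal sketch, with illustrative numbers only:

```python
R_E = 2.8179403262e-15  # classical electron radius, m

def line_integrated_density(phase_shift_rad, wavelength_m):
    """Invert the interferometer relation Δφ = r_e · λ · ∫ n_e dl
    (valid when the probing frequency is far above the plasma
    frequency) to obtain the line-integrated electron density, m^-2."""
    return phase_shift_rad / (R_E * wavelength_m)

# Illustrative numbers: a 1 rad phase shift measured with a 10.6 µm CO2 laser
nl = line_integrated_density(1.0, 10.6e-6)
print(f"{nl:.3e} m^-2")
```

Dividing by the chord length through the plasma then gives a chord-averaged density, consistent with the remark that the measurement is usually an integral value only.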
https://en.wikipedia.org/wiki/Plasma_diagnostics
Plasma electrochemistry is a new field of research where the interaction of plasma with an electrolyte solution is studied. It uses plasma to drive chemical reactions in liquid. [ 1 ]
https://en.wikipedia.org/wiki/Plasma_electrochemistry
Plasma electrolytic oxidation ( PEO ), also known as electrolytic plasma oxidation ( EPO ) or microarc oxidation ( MAO ), is an electrochemical surface treatment process for generating oxide coatings on metals . It is similar to anodizing , but it employs higher potentials , so that discharges [ 1 ] occur and the resulting plasma modifies the structure of the oxide layer. This process can be used to grow thick (tens or hundreds of micrometers), largely crystalline , oxide coatings on metals such as aluminium , magnesium [ 2 ] and titanium . Because they can present high hardness [ 3 ] and a continuous barrier, these coatings can offer protection against wear , corrosion or heat as well as electrical insulation . The coating is a chemical conversion of the substrate metal into its oxide , and grows both inwards and outwards from the original metal surface. Because it grows inward into the substrate, it has excellent adhesion to the substrate metal. A wide range of substrate alloys can be coated, including all wrought aluminum alloys and most cast alloys, although high levels of silicon can reduce coating quality. Metals such as aluminum naturally form a passivating oxide layer which provides moderate protection against corrosion. The layer is strongly adherent to the metal surface, and it will regrow quickly if scratched off. In conventional anodizing , this layer of oxide is grown on the surface of the metal by the application of electrical potential , while the part is immersed in an acidic electrolyte . In plasma electrolytic oxidation, higher potentials are applied. For example, in the plasma electrolytic oxidation of aluminum, at least 200 V must be applied. This locally exceeds the dielectric breakdown potential of the growing oxide film, and discharges occur. These discharges result in localized plasma reactions, with conditions of high temperature and pressure which modify the growing oxide. 
Processes include melting, melt-flow, re-solidification, sintering, and densification of the growing oxide. One of the most significant effects is that the oxide is partially converted from amorphous alumina into crystalline forms such as corundum (α-Al₂O₃), which is much harder. [ 3 ] As a result, mechanical properties such as wear resistance and toughness are enhanced. The part to be coated is immersed in a bath of electrolyte, which usually consists of a dilute alkaline solution such as KOH. It is electrically connected so as to become one of the electrodes in the electrochemical cell , with the other "counter-electrode" typically made from an inert material such as stainless steel , and often consisting of the wall of the bath itself. Potentials of over 200 V are applied between these two electrodes. These may be continuous or pulsed direct current (DC), in which case the part is simply an anode, or alternating pulses ( alternating current or "pulsed bi-polar" operation), where the stainless steel counter-electrode might simply be earthed . One of the remarkable features of plasma electrolytic coatings is the presence of micro-pores and cracks on the coating surface. [ 2 ] Plasma electrolytic oxide coatings are generally recognized for high hardness, wear resistance, and corrosion resistance. However, the coating properties are highly dependent on the substrate used, as well as on the composition of the electrolyte and the electrical regime used (see 'Equipment used' section, above). Even on aluminium, the coating properties can vary strongly according to the exact alloy composition. For instance, the hardest coatings can be achieved on 2XXX-series aluminium alloys , where the highest proportion of the crystalline phase corundum (α-Al₂O₃) is formed, resulting in hardnesses of ~2000 HV , whereas coatings on the 5XXX series have less of this important constituent and are hence softer. Extensive work is being pursued by Prof. T. W. 
Clyne at the University of Cambridge to investigate the fundamental electrical and plasma physical processes [ 1 ] involved in this process, having previously elucidated some of the micromechanical [ 3 ] (& pore architectural [ 4 ] ), mechanical [ 3 ] and thermal [ 5 ] characteristics of PEO coatings.
https://en.wikipedia.org/wiki/Plasma_electrolytic_oxidation
Plasma modeling refers to solving equations of motion that describe the state of a plasma . It is generally coupled with Maxwell's equations for electromagnetic fields or Poisson's equation for electrostatic fields. There are several main types of plasma models: single particle, kinetic, fluid, hybrid kinetic/fluid, gyrokinetic, and systems of many particles. The single-particle model describes the plasma as individual electrons and ions moving in imposed (rather than self-consistent) electric and magnetic fields. The motion of each particle is thus described by the Lorentz force law . In many cases of practical interest, this motion can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. The kinetic model is the most fundamental way to describe a plasma, producing a distribution function f(x, v), where the independent variables x and v are position and velocity , respectively. A kinetic description is achieved by solving the Boltzmann equation or, when the correct description of long-range Coulomb interaction is necessary, the Vlasov equation , which contains the self-consistent collective electromagnetic field, or the Fokker–Planck equation , in which approximations have been used to derive manageable collision terms. The charges and currents produced by the distribution functions self-consistently determine the electromagnetic fields via Maxwell's equations . To reduce the complexity of the kinetic description, the fluid model describes the plasma based on macroscopic quantities (velocity moments of the distribution such as density, mean velocity, and mean energy). The equations for macroscopic quantities, called fluid equations, are obtained by taking velocity moments of the Boltzmann equation or the Vlasov equation . 
The fluid equations are not closed without the determination of transport coefficients such as mobility, the diffusion coefficient , averaged collision frequencies, and so on. To determine the transport coefficients, the velocity distribution function must be assumed or chosen, but this assumption can lead to a failure to capture some of the physics. Although the kinetic model describes the physics accurately, it is more complex (and in the case of numerical simulations, more computationally intensive) than the fluid model. The hybrid model is a combination of fluid and kinetic models, treating some components of the system as a fluid and others kinetically. The hybrid model is sometimes applied in space physics , when the simulation domain exceeds thousands of ion gyroradius scales, making it impractical to solve kinetic equations for electrons. In this approach, magnetohydrodynamic fluid equations describe the electrons, while the kinetic Vlasov equation describes the ions. [ 1 ] [ 2 ] In the gyrokinetic model , which is appropriate for systems with a strong background magnetic field, the kinetic equations are averaged over the fast circular motion of the gyroradius . This model has been used extensively for simulation of tokamak plasma instabilities (for example, the GYRO and Gyrokinetic ElectroMagnetic codes), and more recently in astrophysical applications. Quantum methods are not yet very common in plasma modeling. They can be used to solve modeling problems to which other methods do not apply. [ 3 ] They involve the application of quantum field theory to plasma: the electric and magnetic fields produced by the particles are modeled as a field , and particles that move, or are added to or removed from the population, push and pull on this field. The mathematical treatment involves the Lagrangian formalism. 
Collisional-radiative modeling is used to calculate quantum state densities and the emission/absorption properties of a plasma. This plasma radiation physics is critical for the diagnosis and simulation of astrophysical and nuclear fusion plasmas. [ 4 ] It is one of the most general approaches [ 5 ] and lies between the extrema of local thermal equilibrium and a coronal picture. In local thermal equilibrium, the population of excited states is distributed according to a Boltzmann distribution. However, this holds only if densities are high enough for an excited hydrogen atom to undergo many collisions, so that the energy is redistributed before the radiative process sets in. In the coronal picture, the timescale of the radiative process is small compared to that of collisions, since densities are very small. [ 6 ] The use of the term coronal equilibrium is ambiguous and may also refer to the non-transport ionization balance of recombination and ionization. What the two usages have in common is that a coronal equilibrium is not sufficient for tokamak plasmas. [ 7 ]
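The single-particle model described earlier amounts to integrating the Lorentz force law for each particle in prescribed fields. A widely used discretization is the Boris scheme; the sketch below (non-relativistic, normalized units, illustrative parameters) shows one particle gyrating in a uniform magnetic field.

```python
import numpy as np

def boris_push(x, v, q_over_m, E, B, dt):
    """One time step of the Boris scheme for dv/dt = (q/m)(E + v × B):
    half electric kick, norm-preserving magnetic rotation, half electric kick."""
    v_minus = v + 0.5 * q_over_m * E * dt
    t = 0.5 * q_over_m * B * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_over_m * E * dt
    return x + v_new * dt, v_new

# Test particle gyrating in a uniform magnetic field (normalized units)
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    x, v = boris_push(x, v, -1.0, E, B, 0.01)
print(round(float(np.linalg.norm(v)), 6))  # speed is conserved in a pure B field: → 1.0
```

The magnetic step is a pure rotation, which is why the Boris scheme conserves kinetic energy exactly in a static magnetic field; this property makes it the workhorse pusher in particle-in-cell codes.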
https://en.wikipedia.org/wiki/Plasma_modeling
Plasma parameters define various characteristics of a plasma , an electrically conductive collection of charged and neutral particles of various species ( electrons and ions ) that responds collectively to electromagnetic forces . [ 1 ] Such particle systems can be studied statistically , i.e., their behaviour can be described based on a limited number of global parameters instead of tracking each particle separately. [ 2 ] From the fundamental plasma parameters of a steady-state plasma and physical constants , other plasma parameters can be derived. [ 3 ] All quantities are in Gaussian ( cgs ) units except energy E and temperature T, which are in electronvolts . For the sake of simplicity, a single ionic species is assumed. The ion mass is expressed in units of the proton mass, μ = m_i/m_p, and the ion charge in units of the elementary charge e, Z = q_i/e (in the case of a fully ionized atom, Z equals the respective atomic number ). The other physical quantities used are the Boltzmann constant (k_B), the speed of light (c), and the Coulomb logarithm (ln Λ). In the study of tokamaks , collisionality is a dimensionless parameter which expresses the ratio of the electron-ion collision frequency to the banana orbit frequency. 
The plasma collisionality ν* is defined as [ 4 ] [ 5 ] ν* = ν_ei √(m_e/(k_B T_e)) ε^(−3/2) q R, where ν_ei denotes the electron-ion collision frequency , R is the major radius of the plasma, ε is the inverse aspect-ratio , and q is the safety factor . The analogous ion collisionality is defined with the ion mass m_i and ion temperature T_i in place of the electron quantities; k_B is the Boltzmann constant . Temperature is a statistical quantity whose formal definition is T = (∂U/∂S)_{V,N}, or the change in internal energy with respect to entropy , holding volume and particle number constant. A practical definition comes from the fact that the atoms, molecules, or whatever particles in a system have an average kinetic energy, meaning the kinetic energy averaged over all the particles in the system. If the velocities of a group of electrons , e.g., in a plasma , follow a Maxwell–Boltzmann distribution , then the electron temperature is defined as the temperature of that distribution. For other distributions, not assumed to be in equilibrium or have a temperature, two-thirds of the average energy is often referred to as the temperature, since for a Maxwell–Boltzmann distribution with three degrees of freedom , ⟨E⟩ = (3/2) k_B T. The SI unit of temperature is the kelvin (K), but using the above relation the electron temperature is often expressed in terms of the energy unit electronvolt (eV). Each kelvin (1 K) corresponds to 8.617 333 262 ... 
× 10⁻⁵ eV ; this factor is the ratio of the Boltzmann constant to the elementary charge . [ 6 ] Each eV is equivalent to 11,605 kelvins , which can be calculated from the relation ⟨E⟩ = k_B T. The electron temperature of a plasma can be several orders of magnitude higher than the temperature of the neutral species or of the ions . This is a result of two facts. Firstly, many plasma sources heat the electrons more strongly than the ions. Secondly, atoms and ions are much heavier than electrons, and energy transfer in a two-body collision is much more efficient if the masses are similar. Therefore, equilibration of the temperature happens very slowly, and is not achieved during the time range of the observation.
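The eV-kelvin conversion above is simple enough to encode directly; the factor 8.617 333 262 × 10⁻⁵ eV/K is the Boltzmann constant divided by the elementary charge:

```python
BOLTZMANN_EV_PER_K = 8.617333262e-5  # k_B / e, i.e. k_B expressed in eV/K

def ev_to_kelvin(t_ev):
    """Convert a temperature quoted as an energy in eV to kelvin via E = k_B T."""
    return t_ev / BOLTZMANN_EV_PER_K

def kelvin_to_ev(t_k):
    """Convert a temperature in kelvin to its energy equivalent in eV."""
    return t_k * BOLTZMANN_EV_PER_K

print(round(ev_to_kelvin(1.0)))  # → 11605, the "1 eV ≈ 11,605 K" rule of thumb
```

Both conversions use ⟨E⟩ = k_B T; for the mean kinetic energy of a three-dimensional Maxwellian, a factor of 3/2 would additionally apply.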
https://en.wikipedia.org/wiki/Plasma_parameters
Plasma polymerization (or glow discharge polymerization ) uses plasma sources to generate a gas discharge that provides energy to activate or fragment a gaseous or liquid monomer , often containing a vinyl group , in order to initiate polymerization . Polymers formed by this technique are generally highly branched and highly cross-linked , and adhere well to solid surfaces. The biggest advantage of this process is that polymers can be attached directly to a desired surface while the chains are growing, which reduces the steps necessary in other coating processes such as grafting . This is very useful for pinhole-free coatings, 100 picometers to 1 micrometer thick, of solvent-insoluble polymers. [ 1 ] As early as the 1870s, "polymers" formed by this process were known, but these polymers were initially thought of as undesirable byproducts associated with electric discharge , and little attention was given to their properties. [ 1 ] It was not until the 1960s that the properties of these polymers were found to be useful. [ 2 ] It was found that flawless thin polymeric coatings could be formed on metals , although for very thin films (<10 nm) this has recently been shown to be an oversimplification. [ 3 ] [ 4 ] By selecting the monomer type and the energy density per monomer, known as the Yasuda parameter, the chemical composition and structure of the resulting thin film can be varied over a wide range. These films are usually inert , adhesive , and have low dielectric constants . [ 1 ] Common monomers polymerized by this method include styrene, ethylene, methacrylate, and pyridine. The 1970s brought many advances in plasma polymerization, including the polymerization of many different types of monomers. The mechanisms of deposition, however, were largely ignored until more recently. 
Since then, most attention devoted to plasma polymerization has been in the field of coatings, but because it is difficult to control polymer structure, its applications have been limited. Plasma consists of a mixture of electrons, ions, radicals, neutrals, and photons. [ 5 ] Some of these species are in local thermodynamic equilibrium, while others are not. Even for simple gases like argon this mixture can be complex, and for plasmas of organic monomers the complexity can rapidly increase as some components of the plasma fragment while others interact to form larger species. Glow discharge is a technique for polymerization in which free electrons gain energy from an electric field and then lose it through collisions with neutral molecules in the gas phase . This produces many chemically reactive species, which lead to the plasma polymerization reaction. [ 6 ] The electric discharge process for plasma polymerization is a "low-temperature plasma" method, because higher temperatures cause degradation . These plasmas are formed by a direct current , alternating current , or radio frequency generator. [ 7 ] There are a few designs of apparatus used in plasma polymerization. One is the Bell (static type) reactor, in which the monomer gas is introduced into the reaction chamber but does not flow through it: the gas enters and polymerizes without removal. This type of reactor is shown in Figure 1. [ 8 ] This reactor has internal electrodes , and polymerization commonly takes place on the cathode side. All devices contain a thermostatic bath, which is used to regulate temperature, and a vacuum to regulate pressure. [ 6 ] Operation: the monomer enters the Bell-type reactor as a gaseous species and is then put into the plasma state by the electrodes, where the plasma may consist of radicals , anions , and cations . 
These monomers are then polymerized on the cathode surface, or on another surface placed in the apparatus, by various mechanisms whose details are discussed below. The deposited polymers then propagate from the surface and form growing chains of seemingly uniform consistency. Another popular reactor type is the flow-through ( continuous flow ) reactor, which also has internal electrodes; as its name implies, this reactor allows monomer gas to flow through the reaction chamber, which should give a more even coating for polymer film deposition. [ 7 ] It has the advantage that more monomer keeps flowing into the reactor to deposit more polymer. It has the disadvantage of forming what is called a "tail flame", in which polymerization extends into the vacuum line. A third popular type of reactor is electrodeless. [ 9 ] It uses an RF coil wrapped around the glass apparatus, with a radio frequency generator forming the plasma inside the housing without direct electrodes (see Inductively Coupled Plasma ). The polymer can then be deposited as it is pushed through this RF coil toward the vacuum end of the apparatus. This has the advantage that no polymer builds up on the electrode surface, which is desirable when polymerizing onto other surfaces. A fourth type of system, growing in popularity, is the atmospheric-pressure plasma system, which is useful for depositing thin polymer films. [ 10 ] This system bypasses the requirement for special vacuum hardware, which makes it favorable for integrated industrial use. It has been shown that polymers formed at atmospheric pressure can have coating properties similar to those formed in low-pressure systems. [ citation needed ] The formation of plasma for polymerization depends on many of the following parameters. An electron energy of 1–10 eV is required, with electron densities of 10⁹ to 10¹² per cubic centimeter, to form the desired plasma state. 
The formation of a low-temperature plasma is important: the electron temperature is not equal to the gas temperature, with a ratio T e /T g of 10 to 100, so the process can occur near ambient temperature . This is advantageous because polymers degrade at high temperatures; if a high-temperature plasma were used, the polymers would degrade after formation or would never form at all. [ 6 ] This entails non-equilibrium plasmas, meaning that charged monomer species have more kinetic energy than neutral monomer species and transfer energy to the substrate rather than to uncharged monomer. The kinetic rate of these reactions depends mostly on the monomer gas, which must be either gaseous or vaporized. Other parameters are also important, such as power , pressure , flow rate , frequency , electrode gap, and reactor configuration. [ 6 ] At low flow rates the polymerization rate depends mainly on the number of reactive species present, whereas at high flow rates it depends on the residence time in the reactor; the maximum rate of polymerization therefore lies somewhere in between. Reaction rates tend to follow the order triple-bonded > double-bonded > single-bonded molecules, and lower-molecular-weight molecules react faster than heavier ones: acetylene is faster than ethylene , and ethylene is faster than propene , etc. [ 6 ] The molecular-weight factor in polymer deposition depends on the monomer flow rate: a higher-molecular-weight monomer, typically near 200 g/mol, needs a much higher flow rate of 15 × 10⁴ g/cm², whereas lower molecular weights around 50 g/mol require a flow rate of only 5 × 10⁴ g/cm². [ 1 ] A heavy monomer therefore needs a faster flow, which would likely lead to increased pressure, decreasing the polymerization rate. Increased pressure tends to decrease polymerization rates and reduce deposition uniformity, since uniformity relies on constant pressure.
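The quoted T e /T g ratio can be turned into concrete electron temperatures with a simple unit conversion; the 300 K gas temperature below is an assumed "near ambient" value:

```python
K_PER_EV = 11604.5  # kelvin per electronvolt (1 eV of thermal energy ~ 11604.5 K)

def electron_temp(gas_temp_k, ratio):
    """Electron temperature implied by a Te/Tg ratio at a given gas temperature.
    Returns (Te in kelvin, Te in electronvolts)."""
    te_k = gas_temp_k * ratio
    return te_k, te_k / K_PER_EV

for ratio in (10, 100):
    te_k, te_ev = electron_temp(300.0, ratio)  # assumed ~ambient gas, 300 K
    print(f"Te/Tg = {ratio:>3}: Te = {te_k:.0f} K ({te_ev:.2f} eV)")
```

This is the essence of the non-equilibrium plasma: a gas cool enough not to degrade the polymer can coexist with electrons hot enough, on the eV scale, to fragment and activate monomers.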
These pressure effects are one reason that high-pressure or atmospheric-pressure plasmas are usually avoided in favor of low-pressure systems. At pressures greater than 1 torr , oligomers are formed on the electrode surface, and monomers on the surface can dissolve them, giving a low degree of polymerization and an oily product. At low pressures, the reactive surfaces are low in monomer and facilitate the growth of high-molecular-weight polymers. The rate of polymerization depends on input power until power saturation occurs and the rate becomes independent of it. [ 6 ] A narrower electrode gap also tends to increase polymerization rates, because a higher electron density per unit area is formed. Polymerization rates also depend on the type of apparatus used for the process. In general, increasing the frequency of the alternating-current glow discharge up to about 5 kHz increases the rate due to the formation of more free radicals. Above this frequency, the inertial effects of colliding monomers inhibit polymerization, forming the first plateau in the frequency dependence. A second maximum occurs at 6 MHz, where side reactions are again overcome and the reaction proceeds through free radicals diffusing from the plasma to the electrodes, at which point a second plateau is obtained. [ 6 ] These parameters differ slightly for each monomer and must be optimized in situ. Plasma contains many species, such as ions, free radicals, and electrons, so it is important to determine which contributes most to the polymerization process. [ 6 ] The first process suggested, by Westwood et al., was cationic polymerization, since in a direct-current system polymerization occurs mainly on the cathode. [ 6 ] However, further investigation has led to the belief that the mechanism is more of a radical polymerization process, since radicals tend to be trapped in the films, and termination can be overcome by reinitiation of oligomers.
[ 7 ] Other kinetic studies also appear to support this theory. [ 6 ] However, since the mid-1990s several papers focusing on the formation of highly functionalized plasma polymers have postulated a more significant role for cations, particularly where the plasma sheath is collisionless. [ 11 ] [ 12 ] The assumption that the plasma ion density is low, and that consequently the ion flux to surfaces is low, has been challenged: ion flux is determined according to the Bohm sheath criterion, i.e. it is proportional to the square root of the electron temperature, not to the gas thermal energy RT. [ 13 ] In polymerization, both gas-phase and surface reactions occur, but the mechanism differs between high and low frequencies: at high frequencies polymerization proceeds through reactive intermediates, whereas at low frequencies it happens mainly on surfaces. As polymerization occurs in a closed system, the pressure inside the chamber decreases, since gas-phase monomers are converted to solid polymer. An example diagram of the ways polymerization can take place is shown in Figure 2, in which the most abundant pathway is shown in blue with double arrows and side pathways in black. Ablation occurs by gas formation during polymerization. Polymerization has two pathways, the plasma-state and plasma-induced processes, both of which lead to the deposited polymer. [ 7 ] Polymers can be deposited on many substrates other than the electrode surfaces, such as glass , other organic polymers, or metals, when a surface is placed in front of the electrodes or midway between them. The ability to build off electrode surfaces is likely an electrostatic interaction, while on other surfaces covalent attachment is possible. Polymerization is likely to take place through ionic and/or radical processes initiated by the plasma formed from the glow discharge.
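The Bohm criterion mentioned above fixes the minimum speed at which ions enter the sheath, so the ion flux to a surface scales with the square root of the electron temperature. A minimal sketch; the argon-like ion mass and the specific plasma parameters are assumptions chosen purely for illustration:

```python
import math

E = 1.602176634e-19       # elementary charge, C
AMU = 1.66053906660e-27   # atomic mass unit, kg

def bohm_velocity(te_ev, ion_mass_amu):
    """Bohm velocity u_B = sqrt(k*Te / M): ions reach the sheath edge
    at least this fast, regardless of the (much lower) gas temperature."""
    return math.sqrt(E * te_ev / (ion_mass_amu * AMU))

def bohm_flux(ne_per_cm3, te_ev, ion_mass_amu):
    """Ion flux to a surface per the Bohm criterion, taking the
    sheath-edge density as roughly the bulk density (a simplification)."""
    return ne_per_cm3 * 1e6 * bohm_velocity(te_ev, ion_mass_amu)

# Assumed illustrative numbers: argon-like ion (40 amu), Te = 2 eV, ne = 1e10 cm^-3
print(f"u_B      = {bohm_velocity(2.0, 40.0):.0f} m/s")
print(f"ion flux = {bohm_flux(1e10, 2.0, 40.0):.2e} m^-2 s^-1")
```

Because the flux goes as sqrt(Te) rather than the gas thermal speed, even a "cold" low-pressure discharge can deliver a substantial ion flux to growing films, which is the quantitative point behind the challenge to the low-ion-flux assumption.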
[ 1 ] The classic view presented by Yasuda, [ 14 ] based upon thermal initiation of Parylene polymerization, is that there are many propagating species present at any given time, as shown in Figure 3. This figure shows two different pathways by which the polymerization may take place. The first pathway is a monofunctionalization process, which bears resemblance to a standard free-radical polymerization mechanism (M•), although with the caveat that the reactive species may be ionic and not necessarily radical. The second pathway refers to a difunctional mechanism, which for example may contain a cationic and a radical propagating center on the same monomer (•M•). A consequence is that 'polymer' can grow in multiple directions by multiple pathways off one species, such as a surface or another monomer. This possibility led Yasuda to term the mechanism a very rapid step-growth polymerization . [ 7 ] In the diagram, M x refers to the original monomer molecule or any of many dissociation products such as chlorine , fluorine and hydrogen . The M• species are those that are activated and capable of participating in reactions to form new covalent bonds . The •M• species are activated difunctional monomer species. The subscripts i, j, and k show the sizes of the different species involved. Even though radicals represent the activated species here, any ion or radical could take part in the polymerization. [ 7 ] As can be seen, plasma polymerization is a very complex process, with many parameters affecting everything from rate to chain length. Selection, or the favoring, of one particular pathway can be achieved by altering the plasma parameters. For example, pulsed plasma with selected monomers appears to favor much more regular polymer structures, and it has been postulated that these grow by a mechanism akin to (radical) chain growth during the plasma off-time.
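The mono- vs. difunctional bookkeeping above can be caricatured with a toy Monte Carlo model: species with one active end (M•) terminate after a single coupling, while difunctional species (•M•) keep both ends live, so chains can grow in both directions. This is only an illustration of the accounting, not Yasuda's actual kinetics:

```python
import random

def plasma_step_growth(n_mono=200, n_di=200, steps=2000, seed=1):
    """Toy sketch (not Yasuda's real kinetics): each species tracks how
    many active ends it has (1 for M*, 2 for *M*) and its size in
    monomer units. Random pairs of active species couple step-growth style."""
    random.seed(seed)
    species = [{"ends": 1, "size": 1} for _ in range(n_mono)] + \
              [{"ends": 2, "size": 1} for _ in range(n_di)]
    for _ in range(steps):
        idxs = [i for i, s in enumerate(species) if s["ends"] > 0]
        if len(idxs) < 2:
            break  # fewer than two active species left: growth stops
        ia, ib = random.sample(idxs, 2)
        a, b = species[ia], species[ib]
        a["size"] += b["size"]      # sizes add on coupling
        a["ends"] += b["ends"] - 2  # one active end consumed on each side
        species.pop(ib)
    return sorted((s["size"] for s in species), reverse=True)

sizes = plasma_step_growth()
print("species remaining:", len(sizes), "| largest chain:", sizes[0], "monomer units")
```

Two monofunctional species that couple produce a dead chain (0 ends), while any coupling involving a difunctional species leaves an active end behind, which is the sense in which difunctional centers drive "very rapid step-growth" and branching.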
[ 15 ] As can be seen in the monomer table, many simple monomers are readily polymerized by this method, but most must be small ionizable species, because they have to be able to enter the plasma state. Though monomers with multiple bonds polymerize readily, this is not a requirement: ethane, silicones and many others polymerize as well. Other stipulations exist. Yasuda et al. studied 28 monomers and found that those containing aromatic groups, silicon , olefinic groups or nitrogen (NH, NH₂, CN) were readily polymerizable, while those containing oxygen , halides , aliphatic hydrocarbons and cyclic hydrocarbons decomposed more readily. [ 7 ] The latter compounds undergo more ablation or side reactions, which inhibit stable polymer formation. It is also possible to incorporate N₂, H₂O, and CO into copolymers of styrene . Plasma polymers can be thought of as a type of graft polymer , since they are grown off of a substrate . These polymers are known to form nearly uniform surface deposits, which is one of their desirable properties. Polymers formed from this process often cross-link and form branches due to the multiple propagating species present in the plasma. This often leads to very insoluble polymers, which is an advantage of this process, since hyperbranched polymers can be deposited directly without solvent. Common polymers include: polythiophene , [ 19 ] polyhexafluoropropylene, [ 20 ] polytetramethyltin, [ 21 ] polyhexamethyldisiloxane, [ 22 ] polytetramethyldisiloxane, polypyridine, polyfuran , and poly-2-methyloxazoline. [ 17 ] [ 18 ] The following are listed in order of decreasing rate of polymerization: polystyrene , polymethyl styrene, polycyclopentadiene, polyacrylate , polymethyl acrylate, polymethyl methacrylate , polyvinyl acetate , polyisoprene , polyisobutene , and polyethylene . [ 23 ] Nearly all polymers created by this method have excellent appearance, are clear, and are significantly cross-linked.
Linear polymers are not formed readily by plasma polymerization methods based on propagating species. Many other polymers could be formed by this method. The properties of plasma polymers differ greatly from those of conventional polymers. While both types depend on the chemical properties of the monomer, the properties of plasma polymers depend more strongly on the design of the reactor and the chemical and physical characteristics of the substrate on which the plasma polymer is deposited. [ 7 ] The location within the reactor where deposition occurs also affects the resultant polymer's properties. In fact, by using plasma polymerization with a single monomer and varying the reactor, substrate, etc., a variety of polymers, each having different physical and chemical properties, can be prepared. [ 7 ] The large dependence of the polymer features on these factors makes it difficult to assign a set of basic characteristics, but a few common properties that set plasma polymers apart from conventional polymers do exist. The most significant difference is that plasma polymers do not contain regular repeating units. Due to the number of different propagating species present at any one time, as discussed above, the resultant polymer chains are highly branched and randomly terminated with a high degree of cross-linking. [ 24 ] An example of a proposed structure for plasma-polymerized ethylene, demonstrating a large extent of cross-linking and branching, is shown in Figure 4. All plasma polymers also contain free radicals. The amount of free radicals present varies between polymers and depends on the chemical structure of the monomer. Because the formation of the trapped free radicals is tied to the growth mechanism of the plasma polymers, the overall properties of the polymers correlate directly with the number of free radicals. [ 25 ] Plasma polymers also contain internal stress. If a thick layer (e.g.
1 µm) of a plasma polymer is deposited on a glass slide, the plasma polymer will buckle and frequently crack. The curling is attributed to internal stress formed in the plasma polymer during deposition. The degree of curling depends on the monomer as well as the conditions of the plasma polymerization. [ 7 ] Most plasma polymers are insoluble and infusible. [ 7 ] These properties are due to the large amount of cross-linking in the polymers, discussed previously. Consequently, the kinetic path length for these polymers must be sufficiently long, so these properties can be controlled to a point. [ 7 ] The permeabilities of plasma polymers also differ greatly from those of conventional polymers. Because of the absence of large-scale segmental mobility and the high degree of cross-linking within the polymers, the permeation of small molecules does not strictly follow the typical mechanisms of "solution-diffusion" or molecular-level sieving for such small permeants. The permeability characteristics of plasma polymers fall between these two ideal cases. [ 7 ] A final common characteristic of plasma polymers is their adhesion ability. The specifics of the adhesion ability for a given plasma polymer, such as thickness and characteristics of the surface layer, are again particular to that polymer, and few generalizations can be made. [ 7 ] Plasma polymerization offers several advantages over other polymerization methods in general. The most significant is its ability to produce polymer films of organic compounds that do not polymerize under normal chemical polymerization conditions. Nearly all monomers, even saturated hydrocarbons and organic compounds without a polymerizable structure such as a double bond, can be polymerized with this technique. [ 24 ] A second advantage is the ease of applying the polymers as coatings compared with conventional coating processes.
While coating a substrate with conventional polymers requires several steps, plasma polymerization accomplishes all of them in essentially a single step. [ 1 ] This leads to a cleaner and 'greener' synthesis and coating process, since no solvent is needed during the polymer preparation and no cleaning of the resultant polymer is needed either. Another 'green' aspect of the synthesis is that no initiator is needed for the polymer preparation, since reusable electrodes cause the reaction to proceed. The resultant polymer coatings also have several advantages over typical coatings: they are nearly pinhole-free and highly dense, and the thickness of the coating can easily be varied. [ 26 ] There are also several disadvantages of plasma polymerization relative to conventional methods. The most significant is the high cost of the process. A vacuum system is required for the polymerization, significantly increasing the set-up price. [ 26 ] Another disadvantage is the complexity of plasma processes. Because of this complexity, it is not easy to achieve good control over the chemical composition of the surface after modification. The influence of process parameters on the chemical composition of the resultant polymer means it can take a long time to determine the optimal conditions. [ 26 ] The complexity of the process also makes it impossible to predict what the resultant polymer will look like, unlike conventional polymers, whose structure can be readily determined from the monomer. The advantages offered by plasma polymerization have resulted in substantial research on the applications of these polymers. The vastly different chemical and mechanical properties offered by polymers formed with plasma polymerization mean they can be applied to countless different systems.
Applications ranging from adhesion, composite materials , protective coatings, printing , membranes , and biomedical applications to water purification have all been studied. [ 27 ] Of particular interest since the 1980s has been the deposition of functionalized plasma polymer films. For example, functionalized films are used as a means of improving biocompatibility for biological implants and for producing super-hydrophobic coatings. They have also been extensively employed in biomaterials for cell attachment, protein binding, and anti-fouling surfaces. Through the use of low-power, low-pressure plasma, high functional retention can be achieved, which has led to substantial improvements in the biocompatibility of some products, a simple example being the development of extended-wear contact lenses. Due to these successes, the huge potential of functional plasma polymers is slowly being realized by workers in previously unrelated fields such as water treatment and wound management. Emerging technologies such as nanopatterning, 3D scaffolds, micro-channel coating, and microencapsulation are now also utilizing functionalized plasma polymers, areas for which traditional polymers are often unsuitable. A significant area of research has been the use of plasma polymer films as permeation membranes. The permeability characteristics of plasma polymers deposited on porous substrates differ from those of usual polymer films; the characteristics depend on the deposition and polymerization mechanism. [ 28 ] Plasma polymers as membranes for separation of oxygen and nitrogen, ethanol and water, and for water vapor permeation have all been studied. [ 28 ] The application of plasma-polymerized thin films as reverse osmosis membranes has received considerable attention as well. Yasuda et al. have shown that membranes prepared by plasma polymerization from nitrogen-containing monomers can yield up to 98% salt rejection with a flux of 6.4 gallons/ft² per day.
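For readers more used to SI membrane units, the quoted reverse-osmosis flux converts as follows (pure unit arithmetic, assuming US gallons):

```python
GAL_L = 3.785411784   # one US gallon in litres
FT2_M2 = 0.09290304   # one square foot in square metres

def gfd_to_lmh(gfd):
    """Convert membrane flux from gallons/ft^2/day (GFD) to
    litres/m^2/hour (LMH), the usual SI-style membrane unit."""
    return gfd * GAL_L / FT2_M2 / 24.0

# The 6.4 gallons/ft^2 per day figure quoted above
print(f"6.4 GFD = {gfd_to_lmh(6.4):.1f} LMH")
```

So the Yasuda et al. figure corresponds to roughly 11 litres per square metre per hour.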
[ 7 ] Further research has shown that varying the monomers of the membrane offers other properties as well, such as chlorine resistance. [ 7 ] Plasma-polymerized films have also found electrical applications. Given that plasma polymers frequently contain many polar groups, which form when the radicals react with oxygen in the air during the polymerization process, plasma polymers were expected to be good dielectric materials in thin-film form. [ 28 ] Studies have shown that plasma polymers generally do have good dielectric properties. Some plasma polymers have been applied as chemical sensing devices because of their electrical properties; they have been studied as sensors for humidity, propane, and carbon dioxide, among others. Thus far, issues with instability against aging and humidity have limited their commercial applications. [ 28 ] The application of plasma polymers as coatings has also been studied. Plasma polymers formed from tetramethoxysilane have been studied as protective coatings and have been shown to increase the hardness of polyethylene and polycarbonate . [ 28 ] The use of plasma polymers to coat plastic lenses is increasing in popularity. Plasma deposition can easily coat curved materials with good uniformity, such as bifocals . The different plasma polymers used can be not only scratch-resistant but also hydrophobic, leading to anti-fogging effects. [ 29 ] Plasma polymer surfaces with tunable wettability and reversibly switchable pH-responsiveness have shown promise in applications such as drug delivery, biomaterial engineering, oil/water separation processes, sensors, and biofuel cells. [ 30 ]
https://en.wikipedia.org/wiki/Plasma_polymerization
Plasma protein binding refers to the degree to which medications attach to blood proteins within the blood plasma . A drug 's efficacy may be affected by the degree to which it binds. The less bound a drug is, the more efficiently it can traverse or diffuse through cell membranes . Common blood proteins that drugs bind to are human serum albumin , lipoprotein , glycoprotein , and α-, β-, and γ-globulins . A drug in blood exists in two forms: bound and unbound. Depending on a specific drug's affinity for plasma proteins , a proportion of the drug may become bound to the proteins, with the remainder being unbound. If the protein binding is reversible, then a chemical equilibrium will exist between the bound and unbound states, such that: protein + drug ⇌ protein–drug complex. Notably, it is the unbound fraction which exhibits pharmacologic effects. It is also the fraction that may be metabolized and/or excreted. For example, the "fraction bound" of the anticoagulant warfarin is 97%. This means that of the amount of warfarin in the blood, 97% is bound to plasma proteins. The remaining 3% (the fraction unbound) is the fraction that is actually active and may be excreted. Protein binding can influence the drug's biological half-life . The bound portion may act as a reservoir or depot from which the drug is slowly released in the unbound form. Since the unbound form is being metabolized and/or excreted from the body, the bound fraction will be released in order to maintain equilibrium. Since albumin is basic, acidic and neutral drugs will primarily bind to albumin . If albumin becomes saturated, then these drugs will bind to lipoprotein . Basic drugs will bind to the acidic alpha-1 acid glycoprotein . This is significant because various medical conditions may affect the levels of albumin, alpha-1 acid glycoprotein, and lipoproteins. Only the unbound fraction of the drug undergoes metabolism in the liver and other tissues. As the drug dissociates from the protein, more and more drug undergoes metabolism.
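The bound/unbound arithmetic is simple; here is the warfarin example from the text, with a total plasma concentration assumed purely for illustration:

```python
def unbound_concentration(total, fraction_bound):
    """Free (pharmacologically active) drug concentration, given the total
    plasma concentration and the fraction bound to plasma proteins."""
    return total * (1.0 - fraction_bound)

# Warfarin: 97% bound (from the text); 1.0 mg/L total is an assumed value
total_mg_l = 1.0
free = unbound_concentration(total_mg_l, 0.97)
print(f"free warfarin: {free:.2f} mg/L ({free / total_mg_l:.0%} of total)")
```

Only that 3% free fraction is available to act on its target, be metabolized, or be excreted; the bound 97% serves as the slowly released depot described above.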
Changes in the level of free drug change the volume of distribution , because free drug may distribute into the tissues, leading to a decrease in the plasma concentration profile. For drugs which rapidly undergo metabolism, clearance depends on hepatic blood flow. For drugs which slowly undergo metabolism, changes in the unbound fraction of the drug directly change its clearance. The most commonly used methods for measuring drug concentrations in plasma measure the bound as well as the unbound fraction of the drug. The fraction unbound can be altered by a number of variables, such as the concentration of drug in the body, the amount and quality of plasma protein, and other drugs that bind to plasma proteins. Higher drug concentrations lead to a higher fraction unbound, because the plasma protein becomes saturated with drug and any excess drug is unbound. If the amount of plasma protein is decreased (as in catabolism , malnutrition , liver disease , or renal disease ), there will also be a higher fraction unbound. Additionally, the quality of the plasma protein may affect how many drug-binding sites there are on the protein. Using two drugs at the same time can sometimes affect each other's fraction unbound. For example, assume that Drug A and Drug B are both protein-bound drugs. If Drug A is given, it will bind to the plasma proteins in the blood. If Drug B is also given, it can displace Drug A from the protein, thereby increasing Drug A's fraction unbound. This may increase the effects of Drug A, since only the unbound fraction may exhibit activity. Note that for Drug A, the percent increase in unbound fraction is 100% – hence, Drug A's pharmacological effect can potentially double (depending on whether the free molecules get to their target before they are eliminated by metabolism or excretion). This change in pharmacologic effect could have adverse consequences.
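The potential doubling of effect follows directly from the fractions. In this sketch the 97% → 94% displacement figures are assumed for illustration (any shift that doubles the unbound fraction gives the same 100% increase):

```python
def percent_increase_unbound(fb_before, fb_after):
    """Percent increase in the unbound fraction when displacement lowers
    the bound fraction from fb_before to fb_after."""
    fu_before = 1.0 - fb_before
    fu_after = 1.0 - fb_after
    return 100.0 * (fu_after - fu_before) / fu_before

# Assumed illustration: Drug B displaces Drug A from 97% bound to 94% bound,
# i.e. the free fraction goes from 3% to 6%
print(f"{percent_increase_unbound(0.97, 0.94):.0f}% more free Drug A")
```

A small absolute change in the bound fraction of a highly bound drug produces a large relative change in the active free fraction, which is why highly bound, narrow-therapeutic-index drugs are the usual worry in such examples.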
However, this effect is really only noticeable in closed systems, where the pool of available proteins could potentially be exceeded by the number of drug molecules. Biological systems, such as humans and animals, are open systems, where molecules can be gained, lost or redistributed and where the protein pool capacity is almost never exceeded by the number of drug molecules. A drug that is 99% bound means that 99% of the drug molecules are bound to blood proteins, not that 99% of the blood proteins are bound with drug. When two highly protein-bound drugs (A and B) are added to the same biological system, there will be an initial small increase in the concentration of free drug A (as drug B displaces some drug A from its proteins). However, this free drug A is now more available for redistribution into the body tissues and/or for excretion. This means the total amount of drug in the system will decrease quite rapidly, keeping the free drug fraction (the concentration of free drug divided by the total drug concentration) constant and yielding almost no change in clinical effect. [ 1 ] The effect of drugs displacing each other and changing the clinical effect, though important in some cases, is usually vastly overestimated, and a common example incorrectly used to display the importance of this effect is the anticoagulant warfarin . Warfarin is highly protein-bound (>95%) and has a low therapeutic index . Since a low therapeutic index indicates a high risk of toxicity, any potential increase in warfarin concentration could be very dangerous and lead to hemorrhage. In horses, it is well documented that if warfarin and phenylbutazone are administered concurrently, the horse can develop bleeding issues which can be fatal. This is often explained as being due to phenylbutazone displacing warfarin from its plasma protein, thus increasing the concentration of free warfarin and increasing its anticoagulant effect.
However, the real problem is that phenylbutazone interferes with the liver's ability to metabolize warfarin so free warfarin cannot be metabolized properly or excreted. This leads to an increase in free warfarin and the resulting bleeding problems. [ citation needed ]
https://en.wikipedia.org/wiki/Plasma_protein_binding
Plasma recombination is a process by which the positive ions of a plasma capture free (energetic) electrons and combine with electrons or negative ions to form new neutral atoms ( gas ). Recombination can be described as the reverse of ionization, whereby conditions allow the plasma to revert to a gas. [ 1 ] Recombination is an exothermic process , meaning that the plasma releases some of its internal energy , usually in the form of heat . [ 2 ] Except for plasma composed of pure hydrogen (or its isotopes ), there may also be multiply charged ions, so a single electron capture reduces the ion charge but does not necessarily produce a neutral atom or molecule. Recombination usually takes place throughout the whole volume of a plasma (volume recombination), although in some cases it is confined to a particular region of the volume. Each kind of reaction is called a recombining mode, and their individual rates are strongly affected by the properties of the plasma, such as its energy (heat), the density of each species, and the pressure and temperature of the surrounding environment. An everyday example of rapid plasma recombination occurs when a fluorescent lamp is switched off. The low-density plasma in the lamp (which generates the light by bombardment of the fluorescent coating on the inside of the glass wall) recombines in a fraction of a second after the plasma-generating electric field is removed by switching off the electric power source. Hydrogen recombination modes are of vital importance in the development of divertor regions for tokamak reactors; they may provide a good way of extracting the energy produced in the core of the plasma. At present, it is believed that the most likely plasma losses observed in the recombining region are due to two different modes: electron–ion recombination (EIR) and molecular activated recombination (MAR). [ citation needed ]
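The fluorescent-lamp example can be made quantitative with the standard two-body volume-recombination rate equation; the recombination coefficient and initial density below are assumed order-of-magnitude values, not figures from the source:

```python
def density_after(n0, alpha, t):
    """Quasi-neutral plasma decaying purely by two-body volume recombination:
    dn/dt = -alpha * n^2  has the solution  n(t) = n0 / (1 + alpha * n0 * t)."""
    return n0 / (1.0 + alpha * n0 * t)

# Assumed lamp-like numbers: n0 = 1e11 cm^-3, alpha = 1e-7 cm^3/s
n0, alpha = 1e11, 1e-7
for t in (1e-4, 1e-3, 1e-2):
    print(f"t = {t:.0e} s: n = {density_after(n0, alpha, t):.2e} cm^-3")
```

With these illustrative values the density falls by an order of magnitude within about a millisecond, consistent with the plasma visibly disappearing "in a fraction of a second" once the sustaining field is removed.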
https://en.wikipedia.org/wiki/Plasma_recombination
Thermal spraying techniques are coating processes in which melted (or heated) materials are sprayed onto a surface. The "feedstock" (coating precursor) is heated by electrical (plasma or arc) or chemical means (combustion flame). Thermal spraying can provide thick coatings (approximate thickness range 20 microns to several mm, depending on the process and feedstock) over a large area at a high deposition rate compared with other coating processes such as electroplating and physical and chemical vapor deposition . Coating materials available for thermal spraying include metals, alloys, ceramics, plastics and composites. They are fed in powder or wire form, heated to a molten or semimolten state and accelerated towards substrates in the form of micrometer-size particles. Combustion or electrical arc discharge is usually used as the source of energy for thermal spraying. The resulting coatings are made by the accumulation of numerous sprayed particles. The surface may not heat up significantly, allowing the coating of flammable substances. Coating quality is usually assessed by measuring its porosity , oxide content, macro- and micro- hardness , bond strength and surface roughness . Generally, the coating quality increases with increasing particle velocities. Several variations of thermal spraying are distinguished. In classical (developed between 1910 and 1920) but still widely used processes such as flame spraying and wire arc spraying, the particle velocities are generally low (< 150 m/s), and raw materials must be molten to be deposited. Plasma spraying, developed in the 1970s, uses a high-temperature plasma jet generated by arc discharge with typical temperatures >15,000 K, which makes it possible to spray refractory materials such as oxides, molybdenum , etc. [ 1 ] A typical thermal spray system consists of several components. The detonation gun consists of a long water-cooled barrel with inlet valves for gases and powder.
Oxygen and fuel (acetylene most common) are fed into the barrel along with a charge of powder. A spark is used to ignite the gas mixture, and the resulting detonation heats and accelerates the powder to supersonic velocity through the barrel. A pulse of nitrogen is used to purge the barrel after each detonation. This process is repeated many times a second. The high kinetic energy of the hot powder particles on impact with the substrate results in a buildup of a very dense and strong coating. The coating adheres through a mechanical bond resulting from the deformation of the base substrate wrapping around the sprayed particles after the high speed impact. In plasma spraying process, the material to be deposited (feedstock) — typically as a powder , sometimes as a liquid , [ 2 ] suspension [ 3 ] or wire — is introduced into the plasma jet, emanating from a plasma torch . In the jet, where the temperature is on the order of 10,000 K, the material is melted and propelled towards a substrate. There, the molten droplets flatten, rapidly solidify and form a deposit. Commonly, the deposits remain adherent to the substrate as coatings; free-standing parts can also be produced by removing the substrate. There are a large number of technological parameters that influence the interaction of the particles with the plasma jet and the substrate and therefore the deposit properties. These parameters include feedstock type, plasma gas composition and flow rate, energy input, torch offset distance, substrate cooling, etc. The deposits consist of a multitude of pancake-like 'splats' called lamellae , formed by flattening of the liquid droplets. As the feedstock powders typically have sizes from micrometers to above 100 micrometers, the lamellae have thickness in the micrometer range and lateral dimension from several to hundreds of micrometers. Between these lamellae, there are small voids, such as pores, cracks and regions of incomplete bonding. 
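The dense coatings described above come from the particles' impact kinetic energy, which grows with the square of velocity. A sketch with an assumed particle size and alumina-like density (both values are illustrative, not from the source):

```python
import math

def particle_ke_j(diameter_m, density_kg_m3, velocity_m_s):
    """Kinetic energy of a single spherical spray particle on impact:
    KE = 1/2 * m * v^2, with the mass from the sphere volume and density."""
    r = diameter_m / 2.0
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * r**3
    return 0.5 * mass * velocity_m_s**2

# Assumed example: 30 um particle, density ~3950 kg/m^3 (alumina-like)
for v in (150.0, 600.0):  # classical flame/arc spray speed vs a faster process
    print(f"v = {v:4.0f} m/s: KE = {particle_ke_j(30e-6, 3950.0, v):.2e} J")
```

Quadrupling the velocity delivers sixteen times the impact energy per particle, which is the quantitative reason coating density and bond strength generally improve with particle velocity.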
As a result of this unique structure, the deposits can have properties significantly different from bulk materials: generally lower strength and modulus , higher strain tolerance, and lower thermal and electrical conductivity . Also, due to the rapid solidification , metastable phases can be present in the deposits. This technique is mostly used to produce coatings on structural materials. Such coatings provide protection against high temperatures (for example, thermal barrier coatings for exhaust heat management ), corrosion , erosion and wear ; they can also change the appearance, electrical or tribological properties of the surface, replace worn material, etc. When sprayed on substrates of various shapes and removed, free-standing parts in the form of plates, tubes, shells, etc. can be produced. It can also be used for powder processing (spheroidization, homogenization, modification of chemistry, etc.). In this case, the substrate for deposition is absent and the particles solidify during flight or in a controlled environment (e.g., water). This technique, with variations, may also be used to create porous structures suitable for bone ingrowth, as a coating for medical implants. A polymer dispersion aerosol can be injected into the plasma discharge in order to graft the polymer onto a substrate surface. [ 3 ] This application is mainly used to modify the surface chemistry of polymers. Plasma spraying systems can be categorized by several criteria.
These criteria include the method of plasma jet generation, the plasma-forming medium, and the spraying environment. Another variation consists of using a liquid feedstock instead of a solid powder for melting; this technique is known as solution precursor plasma spraying. Vacuum plasma spraying (VPS) is a technology for etching and surface modification to create porous layers with high reproducibility, and for cleaning and surface engineering of plastics, rubbers and natural fibers, as well as for replacing CFCs for cleaning metal components. This surface engineering can improve properties such as frictional behavior, heat resistance, surface electrical conductivity, lubricity, cohesive strength of films, or dielectric constant, or it can make materials hydrophilic or hydrophobic. The process typically operates at 39–120 °C to avoid thermal damage. It can induce non-thermally activated surface reactions, causing surface changes which cannot occur with molecular chemistries at atmospheric pressure. Plasma processing is done in a controlled environment inside a sealed chamber at a medium vacuum, around 13–65 Pa. The gas or mixture of gases is energized by an electrical field from DC to microwave frequencies, typically 1–500 W at 50 V. The treated components are usually electrically isolated. The volatile plasma by-products are evacuated from the chamber by the vacuum pump, and if necessary can be neutralized in an exhaust scrubber. In contrast to molecular chemistry, plasmas employ far more energetic species. Plasma also generates electromagnetic radiation in the form of vacuum UV photons, which penetrate bulk polymers to a depth of about 10 μm. This can cause chain scissions and cross-linking. Plasmas affect materials at an atomic level. Techniques like X-ray photoelectron spectroscopy and scanning electron microscopy are used for surface analysis to identify the processes required and to judge their effects. As a simple indication of surface energy, and hence adhesion or wettability, a water droplet contact angle test is often used.
The lower the contact angle, the higher the surface energy and the more hydrophilic the material is. At higher energies, ionization tends to occur more than chemical dissociation. In a typical reactive gas, 1 in 100 molecules forms free radicals whereas only 1 in 10^6 ionizes. The predominant effect here is the forming of free radicals. Ionic effects can predominate with selection of process parameters and, if necessary, the use of noble gases. Wire arc spray is a form of thermal spraying where two consumable metal wires are fed independently into the spray gun. These wires are then charged and an arc is generated between them. The heat from this arc melts the incoming wire, which is then entrained in an air jet from the gun. This entrained molten feedstock is then deposited onto a substrate with the help of compressed air. This process is commonly used for metallic, heavy coatings. [ 1 ] Plasma transferred wire arc (PTWA) is another form of wire arc spray which deposits a coating on the internal surface of a cylinder, or on the external surface of a part of any geometry. It is predominantly known for its use in coating the cylinder bores of an engine, enabling the use of aluminium engine blocks without the need for heavy cast iron sleeves. A single conductive wire is used as "feedstock" for the system. A supersonic plasma jet melts the wire, atomizes it and propels it onto the substrate. The plasma jet is formed by a transferred arc between a non-consumable cathode and the tip of the wire. After atomization, forced air transports the stream of molten droplets onto the bore wall. The particles flatten when they impinge on the surface of the substrate, due to the high kinetic energy. The particles rapidly solidify upon contact. The stacked particles make up a highly wear-resistant coating. The PTWA thermal spray process utilizes a single wire as the feedstock material.
All conductive wires up to and including 0.0625 in (1.59 mm) can be used as feedstock material, including "cored" wires. PTWA can be used to apply a coating to the wear surface of engine or transmission components to replace a bushing or bearing. For example, using PTWA to coat the bearing surface of a connecting rod offers a number of benefits including reductions in weight, cost, friction potential, and stress in the connecting rod. During the 1980s, a class of thermal spray processes called high velocity oxy-fuel spraying was developed. A mixture of gaseous or liquid fuel and oxygen is fed into a combustion chamber, where they are ignited and combusted continuously. The resultant hot gas at a pressure close to 1 MPa emanates through a converging–diverging nozzle and travels through a straight section. The fuels can be gases (hydrogen, methane, propane, propylene, acetylene, natural gas, etc.) or liquids (kerosene, etc.). The jet velocity at the exit of the barrel (>1000 m/s) exceeds the speed of sound. A powder feedstock is injected into the gas stream, which accelerates the powder up to 800 m/s. The stream of hot gas and powder is directed towards the surface to be coated. The powder partially melts in the stream, and deposits upon the substrate. The resulting coating has low porosity and high bond strength. [ 1 ] HVOF coatings may be as thick as 12 mm (about 1/2 in). It is typically used to deposit wear and corrosion resistant coatings on materials, such as ceramic and metallic layers. Common powders include WC–Co, chromium carbide, MCrAlY, and alumina. The process has been most successful for depositing cermet materials (WC–Co, etc.) and other corrosion-resistant alloys (stainless steels, nickel-based alloys, aluminium, hydroxyapatite for medical implants, etc.). [ 1 ] HVAF coating technology is the combustion of propane in a compressed air stream. Like HVOF, this produces a uniform high velocity jet.
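The supersonic HVOF jet can be rationalized with a one-dimensional isentropic nozzle estimate. The chamber pressure of roughly 1 MPa comes from the text; the stagnation temperature, heat capacity, and specific-heat ratio below are illustrative assumptions for hot combustion products, not measured values.

```python
import math

def ideal_exit_velocity(T0, p0, p_exit, cp, gamma):
    """Isentropic exit velocity of a converging-diverging nozzle:

        v = sqrt(2 * cp * T0 * (1 - (p_exit/p0)**((gamma-1)/gamma)))

    Treats the combustion gas as a perfect gas expanding adiabatically
    from chamber conditions (T0, p0) to the exit pressure p_exit.
    """
    return math.sqrt(2.0 * cp * T0 * (1.0 - (p_exit / p0) ** ((gamma - 1.0) / gamma)))

# Assumed chamber values: T0 = 2800 K, cp = 1300 J/(kg K), gamma = 1.28;
# only the ~1 MPa chamber pressure is taken from the text.
v = ideal_exit_velocity(T0=2800.0, p0=1.0e6, p_exit=1.0e5, cp=1300.0, gamma=1.28)
print(round(v), "m/s")  # well above 1000 m/s, consistent with a supersonic jet
```

Even this crude estimate lands comfortably above 1000 m/s, in line with the barrel exit velocity quoted above.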
HVAF differs by including a heat baffle to further stabilize the thermal spray mechanisms. Material is injected into the air-fuel stream and coating particles are propelled toward the part. [ 4 ] HVAF has a maximum flame temperature of 3,560–3,650 °F (approximately 1,960–2,010 °C) and an average particle velocity of 3,300 ft/s (about 1,000 m/s). Since the maximum flame temperature is relatively close to the melting point of most spray materials, HVAF results in a more uniform, ductile coating. This also allows for a typical coating thickness of 0.002–0.050 in (0.05–1.27 mm). HVAF coatings also have a mechanical bond strength of greater than 12,000 psi (83 MPa). Common HVAF coating materials include, but are not limited to, tungsten carbide, chrome carbide, stainless steel, Hastelloy, and Inconel. Due to their ductile nature, HVAF coatings can help resist cavitation damage. [ 5 ] Spray and fuse uses high heat to increase the bond between the thermal spray coating and the substrate of the part. Unlike other types of thermal spray, spray and fuse creates a metallurgical bond between the coating and the surface. This means that instead of relying on friction for coating adhesion, it melds the surface and coating material into one material; the distinction comes down to the difference between adhesion and cohesion. This process usually involves spraying a powdered material onto the component, then following with an acetylene torch. The torch melts the coating material and the top layer of the component material, fusing them together. Due to the high heat of spray and fuse, some heat distortion may occur, and care must be taken to determine if a component is a good candidate. These high temperatures are akin to those used in welding. This metallurgical bond creates an extremely wear and abrasion resistant coating. Spray and fuse delivers the benefits of hardface welding with the ease of thermal spray. [ 6 ] Cold spraying (or gas dynamic cold spraying) was introduced to the market in the 1990s.
The method was originally developed in the Soviet Union – while experimenting with the erosion of a target substrate exposed to a two-phase high-velocity flow of fine powder in a wind tunnel, scientists observed the accidental rapid formation of coatings. [ 1 ] In cold spraying, particles are accelerated to very high speeds by the carrier gas forced through a converging–diverging de Laval type nozzle. Upon impact, solid particles with sufficient kinetic energy deform plastically and bond mechanically to the substrate to form a coating. The critical velocity needed to form bonding depends on the material's properties, powder size and temperature. Metals, polymers, ceramics, composite materials and nanocrystalline powders can be deposited using cold spraying. [ 7 ] Soft metals such as Cu and Al are best suited for cold spraying, but coating of other materials (W, Ta, Ti, MCrAlY, WC–Co, etc.) by cold spraying has been reported. [ 1 ] The deposition efficiency is typically low for alloy powders, and the window of process parameters and suitable powder sizes is narrow. To accelerate powders to higher velocity, finer powders (<20 micrometers) are used. It is possible to accelerate powder particles to much higher velocity by using a processing gas with a higher speed of sound (helium instead of nitrogen). However, helium is costly and its flow rate, and thus consumption, is higher. To improve acceleration capability, nitrogen gas is heated up to about 900 °C. As a result, deposition efficiency and tensile strength of deposits increase. [ 1 ] Warm spraying is a novel modification of high velocity oxy-fuel spraying, in which the temperature of the combustion gas is lowered by mixing nitrogen with it, thus bringing the process closer to cold spraying. The resulting gas contains a large amount of water vapor, unreacted hydrocarbons and oxygen, and is thus dirtier than in cold spraying. However, the coating efficiency is higher.
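Why helium (or heated nitrogen) accelerates particles better follows directly from the ideal-gas speed of sound, a = sqrt(γRT/M). The sketch below uses standard gas constants; the ~900 °C figure for heated nitrogen is from the text, while the 300 K reference temperature is an assumption.

```python
import math

R = 8.314  # J/(mol K), universal gas constant

def speed_of_sound(gamma, molar_mass_kg, temperature_k):
    """Ideal-gas speed of sound a = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * temperature_k / molar_mass_kg)

a_he_300 = speed_of_sound(5.0 / 3.0, 4.0e-3, 300.0)    # helium, room temperature
a_n2_300 = speed_of_sound(1.4, 28.0e-3, 300.0)         # nitrogen, room temperature
a_n2_1173 = speed_of_sound(1.4, 28.0e-3, 1173.0)       # nitrogen heated to ~900 C

print(round(a_he_300), round(a_n2_300), round(a_n2_1173))
# Helium's low molar mass makes it roughly three times faster than
# nitrogen at the same temperature; heating nitrogen to ~900 C roughly
# doubles its speed of sound.
```

Since particle acceleration in the nozzle scales with the gas velocity, and hence with its speed of sound, both the helium and heated-nitrogen options in the text follow from this single relation.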
On the other hand, the lower temperatures of warm spraying reduce melting and chemical reactions of the feed powder, as compared to HVOF. These advantages are especially important for coating materials such as Ti, plastics, and metallic glasses, which rapidly oxidize or deteriorate at high temperatures. [ 1 ] Thermal spraying is a line-of-sight process and the bond mechanism is primarily mechanical. Thermal spray cannot properly coat a substrate if the area to which it is applied is geometrically complex or blocked by other bodies. [ 9 ] Thermal spraying need not be a dangerous process if the equipment is treated with care and correct spraying practices are followed. As with any industrial process, there are a number of hazards of which the operator should be aware and against which specific precautions should be taken. Ideally, equipment should be operated automatically in enclosures specially designed to extract fumes, reduce noise levels, and prevent direct viewing of the spraying head. Such techniques will also produce coatings that are more consistent. There are occasions when the type of components being treated, or their low production levels, require manual equipment operation. Under these conditions, a number of hazards peculiar to thermal spraying are experienced, in addition to those commonly encountered in production or processing industries. [ 10 ] [ 11 ] Metal spraying equipment uses compressed gases which create noise. Sound levels vary with the type of spraying equipment, the material being sprayed, and the operating parameters. Typical sound pressure levels are measured at 1 meter behind the arc. [ 12 ] Combustion spraying equipment produces an intense flame, which may have a peak temperature of more than 3,100 °C and is very bright. Electric arc spraying produces ultraviolet light which may damage delicate body tissues. Plasma also generates considerable UV radiation, easily burning exposed skin, and can also cause "flash burn" to the eyes.
Spray booths and enclosures should be fitted with ultraviolet-absorbent dark glass. Where this is not possible, operators and others in the vicinity should wear protective goggles containing BS grade 6 green glass. Opaque screens should be placed around spraying areas. The nozzle of an arc pistol should never be viewed directly unless it is certain that no power is available to the equipment. [ 10 ] The atomization of molten materials produces a large amount of dust and fumes made up of very fine particles (ca. 80–95% of the particles by number <100 nm). [ 13 ] Proper extraction facilities are vital not only for personal safety, but to minimize entrapment of re-frozen particles in the sprayed coatings. The use of respirators fitted with suitable filters is strongly recommended where equipment cannot be isolated. [ 13 ] Certain materials offer specific known hazards: [ 10 ] Combustion spraying guns use oxygen and fuel gases. The fuel gases are potentially explosive. In particular, acetylene may only be used under approved conditions. Oxygen, while not explosive, will sustain combustion, and many materials will spontaneously ignite if excessive oxygen levels are present. Care must be taken to avoid leakage and to isolate oxygen and fuel gas supplies when not in use. [ 10 ] Electric arc guns operate at low voltages (below 45 V DC), but at relatively high currents. They may be safely hand-held. The power supply units are connected to 440 V AC sources, and must be treated with caution. [ 10 ]
https://en.wikipedia.org/wiki/Plasma_spraying
Plasma transferred wire arc (PTWA) thermal spraying is a thermal spraying process that deposits a coating on the internal surface of a cylinder, or on the external surface of a part of any geometry. It is predominantly known for its use in coating the cylinder bores of an internal combustion engine, enabling the construction of aluminium engine blocks without cast iron cylinder sleeves. The inventors of PTWA received the 2009 IPO National Inventor of the Year award. [ 1 ] This technology was initially patented and developed by Flame-Spray Industries, and subsequently improved upon by Flame-Spray and Ford. A single conductive wire is used as feedstock for the system. A supersonic plasma jet—formed by a transferred arc between a non-consumable cathode and the wire—melts and atomizes the wire. A stream of air transports the atomized metal onto the substrate. The particles flatten upon striking the surface of the substrate due to their high kinetic energy. The particles rapidly solidify upon contact and can assume both crystalline and amorphous phases. [ 2 ] There is also the possibility of producing multi-layer coatings via stacked layers of particles, increasing wear resistance. All conductive wires up to and including 1.59 mm (0.0625 in) can be used as feedstock material, including "cored" wires. Refractory metals, as well as low-melt materials, are easily deposited. PTWA can be used to apply a coating to wear surfaces of engine or transmission components, serving as a plain bearing. For the cylinder bores of hypoeutectic aluminum-silicon alloy blocks, PTWA's main advantages over cast iron liners are reduced weight and cost. The thinner bore surface also allows for more compact bore spacing, and can potentially provide better heat transfer. Automotive engines that use PTWA include the Nissan VR38DETT [ 3 ] and Ford Coyote. [ 4 ] [ 5 ] Caterpillar and Ford also use PTWA to remanufacture engines. [ 6 ]
https://en.wikipedia.org/wiki/Plasma_transferred_wire_arc_thermal_spraying
Plasmalysis is an electrochemical process that requires a voltage source. On the one hand, it describes the plasma-chemical dissociation of organic and inorganic compounds (e.g. C-H and N-H compounds) in interaction with a thermal/non-thermal plasma between two electrodes. On the other hand, it describes synthesis, i.e. the combination of two or more elements to form a new molecule (e.g. methane synthesis/methanation). Plasmalysis is a portmanteau of plasma and lysis (Greek λύσις, "dissolution"). Thermal plasmas [ 1 ] can be generated, for example, by inductive coupling of high-frequency fields in the MHz range (ICP: inductively coupled plasma) or by direct-current coupling (arc discharges). A thermal plasma is characterized by the fact that electrons, ions and neutral particles are in thermodynamic equilibrium. For atmospheric-pressure plasmas, the temperatures in thermal plasmas are usually above 6000 K. This corresponds to average kinetic energies of less than 1 eV. Nonthermal plasmas are found in low-pressure arc discharges, such as fluorescent lamps, in dielectric barrier discharges (DBD), such as ozone tubes, in microwave plasmas (plasma torches, e.g. PLexc or MagJet) or in GHz plasma jets. A non-thermal plasma shows a significant difference between the electron and gas temperature. For example, the electron temperature can be several tens of thousands of kelvin, which corresponds to average kinetic energies of more than 1 eV, while a gas temperature close to room temperature is measured. Despite their low temperature, such plasmas can trigger chemical reactions and excitation states via electron collisions. Pulsed corona and dielectric barrier discharges belong to the family of nonthermal plasmas. Here the electrons are much hotter (several eV) than the ions/neutral gas particles (room temperature). [ 2 ] [ 3 ] To generate a nonthermal plasma at atmospheric pressure, a working gas (molecular or inert gas, e.g.
air, nitrogen, argon, helium) is passed through an electric field. Electrons originating from ionization processes can be accelerated in this field to trigger impact ionization processes. If more free electrons are produced during this process than are lost, a discharge can build up. The degree of ionization in technically used plasmas is usually very low, typically a few per mille or less. The electrical conductivity generated by these free charge carriers is used to couple in electrical power. When colliding with other gas atoms or molecules, the free electrons can transfer their energy to them and thus generate highly reactive species that act on the material to be treated (gaseous, liquid, solid). The electron energy is sufficient to split covalent bonds in organic molecules. The energy required to split single bonds is in the range of about 1.5–6.2 eV, for double bonds in the range of about 4.4–7.4 eV, and for triple bonds in the range of 8.5–11.2 eV. For gases that can also be used as process gases, the dissociation energies are e.g. 5.7 eV (O2) and 9.8 eV (N2). [ 4 ] Atmospheric-pressure plasmas have been used for a variety of industrial applications, including volatile organic compound (VOC) removal, exhaust gas emission treatment and polymer surface and food treatment. For decades, non-thermal plasmas have also been used to generate ozone for water purification. Atmospheric pressure plasmas can be characterized primarily by a large number of electrical discharges in which the majority of the electrical energy is used to generate energetic electrons. These energetic electrons produce chemically excited species - free radicals and ions - and additional electrons by dissociation, excitation and ionization of background gas molecules by electron impact. These excited species in turn oxidize, reduce or decompose the substances brought into contact with them, such as wastewater [ 5 ] or biomethane.
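The energy figures above (fractions of an eV for a 6000 K gas, more than 1 eV for hot electrons) follow from the Boltzmann relation: the characteristic thermal energy scale is k_B·T, conventionally quoted in electronvolts in plasma physics. A minimal conversion sketch:

```python
K_B = 1.380649e-23        # J/K, Boltzmann constant
EV = 1.602176634e-19      # J per electronvolt

def kelvin_to_ev(temperature_k):
    """Characteristic thermal energy k_B * T expressed in electronvolts."""
    return K_B * temperature_k / EV

# A 6000 K thermal plasma sits below 1 eV, while electrons at several
# tens of thousands of kelvin exceed 1 eV, as stated in the text.
print(round(kelvin_to_ev(6000), 2), round(kelvin_to_ev(30000), 2))  # 0.52 2.59
```

Comparing these values with the bond dissociation energies quoted above (roughly 1.5–11 eV) shows why only the energetic tail of the electron distribution, rather than the bulk gas, drives bond breaking in non-thermal plasmas.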
Part of the electrical energy is converted into chemical energy. Plasmalysis can thus be used to store energy, for example in the plasmalysis of ammonium from waste water or liquid fermentation residue, which produces hydrogen and nitrogen. The hydrogen thus produced can serve as an energy carrier for a hydrogen economy. In the following section XH stands for any hydrogen compound, e.g. CH- and NH-compounds. The density of radicals scales with the electron density and with higher gas and electron temperatures (thermal dissociation and electron impact). This process generates negative ions as well as neutral particles: the colliding electron is captured, leaving the molecule in an excited state, and the energy difference between the ground state and the excited state dissociates the molecule. The electron-induced dissociation of water depends on the electron temperature, which significantly influences the ratio of the OH density (n_OH) to the electron density (n_e). The maximum OH density is reached in the early afterglow, when the electron temperature (T_e) is low. Since the focus is always on the most energy-efficient dissociation of chemical compounds, the benchmark is the energy input of the electrolysis of distilled water (45 kWh/kg H2), as in the following reaction equation: 2 H2O → 2 H2 + O2. A particularly efficient way of generating hydrogen (10 kWh/kg H2) is methane plasmalysis. [ 8 ] In this process, methane (e.g. from natural gas) is decomposed in the plasma in the absence of oxygen, forming hydrogen and elemental carbon, as in the following reaction equation: CH4 → C + 2 H2. Methane plasmalysis offers, among other things, the possibility of decentralized decarbonization of natural gas or, if biogas is used, also the realization of a CO2 sink, [ 10 ] whereby, in contrast to the CCS process commonly used to date, no gas has to be compressed and stored; instead, the elemental carbon produced can be bound in product form.
This technology can also be used to prevent the flaring of so-called "flare gases" by using them as a feedstock for the production of hydrogen and carbon. The plasmalysis of wastewater and liquid manure enables hydrogen to be recovered from pollutants contained in the wastewater (ammonium (NH4+) or hydrocarbon compounds (COD)). The plasma-catalytic decomposition of ammonia takes place as shown in the following reaction equation: 2 NH3 → N2 + 3 H2. The treated wastewater is purified in the process. The energy requirement for the production of green hydrogen is approx. 12 kWh/kg H2. This technology can also be used as an ammonia cracking technology for splitting the hydrogen carrier ammonia. Hydrogen sulfide - a component of crude oil and natural gas and a by-product in the anaerobic digestion of biomass - is also suitable for plasma-catalytic decomposition to produce hydrogen and elemental sulfur, due to its weak binding energy. The energy requirement for the production of hydrogen from H2S is approx. 5 kWh/kg H2. It is apparent that both the reactor geometry and the method by which the plasma is generated strongly influence the performance of the system.
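The specific energy demands quoted in this article can be compared directly. The sketch below tabulates the document's figures (kWh per kg of H2) and converts them to hydrogen yield per MWh of electrical input; the side-by-side comparison is an illustration, not from the text.

```python
# Specific energy demand per route, in kWh per kg H2 (figures from the text).
ROUTES_KWH_PER_KG = {
    "water electrolysis": 45.0,
    "ammonium/wastewater plasmalysis": 12.0,
    "methane plasmalysis": 10.0,
    "hydrogen sulfide plasmalysis": 5.0,
}

def kg_h2_per_mwh(kwh_per_kg):
    """Hydrogen yield (kg) per MWh of electrical input."""
    return 1000.0 / kwh_per_kg

for route, demand in sorted(ROUTES_KWH_PER_KG.items(), key=lambda kv: kv[1]):
    print(f"{route}: {demand} kWh/kg -> {kg_h2_per_mwh(demand):.1f} kg H2 per MWh")
```

By this measure, methane plasmalysis yields roughly 4.5 times more hydrogen per unit of electricity than water electrolysis, which is the efficiency argument the article is making.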
https://en.wikipedia.org/wiki/Plasmalysis
A plasmid is a small, extrachromosomal DNA molecule within a cell that is physically separated from chromosomal DNA and can replicate independently. They are most commonly found as small circular, double-stranded DNA molecules in bacteria; however, plasmids are sometimes present in archaea and eukaryotic organisms. [ 1 ] [ 2 ] Plasmids often carry useful genes, such as those involved in antibiotic resistance, virulence, [ 3 ] [ 4 ] [ 5 ] secondary metabolism [ 6 ] and bioremediation. [ 7 ] [ 8 ] While chromosomes are large and contain all the essential genetic information for living under normal conditions, plasmids are usually very small and contain additional genes for special circumstances. Artificial plasmids are widely used as vectors in molecular cloning, serving to drive the replication of recombinant DNA sequences within host organisms. In the laboratory, plasmids may be introduced into a cell via transformation. Synthetic plasmids can be procured over the internet from various vendors, which construct them from submitted sequences typically designed with software; if a design does not work, the vendor may make additional edits to the submission. [ 9 ] [ 10 ] [ 11 ] Plasmids are considered replicons, units of DNA capable of replicating autonomously within a suitable host. However, plasmids, like viruses, are not generally classified as life. [ 12 ] Plasmids are transmitted from one bacterium to another (even of another species) mostly through conjugation. [ 3 ] This host-to-host transfer of genetic material is one mechanism of horizontal gene transfer, and plasmids are considered part of the mobilome. Unlike viruses, which encase their genetic material in a protective protein coat called a capsid, plasmids are "naked" DNA and do not encode genes necessary to encase the genetic material for transfer to a new host; however, some classes of plasmids encode the conjugative "sex" pilus necessary for their own transfer.
Plasmids vary in size from 1 to over 400 kbp, [ 13 ] and the number of identical plasmids in a single cell can range from one up to thousands. The term plasmid was coined in 1952 by the American molecular biologist Joshua Lederberg to refer to "any extrachromosomal hereditary determinant." [ 14 ] [ 15 ] The term's early usage included any bacterial genetic material that exists extrachromosomally for at least part of its replication cycle, but because that description includes bacterial viruses, the notion of plasmid was refined over time to refer to genetic elements that reproduce autonomously. [ 16 ] Later, in 1968, it was decided that the term plasmid should be adopted as the term for any extrachromosomal genetic element, [ 17 ] and to distinguish it from viruses, the definition was narrowed to genetic elements that exist exclusively or predominantly outside of the chromosome, can replicate autonomously, and contribute to transferring mobile elements between unrelated bacteria. [ 3 ] [ 4 ] [ 16 ] In order for plasmids to replicate independently within a cell, they must possess a stretch of DNA that can act as an origin of replication. The self-replicating unit, in this case the plasmid, is called a replicon. A typical bacterial replicon may consist of a number of elements, such as the gene for the plasmid-specific replication initiation protein (Rep), repeating units called iterons, DnaA boxes, and an adjacent AT-rich region. [ 16 ] Smaller plasmids make use of the host replicative enzymes to make copies of themselves, while larger plasmids may carry genes specific for the replication of those plasmids. A few types of plasmids can also insert into the host chromosome, and these integrative plasmids are sometimes referred to as episomes in prokaryotes. [ 18 ] Plasmids almost always carry at least one gene.
Many of the genes carried by a plasmid are beneficial for the host cells, for example, enabling the host cell to survive in an environment that would otherwise be lethal or restrictive for growth. Some of these genes encode traits for resistance to antibiotics or heavy metals, while others may produce virulence factors that enable a bacterium to colonize a host and overcome its defences, or have specific metabolic functions that allow the bacterium to utilize a particular nutrient, including the ability to degrade recalcitrant or toxic organic compounds. [ 19 ] Plasmids can also provide bacteria with the ability to fix nitrogen. Some plasmids, called cryptic plasmids, do not appear to provide any clear advantage to their host, yet still persist in bacterial populations. [ 20 ] However, recent studies show that they may play a role in antibiotic resistance by contributing to heteroresistance within bacterial populations. [ 21 ] Naturally occurring plasmids vary greatly in their physical properties. Their size can range from very small mini-plasmids of less than 1 kilobase pair (kbp) to very large megaplasmids of several megabase pairs (Mbp). At the upper end, little distinguishes a megaplasmid from a minichromosome. Plasmids are generally circular, but examples of linear plasmids are also known. These linear plasmids require specialized mechanisms to replicate their ends. [ 16 ] Plasmids may be present in an individual cell in varying number, ranging from one to several hundreds. The normal number of copies of a plasmid that may be found in a single cell is called the plasmid copy number, and is determined by how the replication initiation is regulated and by the size of the molecule. Larger plasmids tend to have lower copy numbers. [ 18 ] Low-copy-number plasmids that exist only as one or a few copies in each bacterium are, upon cell division, in danger of being lost in one of the segregating bacteria.
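This segregation risk can be sketched with a naive random-segregation model (an assumption for illustration; real low-copy plasmids are typically actively partitioned). If each of the n plasmid copies ends up in either daughter cell with equal probability, the chance that one daughter receives no copy at all is 2^(1−n):

```python
def loss_probability(copy_number):
    """Probability that a division produces a plasmid-free daughter cell,
    assuming each copy segregates independently and at random."""
    if copy_number < 1:
        raise ValueError("copy number must be at least 1")
    # P(daughter A empty) = (1/2)**n, same for B; both cannot be empty,
    # so P(some daughter empty) = 2 * (1/2)**n = 2**(1 - n).
    return 2.0 ** (1 - copy_number)

for n in (1, 2, 5, 10):
    print(n, loss_probability(n))
```

A single-copy plasmid is guaranteed to be absent from one daughter under this model, while ten randomly segregating copies are lost in fewer than 1 in 500 divisions, which is why active partition systems matter most for low-copy plasmids.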
Such single-copy plasmids have systems that attempt to actively distribute a copy to both daughter cells. These systems, which include the parABS system and the parMRC system, are often referred to as the partition system or partition function of a plasmid. [ 22 ] Plasmids of linear form are unknown among phytopathogens, with one exception, Rhodococcus fascians. [ 23 ] Plasmids may be classified in a number of ways. Plasmids can be broadly classified into conjugative plasmids and non-conjugative plasmids. Conjugative plasmids contain a set of transfer genes which promote sexual conjugation between different cells. [ 18 ] In the complex process of conjugation, plasmids may be transferred from one bacterium to another via sex pili encoded by some of the transfer genes. [ 24 ] Non-conjugative plasmids are incapable of initiating conjugation, hence they can be transferred only with the assistance of conjugative plasmids. An intermediate class of plasmids is mobilizable, and carries only a subset of the genes required for transfer. They can parasitize a conjugative plasmid, transferring at high frequency only in its presence. [ 25 ] Plasmids can also be classified into incompatibility groups. A microbe can harbour different types of plasmids, but different plasmids can only exist in a single bacterial cell if they are compatible. If two plasmids are not compatible, one or the other will be rapidly lost from the cell. Different plasmids may therefore be assigned to different incompatibility groups depending on whether they can coexist together. Incompatible plasmids (belonging to the same incompatibility group) normally share the same replication or partition mechanisms and can thus not be kept together in a single cell. [ 26 ] [ 27 ] Incompatibility typing (or Inc typing) was traditionally achieved by genetic phenotyping methods, testing whether cells stably transmit plasmid pairs to their progeny.
[ 28 ] This has largely been superseded by genetic methods such as PCR, and more recently by whole-genome sequencing methods with bioinformatic tools such as PlasmidFinder. [ 29 ] Another way to classify plasmids is by function. There are five main classes: fertility (F) plasmids, resistance (R) plasmids, col plasmids, degradative plasmids, and virulence plasmids. Plasmids can belong to more than one of these functional groups. With the wider availability of whole-genome sequencing, which is able to capture the genetic sequence of plasmids, methods have been developed to cluster or type plasmids based on their sequence content. Plasmid multi-locus sequence typing (pMLST) is based on chromosomal multilocus sequence typing, matching the sequence of replication machinery genes to databases of previously classified sequences. If the sequence allele matches the database, this is used as the plasmid classification; this approach therefore has higher sensitivity than a simple presence-or-absence test of these genes. [ 29 ] A related method is to use average nucleotide identity between plasmids to find close genetic neighbours. Tools which use this approach include COPLA [ 36 ] and MOB-cluster. [ 37 ] Creating typing classifications using unsupervised learning, that is, without a pre-existing database or 'reference-free', has been shown to be useful in grouping plasmids in new datasets without biasing or being limited to representations in a pre-built database; tools to do this include mge-cluster. [ 38 ] As plasmids frequently change their gene content and order, modelling genetic distances between them using methods designed for point mutations can lead to poor estimates of the true evolutionary distance between plasmids. Tools such as pling find homologous sequence regions between plasmids and more accurately reconstruct the number of evolutionary events (structural variants) between each pair, then use unsupervised clustering approaches to group plasmids.
[ 39 ] Although most plasmids are double-stranded DNA molecules, some consist of single-stranded DNA, or predominantly double-stranded RNA. RNA plasmids are non-infectious extrachromosomal linear RNA replicons, both encapsidated and unencapsidated, which have been found in fungi and various plants, from algae to land plants. In many cases, however, it may be difficult or impossible to clearly distinguish RNA plasmids from RNA viruses and other infectious RNAs. [ 40 ] Chromids are elements that exist at the boundary between a chromosome and a plasmid, found in about 10% of bacterial species sequenced by 2009. These elements carry core genes and have codon usage similar to the chromosome, yet use a plasmid-type replication mechanism, such as the low-copy-number RepABC system. As a result, they have been variously classified as minichromosomes or megaplasmids in the past. [ 41 ] In Vibrio, the bacterium synchronizes the replication of the chromosome and chromid by a conserved genome size ratio. [ 42 ] Artificially constructed plasmids may be used as vectors in genetic engineering. These plasmids serve as important tools in genetics and biotechnology labs, where they are commonly used to clone and amplify (make many copies of) or express particular genes. [ 43 ] A wide variety of plasmids are commercially available for such uses. The gene to be replicated is normally inserted into a plasmid that typically contains a number of features for its use. These include a gene that confers resistance to particular antibiotics (ampicillin is most frequently used for bacterial strains), an origin of replication to allow the bacterial cells to replicate the plasmid DNA, and a suitable site for cloning (referred to as a multiple cloning site). DNA structural instability can be defined as a series of spontaneous events that culminate in an unforeseen rearrangement, loss, or gain of genetic material.
Such events are frequently triggered by the transposition of mobile elements or by the presence of unstable elements such as non-canonical (non-B) structures. Accessory regions pertaining to the bacterial backbone may engage in a wide range of structural instability phenomena. Well-known catalysts of genetic instability include direct, inverted, and tandem repeats, which are known to be conspicuous in a large number of commercially available cloning and expression vectors. [ 44 ] Insertion sequences can also severely impact plasmid function and yield, by leading to deletions and rearrangements, activation, down-regulation or inactivation of neighboring gene expression . [ 45 ] Therefore, the reduction or complete elimination of extraneous noncoding backbone sequences would pointedly reduce the propensity for such events to take place, and consequently, the overall recombinogenic potential of the plasmid. [ 46 ] [ 47 ] Plasmids are the most-commonly used bacterial cloning vectors. [ 48 ] These cloning vectors contain a site that allows DNA fragments to be inserted, for example a multiple cloning site or polylinker which has several commonly used restriction sites to which DNA fragments may be ligated . After the gene of interest is inserted, the plasmids are introduced into bacteria by a process called transformation . These plasmids contain a selectable marker , usually an antibiotic resistance gene, which confers on the bacteria an ability to survive and proliferate in a selective growth medium containing the particular antibiotics. The cells after transformation are exposed to the selective media, and only cells containing the plasmid may survive. In this way, the antibiotics act as a filter to select only the bacteria containing the plasmid DNA. The vector may also contain other marker genes or reporter genes to facilitate selection of plasmids with cloned inserts. 
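Concretely, a multiple cloning site is just a short stretch of DNA containing several enzyme recognition sequences. The following sketch scans a polylinker for such sites; the recognition sequences listed are the real ones for these enzymes, but the example polylinker sequence itself is invented for illustration.

```python
# Recognition sequences of a few widely used restriction enzymes.
ENZYMES = {
    "EcoRI":   "GAATTC",
    "BamHI":   "GGATCC",
    "HindIII": "AAGCTT",
    "XhoI":    "CTCGAG",
}

def find_sites(seq, enzymes=ENZYMES):
    """Return {enzyme: [0-based positions]} for every recognition
    site found on the given strand of `seq`."""
    hits = {}
    for name, site in enzymes.items():
        positions = [i for i in range(len(seq) - len(site) + 1)
                     if seq[i:i + len(site)] == site]
        if positions:
            hits[name] = positions
    return hits

# Hypothetical polylinker: EcoRI, BamHI and HindIII sites with spacers.
mcs = "GAATTCAAAGGATCCTTTAAGCTT"
print(find_sites(mcs))  # EcoRI at 0, BamHI at 9, HindIII at 18; no XhoI site
```

A real vector map would also account for the complementary strand and for enzymes with degenerate recognition sequences, which this sketch omits.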
Bacteria containing the plasmid can then be grown in large amounts, harvested, and the plasmid of interest may then be isolated using various methods of plasmid preparation . A plasmid cloning vector is typically used to clone DNA fragments of up to 15 kbp . [ 49 ] To clone longer lengths of DNA, lambda phage with lysogeny genes deleted, cosmids , bacterial artificial chromosomes , or yeast artificial chromosomes are used. Suicide vectors are plasmids that are unable to replicate in the host cell and therefore have to integrate into the chromosome or disappear. [ 50 ] One example of these vectors is the pMQ30 plasmid. This plasmid carries the sacB gene from Bacillus subtilis , which is induced by sucrose and is lethal when expressed in Gram-negative bacteria. [ 51 ] The benefit of this system (two-step success monitoring) becomes apparent when the experimental design requires a target gene to be integrated into the chromosome of the bacterial host. In the first step, after transforming the host cells with the plasmid, a medium with a specific antibiotic can be used to select for bacteria that contain the plasmid. The second step ensures that only bacteria with the integrated plasmid survive: since the plasmid contains the sacB gene, which induces toxicity in the presence of sucrose, only bacteria that have the plasmid integrated in their chromosome survive and grow. Another major use of plasmids is to make large amounts of proteins. In this case, researchers grow bacteria containing a plasmid harboring the gene of interest. Just as the bacterium produces proteins to confer its antibiotic resistance, it can also be induced to produce large amounts of proteins from the inserted gene. This is a cheap and easy way of mass-producing a protein; for example, exploiting the rapid reproduction of E. coli carrying a plasmid that contains the insulin gene allows large-scale production of insulin.
[ 52 ] [ 53 ] [ 54 ] Plasmids may also be used for gene transfer as a potential treatment in gene therapy so that they may express the protein that is lacking in the cells. Some forms of gene therapy require the insertion of therapeutic genes at pre-selected chromosomal target sites within the human genome . Plasmid vectors are one of many approaches that could be used for this purpose. Zinc finger nucleases (ZFNs) offer a way to cause a site-specific double-strand break in the DNA genome and trigger homologous recombination . Plasmids encoding ZFNs could help deliver a therapeutic gene to a specific site so that cell damage , cancer-causing mutations, or an immune response is avoided. [ 55 ] Plasmids were historically used to genetically engineer the embryonic stem cells of rats to create rat genetic disease models. The limited efficiency of plasmid-based techniques precluded their use in the creation of more accurate human cell models. However, developments in adeno-associated virus recombination techniques, and zinc finger nucleases , have enabled the creation of a new generation of isogenic human disease models . Plasmids assist in transporting biosynthetic gene clusters (BGCs), sets of genes encoding all the enzymes necessary for the production of specialized metabolites (formally known as secondary metabolites ). [ 56 ] A benefit of using plasmids to transfer BGCs is that a suitable host can mass-produce the specialized metabolites; some of these molecules are able to control microbial populations. [ 57 ] [ 58 ] Plasmids can contain and express several BGCs, and a few plasmids are known to be exclusive for transferring BGCs. [ 58 ] BGCs can also be transferred to the host organism's chromosome using a plasmid vector, which allows for studies in gene knockout experiments.
[ 59 ] By using plasmids for the uptake of BGCs, microorganisms can gain an advantage as production is not limited to antibiotic resistant biosynthesis genes but the production of toxins /antitoxins. [ 60 ] The term episome was introduced by François Jacob and Élie Wollman in 1958 to refer to extra-chromosomal genetic material that may replicate autonomously or become integrated into the chromosome. [ 61 ] [ 62 ] Since the term was introduced, however, its use has changed, as plasmid has become the preferred term for autonomously replicating extrachromosomal DNA. At a 1968 symposium in London some participants suggested that the term episome be abandoned, although others continued to use the term with a shift in meaning. [ 63 ] [ 64 ] Today, some authors use episome in the context of prokaryotes to refer to a plasmid that is capable of integrating into the chromosome. The integrative plasmids may be replicated and stably maintained in a cell through multiple generations, but at some stage, they will exist as an independent plasmid molecule. [ 65 ] In the context of eukaryotes, the term episome is used to mean a non-integrated extrachromosomal closed circular DNA molecule that may be replicated in the nucleus. [ 66 ] [ 67 ] Viruses are the most common examples of this, such as herpesviruses , adenoviruses , and polyomaviruses , but some are plasmids. Other examples include aberrant chromosomal fragments, such as double minute chromosomes , that can arise during artificial gene amplifications or in pathologic processes (e.g., cancer cell transformation). Episomes in eukaryotes behave similarly to plasmids in prokaryotes in that the DNA is stably maintained and replicated with the host cell. Cytoplasmic viral episomes (as in poxvirus infections) can also occur. Some episomes, such as herpesviruses, replicate in a rolling circle mechanism, similar to bacteriophages (bacterial phage viruses). 
Others replicate through a bidirectional replication mechanism ( Theta type plasmids). In either case, episomes remain physically separate from host cell chromosomes. Several cancer viruses, including Epstein-Barr virus and Kaposi's sarcoma-associated herpesvirus , are maintained as latent, chromosomally distinct episomes in cancer cells, where the viruses express oncogenes that promote cancer cell proliferation. In cancers, these episomes passively replicate together with host chromosomes when the cell divides. When these viral episomes initiate lytic replication to generate multiple virus particles, they generally activate cellular innate immunity defense mechanisms that kill the host cell. Some plasmids or microbial hosts include an addiction system or postsegregational killing system (PSK), such as the hok/sok (host killing/suppressor of killing) system of plasmid R1 in Escherichia coli . [ 68 ] This variant produces both a long-lived poison and a short-lived antidote . Several types of plasmid addiction systems (toxin/ antitoxin, metabolism-based, ORT systems) were described in the literature [ 69 ] and used in biotechnical (fermentation) or biomedical (vaccine therapy) applications. Daughter cells that retain a copy of the plasmid survive, while a daughter cell that fails to inherit the plasmid dies or suffers a reduced growth-rate because of the lingering poison from the parent cell. Finally, the overall productivity could be enhanced. [ clarification needed ] In contrast, plasmids used in biotechnology, such as pUC18, pBR322 and derived vectors, hardly ever contain toxin-antitoxin addiction systems, and therefore need to be kept under antibiotic pressure to avoid plasmid loss. Yeasts naturally harbour various plasmids. Notable among them are 2 μm plasmids—small circular plasmids often used for genetic engineering of yeast—and linear pGKL plasmids from Kluyveromyces lactis , that are responsible for killer phenotypes . 
[ 70 ] Other types of plasmids are often related to yeast cloning vectors that include: The mitochondria of many higher plants contain self-replicating , extra-chromosomal linear or circular DNA molecules which have been considered to be plasmids. These can range from 0.7 kb to 20 kb in size. The plasmids have been generally classified into two categories: circular and linear. [ 71 ] Circular plasmids have been isolated and found in many different plants, with those in Vicia faba and Chenopodium album being the most studied and whose mechanism of replication is known. The circular plasmids can replicate using the θ model of replication (as in Vicia faba ) and through rolling circle replication (as in C. album ). [ 72 ] Linear plasmids have been identified in some plant species such as Beta vulgaris , Brassica napus , Zea mays , etc. but are rarer than their circular counterparts. The function and origin of these plasmids remains largely unknown. It has been suggested that the circular plasmids share a common ancestor; some genes in the mitochondrial plasmids have counterparts in the nuclear DNA, suggesting inter-compartment exchange. The linear plasmids, meanwhile, share structural features such as invertrons with viral DNA and fungal plasmids, and, like fungal plasmids, they also have low GC content. These observations have led some to hypothesize that the linear plasmids have viral origins, or ended up in plant mitochondria through horizontal gene transfer from pathogenic fungi. [ 71 ] [ 73 ] Plasmids are often used to purify a specific sequence, since they can easily be purified away from the rest of the genome. For their use as vectors, and for molecular cloning , plasmids often need to be isolated. There are several methods to isolate plasmid DNA from bacteria, including plasmid extraction kits (from the miniprep to the maxiprep or bulkprep scale), alkaline lysis , enzymatic lysis, and mechanical lysis .
[ 43 ] The former can be used to quickly find out whether the plasmid is correct in any of several bacterial clones. The yield is a small amount of impure plasmid DNA, which is sufficient for analysis by restriction digest and for some cloning techniques. In the latter, much larger volumes of bacterial suspension are grown from which a maxi-prep can be performed. In essence, this is a scaled-up miniprep followed by additional purification. This results in relatively large amounts (several hundred micrograms) of very pure plasmid DNA. Many commercial kits have been created to perform plasmid extraction at various scales, purity, and levels of automation. Plasmid DNA may appear in one of five conformations, which (for a given size) run at different speeds in a gel during electrophoresis . The conformations are listed below in order of electrophoretic mobility (speed for a given applied voltage) from slowest to fastest: The rate of migration for small linear fragments is directly proportional to the voltage applied at low voltages. At higher voltages, larger fragments migrate at continuously increasing yet different rates. Thus, the resolution of a gel decreases with increased voltage. At a specified, low voltage, the migration rate of small linear DNA fragments is a function of their length. Large linear fragments (over 20 kb or so) migrate at a certain fixed rate regardless of length. This is because the molecules 'reptate', with the bulk of the molecule following the leading end through the gel matrix. Restriction digests are frequently used to analyse purified plasmids. These enzymes specifically break the DNA at certain short sequences. The resulting linear fragments form 'bands' after gel electrophoresis . It is possible to purify certain fragments by cutting the bands out of the gel and dissolving the gel to release the DNA fragments. Because of its tight conformation, supercoiled DNA migrates faster through a gel than linear or open-circular DNA.
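Predicting the band pattern of a restriction digest of a circular plasmid reduces to simple modular arithmetic: the fragment sizes are the gaps between successive cut positions around the circle. A minimal sketch (the plasmid lengths and cut coordinates below are invented examples):

```python
def digest_fragments(plasmid_len, cut_positions):
    """Fragment sizes from a complete digest of a circular plasmid.

    cut_positions are 0-based coordinates on the circle; one cut
    linearises the molecule, and n cuts yield n fragments whose
    sizes sum to the plasmid length.
    """
    if not cut_positions:
        return []  # uncut circle: no linear fragments
    cuts = sorted(set(cut_positions))
    n = len(cuts)
    # gap between each cut and the next, wrapping around the circle
    return [((cuts[(i + 1) % n] - cuts[i]) % plasmid_len) or plasmid_len
            for i in range(n)]

# Two cuts on a hypothetical 1,000 bp plasmid give two bands on a gel.
print(sorted(digest_fragments(1000, [100, 400])))  # [300, 700]
```

Comparing predicted sizes against the observed bands is how digests are used to verify a purified plasmid's identity.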
The use of plasmids as a technique in molecular biology is supported by bioinformatics software . These programs record the DNA sequence of plasmid vectors, help to predict cut sites of restriction enzymes , and to plan manipulations. Examples of software packages that handle plasmid maps are ApE, Clone Manager , GeneConstructionKit, Geneious, Genome Compiler , LabGenius, Lasergene, MacVector , pDraw32, Serial Cloner, UGENE , VectorFriends, Vector NTI , and WebDSV. These pieces of software help conduct entire experiments in silico before doing wet experiments. [ 74 ] Many plasmids have been created over the years and researchers have given out plasmids to plasmid databases such as the non-profit organisations Addgene and BCCM/GeneCorner . One can find and request plasmids from those databases for research. Researchers also often upload plasmid sequences to the NCBI database , from which sequences of specific plasmids can be retrieved. There have been multiple efforts to create curated and quality controlled databases from these uploaded sequences; an early example is by Orlek et al , [ 75 ] which limited itself to Enterobacteriaceae plasmids, while COMPASS also encompassed plasmids from other bacteria. More recently, PLSDB [ 76 ] was made as a more up to date curated database of NCBI plasmids, and as of 2024 contains over 72,000 entries. [ 77 ] A similar database is pATLAS, which additionally includes visual analytics tools to show relationships between plasmids. [ 78 ] The largest plasmid database made from publicly available data is IMG/PR, which not only contains full plasmid sequences retrieved from NCBI, but novel plasmid genomes found from metagenomes and metatranscriptomes. [ 79 ] Other datasets have been created by sequencing and computing plasmid genomes from pre-existing bacterial collections, e.g. the NORM collection [ 80 ] [ 81 ] and the Murray Collection. [ 82 ] [ 83 ]
https://en.wikipedia.org/wiki/Plasmid
Plasmid-mediated resistance is the transfer of antibiotic resistance genes which are carried on plasmids . [ 1 ] Plasmids possess mechanisms that ensure their independent replication as well as those that regulate their replication number and guarantee stable inheritance during cell division. By the conjugation process, they can stimulate lateral transfer between bacteria from various genera and kingdoms. [ 2 ] Numerous plasmids contain addiction-inducing systems that are typically based on toxin-antitoxin factors and capable of killing daughter cells that don't inherit the plasmid during cell division. [ 3 ] Plasmids often carry multiple antibiotic resistance genes, contributing to the spread of multidrug-resistance (MDR). [ 4 ] Antibiotic resistance mediated by MDR plasmids severely limits the treatment options for the infections caused by Gram-negative bacteria , especially family Enterobacteriaceae . [ 5 ] The global spread of MDR plasmids has been enhanced by selective pressure from antimicrobial medications used in medical facilities and when raising animals for food. [ 6 ] Resistance plasmids by definition carry one or more antibiotic resistance genes. [ 7 ] They are frequently accompanied by the genes encoding virulence determinants, [ 8 ] specific enzymes or resistance to toxic heavy metals . [ 9 ] Multiple resistance genes are commonly arranged in the resistance cassettes. [ 7 ] The antibiotic resistance genes found on the plasmids confer resistance to most of the antibiotic classes used nowadays, for example, beta-lactams , fluoroquinolones and aminoglycosides . [ 10 ] It is very common for the resistance genes or entire resistance cassettes to be re-arranged on the same plasmid or be moved to a different plasmid or chromosome by means of recombination systems. Examples of such systems include integrons , transposons , and IS CR -promoted gene mobilization. 
[ 7 ] Most resistance plasmids are conjugative, meaning that they encode all the components needed for the transfer of the plasmid to another bacterium, [ 11 ] machinery that mobilizable plasmids lack. Accordingly, mobilizable plasmids are smaller in size (usually < 10 kb), while conjugative plasmids are larger (usually > 30 kb) due to the considerable amount of DNA required to encode the mechanisms that allow for cell-to-cell conjugation. [ 7 ] R-factors are also called resistance factors or resistance plasmids. They are small, circular DNA elements that are self-replicating and contain antibiotic resistance genes. [ citation needed ] They were first found in Japan in 1959 when it was discovered that some Shigella strains had developed resistance to a number of antibiotics used to treat a dysentery epidemic. Shigella is a genus of Gram-negative, aerobic, non-spore-forming, non-motile, rod-shaped bacteria. [ citation needed ] Resistance genes are ones that give rise to proteins that modify the antibiotic or pump it out. They are different from mutations that give bacteria resistance to antibiotics by preventing the antibiotic from getting in or changing the shape of the target protein. [ 12 ] R-factors have been known to contain up to ten resistance genes. They can also spread easily as they contain genes for constructing pili, which allow them to transfer the R-factor to other bacteria. [ 13 ] R-factors have contributed to the growing antibiotic resistance crisis because they quickly spread resistance genes among bacteria. [ 14 ] The R factor by itself cannot be transmitted. [ citation needed ] The majority of the R-RTF (Resistance Transfer Factor) genes are found in the R-factor (resistance plasmid), which can be conceptualized as a circular piece of DNA with a length of 80 to 95 kb. [ citation needed ] This plasmid shares many genes with the F factor and is largely homologous to it.
[ 15 ] Additionally, it has a finO gene that inhibits the transfer operon's functionality. The size and number of drug resistance genes in each R-factor vary. For example, the RTF is bigger than the R determinant. IS 1 elements flank the RTF and R determinant on either side, joining them into a single unit. The IS 1 elements make it easier for R determinants to be transferred between different R-RTF unit types. [ citation needed ] Bacteria containing F-factors (said to be "F+") have the capability for horizontal gene transfer ; they can construct a sex pilus , which emerges from the donor bacterium and ensnares the recipient bacterium, draws it in, [ 16 ] and eventually triggers the formation of a mating bridge, merging the cytoplasms of two bacteria via a controlled pore. [ 17 ] This pore allows the transfer of genetic material, such as a plasmid . Conjugation allows two bacteria , not necessarily from the same species , to transfer genetic material one way. [ 18 ] Since many F+ bacteria contain R-factors, antibiotic resistance can be easily spread among a population of bacteria . [ 19 ] Also, R-factors can be taken up by "DNA pumps" in their membranes via transformation , [ 20 ] or less commonly through viral-mediated transduction [ 21 ] via bacteriophages; however, conjugation is the most common means of antibiotic resistance spread. They contain the gene called RTF (resistance transfer factor). Enterobacteriaceae is a family of Gram-negative, rod-shaped (bacillus) bacteria that includes the pathogenic bacteria most frequently found in the environment and in clinical cases; as a result, its members are significantly affected by the use of antibiotics in agriculture, the ecosystem, and the treatment of disease.
[ 22 ] In Enterobacteriaceae, 28 different plasmid types can be identified by PCR-based replicon typing (PBRT). The plasmids that have been frequently reported [IncF, IncI, IncA/C, IncL (previously designated IncL/M), IncN, and IncH] contain a broad variety of resistance genes. [ 23 ] Members of the family Enterobacteriaceae, for example, Escherichia coli or Klebsiella pneumoniae , pose the biggest threat regarding plasmid-mediated resistance in hospital- and community-acquired infections. [ 5 ] Beta-lactamases are antibiotic-hydrolyzing enzymes that typically cause resistance to beta-lactam antibiotics. These enzymes are prevalent in Streptomyces, and together with related enzymes discovered in pathogenic and non-pathogenic bacteria, they form the protein family known as the "beta-lactamase superfamily". [ 12 ] It is hypothesized that beta-lactamases also serve a double purpose, such as housekeeping and antibiotic resistance. [ 24 ] Both narrow spectrum beta-lactamases (e.g. penicillinases) and extended spectrum beta-lactamases (ESBL) are common for resistance plasmids in Enterobacteriaceae . Often multiple beta-lactamase genes are found on the same plasmid, hydrolyzing a wide spectrum of beta-lactam antibiotics. [ 5 ] ESBL enzymes can hydrolyze all beta-lactam antibiotics, including cephalosporins, except for the carbapenems. The first clinically observed ESBL enzymes were mutated versions of the narrow spectrum beta-lactamases, like TEM and SHV. Other ESBL enzymes originate outside of the family Enterobacteriaceae, but have been spreading as well. [ 5 ] In addition, since the plasmids that carry ESBL genes also commonly encode resistance determinants for many other antibiotics, ESBL strains are often resistant to many non-beta-lactam antibiotics as well, [ 25 ] leaving very few options for treatment. Carbapenemases represent a type of ESBL that is able to hydrolyze carbapenem antibiotics, which are considered the last-resort treatment for infections with ESBL-producing bacteria.
KPC, NDM-1, VIM and OXA-48 carbapenemases have been increasingly reported worldwide as causes of hospital-acquired infections . [ 5 ] Several studies have shown that fluoroquinolone resistance has increased worldwide, especially in Enterobacteriaceae members. QnrA was the first known plasmid-mediated gene associated with quinolone resistance. [ 26 ] Quinolone resistance genes are frequently located on the same plasmid as the ESBL genes. [ 27 ] The proteins known as QnrS, QnrB, QnrC, and QnrD are four others that are similar. Numerous variants have been found for qnrA, qnrS, and qnrB, and they are distinguished by sequential numbers. [ 28 ] The qnr genes can be discovered in integrons and transposons on MDR plasmids of various incompatibility groups, which could carry a number of resistance-related molecules, such as carbapenemases and ESBLs. [ 29 ] Examples of resistance mechanisms include different Qnr proteins, the aminoglycoside acetyltransferase aac(6')-Ib-cr, which is able to acetylate ciprofloxacin and norfloxacin , as well as the efflux transporters OqxAB and QepA. [ 5 ] Resistance to aminoglycosides in Gram-negative pathogens is primarily caused by enzymes that acetylate, adenylate, or phosphorylate the medication. [ 30 ] The genes that encode these enzymes are found on mobile elements, such as plasmids. [ 31 ] Aminoglycoside resistance genes are also commonly found together with ESBL genes. Resistance to aminoglycosides is conferred via numerous mechanisms, including aminoglycoside-modifying enzymes and 16S rRNA methyltransferases. [ 5 ] A study investigating the physiological effects of the pHK01 plasmid in the host E. coli J53 found that the plasmid reduced bacterial motility and conferred resistance to beta-lactams. pHK01 produced plasmid-encoded small RNAs and mediated expression of host sRNAs.
These sRNAs were antisense to genes involved in replication, conjugative transfer, and plasmid stabilisation: AS-repA3 (CopA), AS-traI, AS-finO, AS-traG, and AS-pc02. Over-expression of one of the plasmid-encoded antisense sRNAs, AS-traI, shortened the lag phase of host growth. [ 33 ]
https://en.wikipedia.org/wiki/Plasmid-mediated_resistance
A plasmid partition system is a mechanism that ensures the stable inheritance of plasmids during bacterial cell division. Each plasmid has its own independent replication system, which controls the number of copies of the plasmid in a cell. The higher the copy number, the more likely the two daughter cells will contain the plasmid. Generally, each molecule of plasmid diffuses randomly, so the probability of having a plasmid-less daughter cell is 2^(1−N), where N is the number of copies. For instance, if there are 2 copies of a plasmid in a cell, there is a 50% chance of having one plasmid-less daughter cell. However, high-copy number plasmids have a cost for the hosting cell. This metabolic burden is lower for low-copy plasmids, but those have a higher probability of plasmid loss after a few generations. To control vertical transmission of plasmids, in addition to controlled-replication systems, bacterial plasmids use different maintenance strategies, such as multimer resolution systems , post-segregational killing systems (addiction modules) , and partition systems. [ 1 ] Plasmid copies are paired around a centromere -like site and then separated into the two daughter cells. Partition systems involve three elements, organized in an auto-regulated operon : [ 2 ] The centromere-like DNA site is required in cis for plasmid stability. It often contains one or more inverted repeats which are recognized by multiple CBPs. This forms a nucleoprotein complex termed the partition complex. This complex recruits the motor protein, which is a nucleotide triphosphatase (NTPase). The NTPase uses energy from NTP binding and hydrolysis to directly or indirectly move and attach plasmids to a specific host location (e.g. opposite bacterial cell poles). The partition systems are divided into four types, based primarily on the type of NTPases: [ 3 ] [ 4 ] This system is also used by most bacteria for chromosome segregation .
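The 2^(1−N) loss probability quoted above follows from each plasmid copy segregating independently into one of the two daughters, and can be checked with a short calculation plus a Monte Carlo simulation (the simulation is an illustrative sketch, not part of the source):

```python
import random

def loss_probability(copies):
    """P(at least one daughter gets no plasmid) when `copies` molecules
    each go to either daughter with probability 1/2: one particular
    daughter is empty with probability (1/2)^N, and either daughter may
    be the empty one, giving 2 * (1/2)^N = 2^(1 - N)."""
    return 2 ** (1 - copies)

def simulate(copies, divisions=100_000, seed=1):
    """Fraction of simulated divisions producing a plasmid-free daughter."""
    rng = random.Random(seed)
    losses = 0
    for _ in range(divisions):
        to_first = sum(rng.random() < 0.5 for _ in range(copies))
        if to_first == 0 or to_first == copies:
            losses += 1
    return losses / divisions

print(loss_probability(2))  # 0.5, the 50% chance quoted for two copies
```

With higher copy numbers the loss probability shrinks geometrically, which is exactly the trade-off the text describes between metabolic burden and segregational stability.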
[ 3 ] Type I partition systems are composed of an ATPase which contains Walker motifs and a CBP which is structurally distinct in types Ia and Ib. ATPases and CBPs from type Ia are longer than the ones from type Ib, but both CBPs contain an arginine finger in their N-terminal part. [ 5 ] [ 1 ] [ 6 ] ParA proteins from different plasmids and bacterial species show 25 to 30% sequence identity to the protein ParA of the plasmid P1 . [ 7 ] The type I partition system uses a "diffusion-ratchet" mechanism. This mechanism works as follows: [ 8 ] There are likely to be differences in the details of type I mechanisms. [ 6 ] Type I partition has been mathematically modelled with variations in the mechanism described above. [ 16 ] [ 17 ] [ 18 ] [ 19 ] The CBP of this type consists of three domains: [ 6 ] The CBP of this type, also known as ParG, is composed of: [ 6 ] For this type, the parS site is called parC . This system is the best understood of the plasmid partition systems. [ 6 ] It is composed of an actin-like ATPase, ParM, and a CBP called ParR. The centromere-like site, parC , contains two sets of five 11 base pair direct repeats separated by the parMR promoter. The amino-acid sequence identity can go down to 15% between ParM and other actin-like ATPases. [ 7 ] [ 22 ] The mechanism of partition involved here is a pushing mechanism: [ 23 ] The filament of ParM is regulated by the polymerization allowed by the presence of the partition complex (ParR- parC ), and by the depolymerization controlled by the ATPase activity of ParM. The type III partition system is the most recently discovered partition system. It is composed of a tubulin-like GTPase termed TubZ, and a CBP termed TubR. Amino-acid sequence identity can go down to 21% for TubZ proteins. [ 7 ] The mechanism is similar to a treadmill mechanism: [ 24 ] The net result is transport of the partition complex to the cell pole. The partition system of the plasmid R388 has been found within the stb operon.
This operon is composed of three genes, stbA , stbB and stbC . [ 25 ] The StbA- stbDRs complex may be used to pair the plasmid with the host chromosome, indirectly using the bacterial partitioning system. StbA and StbB have opposite but connected effects related to conjugation. This system has been proposed to be the type IV partition system. [ 26 ] It is thought to be a derivative of the type I partition system, given the similar operon organization. This system represents the first evidence for a mechanistic interplay between plasmid segregation and conjugation processes. [ 26 ] pSK1 is a plasmid from Staphylococcus aureus . This plasmid has a partition system determined by a single gene, par , previously known as orf245 . This gene does not affect the plasmid copy number or the growth rate (excluding its implication in a post-segregational killing system). A centromere-like binding sequence is present upstream of the par gene, and is composed of seven direct repeats and one inverted repeat.
https://en.wikipedia.org/wiki/Plasmid_partition_system
A plasmid preparation is a method of DNA extraction and purification for plasmid DNA . It is an important step in many molecular biology experiments and is essential for the successful use of plasmids in research and biotechnology . [ 1 ] [ 2 ] Many methods have been developed to purify plasmid DNA from bacteria . [ 1 ] [ 3 ] During the purification procedure, the plasmid DNA is often separated from contaminating proteins and genomic DNA. These methods invariably involve three steps: growth of the bacterial culture, harvesting and lysis of the bacteria, and purification of the plasmid DNA. [ 4 ] Purification of plasmids is central to molecular cloning . A purified plasmid can be used for many standard applications, such as sequencing and transfections into cells. Plasmids are almost always purified from liquid bacteria cultures , usually E. coli , which have been transformed and isolated. [ 5 ] [ 6 ] Virtually all plasmid vectors in common use encode one or more antibiotic resistance genes as a selectable marker , for example a gene encoding ampicillin or kanamycin resistance, which allows bacteria that have been successfully transformed to multiply uninhibited. [ 7 ] [ 8 ] [ 9 ] Bacteria that have not taken up the plasmid vector are assumed to lack the resistance gene, and thus only colonies representing successful transformations are expected to grow. [ 5 ] [ 9 ] [ 10 ] Bacteria are grown under favourable conditions. There are several methods for cell lysis, including alkaline lysis, mechanical lysis, and enzymatic lysis. [ 11 ] [ 12 ] [ 13 ] [ 14 ] The most common method is alkaline lysis, which involves the use of a high concentration of a basic solution, such as sodium hydroxide , to lyse the bacterial cells. [ 15 ] [ 16 ] [ 17 ] When bacteria are lysed under alkaline conditions (pH 12.0–12.5) both chromosomal DNA and protein are denatured ; the plasmid DNA however, remains stable. 
[ 16 ] [ 17 ] Some scientists reduce the concentration of NaOH used to 0.1 M in order to reduce the occurrence of ssDNA. After the addition of an acetate-containing neutralization buffer to lower the pH to around 7, the large and less supercoiled chromosomal DNA and the proteins form large complexes and precipitate, but the small bacterial plasmids stay in solution. [ 17 ] [ 14 ] Mechanical lysis involves the use of physical force, such as grinding or sonication, to break down bacterial cells and release the plasmid DNA. There are several different mechanical lysis methods that can be used, including French press, bead-beating, and ultrasonication. [ 11 ] [ 12 ] [ 13 ] [ 14 ] Enzymatic lysis, also called lysozyme lysis, involves the use of enzymes to digest the cell wall and release the plasmid DNA. [ 11 ] The most commonly used enzyme for this purpose is lysozyme, which breaks down the peptidoglycan in the cell wall of Gram-positive bacteria. Lysozyme is usually added to the bacterial culture, followed by heating and/or shaking the culture to release the plasmid DNA. [ 11 ] [ 12 ] [ 13 ] [ 14 ] Plasmid preparation can be divided into five main categories based on the scale of the preparation: minipreparation, midipreparation, maxipreparation, megapreparation, and gigapreparation. The choice of which method to use will depend on the amount of plasmid DNA required, as well as the specific application for which it will be used. [ 18 ] [ 19 ] Kits are available from various manufacturers to purify plasmid DNA, and are named by the size of bacterial culture and the corresponding plasmid yield. In increasing order they are: miniprep, midiprep, maxiprep, megaprep, and gigaprep. The plasmid DNA yield will vary depending on the plasmid copy number, type and size, the bacterial strain, the growth conditions, and the kit. [ 2 ] Minipreparation of plasmid DNA is a rapid, small-scale isolation of plasmid DNA from bacteria.
[ 20 ] [ 21 ] Commonly used miniprep methods include alkaline lysis and spin-column-based kits; the latter are also typically based on the alkaline lysis method. The extracted plasmid DNA resulting from performing a miniprep is itself often called a "miniprep". Minipreps are used in the process of molecular cloning to analyze bacterial clones. A typical plasmid DNA yield of a miniprep is 5 to 50 μg depending on the cell strain. Minipreps of a large number of plasmids can also be done conveniently on filter paper by lysing the cells and eluting the plasmid onto the filter paper. [ 21 ] For a midiprep, the starting E. coli culture volume is 15–25 mL of Lysogeny broth (LB) and the expected DNA yield is 100–350 μg. For a maxiprep, the starting culture volume is 100–200 mL of LB and the expected DNA yield is 500–850 μg. For a megaprep, the starting culture volume is 500 mL – 2.5 L of LB and the expected DNA yield is 1.5–2.5 mg. For a gigaprep, the starting culture volume is 2.5–5 L of LB and the expected DNA yield is 7.5–10 mg. It is important to consider the downstream applications of the plasmid DNA when choosing a purification method. For example, if the plasmid is to be used for transfection or electroporation, a purification method that results in high purity and low endotoxin levels is desirable. Similarly, if the plasmid is to be used for sequencing or PCR, a purification method that results in high yield and minimal contaminants is desirable. [ 2 ] However, multiple methods of nucleic acid purification exist. [ 23 ] [ 24 ] [ 25 ] All work on the principle of generating conditions where either only the nucleic acid precipitates, or only the other biomolecules precipitate, allowing the nucleic acid to be separated. [ 15 ] [ 23 ] In high-throughput DNA extraction workflows, laboratory equipment such as a 96-well plate template can be utilized to efficiently process multiple samples in parallel.
These templates allow for the automation of extraction protocols, significantly increasing the throughput of plasmid DNA isolation while maintaining consistency across large sample sets. When used in combination with automated liquid handling systems, a 96-well plate template helps streamline the process of extracting plasmid DNA from bacterial cultures, ensuring uniformity and reducing manual errors during the purification steps. Ethanol precipitation is a widely used method for purifying and concentrating nucleic acids, including plasmid DNA. [ 26 ] The basic principle of this method is that nucleic acids are insoluble in ethanol or isopropanol but soluble in water. Ethanol therefore acts as an antisolvent for DNA, causing it to precipitate out of solution so that it can be collected by centrifugation. The soluble fraction is discarded to remove other biomolecules. [ 27 ] Spin column-based nucleic acid purification is a method of purifying DNA, RNA or plasmids from a sample using a spin column filter. [ 25 ] The method is based on the principle of selectively binding nucleic acids to a solid matrix in the spin column, while other contaminants, such as proteins and salts, are washed away. The conditions are then changed to elute the purified nucleic acid off the column using a suitable elution buffer. [ 25 ] The basic principle of phenol-chloroform extraction is that DNA and RNA are relatively insoluble in phenol and chloroform, while other cellular components are relatively soluble in these solvents. The addition of a phenol/chloroform mixture dissolves protein and lipid contaminants, leaving the nucleic acids in the aqueous phase. It also denatures proteins, such as DNases, which is especially important if the plasmids are to be used for enzyme digestion; otherwise, smearing may occur when the plasmid DNA is subsequently cut with restriction enzymes.
[ 24 ] In bead-based extraction, a mixture containing magnetic beads (commonly iron-oxide based) is added; the beads bind the plasmid DNA, which is then separated from unwanted compounds using a magnetic rod or stand. [ 25 ] The plasmid-bound beads are then released by removal of the magnetic field and the DNA is eluted in solution for downstream experiments such as transformation or restriction digestion. This form of miniprep can also be automated, which increases convenience while reducing manual error.
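The culture volumes and expected yields quoted above can be collected into a small lookup that picks a prep scale for a required amount of plasmid DNA. This is an illustrative sketch, not part of any kit's documentation: the yield ranges are the typical figures listed above, while the function name and the conservative "low-end covers the requirement" rule are this example's own choices.

```python
# Typical plasmid DNA yield ranges (in μg) per prep scale, as quoted above.
PREP_YIELDS_UG = {
    "miniprep": (5, 50),
    "midiprep": (100, 350),
    "maxiprep": (500, 850),
    "megaprep": (1500, 2500),
    "gigaprep": (7500, 10000),
}

def choose_prep_scale(required_ug: float) -> str:
    """Return the smallest scale whose low-end typical yield still covers
    the required amount of plasmid DNA (a conservative choice)."""
    for scale, (low, _high) in PREP_YIELDS_UG.items():
        if low >= required_ug:
            return scale
    raise ValueError("Amount exceeds a single gigaprep; pool multiple preps.")
```

For example, `choose_prep_scale(30)` returns `"midiprep"`: a lucky miniprep might reach 50 μg, but its low-end yield of 5 μg does not guarantee 30 μg.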
https://en.wikipedia.org/wiki/Plasmid_preparation
An environment's plasmidome refers to the plasmids present in it. [ 1 ] The term is a portmanteau of the two English words plasmid and kingdom. In biological research, plasmidome may refer to the actual plasmids found and isolated from a certain microorganism, either by culturing the isolated microorganism and investigating the plasmids it possesses, or by taking an environmental sample and performing a metagenomic survey using next-generation sequencing methods in order to reveal and characterize the plasmid genomes that belong to that environment. [ 2 ]
https://en.wikipedia.org/wiki/Plasmidome
Plasmin-α2-antiplasmin complex ( PAP ) is a 1:1 irreversibly formed inactive complex of the enzyme plasmin and its inhibitor α 2 -antiplasmin. [ 1 ] [ 2 ] [ 3 ] [ 4 ] It is a marker of the activity of the fibrinolytic system and a marker of net activation of fibrinolysis. [ 5 ] [ 6 ] PAP levels are increased with pregnancy [ 7 ] and by ethinylestradiol -containing combined birth control pills. [ 5 ] Conversely, levels of PAP do not appear to be affected by menopausal hormone therapy. [ 6 ] PAP levels have been reported to be elevated in men with prostate cancer. [ 8 ]
https://en.wikipedia.org/wiki/Plasmin-α2-antiplasmin_complex
Plasmogamy is a stage in the sexual reproduction of fungi, in which the protoplasm of two parent cells (usually from the mycelia) fuses without the fusion of nuclei, effectively bringing two haploid nuclei close together in the same cell. This stage is followed by karyogamy, where the two nuclei fuse and then undergo meiosis to produce spores. [ 1 ] [ 2 ] The dikaryotic state that comes after plasmogamy will often persist for many generations before the fungus undergoes karyogamy. In lower fungi, however, plasmogamy is usually immediately followed by karyogamy. [ 1 ] A comparative genomic study indicated the presence of the machinery for plasmogamy, karyogamy and meiosis in the Amoebozoa. [ 3 ]
https://en.wikipedia.org/wiki/Plasmogamy
A plasmoid is a coherent structure of plasma and magnetic fields. Plasmoids have been proposed to explain natural phenomena such as ball lightning, [ 1 ] [ 2 ] magnetic bubbles in the magnetosphere, [ 3 ] and objects in cometary tails, [ 4 ] in the solar wind, [ 5 ] [ 6 ] the solar atmosphere, [ 7 ] and in the heliospheric current sheet. Plasmoids produced in the laboratory include compact toroids (similar to a vortex ring in low-temperature fluid dynamics or hydrodynamics), field-reversed configurations, spheromaks, and filamentary variants in dense plasma focuses. The word plasmoid was coined in 1956 by Winston H. Bostick (1916–1991) to mean a "plasma-magnetic entity": [ 8 ] The plasma is emitted not as an amorphous blob, but in the form of a torus. We shall take the liberty of calling this toroidal structure a plasmoid, a word which means plasma-magnetic entity. The word plasmoid will be employed as a generic term for all plasma-magnetic entities. Bostick researched the basic traits, and many details, of plasmoids. Plasmoids appear to be plasma cylinders elongated in the direction of the magnetic field. Plasmoids possess a measurable magnetic moment, a measurable translational speed, a transverse electric field, and a measurable size. Plasmoids can interact with each other, seemingly by reflecting off one another. Their orbits can also be made to curve toward one another. Plasmoids can be made to spiral to a stop if projected into a gas at about 10⁻³ mm Hg pressure. Plasmoids can also be made to smash each other into fragments. There is some scant evidence to support the hypothesis that they undergo fission and possess spin. [ 8 ] A plasmoid has an internal pressure stemming from both the gas pressure of the plasma and the magnetic pressure of the field. To maintain an approximately static plasmoid radius, this pressure must be balanced by an external confining pressure. In a field-free vacuum, a plasmoid expands and dissipates rapidly.
Plasmoids have been formed in discharges with local magnetic field strengths on the order of 16,000 tesla. [ 9 ] Bostick went on to apply his theory of plasmoids to astrophysical phenomena. His 1958 paper [ 10 ] applied plasma similarity transformations to pairs of plasmoids fired from a plasma gun (dense plasma focus device) that interact in such a way as to simulate an early model of galaxy formation. [ 11 ] [ 12 ]
https://en.wikipedia.org/wiki/Plasmoid
In chemistry , plasmonic catalysis is a type of catalysis that uses plasmons to increase the rate of a chemical reaction. [ 1 ] A plasmonic catalyst is made up of a metal nanoparticle surface (usually gold, silver, or a combination of the two) which generates localized surface plasmon resonances (LSPRs) when excited by light. [ 2 ] These plasmon oscillations create an electron-rich region near the surface of the nanoparticle, which can be used to excite the electrons of nearby molecules . [ 3 ] Similar to photocatalysts , plasmonic catalysts can transfer their excitation energy to reactant molecules through resonance energy transfer (RET). [ 4 ] Unlike photocatalysts, plasmonic catalysts can also excite reactant molecules by the release of hot carrier electrons which have a high enough energy to completely dissociate from the metal surface. [ 5 ] The energy of these hot carrier electrons can be altered by changing the wavelength of light striking the surface and the size of the nanoparticles present, which allows the hot electrons to take on the excitation state needed to catalyze multiple different reactions. Although the field of plasmonic catalysis is still in its infancy, [ 6 ] there are clear advantages to utilizing a plasmon-active surface over traditional photocatalysts. Their ability to utilize energy from near-infrared , visible , and ultraviolet light gives plasmon surfaces higher light-capturing efficiency than photocatalysts, which can only utilize ultraviolet light, and the larger possible energy range of the electromagnetic field and emitted electrons make the resulting catalytic effects both broadly applicable and highly tunable. [ 7 ] Broadly speaking, plasmonic catalysis increases the reaction rate through two major pathways. The first of these is through the generation of an electromagnetic field during plasmon oscillations. 
[ 8 ] This field lowers the activation energy of the reaction through excitation of the reactants' electrons by resonance energy transfer. It can also provide localized transition-state stabilization, further increasing the rate of reaction. [ 9 ] The second pathway is through the generation of hot carrier electron/ phonon pairs. When a plasmon is generated, some electrons may have the energy to break completely free of the nanoparticle's electron shells. These highly excited electrons can then excite reactant electrons in the highest occupied molecular orbital or fill the lowest unoccupied molecular orbital, raising the energy of the molecule and allowing for a lower-energy transition state. [ 3 ] In most cases, these hot electrons do not find a reactant molecule to excite and instead dissipate their energy as phonons, returning to the ground-state energy. [ 10 ] The excess energy from the process is released as thermal energy, creating a localized temperature increase which can also increase the rate of reaction. [ 11 ] The photocatalytic electrolysis of water has been shown to be up to 66 times more efficient when using a gold nanoparticle surface. [ 12 ] The rate of demethylation of methylene blue by a titanium dioxide photocatalyst has been increased sevenfold in the presence of silver nanoparticles. [ 13 ] The plasmonically catalyzed oxidation of several common gases, including carbon monoxide, ammonia, and oxygen, can occur at far lower temperatures than are normally required, due to the strong catalytic effects of plasmonic surfaces when excited by visible light. [ 14 ] Recently, hybrid plasmonic nanomaterials have started being explored for organic synthesis [ 15 ] or the production of solar fuels. [ 16 ]
https://en.wikipedia.org/wiki/Plasmonic_catalysis
In nano-optics, a plasmonic lens generally refers to a lens for surface plasmon polaritons (SPPs), i.e. a device that redirects SPPs to converge towards a single focal point. Because SPPs can have very small wavelengths, they can converge into a very small and very intense spot, much smaller than the free-space wavelength and the diffraction limit. [ 1 ] [ 2 ] A simple example of a plasmonic lens is a series of concentric rings on a metal film. Any light that hits the film from free space at a 90-degree angle to the surface (i.e. along the normal) gets coupled into an SPP (this part works like a diffraction grating coupler), and that SPP heads towards the center of the circles, which is the focal point. [ 1 ] [ 2 ] Another example is a tapered "dimple". [ 3 ] In 2007, a novel plasmonic lens and waveguide was demonstrated that modulates light with a mesoscale dielectric structure on a metallic film with arrayed nano-slits, which have constant depth but varying widths. [ 4 ] The slits transport electromagnetic energy in the form of SPPs in nanometer-sized waveguides and provide the desired phase adjustments for manipulating the beam of light. The scientists claim that it is an improvement over other subwavelength imaging techniques, such as "superlenses", where the object and image are confined to the near field. [ 5 ] These devices have been suggested for various applications that take advantage of the small size and high intensity of the SPPs at the focal point. These include photolithography, [ 2 ] heat-assisted magnetic recording, microscopy, biophotonics, biological molecule sensors, and solar cells, as well as other applications. [ citation needed ] The term "plasmonic lens" is also sometimes used to describe something different: any free-space lens (i.e., a lens that focuses free-space light, rather than SPPs) that has something to do with plasmonics. [ 6 ]
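The claim that SPPs can have a smaller wavelength than free-space light follows from the standard SPP dispersion relation for a flat metal-dielectric interface, k_spp = k0·sqrt(ε_m·ε_d/(ε_m + ε_d)). A minimal sketch, assuming an approximate literature permittivity for silver at 633 nm (the function name and the example values are this sketch's own):

```python
import numpy as np

def spp_wavelength(lambda0, eps_metal, eps_dielectric):
    """Wavelength of a surface plasmon polariton at a flat metal-dielectric
    interface, from the standard dispersion relation
    k_spp = k0 * sqrt(eps_m * eps_d / (eps_m + eps_d))."""
    n_eff = np.sqrt(complex(eps_metal) * eps_dielectric
                    / (complex(eps_metal) + eps_dielectric))
    return lambda0 / np.real(n_eff)

# Silver/air at 633 nm (eps_metal ~ -18 + 0.5j, an approximate assumed value):
# the SPP wavelength comes out only slightly below 633 nm.
print(spp_wavelength(633e-9, -18 + 0.5j, 1.0))

# Near the surface-plasmon resonance (eps_metal -> -eps_dielectric) the
# effective index grows large and the SPP wavelength shrinks far below lambda0.
print(spp_wavelength(633e-9, -1.1, 1.0))
```

The second case illustrates why deep subwavelength focusing is possible in principle: as the metal permittivity approaches the negative of the dielectric's, the SPP wavelength can become arbitrarily small (losses limit this in practice).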
https://en.wikipedia.org/wiki/Plasmonic_lens
A plasmonic metamaterial is a metamaterial that uses surface plasmons to achieve optical properties not seen in nature. Plasmons are produced from the interaction of light with metal-dielectric materials. Under specific conditions, the incident light couples with the surface plasmons to create self-sustaining, propagating electromagnetic waves known as surface plasmon polaritons (SPPs). Once launched, the SPPs ripple along the metal-dielectric interface. Compared with the incident light, the SPPs can be much shorter in wavelength. [ 1 ] The properties stem from the unique structure of the metal-dielectric composites, with features smaller than the wavelength of light separated by subwavelength distances. Light hitting such a metamaterial is transformed into surface plasmon polaritons, which are shorter in wavelength than the incident light. Plasmonic materials are metals or metal-like [ 2 ] materials that exhibit negative real permittivity. The most common plasmonic materials are gold and silver. However, many other materials show metal-like optical properties in specific wavelength ranges. [ 3 ] Various research groups are experimenting with different approaches to make plasmonic materials that exhibit lower losses and tunable optical properties. Plasmonic metamaterials are realizations of materials first proposed by Victor Veselago, a Russian theoretical physicist, in 1967. Veselago theorized that these materials, also known as left-handed or negative-index materials, would exhibit optical properties opposite to those of glass or air. In negative-index materials, energy is transported in a direction opposite to that of the propagating wavefronts, rather than paralleling them, as is the case in positive-index materials. [ 4 ] [ 5 ] Normally, light traveling from, say, air into water bends toward the normal (a line perpendicular to the surface) upon entering the water, passing through to the other side of the normal. In contrast, light reaching a negative-index material through air would not cross the normal.
Rather, it would bend the opposite way. Negative refraction was first reported for microwave and infrared frequencies. A negative refractive index in the optical range was first demonstrated in 2005 by Shalaev et al. (at the telecom wavelength λ = 1.5 μm) [ 6 ] and by Brueck et al. (at λ = 2 μm) at nearly the same time. [ 7 ] In 2007, a collaboration between the California Institute of Technology and NIST reported narrow-band, negative refraction of visible light in two dimensions. [ 4 ] [ 5 ] To create this response, incident light couples with the undulating, gas-like charges (plasmons) normally present on the surface of metals. This photon-plasmon interaction results in SPPs that generate intense, localized optical fields. The waves are confined to the interface between metal and insulator. This narrow channel serves as a transformative guide that, in effect, traps and compresses the wavelength of incoming light to a fraction of its original value. [ 5 ] Nanomechanical systems incorporating metamaterials exhibit negative radiation pressure. [ 8 ] Light falling on conventional materials, with a positive index of refraction, exerts a positive pressure, meaning that it can push an object away from the light source. In contrast, illuminating negative-index metamaterials should generate a negative pressure that pulls an object toward the light. [ 8 ] Computer simulations predict plasmonic metamaterials with a negative index in three dimensions. Potential fabrication methods include multilayer thin-film deposition, focused ion beam milling and self-assembly. [ 8 ] PMMs can be made with a gradient index (a material whose refractive index varies progressively across the length or area of the material). One such material was made by depositing a thermoplastic known as PMMA on a gold surface via electron beam lithography.
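The "bends the opposite way" behavior can be made concrete with Snell's law: keeping n1·sin(θi) = n2·sin(θt) and inserting a negative n2 flips the sign of the transmitted angle. A short illustrative sketch (the function name and the |n| = 1.33 hypothetical negative-index medium are this example's own):

```python
import math

def refraction_angle_deg(theta_i_deg, n1, n2):
    """Snell's law, n1*sin(theta_i) = n2*sin(theta_t). For a negative-index
    medium (n2 < 0) the transmitted angle comes out negative, i.e. the ray
    stays on the same side of the normal as the incident ray."""
    s = n1 * math.sin(math.radians(theta_i_deg)) / n2
    return math.degrees(math.asin(s))

# Ordinary refraction, air into water (n ~ 1.33): positive transmitted angle.
print(refraction_angle_deg(30.0, 1.0, 1.33))    # ~ +22.1 degrees

# Hypothetical negative-index medium with |n| = 1.33: same magnitude,
# opposite sign.
print(refraction_angle_deg(30.0, 1.0, -1.33))   # ~ -22.1 degrees
```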
Hyperbolic metamaterials behave as a metal for light passing through them in one direction and as a dielectric for light passing in the perpendicular direction, a property called extreme anisotropy. The material's dispersion relation forms a hyperboloid. The associated wavelength can in principle be infinitely small. [ 9 ] Recently, hyperbolic metasurfaces in the visible region have been demonstrated with silver or gold nanostructures made by lithographic techniques. [ 10 ] [ 11 ] The reported hyperbolic devices showed multiple functions for sensing and imaging, e.g., diffraction-free propagation, negative refraction and enhanced plasmon resonance effects, enabled by their unique optical properties. [ 12 ] These specific properties are also highly desirable for fabricating integrated optical meta-circuits for quantum information applications. The first metamaterials created exhibited anisotropy in their effects on plasmons, i.e., they acted only in one direction. More recently, researchers used a novel self-folding technique to create a three-dimensional array of split-ring resonators that exhibits isotropy when rotated in any direction up to an incident angle of 40 degrees. Exposing strips of nickel and gold deposited on a polymer/silicon substrate to air allowed mechanical stresses to curl the strips into rings, forming the resonators. By arranging the strips at different angles to each other, 4-fold symmetry was achieved, which allowed the resonators to produce effects in multiple directions. [ 13 ] [ 14 ] Negative refraction for visible light was first produced in a sandwich-like construction with thin layers. An insulating sheet of silicon nitride was covered by a film of silver and underlain by another of gold. The critical dimension is the thickness of the layers, which summed to a fraction of the wavelength of blue and green light. By incorporating this metamaterial into integrated optics on an IC chip, negative refraction was demonstrated over blue and green frequencies.
The collective result is a relatively significant response to light. [ 4 ] [ 5 ] Graphene also accommodates surface plasmons, [ 15 ] observed via near-field infrared optical microscopy techniques [ 16 ] [ 17 ] and infrared spectroscopy. [ 18 ] Potential applications of graphene plasmonics involve terahertz to mid-infrared frequencies, in devices such as optical modulators, photodetectors and biosensors. [ 19 ] Titanium nitride (a metal) and aluminum scandium nitride (a dielectric) have compatible crystal structures and can form a superlattice, a crystal that combines two (or more) materials, yielding a hyperbolic metamaterial. The material is compatible with existing CMOS technology (unlike traditional gold and silver), mechanically strong and thermally stable at higher temperatures. The material exhibits higher photonic densities of states than Au or Ag. [ 20 ] The material is an efficient light absorber. [ 21 ] The material was created using epitaxy inside a vacuum chamber with a technique known as magnetron sputtering. The material featured ultra-thin and ultra-smooth layers with sharp interfaces. [ 21 ] Possible applications include a "planar hyperlens" that could make optical microscopes able to see objects as small as DNA, advanced sensors, more efficient solar collectors, nano-resonators, quantum computing and diffraction-free focusing and imaging. [ 21 ] The material works across a broad spectrum from near-infrared to visible light. Near-infrared is essential for telecommunications and optical communications, and visible light is important for sensors, microscopes and efficient solid-state light sources. [ 21 ] One potential application is microscopy beyond the diffraction limit. [ 4 ] Gradient-index plasmonics were used to produce Luneburg and Eaton lenses that interact with surface plasmon polaritons rather than photons.
A theorized superlens could exceed the diffraction limit that prevents standard (positive-index) lenses from resolving objects smaller than one-half of the wavelength of visible light. Such a superlens would capture spatial information that is beyond the view of conventional optical microscopes. Several approaches to building such a microscope have been proposed. Applications in the subwavelength domain could include optical switches, modulators, photodetectors and directional light emitters. [ 22 ] Other proof-of-concept applications under review involve high-sensitivity biological and chemical sensing. They may enable the development of optical sensors that exploit the confinement of surface plasmons within a certain type of Fabry-Perot nano-resonator. This tailored confinement allows efficient detection of specific bindings of target chemical or biological analytes using the spatial overlap between the optical resonator mode and the analyte ligands bound to the resonator cavity sidewalls. Structures are optimized using finite-difference time-domain electromagnetic simulations, fabricated using a combination of electron beam lithography and electroplating, and tested using both near-field and far-field optical microscopy and spectroscopy. [ 4 ] Optical computing replaces electronic signals with light-processing devices. [ 23 ] In 2014, researchers announced a 200-nanometer, terahertz-speed optical switch. The switch is made of a metamaterial consisting of nanoscale particles of vanadium dioxide (VO 2 ), a crystal that switches between an opaque, metallic phase and a transparent, semiconducting phase. The nanoparticles are deposited on a glass substrate and overlain by even smaller gold nanoparticles [ 24 ] that act as a plasmonic photocathode. [ 25 ] Femtosecond laser pulses free electrons in the gold particles, which jump into the VO 2 and cause a subpicosecond phase change.
[ 24 ] The device is compatible with current integrated circuit technology, silicon-based chips and high-K dielectric materials. It operates in the visible and near-infrared region of the spectrum. It consumes only 100 femtojoules per bit per operation, allowing the switches to be packed tightly. [ 24 ] Gold-group metals (Au, Ag and Cu) have been used as direct active materials in photovoltaics and solar cells. The materials act simultaneously as electron [ 26 ] and hole donor, [ 27 ] and thus can be sandwiched between electron and hole transport layers to make a photovoltaic cell. At present, these photovoltaic cells can power smart sensors for the Internet of Things (IoT) platform. [ 28 ]
https://en.wikipedia.org/wiki/Plasmonic_metamaterial
Plasmonic nanoparticles are particles whose electron density can couple with electromagnetic radiation of wavelengths far larger than the particle, due to the nature of the dielectric-metal interface between the medium and the particles; in a pure metal, by contrast, there is a maximum wavelength that can be effectively coupled, set by the size of the material. [ 2 ] What differentiates these particles from normal surface plasmons is that plasmonic nanoparticles also exhibit interesting scattering, absorbance, and coupling properties based on their geometries and relative positions. [ 3 ] [ 4 ] These unique properties have made them a focus of research in many applications, including solar cells, spectroscopy, signal enhancement for imaging, and cancer treatment. [ 5 ] [ 6 ] Their high sensitivity also identifies them as good candidates for designing mechano-optical instrumentation. [ 7 ] Plasmons are the oscillations of free electrons that are the consequence of the formation of a dipole in the material due to electromagnetic waves. The electrons migrate in the material to restore its initial state; however, the light waves oscillate, leading to a constant shift in the dipole that forces the electrons to oscillate at the same frequency as the light. This coupling only occurs when the frequency of the light is equal to or less than the plasma frequency, and is greatest at the plasma frequency, which is therefore called the resonant frequency. The scattering and absorbance cross-sections describe the intensity of a given frequency to be scattered or absorbed. Many fabrication processes or chemical synthesis methods exist for the preparation of such nanoparticles, depending on the desired size and geometry. The nanoparticles can form clusters (the so-called "plasmonic molecules") and interact with each other to form cluster states.
The symmetry of the nanoparticles and the distribution of the electrons within them can affect a type of bonding or antibonding character between the nanoparticles, similarly to molecular orbitals. Since light couples with the electrons, polarized light can be used to control the distribution of the electrons and alter the Mulliken term symbol for the irreducible representation. Changing the geometry of the nanoparticles can be used to manipulate the optical activity and properties of the system, but so can the polarized light, by lowering the symmetry of the conductive electrons inside the particles and changing the dipole moment of the cluster. These clusters can be used to manipulate light on the nanoscale. [ 8 ] The quasistatic equations that describe the scattering and absorbance cross-sections for very small spherical nanoparticles are:

$$\sigma_{\rm scatt} = \frac{8\pi}{3}\,k^{4}R^{6}\left|\frac{\varepsilon_{\rm particle}-\varepsilon_{\rm medium}}{\varepsilon_{\rm particle}+2\varepsilon_{\rm medium}}\right|^{2}$$

$$\sigma_{\rm abs} = 4\pi k R^{3}\,\operatorname{Im}\left[\frac{\varepsilon_{\rm particle}-\varepsilon_{\rm medium}}{\varepsilon_{\rm particle}+2\varepsilon_{\rm medium}}\right]$$

where $k$ is the wavenumber of the electric field, $R$ is the radius of the particle, $\varepsilon_{\rm medium}$ is the relative permittivity of the dielectric medium and $\varepsilon_{\rm particle}$ is the relative permittivity of the nanoparticle, given by the Drude model for free electrons:

$$\varepsilon_{\rm particle} = 1-\frac{\omega_{\rm p}^{2}}{\omega^{2}+\mathrm{i}\,\omega\gamma}$$

where $\omega_{\rm p}$ is the plasma frequency, $\gamma$ is the relaxation frequency of the charge carriers, and $\omega$ is the frequency of the electromagnetic radiation. This equation is the result of solving the differential equation for a harmonic oscillator with a driving force proportional to the electric field that the particle is subjected to. For a more thorough derivation, see surface plasmon. It follows that the resonance condition is reached when the denominator is near zero, such that

$$\varepsilon_{\rm particle}+2\varepsilon_{\rm medium}\approx 0$$

When this condition is fulfilled, the cross-sections are at their maximum. These cross-sections are for single, spherical particles. The equations change when particles are non-spherical, or are coupled to one or more other nanoparticles, such as when their geometry changes. This principle is important for several applications. A rigorous electrodynamic analysis of plasma oscillations in a spherical metal nanoparticle of finite size was performed in [ 9 ]. Due to their ability to scatter light back into the photovoltaic structure and their low absorption, plasmonic nanoparticles are under investigation as a method for increasing solar cell efficiency. [ 10 ] [ 5 ] Forcing more light to be absorbed by the dielectric increases efficiency. [ 11 ] Plasmons can be excited by optical radiation and induce an electric current from hot electrons in materials fabricated from gold particles and light-sensitive molecules of porphin, of precise sizes and specific patterns. The wavelength to which the plasmon responds is a function of the size and spacing of the particles. The material is fabricated using ferroelectric nanolithography.
Compared to conventional photoexcitation, the material produced three to ten times the current. [ 12 ] [ 13 ] In recent years plasmonic nanoparticles have also been explored as a method for high-resolution spectroscopy . One group utilized 40 nm gold nanoparticles that had been functionalized to bind specifically to epidermal growth factor receptors in order to determine the density of those receptors on a cell. This technique relies on the fact that the effective geometry of the particles changes when they come within one particle diameter (40 nm) of each other. Within that range, quantitative information on the EGFR density in the cell membrane can be retrieved from the shift in resonant frequency of the plasmonic particles. [ 14 ] Plasmonic nanoparticles have shown broad potential for innovative cancer treatments. [ 15 ] Despite this, no plasmonic nanomaterials are yet employed in clinical practice, because of the associated persistence of metal in the body. [ 15 ] Preliminary research indicates that some nanomaterials, among them gold nanorods [ 16 ] and ultrasmall-in-nano architectures, [ 17 ] can convert IR laser light into localized heat, including in combination with other established cancer treatments. [ 18 ]
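The quasistatic cross-sections and the Drude permittivity given above are straightforward to evaluate numerically. The sketch below (Python with NumPy) computes both cross-sections for a small sphere and checks that the absorption peak lands near the resonance condition, which for a Drude particle in vacuum gives a resonance frequency of roughly the plasma frequency divided by the square root of three. The silver-like values of the plasma and relaxation frequencies and the 10 nm radius are illustrative assumptions, not values from the text.

```python
import numpy as np

def drude_eps(omega, omega_p, gamma):
    """Drude model: eps = 1 - omega_p^2 / (omega^2 + i*omega*gamma)."""
    return 1.0 - omega_p**2 / (omega**2 + 1j * omega * gamma)

def cross_sections(omega, R, omega_p, gamma, eps_medium=1.0, c=3.0e8):
    """Quasistatic scattering and absorption cross-sections of a small sphere."""
    k = np.sqrt(eps_medium) * omega / c          # wavenumber in the medium
    eps_p = drude_eps(omega, omega_p, gamma)
    frac = (eps_p - eps_medium) / (eps_p + 2 * eps_medium)
    sigma_scatt = (8 * np.pi / 3) * k**4 * R**6 * np.abs(frac) ** 2
    sigma_abs = 4 * np.pi * k * R**3 * np.imag(frac)
    return sigma_scatt, sigma_abs

# Assumed, roughly silver-like Drude parameters (rad/s) and a 10 nm radius.
omega_p, gamma = 1.37e16, 1.0e14
omega_res = omega_p / np.sqrt(3.0)   # resonance estimate for eps_medium = 1

omegas = np.linspace(0.5, 1.5, 2001) * omega_res
s_sca, s_abs = cross_sections(omegas, R=10e-9, omega_p=omega_p, gamma=gamma)
peak = omegas[np.argmax(s_abs)]
print(f"absorption peak at {peak / omega_res:.3f} x the analytic estimate")
```

Because the relaxation frequency here is small compared with the resonance frequency, the computed peak sits within a few percent of the analytic estimate.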
https://en.wikipedia.org/wiki/Plasmonic_nanoparticles
Plasmonics or nanoplasmonics [ 1 ] refers to the generation, detection, and manipulation of signals at optical frequencies along metal-dielectric interfaces at the nanometer scale. [ 2 ] Inspired by photonics , plasmonics follows the trend of miniaturizing optical devices (see also nanophotonics ), and finds applications in sensing, microscopy, optical communications, and bio-photonics. [ 3 ] [ 4 ] Plasmonics typically utilizes surface plasmon polaritons (SPPs) , [ 2 ] which are coherent electron oscillations travelling together with an electromagnetic wave along the interface between a dielectric (e.g. glass, air) and a metal (e.g. silver, gold). The SPP modes are strongly confined to their supporting interface, giving rise to strong light-matter interactions. In particular, the electron gas in the metal oscillates with the electromagnetic wave. Because the moving electrons are scattered, ohmic losses in plasmonic signals are generally large, which limits the signal transfer distances to the sub-centimeter range, [ 5 ] unless hybrid optoplasmonic light-guiding networks, [ 6 ] [ 7 ] [ 8 ] or plasmon gain amplification, [ 9 ] are used. Besides SPPs, localized surface plasmon modes supported by metal nanoparticles are also referred to as plasmonic modes. Both modes are characterized by large momentum values, which enable strong resonant enhancement of the local density of photon states, [ 10 ] and can be utilized to enhance weak optical effects in opto-electronic devices. [ 4 ] An effort is currently being made to integrate plasmonics with electric circuits , or in an electric circuit analog, to combine the size efficiency of electronics with the data capacity of photonic integrated circuits (PICs) . [ 11 ] While the gate lengths of CMOS nodes used for electrical circuits are ever decreasing, the size of conventional PICs is limited by diffraction , thus constituting a barrier to further integration.
Plasmonics could bridge this size mismatch between electronic and photonic components. At the same time, photonics and plasmonics can complement each other, since, under the right conditions, optical signals can be converted to SPPs and vice versa. One of the biggest issues in making plasmonic circuits a feasible reality is the short propagation length of surface plasmons. Typically, surface plasmons travel distances only on the scale of millimeters before damping diminishes the signal. [ 12 ] This is largely due to ohmic losses, which become increasingly important the deeper the electric field penetrates into the metal. Researchers are attempting to reduce losses in surface plasmon propagation by examining a variety of materials, geometries, and operating frequencies, and their respective properties. [ 13 ] Promising new low-loss plasmonic materials include metal oxides and nitrides [ 14 ] as well as graphene . [ 15 ] Key to greater design freedom are improved fabrication techniques, which can further reduce losses through reduced surface roughness. Another foreseeable barrier plasmonic circuits will have to overcome is heat; heat in a plasmonic circuit may or may not exceed that generated by complex electronic circuits. [ 12 ] It has recently been proposed to reduce heating in plasmonic networks by designing them to support trapped optical vortices, which circulate light power flow through the inter-particle gaps, thus reducing absorption and ohmic heating. [ 16 ] [ 17 ] [ 18 ] In addition to heat, it is also difficult to change the direction of a plasmonic signal in a circuit without significantly reducing its amplitude and propagation length. [ 11 ] One clever solution to the issue of bending the direction of propagation is the use of Bragg mirrors to angle the signal in a particular direction, or even to function as splitters of the signal.
[ 19 ] Finally, emerging applications of plasmonics for thermal emission manipulation [ 20 ] and heat-assisted magnetic recording [ 21 ] leverage ohmic losses in metals to obtain devices with new, enhanced functionalities. Optimal plasmonic waveguide designs strive to maximize both the confinement and the propagation length of surface plasmons within a plasmonic circuit. Surface plasmon polaritons are characterized by a complex wave vector , with components parallel and perpendicular to the metal-dielectric interface. The imaginary part of the wave vector component is inversely proportional to the SPP propagation length, while its real part defines the SPP confinement. [ 22 ] The SPP dispersion characteristics depend on the dielectric constants of the materials comprising the waveguide. The propagation length and the confinement of the surface plasmon polariton wave are inversely related: stronger confinement of the mode typically results in a shorter propagation length. The construction of a practical and usable surface plasmon circuit therefore hinges on a compromise between propagation and confinement; designs that improve both at once mitigate the drawbacks of favoring one over the other. Multiple types of waveguides have been created in pursuit of a plasmonic circuit with strong confinement and sufficient propagation length. Some of the most common types include insulator-metal-insulator (IMI), [ 23 ] metal-insulator-metal (MIM), [ 24 ] dielectric-loaded surface plasmon polariton (DLSPP), [ 25 ] [ 26 ] gap plasmon polariton (GPP), [ 27 ] channel plasmon polariton (CPP), [ 28 ] wedge surface plasmon polariton (wedge), [ 29 ] and hybrid opto-plasmonic waveguides and networks. [ 30 ] [ 7 ] Dissipation losses accompanying SPP propagation in metals can be mitigated by gain amplification or by combining them into hybrid networks with photonic elements such as fibers and coupled-resonator waveguides.
[ 30 ] [ 7 ] This design can result in the previously mentioned hybrid plasmonic waveguide, which exhibits subwavelength modes on a scale of one-tenth of the diffraction limit of light, along with an acceptable propagation length. [ 31 ] [ 32 ] [ 33 ] [ 34 ] The input and output ports of a plasmonic circuit receive and send optical signals, respectively. To do this, coupling and decoupling of the optical signal to the surface plasmon is necessary. [ 35 ] The dispersion relation of the surface plasmon lies entirely below the dispersion relation of light, which means that for coupling to occur, additional momentum must be provided by the input coupler to achieve momentum conservation between the incoming light and the surface plasmon polariton waves launched in the plasmonic circuit. [ 11 ] There are several solutions to this, including the use of dielectric prisms, gratings, or localized scattering elements on the surface of the metal to help induce coupling by matching the momenta of the incident light and the surface plasmons. [ 36 ] After a surface plasmon has been created and sent to a destination, it can be converted into an electrical signal. This can be achieved using a photodetector in the metal plane, or by decoupling the surface plasmon into freely propagating light that can then be converted into an electrical signal. [ 11 ] Alternatively, the signal can be out-coupled into a propagating mode of an optical fiber or waveguide. [ citation needed ] The progress made in surface plasmons over the last 50 years has led to the development of various types of devices, both active and passive. A few of the most prominent areas of active devices are optical, thermo-optical, and electro-optical. All-optical devices have shown the capacity to become a viable source for information processing, communication, and data storage when used as modulators.
In one instance, the interaction of two light beams of different wavelengths was demonstrated by converting them into co-propagating surface plasmons via cadmium selenide quantum dots . [ 37 ] Electro-optical devices have combined aspects of both optical and electrical devices in the form of a modulator as well. Specifically, electro-optic modulators have been designed using evanescently coupled resonant metal gratings and nanowires that rely on long-range surface plasmons (LRSP). [ 38 ] Likewise, thermo-optic devices, which contain a dielectric material whose refractive index changes with variation in temperature, have also been used as interferometric modulators of SPP signals in addition to directional-coupler switches. Some thermo-optic devices have been shown to utilize LRSP waveguiding along gold stripes that are embedded in a polymer and heated by electrical signals as a means for modulation and directional-coupler switches. [ 39 ] Another potential field lies in the use of spasers in areas such as nanoscale lithography, probing, and microscopy. [ 40 ] Although active components play an important role in the use of plasmonic circuitry, passive circuits are just as integral and, surprisingly, not trivial to make. Many passive elements such as prisms , lenses , and beam splitters can be implemented in a plasmonic circuit; however, fabrication at the nanoscale has proven difficult and has adverse effects. Significant losses can occur due to decoupling in situations where a refractive element with a different refractive index is used. However, some steps have been taken to minimize losses and maximize compactness of the photonic components. One such step relies on the use of Bragg reflectors , or mirrors composed of a succession of planes, to steer a surface plasmon beam. When optimized, Bragg reflectors can reflect nearly 100% of the incoming power.
[ 11 ] Another method used to create compact photonic components relies on CPP waveguides, as they have displayed strong confinement with acceptable losses of less than 3 dB at telecommunication wavelengths. [ 41 ] Minimizing loss and maximizing compactness with regard to the use of passive devices, as well as active devices, creates more potential for the use of plasmonic circuits. [ citation needed ]
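The confinement-versus-propagation trade-off discussed above can be illustrated for the simplest geometry, a single flat metal-dielectric interface, where the SPP propagation constant follows from the two permittivities and the 1/e intensity propagation length follows from its imaginary part; the same propagation constant also fixes the grating period needed for momentum-matched in-coupling. The gold-like permittivity at a 1550 nm wavelength used below is an assumed example value, not a figure from the text.

```python
import numpy as np

def spp_beta(wavelength, eps_metal, eps_diel):
    """SPP propagation constant on a single flat metal-dielectric interface:
    beta = k0 * sqrt(eps_m * eps_d / (eps_m + eps_d))."""
    k0 = 2 * np.pi / wavelength
    return k0 * np.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel) + 0j)

def propagation_length(beta):
    """1/e intensity decay length; shorter when ohmic losses are larger."""
    return 1.0 / (2.0 * beta.imag)

def grating_period(beta, wavelength, theta_deg=0.0, m=1):
    """Grating period supplying the missing in-plane momentum:
    beta = k0*sin(theta) + m * 2*pi / Lambda."""
    k0 = 2 * np.pi / wavelength
    return m * 2 * np.pi / (beta.real - k0 * np.sin(np.radians(theta_deg)))

wl = 1.55e-6                                              # telecom wavelength
beta = spp_beta(wl, eps_metal=-115 + 11j, eps_diel=1.0)   # assumed gold-like
L = propagation_length(beta)
Lam = grating_period(beta, wl)
print(f"propagation length ~ {L * 1e3:.2f} mm, "
      f"1st-order grating period ~ {Lam * 1e9:.0f} nm")
```

Lowering the metal's losses lengthens the propagation length, while pushing the permittivities toward resonance tightens confinement and shortens it, which is the compromise described above.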
https://en.wikipedia.org/wiki/Plasmonics
Plasmonics is a bimonthly peer-reviewed scientific journal covering plasmonics , including the theory of plasmonic metamaterials , fluorescence, and surface-enhanced Raman spectroscopy . It is published by Springer Science+Business Media . Its current editor is Chris D. Geddes, Director of the Institute of Fluorescence at the University of Maryland Biotechnology Institute . [ 1 ] According to the Journal Citation Reports , the journal has a 2023 impact factor of 3.3. [ 2 ] The journal is abstracted and indexed in several bibliographic databases. [ 1 ]
https://en.wikipedia.org/wiki/Plasmonics_(journal)
Plastarch Material ( PSM ) is a biodegradable , thermoplastic resin. It is composed of starch combined with several other biodegradable materials. The starch is modified in order to obtain heat-resistant properties, making PSM one of the few bioplastics capable of withstanding high temperatures. PSM became commercially available in 2005. PSM is stable in the atmosphere, but biodegradable in compost, wet soil, fresh water, seawater, and activated sludge, where microorganisms exist. It has a softening temperature of 257 °F (125 °C) and a melting temperature of 313 °F (156 °C). [ 1 ] It is also hygroscopic . The material has to be dried in a material dryer at 150 °F (66 °C) for five hours or at 180 °F (82 °C) for three hours. For injection molding and extrusion the barrel temperatures should be 340 ± 10 °F (171 °C) with the nozzle/die at 360 °F (182 °C). Because PSM is similar to other plastics (such as polypropylene and CPET), it can run on many existing thermoforming and injection molding lines. PSM is currently used for a wide variety of applications in the plastics market, such as food packaging and utensils, personal care items, plastic bags, temporary construction tubing, industrial foam packaging, industrial and agricultural film, window insulation, construction stakes, and horticulture planters. Since PSM is derived from a renewable resource ( corn starch ), it has become an attractive alternative to petrochemical-derived products. Unlike conventional plastic, PSM can also be disposed of through incineration, resulting in non-toxic smoke and a white residue which can be used as fertilizer. However, concerns have been expressed about the impact of such technologies on food prices . Some PSM products, such as cutlery, contain a mix of PSM and conventional plastics. These plastics prevent the PSM from degrading, making the entire product non-biodegradable.
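The paired Fahrenheit/Celsius figures quoted above can be checked against the standard conversion C = (F − 32) × 5/9, with the Celsius values rounded to the nearest degree as in the text:

```python
# Consistency check of the PSM temperature pairs quoted in the text.
def f_to_c(f):
    return (f - 32) * 5 / 9

quoted = {   # degrees F -> degrees C, as given for PSM above
    257: 125,   # softening temperature
    313: 156,   # melting temperature
    150: 66,    # drying, five hours
    180: 82,    # drying, three hours
    340: 171,   # barrel temperature (center of the +/- 10 F band)
    360: 182,   # nozzle/die temperature
}

for f, c in quoted.items():
    assert round(f_to_c(f)) == c, (f, c)
print("all quoted F/C pairs are consistent")
```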
https://en.wikipedia.org/wiki/Plastarch_material
Plaster is a building material used for the protective or decorative coating of walls and ceilings and for moulding and casting decorative elements. [ 1 ] In English, "plaster" usually means a material used for the interiors of buildings, while "render" commonly refers to external applications. [ 2 ] The term stucco refers to plasterwork that is worked in some way to produce relief decoration, rather than flat surfaces. The most common types of plaster mainly contain either gypsum , lime , or cement , [ 3 ] but all work in a similar way. The plaster is manufactured as a dry powder and is mixed with water to form a stiff but workable paste immediately before it is applied to the surface. The reaction with water liberates heat through crystallization and the hydrated plaster then hardens. Plaster can be relatively easily worked with metal tools and sandpaper and can be moulded, either on site or in advance, and worked pieces can be put in place with adhesive . Plaster is suitable for finishing rather than load-bearing, and when thickly applied for decoration may require a hidden supporting framework. Forms of plaster have several other uses. In medicine, plaster orthopedic casts are still often used for supporting set broken bones. In dentistry, plaster is used to make dental models by pouring the material into dental impressions . Various types of models and moulds are made with plaster. In art, lime plaster is the traditional matrix for fresco painting; the pigments are applied to a thin wet top layer of plaster and fuse with it so that the painting is actually in coloured plaster. In the ancient world, as well as the sort of ornamental designs in plaster relief that are still used, plaster was also widely used to create large figurative reliefs for walls, though few of these have survived. Plaster was first used as a building material and for decoration in the Middle East at least 5,000 years ago. 
In Egypt, gypsum was burned in open fires, crushed into powder, and mixed with water to create plaster, used as a mortar between the blocks of pyramids and to provide a smooth wall facing. In Jericho, a cult arose where human skulls were decorated with plaster and painted to appear lifelike. The Romans brought plaster-work techniques to Europe . Clay plaster is a mixture of clay , sand, and water, often with the addition of plant fibers for tensile strength, applied over wood lath . Clay plaster has been used around the world at least since antiquity. Settlers in the American colonies used clay plaster on the interiors of their houses: "Interior plastering in the form of clay antedated even the building of houses of frame, and must have been visible in the inside of wattle filling in those earliest frame houses in which … wainscot had not been indulged. Clay continued in use long after the adoption of laths and brick filling for the frame." [ 4 ] Where lime was not easily accessible, it was rationed and usually substituted with clay as a binder. In his seminal work, Martin E. Weaver says, "Mud plaster consists of clay or earth which is mixed with water to give a 'plastic' or workable consistency. If the clay mixture is too plastic it will shrink, crack and distort on drying. Sand, fine gravels and fibres were added to reduce the concentrations of fine clay particles which were the cause of the excessive shrinkage." [ 5 ] Manure was often added for its fibre content. In some building techniques straw or grass was used as reinforcement. In the earliest European settlers' plasterwork, a mud plaster was used. [ 5 ] McKee [ 4 ] wrote of a circa 1675 Massachusetts contract that specified that the plasterer "Is to lath and siele [ 6 ] the four rooms of the house betwixt the joists overhead with a coat of lime and haire upon the clay; also to fill the gable ends of the house with ricks and plaister them with clay. 5.
To lath and plaster partitions of the house with clay and lime, and to fill, lath, and plaister them with lime and haire besides; and to siele and lath them overhead with lime; also to fill, lath, and plaster the kitchen up to the wall plate on every side. 6. The said Daniel Andrews is to find lime, bricks, clay, stone, haire, together with laborers and workmen." [ 7 ] Records of the New Haven colony in 1641 mention clay and hay as well as lime and hair. In German houses of Pennsylvania the use of clay persisted. [ 8 ] Old Economy Village is one such German settlement. The early nineteenth-century utopian village in present-day Ambridge , Pennsylvania, used clay plaster substrate exclusively in the brick and wood-frame high architecture of the Feast Hall, Great House, and other large and commercial structures, as well as in the brick, frame, and log dwellings of the society members. The use of clay in plaster and in laying brickwork appears to have been common practice at the time, not just in the construction of Economy village, which was founded in 1824. Specifications for the construction of lock keepers' houses on the Chesapeake and Ohio Canal, written about 1828, "require stone walls to be laid with clay mortar, excepting 3 inches on the outside of the walls … which (are) to be good lime mortar and well pointed." [ 9 ] Clay was chosen not only for its low cost but also for its availability. At Economy, root cellars dug under the houses yielded clay and sand (stone), the nearby Ohio River yielded washed sand from its sand bars, and lime outcroppings and oyster shells supplied the lime kiln . The surrounding forests of the new village of Economy provided straight-grain, old-growth oak trees for lath. [ 10 ] Hand-split lath starts with a log of straight-grained wood of the required length. The log is split into quarters, and then into smaller and smaller bolts with wedges and a sledge.
When small enough, a froe and mallet were used to split away narrow strips of lath. Farm animals provided hair and manure for the float coat of plaster. Fields of wheat and grains provided straw and hay to reinforce the clay plaster. But there was no uniformity in clay plaster recipes. Manure provides fiber for tensile strength as well as protein adhesive. Unlike casein used with lime plaster, the hydrogen bonds of manure proteins are weakened by moisture. [ 11 ] In braced timber-framed structures, clay plaster was used on interior walls and ceilings as well as exterior walls, as the wall cavity and exterior cladding isolated the clay plaster from moisture penetration. Application of clay plaster in brick structures risked water penetration from failed mortar joints on the exterior brick walls. In Economy Village, the rear and middle wythes of brick dwelling walls are laid in a clay and sand mortar, with the front wythe bedded in a lime and sand mortar to provide a weatherproof seal against water penetration. This allowed a rendering of clay plaster and a thin setting coat of lime and fine sand on exterior-walled rooms. Split lath was nailed with square-cut lath nails, one into each framing member. With hand-split lath the plasterer had the luxury of making lath to fit the cavity being plastered. Lengths of lath of two to six feet are not uncommon at Economy Village. Hand-split lath is not uniform like sawn lath. The straightness or waviness of the grain affected the thickness or width of each lath, and thus the spacing of the lath. The clay plaster rough coat varied to cover the irregular lath. Window and door trim as well as the mudboard (baseboard) acted as screeds. With the variation of the lath thickness and the use of coarse straw and manure, the clay coat of plaster was thick in comparison to later lime-only and gypsum plasters.
In Economy Village, the lime top coats are thin veneers, often an eighth of an inch or less, attesting to the scarcity of limestone supplies there. Clay plasters, with their lack of tensile and compressive strength, fell out of favor as industrial mining and technological advances in kiln production led to the exclusive use of lime and then gypsum in plaster applications. However, clay plasters still exist after hundreds of years, clinging to split lath on rusty square nails. The wall variations and roughness reveal a hand-made and pleasing textured alternative to machine-made modern substrate finishes. But clay plaster finishes are rare and fleeting. According to Martin Weaver, "Many of North America's historic building interiors … are all too often … one of the first things to disappear in the frenzy of demolition of interiors which has unfortunately come to be a common companion to 'heritage preservation' in the guise of building rehabilitation." [ 5 ] Gypsum plaster, [ 12 ] also known as plaster of Paris , [ 13 ] is a white powder consisting of calcium sulfate hemihydrate . The natural form of the compound is the mineral bassanite . [ 14 ] [ 15 ] The name "plaster of Paris" was given because it was originally made by heating gypsum from a large deposit at Montmartre , a hill in the north end of Paris . [ 13 ] [ 16 ] [ 17 ] Gypsum plaster, gypsum powder, or plaster of Paris, is produced by heating gypsum to about 120–180 °C (248–356 °F) in a kiln: [ 18 ] [ 13 ]

CaSO4·2H2O + heat → CaSO4·½H2O + 1½ H2O (released as steam)

Plaster of Paris has the remarkable property of setting into a hard mass on wetting with water:

CaSO4·½H2O + 1½ H2O → CaSO4·2H2O

Plaster of Paris is stored in moisture -proof containers, because the presence of moisture can cause slow setting of the plaster by bringing about its hydration, which will make it useless after some time. [ 19 ] When the dry plaster powder is mixed with water, it rehydrates over time into gypsum. The setting of plaster slurry starts about 10 minutes after mixing and is complete in about 45 minutes. The setting of plaster of Paris is accompanied by a slight expansion of volume. It is used in making casts for statues, toys , and more. [ 19 ] The initial matrix consists mostly of orthorhombic crystals: the kinetic product. Over the next 72 hours, the rhombic crystals give way to an interlocking mass of monoclinic crystal needles, and the plaster increases in hardness and strength. [ 20 ] If plaster or gypsum is heated to between 130 and 180 °C (266 and 356 °F), the hemihydrate is formed, which will also re-form as gypsum if mixed with water. [ 21 ] [ 22 ] On heating to 180 °C (356 °F), the nearly water-free form, called γ-anhydrite (CaSO4·nH2O where n = 0 to 0.05), is produced. γ-anhydrite reacts slowly with water to return to the dihydrate state, a property exploited in some commercial desiccants . On heating above 250 °C (482 °F), the completely anhydrous form, called β-anhydrite or dead burned plaster, is formed. [ 19 ] [ 22 ] Lime plaster is a mixture of calcium hydroxide and sand (or other inert fillers). Carbon dioxide in the atmosphere causes the plaster to set by transforming the calcium hydroxide into calcium carbonate ( limestone ). Whitewash is based on the same chemistry. To make lime plaster, limestone (calcium carbonate) is heated above approximately 850 °C (1,560 °F) to produce quicklime (calcium oxide). Water is then added to produce slaked lime (calcium hydroxide), which is sold as a wet putty or a white powder.
Additional water is added to form a paste prior to use. The paste may be stored in airtight containers. When exposed to the atmosphere, the calcium hydroxide very slowly turns back into calcium carbonate through reaction with atmospheric carbon dioxide, causing the plaster to increase in strength. Lime plaster was a common building material for wall surfaces in a process known as lath and plaster , whereby a series of wooden strips on a studwork frame was covered with a semi-dry plaster that hardened into a surface. The plaster used in most lath and plaster construction was mainly lime plaster , with a cure time of about a month. To stabilize the lime plaster during curing, small amounts of plaster of Paris were incorporated into the mix. Because plaster of Paris sets quickly, "retardants" were used to slow the setting time enough to allow workers to mix large working quantities of lime putty plaster. A modern form of this method uses expanded metal mesh over wood or metal structures, which allows great freedom of design as it is adaptable to both simple and compound curves. Today this building method has been partly replaced with drywall , also composed mostly of gypsum plaster. In both these methods, a primary advantage of the material is that it is resistant to a fire within a room and so can assist in reducing or eliminating structural damage or destruction, provided the fire is promptly extinguished. Lime plaster is used for frescoes , where pigments , diluted in water, are applied to the still-wet plaster. The USA and Iran are the main plaster producers in the world. [ citation needed ] Cement plaster is a mixture of suitable plaster, sand, Portland cement, and water, which is normally applied to masonry interiors and exteriors to achieve a smooth surface. Interior surfaces sometimes receive a final layer of gypsum plaster. Walls constructed with stock bricks are normally plastered while face brick walls are not plastered.
Various cement-based plasters are also used as proprietary spray fireproofing products. These usually use vermiculite as lightweight aggregate. Heavy versions of such plasters are also in use for exterior fireproofing, to protect LPG vessels, pipe bridges, and vessel skirts. Cement plaster was first introduced in America around 1909 and was often called by the generic name adamant plaster after a prominent manufacturer of the time. The advantages of cement plaster noted at that time were its strength, hardness, quick setting time, and durability. [ 23 ] Heat-resistant plaster is a building material used for coating walls and chimney breasts and for use as a fire barrier in ceilings. Its purpose is to replace conventional gypsum plasters in cases where the temperature can get too high for gypsum plaster to stay on the wall or ceiling. An example of a heat-resistant plaster composition is a mixture of Portland cement , gypsum, lime, exfoliated insulating aggregate ( perlite and vermiculite or mica ), phosphate shale , small amounts of an adhesive binder (such as Gum karaya ), and a detergent agent (such as sodium dodecylbenzene sulfonate ). [ 24 ] Plaster may also be used to create complex detailing for use in room interiors. These may be geometric (simulating wood or stone) or naturalistic (simulating leaves, vines, and flowers). These are also often used to simulate wood or stone detailing found in more substantial buildings. In modern times this material is also used for false ceilings. For this, the powder is converted into sheet form, and the sheets are then attached to the base ceiling with fasteners. These come in various designs incorporating combinations of lights and colors. This plaster is commonly used in house construction. After construction, direct painting is possible (as is commonly seen in French architecture), but elsewhere plaster is used.
The walls are coated with the plaster, which (in some countries) is simply calcium carbonate. After drying, the calcium carbonate plaster turns white, and the wall is then ready to be painted. Elsewhere in the world, such as the UK, ever finer layers of plaster are added on top of the plasterboard (or sometimes the brick wall directly) to give a smooth brown polished texture ready for painting. Mural paintings are commonly painted onto a plaster secondary support. Some, like Michelangelo's Sistine Chapel ceiling , are executed in fresco , meaning they are painted on a thin layer of wet plaster, called intonaco ; the pigments sink into this layer so that the plaster itself becomes the medium holding them, which accounts for the excellent durability of fresco. Additional work may be added a secco on top of the dry plaster, though this is generally less durable. Plaster (often called stucco in this context) is a far easier material for making reliefs than stone or wood, and was widely used for large interior wall-reliefs in Egypt and the Near East from antiquity into Islamic times (latterly for architectural decoration, as at the Alhambra ), Rome, and Europe from at least the Renaissance, as well as probably elsewhere. However, it needs very good conditions to survive long in unmaintained buildings – Roman decorative plasterwork is mainly known from Pompeii and other sites buried by ash from Mount Vesuvius . Plaster may be cast directly into a damp clay mold. In creating these, piece molds (molds designed for making multiple copies) or waste molds (for single use) would be made of plaster. This "negative" image, if properly designed, may be used to produce clay castings, which when fired in a kiln become terra cotta building decorations, or it may be used to create cast concrete sculptures. If a plaster positive was desired, this would be constructed or cast to form a durable artwork. As a model for stonecutters this would be sufficient.
If intended for producing a bronze casting, the plaster positive could be further worked to produce smooth surfaces. An advantage of this plaster image is that it is relatively cheap; should a patron approve of the durable image and be willing to bear further expense, subsequent molds could be made for the creation of a wax image to be used in lost wax casting , a far more expensive process. In lieu of producing a bronze image suitable for outdoor use, the plaster image may be painted to resemble a metal image; such sculptures are suitable only for presentation in a weather-protected environment. Plaster expands while hardening, then contracts slightly just before hardening completely. This makes plaster excellent for use in molds, and it is often used as an artistic material for casting. Plaster is also commonly spread over an armature (form) made of wire mesh, cloth, or other materials, a process for adding raised details. For these processes, limestone- or acrylic-based plaster may be employed, known as stucco. [ citation needed ] Products composed mainly of plaster of Paris and a small amount of Portland cement are used for casting sculptures and other art objects as well as molds. Considerably harder and stronger than straight plaster of Paris, these products are for indoor use only, as they degrade in moist conditions. Plaster is widely used as a support for broken bones; a bandage impregnated with plaster is moistened and then wrapped around the damaged limb, setting into a close-fitting yet easily removed tube, known as an orthopedic cast . Plaster is also used in preparation for radiotherapy when fabricating individualized immobilization shells for patients. Plaster bandages are used to construct an impression of a patient's head and neck, and liquid plaster is used to fill the impression and produce a plaster bust.
The transparent material polymethyl methacrylate (Plexiglas, Perspex) is then vacuum formed over this bust to create a clear face mask which will hold the patient's head steady while radiation is being delivered. [ citation needed ] In dentistry, plaster is used for mounting casts or models of oral tissues. These diagnostic and working models are usually made from dental stone, a stronger, harder and denser derivative of plaster which is manufactured from gypsum under pressure. Plaster is also used to invest and flask wax dentures, the wax being subsequently removed by "burning out" and replaced with flowable denture base material. The typically acrylic denture base then cures in the plaster investment mold. Plaster investments can withstand the high heat and pressure needed to ensure a rigid denture base. Moreover, in dentistry there are five types of gypsum products, classified by their consistency and uses: impression plaster (type 1), model plaster (type 2), and dental stones (types 3, 4 and 5). [ citation needed ] In orthotics and prosthetics, plaster bandages traditionally were used to create impressions of the patient's limb (or residuum). This negative impression was then, itself, filled with plaster of Paris, to create a positive model of the limb and used in fabricating the final medical device. In addition, dentures (false teeth) are made by first taking a dental impression using a soft, pliable material that can be removed from around the teeth and gums without loss of fidelity, and using the impression to create a wax model of the teeth and gums. The model is used to create a plaster mold (which is heated so the wax melts and flows out) and the denture materials are injected into the mold. After a curing period, the mold is opened and the dentures are cleaned up and polished. Plasters have been in use in passive fire protection, as fireproofing products, for many decades.
Gypsum plaster releases water vapor when exposed to flame, acting to slow the spread of the fire for as much as an hour or two, depending on thickness. Plaster also provides some insulation to retard heat flow into structural steel elements, which would otherwise lose their strength and collapse in a fire. Early versions of protective plasters often contained asbestos fibres, which have since been outlawed in many industrialized nations. Recent plasters for fire protection contain either cement or gypsum as binding agents, as well as mineral wool or glass fiber to add mechanical strength. Vermiculite, polystyrene beads or chemical expansion agents are often added to decrease the density of the finished product and increase thermal insulation. One differentiates between interior and exterior fireproofing. Interior products are typically less substantial, with lower densities and lower cost. Exterior products have to withstand harsher environmental conditions. A rough surface is typically forgiven inside buildings, as dropped ceilings often hide it. Fireproofing plasters are losing ground to more costly intumescent and endothermic products, simply on technical merit. Trade jurisdiction on unionized construction sites in North America remains with the plasterers, regardless of whether the plaster is decorative in nature or is used in passive fire protection. Cementitious and gypsum-based plasters tend to be endothermic. Fireproofing plasters are closely related to firestop mortars. Most firestop mortars can be sprayed and tooled very well, due to the fine detail work that is required of firestopping. Powder bed and inkjet head 3D printing is commonly based on the reaction of gypsum plaster with water, where the water is selectively applied by the inkjet head. The chemical reaction that occurs when plaster is mixed with water is exothermic. When plaster sets, it can reach temperatures of more than 60 °C (140 °F) and, in large volumes, can burn the skin.
In January 2007, a secondary school student in Lincolnshire, England sustained third-degree burns after encasing her hands in a bucket of plaster as part of a school art project. [ 25 ] Plaster that contains powdered silica or asbestos presents health hazards if inhaled repeatedly. Asbestos is a known irritant when inhaled and can cause cancer, especially in people who smoke, [ 26 ] [ 27 ] and inhalation can also cause asbestosis. Inhaled silica can cause silicosis and (in very rare cases) can encourage the development of cancer. [ 28 ] Persons working regularly with plaster containing these additives should take precautions to avoid inhaling powdered plaster, cured or uncured. People can be exposed to plaster of Paris in the workplace by breathing it in, swallowing it, skin contact, and eye contact. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for plaster of Paris exposure in the workplace at 15 mg/m³ total exposure and 5 mg/m³ respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m³ total exposure and 5 mg/m³ respiratory exposure over an 8-hour workday. [ 29 ]
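As a rough illustration of how such limits are applied, the following Python sketch compares hypothetical 8-hour time-weighted-average readings against the OSHA and NIOSH figures quoted above. The function name and sample values are assumptions for illustration, not part of any regulatory tool.

```python
# Hedged sketch: check hypothetical 8-hour TWA plaster-of-Paris readings
# against the OSHA PEL and NIOSH REL values quoted in the text.
OSHA_PEL = {"total": 15.0, "respirable": 5.0}   # mg/m3, 8-hour workday
NIOSH_REL = {"total": 10.0, "respirable": 5.0}  # mg/m3, 8-hour workday

def within_limits(measured, limits):
    """True if every measured fraction is at or below its limit."""
    return all(measured[k] <= limits[k] for k in limits)

sample = {"total": 12.0, "respirable": 4.0}  # made-up workplace readings
print(within_limits(sample, OSHA_PEL))   # True: meets the OSHA PEL
print(within_limits(sample, NIOSH_REL))  # False: exceeds the NIOSH total limit
```

Note that the total and respirable fractions are checked independently, which is why the same sample can pass one agency's limits and fail the other's.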
https://en.wikipedia.org/wiki/Plaster
Plasterwork is construction or ornamentation done with plaster , such as a layer of plaster on an interior or exterior wall structure, or plaster decorative moldings on ceilings or walls. This is also sometimes called pargeting . The process of creating plasterwork, called plastering or rendering , has been used in building construction for centuries. For the art history of three-dimensional plaster, see stucco . The earliest plasters known to us were lime-based. Around 7500 BC, the people of 'Ain Ghazal in Jordan used lime mixed with unheated crushed limestone to make plaster which was used on a large scale for covering walls, floors, and hearths in their houses. Often, walls and floors were decorated with red, finger-painted patterns and designs. In ancient India and China, renders in clay and gypsum plasters were used to produce a smooth surface over rough stone or mud brick walls, while in early Egyptian tombs, walls were coated with lime and gypsum plaster and the finished surface was often painted or decorated. Modelled stucco was employed throughout the Roman Empire. The Romans used mixtures of lime and sand to build up preparatory layers over which finer applications of gypsum , lime, sand and marble dust were made; pozzolanic materials were sometimes added to produce a more rapid set. Following the fall of the Roman Empire, the addition of marble dust to plaster to allow the production of fine detail and a hard, smooth finish in hand-modelled and moulded decoration was not used until the Renaissance. Around the 4th century BC, the Romans discovered the principles of the hydraulic set of lime, which by the addition of highly reactive forms of silica and alumina, such as volcanic earths , could solidify rapidly even under water. There was little use of hydraulic mortar after the Roman period until the 18th century. 
Plaster decoration was widely used in Europe in the Middle Ages where, from the mid-13th century, gypsum plaster was used for internal and external plaster. Hair was employed as reinforcement, with additives to assist set or plasticity including malt, urine, beer, milk and eggs. In the 14th century, decorative plasterwork called pargeting was being used in South-East England to decorate the exterior of timber-framed buildings. This is a form of incised, moulded or modelled ornament, executed in lime putty or mixtures of lime and gypsum plaster. During this same period, terracotta was reintroduced into Europe and was widely used for the production of ornament. In the mid-15th century, Venetian skilled workers developed a new type of external facing, called marmorino, made by applying lime directly onto masonry. In the 16th century, a new, highly decorative type of internal plasterwork, called scagliola, was invented by stuccoists working in Bavaria. This was composed of gypsum plaster, animal glue and pigments, used to imitate coloured marbles and pietre dure ornament. Sand or marble dust, and lime, were sometimes added. In this same century, the sgraffito technique, also known as graffito or scratchwork, was introduced into Germany by Italian artists, who combined it with modelled stucco decoration. This technique was practised in antiquity and was described by Vasari as a quick and durable method for decorating building facades. Here, layers of contrasting lime plaster were applied and a design scratched through the upper layer to reveal the colour beneath. The 17th century saw the introduction of different types of internal plasterwork. Stucco marble was an artificial marble made using gypsum (sometimes with lime), pigments, water and glue.
Stucco lustro was another form of imitation marble (sometimes called stucco lucido), where a thin layer of lime or gypsum plaster was applied over a scored support of lime, with pigments scattered on the surface of the wet plaster. The 18th century gave rise to renewed interest in innovative external plasters. Oil mastics introduced in the UK in this period included a "Composition or stone paste" patented in 1765 by David Wark. This was a lime-based mix and included "oyls of tar, turpentine and linseed" besides many other ingredients. Another "Composition or cement", including drying oil, was patented in 1773 by Rev. John Liardet. A similar product was patented in 1777 by John Johnson. These compositions were widely used by the architect Robert Adam, who in turn commissioned George Jackson to produce reverse-cut boxwood moulds (many of them to Adam designs). Jackson formed an independent company which still today produces composition pressings and retains a very large boxwood mould collection. In 1774, in France, a mémoire was published on the composition of ancient mortars. This was translated into English as "A Practical Essay on a Cement, and Artificial Stone, justly supposed to be that of the Greeks and Romans" and was published in the same year. Following this, and as a backlash to the disappointment felt due to the repeated failure of oil mastics, in the second half of the 18th century water-based renders gained popularity once more. Mixes for renders were patented, including a "Water Cement, or Stucco" consisting of lime, sand, bone ash and lime-water (Dr Bryan Higgins, 1779). Various experiments mixing different limes with volcanic earths took place in the 18th century. John Smeaton (from 1756) experimented with hydraulic limes and concluded that the best limes were those fired from limestones containing a considerable quantity of clayey material. In 1796, Revd James Parker patented Parker's "Roman Cement".
This was a hydraulic cement which, when mixed with sand, could be used for stucco. It could also be cast to form mouldings and other ornaments. It was, however, of an unattractive brown colour, which needed to be disguised by surface finishes. Natural cements were frequently used in stucco mixes during the 1820s. The popularisation of Portland cement changed the composition of stucco, as well as mortar, to a harder material. The development of artificial cements had started early in the 19th century. In 1811, James Frost took out a patent for an artificial cement obtained by lightly calcining ground chalk and clay together. The French engineer Louis Vicat in 1812–1813 experimented with calcining synthetic mixtures of limestone and clay, a product he introduced in 1818. In 1822, in the UK, James Frost patented another process, similar to Vicat's, producing what he called "British cement". Portland cement, patented in 1824 by Joseph Aspdin, was so called because it was supposed to resemble Portland stone. Aspdin's son William, and later Isaac Johnson, improved the production process. A product very similar to modern Portland cement was available from about 1845, with other improvements taking place in the following years. Thus, after about 1860, most stucco was composed primarily of Portland cement, mixed with some lime. This made it even more versatile and durable. No longer used just as a coating for a substantial material like masonry or log, stucco could now be applied over wood or metal lath attached to a light wood frame. With this increased strength, it ceased to be just a veneer and became a more integral part of the building structure. Early 19th-century rendered façades were colour-washed with distemper; oil paint for external walls was introduced around 1840. The 19th century also saw the revival of the use of oil mastics. In the UK, patents were obtained for "compositions" in 1803 (Thomas Fulchner), 1815 (Christopher Dihl) and 1817 (Peter Hamelin).
These oil mastics, like the ones before them, also proved to be short-lived. Moulded or cast masonry substitutes, such as cast stone and poured concrete, became popular in place of quarried stone during the 19th century. However, this was not the first time "artificial stone" had been widely used. Coade stone, a brand name for a cast stone made from fired clay, had been developed and manufactured in England from 1769 to 1843 and was used for decorative architectural elements. Following the closure of the factory in South London, Coade stone stopped being produced, and the formula was lost. By the mid-19th century, manufacturing centres were preparing cast stones based on cement for use in buildings. These were made primarily with a cement mix, often incorporating fine and coarse aggregates for texture, pigments or dyes to imitate the colouring and veining of natural stones, as well as other additives. Also in the 19th century, various mixtures of modified gypsum plasters, such as Keene's cement, appeared. These materials were developed for use as internal wall plasters, increasing the usefulness of simple plaster of Paris as they set more slowly and were thus easier to use. Tools and materials include trowels, floats, hammers, screeds, hawks, scratching tools, utility knives, laths, lath nails, lime, sand, hair, plaster of Paris, a variety of cements, and various ingredients to form color washes. While most tools have remained unchanged over the centuries, developments in modern materials have led to some changes. Trowels, originally constructed from steel, are now available in a polycarbonate material that allows the application of certain new, acrylic-based materials without staining the finish. Floats, traditionally made of timber (ideally straight-grained, knot-free, yellow pine), are often finished with a layer of sponge or expanded polystyrene. Traditionally, plaster was laid onto laths, rather than plasterboard as is more commonplace nowadays.
Wooden laths are narrow strips of straight-grained wood (the species depending on availability), in lengths from two to four or five feet to suit the distances at which the timbers of a floor or partition are set. Laths are about an inch wide, and are made in three thicknesses: single (1⁄8 to 3⁄16 in or 3.2 to 4.8 mm thick), lath and a half (1⁄4 in or 6.4 mm thick), and double (3⁄8–1⁄2 in or 9.5–12.7 mm thick). The thicker laths should be used in ceilings, to stand the extra strain (sometimes they were doubled for extra strength), and the thinner variety in vertical work such as partitions, except where the latter will be subjected to rough usage, in which case thicker laths become necessary. [ citation needed ] Laths are usually nailed with a space of about 3⁄8 inch (9.5 mm) between them to form a key for the plaster. Laths were formerly all made by hand. Most are now made by machinery and are known as sawn laths, those made by hand being called rent or riven laths. Rent laths give the best results, as they split in a line with the grain of the wood, and are stronger and not so liable to twist as machine-made laths, some of the fibers of which are usually cut in the process of sawing. Laths must be nailed so as to break joint in bays three or four feet wide, with ends butted one against the other. By breaking the joints of the lathing in this way, the tendency for the plaster to crack along the line of joints is diminished and a better key is obtained. Every lath should be nailed at each end and wherever it crosses a joist or stud. All timbers over 3 inches (76 mm) wide should be counter-lathed, that is, have a fillet or double lath nailed along the centre, upon which the laths are then nailed. This is done to preserve a good key for the plaster. Walls liable to damp are sometimes battened and lathed to form an air cavity between the damp wall and the plastering.
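The lath width and key gap above imply a simple estimate of how many lath courses a wall needs. The Python sketch below works through that arithmetic; the dimensions follow the figures in the text, while the example wall height is an assumption for illustration.

```python
import math

# Hedged sketch: each course occupies roughly one lath width (~1 in)
# plus the ~3/8 in key gap described above. Dimensions are in inches.
LATH_WIDTH = 1.0
KEY_GAP = 3.0 / 8.0

def laths_needed(wall_height_in):
    """Estimate the number of lath courses for a wall of the given height."""
    return math.ceil(wall_height_in / (LATH_WIDTH + KEY_GAP))

print(laths_needed(96))  # courses for an example 8 ft high partition
```

In practice the count per bay would be adjusted for breaking joint and for counter-lathing over wide timbers, which this sketch ignores.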
Lathing in metal, either in wire or in the form of perforated galvanised sheets, is now extensively used on account of its fireproof and lasting quality. There are many kinds of this material in different designs, the best known in England being the Jhilmil, the Bostwick, and the Expanded Metal lathing. The two last-named are also widely used in the United States. Lathing nails are usually of iron (cut, wrought or cast), and in the better class of work they are galvanized to prevent rusting. Zinc nails are sometimes used, but are costly. Lime plastering is composed of lime, sand, hair and water in proportions varying according to the nature of the work to be done. The lime mortar principally used for internal plastering is that calcined from chalk, oyster shells or other nearly pure limestone, and is known as fat, pure, chalk or rich lime. Hydraulic limes are also used by the plasterer, but chiefly for external work. Perfect slaking of the calcined lime before use is very important as, if used in a partially slaked condition, it will "blow" when in position and blister the work. Lime should therefore be run as soon as the building is begun, and at least three weeks should elapse between the operation of running the lime and its use. Hair is used in plaster as a binding medium, and gives tenacity to the material. Traditionally horsehair was the most commonly used binder, as it was easily available before the development of the motor-car. Hair functions in much the same way as the strands in fiberglass resin, by controlling and containing any small cracks within the mortar while it dries or when it is subject to flexing. Ox-hair, which is sold in three qualities, is now the kind usually specified; but horsehair, which is shorter, is sometimes substituted for or mixed with the ox-hair in the lower qualities.
Good hair should be long (in the UK, cow and horse hair of short and long lengths is used) and left greasy (lanolin grease), because this protects against some degradation when introduced into the highly alkaline plaster. [ 1 ] Before use it must be well beaten, or teased, to separate the lumps. In America, goats' hair is frequently used, though it is not so strong as ox-hair. The quantity used in good work is one pound of hair to two or three cubic feet of coarse stuff (in the UK, up to 12 kg per cubic metre). Hair reinforcement in lime plaster is common, and many types of hair and other organic fibres can be found in historic plasters. [4] However, organic material in lime will degrade in damp environments, particularly on damp external renders. [5] This problem has given rise to the use of polypropylene fibres and cellulose wood fibres in new lime renders. [6] Manila hemp fiber has been used as a substitute for hair. Plaster for hair slabs made with manila hemp fiber broke at 195 lb (88 kg), plaster mixed with sisal hemp at 150 lb (68 kg), jute at 145 lb (66 kg), and goats' hair at 144 lb (65 kg). [ citation needed ] Another test was made in the following manner. Two barrels of mortar were made up of equal proportions of lime and sand, one containing the usual quantity of goats' hair, and the other Manila fiber. After remaining in a dry cellar for nine months the barrels were opened. It was found that the hair had been almost entirely eaten away by the action of the lime, and the mortar consequently broke up and crumbled quite easily. The mortar containing the Manila hemp, on the other hand, showed great cohesion, and required some effort to pull it apart, the hemp fiber being undamaged. [ citation needed ] For fine plasterer's sand-work, special sands are used, such as silver sand, which is used when a light color and fine texture are required.
In the United Kingdom this fine white sand is procured chiefly from Leighton Buzzard; also in the UK, many traditional plasters had crushed chalk as the aggregate, which made a very flexible plaster suitable for timber-framed buildings. For external work, Portland cement is the best material on account of its strength, durability, and weather-resisting properties, but not on historic structures that are required to flex and breathe; for these, lime without cement is used. [ 2 ] Sawdust has been used as a substitute for hair and also instead of sand as an aggregate. Sawdust will enable mortar to stand the effects of frost and rough weather. It is useful sometimes for heavy cornices and similar work, as it renders the material light and strong. The sawdust should be used dry. The sawdust is used to bind the mix, sometimes to make it go further. The first coat or rendering is from 1⁄2 to 3⁄4 inch thick, and is mixed in proportions ranging from one part of cement to two of sand, up to one part of cement to five of sand. The finishing or setting coat is about 3⁄16 inch thick, and is worked with a hand float on the surface of the rendering, which must first be well wetted. Stucco is a term loosely applied to nearly all kinds of external plastering, whether composed of lime or of cement. At the present time it has fallen into disfavor, but in the early part of the 19th century a great deal of this work was done. Cement has largely superseded lime for this work. The principal varieties of stucco are common, rough, trowelled and bastard. Roughcast or pebbledash plastering is a rough form of external plastering in much use for country houses. In Scotland it is termed "harling". It is one of the oldest forms of external plastering. In Tudor times it was employed to fill in between the woodwork of half-timbered framing. When well executed with good material this kind of plastering is very durable.
Roughcasting is performed by first rendering the wall or laths with a coat of well-haired coarse stuff composed either of good hydraulic lime or of Portland cement. This layer is well scratched to give a key for the next coat. The second coat is also composed of coarse stuff knocked up to a smooth and uniform consistency. To finish, one of two techniques can be used. Sgraffito is the name for scratched ornament in plaster. Scratched ornament is the oldest form of surface decoration, and is much used on the continent of Europe, especially in Germany and Italy, in both external and internal situations. Properly treated, the work is durable, effective and inexpensive. A first coat or rendering of Portland cement and sand, in the proportion of one to three, is laid on about an inch thick; then follows the color coat, sometimes put on in patches of different tints as required for the finished design. When this coat is nearly dry, it is finished with a smooth skimming, 1⁄12 to 1⁄8 inch (2.1 to 3.2 mm) thick, of Parian, selenitic or other fine cement or lime, only as much as can be finished in one day being laid on. Then by pouncing through the pricked cartoon, the design is transferred to the plastered surface. Broad spaces of background are now exposed by removing the finishing coat, thus revealing the colored plaster beneath, and following this the outlines of the rest of the design are scratched with an iron knife through the outer skimming to the underlying tinted surface. Sometimes the coats are in three different colors, such as brown for the first, red for the second, and white or grey for the final coat. The pigments used for this work include Indian red, Turkey red, Antwerp blue, German blue, umber, ochre, purple brown, and bone black or oxide of manganese for black. Combinations of these colors are made to produce any desired tone. Plasters are applied in successive coats or layers on walls or lathing, and the work gains its name from the number of these coats.
The hard cements used for plastering, such as Parian, Keene's, and Martin's, are laid generally in two coats, the first of cement and sand 1/2 to 3/4 inch thick, the second or setting coat of neat cement about 1/8 inch thick. These and similar cements have gypsum as a base, to which a certain proportion of another substance, such as alum, borax or carbonate of soda, is added, and the whole baked or calcined at a low temperature. The plaster they contain causes them to set quickly with a very hard, smooth surface, which may be painted or papered within a few hours of being finished. In Australia, plaster or cement render that is applied to external brickwork on dwellings or commercial buildings can be one or two coats. In two-coat render, a base coat is applied with a common mix of 4 parts sand to one part cement and one part dehydrated lime, with water to make a consistent mortar. Render is applied using a hawk and trowel and pushed on about 12 mm thick to begin. For two-coat work, some plasterers apply two full-depth bands of render (one at the base of the wall and one around chest height) which are screeded plumb and square and allowed to dry while applying the first coat over the remaining exposed wall. The render is then scratched to provide a key for the second coat. This method allows the rest of the wall to be rendered and screeded off without the need to continually check if the second coat is plumb. Alternatively, both coats can be applied with the plasterer using a t-bar to screed the final coat until it is plumb, straight and square. The first method is generally used where quality of finish is at a premium. The second method is quicker but can be several millimetres out of plumb.
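The 4:1:1 sand/cement/lime base-coat mix above is a ratio by volume, so batching it is simple proportional arithmetic. The Python sketch below splits a batch into ingredient quantities; the batch size and function name are illustrative assumptions, and the calculation ignores water, bulking and wastage.

```python
# Hedged sketch: divide a batch volume among ingredients given parts by
# volume, e.g. the 4:1:1 sand/cement/lime base-coat mix quoted above.
def mix_quantities(total_volume_l, parts):
    """Return litres of each ingredient for the given parts-by-volume mix."""
    whole = sum(parts.values())
    return {name: total_volume_l * p / whole for name, p in parts.items()}

base_coat = mix_quantities(60.0, {"sand": 4, "cement": 1, "lime": 1})
print(base_coat)  # {'sand': 40.0, 'cement': 10.0, 'lime': 10.0}
```

The same helper applies to the weaker 5/1/1 second-coat mix mentioned below, just by changing the parts.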
The second coat can be a slightly weaker 5/1/1 mix, or the same as the base coat, with perhaps a waterproofer added to the mixing water to minimize efflorescence (rising of salts). Some plasterers use lime putty in the second coat instead of dehydrated lime in the render. The mortar is applied about 5 mm thick and, when the render hardens, is screeded off straight. A wood float or plastic float is used to rub down the walls. Traditionally, water is splashed on the walls using a coarse horsehair plasterer's brush, followed immediately by rubbing the float in a circular or figure-8 motion, although a figure of 8 can leave marks. Many modern plasterers use a hose with a special nozzle with a fine mist spray to dampen walls when rubbing up (using a wood float to bring a consistent finish). Using a hose brings a superior finish and is more consistent in colour, as there is more chance of catching the render before it has a chance to harden too much. After the work area is floated, the surface is finished with a wet sponge using the same method as floating with a wood float, bringing sand to the surface to give a smooth, consistent finish. Materials used in the render are commonly local sands with little clay content and fine to coarse grains. A sand finish is common for external render and may be one or two coats. Plasterers use a t-bar to screed the walls until they are plumb, straight and square. Two-coat work is superior as, although more expensive, it gives a more consistent finish and has less chance of becoming drummy or cracking. Drumminess occurs when the render doesn't bond completely with the wall, either because the wall is too smooth, a coat is too thick, or the coat is being floated when the render has hardened too much, leaving an air space that makes a drumming sound when a metal tool is "rubbed" over it. For internal walls, two coats is the standard and follows the same method as for external rendering, but with a weaker mix of five or six sand to one cement and one lime.
However, instead of being finished with a sponge, the second coat is left rough and sometimes will be scored by nails inserted in the float. After drying, the surface is then scraped to remove loose grains of sand before plastering. If the walls are concrete, a splash coat is needed to ensure bonding. A splash coat is a very wet mix of two parts cement to one part sand that is "splashed" on the wall using the plasterer's brush until the wall is covered. Special mixes are sometimes required for architectural or practical reasons. For example, a hospital's X-ray room will be rendered with a mix containing barium sulfate to make the walls impervious to X-rays. Plain, or unenriched, moldings are formed with a running mold of zinc cut to the required profile, a process that has remained the same for over 200 years. Enrichments may be moldings added after the main outline molding is set, and are cast in molds made of gelatin or plaster of Paris. Cracks in plastering may be caused by settlement of the building, by the use of inferior materials or by bad workmanship. Even in the absence of these, however, cracks may ensue from too-fast drying of the work: from laying plaster on dry walls which suck out the moisture required for setting, from external heat or the heat of the sun, from laying a coat upon one which has not properly set (the cracking in this case being caused by unequal contraction), or from the use of too small a proportion of sand. In older properties, hairline cracks in plastered ceilings can occur due to minor deflection or movement of the timber joists which support the floor above. [ 3 ] Traditionally, crack propagation was arrested by stirring chopped horsehair thoroughly into the plaster mix. For partitions and ceilings, plaster slabs are used to finish quickly.
For ceilings, metal lathing simply requires nailing to the joists, the joints being made with plaster and the whole finished with a thin setting coat or slab. In some cases, with fireproof ceilings for instance, the metal lathing is hung up with wire hangers so as to allow a space of several inches between the soffit of the concrete floor and the ceiling. For partitions, metal laths are grouted in with semi-fluid plaster. Where very great strength is required, the work may be reinforced by small iron rods through the slabs. This forms a very strong and rigid partition which is at the same time fire-resisting and lightweight, and when finished measures only from two to four inches (51 to 102 mm) thick. So strong is the result that partitions of this class only two or three inches (51 to 76 mm) thick were used for temporary cells for prisoners at Newgate Gaol during the rebuilding of the new sessions house in the Old Bailey in London. The slabs may be obtained either with a keyed surface, which requires finishing with a setting coat when the partition or ceiling is in position, or a smooth finished face, which may be papered or painted as soon as the joints have been carefully made. Fibrous plaster is given by plasterers the suggestive name "stick and rag", and this is a rough description of the material, for it is a fibrous composition of plaster laid upon a backing of canvas stretched on wood. It is much used for moldings, circular and enriched casings to columns and girders, and ornamental work, which is worked in the shop and fixed in position. Desachy, a French modeler, took out a patent in 1856 for "producing architectural moldings, ornaments and other works of art, with surfaces of plaster," with the aid of plaster, glue, wood, wire, and canvas or other woven fabric.
The modern use of this material may be said to have started then, but the use of fibrous plaster was known and practiced by the Egyptians long before the Christian era; ancient coffins and mummies still preserved prove that linen stiffened with plaster was used for decorating coffins and making masks. Cennino Cennini, writing in 1437, says that fine linen soaked in glue and plaster and laid on wood was used for forming grounds for painting. Canvas and mortar were in general use in Great Britain up to the middle of the 20th century. This work is also much used for temporary structures, such as exhibition buildings. There are two main methods used in the US in the construction of the interior walls of modern homes: plasterboard, also called drywall, and veneer plastering. In plasterboard, a specialized form of sheetrock known as "greenboard" (because the outer paper coating is greenish) is screwed onto the wall frames (studs) of the home to form the interior walls. At the place where the edges of two wallboards meet there is a seam. These seams are covered with mesh tape, and then the seams and the screw heads are concealed with drywall compound to make the wall seem one uniform piece. The drywall compound is a thick paste. Later this is painted or wallpapered over to hide the work. This process is typically called "taping", and those who work with drywall are known as "tapers". Veneer plastering covers the entire wall with thin liquid plaster; it uses a great deal of water and is applied very wet. The walls intended to be plastered are hung with "blueboard" (so named for the industry standard of the outer paper being blue-grey in color). This type of sheetrock is designed to absorb some of the moisture of the plaster, allowing the plaster to cling better before it sets. Veneer plastering is a one-shot, one-coat application; taping usually requires sanding and then adding another coat, since the compound shrinks as it dries.
The plasterer usually shows up after the hangers have finished building all the internal walls by attaching blueboard over the frames of the house with screws. The plasterer is usually a subcontractor working in crews that average about three veterans and one laborer. The job of the laborer is to set up ahead of and clean up behind the plasterers, so they can concentrate on spreading the "mud" on the walls. Normally the contractor has already supplied all the bags of gypsum plaster that will be needed, as well as an external supply of water if the house is not yet connected. The plastering crew needs to bring their own tools and equipment and sometimes supply their own bead. The plasterer is usually expected to accomplish the following tasks. The plasterer must first staple or tack cornerbead onto every protruding (external) corner of the inside of the house. Care is taken to make sure this leaves the wall looking straight; this is more a skill of the eye than anything else. The plasterer then fills a 5-gallon bucket partway with water. From this bucket he hangs his trowel or trowels and places various tools into it. Most plasterers have their own preference for the size of the trowel they use: some wield trowels as large as 20 inches long, but the norm seems to be 16"×5". Into the bucket also go a large brush used to splash water onto the wall and to clean his tools, a paint brush for smoothing corners, and a corner bird for forming corners. These tool buckets are first kept near the mix table and then, as the plaster starts to set, are moved closer to the wall that is being worked on. Time becomes a big factor here: once the plaster starts to harden (set) it will do so fairly rapidly, and the plasterer has a small margin of error to get the wall smooth. Onto the mixing table the plasterer usually sets his " hawk " so it will be handy when he needs to grab it and to keep dirt off of it. Any debris in the plaster can become a major nuisance. 
Plasterers will typically divide a room (especially one with large or high-ceilinged walls) into top and bottom. The one working on top will do from the ceiling's edge to about belly height, working off a milk crate for an 8-foot (2.4 m) ceiling or off stilts for 12-foot-high rooms. For cathedral ceilings or very high walls, staging is set up and one plasterer works topside, the others further below. Cleanup is typically done with the laborer. No plaster globs may be left on the floors, walls or corner-bead edges (they will show up if painted and interfere with flooring and trim), and all trash is removed or neatly stacked. All rooms and walls are inspected for cracking and for dents or scratches that may have been caused by others bumping into the walls. They are also inspected to make sure no bumps are left on the walls from splashed plaster or water. All rooms are checked to make sure all plaster is knocked out of the outlets so the electrician can install the sockets, and to make sure no tools are left behind. This leaves the walls ready for the painters and finishers to come in and do their trade. The homeowner and the plasterer's boss will usually decide beforehand what styles they will use in the house. Typically walls, and sometimes ceilings, are smooth. Usually a homeowner will opt to have the ceilings done with a "texture" technique, as it is much easier, faster, and thus cheaper than a smooth ceiling. The plasterer quotes prices to the contractor or homeowner before work begins, based on the techniques to be used and the board feet to be covered. The board feet figure is obtained by the hangers, or estimated by the head subcontractor, by counting the wallboards, which come in industry-standard lengths of 8' to 12'. He then adds in extra expenses for soffits and cathedral ceilings. Typically, if the ceiling is to be smooth it is done first, before the walls; if it is to be textured, it is done after the walls. 
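The board-feet and quoting arithmetic described above can be sketched in a few lines. This is only an illustration: the 4-foot board width, the rate names, and the dollar values are hypothetical assumptions, not industry figures.

```python
# Illustrative board-feet and price estimate for a plastering quote.
# Board width (4 ft) and the per-square-foot rates are hypothetical.

def board_feet(board_lengths_ft, width_ft=4):
    """Total square footage from a count of wallboards.

    `board_lengths_ft` lists board lengths in feet (industry-standard
    lengths run 8' to 12'); sheets are assumed 4' wide here.
    """
    return sum(length * width_ft for length in board_lengths_ft)

def quote(area_sqft, rate_smooth=0.85, rate_texture=0.55, smooth_fraction=1.0):
    """Price estimate: smooth work is priced higher per square foot than texture."""
    smooth = area_sqft * smooth_fraction * rate_smooth
    texture = area_sqft * (1 - smooth_fraction) * rate_texture
    return smooth + texture

# A small room hung with six 8' boards and two 12' boards:
area = board_feet([8, 8, 8, 8, 8, 8, 12, 12])   # 288 sq ft
print(area, quote(area, smooth_fraction=0.75))
```

Soffits and cathedral ceilings, as noted above, would be added on top of such a base figure.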
The reason for this is that invariably, when a ceiling is being worked on, plaster will fall and splash onto the walls, whereas a texture mix does not need to be smoothed out when it starts to set. The first thing the plasterer tends to do is go over all the mesh-taped seams of the walls he is about to cover in a very thin swatch. The wallboard draws moisture out of this strip, so when the plasterer goes over it again while doing the rest of the wall it will not leave an indented seam that needs further reworking. He then fills in the area near the ceiling, so he will not have to stretch to reach it during the rest of the wall, and forms the corner with his bird. This saves much needed time, as this process is a race against the chemical reaction. From the mix table the plasterer scoops some "mud" onto the center of his hawk with his trowel. Holding the hawk in his off-hand and his trowel in his primary hand, the plasterer then scoops a bulging roll of plaster onto his trowel. This takes a bit of practice to master, especially with soupy mixes. Then, holding the trowel parallel to the wall and at a slight angle of the wrist, he tries to roll the plaster uniformly onto the wall, in a manner similar to a squeegee. He starts about an inch above the floor and works his way upwards to the ceiling. Care is taken to be as uniform as possible, as this helps in the finishing phase. Depending on the setting time of the plaster, once the moisture of the plaster starts to be drawn out by the board, a second pass is made. This is called knocking down; it is much like applying paint with a roller in wrist action and purpose: to smooth out any lines and fill in any major voids that would make extra work once the plaster starts to truly set. Very little pressure is applied, and the trowel is kept relatively flat against the wall. Sometimes an accelerant will be added to a mix to shorten the delay between the initial mixing phase and when the plaster starts to set. 
This is normally done on cold days when setting is delayed, or for small jobs to minimize the wait. Once the plaster is on the wall and starts to set (the leftover plaster on the mix table sets first and serves as a guide), the plasterer gingerly sprinkles water onto the wall; this helps to stall the setting and to create a slip. He then uses his trowel, often with a wetted felt brush held in the opposite hand and lightly touching the wall ahead of the trowel, to work this slip into any small gaps (known as "catfaces") in the plaster, as well as to smooth out the rough lay-on and flatten any air bubbles that formed during setting. This is a crucial time, because if the wall gets too hard it is nearly impossible to fill in any gaps: the slip will no longer set with the wall and will instead just dry and fall out. This leads to what is called "grinding", as one must go over the hard wall again and again trying to smooth it out, and any major catfaces must be filled in with a contour putty or joint compound, or reworked by blending in a fresh, thin coat. The finished wall will look glossy and uniformly flat and is smooth to the touch. After a few days it will become chalky white and can then be painted over. The period from the time the bags are dumped into the barrel to when the wall is completely set is called a mix. Depending on the technique used and whether accelerant or retardant is added, a mix typically lasts about two hours. The final moments are the most frantic if the finish is smooth or if the mix sets quicker than anticipated. When this happens it is said the mix has "snapped", which is normally due to using old product or to the weather (humidity or hot days can cause plaster to set quicker). Normally only three or four mixes are done in a day, as plastering is very tiring and not as effective under artificial lighting in the months with early dusk. Plastering is done year round, but unique problems may arise from season to season. 
In the summer, the heat tends to cause the plaster to set faster. The plaster also generates its own heat, and houses can become quite hellish; typically the plaster crew will try to arrive at the house well before dawn. In winter months, short days make artificial lighting necessary, and at certain angles these lights can make even the smoothest wall look like the surface of the moon. Another winter dilemma is the need for propane jet heaters (which can stain the plaster yellowish but do not otherwise hurt it), not just to keep the plasterers warm but also to prevent the water in the mix from freezing and forming ice crystals before the plaster has time to set. Also, if the water hose is not thoroughly drained before leaving, it can freeze overnight and be completely stopped up in the morning. Texturing is usually reserved for closets, ceilings and garage walls. [ citation needed ] Typically a retarding agent is added to the mix. This is normally cream of tartar (or "dope" in the plasterer's jargon), and care must be taken with the amount added: too much and the mix may never set at all. The amount used is often estimated, much the way one adds a dash of salt to a recipe; a small scoop of retarder is added, depending on the size of the mix. Retardant is added so that larger mixes can be made, since the texture technique does not require waiting until the plaster starts to set before working it. The lay-on phase is the same as for smooth work but with a thicker coat. Once the coat is on uniformly, the plasterer goes back and birds his corners. Staying away from the corner, he then takes a trowel with a nice banana curve in it and runs it over the wall in a figure-eight or ess pattern, making sure to cross all areas at least once and adding a little extra plaster to his trowel if needed. The overall effect is layers of paint-like swaths over the whole of the ceiling or wall. 
He can then just walk away and let it set, with care taken not to leave any globs and to make sure the corners look smooth and linear. If a wall is to be smooth and the ceiling textured, typically the wall is done first, then the ceiling after the wall has set. Instead of rebirding the ceiling (which would have been done when the wall was laid on), a clean trowel is held against the wall and its corner is run along the ceiling to "cut it in" and clean the wall at the same time. This line is then smoothed with a paintbrush to make the transition seamless. The sponge (technically called a float) is circular with a rough surface; it is fixed to a backing with a central handhold and is roughly the size of a standard trowel. Sponging is a variant texture technique used normally on ceilings and sometimes in closets. Typically, when using a sponge, sand is added to the mix and the technique is called sand-sponge. Care must be taken not to stand directly under the trowel when doing this, as it is very unpleasant, and dangerous, to get a grain of sand in the eye, which is compounded by the irritation from the lime as well; this combination can easily scratch the eye. The lay-on and mix are the same as with regular texturing; however, after a uniform and smooth coat is placed on the ceiling and the edges are cut in, a special rectangular sponge with a handle is run across the ceiling in overlapping and circular motions. This takes some skill and practice to do well. The overall look is a fishscale-type pattern on the ceiling, closet wall, etc. Even though retarder is typically used, care must be taken to clean out the sponge thoroughly when finished, as any plaster that hardens inside it will be impossible to remove. Stilts are often required to plaster most ceilings, and a ceiling is typically harder to lay on and work than walls. For short ceilings one can also work from milk crates . 
The difficulty of working upside down often results in plaster bombs splattering on the floors, walls and people below. This is why smooth ceilings, which use no retardant and sometimes even accelerant, are done before the walls. Retarded plaster can easily be scraped off a smooth plaster wall when wet, and any splatters from a smooth ceiling can easily be scraped off bare blueboard, but not off an already plastered wall. Care must be taken when standing under one's own trowel or another plasterer's. The general difficulty of working a smooth ceiling fetches a higher cost; the technique is the same as for a smooth wall but at an awkward angle for the plasterer. A steel straight edge is used for leveling rendered walls and lining plasterboard. In England, fine examples of plasterwork interiors of the early modern period can be seen at Chastleton House (Oxfordshire), Knole House (Kent), Wilderhope Manor (Shropshire), Speke Hall ( Merseyside ), and Haddon Hall ( Derbyshire ). Some examples of outstanding extant historical plasterwork interiors are found in Scotland , where the three finest specimens are elaborate decorated ceilings from the early 17th century at Muchalls Castle , Glamis Castle and Craigievar Castle , all in the northeast region of that country. The craft of modelled plasterwork, inspired by the style of the early modern period, was revived by the designers of the Arts and Crafts movement in late-19th- and early-20th-century England. Notable practitioners were Ernest Gimson , his pupil Norman Jewson , and George P. Bankart, who published extensively on the subject. Examples are preserved today at Owlpen Manor and Rodmarton Manor , both in the Cotswolds . Modern ornate fibrous plasterwork by the specialist company of Clark & Fenn can be seen at the Theatre Royal, Drury Lane , the London Palladium , Grand Theatre Leeds , Somerset House , The Plaisterers' Hall and St. 
Clement Danes . Corrado Parducci was a notable plaster worker in the Detroit area during the middle half of the 20th century. Probably his best-known ceiling is located at Meadow Brook Hall . This article incorporates text from a publication now in the public domain : Bartlett, James (1911). " Plaster-work ". In Chisholm, Hugh (ed.). Encyclopædia Britannica . Vol. 28 (11th ed.). Cambridge University Press. pp. 784–786.
https://en.wikipedia.org/wiki/Plasterwork
Plastic bending [ 1 ] is a nonlinear behavior particular to members made of ductile materials that frequently achieve much greater ultimate bending strength than indicated by a linear elastic bending analysis. In both the plastic and elastic bending analyses of a straight beam, it is assumed that the strain distribution is linear about the neutral axis (plane sections remain plane). In an elastic analysis this assumption leads to a linear stress distribution, but in a plastic analysis the resulting stress distribution is nonlinear and depends on the beam's material. The limiting plastic bending strength M_r (see Plastic moment ) can generally be thought of as an upper limit to a beam's load-carrying capability, as it only represents the strength at a particular cross-section and not the load-carrying capability of the overall beam. A beam may fail due to global or local instability before M_r is reached at any point on its length. Therefore, beams should also be checked for local buckling, local crippling, and global lateral-torsional buckling modes of failure. Note that the deflections necessary to develop the stresses indicated in a plastic analysis are generally excessive, frequently to the point of incompatibility with the function of the structure. Therefore, separate analysis may be required to ensure design deflection limits are not exceeded. Also, since working materials into the plastic range can lead to permanent deformation of the structure, additional analyses may be required at limit load to ensure no detrimental permanent deformations occur. The large deflections and stiffness changes usually associated with plastic bending can significantly change the internal load distribution, particularly in statically indeterminate beams. The internal load distribution associated with the deformed shape and stiffness should be used for calculations. 
Plastic bending begins when an applied moment causes the outside fibers of a cross-section to exceed the material's yield strength. Loaded only by a moment, the peak bending stresses occur at the outside fibers of a cross-section. The cross-section does not yield all at once: the outside regions yield first, redistributing stress and delaying failure beyond what would be predicted by elastic analytical methods. The stress distribution from the neutral axis has the same shape as the stress-strain curve of the material (this assumes a non-composite cross-section). After a cross-section reaches a sufficiently advanced state of plastic bending, it acts as a plastic hinge . Elementary elastic bending theory requires that the bending stress vary linearly with distance from the neutral axis , but plastic bending shows a more accurate and complex stress distribution: stresses in the yielded areas of the cross-section lie somewhere between the yield and ultimate strength of the material, while in the elastic region of the cross-section the stress distribution varies linearly from the neutral axis to the beginning of the yielded area. Predicted failure occurs when the stress distribution approximates the material's stress-strain curve, the largest value being the ultimate strength; not every area of the cross-section will have exceeded the yield strength. As in basic elastic bending theory, the moment at any section is equal to an area integral of bending stress across the cross-section. From this and the above additional assumptions, predictions of deflections and failure strength are made. Plastic theory was validated around 1908 by C. v. Bach. [ 2 ]
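For a rectangular cross-section of an elastic-perfectly-plastic material, the spread of yielding described above has a simple closed form. The following sketch (the numeric values are illustrative; any consistent unit system works) recovers both the first-yield moment and the fully plastic moment as limiting cases.

```python
# Moment carried by a rectangular section (width b, depth h) of an
# elastic-perfectly-plastic material as yielding spreads inward.
# `e` is the half-depth of the remaining elastic core:
#   e = h/2  -> first yield (outer fibers just reach sigma_y)
#   e = 0    -> fully plastic section

def moment(sigma_y, b, h, e):
    c = h / 2
    assert 0 <= e <= c
    # Elastic core plus yielded outer regions integrate to this standard result:
    return sigma_y * b * (c**2 - e**2 / 3)

sigma_y, b, h = 250.0, 0.05, 0.1          # illustrative values
M_y = moment(sigma_y, b, h, h / 2)        # equals sigma_y*b*h**2/6
M_p = moment(sigma_y, b, h, 0.0)          # equals sigma_y*b*h**2/4
print(M_p / M_y)                          # shape factor = 1.5 for a rectangle
```

The ratio M_p/M_y (the shape factor) quantifies how much reserve strength elastic analysis leaves unaccounted for; 1.5 is the well-known value for a solid rectangle.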
https://en.wikipedia.org/wiki/Plastic_bending
Plastic coating is a term that is commonly used in technology but is nevertheless ambiguous. It can be understood to mean the coating of plastic (e.g., metallization of plastics [ 1 ] ) or the coating of other materials (e.g., electrical cable ) with plastics. [ 2 ] A polymer coating is a form of plastic coating or surface coating and consists of a plastic base. There are also tribological polymer coatings that can be adapted to numerous application needs thanks to the variety of polymers available. [ 3 ] The coating reduces friction and abrasion , thus preventing the product from wearing out due to corrosion and scratching. [ 4 ] Coatings made of polymers can be produced from various compounds and can therefore be applied to almost any surface. Polymer coatings are accordingly particularly suitable for places where plain bearings cannot be used. A tribological coating can be used, for example, where space is limited and access is difficult. [ 5 ] In addition, polymer coatings can be customized to suit a wide range of applications, such as particularly high-temperature environments or the food industry . The steps to prepare parts for coating with a polymer base are cost-effective compared to other coating options. In addition, the coated end products are consumer-friendly: coated parts are popular because liquids such as water and oil bead off when the surface is coated with a hydrophobic material. This makes it easier to maintain and clean the end products. Furthermore, the color of the polymer coating can also be adjusted, though with some limitations: more pigments are needed depending on the color shade, which ultimately influences the coating. The application of ultra-fine metal coatings to plastic surfaces is becoming increasingly important. [ 2 ] The coating process is also called coating technology. Of particular technical importance is the coating of all kinds of materials with plastics. 
One example is cable sheathing for electrical cables, or the coating of cutlery baskets in dishwashers. Theoretically, paints are also plastic-like coatings. A boundary can be drawn by whether a reaction or crosslinking of the coating takes place (as with automotive clear coat) or whether a plastic merely melts and solidifies on the surface (as in vortex sintering with thermoplastics), but the transitions are fluid. As a rule, plastic coatings have significantly higher film thicknesses than conventional paint. For polymer coating, powder coating is commonly used. [ 10 ] There are also options for wet coating, vacuum coating, dip coating , or thermal spraying. The coating can be applied to a polymer or a polymeric material.
https://en.wikipedia.org/wiki/Plastic_coating
A plastic crystal is a crystal composed of weakly interacting molecules that possess some orientational or conformational degree of freedom. The name plastic crystal refers to the mechanical softness of such phases: they resemble waxes and are easily deformed. If the internal degree of freedom is molecular rotation, the names rotor phase and rotatory phase are also used. Typical examples are the modifications Methane I and Ethane I . In addition to the conventional molecular plastic crystals, there are also emerging ionic plastic crystals, particularly organic ionic plastic crystals (OIPCs) and protic organic ionic plastic crystals (POIPCs). [ 1 ] [ 2 ] POIPCs are solid protic organic salts formed by proton transfer from a Brønsted acid to a Brønsted base; in essence they are protic ionic liquids in the molten state, and they have been found to be promising solid-state proton conductors for high-temperature proton-exchange membrane fuel cells . [ 1 ] Examples include 1,2,4-triazolium perfluorobutanesulfonate [ 1 ] and imidazolium methanesulfonate . [ 2 ] If the internal degree of freedom freezes in a disordered way, an orientational glass is obtained. The orientational degree of freedom may be an almost free rotation, or it may be a jump diffusion between a restricted number of possible orientations, as was shown for carbon tetrabromide . [ 3 ] X-ray diffraction patterns of plastic crystals are characterized by strong diffuse intensity in addition to the sharp Bragg peaks. [ 1 ] In a powder pattern this intensity resembles the amorphous background one would expect for a liquid, [ 1 ] but for a single crystal the diffuse contribution reveals itself to be highly structured. The Bragg peaks can be used to determine an average structure, but due to the large amount of disorder this is not very insightful; it is the structure of the diffuse scattering that reflects the details of the constrained disorder in the system. 
Recent advances in two-dimensional detection at synchrotron beam lines facilitate the study of such patterns via techniques such as small-angle X-ray scattering . Plastic crystals were discovered in 1938 by the Belgian chemist Jean Timmermans , [ 4 ] who identified them by their anomalously low entropy of fusion . He found that organic substances having an entropy of fusion lower than approximately 17 J·K −1 ·mol −1 (~2R, where R is the molar gas constant ) had peculiar properties. Timmermans named them molecular globulare . Michils showed in 1948 that these organic compounds are easily deformed and accordingly named them plastic crystals ( cristaux organiques plastiques ). [ 5 ] Some plastic crystals, like aminoborane , exhibit behavior under mechanical stress similar to that of ductile metals such as lead, gold, silver, or copper. This is different from typical molecular crystals, which are brittle and fragile. As they approach their melting point, such plastic crystals become highly ductile and malleable. Under pressure, these crystals can flow through a hole. They exhibit bending, twisting, and stretching with characteristic necking under appropriate stress, and can be molded into various shapes, much like copper or silver. [ 6 ] Perfluorocyclohexane is plastic to such a degree that it will start to flow under its own weight. [ 7 ] Like liquid crystals , plastic crystals can be considered a transitional stage between true solids and true liquids and can be considered soft matter . Another common denominator is the simultaneous presence of order and disorder. Both types of phase are usually observed between the true solid and liquid phases on the temperature scale. The difference between liquid and plastic crystals is easily observed in X-ray diffraction: plastic crystals possess strong long-range order and therefore show sharp Bragg reflections, [ 1 ] while liquid crystals show no or only very broad Bragg peaks because their order is not long-range. 
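Timmermans' entropy-of-fusion criterion is easy to apply numerically, since the entropy of fusion is just the enthalpy of fusion divided by the melting temperature. The sketch below uses rounded literature values for carbon tetrachloride (a classic plastic-crystal former) and benzene (an ordinary molecular crystal) purely for illustration.

```python
# Timmermans' empirical criterion: plastic-crystal formers tend to have
# an entropy of fusion below roughly 2R (~17 J/(K*mol)).

R = 8.314  # molar gas constant, J/(K*mol)

def entropy_of_fusion(dH_fus_J_per_mol, T_m_K):
    """Delta_S_fus = Delta_H_fus / T_m for melting at temperature T_m."""
    return dH_fus_J_per_mol / T_m_K

def timmermans_plastic(dS_fus):
    """True if the entropy of fusion falls below the ~2R threshold."""
    return dS_fus < 2 * R

# Rounded literature values, for illustration only:
dS_ccl4 = entropy_of_fusion(2560, 250.0)      # CCl4: ~10 J/(K*mol)
dS_benzene = entropy_of_fusion(9870, 278.7)   # benzene: ~35 J/(K*mol)
print(timmermans_plastic(dS_ccl4), timmermans_plastic(dS_benzene))
```

The low value for CCl4 reflects that much of the disorder (molecular rotation) is already present in the solid before melting, which is precisely the plastic-crystal signature.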
The molecules that give rise to liquid crystalline behavior often have a strongly elongated or disc like shape. Plastic crystals consist usually of almost spherical objects. In this respect one could see them as opposites. Certain liquid crystals go through a plastic crystal phase before melting. In general, liquid crystals are closer to liquids while plastic crystals are closer to true crystals. Plastic crystals are closely related to condis crystals , conformationally disordered crystals that nevertheless possess translational and rotational order. [ 8 ]
https://en.wikipedia.org/wiki/Plastic_crystal
In structural engineering beam theory , a plastic hinge is the deformation of a section of a beam where plastic bending occurs. [ 1 ] In earthquake engineering a plastic hinge is also a type of energy-damping device allowing plastic rotation (deformation) of an otherwise rigid column connection. [ 2 ] In plastic limit analysis of structural members subjected to bending, it is assumed that an abrupt transition from elastic to ideally plastic behaviour occurs at a certain value of moment, known as the plastic moment (M p ). Member behaviour between M yp and M p is considered to be elastic. When M p is reached, a plastic hinge is formed in the member. In contrast to a frictionless hinge permitting free rotation, it is postulated that the plastic hinge allows large rotations to occur at constant plastic moment M p . Plastic hinges extend along short lengths of beams; actual values of these lengths depend on cross-sections and load distributions. [ 3 ] But detailed analyses have shown that it is sufficiently accurate to consider beams rigid-plastic, with plasticity confined to plastic hinges at points. While this assumption is sufficient for limit state analysis , finite element formulations are available to account for the spread of plasticity along plastic hinge lengths. [ 4 ] By inserting a plastic hinge at a plastic limit load into a statically determinate beam, a kinematic mechanism permitting an unbounded displacement of the system can be formed. It is known as the collapse mechanism. For each degree of static indeterminacy of the beam, an additional plastic hinge must be added to form a collapse mechanism. [ citation needed ]
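The hinge-counting rule above can be illustrated with a standard textbook case: a beam fixed at both ends carrying a point load at midspan. With a degree of static indeterminacy of 2 (for bending), collapse needs 2 + 1 = 3 hinges, one at each support and one under the load. The sketch below works through the virtual-work equation; the numbers are illustrative.

```python
# Kinematic (virtual-work) collapse-load calculation for a fixed-fixed
# beam with a point load P at midspan, assuming rigid-plastic behavior
# with hinges concentrated at points.

def collapse_load_fixed_fixed_midspan(M_p, L):
    # For hinge rotation theta at each support, the midspan hinge rotates
    # 2*theta and the load point moves delta = theta*L/2.
    #   External work:        P * theta * L / 2
    #   Internal dissipation: M_p*(theta + theta + 2*theta) = 4*M_p*theta
    # Equating the two gives the collapse load:
    return 8 * M_p / L

print(collapse_load_fixed_fixed_midspan(M_p=100.0, L=4.0))  # 200.0
```

Compare the simply supported (determinate) case, which needs only one hinge and collapses at 4*M_p/L: fixing the ends doubles the collapse load.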
https://en.wikipedia.org/wiki/Plastic_hinge
Plastic limit theorems in continuum mechanics provide two bounds [ 1 ] that can be used to determine whether material failure is possible by means of plastic deformation for a given external loading scenario. According to the theorems, to find the range within which the true solution must lie, it is necessary to find both a stress field that balances the external forces and a velocity field or flow pattern that corresponds to those stresses. If the upper and lower bounds provided by the velocity field and stress field coincide, the exact value of the collapse load is determined. [ 2 ] The two plastic limit theorems apply to any elastic-perfectly plastic body or assemblage of bodies. Lower limit theorem: If an equilibrium distribution of stress can be found which balances the applied load and nowhere violates the yield criterion , the body (or bodies) will not fail, or will be just at the point of failure. [ 2 ] Upper limit theorem: The body (or bodies) will collapse if there is any compatible pattern of plastic deformation for which the rate of work done by the external loads exceeds the internal plastic dissipation . [ 2 ]
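The two theorems can be demonstrated on a simply supported beam with a central point load, whose exact collapse load is 4*M_p/L. A statically admissible moment field gives a lower bound, an assumed hinge mechanism gives an upper bound, and the bounds coincide when the hinge is placed under the load. The numbers below are illustrative.

```python
# Bracketing the collapse load of a simply supported beam with a central
# point load P, using the two plastic limit theorems.

def lower_bound(M_p, L):
    # Equilibrium moment diagram peaks at P*L/4 under the load; requiring
    # that peak not to exceed M_p (yield criterion) gives P <= 4*M_p/L,
    # a safe (lower-bound) estimate by the lower limit theorem.
    return 4 * M_p / L

def upper_bound(M_p, L, a):
    # Mechanism with a single hinge at distance a from the left support
    # (0 < a < L). Virtual work for a midspan load gives
    # P = 2*M_p / min(a, L - a): an unsafe (upper-bound) estimate.
    return 2 * M_p / min(a, L - a)

M_p, L = 50.0, 2.0
lb = lower_bound(M_p, L)                                        # 100.0
ub = min(upper_bound(M_p, L, a) for a in [0.5, 0.8, 1.0, 1.2])  # best at a = L/2
print(lb, ub)  # bounds coincide, so 4*M_p/L is the exact collapse load
```

Trying several hinge positions and keeping the smallest upper bound mirrors how the kinematic theorem is used in practice: every admissible mechanism overestimates the collapse load, and the best mechanism closes the gap to the static bound.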
https://en.wikipedia.org/wiki/Plastic_limit_theorems
A plastic model kit ( plamo in Eastern-influenced parlance) [ citation needed ] is a consumer-grade plastic scale model manufactured as a kit , primarily assembled by hobbyists and intended primarily for display. Plastic model kits depict various subjects, ranging from real-life military and civilian vehicles to characters and machinery from original kit lines and pop fiction, especially from eastern pop culture. Kits vary in difficulty, ranging from "snap-together" models that assemble straight from the box to kits that require special tools, paints, and plastic cements. The most popular subjects of plastic models by far are vehicles such as aircraft , ships , automobiles , and armored vehicles such as tanks. The majority of models throughout the hobby's early history depicted military vehicles, [ citation needed ] due to the wider variety of form and historical context compared to civilian vehicles. Other subjects include science-fiction vehicles and mecha, real spacecraft , buildings, animals, human(oid) dolls/action figures, and characters from pop culture. While military, ship, and aircraft modelers prize accuracy above all, modelers of automobiles and science-fiction themes may attempt to duplicate an existing subject or may depict a completely imaginary one. The creation of custom automobile models is related to the creation of actual custom cars, and often an individual may have an interest in both, although the cost of customizing a real car is obviously enormously greater than that of customizing a model. The first plastic models were injection-molded in cellulose acetate (e.g. Frog Penguin and Varney Trains ), but currently most plastic models are injection-molded in polystyrene , and the parts are bonded together, usually with a plastic solvent-based adhesive, although modelers may also use epoxy , cyanoacrylate , and white glue where their particular properties would be advantageous. 
While often omitted by novice modellers, specially formulated paint is sold for application to plastic models. Complex markings such as aircraft insignia or automobile body decorative details and model identification badges are typically provided with kits as screen-printed water-slide decals . Recently, models requiring less skill, time, and/or effort have been marketed, targeted to younger or less skilled modelers as well as those who just wish to reduce the time and effort required to complete a model. One such trend has been to offer a fully detailed kit requiring normal assembly and gluing, but eliminate the often frustrating task of painting the kit by molding it out of colored plastic, or by supplying it pre-painted and with decals applied. Often these kits are identical to another kit supplied in normal white or gray plastic except for the colored plastic or the prepainting, thus eliminating the large expense of creating another set of molds. Another trend which has become very extensive is to produce kits where the parts snap together, with no glue needed; sometimes the majority of the parts snap together with a few requiring glue. Often there is some simplification of detail as well; for instance, automotive kits without opening hoods and no engine detail, or sometimes opaque windows with no interior detail. These are often supplied in colored plastic, although smaller details would still require painting. Decals are usually not supplied with these but sometimes vinyl stickers are provided for insignia and similar details. Resin casting and vacuum forming are also used to produce models, or particular parts where the scale of production is not such as to support the investment required for injection molding. Plastic ship model kits typically provide thread in several sizes and colors for the rigging . Automobile kits typically contain vinyl tires, although sometimes these are molded from polystyrene as well, particularly in very inexpensive kits. 
Thin metal details produced by photoetching have become popular relatively recently, both as detailing parts manufactured and sold by small businesses, and as parts of a complete kit. Detail parts of other materials are sometimes included in kits or sold separately, such as metal tubing to simulate exhaust systems, or vinyl tubing to simulate hoses or wiring. Almost all plastic models are designed in a well-established scale. Each type of subject has one or more common scales, though they differ from one to the other. The general aim is to allow the finished model to be of a reasonable size, while maintaining consistency across models for collections. The following are the most common scales for popular subjects: In reality, models do not always conform to their nominal scale; there are 1/25 scale automobile models which are larger than some 1/24 scale models, for instance. For example, the engine in the recent reissue of the AMT Ala Kart show truck is significantly smaller than the engine in the original issue. AMT employees from the 1960s note that, at that time, all AMT kits were packaged into boxes of a standardized size, to simplify shipping; and the overriding requirement of designing any kit was that it had to fit into that precise size of box, no matter how large or small the original vehicle. This practice was common for other genres and manufacturers of models as well. In modern times this practice has become known as fit-the-box scale. In practice, this means that kits of the same subject in nominally identical scales may produce finished models which actually differ in size, and that hypothetically identical parts in such kits may not be easily swapped between them, even when the kits are both by the same manufacturer. The shape of the model may not entirely conform to the subject, as well; reviews of kits in modeling magazines often comment on how well the model depicts the original. 
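Scale arithmetic of the kind discussed above is simple division: a real dimension divided by the scale denominator gives the model's size, which also shows how small the difference between nominally distinct scales like 1/24 and 1/25 really is. The car length used below is an arbitrary example value.

```python
# Quick scale arithmetic for model kits: real size / scale denominator.

def model_size_mm(real_mm, scale_denominator):
    return real_mm / scale_denominator

# A 4,700 mm long car at the two common automobile scales:
print(round(model_size_mm(4700, 24), 1))  # 195.8 mm at 1/24
print(round(model_size_mm(4700, 25), 1))  # 188.0 mm at 1/25
```

The two results differ by under 8 mm, which is why "fit-the-box" kits could drift from nominal scale without being obviously wrong to the eye.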
The first plastic models were manufactured at the end of 1936 by Frog in the UK, with a range of 1/72nd scale model kits called 'Penguin'. In the late 1940s several American companies such as Hawk, Varney, Empire, Renwal and Lindberg began to produce plastic models. Many manufacturers began production in the 1950s and gained ascendancy in the 1960s, such as Aurora, Revell, AMT, and Monogram in America, Airfix in the UK, and Heller SA in France. Other manufacturers included Matchbox (UK), Italeri and ESCI (both Italian), Novo (using ex-Frog moulds, in the former Soviet Union), and Fujimi, Nichimo, Tamiya Corporation and Bandai (Japan). American model companies which had been producing assembled promotional scale models of new automobiles each year for automobile dealers found a lucrative side business in selling the unassembled parts of these "promos" to hobbyists, thus finding a new revenue stream for the injection molds which were so expensive to update each year. These early models were typically lower in detail than is now standard, with non-opening hoods, no engines, and simplified or no detail on the chassis, which attached to the body with very visible screws. Within a short time, the kit business began to overshadow the production of promos, and the level of accuracy and detail was raised to satisfy the demands of the marketplace. In the 1960s, Tamiya manufactured aircraft kits in the (at the time) peculiar scale of 1/100. Although the range included famous aircraft such as the Boeing B-52 Stratofortress, McDonnell Douglas F-4 Phantom II, North American F-86 Sabre, Dassault Mirage III, Grumman A-6 Intruder and the LTV A-7 Corsair II, it never enjoyed the same success as 1/72 scale kits did. Tamiya soon stopped manufacturing 1/100 scale aircraft, but re-issued a small selection of them in 2004.
Since the 1970s, Japanese firms such as Hasegawa and Tamiya, and since the 1990s also Chinese firms such as DML, AFV Club and Trumpeter, have dominated the field and represent the highest level of technology. [citation needed] Brands from Russia, Central Europe, and Korea have also become prominent recently, with companies like Academy Plastic Model. Many smaller companies have also produced plastic models, both in the past and currently. Prior to the rise of plastic models, shaped wood models were offered to model builders. These wood model kits often required extensive work to achieve results easily obtained with plastic models. With the development of new technologies, the modeling hobby can also be practiced in the virtual world. The Model Builder game, produced by the Moonlit studio and available on Steam (service), consists of cutting, assembling, and painting airplanes, helicopters, tanks, cars, and other subjects, and making dioramas with them. Transferring the hobby to the game world allows novice modelers, and people who do not have the space, time, or money to buy multiple models, to pursue their interests. Another form of practicing in the virtual world is 3D modeling with software such as Blender, FreeCAD, Lego Digital Designer (superseded by BrickLink Studio) or LeoCAD. [1] While injection-molding is the predominant manufacturing process for plastic models, the high costs of equipment and mold-making make it unsuitable for lower-yield production. Thus, models of minor and obscure subjects are often manufactured using alternative processes. Vacuum forming is popular for aircraft models, though assembly is more difficult than for injection-molded kits. Early manufacturers of vacuum formed model kits included Airmodel (the former DDR), Contrail, Airframe (Canada), Formaplane, and Rareplanes (UK).
Resin-casting, popular with smaller manufacturers, particularly aftermarket firms (but also producers of full kits), yields a greater degree of detail moulded in situ, but as the moulds used do not last as long, the price of such kits is considerably higher. In recent times, the latest releases from major manufacturers offer unprecedented detail that matches the finest resin kits, often including high-quality mixed-media parts (photo-etched brass, turned aluminum). Many modellers build dioramas: landscaped scenes built around one or more models. They are most common for military vehicles such as tanks, but airfield scenes and two or three ships in formation are also popular. Conversions use a kit as a starting point and modify it to be something else. For instance, kits of the USS Constitution ("Old Ironsides") are readily available, but the Constitution was just one of six sister ships, and an ambitious modeller will modify the kit, by sawing, filing, adding pieces, and so forth, to make a model of one of the others. Scratch building is the creation of a model "from scratch" rather than from a manufactured kit. True scratchbuilt models consist of parts made by hand and do not incorporate parts from other kits; these are rare. When parts from other kits are included, the art is technically called "kitbashing". Most pieces referred to as "scratchbuilt" are actually a combination of kitbashing and scratchbuilding, so it has become common for either term to be used loosely for these more common hybrid models. Kitbashing is a modelling technique in which parts from multiple model kits are combined to create a novel model. For example, the effects crews on the various Star Trek TV shows frequently kitbashed multiple starship models to quickly create new classes of ship for use in background scenes where details would not be particularly obvious.
The demographics of plastic modeling have changed over its half-century of existence, from young boys buying kits as toys to older adults building them to assemble large collections. In the United States, as well as some other countries, many modelers are former members of the military who like to recreate the actual equipment they used in service. Technological advances have made model-building more and more sophisticated, and the proliferation of expensive detailing add-ons has raised the bar for competition within modeling clubs. As a result, a kit built "out of the box" on a weekend cannot compare with a kit built over months, where a tiny add-on part such as an aircraft seat can cost more than the entire kit itself. Though plastic modeling is generally an uncontroversial hobby, it is not immune to social pressures. [citation needed]
https://en.wikipedia.org/wiki/Plastic_model_kit
In structural engineering, the plastic moment (M_p) is a property of a structural section. It is defined as the moment at which the entire cross section has reached its yield stress. This is theoretically the maximum bending moment that the section can resist: when this point is reached a plastic hinge is formed, and any load beyond this point results in theoretically infinite plastic deformation. [1] In practice most materials work-harden, resulting in increased stiffness and moment resistance until the material fails. This is of little significance in structural mechanics, as the deflection before this occurs is considered to be an earlier failure point in the member. In general, calculating M_p first requires calculation of the plastic section modulus Z_P, which is then substituted into the formula M_p = Z_P · σ_y, where σ_y is the yield stress. For example, the plastic moment for a rectangular section of width b and height h can be calculated as M_p = σ_y · b·h²/4. The plastic moment for a given section will always be larger than the yield moment (the bending moment at which the first part of the section reaches the yield stress).
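The calculation can be checked numerically. This sketch uses an arbitrary 100 mm × 200 mm rectangle with an assumed yield stress of 275 MPa (example values, not from the text above):

```python
def plastic_moment_rect(b_mm: float, h_mm: float, fy_mpa: float) -> float:
    """Plastic moment M_p = Z_p * sigma_y for a rectangle, Z_p = b*h^2/4.
    Inputs in mm and MPa; result converted from N*mm to kN*m."""
    z_p = b_mm * h_mm ** 2 / 4           # plastic section modulus, mm^3
    return z_p * fy_mpa / 1e6            # N*mm -> kN*m

def yield_moment_rect(b_mm: float, h_mm: float, fy_mpa: float) -> float:
    """Yield moment M_y = S * sigma_y, elastic section modulus S = b*h^2/6."""
    s = b_mm * h_mm ** 2 / 6
    return s * fy_mpa / 1e6

# Example: 100 mm x 200 mm rectangle at 275 MPa yield stress.
mp = plastic_moment_rect(100, 200, 275)   # 275.0 kN*m
my = yield_moment_rect(100, 200, 275)     # ~183.3 kN*m
print(mp, my, mp / my)
```

For a rectangle the ratio M_p/M_y (the shape factor) is exactly 1.5, which illustrates the closing statement that the plastic moment always exceeds the yield moment.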
https://en.wikipedia.org/wiki/Plastic_moment
Plastic pipe is a tubular section, or hollow cylinder, made of plastic. It is usually, but not necessarily, of circular cross-section, and is used mainly to convey substances which can flow: liquids and gases (fluids), slurries, powders and masses of small solids. It can also be used for structural applications; hollow pipes are far stiffer per unit weight than solid members. Plastic pipework is used for the conveyance of drinking water, waste water, chemicals, heating and cooling fluids, foodstuffs, ultra-pure liquids, slurries, gases, compressed air, irrigation, plastic pressure pipe systems, and vacuum system applications. There are three basic types of plastic pipe. The first is extruded pipe consisting of one layer of a homogeneous matrix of thermoplastic material, ready for use in a pipeline. The second, structured-wall pipes and fittings, are products with a design optimized with regard to material usage to achieve the physical, mechanical and performance requirements; they are tailor-made piping solutions for a variety of applications, in most cases developed in cooperation with users. The third, barrier pipe, incorporates a flexible metallic layer as the middle of three bonded layers; it is used, for example, to provide additional protection for the contents passing through the pipe (particularly drinking water) from aggressive chemicals or other pollution when laid in ground contaminated by previous use. Most plastic pipe systems are made from thermoplastic materials. The production method involves melting the material, shaping and then cooling; pipes are normally produced by extrusion. [1] Plastic pipe systems fulfil a variety of service requirements. Product standards for plastics pipe systems are prepared within the CEN/TC155 standards committee.
These requirements are described in a set of European product standards for each application, alongside their specific characteristics. Plastic pipes are capable of fulfilling the specific requirements of each application, and they do so over a long lifetime with reliability and safety. [2] The key success factor is maintaining consistently high quality levels; for plastic pipe products, these levels are defined by the different standards. Two aspects are fundamentally important for the performance of plastic pipes: flexibility and long lifetime. [3] Acrylonitrile butadiene styrene (ABS) is used for the conveyance of potable water, slurries and chemicals, and most commonly for DWV (drain-waste-vent) applications. It has a wide temperature range, from −40 °C to +60 °C. ABS is a thermoplastic material originally developed in the early 1950s for use in oil fields and the chemical industry. The versatility of the material and its relative cost effectiveness have made it a popular engineering plastic; it can be tailored to a range of applications by modifying the ratio of its individual chemical components. ABS pipes are used mainly in industrial applications where high impact strength and rigidity are essential, and the material is also used in non-pressure piping systems for soil and waste. [5] Chlorinated polyvinyl chloride (CPVC) is resistant to many acids, bases, salts, paraffinic hydrocarbons, halogens and alcohols. It is not resistant to solvents, aromatics and some chlorinated hydrocarbons. It can carry higher-temperature liquids than uPVC, with a maximum operating temperature reaching 200 °F (93.3 °C). Due to its greater temperature threshold and chemical resistance, CPVC is one of the main recommended material choices for residential, commercial, and industrial water and liquid transport. High-density polyethylene (HDPE) pipe is strong, flexible and lightweight, and has a zero leak rate when fused together.
[6] PB-1 is used in pressure piping systems for hot and cold potable water, pre-insulated district heating networks, and surface heating and cooling systems. Key properties are weldability, temperature resistance, flexibility and high hydrostatic pressure resistance; the material also offers low noise transmission, low linear thermal expansion, and freedom from corrosion and calcification. One standard type, PB 125, has a minimum required strength (MRS) of 12.5 MPa. PB-1 piping systems are no longer sold in North America. Market share in Europe and Asia is small but steadily growing, and in some markets, e.g. Kuwait, the UK, Korea and Spain, PB-1 has a strong position. [7] Polyethylene has been successfully used for the safe conveyance of potable and waste water, hazardous waste, and compressed gases for many years. Two variants are HDPE pipe (high-density polyethylene) [8] and the more heat-resistant PEX (cross-linked polyethylene, also XLPE). PE has been used for pipes since the early 1950s. PE pipes are made by extrusion in a variety of dimensions. PE is lightweight, flexible and easy to weld, and its smooth interior finish ensures good flow characteristics. Continuous development of the material has enhanced its performance, leading to rapidly increasing usage by major water and gas utility companies throughout the world. The pipes are also used in lining and trenchless technologies, the so-called no-dig applications in which pipes are installed without digging trenches and without disruption above ground. Here the pipes may be used to line old pipe systems, reducing leakage and improving water quality, and so helping engineers to rehabilitate antiquated pipe systems: excavation is minimal and the work is carried out quickly below ground. For PE pipe material too, several studies have demonstrated a long track record, with an expected lifetime of more than 50 years. Cross-linked polyethylene is commonly referred to as XLPE or PEX.
It is a thermoplastic material that can be made in three different ways, depending on how the cross-linking of the polymer chains is achieved. PEX was developed in the 1950s. It has been used for pipes in Europe since the early 1970s and has been gaining rapid popularity over the last few decades. Often supplied in coils, it is flexible and can therefore be led around structures without fittings. Its strength at temperatures ranging from below freezing up to almost boiling makes it an ideal pipe material for hot and cold water installations, radiator and underfloor heating, de-icing and ceiling cooling applications. [9] Polyethylene of raised temperature resistance (PE-RT) extends the traditional properties of polyethylene: enhanced strength at high temperatures is made possible through special molecular design and manufacturing process control. Its resistance to both low and high temperatures makes PE-RT ideal for a broad range of hot and cold water pipe applications. Polypropylene (PP) is suitable for use with foodstuffs, potable and ultra-pure water, and within the pharmaceutical and chemical industries. PP is a thermoplastic polymer first developed in the 1950s and used for pipes since the 1970s. Its high impact resistance, combined with good stiffness and high chemical resistance, makes the material suitable for sewer applications. Good performance at continuous operating temperatures up to 60 °C (140 °F) makes it suitable for in-house discharge systems for soil and waste, and a special PP grade with short-term high-temperature performance up to 90 °C (194 °F) is a good choice for in-house warm water supply. [10] Polyvinylidene difluoride (PVDF) is a fairly non-reactive thermoplastic fluoropolymer with excellent chemical and thermal resistance for plastic pipework uses. PVDF resin is produced through polymerization of the vinylidene fluoride monomer.
The PVDF resin is then used to make PVDF pipe as well as many other products. Industries select PVDF pipe for its inert, durable qualities. PVDF piping is used most in the chemical process industry, owing to its ability to carry aggressive, corrosive solutions. PVDF pipe also sees common use in high-purity applications, semiconductor fabrication, electronics and electrical work, pharmaceutical development, and nuclear waste processing. PVDF piping specifications and performance characteristics approve PVDF pipe for use up to 248 °F (120 °C) under pressurized system conditions. The pipe does not support fungus growth according to military test standard method 508, 81-0B. Unlike other common thermoplastic pipes (uPVC, CPVC, PE, PP), PVDF does not exhibit sensitivity to UV light or ozone oxidative damage, making it suitable for long-term outdoor use. [11] uPVC, or PVC-U, is a thermoplastic material derived from common salt and fossil fuels. The material has the longest track record of all plastic pipe materials: the first uPVC pipes were made in the 1930s. Beginning in the 1950s, uPVC pipes were used to replace corroded metal pipes and thus bring fresh drinking water to a growing rural and later urban population. uPVC pipes are certified safe for drinking water per NSF Standard 61 and are used extensively for water distribution and transmission pipelines throughout North America and around the world. uPVC is allowed for waste lines in homes and is the pipe most often used for sanitary sewers. Further pressure and non-pressure applications in the fields of sewers, soil and waste, gas (low pressure) and cable protection soon followed. The material's contribution to public health, hygiene and well-being has therefore been significant. uPVC (unplasticized polyvinyl chloride) pipes are, however, not well suited to hot water lines, and have been restricted from interior water supply line use in US homes since 2006 (under code IRC P2904.5, uPVC is not listed as an approved water supply material).
uPVC has high chemical resistance across its operating temperature range, with a broad band of operating pressures. Its maximum operating temperature is reported as 140 °F (60 °C), and its maximum working pressure as 450 psi (3,100 kPa). Due to its long-term strength characteristics, high stiffness and cost effectiveness, uPVC systems account for a large proportion of plastic piping installations; some estimates put more than 2,000,000 miles (3,200,000 km) of uPVC pipe currently in service across applications. Based on the standard polyvinyl chloride material, three other variants are in use. One variant, called OPVC or PVCO, represents an important landmark in the history of plastic pipe technology: this molecularly oriented, bi-axial high-performance version combines higher strength with extra impact resistance. A ductile variant is MPVC, polyvinyl chloride modified with acrylics or chlorinated PE. This more ductile material, with high fracture resistance, is used in higher-demand applications where resistance to cracking and stress corrosion is important. The long track record of uPVC pipes has been investigated in several studies: recent investigations at the German KRV and the Dutch TNO have confirmed that uPVC water pressure pipes, when installed correctly, have a useful life span of over 100 years. [12] Plastic pipes have been in service for over 50 years, and the predicted lifetime of plastic piping systems exceeds 100 years; several industry studies have demonstrated this prognosis. Plastic pipe materials have always been classified on the basis of long-term pressure testing. The measured failure times, as a function of the stresses in the pipe wall, are plotted as so-called regression curves, and an extrapolation based on the measured failure times is calculated out to 50 years. The predicted failure stress at 50 years is taken as the basis for classification; this value is called the MRS, Minimum Required Strength, at 50 years.
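The regression-and-extrapolation procedure can be sketched as follows. The failure-time data here are invented for illustration, and real MRS classification (e.g. per the ISO 9080 method) uses far more specimens, tested at several temperatures, with statistical lower-prediction limits; this shows only the basic log-log extrapolation idea:

```python
import math

# Hypothetical long-term pressure test results:
# (failure time in hours, hoop stress in MPa).
data = [(10, 14.0), (100, 13.1), (1_000, 12.3), (10_000, 11.5)]

# Fit log10(stress) = a + b * log10(time) by ordinary least squares.
xs = [math.log10(t) for t, _ in data]
ys = [math.log10(s) for _, s in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

# Extrapolate the regression line to 50 years of service.
fifty_years_h = 50 * 365.25 * 24
stress_50y = 10 ** (a + b * math.log10(fifty_years_h))
print(f"predicted hoop stress at 50 years: {stress_50y:.2f} MPa")
```

The predicted stress at 50 years would then be rounded down to a standard class, analogous to the MRS 12.5 MPa quoted for PB 125 above.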
[13] Some reasons why plastic piping systems may fail are poor product bonding or gluing during installation and naturally occurring physical damage, such as tree root infiltration. Plastic pipes have also been found to fail more often during dry, hot summers. [14] Plastic pipes are classified by their ring stiffness. The preferred stiffness classes, as described in several product standards, are SN2, SN4, SN8 and SN16, where SN is the nominal stiffness (kN/m²). Stiffness is important if pipes are to withstand external loadings during installation; the higher the figure, the stiffer the pipe. [citation needed] After correct installation, pipe deflection remains limited, but it will continue to some extent for a while. In relation to the soil in which it is embedded, the plastic pipe behaves in a 'flexible' way. This means that further deflection over time depends on the settlement of the soil around the pipe. [citation needed] Essentially, the pipe follows the movement or settlement of the backfill, as technicians call it; good installation of pipes therefore results in good soil settlement, and further deflection remains limited. [citation needed] For flexible pipes, the soil loading is distributed and supported by the surrounding soil. Stresses and strains caused by the deflection of the pipe will occur within the pipe wall, but the induced stresses never exceed the allowed limit values. [citation needed] The thermoplastic behaviour of the pipe material is such that the induced stresses relax to a low level, and the induced strains are far below the allowable levels. [citation needed] This flexible behaviour means that the pipe will not fail: it will simply deflect further while keeping its function, without breaking. [citation needed] Rigid pipes, by contrast, are by their very nature not flexible and will not follow ground movements; they must bear all the ground loadings, whatever the soil settlement.
This means that when a rigid pipe is subjected to excessive loading, it will reach its limiting stress values more quickly and break. [citation needed] It can therefore be concluded that the flexibility of plastic pipes offers an extra dimension of safety: buried pipes need flexibility. [15] Pipes, fittings, valves, and accessories make up a plastic pressure pipe system. The range of pipe diameters varies with the pipe system, but sizes run from 12 to 400 mm (0.472 to 15.748 in) and from 3⁄8 to 16 in (9.53 to 406.40 mm). Pipes are extruded and are generally available in 3 m (9.84 ft), 4 m (13.12 ft), 5 m (16.40 ft), and 6 m (19.69 ft) straight lengths, and in 25 m (82.02 ft), 50 m (164.04 ft), 100 m (328.08 ft), and 200 m (656.17 ft) coils for LDPE and HDPE. Pipe fittings are moulded and come in many sizes: tee 90° equal (straight and reducing), tee 45°, cross equal, elbow 90° (straight and reducing), elbow 45°, short radius bend 90°, socket/coupler (straight and reducing), union, end caps, reducing bush, and stub, full-face, and blanking flanges. Valves are moulded and also come in many types: ball valves (including multiport valves), butterfly valves, spring-, ball-, and swing-check non-return valves, diaphragm valves, knife gate valves, globe valves and pressure relief/reduction valves. Accessories include solvents, cleaners, glues, clips, backing rings, and gaskets.
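As a rough numerical sketch of the nominal ring-stiffness classes mentioned above: the theoretical ring stiffness of a solid-wall pipe is commonly estimated as S = E·I/D³, with I = e³/12 per unit pipe length. The modulus and dimensions below are assumed example values, and real SN ratings are assigned from test results (e.g. under ISO 9969), not from this formula alone:

```python
def ring_stiffness_kn_m2(e_modulus_mpa: float, wall_mm: float, mean_dia_mm: float) -> float:
    """Theoretical ring stiffness S = E*I/D^3 for a solid-wall pipe.

    I = e^3/12 is the second moment of area of the pipe wall per unit
    length of pipe; D is the mean diameter. With E in MPa (= N/mm^2) and
    dimensions in mm, S comes out in N/mm^2, and 1 N/mm^2 = 1000 kN/m^2.
    """
    i_per_length = wall_mm ** 3 / 12.0                        # mm^3 per mm of pipe
    s_n_mm2 = e_modulus_mpa * i_per_length / mean_dia_mm ** 3  # N/mm^2
    return s_n_mm2 * 1000.0                                    # -> kN/m^2

# Assumed example: modulus ~3000 MPa, 3 mm wall, 110 mm mean diameter.
s = ring_stiffness_kn_m2(3000, 3, 110)
print(f"S = {s:.2f} kN/m^2")  # lands between the SN4 and SN8 class thresholds
```

The cubic dependence on wall thickness and diameter shows why a small increase in wall thickness moves a pipe up a whole SN class.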
https://en.wikipedia.org/wiki/Plastic_pipework
The Mediterranean Sea has been identified as one of the seas most affected by marine plastic pollution. [1][2][3][4][5][6][7] [excessive citations] Its concentrations of microplastics are estimated to be higher than the global average. [8] Studies conducted within the WWF Mediterranean Marine Initiative of 2019 [6] have estimated that 0.57 million metric tons of plastic enter the Mediterranean Sea every year, a quantity corresponding to dumping 33,800 plastic bottles into its waters every minute. This poses significant risks for marine ecosystems and human health, but also for the blue economy of the area, whose coastal zones are very densely populated and among the first tourist destinations worldwide. [9][10] Marine plastic pollution has been found in Mediterranean waters in amounts similar to those present in the ocean gyres (Indian Ocean Gyre, North Atlantic Gyre, North Pacific Gyre, South Atlantic Gyre, South Pacific Gyre). [10] The Mediterranean Sea is therefore often described as the "world's sixth greatest accumulation zone" for marine plastic litter [11][12][13][14] or as an invisible "sixth garbage patch", primarily composed of microplastics. [15] The patch is invisible because there are no permanent litter accumulation areas in the Mediterranean Sea, primarily owing to the semi-enclosed shape of its basin, its cyclonic circulation and the currents present in the region. [16][17] The Mediterranean Sea receives waste from coastal areas and from waterways such as rivers (as in the case of the Nile, which, as of 2017, [18] brought around 200 tonnes of plastic waste into the Mediterranean basin yearly).
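The per-minute figure can be sanity-checked with simple arithmetic. The average bottle mass used below is an assumption for illustration, since the report's actual conversion factor is not given in this text:

```python
# Back-of-the-envelope check of the "33,800 bottles per minute" figure,
# assuming (hypothetically) an average PET bottle mass of about 32 g.
tons_per_year = 0.57e6            # metric tons of plastic entering the sea yearly
minutes_per_year = 365 * 24 * 60  # 525,600 minutes
kg_per_minute = tons_per_year * 1000 / minutes_per_year
bottles_per_minute = kg_per_minute / 0.032
print(f"{kg_per_minute:.0f} kg/min, about {bottles_per_minute:,.0f} bottles/min")
```

Under that assumed bottle mass the arithmetic lands close to the figure quoted above, which suggests the report's conversion factor is of this order.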
A World Wide Fund for Nature report of 2019 [6] estimates that, among the Mediterranean countries, around 70% of plastic pollution from water-based sources comes from three countries: Egypt (41.3%), Turkey (19.1%) and Italy (7.6%). Plastic litter originating from land-based sources is instead estimated to come, in decreasing order, from Turkey, Morocco, Israel, Spain, France, Syria, Egypt, Albania, Tunisia and Italy. [16] Initiatives are being implemented at various levels to reduce and end the problem of marine plastic pollution in the Mediterranean Sea; however, the governance of this problem is very complex because of the nature of such plastics (especially microplastics), the transboundary character of the matter, the difficulties connected with the multiplicity of actors involved, the increasing levels of plastic production and the issues connected with responsibility at different levels. [19][20][21] Studies on marine plastic pollution began in the 1970s, when plastic debris was found in gyres in the Sargasso Sea; the first scientific findings were published in 1972. [22][23] Around a decade later, in the 1980s, the first studies focusing on the problem of marine plastic pollution in the Mediterranean Sea were published. The first [24] study on the Mediterranean basin focused on an area 40 miles south-west of Malta and reported the presence of floating debris, the majority of which was plastic. [25] This body of research focusing on the Mediterranean continued growing in the 1990s, with numerous studies on the impacts, especially biological, of plastic pollution in the marine environment [26] and with the start of interstate discussions, primarily in the form of conferences such as, among others, the "International Symposium on plastics in Mediterranean countries".
[27] However, the real increase in the number and variety of such studies only came later, around 2010, when more information on the quantity and distribution of plastics in the Mediterranean Sea was made accessible, knowledge of microplastics was spreading, and the impacts began to be addressed as a concern through diverse investigations. [26] Recent research, conducted at regional, national and international levels, addresses the impacts of plastic pollution in the Mediterranean Sea from ecological, biological, economic and social points of view, together with the possibilities for reduction efforts and governance of the problem, also while awaiting the prospective global plastic pollution treaty. [28] Plastic accounts for 80% of the waste dispersed in the marine and coastal environment of the Mediterranean Sea. [24] Recent studies focus on the types of plastics found and primarily on the issue of microplastics, both at a global and at a regional level, as in the case of the Mediterranean Sea, which has been identified as a "target hotspot of the world" due to its amounts of microplastics, around four times higher than those present in the North Pacific Ocean. [24] In the Mediterranean Sea, microplastics are found in surface waters as well as on beaches and on the deep seafloor. 94% of the microplastic items present in the Mediterranean Sea are thought to be in the seabed, 5% on beaches and 1% in surface waters. [29] Plastic has been identified as the primary source of litter found on the beaches of the Mediterranean Sea.
[30] Research conducted between 2019 and 2023 within the framework of the COMMON project ("COastal Management and MOnitoring Network for tackling marine litter in the Mediterranean Sea", financed by the European Union through the ENI CBC MED programme [31]) shows that, of more than 90,000 objects collected on beaches in Italy, Tunisia and Lebanon, 17,000 were cigarette butts and 6,000 were cotton bud sticks, which contain plastics. [32] Earlier research, conducted in 2016, presents similar findings and shows that the ten "top marine litter items" found on the beaches and surface waters of the Mediterranean Sea were, starting from the most abundant: cigarette butts and filters; plastic and polystyrene pieces; plastic caps, lids and plastic rings from bottle caps and lids; cotton bud sticks; nets and pieces of nets; bottles; foam sponge; cigarette packets; and plastic bags and their torn-off fragments. These items, primarily made of single-use plastics, account for more than 60% of marine waste on the beaches and in the surface waters of the Mediterranean Sea. [33] The surface waters of the Mediterranean Sea present concentrations of microplastics that, according to a 2015 study (UNEP/MAP), [3] are above 100,000 objects per km², with more than 64,000,000 floating particles per km². [34] As of 2019, the most common types of microplastics found are polyethylene, polystyrene, polyester and polypropylene. [35] Plastic litter that accumulates in the Mediterranean Sea is fragmented into small particles that then tend to gather on the seabed. [36] Studies analysing plastic pollution on the sea floor of the Mediterranean have shown that plastic debris can be found at every depth from 900 m to 3000 m; plastic litter was found in 92.8% of the surveys collected from the Mediterranean deep sea.
[37][24] Particular attention is now being devoted to the presence of microplastics and nanoplastics in the Mediterranean seabed and sediments, the two regions with the highest levels of microplastics in the Mediterranean Sea. [24][38] In 2020, scientists from the University of Manchester analysed sediment samples from the Mediterranean Sea and identified the highest concentration of microplastics ever recorded in sea-floor sediments; they also discovered that such microplastics are moved by wind, storms, hurricanes and bottom currents, which make them accumulate in specific areas. Once these items are deposited, their degradation is minimal because of the lack of oxygen and light, so the microplastics, once they reach the seafloor, remain preserved. [39] Another study from 2020 led to the discovery, for the first time in the Mediterranean seabed, of two types of plastic debris on Giglio Island, Italy: plasticrust and pyroplastics. Plasticrust had first been found in the Atlantic Ocean; this discovery revealed that contamination by plastic debris may be more widespread than previously expected. [40] Research on the sediment of the Cabrera Archipelago National Maritime-Terrestrial Park marine protected area has found that the majority of plastic debris in these waters is microplastics, probably transported by wave currents and winds; the study also showed that higher amounts of microplastics were found in the marine protected area than in samples taken from other sites, thus opening the debate on the protection and conservation of these areas.
[41][30] Other studies on the presence of plastic debris in marine protected areas have been conducted, among others, in the waters of the Natural Park of Telaščica bay in Croatia [42] and in the EU Site of Community Interest in the north-west Adriatic; [43] both areas present marine litter, primarily in the form of microplastics and mesoplastics. The Mediterranean Sea is considered a hotspot for plastic pollution and one of the regions of the world most impacted by this problem. [3] As of 2019, of the estimated 10,000 tons of plastic entering the Mediterranean Sea per year, half had land-based origins (coastal zones) whereas the other half originated from rivers and maritime routing. [44] This is the case, for example, of the Nile River which, as of 2017, [18] brought around 200 tonnes of plastic waste into the Mediterranean basin yearly. Once plastic waste reaches Mediterranean waters, it hardly gets out: the peculiar shape of the Mediterranean Sea and its currents mean that the outflow circulation of its waters is limited and, therefore, that waste entering from the coastlines remains inside the Mediterranean, accumulating within it. [45] Researchers identify several main causes of the large presence of plastic pollution in this sea: its semi-enclosed shape, its densely populated coasts, tourism, fishing, shipping, waste disposal problems, the increasing use of plastic and unsustainable consumption patterns. [26] The primary land-based sources of plastic pollution in the Mediterranean Sea are tourism activities, the large population on the coasts, inefficient waste management, unsustainable consumption patterns and the increasing use of plastics.
[ 26 ] According to a UNEP report, inhabitants and tourists of the Mediterranean region produce 24 million metric tons of plastic waste each year (of which less than 6% is recycled [ 46 ] [ 26 ] ); this figure normally grows during the summer because of tourism presence and activities. [ 47 ] Mediterranean countries are the world's number one tourism destination (considering both international and domestic visitors) [ 48 ] [ 10 ] [ 49 ] and waste management facilities frequently experience overload. Most of the waste produced is dumped into unprotected landfills and gets into the Mediterranean Sea through stormwater runoff, wind currents, rivers and wastewater streams. [ 46 ] [ 26 ] In this respect, considering that wastewater is a pathway for waste disposal into open waters, a key challenge is also that only small percentages of wastewater in the Mediterranean area undergo basic and tertiary treatment. [ 26 ] [ 50 ] Ocean-based sources of plastic litter are connected with some important economic sectors in the Mediterranean: fisheries, shipping and aquaculture , [ 26 ] which generate diverse types of debris that end up in the water. The shipping sector in the Mediterranean Sea is very important, as about 15% of global shipping passes through this region. [ citation needed ] Cruise liners, military fleets, oil and gas stations and drilling activities are other ocean-based sources of plastic litter: the debris produced is then fragmented into microplastics present at all levels of the Mediterranean basin. [ 46 ] As the Mediterranean Sea is characterised by rich biodiversity , ecological value and an intense presence of economic activities, the current and future impacts of marine plastic pollution in this area are particularly high.
[ 2 ] The Mediterranean Sea in fact hosts between 4% and 18% of all marine species, [ 51 ] and tourism on the coastal zones, aquaculture, the fishing industry and maritime transport represent substantial sources of income for the Mediterranean countries . [ 52 ] Research shows that marine plastic pollution has impacts on marine ecosystems and economic activities at various levels, [ 53 ] [ 54 ] but further studies are currently being conducted to thoroughly investigate the size of such impacts in the Mediterranean area, [ 2 ] both on marine biota and on human health. A study conducted by Legambiente on 700 individuals of 6 different fish species discovered that one out of three fish had ingested plastic items and that more than half of the sea turtles that were analysed presented plastic litter either inside or around the body. [ 32 ] Other studies found traces of ingested plastic debris in the stomachs of seabirds, and nanoplastic particles in mussels. [ 55 ] These are just some of the signals of the negative impact that plastic litter has on the marine biodiversity of the Mediterranean basin. Marine plastic litter causes problems not only in terms of the accidental ingestion of plastic items (which can lead to gastrointestinal blockages, diseases and mortality [ 56 ] ), but also in terms of the toxic effects that additives used in plastics can have. Moreover, studies have shown that plastic items soak up contaminants more rapidly and efficiently than organic items floating on sea waters. [ 57 ] Long-term exposure to such plastic items and their contaminants, in particular microplastic debris, has been shown to affect species living in regions where marine plastic pollution is intense, as in the case of the Mediterranean region, whose ecosystem is in constant contact with plastics.
[ 32 ] Potential impacts on human health are connected, among others, with the possibility of eating fish that have ingested plastics, of drinking water that does not undergo treatment, or with releases of chemical substances. [ 26 ] However, the effects of plastics (and especially microplastics) on human health are a particularly debated topic among scholars and researchers, and more studies are being conducted to assess these effects. [ 58 ] [ 59 ] Plastic pollution also has noticeable impacts on the blue economy of the Mediterranean basin, in particular on the tourism, fishing and shipping sectors. Calculating the complete economic impact of marine plastic litter in the Mediterranean Sea is very complicated due to the various sources, the impacts at the environmental, social and economic levels, and the number of sectors involved in different geographic locations. [ 26 ] A 2021 report by UNEP estimated annual losses of US$700 million across the entire blue economy of the Mediterranean Sea. [ 10 ] Because of currents and waves, large amounts of plastic objects accumulate on beaches, demanding continuous clean-ups and potentially disincentivizing tourism. [ 60 ] Aquaculture and fishery are impacted as the marine ecosystem is affected and as litter can damage nets, contaminate the catches and also reduce them. [ 26 ] For the fishing sector, research from 2021 estimates a general annual loss of €61.7 million due to marine debris in European countries; this figure also comprises losses from decreasing demand for fish products due to concerns about their quality. [ 24 ] [ 46 ] The specific loss at the Mediterranean level is more complicated to calculate, but scholars argue the costs are likely higher due to the substantial concentration of marine litter. [ 61 ] The shipping sector suffers economic pressures too, as plastic marine litter, among other things, can damage motors, which then have to be repaired.
Assessments of the impact of plastic marine litter on the shipping sector in the Mediterranean region are being conducted. [ 26 ] Marine pollution in the Mediterranean has led to an increase in the costs sustained by diverse local, regional and national authorities to clean up beaches and coastal areas. For example, as of 2016, Nice's administration spent around €2 million to clean up beaches. [ 60 ] Analysis conducted within the CleanSea project estimated a cost of €3,800 to clean up one tonne of marine litter on European countries' beaches; €2,200 has been estimated to be spent annually to clean up each tonne of floating plastic litter. [ 62 ] Other impacts have been registered at the local level, with losses of income and jobs in the tourism sector, as well as losses in residential property values. [ 26 ] Marine plastic pollution has been defined as a global concern by the European Union , the G7 and G20, the United Nations Environment Programme (UNEP) and various organisations and institutions at local, regional and international levels. Over the last few years, marine plastic debris has also come to be recognised as a relevant issue in terms of its governance and regulatory complexities, which are due in part to the fact that it is a transboundary, "multifaceted" problem, with multiple causes, sources and actors involved, that requires integrated approaches and solutions at various levels. [ 63 ] [ 64 ] [ 65 ] National, regional and international actors, along with civil society and private industries, are trying to address the problem of pollution in the Mediterranean Sea with initiatives, policies and campaigns. The majority of these initiatives address marine pollution in general, while also focusing, among others, on the problem of marine plastic pollution in the Mediterranean Sea and region.
The Barcelona Convention , which was adopted in 1995, was the first regional treaty aimed at reducing pollution, including marine plastic pollution, in the Mediterranean region; the European Union and all countries with a Mediterranean shoreline are parties to the Convention and to the Protocol for the Protection of the Mediterranean Sea against Pollution from Land Based Sources and Activities, which concerns plastic pollution in the Mediterranean basin. [ 66 ] The Barcelona Convention and its Protocols were established within the regional cooperation platform "Mediterranean Action Plan of the United Nations Environment Programme" (UNEP/MAP), the first regional action plan of the UNEP Regional Seas Programme, which was instrumental in the adoption of the Convention itself. [ 67 ] The UNEP/MAP - Barcelona Convention system has played a role in responding to environmental challenges threatening marine and coastal ecosystems in the Mediterranean region. [ 68 ] It collects data on marine debris, litter in waters and on coastlines, and amounts of plastic litter ingested by marine species. [ 3 ] The first legally binding instrument with the purpose of preventing and limiting marine plastic pollution, and of cleaning up marine litter already affecting the area of the Mediterranean Sea, is the "Regional Plan on Marine Litter Management" (RPML) in the Mediterranean, which was adopted in the framework of the Barcelona Convention in 2013. [ 69 ] The Plan is further supported by the EU-funded "Marine Litter MED II project" (2020-2023), which is focused on countries of the Southern Mediterranean (Algeria, Egypt, Libya, Morocco, Tunisia, Israel and Lebanon) and builds on the results of the Marine Litter MED project, carried out between 2016 and 2019.
[ 70 ] Scholars have argued that an international agreement among countries with shorelines on the Mediterranean Sea could be pursued, with actions focused on eliminating plastic waste in nature, on creating plans for the prevention, control and removal of plastic litter from marine ecosystems, on banning specific types of plastic products and preventing their dumping into waters, and on establishing international committees. [ 71 ] [ 15 ] The prospective Global Plastic Pollution Treaty is awaited. [ 10 ] Marine Protected Areas represent a policy instrument which can help reduce plastic pollution in seas and its impacts on marine ecosystems, as they ban or limit fisheries, some tourism activities, dumping of materials, mining, and the building of harbours and offshore wind farms. [ 72 ] [ 73 ] Nevertheless, high levels of plastic pollution, especially microplastics, have been recorded in Marine Protected Areas in the Mediterranean Sea. [ 43 ] [ 42 ] Initiatives focusing specifically on Marine Protected Areas and plastic pollution in the Mediterranean region are awaited. [ 16 ] Programmes and strategies at the EU level address the problem of plastic pollution in Europe's seas, and therefore also the Mediterranean Sea. Key policies are the EU Green Deal [ 74 ] and the Zero Pollution Action Plan, an important goal of which is to reduce waste, marine plastic pollution and the dispersal of microplastics. [ 75 ] Among the relevant strategies are the Water Framework Directive, [ 76 ] the Industrial Emissions Directive, [ 77 ] the Environmental Liability Directive, [ 78 ] the Environmental Crimes Directive, [ 79 ] the Waste Framework Directive, [ 80 ] the Waste Shipment Regulation, [ 81 ] the Packaging and Packaging Waste Directive, [ 82 ] and the Single-Use Plastics Directive.
[ 83 ] The Marine Strategy Framework Directive constitutes the EU legal framework for the safeguarding and preservation of the European seas, including from marine plastic litter; [ 84 ] the Directive addresses the importance of identifying the sources of marine litter and its impacts in order to deploy efficient and comprehensive measures. [ 85 ] Among various actions, there is the European Union's ban on diverse kinds of single-use plastics. [ 86 ] The EU has invited Mediterranean countries to implement legal, administrative and financial actions to create sustainable waste management systems and limit the problem of plastic pollution in the Mediterranean Sea. [ 87 ] [ 15 ] Some of the other actors carrying out activities to raise awareness and build knowledge on the topic of plastic pollution in the Mediterranean Sea include: the Union for the Mediterranean ; [ 88 ] the International Union for the Conservation of Nature and IUCN-Med, [ 89 ] which conducts research on macro-, micro- and nanoplastics in the Mediterranean Sea and builds partnerships and alliances for the implementation of projects in the region; WWF, with different analyses and projects, such as the WWF Mediterranean Marine Initiative; [ 90 ] the Mediterranean Information Office for Environment, Culture and Sustainable Development (MIO-ECSDE) [ 91 ] and the MARLISCO project; [ 92 ] and the Mediterranean Experts on Climate and Environmental Change. [ 93 ] States and civil society actors are also operating and creating partnerships (as in the case of the COastal Management and MOnitoring Network for tackling marine litter in the Mediterranean Sea [ 31 ] ) in awareness-raising initiatives and in clean-up activities on the coastlines of the Mediterranean Sea, as in the case of OGYRE [ 94 ] and ENALEIA, [ 95 ] which cooperate directly with fishermen in cleaning various seas, including the Mediterranean.
Other clean-up activities comprise the "Mediterranean CleanUP" (MCU), [ 96 ] "Clean up the Med" by Legambiente [ 97 ] and spontaneous initiatives at various levels. The Day of the Mediterranean is celebrated each year on 28 November to commemorate the foundation of the Barcelona Convention and to raise awareness on various issues of the Mediterranean basin, among which that of plastic pollution. [ citation needed ]
https://en.wikipedia.org/wiki/Plastic_pollution_in_the_Mediterranean_sea
In mathematics, the plastic ratio is a geometrical proportion equal to 1.324717957244746... ; [ 2 ] it is the unique real solution of the equation $x^3 = x + 1$. The adjective plastic does not refer to the artificial material , but to the formative and sculptural qualities of this ratio, as in the plastic arts . Three quantities $a > b > c > 0$ are in the plastic ratio $\rho$ if $$\frac{b+c}{a} = \frac{a}{b} = \frac{b}{c} = \rho\,.$$ Writing $b = \rho c$ and $a = \rho b = \rho^2 c$, the value of $c$ cancels out and we get $$\frac{\rho + 1}{\rho^2} = \frac{\rho^2}{\rho} = \frac{\rho}{1}\,.$$ It follows that the plastic ratio is the unique real solution of the cubic equation $\rho^3 - \rho - 1 = 0$. Solving the equation with Cardano's formula , $$w_{1,2} = \frac{1}{2}\left(1 \pm \frac{1}{3}\sqrt{\frac{23}{3}}\right), \qquad \rho = \sqrt[3]{w_1} + \sqrt[3]{w_2},$$ or, using the hyperbolic cosine , [ 3 ] $\rho = \frac{2}{\sqrt{3}}\cosh\left(\frac{1}{3}\operatorname{arcosh}\frac{3\sqrt{3}}{2}\right)$. $\rho$ is the superstable fixed point of the iteration $x \gets (2x^3 + 1)/(3x^2 - 1)$, which is the update step of Newton's method applied to $x^3 - x - 1 = 0$.
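The Newton update above converges to $\rho$ in a handful of steps; a minimal numeric sketch (function and variable names are illustrative, not from the source):

```python
def plastic_ratio(x=1.0, tol=1e-15):
    """Newton's method for x^3 - x - 1 = 0, using the update
    step x <- (2x^3 + 1) / (3x^2 - 1) described above."""
    while True:
        x_next = (2 * x**3 + 1) / (3 * x**2 - 1)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next

rho = plastic_ratio()
print(rho)               # ≈ 1.32471795724...
print(rho**3 - rho - 1)  # ~0, since rho solves x^3 = x + 1
```

Because the fixed point is superstable, the error roughly cubes at each step, so machine precision is reached from x = 1 after only a few iterations.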
The iteration $x \gets \sqrt{1 + \tfrac{1}{x}}$ results in the continued reciprocal square root $$\rho = \sqrt{1 + \cfrac{1}{\sqrt{1 + \cfrac{1}{\sqrt{1 + \cdots}}}}}\,.$$ Dividing the defining trinomial $x^3 - x - 1$ by $x - \rho$ one obtains $x^2 + \rho x + 1/\rho$, and the conjugate elements of $\rho$ are $$x_{1,2} = \frac{1}{2}\left(-\rho \pm i\sqrt{3\rho^2 - 4}\right),$$ with $x_1 + x_2 = -\rho$ and $x_1 x_2 = 1/\rho$. Good approximations for the plastic ratio come from its continued fraction expansion , [1; 3, 12, 1, 1, 3, 2, 3, 2, 4, 2, 141, ...]. [ 4 ] Also see the Van der Laan sequence below. The plastic ratio $\rho$ and the golden ratio $\varphi$ are the only morphic numbers: real numbers $x > 1$ for which there exist natural numbers m and n such that $x + 1 = x^m$ and $x - 1 = x^{-n}$. Morphic numbers can serve as a basis for a system of measure. Properties of $\rho$ (m = 3 and n = 4) are related to those of $\varphi$ (m = 2 and n = 1). The plastic ratio satisfies the continued radical $\rho = \sqrt[3]{1 + \sqrt[3]{1 + \sqrt[3]{1 + \cdots}}}$, while the golden ratio satisfies the analogous $\varphi = \sqrt{1 + \sqrt{1 + \sqrt{1 + \cdots}}}$. The plastic ratio can be expressed in terms of itself as the infinite geometric series $\rho = \sum_{n=0}^{\infty}\rho^{-5n}$, in comparison to the golden ratio identity $\varphi = \sum_{n=0}^{\infty}\varphi^{-2n}$. Additionally, $1 + \varphi^{-1} + \varphi^{-2} = 2$, while $\sum_{n=0}^{13}\rho^{-n} = 4$. For every integer $n$ one has $$\rho^n = \rho^{n-2} + \rho^{n-3} = \rho^{n-1} + \rho^{n-5} = \rho^{n-3} + \rho^{n-4} + \rho^{n-5}\,.$$ From this an infinite number of further relations can be found.
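The morphic-number property with m = 3 and n = 4, and the geometric-series identity, are easy to confirm numerically; a small sketch using a literal decimal value of $\rho$:

```python
rho = 1.3247179572447460  # plastic ratio, truncated

# Morphic-number conditions x + 1 = x^m and x - 1 = x^(-n)
# hold for the plastic ratio with m = 3 and n = 4:
print(abs((rho + 1) - rho**3))   # close to 0 (floating-point error)
print(abs((rho - 1) - rho**-4))  # close to 0 (floating-point error)

# Geometric-series identity: partial sums of rho^(-5n)
# converge to rho itself.
partial = sum(rho**(-5 * n) for n in range(40))
print(partial)  # ≈ 1.32471795724...
```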
The algebraic solution of a reduced quintic equation can be written in terms of square roots, cube roots and the Bring radical . If $y = x^5 + x$ then $x = BR(y)$. Since $\rho^{-5} + \rho^{-1} = 1$, it follows that $\rho = 1/BR(1)$. Continued fraction pattern of a few low powers:

$\rho^{-1} = [0; 1, 3, 12, 1, 1, 3, 2, 3, 2, ...] \approx 0.7549\ (25/33)$
$\rho^{0} = [1]$
$\rho^{1} = [1; 3, 12, 1, 1, 3, 2, 3, 2, 4, ...] \approx 1.3247\ (45/34)$
$\rho^{2} = [1; 1, 3, 12, 1, 1, 3, 2, 3, 2, ...] \approx 1.7549\ (58/33)$
$\rho^{3} = [2; 3, 12, 1, 1, 3, 2, 3, 2, 4, ...] \approx 2.3247\ (79/34)$
$\rho^{4} = [3; 12, 1, 1, 3, 2, 3, 2, 4, 2, ...] \approx 3.0796\ (40/13)$
$\rho^{5} = [4; 12, 1, 1, 3, 2, 3, 2, 4, 2, ...] \approx 4.0796\ (53/13)$
$\rho^{7} = [7; 6, 3, 1, 1, 4, 1, 1, 2, 1, 1, ...] \approx 7.1592\ (93/13)$
$\rho^{9} = [12; 1, 1, 3, 2, 3, 2, 4, 2, 141, ...] \approx 12.5635\ (88/7)$

The plastic ratio is the smallest Pisot number . [ 6 ] Because the absolute value $1/\sqrt{\rho}$ of the algebraic conjugates is smaller than 1, powers of $\rho$ generate almost integers . For example: $\rho^{29} = 3480.0002874... \approx 3480 + 1/3479$.
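The almost-integer behaviour of Pisot powers can be checked with plain floating point; a sketch (the bisection helper is illustrative, not from the source):

```python
def real_root_cubic(lo=1.0, hi=2.0):
    """Bisection for the real root of x^3 - x - 1 = 0."""
    for _ in range(80):
        mid = (lo + hi) / 2
        if mid**3 - mid - 1 < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

rho = real_root_cubic()
for n in (10, 20, 29):
    print(n, rho**n)
# rho^29 = 3480.000287..., within 3/10000 of an integer,
# because the conjugate roots have modulus 1/sqrt(rho) < 1.
```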
After 29 rotation steps the phases of the inward spiraling conjugate pair – initially close to $\pm 45\pi/58$ – nearly align with the imaginary axis. The minimal polynomial of the plastic ratio, $m(x) = x^3 - x - 1$, has discriminant $\Delta = -23$. The Hilbert class field of the imaginary quadratic field $K = \mathbb{Q}(\sqrt{\Delta})$ can be formed by adjoining $\rho$. With argument $\tau = (1 + \sqrt{\Delta})/2$ a generator for the ring of integers of $K$, a special value of the Dedekind eta quotient results, which can also be expressed in terms of the Weber–Ramanujan class invariant $G_n$. Properties of the related Klein j-invariant $j(\tau)$ result in the near identity $e^{\pi\sqrt{-\Delta}} \approx (\sqrt{2}\,\rho)^{24} - 24$; the difference is < 1/12659. The elliptic integral singular value [ 8 ] $k_r = \lambda^*(r)$ for $r = 23$ has a closed-form expression (which is less than 1/3 the eccentricity of the orbit of Venus). In his quest for perceptible clarity, the Dutch Benedictine monk and architect Dom Hans van der Laan (1904–1991) asked for the minimum difference between two sizes, so that we will clearly perceive them as distinct, and for the maximum ratio of two sizes, so that we can still relate them and perceive nearness. According to his observations, the answers are 1/4 and 7/1, spanning a single order of size . [ 9 ] Requiring proportional continuity, he constructed a geometric series of eight measures ( types of size ) with common ratio $2/(3/4 + 1/7^{1/7}) \approx \rho$. Put in rational form, this architectonic system of measure is constructed from a subset of the numbers that bear his name.
The Van der Laan numbers have a close connection to the Perrin and Padovan sequences . In combinatorics, the number of compositions of n into parts 2 and 3 is counted by the nth Van der Laan number. The Van der Laan sequence is defined by the third-order recurrence relation $V_n = V_{n-2} + V_{n-3}$ for $n > 2$, with initial values $V_1 = 0,\ V_0 = V_2 = 1$. The first few terms are 1, 0, 1, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 16, 21, 28, 37, 49, 65, 86,... (sequence A182097 in the OEIS ). The limit ratio between consecutive terms is the plastic ratio: $\lim_{n\to\infty} V_{n+1}/V_n = \rho$. The first 14 indices n for which $V_n$ is prime are n = 5, 6, 7, 9, 10, 16, 21, 32, 39, 86, 130, 471, 668, 1264 (sequence A112882 in the OEIS ). [ b ] The last number has 154 decimal digits. The sequence can be extended to negative indices using $V_n = V_{n+3} - V_{n+1}$. The generating function of the Van der Laan sequence is $G(x) = \frac{1}{1 - x^2 - x^3}$, as follows directly from the recurrence and initial values. The characteristic equation of the recurrence is $x^3 - x - 1 = 0$. If the three solutions are the real root $\alpha$ and the conjugate pair $\beta$ and $\gamma$, the Van der Laan numbers can be computed with the Binet formula. [ 11 ] Since $\left|b\beta^n + c\gamma^n\right| < 1/\alpha^{n/2}$ and $\alpha = \rho$, the number $V_n$ is the nearest integer to $a\,\rho^{n+1}$, with n > 1 and $a = \rho/(3\rho^2 - 1) = 0.3106288296404670777619027...$
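The recurrence and the nearest-integer property can be sketched directly (a minimal example; names are illustrative):

```python
def van_der_laan(count):
    """First `count` terms of V_n = V_{n-2} + V_{n-3},
    with V_0 = 1, V_1 = 0, V_2 = 1."""
    v = [1, 0, 1]
    while len(v) < count:
        v.append(v[-2] + v[-3])
    return v[:count]

V = van_der_laan(20)
print(V)
# [1, 0, 1, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 16, 21, 28, 37, 49, 65, 86]

# Consecutive ratios tend to the plastic ratio 1.3247...
print(V[-1] / V[-2])  # 86/65 ≈ 1.3231
```

One can also confirm that each $V_n$ (for n > 1) is the nearest integer to $a\rho^{n+1}$ with $a \approx 0.31063$, as stated above.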
Coefficients $a = b = c = 1$ result in the Binet formula for the related sequence $P_n = 2V_n + V_{n-3}$. The first few terms are 3, 0, 2, 3, 2, 5, 5, 7, 10, 12, 17, 22, 29, 39, 51, 68, 90, 119,... (sequence A001608 in the OEIS ). This Perrin sequence has the Fermat property : if p is prime, $P_p \equiv P_1 \pmod{p}$. The converse does not hold, but the small number of pseudoprimes $n \mid P_n$ makes the sequence special. [ 12 ] The only 7 composite numbers below $10^8$ to pass the test are n = 271441, 904631, 16532714, 24658561, 27422714, 27664033, 46672291. [ 13 ] The Van der Laan numbers are obtained as integral powers n > 2 of a matrix with real eigenvalue $\rho$: [ 10 ] $$Q = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \qquad Q^n = \begin{pmatrix} V_n & V_{n+1} & V_{n-1} \\ V_{n-1} & V_n & V_{n-2} \\ V_{n-2} & V_{n-1} & V_{n-3} \end{pmatrix}.$$ The trace of $Q^n$ gives the Perrin numbers. Alternatively, $Q$ can be interpreted as the incidence matrix for a D0L Lindenmayer system on the alphabet $\{a, b, c\}$ with substitution rule $a \mapsto b,\ b \mapsto ac,\ c \mapsto a$ and initiator $w_0 = c$. The series of words $w_n$ produced by iterating the substitution have the property that the numbers of c's, b's and a's are equal to successive Van der Laan numbers. Their lengths are $l(w_n) = V_{n+2}$.
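The Fermat property and the first Perrin pseudoprime can be verified with modular arithmetic; a minimal sketch (the helper name is illustrative):

```python
def perrin_mod(n, m):
    """P_n mod m for the Perrin sequence P_0 = 3, P_1 = 0,
    P_2 = 2, P_n = P_{n-2} + P_{n-3} (n >= 2)."""
    window = [3, 0, 2]  # [P_0, P_1, P_2]
    for _ in range(n - 2):
        window = [window[1], window[2], (window[0] + window[1]) % m]
    return window[2] % m

# Fermat property: p divides P_p for every prime p (since P_1 = 0).
print(all(perrin_mod(p, p) == 0 for p in [2, 3, 5, 7, 11, 13, 101]))  # True

# 271441 = 521^2 is composite yet passes the test: a Perrin pseudoprime.
print(perrin_mod(271441, 271441))  # 0
```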
Associated to this string rewriting process is a set composed of three overlapping self-similar tiles called the Rauzy fractal , which visualizes the combinatorial information contained in a multiple-generation letter sequence. [ 14 ] There are precisely three ways of partitioning a square into three similar rectangles. [ 15 ] [ 16 ] The fact that a rectangle of aspect ratio $\rho^2$ can be used for dissections of a square into similar rectangles is equivalent to an algebraic property of the number $\rho^2$ related to the Routh–Hurwitz theorem : all of its conjugates have positive real part. [ 17 ] [ 18 ] The circumradius of the snub icosidodecadodecahedron for unit edge length has a closed-form expression in terms of the plastic ratio. The unique positive node $t$ that optimizes cubic Lagrange interpolation on the interval [−1, 1] is equal to 0.41779130... The square of $t$ is the single real root of the polynomial $P(x) = 25x^3 + 17x^2 + 2x - 1$ with discriminant $D = -23^3$. [ 20 ] Expressed in terms of the plastic ratio, $t = \sqrt{\rho}/(\rho^2 + 1)$, which is verified by insertion into $P$. With the optimal node set $T = \{-1, -t, t, 1\}$, the Lebesgue function $\lambda_3(x)$ evaluates to the minimal cubic Lebesgue constant $\Lambda_3(T) = \frac{1 + t^2}{1 - t^2}$ at critical point $x_c = \rho^2 t$. [ 21 ] [ c ] The constants are related through $x_c + t = \sqrt{\rho}$ and can be expressed as the infinite geometric series $$x_c = \sum_{n=0}^{\infty}\sqrt{\rho^{-(8n+5)}}, \qquad t = \sum_{n=0}^{\infty}\sqrt{\rho^{-(8n+9)}}.$$
Each term of the series corresponds to the diagonal length of a rectangle with edges in ratio $\rho^2$, which results from the relation $\rho^n = \rho^{n-1} + \rho^{n-5}$ with $n$ odd. The diagram shows that the sequences of rectangles with common shrink rate $\rho^{-4}$ converge at a single point on the diagonal of a rho-squared rectangle with length $\sqrt{\rho} = \sqrt{1 + \rho^{-4}}$. A plastic spiral is a logarithmic spiral that gets wider by a factor of $\rho$ for every quarter turn. It is described by the polar equation $r(\theta) = a\exp(k\theta)$, with initial radius $a$ and parameter $k = \frac{2\ln(\rho)}{\pi}$. If drawn on a rectangle with sides in ratio $\rho$, the spiral has its pole at the foot of the altitude of a triangle on the diagonal and passes through vertices of rectangles with aspect ratio $\rho^2$ which are perpendicularly aligned and successively scaled by a factor $1/\rho$. In 1838 Henry Moseley noticed that the whorls of a shell of the chambered nautilus are in geometrical progression: "It will be found that the distance of any two of its whorls measured upon a radius vector is one-third that of the next two whorls measured upon the same radius vector ... The curve is therefore a logarithmic spiral." [ 22 ] Moseley thus gave the expansion rate $\sqrt[4]{3} \approx \rho - 1/116$ for a quarter turn.
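The quarter-turn growth factor follows directly from the polar equation; a minimal sketch (a = 1 is an arbitrary initial radius, and the function name is illustrative):

```python
import math

RHO = 1.3247179572447460  # plastic ratio

def plastic_spiral_radius(theta, a=1.0):
    """r(theta) = a * exp(k * theta) with k = 2 ln(rho) / pi."""
    k = 2 * math.log(RHO) / math.pi
    return a * math.exp(k * theta)

# One quarter turn (pi/2) widens the spiral by a factor of rho:
r0 = plastic_spiral_radius(0.0)
r1 = plastic_spiral_radius(math.pi / 2)
print(r1 / r0)  # 1.3247... = rho

# Moseley's nautilus rate for a quarter turn, 3^(1/4), is close:
print(3**0.25)  # 1.3160...
```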
[ d ] Considering the plastic ratio a three-dimensional equivalent of the ubiquitous golden ratio, it appears to be a natural candidate for measuring the shell. [ e ] The number ρ was first studied by Axel Thue in 1912 and by G. H. Hardy in 1919. [ 6 ] French high school student Gérard Cordonnier discovered the ratio for himself in 1924. In his correspondence with Hans van der Laan a few years later, he called it the radiant number ( French : le nombre radiant ). Van der Laan initially referred to it as the fundamental ratio ( Dutch : de grondverhouding ), using the plastic number ( Dutch : het plastische getal ) from the 1950s onward. [ 24 ] In 1944 Carl Siegel showed that ρ is the smallest possible Pisot–Vijayaraghavan number and suggested naming it in honour of Thue. Unlike the names of the golden and silver ratios , the word plastic was not intended by van der Laan to refer to a specific substance, but rather in its adjectival sense, meaning something that can be given a three-dimensional shape. [ 25 ] This, according to Richard Padovan , is because the characteristic ratios of the number, ⁠ 3 / 4 ⁠ and ⁠ 1 / 7 ⁠ , relate to the limits of human perception in relating one physical size to another. Van der Laan designed the 1967 St. Benedictusberg Abbey church to these plastic number proportions. [ 26 ] The plastic number is also sometimes called the silver number, a name given to it by Midhat J. Gazalé [ 27 ] and subsequently used by Martin Gardner , [ 28 ] but that name is more commonly used for the silver ratio 1 + √ 2 , one of the ratios from the family of metallic means first described by Vera W. de Spinadel . Gardner suggested referring to ρ 2 as "high phi", and Donald Knuth created a special typographic mark for this name, a variant of the Greek letter phi ("φ") with its central circle raised, resembling the Georgian letter pari ("Ⴔ").
https://en.wikipedia.org/wiki/Plastic_ratio
Plastic welding is welding for semi-finished plastic materials, and is described in ISO 472 [ 1 ] as a process of uniting softened surfaces of materials, generally with the aid of heat (except for solvent welding ). Welding of thermoplastics is accomplished in three sequential stages, namely surface preparation, application of heat and pressure, and cooling. Numerous welding methods have been developed for the joining of semi-finished plastic materials. Based on the mechanism of heat generation at the welding interface, welding methods for thermoplastics can be classified as external and internal heating methods, [ 2 ] as shown in Fig 1. Production of a good quality weld does not only depend on the welding methods, but also weldability of base materials. Therefore, the evaluation of weldability is of higher importance than the welding operation (see rheological weldability ) for plastics. A number of techniques are used for welding of semi-finished plastic products as given below: Hot gas welding, also known as hot air welding , is a plastic welding technique using heat. A specially designed heat gun, called a hot air welder , produces a jet of hot air that softens both the parts to be joined and a plastic filler rod, all of which must be of the same or a very similar plastic. (Welding PVC to acrylic is an exception to this rule.) Hot air/gas welding is a common fabrication technique for manufacturing smaller items such as chemical tanks , water tanks , heat exchangers , and plumbing fittings . Hot air/gas welding is also a common repair technique used by auto collision shops to repair damage to plastic parts such as plastic bumper covers and other plastic components. In the case of webs and films a filler rod may not be used. Two sheets of plastic are heated via a hot gas (or a heating element ) and then rolled together. This is a quick welding process and can be performed continuously. 
A plastic welding rod, also known as a thermoplastic welding rod , is a rod with circular or triangular cross-section used to bind two pieces of plastic together. They are available in a wide range of colors to match the base material's color. Spooled plastic welding rod is known as "spline". An important aspect of plastic welding rod design and manufacture is the porosity of the material. A high porosity will lead to air bubbles (known as voids ) in the rods, which decrease the quality of the welding. The highest quality of plastic welding rods are therefore those with zero porosity, which are called voidless . Heat sealing is the process of sealing one thermoplastic to another similar thermoplastic using heat and pressure. The direct contact method of heat sealing utilizes a constantly heated die or sealing bar to apply heat to a specific contact area or path to seal or weld the thermoplastics together. A variety of heat sealers is available to join thermoplastic materials such as plastic films : Hot bar sealer, Impulse sealer, etc. Heat sealing is used for many applications, including heat seal connectors, thermally activated adhesives, and film or foil sealing. Common applications for the heat sealing process: Heat seal connectors are used to join LCDs to PCBs in many consumer electronics, as well as in medical and telecommunication devices. Heat sealing of products with thermal adhesives is used to hold clear display screens onto consumer electronic products and for other sealed thermo-plastic assemblies or devices where heat staking or ultrasonic welding is not an option due to part design requirements or other assembly considerations. Heat sealing also is used in the manufacturing of bloodtest film and filter media for the blood, virus and many other test strip devices used in the medical field today. 
Laminate foils and films often are heat sealed over the top of thermoplastic medical trays, Microtiter (microwell) plates, bottles, and containers to seal them and/or prevent contamination of medical test devices, sample collection trays, and containers used for food products. [ 4 ] Manufacturers of bags and flexible containers in the medical and food industries use heat sealing for perimeter welding of the bag material and/or for sealing ports and tubes into the bags. With freehand welding, the jet of hot air (or inert gas) from the welder is directed at the weld area and the tip of the weld rod at the same time. As the rod softens, it is pushed into the joint and fuses to the parts. This process is slower than most others, but it can be used in almost any situation. With speed welding, the plastic welder, similar to a soldering iron in appearance and wattage, is fitted with a feed tube for the plastic weld rod. The speed tip heats the rod and the substrate, while at the same time it presses the molten weld rod into position. A bead of softened plastic is laid into the joint, and the parts and weld rod fuse. With some types of plastic, such as polypropylene, the melted welding rod must be "mixed" with the semi-melted base material being fabricated or repaired. These welding techniques have been improved over time and have been utilized for over 50 years by professional plastic fabricators and repairers internationally. The speed tip welding method is a much faster welding technique, and with practice can be used in tight corners. A version of the speed tip "gun" is essentially a soldering iron with a broad, flat tip that can be used to melt the weld joint and filler material to create a bond. Extrusion welding allows the application of bigger welds in a single weld pass. It is the preferred technique for joining material over 6 mm thick. 
Welding rod is drawn into a miniature hand held plastic extruder, plasticized, and forced out of the extruder against the parts being joined, which are softened with a jet of hot air to allow bonding to take place. This is the same as spot welding except that heat is supplied with thermal conduction of the pincher tips instead of electrical conduction. Two plastic parts are brought together where heated tips pinch them, melting and joining the parts in the process. Related to contact welding, this technique is used to weld larger parts, or parts that have a complex weld joint geometry. The two parts to be welded are placed in the tooling attached to the two opposing platens of a press. A hot plate, with a shape that matches the weld joint geometry of the parts to be welded, is moved in position between the two parts. The two opposing platens move the parts into contact with the hot plate until the heat softens the interfaces to the melting point of the plastic. When this condition is achieved the hot plate is removed, and the parts are pressed together and held until the weld joint cools and re-solidifies to create a permanent bond. Hot-plate welding equipment is typically controlled pneumatically, hydraulically, or electrically with servo motors. This process is used to weld automotive under hood components, automotive interior trim components, medical filtration devices, consumer appliance components, and other car interior components. Similar to hot plate welding, non-contact welding uses an infrared heat source to melt the weld interface rather than a hot plate. This method avoids the potential for material sticking to the hot plate, but is more expensive and more difficult to achieve consistent welds, particularly on geometrically complex parts. High Frequency welding, also known as Dielectric Sealing or Radio Frequency (RF) Heat Sealing, is a very mature technology that has been around since the 1940s. 
High frequency electromagnetic waves in the radio frequency range can heat certain polymers enough to soften them for joining; under pressure, the heated plastics weld together. Heat is generated within the polymer by the rapid reorientation of some chemical dipoles of the polymer, which means that the heating can be localized and the process can be continuous. Only polymers which contain dipoles can be heated by RF waves, in particular polymers with a high dielectric loss factor. Among these, PVC , polyamides (PA) and acetates are commonly welded with this technology. In practice, two pieces of material are placed on a table press that applies pressure to both surface areas. Dies are used to direct the welding process. When the press comes together, high frequency waves (usually 27.120 MHz ) are passed through the small area between the die and the table where the weld takes place. This high frequency (radio frequency) field heats the plastic, which welds under pressure, taking the shape of the die. RF welding is fast and relatively easy to perform, produces limited degradation of the polymer even when welding thick layers, does not create fumes, requires a moderate amount of energy, and can produce water-, air-, and bacteria-proof welds. The welding parameters are welding power, heating and cooling times, and pressure, while temperature is generally not controlled directly. Auxiliary materials can also be used to solve some welding problems. This type of welding is used to connect polymer films in a variety of industries where a strong, consistent, leak-proof seal is required. In the fabrics industry, RF is most often used to weld PVC and polyurethane (PU) coated fabrics. Other materials commonly welded using this technology are nylon, PET, PEVA, EVA and some ABS plastics. Exercise caution when welding urethane, as it has been known to give off toxic cyanide gases when melted. 
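The dipole-reorientation heating described above can be estimated with the standard dielectric heating relation, p = 2πf·ε₀·ε″·E². The sketch below is illustrative only: the 27.12 MHz frequency comes from the text, while the loss factor and field strength are assumed example values, not measured data.

```python
import math

def dielectric_heating_density(freq_hz, loss_factor, e_field_v_per_m):
    """Volumetric heating power density p = 2*pi*f*eps0*eps_r'' * E^2, in W/m^3."""
    eps0 = 8.854e-12  # vacuum permittivity, F/m
    return 2 * math.pi * freq_hz * eps0 * loss_factor * e_field_v_per_m ** 2

# Illustrative values: the 27.12 MHz welding frequency from the text,
# an ASSUMED loss factor of 0.15 (plasticised-PVC-like), and an
# ASSUMED RMS field of 2e6 V/m across the weld zone.
p = dielectric_heating_density(27.12e6, 0.15, 2e6)
```

The linear dependence on frequency is why the high 27.12 MHz ISM frequency is used: the same field heats the joint far faster than it would at lower frequencies.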
When an electrical insulator, like a plastic, is embedded with a material having high electrical conductivity, like metals or carbon fibers, induction welding can be performed. The welding apparatus contains an induction coil that is energised with a radio-frequency electric current. This generates an electromagnetic field that acts on either an electrically conductive or a ferromagnetic workpiece. In an electrically conductive workpiece, the main heating effect is resistive heating, which is due to induced currents called eddy currents . Induction welding of carbon fiber reinforced thermoplastic materials is a technology commonly used in, for instance, the aerospace industry. [ 5 ] In a ferromagnetic workpiece, plastics can be induction-welded by formulating them with metallic or ferromagnetic compounds, called susceptors . These susceptors absorb electromagnetic energy from an induction coil, become hot, and lose their heat energy to the surrounding material by thermal conduction. Injection welding is similar to extrusion welding, except that, using certain tips on the handheld welder, one can insert the tip into plastic defect holes of various sizes and patch them from the inside out. The advantage is that no access is needed to the rear of the defect hole. The alternative is a patch, but a patch cannot be sanded flush with the original surrounding plastic to the same thickness. PE and PP are most suitable for this type of process. The Drader injectiweld is an example of such a tool. In ultrasonic welding, high frequency (15 kHz to 40 kHz), low amplitude vibration is used to create heat by way of friction between the materials to be joined. The interface of the two parts is specially designed to concentrate the energy for maximum weld strength. Ultrasonic welding can be used on almost all plastic materials, and it is the fastest heat sealing technology available. 
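The 15–40 kHz frequencies quoted above set the physical scale of the ultrasonic tooling: horns are commonly tuned as half-wave resonators of length v/(2f). The figures below are assumptions for illustration (a titanium horn with an assumed longitudinal sound speed of about 4900 m/s), not values from the text.

```python
def half_wave_length_m(sound_speed_m_s, frequency_hz):
    """Length of a half-wave resonant horn: L = v / (2 f)."""
    return sound_speed_m_s / (2.0 * frequency_hz)

# ASSUMED sound speed for a titanium horn (~4900 m/s) at a 20 kHz weld frequency,
# giving a horn on the order of 12 cm long.
length = half_wave_length_m(4900.0, 20e3)
```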
In friction welding, the two parts to be assembled are rubbed together at a lower frequency (typically 100–300 Hz) and higher amplitude (typically 1 to 2 mm (0.039 to 0.079 in)) than ultrasonic welding. The friction caused by the motion combined with the clamping pressure between the two parts creates the heat which begins to melt the contact areas between the two parts. At this point, the plasticized materials begin to form layers that intertwine with one another, which therefore results in a strong weld. At the completion of the vibration motion, the parts remain held together until the weld joint cools and the melted plastic re-solidifies. The friction movement can be linear or orbital, and the joint design of the two parts has to allow this movement. Spin welding is a particular form of frictional welding. With this process, one component with a round weld joint is held stationary, while a mating component is rotated at high speed and pressed against the stationary component. The rotational friction between the two components generates heat. Once the joining surfaces reach a semi-molten state, the spinning component is stopped abruptly. Force on the two components is maintained until the weld joint cools and re-solidifies. This is a common way of producing low- and medium-duty plastic wheels, e.g., for toys, shopping carts, recycling bins, etc. This process is also used to weld various port openings into automotive under hood components. This technique requires one part to be transmissive to a laser beam and either the other part absorptive or a coating at the interface to be absorptive to the beam. The two parts are put under pressure while the laser beam moves along the joining line. The beam passes through the first part and is absorbed by the other one or the coating to generate enough heat to soften the interface creating a permanent weld. Semiconductor diode lasers are typically used in plastic welding. 
Wavelengths in the range of 808 nm to 980 nm can be used to join various plastic material combinations. Power levels from less than 1 W to 100 W are needed depending on the materials, thickness and desired process speed. [ citation needed ] Diode laser systems offer several advantages in the joining of plastic materials. [ citation needed ] Requirements for high strength joints include adequate transmission through the upper layer, absorption by the lower layer, materials compatibility (wetting), good joint design (clamping pressure, joint area), and lower power density. [ citation needed ] Some materials that can be joined include polypropylene , polycarbonate , acrylic , nylon , and ABS . [ citation needed ] Specific applications include sealing, welding, or joining of: catheter bags, medical containers, automobile remote control keys, heart pacemaker casings, syringe tamper evident joints, headlight or tail-light assemblies, pump housings, and cellular phone parts. [ citation needed ] New fiber laser technology allows for the output of longer laser wavelengths, with the best results typically around 2,000 nm, significantly longer than the 808 nm to 1064 nm of the diode lasers used for traditional laser plastic welding. [ citation needed ] Because these longer wavelengths are more readily absorbed by thermoplastics than the infrared radiation of traditional plastic welding, it is possible to weld two clear polymers without any colorants or absorbing additives. Common applications mostly fall in the medical industry, for devices like catheters and microfluidic devices. The heavy use of transparent plastics, especially flexible polymers like TPU, TPE and PVC, in the medical device industry makes transparent laser welding a natural fit. Also, the process requires no laser-absorbing additives or colorants, making testing and meeting biocompatibility requirements significantly easier. 
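The transmissive-upper / absorptive-lower requirement can be reasoned about with the Beer–Lambert law, T = exp(−αd). The absorption coefficients below are hypothetical placeholders chosen to illustrate the contrast, not material data from the text.

```python
import math

def transmitted_fraction(absorption_coeff_per_mm, thickness_mm):
    """Beer-Lambert transmission T = exp(-alpha * d)."""
    return math.exp(-absorption_coeff_per_mm * thickness_mm)

# A transmissive upper layer should pass most of the beam
# (ASSUMED weak absorption, 0.05 per mm, 2 mm thick) ...
upper = transmitted_fraction(0.05, 2.0)
# ... while the absorptive lower layer or interface coating should
# capture nearly all of it (ASSUMED strong absorption, 5.0 per mm).
lower = transmitted_fraction(5.0, 2.0)
```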
In solvent welding, a solvent is applied which can temporarily dissolve the polymer at room temperature. When this occurs, the polymer chains are free to move in the liquid and can mingle with other similarly dissolved chains in the other component. Given sufficient time, the solvent will permeate through the polymer and out into the environment, so that the chains lose their mobility. This leaves a solid mass of entangled polymer chains which constitutes a solvent weld. This technique is commonly used for connecting PVC and ABS pipe, as in household plumbing. The "gluing" together of plastic (polycarbonate, polystyrene or ABS) models is also a solvent welding process. Dichloromethane (methylene chloride) can solvent weld polycarbonate and polymethylmethacrylate . It is a primary ingredient in some solvent cements. [ 6 ] ABS plastic is typically welded with acetone based solvents which are often sold as paint thinners or in smaller containers as nail polish remover. [ citation needed ] Solvent welding is a common method in plastics fabrication and used by manufacturers of in-store displays, brochure holders, presentation cases and dust covers. Another popular use of solvents in the hobby segment is model building from injection molded kits for scale models of aircraft, ships and cars which predominantly use polystyrene plastic. In order to test plastic welds, there are several requirements for both the inspector as well as the test method. Furthermore, there are two different types of testing weld quality. These two types are destructive and non-destructive testing. Destructive testing serves to qualify and quantify the weld joint whereas nondestructive testing serves to identify anomalies, discontinuities, cracks, and/or crevices. As the names of these two tests implies, destructive testing will destroy the part that is being tested while nondestructive testing enables the test piece to be used afterwards. 
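The "given sufficient time" remark above can be made concrete with a characteristic diffusion-time estimate, t ≈ L²/D. Both numbers below are assumptions for illustration (a 1 mm joint and a solvent diffusivity of 10⁻¹² m²/s); real diffusivities vary by orders of magnitude with the polymer and solvent.

```python
def characteristic_diffusion_time_s(length_m, diffusivity_m2_s):
    """Order-of-magnitude time for solvent to diffuse a distance L: t ~ L^2 / D."""
    return length_m ** 2 / diffusivity_m2_s

# ASSUMED example: 1 mm wall, D = 1e-12 m^2/s -> 1e6 s, roughly 11.6 days,
# which is why solvent-welded joints keep gaining strength long after assembly.
t = characteristic_diffusion_time_s(1e-3, 1e-12)
```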
There are several methods available in each of these types. This section outlines some requirements for testing plastic welds, as well as the different destructive and non-destructive methods applicable to plastic welding, and goes over some of their advantages and disadvantages. Some standards, such as those of the American Welding Society (AWS), require the individuals conducting the inspection or test to have a certain level of qualification. For example, AWS G1.6 is the Specification for the Qualification of Plastic Welding Inspectors for Hot Gas, Hot Gas Extrusion, and Heated Tool Butt Thermoplastic Welds. This particular standard dictates that in order to inspect plastic welds, the inspector needs one of three qualification levels. These levels are the Associate Plastics Welding Inspector (APWI), Plastics Welding Inspector (PWI), and Senior Plastics Welding Inspector (SPWI). Each of these levels has different responsibilities. For example, the APWI has to have direct supervision of a PWI or SPWI in order to conduct an inspection or prepare a report. These three levels of certification also have different capability, education, and examination requirements, and the qualification must be maintained through renewal every 3 years. [ 7 ] The bend test uses a ram to bend the test coupon to a desired degree. This test setup is shown in Figure 2. A list of the minimum bend angles and ram displacements for different plastic materials can be found in the DVS Standards DVS2203-1 and DVS2203-5. Some of the ram speeds, bend angles, and displacement information from DVS2203-1 are shown in Table 1 and Table 2. One of the main advantages of the bend test is that it provides qualitative data for tensile, compressive, and shear strain. These results typically lead to a higher confidence level in the quality of the weld joint and process. In contrast, one of its disadvantages is that it requires multiple test pieces. 
It is typically recommended to use a minimum of 6 different test samples. Another disadvantage is that it does not provide specific values for evaluating the joint design. Moreover, large amounts of effort may need to go into preparing the part for testing. This could cause an increase in cost and schedule depending on the complexity of the part. Lastly, like all destructive tests, the part and/or weld seam is destroyed and cannot be used. [ 9 ] When conducting the tensile test, a test piece is pulled until it breaks. This test is quantitative and will provide the ultimate tensile strength and strain, as well as the energy to failure if extensometers are attached to the sample. Additionally, the results from a tensile test are not transferable to those of a creep test. [ 10 ] The rate at which the specimen is pulled depends on the material, and the shape of the specimen is also critical. [ 9 ] DVS2203-5 and AWS G1.6 provide these details. Examples of the shapes are shown in Figure 3 through Figure 5, and the testing speed per material is shown in Table 3. One advantage of the tensile test is that it provides quantitative data on the weld for both the weld seam and the base material. Additionally, the tensile test is easy to conduct. A major disadvantage of this testing is the amount of preparation required to conduct the test. Another disadvantage is that it does not provide the long-term weld performance. Additionally, since this is also a type of destructive test, the part is destroyed in order to collect the data. [ 9 ] Also known as the tensile impact test, the impact test uses a specimen that is clamped into a pendulum. The test specimen looks like the one shown in Figure 4. The pendulum swings down and strikes the specimen against an anvil, breaking the specimen. This test enables the impact energy to be determined for the weld seam and base material. 
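As a minimal sketch of how tensile-test output is reduced to numbers: strength is the maximum force divided by the specimen cross-section, and weld quality is often summarised as the ratio of weld strength to base-material strength (a "weld factor"). The dimensions and forces below are invented example values, not figures from DVS2203-5 or AWS G1.6.

```python
def tensile_strength_mpa(max_force_n, width_mm, thickness_mm):
    """Ultimate tensile strength = F_max / A; N/mm^2 is numerically MPa."""
    return max_force_n / (width_mm * thickness_mm)

def weld_factor(weld_strength_mpa, base_strength_mpa):
    """Ratio of welded-specimen strength to parent-material strength."""
    return weld_strength_mpa / base_strength_mpa

# HYPOTHETICAL specimens with a 15 mm x 4 mm cross-section:
base = tensile_strength_mpa(1500.0, 15.0, 4.0)  # 25.0 MPa
weld = tensile_strength_mpa(1200.0, 15.0, 4.0)  # 20.0 MPa
f = weld_factor(weld, base)                     # 0.8
```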
Additionally, the permanent fracture elongation can be calculated by measuring the post-test specimen length. The main advantage of this test is that quantitative data is obtained; another is that it is easy to set up. The disadvantages are that it, too, can require a great deal of preparation, and, like the tensile test, it does not determine long-term weld performance and the part is destroyed. [ 9 ] There are two types of creep tests: the tensile creep test and the creep rupture test. Both creep tests look at the long-term weld performance of the test specimen. These tests are typically conducted in a medium at a constant temperature and constant stress. This test requires a minimum of 6 specimens in order to obtain enough data to conduct a statistical analysis. [ 11 ] This test is advantageous in that it provides quantitative data on long-term weld performance; however, it has its disadvantages as well. A lot of effort needs to go into preparing the samples and recording exactly where each specimen came from and the removal method used. This is critical because how the specimen is removed from the host part can greatly influence the test results. Also, there has to be strict control of the test environment, since a deviation in the medium's temperature can cause the creep rupture time to vary drastically; in some cases, a temperature change of 1 degree Celsius affected the creep rupture time by 13%. [ 9 ] Lastly, this test is again a destructive test, so the host part will be destroyed by conducting it. Visual inspection, just as the name implies, is a visual investigation of the weldment. The inspector typically looks for visual indications such as discolorations, weld defects, discontinuities, porosity, notches, scratches, etc. Typically, visual inspection is broken down into different categories or groups for the qualifying inspection criteria. 
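To illustrate why temperature control matters in creep testing, suppose the ~13% per degree Celsius figure cited above compounded over a temperature error. The function below is a back-of-the-envelope illustration built on that single quoted number, not a creep model.

```python
def rupture_time_change_factor(temp_error_c, per_degree_effect=0.13):
    """Compounded change in creep rupture time for a temperature error.

    ASSUMES (for illustration only) that the ~13%-per-degree figure
    quoted in the text compounds multiplicatively per degree.
    """
    return (1.0 + per_degree_effect) ** temp_error_c

# A 3 degree C error would then shift the rupture time by about 44%.
factor = rupture_time_change_factor(3)
```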
These groupings may vary among standards, and each group has a certain level of imperfections that it considers acceptable. There are 5 tables and a chart found in DVS Standard DVS2202-1 that show different types of defects found by visual examination and their permissible acceptance criteria. [ 12 ] Visual inspection is very advantageous in that it is quick, easy, and inexpensive, and requires very simple tools and gauges to conduct. Because it is so quick, a weld is typically required to pass visual inspection before any additional nondestructive test is conducted on the specimen. In contrast, the inspection needs to be completed by someone who has a lot of experience and skill, and this type of test will not give any data on the quality of the weld seam. Because of the low cost, if a part is suspected to have issues, follow-on testing can be conducted without much initial investment. [ 9 ] [ 13 ] X-ray testing of plastics is similar to that of metal weldments, but uses much lower radiation intensity because plastics have a lower density than metals. X-ray testing is used to find imperfections that are below the surface, including porosity, solid inclusions, voids, crazes, etc. The x-ray machine transmits radiation through the tested object onto a film or camera, which produces an image. The varying densities of the object show up as different shades in the image, thus showing where the defects are located. One of the advantages of X-ray testing is that it provides a way to quickly show flaws both on the surface and inside the weld joint. Additionally, X-rays can be used on a wide range of materials, and they can be used to create a record for the future. One of the disadvantages of X-ray testing is that it is costly and labor-intensive. Another is that it cannot be used to evaluate weld seam quality or to optimize the process parameters. 
Additionally, if the discontinuity is not aligned properly with the radiation beam, it can be difficult to detect. A fourth disadvantage is that access to both sides of the component being measured is required. Lastly, it presents a health risk due to the radiation that is transmitted during the X-ray process. [ 9 ] [ 13 ] Ultrasonic testing utilizes high frequency sound waves passing through the weld. The waves are reflected or refracted if they hit an indication. A reflected or refracted wave takes a different amount of time to travel from the transmitter to the receiver than it would if no indication were present, and this change in time is how flaws are detected. The first advantage that ultrasonic testing provides is that it allows for relatively quick detection of flaws inside the weld joint. This test method can also detect flaws deep inside the part. Additionally, it can be conducted with access from only one side of the part. In contrast, there are several disadvantages to using ultrasonic testing. The first is that it cannot be used to optimize the process parameters or evaluate the seam quality of the weld. Secondly, it is costly and labor-intensive, and it requires experienced technicians to conduct the test. Lastly, there are material limitations with plastics due to transmission limitations of the ultrasonic waves through some of the plastics. [ 9 ] [ 13 ] The image in Figure 6 shows an example of ultrasonic testing. High voltage testing is also known as spark testing. This type of testing utilizes an electrically conductive medium to coat the weld. After the weld is coated, it is exposed to a high voltage probe. This test indicates a leak in the weld when an arc is observed through the weld. This type of testing is advantageous in that it allows for quick detection of flaws inside the weld joint and that access is needed to only one side of the weld. 
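The time-difference principle described above is the basis of pulse-echo flaw location: an echo returning after time t from a reflector at depth d satisfies d = v·t/2, since the pulse travels down and back. The sound speed below is an assumed value for a typical thermoplastic, not a figure from the text.

```python
def flaw_depth_m(sound_speed_m_s, round_trip_time_s):
    """Pulse-echo depth: the wave travels to the flaw and back, so d = v*t/2."""
    return sound_speed_m_s * round_trip_time_s / 2.0

# ASSUMED longitudinal sound speed in polyethylene of ~2300 m/s;
# an echo arriving after 10 microseconds implies a flaw ~11.5 mm deep.
depth = flaw_depth_m(2300.0, 10e-6)
```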
One disadvantage of this type of testing is that there is not a way to evaluate the weld seam quality. Additionally, the weld has to be coated with a conductive material. [ 9 ] Leak-tightness testing, or leak testing, utilizes either liquid or gas to pressurize a part. This type of testing is typically conducted on tubes, containers, and vessels. Another way to leak-test one of these structures is to apply a vacuum to it. One of the advantages is that it is a quick, simple way to detect weld flaws. Additionally, it can be used on multiple materials and part shapes. On the other hand, it has a few disadvantages. Firstly, there is not a way to evaluate the weld seam quality. Secondly, it has an explosion hazard associated with it if over-pressurization occurs during testing. Lastly, it is limited to tubular structures. [ 9 ]
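One common way to quantify the pressurization approach is a pressure-decay measurement, where the leak rate is Q = V·ΔP/Δt. The volume, pressure drop, and interval below are invented illustration values, not data from the text.

```python
def pressure_decay_leak_rate(volume_l, pressure_drop_mbar, interval_s):
    """Leak rate in mbar*L/s from a sealed-part pressure-decay measurement."""
    return volume_l * pressure_drop_mbar / interval_s

# HYPOTHETICAL example: a 2 L vessel losing 5 mbar over 100 s
# leaks at 0.1 mbar*L/s.
q = pressure_decay_leak_rate(2.0, 5.0, 100.0)
```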
https://en.wikipedia.org/wiki/Plastic_welding
In physics and materials science , plasticity (also known as plastic deformation ) is the ability of a solid material to undergo permanent deformation , a non-reversible change of shape in response to applied forces. [ 1 ] [ 2 ] For example, a solid piece of metal being bent or pounded into a new shape displays plasticity as permanent changes occur within the material itself. In engineering, the transition from elastic behavior to plastic behavior is known as yielding . Plastic deformation is observed in most materials, particularly metals , soils , rocks , concrete , and foams . [ 3 ] [ 4 ] [ 5 ] [ 6 ] However, the physical mechanisms that cause plastic deformation can vary widely. At a crystalline scale, plasticity in metals is usually a consequence of dislocations . Such defects are relatively rare in most crystalline materials, but are numerous in some and part of their crystal structure; in such cases, plastic crystallinity can result. In brittle materials such as rock, concrete and bone, plasticity is caused predominantly by slip at microcracks . In cellular materials such as liquid foams or biological tissues , plasticity is mainly a consequence of bubble or cell rearrangements, notably T1 processes . For many ductile metals, tensile loading applied to a sample will cause it to behave in an elastic manner. Each increment of load is accompanied by a proportional increment in extension. When the load is removed, the piece returns to its original size. However, once the load exceeds a threshold – the yield strength – the extension increases more rapidly than in the elastic region; now when the load is removed, some degree of extension will remain. Elastic deformation , however, is an approximation and its quality depends on the time frame considered and loading speed. If, as indicated in the graph opposite, the deformation includes elastic deformation, it is also often referred to as "elasto-plastic deformation" or "elastic-plastic deformation". 
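The loading behavior just described, proportional extension up to yield followed by a permanent set after unloading, can be sketched with the simplest material idealization, an elastic-perfectly-plastic model. The modulus and yield strength used in the comments are generic illustrative values, not properties of any particular material.

```python
def stress(strain, youngs_modulus, yield_strength):
    """Elastic-perfectly-plastic response under monotonic uniaxial loading:
    stress rises linearly with strain, then caps at the yield strength."""
    return min(youngs_modulus * strain, yield_strength)

def residual_strain(total_strain, youngs_modulus, yield_strength):
    """Permanent strain remaining after unload: the elastic part
    (yield_strength / E) springs back; anything beyond it stays."""
    if youngs_modulus * total_strain < yield_strength:
        return 0.0  # still elastic: the piece returns to its original size
    return total_strain - yield_strength / youngs_modulus

# ILLUSTRATIVE values: E = 200 GPa (in MPa) and a 250 MPa yield strength.
# Below yield, no permanent set; beyond yield, some extension remains.
elastic_case = residual_strain(0.0005, 200e3, 250.0)  # 0.0
plastic_case = residual_strain(0.002, 200e3, 250.0)
```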
Perfect plasticity is a property of materials to undergo irreversible deformation without any increase in stresses or loads. Plastic materials that have been hardened by prior deformation, such as cold forming , may need increasingly higher stresses to deform further. Generally, plastic deformation is also dependent on the deformation speed, i.e. higher stresses usually have to be applied to increase the rate of deformation. Such materials are said to deform visco-plastically . The plasticity of a material is directly proportional to the ductility and malleability of the material. Plasticity in a crystal of pure metal is primarily caused by two modes of deformation in the crystal lattice: slip and twinning. Slip is a shear deformation which moves the atoms through many interatomic distances relative to their initial positions. Twinning is the plastic deformation which takes place along two planes due to a set of forces applied to a given metal piece. Most metals show more plasticity when hot than when cold. Lead shows sufficient plasticity at room temperature, while cast iron does not possess sufficient plasticity for any forging operation even when hot. This property is of importance in forming, shaping and extruding operations on metals. Most metals are rendered plastic by heating and hence shaped hot. Crystalline materials contain uniform planes of atoms organized with long-range order. Planes may slip past each other along their close-packed directions, as is shown on the slip systems page. The result is a permanent change of shape within the crystal and plastic deformation. The presence of dislocations increases the likelihood of planes slipping. On the nanoscale the primary plastic deformation in simple face-centered cubic metals is reversible, as long as there is no material transport in the form of cross-slip . [ 7 ] Shape-memory alloys such as Nitinol wire also exhibit a reversible form of plasticity which is more properly called pseudoelasticity . 
The presence of other defects within a crystal may entangle dislocations or otherwise prevent them from gliding. When this happens, plasticity is localized to particular regions in the material. For crystals, these regions of localized plasticity are called shear bands . Microplasticity is a local phenomenon in metals. It occurs for stress values where the metal is globally in the elastic domain while some local areas are in the plastic domain. [ 8 ] In amorphous materials, the discussion of "dislocations" is inapplicable, since the entire material lacks long range order. These materials can still undergo plastic deformation. Since amorphous materials, like polymers, are not well-ordered, they contain a large amount of free volume, or wasted space. Pulling these materials in tension opens up these regions and can give materials a hazy appearance. This haziness is the result of crazing , where fibrils are formed within the material in regions of high hydrostatic stress . The material may go from an ordered appearance to a "crazy" pattern of strain and stretch marks. These materials plastically deform when the bending moment exceeds the fully plastic moment. This applies to open cell foams where the bending moment is exerted on the cell walls. The foams can be made of any material with a plastic yield point which includes rigid polymers and metals. This method of modeling the foam as beams is only valid if the ratio of the density of the foam to the density of the matter is less than 0.3. This is because beams yield axially instead of bending. In closed cell foams, the yield strength is increased if the material is under tension because of the membrane that spans the face of the cells. Soils, particularly clays, display a significant amount of inelasticity under load. The causes of plasticity in soils can be quite complex and are strongly dependent on the microstructure , chemical composition, and water content. 
Plastic behavior in soils is caused primarily by the rearrangement of clusters of adjacent grains. Inelastic deformations of rocks and concrete are primarily caused by the formation of microcracks and sliding motions relative to these cracks. At high temperatures and pressures, plastic behavior can also be affected by the motion of dislocations in individual grains in the microstructure. [ 9 ] Time-independent plastic flow in both single crystals and polycrystals is defined by a critical/maximum resolved shear stress ( τ CRSS ), initiating dislocation migration along parallel slip planes of a single slip system, thereby defining the transition from elastic to plastic deformation behavior in crystalline materials. The critical resolved shear stress for single crystals is defined by Schmid’s law τ CRSS =σ y /m, where σ y is the yield strength of the single crystal and m is the Schmid factor. The Schmid factor comprises two variables λ and φ, defining the angle between the slip plane direction and the tensile force applied, and the angle between the slip plane normal and the tensile force applied, respectively. Notably, because m > 1, σ y > τ CRSS . There are three characteristic regions of the critical resolved shear stress as a function of temperature. In the low temperature region 1 ( T ≤ 0.25 T m ), the strain rate must be high to achieve high τ CRSS which is required to initiate dislocation glide and equivalently plastic flow. In region 1, the critical resolved shear stress has two components: athermal ( τ a ) and thermal ( τ *) shear stresses, arising from the stress required to move dislocations in the presence of other dislocations, and the resistance of point defect obstacles to dislocation migration, respectively. At T = T *, the moderate temperature region 2 (0.25 T m < T < 0.7 T m ) is defined, where the thermal shear stress component τ * → 0, representing the elimination of point defect impedance to dislocation migration. 
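Schmid's law as written above can be checked numerically: with φ the angle between the slip-plane normal and the load and λ the angle between the slip direction and the load, the resolved shear stress is σ·cos φ·cos λ, so the text's m corresponds to 1/(cos φ·cos λ) > 1, which is why σ_y > τ_CRSS. The numbers below are an illustrative single-crystal case, not data from the article.

```python
import math

def resolved_shear_stress(normal_stress, phi_deg, lam_deg):
    """Schmid's law: tau = sigma * cos(phi) * cos(lambda)."""
    return (normal_stress
            * math.cos(math.radians(phi_deg))
            * math.cos(math.radians(lam_deg)))

def reciprocal_schmid_factor(phi_deg, lam_deg):
    """The text's m, defined so that tau_CRSS = sigma_y / m."""
    return 1.0 / (math.cos(math.radians(phi_deg)) * math.cos(math.radians(lam_deg)))

# The most favourably oriented slip system has phi = lambda = 45 degrees,
# giving m = 2: a 100 MPa tensile yield stress resolves to 50 MPa of shear.
tau = resolved_shear_stress(100.0, 45.0, 45.0)
```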
Thus the temperature-independent critical resolved shear stress τ CRSS = τ a remains so until region 3 is defined. Notably, in region 2 moderate-temperature time-dependent plastic deformation (creep) mechanisms such as solute drag should be considered. Furthermore, in the high temperature region 3 ( T ≥ 0.7 T m ) the strain rate ε̇ can be low, contributing to low τ CRSS ; however, plastic flow will still occur due to thermally activated high temperature time-dependent plastic deformation mechanisms such as Nabarro–Herring (NH) and Coble diffusional flow through the lattice and along the single crystal surfaces, respectively, as well as dislocation climb-glide creep. During the easy glide stage 1, the work hardening rate, defined by the change in shear stress with respect to shear strain ( dτ / dγ ), is low, representative of the small amount of applied shear stress necessary to induce a large amount of shear strain. Facile dislocation glide and corresponding flow are attributed to dislocation migration along parallel slip planes only (i.e. one slip system). Moderate impedance to dislocation migration along parallel slip planes is exhibited according to the weak stress field interactions between these dislocations, which heighten with smaller interplanar spacing. Overall, these migrating dislocations within a single slip system act as weak obstacles to flow, and a modest rise in stress is observed in comparison to the yield stress. During the linear hardening stage 2 of flow, the work hardening rate becomes high as considerable stress is required to overcome the stress field interactions of dislocations migrating on non-parallel slip planes (i.e. multiple slip systems), which act as strong obstacles to flow. Much stress is required to drive continual dislocation migration for small strains. 
The shear flow stress is directly proportional to the square root of the dislocation density (τ flow ~ ρ ½ ), irrespective of the evolution of dislocation configurations, reflecting the dependence of hardening on the number of dislocations present. Regarding this evolution of dislocation configurations: at small strains, the dislocation arrangement is a random 3D array of intersecting lines. Moderate strains correspond to cellular dislocation structures with a heterogeneous dislocation distribution, featuring a large dislocation density at the cell boundaries and a small dislocation density within the cell interiors. At even larger strains the cellular dislocation structure shrinks until a minimum size is achieved. Finally, the work hardening rate becomes low again in the exhaustion/saturation stage 3 of plastic flow, as small shear stresses produce large shear strains. Notably, when multiple slip systems are oriented favorably with respect to the applied stress, the τ CRSS values for these systems may be similar, and yielding may occur by dislocation migration along multiple slip systems with non-parallel slip planes, so that stage 1 displays a work-hardening rate typically characteristic of stage 2. Lastly, the distinction between time-independent plastic deformation in body-centered cubic transition metals and in face-centered cubic metals is summarized below. Plasticity in polycrystals differs substantially from that in single crystals due to the presence of grain boundary (GB) planar defects, which act as very strong obstacles to plastic flow by impeding dislocation migration along the entire length of the activated slip plane(s). Hence, dislocations cannot pass from one grain to another across the grain boundary.
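The proportionality τ flow ~ ρ ½ noted above is commonly written as the Taylor hardening law, τ = τ 0 + αGb√ρ. A minimal sketch follows; the friction stress τ 0 , coefficient α, shear modulus G, and Burgers vector b are assumed, order-of-magnitude values typical of an fcc metal, not figures from the text.

```python
import math

def taylor_flow_stress(rho, tau0=10e6, alpha=0.3, G=45e9, b=0.25e-9):
    """Taylor hardening: tau = tau0 + alpha * G * b * sqrt(rho).
    rho: dislocation density [m^-2]; tau0: friction stress [Pa];
    alpha (dimensionless), G [Pa], and b [m] are assumed,
    order-of-magnitude illustrative values."""
    return tau0 + alpha * G * b * math.sqrt(rho)

# Flow stress rises with the square root of dislocation density,
# spanning annealed (~1e12 m^-2) to heavily worked (~1e16 m^-2) states:
for rho in (1e12, 1e14, 1e16):
    print(f"rho = {rho:.0e} m^-2  ->  tau = {taylor_flow_stress(rho)/1e6:.1f} MPa")
```

A hundredfold increase in dislocation density raises the hardening contribution only tenfold, consistent with the square-root scaling.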
The following sections explore specific GB requirements for extensive plastic deformation of polycrystals prior to fracture, as well as the influence of microscopic yielding within individual crystallites on macroscopic yielding of the polycrystal. The critical resolved shear stress for polycrystals is likewise defined by Schmid's law (τ CRSS = σ y /ṁ), where σ y is the yield strength of the polycrystal and ṁ is the weighted Schmid factor. The weighted Schmid factor reflects the least favorably oriented slip system among the most favorably oriented slip systems of the grains constituting the GB. The GB constraint for polycrystals can be explained by considering a grain boundary in the xz plane between two single crystals A and B of identical composition, structure, and slip systems, but misoriented with respect to each other. To ensure that voids do not form between individually deforming grains, the GB constraint for the bicrystal is as follows: ε xx A = ε xx B (the x-axial strain at the GB must be equivalent for A and B), ε zz A = ε zz B (the z-axial strain at the GB must be equivalent for A and B), and ε xz A = ε xz B (the xz shear strain along the xz GB plane must be equivalent for A and B). In addition, this GB constraint requires that five independent slip systems be activated in each crystallite constituting the GB. Notably, because independent slip systems are defined as slip planes on which dislocation migrations cannot be reproduced by any combination of dislocation migrations along other slip systems' planes, the number of geometric slip systems for a given crystal system (which, by definition, can be constructed from slip system combinations) is typically greater than the number of independent slip systems. Significantly, there is a maximum of five independent slip systems for each of the seven crystal systems; however, not all seven crystal systems reach this upper limit.
In fact, even within a given crystal system, the composition and Bravais lattice affect the number of independent slip systems (see the table below). In cases where the crystallites of a polycrystal do not possess five independent slip systems, the GB condition cannot be met, and thus the time-independent deformation of individual crystallites results in cracks and voids at the GBs of the polycrystal, and fracture soon follows. Hence, for a given composition and structure, a single crystal with fewer than five independent slip systems is more ductile (exhibiting a greater extent of plasticity) than its polycrystalline form. Although the two crystallites A and B discussed in the above section have identical slip systems, they are misoriented with respect to each other, and therefore misoriented with respect to the applied force. Thus, microscopic yielding within a crystallite interior may occur according to the rules governing single-crystal time-independent yielding. Eventually, the activated slip planes within the grain interiors will permit dislocation migration to the GB, where many dislocations then pile up as geometrically necessary dislocations. This pile-up corresponds to strain gradients across individual grains, as the dislocation density near the GB is greater than that in the grain interior, imposing a stress on the adjacent grain in contact. When considering the AB bicrystal as a whole, the most favorably oriented slip system in A will not be that in B, and hence τ A CRSS ≠ τ B CRSS . Paramount is the fact that macroscopic yielding of the bicrystal is delayed until the higher of the τ CRSS values of grains A and B is reached, in accordance with the GB constraint. Thus, for a given composition and structure, a polycrystal with five independent slip systems is stronger (yielding at a higher applied stress) than its single-crystalline form.
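The grain-boundary compatibility conditions quoted above (ε xx A = ε xx B , ε zz A = ε zz B , ε xz A = ε xz B for a boundary lying in the xz plane) can be expressed as a simple numerical check. The strain states below are made-up illustrations, not values from the text.

```python
def gb_compatible(eps_A, eps_B, tol=1e-9):
    """Check the compatibility conditions for a grain boundary lying
    in the xz plane: the in-plane components xx and zz and the shear
    component xz must match across the boundary, or voids would open.
    eps_A, eps_B: dicts of strain components for grains A and B."""
    return all(abs(eps_A[c] - eps_B[c]) < tol for c in ("xx", "zz", "xz"))

# Illustrative (made-up) strain states: out-of-plane components such
# as yy may differ between the grains without violating the constraint.
grain_A = {"xx": 0.010, "zz": -0.004, "xz": 0.002, "yy": -0.006}
grain_B = {"xx": 0.010, "zz": -0.004, "xz": 0.002, "yy": -0.001}
print(gb_compatible(grain_A, grain_B))  # True: no voids form at the GB
```

If any of the three in-plane components differed, the check would fail, corresponding to the cracking and voiding at grain boundaries described above for crystallites lacking five independent slip systems.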
Correspondingly, the work hardening rate will be higher for the polycrystal than for the single crystal, as more stress is required in the polycrystal to produce a given strain. Importantly, just as with the single-crystal flow stress, τ flow ~ ρ ½ , but the flow stress is also inversely proportional to the square root of the average grain diameter (τ flow ~ d -½ ). Therefore, the flow stress of a polycrystal, and hence the polycrystal's strength, increases as grain size decreases. The reason for this is that smaller grains have relatively fewer slip planes to be activated, corresponding to fewer dislocations migrating to the GBs, and therefore less stress induced on adjacent grains by dislocation pile-up. In addition, for a given volume of polycrystal, smaller grains present more strong-obstacle grain boundaries. These two factors explain why the onset of macroscopic flow in fine-grained polycrystals occurs at larger applied stresses than in coarse-grained polycrystals. There are several mathematical descriptions of plasticity. [ 12 ] One is deformation theory (see e.g. Hooke's law ), where the Cauchy stress tensor (of order d-1 in d dimensions) is a function of the strain tensor. Although this description is accurate when a small part of matter is subjected to increasing loading (such as strain loading), this theory cannot account for irreversibility. Ductile materials can sustain large plastic deformations without fracture . However, even ductile metals will fracture when the strain becomes large enough—this is a result of work hardening of the material, which causes it to become brittle . Heat treatment such as annealing can restore the ductility of a worked piece, so that shaping can continue. In 1934, Egon Orowan , Michael Polanyi and Geoffrey Ingram Taylor , roughly simultaneously, realized that the plastic deformation of ductile materials could be explained in terms of the theory of dislocations .
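The inverse square-root dependence of flow stress on grain diameter (τ flow ~ d -½ ) described earlier is the Hall–Petch relation, usually written σ y = σ 0 + k y d -½ . A sketch follows; the friction stress σ 0 and strengthening coefficient k y are assumed, order-of-magnitude values, not figures from the text.

```python
import math

def hall_petch(d, sigma0=25e6, k_y=0.6e6):
    """Hall-Petch relation: sigma_y = sigma0 + k_y / sqrt(d).
    d: average grain diameter [m]; sigma0 [Pa] (friction stress) and
    k_y [Pa*m^0.5] (strengthening coefficient) are assumed values."""
    return sigma0 + k_y / math.sqrt(d)

# Refining the grains from 100 um down to 1 um raises the yield stress,
# illustrating why fine-grained polycrystals flow at larger applied stresses:
for d in (100e-6, 10e-6, 1e-6):
    print(f"d = {d*1e6:6.1f} um  ->  sigma_y = {hall_petch(d)/1e6:.0f} MPa")
```

Halving the grain diameter multiplies the grain-size contribution by √2, so strengthening accelerates sharply at fine grain sizes.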
The mathematical theory of plasticity, flow plasticity theory , uses a set of non-linear, non-integrable equations to describe the changes in strain and stress with respect to a previous state under a small increase of deformation. If the stress exceeds a critical value, as mentioned above, the material will undergo plastic, or irreversible, deformation. This critical stress can be tensile or compressive. The Tresca and the von Mises criteria are commonly used to determine whether a material has yielded. However, these criteria have proved inadequate for a large range of materials, and several other yield criteria are also in widespread use. The Tresca criterion is based on the notion that when a material fails, it does so in shear, which is a relatively good assumption when considering metals. Given the principal stress state, we can use Mohr's circle to solve for the maximum shear stresses the material will experience and conclude that the material will fail if σ 1 − σ 3 ≥ σ 0 , where σ 1 is the maximum normal stress, σ 3 is the minimum normal stress, and σ 0 is the stress under which the material fails in uniaxial loading. A yield surface may be constructed, which provides a visual representation of this concept. Inside of the yield surface, deformation is elastic. On the surface, deformation is plastic. It is impossible for a material to have stress states outside its yield surface. The Huber–von Mises criterion [ 13 ] is based on the Tresca criterion but takes into account the assumption that hydrostatic stresses do not contribute to material failure. M. T. Huber was the first to propose the criterion of shear energy. [ 14 ] [ 15 ] Von Mises solves for an effective stress under uniaxial loading, subtracting out hydrostatic stresses, and states that all effective stresses greater than that which causes material failure in uniaxial loading will result in plastic deformation; the effective stress is σ v = √(½[(σ 1 − σ 2 )² + (σ 2 − σ 3 )² + (σ 3 − σ 1 )²]).
Again, a visual representation of the yield surface may be constructed using the above equation; it takes the shape of an ellipse. Inside the surface, materials undergo elastic deformation. Reaching the surface means the material undergoes plastic deformation.
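Both criteria discussed above can be sketched as functions of the principal stresses: Tresca compares the largest principal-stress difference σ 1 − σ 3 with the uniaxial failure stress σ 0 , while von Mises compares the effective stress σ v = √(½[(σ 1 − σ 2 )² + (σ 2 − σ 3 )² + (σ 3 − σ 1 )²]) with σ 0 . The numerical stresses used in the demonstration are arbitrary illustrative values.

```python
import math

def tresca_yields(s1, s2, s3, sigma0):
    """Tresca (maximum shear) criterion: yielding occurs when the largest
    principal-stress difference reaches the uniaxial yield stress sigma0."""
    smax, smin = max(s1, s2, s3), min(s1, s2, s3)
    return (smax - smin) >= sigma0

def von_mises_stress(s1, s2, s3):
    """Effective (von Mises) stress; hydrostatic components cancel out."""
    return math.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))

def von_mises_yields(s1, s2, s3, sigma0):
    """Von Mises criterion: yielding when the effective stress reaches sigma0."""
    return von_mises_stress(s1, s2, s3) >= sigma0

# A purely hydrostatic stress state has zero effective stress,
# so it never yields under von Mises:
print(von_mises_stress(100.0, 100.0, 100.0))   # 0.0
# Uniaxial tension exactly at sigma0 yields under both criteria:
print(tresca_yields(250.0, 0.0, 0.0, sigma0=250.0),
      von_mises_yields(250.0, 0.0, 0.0, sigma0=250.0))  # True True
```

For uniaxial loading the two criteria agree exactly; they differ most for pure shear, where Tresca is about 15% more conservative than von Mises.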
https://en.wikipedia.org/wiki/Plasticity_(physics)
A plasticolous lichenized fungus is a lichen that grows on plastic surfaces. [ 1 ] This behaviour was first observed in 1994, when foliicolous (leaf-dwelling) lichens were found growing on plastic tape, [ 2 ] but lichens have since been observed growing on artificial plastic leaves, plastic signs [ 3 ] and nylon nets. [ 4 ] The phenomenon of lichens growing on artificial substrates , including plastics, is part of a broader category sometimes referred to as "omnicolous" lichens – those capable of colonizing various manufactured materials including ropes, glass, leather, metals, concrete, and brick. [ 5 ] A study conducted in the Garhwal Himalaya region of India documented 19 species of lichens colonizing a 15-year-old nylon net house at an altitude of 2,700 m (8,900 ft). Of these, 12 species were reported for the first time as plasticolous lichen mycota from India, and 9 species were documented as plasticolous for the first time globally. The family Parmeliaceae dominated the findings with eight species, followed by Physciaceae with seven species. Other families represented included Candelariaceae , Chrysotrichaceae , Collemataceae , and Ramalinaceae . The genus Heterodermia was most prevalent, followed by Parmotrema and Phaeophyscia . [ 5 ]
https://en.wikipedia.org/wiki/Plasticolous_lichen
A plastid is a membrane-bound organelle found in the cells of plants , algae , and some other eukaryotic organisms. Plastids are considered to be descended from intracellular endosymbiotic cyanobacteria . [ 1 ] Examples of plastids include chloroplasts (used for photosynthesis ); chromoplasts (used for synthesis and storage of pigments); leucoplasts (non-pigmented plastids, some of which can differentiate ); and apicoplasts (non-photosynthetic plastids of apicomplexa derived from secondary endosymbiosis). A permanent primary endosymbiosis event occurred about 1.5 billion years ago in the Archaeplastida clade— land plants , red algae , green algae and glaucophytes —probably with a cyanobiont , a symbiotic cyanobacterium related to the genus Gloeomargarita . [ 2 ] [ 3 ] Another primary endosymbiosis event occurred later, between 140 and 90 million years ago, giving rise to the photosynthetic plastids of Paulinella amoeboids, derived from cyanobacteria of the genera Prochlorococcus and Synechococcus (the "PS-clade"). [ 4 ] [ 5 ] Secondary and tertiary endosymbiosis events have also occurred in a wide variety of organisms; and some organisms developed the capacity to sequester ingested plastids—a process known as kleptoplasty . A. F. W. Schimper [ 6 ] [ a ] was the first to name, describe, and provide a clear definition of plastids, which possess a double-stranded DNA molecule long thought to be circular in shape, like the circular chromosome of prokaryotic cells , though it may in fact adopt a linear shape. Plastids are sites for manufacturing and storing pigments and other important chemical compounds used by the cells of autotrophic eukaryotes . Some contain biological pigments, such as those used in photosynthesis or those that determine a cell's color. Plastids in organisms that have lost their photosynthetic properties remain highly useful for manufacturing molecules such as the isoprenoids .
[ 8 ] In land plants , the plastids that contain chlorophyll can perform photosynthesis , thereby creating internal chemical energy from external sunlight energy while capturing carbon from Earth's atmosphere and furnishing the atmosphere with life-giving oxygen. These chlorophyll-containing plastids are named chloroplasts (see top graphic). Other plastids can synthesize fatty acids and terpenes , which may be used to produce energy or as raw material to synthesize other molecules. For example, the plastids of epidermal cells manufacture the components of the tissue system known as the plant cuticle , including its epicuticular wax , from palmitic acid —which itself is synthesized in the chloroplasts of the mesophyll tissue . Plastids also serve to store various components, including starches , fats , and proteins . [ 9 ] All plastids are derived from proplastids, which are present in the meristematic regions of the plant. Proplastids and young chloroplasts typically divide by binary fission , but more mature chloroplasts also retain this capacity. Plant proplastids (undifferentiated plastids) may differentiate into several forms, depending upon which function they perform in the cell (see top graphic). They may develop into any of the following variants: [ 10 ] Leucoplasts differentiate into even more specialized plastids, such as: Depending on their morphology and target function, plastids can differentiate or redifferentiate between these and other forms. Each plastid creates multiple copies of its own unique genome, or plastome (from 'plastid genome'), which for a chlorophyll plastid (or chloroplast) is equivalent to a 'chloroplast genome', or 'chloroplast DNA'. [ 11 ] [ 12 ] The number of genome copies per plastid is variable, ranging from 1000 or more in rapidly dividing new cells , which contain only a few plastids, down to 100 or fewer in mature cells, which contain numerous plastids.
A plastome typically encodes transfer ribonucleic acids ( tRNAs ) and ribosomal ribonucleic acids ( rRNAs ), as well as proteins involved in photosynthesis and in plastid gene transcription and translation . But these proteins represent only a small fraction of the total set of proteins necessary to build and maintain any particular type of plastid. Nuclear genes (in the cell nucleus of the plant) encode the vast majority of plastid proteins, and the expression of nuclear and plastid genes is co-regulated to coordinate the development and differentiation of plastids. Many plastids, particularly those responsible for photosynthesis, possess numerous internal membrane layers. Plastid DNA exists as protein-DNA complexes localized within the plastid's inner envelope membrane ; these complexes are called 'plastid nucleoids '. Unlike the nucleus of a eukaryotic cell, a plastid nucleoid is not surrounded by a nuclear membrane. Each nucleoid region may contain more than 10 copies of the plastid DNA. Whereas the proplastid ( undifferentiated plastid ) contains a single nucleoid region located near its centre, the developing (or differentiating) plastid has many nucleoids localized at the periphery of the plastid and bound to the inner envelope membrane. During the development/ differentiation of proplastids into chloroplasts—and when plastids are differentiating from one type to another—nucleoids change in morphology, size, and location within the organelle. The remodelling of plastid nucleoids is believed to occur through modifications to the abundance and composition of nucleoid proteins. In normal plant cells, long thin protuberances called stromules sometimes form, extending from the plastid body into the cell cytosol and interconnecting several plastids. Proteins and smaller molecules can move around and through the stromules.
Comparatively, in the laboratory, most cultured cells—which are large compared to normal plant cells—produce very long and abundant stromules that extend to the cell periphery. In 2014, evidence was found of the possible loss of the plastid genome in Rafflesia lagascae , a non-photosynthetic parasitic flowering plant, and in Polytomella , a genus of non-photosynthetic green algae . Extensive searches for plastid genes in both taxa yielded no results, but the conclusion that their plastomes are entirely missing is still disputed. [ 13 ] Some scientists argue that plastid genome loss is unlikely, since even non-photosynthetic plastids contain genes necessary to complete various biosynthetic pathways, including heme biosynthesis. [ 13 ] [ 14 ] Even with the loss of the plastid genome in Rafflesiaceae , the plastids still occur there as "shells" without DNA content, [ 15 ] which is reminiscent of hydrogenosomes in various organisms. Plastid types in algae and protists include: The plastid of photosynthetic Paulinella species is often referred to as the 'cyanelle' or chromatophore, and is used in photosynthesis. [ 17 ] [ 18 ] It arose from a much more recent endosymbiotic event, in the range of 140–90 million years ago, which is the only other known primary endosymbiosis event of cyanobacteria. [ 19 ] [ 20 ] Etioplasts , amyloplasts and chromoplasts are plant-specific and do not occur in algae. [ citation needed ] Plastids in algae and hornworts may also differ from plant plastids in that they contain pyrenoids . [ 21 ] In reproducing, most plants inherit their plastids from only one parent. In general, angiosperms inherit plastids from the female gamete , whereas many gymnosperms inherit plastids from the male pollen . Algae also inherit plastids from just one parent. Thus the plastid DNA of the other parent is completely lost.
In normal intraspecific crossings—resulting in normal hybrids of one species—the inheritance of plastid DNA appears to be strictly uniparental; i.e., from the female. In interspecific hybridisations, however, inheritance is apparently more erratic. Although plastids are inherited mainly from the female in interspecific hybridisations, there are many reports of hybrids of flowering plants containing plastids from the male. Approximately 20% of angiosperms, including alfalfa ( Medicago sativa ), normally show biparental inheritance of plastids. [ 22 ] The plastid DNA of maize seedlings is subject to increasing damage as the seedlings develop. [ 23 ] The DNA damage is due to oxidative environments created by photo-oxidative reactions and photosynthetic / respiratory electron transfer . Some DNA molecules are repaired, but DNA with unrepaired damage is apparently degraded into non-functional fragments. DNA repair proteins are encoded by the cell's nuclear genome and then translocated to plastids, where they maintain genome stability/ integrity by repairing the plastid's DNA. [ 24 ] For example, in chloroplasts of the moss Physcomitrella patens , a protein employed in DNA mismatch repair (Msh1) interacts with proteins employed in recombinational repair ( RecA and RecG) to maintain plastid genome stability. [ 25 ] Plastids are thought to be descended from endosymbiotic cyanobacteria . The primary endosymbiotic event of the Archaeplastida is hypothesized to have occurred around 1.5 billion years ago [ 26 ] and enabled eukaryotes to carry out oxygenic photosynthesis . [ 27 ] Three evolutionary lineages have since emerged in the Archaeplastida, in which the plastids are named differently: chloroplasts in green algae and/or plants, rhodoplasts in red algae , and muroplasts in the glaucophytes. The plastids differ both in their pigmentation and in their ultrastructure.
For example, chloroplasts in plants and green algae have lost all phycobilisomes , the light-harvesting complexes found in cyanobacteria, red algae and glaucophytes, but instead contain stroma and grana thylakoids . The glaucocystophycean plastid—in contrast to chloroplasts and rhodoplasts—is still surrounded by the remains of the cyanobacterial cell wall. All these primary plastids are surrounded by two membranes. The plastid of photosynthetic Paulinella species is often referred to as the 'cyanelle' or chromatophore; it arose from a much more recent endosymbiotic event, about 90–140 million years ago, and is the only known primary endosymbiosis event of cyanobacteria outside of the Archaeplastida. [ 17 ] [ 18 ] This plastid belongs to the "PS-clade" (of the cyanobacteria genera Prochlorococcus and Synechococcus ), a sister clade to the plastids belonging to the Archaeplastida. [ 4 ] [ 5 ] In contrast to primary plastids, derived from primary endosymbiosis of a prokaryotic cyanobacterium, complex plastids originated by secondary endosymbiosis, in which a eukaryotic organism engulfed another eukaryotic organism that contained a primary plastid. [ 28 ] When a eukaryote engulfs a red or a green alga and retains the algal plastid, that plastid is typically surrounded by more than two membranes. In some cases these plastids may be reduced in their metabolic and/or photosynthetic capacity. Algae with complex plastids derived by secondary endosymbiosis of a red alga include the heterokonts , haptophytes , cryptomonads , and most dinoflagellates (= rhodoplasts). Those that endosymbiosed a green alga include the euglenids and chlorarachniophytes (= chloroplasts).
The Apicomplexa , a phylum of obligate parasitic alveolates including the causative agents of malaria ( Plasmodium spp.), toxoplasmosis ( Toxoplasma gondii ), and many other human or animal diseases, also harbor a complex plastid (although this organelle has been lost in some apicomplexans, such as Cryptosporidium parvum , which causes cryptosporidiosis ). The ' apicoplast ' is no longer capable of photosynthesis, but is an essential organelle and a promising target for antiparasitic drug development. Some dinoflagellates and sea slugs , in particular of the genus Elysia , take up algae as food and keep the plastids of the digested algae to profit from their photosynthesis; after a while, the plastids are also digested. This process is known as kleptoplasty , from the Greek kleptes ( κλέπτης ), thief. In 1977, J. M. Whatley proposed a plastid development cycle, according to which plastid development is not always unidirectional but is instead a complicated cyclic process. Proplastids are the precursors of the more differentiated forms of plastids, as shown in the diagram to the right. [ 29 ]
https://en.wikipedia.org/wiki/Plastid
A plastid is a membrane-bound organelle found in plants , algae and other eukaryotic organisms that contributes to the production of pigment molecules. Most plastids are photosynthetic, contributing to color production and to energy production or storage. There are many types of plastids in plants alone, but all plastids can be distinguished by the number of endosymbiotic events through which they arose. Currently three types of plastids are recognized: primary, secondary and tertiary. Endosymbiosis is reputed to have led to the evolution of the eukaryotic organisms of today, although the timeline is highly debated. [ 1 ] It is widely accepted within the scientific community that the first plastid derived from the engulfment of a cyanobacterial ancestor by a eukaryotic organism. [ 4 ] Evidence supporting this belief is found in many morphological similarities, such as the presence of two plasma membranes . The first membrane is thought to have belonged to the cyanobacterial ancestor. During phagocytosis , a vesicle engulfs a particle with its plasma membrane to allow safe import. When the cyanobacterium was engulfed, it avoided digestion, giving rise to the double membrane found in primary plastids. [ 4 ] However, in order to live in symbiosis, the eukaryotic cell that engulfed the cyanobacterium must provide proteins and metabolites to maintain the functions of the bacterium in exchange for energy. Thus, an engulfed cyanobacterium must give up some of its genetic material through endosymbiotic gene transfer to the eukaryote, a phenomenon thought to be extremely rare due to the "learned nature" of the interactions that must occur between the cells to allow for processes such as gene transfer, protein localization , excretion of highly reactive metabolites, and DNA repair . [ 1 ] This would mean a reduction in genome size for the cyanobacterium, but also an increase in cyanobacterial genes within the eukaryotic genome. The Synechocystis sp .
strain PCC6803 is a unicellular freshwater cyanobacterium with a 3.9 Mb genome encoding 3,725 genes. [ 5 ] By contrast, most plastids rarely exceed 200 protein-coding genes. [ 4 ] It has been proposed that the closest living relative of the ancestral engulfed cyanobacterium is Gloeomargarita lithophora . [ 6 ] [ 7 ] [ 8 ] Separately, about 90–140 million years ago, primary endosymbiosis happened again in the amoeboid Paulinella with a cyanobacterium in the genus Prochlorococcus . This independently evolved chloroplast is often called a chromatophore instead of a chloroplast. [ 9 ] A 2010 study sequenced the genome of a cyanobacterium living extracellularly in endosymbiosis with the water fern Azolla filiculoides . Endosymbiosis was supported by the fact that the cyanobacterium was unable to grow autonomously, and by the observation that the cyanobacterium is vertically transmitted between succeeding generations. After analysis of the cyanobacterial genome, the researchers found that over 30% of the genome was made up of pseudogenes . In addition, roughly 600 transposable elements were found within the genome. The pseudogenes included dnaA , DNA repair genes, and glycolysis and nutrient-uptake genes. dnaA is essential to the initiation of DNA replication in prokaryotic organisms; thus, Azolla filiculoides is thought to provide nutrients and transcription factors for DNA replication in exchange for fixed nitrogen, which is not readily available in water. Although the cyanobacterium had not been completely engulfed by the eukaryotic organism, the relationship is thought to demonstrate a precursor to endosymbiotic primary plastids. [ 10 ] Secondary endosymbiosis is the engulfment of an organism that has already undergone primary endosymbiosis. Thus, four plasma membranes are formed.
The first originating from the cyanobacterium, the second from the eukaryote that engulfed the cyanobacterium, and the third from the eukaryote that engulfed the primary endosymbiotic eukaryote. [ 11 ] Chloroplasts contain 16S rRNA and 23S rRNA ; 16S and 23S rRNA are, by definition, found only in prokaryotes. [ 12 ] Chloroplasts and mitochondria also replicate semi-autonomously, outside of the cell cycle replication system, via binary fission . [ 12 ] Consistent with the theory, the genome within the organelle decreased in size as genes were integrated into the nucleus. Chloroplast genomes encode 50–200 proteins, compared to the thousands in cyanobacteria. [ 13 ] Furthermore, in Arabidopsis, nearly 20% of the nuclear genome originates from cyanobacteria, the widely recognized origin of chloroplasts. [ 13 ] Recent studies have been able to identify the speed at which, and the amount of DNA with which, chloroplast genes incorporate themselves into the host genome. Using chloroplast transformation , genes encoding spectinomycin and kanamycin resistance were inserted into the DNA of chloroplasts found in tobacco plants. After subjecting the plants to spectinomycin and kanamycin selection , some plants began to tolerate spectinomycin and kanamycin. [ 13 ] Roughly 1 in every 5 million cells on the tobacco leaves highly expressed the spectinomycin and kanamycin resistance genes. [ 13 ] Using the cells expressing the resistances, the researchers were able to grow tobacco from these cells to maturity. Once mature, the plants were mated with wild-type plants, and 50% of the progeny expressed the spectinomycin and kanamycin resistance genes. Pollen was thought not to be able to transfer chloroplast DNA in tobacco (which later turned out not to be as true as was thought at the time), [ 14 ] leading to the belief that the genes had been incorporated into the tobacco nuclear genome.
Furthermore, 11 kb of integrated chloroplast DNA was introduced into the host genome, transferring more DNA than previously predicted, and at a faster rate than previously predicted. [ 13 ] Although successive endosymbiotic events result in an increase in the number of membranes, tertiary plastids can have three or four membranes. The most extensively studied tertiary plastids are found in dinoflagellates , where several independent tertiary endosymbiosis events have occurred. In the groups that contain a haptophyte plastid, these tertiary plastids are believed to have been derived from haptophytes (which themselves carry a red alga-derived secondary plastid), replacing the original secondary plastids. [ 15 ] Consistent with the previously described pattern of genome-size reduction and gene incorporation into the host genome, the tertiary plastid genome consists of about 14 genes. These genes are distributed among small minicircles containing 1–3 genes each. [ 16 ] These genomes are circular, like prokaryotic genomes. Further, they encode only atpA , atpB , petB , petD, psaA, psaB , psbA-E, psbI, and the 16S and 23S rRNAs. These genes encode vital proteins used in photosystems I and II, further indicating their cyanobacterial origin. Unusually, the three lineages that contain a haptophyte plastid each acquired their plastid independently. [ 17 ] "Dinotoms" ( Durinskia and Kryptoperidinium ) have plastids derived from diatoms . [ 18 ] [ 19 ] These are highly unusual among tertiary endosymbionts, as the symbiont is not reduced to a mere plastid: instead, it still has a DNA-containing nucleus, a large volume of cytoplasm, and even its own DNA-containing mitochondria. [ 20 ] [ 21 ] Two previously undescribed dinoflagellates ("MGD" and "TGD") contain a green algal endosymbiont that has a nucleus, most closely related to Pedinomonas . [ 22 ]
https://en.wikipedia.org/wiki/Plastid_evolution
Plastid terminal oxidase or plastoquinol terminal oxidase (PTOX) is an enzyme that resides on the thylakoid membranes of plant and algae chloroplasts and on the membranes of cyanobacteria . The enzyme was hypothesized to exist as a photosynthetic oxidase in 1982 and was verified by sequence similarity to the mitochondrial alternative oxidase (AOX). [ 1 ] The two oxidases evolved from a common ancestral protein in prokaryotes , and they are so functionally and structurally similar that a thylakoid-localized AOX can restore the function of a PTOX knockout. [ 2 ] Plastid terminal oxidase catalyzes the oxidation of the plastoquinone pool, which exerts a variety of effects on the development and functioning of plant chloroplasts . The enzyme is important for carotenoid biosynthesis during chloroplast biogenesis . In developing plastids , its activity prevents the over-reduction of the plastoquinone pool. Knockout plants for PTOX exhibit phenotypes of variegated leaves with white patches. Without the enzyme, the carotenoid synthesis pathway slows down due to the lack of oxidized plastoquinone with which to oxidize phytoene , a carotenoid intermediate. The colorless compound phytoene accumulates in the leaves, resulting in white patches of cells. [ 3 ] PTOX is also thought to determine the redox poise of the developing photosynthetic apparatus and without it, plants fail to assemble organized internal membrane structures in chloroplasts when exposed to high light during early development. [ 1 ] [ 4 ] Plants deficient in the IMMUTANS gene that encodes the oxidase are especially susceptible to photooxidative stress during early plastid development. The knockout plants exhibit a phenotype of variegated leaves with white patches that indicate a lack of pigmentation or photodamage. This effect is enhanced with increased light and temperature during plant development. 
The lack of plastid terminal oxidase indirectly causes photodamage during plastid development because protective carotenoids are not synthesized without the oxidase. [ 5 ] The enzyme is also thought to act as a safety valve for stress conditions in the photosynthetic apparatus. By providing an electron sink when the plastoquinone pool is over-reduced, the oxidase is thought to protect photosystem II from oxidative damage. Knockouts for Rubisco and photosystem II complexes, which would experience more photodamage than normal, exhibit an upregulation of plastid terminal oxidase. [ 6 ] This effect is not universal because it requires plants to have additional PTOX regulation mechanisms. While many studies agree with the stress-protective role of the enzyme, one study showed that overexpression of PTOX increases the production of reactive oxygen species and causes more photodamage than normal. This finding suggests that an efficient antioxidant system is required for the oxidase to function as a safety valve for stress conditions and that it is more important during chloroplast biogenesis than in the regular functioning of the chloroplast. [ 7 ] The best-established function of plastid terminal oxidase in developed chloroplasts is its role in chlororespiration . In this process, NADPH dehydrogenase (NDH) reduces the quinone pool and the terminal oxidase oxidizes it, serving the same function as cytochrome c oxidase in mitochondrial electron transport . In Chlamydomonas , there are two copies of the gene for the oxidase. PTOX2 significantly contributes to the flux of electrons through chlororespiration in the dark. [ 8 ] There is also evidence from experiments with tobacco that it functions in plant chlororespiration as well. [ 9 ] In fully developed chloroplasts, prolonged exposure to light increases the activity of the oxidase.
Because the enzyme acts at the plastoquinone pool between photosystem II and photosystem I , it may play a role in controlling electron flow through photosynthesis by acting as an alternative electron sink. Similar to its role in carotenoid synthesis, its oxidase activity may prevent the over-reduction of photosystem I electron acceptors and damage by photoinhibition. A recent analysis of electron flux through the photosynthetic pathway shows that even when activated, the electron flux that plastid terminal oxidase diverts is two orders of magnitude smaller than the total flux through photosynthetic electron transport . [ 10 ] This suggests that the protein may play less of a role than previously thought in relieving oxidative stress in photosynthesis. Plastid terminal oxidase is an integral membrane protein , or more specifically, an integral monotopic protein , and is bound to the thylakoid membrane facing the stroma . Based on sequence homology, the enzyme is predicted to contain four alpha helix domains that encapsulate a di-iron center. The two iron atoms are ligated by six essential conserved histidine and glutamate residues – Glu136, Glu175, His171, Glu227, Glu296, and His299. [ 11 ] The predicted structure is similar to that of the alternative oxidase , with an additional Exon 8 domain that is required for the plastid oxidase's activity and stability. The enzyme is anchored to the membrane by a short fifth alpha helix that contains a Tyr212 residue hypothesized to be involved in substrate binding. [ 12 ] The oxidase catalyzes the transfer of four electrons from reduced plastoquinone to molecular oxygen to form water . The net reaction is: 2 QH 2 + O 2 → 2 Q + 2 H 2 O. Analysis of substrate specificity revealed that the enzyme almost exclusively catalyzes the oxidation of plastoquinol over other quinols such as ubiquinol and duroquinol .
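As a quick sanity check, the net reaction above can be verified to balance atom by atom. The sketch below is purely illustrative; the plastoquinone head group is treated as an opaque unit "Q", which is a simplification.

```python
# Illustrative check that the PTOX net reaction is balanced:
#   2 QH2 + O2 -> 2 Q + 2 H2O
# "Q" stands for the plastoquinone core and is treated as an opaque unit.
from collections import Counter

def side(species):
    """Sum atom counts over (coefficient, composition) pairs."""
    total = Counter()
    for coeff, comp in species:
        for atom, n in comp.items():
            total[atom] += coeff * n
    return total

QH2 = {"Q": 1, "H": 2}   # plastoquinol = quinone core + 2 H
Q = {"Q": 1}             # oxidized plastoquinone
O2 = {"O": 2}
H2O = {"H": 2, "O": 1}

reactants = side([(2, QH2), (1, O2)])
products = side([(2, Q), (2, H2O)])

assert reactants == products  # atoms balance on both sides
# Four electrons move overall: each QH2 gives up two, O2 accepts four,
# matching the four-electron transfer described above.
```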
Additionally, iron is essential for the catalytic function of the enzyme and cannot be substituted by another metal cation like Cu 2+ , Zn 2+ , or Mn 2+ at the catalytic center. [ 13 ] It is unlikely that four electrons could be transferred at once in a single iron cluster, so all of the proposed mechanisms involve two separate two-electron transfers from reduced plastoquinone to the di-iron center. In the first step common to all proposed mechanisms, one plastoquinone is oxidized and both irons are reduced from iron(III) to iron(II). Four different mechanisms are proposed for the next step, oxygen capture. One mechanism proposes a peroxide intermediate, after which one oxygen atom is used to create water and another is left bound in a diferryl configuration. Upon one more plastoquinone oxidation, a second water molecule is formed and the irons return to a +3 oxidation state. The other mechanisms involve the formation of Fe(III)-OH or Fe(IV)-OH and a tyrosine radical. [ 14 ] These radical-based mechanisms could explain why over-expression of the PTOX gene causes increased generation of reactive oxygen species . The enzyme is present in organisms capable of oxygenic photosynthesis , which includes plants , algae , and cyanobacteria . Plastid terminal oxidase and alternative oxidase are thought to have originated from a common ancestral di-iron carboxylate protein. Oxygen reductase activity was likely an ancient mechanism to scavenge oxygen in the early transition from an anaerobic to aerobic world. The plastid oxidase first evolved in ancient cyanobacteria and the alternative oxidase in Pseudomonadota before eukaryotic evolution and endosymbiosis events. Through endosymbiosis, the plastid oxidase was vertically inherited by eukaryotes that evolved into plants and algae . Sequenced genomes of various plant and algae species show that the amino acid sequence is more than 25% conserved, which is a significant amount of conservation for an oxidase.
This sequence conservation further supports the theory that both the alternative and plastid oxidases evolved before endosymbiosis and did not significantly change through eukaryote evolution. [ 15 ] There also exist PTOX cyanophages that contain copies of the gene for the plastid oxidase. They are known to act as viral vectors for movement of the gene between cyanobacterial species. Some evidence suggests that the phages may use the oxidase to influence photosynthetic electron flow to produce more ATP and less NADPH because viral synthesis utilizes more ATP. [ 1 ]
https://en.wikipedia.org/wiki/Plastid_terminal_oxidase
A plastivore is an organism capable of degrading and metabolising plastic . [ 1 ] [ 2 ] [ 3 ] [ 4 ] While plastic is normally thought of as non-biodegradable , a variety of bacteria, fungi and insects have been found to degrade it. Plastivores are "organisms that use plastic as their primary carbon and energy source". [ 3 ] This does not necessarily mean being able to fulfill all biological needs from plastic alone. For example, mealworms fed only on plastic show very little weight gain, unlike mealworms fed on a normal diet of bran . [ 5 ] This is due to plastic lacking the water and nutrients needed for growth. [ 5 ] Plastic-fed mealworms can still derive energy from their diet, so they do not lose weight like starved mealworms do. [ 5 ] For both bacterial and fungal plastivores, the first step is adhesion of spores to the plastic surface via hydrophobic interactions. [ 6 ] Bacterial plastivores, when cultured on plastic, form biofilms on the surface as the second step. [ 7 ] [ 8 ] [ 9 ] Using enzymes , they increase the roughness of the surface and oxidize the plastic. [ 7 ] [ 8 ] [ 9 ] Oxidation forms oxygenated groups such as carbonyl groups , used by the bacteria for carbon and energy, and also converts the plastic into smaller molecules ( depolymerization ). [ 7 ] [ 8 ] For fungal plastivores, the second step is growth of mycelia (root-like structures of fungi, composed of thread-like hyphae ) on the surface, while the third step is secretion of enzymes. [ 6 ] Both the enzymes and the mechanical force produced by fungal hyphae degrade the plastic. [ 6 ] The same basic steps of oxidation and depolymerization also occur in insect plastivores. [ 10 ] For insects, the bacteria in their guts play a role in digesting plastic. In mealworms, inhibiting these bacteria by giving antibiotics removes the ability to digest polystyrene, but low-density polyethylene can still be digested to an extent.
[ 9 ] [ 10 ] The insects themselves also play a role: saliva of waxworms contains enzymes that oxidize and depolymerize polyethylene. [ 11 ] The following is not an exhaustive list. Plastivorous activity seems to be quite common in nature, with a 2011 sampling of endophytic fungi in the Amazon finding that almost half of the fungi showed some activity. [ 12 ] The plastic pollution in the oceans supports many species of bacteria. The alkaliphilic bacteria Bacillus pseudofirmus and Salipaludibacillus agaradhaerens can degrade low-density polyethylene (LDPE). These bacteria can degrade LDPE on their own but work more quickly as a consortium of both species, and degradation is faster still when iron oxide nanoparticles are added. [ 7 ] Exiguobacterium sibiricum and E. undae , isolated from a wetland in India, can degrade polystyrene. [ 8 ] Similarly, Exiguobacterium sp. strain YT2 has been isolated from the gut of mealworms, which are themselves plastivores, and can degrade polystyrene on its own, though less quickly than mealworms. [ 9 ] Acinetobacter sp. AnTc-1, isolated from the gut of plastivorous red flour beetle larvae, can likewise degrade polystyrene on its own. [ 13 ] Ideonella sakaiensis and Comamonas testosteroni can degrade polyethylene terephthalate . [ 14 ] [ 15 ] Aspergillus tubingensis and several isolates of Pestalotiopsis are capable of degrading polyurethane. [ 6 ] [ 12 ] Polycarbonate , the main material in CDs , is attacked by a range of fungi: Bjerkandera adusta [ 16 ] (initially misidentified as Geotrichum sp. [ 17 ] ), Chaetomium globosum , Trichoderma atroviride , Coniochaeta sp., Cladosporium cladosporioides and Penicillium chrysogenum . [ 18 ] Mealworms ( Tenebrio molitor ), a species commonly used as animal feed, can consume polyethylene and polystyrene. [ 5 ] [ 9 ] [ 10 ] Its congener T. 
obscurus can also consume polystyrene, [ 19 ] as can the superworm ( Zophobas morio ) and the red flour beetle ( Tribolium castaneum ), from different genera in the same family. [ 20 ] [ 13 ] Plastivory also occurs in Lepidoptera , with waxworms ( Galleria mellonella ) able to consume polyethylene. [ 11 ] [ 21 ] Even homogenising waxworms and applying the homogenate to polyethylene can cause degradation. [ 21 ] This species is the fastest known organism to chemically modify polyethylene, with oxidation occurring within one hour of exposure. [ 11 ]
https://en.wikipedia.org/wiki/Plastivore
As the tip of a plant shoot grows, new leaves are produced at regular time intervals if temperature is held constant. This time interval is termed the plastochron (or plastochrone ). [ 1 ] The plastochrone index and the leaf plastochron index are ways of measuring the age of a plant dependent on morphological traits rather than on chronological age. Use of these indices removes differences caused by germination, developmental differences and exponential growth. The spatial pattern of the arrangement of leaves is called phyllotaxy whereas the time between successive leaf initiation events is called the plastochron and the rate of emergence from the apical bud is the phyllochron . In 1951, F. J. Richards introduced the idea of the plastochron ratio and developed a system of equations to describe mathematically a centric representation using three parameters: plastochron ratio, divergence angle, and the angle of the cone tangential to the apex in the area being considered. [ 2 ] [ 3 ] Emerging phyllodes or leaf variants experience a sudden change from a high humidity environment to a more arid one. There are other changes they encounter such as variations in light level, photoperiod and the gaseous content of the air.
https://en.wikipedia.org/wiki/Plastochron
Plastocyanin is a copper-containing protein that mediates electron transfer . It is found in a variety of plants, where it participates in photosynthesis . The protein is a prototype of the blue copper proteins , a family of intensely blue-colored metalloproteins . Specifically, it falls into the group of small type I blue copper proteins called "cupredoxins". [ 1 ] In photosynthesis , plastocyanin functions as an electron transfer agent between cytochrome f of the cytochrome b 6 f complex from photosystem II and P700 + from photosystem I . The cytochrome b 6 f complex and P700 + are both membrane-bound proteins with exposed residues on the lumen side of the thylakoid membrane of chloroplasts . Cytochrome f acts as an electron donor while P700 + accepts electrons from reduced plastocyanin. [ 2 ] Plastocyanin was the first of the blue copper proteins to be characterised by X-ray crystallography . [ 3 ] [ 2 ] [ 4 ] It features an eight-stranded antiparallel β-barrel containing one copper center. [ 3 ] Structures of the protein from poplar, algae , parsley , spinach, and French bean plants have been characterized crystallographically. [ 3 ] In all cases the binding site is generally conserved. Bound to the copper center are four ligands : the imidazole groups of two histidine residues (His37 and His87), the thiolate of Cys84 and the thioether of Met92 . The geometry of the copper binding site is described as ‘distorted tetrahedral’. The Cu-S (Cys) bond (207 picometers ) is much shorter than the Cu-S (Met) bond (282 pm). The elongated Cu-thioether bond appears to destabilise the Cu(II) state, thereby enhancing its oxidizing power. The blue colour (597 nm peak absorption) is assigned to a charge transfer transition from S (pπ) to Cu (d x²−y² ). [ 5 ] In the reduced form of plastocyanin, His-87 becomes protonated.
While the molecular surface of the protein near the copper binding site varies slightly, all plastocyanins have a hydrophobic surface surrounding the exposed histidine of the copper binding site. In plant plastocyanins, acidic residues are located on either side of the highly conserved tyrosine -83. Algal plastocyanins, and those from vascular plants in the family Apiaceae , contain similar acidic residues but are shaped differently from those of plant plastocyanins: they lack residues 57 and 58. In cyanobacteria , the distribution of charged residues on the surface differs from that of eukaryotic plastocyanins, and variation among different bacterial species is large. Many cyanobacterial plastocyanins have 107 amino acids. Although the acidic patches are not conserved in bacteria, the hydrophobic patch is always present. These hydrophobic and acidic patches are believed to be the recognition/binding sites for the other proteins involved in electron transfer. Plastocyanin (Cu 2+ Pc) is reduced (an electron is added) by cytochrome f according to the following reaction: Cu 2+ Pc + e − → Cu + Pc. After dissociation, Cu + Pc diffuses through the lumen space until recognition/binding occurs with P700 + , at which point P700 + oxidizes Cu + Pc according to the following reaction: Cu + Pc → Cu 2+ Pc + e − . The redox potential is about 370 mV [ 6 ] and the isoelectric pH is about 4. [ 7 ] A catalyst's function is to increase the speed of the electron transfer ( redox ) reaction. Unlike a typical enzyme, which lowers the energy of the transition state, plastocyanin is thought to work on the principle of the entatic state : it raises the energy of the reactants, decreasing the amount of energy needed for the redox reaction to occur. Put another way, plastocyanin facilitates the electron transfer reaction by providing a small reorganization energy , which has been measured at about 16–28 kcal/mol (67–117 kJ/mol).
[ 8 ] To study the properties of the redox reaction of plastocyanin, methods such as combined quantum mechanics / molecular mechanics (QM/MM) molecular dynamics simulations have been used. This method was used to determine that plastocyanin has an entatic strain energy of about 10 kcal/mol (42 kJ/mol). [ 8 ] Four-coordinate copper complexes often exhibit square planar geometry ; however, plastocyanin has a trigonally distorted tetrahedral geometry . This distorted geometry is less stable than ideal tetrahedral geometry due to its lower ligand field stabilization as a result of the trigonal distortion. This unusual geometry is induced by the rigid “pre-organized” conformation of the ligand donors by the protein, which is an entatic state . Plastocyanin performs electron transfer via the redox couple between Cu(I) and Cu(II), and it was first theorized that its entatic state resulted from the protein imposing the undistorted tetrahedral geometry preferred by ordinary Cu(I) complexes onto the oxidized Cu(II) site. [ 10 ] However, a highly distorted tetrahedral geometry is induced at the oxidized Cu(II) site instead of a perfectly symmetric tetrahedral geometry. A feature of the entatic state is a protein environment that is capable of preventing ligand dissociation even at a temperature high enough to break the metal-ligand bond. In the case of plastocyanin, it has been experimentally determined through absorption spectroscopy that there is a long and weak Cu(I)-S Met bond that should dissociate at physiological temperature due to increased entropy. However, this bond does not dissociate because the constraints of the protein environment dominate over the entropic forces. [ 11 ] In ordinary copper complexes involved in Cu(I)/Cu(II) redox coupling without a constraining protein environment, the ligand geometry changes significantly between oxidation states, which typically corresponds to the presence of a Jahn-Teller distorting force .
However, the Jahn-Teller distorting force is not present in plastocyanin due to a large splitting of the d x²−y² and d xy orbitals (see Blue Copper Protein Entatic State ). Additionally, the structure of plastocyanin exhibits a long Cu(I)-S Met bond (2.9 Å) with decreased electron donation strength. This in turn shortens the Cu(I)-S Cys bond (2.1 Å), increasing its electron donating strength. Overall, plastocyanin exhibits a lower reorganization energy because the entatic state of the protein ligand enforces the same distorted tetrahedral geometry in both the Cu(II) and Cu(I) oxidation states, enabling it to perform electron transfer at a faster rate. [ 13 ] The reorganization energy of blue copper proteins such as plastocyanin ranges from 0.7 to 1.2 eV (68-116 kJ/mol), compared to 2.4 eV (232 kJ/mol) for an ordinary copper complex such as [Cu(phen) 2 ] 2+/+ . [ 10 ] Plastocyanin is usually found in organisms that contain chlorophyll b , in cyanobacteria , and in algae that contain chlorophyll c . Plastocyanin has also been found in the diatom Thalassiosira oceanica , which lives in oceanic environments. It was surprising to find plastocyanin in these organisms because the concentration of copper dissolved in the ocean is usually low (between 0.4 – 50 nM). However, the concentration of copper in the oceans is comparatively high relative to the concentrations of other metals such as zinc and iron . Other organisms that live in the ocean, such as other phytoplankton species, have adapted so that they do not need such high concentrations of these scarce metals (Fe and Zn) to carry out photosynthesis and grow. [ 14 ]
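The advantage of a small reorganization energy can be illustrated with the Marcus-theory activation factor for electron transfer. The sketch below is illustrative only: it assumes a driving force of zero (ΔG ≈ 0) and omits the rate prefactor, so only the ratio between the two cases is meaningful.

```python
import math

def marcus_rate_factor(lam_eV, dG_eV=0.0, T=298.0):
    """Relative Marcus activation factor exp(-(dG+lam)^2 / (4*lam*kB*T)).

    lam_eV is the reorganization energy in eV. The prefactor is omitted,
    so only ratios between calls are meaningful.
    """
    kB = 8.617e-5  # Boltzmann constant in eV/K
    return math.exp(-((dG_eV + lam_eV) ** 2) / (4 * lam_eV * kB * T))

# Blue copper protein (plastocyanin-like, lower end of the 0.7-1.2 eV
# range) versus an ordinary copper complex such as [Cu(phen)2]2+/+.
blue = marcus_rate_factor(0.7)
plain = marcus_rate_factor(2.4)

print(blue / plain)  # the smaller reorganization energy wins by many orders of magnitude
```

This is why the entatic enforcement of a single geometry across both oxidation states, which keeps the reorganization energy small, translates directly into faster electron transfer.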
https://en.wikipedia.org/wiki/Plastocyanin
A plastomer is a polymer material which combines qualities of elastomers and plastics , such as rubber -like properties with the processing ability of plastic. [ 1 ] As such, the word plastomer is a portmanteau of the words plastic and elastomer . Significant plastomers are ethylene - alpha olefin copolymers . [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Plastomer
Plastoquinone ( PQ ) is a terpenoid - quinone ( meroterpenoid ) molecule involved in the electron transport chain in the light-dependent reactions of photosynthesis . The most common form of plastoquinone, known as PQ-A or PQ-9, is a 2,3-dimethyl-1,4- benzoquinone molecule with a side chain of nine isoprenyl units. There are other forms of plastoquinone, such as ones with shorter side chains like PQ-3 (which has 3 isoprenyl side units instead of 9) as well as analogs such as PQ-B, PQ-C, and PQ-D, which differ in their side chains. [ 1 ] The benzoquinone and isoprenyl units are both nonpolar , anchoring the molecule within the inner section of a lipid bilayer , where the hydrophobic tails are usually found. [ 1 ] Plastoquinones are very structurally similar to ubiquinone, or coenzyme Q10 , differing by the length of the isoprenyl side chain, replacement of the methoxy groups with methyl groups , and removal of the methyl group in the 2 position on the quinone. Like ubiquinone, it can come in several oxidation states: plastoquinone, plastosemiquinone (unstable), and plastoquinol , which differs from plastoquinone by having two hydroxyl groups instead of two carbonyl groups . [ 2 ] Plastoquinol, the reduced form, also functions as an antioxidant by reducing reactive oxygen species , some produced from the photosynthetic reactions, that could harm the cell membrane. [ 3 ] One example of how it does this is by reacting with superoxides to form hydrogen peroxide and plastosemiquinone. [ 3 ] The prefix plasto- means either plastid or chloroplast , alluding to its location within the cell. [ 4 ] The role that plastoquinone plays in photosynthesis, more specifically in the light-dependent reactions of photosynthesis, is that of a mobile electron carrier through the membrane of the thylakoid . 
[ 2 ] Plastoquinone is reduced when it accepts two electrons from photosystem II and two hydrogen cations (H + ) from the stroma of the chloroplast, thereby forming plastoquinol (PQH 2 ). It transfers the electrons further down the electron transport chain to plastocyanin , a mobile, water-soluble electron carrier, through the cytochrome b 6 f protein complex. [ 2 ] The cytochrome b 6 f protein complex not only catalyzes the electron transfer between plastoquinone and plastocyanin, but also transports the two protons into the lumen of thylakoid discs . [ 2 ] This proton transfer forms an electrochemical gradient, which is used by ATP synthase at the end of the light-dependent reactions in order to form ATP from ADP and P i . [ 2 ] Plastoquinone is found within photosystem II in two specific binding sites, known as Q A and Q B . The plastoquinone at Q A , the primary binding site, is very tightly bound, compared to the plastoquinone at Q B , the secondary binding site, which is much more easily removed. [ 5 ] Q A accepts only a single electron at a time, so it must transfer an electron to Q B twice before Q B can pick up two protons from the stroma and be replaced by another plastoquinone molecule. The protonated Q B then joins a pool of free plastoquinone molecules in the membrane of the thylakoid. [ 2 ] [ 5 ] The free plastoquinone molecules eventually transfer electrons to the water-soluble plastocyanin so as to continue the light-dependent reactions. [ 2 ] There are additional plastoquinone binding sites within photosystem II (Q C and possibly Q D ), but their function and/or existence have not been fully elucidated. [ 5 ] In the biosynthesis of plastoquinone, p-hydroxyphenylpyruvate is synthesized from tyrosine , while solanesyl diphosphate is synthesized through the MEP/DOXP pathway . Homogentisate is formed from p-hydroxyphenylpyruvate and is then combined with solanesyl diphosphate through a condensation reaction .
The resulting intermediate, 2-methyl-6-solanesyl-1,4-benzoquinol, is then methylated to form the final product, plastoquinol-9. [ 1 ] This pathway is used in most photosynthetic organisms, like algae and plants. [ 1 ] However, cyanobacteria appear not to use homogentisate for synthesizing plastoquinol, and possibly use a different pathway. [ 1 ] Some derivatives that were designed to penetrate mitochondrial membranes ( SkQ1 (plastoquinonyl-decyl-triphenylphosphonium), SkQR1 (the rhodamine -containing analog of SkQ1), SkQ3 ) have anti-oxidant and protonophore activity. [ 6 ] SkQ1 has been proposed as an anti-aging treatment, with the possible reduction of age-related vision issues due to its antioxidant ability. [ 7 ] [ 8 ] [ 9 ] This antioxidant activity results from both its ability to reduce reactive oxygen species (derived from the plastoquinol-containing part of the molecule), which are often formed within mitochondria, and its ability to increase ion exchange across membranes (derived from the part of the molecule containing cations that can dissolve within membranes). [ 9 ] Specifically, like plastoquinol, SkQ1 has been shown to scavenge superoxides both within cells (in vivo) and outside of cells (in vitro). [ 10 ] SkQR1 and SkQ1 have also been proposed as possible treatments for brain conditions such as Alzheimer's disease due to their potential to repair damage caused by amyloid beta . [ 9 ] Additionally, SkQR1 has been shown to reduce the effects of brain trauma through its antioxidant abilities, which help prevent cell death signals by reducing the amounts of reactive oxygen species coming from mitochondria. [ 11 ]
https://en.wikipedia.org/wiki/Plastoquinone
A plate-fin heat exchanger is a type of heat exchanger design that uses plates and finned chambers to transfer heat between fluids, most commonly gases. It is often categorized as a compact heat exchanger to emphasize its relatively high heat transfer surface area to volume ratio. The plate-fin heat exchanger is widely used in many industries, including the aerospace industry for its compact size and lightweight properties, as well as in cryogenics where its ability to facilitate heat transfer with small temperature differences is utilized. [ 1 ] Aluminum alloy plate-fin heat exchangers, often referred to as brazed aluminum heat exchangers, have been used in the aircraft industry for more than 75 years; they were adopted into the cryogenic air separation industry around the time of the Second World War and shortly afterward into cryogenic processes in chemical plants such as natural gas processing. They are also used in railway engines and motor cars. Stainless steel plate fins have been used in aircraft for over 35 years and are now becoming established in chemical plants. [ 2 ] The design was originally conceived by an Italian mechanic, Paolo Fruncillo. A plate-fin heat exchanger is made of layers of corrugated sheets separated by flat metal plates, typically aluminium, to create a series of finned chambers. Separate hot and cold fluid streams flow through alternating layers of the heat exchanger and are enclosed at the edges by side bars. Heat is transferred from one stream through the fin interface to the separator plate and through the next set of fins into the adjacent fluid. The fins also serve to increase the structural integrity of the heat exchanger and allow it to withstand high pressures while providing an extended surface area for heat transfer. A high degree of flexibility is present in plate-fin heat exchanger design as they can operate with any combination of gas, liquid, and two-phase fluids.
[ 3 ] Heat transfer between multiple process streams is also accommodated, [ 4 ] with a variety of fin heights and types, and different entry and exit points, available for each stream. The four main types of fins are: plain , which refers to simple straight-finned triangular or rectangular designs; herringbone , where the fins are placed sideways to provide a zig-zag path; and serrated and perforated , which refer to cuts and perforations in the fins that augment flow distribution and improve heat transfer. A disadvantage of plate-fin heat exchangers is that they are prone to fouling due to their small flow channels. They also cannot be mechanically cleaned and require other cleaning procedures and proper filtration for operation with potentially-fouling streams. Various heat exchanger flow arrangements exist, such as parallel flow, cross-flow, and counterflow. In parallel flow, the fluids enter the heat exchanger at the same end and flow in the same direction. In counterflow, the fluids flow in opposing directions. Counterflow provides the most efficient transfer of heat, as it is able to extract the most heat from the heat transfer medium. In cross-flow, the fluids travel perpendicular to one another through the heat exchanger. Exchangers may also employ corrugations or fins to alter their heat transfer rates by directing fluids to certain parts of the heat exchanger or by increasing wall surface area. [ 5 ] The efficiency of heat exchangers can also be increased by increasing the surface area of the wall between the two fluids. By providing more contact points for heat transfer to occur, the rate of transfer is increased. This method can be observed in household radiators, which maintain a curvy, sinusoidal cross section to maximize surface contact between the heated water inside and the air of a room. In a plate-fin heat exchanger, the fins are easily able to be rearranged.
This allows for the two fluids to flow in crossflow, counterflow, cross-counterflow or parallel flow. If the fins are designed well, the plate-fin heat exchanger can work in perfect countercurrent arrangement. [ 6 ] The cost of plate-fin heat exchangers is generally higher than that of conventional heat exchangers due to the higher level of detail required during manufacture. However, these costs can often be outweighed by the cost savings produced by the added heat transfer. Plate-fin heat exchangers are generally applied in industries where the fluids have little chance of fouling. The delicate design as well as the thin channels in the plate-fin heat exchanger make cleaning difficult or impossible. Applications of plate-fin heat exchangers include: Coulson, J. and Richardson, J. (1999). Chemical Engineering, Volume 1: Fluid Flow, Heat Transfer and Mass Transfer. Reed Educational & Professional Publishing Ltd.
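The claim that counterflow is the most efficient arrangement can be quantified with the standard effectiveness–NTU relations. The sketch below compares the two arrangements; the NTU and capacity-ratio values are illustrative, not taken from the article.

```python
import math

def eff_parallel(ntu, c):
    """Effectiveness of a parallel-flow exchanger (c = Cmin/Cmax)."""
    return (1 - math.exp(-ntu * (1 + c))) / (1 + c)

def eff_counter(ntu, c):
    """Effectiveness of a counterflow exchanger, valid for c < 1."""
    e = math.exp(-ntu * (1 - c))
    return (1 - e) / (1 - c * e)

ntu, c = 2.0, 0.8  # illustrative operating point
print(f"parallel flow effectiveness: {eff_parallel(ntu, c):.3f}")
print(f"counterflow effectiveness:   {eff_counter(ntu, c):.3f}")
# For the same NTU and capacity ratio, counterflow always achieves
# the higher effectiveness, consistent with the text above.
```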
https://en.wikipedia.org/wiki/Plate-fin_heat_exchanger
A plate is a structural element characterized as a three-dimensional solid whose thickness is very small when compared with its other dimensions. [ 1 ] The effects of the loads that are expected to be applied on it only generate stresses whose resultants are, in practical terms, exclusively normal to the element's thickness. Their mechanics are the main subject of plate theory . Thin plates are initially flat structural members bounded by two parallel planes, called faces, and a cylindrical surface, called an edge or boundary. The generators of the cylindrical surface are perpendicular to the plane faces. The distance between the plane faces is called the thickness (h) of the plate. It will be assumed that the plate thickness is small compared with other characteristic dimensions of the faces (length, width, diameter, etc.). Geometrically, plates are bounded either by straight or curved boundaries. The static or dynamic loads carried by plates are predominantly perpendicular to the plate faces. [ 2 ]
https://en.wikipedia.org/wiki/Plate_(structure)
The Plate Boundary Observatory (PBO) was the geodetic component of the EarthScope Facility. EarthScope was an Earth science program that explored the 4-dimensional structure of the North American Continent. [ 1 ] EarthScope (and PBO) was a 15-year project (2003-2018) funded by the National Science Foundation (NSF) in conjunction with NASA . PBO construction (an NSF MREFC) took place from October 2003 through September 2008. [ 2 ] Phase 1 of operations and maintenance concluded in September 2013. Phase 2 of operations ended in September 2018, along with the end of the EarthScope project. In October 2018, PBO was assimilated into a broader Network of the Americas (NOTA), along with networks in Mexico ( TLALOCNet ) and the Caribbean ( COCONet ), as part of the NSF's Geodetic Facility for the Advancement of Geosciences (GAGE) . GAGE is operated by EarthScope Consortium. PBO precisely measured Earth deformation resulting from the constant motion of the Pacific and North American tectonic plates in the western United States. These Earth movements can be very small and incremental and not felt by people, or they can be very large and sudden, such as those that occur during earthquakes and volcanic eruptions. The high-precision instrumentation of the PBO enabled detection of motions to a sub-centimeter level. PBO measured Earth deformation through a network of instrumentation including: high precision Global Positioning System (GPS) and Global Navigation Satellite System (GNSS) receivers, strainmeters, seismometers , tiltmeters , and other geodetic instruments. The PBO GPS network included 1100 stations extending from the Aleutian Islands south to Baja and eastward across the continental United States. During the construction phase, 891 permanent and continuously operating GPS stations were installed, and another 209 existing stations were integrated (PBO Nucleus stations) into the network. 
Geodetic imaging data was transmitted, often in real time, from a wide network of GPS stations, augmented by seismometers, strainmeters and tiltmeters, and complemented by InSAR ( interferometric synthetic aperture radar ), LiDAR (light detection and ranging), and geochronology . The GPS stations were categorized into clusters. The transform cluster was near the San Andreas Fault in California ; the subduction cluster was in the Cascadia subduction zone (northern California, Oregon , Washington , and southern British Columbia ); the extension cluster was in the Basin and Range region; the volcanic cluster was in the Yellowstone caldera , the Long Valley caldera , and the Cascade Volcanoes ; the backbone cluster was at 100–200 km intervals across the United States to provide complete spatial coverage. Data from the PBO was, and NOTA data continue to be, transmitted to the data center of the GAGE Facility, operated by EarthScope Consortium, where they are collected, archived and distributed. These data sets continue to be freely and openly available to the public, with equal access provided for all users. PBO data includes the raw data collected from each instrument, quality-checked data in formats commonly used by PBO's various user communities, and processed data such as calibrated time series, velocity fields, and error estimates. Some scientific questions that were addressed by the EarthScope project and the PBO data include: [ 3 ]
https://en.wikipedia.org/wiki/Plate_Boundary_Observatory
A roll bender is a mechanical jig having three rollers used to bend a metal bar into a circular arc. The rollers rotate freely about three parallel axes, which are arranged with uniform horizontal spacing. Two outer rollers, usually immobile, cradle the bottom of the material while the inner roller, whose position is adjustable, presses on the topside of the material. Roll bending may be done to both sheet metal and bars of metal. If a bar is used, it is assumed to have a uniform cross-section , but not necessarily a rectangular one, as long as there are no overhanging contours, i.e. it has positive draft . Such bars are often formed by extrusion . The material to be shaped is suspended between the rollers. The end rollers support the bottomside of the bar and have a matching contour (inverse shape) to it in order to maintain the cross-sectional shape. Likewise, the middle roller is forced against the topside of the bar and has a matching contour. [ 1 ] After the bar is initially inserted into the jig, the middle roller is manually lowered and forced against the bar with a screw arrangement. This causes the bar to undergo both plastic and elastic deformation. The portion of the bar between the rollers takes on the shape of a cubic polynomial , which approximates a circular arc . [ 2 ] The rollers are then rotated, moving the bar along with them. For each new position, the portion of the bar between the rollers takes on the shape of a cubic modified by the end conditions imposed by the adjacent sections of the bar. Each time an end of the bar is reached, the force applied to the center roller is incrementally increased and the roller rotation is reversed; as the rolling process proceeds, the bar shape becomes a progressively better approximation to a circular arc of the desired radius.
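The loaded curvature set by the rollers can be estimated from simple circle geometry: treating the three contact points as points on a circle, with the outer rollers spaced a distance 2a apart and the centre roller depressed a distance h below the line joining the outer contacts, the sagitta relation gives the radius. A rough sketch (dimensions are illustrative, and springback is ignored):

```python
def loaded_radius(half_span, depression):
    """Radius of the circle through the three roller contact points.

    half_span  : a, half the distance between the outer rollers
    depression : h, how far the centre roller contact sits below the
                 line joining the outer contact points
    Sagitta relation: R = (a**2 + h**2) / (2*h)
    """
    a, h = half_span, depression
    return (a**2 + h**2) / (2.0 * h)

# Outer rollers 400 mm apart (a = 0.2 m), centre roller pressed 10 mm
R = loaded_radius(0.2, 0.010)
print(f"loaded radius ~ {R:.3f} m")
```

Deepening the centre-roller depression tightens the arc, which matches the incremental-force procedure described above.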
The plastic deformation of the bar is retained throughout the process. However, the elastic deformation is reversed as each section of the bar leaves the area between the rollers. This springback must be compensated for by adjusting the middle roller to achieve the desired radius. The amount of springback depends upon the elastic compliance (the inverse of stiffness ) of the material relative to its ductility . Aluminum alloys, for example, tend to have high ductility relative to their elastic compliance, whereas steel tends to be the other way around. Therefore, aluminum bars are more amenable to bending into an arc than steel bars.
https://en.wikipedia.org/wiki/Plate_bending_machine
A plate column (or tray column [ 1 ] ) is equipment used in chemistry to carry out unit operations in which mass must be transferred between a liquid phase and a gas phase. In other words, it is a particular kind of gas-liquid contactor . [ 2 ] The peculiarity of this gas-liquid contactor is that the gas comes into contact with the liquid in a series of discrete stages; [ 1 ] each stage is delimited by two plates (except the stages at the top and bottom of the column). Common applications of plate columns include distillation , gas-liquid absorption and liquid-liquid extraction . In general, plate columns are suitable for both continuous and batch operations. The feed to the column can be liquid, gas, or gas and liquid at equilibrium . Inside the column there are always two phases: one gas phase and one liquid phase. The liquid phase flows downward through the column via gravity, [ 1 ] while the gas phase flows upward. The two phases come into contact at the holes, valves or bubble caps that cover the area of the plates. [ 2 ] Gas moves to the plate above through these devices, while the liquid moves to the plate below through a downcomer. [ 1 ] The liquid collected at the bottom of the column undergoes evaporation in a reboiler , while the gas collected at the top undergoes condensation in a condenser . The liquid and gas produced at the top and at the bottom are in general recirculated. In the simplest case, there is just one feed stream and there are two product streams. In the case of a fractionating column , there are instead many product streams.
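For a distillation application, the minimum number of theoretical stages (ideal plates) needed at total reflux is often estimated with the Fenske equation. A brief sketch, in which the compositions and relative volatility are invented for illustration:

```python
import math

def fenske_min_stages(x_distillate, x_bottoms, alpha):
    """Minimum number of theoretical stages at total reflux (Fenske).

    x_distillate : light-key mole fraction in the distillate
    x_bottoms    : light-key mole fraction in the bottoms
    alpha        : average relative volatility of light to heavy key
    """
    ratio = (x_distillate / (1 - x_distillate)) * ((1 - x_bottoms) / x_bottoms)
    return math.log(ratio) / math.log(alpha)

# Illustrative separation: 95% purity overhead, 5% left in bottoms, alpha = 2.5
N_min = fenske_min_stages(0.95, 0.05, 2.5)
print(f"minimum theoretical stages ~ {N_min:.1f}")
```

An actual column needs more plates than this minimum, since it operates at a finite reflux ratio and the plates are not perfectly efficient.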
https://en.wikipedia.org/wiki/Plate_column
A plate heat exchanger is a type of heat exchanger that uses metal plates to transfer heat between two fluids . This has a major advantage over a conventional heat exchanger in that the fluids are exposed to a much larger surface area, because they are spread out over the plates. This facilitates the transfer of heat and greatly increases the speed of the temperature change. Plate heat exchangers are now common, and very small brazed versions are used in the hot-water sections of millions of combination boilers . The high heat transfer efficiency for such a small physical size has increased the domestic hot water (DHW) flowrate of combination boilers, and the small plate heat exchanger has had a great impact on domestic heating and hot water. Larger commercial versions use gaskets between the plates, whereas smaller versions tend to be brazed. The concept behind a heat exchanger is the use of pipes or other containment vessels to heat or cool one fluid by transferring heat between it and another fluid. In most cases, the exchanger consists of a coiled pipe containing one fluid that passes through a chamber containing another fluid. The walls of the pipe are usually made of metal , or another substance with a high thermal conductivity , to facilitate the interchange, whereas the outer casing of the larger chamber is made of plastic or coated with thermal insulation , to discourage heat from escaping from the exchanger. The world's first commercially viable plate heat exchanger (PHE) was invented by Dr Richard Seligman in 1923 and revolutionized methods of indirect heating and cooling of fluids. Seligman had founded APV in 1910 as the Aluminum Plant & Vessel Company Limited, a specialist fabricating firm supplying welded vessels to the brewery and vegetable oil trades. His invention also set the norm for today's computer-designed thin-metal-plate heat exchangers that are used all over the world.
[ 1 ] The plate heat exchanger (PHE) is a specialized design well suited to transferring heat between medium- and low-pressure fluids. Welded, semi-welded and brazed heat exchangers are used for heat exchange between high-pressure fluids or where a more compact product is required. In place of a pipe passing through a chamber, there are instead two alternating chambers, usually thin in depth, separated at their largest surface by a corrugated metal plate. The plates used in a plate-and-frame heat exchanger are obtained by one-piece pressing of metal sheets. Stainless steel is commonly used for the plates because of its ability to withstand high temperatures, its strength, and its corrosion resistance. The plates are often spaced by rubber sealing gaskets which are cemented into a section around the edge of the plates. The plates are pressed to form troughs at right angles to the direction of flow of the liquid which runs through the channels in the heat exchanger. These troughs are arranged so that they interlink with those of the other plates, forming channels with gaps of 1.3–1.5 mm between the plates. The plates are compressed together in a rigid frame to form an arrangement of parallel flow channels with alternating hot and cold fluids. The plates provide an extremely large surface area, which allows for very fast heat transfer. Making each chamber thin ensures that the majority of the volume of the liquid contacts the plate, again aiding exchange. The troughs also create and maintain a turbulent flow in the liquid to maximize heat transfer in the exchanger. A high degree of turbulence can be obtained at low flow rates, and a high heat transfer coefficient can then be achieved. Compared to shell and tube heat exchangers, the temperature approach (the smallest difference between the temperatures of the cold and hot streams) in a plate heat exchanger may be as low as 1 °C, whereas shell and tube heat exchangers require an approach of 5 °C or more.
For the same amount of heat exchanged, the size of the plate heat exchanger is smaller, because of the large heat transfer area afforded by the plates (the large area through which heat can travel). Increasing or reducing the heat transfer area is simple in a plate heat exchanger: plates are added to or removed from the stack. All plate heat exchangers look similar on the outside. The difference lies on the inside, in the details of the plate design and the sealing technologies used. Hence, when evaluating a plate heat exchanger, it is very important not only to explore the details of the product being supplied but also to analyze the level of research and development carried out by the manufacturer, the post-commissioning service, and spare-parts availability. An important aspect to take into account when evaluating a heat exchanger is the form of corrugation within it. There are two types: intermating and chevron corrugations. In general, chevron corrugations produce greater heat transfer enhancement for a given increase in pressure drop and are more commonly used than intermating corrugations. [ 2 ] There are so many different possible modifications for increasing heat exchanger efficiency that it is doubtful any commercial simulator supports them all, and some proprietary data are never released by the manufacturers of heat transfer enhancements. This does not mean, however, that engineers cannot make preliminary assessments of such emerging technologies. Background information on several forms of heat exchanger modification is given below. Any enhancement must fulfill the main objective of being more cost-effective than a traditional heat exchanger; fouling capacity, reliability and safety are other considerations that should be addressed. The first is periodic cleaning.
Periodic cleaning (on-site cleaning) is the most efficient method to flush out the waste and dirt that over time decrease the efficiency of the heat exchanger. This approach requires both sides of the PHE (plate heat exchanger) to be drained, followed by its isolation from the fluid in the system. Water should then be flushed through both sides until it runs completely clear; for the best results, the flushing should be carried out in the direction opposite to regular operation. Once this is done, a circulation pump and a solution tank are used to pass a cleaning agent through the exchanger, while ensuring that the agent is compatible with the PHE gaskets and plates. Lastly, the system should be flushed with water again until the discharge stream runs clear. To improve PHEs, two important factors have to be considered, namely the amount of heat transfer and the pressure drop: the amount of heat transfer needs to be increased and the pressure drop decreased. In plate heat exchangers, the corrugated plates present significant resistance to flow, with high friction loss, so a design must weigh both factors. Many correlations exist for plate heat exchangers over various ranges of Reynolds numbers and chevron angles. The plate geometry is one of the most important factors affecting heat transfer and pressure drop in plate heat exchangers; however, this feature is not accurately prescribed. In corrugated plate heat exchangers, the narrow paths between the plates produce a large pressure drop, and the flow becomes turbulent along the path; the exchanger therefore requires more pumping power than other types of heat exchanger, so designs target higher heat transfer together with lower pressure drop. The shape of the plate heat exchanger is very important for industrial applications that are sensitive to pressure drop.
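Whether the flow in the narrow channels is turbulent is usually judged from a channel Reynolds number based on an equivalent diameter, commonly taken as twice the plate gap when the channel is much wider than it is deep. A minimal sketch, with invented water-like values:

```python
def channel_reynolds(mass_flow_per_channel, gap, width, viscosity):
    """Reynolds number for flow in one plate channel.

    Uses the common approximation D_e = 2*b for a channel of gap b
    whose width is much larger than the gap.
    mass_flow_per_channel : kg/s
    gap, width            : m
    viscosity             : dynamic viscosity, Pa*s
    """
    G = mass_flow_per_channel / (gap * width)  # mass flux, kg/(m^2*s)
    return G * (2.0 * gap) / viscosity

# Water-like example: 0.05 kg/s per channel, 1.4 mm gap, 0.3 m wide plate
Re = channel_reynolds(0.05, 0.0014, 0.3, 1.0e-3)
print(f"Re ~ {Re:.0f}")
```

Because the corrugations trip the flow, plate channels become turbulent at far lower Reynolds numbers than smooth pipes, which is how high heat transfer coefficients are reached at modest flow rates.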
[ citation needed ] Design calculations of a plate heat exchanger include flow distribution, pressure drop and heat transfer. The former is an issue of Flow distribution in manifolds . [ 3 ] A layout configuration of a plate heat exchanger can usually be simplified into a manifold system with two manifold headers for dividing and combining fluids, which can be categorized into U-type and Z-type arrangements according to the flow direction in the headers, as shown in the manifold arrangement. Bassiouny and Martin developed the early design theory. [ 4 ] [ 5 ] In recent years Wang [ 6 ] [ 7 ] unified the main existing models and developed a more complete theory and design tool. The total rate of heat transfer between the hot and cold fluids passing through a plate heat exchanger may be expressed as Q = UA∆Tm, where U is the overall heat transfer coefficient , A is the total plate area, and ∆Tm is the log mean temperature difference . U depends upon the heat transfer coefficients in the hot and cold streams. [ 2 ] Online cleaning helps to avoid fouling and scaling without the heat exchanger needing to be shut down or operations disrupted. To avoid a decline in heat exchanger performance and to extend tube service life, online cleaning (OnC) can be used as a standalone approach or in conjunction with chemical treatment. The re-circulating ball type system and the brush and basket system are two OnC techniques. Offline cleaning (OfC) is another effective cleaning method that increases the performance of heat exchangers and decreases operating expenses. This method, also known as pigging, uses a bullet-shaped device that is inserted into each tube and forced down it with high air pressure. Chemical washing, hydro-blasting and hydro-lancing are other widely used offline methods.
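The rate equation Q = UA∆Tm can be evaluated directly once the terminal temperatures are known. A short sketch for a counter-current arrangement (the overall coefficient and the stream temperatures below are illustrative assumptions):

```python
import math

def lmtd(dT1, dT2):
    """Log mean of the two terminal temperature differences."""
    if dT1 == dT2:
        return dT1
    return (dT1 - dT2) / math.log(dT1 / dT2)

# Counter-current example: hot stream 80 -> 50 C, cold stream 20 -> 45 C
dT1 = 80 - 45  # hot inlet vs cold outlet
dT2 = 50 - 20  # hot outlet vs cold inlet
U = 4000.0     # W/(m^2*K), a typical order for water-water plate units
A = 2.0        # m^2 total plate area
Q = U * A * lmtd(dT1, dT2)
print(f"Q ~ {Q/1000:.1f} kW")
```

The small temperature approach achievable in plate units keeps ∆Tm usable even when the streams' temperatures nearly cross, which is part of why the required area stays small.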
Both of these techniques, when used regularly, restore the exchanger to its optimum efficiency, until fouling and scaling begin to build up again and adversely affect its performance. Operation and maintenance costs are unavoidable for a heat exchanger, but there are ways to minimize them. First, cost can be reduced by limiting the formation of fouling on the heat exchanger, which decreases the overall heat transfer coefficient. Analyses estimate that fouling generates operational losses of more than 4 billion dollars; the total fouling cost includes capital cost, energy cost, maintenance cost and the cost of lost production. Chemical fouling inhibitors are one fouling control method. For example, acrylic acid/hydroxypropyl acrylate (AA/HPA) and acrylic acid/sulfonic acid (AA/SA) copolymers can be used to inhibit fouling by the deposition of calcium phosphate. The deposition of fouling can also be reduced by installing the heat exchanger vertically, as gravity pulls particles away from the heat transfer surface. Second, operating cost can be minimized by using saturated rather than superheated steam as the heating fluid: superheated steam acts as an insulator and a poor heat conductor, making it less suitable for heating applications such as heat exchangers.
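The penalty that fouling imposes on the overall heat transfer coefficient is conventionally expressed by adding a fouling resistance in series with the clean-surface resistance: 1/U_dirty = 1/U_clean + R_f. A minimal sketch with illustrative numbers:

```python
def fouled_coefficient(U_clean, fouling_resistance):
    """Overall coefficient after adding a fouling resistance in series.

    1/U_dirty = 1/U_clean + R_f
    U_clean            : W/(m^2*K)
    fouling_resistance : m^2*K/W
    """
    return 1.0 / (1.0 / U_clean + fouling_resistance)

# Assumed clean U of 4000 W/(m^2*K) with a modest fouling layer
U_dirty = fouled_coefficient(4000.0, 1.0e-4)
print(f"U drops from 4000 to {U_dirty:.0f} W/(m^2*K)")
```

Because plate units start from such a high clean coefficient, even a thin fouling layer removes a large fraction of the duty, which is why the cleaning regimes described above matter more here than in shell and tube designs.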
https://en.wikipedia.org/wiki/Plate_heat_exchanger
A plate lifter or plate wobbler is a novelty item [ 1 ] consisting of a tube with a small flat [ 2 ] bladder on one end and a bulb on the other. [ 3 ] The bladder is to be placed under a plate; inflating it will make the plate wobble. [ 4 ] This fake demonstration of psychokinesis [ 2 ] is intended to provoke surprise and merriment. [ 5 ] [ 6 ] A free plate wobbler was included with the first issue of the British comic magazine Monster Fun and promoted as a "monster mirth maker". [ 1 ] "Plate lifter" is the name of various devices used in industry for handling heavy metal plates. [ 7 ] [ 8 ]
https://en.wikipedia.org/wiki/Plate_lifter